open import Relation.Binary.Core
module InsertSort.Impl2.Correctness.Permutation.Alternative {A : Set}
(_≤_ : A → A → Set)
(tot≤ : Total _≤_) where
open import Bound.Lower A
open import Bound.Lower.Order _≤_
open import Data.List
open import Data.Sum
open import Function
open import InsertSort.Impl2 _≤_ tot≤
open import List.Permutation.Alternative A renaming (_∼_ to _∼′_)
open import List.Permutation.Alternative.Correctness A
open import List.Permutation.Base A
open import OList _≤_
lemma-insert∼′ : {b : Bound}{x : A}(b≤x : LeB b (val x))(xs : OList b) → (x ∷ forget xs) ∼′ forget (insert b≤x xs)
lemma-insert∼′ b≤x onil = ∼refl
lemma-insert∼′ {x = x} b≤x (:< {x = y} b≤y ys)
with tot≤ x y
... | inj₁ x≤y = ∼refl
... | inj₂ y≤x = ∼trans (∼swap ∼refl) (∼head y (lemma-insert∼′ (lexy y≤x) ys))
lemma-insertSort∼′ : (xs : List A) → xs ∼′ forget (insertSort xs)
lemma-insertSort∼′ [] = ∼refl
lemma-insertSort∼′ (x ∷ xs) = ∼trans (∼head x (lemma-insertSort∼′ xs)) (lemma-insert∼′ lebx (insertSort xs))
theorem-insertSort∼ : (xs : List A) → xs ∼ forget (insertSort xs)
theorem-insertSort∼ = lemma-∼′-∼ ∘ lemma-insertSort∼′
|
module Issue2486.Haskell where
{-# FOREIGN GHC
data MyList a = Nil | Cons a (MyList a)
#-}
|
open classical
namespace classical.tools
variables p q : Prop
variables (α : Type) (r s : α → Prop)
variables a : α
lemma neg_imp_as_conj : ¬(p → q) → p ∧ ¬q :=
λ (h : ¬(p → q)),
or.cases_on (em q)
(λ (hq : q), absurd (λ (hhh : p), hq) h)
(λ (hnq : ¬q),
or.cases_on (em p)
(λ (hp : p), ⟨hp, hnq⟩)
(λ (hnp : ¬p), absurd (λ (hp : p), absurd hp hnp) h))
lemma conj_as_neg_imp : p ∧ ¬q → ¬(p → q) :=
λ (h : p ∧ ¬q), id (λ (c : p → q), absurd (c (h.left)) (h.right))
lemma rev_imp : (p → q) → ¬ q → ¬ p :=
λ (hpq : p → q) (hnq : ¬q), id (λ (hp : p), absurd (hpq hp) hnq)
lemma dne (h : ¬ ¬ p) : p :=
by_contradiction (assume h1: ¬ p, show false, from h h1)
lemma neg_universal_as_ex : ¬ (∀ x, ¬ r x) → ∃ x, r x :=
λ (h : ¬∀ (x : α), ¬r x),
by_contradiction
(λ (c : ¬∃ (x : α), r x),
rev_imp (¬∃ (x : α), r x) (∀ (x : α), ¬r x) forall_not_of_not_exists h c)
lemma dne_under_univ : (∀ z, ¬ ¬ r z) → (∀ z, r z) :=
λ (h : ∀ (z : α), ¬¬r z) (z : α), dne (r z) (h z)
lemma contra_pos : (¬q → ¬p) → (p → q) :=
λ (cp : ¬q → ¬p) (hp : p), dne q (id (λ (hnq : ¬q), absurd hp (cp hnq)))
lemma demorgan_or : ¬ (p ∨ q) → ¬ p ∧ ¬ q :=
assume h : ¬ (p ∨ q),
⟨λ hp : p, h (or.inl hp), λ hq : q, h (or.inr hq)⟩
end classical.tools
|
postulate
A : Set
F : { x : A } → Set
G : ⦃ x : A ⦄ → Set
|
Formal statement is: lemma closure_halfspace_lt [simp]: assumes "a \<noteq> 0" shows "closure {x. a \<bullet> x < b} = {x. a \<bullet> x \<le> b}" Informal statement is: If $a \neq 0$, then the closure of the half-space $\{x \in \mathbb{R}^n \mid a \cdot x < b\}$ is the half-space $\{x \in \mathbb{R}^n \mid a \cdot x \leq b\}$. |
variable <- "Hello World"
|
lemmas eucl_rel_poly_unique_mod = eucl_rel_poly_unique [THEN conjunct2] |
According to the <unk>, this commandment "has been <unk> for Catholics" as one of the Church precepts. The organization cites the papal encyclical Dies Domini:
|
/-
Copyright (c) 2020 Bhavik Mehta. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Bhavik Mehta, Scott Morrison
-/
import category_theory.subobject.mono_over
import category_theory.skeletal
import category_theory.concrete_category.basic
import tactic.apply_fun
import tactic.elementwise
/-!
# Subobjects
We define `subobject X` as the quotient (by isomorphisms) of
`mono_over X := {f : over X // mono f.hom}`.
Here `mono_over X` is a thin category (a pair of objects has at most one morphism between them),
so we can think of it as a preorder. However as it is not skeletal, it is not a partial order.
There is a coercion from `subobject X` back to the ambient category `C`
(using choice to pick a representative), and for `P : subobject X`,
`P.arrow : (P : C) ⟶ X` is the inclusion morphism.
We provide
* `def pullback [has_pullbacks C] (f : X ⟶ Y) : subobject Y ⥤ subobject X`
* `def map (f : X ⟶ Y) [mono f] : subobject X ⥤ subobject Y`
* `def «exists» [has_images C] (f : X ⟶ Y) : subobject X ⥤ subobject Y`
and prove their basic properties and relationships.
These are all easy consequences of the earlier development
of the corresponding functors for `mono_over`.
The subobjects of `X` form a preorder making them into a category. We have `X ≤ Y` if and only if
`X.arrow` factors through `Y.arrow`: see `of_le`/`of_le_mk`/`of_mk_le`/`of_mk_le_mk` and
`le_of_comm`. Similarly, to show that two subobjects are equal, we can supply an isomorphism between
the underlying objects that commutes with the arrows (`eq_of_comm`).
See also
* `category_theory.subobject.factor_thru` :
an API describing factorization of morphisms through subobjects.
* `category_theory.subobject.lattice` :
the lattice structures on subobjects.
## Notes
This development originally appeared in Bhavik Mehta's "Topos theory for Lean" repository,
and was ported to mathlib by Scott Morrison.
### Implementation note
Currently we describe `pullback`, `map`, etc., as functors.
It may be better to just say that they are monotone functions,
and even avoid using categorical language entirely when describing `subobject X`.
(It's worth keeping this in mind in future use; it should be a relatively easy change here
if it looks preferable.)
### Relation to pseudoelements
There is a separate development of pseudoelements in `category_theory.abelian.pseudoelements`,
as a quotient (but not by isomorphism) of `over X`.
When a morphism `f` has an image, the image represents the same pseudoelement.
In a category with images `pseudoelements X` could be constructed as a quotient of `mono_over X`.
In fact, in an abelian category (I'm not sure in what generality beyond that),
`pseudoelements X` agrees with `subobject X`, but we haven't developed this in mathlib yet.
-/
universes v₁ v₂ u₁ u₂
noncomputable theory
namespace category_theory
open category_theory category_theory.category category_theory.limits
variables {C : Type u₁} [category.{v₁} C] {X Y Z : C}
variables {D : Type u₂} [category.{v₂} D]
/-!
We now construct the subobject lattice for `X : C`,
as the quotient by isomorphisms of `mono_over X`.
Since `mono_over X` is a thin category, we use `thin_skeleton` to take the quotient.
Essentially all the structure defined above on `mono_over X` descends to `subobject X`,
with morphisms becoming inequalities, and isomorphisms becoming equations.
-/
/--
The category of subobjects of `X : C`, defined as isomorphism classes of monomorphisms into `X`.
-/
@[derive [partial_order, category]]
def subobject (X : C) := thin_skeleton (mono_over X)
namespace subobject
/-- Convenience constructor for a subobject. -/
abbreviation mk {X A : C} (f : A ⟶ X) [mono f] : subobject X :=
(to_thin_skeleton _).obj (mono_over.mk' f)
section
local attribute [ext] category_theory.comma
protected lemma ind {X : C} (p : subobject X → Prop)
(h : ∀ ⦃A : C⦄ (f : A ⟶ X) [mono f], by exactI p (subobject.mk f)) (P : subobject X) : p P :=
begin
apply quotient.induction_on',
intro a,
convert h a.arrow,
ext; refl
end
protected lemma ind₂ {X : C} (p : subobject X → subobject X → Prop)
(h : ∀ ⦃A B : C⦄ (f : A ⟶ X) (g : B ⟶ X) [mono f] [mono g],
by exactI p (subobject.mk f) (subobject.mk g)) (P Q : subobject X) : p P Q :=
begin
apply quotient.induction_on₂',
intros a b,
convert h a.arrow b.arrow;
ext; refl
end
end
/-- Declare a function on subobjects of `X` by specifying a function on monomorphisms with
codomain `X`. -/
protected def lift {α : Sort*} {X : C} (F : Π ⦃A : C⦄ (f : A ⟶ X) [mono f], α)
(h : ∀ ⦃A B : C⦄ (f : A ⟶ X) (g : B ⟶ X) [mono f] [mono g] (i : A ≅ B),
i.hom ≫ g = f → by exactI F f = F g) : subobject X → α :=
λ P, quotient.lift_on' P (λ m, by exactI F m.arrow) $ λ m n ⟨i⟩,
h m.arrow n.arrow ((mono_over.forget X ⋙ over.forget X).map_iso i) (over.w i.hom)
@[simp]
protected lemma lift_mk {α : Sort*} {X : C} (F : Π ⦃A : C⦄ (f : A ⟶ X) [mono f], α) {h A}
(f : A ⟶ X) [mono f] : subobject.lift F h (subobject.mk f) = F f :=
rfl
/-- The category of subobjects is equivalent to the `mono_over` category. It is more convenient to
use the former due to the partial order instance, but oftentimes it is easier to define structures
on the latter. -/
noncomputable def equiv_mono_over (X : C) : subobject X ≌ mono_over X :=
thin_skeleton.equivalence _
/--
Use choice to pick a representative `mono_over X` for each `subobject X`.
-/
noncomputable
def representative {X : C} : subobject X ⥤ mono_over X :=
(equiv_mono_over X).functor
/--
Starting with `A : mono_over X`, we can take its equivalence class in `subobject X`
then pick an arbitrary representative using `representative.obj`.
This is isomorphic (in `mono_over X`) to the original `A`.
-/
noncomputable
def representative_iso {X : C} (A : mono_over X) :
representative.obj ((to_thin_skeleton _).obj A) ≅ A :=
(equiv_mono_over X).counit_iso.app A
/--
Use choice to pick a representative underlying object in `C` for any `subobject X`.
Prefer to use the coercion `P : C` rather than explicitly writing `underlying.obj P`.
-/
noncomputable
def underlying {X : C} : subobject X ⥤ C :=
representative ⋙ mono_over.forget _ ⋙ over.forget _
instance : has_coe (subobject X) C :=
{ coe := λ Y, underlying.obj Y, }
@[simp] lemma underlying_as_coe {X : C} (P : subobject X) : underlying.obj P = P := rfl
/--
If we construct a `subobject Y` from an explicit `f : X ⟶ Y` with `[mono f]`,
then pick an arbitrary choice of underlying object `(subobject.mk f : C)` back in `C`,
it is isomorphic (in `C`) to the original `X`.
-/
noncomputable
def underlying_iso {X Y : C} (f : X ⟶ Y) [mono f] : (subobject.mk f : C) ≅ X :=
(mono_over.forget _ ⋙ over.forget _).map_iso (representative_iso (mono_over.mk' f))
/--
The morphism in `C` from the arbitrarily chosen underlying object to the ambient object.
-/
noncomputable
def arrow {X : C} (Y : subobject X) : (Y : C) ⟶ X :=
(representative.obj Y).obj.hom
instance arrow_mono {X : C} (Y : subobject X) : mono (Y.arrow) :=
(representative.obj Y).property
@[simp]
lemma arrow_congr {A : C} (X Y : subobject A) (h : X = Y) :
eq_to_hom (congr_arg (λ X : subobject A, (X : C)) h) ≫ Y.arrow = X.arrow :=
by { induction h, simp, }
@[simp]
lemma representative_coe (Y : subobject X) :
(representative.obj Y : C) = (Y : C) :=
rfl
@[simp]
lemma representative_arrow (Y : subobject X) :
(representative.obj Y).arrow = Y.arrow :=
rfl
@[simp, reassoc]
lemma underlying_arrow {X : C} {Y Z : subobject X} (f : Y ⟶ Z) :
underlying.map f ≫ arrow Z = arrow Y :=
over.w (representative.map f)
@[simp, reassoc, elementwise]
lemma underlying_iso_arrow {X Y : C} (f : X ⟶ Y) [mono f] :
(underlying_iso f).inv ≫ (subobject.mk f).arrow = f :=
over.w _
@[simp, reassoc]
lemma underlying_iso_hom_comp_eq_mk {X Y : C} (f : X ⟶ Y) [mono f] :
(underlying_iso f).hom ≫ f = (mk f).arrow :=
(iso.eq_inv_comp _).1 (underlying_iso_arrow f).symm
/-- Two morphisms into a subobject are equal exactly if
the morphisms into the ambient object are equal -/
@[ext]
lemma eq_of_comp_arrow_eq {X Y : C} {P : subobject Y}
{f g : X ⟶ P} (h : f ≫ P.arrow = g ≫ P.arrow) : f = g :=
(cancel_mono P.arrow).mp h
lemma mk_le_mk_of_comm {B A₁ A₂ : C} {f₁ : A₁ ⟶ B} {f₂ : A₂ ⟶ B} [mono f₁] [mono f₂] (g : A₁ ⟶ A₂)
(w : g ≫ f₂ = f₁) : mk f₁ ≤ mk f₂ :=
⟨mono_over.hom_mk _ w⟩
@[simp] lemma mk_arrow (P : subobject X) : mk P.arrow = P :=
quotient.induction_on' P $ λ Q,
begin
obtain ⟨e⟩ := @quotient.mk_out' _ (is_isomorphic_setoid _) Q,
refine quotient.sound' ⟨mono_over.iso_mk _ _ ≪≫ e⟩;
tidy
end
lemma le_of_comm {B : C} {X Y : subobject B} (f : (X : C) ⟶ (Y : C)) (w : f ≫ Y.arrow = X.arrow) :
X ≤ Y :=
by convert mk_le_mk_of_comm _ w; simp
lemma le_mk_of_comm {B A : C} {X : subobject B} {f : A ⟶ B} [mono f] (g : (X : C) ⟶ A)
(w : g ≫ f = X.arrow) : X ≤ mk f :=
le_of_comm (g ≫ (underlying_iso f).inv) $ by simp [w]
lemma mk_le_of_comm {B A : C} {X : subobject B} {f : A ⟶ B} [mono f] (g : A ⟶ (X : C))
(w : g ≫ X.arrow = f) : mk f ≤ X :=
le_of_comm ((underlying_iso f).hom ≫ g) $ by simp [w]
/-- To show that two subobjects are equal, it suffices to exhibit an isomorphism commuting with
the arrows. -/
@[ext] lemma eq_of_comm {B : C} {X Y : subobject B} (f : (X : C) ≅ (Y : C))
(w : f.hom ≫ Y.arrow = X.arrow) : X = Y :=
le_antisymm (le_of_comm f.hom w) $ le_of_comm f.inv $ f.inv_comp_eq.2 w.symm
/-- To show that two subobjects are equal, it suffices to exhibit an isomorphism commuting with
the arrows. -/
@[ext] lemma eq_mk_of_comm {B A : C} {X : subobject B} (f : A ⟶ B) [mono f] (i : (X : C) ≅ A)
(w : i.hom ≫ f = X.arrow) : X = mk f :=
eq_of_comm (i.trans (underlying_iso f).symm) $ by simp [w]
/-- To show that two subobjects are equal, it suffices to exhibit an isomorphism commuting with
the arrows. -/
@[ext] lemma mk_eq_of_comm {B A : C} {X : subobject B} (f : A ⟶ B) [mono f] (i : A ≅ (X : C))
(w : i.hom ≫ X.arrow = f) : mk f = X :=
eq.symm $ eq_mk_of_comm _ i.symm $ by rw [iso.symm_hom, iso.inv_comp_eq, w]
/-- To show that two subobjects are equal, it suffices to exhibit an isomorphism commuting with
the arrows. -/
@[ext] lemma mk_eq_mk_of_comm {B A₁ A₂ : C} (f : A₁ ⟶ B) (g : A₂ ⟶ B) [mono f] [mono g]
(i : A₁ ≅ A₂) (w : i.hom ≫ g = f) : mk f = mk g :=
eq_mk_of_comm _ ((underlying_iso f).trans i) $ by simp [w]
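/- A usage sketch (not part of the original file): if each of two monomorphisms
factors through the other, they determine the same subobject. The morphisms
`u`, `v` and the commutativity hypotheses here are illustrative assumptions. -/
example {B A₁ A₂ : C} {f : A₁ ⟶ B} {g : A₂ ⟶ B} [mono f] [mono g]
  (u : A₁ ⟶ A₂) (v : A₂ ⟶ A₁) (hu : u ≫ g = f) (hv : v ≫ f = g) :
  mk f = mk g :=
le_antisymm (mk_le_mk_of_comm u hu) (mk_le_mk_of_comm v hv)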
/-- An inequality of subobjects is witnessed by some morphism between the corresponding objects. -/
-- We make `X` and `Y` explicit arguments here so that when `of_le` appears in goal statements
-- it is possible to see its source and target
-- (`h` will just display as `_`, because it is in `Prop`).
def of_le {B : C} (X Y : subobject B) (h : X ≤ Y) : (X : C) ⟶ (Y : C) :=
underlying.map $ h.hom
@[simp, reassoc] lemma of_le_arrow {B : C} {X Y : subobject B} (h : X ≤ Y) :
of_le X Y h ≫ Y.arrow = X.arrow :=
underlying_arrow _
instance {B : C} (X Y : subobject B) (h : X ≤ Y) : mono (of_le X Y h) :=
begin
fsplit,
intros Z f g w,
replace w := w =≫ Y.arrow,
ext,
simpa using w,
end
lemma of_le_mk_le_mk_of_comm
{B A₁ A₂ : C} {f₁ : A₁ ⟶ B} {f₂ : A₂ ⟶ B} [mono f₁] [mono f₂] (g : A₁ ⟶ A₂) (w : g ≫ f₂ = f₁) :
of_le _ _ (mk_le_mk_of_comm g w) = (underlying_iso _).hom ≫ g ≫ (underlying_iso _).inv :=
by { ext, simp [w], }
/-- An inequality of subobjects is witnessed by some morphism between the corresponding objects. -/
@[derive mono]
def of_le_mk {B A : C} (X : subobject B) (f : A ⟶ B) [mono f] (h : X ≤ mk f) : (X : C) ⟶ A :=
of_le X (mk f) h ≫ (underlying_iso f).hom
@[simp] lemma of_le_mk_comp {B A : C} {X : subobject B} {f : A ⟶ B} [mono f] (h : X ≤ mk f) :
of_le_mk X f h ≫ f = X.arrow :=
by simp [of_le_mk]
/-- An inequality of subobjects is witnessed by some morphism between the corresponding objects. -/
@[derive mono]
def of_mk_le {B A : C} (f : A ⟶ B) [mono f] (X : subobject B) (h : mk f ≤ X) : A ⟶ (X : C) :=
(underlying_iso f).inv ≫ of_le (mk f) X h
@[simp] lemma of_mk_le_arrow {B A : C} {f : A ⟶ B} [mono f] {X : subobject B} (h : mk f ≤ X) :
of_mk_le f X h ≫ X.arrow = f :=
by simp [of_mk_le]
/-- An inequality of subobjects is witnessed by some morphism between the corresponding objects. -/
@[derive mono]
def of_mk_le_mk {B A₁ A₂ : C} (f : A₁ ⟶ B) (g : A₂ ⟶ B) [mono f] [mono g] (h : mk f ≤ mk g) :
A₁ ⟶ A₂ :=
(underlying_iso f).inv ≫ of_le (mk f) (mk g) h ≫ (underlying_iso g).hom
@[simp] lemma of_mk_le_mk_comp {B A₁ A₂ : C} {f : A₁ ⟶ B} {g : A₂ ⟶ B} [mono f] [mono g]
(h : mk f ≤ mk g) : of_mk_le_mk f g h ≫ g = f :=
by simp [of_mk_le_mk]
@[simp, reassoc] lemma of_le_comp_of_le {B : C} (X Y Z : subobject B) (h₁ : X ≤ Y) (h₂ : Y ≤ Z) :
of_le X Y h₁ ≫ of_le Y Z h₂ = of_le X Z (h₁.trans h₂) :=
by simp [of_le, ←functor.map_comp underlying]
@[simp, reassoc] lemma of_le_comp_of_le_mk {B A : C} (X Y : subobject B) (f : A ⟶ B) [mono f]
(h₁ : X ≤ Y) (h₂ : Y ≤ mk f) : of_le X Y h₁ ≫ of_le_mk Y f h₂ = of_le_mk X f (h₁.trans h₂) :=
by simp [of_mk_le, of_le_mk, of_le, ←functor.map_comp_assoc underlying]
@[simp, reassoc] lemma of_le_mk_comp_of_mk_le {B A : C} (X : subobject B) (f : A ⟶ B) [mono f]
(Y : subobject B) (h₁ : X ≤ mk f) (h₂ : mk f ≤ Y) :
of_le_mk X f h₁ ≫ of_mk_le f Y h₂ = of_le X Y (h₁.trans h₂) :=
by simp [of_mk_le, of_le_mk, of_le, ←functor.map_comp underlying]
@[simp, reassoc] lemma of_le_mk_comp_of_mk_le_mk {B A₁ A₂ : C} (X : subobject B) (f : A₁ ⟶ B)
[mono f] (g : A₂ ⟶ B) [mono g] (h₁ : X ≤ mk f) (h₂ : mk f ≤ mk g) :
of_le_mk X f h₁ ≫ of_mk_le_mk f g h₂ = of_le_mk X g (h₁.trans h₂) :=
by simp [of_mk_le, of_le_mk, of_le, of_mk_le_mk, ←functor.map_comp_assoc underlying]
@[simp, reassoc] lemma of_mk_le_comp_of_le {B A₁ : C} (f : A₁ ⟶ B) [mono f] (X Y : subobject B)
(h₁ : mk f ≤ X) (h₂ : X ≤ Y) :
of_mk_le f X h₁ ≫ of_le X Y h₂ = of_mk_le f Y (h₁.trans h₂) :=
by simp [of_mk_le, of_le_mk, of_le, of_mk_le_mk, ←functor.map_comp underlying]
@[simp, reassoc] lemma of_mk_le_comp_of_le_mk {B A₁ A₂ : C} (f : A₁ ⟶ B) [mono f] (X : subobject B)
(g : A₂ ⟶ B) [mono g] (h₁ : mk f ≤ X) (h₂ : X ≤ mk g) :
of_mk_le f X h₁ ≫ of_le_mk X g h₂ = of_mk_le_mk f g (h₁.trans h₂) :=
by simp [of_mk_le, of_le_mk, of_le, of_mk_le_mk, ←functor.map_comp_assoc underlying]
@[simp, reassoc] lemma of_mk_le_mk_comp_of_mk_le {B A₁ A₂ : C} (f : A₁ ⟶ B) [mono f] (g : A₂ ⟶ B)
[mono g] (X : subobject B) (h₁ : mk f ≤ mk g) (h₂ : mk g ≤ X) :
of_mk_le_mk f g h₁ ≫ of_mk_le g X h₂ = of_mk_le f X (h₁.trans h₂) :=
by simp [of_mk_le, of_le_mk, of_le, of_mk_le_mk, ←functor.map_comp underlying]
@[simp, reassoc] lemma of_mk_le_mk_comp_of_mk_le_mk {B A₁ A₂ A₃ : C} (f : A₁ ⟶ B) [mono f]
(g : A₂ ⟶ B) [mono g] (h : A₃ ⟶ B) [mono h] (h₁ : mk f ≤ mk g) (h₂ : mk g ≤ mk h) :
of_mk_le_mk f g h₁ ≫ of_mk_le_mk g h h₂ = of_mk_le_mk f h (h₁.trans h₂) :=
by simp [of_mk_le, of_le_mk, of_le, of_mk_le_mk, ←functor.map_comp_assoc underlying]
@[simp] lemma of_le_refl {B : C} (X : subobject B) :
of_le X X le_rfl = 𝟙 _ :=
by { apply (cancel_mono X.arrow).mp, simp }
@[simp] lemma of_mk_le_mk_refl {B A₁ : C} (f : A₁ ⟶ B) [mono f] :
of_mk_le_mk f f le_rfl = 𝟙 _ :=
by { apply (cancel_mono f).mp, simp }
/-- An equality of subobjects gives an isomorphism of the corresponding objects.
(One could use `underlying.map_iso (eq_to_iso h)` here, but this is more readable.) -/
-- As with `of_le`, we have `X` and `Y` as explicit arguments for readability.
@[simps]
def iso_of_eq {B : C} (X Y : subobject B) (h : X = Y) : (X : C) ≅ (Y : C) :=
{ hom := of_le _ _ h.le,
inv := of_le _ _ h.ge, }
/-- An equality of subobjects gives an isomorphism of the corresponding objects. -/
@[simps]
def iso_of_eq_mk {B A : C} (X : subobject B) (f : A ⟶ B) [mono f] (h : X = mk f) : (X : C) ≅ A :=
{ hom := of_le_mk X f h.le,
inv := of_mk_le f X h.ge }
/-- An equality of subobjects gives an isomorphism of the corresponding objects. -/
@[simps]
def iso_of_mk_eq {B A : C} (f : A ⟶ B) [mono f] (X : subobject B) (h : mk f = X) : A ≅ (X : C) :=
{ hom := of_mk_le f X h.le,
inv := of_le_mk X f h.ge, }
/-- An equality of subobjects gives an isomorphism of the corresponding objects. -/
@[simps]
def iso_of_mk_eq_mk {B A₁ A₂ : C} (f : A₁ ⟶ B) (g : A₂ ⟶ B) [mono f] [mono g] (h : mk f = mk g) :
A₁ ≅ A₂ :=
{ hom := of_mk_le_mk f g h.le,
inv := of_mk_le_mk g f h.ge, }
end subobject
open category_theory.limits
namespace subobject
/-- Any functor `mono_over X ⥤ mono_over Y` descends to a functor
`subobject X ⥤ subobject Y`, because `mono_over Y` is thin. -/
def lower {Y : D} (F : mono_over X ⥤ mono_over Y) : subobject X ⥤ subobject Y :=
thin_skeleton.map F
/-- Isomorphic functors become equal when lowered to `subobject`.
(It's not as evil as usual to talk about equality between functors
because the categories are thin and skeletal.) -/
lemma lower_iso (F₁ F₂ : mono_over X ⥤ mono_over Y) (h : F₁ ≅ F₂) :
lower F₁ = lower F₂ :=
thin_skeleton.map_iso_eq h
/-- A ternary version of `subobject.lower`. -/
def lower₂ (F : mono_over X ⥤ mono_over Y ⥤ mono_over Z) :
subobject X ⥤ subobject Y ⥤ subobject Z :=
thin_skeleton.map₂ F
/-- An adjunction between `mono_over A` and `mono_over B` gives an adjunction
between `subobject A` and `subobject B`. -/
def lower_adjunction {A : C} {B : D}
{L : mono_over A ⥤ mono_over B} {R : mono_over B ⥤ mono_over A} (h : L ⊣ R) :
lower L ⊣ lower R :=
thin_skeleton.lower_adjunction _ _ h
/-- An equivalence between `mono_over A` and `mono_over B` gives an equivalence
between `subobject A` and `subobject B`. -/
@[simps]
def lower_equivalence {A : C} {B : D} (e : mono_over A ≌ mono_over B) : subobject A ≌ subobject B :=
{ functor := lower e.functor,
inverse := lower e.inverse,
unit_iso :=
begin
apply eq_to_iso,
convert thin_skeleton.map_iso_eq e.unit_iso,
{ exact thin_skeleton.map_id_eq.symm },
{ exact (thin_skeleton.map_comp_eq _ _).symm },
end,
counit_iso :=
begin
apply eq_to_iso,
convert thin_skeleton.map_iso_eq e.counit_iso,
{ exact (thin_skeleton.map_comp_eq _ _).symm },
{ exact thin_skeleton.map_id_eq.symm },
end }
section pullback
variables [has_pullbacks C]
/-- When `C` has pullbacks, a morphism `f : X ⟶ Y` induces a functor `subobject Y ⥤ subobject X`,
by pulling back a monomorphism along `f`. -/
def pullback (f : X ⟶ Y) : subobject Y ⥤ subobject X :=
lower (mono_over.pullback f)
lemma pullback_id (x : subobject X) : (pullback (𝟙 X)).obj x = x :=
begin
apply quotient.induction_on' x,
intro f,
apply quotient.sound,
exact ⟨mono_over.pullback_id.app f⟩,
end
lemma pullback_comp (f : X ⟶ Y) (g : Y ⟶ Z) (x : subobject Z) :
(pullback (f ≫ g)).obj x = (pullback f).obj ((pullback g).obj x) :=
begin
apply quotient.induction_on' x,
intro t,
apply quotient.sound,
refine ⟨(mono_over.pullback_comp _ _).app t⟩,
end
instance (f : X ⟶ Y) : faithful (pullback f) := {}
end pullback
section map
/--
We can map subobjects of `X` to subobjects of `Y`
by post-composition with a monomorphism `f : X ⟶ Y`.
-/
def map (f : X ⟶ Y) [mono f] : subobject X ⥤ subobject Y :=
lower (mono_over.map f)
lemma map_id (x : subobject X) : (map (𝟙 X)).obj x = x :=
begin
apply quotient.induction_on' x,
intro f,
apply quotient.sound,
exact ⟨mono_over.map_id.app f⟩,
end
lemma map_comp (f : X ⟶ Y) (g : Y ⟶ Z) [mono f] [mono g] (x : subobject X) :
(map (f ≫ g)).obj x = (map g).obj ((map f).obj x) :=
begin
apply quotient.induction_on' x,
intro t,
apply quotient.sound,
refine ⟨(mono_over.map_comp _ _).app t⟩,
end
/-- Isomorphic objects have equivalent subobject lattices. -/
def map_iso {A B : C} (e : A ≅ B) : subobject A ≌ subobject B :=
lower_equivalence (mono_over.map_iso e)
/-- In fact, there's a type level bijection between the subobjects of isomorphic objects,
which preserves the order. -/
-- @[simps] here generates a lemma `map_iso_to_order_iso_to_equiv_symm_apply`
-- whose left hand side is not in simp normal form.
def map_iso_to_order_iso (e : X ≅ Y) : subobject X ≃o subobject Y :=
{ to_fun := (map e.hom).obj,
inv_fun := (map e.inv).obj,
left_inv := λ g, by simp_rw [← map_comp, e.hom_inv_id, map_id],
right_inv := λ g, by simp_rw [← map_comp, e.inv_hom_id, map_id],
map_rel_iff' := λ A B, begin
dsimp, fsplit,
{ intro h,
apply_fun (map e.inv).obj at h,
simp_rw [← map_comp, e.hom_inv_id, map_id] at h,
exact h, },
{ intro h,
apply_fun (map e.hom).obj at h,
exact h, },
end }
@[simp] lemma map_iso_to_order_iso_apply (e : X ≅ Y) (P : subobject X) :
map_iso_to_order_iso e P = (map e.hom).obj P :=
rfl
@[simp] lemma map_iso_to_order_iso_symm_apply (e : X ≅ Y) (Q : subobject Y) :
(map_iso_to_order_iso e).symm Q = (map e.inv).obj Q :=
rfl
/-- `map f : subobject X ⥤ subobject Y` is
the left adjoint of `pullback f : subobject Y ⥤ subobject X`. -/
def map_pullback_adj [has_pullbacks C] (f : X ⟶ Y) [mono f] : map f ⊣ pullback f :=
lower_adjunction (mono_over.map_pullback_adj f)
@[simp]
lemma pullback_map_self [has_pullbacks C] (f : X ⟶ Y) [mono f] (g : subobject X) :
(pullback f).obj ((map f).obj g) = g :=
begin
revert g,
apply quotient.ind,
intro g',
apply quotient.sound,
exact ⟨(mono_over.pullback_map_self f).app _⟩,
end
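/- A usage sketch (not part of the original file): for a monomorphism `f`,
pushing a subobject forward along `map f` and then pulling back along
`pullback f` recovers the subobject, by `pullback_map_self` above. -/
example [has_pullbacks C] (f : X ⟶ Y) [mono f] (P : subobject X) :
  (pullback f).obj ((map f).obj P) = P :=
pullback_map_self f P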
lemma map_pullback [has_pullbacks C]
{X Y Z W : C} {f : X ⟶ Y} {g : X ⟶ Z} {h : Y ⟶ W} {k : Z ⟶ W} [mono h] [mono g]
(comm : f ≫ h = g ≫ k) (t : is_limit (pullback_cone.mk f g comm)) (p : subobject Y) :
(map g).obj ((pullback f).obj p) = (pullback k).obj ((map h).obj p) :=
begin
revert p,
apply quotient.ind',
intro a,
apply quotient.sound,
apply thin_skeleton.equiv_of_both_ways,
{ refine mono_over.hom_mk (pullback.lift pullback.fst _ _) (pullback.lift_snd _ _ _),
change _ ≫ a.arrow ≫ h = (pullback.snd ≫ g) ≫ _,
rw [assoc, ← comm, pullback.condition_assoc] },
{ refine mono_over.hom_mk (pullback.lift pullback.fst
(pullback_cone.is_limit.lift' t (pullback.fst ≫ a.arrow) pullback.snd _).1
(pullback_cone.is_limit.lift' _ _ _ _).2.1.symm) _,
{ rw [← pullback.condition, assoc], refl },
{ dsimp, rw [pullback.lift_snd_assoc],
apply (pullback_cone.is_limit.lift' _ _ _ _).2.2 } }
end
end map
section «exists»
variables [has_images C]
/--
The functor from subobjects of `X` to subobjects of `Y` given by
sending the subobject `S` to its "image" under `f`, usually denoted $\exists_f$.
For instance, when `C` is the category of types,
viewing `subobject X` as `set X` this is just `set.image f`.
This functor is left adjoint to the `pullback f` functor (shown in `exists_pullback_adj`)
provided both are defined, and generalises the `map f` functor, again provided it is defined.
-/
def «exists» (f : X ⟶ Y) : subobject X ⥤ subobject Y :=
lower (mono_over.exists f)
/--
When `f : X ⟶ Y` is a monomorphism, `exists f` agrees with `map f`.
-/
lemma exists_iso_map (f : X ⟶ Y) [mono f] : «exists» f = map f :=
lower_iso _ _ (mono_over.exists_iso_map f)
/--
`exists f : subobject X ⥤ subobject Y` is
left adjoint to `pullback f : subobject Y ⥤ subobject X`.
-/
def exists_pullback_adj (f : X ⟶ Y) [has_pullbacks C] : «exists» f ⊣ pullback f :=
lower_adjunction (mono_over.exists_pullback_adj f)
end «exists»
end subobject
end category_theory
|
From LF Require Export Logic.
Theorem or_distributes_over_and : forall P Q R : Prop,
P \/ (Q /\ R) <-> (P \/ Q) /\ (P \/ R).
Proof.
Admitted.
(* This one is not in that list; all the others have solutions on GitHub. *)
Theorem dist_exists_and : forall (X:Type) (P Q : X -> Prop),
(exists x, P x /\ Q x) -> (exists x, P x) /\ (exists x, Q x).
Proof.
Admitted.
Lemma In_map_iff :
forall (A B : Type) (f : A -> B) (l : list A) (y : B),
In y (map f l) <->
exists x, f x = y /\ In x l.
Proof.
Admitted.
Lemma In_app_iff : forall A l l' (a:A),
In a (l++l') <-> In a l \/ In a l'.
Proof.
Admitted.
(* Inspired by the function [In], define a function [All] which holds when
a proposition [P] holds for every element of a list [l]: *)
Fixpoint All {T : Type} (P : T -> Prop) (l : list T) : Prop :=
  match l with
  | [] => True
  | x :: l' => P x /\ All P l'
  end.
Lemma All_In :
forall T (P : T -> Prop) (l : list T),
(forall x, In x l -> P x) <->
All P l.
Proof.
Admitted.
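(* A small sanity check, not part of the original exercise set; it assumes the
   definition of [All] given above. *)
Example All_example : All (fun _ : nat => True) [1; 2; 3].
Proof. simpl. repeat split. Qed.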
|
lemmas tendsto_Im [tendsto_intros] = bounded_linear.tendsto [OF bounded_linear_Im] |
Truely (April 13, 2010) daughter
|
{-# LANGUAGE AllowAmbiguousTypes, DataKinds, RankNTypes, TypeFamilies,
TypeOperators #-}
{-# OPTIONS_GHC -fplugin GHC.TypeLits.KnownNat.Solver #-}
{-# OPTIONS_GHC -fplugin GHC.TypeLits.Normalise #-}
module TestMnistRNN (testTrees, shortTestForCITrees) where
import Prelude
import Control.Monad (foldM)
import qualified Data.Array.DynamicS as OT
import Data.Array.Internal (valueOf)
import Data.List (foldl', unfoldr)
import Data.Proxy (Proxy (Proxy))
import qualified Data.Vector.Generic as V
import GHC.TypeLits (KnownNat)
import Numeric.LinearAlgebra (Matrix, Vector)
import qualified Numeric.LinearAlgebra as HM
import System.IO (hPutStrLn, stderr)
import System.Random
import Test.Tasty
import Test.Tasty.HUnit hiding (assert)
import Text.Printf
-- until stylish-haskell accepts NoStarIsType
import qualified GHC.TypeLits
import HordeAd
import HordeAd.Core.OutdatedOptimizer
import HordeAd.Tool.MnistRnnShaped
import HordeAd.Tool.MnistTools
testTrees :: [TestTree]
testTrees = [ sinRNNTests
, mnistRNNTestsShort
, mnistRNNTestsLong
]
shortTestForCITrees :: [TestTree]
shortTestForCITrees = [ sinRNNTests
, mnistRNNTestsShort
]
-- * A recurrent net and an autoregressive model for sine, following
-- https://blog.jle.im/entry/purely-functional-typed-models-2.html
-- and obtaining matching results
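-- The task (see 'series' and 'samples' below): each training pair takes the
-- first 18 samples of a 19-sample chunk of sin (2 * pi * t / 25) as input and
-- the 19th sample as the regression target.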
-- A version written using matrices
hiddenLayerSinRNN :: DualMonad d r m
=> r
-> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d (Vector r), DualNumber d (Vector r))
hiddenLayerSinRNN x s variables = do
let wX = var2 variables 0
wS = var2 variables 1
b = var1 variables 0
y <- returnLet $ wX #>!! V.singleton x + wS #>! s + b
yLogistic <- logisticAct y
return (y, yLogistic)
outputLayerSinRNN :: DualMonad d r m
=> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d r)
outputLayerSinRNN vec variables = do
let w = var1 variables 1
b = var0 variables 0
returnLet $ w <.>! vec + b
fcfcrnn :: DualMonad d r m
=> r
-> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d r, DualNumber d (Vector r))
fcfcrnn x s variables = do
(hiddenLayer, sHiddenLayer) <- hiddenLayerSinRNN x s variables
outputLayer <- outputLayerSinRNN hiddenLayer variables
return (outputLayer, sHiddenLayer)
unrollLast' :: forall d r m. DualMonad d r m
=> (r
-> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d r, DualNumber d (Vector r)))
-> (Vector r
-> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d r, DualNumber d (Vector r)))
unrollLast' f xs s0 variables =
let g :: (DualNumber d r, DualNumber d (Vector r)) -> r
-> m (DualNumber d r, DualNumber d (Vector r))
g (_, s) x = f x s variables
in V.foldM' g (undefined, s0) xs
zeroState :: DualMonad d r m
=> Int
-> (a
-> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d r2, DualNumber d (Vector r)))
-> (a
-> DualNumberVariables d r
-> m (DualNumber d r2))
zeroState k f xs variables =
fst <$> f xs (constant $ HM.konst 0 k) variables
nnSinRNN :: DualMonad d r m
=> Vector r
-> DualNumberVariables d r
-> m (DualNumber d r)
nnSinRNN = zeroState 30 (unrollLast' fcfcrnn)
nnSinRNNLoss :: DualMonad d r m
=> (Vector r, r)
-> DualNumberVariables d r
-> m (DualNumber d r)
nnSinRNNLoss (xs, target) variables = do
result <- nnSinRNN xs variables
lossSquared target result
series :: [Double]
series = [sin (2 * pi * t / 25) | t <- [0 ..]]
samples :: [(Vector Double, Double)]
samples = [(V.fromList $ init c, last c) | c <- chunksOf 19 series]
sgdShow :: HasDelta r
=> (a -> DualNumberVariables 'DModeGradient r -> DualMonadGradient r (DualNumber 'DModeGradient r))
-> [a]
-> Domains r
-> r
sgdShow f trainData parameters =
let result = fst $ sgd 0.1 f trainData parameters
in snd $ dReverse 1 (f $ head trainData) result
sgdTestCase :: String
-> (a
-> DualNumberVariables 'DModeGradient Double
-> DualMonadGradient Double (DualNumber 'DModeGradient Double))
-> (Int, [Int], [(Int, Int)], [OT.ShapeL])
-> IO [a]
-> Double
-> TestTree
sgdTestCase prefix f nParameters trainDataIO expected =
let ((nParams0, nParams1, nParams2, _), totalParams, range, parameters0) =
initializerFixed 44 0.05 nParameters
name = prefix ++ " "
++ unwords [ show nParams0, show nParams1, show nParams2
, show totalParams, show range ]
in testCase name $ do
trainData <- trainDataIO
sgdShow f trainData parameters0
@?= expected
sgdTestCaseAlt :: String
-> (a
-> DualNumberVariables 'DModeGradient Double
-> DualMonadGradient Double (DualNumber 'DModeGradient Double))
-> (Int, [Int], [(Int, Int)], [OT.ShapeL])
-> IO [a]
-> [Double]
-> TestTree
sgdTestCaseAlt prefix f nParameters trainDataIO expected =
let ((nParams0, nParams1, nParams2, _), totalParams, range, parameters0) =
initializerFixed 44 0.05 nParameters
name = prefix ++ " "
++ unwords [ show nParams0, show nParams1, show nParams2
, show totalParams, show range ]
in testCase name $ do
trainData <- trainDataIO
let res = sgdShow f trainData parameters0
assertBool ("wrong result: " ++ show res ++ " is expected to be a member of " ++ show expected) $ res `elem` expected
prime :: IsScalar 'DModeGradient r
=> (r
-> DualNumber 'DModeGradient (Vector r)
-> DualNumberVariables 'DModeGradient r
-> DualMonadValue r (DualNumber 'DModeGradient r, DualNumber 'DModeGradient (Vector r)))
-> Domains r
-> Vector r
-> [r]
-> Vector r
prime f parameters =
foldl' (\s x -> primalValue (fmap snd . f x (constant s)) parameters)
feedback :: IsScalar 'DModeGradient r
=> (r
-> DualNumber 'DModeGradient (Vector r)
-> DualNumberVariables 'DModeGradient r
-> DualMonadValue r (DualNumber 'DModeGradient r, DualNumber 'DModeGradient (Vector r)))
-> Domains r
-> Vector r
-> r
-> [r]
feedback f parameters s0 x0 =
let go (x, s) =
let (D y _, sd') = primalValueGeneral (f x s) parameters
in Just (x, (y, sd'))
in unfoldr go (x0, constant s0)
feedbackTestCase :: String
-> (Double
-> DualNumber 'DModeGradient (Vector Double)
-> DualNumberVariables 'DModeGradient Double
-> DualMonadValue Double
( DualNumber 'DModeGradient Double
, DualNumber 'DModeGradient (Vector Double) ))
-> (a
-> DualNumberVariables 'DModeGradient Double
-> DualMonadGradient Double (DualNumber 'DModeGradient Double))
-> (Int, [Int], [(Int, Int)], [OT.ShapeL])
-> [a]
-> [Double]
-> TestTree
feedbackTestCase prefix fp f nParameters trainData expected =
let ((nParams0, nParams1, nParams2, _), totalParams, range, parameters0) =
initializerFixed 44 0.05 nParameters
name = prefix ++ " "
++ unwords [ show nParams0, show nParams1, show nParams2
, show totalParams, show range ]
trained = fst $ sgd 0.1 f trainData parameters0
primed = prime fp trained (HM.konst 0 30) (take 19 series)
output = feedback fp trained primed (series !! 19)
in testCase name $
take 30 output @?= expected
-- A version written using vectors
hiddenLayerSinRNNV :: DualMonad d r m
=> r
-> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d (Vector r), DualNumber d (Vector r))
hiddenLayerSinRNNV x s variables = do
let wX = var1 variables 0
b = var1 variables 31
y <- returnLet
$ scale (HM.konst x 30) wX + sumTrainableInputsL s 1 variables 30 + b
yLogistic <- logisticAct y
return (y, yLogistic)
outputLayerSinRNNV :: DualMonad d r m
=> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d r)
outputLayerSinRNNV vec variables = do
let w = var1 variables 32
b = var0 variables 0
returnLet $ w <.>! vec + b
fcfcrnnV :: DualMonad d r m
=> r
-> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d r, DualNumber d (Vector r))
fcfcrnnV x s variables = do
(hiddenLayer, sHiddenLayer) <- hiddenLayerSinRNNV x s variables
outputLayer <- outputLayerSinRNNV hiddenLayer variables
return (outputLayer, sHiddenLayer)
nnSinRNNLossV :: DualMonad d r m
=> (Vector r, r)
-> DualNumberVariables d r
-> m (DualNumber d r)
nnSinRNNLossV (xs, target) variables = do
result <- zeroState 30 (unrollLast' fcfcrnnV) xs variables
lossSquared target result
-- Autoregressive model with degree 2
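-- The recurrence computed below is y_t = c + phi1 * y_{t-1} + phi2 * y_{t-2};
-- the one-element state vector carries the previous input forward so that it
-- is available as y_{t-2} at the next step.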
ar2Sin :: DualMonad d r m
=> r
-> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d r, DualNumber d (Vector r))
ar2Sin yLast s variables = do
let c = var0 variables 0
phi1 = var0 variables 1
phi2 = var0 variables 2
yLastLast = index0 s 0 -- dummy vector for compatibility
y <- returnLet $ c + scale yLast phi1 + phi2 * yLastLast
return (y, constant $ V.singleton yLast)
ar2SinLoss :: DualMonad d r m
=> (Vector r, r)
-> DualNumberVariables d r
-> m (DualNumber d r)
ar2SinLoss (xs, target) variables = do
result <- zeroState 30 (unrollLast' ar2Sin) xs variables
lossSquared target result
sinRNNTests :: TestTree
sinRNNTests = testGroup "Sine RNN tests"
[ sgdTestCase "train" nnSinRNNLoss (1, [30, 30], [(30, 1), (30, 30)], [])
(return $ take 30000 samples) 5.060827754123346e-5
, feedbackTestCase "feedback" fcfcrnn nnSinRNNLoss
(1, [30, 30], [(30, 1), (30, 30)], [])
(take 10000 samples)
[-0.9980267284282716,-0.9655322144631203,-0.8919588317267176,-0.7773331580548076,-0.6212249872512189,-0.4246885094957385,-0.19280278430361192,6.316924614971235e-2,0.3255160857644734,0.5731149496491759,0.7872840563791541,0.957217059407527,1.0815006200684472,1.1654656874016613,1.2170717188563214,1.2437913143303263,1.251142657837598,1.2423738174804864,1.2186583377053681,1.1794148708577938,1.1226117988569018,1.0450711676413071,0.9428743310020188,0.8120257428038534,0.6495453130357101,0.45507653540664667,0.23281831228915612,-6.935736916677385e-3,-0.24789484923780786,-0.4705527193222155]
, sgdTestCase "trainVV" nnSinRNNLossV (1, replicate 33 30, [], [])
(return $ take 30000 samples) 4.6511403967229306e-5
-- different random initial parameters produce a worse result;
-- the matrix implementation is faster, because the matrices still fit in cache
, feedbackTestCase "feedbackVV" fcfcrnnV nnSinRNNLossV
(1, replicate 33 30, [], [])
(take 10000 samples)
[-0.9980267284282716,-0.9660899403337656,-0.8930568599923028,-0.7791304201898077,-0.6245654477568863,-0.4314435277698684,-0.2058673183484546,4.0423225394292085e-2,0.29029630688547203,0.5241984159992963,0.7250013011527577,0.8820730400055012,0.9922277361823716,1.057620382863504,1.08252746840241,1.070784986731554,1.0245016946328942,0.9438848015250431,0.827868146535437,0.6753691437632174,0.48708347071773117,0.26756701680655437,2.6913747557207532e-2,-0.21912614372802072,-0.45154893423928943,-0.6525638736434227,-0.8098403108946983,-0.9180866488182939,-0.9775459850131992,-0.9910399864230198]
, sgdTestCase "trainAR" ar2SinLoss (3, [], [], [])
(return $ take 30000 samples) 6.327978161031336e-23
, feedbackTestCase "feedbackAR" ar2Sin ar2SinLoss
(3, [], [], [])
(take 10000 samples)
[-0.9980267284282716,-0.9510565162972417,-0.8443279255081759,-0.6845471059406962,-0.48175367412103653,-0.24868988719256901,-3.673766846290505e-11,0.24868988711894977,0.4817536740469978,0.6845471058659982,0.8443279254326351,0.9510565162207472,0.9980267283507953,0.9822872506502898,0.9048270523889208,0.7705132427021685,0.5877852522243431,0.3681245526237731,0.12533323351198067,-0.1253332336071494,-0.36812455271766376,-0.5877852523157643,-0.7705132427900961,-0.9048270524725681,-0.9822872507291605,-0.9980267284247174,-0.9510565162898851,-0.844327925497479,-0.6845471059273313,-0.48175367410584324]
]
-- * A 1 recurrent layer net with 128 neurons for MNIST, based on
-- https://medium.com/machine-learning-algorithms/mnist-using-recurrent-neural-network-2d070a5915a2
-- *Not* LSTM.
-- Doesn't train without Adam, regardless of whether mini-batch sgd
-- is used and whether a second recurrent layer is added. It does train with
-- Adam, but only after very carefully tweaking initialization. This is
-- extremely sensitive to initial parameters, more than to anything
-- else. Probably the gradient vanishes if parameters are initialized
-- with a probability distribution that doesn't have the right variance. See
-- https://stats.stackexchange.com/questions/301285/what-is-vanishing-gradient.
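-- Data layout (see 'rws' in the test cases below): each 28x28 glyph is fed as
-- 28 time steps of 28 pixels (one image row per step); the hidden state has
-- 'width' components (128 in these tests) and the output layer produces 10
-- class scores.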
hiddenLayerMnistRNNL :: DualMonad d r m
=> Vector r
-> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d (Vector r), DualNumber d (Vector r))
hiddenLayerMnistRNNL x s variables = do
let wX = var2 variables 0 -- 128x28
wS = var2 variables 1 -- 128x128
b = var1 variables 0 -- 128
y = wX #>!! x + wS #>! s + b
yTanh <- tanhAct y
return (yTanh, yTanh) -- tanh in both, as per https://github.com/keras-team/keras/blob/v2.8.0/keras/layers/legacy_rnn/rnn_cell_impl.py#L468
middleLayerMnistRNNL :: DualMonad d r m
=> DualNumber d (Vector r)
-> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d (Vector r), DualNumber d (Vector r))
middleLayerMnistRNNL vec s variables = do
let wX = var2 variables 3 -- 128x128
wS = var2 variables 4 -- 128x128
b = var1 variables 2 -- 128
y = wX #>! vec + wS #>! s + b
yTanh <- tanhAct y
return (yTanh, yTanh)
outputLayerMnistRNNL :: DualMonad d r m
=> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d (Vector r))
outputLayerMnistRNNL vec variables = do
let w = var2 variables 2 -- 10x128
b = var1 variables 1 -- 10
returnLet $ w #>! vec + b -- I assume there is no activations, as per https://www.tensorflow.org/api_docs/python/tf/compat/v1/layers/dense
fcfcrnnMnistL :: DualMonad d r m
=> Vector r
-> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d (Vector r), DualNumber d (Vector r))
fcfcrnnMnistL = hiddenLayerMnistRNNL
fcfcrnnMnistL2 :: DualMonad d r m
=> Vector r
-> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d (Vector r), DualNumber d (Vector r))
fcfcrnnMnistL2 x s@(D u _) variables = do
let len = V.length u `div` 2
s1 = slice1 0 len s
s2 = slice1 len len s
(vec1, s1') <- hiddenLayerMnistRNNL x s1 variables
(vec2, s2') <- middleLayerMnistRNNL vec1 s2 variables
s3 <- returnLet $ append1 s1' s2'
return (vec2, s3)
unrollLastG :: forall d a b c m r. DualMonad d r m
=> (a -> b -> DualNumberVariables d r -> m (c, b))
-> ([a] -> b -> DualNumberVariables d r -> m (c, b))
unrollLastG f xs s0 variables =
let g :: (c, b) -> a -> m (c, b)
g (_, s) x = f x s variables
in foldM g (undefined, s0) xs
nnMnistRNNL :: forall d r m. DualMonad d r m
=> Int
-> [Vector r]
-> DualNumberVariables d r
-> m (DualNumber d (Vector r))
nnMnistRNNL width x variables = do
rnnLayer <- zeroState width (unrollLastG fcfcrnnMnistL) x variables
outputLayerMnistRNNL rnnLayer variables
nnMnistRNNL2 :: DualMonad d r m
=> Int
-> [Vector r]
-> DualNumberVariables d r
-> m (DualNumber d (Vector r))
nnMnistRNNL2 width x variables = do
rnnLayer <- zeroState (2 * width) (unrollLastG fcfcrnnMnistL2) x variables
outputLayerMnistRNNL rnnLayer variables
nnMnistRNNLossL :: forall d r m. DualMonad d r m
=> Int
-> ([Vector r], Vector r)
-> DualNumberVariables d r
-> m (DualNumber d r)
nnMnistRNNLossL width (xs, target) variables = do
result <- nnMnistRNNL width xs variables
lossSoftMaxCrossEntropyV target result
nnMnistRNNLossL2 :: DualMonad d r m
=> Int
-> ([Vector r], Vector r)
-> DualNumberVariables d r
-> m (DualNumber d r)
nnMnistRNNLossL2 width (xs, target) variables = do
result <- nnMnistRNNL2 width xs variables
lossSoftMaxCrossEntropyV target result
testMnistRNNL :: forall r. IsScalar 'DModeGradient r
=> Int -> [([Vector r], Vector r)] -> Domains r -> r
testMnistRNNL width inputs parameters =
let matchesLabels :: ([Vector r], Vector r) -> Bool
matchesLabels (glyph, label) =
let nn = nnMnistRNNL width glyph
value = primalValue nn parameters
in V.maxIndex value == V.maxIndex label
in fromIntegral (length (filter matchesLabels inputs))
/ fromIntegral (length inputs)
testMnistRNNL2 :: forall r. IsScalar 'DModeGradient r
=> Int -> [([Vector r], Vector r)] -> Domains r -> r
testMnistRNNL2 width inputs parameters =
let matchesLabels :: ([Vector r], Vector r) -> Bool
matchesLabels (glyph, label) =
let nn = nnMnistRNNL2 width glyph
value = primalValue nn parameters
in V.maxIndex value == V.maxIndex label
in fromIntegral (length (filter matchesLabels inputs))
/ fromIntegral (length inputs)
-- A version written using vectors
hiddenLayerMnistRNNV :: DualMonad d r m
=> Int
-> Vector r
-> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d (Vector r), DualNumber d (Vector r))
hiddenLayerMnistRNNV width x s variables = do
let b = var1 variables (width + width) -- 128
y = sumConstantDataL x 0 variables width
+ sumTrainableInputsL s width variables width
+ b
yTanh <- tanhAct y
return (yTanh, yTanh)
outputLayerMnistRNNV :: DualMonad d r m
=> Int
-> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d (Vector r))
outputLayerMnistRNNV width vec variables = do
let b = var1 variables (width + width + 1 + 10) -- 10
returnLet $ sumTrainableInputsL vec (width + width + 1) variables 10 + b
fcfcrnnMnistV :: DualMonad d r m
=> Int
-> Vector r
-> DualNumber d (Vector r)
-> DualNumberVariables d r
-> m (DualNumber d (Vector r), DualNumber d (Vector r))
fcfcrnnMnistV = hiddenLayerMnistRNNV
nnMnistRNNV :: DualMonad d r m
=> Int
-> [Vector r]
-> DualNumberVariables d r
-> m (DualNumber d (Vector r))
nnMnistRNNV width x variables = do
rnnLayer <- zeroState width (unrollLastG $ fcfcrnnMnistV width) x variables
outputLayerMnistRNNV width rnnLayer variables
nnMnistRNNLossV :: DualMonad d r m
=> Int
-> ([Vector r], Vector r)
-> DualNumberVariables d r
-> m (DualNumber d r)
nnMnistRNNLossV width (xs, target) variables = do
result <- nnMnistRNNV width xs variables
lossSoftMaxCrossEntropyV target result
testMnistRNNV :: forall r. IsScalar 'DModeGradient r
=> Int -> [([Vector r], Vector r)] -> Domains r -> r
testMnistRNNV width inputs parameters =
let matchesLabels :: ([Vector r], Vector r) -> Bool
matchesLabels (glyph, label) =
let nn = nnMnistRNNV width glyph
value = primalValue nn parameters
in V.maxIndex value == V.maxIndex label
in fromIntegral (length (filter matchesLabels inputs))
/ fromIntegral (length inputs)
lenMnistRNNL :: Int -> Int -> (Int, [Int], [(Int, Int)], [OT.ShapeL])
lenMnistRNNL width nLayers =
( 0
, [width, 10] ++ replicate (nLayers - 1) width
, [(width, 28), (width, width), (10, width)]
++ concat (replicate (nLayers - 1) [(width, width), (width, width)])
, []
)
lenMnistRNNV :: Int -> Int -> (Int, [Int], [(Int, Int)], [OT.ShapeL])
lenMnistRNNV width nLayers =
( 0
, replicate width 28 ++ replicate width width ++ [width]
++ replicate 10 width ++ [10]
++ concat (replicate (nLayers - 1)
(replicate width width ++ replicate width width ++ [width]))
, []
, []
)
mnistTestCaseRNN
:: String
-> Int
-> Int
-> (Int
-> ([Vector Double], Vector Double)
-> DualNumberVariables 'DModeGradient Double
-> DualMonadGradient Double (DualNumber 'DModeGradient Double))
-> (Int -> [([Vector Double], Vector Double)] -> Domains Double -> Double)
-> (Int -> Int -> (Int, [Int], [(Int, Int)], [OT.ShapeL]))
-> Int
-> Int
-> Double
-> TestTree
mnistTestCaseRNN prefix epochs maxBatches f ftest flen width nLayers
expected =
let ((nParams0, nParams1, nParams2, _), totalParams, range, parameters0) =
initializerFixed 44 0.2 (flen width nLayers)
name = prefix ++ ": "
++ unwords [ show epochs, show maxBatches
, show width, show nLayers
, show nParams0, show nParams1, show nParams2
, show totalParams, show range ]
in testCase name $ do
hPutStrLn stderr $ printf "\n%s: Epochs to run/max batches per epoch: %d/%d"
prefix epochs maxBatches
let rws (input, target) =
( map (\k -> V.slice (k * 28) 28 input) [0 .. 27]
, target )
trainData <- map rws <$> loadMnistData trainGlyphsPath trainLabelsPath
testData <- map rws <$> loadMnistData testGlyphsPath testLabelsPath
-- There is some visual feedback, because some of these take long.
let runBatch :: (Domains Double, StateAdam Double)
-> (Int, [([Vector Double], Vector Double)])
-> IO (Domains Double, StateAdam Double)
runBatch (parameters@(!_, !_, !_, !_), stateAdam) (k, chunk) = do
let res@(parameters2, _) =
sgdAdamBatch 150 (f width) chunk parameters stateAdam
!trainScore = ftest width chunk parameters2
!testScore = ftest width testData parameters2
!lenChunk = length chunk
hPutStrLn stderr $ printf "\n%s: (Batch %d with %d points)" prefix k lenChunk
hPutStrLn stderr $ printf "%s: Training error: %.2f%%" prefix ((1 - trainScore) * 100)
hPutStrLn stderr $ printf "%s: Validation error: %.2f%%" prefix ((1 - testScore ) * 100)
return res
runEpoch :: Int
-> (Domains Double, StateAdam Double)
-> IO (Domains Double)
runEpoch n (params2, _) | n > epochs = return params2
runEpoch n paramsStateAdam = do
hPutStrLn stderr $ printf "\n%s: [Epoch %d]" prefix n
let trainDataShuffled = shuffle (mkStdGen $ n + 5) trainData
chunks = take maxBatches
$ zip [1 ..] $ chunksOf 5000 trainDataShuffled
!res <- foldM runBatch paramsStateAdam chunks
runEpoch (succ n) res
res <- runEpoch 1 (parameters0, initialStateAdam parameters0)
let testErrorFinal = 1 - ftest width testData res
testErrorFinal @?= expected
-- * A version written using matrices to express mini-batches of data
-- and so using matrix multiplication to run the neural net
hiddenLayerMnistRNNB :: DualMonad d r m
=> Matrix r -- the mini-batch of data 28x150
-> DualNumber d (Matrix r) -- state for mini-batch 128x150
-> DualNumberVariables d r
-> m (DualNumber d (Matrix r), DualNumber d (Matrix r))
hiddenLayerMnistRNNB x s variables = do
let wX = var2 variables 0 -- 128x28
wS = var2 variables 1 -- 128x128
b = var1 variables 0 -- 128
batchSize = HM.cols x
y = wX <>!! x + wS <>! s + asColumn2 b batchSize
yTanh <- returnLet $ tanh y
return (yTanh, yTanh)
middleLayerMnistRNNB :: DualMonad d r m
=> DualNumber d (Matrix r) -- 128x150
-> DualNumber d (Matrix r) -- 128x150
-> DualNumberVariables d r
-> m (DualNumber d (Matrix r), DualNumber d (Matrix r))
middleLayerMnistRNNB batchOfVec@(D u _) s variables = do
let wX = var2 variables 3 -- 128x128
wS = var2 variables 4 -- 128x128
b = var1 variables 2 -- 128
batchSize = HM.cols u
y = wX <>! batchOfVec + wS <>! s + asColumn2 b batchSize
yTanh <- returnLet $ tanh y
return (yTanh, yTanh)
outputLayerMnistRNNB :: DualMonad d r m
=> DualNumber d (Matrix r) -- 128x150
-> DualNumberVariables d r
-> m (DualNumber d (Matrix r))
outputLayerMnistRNNB batchOfVec@(D u _) variables = do
let w = var2 variables 2 -- 10x128
b = var1 variables 1 -- 10
batchSize = HM.cols u
returnLet $ w <>! batchOfVec + asColumn2 b batchSize
fcfcrnnMnistB :: DualMonad d r m
=> Matrix r
-> DualNumber d (Matrix r)
-> DualNumberVariables d r
-> m (DualNumber d (Matrix r), DualNumber d (Matrix r))
fcfcrnnMnistB = hiddenLayerMnistRNNB
fcfcrnnMnistB2 :: DualMonad d r m
=> Matrix r -- 28x150
-> DualNumber d (Matrix r) -- 256x150
-> DualNumberVariables d r
-> m (DualNumber d (Matrix r), DualNumber d (Matrix r))
fcfcrnnMnistB2 x s@(D u _) variables = do
let len = HM.rows u `div` 2
s1 = rowSlice2 0 len s
s2 = rowSlice2 len len s
(vec1, s1') <- hiddenLayerMnistRNNB x s1 variables
(vec2, s2') <- middleLayerMnistRNNB vec1 s2 variables
return (vec2, rowAppend2 s1' s2')
zeroStateB :: DualMonad d r m
=> (Int, Int)
-> (a
-> DualNumber d (Matrix r)
-> DualNumberVariables d r
-> m (DualNumber d r2, DualNumber d (Matrix r)))
-> (a
-> DualNumberVariables d r
-> m (DualNumber d r2))
zeroStateB ij f xs variables =
fst <$> f xs (constant $ HM.konst 0 ij) variables
nnMnistRNNB :: DualMonad d r m
=> Int
-> [Matrix r]
-> DualNumberVariables d r
-> m (DualNumber d (Matrix r))
nnMnistRNNB width xs variables = do
let batchSize = HM.cols $ head xs
rnnLayer <- zeroStateB (width, batchSize) (unrollLastG fcfcrnnMnistB)
xs variables
outputLayerMnistRNNB rnnLayer variables
nnMnistRNNB2 :: DualMonad d r m
=> Int
-> [Matrix r]
-> DualNumberVariables d r
-> m (DualNumber d (Matrix r))
nnMnistRNNB2 width xs variables = do
let batchSize = HM.cols $ head xs
rnnLayer <- zeroStateB (2 * width, batchSize) (unrollLastG fcfcrnnMnistB2)
xs variables
outputLayerMnistRNNB rnnLayer variables
nnMnistRNNLossB :: DualMonad d r m
=> Int
-> ([Matrix r], Matrix r)
-> DualNumberVariables d r
-> m (DualNumber d r)
nnMnistRNNLossB width (xs, target) variables = do
result <- nnMnistRNNB width xs variables
vec@(D u _) <- lossSoftMaxCrossEntropyL target result
returnLet $ scale (recip $ fromIntegral $ V.length u) $ sumElements0 vec
nnMnistRNNLossB2 :: DualMonad d r m
=> Int
-> ([Matrix r], Matrix r)
-> DualNumberVariables d r
-> m (DualNumber d r)
nnMnistRNNLossB2 width (xs, target) variables = do
result <- nnMnistRNNB2 width xs variables
vec@(D u _) <- lossSoftMaxCrossEntropyL target result
returnLet $ scale (recip $ fromIntegral $ V.length u) $ sumElements0 vec
mnistTestCaseRNNB
:: String
-> Int
-> Int
-> (Int
-> ([Matrix Double], Matrix Double)
-> DualNumberVariables 'DModeGradient Double
-> DualMonadGradient Double (DualNumber 'DModeGradient Double))
-> (Int -> [([Vector Double], Vector Double)] -> Domains Double -> Double)
-> (Int -> Int -> (Int, [Int], [(Int, Int)], [OT.ShapeL]))
-> Int
-> Int
-> Double
-> TestTree
mnistTestCaseRNNB prefix epochs maxBatches f ftest flen width nLayers
expected =
let ((nParams0, nParams1, nParams2, _), totalParams, range, parameters0) =
initializerFixed 44 0.2 (flen width nLayers)
name = prefix ++ ": "
++ unwords [ show epochs, show maxBatches
, show width, show nLayers
, show nParams0, show nParams1, show nParams2
, show totalParams, show range ]
in testCase name $ do
hPutStrLn stderr $ printf "\n%s: Epochs to run/max batches per epoch: %d/%d"
prefix epochs maxBatches
let rws (input, target) =
( map (\k -> V.slice (k * 28) 28 input) [0 .. 27]
, target )
trainData <- map rws <$> loadMnistData trainGlyphsPath trainLabelsPath
testData <- map rws <$> loadMnistData testGlyphsPath testLabelsPath
let packChunk :: [([Vector Double], Vector Double)]
-> ([Matrix Double], Matrix Double)
packChunk chunk =
let (inputs, targets) = unzip chunk
behead !acc ([] : _) = reverse acc
behead !acc l = behead (HM.fromColumns (map head l) : acc)
(map tail l)
in (behead [] inputs, HM.fromColumns targets)
-- There is some visual feedback, because some of these take long.
runBatch :: (Domains Double, StateAdam Double)
-> (Int, [([Vector Double], Vector Double)])
-> IO (Domains Double, StateAdam Double)
runBatch (parameters@(!_, !_, !_, !_), stateAdam) (k, chunk) = do
let res@(parameters2, _) =
sgdAdam (f width) (map packChunk $ chunksOf 150 chunk)
parameters stateAdam
!trainScore = ftest width chunk parameters2
!testScore = ftest width testData parameters2
!lenChunk = length chunk
hPutStrLn stderr $ printf "\n%s: (Batch %d with %d points)" prefix k lenChunk
hPutStrLn stderr $ printf "%s: Training error: %.2f%%" prefix ((1 - trainScore) * 100)
hPutStrLn stderr $ printf "%s: Validation error: %.2f%%" prefix ((1 - testScore ) * 100)
return res
runEpoch :: Int
-> (Domains Double, StateAdam Double)
-> IO (Domains Double)
runEpoch n (params2, _) | n > epochs = return params2
runEpoch n paramsStateAdam = do
hPutStrLn stderr $ printf "\n%s: [Epoch %d]" prefix n
let trainDataShuffled = shuffle (mkStdGen $ n + 5) trainData
chunks = take maxBatches
$ zip [1 ..] $ chunksOf 5000 trainDataShuffled
!res <- foldM runBatch paramsStateAdam chunks
runEpoch (succ n) res
res <- runEpoch 1 (parameters0, initialStateAdam parameters0)
let testErrorFinal = 1 - ftest width testData res
testErrorFinal @?= expected
-- * A version written using shaped tensors
mnistTestCaseRNNS
:: forall out_width batch_size d r m.
( KnownNat out_width, KnownNat batch_size
, r ~ Double, d ~ 'DModeGradient, m ~ DualMonadGradient Double )
=> String
-> Int
-> Int
-> (forall out_width' batch_size'.
(DualMonad d r m, KnownNat out_width', KnownNat batch_size')
=> Proxy out_width'
-> MnistDataBatchS batch_size' r
-> DualNumberVariables d r
-> m (DualNumber d r))
-> (forall out_width' batch_size'.
(IsScalar d r, KnownNat out_width', KnownNat batch_size')
=> Proxy r -> Proxy out_width'
-> MnistDataBatchS batch_size' r
-> Domains r
-> r)
-> (forall out_width'. KnownNat out_width'
=> Proxy out_width' -> (Int, [Int], [(Int, Int)], [OT.ShapeL]))
-> Double
-> TestTree
mnistTestCaseRNNS prefix epochs maxBatches trainWithLoss ftest flen expected =
let proxy_out_width = Proxy @out_width
batch_size = valueOf @batch_size
((_, _, _, nParamsX), totalParams, range, parametersInit) =
initializerFixed 44 0.2 (flen proxy_out_width)
name = prefix ++ ": "
++ unwords [ show epochs, show maxBatches
, show (valueOf @out_width :: Int), show batch_size
, show nParamsX, show totalParams, show range ]
in testCase name $ do
hPutStrLn stderr $ printf "\n%s: Epochs to run/max batches per epoch: %d/%d"
prefix epochs maxBatches
trainData <- map shapeBatch
<$> loadMnistData trainGlyphsPath trainLabelsPath
testData <- map shapeBatch
<$> loadMnistData testGlyphsPath testLabelsPath
let testDataS = packBatch @LengthTestData testData
-- There is some visual feedback, because some of these take long.
runBatch :: (Domains r, StateAdam r)
-> (Int, [MnistDataS r])
-> IO (Domains r, StateAdam r)
runBatch (parameters@(!_, !_, !_, !_), stateAdam) (k, chunk) = do
let f = trainWithLoss proxy_out_width
chunkS = map (packBatch @batch_size)
$ filter (\ch -> length ch >= batch_size)
$ chunksOf batch_size chunk
res@(parameters2, _) = sgdAdam f chunkS parameters stateAdam
!trainScore =
ftest (Proxy @r) proxy_out_width
(packBatch @(10 GHC.TypeLits.* batch_size) chunk)
parameters2
!testScore = ftest (Proxy @r) proxy_out_width
testDataS parameters2
!lenChunk = length chunk
hPutStrLn stderr $ printf "\n%s: (Batch %d with %d points)" prefix k lenChunk
hPutStrLn stderr $ printf "%s: Training error: %.2f%%" prefix ((1 - trainScore) * 100)
hPutStrLn stderr $ printf "%s: Validation error: %.2f%%" prefix ((1 - testScore ) * 100)
return res
runEpoch :: Int -> (Domains r, StateAdam r) -> IO (Domains r)
runEpoch n (params2, _) | n > epochs = return params2
runEpoch n paramsStateAdam = do
hPutStrLn stderr $ printf "\n%s: [Epoch %d]" prefix n
let trainDataShuffled = shuffle (mkStdGen $ n + 5) trainData
chunks = take maxBatches
$ zip [1 ..]
$ chunksOf (10 * batch_size) trainDataShuffled
!res <- foldM runBatch paramsStateAdam chunks
runEpoch (succ n) res
res <- runEpoch 1 (parametersInit, initialStateAdam parametersInit)
let testErrorFinal = 1 - ftest (Proxy @r) proxy_out_width testDataS res
testErrorFinal @?= expected
mnistRNNTestsLong :: TestTree
mnistRNNTestsLong = testGroup "MNIST RNN long tests"
[ mnistTestCaseRNN "99LL 1 epoch, all batches" 1 99
nnMnistRNNLossL testMnistRNNL lenMnistRNNL 128 1
8.209999999999995e-2
, mnistTestCaseRNNB "99BB 1 epoch, all batches" 1 99
nnMnistRNNLossB testMnistRNNL lenMnistRNNL 128 1
8.209999999999995e-2
, mnistTestCaseRNN "99LL2 1 epoch, all batches" 1 99
nnMnistRNNLossL2 testMnistRNNL2 lenMnistRNNL 128 2
6.259999999999999e-2
, mnistTestCaseRNNB "99BB2 1 epoch, all batches" 1 99
nnMnistRNNLossB2 testMnistRNNL2 lenMnistRNNL 128 2
6.259999999999999e-2
, mnistTestCaseRNN "99VV 1 epoch, all batches" 1 99
nnMnistRNNLossV testMnistRNNV lenMnistRNNV 128 1
6.740000000000002e-2
, mnistTestCaseRNNS @128 @150 "1S 1 epoch, 1 batch" 1 1
rnnMnistLossFusedS rnnMnistTestS rnnMnistLenS
0.4375
]
mnistRNNTestsShort :: TestTree
mnistRNNTestsShort = testGroup "MNIST RNN short tests"
[ let glyph = V.unfoldrExactN sizeMnistGlyph (uniformR (0, 1))
label = V.unfoldrExactN sizeMnistLabel (uniformR (0, 1))
rws v = map (\k -> V.slice (k * 28) 28 v) [0 .. 27]
trainData = map ((\g -> (rws (glyph g), label g)) . mkStdGen) [1 .. 140]
in sgdTestCaseAlt "randomLL 140"
(nnMnistRNNLossL 128)
(lenMnistRNNL 128 1)
(return trainData)
[39.26529871965807, 39.26529500592892]
, let rws (input, target) =
(map (\k -> V.slice (k * 28) 28 input) [0 .. 27], target)
in sgdTestCase "firstLL 100 trainset samples only"
(nnMnistRNNLossL 128)
(lenMnistRNNL 128 1)
(map rws . take 100
<$> loadMnistData trainGlyphsPath trainLabelsPath)
2.7790856895965272
, mnistTestCaseRNN "1LL 1 epoch, 1 batch" 1 1
nnMnistRNNLossL testMnistRNNL lenMnistRNNL 128 1
0.2845
, mnistTestCaseRNNB "1BB 1 epoch, 1 batch" 1 1
nnMnistRNNLossB testMnistRNNL lenMnistRNNL 128 1
0.2845
, let glyph = V.unfoldrExactN sizeMnistGlyph (uniformR (0, 1))
label = V.unfoldrExactN sizeMnistLabel (uniformR (0, 1))
rws v = map (\k -> V.slice (k * 28) 28 v) [0 .. 27]
trainData = map ((\g -> (rws (glyph g), label g)) . mkStdGen) [1 .. 140]
in sgdTestCaseAlt "randomLL2 140"
(nnMnistRNNLossL2 128)
(lenMnistRNNL 128 2)
(return trainData)
[30.061871495723956, 30.06187089990927]
, let rws (input, target) =
(map (\k -> V.slice (k * 28) 28 input) [0 .. 27], target)
in sgdTestCase "firstLL2 99 trainset samples only"
(nnMnistRNNLossL2 128)
(lenMnistRNNL 128 2)
(map rws . take 99
<$> loadMnistData trainGlyphsPath trainLabelsPath)
2.772595855528805
, mnistTestCaseRNN "1LL2 1 epoch, 1 batch" 1 1
nnMnistRNNLossL2 testMnistRNNL2 lenMnistRNNL 128 2
0.2945
, mnistTestCaseRNNB "1BB2 1 epoch, 1 batch" 1 1
nnMnistRNNLossB2 testMnistRNNL2 lenMnistRNNL 128 2
0.2945
, let glyph = V.unfoldrExactN sizeMnistGlyph (uniformR (0, 1))
label = V.unfoldrExactN sizeMnistLabel (uniformR (0, 1))
rws v = map (\k -> V.slice (k * 28) 28 v) [0 .. 27]
trainData = map ((\g -> (rws (glyph g), label g)) . mkStdGen) [1 .. 100]
in sgdTestCase "randomVV 100"
(nnMnistRNNLossV 128)
(lenMnistRNNV 128 1)
(return trainData)
48.93543453250378
, let rws (input, target) =
(map (\k -> V.slice (k * 28) 28 input) [0 .. 27], target)
in sgdTestCase "firstVV 100 trainset samples only"
(nnMnistRNNLossV 128)
(lenMnistRNNV 128 1)
(map rws . take 100
<$> loadMnistData trainGlyphsPath trainLabelsPath)
2.749410768938081
, mnistTestCaseRNN "1VV 1 epoch, 1 batch" 1 1
nnMnistRNNLossV testMnistRNNV lenMnistRNNV 128 1
0.3024
, mnistTestCaseRNNS @120 @15 "1S 1 epoch, 1 batch" 1 1
rnnMnistLossFusedS rnnMnistTestS rnnMnistLenS
0.8418
]
|
import data.list.basic
open list
universe u
variables {α : Type} (x y z : α) (xs ys zs : list α)
def mk_symm (xs : list α) := xs ++ reverse xs
theorem reverse_mk_symm (xs : list α) :
reverse (mk_symm xs) = mk_symm xs :=
by simp [mk_symm]
attribute [simp] reverse_mk_symm
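-- With `reverse_mk_symm` registered as a simp lemma, the `simp` call below can
-- rewrite `reverse (mk_symm ys)` away without mentioning the lemma explicitly.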
example (xs ys : list ℕ) (p : list ℕ → Prop) (h : p (reverse (xs ++ (mk_symm ys)))) :
p (mk_symm ys ++ reverse xs) :=
by { simp at h; assumption }
-- example (xs ys : list ℕ) (p : list ℕ → Prop) (h : p (reverse (xs ++ (mk_symm ys)))) :
-- p (mk_symm ys ++ reverse xs) :=
-- by { simp [-reverse_mk_symm] at h; assumption }
-- example (xs ys : list ℕ) (p : list ℕ → Prop) (h : p (reverse (xs ++ (mk_symm ys)))) :
-- p (mk_symm ys ++ reverse xs) :=
-- by { simp only [reverse_append] at h; assumption }
|
(* Copyright (c) 2012-2015, Robbert Krebbers. *)
(* This file is distributed under the terms of the BSD license. *)
From stdpp Require Import fin_map_dom natmap.
Require Export memory memory_map_refine values_refine.
Local Open Scope ctype_scope.
Section map_of_collection.
Context `{A, FinSet K C, FinMap K M}.
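  (* [map_of_collection f X] is the finite map binding each [i ∈ X] with
  [f i = Some x] to [x]; indices on which [f] returns [None] are skipped. *)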
Definition map_of_collection (f : K → option A) (X : C) : (M A) :=
list_to_map (omap (λ i, (pair i) <$> f i) (elements X)).
Lemma lookup_map_of_collection
(f : K → option A) X i x :
map_of_collection f X !! i = Some x ↔ i ∈ X ∧ f i = Some x.
Proof.
assert (NoDup (fst <$> omap (λ i, (pair i) <$> f i) (elements X))).
{ induction (NoDup_elements X) as [|i' l]; csimpl; [constructor|].
destruct (f i') as [x'|]; csimpl; auto; constructor; auto.
rewrite elem_of_list_fmap. setoid_rewrite elem_of_list_omap.
by intros (?&?&?&?&?); simplify_option_eq. }
unfold map_of_collection. rewrite <-elem_of_list_to_map by done.
rewrite elem_of_list_omap. setoid_rewrite elem_of_elements; split.
* intros (?&?&?); simplify_option_eq; eauto.
* intros [??]; exists i; simplify_option_eq; eauto.
Qed.
End map_of_collection.
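(* A lockset [Ω1] refines [Ω2] under the renaming [f] when both locksets are
valid, the memory environments are related by [f], and a bit [i] of a live
object [o1] is locked in [Ω1] exactly when the corresponding bit of the image
of [o1] is locked in [Ω2], shifted by the bit offset of the reference. *)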
#[global] Instance locks_refine `{Env K} :
Refine K (env K) lockset := λ Γ α f Δ1 Δ2 Ω1 Ω2,
(**i 1.) *) ✓{Γ,Δ1} Ω1 ∧ ✓{Γ,Δ2} Ω2 ∧
(**i 2.) *) Δ1 ⊑{Γ,α,f} Δ2 ∧
(**i 3.) *) (∀ o1 o2 r τ1 i,
f !! o1 = Some (o2,r) → Δ1 ⊢ o1 : τ1 → index_alive Δ1 o1 →
i < bit_size_of Γ τ1 →
(o1,i) ∈ Ω1 ↔ (o2,ref_object_offset Γ r + i) ∈ Ω2).
Section memory.
Context `{EnvSpec K}.
Implicit Types Γ : env K.
Implicit Types Δ : memenv K.
Implicit Types τ : type K.
Implicit Types a : addr K.
Implicit Types p : ptr K.
Implicit Types w : mtree K.
Implicit Types v : val K.
Implicit Types m : mem K.
Implicit Types α β : bool.
Implicit Types βs : list bool.
Implicit Types γb : pbit K.
Implicit Types γbs : list (pbit K).
Implicit Types Ω : lockset.
Hint Immediate ctree_refine_typed_l ctree_refine_typed_r: core.
Hint Immediate vals_refine_typed_l vals_refine_typed_r: core.
Hint Resolve Forall_app_2 Forall2_app: core.
Hint Immediate cmap_lookup_typed val_typed_type_valid: core.
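(* [solve_length] discharges arithmetic goals about list lengths by rewriting
with the standard length lemmas and finishing with [lia]. *)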
Ltac solve_length := repeat first
[ rewrite take_length | rewrite drop_length | rewrite app_length
| rewrite zip_with_length | rewrite replicate_length | rewrite resize_length
| rewrite fmap_length | erewrite ctree_flatten_length by eauto
| rewrite natset_to_bools_length
| match goal with H : Forall2 _ _ _ |- _ => apply Forall2_length in H end ];
lia.
Hint Extern 0 (length _ = _) => solve_length: core.
Hint Extern 0 (_ ≤ length _) => solve_length: core.
Hint Extern 0 (length _ ≤ _) => solve_length: core.
Lemma mem_lookup_refine Γ α f Δ1 Δ2 m1 m2 a1 a2 v1 τ :
✓ Γ → m1 ⊑{Γ,α,f@Δ1↦Δ2} m2 → a1 ⊑{Γ,α,f@Δ1↦Δ2} a2 : TType τ →
m1 !!{Γ} a1 = Some v1 →
∃ v2, m2 !!{Γ} a2 = Some v2 ∧ v1 ⊑{Γ,α,f@Δ1↦Δ2} v2 : τ.
Proof.
unfold lookupE, mem_lookup. intros.
destruct (m1 !!{Γ} a1) as [w1|] eqn:?; simplify_option_eq.
destruct (cmap_lookup_refine Γ α f Δ1 Δ2
m1 m2 a1 a2 w1 τ) as (w2&->&?); auto.
exists (to_val Γ w2); simplify_option_eq by eauto using
pbits_refine_kind_subseteq, ctree_flatten_refine; eauto using to_val_refine.
Qed.
Lemma mem_force_refine Γ α f Δ1 Δ2 m1 m2 a1 a2 τ :
✓ Γ → m1 ⊑{Γ,α,f@Δ1↦Δ2} m2 → a1 ⊑{Γ,α,f@Δ1↦Δ2} a2 : TType τ →
is_Some (m1 !!{Γ} a1) → mem_force Γ a1 m1 ⊑{Γ,α,f@Δ1↦Δ2} mem_force Γ a2 m2.
Proof.
unfold lookupE, mem_lookup, mem_force, lookupE, cmap_lookup.
intros ??? [v1 ?]; case_option_guard; simplify_equality'.
destruct (m1 !!{Γ} _) as [w1|] eqn:?; simplify_equality'.
destruct (addr_ref_refine Γ α f Δ1 Δ2 a1 a2 (TType τ)) as (r&?&_&?); auto.
destruct (cmap_lookup_ref_refine Γ α f Δ1 Δ2 m1 m2 (addr_index a1)
(addr_ref Γ a1) (addr_index a2) r w1) as (w2&?&?); auto.
erewrite <-(cmap_alter_ref_le _ _ _ _ (addr_ref Γ a2)) by eauto.
eapply cmap_alter_ref_refine; eauto.
case_decide; simplify_equality'; case_option_guard; simplify_equality'.
{ eapply ctree_Forall_not; eauto using pbits_mapped. }
destruct (w1 !!{Γ} _) as [w1'|] eqn:?; simplify_option_eq.
intros ?; eapply (ctree_Forall_not _ _ _ w1');
eauto using ctree_lookup_byte_Forall, pbit_unmapped_indetify,
pbits_mapped, ctree_lookup_byte_typed.
Qed.
Lemma mem_force_refine' Γ α f m1 m2 a1 a2 τ :
✓ Γ → m1 ⊑{Γ,α,f} m2 → a1 ⊑{Γ,α,f@'{m1}↦'{m2}} a2 : TType τ →
is_Some (m1 !!{Γ} a1) → mem_force Γ a1 m1 ⊑{Γ,α,f} mem_force Γ a2 m2.
Proof.
unfold refineM, cmap_refine'; intros ??? [v1 ?].
destruct (mem_lookup_refine Γ α f '{m1} '{m2}
m1 m2 a1 a2 v1 τ) as (v2&?&?); eauto.
erewrite !mem_force_memenv_of by eauto using cmap_refine_valid_l',
cmap_refine_valid_r'; eauto using mem_force_refine.
Qed.
Lemma mem_force_refine_l Γ Δ m a :
✓ Γ → ✓{Γ,Δ} m → is_Some (m !!{Γ} a) → frozen a →
mem_force Γ a m ⊑{Γ,true@Δ} m.
Proof.
intros ?? [v ?] ?.
split; split_and ?; eauto using mem_force_valid, memenv_refine_id.
intros ? o r w' μ'; rewrite lookup_meminj_id.
destruct m as [m]; unfold lookupE, mem_lookup, lookupE, cmap_lookup in *;
intros; simplify_equality'; case_option_guard; simplify_equality'.
destruct (m !! addr_index a) as [[|w μ]|] eqn:?; simplify_equality'.
destruct (w !!{Γ} addr_ref Γ a) as [w''|] eqn:?; simplify_equality'.
destruct (decide (o = addr_index a)); simplify_map_eq.
{ destruct (cmap_valid_Obj Γ Δ (CMap m) (addr_index a) w μ')
as (τ&?&_&?&_); eauto.
assert (¬ctree_unmapped w'').
{ case_decide; simplify_equality'; case_option_guard; simplify_equality'.
{ eapply ctree_Forall_not;
eauto using pbits_mapped, ctree_lookup_Some_type_of. }
destruct (w'' !!{Γ} addr_ref_byte Γ a)
as [w'''|] eqn:?; simplify_option_eq.
intros ?; eapply (ctree_Forall_not _ _ _ w''');
eauto using ctree_lookup_byte_Forall, pbit_unmapped_indetify,
pbits_mapped, ctree_lookup_byte_typed, ctree_lookup_Some_type_of. }
assert (freeze true <$> addr_ref Γ a = addr_ref Γ a).
{ rewrite <-addr_ref_freeze. by f_equal. }
eauto 10 using ctree_alter_id_refine. }
destruct (cmap_valid_Obj Γ Δ (CMap m) o w' μ')
as (τ&?&_&?&_); eauto 10 using ctree_refine_id.
Qed.
Lemma mem_writable_refine Γ α f Δ1 Δ2 m1 m2 a1 a2 τ :
✓ Γ → m1 ⊑{Γ,α,f@Δ1↦Δ2} m2 → a1 ⊑{Γ,α,f@Δ1↦Δ2} a2 : TType τ →
mem_writable Γ a1 m1 → mem_writable Γ a2 m2.
Proof.
intros ??? (w1&?&?). destruct (cmap_lookup_refine Γ α f Δ1 Δ2
m1 m2 a1 a2 w1 τ) as (w2&?&?); auto.
exists w2; eauto using pbits_refine_kind_subseteq, ctree_flatten_refine.
Qed.
Lemma mem_insert_refine Γ α f Δ1 Δ2 m1 m2 a1 a2 v1 v2 τ :
✓ Γ → m1 ⊑{Γ,α,f@Δ1↦Δ2} m2 → a1 ⊑{Γ,α,f@Δ1↦Δ2} a2 : TType τ →
mem_writable Γ a1 m1 → v1 ⊑{Γ,α,f@Δ1↦Δ2} v2 : τ →
<[a1:=v1]{Γ}>m1 ⊑{Γ,α,f@Δ1↦Δ2} <[a2:=v2]{Γ}>m2.
Proof.
intros ??? (w1&?&?) ?. destruct (cmap_lookup_refine Γ α f Δ1 Δ2
m1 m2 a1 a2 w1 τ) as (w2&?&?); auto.
eapply cmap_alter_refine; eauto 1.
* eapply ctree_Forall_not, pbits_mapped; eauto using pbits_kind_weaken.
* erewrite <-(pbits_refine_perm _ _ _ _ _ (ctree_flatten w1)
(ctree_flatten w2)) by eauto using ctree_flatten_refine.
eapply of_val_refine; eauto.
eapply pbits_perm_unshared, pbits_unshared; eauto using
pbits_kind_weaken, pbits_valid_sep_valid, ctree_flatten_valid.
* eapply ctree_Forall_not, of_val_flatten_mapped; eauto using
val_refine_typed_l, of_val_flatten_typed, cmap_lookup_Some.
Qed.
Lemma mem_insert_refine' Γ α f m1 m2 a1 a2 v1 v2 τ :
✓ Γ → m1 ⊑{Γ,α,f} m2 →
a1 ⊑{Γ,α,f@'{m1}↦'{m2}} a2 : TType τ → mem_writable Γ a1 m1 →
v1 ⊑{Γ,α,f@'{m1}↦'{m2}} v2 : τ → <[a1:=v1]{Γ}>m1 ⊑{Γ,α,f} <[a2:=v2]{Γ}>m2.
Proof.
unfold refineM, cmap_refine'; intros.
erewrite !mem_insert_memenv_of by eauto using cmap_refine_valid_l',
cmap_refine_valid_r', addr_refine_typed_l, addr_refine_typed_r,
val_refine_typed_l, val_refine_typed_r, mem_writable_refine.
eauto using mem_insert_refine.
Qed.
(* todo: prove a stronger version that allows allocating multiple objects
on the left that all map to the same object on the right. *)
Lemma mem_refine_extend Γ α f Δ1 Δ2 o1 o2 :
✓ Γ → Δ1 ⊑{Γ,α,f} Δ2 → Δ1 !! o1 = None → Δ2 !! o2 = None → ∃ f',
(**i 1.) *) Δ1 ⊑{Γ,α,f'} Δ2 ∧
(**i 2.) *) f' !! o1 = Some (o2,[]) ∧
(**i 3.) *) meminj_extend f f' Δ1 Δ2.
Proof.
intros ? HΔ ??. set (f' := meminj_map $
(<[o1:=(o2,[])]> (map_of_collection (f !!.) (dom indexset Δ1)))).
assert (f' !! o1 = Some (o2,[])) as help1.
{ by unfold f', lookup; intros; simplify_map_eq. }
assert (∀ o' τ, Δ1 ⊢ o' : τ → f' !! o' = f !! o') as help2.
{ intros o' τ [β ?]; unfold lookup at 1, f'; simpl.
rewrite lookup_insert_ne by naive_solver.
apply option_eq; intros [o2' r]; simpl.
rewrite lookup_map_of_collection, elem_of_dom; naive_solver. }
exists f'; repeat split; auto.
* intros o3 o3' o4 r1 r2. destruct HΔ as [? _ ?? _ _ _ _].
unfold typed, index_typed in *; unfold lookup, f'; simpl.
rewrite !lookup_insert_Some, !lookup_map_of_collection, !elem_of_dom.
intros [[??]|(?&[[??]?]&?)] [[??]|(?&[[??]?]&?)]; naive_solver.
* intros o3 o4 r. destruct HΔ as [_ ? _ _ _ _ _ _]. unfold lookup, f'; simpl.
rewrite lookup_insert_Some, lookup_map_of_collection, elem_of_dom.
intros [[??]|(?&[[??]?]&?)]; simplify_map_eq; naive_solver.
* intros o3 o4 r τ Hf' ?; erewrite help2 in Hf' by eauto.
eauto using memenv_refine_typed_l.
* intros o3 o4 r τ. destruct HΔ as [_ _ _ ? _ _ _ _].
unfold lookup, f'; simpl; unfold typed, index_typed in *.
rewrite lookup_insert_Some, lookup_map_of_collection, elem_of_dom.
intros [[??]|(?&[[??]?]&?)]; simplify_map_eq; naive_solver.
* intros o3 o4 r Hf' Ho3.
assert (∃ τ, Δ1 ⊢ o3 : τ) as [τ ?] by (destruct Ho3; do 2 eexists; eauto).
erewrite help2 in Hf' by eauto; eauto using memenv_refine_alive_l.
* intros o3 o4 r ?. destruct HΔ as [_ _ _ _ _ ? _ _].
unfold lookup, f'; simpl; unfold index_alive in *.
rewrite lookup_insert_Some, lookup_map_of_collection, elem_of_dom.
intros [[??]|(?&[[??]?]&?)]; simplify_map_eq; naive_solver.
* intros o3 τ. destruct α; [by intros []|intros].
destruct (memenv_refine_perm_l Γ f Δ1 Δ2 o3 τ) as (o4&?&?); auto.
exists o4. by erewrite help2 by eauto.
* intros o4 τ. destruct α; [by intros []|intros].
destruct (decide (o4 = o2)) as [->|]; eauto.
destruct (memenv_refine_perm_r Γ f Δ1 Δ2 o4 τ) as (o3&?&?); auto.
exists o3. by erewrite help2 by eauto.
* intros o3 o4 r τ [??].
unfold lookup at 1, f'; simpl; unfold typed, index_typed in *.
rewrite lookup_insert_Some, lookup_map_of_collection, elem_of_dom.
intros [[??]|(?&[[??]?]&?)]; simplify_map_eq; naive_solver.
Qed.
Lemma mem_alloc_refine_env Γ α f Δ1 Δ2 τ o1 o2 :
Δ1 ⊑{Γ,α,f} Δ2 → Δ1 !! o1 = None → Δ2 !! o2 = None →
f !! o1 = Some (o2,[]) →
<[o1:=(τ,false)]> Δ1 ⊑{Γ,α,f} <[o2:=(τ,false)]> Δ2.
Proof.
intros HΔ; split; eauto using memenv_refine_injective.
* eauto using memenv_refine_frozen.
* intros o3 o4 r τ3 ? Ho3. destruct (decide (o1 = o3)) as [->|?].
+ destruct Ho3; simplify_map_eq/=.
setoid_rewrite ref_typed_nil; eauto using mem_alloc_index_typed.
+ destruct (memenv_refine_typed_l HΔ o3 o4 r τ3)
as (τ4&?&?); eauto using mem_alloc_forward,
memenv_forward_typed, mem_alloc_index_typed_inv.
* intros o3 o4 r τ4 ? Ho4. destruct (decide (o1 = o3)) as [->|?].
{ destruct Ho4; simplify_map_eq.
setoid_rewrite ref_typed_nil; eauto using mem_alloc_index_typed. }
destruct (meminj_injective_ne f o1 o2 o3 o4 [] r)
as [|[??]]; simplify_map_eq; eauto using memenv_refine_injective.
+ destruct (memenv_refine_typed_r HΔ o3 o4 r τ4)
as (τ3&?&?); eauto using mem_alloc_forward,
memenv_forward_typed, mem_alloc_index_typed_inv.
+ by destruct (ref_disjoint_nil_inv_l r).
* intros o3 o4 r ??; destruct (decide (o1 = o3)) as [->|?].
{ simplify_equality; eauto using mem_alloc_index_alive. }
destruct (meminj_injective_ne f o1 o2 o3 o4 [] r) as [?|[??]];
eauto using memenv_refine_injective, mem_alloc_index_alive_ne,
mem_alloc_index_alive_inv, memenv_refine_alive_l.
by destruct (ref_disjoint_nil_inv_l r).
* intros o3 o4 r ??; destruct (decide (o2 = o4)) as [->|?].
{ destruct (memenv_refine_injective Γ α f Δ1 Δ2 HΔ o1 o3 o4 [] r);
simplify_equality; eauto using mem_alloc_index_alive.
by destruct (ref_disjoint_nil_inv_l r). }
eauto using mem_alloc_index_alive_ne,
mem_alloc_index_alive_inv, memenv_refine_alive_r with congruence.
* intros o3 ???. destruct (decide (o1 = o3)) as [->|]; eauto.
eauto using memenv_refine_perm_l', mem_alloc_index_typed_inv.
* intros o4 ???. destruct (decide (o2 = o4)) as [->|]; eauto.
eauto using memenv_refine_perm_r', mem_alloc_index_typed_inv.
Qed.
Lemma mem_alloc_refine Γ α f Δ1 Δ2 m1 m2 o1 o2 mallc x v1 v2 τ :
let Δ1' := <[o1:=(τ,false)]>Δ1 in let Δ2' := <[o2:=(τ,false)]>Δ2 in
✓ Γ → m1 ⊑{Γ,α,f@Δ1↦Δ2} m2 →
Δ1 !! o1 = None → Δ2 !! o2 = None → f !! o1 = Some (o2,[]) →
sep_unshared x → v1 ⊑{Γ,α,f@Δ1'↦Δ2'} v2 : τ →
mem_alloc Γ o1 mallc x v1 m1 ⊑{Γ,α,f@Δ1'↦Δ2'} mem_alloc Γ o2 mallc x v2 m2.
Proof.
simpl; intros ? (?&?&HΔ&Hm) ?????.
assert (sep_valid x) by (by apply sep_unshared_valid).
split; split_and ?; eauto 5 using mem_alloc_valid, mem_alloc_refine_env,
val_refine_typed_l, val_refine_typed_r, sep_unshared_unmapped.
destruct m1 as [m1], m2 as [m2]; intros o3 o4 r w3 μ' ?; simpl in *.
rewrite lookup_insert_Some; intros [[??]|[??]]; simplify_map_eq.
{ exists (of_val Γ (replicate (bit_size_of Γ τ) x) v2),
(of_val Γ (replicate (bit_size_of Γ τ) x) v2), τ.
erewrite val_refine_type_of_r, val_refine_type_of_l by eauto.
auto 7 using of_val_refine, Forall_replicate. }
destruct (meminj_injective_ne f o1 o2 o3 o4 [] r)
as [|[??]]; simplify_map_eq; eauto using memenv_refine_injective.
* destruct (Hm o3 o4 r w3 μ') as (w2&w2'&τ2&?&?&?&?); auto.
exists w2, w2', τ2; eauto 10 using ctree_refine_weaken,
mem_alloc_forward, mem_alloc_refine_env, meminj_extend_reflexive.
* by destruct (ref_disjoint_nil_inv_l r).
Qed.
Lemma mem_alloc_refine' Γ α f m1 m2 o1 o2 μ x v1 v2 τ :
✓ Γ → m1 ⊑{Γ,α,f} m2 → o1 ∉ dom indexset m1 → o2 ∉ dom indexset m2 →
sep_unshared x → v1 ⊑{Γ,α,f@'{m1}↦'{m2}} v2 : τ → ∃ f',
(**i 1.) *) f' !! o1 = Some (o2,[]) ∧
(**i 2.) *)
mem_alloc Γ o1 μ x v1 m1 ⊑{Γ,α,f'} mem_alloc Γ o2 μ x v2 m2 ∧
(**i 3.) *) meminj_extend f f' '{m1} '{m2}.
Proof.
intros ????? Hv. destruct (mem_refine_extend Γ α f '{m1} '{m2} o1 o2) as
(f'&?&?&?); eauto using mem_allocable_memenv_of,cmap_refine_memenv_refine.
exists f'; split_and ?; auto. unfold refineM, cmap_refine'.
erewrite !mem_alloc_memenv_of
by eauto using val_refine_typed_l, val_refine_typed_r.
eapply mem_alloc_refine;
eauto 10 using mem_allocable_memenv_of, cmap_refine_weaken,
val_refine_weaken, mem_alloc_forward, mem_alloc_refine_env.
Qed.
Lemma mem_alloc_new_refine' Γ α f m1 m2 o1 o2 μ x τ :
✓ Γ → m1 ⊑{Γ,α,f} m2 → o1 ∉ dom indexset m1 → o2 ∉ dom indexset m2 →
sep_unshared x → ✓{Γ} τ → ∃ f',
(**i 1.) *) f' !! o1 = Some (o2,[]) ∧
(**i 2.) *)
mem_alloc Γ o1 μ x (val_new Γ τ) m1
⊑{Γ,α,f'} mem_alloc Γ o2 μ x (val_new Γ τ) m2 ∧
(**i 3.) *) meminj_extend f f' '{m1} '{m2}.
Proof. eauto using mem_alloc_refine', (val_new_refine _ _ ∅). Qed.
Hint Immediate cmap_refine_valid_l' cmap_refine_valid_r': core.
Hint Immediate cmap_refine_memenv_refine: core.
Lemma mem_alloc_list_refine' Γ α f m1 m2 os1 os2 vs1 vs2 τs :
✓ Γ → m1 ⊑{Γ,α,f} m2 → vs1 ⊑{Γ,α,f@'{m1}↦'{m2}}* vs2 :* τs →
length os1 = length vs1 → length os2 = length vs2 →
Forall_fresh (dom indexset m1) os1 → Forall_fresh (dom indexset m2) os2 → ∃ f',
(**i 1.) *) Forall2 (λ o1 o2, f' !! o1 = Some (o2,[])) os1 os2 ∧
(**i 2.) *) mem_alloc_list Γ os1 vs1 m1
⊑{Γ,α,f'} mem_alloc_list Γ os2 vs2 m2 ∧
(**i 3.) *) meminj_extend f f' '{m1} '{m2}.
Proof.
rewrite <-!Forall2_same_length. intros ? Hm Hvs Hovs1 Hovs2 Hos1 Hos2.
revert τs os2 vs2 Hos1 Hos2 Hvs Hovs2.
induction Hovs1 as [|o1 v1 os1 vs1 ?? IH];
intros [|τ τs] [|o2 os2] [|v2 vs2]; inversion_clear 1;
inversion_clear 1; inversion_clear 1; intros; decompose_Forall_hyps.
{ eauto using meminj_extend_reflexive. }
assert ((Γ,'{m1}) ⊢ v1 : τ) by eauto using val_refine_typed_l.
assert ((Γ,'{m2}) ⊢ v2 : τ) by eauto using val_refine_typed_r.
assert (✓{Γ} τ) by eauto using val_typed_type_valid.
destruct (IH τs os2 vs2) as (f'&?&?&?); auto.
assert (o1 ∉ dom indexset (mem_alloc_list Γ os1 vs1 m1)).
{ rewrite mem_dom_alloc_list, elem_of_union, elem_of_list_to_set
by eauto using Forall2_length; tauto. }
assert (o2 ∉ dom indexset (mem_alloc_list Γ os2 vs2 m2)).
{ rewrite mem_dom_alloc_list, elem_of_union, elem_of_list_to_set
by eauto using Forall2_length; tauto. }
destruct (mem_alloc_refine' Γ α f' (mem_alloc_list Γ os1 vs1 m1)
(mem_alloc_list Γ os2 vs2 m2) o1 o2 false perm_full v1 v2 τ)
as (f''&?&?&?); eauto 6 using perm_full_mapped, perm_full_unshared,
val_refine_weaken, mem_alloc_list_forward.
exists f''; split_and ?; eauto using meminj_extend_transitive.
* assert ('{mem_alloc_list Γ os1 vs1 m1} ⊢* os1 :* τs)
by eauto using mem_alloc_list_index_typed.
constructor; auto.
decompose_Forall. eapply (@transitivity _ _ _ ) with (f' !! _);
eauto using eq_sym, meminj_extend_left.
* eapply meminj_extend_transitive; eauto using mem_alloc_list_forward.
Qed.
Lemma mem_freeable_refine Γ α f Δ1 Δ2 m1 m2 a1 a2 τp :
✓ Γ → m1 ⊑{Γ,α,f@Δ1↦Δ2} m2 →
a1 ⊑{Γ,α,f@Δ1↦Δ2} a2 : τp → mem_freeable a1 m1 → mem_freeable a2 m2.
Proof.
intros ? (_&_&_&Hm) ? (Ha&w1&?&?).
rewrite addr_is_top_array_alt in Ha by eauto using addr_refine_typed_l.
destruct Ha as (τ'&n&?&Ha1&?).
destruct (addr_ref_refine Γ α f Δ1 Δ2 a1 a2 τp) as (r&?&_&Ha2); eauto.
destruct (Hm (addr_index a1) (addr_index a2) r w1 true)
as (?&w2&τ''&?&?&?&Hr); auto; specialize (Hr I); simplify_type_equality'.
split; [|exists w2; eauto using pbits_refine_perm_1, ctree_flatten_refine].
rewrite addr_is_top_array_alt by eauto using addr_refine_typed_r.
assert (addr_ref Γ a2 = [RArray 0 τ' n]) as ->.
{ by rewrite Ha1 in Ha2;
inversion Ha2 as [|???? Harr]; inversion Harr; decompose_Forall_hyps. }
erewrite <-addr_ref_byte_refine by eauto.
exists τ', n; split_and ?; eauto using addr_strict_refine.
Qed.
Lemma mem_freeable_index_refine Γ α f Δ1 Δ2 m1 m2 a1 a2 τp :
✓ Γ → m1 ⊑{Γ,α,f@Δ1↦Δ2} m2 → a1 ⊑{Γ,α,f@Δ1↦Δ2} a2 : τp →
mem_freeable a1 m1 → f !! addr_index a1 = Some (addr_index a2, []).
Proof.
intros ? (_&_&_&Hm) ? (Ha&w1&?&?).
rewrite addr_is_top_array_alt in Ha by eauto using addr_refine_typed_l.
destruct Ha as (τ'&n&?&Ha1&?), (addr_ref_refine Γ α f Δ1 Δ2 a1 a2 τp)
as (r&?&Ha2); naive_solver.
Qed.
Lemma mem_free_refine_env Γ α f Δ1 Δ2 o1 o2 :
Δ1 ⊑{Γ,α,f} Δ2 → f !! o1 = Some (o2,[]) →
alter (prod_map id (λ _, true)) o1 Δ1
⊑{Γ,α,f} alter (prod_map id (λ _, true)) o2 Δ2.
Proof.
intros HΔ ?; split; eauto using memenv_refine_injective.
* eauto using memenv_refine_frozen.
* intros o3 o4 r τ3 ??.
destruct (memenv_refine_typed_l HΔ o3 o4 r τ3) as (τ4&?&?); eauto
using mem_free_index_typed_inv, mem_free_forward, memenv_forward_typed.
* intros o3 o4 r τ4 ??.
destruct (memenv_refine_typed_r HΔ o3 o4 r τ4) as (τ3&?&?); eauto
using mem_free_index_typed_inv, mem_free_forward, memenv_forward_typed.
* intros o3 o4 r ??. destruct (decide (o2 = o4)) as [->|?].
{ destruct (memenv_refine_injective Γ α f Δ1 Δ2 HΔ o1 o3 o4 [] r);
simplify_equality; eauto.
+ by destruct (mem_free_index_alive Δ1 o3).
+ by destruct (ref_disjoint_nil_inv_l r). }
eauto using mem_free_index_alive_ne,
mem_free_index_alive_inv, memenv_refine_alive_l.
* intros o3 o4 r ???. destruct (decide (o1 = o3)); simplify_equality.
+ by destruct (mem_free_index_alive Δ2 o2).
+ eauto using mem_free_index_alive_ne,
mem_free_index_alive_inv, memenv_refine_alive_r.
* intros o3 τ ??. destruct (decide (o1 = o3)) as [->|]; eauto.
eauto using memenv_refine_perm_l', mem_free_index_typed_inv.
* intros o4 τ ??. destruct (decide (o2 = o4)) as [->|]; eauto.
eauto using memenv_refine_perm_r', mem_free_index_typed_inv.
Qed.
Lemma mem_free_refine_env_l Γ f Δ1 Δ2 o :
Δ1 ⊑{Γ,true,f} Δ2 → alter (prod_map id (λ _, true)) o Δ1 ⊑{Γ,true,f} Δ2.
Proof.
destruct 1; split; simpl; try by auto.
* eauto using mem_free_index_typed_inv.
* naive_solver eauto using mem_free_forward, memenv_forward_typed.
* eauto using mem_free_index_alive_inv.
Qed.
Lemma mem_free_refine_env_r Γ f Δ1 Δ2 o :
Δ1 ⊑{Γ,true,f} Δ2 → (∀ o' r, f !! o' = Some (o,r) → ¬index_alive Δ1 o') →
Δ1 ⊑{Γ,true,f} alter (prod_map id (λ _, true)) o Δ2.
Proof.
intros [] Hf; split; simpl; try by auto.
* naive_solver eauto using mem_free_forward, memenv_forward_typed.
* eauto using mem_free_index_typed_inv.
* intros o1 o2 r ??.
destruct (decide (o2 = o)) as [->|?]; [by destruct (Hf o1 r)|].
eauto using mem_free_index_alive_ne.
Qed.
Lemma mem_free_refine Γ α f Δ1 Δ2 m1 m2 o1 o2 :
let Δ1' := alter (prod_map id (λ _, true)) o1 Δ1 in
let Δ2' := alter (prod_map id (λ _, true)) o2 Δ2 in
✓ Γ → m1 ⊑{Γ,α,f@Δ1↦Δ2} m2 → f !! o1 = Some (o2,[]) →
mem_free o1 m1 ⊑{Γ,α,f@Δ1'↦Δ2'} mem_free o2 m2.
Proof.
simpl; intros ?(?&?&?&Hm).
split; split_and ?; auto using mem_free_valid, mem_free_refine_env.
destruct m1 as [m1], m2 as [m2]; simpl in *.
intros o1' o2' r w1 μ ?; rewrite lookup_alter_Some;
intros [(?&[?|??]&?&?)|[??]]; simplify_equality'; eauto.
destruct (Hm o1' o2' r w1 μ) as (w2&w2'&τ2&?&?&?&?); auto.
destruct (decide (o2 = o2')) as [->|?]; simplify_map_eq.
* destruct (meminj_injective_alt f o1 o1' o2' [] r) as [->|[??]];
simplify_map_eq; eauto using memenv_refine_injective.
by destruct (ref_disjoint_nil_inv_l r).
* exists w2, w2', τ2; split_and ?; eauto using ctree_refine_weaken,
mem_free_forward, mem_free_refine_env, meminj_extend_reflexive.
Qed.
Lemma mem_free_refine_l Γ f Δ1 Δ2 m1 m2 o :
let Δ1' := alter (prod_map id (λ _, true)) o Δ1 in
✓ Γ → m1 ⊑{Γ,true,f@Δ1↦Δ2} m2 → mem_free o m1 ⊑{Γ,true,f@Δ1'↦Δ2} m2.
Proof.
simpl; intros ?(?&?&?&Hm).
split; split_and ?; auto using mem_free_valid, mem_free_refine_env_l.
destruct m1 as [m1], m2 as [m2]; simpl in *.
intros o1 o2 r w1 μ ?; rewrite lookup_alter_Some;
intros [(?&[?|??]&?&?)|[??]]; simplify_equality'; eauto.
destruct (Hm o1 o2 r w1 μ) as (w2&w2'&τ2&?&?&?&?); auto.
exists w2, w2', τ2; eauto 10 using ctree_refine_weaken,
mem_free_forward, mem_free_refine_env_l, meminj_extend_reflexive.
Qed.
Lemma mem_free_refine_r Γ f Δ1 Δ2 m1 m2 o :
let Δ2' := alter (prod_map id (λ _, true)) o Δ2 in ✓ Γ →
(∀ o' r, f !! o' = Some (o,r) → ¬index_alive Δ1 o') →
m1 ⊑{Γ,true,f@Δ1↦Δ2} m2 → m1 ⊑{Γ,true,f@Δ1↦Δ2'} mem_free o m2.
Proof.
simpl; intros ? Hf (Hm1&?&?&Hm).
split; split_and ?; auto using mem_free_valid, mem_free_refine_env_r.
destruct m1 as [m1], m2 as [m2]; simpl in *; intros o1 o2 r w1 μ ??.
destruct (cmap_valid_Obj Γ Δ1 (CMap m1) o1 w1 μ) as (τ1&?&?&_); auto.
destruct (decide (o2 = o)) as [->|?]; [by destruct (Hf o1 r)|].
destruct (Hm o1 o2 r w1 μ) as (w2&w2'&τ2&?&?&?&?); auto.
exists w2, w2', τ2; simplify_map_eq; eauto 7 using ctree_refine_weaken,
mem_free_forward, mem_free_refine_env_r, meminj_extend_reflexive.
Qed.
Lemma mem_free_refine' Γ α f m1 m2 o1 o2 :
✓ Γ → m1 ⊑{Γ,α,f} m2 → f !! o1 = Some (o2,[]) →
mem_free o1 m1 ⊑{Γ,α,f} mem_free o2 m2.
Proof.
unfold refineM, cmap_refine'.
rewrite !mem_free_memenv_of; eauto using mem_free_refine.
Qed.
Lemma mem_foldr_free_refine Γ α f m1 m2 os1 os2 :
✓ Γ → m1 ⊑{Γ,α,f} m2 →
Forall2 (λ o1 o2, f !! o1 = Some (o2, [])) os1 os2 →
foldr mem_free m1 os1 ⊑{Γ,α,f} foldr mem_free m2 os2.
Proof. induction 3; simpl; auto using mem_free_refine'. Qed.
Lemma locks_refine_id Γ α Δ Ω : ✓{Γ,Δ} Ω → Ω ⊑{Γ,α@Δ} Ω.
Proof.
split; split_and ?; intros *; rewrite ?lookup_meminj_id; intros;
simplify_type_equality'; eauto using memenv_refine_id.
Qed.
Lemma locks_refine_compose Γ α1 α2 f1 f2 Δ1 Δ2 Δ3 Ω1 Ω2 Ω3 :
✓ Γ → Ω1 ⊑{Γ,α1,f1@Δ1↦Δ2} Ω2 → Ω2 ⊑{Γ,α2,f2@Δ2↦Δ3} Ω3 →
Ω1 ⊑{Γ,α1||α2,f2 ◎ f1@Δ1↦Δ3} Ω3.
Proof.
intros ? (?&?&HΔ12&HΩ12) (?&?&HΔ23&HΩ23);
split; split_and ?; eauto using memenv_refine_compose.
intros o1 o3 r τ1 i.
rewrite lookup_meminj_compose_Some; intros (o2&r2&r3&?&?&->) ???.
destruct (memenv_refine_typed_l HΔ12 o1 o2 r2 τ1) as (τ2&?&?); auto.
destruct (memenv_refine_typed_l HΔ23 o2 o3 r3 τ2) as (τ3&?&?); auto.
assert (ref_object_offset Γ r2 + i < bit_size_of Γ τ2).
{ apply Nat.lt_le_trans with
(ref_object_offset Γ r2 + bit_size_of Γ τ1); [lia|].
eauto using ref_object_offset_size'. }
rewrite HΩ12, HΩ23 by eauto using memenv_refine_alive_l.
by rewrite ref_object_offset_app, Nat.add_assoc,
(Nat.add_comm (ref_object_offset Γ r2)).
Qed.
Lemma locks_refine_inverse Γ f Δ1 Δ2 Ω1 Ω2 :
Ω1 ⊑{Γ,false,f@Δ1↦Δ2} Ω2 → Ω2 ⊑{Γ,false,meminj_inverse f@Δ2↦Δ1} Ω1.
Proof.
intros (?&?&?&Hf); split; split_and ?; eauto using memenv_refine_inverse.
intros o2 o1 r τ i Ho2 ???. destruct (lookup_meminj_inverse_1 Γ f
Δ1 Δ2 o1 o2 r τ) as (?&?&->); simpl; auto.
symmetry; apply (Hf _ _ [] τ); eauto using memenv_refine_alive_r.
Qed.
Lemma locks_refine_valid_l Γ α f Δ1 Δ2 Ω1 Ω2 :
Ω1 ⊑{Γ,α,f@Δ1↦Δ2} Ω2 → ✓{Γ,Δ1} Ω1.
Proof. by intros (?&?&?&?). Qed.
Lemma locks_list_refine_valid_l Γ α f Δ1 Δ2 Ωs1 Ωs2 :
Ωs1 ⊑{Γ,α,f@Δ1↦Δ2}* Ωs2 → ✓{Γ,Δ1}* Ωs1.
Proof. induction 1; eauto using locks_refine_valid_l. Qed.
Lemma locks_refine_valid_r Γ α f Δ1 Δ2 Ω1 Ω2 :
Ω1 ⊑{Γ,α,f@Δ1↦Δ2} Ω2 → ✓{Γ,Δ2} Ω2.
Proof. by intros (?&?&?&?). Qed.
Lemma locks_list_refine_valid_r Γ α f Δ1 Δ2 Ωs1 Ωs2 :
Ωs1 ⊑{Γ,α,f@Δ1↦Δ2}* Ωs2 → ✓{Γ,Δ2}* Ωs2.
Proof. induction 1; eauto using locks_refine_valid_r. Qed.
Lemma locks_refine_weaken Γ α α' f f' Δ1 Δ2 Δ1' Δ2' Ω1 Ω2 :
✓ Γ → Ω1 ⊑{Γ,α,f@Δ1↦Δ2} Ω2 →
Δ1' ⊑{Γ,α',f'} Δ2' → Δ1 ⇒ₘ Δ1' → Δ2 ⇒ₘ Δ2' →
meminj_extend f f' Δ1 Δ2 → Ω1 ⊑{Γ,α',f'@Δ1'↦Δ2'} Ω2.
Proof.
intros ? (HΩ1&HΩ2&HΔ12&HΩ) ? HΔ ? [??];
split; split_and ?; eauto 2 using lockset_valid_weaken.
intros o1 o2 r τ1 i ????; split.
* intros ?. destruct (HΩ1 o1 i) as (τ1'&?&?); auto.
assert (τ1 = τ1') by eauto using typed_unique, memenv_forward_typed.
simplify_type_equality.
by erewrite <-HΩ by eauto using memenv_forward_alive, option_eq_1.
* intros ?. destruct (HΩ2 o2 (ref_object_offset Γ r + i)) as (τ2'&?&?); auto.
destruct (memenv_refine_typed_r HΔ12 o1 o2 r τ2') as (τ1'&?&?); eauto.
assert (τ1 = τ1') by eauto using typed_unique, memenv_forward_typed.
simplify_type_equality. by erewrite HΩ by eauto using memenv_forward_alive.
Qed.
Lemma locks_empty_refine Γ α f Δ1 Δ2 :
Δ1 ⊑{Γ,α,f} Δ2 → (∅ : lockset) ⊑{Γ,α,f@Δ1↦Δ2} ∅.
Proof. split; split_and ?; eauto using lockset_empty_valid; set_solver. Qed.
Lemma mem_locks_refine Γ α f m1 m2 :
✓ Γ → m1 ⊑{Γ,α,f} m2 → mem_locks m1 ⊑{Γ,α,f@'{m1}↦'{m2}} mem_locks m2.
Proof.
intros ? (Hm1&Hm2&?&Hm); split; split_and ?; auto using mem_locks_valid.
intros o1 o2 r σ1 i ?? [σ1' ?] ?. assert (∃ w1 μ,
cmap_car m1 !! o1 = Some (Obj w1 μ)) as (w1&μ&?).
{ destruct m1 as [m1]; simplify_map_eq.
destruct (m1 !! o1) as [[]|]; naive_solver. }
destruct (Hm o1 o2 r w1 μ) as (w2'&w2&τ2&?&?&?&?); auto; clear Hm.
assert ((Γ,'{m1}) ⊢ w1 : τ2) by eauto.
destruct (cmap_valid_Obj Γ '{m1} m1 o1 w1 μ) as (?&?&?&?&_),
(cmap_valid_Obj Γ '{m2} m2 o2 w2' μ) as (τ'&?&?&?&_);
simplify_type_equality'; auto.
rewrite !elem_of_mem_locks; simplify_option_eq.
rewrite <-!list_lookup_fmap.
erewrite pbits_refine_locked; eauto using ctree_flatten_refine.
rewrite <-(ctree_lookup_flatten Γ '{m2} w2' τ' r w2 σ1)
by eauto using ctree_refine_typed_r, ctree_lookup_le, ref_freeze_le_l.
by rewrite pbits_locked_mask, fmap_take, fmap_drop, lookup_take, lookup_drop.
Qed.
Lemma mem_lock_refine Γ α f Δ1 Δ2 m1 m2 a1 a2 τ :
✓ Γ → m1 ⊑{Γ,α,f@Δ1↦Δ2} m2 → a1 ⊑{Γ,α,f@Δ1↦Δ2} a2 : TType τ →
mem_writable Γ a1 m1 → mem_lock Γ a1 m1 ⊑{Γ,α,f@Δ1↦Δ2} mem_lock Γ a2 m2.
Proof.
intros ??? (w1&?&?).
destruct (cmap_lookup_refine Γ α f Δ1 Δ2 m1 m2 a1 a2 w1 τ) as (w2&?&?); auto.
eapply cmap_alter_refine; eauto 1.
* eapply ctree_Forall_not, pbits_mapped; eauto using pbits_kind_weaken.
* apply ctree_map_refine; eauto using pbit_lock_unshared, pbit_lock_indetified,
pbits_lock_refine, ctree_flatten_refine, pbit_lock_mapped.
* eapply ctree_Forall_not; eauto 8 using ctree_map_typed, pbit_lock_indetified,
pbits_lock_valid, ctree_flatten_valid, pbit_lock_mapped.
rewrite ctree_flatten_map.
eauto using pbits_lock_mapped, pbits_mapped, pbits_kind_weaken.
Qed.
Lemma mem_lock_refine' Γ α f m1 m2 a1 a2 τ :
✓ Γ → m1 ⊑{Γ,α,f} m2 → a1 ⊑{Γ,α,f@'{m1}↦'{m2}} a2 : TType τ →
mem_writable Γ a1 m1 → mem_lock Γ a1 m1 ⊑{Γ,α,f} mem_lock Γ a2 m2.
Proof.
intros. unfold refineM, cmap_refine'. erewrite !mem_lock_memenv_of by eauto
using cmap_refine_valid_l, cmap_refine_valid_r, mem_writable_refine.
eauto using mem_lock_refine.
Qed.
Lemma ctree_unlock_refine Γ α f Δ1 Δ2 w1 w2 τ βs :
✓ Γ → w1 ⊑{Γ,α,f@Δ1↦Δ2} w2 : τ → length βs = bit_size_of Γ τ →
ctree_merge pbit_unlock_if w1 βs
⊑{Γ,α,f@Δ1↦Δ2} ctree_merge pbit_unlock_if w2 βs : τ.
Proof.
intros HΓ Hw Hlen.
apply ctree_leaf_refine_refine; eauto using ctree_unlock_typed.
revert w1 w2 τ Hw βs Hlen.
refine (ctree_refine_ind _ _ _ _ _ _ _ _ _ _ _ _); simpl.
* constructor; auto using pbits_unlock_refine.
* intros τ n ws1 ws2 -> ? IH _ βs. rewrite bit_size_of_array. intros Hlen.
constructor. revert βs Hlen. induction IH; intros; decompose_Forall_hyps;
erewrite ?Forall2_length by eauto using ctree_flatten_refine; auto.
* intros t τs wγbss1 wγbss2 Ht Hws IH Hγbss _ _ Hpad βs.
erewrite bit_size_of_struct by eauto; clear Ht. constructor.
+ revert wγbss1 wγbss2 βs Hws IH Hγbss Hlen Hpad. unfold field_bit_padding.
induction (bit_size_of_fields _ τs HΓ);
intros [|[w1 γbs1] ?] [|[w2 γbs2] ?];
do 2 inversion_clear 1; intros; decompose_Forall_hyps; [done|].
erewrite ?ctree_flatten_length, <-(Forall2_length _ γbs1 γbs2) by eauto.
constructor; eauto.
+ clear Hlen IH Hpad. revert βs. induction Hws as [|[w1 γbs1] [w2 γbs2]];
intros; decompose_Forall_hyps; auto.
erewrite ?ctree_flatten_length, <-(Forall2_length _ γbs1 γbs2) by eauto.
constructor; eauto using pbits_unlock_refine.
* intros. erewrite Forall2_length by eauto using ctree_flatten_refine.
constructor; auto using pbits_unlock_refine.
* constructor; auto using pbits_unlock_refine.
* intros t i τs w1 γbs1 γbs2 τ ???????? βs ?.
erewrite ctree_flatten_length by eauto.
constructor; auto using pbits_unlock_unshared.
rewrite ctree_flatten_merge, <-zip_with_app, take_drop by auto.
auto using pbits_unlock_refine.
Qed.
Lemma mem_unlock_refine Γ α f Δ1 Δ2 m1 m2 Ω1 Ω2 :
✓ Γ → m1 ⊑{Γ,α,f@Δ1↦Δ2} m2 → Ω1 ⊑{Γ,α,f@Δ1↦Δ2} Ω2 →
mem_unlock Ω1 m1 ⊑{Γ,α,f@Δ1↦Δ2} mem_unlock Ω2 m2.
Proof.
assert (∀ γb β,
pbit_unlock_if (pbit_indetify γb) β = pbit_indetify (pbit_unlock_if γb β)).
{ by intros ? []. }
assert (∀ γb β, sep_unshared γb → sep_unshared (pbit_unlock_if γb β)).
{ intros ? []; eauto using pbit_unlock_unshared. }
assert (∀ n γbs,
length γbs = n → zip_with pbit_unlock_if γbs (replicate n false) = γbs).
{ intros n γbs <-. rewrite zip_with_replicate_r by auto.
by elim γbs; intros; f_equal'. }
intros ? (Hm1&Hm2&?&Hm) (_&_&_&HΩ);
split; split_and ?; auto using mem_unlock_valid; intros o1 o2 r w1 μ ? Hw1.
destruct m1 as [m1], m2 as [m2], Ω1 as [Ω1 HΩ1], Ω2 as [Ω2 HΩ2]; simpl in *.
unfold elem_of, lockset_elem_of in HΩ; simpl in HΩ; clear HΩ1 HΩ2.
rewrite lookup_merge in Hw1 |- *. unfold diag_None in Hw1 |- *.
destruct (m1 !! o1) as [[|w1' μ']|] eqn:?; try by destruct (Ω1 !! o1).
destruct (Hm o1 o2 r w1' μ') as (w2&w2'&τ1&Ho2&?&?&?); auto; clear Hm.
assert ((Γ,Δ1) ⊢ w1' : τ1) by eauto using ctree_refine_typed_l.
assert ((Γ,Δ2) ⊢ w2' : τ1) by eauto using ctree_refine_typed_r.
destruct (cmap_valid_Obj Γ Δ1 (CMap m1) o1 w1' μ')as (?&?&?&?&_),
(cmap_valid_Obj Γ Δ2 (CMap m2) o2 w2 μ') as (τ2&?&?&?&_);
simplify_type_equality; auto.
destruct (ctree_lookup_Some Γ Δ2 w2 τ2 r w2')
as (τ1'&?&?); auto; simplify_type_equality.
assert (ref_object_offset Γ r + bit_size_of Γ τ1
≤ bit_size_of Γ τ2) by eauto using ref_object_offset_size'.
erewrite Ho2, ctree_flatten_length by eauto.
destruct (Ω1 !! o1) as [ω1|] eqn:?; simplify_equality'.
{ erewrite ctree_flatten_length by eauto. destruct (Ω2 !! o2) as [ω2|] eqn:?.
* assert (take (bit_size_of Γ τ1) (drop (ref_object_offset Γ r) (natset_to_bools
(bit_size_of Γ τ2) ω2)) = natset_to_bools (bit_size_of Γ τ1) ω1) as Hω2.
{ apply list_eq_same_length with (bit_size_of Γ τ1); try done.
intros i β1 β2 ?.
specialize (HΩ o1 o2 r τ1 i); feed specialize HΩ; auto.
assert (i ∈ ω1 ↔ ref_object_offset Γ r + i ∈ ω2) as Hi by naive_solver.
rewrite lookup_take, lookup_drop, !lookup_natset_to_bools, Hi by lia.
destruct β1, β2; intuition. }
do 3 eexists; split_and ?; eauto using ctree_lookup_merge.
rewrite Hω2; eauto using ctree_unlock_refine.
* assert (natset_to_bools (bit_size_of Γ τ1) ω1
= replicate (bit_size_of Γ τ1) false) as Hω.
{ apply list_eq_same_length with (bit_size_of Γ τ1); try done.
intros i β1 β2 ?. rewrite lookup_replicate_2 by done.
intros Hβ1 ?; destruct β1; simplify_equality'; try done.
rewrite lookup_natset_to_bools_true in Hβ1 by lia.
specialize (HΩ o1 o2 r τ1 i); feed specialize HΩ; auto.
destruct (proj1 HΩ) as (?&?&?); simplify_equality; eauto. }
do 3 eexists; split_and ?; eauto.
rewrite Hω, ctree_merge_id by auto; eauto. }
destruct (Ω2 !! o2) as [ω2|] eqn:?; [|by eauto 7].
assert (take (bit_size_of Γ τ1) (drop (ref_object_offset Γ r) (natset_to_bools
(bit_size_of Γ τ2) ω2)) = replicate (bit_size_of Γ τ1) false) as Hω2.
{ apply list_eq_same_length with (bit_size_of Γ τ1); try done.
intros i β1 β2 ?.
rewrite lookup_take, lookup_drop, lookup_replicate_2 by done.
intros Hβ1 ?; destruct β1; simplify_equality'; try done.
rewrite lookup_natset_to_bools_true in Hβ1 by lia.
specialize (HΩ o1 o2 r τ1 i); feed specialize HΩ; auto.
destruct (proj2 HΩ) as (?&?&?); simplify_equality; eauto. }
do 3 eexists; split_and ?; eauto using ctree_lookup_merge.
rewrite Hω2, ctree_merge_id by auto; eauto.
Qed.
Lemma mem_unlock_refine' Γ α f m1 m2 Ω1 Ω2 :
✓ Γ → m1 ⊑{Γ,α,f} m2 → Ω1 ⊑{Γ,α,f@'{m1}↦'{m2}} Ω2 →
mem_unlock Ω1 m1 ⊑{Γ,α,f} mem_unlock Ω2 m2.
Proof.
unfold refineM, cmap_refine'. rewrite !mem_unlock_memenv_of.
eauto using mem_unlock_refine.
Qed.
Lemma lock_singleton_refine Γ α f Δ1 Δ2 a1 a2 σ :
✓ Γ → a1 ⊑{Γ,α,f@Δ1↦Δ2} a2 : TType σ → addr_strict Γ a1 →
lock_singleton Γ a1 ⊑{Γ,α,f@Δ1↦Δ2} lock_singleton Γ a2.
Proof.
intros ? Ha ?.
assert (Δ1 ⊑{Γ,α,f} Δ2) as HΔ by eauto using addr_refine_memenv_refine.
assert ((Γ,Δ1) ⊢ a1 : TType σ) by eauto using addr_refine_typed_l.
assert ((Γ,Δ2) ⊢ a2 : TType σ) by eauto using addr_refine_typed_r.
split; split_and ?; eauto using lock_singleton_valid, addr_strict_refine.
intros o1 o2 r τ i ????. rewrite !elem_of_lock_singleton_typed by eauto.
destruct (addr_object_offset_refine Γ α f
Δ1 Δ2 a1 a2 (TType σ)) as (r'&?&?&->); auto.
split; [intros (->&?&?); simplify_equality'; intuition lia|intros (->&?&?)].
destruct (meminj_injective_alt f o1 (addr_index a1) (addr_index a2) r r')
as [|[??]]; simplify_equality'; eauto using memenv_refine_injective.
{ intuition lia. }
destruct (memenv_refine_typed_r HΔ o1 (addr_index a2) r
(addr_type_object a2)) as (?&?&?); eauto using addr_typed_index;
simplify_type_equality'.
assert (addr_object_offset Γ a1 + bit_size_of Γ σ
≤ bit_size_of Γ (addr_type_object a1)).
{ erewrite addr_object_offset_alt by eauto. transitivity
(ref_object_offset Γ (addr_ref Γ a1) + bit_size_of Γ (addr_type_base a1));
eauto using ref_object_offset_size', addr_typed_ref_typed.
rewrite <-Nat.add_assoc, <-Nat.add_le_mono_l; eauto using addr_bit_range. }
destruct (ref_disjoint_object_offset Γ (addr_type_object a2) r r'
τ (addr_type_object a1)); auto; lia.
Qed.
Lemma locks_union_refine Γ α f Δ1 Δ2 Ω1 Ω2 Ω1' Ω2' :
Ω1 ⊑{Γ,α,f@Δ1↦Δ2} Ω2 → Ω1' ⊑{Γ,α,f@Δ1↦Δ2} Ω2' →
Ω1 ∪ Ω1' ⊑{Γ,α,f@Δ1↦Δ2} Ω2 ∪ Ω2'.
Proof.
intros (?&?&?&HΩ) (?&?&_&HΩ');
split; split_and ?; auto using lockset_union_valid.
intros o1 o2 r τ1 i ????. by rewrite !elem_of_union, HΩ, HΩ' by eauto.
Qed.
Lemma locks_union_list_refine Γ α f Δ1 Δ2 Ωs1 Ωs2 :
Δ1 ⊑{Γ,α,f} Δ2 → Ωs1 ⊑{Γ,α,f@Δ1↦Δ2}* Ωs2 → ⋃ Ωs1 ⊑{Γ,α,f@Δ1↦Δ2} ⋃ Ωs2.
Proof.
induction 2; simpl; eauto using locks_union_refine, locks_empty_refine.
Qed.
End memory.
|
[GOAL]
s t : ℤ
r' : ℕ
s' t' : ℤ
⊢ xgcdAux 0 s t r' s' t' = (r', s', t')
[PROOFSTEP]
simp [xgcdAux]
[GOAL]
r : ℕ
s t : ℤ
r' : ℕ
s' t' : ℤ
h : 0 < r
⊢ xgcdAux r s t r' s' t' = xgcdAux (r' % r) (s' - ↑r' / ↑r * s) (t' - ↑r' / ↑r * t) r s t
[PROOFSTEP]
obtain ⟨r, rfl⟩ := Nat.exists_eq_succ_of_ne_zero h.ne'
[GOAL]
case intro
s t : ℤ
r' : ℕ
s' t' : ℤ
r : ℕ
h : 0 < succ r
⊢ xgcdAux (succ r) s t r' s' t' =
xgcdAux (r' % succ r) (s' - ↑r' / ↑(succ r) * s) (t' - ↑r' / ↑(succ r) * t) (succ r) s t
[PROOFSTEP]
rfl
[GOAL]
s : ℕ
⊢ gcdA 0 s = 0
[PROOFSTEP]
unfold gcdA
[GOAL]
s : ℕ
⊢ (xgcd 0 s).fst = 0
[PROOFSTEP]
rw [xgcd, xgcd_zero_left]
[GOAL]
s : ℕ
⊢ gcdB 0 s = 1
[PROOFSTEP]
unfold gcdB
[GOAL]
s : ℕ
⊢ (xgcd 0 s).snd = 1
[PROOFSTEP]
rw [xgcd, xgcd_zero_left]
[GOAL]
s : ℕ
h : s ≠ 0
⊢ gcdA s 0 = 1
[PROOFSTEP]
unfold gcdA xgcd
[GOAL]
s : ℕ
h : s ≠ 0
⊢ (xgcdAux s 1 0 0 0 1).snd.fst = 1
[PROOFSTEP]
obtain ⟨s, rfl⟩ := Nat.exists_eq_succ_of_ne_zero h
[GOAL]
case intro
s : ℕ
h : succ s ≠ 0
⊢ (xgcdAux (succ s) 1 0 0 0 1).snd.fst = 1
[PROOFSTEP]
rw [xgcdAux_succ]
[GOAL]
case intro
s : ℕ
h : succ s ≠ 0
⊢ (xgcdAux (0 % succ s) (0 - ↑0 / ↑(succ s) * 1) (1 - ↑0 / ↑(succ s) * 0) (succ s) 1 0).snd.fst = 1
[PROOFSTEP]
rfl
[GOAL]
s : ℕ
h : s ≠ 0
⊢ gcdB s 0 = 0
[PROOFSTEP]
unfold gcdB xgcd
[GOAL]
s : ℕ
h : s ≠ 0
⊢ (xgcdAux s 1 0 0 0 1).snd.snd = 0
[PROOFSTEP]
obtain ⟨s, rfl⟩ := Nat.exists_eq_succ_of_ne_zero h
[GOAL]
case intro
s : ℕ
h : succ s ≠ 0
⊢ (xgcdAux (succ s) 1 0 0 0 1).snd.snd = 0
[PROOFSTEP]
rw [xgcdAux_succ]
[GOAL]
case intro
s : ℕ
h : succ s ≠ 0
⊢ (xgcdAux (0 % succ s) (0 - ↑0 / ↑(succ s) * 1) (1 - ↑0 / ↑(succ s) * 0) (succ s) 1 0).snd.snd = 0
[PROOFSTEP]
rfl
[GOAL]
x y : ℕ
⊢ ∀ (n : ℕ) (s t s' t' : ℤ), (xgcdAux 0 s t n s' t').fst = gcd 0 n
[PROOFSTEP]
simp
[GOAL]
x✝ y✝ x y : ℕ
h : 0 < x
IH : ∀ (s t s' t' : ℤ), (xgcdAux (y % x) s t x s' t').fst = gcd (y % x) x
s t s' t' : ℤ
⊢ (xgcdAux x s t y s' t').fst = gcd x y
[PROOFSTEP]
simp [xgcdAux_rec, h, IH]
[GOAL]
x✝ y✝ x y : ℕ
h : 0 < x
IH : ∀ (s t s' t' : ℤ), (xgcdAux (y % x) s t x s' t').fst = gcd (y % x) x
s t s' t' : ℤ
⊢ gcd (y % x) x = gcd x y
[PROOFSTEP]
rw [← gcd_rec]
[GOAL]
x y : ℕ
⊢ xgcdAux x 1 0 y 0 1 = (gcd x y, xgcd x y)
[PROOFSTEP]
rw [xgcd, ← xgcdAux_fst x y 1 0 0 1]
[GOAL]
x y : ℕ
⊢ xgcd x y = (gcdA x y, gcdB x y)
[PROOFSTEP]
unfold gcdA gcdB
[GOAL]
x y : ℕ
⊢ xgcd x y = ((xgcd x y).fst, (xgcd x y).snd)
[PROOFSTEP]
cases xgcd x y
[GOAL]
case mk
x y : ℕ
fst✝ snd✝ : ℤ
⊢ (fst✝, snd✝) = ((fst✝, snd✝).fst, (fst✝, snd✝).snd)
[PROOFSTEP]
rfl
[GOAL]
x y r r' : ℕ
⊢ ∀ {s t s' t' : ℤ}, Nat.P x y (r, s, t) → Nat.P x y (r', s', t') → Nat.P x y (xgcdAux r s t r' s' t')
[PROOFSTEP]
induction r, r' using gcd.induction with
| H0 => simp
| H1 a b h IH =>
intro s t s' t' p p'
rw [xgcdAux_rec h]; refine' IH _ p; dsimp [P] at *
rw [Int.emod_def]; generalize (b / a : ℤ) = k
rw [p, p', mul_sub, sub_add_eq_add_sub, mul_sub, add_mul, mul_comm k t, mul_comm k s, ← mul_assoc, ← mul_assoc,
add_comm (x * s * k), ← add_sub_assoc, sub_sub]
[GOAL]
x y r r' : ℕ
⊢ ∀ {s t s' t' : ℤ}, Nat.P x y (r, s, t) → Nat.P x y (r', s', t') → Nat.P x y (xgcdAux r s t r' s' t')
[PROOFSTEP]
induction r, r' using gcd.induction with
| H0 => simp
| H1 a b h IH =>
intro s t s' t' p p'
rw [xgcdAux_rec h]; refine' IH _ p; dsimp [P] at *
rw [Int.emod_def]; generalize (b / a : ℤ) = k
rw [p, p', mul_sub, sub_add_eq_add_sub, mul_sub, add_mul, mul_comm k t, mul_comm k s, ← mul_assoc, ← mul_assoc,
add_comm (x * s * k), ← add_sub_assoc, sub_sub]
[GOAL]
case H0
x y n✝ : ℕ
⊢ ∀ {s t s' t' : ℤ}, Nat.P x y (0, s, t) → Nat.P x y (n✝, s', t') → Nat.P x y (xgcdAux 0 s t n✝ s' t')
[PROOFSTEP]
| H0 => simp
[GOAL]
case H0
x y n✝ : ℕ
⊢ ∀ {s t s' t' : ℤ}, Nat.P x y (0, s, t) → Nat.P x y (n✝, s', t') → Nat.P x y (xgcdAux 0 s t n✝ s' t')
[PROOFSTEP]
simp
[GOAL]
case H1
x y a b : ℕ
h : 0 < a
IH : ∀ {s t s' t' : ℤ}, Nat.P x y (b % a, s, t) → Nat.P x y (a, s', t') → Nat.P x y (xgcdAux (b % a) s t a s' t')
⊢ ∀ {s t s' t' : ℤ}, Nat.P x y (a, s, t) → Nat.P x y (b, s', t') → Nat.P x y (xgcdAux a s t b s' t')
[PROOFSTEP]
| H1 a b h IH =>
intro s t s' t' p p'
rw [xgcdAux_rec h]; refine' IH _ p; dsimp [P] at *
rw [Int.emod_def]; generalize (b / a : ℤ) = k
rw [p, p', mul_sub, sub_add_eq_add_sub, mul_sub, add_mul, mul_comm k t, mul_comm k s, ← mul_assoc, ← mul_assoc,
add_comm (x * s * k), ← add_sub_assoc, sub_sub]
[GOAL]
case H1
x y a b : ℕ
h : 0 < a
IH : ∀ {s t s' t' : ℤ}, Nat.P x y (b % a, s, t) → Nat.P x y (a, s', t') → Nat.P x y (xgcdAux (b % a) s t a s' t')
⊢ ∀ {s t s' t' : ℤ}, Nat.P x y (a, s, t) → Nat.P x y (b, s', t') → Nat.P x y (xgcdAux a s t b s' t')
[PROOFSTEP]
intro s t s' t' p p'
[GOAL]
case H1
x y a b : ℕ
h : 0 < a
IH : ∀ {s t s' t' : ℤ}, Nat.P x y (b % a, s, t) → Nat.P x y (a, s', t') → Nat.P x y (xgcdAux (b % a) s t a s' t')
s t s' t' : ℤ
p : Nat.P x y (a, s, t)
p' : Nat.P x y (b, s', t')
⊢ Nat.P x y (xgcdAux a s t b s' t')
[PROOFSTEP]
rw [xgcdAux_rec h]
[GOAL]
case H1
x y a b : ℕ
h : 0 < a
IH : ∀ {s t s' t' : ℤ}, Nat.P x y (b % a, s, t) → Nat.P x y (a, s', t') → Nat.P x y (xgcdAux (b % a) s t a s' t')
s t s' t' : ℤ
p : Nat.P x y (a, s, t)
p' : Nat.P x y (b, s', t')
⊢ Nat.P x y (xgcdAux (b % a) (s' - ↑b / ↑a * s) (t' - ↑b / ↑a * t) a s t)
[PROOFSTEP]
refine' IH _ p
[GOAL]
case H1
x y a b : ℕ
h : 0 < a
IH : ∀ {s t s' t' : ℤ}, Nat.P x y (b % a, s, t) → Nat.P x y (a, s', t') → Nat.P x y (xgcdAux (b % a) s t a s' t')
s t s' t' : ℤ
p : Nat.P x y (a, s, t)
p' : Nat.P x y (b, s', t')
⊢ Nat.P x y (b % a, s' - ↑b / ↑a * s, t' - ↑b / ↑a * t)
[PROOFSTEP]
dsimp [P] at *
[GOAL]
case H1
x y a b : ℕ
h : 0 < a
IH :
∀ {s t s' t' : ℤ},
↑b % ↑a = ↑x * s + ↑y * t →
↑a = ↑x * s' + ↑y * t' →
↑(xgcdAux (b % a) s t a s' t').fst =
↑x * (xgcdAux (b % a) s t a s' t').2.fst + ↑y * (xgcdAux (b % a) s t a s' t').2.snd
s t s' t' : ℤ
p : ↑a = ↑x * s + ↑y * t
p' : ↑b = ↑x * s' + ↑y * t'
⊢ ↑b % ↑a = ↑x * (s' - ↑b / ↑a * s) + ↑y * (t' - ↑b / ↑a * t)
[PROOFSTEP]
rw [Int.emod_def]
[GOAL]
case H1
x y a b : ℕ
h : 0 < a
IH :
∀ {s t s' t' : ℤ},
↑b % ↑a = ↑x * s + ↑y * t →
↑a = ↑x * s' + ↑y * t' →
↑(xgcdAux (b % a) s t a s' t').fst =
↑x * (xgcdAux (b % a) s t a s' t').2.fst + ↑y * (xgcdAux (b % a) s t a s' t').2.snd
s t s' t' : ℤ
p : ↑a = ↑x * s + ↑y * t
p' : ↑b = ↑x * s' + ↑y * t'
⊢ ↑b - ↑a * (↑b / ↑a) = ↑x * (s' - ↑b / ↑a * s) + ↑y * (t' - ↑b / ↑a * t)
[PROOFSTEP]
generalize (b / a : ℤ) = k
[GOAL]
case H1
x y a b : ℕ
h : 0 < a
IH :
∀ {s t s' t' : ℤ},
↑b % ↑a = ↑x * s + ↑y * t →
↑a = ↑x * s' + ↑y * t' →
↑(xgcdAux (b % a) s t a s' t').fst =
↑x * (xgcdAux (b % a) s t a s' t').2.fst + ↑y * (xgcdAux (b % a) s t a s' t').2.snd
s t s' t' : ℤ
p : ↑a = ↑x * s + ↑y * t
p' : ↑b = ↑x * s' + ↑y * t'
k : ℤ
⊢ ↑b - ↑a * k = ↑x * (s' - k * s) + ↑y * (t' - k * t)
[PROOFSTEP]
rw [p, p', mul_sub, sub_add_eq_add_sub, mul_sub, add_mul, mul_comm k t, mul_comm k s, ← mul_assoc, ← mul_assoc,
add_comm (x * s * k), ← add_sub_assoc, sub_sub]
[GOAL]
x y : ℕ
⊢ ↑(gcd x y) = ↑x * gcdA x y + ↑y * gcdB x y
[PROOFSTEP]
have := @xgcdAux_P x y x y 1 0 0 1 (by simp [P]) (by simp [P])
[GOAL]
x y : ℕ
⊢ Nat.P x y (x, 1, 0)
[PROOFSTEP]
simp [P]
[GOAL]
x y : ℕ
⊢ Nat.P x y (y, 0, 1)
[PROOFSTEP]
simp [P]
[GOAL]
x y : ℕ
this : Nat.P x y (xgcdAux x 1 0 y 0 1)
⊢ ↑(gcd x y) = ↑x * gcdA x y + ↑y * gcdB x y
[PROOFSTEP]
rwa [xgcdAux_val, xgcd_val] at this
[GOAL]
k n : ℕ
hk : gcd n k < k
⊢ ∃ m, n * m % k = gcd n k
[PROOFSTEP]
have hk' := Int.ofNat_ne_zero.2 (ne_of_gt (lt_of_le_of_lt (zero_le (gcd n k)) hk))
[GOAL]
k n : ℕ
hk : gcd n k < k
hk' : ↑k ≠ 0
⊢ ∃ m, n * m % k = gcd n k
[PROOFSTEP]
have key := congr_arg (fun (m : ℤ) => (m % k).toNat) (gcd_eq_gcd_ab n k)
[GOAL]
k n : ℕ
hk : gcd n k < k
hk' : ↑k ≠ 0
key : (fun m => Int.toNat (m % ↑k)) ↑(gcd n k) = (fun m => Int.toNat (m % ↑k)) (↑n * gcdA n k + ↑k * gcdB n k)
⊢ ∃ m, n * m % k = gcd n k
[PROOFSTEP]
simp only at key
[GOAL]
k n : ℕ
hk : gcd n k < k
hk' : ↑k ≠ 0
key : Int.toNat (↑(gcd n k) % ↑k) = Int.toNat ((↑n * gcdA n k + ↑k * gcdB n k) % ↑k)
⊢ ∃ m, n * m % k = gcd n k
[PROOFSTEP]
rw [Int.add_mul_emod_self_left, ← Int.coe_nat_mod, Int.toNat_coe_nat, mod_eq_of_lt hk] at key
[GOAL]
k n : ℕ
hk : gcd n k < k
hk' : ↑k ≠ 0
key : gcd n k = Int.toNat (↑n * gcdA n k % ↑k)
⊢ ∃ m, n * m % k = gcd n k
[PROOFSTEP]
refine' ⟨(n.gcdA k % k).toNat, Eq.trans (Int.ofNat.inj _) key.symm⟩
[GOAL]
k n : ℕ
hk : gcd n k < k
hk' : ↑k ≠ 0
key : gcd n k = Int.toNat (↑n * gcdA n k % ↑k)
⊢ Int.ofNat (n * Int.toNat (gcdA n k % ↑k) % k) = Int.ofNat (Int.toNat (↑n * gcdA n k % ↑k))
[PROOFSTEP]
rw [Int.ofNat_eq_coe, Int.coe_nat_mod, Int.ofNat_mul, Int.toNat_of_nonneg (Int.emod_nonneg _ hk'), Int.ofNat_eq_coe,
Int.toNat_of_nonneg (Int.emod_nonneg _ hk'), Int.mul_emod, Int.emod_emod, ← Int.mul_emod]
[GOAL]
m n : ℕ
⊢ ↑(gcd ↑m -[n+1]) = ↑m * gcdA ↑m -[n+1] + -(↑n + 1) * -Nat.gcdB (natAbs ↑m) (Nat.succ n)
[PROOFSTEP]
rw [neg_mul_neg]
[GOAL]
m n : ℕ
⊢ ↑(gcd ↑m -[n+1]) = ↑m * gcdA ↑m -[n+1] + (↑n + 1) * Nat.gcdB (natAbs ↑m) (Nat.succ n)
[PROOFSTEP]
apply Nat.gcd_eq_gcd_ab
[GOAL]
m n : ℕ
⊢ ↑(gcd -[m+1] ↑n) = -(↑m + 1) * -Nat.gcdA (Nat.succ m) (natAbs ↑n) + ↑n * gcdB -[m+1] ↑n
[PROOFSTEP]
rw [neg_mul_neg]
[GOAL]
m n : ℕ
⊢ ↑(gcd -[m+1] ↑n) = (↑m + 1) * Nat.gcdA (Nat.succ m) (natAbs ↑n) + ↑n * gcdB -[m+1] ↑n
[PROOFSTEP]
apply Nat.gcd_eq_gcd_ab
[GOAL]
m n : ℕ
⊢ ↑(gcd -[m+1] -[n+1]) =
-(↑m + 1) * -Nat.gcdA (Nat.succ m) (natAbs -[n+1]) + -(↑n + 1) * -Nat.gcdB (natAbs -[m+1]) (Nat.succ n)
[PROOFSTEP]
rw [neg_mul_neg, neg_mul_neg]
[GOAL]
m n : ℕ
⊢ ↑(gcd -[m+1] -[n+1]) =
(↑m + 1) * Nat.gcdA (Nat.succ m) (natAbs -[n+1]) + (↑n + 1) * Nat.gcdB (natAbs -[m+1]) (Nat.succ n)
[PROOFSTEP]
apply Nat.gcd_eq_gcd_ab
[GOAL]
a b : ℤ
H : b ∣ a
⊢ natAbs (a / b) = natAbs a / natAbs b
[PROOFSTEP]
rcases Nat.eq_zero_or_pos (natAbs b) with (h | h)
[GOAL]
case inl
a b : ℤ
H : b ∣ a
h : natAbs b = 0
⊢ natAbs (a / b) = natAbs a / natAbs b
[PROOFSTEP]
rw [natAbs_eq_zero.1 h]
[GOAL]
case inl
a b : ℤ
H : b ∣ a
h : natAbs b = 0
⊢ natAbs (a / 0) = natAbs a / natAbs 0
[PROOFSTEP]
simp [Int.ediv_zero]
[GOAL]
case inr
a b : ℤ
H : b ∣ a
h : natAbs b > 0
⊢ natAbs (a / b) = natAbs a / natAbs b
[PROOFSTEP]
calc
natAbs (a / b) = natAbs (a / b) * 1 := by rw [mul_one]
_ = natAbs (a / b) * (natAbs b / natAbs b) := by rw [Nat.div_self h]
_ = natAbs (a / b) * natAbs b / natAbs b := by rw [Nat.mul_div_assoc _ dvd_rfl]
_ = natAbs (a / b * b) / natAbs b := by rw [natAbs_mul (a / b) b]
_ = natAbs a / natAbs b := by rw [Int.ediv_mul_cancel H]
[GOAL]
a b : ℤ
H : b ∣ a
h : natAbs b > 0
⊢ natAbs (a / b) = natAbs (a / b) * 1
[PROOFSTEP]
rw [mul_one]
[GOAL]
a b : ℤ
H : b ∣ a
h : natAbs b > 0
⊢ natAbs (a / b) * 1 = natAbs (a / b) * (natAbs b / natAbs b)
[PROOFSTEP]
rw [Nat.div_self h]
[GOAL]
a b : ℤ
H : b ∣ a
h : natAbs b > 0
⊢ natAbs (a / b) * (natAbs b / natAbs b) = natAbs (a / b) * natAbs b / natAbs b
[PROOFSTEP]
rw [Nat.mul_div_assoc _ dvd_rfl]
[GOAL]
a b : ℤ
H : b ∣ a
h : natAbs b > 0
⊢ natAbs (a / b) * natAbs b / natAbs b = natAbs (a / b * b) / natAbs b
[PROOFSTEP]
rw [natAbs_mul (a / b) b]
[GOAL]
a b : ℤ
H : b ∣ a
h : natAbs b > 0
⊢ natAbs (a / b * b) / natAbs b = natAbs a / natAbs b
[PROOFSTEP]
rw [Int.ediv_mul_cancel H]
[GOAL]
i j k : ℤ
k_non_zero : k ≠ 0
H : k * i ∣ k * j
l : ℤ
H1 : k * j = k * i * l
⊢ i ∣ j
[PROOFSTEP]
rw [mul_assoc] at H1
[GOAL]
i j k : ℤ
k_non_zero : k ≠ 0
H : k * i ∣ k * j
l : ℤ
H1 : k * j = k * (i * l)
⊢ i ∣ j
[PROOFSTEP]
exact ⟨_, mul_left_cancel₀ k_non_zero H1⟩
[GOAL]
i j k : ℤ
k_non_zero : k ≠ 0
H : i * k ∣ j * k
⊢ i ∣ j
[PROOFSTEP]
rw [mul_comm i k, mul_comm j k] at H
[GOAL]
i j k : ℤ
k_non_zero : k ≠ 0
H : k * i ∣ k * j
⊢ i ∣ j
[PROOFSTEP]
exact dvd_of_mul_dvd_mul_left k_non_zero H
[GOAL]
i j : ℤ
⊢ gcd i j * lcm i j = natAbs (i * j)
[PROOFSTEP]
rw [Int.gcd, Int.lcm, Nat.gcd_mul_lcm, natAbs_mul]
[GOAL]
i : ℤ
⊢ gcd i i = natAbs i
[PROOFSTEP]
simp [gcd]
[GOAL]
i : ℤ
⊢ gcd 0 i = natAbs i
[PROOFSTEP]
simp [gcd]
[GOAL]
i : ℤ
⊢ gcd i 0 = natAbs i
[PROOFSTEP]
simp [gcd]
[GOAL]
x y : ℤ
⊢ gcd x (-y) = gcd x y
[PROOFSTEP]
rw [Int.gcd, Int.gcd, natAbs_neg]
[GOAL]
x y : ℤ
⊢ gcd (-x) y = gcd x y
[PROOFSTEP]
rw [Int.gcd, Int.gcd, natAbs_neg]
[GOAL]
i j k : ℤ
⊢ gcd (i * j) (i * k) = natAbs i * gcd j k
[PROOFSTEP]
rw [Int.gcd, Int.gcd, natAbs_mul, natAbs_mul]
[GOAL]
i j k : ℤ
⊢ Nat.gcd (natAbs i * natAbs j) (natAbs i * natAbs k) = natAbs i * Nat.gcd (natAbs j) (natAbs k)
[PROOFSTEP]
apply Nat.gcd_mul_left
[GOAL]
i j k : ℤ
⊢ gcd (i * j) (k * j) = gcd i k * natAbs j
[PROOFSTEP]
rw [Int.gcd, Int.gcd, natAbs_mul, natAbs_mul]
[GOAL]
i j k : ℤ
⊢ Nat.gcd (natAbs i * natAbs j) (natAbs k * natAbs j) = Nat.gcd (natAbs i) (natAbs k) * natAbs j
[PROOFSTEP]
apply Nat.gcd_mul_right
[GOAL]
i j : ℤ
⊢ gcd i j = 0 ↔ i = 0 ∧ j = 0
[PROOFSTEP]
rw [gcd, Nat.gcd_eq_zero_iff, natAbs_eq_zero, natAbs_eq_zero]
[GOAL]
i j k : ℤ
H1 : k ∣ i
H2 : k ∣ j
⊢ gcd (i / k) (j / k) = gcd i j / natAbs k
[PROOFSTEP]
rw [gcd, natAbs_ediv i k H1, natAbs_ediv j k H2]
[GOAL]
i j k : ℤ
H1 : k ∣ i
H2 : k ∣ j
⊢ Nat.gcd (natAbs i / natAbs k) (natAbs j / natAbs k) = gcd i j / natAbs k
[PROOFSTEP]
exact Nat.gcd_div (natAbs_dvd_natAbs.mpr H1) (natAbs_dvd_natAbs.mpr H2)
[GOAL]
i j : ℤ
H : 0 < gcd i j
⊢ gcd (i / ↑(gcd i j)) (j / ↑(gcd i j)) = 1
[PROOFSTEP]
rw [gcd_div (gcd_dvd_left i j) (gcd_dvd_right i j), natAbs_ofNat, Nat.div_self H]
[GOAL]
i j : ℤ
H : j ∣ i
⊢ gcd i j = natAbs j
[PROOFSTEP]
rw [gcd_comm, gcd_eq_left H]
[GOAL]
x y : ℤ
hc : gcd x y ≠ 0
⊢ x ≠ 0 ∨ y ≠ 0
[PROOFSTEP]
contrapose! hc
[GOAL]
x y : ℤ
hc : x = 0 ∧ y = 0
⊢ gcd x y = 0
[PROOFSTEP]
rw [hc.left, hc.right, gcd_zero_right, natAbs_zero]
[GOAL]
m n : ℤ
k : ℕ
k0 : 0 < k
⊢ m ^ k ∣ n ^ k ↔ m ∣ n
[PROOFSTEP]
refine' ⟨fun h => _, fun h => pow_dvd_pow_of_dvd h _⟩
[GOAL]
m n : ℤ
k : ℕ
k0 : 0 < k
h : m ^ k ∣ n ^ k
⊢ m ∣ n
[PROOFSTEP]
rwa [← natAbs_dvd_natAbs, ← Nat.pow_dvd_pow_iff k0, ← Int.natAbs_pow, ← Int.natAbs_pow, natAbs_dvd_natAbs]
[GOAL]
a b : ℤ
n : ℕ
⊢ gcd a b ∣ n ↔ ∃ x y, ↑n = a * x + b * y
[PROOFSTEP]
constructor
[GOAL]
case mp
a b : ℤ
n : ℕ
⊢ gcd a b ∣ n → ∃ x y, ↑n = a * x + b * y
[PROOFSTEP]
intro h
[GOAL]
case mp
a b : ℤ
n : ℕ
h : gcd a b ∣ n
⊢ ∃ x y, ↑n = a * x + b * y
[PROOFSTEP]
rw [← Nat.mul_div_cancel' h, Int.ofNat_mul, gcd_eq_gcd_ab, add_mul, mul_assoc, mul_assoc]
[GOAL]
case mp
a b : ℤ
n : ℕ
h : gcd a b ∣ n
⊢ ∃ x y, a * (gcdA a b * ↑(n / gcd a b)) + b * (gcdB a b * ↑(n / gcd a b)) = a * x + b * y
[PROOFSTEP]
exact ⟨_, _, rfl⟩
[GOAL]
case mpr
a b : ℤ
n : ℕ
⊢ (∃ x y, ↑n = a * x + b * y) → gcd a b ∣ n
[PROOFSTEP]
rintro ⟨x, y, h⟩
[GOAL]
case mpr.intro.intro
a b : ℤ
n : ℕ
x y : ℤ
h : ↑n = a * x + b * y
⊢ gcd a b ∣ n
[PROOFSTEP]
rw [← Int.coe_nat_dvd, h]
[GOAL]
case mpr.intro.intro
a b : ℤ
n : ℕ
x y : ℤ
h : ↑n = a * x + b * y
⊢ ↑(gcd a b) ∣ a * x + b * y
[PROOFSTEP]
exact dvd_add (dvd_mul_of_dvd_left (gcd_dvd_left a b) _) (dvd_mul_of_dvd_left (gcd_dvd_right a b) y)
[GOAL]
a b c : ℤ
habc : a ∣ b * c
hab : gcd a c = 1
⊢ a ∣ b
[PROOFSTEP]
have := gcd_eq_gcd_ab a c
[GOAL]
a b c : ℤ
habc : a ∣ b * c
hab : gcd a c = 1
this : ↑(gcd a c) = a * gcdA a c + c * gcdB a c
⊢ a ∣ b
[PROOFSTEP]
simp only [hab, Int.ofNat_zero, Int.ofNat_succ, zero_add] at this
[GOAL]
a b c : ℤ
habc : a ∣ b * c
hab : gcd a c = 1
this : 1 = a * gcdA a c + c * gcdB a c
⊢ a ∣ b
[PROOFSTEP]
have : b * a * gcdA a c + b * c * gcdB a c = b := by simp [mul_assoc, ← mul_add, ← this]
[GOAL]
a b c : ℤ
habc : a ∣ b * c
hab : gcd a c = 1
this : 1 = a * gcdA a c + c * gcdB a c
⊢ b * a * gcdA a c + b * c * gcdB a c = b
[PROOFSTEP]
simp [mul_assoc, ← mul_add, ← this]
[GOAL]
a b c : ℤ
habc : a ∣ b * c
hab : gcd a c = 1
this✝ : 1 = a * gcdA a c + c * gcdB a c
this : b * a * gcdA a c + b * c * gcdB a c = b
⊢ a ∣ b
[PROOFSTEP]
rw [← this]
[GOAL]
a b c : ℤ
habc : a ∣ b * c
hab : gcd a c = 1
this✝ : 1 = a * gcdA a c + c * gcdB a c
this : b * a * gcdA a c + b * c * gcdB a c = b
⊢ a ∣ b * a * gcdA a c + b * c * gcdB a c
[PROOFSTEP]
exact dvd_add (dvd_mul_of_dvd_left (dvd_mul_left a b) _) (dvd_mul_of_dvd_left habc _)
[GOAL]
a b c : ℤ
habc : a ∣ b * c
hab : gcd a b = 1
⊢ a ∣ c
[PROOFSTEP]
rw [mul_comm] at habc
[GOAL]
a b c : ℤ
habc : a ∣ c * b
hab : gcd a b = 1
⊢ a ∣ c
[PROOFSTEP]
exact dvd_of_dvd_mul_left_of_gcd_one habc hab
[GOAL]
a b : ℤ
ha : a ≠ 0
⊢ IsLeast {n | 0 < n ∧ ∃ x y, ↑n = a * x + b * y} (gcd a b)
[PROOFSTEP]
simp_rw [← gcd_dvd_iff]
[GOAL]
a b : ℤ
ha : a ≠ 0
⊢ IsLeast {n | 0 < n ∧ gcd a b ∣ n} (gcd a b)
[PROOFSTEP]
constructor
[GOAL]
case left
a b : ℤ
ha : a ≠ 0
⊢ gcd a b ∈ {n | 0 < n ∧ gcd a b ∣ n}
[PROOFSTEP]
simpa [and_true_iff, dvd_refl, Set.mem_setOf_eq] using gcd_pos_of_ne_zero_left b ha
[GOAL]
case right
a b : ℤ
ha : a ≠ 0
⊢ gcd a b ∈ lowerBounds {n | 0 < n ∧ gcd a b ∣ n}
[PROOFSTEP]
simp only [lowerBounds, and_imp, Set.mem_setOf_eq]
[GOAL]
case right
a b : ℤ
ha : a ≠ 0
⊢ ∀ ⦃a_1 : ℕ⦄, 0 < a_1 → gcd a b ∣ a_1 → gcd a b ≤ a_1
[PROOFSTEP]
exact fun n hn_pos hn => Nat.le_of_dvd hn_pos hn
[GOAL]
i j : ℤ
⊢ lcm i j = lcm j i
[PROOFSTEP]
rw [Int.lcm, Int.lcm]
[GOAL]
i j : ℤ
⊢ Nat.lcm (natAbs i) (natAbs j) = Nat.lcm (natAbs j) (natAbs i)
[PROOFSTEP]
exact Nat.lcm_comm _ _
[GOAL]
i j k : ℤ
⊢ lcm (↑(lcm i j)) k = lcm i ↑(lcm j k)
[PROOFSTEP]
rw [Int.lcm, Int.lcm, Int.lcm, Int.lcm, natAbs_ofNat, natAbs_ofNat]
[GOAL]
i j k : ℤ
⊢ Nat.lcm (Nat.lcm (natAbs i) (natAbs j)) (natAbs k) = Nat.lcm (natAbs i) (Nat.lcm (natAbs j) (natAbs k))
[PROOFSTEP]
apply Nat.lcm_assoc
[GOAL]
i : ℤ
⊢ lcm 0 i = 0
[PROOFSTEP]
rw [Int.lcm]
[GOAL]
i : ℤ
⊢ Nat.lcm (natAbs 0) (natAbs i) = 0
[PROOFSTEP]
apply Nat.lcm_zero_left
[GOAL]
i : ℤ
⊢ lcm i 0 = 0
[PROOFSTEP]
rw [Int.lcm]
[GOAL]
i : ℤ
⊢ Nat.lcm (natAbs i) (natAbs 0) = 0
[PROOFSTEP]
apply Nat.lcm_zero_right
[GOAL]
i : ℤ
⊢ lcm 1 i = natAbs i
[PROOFSTEP]
rw [Int.lcm]
[GOAL]
i : ℤ
⊢ Nat.lcm (natAbs 1) (natAbs i) = natAbs i
[PROOFSTEP]
apply Nat.lcm_one_left
[GOAL]
i : ℤ
⊢ lcm i 1 = natAbs i
[PROOFSTEP]
rw [Int.lcm]
[GOAL]
i : ℤ
⊢ Nat.lcm (natAbs i) (natAbs 1) = natAbs i
[PROOFSTEP]
apply Nat.lcm_one_right
[GOAL]
i : ℤ
⊢ lcm i i = natAbs i
[PROOFSTEP]
rw [Int.lcm]
[GOAL]
i : ℤ
⊢ Nat.lcm (natAbs i) (natAbs i) = natAbs i
[PROOFSTEP]
apply Nat.lcm_self
[GOAL]
i j : ℤ
⊢ i ∣ ↑(lcm i j)
[PROOFSTEP]
rw [Int.lcm]
[GOAL]
i j : ℤ
⊢ i ∣ ↑(Nat.lcm (natAbs i) (natAbs j))
[PROOFSTEP]
apply coe_nat_dvd_right.mpr
[GOAL]
i j : ℤ
⊢ natAbs i ∣ Nat.lcm (natAbs i) (natAbs j)
[PROOFSTEP]
apply Nat.dvd_lcm_left
[GOAL]
i j : ℤ
⊢ j ∣ ↑(lcm i j)
[PROOFSTEP]
rw [Int.lcm]
[GOAL]
i j : ℤ
⊢ j ∣ ↑(Nat.lcm (natAbs i) (natAbs j))
[PROOFSTEP]
apply coe_nat_dvd_right.mpr
[GOAL]
i j : ℤ
⊢ natAbs j ∣ Nat.lcm (natAbs i) (natAbs j)
[PROOFSTEP]
apply Nat.dvd_lcm_right
[GOAL]
i j k : ℤ
⊢ i ∣ k → j ∣ k → ↑(lcm i j) ∣ k
[PROOFSTEP]
rw [Int.lcm]
[GOAL]
i j k : ℤ
⊢ i ∣ k → j ∣ k → ↑(Nat.lcm (natAbs i) (natAbs j)) ∣ k
[PROOFSTEP]
intro hi hj
[GOAL]
i j k : ℤ
hi : i ∣ k
hj : j ∣ k
⊢ ↑(Nat.lcm (natAbs i) (natAbs j)) ∣ k
[PROOFSTEP]
exact coe_nat_dvd_left.mpr (Nat.lcm_dvd (natAbs_dvd_natAbs.mpr hi) (natAbs_dvd_natAbs.mpr hj))
[GOAL]
M : Type u_1
inst✝ : Monoid M
x : M
m n : ℕ
hm : x ^ m = 1
hn : x ^ n = 1
⊢ x ^ Nat.gcd m n = 1
[PROOFSTEP]
rcases m with (rfl | m)
[GOAL]
case zero
M : Type u_1
inst✝ : Monoid M
x : M
n : ℕ
hn : x ^ n = 1
hm : x ^ Nat.zero = 1
⊢ x ^ Nat.gcd Nat.zero n = 1
[PROOFSTEP]
simp [hn]
[GOAL]
case succ
M : Type u_1
inst✝ : Monoid M
x : M
n : ℕ
hn : x ^ n = 1
m : ℕ
hm : x ^ Nat.succ m = 1
⊢ x ^ Nat.gcd (Nat.succ m) n = 1
[PROOFSTEP]
obtain ⟨y, rfl⟩ := isUnit_ofPowEqOne hm m.succ_ne_zero
[GOAL]
case succ.intro
M : Type u_1
inst✝ : Monoid M
n m : ℕ
y : Mˣ
hn : ↑y ^ n = 1
hm : ↑y ^ Nat.succ m = 1
⊢ ↑y ^ Nat.gcd (Nat.succ m) n = 1
[PROOFSTEP]
simp only [← Units.val_pow_eq_pow_val] at *
[GOAL]
case succ.intro
M : Type u_1
inst✝ : Monoid M
n m : ℕ
y : Mˣ
hn : ↑(y ^ n) = 1
hm : ↑(y ^ Nat.succ m) = 1
⊢ ↑(y ^ Nat.gcd (Nat.succ m) n) = 1
[PROOFSTEP]
rw [← Units.val_one, ← zpow_coe_nat, ← Units.ext_iff] at *
[GOAL]
case succ.intro
M : Type u_1
inst✝ : Monoid M
n m : ℕ
y : Mˣ
hn : y ^ ↑n = 1
hm : y ^ ↑(Nat.succ m) = 1
⊢ y ^ ↑(Nat.gcd (Nat.succ m) n) = 1
[PROOFSTEP]
simp only [Nat.gcd_eq_gcd_ab, zpow_add, zpow_mul, hm, hn, one_zpow, one_mul]
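Editorial aside: the closing `simp only` is the Bézout trick inside a monoid. Since x is a unit (via `isUnit_ofPowEqOne`) it may be raised to integer powers, and with gcd(m,n) = m·u + n·v over ℤ (from `Nat.gcd_eq_gcd_ab`) one gets
$$ x^{\gcd(m,n)} = x^{m u + n v} = (x^{m})^{u} \cdot (x^{n})^{v} = 1^{u} \cdot 1^{v} = 1 . $$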
|
/-
Copyright (c) 2021 Ashvni Narayanan. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Ashvni Narayanan, David Loeffler
-/
import data.polynomial.algebra_map
import data.polynomial.derivative
import data.nat.choose.cast
import number_theory.bernoulli
/-!
# Bernoulli polynomials
The Bernoulli polynomials (defined here : https://en.wikipedia.org/wiki/Bernoulli_polynomials)
are an important tool obtained from Bernoulli numbers.
## Mathematical overview
The $n$-th Bernoulli polynomial is defined as
$$ B_n(X) = ∑_{k = 0}^n {n \choose k} (-1)^k * B_k * X^{n - k} $$
where $B_k$ is the $k$-th Bernoulli number. The Bernoulli polynomials are generating functions,
$$ t * e^{tX} / (e^t - 1) = ∑_{n = 0}^{\infty} B_n(X) * \frac{t^n}{n!} $$
## Implementation detail
Bernoulli polynomials are defined using `bernoulli`, the Bernoulli numbers.
## Main theorems
- `sum_bernoulli`: The sum of the Bernoulli polynomials `bernoulli k`, weighted by the binomial
coefficients `(n + 1).choose k` for `k` up to `n`, is `(n + 1) * X^n`.
- `bernoulli_generating_function`: The Bernoulli polynomials act as generating functions
for the exponential.
## TODO
- `bernoulli_eval_one_neg` : $$ B_n(1 - x) = (-1)^n B_n(x) $$
-/
noncomputable theory
open_locale big_operators
open_locale nat polynomial
open nat finset
namespace polynomial
/-- The Bernoulli polynomials are defined in terms of the negative Bernoulli numbers. -/
def bernoulli (n : ℕ) : ℚ[X] :=
∑ i in range (n + 1), polynomial.monomial (n - i) ((_root_.bernoulli i) * (choose n i))
lemma bernoulli_def (n : ℕ) : bernoulli n =
∑ i in range (n + 1), polynomial.monomial i ((_root_.bernoulli (n - i)) * (choose n i)) :=
begin
rw [←sum_range_reflect, add_succ_sub_one, add_zero, bernoulli],
apply sum_congr rfl,
rintros x hx,
rw mem_range_succ_iff at hx, rw [choose_symm hx, tsub_tsub_cancel_of_le hx],
end
/-
### examples
-/
section examples
@[simp] lemma bernoulli_zero : bernoulli 0 = 1 :=
by simp [bernoulli]
@[simp] lemma bernoulli_eval_zero (n : ℕ) : (bernoulli n).eval 0 = _root_.bernoulli n :=
begin
rw [bernoulli, eval_finset_sum, sum_range_succ],
have : ∑ (x : ℕ) in range n, _root_.bernoulli x * (n.choose x) * 0 ^ (n - x) = 0,
{ apply sum_eq_zero (λ x hx, _),
have h : 0 < n - x := tsub_pos_of_lt (mem_range.1 hx),
simp [h] },
simp [this],
end
@[simp] lemma bernoulli_eval_one (n : ℕ) : (bernoulli n).eval 1 = _root_.bernoulli' n :=
begin
simp only [bernoulli, eval_finset_sum],
simp only [←succ_eq_add_one, sum_range_succ, mul_one, cast_one, choose_self,
(_root_.bernoulli _).mul_comm, sum_bernoulli, one_pow, mul_one, eval_C, eval_monomial],
by_cases h : n = 1,
{ norm_num [h], },
{ simp [h],
exact bernoulli_eq_bernoulli'_of_ne_one h, }
end
end examples
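/- Editorial illustration (not part of the original file): with this convention the first few
Bernoulli polynomials are B₀(X) = 1, B₁(X) = X - 1/2 and B₂(X) = X² - X + 1/6, so that
`bernoulli_eval_zero` recovers the Bernoulli numbers 1, -1/2, 1/6 and `bernoulli_eval_one`
recovers the values 1, 1/2, 1/6 of `bernoulli'`. -/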
lemma derivative_bernoulli_add_one (k : ℕ) :
(bernoulli (k + 1)).derivative = (k + 1) * bernoulli k :=
begin
simp_rw [bernoulli, derivative_sum, derivative_monomial, nat.sub_sub, nat.add_sub_add_right],
-- LHS sum has an extra term, but the coefficient is zero:
rw [range_add_one, sum_insert not_mem_range_self, tsub_self, cast_zero, mul_zero, map_zero,
zero_add, mul_sum],
-- the rest of the sum is termwise equal:
refine sum_congr (by refl) (λ m hm, _),
conv_rhs { rw [←nat.cast_one, ←nat.cast_add, ←C_eq_nat_cast, C_mul_monomial, mul_comm], },
rw [mul_assoc, mul_assoc, ←nat.cast_mul, ←nat.cast_mul],
congr' 3,
rw [(choose_mul_succ_eq k m).symm, mul_comm],
end
lemma derivative_bernoulli (k : ℕ) : (bernoulli k).derivative = k * bernoulli (k - 1) :=
begin
cases k,
{ rw [nat.cast_zero, zero_mul, bernoulli_zero, derivative_one], },
{ exact_mod_cast derivative_bernoulli_add_one k, }
end
@[simp] theorem sum_bernoulli (n : ℕ) :
∑ k in range (n + 1), ((n + 1).choose k : ℚ) • bernoulli k = monomial n (n + 1 : ℚ) :=
begin
simp_rw [bernoulli_def, finset.smul_sum, finset.range_eq_Ico, ←finset.sum_Ico_Ico_comm,
finset.sum_Ico_eq_sum_range],
simp only [cast_succ, add_tsub_cancel_left, tsub_zero, zero_add, linear_map.map_add],
simp_rw [smul_monomial, mul_comm (_root_.bernoulli _) _, smul_eq_mul, ←mul_assoc],
conv_lhs { apply_congr, skip, conv
{ apply_congr, skip,
rw [← nat.cast_mul, choose_mul ((le_tsub_iff_left $ mem_range_le H).1
$ mem_range_le H_1) (le.intro rfl), nat.cast_mul, add_comm x x_1, add_tsub_cancel_right,
mul_assoc, mul_comm, ←smul_eq_mul, ←smul_monomial] },
rw [←sum_smul], },
rw [sum_range_succ_comm],
simp only [add_right_eq_self, cast_succ, mul_one, cast_one, cast_add, add_tsub_cancel_left,
choose_succ_self_right, one_smul, _root_.bernoulli_zero, sum_singleton, zero_add,
linear_map.map_add, range_one],
apply sum_eq_zero (λ x hx, _),
have f : ∀ x ∈ range n, ¬ n + 1 - x = 1,
{ rintros x H, rw [mem_range] at H,
rw [eq_comm],
exact ne_of_lt (nat.lt_of_lt_of_le one_lt_two (le_tsub_of_add_le_left (succ_le_succ H))) },
rw [sum_bernoulli],
have g : (ite (n + 1 - x = 1) (1 : ℚ) 0) = 0,
{ simp only [ite_eq_right_iff, one_ne_zero],
intro h₁,
exact (f x hx) h₁, },
rw [g, zero_smul],
end
/-- Another version of `polynomial.sum_bernoulli`. -/
lemma bernoulli_eq_sub_sum (n : ℕ) : (n.succ : ℚ) • bernoulli n = monomial n (n.succ : ℚ) -
∑ k in finset.range n, ((n + 1).choose k : ℚ) • bernoulli k :=
by rw [nat.cast_succ, ← sum_bernoulli n, sum_range_succ, add_sub_cancel',
choose_succ_self_right, nat.cast_succ]
/-- Another version of `bernoulli.sum_range_pow`. -/
lemma sum_range_pow_eq_bernoulli_sub (n p : ℕ) :
(p + 1 : ℚ) * ∑ k in range n, (k : ℚ) ^ p = (bernoulli p.succ).eval n -
(_root_.bernoulli p.succ) :=
begin
rw [sum_range_pow, bernoulli_def, eval_finset_sum, ←sum_div, mul_div_cancel' _ _],
{ simp_rw [eval_monomial],
symmetry,
rw [←sum_flip _, sum_range_succ],
simp only [tsub_self, tsub_zero, choose_zero_right, cast_one, mul_one, pow_zero,
add_tsub_cancel_right],
apply sum_congr rfl (λ x hx, _),
apply congr_arg2 _ (congr_arg2 _ _ _) rfl,
{ rw nat.sub_sub_self (mem_range_le hx), },
{ rw ←choose_symm (mem_range_le hx), }, },
{ norm_cast, apply succ_ne_zero _, },
end
/-- Rearrangement of `polynomial.sum_range_pow_eq_bernoulli_sub`. -/
lemma bernoulli_succ_eval (n p : ℕ) : (bernoulli p.succ).eval n =
_root_.bernoulli (p.succ) + (p + 1 : ℚ) * ∑ k in range n, (k : ℚ) ^ p :=
by { apply eq_add_of_sub_eq', rw sum_range_pow_eq_bernoulli_sub, }
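/- Editorial illustration: specialising `bernoulli_succ_eval` to p = 1 gives
B₂(n) = 1/6 + 2 * (0 + 1 + ⋯ + (n - 1)) = n² - n + 1/6, i.e. the familiar closed form
2 * ∑_{k < n} k = n² - n, shifted by the constant Bernoulli number B₂ = 1/6. -/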
lemma bernoulli_eval_one_add (n : ℕ) (x : ℚ) :
(bernoulli n).eval (1 + x) = (bernoulli n).eval x + n * x^(n - 1) :=
begin
apply nat.strong_induction_on n (λ d hd, _),
have nz : ((d.succ : ℕ): ℚ) ≠ 0,
{ norm_cast, exact d.succ_ne_zero, },
apply (mul_right_inj' nz).1,
rw [← smul_eq_mul, ←eval_smul, bernoulli_eq_sub_sum, mul_add, ←smul_eq_mul,
←eval_smul, bernoulli_eq_sub_sum, eval_sub, eval_finset_sum],
conv_lhs { congr, skip, apply_congr, skip, rw [eval_smul, hd x_1 (mem_range.1 H)], },
rw [eval_sub, eval_finset_sum],
simp_rw [eval_smul, smul_add],
rw [sum_add_distrib, sub_add, sub_eq_sub_iff_sub_eq_sub, _root_.add_sub_sub_cancel],
conv_rhs { congr, skip, congr, rw [succ_eq_add_one, ←choose_succ_self_right d], },
rw [nat.cast_succ, ← smul_eq_mul, ←sum_range_succ _ d, eval_monomial_one_add_sub],
simp_rw [smul_eq_mul],
end
open power_series
variables {A : Type*} [comm_ring A] [algebra ℚ A]
-- TODO: define exponential generating functions, and use them here
-- This name should probably be updated afterwards
/-- The theorem that `∑ Bₙ(t)X^n/n!)(e^X-1)=Xe^{tX}` -/
theorem bernoulli_generating_function (t : A) :
mk (λ n, aeval t ((1 / n! : ℚ) • bernoulli n)) * (exp A - 1) =
power_series.X * rescale t (exp A) :=
begin
-- check equality of power series by checking coefficients of X^n
ext n,
-- n = 0 case solved by `simp`
cases n, { simp },
-- n ≥ 1: the coefficient is a sum of n + 2 terms, so use `sum_range_succ` to write it as the
-- last term plus the sum of the first n + 1 terms
rw [coeff_succ_X_mul, coeff_rescale, coeff_exp, power_series.coeff_mul,
nat.sum_antidiagonal_eq_sum_range_succ_mk, sum_range_succ],
-- last term is zero so kill with `add_zero`
simp only [ring_hom.map_sub, tsub_self, constant_coeff_one, constant_coeff_exp,
coeff_zero_eq_constant_coeff, mul_zero, sub_self, add_zero],
-- Let's multiply both sides by (n+1)! (OK because it's a unit)
set u : units ℚ := ⟨(n+1)!, (n+1)!⁻¹,
mul_inv_cancel (by exact_mod_cast factorial_ne_zero (n+1)),
inv_mul_cancel (by exact_mod_cast factorial_ne_zero (n+1))⟩ with hu,
rw ←units.mul_right_inj (units.map (algebra_map ℚ A).to_monoid_hom u),
-- now tidy up unit mess and generally do trivial rearrangements
-- to make RHS (n+1)*t^n
rw [units.coe_map, mul_left_comm, ring_hom.to_monoid_hom_eq_coe,
ring_hom.coe_monoid_hom, ←ring_hom.map_mul, hu, units.coe_mk],
change _ = t^n * algebra_map ℚ A (((n+1)*n! : ℕ)*(1/n!)),
rw [cast_mul, mul_assoc, mul_one_div_cancel
(show (n! : ℚ) ≠ 0, from cast_ne_zero.2 (factorial_ne_zero n)), mul_one, mul_comm (t^n),
← aeval_monomial, cast_add, cast_one],
-- But this is the RHS of `sum_bernoulli_poly`
rw [← sum_bernoulli, finset.mul_sum, alg_hom.map_sum],
-- and now we have to prove a sum is a sum, but all the terms are equal.
apply finset.sum_congr rfl,
-- The rest is just trivialities, hampered by the fact that we're coercing
-- factorials and binomial coefficients between ℕ and ℚ and A.
intros i hi,
-- deal with coefficients of e^X-1
simp only [nat.cast_choose ℚ (mem_range_le hi), coeff_mk,
if_neg (mem_range_sub_ne_zero hi), one_div, alg_hom.map_smul, power_series.coeff_one,
units.coe_mk, coeff_exp, sub_zero, linear_map.map_sub, algebra.smul_mul_assoc, algebra.smul_def,
mul_right_comm _ ((aeval t) _), ←mul_assoc, ← ring_hom.map_mul, succ_eq_add_one],
-- finally cancel the Bernoulli polynomial and the algebra_map
congr',
apply congr_arg,
rw [mul_assoc, div_eq_mul_inv, ← mul_inv],
end
end polynomial
|
import super
set_option trace.super true
set_option profiler true
lemma foo (p : ℕ → Prop) (h1 : ∀ x, ¬ p x) : ∃ x, ¬ p (x + 1) :=
by super *
lemma bar (p : ℕ → Prop) : p 0 → (∀ x, p x → p (x + 1)) → p 10 :=
by super
lemma exst {α} (h : ∃ x : α, x = x) : true :=
by super *
lemma and_false' (h : ∃ w : ℕ, ∀ x : ℕ, w = x ∧ false) : false :=
by super *
lemma baz (a b c : ℕ) : a + (b + c) = (a + b) + c :=
by super [add_assoc, add_zero, add_comm]
example (y : ℕ) : 0 + y = y + 0 :=
by super [add_zero, zero_add]
#print foo
#print baz
|
(* Title: HOL/Conditionally_Complete_Lattices.thy
Author: Amine Chaieb and L C Paulson, University of Cambridge
Author: Johannes Hölzl, TU München
Author: Luke S. Serafin, Carnegie Mellon University
*)
section \<open>Conditionally-complete Lattices\<close>
theory Conditionally_Complete_Lattices
imports Finite_Set Lattices_Big Set_Interval
begin
locale preordering_bdd = preordering
begin
definition bdd :: \<open>'a set \<Rightarrow> bool\<close>
where unfold: \<open>bdd A \<longleftrightarrow> (\<exists>M. \<forall>x \<in> A. x \<^bold>\<le> M)\<close>
lemma empty [simp, intro]:
\<open>bdd {}\<close>
by (simp add: unfold)
lemma I [intro]:
\<open>bdd A\<close> if \<open>\<And>x. x \<in> A \<Longrightarrow> x \<^bold>\<le> M\<close>
using that by (auto simp add: unfold)
lemma E:
assumes \<open>bdd A\<close>
obtains M where \<open>\<And>x. x \<in> A \<Longrightarrow> x \<^bold>\<le> M\<close>
using assms that by (auto simp add: unfold)
lemma I2:
\<open>bdd (f ` A)\<close> if \<open>\<And>x. x \<in> A \<Longrightarrow> f x \<^bold>\<le> M\<close>
using that by (auto simp add: unfold)
lemma mono:
\<open>bdd A\<close> if \<open>bdd B\<close> \<open>A \<subseteq> B\<close>
using that by (auto simp add: unfold)
lemma Int1 [simp]:
\<open>bdd (A \<inter> B)\<close> if \<open>bdd A\<close>
using mono that by auto
lemma Int2 [simp]:
\<open>bdd (A \<inter> B)\<close> if \<open>bdd B\<close>
using mono that by auto
end
subsection \<open>Preorders\<close>
context preorder
begin
sublocale bdd_above: preordering_bdd \<open>(\<le>)\<close> \<open>(<)\<close>
defines bdd_above_primitive_def: bdd_above = bdd_above.bdd ..
sublocale bdd_below: preordering_bdd \<open>(\<ge>)\<close> \<open>(>)\<close>
defines bdd_below_primitive_def: bdd_below = bdd_below.bdd ..
lemma bdd_above_def: \<open>bdd_above A \<longleftrightarrow> (\<exists>M. \<forall>x \<in> A. x \<le> M)\<close>
by (fact bdd_above.unfold)
lemma bdd_below_def: \<open>bdd_below A \<longleftrightarrow> (\<exists>M. \<forall>x \<in> A. M \<le> x)\<close>
by (fact bdd_below.unfold)
lemma bdd_aboveI: "(\<And>x. x \<in> A \<Longrightarrow> x \<le> M) \<Longrightarrow> bdd_above A"
by (fact bdd_above.I)
lemma bdd_belowI: "(\<And>x. x \<in> A \<Longrightarrow> m \<le> x) \<Longrightarrow> bdd_below A"
by (fact bdd_below.I)
lemma bdd_aboveI2: "(\<And>x. x \<in> A \<Longrightarrow> f x \<le> M) \<Longrightarrow> bdd_above (f`A)"
by (fact bdd_above.I2)
lemma bdd_belowI2: "(\<And>x. x \<in> A \<Longrightarrow> m \<le> f x) \<Longrightarrow> bdd_below (f`A)"
by (fact bdd_below.I2)
lemma bdd_above_empty: "bdd_above {}"
by (fact bdd_above.empty)
lemma bdd_below_empty: "bdd_below {}"
by (fact bdd_below.empty)
lemma bdd_above_mono: "bdd_above B \<Longrightarrow> A \<subseteq> B \<Longrightarrow> bdd_above A"
by (fact bdd_above.mono)
lemma bdd_below_mono: "bdd_below B \<Longrightarrow> A \<subseteq> B \<Longrightarrow> bdd_below A"
by (fact bdd_below.mono)
lemma bdd_above_Int1: "bdd_above A \<Longrightarrow> bdd_above (A \<inter> B)"
by (fact bdd_above.Int1)
lemma bdd_above_Int2: "bdd_above B \<Longrightarrow> bdd_above (A \<inter> B)"
by (fact bdd_above.Int2)
lemma bdd_below_Int1: "bdd_below A \<Longrightarrow> bdd_below (A \<inter> B)"
by (fact bdd_below.Int1)
lemma bdd_below_Int2: "bdd_below B \<Longrightarrow> bdd_below (A \<inter> B)"
by (fact bdd_below.Int2)
lemma bdd_above_Ioo [simp, intro]: "bdd_above {a <..< b}"
by (auto simp add: bdd_above_def intro!: exI[of _ b] less_imp_le)
lemma bdd_above_Ico [simp, intro]: "bdd_above {a ..< b}"
by (auto simp add: bdd_above_def intro!: exI[of _ b] less_imp_le)
lemma bdd_above_Iio [simp, intro]: "bdd_above {..< b}"
by (auto simp add: bdd_above_def intro: exI[of _ b] less_imp_le)
lemma bdd_above_Ioc [simp, intro]: "bdd_above {a <.. b}"
by (auto simp add: bdd_above_def intro: exI[of _ b] less_imp_le)
lemma bdd_above_Icc [simp, intro]: "bdd_above {a .. b}"
by (auto simp add: bdd_above_def intro: exI[of _ b] less_imp_le)
lemma bdd_above_Iic [simp, intro]: "bdd_above {.. b}"
by (auto simp add: bdd_above_def intro: exI[of _ b] less_imp_le)
lemma bdd_below_Ioo [simp, intro]: "bdd_below {a <..< b}"
by (auto simp add: bdd_below_def intro!: exI[of _ a] less_imp_le)
lemma bdd_below_Ioc [simp, intro]: "bdd_below {a <.. b}"
by (auto simp add: bdd_below_def intro!: exI[of _ a] less_imp_le)
lemma bdd_below_Ioi [simp, intro]: "bdd_below {a <..}"
by (auto simp add: bdd_below_def intro: exI[of _ a] less_imp_le)
lemma bdd_below_Ico [simp, intro]: "bdd_below {a ..< b}"
by (auto simp add: bdd_below_def intro: exI[of _ a] less_imp_le)
lemma bdd_below_Icc [simp, intro]: "bdd_below {a .. b}"
by (auto simp add: bdd_below_def intro: exI[of _ a] less_imp_le)
lemma bdd_below_Ici [simp, intro]: "bdd_below {a ..}"
by (auto simp add: bdd_below_def intro: exI[of _ a] less_imp_le)
end
context order_top
begin
lemma bdd_above_top [simp, intro!]: "bdd_above A"
by (rule bdd_aboveI [of _ top]) simp
end
context order_bot
begin
lemma bdd_below_bot [simp, intro!]: "bdd_below A"
by (rule bdd_belowI [of _ bot]) simp
end
lemma bdd_above_image_mono: "mono f \<Longrightarrow> bdd_above A \<Longrightarrow> bdd_above (f`A)"
by (auto simp: bdd_above_def mono_def)
lemma bdd_below_image_mono: "mono f \<Longrightarrow> bdd_below A \<Longrightarrow> bdd_below (f`A)"
by (auto simp: bdd_below_def mono_def)
lemma bdd_above_image_antimono: "antimono f \<Longrightarrow> bdd_below A \<Longrightarrow> bdd_above (f`A)"
by (auto simp: bdd_above_def bdd_below_def antimono_def)
lemma bdd_below_image_antimono: "antimono f \<Longrightarrow> bdd_above A \<Longrightarrow> bdd_below (f`A)"
by (auto simp: bdd_above_def bdd_below_def antimono_def)
lemma
fixes X :: "'a::ordered_ab_group_add set"
shows bdd_above_uminus[simp]: "bdd_above (uminus ` X) \<longleftrightarrow> bdd_below X"
and bdd_below_uminus[simp]: "bdd_below (uminus ` X) \<longleftrightarrow> bdd_above X"
using bdd_above_image_antimono[of uminus X] bdd_below_image_antimono[of uminus "uminus`X"]
using bdd_below_image_antimono[of uminus X] bdd_above_image_antimono[of uminus "uminus`X"]
by (auto simp: antimono_def image_image)
subsection \<open>Lattices\<close>
context lattice
begin
lemma bdd_above_insert [simp]: "bdd_above (insert a A) = bdd_above A"
by (auto simp: bdd_above_def intro: le_supI2 sup_ge1)
lemma bdd_below_insert [simp]: "bdd_below (insert a A) = bdd_below A"
by (auto simp: bdd_below_def intro: le_infI2 inf_le1)
lemma bdd_finite [simp]:
assumes "finite A" shows bdd_above_finite: "bdd_above A" and bdd_below_finite: "bdd_below A"
using assms by (induct rule: finite_induct, auto)
lemma bdd_above_Un [simp]: "bdd_above (A \<union> B) = (bdd_above A \<and> bdd_above B)"
proof
assume "bdd_above (A \<union> B)"
thus "bdd_above A \<and> bdd_above B" unfolding bdd_above_def by auto
next
assume "bdd_above A \<and> bdd_above B"
then obtain a b where "\<forall>x\<in>A. x \<le> a" "\<forall>x\<in>B. x \<le> b" unfolding bdd_above_def by auto
hence "\<forall>x \<in> A \<union> B. x \<le> sup a b" by (auto intro: Un_iff le_supI1 le_supI2)
thus "bdd_above (A \<union> B)" unfolding bdd_above_def ..
qed
lemma bdd_below_Un [simp]: "bdd_below (A \<union> B) = (bdd_below A \<and> bdd_below B)"
proof
assume "bdd_below (A \<union> B)"
thus "bdd_below A \<and> bdd_below B" unfolding bdd_below_def by auto
next
assume "bdd_below A \<and> bdd_below B"
then obtain a b where "\<forall>x\<in>A. a \<le> x" "\<forall>x\<in>B. b \<le> x" unfolding bdd_below_def by auto
hence "\<forall>x \<in> A \<union> B. inf a b \<le> x" by (auto intro: Un_iff le_infI1 le_infI2)
thus "bdd_below (A \<union> B)" unfolding bdd_below_def ..
qed
lemma bdd_above_image_sup[simp]:
"bdd_above ((\<lambda>x. sup (f x) (g x)) ` A) \<longleftrightarrow> bdd_above (f`A) \<and> bdd_above (g`A)"
by (auto simp: bdd_above_def intro: le_supI1 le_supI2)
lemma bdd_below_image_inf[simp]:
"bdd_below ((\<lambda>x. inf (f x) (g x)) ` A) \<longleftrightarrow> bdd_below (f`A) \<and> bdd_below (g`A)"
by (auto simp: bdd_below_def intro: le_infI1 le_infI2)
lemma bdd_below_UN[simp]: "finite I \<Longrightarrow> bdd_below (\<Union>i\<in>I. A i) = (\<forall>i \<in> I. bdd_below (A i))"
by (induction I rule: finite.induct) auto
lemma bdd_above_UN[simp]: "finite I \<Longrightarrow> bdd_above (\<Union>i\<in>I. A i) = (\<forall>i \<in> I. bdd_above (A i))"
by (induction I rule: finite.induct) auto
end
text \<open>
  To avoid name clashes with the \<^class>\<open>complete_lattice\<close>-class we prefix \<^const>\<open>Sup\<close> and
\<^const>\<open>Inf\<close> in theorem names with c.
\<close>
subsection \<open>Conditionally complete lattices\<close>
class conditionally_complete_lattice = lattice + Sup + Inf +
assumes cInf_lower: "x \<in> X \<Longrightarrow> bdd_below X \<Longrightarrow> Inf X \<le> x"
and cInf_greatest: "X \<noteq> {} \<Longrightarrow> (\<And>x. x \<in> X \<Longrightarrow> z \<le> x) \<Longrightarrow> z \<le> Inf X"
assumes cSup_upper: "x \<in> X \<Longrightarrow> bdd_above X \<Longrightarrow> x \<le> Sup X"
and cSup_least: "X \<noteq> {} \<Longrightarrow> (\<And>x. x \<in> X \<Longrightarrow> x \<le> z) \<Longrightarrow> Sup X \<le> z"
begin
lemma cSup_upper2: "x \<in> X \<Longrightarrow> y \<le> x \<Longrightarrow> bdd_above X \<Longrightarrow> y \<le> Sup X"
by (metis cSup_upper order_trans)
lemma cInf_lower2: "x \<in> X \<Longrightarrow> x \<le> y \<Longrightarrow> bdd_below X \<Longrightarrow> Inf X \<le> y"
by (metis cInf_lower order_trans)
lemma cSup_mono: "B \<noteq> {} \<Longrightarrow> bdd_above A \<Longrightarrow> (\<And>b. b \<in> B \<Longrightarrow> \<exists>a\<in>A. b \<le> a) \<Longrightarrow> Sup B \<le> Sup A"
by (metis cSup_least cSup_upper2)
lemma cInf_mono: "B \<noteq> {} \<Longrightarrow> bdd_below A \<Longrightarrow> (\<And>b. b \<in> B \<Longrightarrow> \<exists>a\<in>A. a \<le> b) \<Longrightarrow> Inf A \<le> Inf B"
by (metis cInf_greatest cInf_lower2)
lemma cSup_subset_mono: "A \<noteq> {} \<Longrightarrow> bdd_above B \<Longrightarrow> A \<subseteq> B \<Longrightarrow> Sup A \<le> Sup B"
by (metis cSup_least cSup_upper subsetD)
lemma cInf_superset_mono: "A \<noteq> {} \<Longrightarrow> bdd_below B \<Longrightarrow> A \<subseteq> B \<Longrightarrow> Inf B \<le> Inf A"
by (metis cInf_greatest cInf_lower subsetD)
lemma cSup_eq_maximum: "z \<in> X \<Longrightarrow> (\<And>x. x \<in> X \<Longrightarrow> x \<le> z) \<Longrightarrow> Sup X = z"
by (intro order.antisym cSup_upper[of z X] cSup_least[of X z]) auto
lemma cInf_eq_minimum: "z \<in> X \<Longrightarrow> (\<And>x. x \<in> X \<Longrightarrow> z \<le> x) \<Longrightarrow> Inf X = z"
by (intro order.antisym cInf_lower[of z X] cInf_greatest[of X z]) auto
lemma cSup_le_iff: "S \<noteq> {} \<Longrightarrow> bdd_above S \<Longrightarrow> Sup S \<le> a \<longleftrightarrow> (\<forall>x\<in>S. x \<le> a)"
by (metis order_trans cSup_upper cSup_least)
lemma le_cInf_iff: "S \<noteq> {} \<Longrightarrow> bdd_below S \<Longrightarrow> a \<le> Inf S \<longleftrightarrow> (\<forall>x\<in>S. a \<le> x)"
by (metis order_trans cInf_lower cInf_greatest)
lemma cSup_eq_non_empty:
assumes 1: "X \<noteq> {}"
assumes 2: "\<And>x. x \<in> X \<Longrightarrow> x \<le> a"
assumes 3: "\<And>y. (\<And>x. x \<in> X \<Longrightarrow> x \<le> y) \<Longrightarrow> a \<le> y"
shows "Sup X = a"
by (intro 3 1 order.antisym cSup_least) (auto intro: 2 1 cSup_upper)
lemma cInf_eq_non_empty:
assumes 1: "X \<noteq> {}"
assumes 2: "\<And>x. x \<in> X \<Longrightarrow> a \<le> x"
assumes 3: "\<And>y. (\<And>x. x \<in> X \<Longrightarrow> y \<le> x) \<Longrightarrow> y \<le> a"
shows "Inf X = a"
by (intro 3 1 order.antisym cInf_greatest) (auto intro: 2 1 cInf_lower)
lemma cInf_cSup: "S \<noteq> {} \<Longrightarrow> bdd_below S \<Longrightarrow> Inf S = Sup {x. \<forall>s\<in>S. x \<le> s}"
by (rule cInf_eq_non_empty) (auto intro!: cSup_upper cSup_least simp: bdd_below_def)
lemma cSup_cInf: "S \<noteq> {} \<Longrightarrow> bdd_above S \<Longrightarrow> Sup S = Inf {x. \<forall>s\<in>S. s \<le> x}"
by (rule cSup_eq_non_empty) (auto intro!: cInf_lower cInf_greatest simp: bdd_above_def)
lemma cSup_insert: "X \<noteq> {} \<Longrightarrow> bdd_above X \<Longrightarrow> Sup (insert a X) = sup a (Sup X)"
by (intro cSup_eq_non_empty) (auto intro: le_supI2 cSup_upper cSup_least)
lemma cInf_insert: "X \<noteq> {} \<Longrightarrow> bdd_below X \<Longrightarrow> Inf (insert a X) = inf a (Inf X)"
by (intro cInf_eq_non_empty) (auto intro: le_infI2 cInf_lower cInf_greatest)
lemma cSup_singleton [simp]: "Sup {x} = x"
by (intro cSup_eq_maximum) auto
lemma cInf_singleton [simp]: "Inf {x} = x"
by (intro cInf_eq_minimum) auto
lemma cSup_insert_If: "bdd_above X \<Longrightarrow> Sup (insert a X) = (if X = {} then a else sup a (Sup X))"
using cSup_insert[of X] by simp
lemma cInf_insert_If: "bdd_below X \<Longrightarrow> Inf (insert a X) = (if X = {} then a else inf a (Inf X))"
using cInf_insert[of X] by simp
lemma le_cSup_finite: "finite X \<Longrightarrow> x \<in> X \<Longrightarrow> x \<le> Sup X"
proof (induct X arbitrary: x rule: finite_induct)
case (insert x X y) then show ?case
by (cases "X = {}") (auto simp: cSup_insert intro: le_supI2)
qed simp
lemma cInf_le_finite: "finite X \<Longrightarrow> x \<in> X \<Longrightarrow> Inf X \<le> x"
proof (induct X arbitrary: x rule: finite_induct)
case (insert x X y) then show ?case
by (cases "X = {}") (auto simp: cInf_insert intro: le_infI2)
qed simp
lemma cSup_eq_Sup_fin: "finite X \<Longrightarrow> X \<noteq> {} \<Longrightarrow> Sup X = Sup_fin X"
by (induct X rule: finite_ne_induct) (simp_all add: cSup_insert)
lemma cInf_eq_Inf_fin: "finite X \<Longrightarrow> X \<noteq> {} \<Longrightarrow> Inf X = Inf_fin X"
by (induct X rule: finite_ne_induct) (simp_all add: cInf_insert)
lemma cSup_atMost[simp]: "Sup {..x} = x"
by (auto intro!: cSup_eq_maximum)
lemma cSup_greaterThanAtMost[simp]: "y < x \<Longrightarrow> Sup {y<..x} = x"
by (auto intro!: cSup_eq_maximum)
lemma cSup_atLeastAtMost[simp]: "y \<le> x \<Longrightarrow> Sup {y..x} = x"
by (auto intro!: cSup_eq_maximum)
lemma cInf_atLeast[simp]: "Inf {x..} = x"
by (auto intro!: cInf_eq_minimum)
lemma cInf_atLeastLessThan[simp]: "y < x \<Longrightarrow> Inf {y..<x} = y"
by (auto intro!: cInf_eq_minimum)
lemma cInf_atLeastAtMost[simp]: "y \<le> x \<Longrightarrow> Inf {y..x} = y"
by (auto intro!: cInf_eq_minimum)
lemma cINF_lower: "bdd_below (f ` A) \<Longrightarrow> x \<in> A \<Longrightarrow> \<Sqinter>(f ` A) \<le> f x"
using cInf_lower [of _ "f ` A"] by simp
lemma cINF_greatest: "A \<noteq> {} \<Longrightarrow> (\<And>x. x \<in> A \<Longrightarrow> m \<le> f x) \<Longrightarrow> m \<le> \<Sqinter>(f ` A)"
using cInf_greatest [of "f ` A"] by auto
lemma cSUP_upper: "x \<in> A \<Longrightarrow> bdd_above (f ` A) \<Longrightarrow> f x \<le> \<Squnion>(f ` A)"
using cSup_upper [of _ "f ` A"] by simp
lemma cSUP_least: "A \<noteq> {} \<Longrightarrow> (\<And>x. x \<in> A \<Longrightarrow> f x \<le> M) \<Longrightarrow> \<Squnion>(f ` A) \<le> M"
using cSup_least [of "f ` A"] by auto
lemma cINF_lower2: "bdd_below (f ` A) \<Longrightarrow> x \<in> A \<Longrightarrow> f x \<le> u \<Longrightarrow> \<Sqinter>(f ` A) \<le> u"
by (auto intro: cINF_lower order_trans)
lemma cSUP_upper2: "bdd_above (f ` A) \<Longrightarrow> x \<in> A \<Longrightarrow> u \<le> f x \<Longrightarrow> u \<le> \<Squnion>(f ` A)"
by (auto intro: cSUP_upper order_trans)
lemma cSUP_const [simp]: "A \<noteq> {} \<Longrightarrow> (\<Squnion>x\<in>A. c) = c"
by (intro order.antisym cSUP_least) (auto intro: cSUP_upper)
lemma cINF_const [simp]: "A \<noteq> {} \<Longrightarrow> (\<Sqinter>x\<in>A. c) = c"
by (intro order.antisym cINF_greatest) (auto intro: cINF_lower)
lemma le_cINF_iff: "A \<noteq> {} \<Longrightarrow> bdd_below (f ` A) \<Longrightarrow> u \<le> \<Sqinter>(f ` A) \<longleftrightarrow> (\<forall>x\<in>A. u \<le> f x)"
by (metis cINF_greatest cINF_lower order_trans)
lemma cSUP_le_iff: "A \<noteq> {} \<Longrightarrow> bdd_above (f ` A) \<Longrightarrow> \<Squnion>(f ` A) \<le> u \<longleftrightarrow> (\<forall>x\<in>A. f x \<le> u)"
by (metis cSUP_least cSUP_upper order_trans)
lemma less_cINF_D: "bdd_below (f`A) \<Longrightarrow> y < (\<Sqinter>i\<in>A. f i) \<Longrightarrow> i \<in> A \<Longrightarrow> y < f i"
by (metis cINF_lower less_le_trans)
lemma cSUP_lessD: "bdd_above (f`A) \<Longrightarrow> (\<Squnion>i\<in>A. f i) < y \<Longrightarrow> i \<in> A \<Longrightarrow> f i < y"
by (metis cSUP_upper le_less_trans)
lemma cINF_insert: "A \<noteq> {} \<Longrightarrow> bdd_below (f ` A) \<Longrightarrow> \<Sqinter>(f ` insert a A) = inf (f a) (\<Sqinter>(f ` A))"
by (simp add: cInf_insert)
lemma cSUP_insert: "A \<noteq> {} \<Longrightarrow> bdd_above (f ` A) \<Longrightarrow> \<Squnion>(f ` insert a A) = sup (f a) (\<Squnion>(f ` A))"
by (simp add: cSup_insert)
lemma cINF_mono: "B \<noteq> {} \<Longrightarrow> bdd_below (f ` A) \<Longrightarrow> (\<And>m. m \<in> B \<Longrightarrow> \<exists>n\<in>A. f n \<le> g m) \<Longrightarrow> \<Sqinter>(f ` A) \<le> \<Sqinter>(g ` B)"
using cInf_mono [of "g ` B" "f ` A"] by auto
lemma cSUP_mono: "A \<noteq> {} \<Longrightarrow> bdd_above (g ` B) \<Longrightarrow> (\<And>n. n \<in> A \<Longrightarrow> \<exists>m\<in>B. f n \<le> g m) \<Longrightarrow> \<Squnion>(f ` A) \<le> \<Squnion>(g ` B)"
using cSup_mono [of "f ` A" "g ` B"] by auto
lemma cINF_superset_mono: "A \<noteq> {} \<Longrightarrow> bdd_below (g ` B) \<Longrightarrow> A \<subseteq> B \<Longrightarrow> (\<And>x. x \<in> B \<Longrightarrow> g x \<le> f x) \<Longrightarrow> \<Sqinter>(g ` B) \<le> \<Sqinter>(f ` A)"
by (rule cINF_mono) auto
lemma cSUP_subset_mono:
"\<lbrakk>A \<noteq> {}; bdd_above (g ` B); A \<subseteq> B; \<And>x. x \<in> A \<Longrightarrow> f x \<le> g x\<rbrakk> \<Longrightarrow> \<Squnion> (f ` A) \<le> \<Squnion> (g ` B)"
by (rule cSUP_mono) auto
lemma less_eq_cInf_inter: "bdd_below A \<Longrightarrow> bdd_below B \<Longrightarrow> A \<inter> B \<noteq> {} \<Longrightarrow> inf (Inf A) (Inf B) \<le> Inf (A \<inter> B)"
by (metis cInf_superset_mono lattice_class.inf_sup_ord(1) le_infI1)
lemma cSup_inter_less_eq: "bdd_above A \<Longrightarrow> bdd_above B \<Longrightarrow> A \<inter> B \<noteq> {} \<Longrightarrow> Sup (A \<inter> B) \<le> sup (Sup A) (Sup B) "
by (metis cSup_subset_mono lattice_class.inf_sup_ord(1) le_supI1)
lemma cInf_union_distrib: "A \<noteq> {} \<Longrightarrow> bdd_below A \<Longrightarrow> B \<noteq> {} \<Longrightarrow> bdd_below B \<Longrightarrow> Inf (A \<union> B) = inf (Inf A) (Inf B)"
by (intro order.antisym le_infI cInf_greatest cInf_lower) (auto intro: le_infI1 le_infI2 cInf_lower)
lemma cINF_union: "A \<noteq> {} \<Longrightarrow> bdd_below (f ` A) \<Longrightarrow> B \<noteq> {} \<Longrightarrow> bdd_below (f ` B) \<Longrightarrow> \<Sqinter> (f ` (A \<union> B)) = \<Sqinter> (f ` A) \<sqinter> \<Sqinter> (f ` B)"
using cInf_union_distrib [of "f ` A" "f ` B"] by (simp add: image_Un)
lemma cSup_union_distrib: "A \<noteq> {} \<Longrightarrow> bdd_above A \<Longrightarrow> B \<noteq> {} \<Longrightarrow> bdd_above B \<Longrightarrow> Sup (A \<union> B) = sup (Sup A) (Sup B)"
by (intro order.antisym le_supI cSup_least cSup_upper) (auto intro: le_supI1 le_supI2 cSup_upper)
lemma cSUP_union: "A \<noteq> {} \<Longrightarrow> bdd_above (f ` A) \<Longrightarrow> B \<noteq> {} \<Longrightarrow> bdd_above (f ` B) \<Longrightarrow> \<Squnion> (f ` (A \<union> B)) = \<Squnion> (f ` A) \<squnion> \<Squnion> (f ` B)"
using cSup_union_distrib [of "f ` A" "f ` B"] by (simp add: image_Un)
lemma cINF_inf_distrib: "A \<noteq> {} \<Longrightarrow> bdd_below (f`A) \<Longrightarrow> bdd_below (g`A) \<Longrightarrow> \<Sqinter> (f ` A) \<sqinter> \<Sqinter> (g ` A) = (\<Sqinter>a\<in>A. inf (f a) (g a))"
by (intro order.antisym le_infI cINF_greatest cINF_lower2)
(auto intro: le_infI1 le_infI2 cINF_greatest cINF_lower le_infI)
lemma SUP_sup_distrib: "A \<noteq> {} \<Longrightarrow> bdd_above (f`A) \<Longrightarrow> bdd_above (g`A) \<Longrightarrow> \<Squnion> (f ` A) \<squnion> \<Squnion> (g ` A) = (\<Squnion>a\<in>A. sup (f a) (g a))"
by (intro order.antisym le_supI cSUP_least cSUP_upper2)
(auto intro: le_supI1 le_supI2 cSUP_least cSUP_upper le_supI)
lemma cInf_le_cSup:
"A \<noteq> {} \<Longrightarrow> bdd_above A \<Longrightarrow> bdd_below A \<Longrightarrow> Inf A \<le> Sup A"
by (auto intro!: cSup_upper2[of "SOME a. a \<in> A"] intro: someI cInf_lower)
context
fixes f :: "'a \<Rightarrow> 'b::conditionally_complete_lattice"
assumes "mono f"
begin
lemma mono_cInf: "\<lbrakk>bdd_below A; A\<noteq>{}\<rbrakk> \<Longrightarrow> f (Inf A) \<le> (INF x\<in>A. f x)"
by (simp add: \<open>mono f\<close> conditionally_complete_lattice_class.cINF_greatest cInf_lower monoD)
lemma mono_cSup: "\<lbrakk>bdd_above A; A\<noteq>{}\<rbrakk> \<Longrightarrow> (SUP x\<in>A. f x) \<le> f (Sup A)"
by (simp add: \<open>mono f\<close> conditionally_complete_lattice_class.cSUP_least cSup_upper monoD)
lemma mono_cINF: "\<lbrakk>bdd_below (A`I); I\<noteq>{}\<rbrakk> \<Longrightarrow> f (INF i\<in>I. A i) \<le> (INF x\<in>I. f (A x))"
by (simp add: \<open>mono f\<close> conditionally_complete_lattice_class.cINF_greatest cINF_lower monoD)
lemma mono_cSUP: "\<lbrakk>bdd_above (A`I); I\<noteq>{}\<rbrakk> \<Longrightarrow> (SUP x\<in>I. f (A x)) \<le> f (SUP i\<in>I. A i)"
by (simp add: \<open>mono f\<close> conditionally_complete_lattice_class.cSUP_least cSUP_upper monoD)
end
end
text \<open>The special case of well-orderings\<close>
lemma wellorder_InfI:
fixes k :: "'a::{wellorder,conditionally_complete_lattice}"
assumes "k \<in> A" shows "Inf A \<in> A"
using wellorder_class.LeastI [of "\<lambda>x. x \<in> A" k]
by (simp add: Least_le assms cInf_eq_minimum)
lemma wellorder_Inf_le1:
fixes k :: "'a::{wellorder,conditionally_complete_lattice}"
assumes "k \<in> A" shows "Inf A \<le> k"
by (meson Least_le assms bdd_below.I cInf_lower)
subsection \<open>Complete lattices\<close>
instance complete_lattice \<subseteq> conditionally_complete_lattice
by standard (auto intro: Sup_upper Sup_least Inf_lower Inf_greatest)
lemma cSup_eq:
fixes a :: "'a :: {conditionally_complete_lattice, no_bot}"
assumes upper: "\<And>x. x \<in> X \<Longrightarrow> x \<le> a"
assumes least: "\<And>y. (\<And>x. x \<in> X \<Longrightarrow> x \<le> y) \<Longrightarrow> a \<le> y"
shows "Sup X = a"
proof cases
assume "X = {}" with lt_ex[of a] least show ?thesis by (auto simp: less_le_not_le)
qed (intro cSup_eq_non_empty assms)
lemma cInf_eq:
fixes a :: "'a :: {conditionally_complete_lattice, no_top}"
assumes upper: "\<And>x. x \<in> X \<Longrightarrow> a \<le> x"
assumes least: "\<And>y. (\<And>x. x \<in> X \<Longrightarrow> y \<le> x) \<Longrightarrow> y \<le> a"
shows "Inf X = a"
proof cases
assume "X = {}" with gt_ex[of a] least show ?thesis by (auto simp: less_le_not_le)
qed (intro cInf_eq_non_empty assms)
class conditionally_complete_linorder = conditionally_complete_lattice + linorder
begin
lemma less_cSup_iff:
"X \<noteq> {} \<Longrightarrow> bdd_above X \<Longrightarrow> y < Sup X \<longleftrightarrow> (\<exists>x\<in>X. y < x)"
by (rule iffI) (metis cSup_least not_less, metis cSup_upper less_le_trans)
lemma cInf_less_iff: "X \<noteq> {} \<Longrightarrow> bdd_below X \<Longrightarrow> Inf X < y \<longleftrightarrow> (\<exists>x\<in>X. x < y)"
by (rule iffI) (metis cInf_greatest not_less, metis cInf_lower le_less_trans)
lemma cINF_less_iff: "A \<noteq> {} \<Longrightarrow> bdd_below (f`A) \<Longrightarrow> (\<Sqinter>i\<in>A. f i) < a \<longleftrightarrow> (\<exists>x\<in>A. f x < a)"
using cInf_less_iff[of "f`A"] by auto
lemma less_cSUP_iff: "A \<noteq> {} \<Longrightarrow> bdd_above (f`A) \<Longrightarrow> a < (\<Squnion>i\<in>A. f i) \<longleftrightarrow> (\<exists>x\<in>A. a < f x)"
using less_cSup_iff[of "f`A"] by auto
lemma less_cSupE:
assumes "y < Sup X" "X \<noteq> {}" obtains x where "x \<in> X" "y < x"
by (metis cSup_least assms not_le that)
lemma less_cSupD:
"X \<noteq> {} \<Longrightarrow> z < Sup X \<Longrightarrow> \<exists>x\<in>X. z < x"
by (metis less_cSup_iff not_le_imp_less bdd_above_def)
lemma cInf_lessD:
"X \<noteq> {} \<Longrightarrow> Inf X < z \<Longrightarrow> \<exists>x\<in>X. x < z"
by (metis cInf_less_iff not_le_imp_less bdd_below_def)
lemma complete_interval:
assumes "a < b" and "P a" and "\<not> P b"
shows "\<exists>c. a \<le> c \<and> c \<le> b \<and> (\<forall>x. a \<le> x \<and> x < c \<longrightarrow> P x) \<and>
(\<forall>d. (\<forall>x. a \<le> x \<and> x < d \<longrightarrow> P x) \<longrightarrow> d \<le> c)"
proof (rule exI [where x = "Sup {d. \<forall>x. a \<le> x \<and> x < d \<longrightarrow> P x}"], safe)
show "a \<le> Sup {d. \<forall>c. a \<le> c \<and> c < d \<longrightarrow> P c}"
by (rule cSup_upper, auto simp: bdd_above_def)
(metis \<open>a < b\<close> \<open>\<not> P b\<close> linear less_le)
next
show "Sup {d. \<forall>c. a \<le> c \<and> c < d \<longrightarrow> P c} \<le> b"
by (rule cSup_least)
(use \<open>a<b\<close> \<open>\<not> P b\<close> in \<open>auto simp add: less_le_not_le\<close>)
next
fix x
assume x: "a \<le> x" and lt: "x < Sup {d. \<forall>c. a \<le> c \<and> c < d \<longrightarrow> P c}"
show "P x"
by (rule less_cSupE [OF lt]) (use less_le_not_le x in \<open>auto\<close>)
next
fix d
assume 0: "\<forall>x. a \<le> x \<and> x < d \<longrightarrow> P x"
then have "d \<in> {d. \<forall>c. a \<le> c \<and> c < d \<longrightarrow> P c}"
by auto
moreover have "bdd_above {d. \<forall>c. a \<le> c \<and> c < d \<longrightarrow> P c}"
unfolding bdd_above_def using \<open>a<b\<close> \<open>\<not> P b\<close> linear
by (simp add: less_le) blast
ultimately show "d \<le> Sup {d. \<forall>c. a \<le> c \<and> c < d \<longrightarrow> P c}"
by (auto simp: cSup_upper)
qed
end
subsection \<open>Instances\<close>
instance complete_linorder < conditionally_complete_linorder
..
lemma cSup_eq_Max: "finite (X::'a::conditionally_complete_linorder set) \<Longrightarrow> X \<noteq> {} \<Longrightarrow> Sup X = Max X"
using cSup_eq_Sup_fin[of X] by (simp add: Sup_fin_Max)
lemma cInf_eq_Min: "finite (X::'a::conditionally_complete_linorder set) \<Longrightarrow> X \<noteq> {} \<Longrightarrow> Inf X = Min X"
using cInf_eq_Inf_fin[of X] by (simp add: Inf_fin_Min)
lemma cSup_lessThan[simp]: "Sup {..<x::'a::{conditionally_complete_linorder, no_bot, dense_linorder}} = x"
by (auto intro!: cSup_eq_non_empty intro: dense_le)
lemma cSup_greaterThanLessThan[simp]: "y < x \<Longrightarrow> Sup {y<..<x::'a::{conditionally_complete_linorder, dense_linorder}} = x"
by (auto intro!: cSup_eq_non_empty intro: dense_le_bounded)
lemma cSup_atLeastLessThan[simp]: "y < x \<Longrightarrow> Sup {y..<x::'a::{conditionally_complete_linorder, dense_linorder}} = x"
by (auto intro!: cSup_eq_non_empty intro: dense_le_bounded)
lemma cInf_greaterThan[simp]: "Inf {x::'a::{conditionally_complete_linorder, no_top, dense_linorder} <..} = x"
by (auto intro!: cInf_eq_non_empty intro: dense_ge)
lemma cInf_greaterThanAtMost[simp]: "y < x \<Longrightarrow> Inf {y<..x::'a::{conditionally_complete_linorder, dense_linorder}} = y"
by (auto intro!: cInf_eq_non_empty intro: dense_ge_bounded)
lemma cInf_greaterThanLessThan[simp]: "y < x \<Longrightarrow> Inf {y<..<x::'a::{conditionally_complete_linorder, dense_linorder}} = y"
by (auto intro!: cInf_eq_non_empty intro: dense_ge_bounded)
lemma Inf_insert_finite:
fixes S :: "'a::conditionally_complete_linorder set"
shows "finite S \<Longrightarrow> Inf (insert x S) = (if S = {} then x else min x (Inf S))"
by (simp add: cInf_eq_Min)
lemma Sup_insert_finite:
fixes S :: "'a::conditionally_complete_linorder set"
shows "finite S \<Longrightarrow> Sup (insert x S) = (if S = {} then x else max x (Sup S))"
by (simp add: cSup_insert sup_max)
lemma finite_imp_less_Inf:
fixes a :: "'a::conditionally_complete_linorder"
shows "\<lbrakk>finite X; x \<in> X; \<And>x. x\<in>X \<Longrightarrow> a < x\<rbrakk> \<Longrightarrow> a < Inf X"
by (induction X rule: finite_induct) (simp_all add: cInf_eq_Min Inf_insert_finite)
lemma finite_less_Inf_iff:
fixes a :: "'a :: conditionally_complete_linorder"
shows "\<lbrakk>finite X; X \<noteq> {}\<rbrakk> \<Longrightarrow> a < Inf X \<longleftrightarrow> (\<forall>x \<in> X. a < x)"
by (auto simp: cInf_eq_Min)
lemma finite_imp_Sup_less:
fixes a :: "'a::conditionally_complete_linorder"
shows "\<lbrakk>finite X; x \<in> X; \<And>x. x\<in>X \<Longrightarrow> a > x\<rbrakk> \<Longrightarrow> a > Sup X"
by (induction X rule: finite_induct) (simp_all add: cSup_eq_Max Sup_insert_finite)
lemma finite_Sup_less_iff:
fixes a :: "'a :: conditionally_complete_linorder"
shows "\<lbrakk>finite X; X \<noteq> {}\<rbrakk> \<Longrightarrow> a > Sup X \<longleftrightarrow> (\<forall>x \<in> X. a > x)"
by (auto simp: cSup_eq_Max)
class linear_continuum = conditionally_complete_linorder + dense_linorder +
assumes UNIV_not_singleton: "\<exists>a b::'a. a \<noteq> b"
begin
lemma ex_gt_or_lt: "\<exists>b. a < b \<or> b < a"
by (metis UNIV_not_singleton neq_iff)
end
context
fixes f::"'a \<Rightarrow> 'b::{conditionally_complete_linorder,ordered_ab_group_add}"
begin
lemma bdd_above_uminus_image: "bdd_above ((\<lambda>x. - f x) ` A) \<longleftrightarrow> bdd_below (f ` A)"
by (metis bdd_above_uminus image_image)
lemma bdd_below_uminus_image: "bdd_below ((\<lambda>x. - f x) ` A) \<longleftrightarrow> bdd_above (f ` A)"
by (metis bdd_below_uminus image_image)
lemma uminus_cSUP:
assumes "bdd_above (f ` A)" "A \<noteq> {}"
shows "- (SUP x\<in>A. f x) = (INF x\<in>A. - f x)"
proof (rule antisym)
show "(INF x\<in>A. - f x) \<le> - Sup (f ` A)"
by (metis cINF_lower cSUP_least bdd_below_uminus_image assms le_minus_iff)
have *: "\<And>x. x \<in>A \<Longrightarrow> f x \<le> Sup (f ` A)"
by (simp add: assms cSup_upper)
then show "- Sup (f ` A) \<le> (INF x\<in>A. - f x)"
by (simp add: assms cINF_greatest)
qed
end
context
fixes f::"'a \<Rightarrow> 'b::{conditionally_complete_linorder,ordered_ab_group_add}"
begin
lemma uminus_cINF:
assumes "bdd_below (f ` A)" "A \<noteq> {}"
shows "- (INF x\<in>A. f x) = (SUP x\<in>A. - f x)"
by (metis (mono_tags, lifting) INF_cong uminus_cSUP assms bdd_above_uminus_image minus_equation_iff)
lemma Sup_add_eq:
assumes "bdd_above (f ` A)" "A \<noteq> {}"
shows "(SUP x\<in>A. a + f x) = a + (SUP x\<in>A. f x)" (is "?L=?R")
proof (rule antisym)
have bdd: "bdd_above ((\<lambda>x. a + f x) ` A)"
by (metis assms bdd_above_image_mono image_image mono_add)
with assms show "?L \<le> ?R"
by (simp add: assms cSup_le_iff cSUP_upper)
have "\<And>x. x \<in> A \<Longrightarrow> f x \<le> (SUP x\<in>A. a + f x) - a"
by (simp add: bdd cSup_upper le_diff_eq)
with \<open>A \<noteq> {}\<close> have "\<Squnion> (f ` A) \<le> (\<Squnion>x\<in>A. a + f x) - a"
by (simp add: cSUP_least)
then show "?R \<le> ?L"
by (metis add.commute le_diff_eq)
qed
lemma Inf_add_eq: \<comment>\<open>you don't get a shorter proof by duality\<close>
assumes "bdd_below (f ` A)" "A \<noteq> {}"
shows "(INF x\<in>A. a + f x) = a + (INF x\<in>A. f x)" (is "?L=?R")
proof (rule antisym)
show "?R \<le> ?L"
using assms mono_add mono_cINF by blast
have bdd: "bdd_below ((\<lambda>x. a + f x) ` A)"
by (metis add_left_mono assms(1) bdd_below.E bdd_below.I2 imageI)
with assms have "\<And>x. x \<in> A \<Longrightarrow> f x \<ge> (INF x\<in>A. a + f x) - a"
by (simp add: cInf_lower diff_le_eq)
with \<open>A \<noteq> {}\<close> have "(\<Sqinter>x\<in>A. a + f x) - a \<le> \<Sqinter> (f ` A)"
by (simp add: cINF_greatest)
with assms show "?L \<le> ?R"
by (metis add.commute diff_le_eq)
qed
end
instantiation nat :: conditionally_complete_linorder
begin
definition "Sup (X::nat set) = (if X={} then 0 else Max X)"
definition "Inf (X::nat set) = (LEAST n. n \<in> X)"
lemma bdd_above_nat: "bdd_above X \<longleftrightarrow> finite (X::nat set)"
proof
assume "bdd_above X"
then obtain z where "X \<subseteq> {.. z}"
by (auto simp: bdd_above_def)
then show "finite X"
by (rule finite_subset) simp
qed simp
instance
proof
fix x :: nat
fix X :: "nat set"
show "Inf X \<le> x" if "x \<in> X" "bdd_below X"
using that by (simp add: Inf_nat_def Least_le)
show "x \<le> Inf X" if "X \<noteq> {}" "\<And>y. y \<in> X \<Longrightarrow> x \<le> y"
using that unfolding Inf_nat_def ex_in_conv[symmetric] by (rule LeastI2_ex)
show "x \<le> Sup X" if "x \<in> X" "bdd_above X"
using that by (auto simp add: Sup_nat_def bdd_above_nat)
show "Sup X \<le> x" if "X \<noteq> {}" "\<And>y. y \<in> X \<Longrightarrow> y \<le> x"
proof -
from that have "bdd_above X"
by (auto simp: bdd_above_def)
with that show ?thesis
by (simp add: Sup_nat_def bdd_above_nat)
qed
qed
end
lemma Inf_nat_def1:
fixes K::"nat set"
assumes "K \<noteq> {}"
shows "Inf K \<in> K"
by (auto simp add: Min_def Inf_nat_def) (meson LeastI assms bot.extremum_unique subsetI)
lemma Sup_nat_empty [simp]: "Sup {} = (0::nat)"
by (auto simp add: Sup_nat_def)
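(* Editorial illustration, not part of the original theory: with these definitions one has,
   for instance, Inf {2, 3, 5} = (LEAST n. n ∈ {2, 3, 5}) = 2 and Sup {2, 3, 5} = Max {2, 3, 5} = 5,
   while Sup {} = 0 purely by convention, since nat has no top element. *)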
instantiation int :: conditionally_complete_linorder
begin
definition "Sup (X::int set) = (THE x. x \<in> X \<and> (\<forall>y\<in>X. y \<le> x))"
definition "Inf (X::int set) = - (Sup (uminus ` X))"
instance
proof
{ fix x :: int and X :: "int set" assume "X \<noteq> {}" "bdd_above X"
then obtain x y where "X \<subseteq> {..y}" "x \<in> X"
by (auto simp: bdd_above_def)
then have *: "finite (X \<inter> {x..y})" "X \<inter> {x..y} \<noteq> {}" and "x \<le> y"
by (auto simp: subset_eq)
have "\<exists>!x\<in>X. (\<forall>y\<in>X. y \<le> x)"
proof
{ fix z assume "z \<in> X"
have "z \<le> Max (X \<inter> {x..y})"
proof cases
assume "x \<le> z" with \<open>z \<in> X\<close> \<open>X \<subseteq> {..y}\<close> *(1) show ?thesis
by (auto intro!: Max_ge)
next
assume "\<not> x \<le> z"
then have "z < x" by simp
also have "x \<le> Max (X \<inter> {x..y})"
using \<open>x \<in> X\<close> *(1) \<open>x \<le> y\<close> by (intro Max_ge) auto
finally show ?thesis by simp
qed }
note le = this
with Max_in[OF *] show ex: "Max (X \<inter> {x..y}) \<in> X \<and> (\<forall>z\<in>X. z \<le> Max (X \<inter> {x..y}))" by auto
fix z assume *: "z \<in> X \<and> (\<forall>y\<in>X. y \<le> z)"
with le have "z \<le> Max (X \<inter> {x..y})"
by auto
moreover have "Max (X \<inter> {x..y}) \<le> z"
using * ex by auto
ultimately show "z = Max (X \<inter> {x..y})"
by auto
qed
then have "Sup X \<in> X \<and> (\<forall>y\<in>X. y \<le> Sup X)"
unfolding Sup_int_def by (rule theI') }
note Sup_int = this
{ fix x :: int and X :: "int set" assume "x \<in> X" "bdd_above X" then show "x \<le> Sup X"
using Sup_int[of X] by auto }
note le_Sup = this
{ fix x :: int and X :: "int set" assume "X \<noteq> {}" "\<And>y. y \<in> X \<Longrightarrow> y \<le> x" then show "Sup X \<le> x"
using Sup_int[of X] by (auto simp: bdd_above_def) }
note Sup_le = this
{ fix x :: int and X :: "int set" assume "x \<in> X" "bdd_below X" then show "Inf X \<le> x"
using le_Sup[of "-x" "uminus ` X"] by (auto simp: Inf_int_def) }
{ fix x :: int and X :: "int set" assume "X \<noteq> {}" "\<And>y. y \<in> X \<Longrightarrow> x \<le> y" then show "x \<le> Inf X"
using Sup_le[of "uminus ` X" "-x"] by (force simp: Inf_int_def) }
qed
end
lemma interval_cases:
fixes S :: "'a :: conditionally_complete_linorder set"
assumes ivl: "\<And>a b x. a \<in> S \<Longrightarrow> b \<in> S \<Longrightarrow> a \<le> x \<Longrightarrow> x \<le> b \<Longrightarrow> x \<in> S"
shows "\<exists>a b. S = {} \<or>
S = UNIV \<or>
S = {..<b} \<or>
S = {..b} \<or>
S = {a<..} \<or>
S = {a..} \<or>
S = {a<..<b} \<or>
S = {a<..b} \<or>
S = {a..<b} \<or>
S = {a..b}"
proof -
define lower upper where "lower = {x. \<exists>s\<in>S. s \<le> x}" and "upper = {x. \<exists>s\<in>S. x \<le> s}"
with ivl have "S = lower \<inter> upper"
by auto
moreover
have "\<exists>a. upper = UNIV \<or> upper = {} \<or> upper = {.. a} \<or> upper = {..< a}"
proof cases
assume *: "bdd_above S \<and> S \<noteq> {}"
from * have "upper \<subseteq> {.. Sup S}"
by (auto simp: upper_def intro: cSup_upper2)
moreover from * have "{..< Sup S} \<subseteq> upper"
by (force simp add: less_cSup_iff upper_def subset_eq Ball_def)
ultimately have "upper = {.. Sup S} \<or> upper = {..< Sup S}"
unfolding ivl_disj_un(2)[symmetric] by auto
then show ?thesis by auto
next
assume "\<not> (bdd_above S \<and> S \<noteq> {})"
then have "upper = UNIV \<or> upper = {}"
by (auto simp: upper_def bdd_above_def not_le dest: less_imp_le)
then show ?thesis
by auto
qed
moreover
have "\<exists>b. lower = UNIV \<or> lower = {} \<or> lower = {b ..} \<or> lower = {b <..}"
proof cases
assume *: "bdd_below S \<and> S \<noteq> {}"
from * have "lower \<subseteq> {Inf S ..}"
by (auto simp: lower_def intro: cInf_lower2)
moreover from * have "{Inf S <..} \<subseteq> lower"
by (force simp add: cInf_less_iff lower_def subset_eq Ball_def)
ultimately have "lower = {Inf S ..} \<or> lower = {Inf S <..}"
unfolding ivl_disj_un(1)[symmetric] by auto
then show ?thesis by auto
next
assume "\<not> (bdd_below S \<and> S \<noteq> {})"
then have "lower = UNIV \<or> lower = {}"
by (auto simp: lower_def bdd_below_def not_le dest: less_imp_le)
then show ?thesis
by auto
qed
ultimately show ?thesis
unfolding greaterThanAtMost_def greaterThanLessThan_def atLeastAtMost_def atLeastLessThan_def
by (metis inf_bot_left inf_bot_right inf_top.left_neutral inf_top.right_neutral)
qed
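(* Editorial note: interval_cases says that any order-convex subset of a conditionally
   complete linear order has one of the ten interval shapes listed above; for example, over
   the reals the convex set {x. 0 < x ∧ x ≤ 1} is exactly the interval {0<..1}. *)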
lemma cSUP_eq_cINF_D:
fixes f :: "_ \<Rightarrow> 'b::conditionally_complete_lattice"
assumes eq: "(\<Squnion>x\<in>A. f x) = (\<Sqinter>x\<in>A. f x)"
and bdd: "bdd_above (f ` A)" "bdd_below (f ` A)"
and a: "a \<in> A"
shows "f a = (\<Sqinter>x\<in>A. f x)"
proof (rule antisym)
show "f a \<le> \<Sqinter> (f ` A)"
by (metis a bdd(1) eq cSUP_upper)
show "\<Sqinter> (f ` A) \<le> f a"
using a bdd by (auto simp: cINF_lower)
qed
lemma cSUP_UNION:
fixes f :: "_ \<Rightarrow> 'b::conditionally_complete_lattice"
assumes ne: "A \<noteq> {}" "\<And>x. x \<in> A \<Longrightarrow> B(x) \<noteq> {}"
and bdd_UN: "bdd_above (\<Union>x\<in>A. f ` B x)"
shows "(\<Squnion>z \<in> \<Union>x\<in>A. B x. f z) = (\<Squnion>x\<in>A. \<Squnion>z\<in>B x. f z)"
proof -
have bdd: "\<And>x. x \<in> A \<Longrightarrow> bdd_above (f ` B x)"
using bdd_UN by (meson UN_upper bdd_above_mono)
obtain M where "\<And>x y. x \<in> A \<Longrightarrow> y \<in> B(x) \<Longrightarrow> f y \<le> M"
using bdd_UN by (auto simp: bdd_above_def)
then have bdd2: "bdd_above ((\<lambda>x. \<Squnion>z\<in>B x. f z) ` A)"
unfolding bdd_above_def by (force simp: bdd cSUP_le_iff ne(2))
have "(\<Squnion>z \<in> \<Union>x\<in>A. B x. f z) \<le> (\<Squnion>x\<in>A. \<Squnion>z\<in>B x. f z)"
using assms by (fastforce simp add: intro!: cSUP_least intro: cSUP_upper2 simp: bdd2 bdd)
moreover have "(\<Squnion>x\<in>A. \<Squnion>z\<in>B x. f z) \<le> (\<Squnion> z \<in> \<Union>x\<in>A. B x. f z)"
using assms by (fastforce simp add: intro!: cSUP_least intro: cSUP_upper simp: image_UN bdd_UN)
ultimately show ?thesis
by (rule order_antisym)
qed
lemma cINF_UNION:
fixes f :: "_ \<Rightarrow> 'b::conditionally_complete_lattice"
assumes ne: "A \<noteq> {}" "\<And>x. x \<in> A \<Longrightarrow> B(x) \<noteq> {}"
and bdd_UN: "bdd_below (\<Union>x\<in>A. f ` B x)"
shows "(\<Sqinter>z \<in> \<Union>x\<in>A. B x. f z) = (\<Sqinter>x\<in>A. \<Sqinter>z\<in>B x. f z)"
proof -
have bdd: "\<And>x. x \<in> A \<Longrightarrow> bdd_below (f ` B x)"
using bdd_UN by (meson UN_upper bdd_below_mono)
obtain M where "\<And>x y. x \<in> A \<Longrightarrow> y \<in> B(x) \<Longrightarrow> f y \<ge> M"
using bdd_UN by (auto simp: bdd_below_def)
then have bdd2: "bdd_below ((\<lambda>x. \<Sqinter>z\<in>B x. f z) ` A)"
unfolding bdd_below_def by (force simp: bdd le_cINF_iff ne(2))
have "(\<Sqinter>z \<in> \<Union>x\<in>A. B x. f z) \<le> (\<Sqinter>x\<in>A. \<Sqinter>z\<in>B x. f z)"
using assms by (fastforce simp add: intro!: cINF_greatest intro: cINF_lower simp: bdd2 bdd)
moreover have "(\<Sqinter>x\<in>A. \<Sqinter>z\<in>B x. f z) \<le> (\<Sqinter>z \<in> \<Union>x\<in>A. B x. f z)"
using assms by (fastforce simp add: intro!: cINF_greatest intro: cINF_lower2 simp: bdd bdd_UN bdd2)
ultimately show ?thesis
by (rule order_antisym)
qed
lemma cSup_abs_le:
fixes S :: "('a::{linordered_idom,conditionally_complete_linorder}) set"
shows "S \<noteq> {} \<Longrightarrow> (\<And>x. x\<in>S \<Longrightarrow> \<bar>x\<bar> \<le> a) \<Longrightarrow> \<bar>Sup S\<bar> \<le> a"
apply (auto simp add: abs_le_iff intro: cSup_least)
by (metis bdd_aboveI cSup_upper neg_le_iff_le order_trans)
end
|
Formal statement is: lemma sgn_le_0_iff [simp]: "sgn x \<le> 0 \<longleftrightarrow> x \<le> 0" for x :: real Informal statement is: For any real number $x$, $sgn(x) \leq 0$ if and only if $x \leq 0$. |
[GOAL]
R : Type u
M : Type v
inst✝² : Ring R
inst✝¹ : AddCommGroup M
inst✝ : Module R M
⊢ ⊥ = span R {0}
[PROOFSTEP]
simp
[GOAL]
R : Type u
M : Type v
inst✝³ : Ring R
inst✝² : AddCommGroup M
inst✝¹ : Module R M
K : Type u
inst✝ : DivisionRing K
S : Ideal K
⊢ IsPrincipal S
[PROOFSTEP]
rcases Ideal.eq_bot_or_top S with (rfl | rfl)
[GOAL]
case inl
R : Type u
M : Type v
inst✝³ : Ring R
inst✝² : AddCommGroup M
inst✝¹ : Module R M
K : Type u
inst✝ : DivisionRing K
⊢ IsPrincipal ⊥
case inr
R : Type u
M : Type v
inst✝³ : Ring R
inst✝² : AddCommGroup M
inst✝¹ : Module R M
K : Type u
inst✝ : DivisionRing K
⊢ IsPrincipal ⊤
[PROOFSTEP]
apply bot_isPrincipal
[GOAL]
case inr
R : Type u
M : Type v
inst✝³ : Ring R
inst✝² : AddCommGroup M
inst✝¹ : Module R M
K : Type u
inst✝ : DivisionRing K
⊢ IsPrincipal ⊤
[PROOFSTEP]
apply top_isPrincipal
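Editorial aside: the case split reflects the standard fact that a division ring K has only the two ideals ⊥ and ⊤ (any nonzero member of an ideal is a unit, so the ideal is everything), and both are principal:
$$ S = \bot = \operatorname{span}_K\{0\} \quad\text{or}\quad S = \top = \operatorname{span}_K\{1\}. $$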
[GOAL]
R : Type u
M : Type v
inst✝³ : AddCommGroup M
inst✝² : Ring R
inst✝¹ : Module R M
S : Submodule R M
inst✝ : IsPrincipal S
⊢ generator S ∈ S
[PROOFSTEP]
conv_rhs => rw [← span_singleton_generator S]
[GOAL]
R : Type u
M : Type v
inst✝³ : AddCommGroup M
inst✝² : Ring R
inst✝¹ : Module R M
S : Submodule R M
inst✝ : IsPrincipal S
| S
[PROOFSTEP]
rw [← span_singleton_generator S]
[GOAL]
R : Type u
M : Type v
inst✝³ : AddCommGroup M
inst✝² : Ring R
inst✝¹ : Module R M
S : Submodule R M
inst✝ : IsPrincipal S
| S
[PROOFSTEP]
rw [← span_singleton_generator S]
[GOAL]
R : Type u
M : Type v
inst✝³ : AddCommGroup M
inst✝² : Ring R
inst✝¹ : Module R M
S : Submodule R M
inst✝ : IsPrincipal S
| S
[PROOFSTEP]
rw [← span_singleton_generator S]
[GOAL]
R : Type u
M : Type v
inst✝³ : AddCommGroup M
inst✝² : Ring R
inst✝¹ : Module R M
S : Submodule R M
inst✝ : IsPrincipal S
⊢ generator S ∈ span R {generator S}
[PROOFSTEP]
exact subset_span (mem_singleton _)
[GOAL]
R : Type u
M : Type v
inst✝³ : AddCommGroup M
inst✝² : Ring R
inst✝¹ : Module R M
S : Submodule R M
inst✝ : IsPrincipal S
x : M
⊢ x ∈ S ↔ ∃ s, x = s • generator S
[PROOFSTEP]
simp_rw [@eq_comm _ x, ← mem_span_singleton, span_singleton_generator]
[GOAL]
R : Type u
M : Type v
inst✝³ : AddCommGroup M
inst✝² : Ring R
inst✝¹ : Module R M
S : Submodule R M
inst✝ : IsPrincipal S
⊢ S = ⊥ ↔ generator S = 0
[PROOFSTEP]
rw [← @span_singleton_eq_bot R M, span_singleton_generator]
[GOAL]
R : Type u
M : Type v
inst✝³ : AddCommGroup M
inst✝² : CommRing R
inst✝¹ : Module R M
S : Ideal R
inst✝ : IsPrincipal S
x a : R
⊢ x = a • generator S ↔ x = generator S * a
[PROOFSTEP]
simp only [mul_comm, smul_eq_mul]
[GOAL]
R : Type u
M : Type v
inst✝³ : AddCommGroup M
inst✝² : CommRing R
inst✝¹ : Module R M
S : Ideal R
inst✝ : IsPrincipal S
is_prime : Ideal.IsPrime S
ne_bot : S ≠ ⊥
x✝¹ x✝ : R
⊢ generator S ∣ x✝¹ * x✝ → generator S ∣ x✝¹ ∨ generator S ∣ x✝
[PROOFSTEP]
simpa only [← mem_iff_generator_dvd S] using is_prime.2
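Editorial aside: unpacking `mem_iff_generator_dvd`, the statement being `simpa`-ed is exactly that the generator p of a nonzero principal prime ideal S is a prime element:
$$ p \mid x y \;\Longrightarrow\; x y \in S \;\Longrightarrow\; x \in S \ \text{or}\ y \in S \;\Longrightarrow\; p \mid x \ \text{or}\ p \mid y . $$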
[GOAL]
R : Type u
M : Type v
inst✝³ : AddCommGroup M
inst✝² : CommRing R
inst✝¹ : Module R M
N : Submodule R M
ϕ : M →ₗ[R] R
inst✝ : IsPrincipal (map ϕ N)
x : M
hx : x ∈ N
⊢ generator (map ϕ N) ∣ ↑ϕ x
[PROOFSTEP]
rw [← mem_iff_generator_dvd, Submodule.mem_map]
[GOAL]
R : Type u
M : Type v
inst✝³ : AddCommGroup M
inst✝² : CommRing R
inst✝¹ : Module R M
N : Submodule R M
ϕ : M →ₗ[R] R
inst✝ : IsPrincipal (map ϕ N)
x : M
hx : x ∈ N
⊢ ∃ y, y ∈ N ∧ ↑ϕ y = ↑ϕ x
[PROOFSTEP]
exact ⟨x, hx, rfl⟩
[GOAL]
R : Type u
M : Type v
inst✝³ : AddCommGroup M
inst✝² : CommRing R
inst✝¹ : Module R M
N O : Submodule R M
hNO : N ≤ O
ϕ : { x // x ∈ O } →ₗ[R] R
inst✝ : IsPrincipal (LinearMap.submoduleImage ϕ N)
x : M
hx : x ∈ N
⊢ generator (LinearMap.submoduleImage ϕ N) ∣ ↑ϕ { val := x, property := (_ : x ∈ O) }
[PROOFSTEP]
rw [← mem_iff_generator_dvd, LinearMap.mem_submoduleImage_of_le hNO]
[GOAL]
R : Type u
M : Type v
inst✝³ : AddCommGroup M
inst✝² : CommRing R
inst✝¹ : Module R M
N O : Submodule R M
hNO : N ≤ O
ϕ : { x // x ∈ O } →ₗ[R] R
inst✝ : IsPrincipal (LinearMap.submoduleImage ϕ N)
x : M
hx : x ∈ N
⊢ ∃ y yN, ↑ϕ { val := y, property := (_ : y ∈ O) } = ↑ϕ { val := x, property := (_ : x ∈ O) }
[PROOFSTEP]
exact ⟨x, hx, rfl⟩
[GOAL]
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
S : Ideal R
hpi : IsPrime S
hS : S ≠ ⊥
⊢ ∀ (J : Ideal R) (x : R), S ≤ J → ¬x ∈ S → x ∈ J → 1 ∈ J
[PROOFSTEP]
intro T x hST hxS hxT
[GOAL]
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
S : Ideal R
hpi : IsPrime S
hS : S ≠ ⊥
T : Ideal R
x : R
hST : S ≤ T
hxS : ¬x ∈ S
hxT : x ∈ T
⊢ 1 ∈ T
[PROOFSTEP]
cases' (mem_iff_generator_dvd _).1 (hST <| generator_mem S) with z hz
[GOAL]
case intro
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
S : Ideal R
hpi : IsPrime S
hS : S ≠ ⊥
T : Ideal R
x : R
hST : S ≤ T
hxS : ¬x ∈ S
hxT : x ∈ T
z : R
hz : generator S = generator T * z
⊢ 1 ∈ T
[PROOFSTEP]
cases hpi.mem_or_mem (show generator T * z ∈ S from hz ▸ generator_mem S)
[GOAL]
case intro.inl
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
S : Ideal R
hpi : IsPrime S
hS : S ≠ ⊥
T : Ideal R
x : R
hST : S ≤ T
hxS : ¬x ∈ S
hxT : x ∈ T
z : R
hz : generator S = generator T * z
h✝ : generator T ∈ S
⊢ 1 ∈ T
case intro.inr
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
S : Ideal R
hpi : IsPrime S
hS : S ≠ ⊥
T : Ideal R
x : R
hST : S ≤ T
hxS : ¬x ∈ S
hxT : x ∈ T
z : R
hz : generator S = generator T * z
h✝ : z ∈ S
⊢ 1 ∈ T
[PROOFSTEP]
case inl h =>
have hTS : T ≤ S
rwa [← T.span_singleton_generator, Ideal.span_le, singleton_subset_iff]
exact (hxS <| hTS hxT).elim
[GOAL]
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
S : Ideal R
hpi : IsPrime S
hS : S ≠ ⊥
T : Ideal R
x : R
hST : S ≤ T
hxS : ¬x ∈ S
hxT : x ∈ T
z : R
hz : generator S = generator T * z
h : generator T ∈ S
⊢ 1 ∈ T
[PROOFSTEP]
case inl h =>
have hTS : T ≤ S
rwa [← T.span_singleton_generator, Ideal.span_le, singleton_subset_iff]
exact (hxS <| hTS hxT).elim
[GOAL]
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
S : Ideal R
hpi : IsPrime S
hS : S ≠ ⊥
T : Ideal R
x : R
hST : S ≤ T
hxS : ¬x ∈ S
hxT : x ∈ T
z : R
hz : generator S = generator T * z
h : generator T ∈ S
⊢ 1 ∈ T
[PROOFSTEP]
have hTS : T ≤ S
[GOAL]
case hTS
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
S : Ideal R
hpi : IsPrime S
hS : S ≠ ⊥
T : Ideal R
x : R
hST : S ≤ T
hxS : ¬x ∈ S
hxT : x ∈ T
z : R
hz : generator S = generator T * z
h : generator T ∈ S
⊢ T ≤ S
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
S : Ideal R
hpi : IsPrime S
hS : S ≠ ⊥
T : Ideal R
x : R
hST : S ≤ T
hxS : ¬x ∈ S
hxT : x ∈ T
z : R
hz : generator S = generator T * z
h : generator T ∈ S
hTS : T ≤ S
⊢ 1 ∈ T
[PROOFSTEP]
rwa [← T.span_singleton_generator, Ideal.span_le, singleton_subset_iff]
[GOAL]
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
S : Ideal R
hpi : IsPrime S
hS : S ≠ ⊥
T : Ideal R
x : R
hST : S ≤ T
hxS : ¬x ∈ S
hxT : x ∈ T
z : R
hz : generator S = generator T * z
h : generator T ∈ S
hTS : T ≤ S
⊢ 1 ∈ T
[PROOFSTEP]
exact (hxS <| hTS hxT).elim
[GOAL]
case intro.inr
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
S : Ideal R
hpi : IsPrime S
hS : S ≠ ⊥
T : Ideal R
x : R
hST : S ≤ T
hxS : ¬x ∈ S
hxT : x ∈ T
z : R
hz : generator S = generator T * z
h✝ : z ∈ S
⊢ 1 ∈ T
[PROOFSTEP]
case inr h =>
cases' (mem_iff_generator_dvd _).1 h with y hy
have : generator S ≠ 0 := mt (eq_bot_iff_generator_eq_zero _).2 hS
rw [← mul_one (generator S), hy, mul_left_comm, mul_right_inj' this] at hz
exact hz.symm ▸ T.mul_mem_right _ (generator_mem T)
[GOAL]
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
S : Ideal R
hpi : IsPrime S
hS : S ≠ ⊥
T : Ideal R
x : R
hST : S ≤ T
hxS : ¬x ∈ S
hxT : x ∈ T
z : R
hz : generator S = generator T * z
h : z ∈ S
⊢ 1 ∈ T
[PROOFSTEP]
case inr h =>
cases' (mem_iff_generator_dvd _).1 h with y hy
have : generator S ≠ 0 := mt (eq_bot_iff_generator_eq_zero _).2 hS
rw [← mul_one (generator S), hy, mul_left_comm, mul_right_inj' this] at hz
exact hz.symm ▸ T.mul_mem_right _ (generator_mem T)
[GOAL]
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
S : Ideal R
hpi : IsPrime S
hS : S ≠ ⊥
T : Ideal R
x : R
hST : S ≤ T
hxS : ¬x ∈ S
hxT : x ∈ T
z : R
hz : generator S = generator T * z
h : z ∈ S
⊢ 1 ∈ T
[PROOFSTEP]
cases' (mem_iff_generator_dvd _).1 h with y hy
[GOAL]
case intro
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
S : Ideal R
hpi : IsPrime S
hS : S ≠ ⊥
T : Ideal R
x : R
hST : S ≤ T
hxS : ¬x ∈ S
hxT : x ∈ T
z : R
hz : generator S = generator T * z
h : z ∈ S
y : R
hy : z = generator S * y
⊢ 1 ∈ T
[PROOFSTEP]
have : generator S ≠ 0 := mt (eq_bot_iff_generator_eq_zero _).2 hS
[GOAL]
case intro
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
S : Ideal R
hpi : IsPrime S
hS : S ≠ ⊥
T : Ideal R
x : R
hST : S ≤ T
hxS : ¬x ∈ S
hxT : x ∈ T
z : R
hz : generator S = generator T * z
h : z ∈ S
y : R
hy : z = generator S * y
this : generator S ≠ 0
⊢ 1 ∈ T
[PROOFSTEP]
rw [← mul_one (generator S), hy, mul_left_comm, mul_right_inj' this] at hz
[GOAL]
case intro
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
S : Ideal R
hpi : IsPrime S
hS : S ≠ ⊥
T : Ideal R
x : R
hST : S ≤ T
hxS : ¬x ∈ S
hxT : x ∈ T
z : R
h : z ∈ S
y : R
hz : 1 = generator T * y
hy : z = generator S * y
this : generator S ≠ 0
⊢ 1 ∈ T
[PROOFSTEP]
exact hz.symm ▸ T.mul_mem_right _ (generator_mem T)
[GOAL]
R : Type u
M : Type v
inst✝ : EuclideanDomain R
S : Ideal R
h : Set.Nonempty {x | x ∈ S ∧ x ≠ 0}
wf : WellFounded EuclideanDomain.r
hmin : WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ∈ S ∧ WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ≠ 0
x : R
hx : x ∈ S
⊢ WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ∣ x % WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h
[PROOFSTEP]
have : x % WellFounded.min wf {x : R | x ∈ S ∧ x ≠ 0} h ∉ {x : R | x ∈ S ∧ x ≠ 0} := fun h₁ =>
WellFounded.not_lt_min wf _ h h₁ (mod_lt x hmin.2)
[GOAL]
R : Type u
M : Type v
inst✝ : EuclideanDomain R
S : Ideal R
h : Set.Nonempty {x | x ∈ S ∧ x ≠ 0}
wf : WellFounded EuclideanDomain.r
hmin : WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ∈ S ∧ WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ≠ 0
x : R
hx : x ∈ S
this : ¬x % WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ∈ {x | x ∈ S ∧ x ≠ 0}
⊢ WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ∣ x % WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h
[PROOFSTEP]
have : x % WellFounded.min wf {x : R | x ∈ S ∧ x ≠ 0} h = 0 :=
by
simp only [not_and_or, Set.mem_setOf_eq, not_ne_iff] at this
exact this.neg_resolve_left <| (mod_mem_iff hmin.1).2 hx
[GOAL]
R : Type u
M : Type v
inst✝ : EuclideanDomain R
S : Ideal R
h : Set.Nonempty {x | x ∈ S ∧ x ≠ 0}
wf : WellFounded EuclideanDomain.r
hmin : WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ∈ S ∧ WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ≠ 0
x : R
hx : x ∈ S
this : ¬x % WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ∈ {x | x ∈ S ∧ x ≠ 0}
⊢ x % WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h = 0
[PROOFSTEP]
simp only [not_and_or, Set.mem_setOf_eq, not_ne_iff] at this
[GOAL]
R : Type u
M : Type v
inst✝ : EuclideanDomain R
S : Ideal R
h : Set.Nonempty {x | x ∈ S ∧ x ≠ 0}
wf : WellFounded EuclideanDomain.r
hmin : WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ∈ S ∧ WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ≠ 0
x : R
hx : x ∈ S
this : ¬x % WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ∈ S ∨ x % WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h = 0
⊢ x % WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h = 0
[PROOFSTEP]
exact this.neg_resolve_left <| (mod_mem_iff hmin.1).2 hx
[GOAL]
R : Type u
M : Type v
inst✝ : EuclideanDomain R
S : Ideal R
h : Set.Nonempty {x | x ∈ S ∧ x ≠ 0}
wf : WellFounded EuclideanDomain.r
hmin : WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ∈ S ∧ WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ≠ 0
x : R
hx : x ∈ S
this✝ : ¬x % WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ∈ {x | x ∈ S ∧ x ≠ 0}
this : x % WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h = 0
⊢ WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h ∣ x % WellFounded.min wf {x | x ∈ S ∧ x ≠ 0} h
[PROOFSTEP]
simp [*]
[GOAL]
R : Type u
M : Type v
inst✝ : EuclideanDomain R
S : Ideal R
h : ¬Set.Nonempty {x | x ∈ S ∧ x ≠ 0}
a : R
⊢ a ∈ S ↔ a ∈ span R {0}
[PROOFSTEP]
rw [← @Submodule.bot_coe R R _ _ _, span_eq, Submodule.mem_bot]
[GOAL]
R : Type u
M : Type v
inst✝ : EuclideanDomain R
S : Ideal R
h : ¬Set.Nonempty {x | x ∈ S ∧ x ≠ 0}
a : R
⊢ a ∈ S ↔ a = 0
[PROOFSTEP]
exact ⟨fun haS => by_contra fun ha0 => h ⟨a, ⟨haS, ha0⟩⟩, fun h₁ => h₁.symm ▸ S.zero_mem⟩
[GOAL]
R : Type u
M : Type v
inst✝¹ : Ring R
inst✝ : IsPrincipalIdealRing R
s : Ideal R
⊢ FG s
[PROOFSTEP]
rcases(IsPrincipalIdealRing.principal s).principal with ⟨a, rfl⟩
[GOAL]
case intro
R : Type u
M : Type v
inst✝¹ : Ring R
inst✝ : IsPrincipalIdealRing R
a : R
⊢ FG (span R {a})
[PROOFSTEP]
rw [← Finset.coe_singleton]
[GOAL]
case intro
R : Type u
M : Type v
inst✝¹ : Ring R
inst✝ : IsPrincipalIdealRing R
a : R
⊢ FG (span R ↑{a})
[PROOFSTEP]
exact ⟨{ a }, SetLike.coe_injective rfl⟩
[GOAL]
R : Type u
M : Type v
inst✝¹ : CommRing R
inst✝ : IsPrincipalIdealRing R
p : R
hp : Irreducible p
I : Ideal R
hI : span R {p} < I
⊢ I = ⊤
[PROOFSTEP]
rcases principal I with ⟨a, rfl⟩
[GOAL]
case mk.intro
R : Type u
M : Type v
inst✝¹ : CommRing R
inst✝ : IsPrincipalIdealRing R
p : R
hp : Irreducible p
a : R
hI : span R {p} < span R {a}
⊢ span R {a} = ⊤
[PROOFSTEP]
erw [Ideal.span_singleton_eq_top]
[GOAL]
case mk.intro
R : Type u
M : Type v
inst✝¹ : CommRing R
inst✝ : IsPrincipalIdealRing R
p : R
hp : Irreducible p
a : R
hI : span R {p} < span R {a}
⊢ IsUnit a
[PROOFSTEP]
rcases Ideal.span_singleton_le_span_singleton.1 (le_of_lt hI) with ⟨b, rfl⟩
[GOAL]
case mk.intro.intro
R : Type u
M : Type v
inst✝¹ : CommRing R
inst✝ : IsPrincipalIdealRing R
a b : R
hp : Irreducible (a * b)
hI : span R {a * b} < span R {a}
⊢ IsUnit a
[PROOFSTEP]
refine' (of_irreducible_mul hp).resolve_right (mt (fun hb => _) (not_le_of_lt hI))
[GOAL]
case mk.intro.intro
R : Type u
M : Type v
inst✝¹ : CommRing R
inst✝ : IsPrincipalIdealRing R
a b : R
hp : Irreducible (a * b)
hI : span R {a * b} < span R {a}
hb : IsUnit b
⊢ span R {a} ≤ span R {a * b}
[PROOFSTEP]
erw [Ideal.span_singleton_le_span_singleton, IsUnit.mul_right_dvd hb]
[GOAL]
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
a : R
h : a ≠ 0
⊢ (∀ (b : R), b ∈ factors a → Irreducible b) ∧ Associated (Multiset.prod (factors a)) a
[PROOFSTEP]
unfold factors
[GOAL]
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
a : R
h : a ≠ 0
⊢ (∀ (b : R),
(b ∈
if h : a = 0 then ∅
else choose (_ : ∃ f, (∀ (b : R), b ∈ f → Irreducible b) ∧ Associated (Multiset.prod f) a)) →
Irreducible b) ∧
Associated
(Multiset.prod
(if h : a = 0 then ∅
else choose (_ : ∃ f, (∀ (b : R), b ∈ f → Irreducible b) ∧ Associated (Multiset.prod f) a)))
a
[PROOFSTEP]
rw [dif_neg h]
[GOAL]
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
a : R
h : a ≠ 0
⊢ (∀ (b : R),
b ∈ choose (_ : ∃ f, (∀ (b : R), b ∈ f → Irreducible b) ∧ Associated (Multiset.prod f) a) → Irreducible b) ∧
Associated (Multiset.prod (choose (_ : ∃ f, (∀ (b : R), b ∈ f → Irreducible b) ∧ Associated (Multiset.prod f) a))) a
[PROOFSTEP]
exact Classical.choose_spec (WfDvdMonoid.exists_factors a h)
[GOAL]
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
s : Submonoid R
a : R
ha : a ≠ 0
hfac : ∀ (b : R), b ∈ factors a → b ∈ s
hunit : ∀ (c : Rˣ), ↑c ∈ s
⊢ a ∈ s
[PROOFSTEP]
rcases(factors_spec a ha).2 with ⟨c, hc⟩
[GOAL]
case intro
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
s : Submonoid R
a : R
ha : a ≠ 0
hfac : ∀ (b : R), b ∈ factors a → b ∈ s
hunit : ∀ (c : Rˣ), ↑c ∈ s
c : Rˣ
hc : Multiset.prod (factors a) * ↑c = a
⊢ a ∈ s
[PROOFSTEP]
rw [← hc]
[GOAL]
case intro
R : Type u
M : Type v
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : IsPrincipalIdealRing R
s : Submonoid R
a : R
ha : a ≠ 0
hfac : ∀ (b : R), b ∈ factors a → b ∈ s
hunit : ∀ (c : Rˣ), ↑c ∈ s
c : Rˣ
hc : Multiset.prod (factors a) * ↑c = a
⊢ Multiset.prod (factors a) * ↑c ∈ s
[PROOFSTEP]
exact mul_mem (multiset_prod_mem _ hfac) (hunit _)
[GOAL]
R : Type u
M : Type v
S✝ : Type u_1
N : Type u_2
inst✝⁵ : Ring R
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Ring S✝
inst✝¹ : Module R M
inst✝ : Module R N
f : M →ₗ[R] N
hf : Surjective ↑f
S : Submodule R N
hI : IsPrincipal (comap f S)
⊢ S = span R {↑f (generator (comap f S))}
[PROOFSTEP]
rw [← Set.image_singleton, ← Submodule.map_span, IsPrincipal.span_singleton_generator,
Submodule.map_comap_eq_of_surjective hf]
[GOAL]
R : Type u
M : Type v
S : Type u_1
N : Type u_2
inst✝⁵ : Ring R
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Ring S
inst✝¹ : Module R M
inst✝ : Module R N
f : R →+* S
hf : Surjective ↑f
I : Ideal S
hI : IsPrincipal (comap f I)
⊢ I = Submodule.span S {↑f (IsPrincipal.generator (comap f I))}
[PROOFSTEP]
rw [Ideal.submodule_span_eq, ← Set.image_singleton, ← Ideal.map_span, Ideal.span_singleton_generator,
Ideal.map_comap_of_surjective f hf]
[GOAL]
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
⊢ Ideal.span {gcd x y} = Ideal.span {x, y}
[PROOFSTEP]
obtain ⟨d, hd⟩ := IsPrincipalIdealRing.principal (span ({ x, y } : Set R))
[GOAL]
case mk.intro
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Submodule.span R {d}
⊢ Ideal.span {gcd x y} = Ideal.span {x, y}
[PROOFSTEP]
rw [submodule_span_eq] at hd
[GOAL]
case mk.intro
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Ideal.span {d}
⊢ Ideal.span {gcd x y} = Ideal.span {x, y}
[PROOFSTEP]
rw [hd]
[GOAL]
case mk.intro
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Ideal.span {d}
⊢ Ideal.span {gcd x y} = Ideal.span {d}
[PROOFSTEP]
suffices Associated d (gcd x y) by
obtain ⟨D, HD⟩ := this
rw [← HD]
exact span_singleton_mul_right_unit D.isUnit _
[GOAL]
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Ideal.span {d}
this : Associated d (gcd x y)
⊢ Ideal.span {gcd x y} = Ideal.span {d}
[PROOFSTEP]
obtain ⟨D, HD⟩ := this
[GOAL]
case intro
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Ideal.span {d}
D : Rˣ
HD : d * ↑D = gcd x y
⊢ Ideal.span {gcd x y} = Ideal.span {d}
[PROOFSTEP]
rw [← HD]
[GOAL]
case intro
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Ideal.span {d}
D : Rˣ
HD : d * ↑D = gcd x y
⊢ Ideal.span {d * ↑D} = Ideal.span {d}
[PROOFSTEP]
exact span_singleton_mul_right_unit D.isUnit _
[GOAL]
case mk.intro
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Ideal.span {d}
⊢ Associated d (gcd x y)
[PROOFSTEP]
apply associated_of_dvd_dvd
[GOAL]
case mk.intro.hab
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Ideal.span {d}
⊢ d ∣ gcd x y
[PROOFSTEP]
rw [dvd_gcd_iff]
[GOAL]
case mk.intro.hab
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Ideal.span {d}
⊢ d ∣ x ∧ d ∣ y
[PROOFSTEP]
constructor
[GOAL]
case mk.intro.hab.left
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Ideal.span {d}
⊢ d ∣ x
[PROOFSTEP]
rw [← Ideal.mem_span_singleton, ← hd, Ideal.mem_span_pair]
[GOAL]
case mk.intro.hab.right
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Ideal.span {d}
⊢ d ∣ y
[PROOFSTEP]
rw [← Ideal.mem_span_singleton, ← hd, Ideal.mem_span_pair]
[GOAL]
case mk.intro.hab.left
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Ideal.span {d}
⊢ ∃ a b, a * x + b * y = x
[PROOFSTEP]
use 1, 0
[GOAL]
case h
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Ideal.span {d}
⊢ 1 * x + 0 * y = x
[PROOFSTEP]
rw [one_mul, zero_mul, add_zero]
[GOAL]
case mk.intro.hab.right
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Ideal.span {d}
⊢ ∃ a b, a * x + b * y = y
[PROOFSTEP]
use 0, 1
[GOAL]
case h
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Ideal.span {d}
⊢ 0 * x + 1 * y = y
[PROOFSTEP]
rw [one_mul, zero_mul, zero_add]
[GOAL]
case mk.intro.hba
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Ideal.span {d}
⊢ gcd x y ∣ d
[PROOFSTEP]
obtain ⟨r, s, rfl⟩ : ∃ r s, r * x + s * y = d := by rw [← Ideal.mem_span_pair, hd, Ideal.mem_span_singleton]
[GOAL]
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y d : R
hd : Ideal.span {x, y} = Ideal.span {d}
⊢ ∃ r s, r * x + s * y = d
[PROOFSTEP]
rw [← Ideal.mem_span_pair, hd, Ideal.mem_span_singleton]
[GOAL]
case mk.intro.hba.intro.intro
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y r s : R
hd : Ideal.span {x, y} = Ideal.span {r * x + s * y}
⊢ gcd x y ∣ r * x + s * y
[PROOFSTEP]
apply dvd_add
[GOAL]
case mk.intro.hba.intro.intro.h₁
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y r s : R
hd : Ideal.span {x, y} = Ideal.span {r * x + s * y}
⊢ gcd x y ∣ r * x
[PROOFSTEP]
apply dvd_mul_of_dvd_right
[GOAL]
case mk.intro.hba.intro.intro.h₂
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y r s : R
hd : Ideal.span {x, y} = Ideal.span {r * x + s * y}
⊢ gcd x y ∣ s * y
[PROOFSTEP]
apply dvd_mul_of_dvd_right
[GOAL]
case mk.intro.hba.intro.intro.h₁.h
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y r s : R
hd : Ideal.span {x, y} = Ideal.span {r * x + s * y}
⊢ gcd x y ∣ x
case mk.intro.hba.intro.intro.h₂.h
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y r s : R
hd : Ideal.span {x, y} = Ideal.span {r * x + s * y}
⊢ gcd x y ∣ y
[PROOFSTEP]
exacts [gcd_dvd_left x y, gcd_dvd_right x y]
[GOAL]
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
a b z : R
⊢ gcd a b ∣ z ↔ ∃ x y, z = a * x + b * y
[PROOFSTEP]
simp_rw [mul_comm a, mul_comm b, @eq_comm _ z, ← Ideal.mem_span_pair, ← span_gcd, Ideal.mem_span_singleton]
[GOAL]
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
a b : R
⊢ ∃ x y, gcd a b = a * x + b * y
[PROOFSTEP]
rw [← gcd_dvd_iff_exists]
[GOAL]
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
⊢ IsUnit (gcd x y) ↔ IsCoprime x y
[PROOFSTEP]
rw [IsCoprime, ← Ideal.mem_span_pair, ← span_gcd, ← span_singleton_eq_top, eq_top_iff_one]
[GOAL]
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
nonzero : ¬(x = 0 ∧ y = 0)
H : ∀ (z : R), z ∈ nonunits R → z ≠ 0 → z ∣ x → ¬z ∣ y
⊢ IsCoprime x y
[PROOFSTEP]
rw [← gcd_isUnit_iff]
[GOAL]
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
nonzero : ¬(x = 0 ∧ y = 0)
H : ∀ (z : R), z ∈ nonunits R → z ≠ 0 → z ∣ x → ¬z ∣ y
⊢ IsUnit (gcd x y)
[PROOFSTEP]
by_contra h
[GOAL]
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
nonzero : ¬(x = 0 ∧ y = 0)
H : ∀ (z : R), z ∈ nonunits R → z ≠ 0 → z ∣ x → ¬z ∣ y
h : ¬IsUnit (gcd x y)
⊢ False
[PROOFSTEP]
refine' H _ h _ (gcd_dvd_left _ _) (gcd_dvd_right _ _)
[GOAL]
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
nonzero : ¬(x = 0 ∧ y = 0)
H : ∀ (z : R), z ∈ nonunits R → z ≠ 0 → z ∣ x → ¬z ∣ y
h : ¬IsUnit (gcd x y)
⊢ gcd x y ≠ 0
[PROOFSTEP]
rwa [Ne, gcd_eq_zero_iff]
[GOAL]
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
h : Irreducible x
⊢ x ∣ y ∨ IsCoprime x y
[PROOFSTEP]
refine' or_iff_not_imp_left.2 fun h' => _
[GOAL]
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
h : Irreducible x
h' : ¬x ∣ y
⊢ IsCoprime x y
[PROOFSTEP]
apply isCoprime_of_dvd
[GOAL]
case nonzero
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
h : Irreducible x
h' : ¬x ∣ y
⊢ ¬(x = 0 ∧ y = 0)
[PROOFSTEP]
rintro ⟨rfl, rfl⟩
[GOAL]
case nonzero.intro
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
h : Irreducible 0
h' : ¬0 ∣ 0
⊢ False
[PROOFSTEP]
simp at h
[GOAL]
case H
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
h : Irreducible x
h' : ¬x ∣ y
⊢ ∀ (z : R), z ∈ nonunits R → z ≠ 0 → z ∣ x → ¬z ∣ y
[PROOFSTEP]
rintro z nu - ⟨w, rfl⟩ dy
[GOAL]
case H.intro
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
y z : R
nu : z ∈ nonunits R
w : R
h : Irreducible (z * w)
h' : ¬z * w ∣ y
dy : z ∣ y
⊢ False
[PROOFSTEP]
refine' h' (dvd_trans _ dy)
[GOAL]
case H.intro
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
y z : R
nu : z ∈ nonunits R
w : R
h : Irreducible (z * w)
h' : ¬z * w ∣ y
dy : z ∣ y
⊢ z * w ∣ z
[PROOFSTEP]
simpa using mul_dvd_mul_left z (isUnit_iff_dvd_one.1 <| (of_irreducible_mul h).resolve_left nu)
[GOAL]
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
nonzero : ¬(x = 0 ∧ y = 0)
H : ∀ (z : R), Irreducible z → z ∣ x → ¬z ∣ y
⊢ IsCoprime x y
[PROOFSTEP]
apply isCoprime_of_dvd x y nonzero
[GOAL]
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
nonzero : ¬(x = 0 ∧ y = 0)
H : ∀ (z : R), Irreducible z → z ∣ x → ¬z ∣ y
⊢ ∀ (z : R), z ∈ nonunits R → z ≠ 0 → z ∣ x → ¬z ∣ y
[PROOFSTEP]
intro z znu znz zx zy
[GOAL]
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
nonzero : ¬(x = 0 ∧ y = 0)
H : ∀ (z : R), Irreducible z → z ∣ x → ¬z ∣ y
z : R
znu : z ∈ nonunits R
znz : z ≠ 0
zx : z ∣ x
zy : z ∣ y
⊢ False
[PROOFSTEP]
obtain ⟨i, h1, h2⟩ := WfDvdMonoid.exists_irreducible_factor znu znz
[GOAL]
case intro.intro
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
nonzero : ¬(x = 0 ∧ y = 0)
H : ∀ (z : R), Irreducible z → z ∣ x → ¬z ∣ y
z : R
znu : z ∈ nonunits R
znz : z ≠ 0
zx : z ∣ x
zy : z ∣ y
i : R
h1 : Irreducible i
h2 : i ∣ z
⊢ False
[PROOFSTEP]
apply H i h1
[GOAL]
case intro.intro.a
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
nonzero : ¬(x = 0 ∧ y = 0)
H : ∀ (z : R), Irreducible z → z ∣ x → ¬z ∣ y
z : R
znu : z ∈ nonunits R
znz : z ≠ 0
zx : z ∣ x
zy : z ∣ y
i : R
h1 : Irreducible i
h2 : i ∣ z
⊢ i ∣ x
[PROOFSTEP]
apply dvd_trans h2
[GOAL]
case intro.intro.a
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
nonzero : ¬(x = 0 ∧ y = 0)
H : ∀ (z : R), Irreducible z → z ∣ x → ¬z ∣ y
z : R
znu : z ∈ nonunits R
znz : z ≠ 0
zx : z ∣ x
zy : z ∣ y
i : R
h1 : Irreducible i
h2 : i ∣ z
⊢ z ∣ x
[PROOFSTEP]
assumption
[GOAL]
case intro.intro.a
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
nonzero : ¬(x = 0 ∧ y = 0)
H : ∀ (z : R), Irreducible z → z ∣ x → ¬z ∣ y
z : R
znu : z ∈ nonunits R
znz : z ≠ 0
zx : z ∣ x
zy : z ∣ y
i : R
h1 : Irreducible i
h2 : i ∣ z
⊢ i ∣ y
[PROOFSTEP]
apply dvd_trans h2
[GOAL]
case intro.intro.a
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
x y : R
nonzero : ¬(x = 0 ∧ y = 0)
H : ∀ (z : R), Irreducible z → z ∣ x → ¬z ∣ y
z : R
znu : z ∈ nonunits R
znz : z ≠ 0
zx : z ∣ x
zy : z ∣ y
i : R
h1 : Irreducible i
h2 : i ∣ z
⊢ z ∣ y
[PROOFSTEP]
assumption
[GOAL]
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
p n : R
pp : Irreducible p
⊢ IsCoprime p n ↔ ¬p ∣ n
[PROOFSTEP]
constructor
[GOAL]
case mp
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
p n : R
pp : Irreducible p
⊢ IsCoprime p n → ¬p ∣ n
[PROOFSTEP]
intro co H
[GOAL]
case mp
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
p n : R
pp : Irreducible p
co : IsCoprime p n
H : p ∣ n
⊢ False
[PROOFSTEP]
apply pp.not_unit
[GOAL]
case mp
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
p n : R
pp : Irreducible p
co : IsCoprime p n
H : p ∣ n
⊢ IsUnit p
[PROOFSTEP]
rw [isUnit_iff_dvd_one]
[GOAL]
case mp
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
p n : R
pp : Irreducible p
co : IsCoprime p n
H : p ∣ n
⊢ p ∣ 1
[PROOFSTEP]
apply IsCoprime.dvd_of_dvd_mul_left co
[GOAL]
case mp
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
p n : R
pp : Irreducible p
co : IsCoprime p n
H : p ∣ n
⊢ p ∣ n * 1
[PROOFSTEP]
rw [mul_one n]
[GOAL]
case mp
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
p n : R
pp : Irreducible p
co : IsCoprime p n
H : p ∣ n
⊢ p ∣ n
[PROOFSTEP]
exact H
[GOAL]
case mpr
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
p n : R
pp : Irreducible p
⊢ ¬p ∣ n → IsCoprime p n
[PROOFSTEP]
intro nd
[GOAL]
case mpr
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
p n : R
pp : Irreducible p
nd : ¬p ∣ n
⊢ IsCoprime p n
[PROOFSTEP]
apply isCoprime_of_irreducible_dvd
[GOAL]
case mpr.nonzero
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
p n : R
pp : Irreducible p
nd : ¬p ∣ n
⊢ ¬(p = 0 ∧ n = 0)
[PROOFSTEP]
rintro ⟨hp, -⟩
[GOAL]
case mpr.nonzero.intro
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
p n : R
pp : Irreducible p
nd : ¬p ∣ n
hp : p = 0
⊢ False
[PROOFSTEP]
exact pp.ne_zero hp
[GOAL]
case mpr.H
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
p n : R
pp : Irreducible p
nd : ¬p ∣ n
⊢ ∀ (z : R), Irreducible z → z ∣ p → ¬z ∣ n
[PROOFSTEP]
rintro z zi zp zn
[GOAL]
case mpr.H
R : Type u
M : Type v
inst✝³ : CommRing R
inst✝² : IsDomain R
inst✝¹ : IsPrincipalIdealRing R
inst✝ : GCDMonoid R
p n : R
pp : Irreducible p
nd : ¬p ∣ n
z : R
zi : Irreducible z
zp : z ∣ p
zn : z ∣ n
⊢ False
[PROOFSTEP]
exact nd ((zi.associated_of_dvd pp zp).symm.dvd.trans zn)
[GOAL]
R : Type u
M : Type v
inst✝ : CommRing R
⊢ nonPrincipals R = ∅ ↔ IsPrincipalIdealRing R
[PROOFSTEP]
simp [Set.eq_empty_iff_forall_not_mem, isPrincipalIdealRing_iff, nonPrincipals_def]
[GOAL]
R : Type u
M : Type v
inst✝ : CommRing R
c : Set (Ideal R)
hs : c ⊆ nonPrincipals R
hchain : IsChain (fun x x_1 => x ≤ x_1) c
K : Ideal R
hKmem : K ∈ c
⊢ ∃ I, I ∈ nonPrincipals R ∧ ∀ (J : Ideal R), J ∈ c → J ≤ I
[PROOFSTEP]
refine' ⟨sSup c, _, fun J hJ => le_sSup hJ⟩
[GOAL]
R : Type u
M : Type v
inst✝ : CommRing R
c : Set (Ideal R)
hs : c ⊆ nonPrincipals R
hchain : IsChain (fun x x_1 => x ≤ x_1) c
K : Ideal R
hKmem : K ∈ c
⊢ sSup c ∈ nonPrincipals R
[PROOFSTEP]
rintro ⟨x, hx⟩
[GOAL]
case mk.intro
R : Type u
M : Type v
inst✝ : CommRing R
c : Set (Ideal R)
hs : c ⊆ nonPrincipals R
hchain : IsChain (fun x x_1 => x ≤ x_1) c
K : Ideal R
hKmem : K ∈ c
x : R
hx : sSup c = Submodule.span R {x}
⊢ False
[PROOFSTEP]
have hxmem : x ∈ sSup c := hx.symm ▸ Submodule.mem_span_singleton_self x
[GOAL]
case mk.intro
R : Type u
M : Type v
inst✝ : CommRing R
c : Set (Ideal R)
hs : c ⊆ nonPrincipals R
hchain : IsChain (fun x x_1 => x ≤ x_1) c
K : Ideal R
hKmem : K ∈ c
x : R
hx : sSup c = Submodule.span R {x}
hxmem : x ∈ sSup c
⊢ False
[PROOFSTEP]
obtain ⟨J, hJc, hxJ⟩ := (Submodule.mem_sSup_of_directed ⟨K, hKmem⟩ hchain.directedOn).1 hxmem
[GOAL]
case mk.intro.intro.intro
R : Type u
M : Type v
inst✝ : CommRing R
c : Set (Ideal R)
hs : c ⊆ nonPrincipals R
hchain : IsChain (fun x x_1 => x ≤ x_1) c
K : Ideal R
hKmem : K ∈ c
x : R
hx : sSup c = Submodule.span R {x}
hxmem : x ∈ sSup c
J : Submodule R R
hJc : J ∈ c
hxJ : x ∈ J
⊢ False
[PROOFSTEP]
have hsSupJ : sSup c = J := le_antisymm (by simp [hx, Ideal.span_le, hxJ]) (le_sSup hJc)
[GOAL]
R : Type u
M : Type v
inst✝ : CommRing R
c : Set (Ideal R)
hs : c ⊆ nonPrincipals R
hchain : IsChain (fun x x_1 => x ≤ x_1) c
K : Ideal R
hKmem : K ∈ c
x : R
hx : sSup c = Submodule.span R {x}
hxmem : x ∈ sSup c
J : Submodule R R
hJc : J ∈ c
hxJ : x ∈ J
⊢ sSup c ≤ J
[PROOFSTEP]
simp [hx, Ideal.span_le, hxJ]
[GOAL]
case mk.intro.intro.intro
R : Type u
M : Type v
inst✝ : CommRing R
c : Set (Ideal R)
hs : c ⊆ nonPrincipals R
hchain : IsChain (fun x x_1 => x ≤ x_1) c
K : Ideal R
hKmem : K ∈ c
x : R
hx : sSup c = Submodule.span R {x}
hxmem : x ∈ sSup c
J : Submodule R R
hJc : J ∈ c
hxJ : x ∈ J
hsSupJ : sSup c = J
⊢ False
[PROOFSTEP]
specialize hs hJc
[GOAL]
case mk.intro.intro.intro
R : Type u
M : Type v
inst✝ : CommRing R
c : Set (Ideal R)
hchain : IsChain (fun x x_1 => x ≤ x_1) c
K : Ideal R
hKmem : K ∈ c
x : R
hx : sSup c = Submodule.span R {x}
hxmem : x ∈ sSup c
J : Submodule R R
hJc : J ∈ c
hxJ : x ∈ J
hsSupJ : sSup c = J
hs : J ∈ nonPrincipals R
⊢ False
[PROOFSTEP]
rw [← hsSupJ, hx, nonPrincipals_def] at hs
[GOAL]
case mk.intro.intro.intro
R : Type u
M : Type v
inst✝ : CommRing R
c : Set (Ideal R)
hchain : IsChain (fun x x_1 => x ≤ x_1) c
K : Ideal R
hKmem : K ∈ c
x : R
hx : sSup c = Submodule.span R {x}
hxmem : x ∈ sSup c
J : Submodule R R
hJc : J ∈ c
hxJ : x ∈ J
hsSupJ : sSup c = J
hs : ¬IsPrincipal (Submodule.span R {x})
⊢ False
[PROOFSTEP]
exact hs ⟨⟨x, rfl⟩⟩
[GOAL]
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
⊢ IsPrincipalIdealRing R
[PROOFSTEP]
rw [← nonPrincipals_eq_empty_iff, Set.eq_empty_iff_forall_not_mem]
[GOAL]
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
⊢ ∀ (x : Ideal R), ¬x ∈ nonPrincipals R
[PROOFSTEP]
intro J hJ
[GOAL]
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
⊢ False
[PROOFSTEP]
obtain ⟨I, Ibad, -, Imax⟩ := zorn_nonempty_partialOrder₀ (nonPrincipals R) nonPrincipals_zorn _ hJ
[GOAL]
case intro.intro.intro
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
⊢ False
[PROOFSTEP]
have Imax' : ∀ {J}, I < J → J.IsPrincipal := by
intro J hJ
by_contra He
exact hJ.ne (Imax _ ((nonPrincipals_def R).2 He) hJ.le).symm
[GOAL]
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
⊢ ∀ {J : Ideal R}, I < J → IsPrincipal J
[PROOFSTEP]
intro J hJ
[GOAL]
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J✝ : Ideal R
hJ✝ : J✝ ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
J : Ideal R
hJ : I < J
⊢ IsPrincipal J
[PROOFSTEP]
by_contra He
[GOAL]
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J✝ : Ideal R
hJ✝ : J✝ ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
J : Ideal R
hJ : I < J
He : ¬IsPrincipal J
⊢ False
[PROOFSTEP]
exact hJ.ne (Imax _ ((nonPrincipals_def R).2 He) hJ.le).symm
[GOAL]
case intro.intro.intro
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
⊢ False
[PROOFSTEP]
by_cases hI1 : I = ⊤
[GOAL]
case pos
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : I = ⊤
⊢ False
[PROOFSTEP]
subst hI1
[GOAL]
case pos
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
Ibad : ⊤ ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → ⊤ ≤ z → z = ⊤
Imax' : ∀ {J : Ideal R}, ⊤ < J → IsPrincipal J
⊢ False
[PROOFSTEP]
exact Ibad top_isPrincipal
[GOAL]
case neg
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
⊢ False
[PROOFSTEP]
refine' Ibad (H I ⟨hI1, fun {x y} hxy => or_iff_not_imp_right.mpr fun hy => _⟩)
[GOAL]
case neg
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x y : R
hxy : x * y ∈ I
hy : ¬y ∈ I
⊢ x ∈ I
[PROOFSTEP]
obtain ⟨a, ha⟩ : (I ⊔ span { y }).IsPrincipal :=
Imax'
(left_lt_sup.mpr (mt I.span_singleton_le_iff_mem.mp hy))
-- Then `x ∈ I.colon (span {y})`, which is equal to `I` if it's not principal.
[GOAL]
case neg.mk.intro
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x y : R
hxy : x * y ∈ I
hy : ¬y ∈ I
a : R
ha : I ⊔ Ideal.span {y} = Submodule.span R {a}
⊢ x ∈ I
[PROOFSTEP]
suffices He : ¬(I.colon (span { y })).IsPrincipal
[GOAL]
case neg.mk.intro
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x y : R
hxy : x * y ∈ I
hy : ¬y ∈ I
a : R
ha : I ⊔ Ideal.span {y} = Submodule.span R {a}
He : ¬IsPrincipal (colon I (Ideal.span {y}))
⊢ x ∈ I
[PROOFSTEP]
rw [← Imax _ ((nonPrincipals_def R).2 He) fun a ha => Ideal.mem_colon_singleton.2 (mul_mem_right _ _ ha)]
[GOAL]
case neg.mk.intro
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x y : R
hxy : x * y ∈ I
hy : ¬y ∈ I
a : R
ha : I ⊔ Ideal.span {y} = Submodule.span R {a}
He : ¬IsPrincipal (colon I (Ideal.span {y}))
⊢ x ∈ colon I (Ideal.span {y})
[PROOFSTEP]
exact Ideal.mem_colon_singleton.2 hxy
[GOAL]
case He
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x y : R
hxy : x * y ∈ I
hy : ¬y ∈ I
a : R
ha : I ⊔ Ideal.span {y} = Submodule.span R {a}
⊢ ¬IsPrincipal (colon I (Ideal.span {y}))
[PROOFSTEP]
rintro
⟨b, hb⟩
-- We will show `I` is generated by `a * b`.
[GOAL]
case He.mk.intro
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x y : R
hxy : x * y ∈ I
hy : ¬y ∈ I
a : R
ha : I ⊔ Ideal.span {y} = Submodule.span R {a}
b : R
hb : colon I (Ideal.span {y}) = Submodule.span R {b}
⊢ False
[PROOFSTEP]
refine (nonPrincipals_def _).1 Ibad ⟨a * b, ?_⟩
[GOAL]
case He.mk.intro
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x y : R
hxy : x * y ∈ I
hy : ¬y ∈ I
a : R
ha : I ⊔ Ideal.span {y} = Submodule.span R {a}
b : R
hb : colon I (Ideal.span {y}) = Submodule.span R {b}
⊢ I = Submodule.span R {a * b}
[PROOFSTEP]
refine' le_antisymm (α := Ideal R) (fun i hi => _) <| (span_singleton_mul_span_singleton a b).ge.trans _
[GOAL]
case He.mk.intro.refine'_1
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x y : R
hxy : x * y ∈ I
hy : ¬y ∈ I
a : R
ha : I ⊔ Ideal.span {y} = Submodule.span R {a}
b : R
hb : colon I (Ideal.span {y}) = Submodule.span R {b}
i : R
hi : i ∈ I
⊢ i ∈ Submodule.span R {a * b}
[PROOFSTEP]
have hisup : i ∈ I ⊔ span { y } := Ideal.mem_sup_left hi
[GOAL]
case He.mk.intro.refine'_1
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x y : R
hxy : x * y ∈ I
hy : ¬y ∈ I
a : R
ha : I ⊔ Ideal.span {y} = Submodule.span R {a}
b : R
hb : colon I (Ideal.span {y}) = Submodule.span R {b}
i : R
hi : i ∈ I
hisup : i ∈ I ⊔ Ideal.span {y}
⊢ i ∈ Submodule.span R {a * b}
[PROOFSTEP]
have : y ∈ I ⊔ span { y } := Ideal.mem_sup_right (Ideal.mem_span_singleton_self y)
[GOAL]
case He.mk.intro.refine'_1
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x y : R
hxy : x * y ∈ I
hy : ¬y ∈ I
a : R
ha : I ⊔ Ideal.span {y} = Submodule.span R {a}
b : R
hb : colon I (Ideal.span {y}) = Submodule.span R {b}
i : R
hi : i ∈ I
hisup : i ∈ I ⊔ Ideal.span {y}
this : y ∈ I ⊔ Ideal.span {y}
⊢ i ∈ Submodule.span R {a * b}
[PROOFSTEP]
erw [ha, mem_span_singleton'] at hisup this
[GOAL]
case He.mk.intro.refine'_1
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x y : R
hxy : x * y ∈ I
hy : ¬y ∈ I
a : R
ha : I ⊔ Ideal.span {y} = Submodule.span R {a}
b : R
hb : colon I (Ideal.span {y}) = Submodule.span R {b}
i : R
hi : i ∈ I
hisup : ∃ a_1, a_1 * a = i
this : ∃ a_1, a_1 * a = y
⊢ i ∈ Submodule.span R {a * b}
[PROOFSTEP]
obtain ⟨v, rfl⟩ := this
[GOAL]
case He.mk.intro.refine'_1.intro
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x a b i : R
hi : i ∈ I
hisup : ∃ a_1, a_1 * a = i
v : R
hxy : x * (v * a) ∈ I
hy : ¬v * a ∈ I
ha : I ⊔ Ideal.span {v * a} = Submodule.span R {a}
hb : colon I (Ideal.span {v * a}) = Submodule.span R {b}
⊢ i ∈ Submodule.span R {a * b}
[PROOFSTEP]
obtain ⟨u, rfl⟩ := hisup
[GOAL]
case He.mk.intro.refine'_1.intro.intro
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x a b v : R
hxy : x * (v * a) ∈ I
hy : ¬v * a ∈ I
ha : I ⊔ Ideal.span {v * a} = Submodule.span R {a}
hb : colon I (Ideal.span {v * a}) = Submodule.span R {b}
u : R
hi : u * a ∈ I
⊢ u * a ∈ Submodule.span R {a * b}
[PROOFSTEP]
have hucolon : u ∈ I.colon (span {v * a}) :=
by
rw [Ideal.mem_colon_singleton, mul_comm v, ← mul_assoc]
exact mul_mem_right _ _ hi
[GOAL]
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x a b v : R
hxy : x * (v * a) ∈ I
hy : ¬v * a ∈ I
ha : I ⊔ Ideal.span {v * a} = Submodule.span R {a}
hb : colon I (Ideal.span {v * a}) = Submodule.span R {b}
u : R
hi : u * a ∈ I
⊢ u ∈ colon I (Ideal.span {v * a})
[PROOFSTEP]
rw [Ideal.mem_colon_singleton, mul_comm v, ← mul_assoc]
[GOAL]
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x a b v : R
hxy : x * (v * a) ∈ I
hy : ¬v * a ∈ I
ha : I ⊔ Ideal.span {v * a} = Submodule.span R {a}
hb : colon I (Ideal.span {v * a}) = Submodule.span R {b}
u : R
hi : u * a ∈ I
⊢ u * a * v ∈ I
[PROOFSTEP]
exact mul_mem_right _ _ hi
[GOAL]
case He.mk.intro.refine'_1.intro.intro
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x a b v : R
hxy : x * (v * a) ∈ I
hy : ¬v * a ∈ I
ha : I ⊔ Ideal.span {v * a} = Submodule.span R {a}
hb : colon I (Ideal.span {v * a}) = Submodule.span R {b}
u : R
hi : u * a ∈ I
hucolon : u ∈ colon I (Ideal.span {v * a})
⊢ u * a ∈ Submodule.span R {a * b}
[PROOFSTEP]
erw [hb, mem_span_singleton'] at hucolon
[GOAL]
case He.mk.intro.refine'_1.intro.intro
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x a b v : R
hxy : x * (v * a) ∈ I
hy : ¬v * a ∈ I
ha : I ⊔ Ideal.span {v * a} = Submodule.span R {a}
hb : colon I (Ideal.span {v * a}) = Submodule.span R {b}
u : R
hi : u * a ∈ I
hucolon : ∃ a, a * b = u
⊢ u * a ∈ Submodule.span R {a * b}
[PROOFSTEP]
obtain ⟨z, rfl⟩ := hucolon
[GOAL]
case He.mk.intro.refine'_1.intro.intro.intro
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x a b v : R
hxy : x * (v * a) ∈ I
hy : ¬v * a ∈ I
ha : I ⊔ Ideal.span {v * a} = Submodule.span R {a}
hb : colon I (Ideal.span {v * a}) = Submodule.span R {b}
z : R
hi : z * b * a ∈ I
⊢ z * b * a ∈ Submodule.span R {a * b}
[PROOFSTEP]
exact mem_span_singleton'.2 ⟨z, by ring⟩
[GOAL]
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x a b v : R
hxy : x * (v * a) ∈ I
hy : ¬v * a ∈ I
ha : I ⊔ Ideal.span {v * a} = Submodule.span R {a}
hb : colon I (Ideal.span {v * a}) = Submodule.span R {b}
z : R
hi : z * b * a ∈ I
⊢ z * (a * b) = z * b * a
[PROOFSTEP]
ring
[GOAL]
case He.mk.intro.refine'_2
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x y : R
hxy : x * y ∈ I
hy : ¬y ∈ I
a : R
ha : I ⊔ Ideal.span {y} = Submodule.span R {a}
b : R
hb : colon I (Ideal.span {y}) = Submodule.span R {b}
⊢ Ideal.span {a} * Ideal.span {b} ≤ I
[PROOFSTEP]
rw [← Ideal.submodule_span_eq, ← ha, Ideal.sup_mul, sup_le_iff, span_singleton_mul_span_singleton, mul_comm y,
Ideal.span_singleton_le_iff_mem]
[GOAL]
case He.mk.intro.refine'_2
R : Type u
M : Type v
inst✝ : CommRing R
H : ∀ (P : Ideal R), IsPrime P → IsPrincipal P
J : Ideal R
hJ : J ∈ nonPrincipals R
I : Ideal R
Ibad : I ∈ nonPrincipals R
Imax : ∀ (z : Ideal R), z ∈ nonPrincipals R → I ≤ z → z = I
Imax' : ∀ {J : Ideal R}, I < J → IsPrincipal J
hI1 : ¬I = ⊤
x y : R
hxy : x * y ∈ I
hy : ¬y ∈ I
a : R
ha : I ⊔ Ideal.span {y} = Submodule.span R {a}
b : R
hb : colon I (Ideal.span {y}) = Submodule.span R {b}
⊢ I * Ideal.span {b} ≤ I ∧ b * y ∈ I
[PROOFSTEP]
exact ⟨mul_le_right, Ideal.mem_colon_singleton.1 <| hb.symm ▸ Ideal.mem_span_singleton_self b⟩
|
test_that("succesfully wraps str_replace_all", {
expect_equal(str_remove_all("abababa", "ba"), "a")
expect_equal(str_remove("abababa", "ba"), "ababa")
})
|
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE FlexibleContexts #-}
module HVX.Primitives
-- TODO(mh): Currently primitives that should generate implicit constraints are
-- unsupported. They should be supported shortly. (2014-06-04)
( apply
, hadd
, (+~)
, hmul
, (*~)
, habs
, neg
, hlog
, hexp
, logsumexp
, hmax
, hmin
, norm
, berhu
, huber
, quadform
, powBaseP0
, powBaseP01
, powBaseP1
, powBaseP1InfEven
, powBaseP1InfNotInt
) where
import Numeric.LinearAlgebra
import Numeric.LinearAlgebra.HMatrix (eigSH')
import HVX.Internal.DCP
import HVX.Internal.Matrix
import HVX.Internal.Primitives
import HVX.Internal.Util
apply :: (ApplyVex vf mf ve me ~ vr, ValidVex vr)
=> Fun vf mf -> Expr ve me -> Expr vr (ApplyMon mf me)
apply = EFun
hadd :: (AddVex v1 v2 ~ v3, ValidVex v3)
=> Expr v1 m1 -> Expr v2 m2 -> Expr v3 (AddMon m1 m2)
hadd = EAdd
infixl 6 +~
(+~) :: (AddVex v1 v2 ~ v3, ValidVex v3)
=> Expr v1 m1 -> Expr v2 m2 -> Expr v3 (AddMon m1 m2)
(+~) = hadd
-- Constructors that enforce DCP constraints.
hmul :: (ApplyVex 'Affine 'Nonmon v1 m1 ~ v2, ValidVex v2)
=> Expr 'Affine 'Const -> Expr v1 m1 -> Expr v2 (ApplyMon 'Nonmon m1)
hmul (EConst a) e = apply (Mul a) e
hmul _ _ = error "the left argument of a multiply must be a constant"
infixl 7 *~
(*~) :: (ApplyVex 'Affine 'Nonmon v1 m1 ~ v2, ValidVex v2)
=> Expr 'Affine 'Const -> Expr v1 m1 -> Expr v2 (ApplyMon 'Nonmon m1)
(*~) (EConst a) e = apply (Mul a) e
(*~) _ _ = error "the left argument of a multiply must be a constant"
habs :: (ApplyVex 'Convex 'Nonmon v1 m1 ~ v2, ValidVex v2)
=> Expr v1 m1 -> Expr v2 (ApplyMon 'Nonmon m1)
habs = apply Abs
neg :: (ApplyVex 'Affine 'Noninc v1 m1 ~ v2, ValidVex v2)
=> Expr v1 m1 -> Expr v2 (ApplyMon 'Noninc m1)
neg = apply Neg
hlog :: (ApplyVex 'Concave 'Nondec v1 m1 ~ v2, ValidVex v2)
=> Expr v1 m1 -> Expr v2 (ApplyMon 'Nondec m1)
hlog = apply Log
hexp :: (ApplyVex 'Convex 'Nondec v1 m1 ~ v2, ValidVex v2)
=> Expr v1 m1 -> Expr v2 (ApplyMon 'Nondec m1)
hexp e = apply Exp e
logsumexp :: (ApplyVex 'Convex 'Nondec v1 m1 ~ v2, ValidVex v2)
=> Expr v1 m1 -> Expr v2 (ApplyMon 'Nondec m1)
logsumexp = apply LogSumExp
hmax :: (ApplyVex 'Convex 'Nondec v1 m1 ~ v2, ValidVex v2)
=> Expr v1 m1 -> Expr v2 (ApplyMon 'Nondec m1)
hmax = apply Max
hmin :: (ApplyVex 'Concave 'Nondec v1 m1 ~ v2, ValidVex v2)
=> Expr v1 m1 -> Expr v2 (ApplyMon 'Nondec m1)
hmin = apply Min
norm :: (ApplyVex 'Convex 'Nonmon v1 m1 ~ v2, ValidVex v2)
=> Double -> Expr v1 m1 -> Expr v2 (ApplyMon 'Nonmon m1)
norm p
| p == infinity = error "Internal: Infinity norm should become max . abs."
| 1 <= p = apply (Norm p)
| otherwise = error "Internal: Norm only supports p >= 1."
berhu :: (ApplyVex 'Convex 'Nonmon v1 m1 ~ v2, ValidVex v2)
=> Double -> Expr v1 m1 -> Expr v2 (ApplyMon 'Nonmon m1)
berhu m
| 0 < m = apply (Berhu m)
| otherwise = error "Internal: Berhu only supports m >= 0."
huber :: (ApplyVex 'Convex 'Nonmon v1 m1 ~ v2, ValidVex v2)
=> Double -> Expr v1 m1 -> Expr v2 (ApplyMon 'Nonmon m1)
huber m
| 0 < m = apply (Huber m)
| otherwise = error "Internal: Huber only supports m >= 0."
quadform :: (ApplyVex 'Convex 'Nonmon 'Affine m1 ~ v2, ValidVex v2)
=> Expr 'Affine 'Const -> Expr 'Affine m1 -> Expr v2 (ApplyMon 'Nonmon m1)
quadform (EConst a) e
| rows a == cols a
&& fpequalsMat a (tr a)
&& 0 <= (maxElement.fst.eigSH' $ a) = apply (Quadform a) e -- we have checked that the matrix being passed in is Hermitian, so it's safe to use eigSH'.
| otherwise = error "Matrices in quadratic forms must be positive semidefinite."
quadform _ _ = error "The matrix sandwiched by the quadratic form must be constant."
powBaseP0 :: (ApplyVex 'Affine 'Const v1 m1 ~ v2, ValidVex v2)
=> Double -> Expr v1 m1 -> Expr v2 (ApplyMon 'Const m1)
powBaseP0 p
| p `fpequals` 0 = apply PowBaseP0
| otherwise = error "Internal: PowBaseP0 only supports p == 0."
powBaseP01 :: (ApplyVex 'Concave 'Nondec v1 m1 ~ v2, ValidVex v2)
=> Double -> Expr v1 m1 -> Expr v2 (ApplyMon 'Nondec m1)
powBaseP01 p
| 0 < p && p < 1 = apply (PowBaseP01 p)
| otherwise = error "Internal: PowBaseP01 only supports 0 < p < 1."
powBaseP1 :: (ApplyVex 'Affine 'Nondec v1 m1 ~ v2, ValidVex v2)
=> Double -> Expr v1 m1 -> Expr v2 (ApplyMon 'Nondec m1)
powBaseP1 p
| p `fpequals` 1 = apply PowBaseP1
| otherwise = error "Internal: PowBaseP1 only supports p == 1."
powBaseP1InfEven :: (ApplyVex 'Convex 'Nonmon v1 m1 ~ v2, ValidVex v2)
=> Double -> Expr v1 m1 -> Expr v2 (ApplyMon 'Nonmon m1)
powBaseP1InfEven p
| 1 < p && isInteger p && even intP = apply (PowBaseP1InfEven intP)
| otherwise = error "Internal: PowBaseP1InfEven only supports even p > 1."
where intP = round p :: Integer
powBaseP1InfNotInt :: (ApplyVex 'Convex 'Nondec v1 m1 ~ v2, ValidVex v2)
=> Double -> Expr v1 m1 -> Expr v2 (ApplyMon 'Nondec m1)
powBaseP1InfNotInt p
| 1 < p && not (isInteger p) = apply (PowBaseP1InfNotInt p)
| otherwise = error "Internal: PowBaseP1InfNotInt only supports non integral p > 1."
|
/-
Copyright (c) 2020 Kevin Lacker. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Kevin Lacker
-/
import data.nat.digits
/-!
# IMO 1960 Q1
Determine all three-digit numbers $N$ having the property that $N$ is divisible by 11, and
$\dfrac{N}{11}$ is equal to the sum of the squares of the digits of $N$.
Since Lean doesn't have a way to directly express problem statements of the form
"Determine all X satisfying Y", we express two predicates where proving that one implies the
other is equivalent to solving the problem. A human solver also has to discover the
second predicate.
The strategy here is roughly brute force, checking the possible multiples of 11.
-/
open nat
def sum_of_squares (L : list ℕ) : ℕ := (L.map (λ x, x * x)).sum
def problem_predicate (n : ℕ) : Prop :=
(nat.digits 10 n).length = 3 ∧ 11 ∣ n ∧ n / 11 = sum_of_squares (nat.digits 10 n)
def solution_predicate (n : ℕ) : Prop := n = 550 ∨ n = 803
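-- A quick arithmetic check of the two numbers named in `solution_predicate`
-- (a sanity check of the claim, not used by the formal proofs below):
--   550 = 11 * 50 and 5^2 + 5^2 + 0^2 = 50,
--   803 = 11 * 73 and 8^2 + 0^2 + 3^2 = 73.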
/-
Proving that three digit numbers are the ones in [100, 1000).
-/
lemma not_zero {n : ℕ} (h1 : problem_predicate n) : n ≠ 0 :=
have h2 : nat.digits 10 n ≠ list.nil, from list.ne_nil_of_length_eq_succ h1.left,
digits_ne_nil_iff_ne_zero.mp h2
lemma ge_100 {n : ℕ} (h1 : problem_predicate n) : 100 ≤ n :=
have h2 : 10^3 ≤ 10 * n, begin
rw ← h1.left,
refine nat.base_pow_length_digits_le 10 n _ (not_zero h1),
simp,
end,
by linarith
lemma lt_1000 {n : ℕ} (h1 : problem_predicate n) : n < 1000 :=
have h2 : n < 10^3, begin
rw ← h1.left,
refine nat.lt_base_pow_length_digits _,
simp,
end,
by linarith
/-
We do an exhaustive search to show that all results are covered by `solution_predicate`.
-/
def search_up_to (c n : ℕ) : Prop :=
n = c * 11 ∧ ∀ m : ℕ, m < n → problem_predicate m → solution_predicate m
lemma search_up_to_start : search_up_to 9 99 := ⟨rfl, λ n h p, by linarith [ge_100 p]⟩
lemma search_up_to_step {c n} (H : search_up_to c n)
{c' n'} (ec : c + 1 = c') (en : n + 11 = n')
{l} (el : nat.digits 10 n = l)
(H' : c = sum_of_squares l → c = 50 ∨ c = 73) :
search_up_to c' n' :=
begin
subst ec, subst en, subst el,
obtain ⟨rfl, H⟩ := H,
refine ⟨by ring, λ m l p, _⟩,
obtain ⟨h₁, ⟨m, rfl⟩, h₂⟩ := id p,
by_cases h : 11 * m < c * 11, { exact H _ h p },
have : m = c, {linarith}, subst m,
rw [nat.mul_div_cancel_left _ (by norm_num : 11 > 0), mul_comm] at h₂,
refine (H' h₂).imp _ _; {rintro rfl, norm_num}
end
lemma search_up_to_end {c} (H : search_up_to c 1001)
{n : ℕ} (ppn : problem_predicate n) : solution_predicate n :=
H.2 _ (by linarith [lt_1000 ppn]) ppn
lemma right_direction {n : ℕ} : problem_predicate n → solution_predicate n :=
begin
have := search_up_to_start,
iterate 82 {
replace := search_up_to_step this (by norm_num1; refl) (by norm_num1; refl)
(by norm_num1; refl) dec_trivial },
exact search_up_to_end this
end
/-
Now we just need to prove the equivalence, for the precise problem statement.
-/
lemma left_direction (n : ℕ) (spn : solution_predicate n) : problem_predicate n :=
by rcases spn with (rfl | rfl); norm_num [problem_predicate, sum_of_squares]
theorem imo1960_q1 (n : ℕ) : problem_predicate n ↔ solution_predicate n :=
⟨right_direction, left_direction n⟩
|
lemma local_lipschitz_compose1: assumes ll: "local_lipschitz (g ` T) X (\<lambda>t. f t)" assumes g: "continuous_on T g" shows "local_lipschitz T X (\<lambda>t. f (g t))" |
text \<open>
Created 2021-09-10, by Anand Shetler
\<close>
theory CommAnd
imports Main
begin
text\<open> Apply style \<close>
lemma lem_k_1 : "(p \<and> q) \<longrightarrow> (q \<and> p)"
apply (rule impI)
apply (rule conjI)
apply (erule conjE)
apply assumption
apply (erule conjE)
apply assumption
done
text\<open> Isar style, the proof illustrates the use of intermediate facts,
once more with keywords 'from ... have ...' \<close>
lemma lem_l_1 : "(p \<and> q) \<longrightarrow> (q \<and> p)"
proof
assume a : "p \<and> q"
from a have b : "p" by (rule conjE) (* INTERMEDIATE fact *)
from a have c : "q" by (rule conjE) (* INTERMEDIATE fact *)
from c b show "q \<and> p" by (rule conjI) (* we must write 'from c b' not 'from b c' *)
qed
text\<open> Isar style, lem_l_1 again, with more abbreviations \<close>
lemma lem_l_2 : "(p \<and> q) \<longrightarrow> (q \<and> p)"
proof
assume a : "p \<and> q"
from a have b : "p" ..
from a have c : "q" ..
from c b show "(q \<and> p)" ..
qed
text\<open> Isar style, lem_l_1 again, with nested proofs \<close>
lemma lem_l_3 : "(p \<and> q) \<longrightarrow> (q \<and> p)"
proof
assume a : "p \<and> q"
from a show "q \<and> p"
proof
assume b : "p"
assume c : "q"
from c b show "q \<and> p" by (rule conjI)
qed
qed
end |
import category_theory.category.default
universes v u -- The order in this declaration matters: v often needs to be explicitly specified while u often can be omitted
namespace category_theory
variables (C : Type u) [category.{v} C]
/-
# Category world
## Level 7: Composition and monomorphisms
Now we show that if the composite of two morphisms is a monomorphism, then so is the first morphism.-/
/- Lemma
If $$f : X ⟶ Y$$ and $$g : Y ⟶ Z$$ are morphisms such that $$f ≫ g : X ⟶ Z$$ is a monomorphism,
then $$f$$ is a monomorphism.
-/
lemma mono_of_mono' {X Y Z : C} (f : X ⟶ Y) (g : Y ⟶ Z) [mono (f ≫ g)] : mono f :=
begin
split,
intros Z h l hyp,
rw ← cancel_mono (f ≫ g),
rw ← category.assoc,
rw hyp,
rw category.assoc,
end
end category_theory |
import algebra.group order.basic ..logic
.rev .form .dnf .reify ..expr_of_unsat
namespace int
open tactic
run_cmd mk_simp_attr `sugar
attribute [sugar]
not_le not_lt
int.lt_iff_add_one_le
or_false false_or
and_true true_and
ge gt mul_add add_mul
mul_comm sub_eq_add_neg
classical.imp_iff_not_or
classical.iff_iff
meta def desugar := `[try {simp only with sugar}]
lemma uniclo_of_unsat_clausify (m : nat) (p : form) :
clauses.unsat (dnf (¬*p)) → uniclo p (λ x, 0) m | h1 :=
begin
apply uniclo_of_valid,
apply valid_of_unsat_not,
apply unsat_of_clauses_unsat,
exact h1
end
/- Given a p : form, return the expr of a term t : p.uniclo -/
meta def expr_of_uniclo (m : nat) (p : form) : tactic expr :=
do x ← expr_of_unsats (dnf (¬*p)),
return `(uniclo_of_unsat_clausify %%`(m) %%`(p) %%x)
--meta def expr_of_lia : tactic expr :=
--target >>= to_form >>= expr_of_uniclo
meta def expr_of_lia : tactic expr :=
do (p,m) ← target >>= to_form 0,
expr_of_uniclo m p
meta def omega : tactic unit :=
do rev, desugar,
expr_of_lia >>= apply,
skip
end int |
At this NGC we can offer PUBG as a tournament for the first time.
As at the last NGC, Fractal Design is back and is providing great prizes for you to win!
We thank Fractal Design for the renewed cooperation and greatly appreciate it. |
lemma imp_trans (P Q R : Prop) : (P → Q) → ((Q → R) → (P → R)) :=
begin
intro hpq,
intro hqr,
intro a,
apply hqr,
exact hpq a,
end
|
! Begin MIT license text.
! _______________________________________________________________________________________________________
! Copyright 2019 Dr William R Case, Jr (dbcase29@gmail.com)
! Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
! associated documentation files (the "Software"), to deal in the Software without restriction, including
! without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
! copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to
! the following conditions:
! The above copyright notice and this permission notice shall be included in all copies or substantial
! portions of the Software and documentation.
! THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
! OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
! FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
! AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
! LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
! OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
! THE SOFTWARE.
! _______________________________________________________________________________________________________
! End MIT license text.
MODULE EMG_USE_IFs
! USE Interface statements for all subroutines called by SUBROUTINE EMG
USE IS_ELEM_PCOMP_PROPS_Interface
USE OURTIM_Interface
USE ELMDAT1_Interface
USE OUTA_HERE_Interface
USE ELMGM1_Interface
USE ELMGM2_Interface
USE ELMGM3_Interface
USE GET_MATANGLE_FROM_CID_Interface
USE MATERIAL_PROPS_2D_Interface
USE ROT_AXES_MATL_TO_LOC_Interface
USE MATERIAL_PROPS_3D_Interface
USE ELMOUT_Interface
USE SHELL_ABD_MATRICES_Interface
USE ELMDAT2_Interface
USE ELAS1_Interface
USE BREL1_Interface
USE BUSH_Interface
USE TREL1_Interface
USE QDEL1_Interface
USE HEXA_Interface
USE PENTA_Interface
USE TETRA_Interface
USE KUSER1_Interface
USE USERIN_Interface
USE ELMOFF_Interface
END MODULE EMG_USE_IFs
|
(*
File: Partial_Zeta_Bounds.thy
Author: Manuel Eberl, TU München
Asymptotic bounds on sums over k^(-s) for k=1..n, for fixed s and variable n.
*)
section \<open>Bounds on partial sums of the $\zeta$ function\<close>
theory Partial_Zeta_Bounds
imports
Euler_MacLaurin.Euler_MacLaurin_Landau
Zeta_Function.Zeta_Function
Prime_Number_Theorem.Prime_Number_Theorem_Library
Prime_Distribution_Elementary_Library
begin
(* TODO: This does not necessarily require the full complex zeta function. *)
text \<open>
We employ Euler--MacLaurin's summation formula to obtain asymptotic estimates
for the partial sums of the Riemann $\zeta(s)$ function for fixed real $s$, i.\,e.\ the function
\[f(n) = \sum_{k=1}^n k^{-s}\ .\]
We distinguish various cases. The case $s = 1$ simply gives the harmonic numbers and is
treated apart from the others.
\<close>
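(* Informal summary of the estimates established below (a sketch only; the precise
statements are the lemmas of this theory):
s = 1: sum_{k<=n} 1/k ~ ln n (harm_asymp_equiv)
s > 0, s ~= 1: sum_{k<=n} k^(-s) = n^(1-s)/(1-s) + Re (zeta s) + O(n^(-s)) (zeta_partial_sum_bigo_pos)
s > 0: sum_{k<=n} k^s = n^(1+s)/(1+s) + O(n^s) (zeta_partial_sum_bigo_neg)
*)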
lemma harm_asymp_equiv: "sum_upto (\<lambda>n. 1 / n) \<sim>[at_top] ln"
proof -
have "sum_upto (\<lambda>n. n powr -1) \<sim>[at_top] (\<lambda>x. ln (\<lfloor>x\<rfloor> + 1))"
proof (rule asymp_equiv_sandwich)
have "eventually (\<lambda>x. sum_upto (\<lambda>n. n powr -1) x \<in> {ln (\<lfloor>x\<rfloor> + 1)..ln \<lfloor>x\<rfloor> + 1}) at_top"
using eventually_ge_at_top[of 1]
proof eventually_elim
case (elim x)
have "sum_upto (\<lambda>n. real n powr -1) x = harm (nat \<lfloor>x\<rfloor>)"
unfolding sum_upto_altdef harm_def by (intro sum.cong) (auto simp: field_simps powr_minus)
also have "\<dots> \<in> {ln (\<lfloor>x\<rfloor> + 1)..ln \<lfloor>x\<rfloor> + 1}"
using elim harm_le[of "nat \<lfloor>x\<rfloor>"] ln_le_harm[of "nat \<lfloor>x\<rfloor>"]
by (auto simp: le_nat_iff le_floor_iff)
finally show ?case by simp
qed
thus "eventually (\<lambda>x. sum_upto (\<lambda>n. n powr -1) x \<ge> ln (\<lfloor>x\<rfloor> + 1)) at_top"
"eventually (\<lambda>x. sum_upto (\<lambda>n. n powr -1) x \<le> ln \<lfloor>x\<rfloor> + 1) at_top"
by (eventually_elim; simp)+
qed real_asymp+
also have "\<dots> \<sim>[at_top] ln" by real_asymp
finally show ?thesis by (simp add: powr_minus field_simps)
qed
lemma
fixes s :: real
assumes s: "s > 0" "s \<noteq> 1"
shows zeta_partial_sum_bigo_pos:
"(\<lambda>n. (\<Sum>k=1..n. real k powr -s) - real n powr (1 - s) / (1 - s) - Re (zeta s))
\<in> O(\<lambda>x. real x powr -s)"
and zeta_partial_sum_bigo_pos':
"(\<lambda>n. \<Sum>k=1..n. real k powr -s) =o
(\<lambda>n. real n powr (1 - s) / (1 - s) + Re (zeta s)) +o O(\<lambda>x. real x powr -s)"
proof -
define F where "F = (\<lambda>x. x powr (1 - s) / (1 - s))"
define f where "f = (\<lambda>x. x powr -s)"
define f' where "f' = (\<lambda>x. -s * x powr (-s-1))"
define z where "z = Re (zeta s)"
interpret euler_maclaurin_nat' F f "(!) [f, f']" 1 0 z "{}"
proof
have "(\<lambda>b. (\<Sum>k=1..b. real k powr -s) - real b powr (1 - s) / (1 - s) - real b powr -s / 2)
\<longlonglongrightarrow> Re (zeta s) - 0"
proof (intro tendsto_diff)
let ?g = "\<lambda>b. (\<Sum>i<b. complex_of_real (real i + 1) powr - complex_of_real s) -
of_nat b powr (1 - complex_of_real s) / (1 - complex_of_real s)"
have "\<forall>\<^sub>F b in at_top. Re (?g b) = (\<Sum>k=1..b. real k powr -s) - real b powr (1 - s) / (1 - s)"
using eventually_ge_at_top[of 1]
proof eventually_elim
case (elim b)
have "(\<Sum>k=1..b. real k powr -s) = (\<Sum>k<b. real (Suc k) powr -s) "
by (intro sum.reindex_bij_witness[of _ Suc "\<lambda>n. n - 1"]) auto
also have "\<dots> - real b powr (1 - s) / (1 - s) = Re (?g b)"
by (auto simp: powr_Reals_eq add_ac)
finally show ?case ..
qed
moreover have "(\<lambda>b. Re (?g b)) \<longlonglongrightarrow> Re (zeta s)"
using hurwitz_zeta_critical_strip[of "of_real s" 1] s
by (intro tendsto_intros) (simp add: zeta_def)
ultimately show "(\<lambda>b. (\<Sum>k=1..b. real k powr -s) - real b powr (1 - s) / (1 - s)) \<longlonglongrightarrow> Re (zeta s)"
by (blast intro: Lim_transform_eventually)
qed (use s in real_asymp)
thus "(\<lambda>b. (\<Sum>k = 1..b. f (real k)) - F (real b) -
(\<Sum>i<2 * 0 + 1. (bernoulli' (Suc i) / fact (Suc i)) *\<^sub>R ([f, f'] ! i) (real b)))
\<longlonglongrightarrow> z" by (simp add: f_def F_def z_def)
qed (use \<open>s \<noteq> 1\<close> in
\<open>auto intro!: derivative_eq_intros continuous_intros
simp flip: has_field_derivative_iff_has_vector_derivative
simp: F_def f_def f'_def nth_Cons split: nat.splits\<close>)
{
fix n :: nat assume n: "n \<ge> 1"
have "(\<Sum>k=1..n. real k powr -s) =
n powr (1 - s) / (1 - s) + z + 1/2 * n powr -s -
EM_remainder 1 f' (int n)"
using euler_maclaurin_strong_nat'[of n] n by (simp add: F_def f_def)
} note * = this
have "(\<lambda>n. (\<Sum>k=1..n. real k powr -s) - n powr (1 - s) / (1 - s) - z) \<in>
\<Theta>(\<lambda>n. 1/2 * n powr -s - EM_remainder 1 f' (int n))"
using * by (intro bigthetaI_cong eventually_mono[OF eventually_ge_at_top[of 1]]) auto
also have "(\<lambda>n. 1/2 * n powr -s - EM_remainder 1 f' (int n)) \<in> O(\<lambda>n. n powr -s)"
proof (intro sum_in_bigo)
have "(\<lambda>x. norm (EM_remainder 1 f' (int x))) \<in> O(\<lambda>x. real x powr - s)"
proof (rule EM_remainder_strong_bigo_nat[where a = 1 and Y = "{}"])
fix x :: real assume "x \<ge> 1"
show "norm (f' x) \<le> s * x powr (-s-1)"
using s by (simp add: f'_def)
next
from s show "((\<lambda>x. x powr -s) \<longlongrightarrow> 0) at_top" by real_asymp
qed (auto simp: f'_def intro!: continuous_intros derivative_eq_intros)
thus "(\<lambda>x. EM_remainder 1 f' (int x)) \<in> O(\<lambda>x. real x powr -s)"
by simp
qed real_asymp+
finally show "(\<lambda>n. (\<Sum>k=1..n. real k powr -s) - real n powr (1 - s) / (1 - s) - z)
\<in> O(\<lambda>x. real x powr -s)" .
thus"(\<lambda>n. \<Sum>k=1..n. real k powr -s) =o
(\<lambda>n. real n powr (1 - s) / (1 - s) + z) +o O(\<lambda>x. real x powr -s)"
by (subst set_minus_plus [symmetric]) (simp_all add: fun_diff_def algebra_simps)
qed
lemma zeta_tail_bigo:
fixes s :: real
assumes s: "s > 1"
shows "(\<lambda>n. Re (hurwitz_zeta (real n + 1) s)) \<in> O(\<lambda>x. real x powr (1 - s))"
proof -
have [simp]: "complex_of_real (Re (zeta s)) = zeta s"
using zeta_real[of s] by (auto elim!: Reals_cases)
from s have s': "s > 0" "s \<noteq> 1" by auto
have "(\<lambda>n. -Re (hurwitz_zeta (real n + 1) s) - real n powr (1 - s) / (1 - s)
+ real n powr (1 - s) / (1 - s))
\<in> O(\<lambda>x. real x powr (1 - s))"
proof (rule sum_in_bigo)
have "(\<lambda>n. -Re (hurwitz_zeta (real n + 1) s) - real n powr (1 - s) / (1 - s)) =
(\<lambda>n. (\<Sum>k=1..n. real k powr -s) - real n powr (1 - s) / (1 - s) - Re (zeta s))"
(is "?lhs = ?rhs")
proof
fix n :: nat
have "hurwitz_zeta (1 + real n) s = zeta s - (\<Sum>k<n. real (Suc k) powr -s)"
by (subst hurwitz_zeta_shift) (use assms in \<open>auto simp: zeta_def powr_Reals_eq\<close>)
also have "(\<Sum>k<n. real (Suc k) powr -s) = (\<Sum>k=1..n. real k powr -s)"
by (rule sum.reindex_bij_witness[of _ "\<lambda>k. k - 1" Suc]) auto
finally show "?lhs n = ?rhs n" by (simp add: add_ac)
qed
also have "\<dots> \<in> O(\<lambda>x. real x powr (-s))"
by (rule zeta_partial_sum_bigo_pos) (use s in auto)
also have "(\<lambda>x. real x powr (-s)) \<in> O(\<lambda>x. real x powr (1-s))"
by real_asymp
finally show "(\<lambda>n. -Re (hurwitz_zeta (real n + 1) s) - real n powr (1 - s) / (1 - s)) \<in> \<dots>" .
qed (use s in real_asymp)
thus ?thesis by simp
qed
lemma zeta_tail_bigo':
fixes s :: real
assumes s: "s > 1"
shows "(\<lambda>n. Re (hurwitz_zeta (real n) s)) \<in> O(\<lambda>x. real x powr (1 - s))"
proof -
have "(\<lambda>n. Re (hurwitz_zeta (real n) s)) \<in> \<Theta>(\<lambda>n. Re (hurwitz_zeta (real (n - 1) + 1) s))"
by (intro bigthetaI_cong eventually_mono[OF eventually_ge_at_top[of 1]])
(auto simp: of_nat_diff)
also have "(\<lambda>n. Re (hurwitz_zeta (real (n - 1) + 1) s)) \<in> O(\<lambda>x. real (x - 1) powr (1 - s))"
by (rule landau_o.big.compose[OF zeta_tail_bigo[OF assms]]) real_asymp
also have "(\<lambda>x. real (x - 1) powr (1 - s)) \<in> O(\<lambda>x. real x powr (1 - s))"
by real_asymp
finally show ?thesis .
qed
lemma
fixes s :: real
assumes s: "s > 0"
shows zeta_partial_sum_bigo_neg:
"(\<lambda>n. (\<Sum>i=1..n. real i powr s) - n powr (1 + s) / (1 + s)) \<in> O(\<lambda>n. n powr s)"
and zeta_partial_sum_bigo_neg':
"(\<lambda>n. (\<Sum>i=1..n. real i powr s)) =o (\<lambda>n. n powr (1 + s) / (1 + s)) +o O(\<lambda>n. n powr s)"
proof -
define F where "F = (\<lambda>x. x powr (1 + s) / (1 + s))"
define f where "f = (\<lambda>x. x powr s)"
define f' where "f' = (\<lambda>x. s * x powr (s - 1))"
have "(\<Sum>i=1..n. f (real i)) - F n =
1 / 2 - F 1 + f n / 2 + EM_remainder' 1 f' 1 (real n)" if n: "n \<ge> 1" for n
proof -
have "(\<Sum>i\<in>{1<..n}. f (real i)) - integral {real 1..real n} f =
(\<Sum>k<1. (bernoulli' (Suc k) / fact (Suc k)) *\<^sub>R (([f, f'] ! k) (real n) -
([f, f'] ! k) (real 1))) + EM_remainder' 1 ([f, f'] ! 1) (real 1) (real n)"
by (rule euler_maclaurin_strong_raw_nat[where Y = "{}"])
(use \<open>s > 0\<close> \<open>n \<ge> 1\<close> in
\<open>auto intro!: derivative_eq_intros continuous_intros
simp flip: has_field_derivative_iff_has_vector_derivative
simp: F_def f_def f'_def nth_Cons split: nat.splits\<close>)
also have "(\<Sum>i\<in>{1<..n}. f (real i)) = (\<Sum>i\<in>insert 1 {1<..n}. f (real i)) - f 1"
using n by (subst sum.insert) auto
also from n have "insert 1 {1<..n} = {1..n}" by auto
finally have "(\<Sum>i=1..n. f (real i)) - F n = f 1 + (integral {1..real n} f - F n) +
(f (real n) - f 1) / 2 + EM_remainder' 1 f' 1 (real n)" by simp
hence "(\<Sum>i=1..n. f (real i)) - F n = 1 / 2 + (integral {1..real n} f - F n) +
f (real n) / 2 + EM_remainder' 1 f' 1 (real n)"
using s by (simp add: f_def diff_divide_distrib)
also have "(f has_integral (F (real n) - F 1)) {1..real n}" using assms n
by (intro fundamental_theorem_of_calculus)
(auto simp flip: has_field_derivative_iff_has_vector_derivative simp: F_def f_def
intro!: derivative_eq_intros continuous_intros)
hence "integral {1..real n} f - F n = - F 1"
by (simp add: has_integral_iff)
also have "1 / 2 + (-F 1) + f (real n) / 2 = 1 / 2 - F 1 + f n / 2"
by simp
finally show ?thesis .
qed
hence "(\<lambda>n. (\<Sum>i=1..n. f (real i)) - F n) \<in>
\<Theta>(\<lambda>n. 1 / 2 - F 1 + f n / 2 + EM_remainder' 1 f' 1 (real n))"
by (intro bigthetaI_cong eventually_mono[OF eventually_ge_at_top[of 1]])
also have "(\<lambda>n. 1 / 2 - F 1 + f n / 2 + EM_remainder' 1 f' 1 (real n))
\<in> O(\<lambda>n. real n powr s)"
unfolding F_def f_def
proof (intro sum_in_bigo)
have "(\<lambda>x. integral {1..real x} (\<lambda>t. pbernpoly 1 t *\<^sub>R f' t)) \<in> O(\<lambda>n. 1 / s * real n powr s)"
proof (intro landau_o.big.compose[OF integral_bigo])
have "(\<lambda>x. pbernpoly 1 x * f' x) \<in> O(\<lambda>x. 1 * x powr (s - 1))"
by (intro landau_o.big.mult pbernpoly_bigo) (auto simp: f'_def)
thus "(\<lambda>x. pbernpoly 1 x *\<^sub>R f' x) \<in> O(\<lambda>x. x powr (s - 1))"
by simp
from s show "filterlim (\<lambda>a. 1 / s * a powr s) at_top at_top" by real_asymp
next
fix a' x :: real assume "a' \<ge> 1" "a' \<le> x"
thus "(\<lambda>a. pbernpoly 1 a *\<^sub>R f' a) integrable_on {a'..x}"
by (intro integrable_EM_remainder') (auto intro!: continuous_intros simp: f'_def)
qed (use s in \<open>auto intro!: filterlim_real_sequentially continuous_intros derivative_eq_intros\<close>)
thus "(\<lambda>x. EM_remainder' 1 f' 1 (real x)) \<in> O(\<lambda>n. real n powr s)"
using \<open>s > 0\<close> by (simp add: EM_remainder'_def)
qed (use \<open>s > 0\<close> in real_asymp)+
finally show "(\<lambda>n. (\<Sum>i=1..n. real i powr s) - n powr (1 + s) / (1 + s)) \<in> O(\<lambda>n. n powr s)"
by (simp add: f_def F_def)
thus "(\<lambda>n. (\<Sum>i=1..n. real i powr s)) =o (\<lambda>n. n powr (1 + s) / (1 + s)) +o O(\<lambda>n. n powr s)"
by (subst set_minus_plus [symmetric]) (simp_all add: fun_diff_def algebra_simps)
qed
lemma zeta_partial_sum_le_pos:
assumes "s > 0" "s \<noteq> 1"
defines "z \<equiv> Re (zeta (complex_of_real s))"
shows "\<exists>c>0. \<forall>x\<ge>1. \<bar>sum_upto (\<lambda>n. n powr -s) x - (x powr (1-s) / (1-s) + z)\<bar> \<le> c * x powr -s"
proof (rule sum_upto_asymptotics_lift_nat_real)
show "(\<lambda>n. (\<Sum>k = 1..n. real k powr - s) - (real n powr (1 - s) / (1 - s) + z))
\<in> O(\<lambda>n. real n powr - s)"
using zeta_partial_sum_bigo_pos[OF assms(1,2)] unfolding z_def
by (simp add: algebra_simps)
from assms have "s < 1 \<or> s > 1" by linarith
thus "(\<lambda>n. real n powr (1 - s) / (1 - s) + z - (real (Suc n) powr (1 - s) / (1 - s) + z))
\<in> O(\<lambda>n. real n powr - s)"
by standard (use \<open>s > 0\<close> in \<open>real_asymp+\<close>)
show "(\<lambda>n. real n powr - s) \<in> O(\<lambda>n. real (Suc n) powr - s)"
by real_asymp
show "mono_on (\<lambda>a. a powr - s) {1..} \<or> mono_on (\<lambda>x. - (x powr - s)) {1..}"
using assms by (intro disjI2) (auto intro!: mono_onI powr_mono2')
from assms have "s < 1 \<or> s > 1" by linarith
hence "mono_on (\<lambda>a. a powr (1 - s) / (1 - s) + z) {1..}"
proof
assume "s < 1"
thus ?thesis using \<open>s > 0\<close>
by (intro mono_onI powr_mono2 divide_right_mono add_right_mono) auto
next
assume "s > 1"
thus ?thesis
by (intro mono_onI le_imp_neg_le add_right_mono divide_right_mono_neg powr_mono2') auto
qed
thus "mono_on (\<lambda>a. a powr (1 - s) / (1 - s) + z) {1..} \<or>
mono_on (\<lambda>x. - (x powr (1 - s) / (1 - s) + z)) {1..}" by blast
qed auto
lemma zeta_partial_sum_le_pos':
assumes "s > 0" "s \<noteq> 1"
defines "z \<equiv> Re (zeta (complex_of_real s))"
shows "\<exists>c>0. \<forall>x\<ge>1. \<bar>sum_upto (\<lambda>n. n powr -s) x - x powr (1-s) / (1-s)\<bar> \<le> c"
proof -
have "\<exists>c>0. \<forall>x\<ge>1. \<bar>sum_upto (\<lambda>n. n powr -s) x - x powr (1-s) / (1-s)\<bar> \<le> c * 1"
proof (rule sum_upto_asymptotics_lift_nat_real)
have "(\<lambda>n. (\<Sum>k = 1..n. real k powr - s) - (real n powr (1 - s) / (1 - s) + z))
\<in> O(\<lambda>n. real n powr - s)"
using zeta_partial_sum_bigo_pos[OF assms(1,2)] unfolding z_def
by (simp add: algebra_simps)
also have "(\<lambda>n. real n powr -s) \<in> O(\<lambda>n. 1)"
using assms by real_asymp
finally have "(\<lambda>n. (\<Sum>k = 1..n. real k powr - s) - real n powr (1 - s) / (1 - s) - z)
\<in> O(\<lambda>n. 1)" by (simp add: algebra_simps)
hence "(\<lambda>n. (\<Sum>k = 1..n. real k powr - s) - real n powr (1 - s) / (1 - s) - z + z) \<in> O(\<lambda>n. 1)"
by (rule sum_in_bigo) auto
thus "(\<lambda>n. (\<Sum>k = 1..n. real k powr - s) - (real n powr (1 - s) / (1 - s))) \<in> O(\<lambda>n. 1)"
by simp
from assms have "s < 1 \<or> s > 1" by linarith
thus "(\<lambda>n. real n powr (1 - s) / (1 - s) - (real (Suc n) powr (1 - s) / (1 - s))) \<in> O(\<lambda>n. 1)"
by standard (use \<open>s > 0\<close> in \<open>real_asymp+\<close>)
show "mono_on (\<lambda>a. 1) {1..} \<or> mono_on (\<lambda>x::real. -1 :: real) {1..}"
using assms by (intro disjI2) (auto intro!: mono_onI powr_mono2')
from assms have "s < 1 \<or> s > 1" by linarith
hence "mono_on (\<lambda>a. a powr (1 - s) / (1 - s)) {1..}"
proof
assume "s < 1"
thus ?thesis using \<open>s > 0\<close>
by (intro mono_onI powr_mono2 divide_right_mono add_right_mono) auto
next
assume "s > 1"
thus ?thesis
by (intro mono_onI le_imp_neg_le add_right_mono divide_right_mono_neg powr_mono2') auto
qed
thus "mono_on (\<lambda>a. a powr (1 - s) / (1 - s)) {1..} \<or>
mono_on (\<lambda>x. - (x powr (1 - s) / (1 - s))) {1..}" by blast
qed auto
thus ?thesis by simp
qed
lemma zeta_partial_sum_le_pos'':
assumes "s > 0" "s \<noteq> 1"
shows "\<exists>c>0. \<forall>x\<ge>1. \<bar>sum_upto (\<lambda>n. n powr -s) x\<bar> \<le> c * x powr max 0 (1 - s)"
proof -
from zeta_partial_sum_le_pos'[OF assms] obtain c where
c: "c > 0" "\<And>x. x \<ge> 1 \<Longrightarrow> \<bar>sum_upto (\<lambda>x. real x powr - s) x - x powr (1 - s) / (1 - s)\<bar> \<le> c"
by auto
{
fix x :: real assume x: "x \<ge> 1"
have "\<bar>sum_upto (\<lambda>x. real x powr - s) x\<bar> \<le> \<bar>x powr (1 - s) / (1 - s)\<bar> + c"
using c(1) c(2)[OF x] x by linarith
also have "\<bar>x powr (1 - s) / (1 - s)\<bar> = x powr (1 - s) / \<bar>1 - s\<bar>"
using assms by simp
also have "\<dots> \<le> x powr max 0 (1 - s) / \<bar>1 - s\<bar>"
using x by (intro divide_right_mono powr_mono) auto
also have "c = c * x powr 0" using x by simp
also have "c * x powr 0 \<le> c * x powr max 0 (1 - s)"
using c(1) x by (intro mult_left_mono powr_mono) auto
also have "x powr max 0 (1 - s) / \<bar>1 - s\<bar> + c * x powr max 0 (1 - s) =
(1 / \<bar>1 - s\<bar> + c) * x powr max 0 (1 - s)"
by (simp add: algebra_simps)
finally have "\<bar>sum_upto (\<lambda>x. real x powr - s) x\<bar> \<le> (1 / \<bar>1 - s\<bar> + c) * x powr max 0 (1 - s)"
by simp
}
moreover have "(1 / \<bar>1 - s\<bar> + c) > 0"
using c assms by (intro add_pos_pos divide_pos_pos) auto
ultimately show ?thesis by blast
qed
lemma zeta_partial_sum_le_pos_bigo:
assumes "s > 0" "s \<noteq> 1"
shows "(\<lambda>x. sum_upto (\<lambda>n. n powr -s) x) \<in> O(\<lambda>x. x powr max 0 (1 - s))"
proof -
from zeta_partial_sum_le_pos''[OF assms] obtain c
where "\<forall>x\<ge>1. \<bar>sum_upto (\<lambda>n. n powr -s) x\<bar> \<le> c * x powr max 0 (1 - s)" by auto
thus ?thesis
by (intro bigoI[of _ c] eventually_mono[OF eventually_ge_at_top[of 1]]) auto
qed
lemma zeta_partial_sum_01_asymp_equiv:
assumes "s \<in> {0<..<1}"
shows "sum_upto (\<lambda>n. n powr -s) \<sim>[at_top] (\<lambda>x. x powr (1 - s) / (1 - s))"
proof -
from zeta_partial_sum_le_pos'[of s] assms obtain c where
c: "c > 0" "\<forall>x\<ge>1. \<bar>sum_upto (\<lambda>x. real x powr -s) x - x powr (1 - s) / (1 - s)\<bar> \<le> c" by auto
hence "(\<lambda>x. sum_upto (\<lambda>x. real x powr -s) x - x powr (1 - s) / (1 - s)) \<in> O(\<lambda>_. 1)"
by (intro bigoI[of _ c] eventually_mono[OF eventually_ge_at_top[of 1]]) auto
also have "(\<lambda>_. 1) \<in> o(\<lambda>x. x powr (1 - s) / (1 - s))"
using assms by real_asymp
finally show ?thesis
by (rule smallo_imp_asymp_equiv)
qed
lemma zeta_partial_sum_gt_1_asymp_equiv:
assumes "s > 1"
defines "\<zeta> \<equiv> Re (zeta s)"
shows "sum_upto (\<lambda>n. n powr -s) \<sim>[at_top] (\<lambda>x. \<zeta>)"
proof -
have [simp]: "\<zeta> \<noteq> 0"
using assms zeta_Re_gt_1_nonzero[of s] zeta_real[of s] by (auto elim!: Reals_cases)
from zeta_partial_sum_le_pos[of s] assms obtain c where
c: "c > 0" "\<forall>x\<ge>1. \<bar>sum_upto (\<lambda>x. real x powr -s) x - (x powr (1 - s) / (1 - s) + \<zeta>)\<bar> \<le>
c * x powr -s" by (auto simp: \<zeta>_def)
hence "(\<lambda>x. sum_upto (\<lambda>x. real x powr -s) x - \<zeta> - x powr (1 - s) / (1 - s)) \<in> O(\<lambda>x. x powr -s)"
by (intro bigoI[of _ c] eventually_mono[OF eventually_ge_at_top[of 1]]) auto
also have "(\<lambda>x. x powr -s) \<in> o(\<lambda>_. 1)"
using \<open>s > 1\<close> by real_asymp
finally have "(\<lambda>x. sum_upto (\<lambda>x. real x powr -s) x - \<zeta> - x powr (1 - s) / (1 - s) +
x powr (1 - s) / (1 - s)) \<in> o(\<lambda>_. 1)"
by (rule sum_in_smallo) (use \<open>s > 1\<close> in real_asymp)
thus ?thesis by (simp add: smallo_imp_asymp_equiv)
qed
lemma zeta_partial_sum_pos_bigtheta:
assumes "s > 0" "s \<noteq> 1"
shows "sum_upto (\<lambda>n. n powr -s) \<in> \<Theta>(\<lambda>x. x powr max 0 (1 - s))"
proof (cases "s > 1")
case False
thus ?thesis
using asymp_equiv_imp_bigtheta[OF zeta_partial_sum_01_asymp_equiv[of s]] assms
by (simp add: max_def)
next
case True
have [simp]: "Re (zeta s) \<noteq> 0"
using True zeta_Re_gt_1_nonzero[of s] zeta_real[of s] by (auto elim!: Reals_cases)
show ?thesis
using True asymp_equiv_imp_bigtheta[OF zeta_partial_sum_gt_1_asymp_equiv[of s]]
by (simp add: max_def)
qed
lemma zeta_partial_sum_le_neg:
assumes "s > 0"
shows "\<exists>c>0. \<forall>x\<ge>1. \<bar>sum_upto (\<lambda>n. n powr s) x - x powr (1 + s) / (1 + s)\<bar> \<le> c * x powr s"
proof (rule sum_upto_asymptotics_lift_nat_real)
show "(\<lambda>n. (\<Sum>k = 1..n. real k powr s) - (real n powr (1 + s) / (1 + s)))
\<in> O(\<lambda>n. real n powr s)"
using zeta_partial_sum_bigo_neg[OF assms(1)] by (simp add: algebra_simps)
show "(\<lambda>n. real n powr (1 + s) / (1 + s) - (real (Suc n) powr (1 + s) / (1 + s)))
\<in> O(\<lambda>n. real n powr s)"
using assms by real_asymp
show "(\<lambda>n. real n powr s) \<in> O(\<lambda>n. real (Suc n) powr s)"
by real_asymp
show "mono_on (\<lambda>a. a powr s) {1..} \<or> mono_on (\<lambda>x. - (x powr s)) {1..}"
using assms by (intro disjI1) (auto intro!: mono_onI powr_mono2)
show "mono_on (\<lambda>a. a powr (1 + s) / (1 + s)) {1..} \<or>
mono_on (\<lambda>x. - (x powr (1 + s) / (1 + s))) {1..}"
using assms by (intro disjI1 divide_right_mono powr_mono2 mono_onI) auto
qed auto
lemma zeta_partial_sum_neg_asymp_equiv:
assumes "s > 0"
shows "sum_upto (\<lambda>n. n powr s) \<sim>[at_top] (\<lambda>x. x powr (1 + s) / (1 + s))"
proof -
from zeta_partial_sum_le_neg[of s] assms obtain c where
c: "c > 0" "\<forall>x\<ge>1. \<bar>sum_upto (\<lambda>x. real x powr s) x - x powr (1 + s) / (1 + s)\<bar> \<le> c * x powr s"
by auto
hence "(\<lambda>x. sum_upto (\<lambda>x. real x powr s) x - x powr (1 + s) / (1 + s)) \<in> O(\<lambda>x. x powr s)"
by (intro bigoI[of _ c] eventually_mono[OF eventually_ge_at_top[of 1]]) auto
also have "(\<lambda>x. x powr s) \<in> o(\<lambda>x. x powr (1 + s) / (1 + s))"
using assms by real_asymp
finally show ?thesis
by (rule smallo_imp_asymp_equiv)
qed
end |
! RUN: bbc -emit-fir %s -o - | FileCheck %s
! CHECK-LABEL: system_clock_test
subroutine system_clock_test()
integer(4) :: c
integer(8) :: m
real :: r
! CHECK-DAG: %[[c:.*]] = fir.alloca i32 {bindc_name = "c"
! CHECK-DAG: %[[m:.*]] = fir.alloca i64 {bindc_name = "m"
! CHECK-DAG: %[[r:.*]] = fir.alloca f32 {bindc_name = "r"
! CHECK: %[[c4:.*]] = arith.constant 4 : i32
! CHECK: %[[Count:.*]] = fir.call @_FortranASystemClockCount(%[[c4]]) : (i32) -> i64
! CHECK: %[[Count1:.*]] = fir.convert %[[Count]] : (i64) -> i32
! CHECK: fir.store %[[Count1]] to %[[c]] : !fir.ref<i32>
! CHECK: %[[c8:.*]] = arith.constant 8 : i32
! CHECK: %[[Rate:.*]] = fir.call @_FortranASystemClockCountRate(%[[c8]]) : (i32) -> i64
! CHECK: %[[Rate1:.*]] = fir.convert %[[Rate]] : (i64) -> f32
! CHECK: fir.store %[[Rate1]] to %[[r]] : !fir.ref<f32>
! CHECK: %[[c8_2:.*]] = arith.constant 8 : i32
! CHECK: %[[Max:.*]] = fir.call @_FortranASystemClockCountMax(%[[c8_2]]) : (i32) -> i64
! CHECK: fir.store %[[Max]] to %[[m]] : !fir.ref<i64>
call system_clock(c, r, m)
! print*, c, r, m
! CHECK-NOT: fir.call
! CHECK: %[[c8_3:.*]] = arith.constant 8 : i32
! CHECK: %[[Rate:.*]] = fir.call @_FortranASystemClockCountRate(%[[c8_3]]) : (i32) -> i64
! CHECK: fir.store %[[Rate]] to %[[m]] : !fir.ref<i64>
call system_clock(count_rate=m)
! CHECK-NOT: fir.call
! print*, m
end subroutine
|
/-
Copyright (c) 2020 Simon Hudon. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Simon Hudon
! This file was ported from Lean 3 source module testing.slim_check.testable
! leanprover-community/mathlib commit d13b3a4a392ea7273dfa4727dbd1892e26cfd518
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathbin.Testing.SlimCheck.Sampleable
/-!
# `testable` Class
Testable propositions have a procedure that can generate counter-examples
together with a proof that they invalidate the proposition.
This is a port of the Haskell QuickCheck library.
## Creating Customized Instances
The type classes `testable` and `sampleable` are the means by which
`slim_check` creates samples and tests them. For instance, the proposition
`∀ i j : ℕ, i ≤ j` has a `testable` instance because `ℕ` is sampleable
and `i ≤ j` is decidable. Once `slim_check` finds the `testable`
instance, it can start using the instance to repeatedly create samples
and check whether they satisfy the property. This allows the
user to create new instances and apply `slim_check` to new situations.
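For instance, this property can be handed to the checker directly (an illustrative
invocation only; since the property is false, `slim_check` will search for a
counter-example and report it):
```lean
#eval testable.check (∀ i j : ℕ, i ≤ j)
```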
### Polymorphism
The property `testable.check (∀ (α : Type) (xs ys : list α), xs ++ ys
= ys ++ xs)` shows us that type-polymorphic properties can be
tested. `α` is instantiated with `ℤ` first and then tested as normal
monomorphic properties.
The monomorphisation limits the applicability of `slim_check` to
polymorphic properties that can be stated about integers. The
limitation may be lifted in the future but, for now, if
one wishes to use a different type than `ℤ`, one has to refer to
the desired type.
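For instance, the polymorphic property above can be stated monomorphically by fixing
the type to `ℤ` (an illustrative invocation only, mirroring the example above):
```lean
#eval testable.check (∀ xs ys : list ℤ, xs ++ ys = ys ++ xs)
```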
### What do I do if I'm testing a property about my newly defined type?
Let us consider a type made for a new formalization:
```lean
structure my_type :=
(x y : ℕ)
(h : x ≤ y)
```
How do we test a property about `my_type`? For instance, let us consider
`testable.check $ ∀ a b : my_type, a.y ≤ b.x → a.x ≤ b.y`. Writing this
property as is will give us an error because we do not have an instance
of `sampleable my_type`. We can define one as follows:
```lean
instance : sampleable my_type :=
{ sample := do
x ← sample ℕ,
xy_diff ← sample ℕ,
return { x := x, y := x + xy_diff, h := /- some proof -/ } }
```
We can see that the instance is very simple because our type is built
up from other type that have `sampleable` instances. `sampleable` also
has a `shrink` method but it is optional. We may want to implement one
for ease of testing as:
```lean
/- ... -/
shrink := λ ⟨x,y,h⟩, (λ ⟨x,y⟩, { x := x, y := x + y, h := /- proof -/}) <$> shrink (x, y - x) }
```
Again, we take advantage of the fact that other types have useful
`shrink` implementations, in this case `prod`.
### Optimizing the sampling
Some properties are guarded by a proposition. For instance, recall this
example:
```lean
#eval testable.check (∀ x : ℕ, 2 ∣ x → x < 100)
```
When testing the above example, we generate a natural number, we check
that it is even and test it if it is even or throw it away and start
over otherwise. Statistically, we can expect half of our samples to be
thrown away by such a filter. Sometimes, the filter is more
restrictive. For instance we might need `x` to be a `prime`
number. This would cause most of our samples to be discarded.
We can help `slim_check` find good samples by providing specialized
sampleable instances. Below, we show an instance for the subtype
of even natural numbers. This means that, when producing
a sample, it is forced to produce a proof that it is even.
```lean
instance {k : ℕ} [fact (0 < k)] : sampleable { x : ℕ // k ∣ x } :=
{ sample := do { n ← sample ℕ, pure ⟨k*n, dvd_mul_right _ _⟩ },
shrink := λ ⟨x,h⟩, (λ y, ⟨k*y, dvd_mul_right _ _⟩) <$> shrink x }
```
Such an instance will be preferred when testing a proposition of the shape
`∀ x : T, p x → q`
We can observe the effect by enabling tracing:
```lean
/- no specialized sampling -/
#eval testable.check (∀ x : ℕ, 2 ∣ x → x < 100) { trace_discarded := tt }
-- discard
-- x := 1
-- discard
-- x := 41
-- discard
-- x := 3
-- discard
-- x := 5
-- discard
-- x := 5
-- discard
-- x := 197
-- discard
-- x := 469
-- discard
-- x := 9
-- discard
-- ===================
-- Found problems!
-- x := 552
-- -------------------
/- let us define a specialized sampling instance -/
instance {k : ℕ} : sampleable { x : ℕ // k ∣ x } :=
{ sample := do { n ← sample ℕ, pure ⟨k*n, dvd_mul_right _ _⟩ },
shrink := λ ⟨x,h⟩, (λ y, ⟨k*y, dvd_mul_right _ _⟩) <$> shrink x }
#eval testable.check (∀ x : ℕ, 2 ∣ x → x < 100) { enable_tracing := tt }
-- ===================
-- Found problems!
-- x := 358
-- -------------------
```
Similarly, it is common to write properties of the form: `∀ i j, i ≤ j → ...`
as the following example shows:
```lean
#eval check (∀ i j k : ℕ, j < k → i - k < i - j)
```
Without subtype instances, the above property discards many samples
because `j < k` does not hold. Fortunately, we have an appropriate
instance to choose `k` intelligently.
## Main definitions
* `testable` class
* `testable.check`: a way to test a proposition using random examples
## Tags
random testing
## References
* https://hackage.haskell.org/package/QuickCheck
-/
universe u v
variable (var var' : String)
variable (α : Type u)
variable (β : α → Prop)
variable (f : Type → Prop)
namespace SlimCheck
/- ./././Mathport/Syntax/Translate/Command.lean:364:30: infer kinds are unsupported in Lean 4: gave_up {} -/
#print SlimCheck.TestResult /-
/-- Result of trying to disprove `p`
The constructors are:
* `success : (psum unit p) → test_result`
succeed when we find another example satisfying `p`
In `success h`, `h` is an optional proof of the proposition.
Without the proof, all we know is that we found one example
where `p` holds. With a proof, the one test was sufficient to
prove that `p` holds and we do not need to keep finding examples.
* `gave_up {} : ℕ → test_result`
give up when a well-formed example cannot be generated.
`gave_up n` tells us that `n` invalid examples were tried.
Above 100, we give up on the proposition and report that we
did not find a way to properly test it.
* `failure : ¬ p → (list string) → ℕ → test_result`
a counter-example to `p`; the strings specify values for the relevant variables.
`failure h vs n` also carries a proof that `p` does not hold. This way, we can
guarantee that there will be no false positive. The last component, `n`,
is the number of times that the counter-example was shrunk.
-/
inductive TestResult (p : Prop)
| success : PSum Unit p → test_result
| gave_up : ℕ → test_result
| failure : ¬p → List String → ℕ → test_result
deriving Inhabited
#align slim_check.test_result SlimCheck.TestResult
-/
#print SlimCheck.TestResult.toString /-
/-- format a `test_result` as a string. -/
protected def TestResult.toString {p} : TestResult p → String
| test_result.success (PSum.inl ()) => "success (without proof)"
| test_result.success (PSum.inr h) => "success (with proof)"
| test_result.gave_up n => s! "gave up {n} times"
| test_result.failure a vs _ => s! "failed {vs}"
#align slim_check.test_result.to_string SlimCheck.TestResult.toString
-/
/-- configuration for testing a property -/
structure SlimCheckCfg where
numInst : ℕ := 100
-- number of examples
maxSize : ℕ := 100
-- final size argument
traceDiscarded : Bool := false
-- enable the printing out of discarded samples
traceSuccess : Bool := false
-- enable the printing out of successful tests
traceShrink : Bool := false
-- enable the printing out of shrinking steps
traceShrinkCandidates : Bool := false
-- enable the printing out of shrinking candidates
randomSeed : Option ℕ := none
-- specify a seed to the random number generator to
-- obtain a deterministic behavior
quiet : Bool := false
deriving has_reflect, Inhabited
#align slim_check.slim_check_cfg SlimCheck.SlimCheckCfg
-- suppress success message when running `slim_check`
instance {p} : ToString (TestResult p) :=
⟨TestResult.toString⟩
#print SlimCheck.PrintableProp /-
/-- `printable_prop p` allows one to print a proposition so that
`slim_check` can indicate how values relate to each other.
-/
class PrintableProp (p : Prop) where
printProp : Option String
#align slim_check.printable_prop SlimCheck.PrintableProp
-/
-- see [note priority]
instance (priority := 100) defaultPrintableProp {p} : PrintableProp p :=
⟨none⟩
#align slim_check.default_printable_prop SlimCheck.defaultPrintableProp
#print SlimCheck.Testable /-
/- ./././Mathport/Syntax/Translate/Command.lean:388:30: infer kinds are unsupported in Lean 4: #[`run] [] -/
/-- `testable p` uses random examples to try to disprove `p`. -/
class Testable (p : Prop) where
run (cfg : SlimCheckCfg) (minimize : Bool) : Gen (TestResult p)
#align slim_check.testable SlimCheck.Testable
-/
open _Root_.List
open TestResult
/-- Applicative combinator for proof-carrying test results -/
def combine {p q : Prop} : PSum Unit (p → q) → PSum Unit p → PSum Unit q
| PSum.inr f, PSum.inr x => PSum.inr (f x)
| _, _ => PSum.inl ()
#align slim_check.combine SlimCheck.combine
/-- Combine the test result for properties `p` and `q` to create a test for their conjunction. -/
def andCounterExample {p q : Prop} : TestResult p → TestResult q → TestResult (p ∧ q)
| failure Hce xs n, _ => failure (fun h => Hce h.1) xs n
| _, failure Hce xs n => failure (fun h => Hce h.2) xs n
| success xs, success ys => success <| combine (combine (PSum.inr And.intro) xs) ys
| gave_up n, gave_up m => gaveUp <| n + m
| gave_up n, _ => gaveUp n
| _, gave_up n => gaveUp n
#align slim_check.and_counter_example SlimCheck.andCounterExample
/-- Combine the test result for properties `p` and `q` to create a test for their disjunction -/
def orCounterExample {p q : Prop} : TestResult p → TestResult q → TestResult (p ∨ q)
| failure Hce xs n, failure Hce' ys n' =>
failure (fun h => or_iff_not_and_not.1 h ⟨Hce, Hce'⟩) (xs ++ ys) (n + n')
| success xs, _ => success <| combine (PSum.inr Or.inl) xs
| _, success ys => success <| combine (PSum.inr Or.inr) ys
| gave_up n, gave_up m => gaveUp <| n + m
| gave_up n, _ => gaveUp n
| _, gave_up n => gaveUp n
#align slim_check.or_counter_example SlimCheck.orCounterExample
/-- If `q → p`, then `¬ p → ¬ q` which means that testing `p` can allow us
to find counter-examples to `q`. -/
def convertCounterExample {p q : Prop} (h : q → p) :
TestResult p → optParam (PSum Unit (p → q)) (PSum.inl ()) → TestResult q
| failure Hce xs n, _ => failure (mt h Hce) xs n
| success Hp, Hpq => success (combine Hpq Hp)
| gave_up n, _ => gaveUp n
#align slim_check.convert_counter_example SlimCheck.convertCounterExample
/-- Test `q` by testing `p` and proving the equivalence between the two. -/
def convertCounterExample' {p q : Prop} (h : p ↔ q) (r : TestResult p) : TestResult q :=
convertCounterExample h.2 r (PSum.inr h.1)
#align slim_check.convert_counter_example' SlimCheck.convertCounterExample'
/- ./././Mathport/Syntax/Translate/Expr.lean:177:8: unsupported: ambiguous notation -/
/-- When we assign a value to a universally quantified variable,
we record that value using this function so that our counter-examples
can be informative. -/
def addToCounterExample (x : String) {p q : Prop} (h : q → p) :
TestResult p → optParam (PSum Unit (p → q)) (PSum.inl ()) → TestResult q
| failure Hce xs n, _ => failure (mt h Hce) (x::xs) n
| r, hpq => convertCounterExample h r hpq
#align slim_check.add_to_counter_example SlimCheck.addToCounterExample
/-- Add some formatting to the information recorded by `add_to_counter_example`. -/
def addVarToCounterExample {γ : Type v} [Repr γ] (var : String) (x : γ) {p q : Prop} (h : q → p) :
TestResult p → optParam (PSum Unit (p → q)) (PSum.inl ()) → TestResult q :=
@addToCounterExample (var ++ " := " ++ repr x) _ _ h
#align slim_check.add_var_to_counter_example SlimCheck.addVarToCounterExample
#print SlimCheck.NamedBinder /-
/-- Gadget used to introspect the name of bound variables.
It is used with the `testable` typeclass so that
`testable (named_binder "x" (∀ x, p x))` can use the variable name
of `x` in error messages displayed to the user. If we find that instantiating
the above quantifier with 3 falsifies it, we can print:
```
==============
Problem found!
==============
x := 3
```
-/
@[simp, nolint unused_arguments]
def NamedBinder (n : String) (p : Prop) : Prop :=
p
#align slim_check.named_binder SlimCheck.NamedBinder
-/
/-- Is the given test result a failure? -/
def isFailure {p} : TestResult p → Bool
| test_result.failure _ _ _ => true
| _ => false
#align slim_check.is_failure SlimCheck.isFailure
instance andTestable (p q : Prop) [Testable p] [Testable q] : Testable (p ∧ q) :=
⟨fun cfg min => do
let xp ← Testable.run p cfg min
let xq ← Testable.run q cfg min
pure <| and_counter_example xp xq⟩
#align slim_check.and_testable SlimCheck.andTestable
instance orTestable (p q : Prop) [Testable p] [Testable q] : Testable (p ∨ q) :=
⟨fun cfg min => do
let xp ← Testable.run p cfg min
match xp with
| success (PSum.inl h) => pure <| success (PSum.inl h)
| success (PSum.inr h) => pure <| success (PSum.inr <| Or.inl h)
| _ => do
let xq ← testable.run q cfg min
pure <| or_counter_example xp xq⟩
#align slim_check.or_testable SlimCheck.orTestable
instance iffTestable (p q : Prop) [Testable (p ∧ q ∨ ¬p ∧ ¬q)] : Testable (p ↔ q) :=
⟨fun cfg min => do
let xp ← Testable.run (p ∧ q ∨ ¬p ∧ ¬q) cfg min
return <| convert_counter_example' (by tauto) xp⟩
#align slim_check.iff_testable SlimCheck.iffTestable
open PrintableProp
instance (priority := 1000) decGuardTestable (p : Prop) [PrintableProp p] [Decidable p]
(β : p → Prop) [∀ h, Testable (β h)] : Testable (NamedBinder var <| ∀ h, β h) :=
⟨fun cfg min => do
if h : p then
match print_prop p with
| none =>
(fun r => convert_counter_example (· <| h) r (PSum.inr fun q _ => q)) <$>
testable.run (β h) cfg min
| some str =>
(fun r =>
add_to_counter_example (s! "guard: {str}") (· <| h) r (PSum.inr fun q _ => q)) <$>
testable.run (β h) cfg min
else
if cfg ∨ cfg then
match print_prop p with
| none => trace "discard" <| return <| gave_up 1
| some str => (trace s! "discard: {str} does not hold") <| return <| gave_up 1
else return <| gave_up 1⟩
#align slim_check.dec_guard_testable SlimCheck.decGuardTestable
/-- Type tag that replaces a type's `has_repr` instance with its `has_to_string` instance. -/
def UseHasToString (α : Type _) :=
α
#align slim_check.use_has_to_string SlimCheck.UseHasToString
instance UseHasToString.inhabited [I : Inhabited α] : Inhabited (UseHasToString α) :=
I
#align slim_check.use_has_to_string.inhabited SlimCheck.UseHasToString.inhabited
/-- Add the type tag `use_has_to_string` to an expression's type. -/
def UseHasToString.mk {α} (x : α) : UseHasToString α :=
x
#align slim_check.use_has_to_string.mk SlimCheck.UseHasToString.mk
instance [ToString α] : Repr (UseHasToString α) :=
⟨@toString α _⟩
instance (priority := 2000) allTypesTestable [Testable (f ℤ)] :
Testable (NamedBinder var <| ∀ x, f x) :=
⟨fun cfg min => do
let r ← Testable.run (f ℤ) cfg min
return <| add_var_to_counter_example var (use_has_to_string.mk "ℤ") (· <| ℤ) r⟩
#align slim_check.all_types_testable SlimCheck.allTypesTestable
/- warning: slim_check.trace_if_giveup -> SlimCheck.traceIfGiveup is a dubious translation:
lean 3 declaration is
forall {p : Prop} {α : Type.{u1}} {β : Type.{u2}} [_inst_1 : Repr.{u1} α], Bool -> String -> α -> (SlimCheck.TestResult p) -> (Thunkₓ.{u2} β) -> β
but is expected to have type
forall {p : Prop} {α : Type.{u2}} {β : Type.{u1}} [_inst_1 : Repr.{u2} α], Bool -> String -> α -> (SlimCheck.TestResult p) -> (Thunkₓ.{u1} β) -> β
Case conversion may be inaccurate. Consider using '#align slim_check.trace_if_giveup SlimCheck.traceIfGiveupₓ'. -/
/-- Trace the value of sampled variables if the sample is discarded. -/
def traceIfGiveup {p α β} [Repr α] (tracing_enabled : Bool) (var : String) (val : α) :
TestResult p → Thunk β → β
| test_result.gave_up _ => if tracing_enabled then trace s! " {var } := {repr val}" else (· <| ())
| _ => (· <| ())
#align slim_check.trace_if_giveup SlimCheck.traceIfGiveup
/- ./././Mathport/Syntax/Translate/Expr.lean:177:8: unsupported: ambiguous notation -/
/-- testable instance for a property iterating over the elements of a list -/
instance (priority := 5000) testForallInList [∀ x, Testable (β x)] [Repr α] :
∀ xs : List α, Testable (NamedBinder var <| ∀ x, NamedBinder var' <| x ∈ xs → β x)
| [] =>
⟨fun tracing min =>
return <|
success <|
PSum.inr
(by
introv x h
cases h)⟩
| x::xs =>
⟨fun cfg min => do
let r ← Testable.run (β x) cfg min
trace_if_giveup cfg var x r <|
match r with
| failure _ _ _ =>
return <|
add_var_to_counter_example var x
(by
intro h
apply h
left
rfl)
r
| success hp => do
let rs ← @testable.run _ (test_forall_in_list xs) cfg min
return <|
convert_counter_example
(by
intro h i h'
apply h
right
apply h')
rs
(combine
(PSum.inr <| by
intro j h
simp only [ball_cons, named_binder]
constructor <;> assumption)
hp)
| gave_up n => do
let rs ← @testable.run _ (test_forall_in_list xs) cfg min
match rs with
| success _ => return <| gave_up n
| failure Hce xs n =>
return <|
failure
(by
simp only [ball_cons, named_binder]
apply not_and_of_not_right _ Hce)
xs n
| gave_up n' => return <| gave_up (n + n')⟩
#align slim_check.test_forall_in_list SlimCheck.testForallInList
/-- Test proposition `p` by randomly selecting one of the provided
testable instances. -/
def combineTestable (p : Prop) (t : List <| Testable p) (h : 0 < t.length) : Testable p :=
⟨fun cfg min =>
have : 0 < length (map (fun t => @Testable.run _ t cfg min) t) :=
by
rw [length_map]
apply h
Gen.oneOf (List.map (fun t => @Testable.run _ t cfg min) t) this⟩
#align slim_check.combine_testable SlimCheck.combineTestable
open SampleableExt
/-- Format the counter-examples found in a test failure.
-/
def formatFailure (s : String) (xs : List String) (n : ℕ) : String :=
let counter_ex := String.intercalate "\n" xs
s! "
===================
{s }
{counter_ex }
({n} shrinks)
-------------------
"
#align slim_check.format_failure SlimCheck.formatFailure
/-- Format the counter-examples found in a test failure.
-/
def formatFailure' (s : String) {p} : TestResult p → String
| success a => ""
| gave_up a => ""
| test_result.failure _ xs n => formatFailure s xs n
#align slim_check.format_failure' SlimCheck.formatFailure'
/-- Increase the number of shrinking steps in a test result.
-/
def addShrinks {p} (n : ℕ) : TestResult p → TestResult p
| r@(success a) => r
| r@(gave_up a) => r
| test_result.failure h vs n' => TestResult.failure h vs <| n + n'
#align slim_check.add_shrinks SlimCheck.addShrinks
/-- Shrink a counter-example `x` by using `shrink x`, picking the first
candidate that falsifies a property and recursively shrinking that one.
The process is guaranteed to terminate because `shrink x` produces
a proof that all the values it produces are smaller (according to `sizeof`)
than `x`. -/
def minimizeAux [SampleableExt α] [∀ x, Testable (β x)] (cfg : SlimCheckCfg) (var : String) :
ProxyRepr α → ℕ → OptionT Gen (Σx, TestResult (β (interp α x))) :=
WellFounded.fix WellFoundedRelation.wf fun x f_rec n => do
if cfg then
return <|
trace
(s! "candidates for {var } :=
{repr (sampleable_ext.shrink x).toList}
")
()
else pure ()
let ⟨y, r, ⟨h₁⟩⟩ ←
(SampleableExt.shrink x).firstM fun ⟨a, h⟩ => do
let ⟨r⟩ ←
monadLift
(Uliftable.up <| Testable.run (β (interp α a)) cfg true :
Gen (ULift <| TestResult <| β <| interp α a))
if is_failure r then
pure (⟨a, r, ⟨h⟩⟩ : Σa, test_result (β (interp α a)) × PLift (sizeof_lt a x))
else failure
if cfg then
return <|
trace ((s! "{var } := {repr y}") ++ format_failure' "Shrink counter-example:" r) ()
else pure ()
f_rec y h₁ (n + 1) <|> pure ⟨y, add_shrinks (n + 1) r⟩
#align slim_check.minimize_aux SlimCheck.minimizeAux
/-- Once a property fails to hold on an example, look for smaller counter-examples
to show the user. -/
def minimize [SampleableExt α] [∀ x, Testable (β x)] (cfg : SlimCheckCfg) (var : String)
(x : ProxyRepr α) (r : TestResult (β (interp α x))) : Gen (Σx, TestResult (β (interp α x))) :=
do
if cfg then
return <| trace ((s! "{var } := {repr x}") ++ format_failure' "Shrink counter-example:" r) ()
else pure ()
let x' ← OptionT.run <| minimizeAux α _ cfg var x 0
pure <| x' ⟨x, r⟩
#align slim_check.minimize SlimCheck.minimize
instance (priority := 2000) existsTestable (p : Prop)
[Testable (NamedBinder var (∀ x, NamedBinder var' <| β x → p))] :
Testable (NamedBinder var' (NamedBinder var (∃ x, β x) → p)) :=
⟨fun cfg min => do
let x ← Testable.run (NamedBinder var (∀ x, NamedBinder var' <| β x → p)) cfg min
pure <| convert_counter_example' exists_imp x⟩
#align slim_check.exists_testable SlimCheck.existsTestable
/-- Test a universal property by creating a sample of the right type and instantiating the
bound variable with it -/
instance varTestable [SampleableExt α] [∀ x, Testable (β x)] :
Testable (NamedBinder var <| ∀ x : α, β x) :=
⟨fun cfg min => do
Uliftable.adaptDown (sampleable_ext.sample α) fun x => do
let r ← testable.run (β (sampleable_ext.interp α x)) cfg ff
Uliftable.adaptDown
(if is_failure r ∧ min then minimize _ _ cfg var x r
else if cfg then (trace s! " {var } := {repr x}") <| pure ⟨x, r⟩ else pure ⟨x, r⟩)
fun ⟨x, r⟩ =>
return <|
trace_if_giveup cfg var x r
(add_var_to_counter_example var x (· <| sampleable_ext.interp α x) r)⟩
#align slim_check.var_testable SlimCheck.varTestable
/-- Test a universal property about propositions -/
instance propVarTestable (β : Prop → Prop) [I : ∀ b : Bool, Testable (β b)] :
Testable (NamedBinder var <| ∀ p : Prop, β p) :=
⟨fun cfg min => do
(convert_counter_example fun h (b : Bool) => h b) <$>
@testable.run (named_binder var <| ∀ b : Bool, β b) _ cfg min⟩
#align slim_check.prop_var_testable SlimCheck.propVarTestable
instance (priority := 3000) unusedVarTestable (β) [Inhabited α] [Testable β] :
Testable (NamedBinder var <| ∀ x : α, β) :=
⟨fun cfg min => do
let r ← Testable.run β cfg min
pure <| convert_counter_example (· <| default) r (PSum.inr fun x _ => x)⟩
#align slim_check.unused_var_testable SlimCheck.unusedVarTestable
instance (priority := 2000) subtypeVarTestable {p : α → Prop} [∀ x, PrintableProp (p x)]
[∀ x, Testable (β x)] [I : SampleableExt (Subtype p)] :
Testable (NamedBinder var <| ∀ x : α, NamedBinder var' <| p x → β x) :=
⟨fun cfg min => do
let test (x : Subtype p) : Testable (β x) :=
⟨fun cfg min => do
let r ← Testable.run (β x.val) cfg min
match print_prop (p x) with
| none => pure r
| some str =>
pure <| add_to_counter_example (s! "guard: {str} (by construction)") id r (PSum.inr id)⟩
let r ← @Testable.run (∀ x : Subtype p, β x.val) (@SlimCheck.varTestable var _ _ I test) cfg min
pure <|
convert_counter_example'
⟨fun (h : ∀ x : Subtype p, β x) x h' => h ⟨x, h'⟩, fun h ⟨x, h'⟩ => h x h'⟩ r⟩
#align slim_check.subtype_var_testable SlimCheck.subtypeVarTestable
instance (priority := 100) decidableTestable (p : Prop) [PrintableProp p] [Decidable p] :
Testable p :=
⟨fun cfg min =>
return <|
if h : p then success (PSum.inr h)
else
match printProp p with
| none => failure h [] 0
| some str => failure h [s! "issue: {str} does not hold"] 0⟩
#align slim_check.decidable_testable SlimCheck.decidableTestable
#print SlimCheck.Eq.printableProp /-
instance Eq.printableProp {α} [Repr α] (x y : α) : PrintableProp (x = y) :=
⟨some s!"{(repr x)} = {repr y}"⟩
#align slim_check.eq.printable_prop SlimCheck.Eq.printableProp
-/
#print SlimCheck.Ne.printableProp /-
instance Ne.printableProp {α} [Repr α] (x y : α) : PrintableProp (x ≠ y) :=
⟨some s!"{(repr x)} ≠ {repr y}"⟩
#align slim_check.ne.printable_prop SlimCheck.Ne.printableProp
-/
instance Le.printableProp {α} [LE α] [Repr α] (x y : α) : PrintableProp (x ≤ y) :=
⟨some s!"{(repr x)} ≤ {repr y}"⟩
#align slim_check.le.printable_prop SlimCheck.Le.printableProp
instance Lt.printableProp {α} [LT α] [Repr α] (x y : α) : PrintableProp (x < y) :=
⟨some s!"{(repr x)} < {repr y}"⟩
#align slim_check.lt.printable_prop SlimCheck.Lt.printableProp
instance Perm.printableProp {α} [Repr α] (xs ys : List α) : PrintableProp (xs ~ ys) :=
⟨some s!"{(repr xs)} ~ {repr ys}"⟩
#align slim_check.perm.printable_prop SlimCheck.Perm.printableProp
#print SlimCheck.And.printableProp /-
instance And.printableProp (x y : Prop) [PrintableProp x] [PrintableProp y] :
PrintableProp (x ∧ y) :=
⟨do
let x' ← printProp x
let y' ← printProp y
some s! "({x' } ∧ {y'})"⟩
#align slim_check.and.printable_prop SlimCheck.And.printableProp
-/
#print SlimCheck.Or.printableProp /-
instance Or.printableProp (x y : Prop) [PrintableProp x] [PrintableProp y] :
PrintableProp (x ∨ y) :=
⟨do
let x' ← printProp x
let y' ← printProp y
some s! "({x' } ∨ {y'})"⟩
#align slim_check.or.printable_prop SlimCheck.Or.printableProp
-/
#print SlimCheck.Iff.printableProp /-
instance Iff.printableProp (x y : Prop) [PrintableProp x] [PrintableProp y] :
PrintableProp (x ↔ y) :=
⟨do
let x' ← printProp x
let y' ← printProp y
some s! "({x' } ↔ {y'})"⟩
#align slim_check.iff.printable_prop SlimCheck.Iff.printableProp
-/
#print SlimCheck.Imp.printableProp /-
instance Imp.printableProp (x y : Prop) [PrintableProp x] [PrintableProp y] :
PrintableProp (x → y) :=
⟨do
let x' ← printProp x
let y' ← printProp y
some s! "({x' } → {y'})"⟩
#align slim_check.imp.printable_prop SlimCheck.Imp.printableProp
-/
#print SlimCheck.Not.printableProp /-
instance Not.printableProp (x : Prop) [PrintableProp x] : PrintableProp ¬x :=
⟨do
let x' ← printProp x
some s! "¬ {x'}"⟩
#align slim_check.not.printable_prop SlimCheck.Not.printableProp
-/
#print SlimCheck.True.printableProp /-
instance True.printableProp : PrintableProp True :=
⟨some "true"⟩
#align slim_check.true.printable_prop SlimCheck.True.printableProp
-/
#print SlimCheck.False.printableProp /-
instance False.printableProp : PrintableProp False :=
⟨some "false"⟩
#align slim_check.false.printable_prop SlimCheck.False.printableProp
-/
#print SlimCheck.Bool.printableProp /-
instance Bool.printableProp (b : Bool) : PrintableProp b :=
⟨some <| if b then "true" else "false"⟩
#align slim_check.bool.printable_prop SlimCheck.Bool.printableProp
-/
section Io
open _Root_.Nat
variable {p : Prop}
#print SlimCheck.retry /-
/-- Execute `cmd` and repeat every time the result is `gave_up` (at most
`n` times). -/
def retry (cmd : Rand (TestResult p)) : ℕ → Rand (TestResult p)
| 0 => return <| gaveUp 1
| succ n => do
let r ← cmd
match r with
| success hp => return <| success hp
| failure Hce xs n => return (failure Hce xs n)
| gave_up _ => retry n
#align slim_check.retry SlimCheck.retry
-/
#print SlimCheck.giveUp /-
/-- Count the number of times the test procedure gave up. -/
def giveUp (x : ℕ) : TestResult p → TestResult p
| success (PSum.inl ()) => gaveUp x
| success (PSum.inr p) => success (PSum.inr p)
| gave_up n => gaveUp (n + x)
| failure Hce xs n => failure Hce xs n
#align slim_check.give_up SlimCheck.giveUp
-/
variable (p)
variable [Testable p]
/- warning: slim_check.testable.run_suite_aux -> SlimCheck.Testable.runSuiteAux is a dubious translation:
lean 3 declaration is
forall (p : Prop) [_inst_1 : SlimCheck.Testable p], SlimCheck.SlimCheckCfg -> (SlimCheck.TestResult p) -> Nat -> (Rand.{0} (SlimCheck.TestResult p))
but is expected to have type
forall (p : Prop) [_inst_1 : SlimCheck.Testable p], SlimCheck.Configuration -> (SlimCheck.TestResult p) -> Nat -> (Rand.{0} (SlimCheck.TestResult p))
Case conversion may be inaccurate. Consider using '#align slim_check.testable.run_suite_aux SlimCheck.Testable.runSuiteAuxₓ'. -/
/-- Try `n` times to find a counter-example for `p`. -/
def Testable.runSuiteAux (cfg : SlimCheckCfg) : TestResult p → ℕ → Rand (TestResult p)
| r, 0 => return r
| r, succ n => do
let size := (cfg.numInst - n - 1) * cfg.maxSize / cfg.numInst
when cfg <| return <| trace s!"[slim_check: sample]" ()
let x ← retry ((Testable.run p cfg true).run ⟨size⟩) 10
match x with
| success (PSum.inl ()) => testable.run_suite_aux r n
| success (PSum.inr Hp) => return <| success (PSum.inr Hp)
| failure Hce xs n => return (failure Hce xs n)
| gave_up g => testable.run_suite_aux (give_up g r) n
#align slim_check.testable.run_suite_aux SlimCheck.Testable.runSuiteAux
/- warning: slim_check.testable.run_suite -> SlimCheck.Testable.runSuite is a dubious translation:
lean 3 declaration is
forall (p : Prop) [_inst_1 : SlimCheck.Testable p], (optParam.{1} SlimCheck.SlimCheckCfg (SlimCheck.SlimCheckCfg.mk (OfNat.ofNat.{0} Nat 100 (OfNat.mk.{0} Nat 100 (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))) (OfNat.ofNat.{0} Nat 100 (OfNat.mk.{0} Nat 100 (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))) Bool.false Bool.false Bool.false Bool.false (Option.none.{0} Nat) Bool.false)) -> (Rand.{0} (SlimCheck.TestResult p))
but is expected to have type
forall (p : Prop) [_inst_1 : SlimCheck.Testable p], (optParam.{1} SlimCheck.Configuration (SlimCheck.Configuration.mk ([mdata structInstDefault:1 OfNat.ofNat.{0} Nat 100 (instOfNatNat 100)]) ([mdata structInstDefault:1 OfNat.ofNat.{0} Nat 100 (instOfNatNat 100)]) ([mdata structInstDefault:1 OfNat.ofNat.{0} Nat 10 (instOfNatNat 10)]) ([mdata structInstDefault:1 Bool.false]) ([mdata structInstDefault:1 Bool.false]) ([mdata structInstDefault:1 Bool.false]) ([mdata structInstDefault:1 Bool.false]) ([mdata structInstDefault:1 Option.none.{0} Nat]) ([mdata structInstDefault:1 Bool.false]))) -> (Rand.{0} (SlimCheck.TestResult p))
Case conversion may be inaccurate. Consider using '#align slim_check.testable.run_suite SlimCheck.Testable.runSuiteₓ'. -/
/-- Try to find a counter-example of `p`. -/
def Testable.runSuite (cfg : SlimCheckCfg := { }) : Rand (TestResult p) :=
Testable.runSuiteAux p cfg (success <| PSum.inl ()) cfg.numInst
#align slim_check.testable.run_suite SlimCheck.Testable.runSuite
/-- Run a test suite for `p` in `io`. -/
def Testable.check' (cfg : SlimCheckCfg := { }) : Io (TestResult p) :=
match cfg.randomSeed with
| some seed => Io.runRandWith seed (Testable.runSuite p cfg)
| none => Io.runRand (Testable.runSuite p cfg)
#align slim_check.testable.check' SlimCheck.Testable.check'
namespace Tactic
open _Root_.Tactic Expr
/-!
## Decorations
Instances of `testable` use `named_binder` as a decoration on
propositions in order to access the name of bound variables, as in
`named_binder "x" (forall x, x < y)`. This helps the
`testable` instances create useful error messages where variables
are matched with values that falsify a given proposition.
The following functions help support the gadget so that the user does
not have to put them in themselves.
-/
/-- `add_existential_decorations p` adds a `named_binder` annotation at the
root of `p` if `p` is an existential quantification. -/
unsafe def add_existential_decorations : expr → expr
| e@q(@Exists $(α) $(lam n bi d b)) =>
let n := toString n
const `` named_binder [] (q(n) : expr) e
| e => e
#align slim_check.tactic.add_existential_decorations slim_check.tactic.add_existential_decorations
/-- Traverse the syntax of a proposition to find universal quantifiers
and existential quantifiers and add `named_binder` annotations next to
them. -/
unsafe def add_decorations : expr → expr
| e =>
e.replace fun e _ =>
match e with
| pi n bi d b =>
let n := toString n
some <|
const `` named_binder [] (q(n) : expr)
(pi n bi (add_existential_decorations d) (add_decorations b))
| e => none
#align slim_check.tactic.add_decorations slim_check.tactic.add_decorations
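/- A concrete sketch of the effect (the example proposition is ours, not from the
source): applied to `∀ n : ℕ, n < n + 1`, `add_decorations` produces a term
definitionally equal to `named_binder "n" (∀ n : ℕ, n < n + 1)`, so that a
falsifying assignment can later be reported as `n := ...` by the `testable`
instances. -/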
/-- `decorations_of p` is used as a hint to `mk_decorations` to specify
that the goal should be satisfied with a proposition equivalent to `p`
with added annotations. -/
@[reducible, nolint unused_arguments]
def DecorationsOf (p : Prop) :=
Prop
#align slim_check.tactic.decorations_of SlimCheck.Tactic.DecorationsOf
/-- In a goal of the shape `⊢ tactic.decorations_of p`, `mk_decoration` examines
the syntax of `p` and add `named_binder` around universal quantifications and
existential quantifications to improve error messages.
This tool can be used in the declaration of a function as follows:
```lean
def foo (p : Prop) (p' : tactic.decorations_of p . mk_decorations) [testable p'] : ...
```
`p` is the parameter given by the user, `p'` is an equivalent proposition where
the quantifiers are annotated with `named_binder`.
-/
unsafe def mk_decorations : tactic Unit := do
let q(Tactic.DecorationsOf $(p)) ← target
exact <| add_decorations p
#align slim_check.tactic.mk_decorations slim_check.tactic.mk_decorations
end Tactic
/- warning: slim_check.testable.check -> SlimCheck.Testable.check is a dubious translation:
lean 3 declaration is
forall (p : Prop), (optParam.{1} SlimCheck.SlimCheckCfg (SlimCheck.SlimCheckCfg.mk (OfNat.ofNat.{0} Nat 100 (OfNat.mk.{0} Nat 100 (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))) (OfNat.ofNat.{0} Nat 100 (OfNat.mk.{0} Nat 100 (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))) Bool.false Bool.false Bool.false Bool.false (Option.none.{0} Nat) Bool.false)) -> (forall (p' : autoParamₓ.{1} (SlimCheck.Tactic.DecorationsOf p) (Name.mk_string (String.str (String.str (String.str (String.str (String.str (String.str (String.str (String.str (String.str (String.str (String.str (String.str (String.str (String.str String.empty (Char.ofNat (OfNat.ofNat.{0} Nat 109 (OfNat.mk.{0} Nat 109 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 107 (OfNat.mk.{0} Nat 107 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 95 (OfNat.mk.{0} Nat 95 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 100 (OfNat.mk.{0} Nat 100 (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 101 (OfNat.mk.{0} Nat 101 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 99 (OfNat.mk.{0} Nat 99 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 111 (OfNat.mk.{0} Nat 111 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 114 (OfNat.mk.{0} Nat 114 (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 97 (OfNat.mk.{0} Nat 97 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 116 
(OfNat.mk.{0} Nat 116 (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 105 (OfNat.mk.{0} Nat 105 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 111 (OfNat.mk.{0} Nat 111 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 110 (OfNat.mk.{0} Nat 110 (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 115 (OfNat.mk.{0} Nat 115 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Name.mk_string (String.str (String.str (String.str (String.str (String.str (String.str String.empty (Char.ofNat (OfNat.ofNat.{0} Nat 116 (OfNat.mk.{0} Nat 116 (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 97 (OfNat.mk.{0} Nat 97 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 99 (OfNat.mk.{0} Nat 99 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 116 (OfNat.mk.{0} Nat 116 (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 105 (OfNat.mk.{0} Nat 105 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 99 (OfNat.mk.{0} Nat 99 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Name.mk_string (String.str (String.str (String.str (String.str (String.str (String.str (String.str (String.str (String.str (String.str String.empty (Char.ofNat (OfNat.ofNat.{0} Nat 115 (OfNat.mk.{0} Nat 115 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd 
(bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 108 (OfNat.mk.{0} Nat 108 (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 105 (OfNat.mk.{0} Nat 105 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 109 (OfNat.mk.{0} Nat 109 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 95 (OfNat.mk.{0} Nat 95 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 99 (OfNat.mk.{0} Nat 99 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 104 (OfNat.mk.{0} Nat 104 (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 101 (OfNat.mk.{0} Nat 101 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 99 (OfNat.mk.{0} Nat 99 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) (Char.ofNat (OfNat.ofNat.{0} Nat 107 (OfNat.mk.{0} Nat 107 (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (bit0.{0} Nat Nat.hasAdd (bit1.{0} Nat Nat.hasOne Nat.hasAdd (One.one.{0} Nat Nat.hasOne))))))))))) Name.anonymous)))) [_inst_2 : SlimCheck.Testable p'], Io PUnit.{1})
but is expected to have type
forall (p : Prop), (optParam.{1} SlimCheck.Configuration (SlimCheck.Configuration.mk ([mdata structInstDefault:1 OfNat.ofNat.{0} Nat 100 (instOfNatNat 100)]) ([mdata structInstDefault:1 OfNat.ofNat.{0} Nat 100 (instOfNatNat 100)]) ([mdata structInstDefault:1 OfNat.ofNat.{0} Nat 10 (instOfNatNat 10)]) ([mdata structInstDefault:1 Bool.false]) ([mdata structInstDefault:1 Bool.false]) ([mdata structInstDefault:1 Bool.false]) ([mdata structInstDefault:1 Bool.false]) ([mdata structInstDefault:1 Option.none.{0} Nat]) ([mdata structInstDefault:1 Bool.false]))) -> (forall (p' : autoParam.{1} (SlimCheck.Decorations.DecorationsOf p) [email protected]._hyg.6287) [_inst_2 : SlimCheck.Testable p'], IO PUnit.{1})
Case conversion may be inaccurate. Consider using '#align slim_check.testable.check SlimCheck.Testable.checkₓ'. -/
/- ./././Mathport/Syntax/Translate/Tactic/Builtin.lean:69:18: unsupported non-interactive tactic tactic.mk_decorations -/
/-- Run a test suite for `p` and return true or false: should we believe that `p` holds? -/
def Testable.check (p : Prop) (cfg : SlimCheckCfg := { })
(p' : Tactic.DecorationsOf p := by
run_tac
tactic.mk_decorations)
[Testable p'] : Io PUnit := do
let x ←
match cfg.randomSeed with
| some seed => Io.runRandWith seed (Testable.runSuite p' cfg)
| none => Io.runRand (Testable.runSuite p' cfg)
match x with
| success _ => when ¬cfg.quiet <| Io.putStrLn "Success"
| gave_up n => Io.fail s! "Gave up {repr n} times"
| failure _ xs n => do
Io.fail <| format_failure "Found problems!" xs n
#align slim_check.testable.check SlimCheck.Testable.check
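/- A hypothetical usage sketch (the concrete call is ours): from `io` one would run
something like `Testable.check (∀ n : ℕ, n < n + 1) { numInst := 50 }`, which
prints "Success" (unless `quiet` is set), fails with "Gave up ..." when too many
samples were discarded, or fails with a formatted counter-example otherwise. -/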
end Io
end SlimCheck
|
lemma poly_eq_fold_coeffs: \<open>poly p = fold_coeffs (\<lambda>a f x. a + x * f x) p (\<lambda>x. 0)\<close> |
[STATEMENT]
theorem Spy_not_see_encrypted_key:
"\<lbrakk>Says Server B
\<lbrace>NA, Crypt (shrK A) \<lbrace>NA, Key K\<rbrace>,
Crypt (shrK B) \<lbrace>NB, Key K\<rbrace>\<rbrace> \<in> set evs;
Notes Spy \<lbrace>NA, NB, Key K\<rbrace> \<notin> set evs;
A \<notin> bad; B \<notin> bad; evs \<in> otway\<rbrakk>
\<Longrightarrow> Key K \<notin> analz (knows Spy evs)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>Says Server B \<lbrace>NA, Crypt (shrK A) \<lbrace>NA, Key K\<rbrace>, Crypt (shrK B) \<lbrace>NB, Key K\<rbrace>\<rbrace> \<in> set evs; Notes Spy \<lbrace>NA, NB, Key K\<rbrace> \<notin> set evs; A \<notin> bad; B \<notin> bad; evs \<in> otway\<rbrakk> \<Longrightarrow> Key K \<notin> analz (knows Spy evs)
[PROOF STEP]
by (blast dest: Says_Server_message_form secrecy_lemma) |
lemma isCont_snd [simp]: "isCont f a \<Longrightarrow> isCont (\<lambda>x. snd (f x)) a" |
\section*{Run \#14}
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{images/14/stream_image}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{images/14/trajectories}
\end{figure}
|
lemma complex_Re_numeral [simp]: "Re (numeral v) = numeral v" |
import number_theory.bernoulli
--OUTPUT 1
theorem bernoulli_eval_one_add (n : ℕ) (x : ℚ) :
(polynomial.bernoulli n).eval (1 + x) = (polynomial.bernoulli n).eval x + n * x^(n - 1) :=
begin
-- We apply strong induction on $n$. So, for all $m < d$, we have $B_{m} (1 + x) = B_{m} (x) + m x^{m - 1}$ and we want to show that $$B_{d} (1 + x) = B_{d} (x) + d x^{d - 1}$$
have h1 : ∀ (m : ℕ) (h : m < n), (polynomial.bernoulli m).eval (1 + x) = (polynomial.bernoulli m).eval x + m * x^(m - 1), from
assume (m : ℕ) (h : m < n),
begin
induction m with d hd,
-- base case
show (polynomial.bernoulli 0).eval (1 + x) = (polynomial.bernoulli 0).eval x + 0 * x^(0 - 1), from by {
rw polynomial.bernoulli,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_C
end --Needs more than 2000 tokens!
--OUTPUT 2
theorem bernoulli_eval_one_add (n : ℕ) (x : ℚ) :
(polynomial.bernoulli n).eval (1 + x) = (polynomial.bernoulli n).eval x + n * x^(n - 1) :=
begin
-- We apply strong induction on $n$. So, for all $m < d$, we have $B_{m} (1 + x) = B_{m} (x) + m x^{m - 1}$ and we want to show that $$B_{d} (1 + x) = B_{d} (x) + d x^{d - 1}$$
have h1 : ∀ m : ℕ, m < n → (polynomial.bernoulli m).eval (1 + x) = (polynomial.bernoulli m).eval x + m * x^(m - 1), from
assume (m : ℕ) (h : m < n),
begin
-- Multiplying both sides by $d + 1$, and using the fact that, for all $l \in \mathbb{N}$, $\sum_{k = 0}^{l} {l + 1 \choose k} B_k = (l + 1) X^l$ (where $B_k$ is the $k$-th Bernoulli number), we get that
-- $$ (d + 1) (1 + x)^d - (d + 1) x^d = \sum_{l = 0}^{d} {d + 1 \choose l} l x^{l - 1} $$
have h2 : (polynomial.bernoulli m).eval (1 + x) = (polynomial.bernoulli m).eval x + m * x^(m - 1), from
begin
-- Multiplying both sides by $d + 1$, and using the fact that, for all $l \in \mathbb{N}$, $\sum_{k = 0}^{l} {l + 1 \choose k} B_k = (l + 1) X^l$ (where $B_k$ is the $k$-th Bernoulli number), we get that
-- $$ (d + 1) (1 + x)^d - (d + 1) x^d = \sum_{l = 0}^{d} {d + 1 \choose l} l x^{l - 1} $$
have h3 : (polynomial.bernoulli m).eval (1 + x) = (polynomial.bernoulli m).eval x + m * x^(m - 1), from
begin
-- Multiplying both sides by $d + 1$, and using the fact that, for all $l \in \mathbb{N}$, $\sum_{k = 0}^{l} {l + 1 \choose k} B_k = (l + 1) X^l$ (where $B_k$ is the $k$-th Bernoulli number), we get that
-- $$ (d + 1) (1 + x)^d - (d + 1) x^d = \sum_{l = 0}^{d} {d + 1 \choose l} l x^{l - 1} $$
have h4 : (polynomial.bernoulli m).eval (1 + x) = (polynomial.bernoulli m).eval x + m * x^(m - 1), from
begin
-- Multiplying both sides by $d + 1$, and using the fact that, for all $l \in \mathbb{N}$, $\sum_{k = 0}^{l} {l + 1 \choose k} B_k = (l + 1) X^l$ (where $B_k$ is the $k$-th Bernoulli number), we get that
-- $$ (d + 1) (1 + x)^d - (d + 1) x^d = \sum_{l = 0}^{d} {d + 1 \choose l} l x^{l - 1} $$
have h5 : (polynomial.bernoulli m).eval (1 + x) = (polynomial.bernoulli m).eval x + m * x^(m - 1), from
begin
-- Multiplying both sides by $d + 1$, and using the fact that, for all $l \in \mathbb{N}$, $\sum_{k = 0}^{l} {l + 1 \choose k} B_k = (l + 1) X^l$ (where $B_k$ is the $k$-th Bernoulli number), we get that
-- $$ (d + 1) (1 + x)^d - (d + 1) x^d = \sum_{l = 0}^{d} {d + 1 \choose l} l x^{l - 1} $$
have h6 : (polynomial.bernoulli m).eval (1 + x) = (polynomial.bernoulli m).eval x + m * x^(m - 1), from
begin
-- Multiplying both sides by $d + 1$, and using the fact that, for all $l \in \mathbb{N}$, $\sum_{k = 0}^{l} {l + 1 \choose k} B_k = (l + 1) X^l$ (where $B_k$ is the $k$-th Bernoulli number), we get that
-- $$ (d + 1) (1 + x)^d - (d + 1) x^d = \sum_{l = 0}^{d} {d + 1 \choose l} l x^{l - 1} $$
have h7 : (polynomial.bernoulli m).eval (1 + x) = (polynomial.bernoulli m).eval x + m * x^(m - 1), from
begin
-- Multiplying both sides by $d + 1$, and using the fact that, for all $l \in \mathbb{N}$, $\sum_{k = 0}^{l} {l + 1 \choose k} B_k = (l + 1) X^l$ (where $B_k$ is the $k$-th Bernoulli number), we get that
-- $$ (d + 1) (1 + x)^d - (d + 1) x^d = \sum_{l = 0}^{d} {d + 1 \choose l} l x^{l - 1} $$
have h8 : (polynomial.bernoulli m).eval (1 + x) = (polynomial.bernoulli m).eval x + m * x^(m - 1), from
begin
-- Multiplying both sides by $d + 1$, and using the fact that, for all $l \in \mathbb{N}$, $\sum_{k = 0}^{l} {l + 1 \choose k} B_k = (l + 1) X^l$ (where $B_k$ is the $k$-th Bernoulli number), we get that
-- $$ (d + 1) (1 + x)^d - (d + 1) x^d = \sum_{l = 0}^{d} {d + 1 \choose l} l x^{l - 1} $$
have h9 : (polynomial.bernoulli m).eval (1 + x) = (polynomial.bernoulli m).eval x + m * x^(m - 1), from
begin
-- Multiplying both sides by $d + 1$, and using the fact that, for all $l \in \mathbb{N}$, $\sum_{k = 0}^{l} {l + 1 \choose k} B_k = (l + 1) X^l$ (where $B_k$ is the $k$-th Bernoulli number), we get that
-- $$ (d + 1) (1 + x)^d - (d + 1) x^d = \sum_{l = 0}^{d} {d + 1 \choose l} l x^{l - 1} $$
have h10 : (polynomial.bernoulli m).eval (1 + x) = (polynomial.bernoulli m).eval x + m * x^(m - 1), from
begin
-- Multiplying both sides by $d + 1$, and using the fact that, for all $l \in \mathbb{N}$, $\sum_{k = 0}^{l} {l + 1 \choose k} B_k = (l + 1) X^l$ (where $B_k$ is the $k$-th Bernoulli number), we get that
-- $$ (d + 1) (1 + x)^d - (d +
end --Needs more than 2000 tokens!
--OUTPUT 3
theorem bernoulli_eval_one_add (n : ℕ) (x : ℚ) :
(polynomial.bernoulli n).eval (1 + x) = (polynomial.bernoulli n).eval x + n * x^(n - 1) :=
begin
-- We apply strong induction on $n$. So, for all $m < d$, we have $B_{m} (1 + x) = B_{m} (x) + m x^{m - 1}$ and we want to show that $$B_{d} (1 + x) = B_{d} (x) + d x^{d - 1}$$
induction n with d hd,
{ -- base case
show (polynomial.bernoulli 0).eval (1 + x) = (polynomial.bernoulli 0).eval x + 0 * x^(0 - 1), from by {
rw polynomial.bernoulli,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.eval_mul,
rw polynomial.eval_mul,
rw polynomial.eval_C,
rw polynomial.eval_C,
rw polynomial.eval_X,
rw polynomial.eval_X,
rw polynomial.eval_add,
rw polynomial.eval_add,
rw polynomial.
end --Needs more than 2000 tokens!
/- FEW SHOT PROMPTS TO CODEX(START)
/--`theorem`
Power Set is Closed under Intersection
Let $S$ be a set.
Let $\powerset S$ be the power set of $S$.
Then:
:$\forall A, B \in \powerset S: A \cap B \in \powerset S$
`proof`
Let $A, B \in \powerset S$.
Then by the definition of power set, $A \subseteq S$ and $B \subseteq S$.
From Intersection is Subset we have that $A \cap B \subseteq A$.
It follows from Subset Relation is Transitive that $A \cap B \subseteq S$.
Thus $A \cap B \in \powerset S$ and closure is proved.
{{qed}}
-/
theorem power_set_intersection_closed {α : Type*} (S : set α) : ∀ A B ∈ 𝒫 S, (A ∩ B) ∈ 𝒫 S :=
begin
-- $A$ and $B$ are sets. $A$ and $B$ belong to power set of $S$
assume (A : set α) (hA : A ∈ 𝒫 S) (B : set α) (hB : B ∈ 𝒫 S),
-- Then $A ⊆ S$ and $B ⊆ S$, by power set definition
have h1 : (A ⊆ S) ∧ (B ⊆ S), from by {split,apply set.subset_of_mem_powerset,exact hA,apply set.subset_of_mem_powerset,exact hB},
-- Then $(A ∩ B) ⊆ A$, by intersection of set is a subset
have h2 : (A ∩ B) ⊆ A, from by apply set.inter_subset_left,
-- Then $(A ∩ B) ⊆ S$, by subset relation is transitive
have h3 : (A ∩ B) ⊆ S, from by {apply set.subset.trans h2 h1.left},
-- Hence $(A ∩ B) ∈ 𝒫 S$, by power set definition
show (A ∩ B) ∈ 𝒫 S, from by {apply set.mem_powerset h3},
end
/--`theorem`
Square of Sum
:$\forall x, y \in \R: \paren {x + y}^2 = x^2 + 2 x y + y^2$
`proof`
Follows from the distribution of multiplication over addition:
{{begin-eqn}}
{{eqn | l = \left({x + y}\right)^2
| r = \left({x + y}\right) \cdot \left({x + y}\right)
}}
{{eqn | r = x \cdot \left({x + y}\right) + y \cdot \left({x + y}\right)
| c = Real Multiplication Distributes over Addition
}}
{{eqn | r = x \cdot x + x \cdot y + y \cdot x + y \cdot y
| c = Real Multiplication Distributes over Addition
}}
{{eqn | r = x^2 + 2xy + y^2
| c =
}}
{{end-eqn}}
{{qed}}
-/
theorem square_of_sum (x y : ℝ) : (x + y)^2 = (x^2 + 2*x*y + y^2) :=
begin
-- expand the power
calc (x + y)^2 = (x+y)*(x+y) : by rw sq
-- distributive property of multiplication over addition gives:
... = x*(x+y) + y*(x+y) : by rw add_mul
-- applying the above property further gives:
... = x*x + x*y + y*x + y*y : by {rw [mul_comm x (x+y),mul_comm y (x+y)], rw [add_mul,add_mul], ring}
-- rearranging the terms using commutativity and adding gives:
... = x^2 + 2*x*y + y^2 : by {repeat {rw ← sq}, rw mul_comm y x, ring}
end
/--`theorem`
Identity of Group is Unique
Let $\struct {G, \circ}$ be a group. Then there is a unique identity element $e \in G$.
`proof`
From Group has Latin Square Property, there exists a unique $x \in G$ such that:
:$a x = b$
and there exists a unique $y \in G$ such that:
:$y a = b$
Setting $b = a$, this becomes:
There exists a unique $x \in G$ such that:
:$a x = a$
and there exists a unique $y \in G$ such that:
:$y a = a$
These $x$ and $y$ are both $e$, by definition of identity element.
{{qed}}
-/
theorem group_identity_unique {G : Type*} [group G] : ∃! e : G, ∀ a : G, e * a = a ∧ a * e = a :=
begin
-- Group has Latin Square Property
have h1 : ∀ a b : G, ∃! x : G, a * x = b, from by {
assume a b : G, use a⁻¹ * b, obviously, },
have h2 : ∀ a b : G, ∃! y : G, y * a = b, from by {
assume a b : G, use b * a⁻¹, obviously, },
-- Setting $b = a$, this becomes:
have h3 : ∀ a : G, ∃! x : G, a * x = a, from
assume a : G, h1 a a,
have h4 : ∀ a : G, ∃! y : G, y * a = a, from
assume a : G, h2 a a,
-- These $x$ and $y$ are both $(1 : G)$, by definition of identity element
have h5 : ∀ a : G, classical.some (h3 a).exists = (1 : G), from assume a :G,
exists_unique.unique (h3 a) (classical.some_spec (exists_unique.exists (h3 a)))
(mul_one a),
have h6 : ∀ a : G, classical.some (h4 a).exists = (1 : G), from assume a : G,
exists_unique.unique (h4 a) (classical.some_spec (exists_unique.exists (h4 a))) (one_mul a),
show ∃! e : G, ∀ a : G, e * a = a ∧ a * e = a, from by {
use (1 : G),
have h7 : ∀ e : G, (∀ a : G, e * a = a ∧ a * e = a) → e = 1, from by {
assume (e : G) (hident : ∀ a : G, e * a = a ∧ a * e = a),
have h8 : ∀ a : G, e = classical.some (h3 a).exists, from assume (a : G),
exists_unique.unique (h3 a) (hident a).right
(classical.some_spec (exists_unique.exists (h3 a))),
have h9 : ∀ a : G, e = classical.some (h4 a).exists, from assume (a : G),
exists_unique.unique (h4 a) (hident a).left
(classical.some_spec (exists_unique.exists (h4 a))),
show e = (1 : G), from eq.trans (h9 e) (h6 _),
},
exact ⟨by obviously, h7⟩,
}
end
/--`theorem`
Bernoulli polynomial evaluation
Given a natural number $n$ and a rational $x$, let $B_n (x)$ denote the $n$-th Bernoulli polynomial evaluated at $x$. Then, $$B_n (1 + x) = B_n (x) + n x^{n - 1}$$
`proof`
We apply strong induction on $n$. So, for all $m < d$, we have $B_{m} (1 + x) = B_{m} (x) + m x^{m - 1}$ and we want to show that $$B_{d} (1 + x) = B_{d} (x) + d x^{d - 1}$$
Multiplying both sides by $d + 1$, and using the fact that, for all $l \in \mathbb{N}$, $\sum_{k = 0}^{l} {l + 1 \choose k} B_k = (l + 1) X^l$ (where $B_k$ is the $k$-th Bernoulli number), we get that
$$ (d + 1) (1 + x)^d - (d + 1) x^d = \sum_{l = 0}^{d} {d + 1 \choose l} l x^{l - 1} $$
The conclusion then follows easily.
QED
-/
theorem bernoulli_eval_one_add (n : ℕ) (x : ℚ) :
(polynomial.bernoulli n).eval (1 + x) = (polynomial.bernoulli n).eval x + n * x^(n - 1) :=
FEW SHOT PROMPTS TO CODEX(END)-/
|
Formal statement is: lemma sphere_trivial [simp]: "sphere x 0 = {x}" Informal statement is: The sphere of radius $0$ around $x$ is just the point $x$. |
import Language.Reflection
%language ElabReflection
solveReflected : TTImp -> Elab any
solveReflected `(Builtin.Equal {a=_} {b=_} ~(left) ~(right))
= do logTerm "" 0 "Left" left
logTerm "" 0 "Right" right
fail "Not done"
solveReflected g
= do logTerm "" 0 "Goal" g
fail "I don't know how to prove this"
%macro
prove : Elab any
prove
= do env <- localVars
Just g <- goal
| Nothing => fail "No goal to solve"
logMsg "" 0 (show env)
solveReflected g
commutes : (x, y : Nat) -> plus x y = plus y x
commutes x y = prove
|
[STATEMENT]
lemma power: "VARS (p::int) i
{ True }
p := 1;
i := 0;
WHILE i < n
INV { p = x^i \<and> i \<le> n }
DO p := p * x;
i := i + 1
OD
{ p = x^n }"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. {True}
p := 1; i := 0; WHILE i < n INV {p = x ^ i \<and> i \<le> n} VAR {0}
DO p := p * x; i := i + 1 OD
{p = x ^ n}
[PROOF STEP]
apply vcg_simp
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<And>i. i \<le> n \<and> \<not> i < n \<Longrightarrow> x ^ i = x ^ n
[PROOF STEP]
by auto |
> module EscardoOliva.Operations
> import Data.List
> import EscardoOliva.SelectionFunction
> import EscardoOliva.Quantifier
> %default total
> %access public export
> %auto_implicits off
> ||| The quantifier induced by a selection function.
> overline : {X, R : Type} -> J R X -> K R X
> overline e p = p (e p)
> ||| Binary product of a selection function with a family of selection functions on lists.
> otimes : {X : Type} -> {R : Type} ->
> J R X -> (X -> J R (List X)) -> J R (List X)
> otimes e f p = x :: xs where
> x = e (\ x' => overline (f x') (\ xs' => p (x' :: xs')))
> xs = f x (\ xs' => p (x :: xs'))
> ||| Iterated product of a list of history-dependent selection functions.
> partial
> bigotimes : {X, R : Type} -> List (List X -> J R X) -> J R (List X)
> bigotimes [] = \ p => []
> -- bigotimes (e :: es) = (e []) `otimes` (\x => bigotimes [\ xs => d (x :: xs) | d <- es])
> {-
> bigotimes {X} {R} (e :: es) = (e []) `otimes` f where
> partial
> f : X -> J R (List X)
> f x = bigotimes (map h es) where
> h : (List X -> J R X) -> List X -> J R X
> h d = \ xs => d (x :: xs)
> -}
> bigotimes {X} {R} (e :: es) = let f = \ x => bigotimes (map (\ d => \ xs => d (x :: xs) ) es) in
> (e []) `otimes` f
> {-
> ---}
|
------------------------------------------------------------------------
-- A definitional interpreter
------------------------------------------------------------------------
{-# OPTIONS --sized-types #-}
module Lambda.Simplified.Delay-monad.Interpreter where
open import Equality.Propositional
open import Prelude
open import Monad equality-with-J
open import Vec.Function equality-with-J
open import Delay-monad
open import Delay-monad.Bisimilarity
open import Delay-monad.Monad
open import Lambda.Simplified.Syntax
open Closure Tm
------------------------------------------------------------------------
-- The interpreter
infix 10 _∙_
mutual
⟦_⟧ : ∀ {i n} → Tm n → Env n → Delay Value i
⟦ var x ⟧ ρ = return (ρ x)
⟦ ƛ t ⟧ ρ = return (ƛ t ρ)
⟦ t₁ · t₂ ⟧ ρ = ⟦ t₁ ⟧ ρ >>= λ v₁ →
⟦ t₂ ⟧ ρ >>= λ v₂ →
v₁ ∙ v₂
_∙_ : ∀ {i} → Value → Value → Delay Value i
ƛ t₁ ρ ∙ v₂ = later λ { .force → ⟦ t₁ ⟧ (cons v₂ ρ) }
------------------------------------------------------------------------
-- An example
-- The semantics of Ω is the non-terminating computation never.
Ω-loops : ∀ {i} → [ i ] ⟦ Ω ⟧ nil ∼ never
Ω-loops = later λ { .force → Ω-loops }
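-- A reader's sketch of why this holds (assuming Ω is the usual self-application
-- ω · ω from Lambda.Simplified.Syntax): every β-step performed by _∙_ introduces
-- one `later` and then re-evaluates the same application, so ⟦ Ω ⟧ nil unfolds to
-- later (later …), which is strongly bisimilar to `never`, exactly what the
-- corecursive proof above establishes.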
|
/-
Copyright (c) 2021 OpenAI. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Kunhao Zheng, Stanislas Polu, David Renshaw, OpenAI GPT-f
-/
import mathzoo.imports.miniF2F
open_locale nat rat real big_operators topological_space
theorem mathd_algebra_139
(s : ℝ → ℝ → ℝ)
(h₀ : ∀ x≠0, ∀y≠0, s x y = (1/y - 1/x) / (x-y)) :
s 3 11 = 1/33 :=
begin
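  -- Arithmetic sketch: by h₀, s 3 11 = (1/11 - 1/3) / (3 - 11) = (-8/33) / (-8) = 1/33,
  -- which is the computation norm_num carries out below.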
norm_num [h₀],
end |
Require Import Problem.
(* problem B (Midpoint) might be useful *)
Theorem solution: task.
Proof.
unfold task.
(* FILL IN HERE *)
Qed.
|
/-
Copyright (c) 2018 Chris Hughes. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Chris Hughes, Abhimanyu Pallavi Sudhir, Jean Lo, Calle Sönne, Sébastien Gouëzel,
Rémy Degenne
! This file was ported from Lean 3 source module analysis.special_functions.pow_deriv
! leanprover-community/mathlib commit da1d134ab55eb58347924920695d8200f4740694
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathbin.Analysis.SpecialFunctions.Pow
import Mathbin.Analysis.SpecialFunctions.Complex.LogDeriv
import Mathbin.Analysis.Calculus.ExtendDeriv
import Mathbin.Analysis.SpecialFunctions.Log.Deriv
import Mathbin.Analysis.SpecialFunctions.Trigonometric.Deriv
/-!
# Derivatives of power function on `ℂ`, `ℝ`, `ℝ≥0`, and `ℝ≥0∞`
We also prove differentiability and provide derivatives for the power functions `x ^ y`.
-/
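/- Informally, the Fréchet derivative established below for `f (x, y) = x ^ y` is
`df (x, y) (dx, dy) = y * x ^ (y - 1) * dx + x ^ y * log x * dy`, i.e. the familiar
partial derivatives in the base and in the exponent; the `fst`/`snd` continuous
linear maps appearing in `Complex.hasStrictFderivAt_cpow` package exactly these two
components. -/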
noncomputable section
open Classical Real Topology NNReal ENNReal Filter
open Filter
namespace Complex
theorem hasStrictFderivAt_cpow {p : ℂ × ℂ} (hp : 0 < p.1.re ∨ p.1.im ≠ 0) :
HasStrictFderivAt (fun x : ℂ × ℂ => x.1 ^ x.2)
((p.2 * p.1 ^ (p.2 - 1)) • ContinuousLinearMap.fst ℂ ℂ ℂ +
(p.1 ^ p.2 * log p.1) • ContinuousLinearMap.snd ℂ ℂ ℂ)
p :=
by
have A : p.1 ≠ 0 := by
intro h
simpa [h, lt_irrefl] using hp
have : (fun x : ℂ × ℂ => x.1 ^ x.2) =ᶠ[𝓝 p] fun x => exp (log x.1 * x.2) :=
((is_open_ne.preimage continuous_fst).eventually_mem A).mono fun p hp =>
cpow_def_of_ne_zero hp _
rw [cpow_sub _ _ A, cpow_one, mul_div_left_comm, mul_smul, mul_smul, ← smul_add]
refine' HasStrictFderivAt.congr_of_eventuallyEq _ this.symm
simpa only [cpow_def_of_ne_zero A, div_eq_mul_inv, mul_smul, add_comm] using
((has_strict_fderiv_at_fst.clog hp).mul hasStrictFderivAt_snd).cexp
#align complex.has_strict_fderiv_at_cpow Complex.hasStrictFderivAt_cpow
theorem hasStrictFderivAt_cpow' {x y : ℂ} (hp : 0 < x.re ∨ x.im ≠ 0) :
HasStrictFderivAt (fun x : ℂ × ℂ => x.1 ^ x.2)
((y * x ^ (y - 1)) • ContinuousLinearMap.fst ℂ ℂ ℂ +
(x ^ y * log x) • ContinuousLinearMap.snd ℂ ℂ ℂ)
(x, y) :=
@hasStrictFderivAt_cpow (x, y) hp
#align complex.has_strict_fderiv_at_cpow' Complex.hasStrictFderivAt_cpow'
theorem hasStrictDerivAt_const_cpow {x y : ℂ} (h : x ≠ 0 ∨ y ≠ 0) :
HasStrictDerivAt (fun y => x ^ y) (x ^ y * log x) y :=
by
rcases em (x = 0) with (rfl | hx)
· replace h := h.neg_resolve_left rfl
rw [log_zero, MulZeroClass.mul_zero]
refine' (hasStrictDerivAt_const _ 0).congr_of_eventuallyEq _
exact (is_open_ne.eventually_mem h).mono fun y hy => (zero_cpow hy).symm
·
simpa only [cpow_def_of_ne_zero hx, mul_one] using
((hasStrictDerivAt_id y).const_mul (log x)).cexp
#align complex.has_strict_deriv_at_const_cpow Complex.hasStrictDerivAt_const_cpow
theorem hasFderivAt_cpow {p : ℂ × ℂ} (hp : 0 < p.1.re ∨ p.1.im ≠ 0) :
HasFderivAt (fun x : ℂ × ℂ => x.1 ^ x.2)
((p.2 * p.1 ^ (p.2 - 1)) • ContinuousLinearMap.fst ℂ ℂ ℂ +
(p.1 ^ p.2 * log p.1) • ContinuousLinearMap.snd ℂ ℂ ℂ)
p :=
(hasStrictFderivAt_cpow hp).HasFderivAt
#align complex.has_fderiv_at_cpow Complex.hasFderivAt_cpow
end Complex
section fderiv
open Complex
variable {E : Type _} [NormedAddCommGroup E] [NormedSpace ℂ E] {f g : E → ℂ} {f' g' : E →L[ℂ] ℂ}
{x : E} {s : Set E} {c : ℂ}
theorem HasStrictFderivAt.cpow (hf : HasStrictFderivAt f f' x) (hg : HasStrictFderivAt g g' x)
(h0 : 0 < (f x).re ∨ (f x).im ≠ 0) :
HasStrictFderivAt (fun x => f x ^ g x)
((g x * f x ^ (g x - 1)) • f' + (f x ^ g x * log (f x)) • g') x :=
by convert(@has_strict_fderiv_at_cpow ((fun x => (f x, g x)) x) h0).comp x (hf.prod hg)
#align has_strict_fderiv_at.cpow HasStrictFderivAt.cpow
theorem HasStrictFderivAt.const_cpow (hf : HasStrictFderivAt f f' x) (h0 : c ≠ 0 ∨ f x ≠ 0) :
HasStrictFderivAt (fun x => c ^ f x) ((c ^ f x * log c) • f') x :=
(hasStrictDerivAt_const_cpow h0).comp_hasStrictFderivAt x hf
#align has_strict_fderiv_at.const_cpow HasStrictFderivAt.const_cpow
theorem HasFderivAt.cpow (hf : HasFderivAt f f' x) (hg : HasFderivAt g g' x)
(h0 : 0 < (f x).re ∨ (f x).im ≠ 0) :
HasFderivAt (fun x => f x ^ g x) ((g x * f x ^ (g x - 1)) • f' + (f x ^ g x * log (f x)) • g')
x :=
by convert(@Complex.hasFderivAt_cpow ((fun x => (f x, g x)) x) h0).comp x (hf.prod hg)
#align has_fderiv_at.cpow HasFderivAt.cpow
theorem HasFderivAt.const_cpow (hf : HasFderivAt f f' x) (h0 : c ≠ 0 ∨ f x ≠ 0) :
HasFderivAt (fun x => c ^ f x) ((c ^ f x * log c) • f') x :=
(hasStrictDerivAt_const_cpow h0).HasDerivAt.comp_hasFderivAt x hf
#align has_fderiv_at.const_cpow HasFderivAt.const_cpow
theorem HasFderivWithinAt.cpow (hf : HasFderivWithinAt f f' s x) (hg : HasFderivWithinAt g g' s x)
(h0 : 0 < (f x).re ∨ (f x).im ≠ 0) :
HasFderivWithinAt (fun x => f x ^ g x)
((g x * f x ^ (g x - 1)) • f' + (f x ^ g x * log (f x)) • g') s x :=
by
convert(@Complex.hasFderivAt_cpow ((fun x => (f x, g x)) x) h0).comp_hasFderivWithinAt x
(hf.prod hg)
#align has_fderiv_within_at.cpow HasFderivWithinAt.cpow
theorem HasFderivWithinAt.const_cpow (hf : HasFderivWithinAt f f' s x) (h0 : c ≠ 0 ∨ f x ≠ 0) :
HasFderivWithinAt (fun x => c ^ f x) ((c ^ f x * log c) • f') s x :=
(hasStrictDerivAt_const_cpow h0).HasDerivAt.comp_hasFderivWithinAt x hf
#align has_fderiv_within_at.const_cpow HasFderivWithinAt.const_cpow
theorem DifferentiableAt.cpow (hf : DifferentiableAt ℂ f x) (hg : DifferentiableAt ℂ g x)
(h0 : 0 < (f x).re ∨ (f x).im ≠ 0) : DifferentiableAt ℂ (fun x => f x ^ g x) x :=
(hf.HasFderivAt.cpow hg.HasFderivAt h0).DifferentiableAt
#align differentiable_at.cpow DifferentiableAt.cpow
theorem DifferentiableAt.const_cpow (hf : DifferentiableAt ℂ f x) (h0 : c ≠ 0 ∨ f x ≠ 0) :
DifferentiableAt ℂ (fun x => c ^ f x) x :=
(hf.HasFderivAt.const_cpow h0).DifferentiableAt
#align differentiable_at.const_cpow DifferentiableAt.const_cpow
theorem DifferentiableWithinAt.cpow (hf : DifferentiableWithinAt ℂ f s x)
(hg : DifferentiableWithinAt ℂ g s x) (h0 : 0 < (f x).re ∨ (f x).im ≠ 0) :
DifferentiableWithinAt ℂ (fun x => f x ^ g x) s x :=
(hf.HasFderivWithinAt.cpow hg.HasFderivWithinAt h0).DifferentiableWithinAt
#align differentiable_within_at.cpow DifferentiableWithinAt.cpow
theorem DifferentiableWithinAt.const_cpow (hf : DifferentiableWithinAt ℂ f s x)
(h0 : c ≠ 0 ∨ f x ≠ 0) : DifferentiableWithinAt ℂ (fun x => c ^ f x) s x :=
(hf.HasFderivWithinAt.const_cpow h0).DifferentiableWithinAt
#align differentiable_within_at.const_cpow DifferentiableWithinAt.const_cpow
end fderiv
section deriv
open Complex
variable {f g : ℂ → ℂ} {s : Set ℂ} {f' g' x c : ℂ}
/-- A private lemma that rewrites the output of lemmas like `has_fderiv_at.cpow` to the form
expected by lemmas like `has_deriv_at.cpow`. -/
private theorem aux :
((g x * f x ^ (g x - 1)) • (1 : ℂ →L[ℂ] ℂ).smul_right f' +
(f x ^ g x * log (f x)) • (1 : ℂ →L[ℂ] ℂ).smul_right g')
1 =
g x * f x ^ (g x - 1) * f' + f x ^ g x * log (f x) * g' :=
by
simp only [Algebra.id.smul_eq_mul, one_mul, ContinuousLinearMap.one_apply,
ContinuousLinearMap.smulRight_apply, ContinuousLinearMap.add_apply, Pi.smul_apply,
ContinuousLinearMap.coe_smul']
#align aux aux
theorem HasStrictDerivAt.cpow (hf : HasStrictDerivAt f f' x) (hg : HasStrictDerivAt g g' x)
(h0 : 0 < (f x).re ∨ (f x).im ≠ 0) :
HasStrictDerivAt (fun x => f x ^ g x) (g x * f x ^ (g x - 1) * f' + f x ^ g x * log (f x) * g')
x :=
by simpa only [aux] using (hf.cpow hg h0).HasStrictDerivAt
#align has_strict_deriv_at.cpow HasStrictDerivAt.cpow
theorem HasStrictDerivAt.const_cpow (hf : HasStrictDerivAt f f' x) (h : c ≠ 0 ∨ f x ≠ 0) :
HasStrictDerivAt (fun x => c ^ f x) (c ^ f x * log c * f') x :=
(hasStrictDerivAt_const_cpow h).comp x hf
#align has_strict_deriv_at.const_cpow HasStrictDerivAt.const_cpow
theorem Complex.hasStrictDerivAt_cpow_const (h : 0 < x.re ∨ x.im ≠ 0) :
HasStrictDerivAt (fun z : ℂ => z ^ c) (c * x ^ (c - 1)) x := by
simpa only [MulZeroClass.mul_zero, add_zero, mul_one] using
(hasStrictDerivAt_id x).cpow (hasStrictDerivAt_const x c) h
#align complex.has_strict_deriv_at_cpow_const Complex.hasStrictDerivAt_cpow_const
theorem HasStrictDerivAt.cpow_const (hf : HasStrictDerivAt f f' x)
(h0 : 0 < (f x).re ∨ (f x).im ≠ 0) :
HasStrictDerivAt (fun x => f x ^ c) (c * f x ^ (c - 1) * f') x :=
(Complex.hasStrictDerivAt_cpow_const h0).comp x hf
#align has_strict_deriv_at.cpow_const HasStrictDerivAt.cpow_const
theorem HasDerivAt.cpow (hf : HasDerivAt f f' x) (hg : HasDerivAt g g' x)
(h0 : 0 < (f x).re ∨ (f x).im ≠ 0) :
HasDerivAt (fun x => f x ^ g x) (g x * f x ^ (g x - 1) * f' + f x ^ g x * log (f x) * g') x :=
by simpa only [aux] using (hf.has_fderiv_at.cpow hg h0).HasDerivAt
#align has_deriv_at.cpow HasDerivAt.cpow
theorem HasDerivAt.const_cpow (hf : HasDerivAt f f' x) (h0 : c ≠ 0 ∨ f x ≠ 0) :
HasDerivAt (fun x => c ^ f x) (c ^ f x * log c * f') x :=
(hasStrictDerivAt_const_cpow h0).HasDerivAt.comp x hf
#align has_deriv_at.const_cpow HasDerivAt.const_cpow
theorem HasDerivAt.cpow_const (hf : HasDerivAt f f' x) (h0 : 0 < (f x).re ∨ (f x).im ≠ 0) :
HasDerivAt (fun x => f x ^ c) (c * f x ^ (c - 1) * f') x :=
(Complex.hasStrictDerivAt_cpow_const h0).HasDerivAt.comp x hf
#align has_deriv_at.cpow_const HasDerivAt.cpow_const
theorem HasDerivWithinAt.cpow (hf : HasDerivWithinAt f f' s x) (hg : HasDerivWithinAt g g' s x)
(h0 : 0 < (f x).re ∨ (f x).im ≠ 0) :
HasDerivWithinAt (fun x => f x ^ g x) (g x * f x ^ (g x - 1) * f' + f x ^ g x * log (f x) * g')
s x :=
by simpa only [aux] using (hf.has_fderiv_within_at.cpow hg h0).HasDerivWithinAt
#align has_deriv_within_at.cpow HasDerivWithinAt.cpow
theorem HasDerivWithinAt.const_cpow (hf : HasDerivWithinAt f f' s x) (h0 : c ≠ 0 ∨ f x ≠ 0) :
HasDerivWithinAt (fun x => c ^ f x) (c ^ f x * log c * f') s x :=
(hasStrictDerivAt_const_cpow h0).HasDerivAt.comp_hasDerivWithinAt x hf
#align has_deriv_within_at.const_cpow HasDerivWithinAt.const_cpow
theorem HasDerivWithinAt.cpow_const (hf : HasDerivWithinAt f f' s x)
(h0 : 0 < (f x).re ∨ (f x).im ≠ 0) :
HasDerivWithinAt (fun x => f x ^ c) (c * f x ^ (c - 1) * f') s x :=
(Complex.hasStrictDerivAt_cpow_const h0).HasDerivAt.comp_hasDerivWithinAt x hf
#align has_deriv_within_at.cpow_const HasDerivWithinAt.cpow_const
/-- Although `λ x, x ^ r` for fixed `r` is *not* complex-differentiable along the negative real
line, it is still real-differentiable, and the derivative is what one would formally expect. -/
theorem hasDerivAt_of_real_cpow {x : ℝ} (hx : x ≠ 0) {r : ℂ} (hr : r ≠ -1) :
HasDerivAt (fun y : ℝ => (y : ℂ) ^ (r + 1) / (r + 1)) (x ^ r) x :=
by
rw [Ne.def, ← add_eq_zero_iff_eq_neg, ← Ne.def] at hr
rcases lt_or_gt_of_ne hx.symm with (hx | hx)
· -- easy case : `0 < x`
convert(((hasDerivAt_id (x : ℂ)).cpow_const _).div_const (r + 1)).comp_of_real
· rw [add_sub_cancel, id.def, mul_one, mul_comm, mul_div_cancel _ hr]
· rw [id.def, of_real_re]
exact Or.inl hx
· -- harder case : `x < 0`
have :
∀ᶠ y : ℝ in nhds x,
(y : ℂ) ^ (r + 1) / (r + 1) = (-y : ℂ) ^ (r + 1) * exp (π * I * (r + 1)) / (r + 1) :=
by
refine' Filter.eventually_of_mem (Iio_mem_nhds hx) fun y hy => _
rw [of_real_cpow_of_nonpos (le_of_lt hy)]
refine' HasDerivAt.congr_of_eventuallyEq _ this
rw [of_real_cpow_of_nonpos (le_of_lt hx)]
suffices
HasDerivAt (fun y : ℝ => (-↑y) ^ (r + 1) * exp (↑π * I * (r + 1)))
((r + 1) * (-↑x) ^ r * exp (↑π * I * r)) x
by
convert this.div_const (r + 1) using 1
conv_rhs => rw [mul_assoc, mul_comm, mul_div_cancel _ hr]
rw [mul_add ((π : ℂ) * _), mul_one, exp_add, exp_pi_mul_I, mul_comm (_ : ℂ) (-1 : ℂ),
neg_one_mul]
simp_rw [mul_neg, ← neg_mul, ← of_real_neg]
suffices HasDerivAt (fun y : ℝ => ↑(-y) ^ (r + 1)) (-(r + 1) * ↑(-x) ^ r) x
by
convert this.neg.mul_const _
ring
suffices HasDerivAt (fun y : ℝ => ↑y ^ (r + 1)) ((r + 1) * ↑(-x) ^ r) (-x)
by
convert @HasDerivAt.scomp ℝ _ ℂ _ _ x ℝ _ _ _ _ _ _ _ _ this (hasDerivAt_neg x) using 1
rw [real_smul, of_real_neg 1, of_real_one]
ring
suffices HasDerivAt (fun y : ℂ => y ^ (r + 1)) ((r + 1) * ↑(-x) ^ r) ↑(-x) by
exact this.comp_of_real
conv in ↑_ ^ _ => rw [(by ring : r = r + 1 - 1)]
convert(hasDerivAt_id ((-x : ℝ) : ℂ)).cpow_const _ using 1
· simp
· left
rwa [id.def, of_real_re, neg_pos]
#align has_deriv_at_of_real_cpow hasDerivAt_of_real_cpow
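/- Restated (a reader's remark): for real `x ≠ 0` and complex `r ≠ -1`, the map
`y ↦ y ^ (r + 1) / (r + 1)` has derivative `x ^ r` at `x`, i.e. it is the usual
power-rule antiderivative of `y ↦ y ^ r`, valid even across the negative reals. -/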
end deriv
namespace Real
variable {x y z : ℝ}
/-- `(x, y) ↦ x ^ y` is strictly differentiable at `p : ℝ × ℝ` such that `0 < p.fst`. -/
theorem hasStrictFderivAt_rpow_of_pos (p : ℝ × ℝ) (hp : 0 < p.1) :
HasStrictFderivAt (fun x : ℝ × ℝ => x.1 ^ x.2)
((p.2 * p.1 ^ (p.2 - 1)) • ContinuousLinearMap.fst ℝ ℝ ℝ +
(p.1 ^ p.2 * log p.1) • ContinuousLinearMap.snd ℝ ℝ ℝ)
p :=
by
have : (fun x : ℝ × ℝ => x.1 ^ x.2) =ᶠ[𝓝 p] fun x => exp (log x.1 * x.2) :=
(continuous_at_fst.eventually (lt_mem_nhds hp)).mono fun p hp => rpow_def_of_pos hp _
refine' HasStrictFderivAt.congr_of_eventuallyEq _ this.symm
convert((has_strict_fderiv_at_fst.log hp.ne').mul hasStrictFderivAt_snd).exp
rw [rpow_sub_one hp.ne', ← rpow_def_of_pos hp, smul_add, smul_smul, mul_div_left_comm,
div_eq_mul_inv, smul_smul, smul_smul, mul_assoc, add_comm]
#align real.has_strict_fderiv_at_rpow_of_pos Real.hasStrictFderivAt_rpow_of_pos
/-- `(x, y) ↦ x ^ y` is strictly differentiable at `p : ℝ × ℝ` such that `p.fst < 0`. -/
theorem hasStrictFderivAt_rpow_of_neg (p : ℝ × ℝ) (hp : p.1 < 0) :
HasStrictFderivAt (fun x : ℝ × ℝ => x.1 ^ x.2)
((p.2 * p.1 ^ (p.2 - 1)) • ContinuousLinearMap.fst ℝ ℝ ℝ +
(p.1 ^ p.2 * log p.1 - exp (log p.1 * p.2) * sin (p.2 * π) * π) •
ContinuousLinearMap.snd ℝ ℝ ℝ)
p :=
by
have : (fun x : ℝ × ℝ => x.1 ^ x.2) =ᶠ[𝓝 p] fun x => exp (log x.1 * x.2) * cos (x.2 * π) :=
(continuous_at_fst.eventually (gt_mem_nhds hp)).mono fun p hp => rpow_def_of_neg hp _
refine' HasStrictFderivAt.congr_of_eventuallyEq _ this.symm
convert((has_strict_fderiv_at_fst.log hp.ne).mul hasStrictFderivAt_snd).exp.mul
(has_strict_fderiv_at_snd.mul_const _).cos using
1
simp_rw [rpow_sub_one hp.ne, smul_add, ← add_assoc, smul_smul, ← add_smul, ← mul_assoc,
mul_comm (cos _), ← rpow_def_of_neg hp]
rw [div_eq_mul_inv, add_comm]
congr 2 <;> ring
#align real.has_strict_fderiv_at_rpow_of_neg Real.hasStrictFderivAt_rpow_of_neg
/-- The function `λ (x, y), x ^ y` is infinitely smooth at `(x, y)` unless `x = 0`. -/
theorem contDiffAt_rpow_of_ne (p : ℝ × ℝ) (hp : p.1 ≠ 0) {n : ℕ∞} :
ContDiffAt ℝ n (fun p : ℝ × ℝ => p.1 ^ p.2) p :=
by
cases' hp.lt_or_lt with hneg hpos
exacts[(((cont_diff_at_fst.log hneg.ne).mul contDiffAt_snd).exp.mul
(cont_diff_at_snd.mul contDiffAt_const).cos).congr_of_eventuallyEq
((continuous_at_fst.eventually (gt_mem_nhds hneg)).mono fun p hp => rpow_def_of_neg hp _),
((cont_diff_at_fst.log hpos.ne').mul contDiffAt_snd).exp.congr_of_eventuallyEq
((continuous_at_fst.eventually (lt_mem_nhds hpos)).mono fun p hp => rpow_def_of_pos hp _)]
#align real.cont_diff_at_rpow_of_ne Real.contDiffAt_rpow_of_ne
theorem differentiableAt_rpow_of_ne (p : ℝ × ℝ) (hp : p.1 ≠ 0) :
DifferentiableAt ℝ (fun p : ℝ × ℝ => p.1 ^ p.2) p :=
(contDiffAt_rpow_of_ne p hp).DifferentiableAt le_rfl
#align real.differentiable_at_rpow_of_ne Real.differentiableAt_rpow_of_ne
theorem HasStrictDerivAt.rpow {f g : ℝ → ℝ} {f' g' : ℝ} (hf : HasStrictDerivAt f f' x)
(hg : HasStrictDerivAt g g' x) (h : 0 < f x) :
HasStrictDerivAt (fun x => f x ^ g x) (f' * g x * f x ^ (g x - 1) + g' * f x ^ g x * log (f x))
x :=
by
convert(has_strict_fderiv_at_rpow_of_pos ((fun x => (f x, g x)) x) h).comp_hasStrictDerivAt _
(hf.prod hg) using
1
simp [mul_assoc, mul_comm, mul_left_comm]
#align has_strict_deriv_at.rpow HasStrictDerivAt.rpow
theorem hasStrictDerivAt_rpow_const_of_ne {x : ℝ} (hx : x ≠ 0) (p : ℝ) :
HasStrictDerivAt (fun x => x ^ p) (p * x ^ (p - 1)) x :=
by
cases' hx.lt_or_lt with hx hx
· have :=
(has_strict_fderiv_at_rpow_of_neg (x, p) hx).comp_hasStrictDerivAt x
((hasStrictDerivAt_id x).Prod (hasStrictDerivAt_const _ _))
convert this
simp
· simpa using (hasStrictDerivAt_id x).rpow (hasStrictDerivAt_const x p) hx
#align real.has_strict_deriv_at_rpow_const_of_ne Real.hasStrictDerivAt_rpow_const_of_ne
theorem hasStrictDerivAt_const_rpow {a : ℝ} (ha : 0 < a) (x : ℝ) :
HasStrictDerivAt (fun x => a ^ x) (a ^ x * log a) x := by
simpa using (hasStrictDerivAt_const _ _).rpow (hasStrictDerivAt_id x) ha
#align real.has_strict_deriv_at_const_rpow Real.hasStrictDerivAt_const_rpow
/-- This lemma says that `λ x, a ^ x` is strictly differentiable for `a < 0`. Note that these
values of `a` are outside of the "official" domain of `a ^ x`, and we may redefine `a ^ x`
for negative `a` if some other definition will be more convenient. -/
theorem hasStrictDerivAt_const_rpow_of_neg {a x : ℝ} (ha : a < 0) :
HasStrictDerivAt (fun x => a ^ x) (a ^ x * log a - exp (log a * x) * sin (x * π) * π) x := by
simpa using
(has_strict_fderiv_at_rpow_of_neg (a, x) ha).comp_hasStrictDerivAt x
((hasStrictDerivAt_const _ _).Prod (hasStrictDerivAt_id _))
#align real.has_strict_deriv_at_const_rpow_of_neg Real.hasStrictDerivAt_const_rpow_of_neg
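/- A sketch of where the correction term comes from (our reading of the statement):
for `a < 0` one has `a ^ x = exp (log a * x) * cos (x * π)` (`rpow_def_of_neg`), so
the product rule gives a derivative of
`exp (log a * x) * log a * cos (x * π) - exp (log a * x) * sin (x * π) * π`, and the
first summand is just `a ^ x * log a`, matching the formula above. -/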
end Real
namespace Real
variable {z x y : ℝ}
theorem hasDerivAt_rpow_const {x p : ℝ} (h : x ≠ 0 ∨ 1 ≤ p) :
HasDerivAt (fun x => x ^ p) (p * x ^ (p - 1)) x :=
by
rcases ne_or_eq x 0 with (hx | rfl)
· exact (has_strict_deriv_at_rpow_const_of_ne hx _).HasDerivAt
replace h : 1 ≤ p := h.neg_resolve_left rfl
apply
hasDerivAt_of_hasDerivAt_of_ne fun x hx =>
(has_strict_deriv_at_rpow_const_of_ne hx p).HasDerivAt
exacts[continuous_at_id.rpow_const (Or.inr (zero_le_one.trans h)),
continuous_at_const.mul (continuous_at_id.rpow_const (Or.inr (sub_nonneg.2 h)))]
#align real.has_deriv_at_rpow_const Real.hasDerivAt_rpow_const
theorem differentiable_rpow_const {p : ℝ} (hp : 1 ≤ p) : Differentiable ℝ fun x : ℝ => x ^ p :=
fun x => (hasDerivAt_rpow_const (Or.inr hp)).DifferentiableAt
#align real.differentiable_rpow_const Real.differentiable_rpow_const
theorem deriv_rpow_const {x p : ℝ} (h : x ≠ 0 ∨ 1 ≤ p) :
deriv (fun x : ℝ => x ^ p) x = p * x ^ (p - 1) :=
(hasDerivAt_rpow_const h).deriv
#align real.deriv_rpow_const Real.deriv_rpow_const
theorem deriv_rpow_const' {p : ℝ} (h : 1 ≤ p) :
(deriv fun x : ℝ => x ^ p) = fun x => p * x ^ (p - 1) :=
funext fun x => deriv_rpow_const (Or.inr h)
#align real.deriv_rpow_const' Real.deriv_rpow_const'
theorem contDiffAt_rpow_const_of_ne {x p : ℝ} {n : ℕ∞} (h : x ≠ 0) :
ContDiffAt ℝ n (fun x => x ^ p) x :=
(contDiffAt_rpow_of_ne (x, p) h).comp x (contDiffAt_id.Prod contDiffAt_const)
#align real.cont_diff_at_rpow_const_of_ne Real.contDiffAt_rpow_const_of_ne
theorem contDiff_rpow_const_of_le {p : ℝ} {n : ℕ} (h : ↑n ≤ p) : ContDiff ℝ n fun x : ℝ => x ^ p :=
by
induction' n with n ihn generalizing p
· exact contDiff_zero.2 (continuous_id.rpow_const fun x => by exact_mod_cast Or.inr h)
· have h1 : 1 ≤ p := le_trans (by simp) h
rw [Nat.cast_succ, ← le_sub_iff_add_le] at h
rw [contDiff_succ_iff_deriv, deriv_rpow_const' h1]
refine' ⟨differentiable_rpow_const h1, cont_diff_const.mul (ihn h)⟩
#align real.cont_diff_rpow_const_of_le Real.contDiff_rpow_const_of_le
theorem contDiffAt_rpow_const_of_le {x p : ℝ} {n : ℕ} (h : ↑n ≤ p) :
ContDiffAt ℝ n (fun x : ℝ => x ^ p) x :=
(contDiff_rpow_const_of_le h).ContDiffAt
#align real.cont_diff_at_rpow_const_of_le Real.contDiffAt_rpow_const_of_le
theorem contDiffAt_rpow_const {x p : ℝ} {n : ℕ} (h : x ≠ 0 ∨ ↑n ≤ p) :
ContDiffAt ℝ n (fun x : ℝ => x ^ p) x :=
h.elim contDiffAt_rpow_const_of_ne contDiffAt_rpow_const_of_le
#align real.cont_diff_at_rpow_const Real.contDiffAt_rpow_const
theorem hasStrictDerivAt_rpow_const {x p : ℝ} (hx : x ≠ 0 ∨ 1 ≤ p) :
HasStrictDerivAt (fun x => x ^ p) (p * x ^ (p - 1)) x :=
ContDiffAt.has_strict_deriv_at' (contDiffAt_rpow_const (by rwa [Nat.cast_one]))
(hasDerivAt_rpow_const hx) le_rfl
#align real.has_strict_deriv_at_rpow_const Real.hasStrictDerivAt_rpow_const
end Real
section Differentiability
open Real
section fderiv
variable {E : Type _} [NormedAddCommGroup E] [NormedSpace ℝ E] {f g : E → ℝ} {f' g' : E →L[ℝ] ℝ}
{x : E} {s : Set E} {c p : ℝ} {n : ℕ∞}
theorem HasFderivWithinAt.rpow (hf : HasFderivWithinAt f f' s x) (hg : HasFderivWithinAt g g' s x)
(h : 0 < f x) :
HasFderivWithinAt (fun x => f x ^ g x)
((g x * f x ^ (g x - 1)) • f' + (f x ^ g x * log (f x)) • g') s x :=
(hasStrictFderivAt_rpow_of_pos (f x, g x) h).HasFderivAt.comp_hasFderivWithinAt x (hf.Prod hg)
#align has_fderiv_within_at.rpow HasFderivWithinAt.rpow
theorem HasFderivAt.rpow (hf : HasFderivAt f f' x) (hg : HasFderivAt g g' x) (h : 0 < f x) :
HasFderivAt (fun x => f x ^ g x) ((g x * f x ^ (g x - 1)) • f' + (f x ^ g x * log (f x)) • g')
x :=
(hasStrictFderivAt_rpow_of_pos (f x, g x) h).HasFderivAt.comp x (hf.Prod hg)
#align has_fderiv_at.rpow HasFderivAt.rpow
theorem HasStrictFderivAt.rpow (hf : HasStrictFderivAt f f' x) (hg : HasStrictFderivAt g g' x)
(h : 0 < f x) :
HasStrictFderivAt (fun x => f x ^ g x)
((g x * f x ^ (g x - 1)) • f' + (f x ^ g x * log (f x)) • g') x :=
(hasStrictFderivAt_rpow_of_pos (f x, g x) h).comp x (hf.Prod hg)
#align has_strict_fderiv_at.rpow HasStrictFderivAt.rpow
theorem DifferentiableWithinAt.rpow (hf : DifferentiableWithinAt ℝ f s x)
(hg : DifferentiableWithinAt ℝ g s x) (h : f x ≠ 0) :
DifferentiableWithinAt ℝ (fun x => f x ^ g x) s x :=
(differentiableAt_rpow_of_ne (f x, g x) h).comp_differentiableWithinAt x (hf.Prod hg)
#align differentiable_within_at.rpow DifferentiableWithinAt.rpow
theorem DifferentiableAt.rpow (hf : DifferentiableAt ℝ f x) (hg : DifferentiableAt ℝ g x)
(h : f x ≠ 0) : DifferentiableAt ℝ (fun x => f x ^ g x) x :=
(differentiableAt_rpow_of_ne (f x, g x) h).comp x (hf.Prod hg)
#align differentiable_at.rpow DifferentiableAt.rpow
theorem DifferentiableOn.rpow (hf : DifferentiableOn ℝ f s) (hg : DifferentiableOn ℝ g s)
(h : ∀ x ∈ s, f x ≠ 0) : DifferentiableOn ℝ (fun x => f x ^ g x) s := fun x hx =>
(hf x hx).rpow (hg x hx) (h x hx)
#align differentiable_on.rpow DifferentiableOn.rpow
theorem Differentiable.rpow (hf : Differentiable ℝ f) (hg : Differentiable ℝ g) (h : ∀ x, f x ≠ 0) :
Differentiable ℝ fun x => f x ^ g x := fun x => (hf x).rpow (hg x) (h x)
#align differentiable.rpow Differentiable.rpow
theorem HasFderivWithinAt.rpow_const (hf : HasFderivWithinAt f f' s x) (h : f x ≠ 0 ∨ 1 ≤ p) :
HasFderivWithinAt (fun x => f x ^ p) ((p * f x ^ (p - 1)) • f') s x :=
(hasDerivAt_rpow_const h).comp_hasFderivWithinAt x hf
#align has_fderiv_within_at.rpow_const HasFderivWithinAt.rpow_const
theorem HasFderivAt.rpow_const (hf : HasFderivAt f f' x) (h : f x ≠ 0 ∨ 1 ≤ p) :
HasFderivAt (fun x => f x ^ p) ((p * f x ^ (p - 1)) • f') x :=
(hasDerivAt_rpow_const h).comp_hasFderivAt x hf
#align has_fderiv_at.rpow_const HasFderivAt.rpow_const
theorem HasStrictFderivAt.rpow_const (hf : HasStrictFderivAt f f' x) (h : f x ≠ 0 ∨ 1 ≤ p) :
HasStrictFderivAt (fun x => f x ^ p) ((p * f x ^ (p - 1)) • f') x :=
(hasStrictDerivAt_rpow_const h).comp_hasStrictFderivAt x hf
#align has_strict_fderiv_at.rpow_const HasStrictFderivAt.rpow_const
theorem DifferentiableWithinAt.rpow_const (hf : DifferentiableWithinAt ℝ f s x)
(h : f x ≠ 0 ∨ 1 ≤ p) : DifferentiableWithinAt ℝ (fun x => f x ^ p) s x :=
(hf.HasFderivWithinAt.rpow_const h).DifferentiableWithinAt
#align differentiable_within_at.rpow_const DifferentiableWithinAt.rpow_const
@[simp]
theorem DifferentiableAt.rpow_const (hf : DifferentiableAt ℝ f x) (h : f x ≠ 0 ∨ 1 ≤ p) :
DifferentiableAt ℝ (fun x => f x ^ p) x :=
(hf.HasFderivAt.rpow_const h).DifferentiableAt
#align differentiable_at.rpow_const DifferentiableAt.rpow_const
theorem DifferentiableOn.rpow_const (hf : DifferentiableOn ℝ f s) (h : ∀ x ∈ s, f x ≠ 0 ∨ 1 ≤ p) :
DifferentiableOn ℝ (fun x => f x ^ p) s := fun x hx => (hf x hx).rpow_const (h x hx)
#align differentiable_on.rpow_const DifferentiableOn.rpow_const
theorem Differentiable.rpow_const (hf : Differentiable ℝ f) (h : ∀ x, f x ≠ 0 ∨ 1 ≤ p) :
Differentiable ℝ fun x => f x ^ p := fun x => (hf x).rpow_const (h x)
#align differentiable.rpow_const Differentiable.rpow_const
theorem HasFderivWithinAt.const_rpow (hf : HasFderivWithinAt f f' s x) (hc : 0 < c) :
HasFderivWithinAt (fun x => c ^ f x) ((c ^ f x * log c) • f') s x :=
(hasStrictDerivAt_const_rpow hc (f x)).HasDerivAt.comp_hasFderivWithinAt x hf
#align has_fderiv_within_at.const_rpow HasFderivWithinAt.const_rpow
theorem HasFderivAt.const_rpow (hf : HasFderivAt f f' x) (hc : 0 < c) :
HasFderivAt (fun x => c ^ f x) ((c ^ f x * log c) • f') x :=
(hasStrictDerivAt_const_rpow hc (f x)).HasDerivAt.comp_hasFderivAt x hf
#align has_fderiv_at.const_rpow HasFderivAt.const_rpow
theorem HasStrictFderivAt.const_rpow (hf : HasStrictFderivAt f f' x) (hc : 0 < c) :
HasStrictFderivAt (fun x => c ^ f x) ((c ^ f x * log c) • f') x :=
(hasStrictDerivAt_const_rpow hc (f x)).comp_hasStrictFderivAt x hf
#align has_strict_fderiv_at.const_rpow HasStrictFderivAt.const_rpow
theorem ContDiffWithinAt.rpow (hf : ContDiffWithinAt ℝ n f s x) (hg : ContDiffWithinAt ℝ n g s x)
(h : f x ≠ 0) : ContDiffWithinAt ℝ n (fun x => f x ^ g x) s x :=
(contDiffAt_rpow_of_ne (f x, g x) h).comp_contDiffWithinAt x (hf.Prod hg)
#align cont_diff_within_at.rpow ContDiffWithinAt.rpow
theorem ContDiffAt.rpow (hf : ContDiffAt ℝ n f x) (hg : ContDiffAt ℝ n g x) (h : f x ≠ 0) :
ContDiffAt ℝ n (fun x => f x ^ g x) x :=
(contDiffAt_rpow_of_ne (f x, g x) h).comp x (hf.Prod hg)
#align cont_diff_at.rpow ContDiffAt.rpow
theorem ContDiffOn.rpow (hf : ContDiffOn ℝ n f s) (hg : ContDiffOn ℝ n g s) (h : ∀ x ∈ s, f x ≠ 0) :
ContDiffOn ℝ n (fun x => f x ^ g x) s := fun x hx => (hf x hx).rpow (hg x hx) (h x hx)
#align cont_diff_on.rpow ContDiffOn.rpow
theorem ContDiff.rpow (hf : ContDiff ℝ n f) (hg : ContDiff ℝ n g) (h : ∀ x, f x ≠ 0) :
ContDiff ℝ n fun x => f x ^ g x :=
contDiff_iff_contDiffAt.mpr fun x => hf.ContDiffAt.rpow hg.ContDiffAt (h x)
#align cont_diff.rpow ContDiff.rpow
theorem ContDiffWithinAt.rpow_const_of_ne (hf : ContDiffWithinAt ℝ n f s x) (h : f x ≠ 0) :
ContDiffWithinAt ℝ n (fun x => f x ^ p) s x :=
hf.rpow contDiffWithinAt_const h
#align cont_diff_within_at.rpow_const_of_ne ContDiffWithinAt.rpow_const_of_ne
theorem ContDiffAt.rpow_const_of_ne (hf : ContDiffAt ℝ n f x) (h : f x ≠ 0) :
ContDiffAt ℝ n (fun x => f x ^ p) x :=
hf.rpow contDiffAt_const h
#align cont_diff_at.rpow_const_of_ne ContDiffAt.rpow_const_of_ne
theorem ContDiffOn.rpow_const_of_ne (hf : ContDiffOn ℝ n f s) (h : ∀ x ∈ s, f x ≠ 0) :
ContDiffOn ℝ n (fun x => f x ^ p) s := fun x hx => (hf x hx).rpow_const_of_ne (h x hx)
#align cont_diff_on.rpow_const_of_ne ContDiffOn.rpow_const_of_ne
theorem ContDiff.rpow_const_of_ne (hf : ContDiff ℝ n f) (h : ∀ x, f x ≠ 0) :
ContDiff ℝ n fun x => f x ^ p :=
hf.rpow contDiff_const h
#align cont_diff.rpow_const_of_ne ContDiff.rpow_const_of_ne
variable {m : ℕ}
theorem ContDiffWithinAt.rpow_const_of_le (hf : ContDiffWithinAt ℝ m f s x) (h : ↑m ≤ p) :
ContDiffWithinAt ℝ m (fun x => f x ^ p) s x :=
(contDiffAt_rpow_const_of_le h).comp_contDiffWithinAt x hf
#align cont_diff_within_at.rpow_const_of_le ContDiffWithinAt.rpow_const_of_le
theorem ContDiffAt.rpow_const_of_le (hf : ContDiffAt ℝ m f x) (h : ↑m ≤ p) :
ContDiffAt ℝ m (fun x => f x ^ p) x :=
by
rw [← contDiffWithinAt_univ] at *
exact hf.rpow_const_of_le h
#align cont_diff_at.rpow_const_of_le ContDiffAt.rpow_const_of_le
theorem ContDiffOn.rpow_const_of_le (hf : ContDiffOn ℝ m f s) (h : ↑m ≤ p) :
ContDiffOn ℝ m (fun x => f x ^ p) s := fun x hx => (hf x hx).rpow_const_of_le h
#align cont_diff_on.rpow_const_of_le ContDiffOn.rpow_const_of_le
theorem ContDiff.rpow_const_of_le (hf : ContDiff ℝ m f) (h : ↑m ≤ p) :
ContDiff ℝ m fun x => f x ^ p :=
contDiff_iff_contDiffAt.mpr fun x => hf.ContDiffAt.rpow_const_of_le h
#align cont_diff.rpow_const_of_le ContDiff.rpow_const_of_le
end fderiv
section deriv
variable {f g : ℝ → ℝ} {f' g' x y p : ℝ} {s : Set ℝ}
theorem HasDerivWithinAt.rpow (hf : HasDerivWithinAt f f' s x) (hg : HasDerivWithinAt g g' s x)
(h : 0 < f x) :
HasDerivWithinAt (fun x => f x ^ g x) (f' * g x * f x ^ (g x - 1) + g' * f x ^ g x * log (f x))
s x :=
by
convert(hf.has_fderiv_within_at.rpow hg.has_fderiv_within_at h).HasDerivWithinAt using 1
dsimp; ring
#align has_deriv_within_at.rpow HasDerivWithinAt.rpow
theorem HasDerivAt.rpow (hf : HasDerivAt f f' x) (hg : HasDerivAt g g' x) (h : 0 < f x) :
HasDerivAt (fun x => f x ^ g x) (f' * g x * f x ^ (g x - 1) + g' * f x ^ g x * log (f x)) x :=
by
rw [← hasDerivWithinAt_univ] at *
exact hf.rpow hg h
#align has_deriv_at.rpow HasDerivAt.rpow
theorem HasDerivWithinAt.rpow_const (hf : HasDerivWithinAt f f' s x) (hx : f x ≠ 0 ∨ 1 ≤ p) :
HasDerivWithinAt (fun y => f y ^ p) (f' * p * f x ^ (p - 1)) s x :=
by
convert(has_deriv_at_rpow_const hx).comp_hasDerivWithinAt x hf using 1
ring
#align has_deriv_within_at.rpow_const HasDerivWithinAt.rpow_const
theorem HasDerivAt.rpow_const (hf : HasDerivAt f f' x) (hx : f x ≠ 0 ∨ 1 ≤ p) :
HasDerivAt (fun y => f y ^ p) (f' * p * f x ^ (p - 1)) x :=
by
rw [← hasDerivWithinAt_univ] at *
exact hf.rpow_const hx
#align has_deriv_at.rpow_const HasDerivAt.rpow_const
theorem derivWithin_rpow_const (hf : DifferentiableWithinAt ℝ f s x) (hx : f x ≠ 0 ∨ 1 ≤ p)
(hxs : UniqueDiffWithinAt ℝ s x) :
derivWithin (fun x => f x ^ p) s x = derivWithin f s x * p * f x ^ (p - 1) :=
(hf.HasDerivWithinAt.rpow_const hx).derivWithin hxs
#align deriv_within_rpow_const derivWithin_rpow_const
@[simp]
theorem deriv_rpow_const (hf : DifferentiableAt ℝ f x) (hx : f x ≠ 0 ∨ 1 ≤ p) :
deriv (fun x => f x ^ p) x = deriv f x * p * f x ^ (p - 1) :=
(hf.HasDerivAt.rpow_const hx).deriv
#align deriv_rpow_const deriv_rpow_const
end deriv
end Differentiability
section Limits
open Real Filter
/-- The function `(1 + t/x) ^ x` tends to `exp t` at `+∞`. -/
theorem tendsto_one_plus_div_rpow_exp (t : ℝ) :
Tendsto (fun x : ℝ => (1 + t / x) ^ x) atTop (𝓝 (exp t)) :=
by
apply ((real.continuous_exp.tendsto _).comp (tendsto_mul_log_one_plus_div_at_top t)).congr' _
have h₁ : (1 : ℝ) / 2 < 1 := by linarith
have h₂ : tendsto (fun x : ℝ => 1 + t / x) at_top (𝓝 1) := by
simpa using (tendsto_inv_at_top_zero.const_mul t).const_add 1
refine' (eventually_ge_of_tendsto_gt h₁ h₂).mono fun x hx => _
have hx' : 0 < 1 + t / x := by linarith
simp [mul_comm x, exp_mul, exp_log hx']
#align tendsto_one_plus_div_rpow_exp tendsto_one_plus_div_rpow_exp
/-- The function `(1 + t/x) ^ x` tends to `exp t` at `+∞` for naturals `x`. -/
theorem tendsto_one_plus_div_pow_exp (t : ℝ) :
Tendsto (fun x : ℕ => (1 + t / (x : ℝ)) ^ x) atTop (𝓝 (Real.exp t)) :=
((tendsto_one_plus_div_rpow_exp t).comp tendsto_nat_cast_atTop_atTop).congr (by simp)
#align tendsto_one_plus_div_pow_exp tendsto_one_plus_div_pow_exp
end Limits
|
function preprocess(s::String, doc::Document; config::Dict = Dict())
# order matters for the following string transformations!
abs_src = joinpath(doc.root, doc.rel_path)
rel_base = first(splitext(doc.rel_path))
repo_root = get(config, "repo_root_path", REPO_DIR)
repo_root_url = get(config, "repo_root_url", "<unknown>")
filename = Literate.filename(abs_src)
relrepo_path = relpath(abs_src, repo_root)
s = replace(s, "@__FILE_URL__" => "@__REPO_ROOT_URL__/$(relrepo_path)")
s = replace(s, "@__FILE__" => relrepo_path)
exfile = joinpath(REPO_DIR, PATHS.examples_tarfile)
rel_exfile = relpath(exfile, abs_src)
#error(rel_exfile)
#s = replace(s, "@__EXAMPLES__" => PATHS.examples_tarfile)
s = replace(s, "@__EXAMPLES__" => rel_exfile)
s = replace(
s,
"@__EXAMPLES_README__" => read(joinpath(EXAMPLE_DIR, "README.md"), String),
)
if doc.kind === :documenter
s = parse_documenter(s).body
s = add_documenter_title(s, doc.config[:title])
# Since we are doing an out-of-source build
# we need to add correct EditURL for Documenter
s = add_documenter_editurl(s)
elseif doc.kind === :literate
s = parse_literate(s).body
s = add_literate_title(s, doc.config[:title])
if :script in doc.config[:builds] || :notebook in doc.config[:builds]
# If we are building executable scripts/notebooks, add admonition at top of file
# linking to examples
s = add_literate_examples_header(s, repo_root, abs_src)
end
else
error("Unknown document kind: $(doc.kind)")
end
s
end
function add_documenter_editurl(content::String)
if occursin("EditURL", content)
print(content)
        error("content already contains an EditURL")
end
"""
```@meta
EditURL = "@__FILE_URL__"
```
""" * content
end
function add_literate_examples_header(content::String, repo_root, abs_src)
#examplehowto = joinpath(SRC_DIR, "examples/example_howto.md")
#@assert isfile(examplehowto)
#path = relpath(examplehowto, abs_src)
# TODO
examplehowto = "https://docs.lyceum.ml/dev/examples/example_howto/"
"""
#md # !!! note "Running examples locally"
#md # This example and more are also available as Julia scripts and Jupyter notebooks.
#md #
#md # See [the how-to page]($(examplehowto)) for more information.
#md #
""" * content
end
function add_literate_title(content::String, title::String)
"""
#md # # $title
""" * content
end
function add_documenter_title(content::String, title::String)
"""
# $title
""" * content
end
|
[STATEMENT]
lemma nat_of_natural_of_nat [simp]:
"nat_of_natural (of_nat n) = n"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. nat_of_natural (of_nat n) = n
[PROOF STEP]
by transfer rule |
Formal statement is: lemma emeasure_lborel_box[simp]: assumes [simp]: "\<And>b. b \<in> Basis \<Longrightarrow> l \<bullet> b \<le> u \<bullet> b" shows "emeasure lborel (box l u) = (\<Prod>b\<in>Basis. (u - l) \<bullet> b)" Informal statement is: The Lebesgue measure of a box is the product of the lengths of its sides. |
Require Import Spec.Proc Spec.ProcTheorems.
Require Import Tactical.Propositional.
Require Import Helpers.RelationAlgebra.
Require Import Helpers.RelationRewriting.
Require Import Helpers.RelationTheorems.
Import RelationNotations.
(** Defining specifications, which are just convenient ways to express program
behavior. *)
Record SpecProps T R State :=
{ pre: Prop;
post: State -> T -> Prop;
alternate: State -> R -> Prop; }.
Definition Specification T R State := State -> SpecProps T R State.
Definition spec_exec T R State (spec: Specification T R State) :
relation State State T :=
fun s s' r => (spec s).(pre) -> (spec s).(post) s' r.
Definition spec_aexec T R State (spec: Specification T R State) :
relation State State R :=
fun s s' r => (spec s).(pre) -> (spec s).(alternate) s' r.
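(** Read [spec_exec] as: whenever the precondition holds in the starting
    state, a normal execution may only end in a state and return value
    allowed by [post].  [spec_aexec] is the analogous constraint for crashed
    (and then recovered) executions, phrased with [alternate] in place of
    [post]. *)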
Definition spec_impl
`(spec1: Specification T R State)
`(spec2: Specification T R State) :=
forall s, (spec2 s).(pre) -> (spec1 s).(pre) /\
(forall s' v, (spec1 s).(post) s' v ->
(spec2 s).(post) s' v) /\
(forall s' rv, (spec1 s).(alternate) s' rv ->
(spec2 s).(alternate) s' rv).
Definition op_spec `(sem: Dynamics Op State) `(op : Op T) : Specification T unit State :=
fun state =>
{|
pre := True;
post :=
fun state' v => sem.(step) op state state' v;
alternate :=
fun state' r =>
r = tt /\ (crash_step sem state state' r
\/ exists smid v, sem.(step) op state smid v
/\ crash_step sem smid state' r);
|}.
Section Hoare.
Context `(sem: Dynamics Op State).
Notation proc := (proc Op).
Notation exec := (exec sem).
Notation exec_crash := (exec_crash sem).
Notation crash_step := sem.(crash_step).
Notation rexec := (rexec sem).
Definition proc_rspec T R
(p: proc T) (rec: proc R)
(spec: Specification T R State) :=
exec p ---> spec_exec spec /\
rexec p rec ---> spec_aexec spec.
Definition proc_hspec T
(p: proc T)
(spec: Specification T unit State) :=
exec p ---> spec_exec spec /\
exec_crash p ---> spec_aexec spec.
Theorem proc_rspec_expand T R
(p: proc T) (rec: proc R)
(spec: Specification T R State) :
proc_rspec p rec spec <->
forall s,
(spec s).(pre) ->
(forall s' v, exec p s s' v ->
(spec s).(post) s' v) /\
(forall s' rv, rexec p rec s s' rv ->
(spec s).(alternate) s' rv).
Proof.
unfold proc_rspec, rimpl, spec_exec, spec_aexec; split; intros.
- intuition eauto 10.
- split; intros x y ?.
specialize (H x); intuition eauto.
specialize (H x); intuition eauto.
Qed.
Theorem proc_hspec_expand T (p: proc T)
(spec: Specification T unit State) :
proc_hspec p spec <->
forall s,
(spec s).(pre) ->
(forall s' v, exec p s s' v ->
(spec s).(post) s' v) /\
(forall s' rv, exec_crash p s s' rv ->
(spec s).(alternate) s' rv).
Proof.
unfold proc_hspec, rimpl, spec_exec, spec_aexec; split; intros.
- intuition eauto 10.
- split; intros x y ?.
specialize (H x); intuition eauto.
specialize (H x); intuition eauto.
Qed.
Theorem proc_rspec_impl
`(spec1: Specification T R State)
`(spec2: Specification T R State)
p rec :
spec_impl spec1 spec2 ->
proc_rspec p rec spec1 ->
proc_rspec p rec spec2.
Proof.
unfold spec_impl; intros.
pose proof (proj1 (proc_rspec_expand _ _ _) H0); clear H0.
apply proc_rspec_expand; intros.
specialize (H s); propositional.
specialize (H1 s); propositional.
eauto 10.
Qed.
Theorem proc_hspec_impl
`(spec1: Specification T unit State)
`(spec2: Specification T unit State)
p :
spec_impl spec1 spec2 ->
proc_hspec p spec1 ->
proc_hspec p spec2.
Proof.
unfold spec_impl; intros.
pose proof (proj1 (proc_hspec_expand _ _) H0); clear H0.
apply proc_hspec_expand; intros.
firstorder.
Qed.
Theorem proc_rspec_exec_equiv T `(spec: Specification T R State)
(p p': proc T) `(rec: proc R):
exec_equiv sem p p' ->
proc_rspec p' rec spec ->
proc_rspec p rec spec.
Proof. unfold proc_rspec. intros ->; auto. Qed.
Theorem proc_hspec_exec_equiv T `(spec: Specification T unit State)
(p p': proc T):
exec_equiv sem p p' ->
proc_hspec p' spec ->
proc_hspec p spec.
Proof. unfold proc_hspec. intros ->; auto. Qed.
Theorem proc_rspec_rx T T' R `(spec: Specification T R State)
`(p: proc T) `(rec: proc R)
`(rx: T -> proc T')
`(spec': Specification T' R State):
proc_rspec p rec spec ->
(forall state, pre (spec' state) -> pre (spec state) /\
(forall r,
proc_rspec (rx r) rec
(fun state' =>
{| pre := post (spec state) state' r;
post :=
fun (state'' : State) r =>
post (spec' state) state'' r;
alternate :=
fun (state'' : State) r =>
alternate (spec' state) state'' r |})
) /\
(forall (r: R) (state': State), alternate (spec state) state' r ->
alternate (spec' state) state' r)) ->
proc_rspec (Bind p rx) rec spec'.
Proof.
unfold proc_rspec at 3. intros (Hp_ok&Hp_rec) Hrx.
split.
- simpl; rew Hp_ok.
intros state state' t' (t&(state_mid&Hspec_mid&Hexec_mid)) Hpre'.
specialize (Hrx _ Hpre') as (Hpre&Hok&Hrec).
specialize (Hok t). rewrite proc_rspec_expand in Hok.
destruct (Hok state_mid) as (Hrx_ok&Hrx_rec); simpl; eauto.
- rewrite rexec_unfold. rewrite rexec_unfold in Hp_rec.
simpl. rewrite bind_dist_r.
apply rel_or_elim.
+ rewrite Hp_rec; auto.
intros state state' r Hspec_aexec Hpre'.
specialize (Hrx _ Hpre') as (Hpre&?&Hrec); eauto.
+ rewrite bind_assoc, Hp_ok.
intros state state' t' (t&(state_mid&Hspec_mid&Hcrash_mid)) Hpre'.
specialize (Hrx _ Hpre') as (Hpre&Hok&Hrec).
specialize (Hok t). rewrite proc_rspec_expand in Hok.
destruct (Hok state_mid) as (Hrx_ok&Hrx_rec); simpl; eauto.
Qed.
Theorem proc_hspec_rx T T' `(spec: Specification T unit State)
`(p: proc T)
`(rx: T -> proc T')
`(spec': Specification T' unit State):
proc_hspec p spec ->
(forall state, pre (spec' state) -> pre (spec state) /\
(forall r,
proc_hspec (rx r)
(fun state' =>
{| pre := post (spec state) state' r;
post :=
fun (state'' : State) r =>
post (spec' state) state'' r;
alternate :=
fun (state'' : State) r =>
alternate (spec' state) state'' r |})
) /\
(forall (r: unit) (state': State), alternate (spec state) state' r ->
alternate (spec' state) state' r)) ->
proc_hspec (Bind p rx) spec'.
Proof.
unfold proc_hspec at 3. intros (Hp_ok&Hp_rec) Hrx.
split.
- simpl; rew Hp_ok.
intros state state' t' (t&(state_mid&Hspec_mid&Hexec_mid)) Hpre'.
specialize (Hrx _ Hpre') as (Hpre&Hok&Hrec).
specialize (Hok t). rewrite proc_hspec_expand in Hok.
destruct (Hok state_mid) as (Hrx_ok&Hrx_rec); simpl; eauto.
- simpl.
apply rel_or_elim.
+ rewrite Hp_rec; auto.
intros state state' r Hspec_aexec Hpre'.
specialize (Hrx _ Hpre') as (Hpre&?&Hrec); eauto.
+ rewrite Hp_ok.
intros state state' t' (t&(state_mid&Hspec_mid&Hcrash_mid)) Hpre'.
specialize (Hrx _ Hpre') as (Hpre&Hok&Hrec).
specialize (Hok t). rewrite proc_hspec_expand in Hok.
destruct (Hok state_mid) as (Hrx_ok&Hrx_rec); simpl; eauto.
Qed.
(** ** Reasoning about the [Ret] return operation.
The simplest procedure we can construct in our model is
the return operation, [Ret]. Writing a specification for
[Ret] should be intuitively straightforward, but turns out
to be slightly complicated by the possibility of crashes.
The [rec_noop] definition below captures this notion: a
[Ret v] procedure has no precondition, and has a simple
postcondition (the state does not change and the return
value is [v]), but in case of a crash, the state is wiped
according to some [wipe] relation.
[rec_noop] is a proposition that states that [Ret v] actually
meets this specification. Proving [rec_noop] will be a
proof obligation, and boils down to showing that the recovery
procedure [rec] corresponds to the wipe relation [wipe].
*)
Definition rec_noop `(rec: proc R) (wipe: State -> State -> Prop) :=
forall T (v:T),
proc_rspec
(Ret v) rec
(fun state =>
{| pre := True;
post := fun state' r => r = v /\
state' = state;
alternate := fun state' _ => wipe state state'; |}).
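(** Concretely, [rec_noop rec wipe] says: for any [Ret v], a normal execution
    returns [v] without touching the state, and a crash followed by [rec] may
    only change the state as described by [wipe].  For instance, taking
    [wipe] to be equality would assert that recovery restores exactly the
    pre-crash state (an illustrative instantiation, not one made in this
    file). *)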
(** A more general theorem about recovery specifications for [Ret], which
we will use as part of our proof automation, says
that [Ret v] meets a specification [spec] if the [rec_noop]
theorem holds (i.e., the recovery procedure is correctly
described by a wipe relation [wipe]), and the specification
[spec] matches the [wipe] relation:
*)
Theorem ret_rspec T R (wipe: State -> State -> Prop) `(spec: Specification T R State)
(v:T) (rec: proc R):
rec_noop rec wipe ->
(forall state, pre (spec state) ->
post (spec state) state v /\
forall state', wipe state state' ->
forall (r : R), alternate (spec state) state' r) ->
proc_rspec (Ret v) rec spec .
Proof.
unfold proc_rspec; intros Hnoop Himpl; split.
- intros state state' t Hexec Hpre.
inversion Hexec; subst. specialize (Himpl _ Hpre). intuition.
- destruct (Hnoop _ v) as (?&->).
unfold spec_aexec. firstorder.
Qed.
Theorem ret_hspec T `(spec: Specification T unit State)
(v:T):
(forall state, pre (spec state) ->
post (spec state) state v /\
(forall state', crash_step state state' tt -> alternate (spec state) state' tt)) ->
proc_hspec (Ret v) spec .
Proof.
unfold proc_hspec, spec_exec, spec_aexec; simpl.
unfold "--->", pure; split; propositional.
specialize (H _ H1); propositional.
specialize (H _ H1); propositional.
destruct o; eauto.
Qed.
(** Define what it means for a spec to be idempotent: *)
Definition idempotent A T R `(spec: A -> Specification T R State) :=
forall a state,
pre (spec a state) ->
forall v state', alternate (spec a state) state' v ->
exists a', pre (spec a' state') /\
forall rv state'', post (spec a' state') state'' rv ->
post (spec a state) state'' rv.
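(** Informally: if a run satisfying [spec a] is interrupted and leaves the
    machine in a state described by [alternate], then some (possibly
    different) ghost value [a'] makes the precondition hold again in that
    state, and finishing from there under [spec a'] only produces outcomes
    that [spec a] already allows.  This is what lets a recovery procedure be
    restarted after repeated crashes. *)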
(** In some situations, the precondition of a specification
may define variables or facts that you want to [intros].
Here we define several helper theorems and an Ltac tactic, [spec_intros],
that does so. This is done by changing the specification's precondition
from an arbitrary Prop (i.e., [pre]) into a statement that there's
some state [state0] such that [state = state0], and [intros]ing the
arbitrary Prop in question as a hypothesis about [state0].
*)
Theorem rspec_intros T R `(spec: Specification T R State)
`(p: proc T) `(rec: proc R):
(forall state0,
pre (spec state0) ->
proc_rspec p rec
(fun state =>
{| pre := state = state0;
post :=
fun state' r => post (spec state) state' r;
alternate :=
fun state' r => alternate (spec state) state' r;
|})) ->
proc_rspec p rec spec.
Proof.
unfold proc_rspec at 2; intros H.
split; intros s s' r Hexec Hpre; eapply H; simpl; eauto.
Qed.
Theorem hspec_intros T `(spec: Specification T unit State)
`(p: proc T):
(forall state0,
pre (spec state0) ->
proc_hspec p
(fun state =>
{| pre := state = state0;
post :=
fun state' r => post (spec state) state' r;
alternate :=
fun state' r => alternate (spec state) state' r;
|})) ->
proc_hspec p spec.
Proof.
unfold proc_hspec at 2; intros H.
split; intros s s' r Hexec Hpre; eapply H; simpl; eauto using tt.
Qed.
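(** The [spec_intros] Ltac tactic mentioned above is not reproduced in this
    excerpt.  A minimal sketch of what it could look like (an assumption
    about its shape, kept inside this comment rather than asserted as the
    actual definition):

      Ltac spec_intros :=
        intros;
        match goal with
        | |- proc_hspec _ _ _ => eapply hspec_intros; intros
        | |- proc_rspec _ _ _ _ => eapply rspec_intros; intros
        end.
 *)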
Theorem op_spec_sound T (op: Op T):
proc_hspec (Call op) (op_spec sem op).
Proof.
unfold proc_hspec; split.
- intros state state' t Hexec Hpre; eauto.
- simpl. apply rel_or_elim.
* intros s s' [] Hl Hpre. simpl. split; auto.
* intros s s' [] Hl Hpre.
inversion Hl as (?&?&?&Hrest).
firstorder.
Qed.
Theorem op_spec_complete T (op: Op T):
spec_exec (op_spec sem op) ---> exec (Call op) /\
spec_aexec (op_spec sem op) ---> exec_crash (Call op).
Proof. split; firstorder. Qed.
Theorem op_spec_complete1 T (op: Op T):
spec_exec (op_spec sem op) ---> exec (Call op).
Proof. firstorder. Qed.
Theorem op_spec_complete2 T (op: Op T):
spec_aexec (op_spec sem op) ---> crash_step + (sem.(step) op;; crash_step).
Proof. firstorder. Qed.
Lemma spec_aexec_cancel T R1 R2 (spec1 : Specification T R1 State)
(spec2: Specification T R2 State) (r: relation State State R2) :
(forall s, (spec2 s).(pre) -> (spec1 s).(pre)) ->
(forall s r1, _ <- test (fun s' => (spec1 s).(pre) /\ (spec1 s).(alternate) s' r1);
r ---> (fun s2a s2b r => (spec2 s).(pre) -> (spec2 s).(alternate) s2b r)) ->
(_ <- spec_aexec spec1; r) ---> spec_aexec spec2.
Proof.
intros Hpre_impl Hrest s1 s2 r2 Hl Hpre'.
destruct Hl as (r1&smid&?&?).
eapply (Hrest s1 r1); eauto. exists tt, smid. split; simpl; eauto.
unfold test. firstorder.
Qed.
Theorem proc_hspec_to_rspec A' T R (p_hspec: Specification T unit State)
`(rec_hspec: A' -> Specification R unit State)
`(p_rspec: Specification T R State)
`(p: proc T) `(rec: proc R):
proc_hspec p p_hspec ->
(forall a, proc_hspec rec (rec_hspec a)) ->
idempotent rec_hspec ->
(forall s, (p_rspec s).(pre) -> (p_hspec s).(pre) /\
(forall s' v, (p_hspec s).(post) s' v ->
(p_rspec s).(post) s' v)) ->
    (* the alternate of hspec implies the pre of rec_hspec for some ghost value *)
(forall state state' v,
pre (p_hspec state) ->
alternate (p_hspec state) state' v ->
exists a, pre (rec_hspec a state')) ->
(* recovery post implies alternate of rspec *)
(forall a s sc, (p_rspec s).(pre) ->
(forall sfin rv, (rec_hspec a sc).(post) sfin rv ->
(p_rspec s).(alternate) sfin rv)) ->
proc_rspec p rec p_rspec.
Proof.
intros (Hpe&Hpc) Hc.
unfold idempotent. intros Hidemp.
intros Himpl1 Hc_crash_r Hr_alt.
split.
- rew Hpe; auto.
intros s1 s2 t Hl Hpre.
eapply Himpl1; eauto. eapply Hl. eapply Himpl1; eauto.
- unfold rexec. rewrite Hpc.
unfold exec_recover.
eapply spec_aexec_cancel.
{ eapply Himpl1. }
intros s1 [].
setoid_rewrite <-bind_assoc.
assert (test (fun s' : State => (p_hspec s1).(pre) /\ (p_hspec s1).(alternate) s' tt)
---> @any _ _ unit ;; test (fun s' : State => exists a', (rec_hspec a' s').(pre)))
as HCI.
{
unfold test, rimpl, any; propositional.
exists tt; eexists; intuition eauto.
}
rew HCI.
setoid_rewrite <-bind_assoc at 2.
setoid_rewrite <-bind_assoc.
rewrite seq_star_mid_invariant.
* rewrite bind_assoc.
intros sa sb r Hl Hpre_s1.
destruct Hl as ([]&smid&_&Hl).
destruct Hl as ([]&?&Htest&?).
destruct Htest as ((a'&?)&?).
subst. eapply Hr_alt; eauto.
eapply Hc; eauto.
* intros s s' [] Hl.
destruct Hl as ([]&?&((a'&Hhspec)&<-)&Hexec_crash).
unfold any; exists tt; eexists; split; auto.
split; [| eauto].
edestruct Hidemp as (a''&?); eauto.
eapply Hc; eauto. eexists; intuition eauto.
* apply any_seq_star_any.
Qed.
End Hoare.
|
postulate
I : Set
U : I → Set
El : ∀ {i} → U i → Set
infixr 4 _,_
record Σ (A : Set) (B : A → Set) : Set where
constructor _,_
field
proj₁ : A
proj₂ : B proj₁
open Σ
∃ : ∀ {A : Set} → (A → Set) → Set
∃ = Σ _
mutual
infixl 5 _▻_
data Ctxt : Set where
_▻_ : (Γ : Ctxt) → Type Γ → Ctxt
Type : Ctxt → Set
Type Γ = ∃ λ i → Env Γ → U i
Env : Ctxt → Set
Env (Γ ▻ σ) = Σ (Env Γ) λ γ → El (proj₂ σ γ)
mutual
data Ctxt⁺ (Γ : Ctxt) : Set where
_▻_ : (Γ⁺ : Ctxt⁺ Γ) → Type (Γ ++ Γ⁺) → Ctxt⁺ Γ
infixl 5 _++_
_++_ : (Γ : Ctxt) → Ctxt⁺ Γ → Ctxt
Γ ++ (Γ⁺ ▻ σ) = (Γ ++ Γ⁺) ▻ σ
data P : (Γ : Ctxt) → Type Γ → Set where
c : ∀ {Γ σ τ} → P (Γ ▻ σ) τ
f : ∀ {Γ} → Ctxt⁺ Γ → Set₁
f {Γ} (Γ⁺ ▻ σ) = Set
where
g : ∀ τ → P (Γ ++ Γ⁺ ▻ σ) τ → Set₁
g _ c = Set
|
/-
Copyright (c) 2021 David Renshaw. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: David Renshaw
-/
import algebra.geom_sum
import data.rat.basic
import data.real.basic
/-!
# IMO 2013 Q5
Let ℚ>₀ be the set of positive rational numbers. Let f: ℚ>₀ → ℝ be a function satisfying
the conditions
(1) f(x) * f(y) ≥ f(x * y)
(2) f(x + y) ≥ f(x) + f(y)
for all x,y ∈ ℚ>₀. Given that f(a) = a for some rational a > 1, prove that f(x) = x for
all x ∈ ℚ>₀.
# Solution
We provide a direct translation of the solution found in
https://www.imo-official.org/problems/IMO2013SL.pdf
-/
open_locale big_operators
lemma le_of_all_pow_lt_succ {x y : ℝ} (hx : 1 < x) (hy : 1 < y)
(h : ∀ n : ℕ, 0 < n → x^n - 1 < y^n) :
x ≤ y :=
begin
by_contra hxy,
push_neg at hxy,
have hxmy : 0 < x - y := sub_pos.mpr hxy,
have hn : ∀ n : ℕ, 0 < n → (x - y) * (n : ℝ) ≤ x^n - y^n,
{ intros n hn,
have hterm : ∀ i : ℕ, i ∈ finset.range n → 1 ≤ x^i * y^(n - 1 - i),
{ intros i hi,
have hx' : 1 ≤ x ^ i := one_le_pow_of_one_le hx.le i,
have hy' : 1 ≤ y ^ (n - 1 - i) := one_le_pow_of_one_le hy.le (n - 1 - i),
calc 1 ≤ x^i : hx'
... = x^i * 1 : (mul_one _).symm
... ≤ x^i * y^(n-1-i) : mul_le_mul_of_nonneg_left hy' (zero_le_one.trans hx') },
calc (x - y) * (n : ℝ)
= (n : ℝ) * (x - y) : mul_comm _ _
... = (∑ (i : ℕ) in finset.range n, (1 : ℝ)) * (x - y) :
by simp only [mul_one, finset.sum_const, nsmul_eq_mul,
finset.card_range]
... ≤ (∑ (i : ℕ) in finset.range n, x ^ i * y ^ (n - 1 - i)) * (x-y) :
(mul_le_mul_right hxmy).mpr (finset.sum_le_sum hterm)
... = x^n - y^n : geom_sum₂_mul x y n, },
-- Choose n larger than 1 / (x - y).
obtain ⟨N, hN⟩ := exists_nat_gt (1 / (x - y)),
have hNp : 0 < N, { exact_mod_cast (one_div_pos.mpr hxmy).trans hN },
have := calc 1 = (x - y) * (1 / (x - y)) : by field_simp [ne_of_gt hxmy]
... < (x - y) * N : (mul_lt_mul_left hxmy).mpr hN
... ≤ x^N - y^N : hn N hNp,
linarith [h N hNp]
end
/--
Like le_of_all_pow_lt_succ, but with a weaker assumption for y.
-/
lemma le_of_all_pow_lt_succ' {x y : ℝ} (hx : 1 < x) (hy : 0 < y)
(h : ∀ n : ℕ, 0 < n → x^n - 1 < y^n) :
x ≤ y :=
begin
refine le_of_all_pow_lt_succ hx _ h,
by_contra hy'',
push_neg at hy'', -- hy'' : y ≤ 1.
-- Then there exists y' such that 0 < y ≤ 1 < y' < x.
let y' := (x + 1) / 2,
have h_y'_lt_x : y' < x,
{ have hh : (x + 1)/2 < (x * 2) / 2, { linarith },
calc y' < (x * 2) / 2 : hh
... = x : by field_simp },
have h1_lt_y' : 1 < y',
{ have hh' : 1 * 2 / 2 < (x + 1) / 2, { linarith },
calc 1 = 1 * 2 / 2 : by field_simp
... < y' : hh' },
have h_y_lt_y' : y < y' := hy''.trans_lt h1_lt_y',
have hh : ∀ n, 0 < n → x^n - 1 < y'^n,
{ intros n hn,
calc x^n - 1 < y^n : h n hn
... ≤ y'^n : pow_le_pow_of_le_left hy.le h_y_lt_y'.le n },
exact h_y'_lt_x.not_le (le_of_all_pow_lt_succ hx h1_lt_y' hh)
end
lemma f_pos_of_pos {f : ℚ → ℝ} {q : ℚ} (hq : 0 < q)
(H1 : ∀ x y, 0 < x → 0 < y → f (x * y) ≤ f x * f y)
(H4 : ∀ n : ℕ, 0 < n → (n : ℝ) ≤ f n) :
0 < f q :=
begin
have hfqn := calc f q.num = f (q * q.denom) : by rw ←rat.mul_denom_eq_num
... ≤ f q * f q.denom : H1 q q.denom hq (nat.cast_pos.mpr q.pos),
-- Now we just need to show that `f q.num` and `f q.denom` are positive.
-- Then nlinarith will be able to close the goal.
have num_pos : 0 < q.num := rat.num_pos_iff_pos.mpr hq,
have hqna : (q.num.nat_abs : ℤ) = q.num := int.nat_abs_of_nonneg num_pos.le,
have hqfn' :=
calc (q.num : ℝ)
= ((q.num.nat_abs : ℤ) : ℝ) : congr_arg coe (eq.symm hqna)
... ≤ f q.num.nat_abs : H4 q.num.nat_abs
(int.nat_abs_pos_of_ne_zero (ne_of_gt num_pos))
... = f q.num : by exact_mod_cast congr_arg f (congr_arg coe hqna),
have f_num_pos := calc (0 : ℝ) < q.num : int.cast_pos.mpr num_pos
... ≤ f q.num : hqfn',
have f_denom_pos := calc (0 : ℝ) < q.denom : nat.cast_pos.mpr q.pos
... ≤ f q.denom : H4 q.denom q.pos,
nlinarith
end
lemma fx_gt_xm1 {f : ℚ → ℝ} {x : ℚ} (hx : 1 ≤ x)
(H1 : ∀ x y, 0 < x → 0 < y → f (x * y) ≤ f x * f y)
(H2 : ∀ x y, 0 < x → 0 < y → f x + f y ≤ f (x + y))
(H4 : ∀ n : ℕ, 0 < n → (n : ℝ) ≤ f n) :
((x - 1 : ℚ) : ℝ) < f x :=
begin
have hfe : (⌊x⌋.nat_abs : ℤ) = ⌊x⌋ := int.nat_abs_of_nonneg
(floor_nonneg.mpr (zero_le_one.trans hx)),
have hfe' : (⌊x⌋.nat_abs : ℚ) = ⌊x⌋, { exact_mod_cast hfe },
have h0fx : 0 < ⌊x⌋ := int.cast_pos.mp ((sub_nonneg.mpr hx).trans_lt (sub_one_lt_floor x)),
have h_nat_abs_floor_pos : 0 < ⌊x⌋.nat_abs := int.nat_abs_pos_of_ne_zero (ne_of_gt h0fx),
have hx0 := calc ((x - 1 : ℚ) : ℝ)
< ⌊x⌋ : by exact_mod_cast sub_one_lt_floor x
... = ↑⌊x⌋.nat_abs : by exact_mod_cast hfe.symm
... ≤ f ⌊x⌋.nat_abs : H4 ⌊x⌋.nat_abs h_nat_abs_floor_pos
... = f ⌊x⌋ : by rw hfe',
obtain (h_eq : (⌊x⌋ : ℚ) = x) | (h_lt : (⌊x⌋ : ℚ) < x) := (floor_le x).eq_or_lt,
{ rwa h_eq at hx0 },
calc ((x - 1 : ℚ) : ℝ) < f ⌊x⌋ : hx0
... < f (x - ⌊x⌋) + f ⌊x⌋ : lt_add_of_pos_left (f ↑⌊x⌋)
(f_pos_of_pos (sub_pos.mpr h_lt) H1 H4)
... ≤ f (x - ⌊x⌋ + ⌊x⌋) : H2 (x - ⌊x⌋) ⌊x⌋ (sub_pos.mpr h_lt)
(by exact_mod_cast h0fx)
... = f x : by simp only [sub_add_cancel]
end
lemma pow_f_le_f_pow {f : ℚ → ℝ} {n : ℕ} (hn : 0 < n) {x : ℚ} (hx : 1 < x)
(H1 : ∀ x y, 0 < x → 0 < y → f (x * y) ≤ f x * f y)
(H4 : ∀ n : ℕ, 0 < n → (n : ℝ) ≤ f n) :
f (x^n) ≤ (f x)^n :=
begin
induction n with pn hpn,
{ exfalso, exact nat.lt_asymm hn hn },
cases pn,
{ simp only [pow_one] },
have hpn' := hpn pn.succ_pos,
rw [pow_succ' x (pn + 1), pow_succ' (f x) (pn + 1)],
have hxp : 0 < x := zero_lt_one.trans hx,
calc f ((x ^ (pn+1)) * x)
≤ f (x ^ (pn+1)) * f x : H1 (x ^ (pn+1)) x (pow_pos hxp (pn+1)) hxp
... ≤ (f x) ^ (pn+1) * f x : (mul_le_mul_right (f_pos_of_pos hxp H1 H4)).mpr hpn'
end
lemma fixed_point_of_pos_nat_pow {f : ℚ → ℝ} {n : ℕ} (hn : 0 < n)
(H1 : ∀ x y, 0 < x → 0 < y → f (x * y) ≤ f x * f y)
(H4 : ∀ n : ℕ, 0 < n → (n : ℝ) ≤ f n)
(H5 : ∀ x : ℚ, 1 < x → (x : ℝ) ≤ f x)
{a : ℚ} (ha1 : 1 < a) (hae : f a = a) :
f (a^n) = a^n :=
begin
have hh0 : (a : ℝ) ^ n ≤ f (a ^ n),
{ exact_mod_cast H5 (a ^ n) (one_lt_pow ha1 (nat.succ_le_iff.mpr hn)) },
have hh1 := calc f (a^n) ≤ (f a)^n : pow_f_le_f_pow hn ha1 H1 H4
... = (a : ℝ)^n : by rw ← hae,
exact hh1.antisymm hh0
end
lemma fixed_point_of_gt_1 {f : ℚ → ℝ} {x : ℚ} (hx : 1 < x)
(H1 : ∀ x y, 0 < x → 0 < y → f (x * y) ≤ f x * f y)
(H2 : ∀ x y, 0 < x → 0 < y → f x + f y ≤ f (x + y))
(H4 : ∀ n : ℕ, 0 < n → (n : ℝ) ≤ f n)
(H5 : ∀ x : ℚ, 1 < x → (x : ℝ) ≤ f x)
{a : ℚ} (ha1 : 1 < a) (hae : f a = a) :
f x = x :=
begin
-- Choose n such that 1 + x < a^n.
obtain ⟨N, hN⟩ := pow_unbounded_of_one_lt (1 + x) ha1,
have h_big_enough : (1:ℚ) < a^N - x := lt_sub_iff_add_lt.mpr hN,
have h1 := calc (x : ℝ) + ((a^N - x) : ℚ)
≤ f x + ((a^N - x) : ℚ) : add_le_add_right (H5 x hx) _
... ≤ f x + f (a^N - x) : add_le_add_left (H5 _ h_big_enough) _,
have hxp : 0 < x := zero_lt_one.trans hx,
have hNp : 0 < N,
{ by_contra H, push_neg at H, rw [nat.le_zero_iff.mp H] at hN, linarith },
have h2 := calc f x + f (a^N - x)
≤ f (x + (a^N - x)) : H2 x (a^N - x) hxp (zero_lt_one.trans h_big_enough)
... = f (a^N) : by ring_nf
... = a^N : fixed_point_of_pos_nat_pow hNp H1 H4 H5 ha1 hae
... = x + (a^N - x) : by ring,
have heq := h1.antisymm (by exact_mod_cast h2),
linarith [H5 x hx, H5 _ h_big_enough]
end
theorem imo2013_q5
(f : ℚ → ℝ)
(H1 : ∀ x y, 0 < x → 0 < y → f (x * y) ≤ f x * f y)
(H2 : ∀ x y, 0 < x → 0 < y → f x + f y ≤ f (x + y))
(H_fixed_point : ∃ a, 1 < a ∧ f a = a) :
∀ x, 0 < x → f x = x :=
begin
obtain ⟨a, ha1, hae⟩ := H_fixed_point,
have H3 : ∀ x : ℚ, 0 < x → ∀ n : ℕ, 0 < n → ↑n * f x ≤ f (n * x),
{ intros x hx n hn,
cases n,
{ exfalso, exact nat.lt_asymm hn hn },
induction n with pn hpn,
{ simp only [one_mul, nat.cast_one] },
calc (↑pn + 1 + 1) * f x
= ((pn : ℝ) + 1) * f x + 1 * f x : add_mul (↑pn + 1) 1 (f x)
... = (↑pn + 1) * f x + f x : by rw one_mul
... ≤ f ((↑pn.succ) * x) + f x : by exact_mod_cast add_le_add_right
(hpn pn.succ_pos) (f x)
... ≤ f ((↑pn + 1) * x + x) : by exact_mod_cast H2 _ _
(mul_pos pn.cast_add_one_pos hx) hx
... = f ((↑pn + 1) * x + 1 * x) : by rw one_mul
... = f ((↑pn + 1 + 1) * x) : congr_arg f (add_mul (↑pn + 1) 1 x).symm },
have H4 : ∀ n : ℕ, 0 < n → (n : ℝ) ≤ f n,
{ intros n hn,
have hf1 : 1 ≤ f 1,
{ have a_pos : (0 : ℝ) < a := rat.cast_pos.mpr (zero_lt_one.trans ha1),
suffices : ↑a * 1 ≤ ↑a * f 1, from (mul_le_mul_left a_pos).mp this,
calc ↑a * 1 = ↑a : mul_one ↑a
... = f a : hae.symm
... = f (a * 1) : by rw mul_one
... ≤ f a * f 1 : (H1 a 1) (zero_lt_one.trans ha1) zero_lt_one
... = ↑a * f 1 : by rw hae },
calc (n : ℝ) = (n : ℝ) * 1 : (mul_one _).symm
... ≤ (n : ℝ) * f 1 : (mul_le_mul_left (nat.cast_pos.mpr hn)).mpr hf1
... ≤ f (n * 1) : H3 1 zero_lt_one n hn
... = f n : by rw mul_one },
have H5 : ∀ x : ℚ, 1 < x → (x : ℝ) ≤ f x,
{ intros x hx,
have hxnm1 : ∀ n : ℕ, 0 < n → (x : ℝ)^n - 1 < (f x)^n,
{ intros n hn,
calc (x : ℝ)^n - 1 < f (x^n) : by exact_mod_cast fx_gt_xm1 (one_le_pow_of_one_le hx.le n)
H1 H2 H4
... ≤ (f x)^n : pow_f_le_f_pow hn hx H1 H4 },
have hx' : 1 < (x : ℝ) := by exact_mod_cast hx,
have hxp : 0 < x := zero_lt_one.trans hx,
exact le_of_all_pow_lt_succ' hx' (f_pos_of_pos hxp H1 H4) hxnm1 },
have h_f_commutes_with_pos_nat_mul : ∀ n : ℕ, 0 < n → ∀ x : ℚ, 0 < x → f (n * x) = n * f x,
{ intros n hn x hx,
have h2 : f (n * x) ≤ n * f x,
{ cases n,
{ exfalso, exact nat.lt_asymm hn hn },
cases n,
{ simp only [one_mul, nat.cast_one] },
have hfneq : f (n.succ.succ) = n.succ.succ,
{ have := fixed_point_of_gt_1
(nat.one_lt_cast.mpr (nat.succ_lt_succ n.succ_pos)) H1 H2 H4 H5 ha1 hae,
rwa (rat.cast_coe_nat n.succ.succ) at this },
rw ← hfneq,
exact H1 (n.succ.succ : ℚ) x (nat.cast_pos.mpr hn) hx },
exact h2.antisymm (H3 x hx n hn) },
-- For the final calculation, we expand x as (2*x.num) / (2*x.denom), because
-- we need the top of the fraction to be strictly greater than 1 in order
-- to apply fixed_point_of_gt_1.
intros x hx,
let x2denom := 2 * x.denom,
let x2num := 2 * x.num,
have hx2pos := calc 0 < x.denom : x.pos
... < x.denom + x.denom : lt_add_of_pos_left x.denom x.pos
... = 2 * x.denom : by ring,
have hxcnez : (x.denom : ℚ) ≠ (0 : ℚ) := ne_of_gt (nat.cast_pos.mpr x.pos),
have hx2cnezr : (x2denom : ℝ) ≠ (0 : ℝ) := nat.cast_ne_zero.mpr (ne_of_gt hx2pos),
have hrat_expand2 := calc x = x.num / x.denom : by exact_mod_cast rat.num_denom.symm
... = x2num / x2denom : by { field_simp [-rat.num_div_denom], linarith },
have h_denom_times_fx :=
calc (x2denom : ℝ) * f x = f (x2denom * x) : (h_f_commutes_with_pos_nat_mul
x2denom hx2pos x hx).symm
... = f (x2denom * (x2num / x2denom)) : by rw hrat_expand2
... = f x2num : by { congr, field_simp, ring },
have h_fx2num_fixed : f x2num = x2num,
{ have hx2num_gt_one : (1 : ℚ) < (2 * x.num : ℤ),
{ norm_cast, linarith [rat.num_pos_iff_pos.mpr hx] },
have hh := fixed_point_of_gt_1 hx2num_gt_one H1 H2 H4 H5 ha1 hae,
rwa (rat.cast_coe_int x2num) at hh },
calc f x = f x * 1 : (mul_one (f x)).symm
... = f x * (x2denom / x2denom) : by rw ←(div_self hx2cnezr)
... = (f x * x2denom) / x2denom : mul_div_assoc' (f x) _ _
... = (x2denom * f x) / x2denom : by rw mul_comm
... = f x2num / x2denom : by rw h_denom_times_fx
... = x2num / x2denom : by rw h_fx2num_fixed
... = (((x2num : ℚ) / (x2denom : ℚ) : ℚ) : ℝ) : by norm_cast
... = x : by rw ←hrat_expand2
end
|
open import prelude
open import ch6 using (Term ; var ; fun ; _$_ ; two ; four)
-- A stack is a list of values
-- Values are either closures or errors
data Value : Set where
error : Value
closure : List(Value) → Term → Value
Stack = List(Value)
lookup : (s : Stack) → (x : Nat) → Value
lookup nil x = error
lookup (v :: s) zero = v
lookup (v :: s) (suc x) = lookup s x
-- Big step sematics
data _▷_⇓_ : Stack → Term → Value → Set where
E─Var : ∀ {s x} →
--------------------------
s ▷ (var x) ⇓ (lookup s x)
E─Fun : ∀ {s t} →
---------------------------
s ▷ (fun t) ⇓ (closure s t)
E─App : ∀ {s s′ t t′ u u′ v} →
s ▷ t ⇓ (closure s′ t′) →
s ▷ u ⇓ u′ →
(u′ :: s′) ▷ t′ ⇓ v →
-------------------
s ▷ (t $ u) ⇓ v
E─AppErr : ∀ {s t u} →
s ▷ t ⇓ error →
-------------------
s ▷ (t $ u) ⇓ error
-- An interpreter result
data Result : Stack → Term → Set where
result : ∀ {s t} →
(v : Value) →
(s ▷ t ⇓ v) →
-------------
Result s t
{-# NON_TERMINATING #-}
interpret : (s : Stack) → (t : Term) → Result s t
interpret s (var x) = result (lookup s x) E─Var
interpret s (fun t) = result (closure s t) E─Fun
interpret s (t₁ $ t₂) = helper₁ (interpret s t₁) where
helper₃ : ∀ {s′ t′ u′} → (s ▷ t₁ ⇓ (closure s′ t′)) → (s ▷ t₂ ⇓ u′) → Result (u′ :: s′) t′ → Result s (t₁ $ t₂)
helper₃ p q (result v r) = result v (E─App p q r)
helper₂ : ∀ {s′ t′} → (s ▷ t₁ ⇓ (closure s′ t′)) → Result s t₂ → Result s (t₁ $ t₂)
helper₂ {s′} {t′} p (result u′ q) = helper₃ p q (interpret (u′ :: s′) t′)
helper₁ : Result s t₁ → Result s (t₁ $ t₂)
helper₁ (result error p) = result error (E─AppErr p)
helper₁ (result (closure s′ t′) p) = helper₂ p (interpret s t₂)
-- Shorthand for starting from the empty stack
i = interpret nil
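-- Example use (a sketch): `i ((fun (var zero)) $ (fun (var zero)))` applies the
-- identity function to itself, and `i (two $ two)` runs Church-numeral
-- application on the terms imported from ch6 (extensionally it behaves like
-- `four`, though the resulting closure need not be syntactically identical).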
|
sanitizedCode = concerto.test.sanitizeSource(code)
result = eval(parse(text=sanitizedCode)) |
Formal statement is: lemma order_decomp: assumes "p \<noteq> 0" shows "\<exists>q. p = [:- a, 1:] ^ order a p * q \<and> \<not> [:- a, 1:] dvd q" Informal statement is: If $p$ is a nonzero polynomial, then there exists a polynomial $q$ such that $p = (x - a)^{ord_a(p)} q$ and $(x - a)$ does not divide $q$. |
[STATEMENT]
theorem Class_subset:
"(a, b) \<in> E \<Longrightarrow> Class a \<subseteq> Class b"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (a, b) \<in> E \<Longrightarrow> Class a \<subseteq> Class b
[PROOF STEP]
proof
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>x. \<lbrakk>(a, b) \<in> E; x \<in> Class a\<rbrakk> \<Longrightarrow> x \<in> Class b
[PROOF STEP]
fix a and b and c
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>x. \<lbrakk>(a, b) \<in> E; x \<in> Class a\<rbrakk> \<Longrightarrow> x \<in> Class b
[PROOF STEP]
assume "(a, b) \<in> E" and "c \<in> Class a"
[PROOF STATE]
proof (state)
this:
(a, b) \<in> E
c \<in> Class a
goal (1 subgoal):
1. \<And>x. \<lbrakk>(a, b) \<in> E; x \<in> Class a\<rbrakk> \<Longrightarrow> x \<in> Class b
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
(a, b) \<in> E
c \<in> Class a
[PROOF STEP]
have "(c, a) \<in> E"
[PROOF STATE]
proof (prove)
using this:
(a, b) \<in> E
c \<in> Class a
goal (1 subgoal):
1. (c, a) \<in> E
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
(c, a) \<in> E
goal (1 subgoal):
1. \<And>x. \<lbrakk>(a, b) \<in> E; x \<in> Class a\<rbrakk> \<Longrightarrow> x \<in> Class b
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
(c, a) \<in> E
goal (1 subgoal):
1. \<And>x. \<lbrakk>(a, b) \<in> E; x \<in> Class a\<rbrakk> \<Longrightarrow> x \<in> Class b
[PROOF STEP]
note \<open>(a, b) \<in> E\<close>
[PROOF STATE]
proof (state)
this:
(a, b) \<in> E
goal (1 subgoal):
1. \<And>x. \<lbrakk>(a, b) \<in> E; x \<in> Class a\<rbrakk> \<Longrightarrow> x \<in> Class b
[PROOF STEP]
finally
[PROOF STATE]
proof (chain)
picking this:
(c, b) \<in> E
[PROOF STEP]
have "(c, b) \<in> E"
[PROOF STATE]
proof (prove)
using this:
(c, b) \<in> E
goal (1 subgoal):
1. (c, b) \<in> E
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
(c, b) \<in> E
goal (1 subgoal):
1. \<And>x. \<lbrakk>(a, b) \<in> E; x \<in> Class a\<rbrakk> \<Longrightarrow> x \<in> Class b
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
(c, b) \<in> E
[PROOF STEP]
show "c \<in> Class b"
[PROOF STATE]
proof (prove)
using this:
(c, b) \<in> E
goal (1 subgoal):
1. c \<in> Class b
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
c \<in> Class b
goal:
No subgoals!
[PROOF STEP]
qed |
(*
Copyright 2021-2022 Boris Shminke
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*)
theory DistributiveCase
imports Residuated_Lattices.Residuated_Lattices
begin
class DistributiveResiduatedBinar = residuated_lgroupoid + distrib_lattice
begin
lemma lemma2_2_3:
"(\<forall> x y z w. (x \<squnion> y) \<rightarrow> (z \<squnion> w) \<le> (x \<rightarrow> z) \<squnion> (y \<rightarrow> w)) \<longleftrightarrow>
(\<forall> x y z. x \<rightarrow> (y \<squnion> z) = (x \<rightarrow> y) \<squnion> (x \<rightarrow> z))"
by (metis local.resr_subdist local.resr_superdist_var local.sup_absorb2 local.sup_commute local.sup_idem local.sup_mono)
lemma lemma2_2_5:
"(\<forall> x y z w. (x \<sqinter> y) \<rightarrow> (z \<sqinter> w) \<le> (x \<rightarrow> z) \<squnion> (y \<rightarrow> w)) \<longleftrightarrow>
(\<forall> x y z. (x \<sqinter> y) \<rightarrow> z = (x \<rightarrow> z) \<squnion> (y \<rightarrow> z))"
proof -
have "(\<forall> x y z w. (x \<sqinter> y) \<rightarrow> (z \<sqinter> w) \<le> (x \<rightarrow> z) \<squnion> (y \<rightarrow> w)) \<longrightarrow>
(\<forall> x y z. (x \<sqinter> y) \<rightarrow> z = (x \<rightarrow> z) \<squnion> (y \<rightarrow> z))"
by (smt (z3) abel_semigroup.commute local.distrib_inf_le local.inf.abel_semigroup_axioms local.inf.orderE local.inf.semilattice_axioms local.resr_distl local.sup_inf_absorb semilattice.idem)
have "(\<forall> x y z. (x \<sqinter> y) \<rightarrow> z = (x \<rightarrow> z) \<squnion> (y \<rightarrow> z)) \<longrightarrow>
(\<forall> x y z w. (x \<sqinter> y) \<rightarrow> (z \<sqinter> w) \<le> (x \<rightarrow> z) \<squnion> (y \<rightarrow> w))"
by (metis local.inf.cobounded1 local.inf.commute local.resr_iso local.sup.mono)
show ?thesis
using \<open>(\<forall>x y z w. x \<sqinter> y \<rightarrow> z \<sqinter> w \<le> (x \<rightarrow> z) \<squnion> (y \<rightarrow> w)) \<longrightarrow> (\<forall>x y z. x \<sqinter> y \<rightarrow> z = (x \<rightarrow> z) \<squnion> (y \<rightarrow> z))\<close> \<open>(\<forall>x y z. x \<sqinter> y \<rightarrow> z = (x \<rightarrow> z) \<squnion> (y \<rightarrow> z)) \<longrightarrow> (\<forall>x y z w. x \<sqinter> y \<rightarrow> z \<sqinter> w \<le> (x \<rightarrow> z) \<squnion> (y \<rightarrow> w))\<close> by blast
qed
theorem Main:
assumes one: "\<forall>x y z. (x \<squnion> y) \<leftarrow> z = (x \<leftarrow> z) \<squnion> (y \<leftarrow> z)"
and two: "\<forall>x y z. (x \<sqinter> y) \<rightarrow> z = (x \<rightarrow> z) \<squnion> (y \<rightarrow> z)"
shows "\<forall>x y z. x \<rightarrow> (y \<squnion> z) = (x \<rightarrow> y) \<squnion> (x \<rightarrow> z)"
proof -
{
fix x y z w u
assume "u = (x \<squnion> y) \<rightarrow> (z \<squnion> w)"
have "u \<le> ((x \<sqinter> (z \<leftarrow> u)) \<rightarrow> z) \<squnion> ((y \<sqinter> (w \<leftarrow> u)) \<rightarrow> w)"
using local.inf.cobounded2 local.resl_galois local.sup.coboundedI1 by blast
have "u \<le> ((x \<sqinter> (z \<leftarrow> u)) \<rightarrow> z) \<squnion> ((y \<sqinter> (z \<leftarrow> u)) \<rightarrow> w)"
using local.inf.cobounded2 local.le_supI1 local.resl_galois by blast
have "u \<le> ((x \<sqinter> (w \<leftarrow> u)) \<rightarrow> z) \<squnion> ((y \<sqinter> (w \<leftarrow> u)) \<rightarrow> w)"
using local.inf.cobounded2 local.resl_galois local.sup.coboundedI2 by blast
have "u \<le> ((x \<sqinter> (w \<leftarrow> u)) \<sqinter> (y \<sqinter> (z \<leftarrow> u))) \<rightarrow> (z \<sqinter> w)"
by (meson local.inf_le1 local.inf_le2 local.le_inf_iff local.resl_galois local.resrI)
have "u \<le> ((x \<sqinter> (w \<leftarrow> u)) \<rightarrow> z) \<squnion> ((y \<sqinter> (z \<leftarrow> u)) \<rightarrow> w)"
using \<open>u \<le> x \<sqinter> (w \<leftarrow> u) \<sqinter> (y \<sqinter> (z \<leftarrow> u)) \<rightarrow> z \<sqinter> w\<close> lemma2_2_5 local.dual_order.trans two by blast
have "u \<le>
(
(((x \<sqinter> (w \<leftarrow> u)) \<rightarrow> z)
\<sqinter> ((x \<sqinter> (z \<leftarrow> u))\<rightarrow> z))
\<squnion> ((y \<sqinter> (z \<leftarrow> u)) \<rightarrow> w)
) \<sqinter> (
(((x \<sqinter> (z \<leftarrow> u)) \<rightarrow> z)
\<sqinter> ((x \<sqinter> (w \<leftarrow> u)) \<rightarrow> z))
\<squnion> ((y \<sqinter> (w \<leftarrow> u)) \<rightarrow> w))"
by (simp add: \<open>u \<le> (x \<sqinter> (w \<leftarrow> u) \<rightarrow> z) \<squnion> (y \<sqinter> (w \<leftarrow> u) \<rightarrow> w)\<close> \<open>u \<le> (x \<sqinter> (w \<leftarrow> u) \<rightarrow> z) \<squnion> (y \<sqinter> (z \<leftarrow> u) \<rightarrow> w)\<close> \<open>u \<le> (x \<sqinter> (z \<leftarrow> u) \<rightarrow> z) \<squnion> (y \<sqinter> (w \<leftarrow> u) \<rightarrow> w)\<close> \<open>u \<le> (x \<sqinter> (z \<leftarrow> u) \<rightarrow> z) \<squnion> (y \<sqinter> (z \<leftarrow> u) \<rightarrow> w)\<close> local.sup_inf_distrib2)
have "u \<le> (x \<rightarrow> z) \<squnion> (y \<rightarrow> w)"
by (metis \<open>u = x \<squnion> y \<rightarrow> z \<squnion> w\<close> \<open>u \<le> (x \<sqinter> (w \<leftarrow> u) \<rightarrow> z) \<sqinter> (x \<sqinter> (z \<leftarrow> u) \<rightarrow> z) \<squnion> (y \<sqinter> (z \<leftarrow> u) \<rightarrow> w) \<sqinter> ((x \<sqinter> (z \<leftarrow> u) \<rightarrow> z) \<sqinter> (x \<sqinter> (w \<leftarrow> u) \<rightarrow> z) \<squnion> (y \<sqinter> (w \<leftarrow> u) \<rightarrow> w))\<close> local.inf.order_iff local.inf_sup_distrib1 local.jipsen1l local.le_supE local.resr_distl local.sup.commute local.sup_inf_distrib2 one)
}
have "\<forall> x y z w. (x \<squnion> y) \<rightarrow> (z \<squnion> w) \<le> (x \<rightarrow> z) \<squnion> (y \<rightarrow> w)"
by (simp add: \<open>\<And>z y x w u. u = x \<squnion> y \<rightarrow> z \<squnion> w \<Longrightarrow> u \<le> (x \<rightarrow> z) \<squnion> (y \<rightarrow> w)\<close>)
show ?thesis
using \<open>\<forall>x y z w. x \<squnion> y \<rightarrow> z \<squnion> w \<le> (x \<rightarrow> z) \<squnion> (y \<rightarrow> w)\<close> lemma2_2_3 by blast
qed
end
end
|
import Lbar.torsion_free_profinite
import condensed.condensify
noncomputable theory
universe u
open category_theory opposite
open_locale nnreal
namespace CompHausFiltPseuNormGrp
lemma to_Condensed_torsion_free (M : CompHausFiltPseuNormGrp) [no_zero_smul_divisors ℤ M]
(T : ExtrDisc) :
no_zero_smul_divisors ℤ ((to_Condensed.obj M).val.obj (op T.val)) :=
begin
dsimp, constructor,
intros n f hf,
rw or_iff_not_imp_left,
intro hn,
ext t,
apply_fun (λ φ, φ.down.val t) at hf,
apply smul_right_injective M hn,
dsimp [presheaf.has_zero] at hf ⊢,
convert hf using 1,
apply smul_zero
end
end CompHausFiltPseuNormGrp
namespace Lbar
variables (r' : ℝ≥0) [fact (0 < r')]
lemma Fintype_Lbar_torsion_free (X : Fintype) :
no_zero_smul_divisors ℤ ((Fintype_Lbar r' ⋙ PFPNGT₁_to_CHFPNG₁ₑₗ r').obj X) :=
Fintype.Lbar_no_zero_smul_divisors _ _
lemma condensify_torsion_free (A : Fintype.{u} ⥤ CompHausFiltPseuNormGrp₁)
(hA : ∀ X, no_zero_smul_divisors ℤ (A.obj X))
(S : Profinite) (T : ExtrDisc.{u}) :
no_zero_smul_divisors ℤ (((condensify A).obj S).val.obj (op T.val)) :=
begin
apply_with CompHausFiltPseuNormGrp.to_Condensed_torsion_free {instances := ff},
apply Profinite.extend_torsion_free,
apply hA,
end
def condensed : Profinite.{u} ⥤ Condensed.{u} Ab.{u+1} :=
condensify (Fintype_Lbar.{u u} r' ⋙ PFPNGT₁_to_CHFPNG₁ₑₗ r')
instance (S : Profinite.{u}) (T : ExtrDisc.{u}) :
no_zero_smul_divisors ℤ (((condensed.{u} r').obj S).val.obj (op T.val)) :=
condensify_torsion_free _ (Fintype_Lbar_torsion_free r') _ _
end Lbar
|
variables x y z w : ℕ
example (h₁ : x = y) (h₂ : y = z) (h₃ : z = w) : x = w :=
begin
apply eq.trans,
assumption,
apply eq.trans,
assumption,
assumption
end
|
/-
Copyright (c) 2020 Eric Wieser. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Eric Wieser, Utensil Song
-/
import algebra.ring_quot
import linear_algebra.tensor_algebra
import linear_algebra.exterior_algebra
import linear_algebra.quadratic_form
/-!
# Clifford Algebras
We construct the Clifford algebra of a module `M` over a commutative ring `R`, equipped with
a quadratic_form `Q`.
## Notation
The Clifford algebra of the `R`-module `M` equipped with a quadratic_form `Q` is denoted as
`clifford_algebra Q`.
Given a linear morphism `f : M → A` from a module `M` to another `R`-algebra `A`, such that
`cond : ∀ m, f m * f m = algebra_map _ _ (Q m)`, there is a (unique) lift of `f` to an `R`-algebra
morphism, which is denoted `clifford_algebra.lift Q f cond`.
The canonical linear map `M → clifford_algebra Q` is denoted `clifford_algebra.ι Q`.
## Theorems
The main theorems proved ensure that `clifford_algebra Q` satisfies the universal property
of the Clifford algebra.
1. `ι_comp_lift` is the fact that the composition of `ι Q` with `lift Q f cond` agrees with `f`.
2. `lift_unique` ensures the uniqueness of `lift Q f cond` with respect to 1.
Additionally, when `Q = 0` an `alg_equiv` to the `exterior_algebra` is provided as `as_exterior`.
## Implementation details
The Clifford algebra of `M` is constructed as a quotient of the tensor algebra, as follows.
1. We define a relation `clifford_algebra.rel Q` on `tensor_algebra R M`.
This is the smallest relation which identifies squares of elements of `M` with `Q m`.
2. The Clifford algebra is the quotient of the tensor algebra by this relation.
This file is almost identical to `linear_algebra/exterior_algebra.lean`.
-/
variables {R : Type*} [comm_ring R]
variables {M : Type*} [add_comm_group M] [module R M]
variables (Q : quadratic_form R M)
variable {n : ℕ}
namespace clifford_algebra
open tensor_algebra
/-- `rel` relates each `ι m * ι m`, for `m : M`, with `Q m`.
The Clifford algebra of `M` is defined as the quotient modulo this relation.
-/
inductive rel : tensor_algebra R M → tensor_algebra R M → Prop
| of (m : M) : rel (ι R m * ι R m) (algebra_map R _ (Q m))
end clifford_algebra
/--
The Clifford algebra of an `R`-module `M` equipped with a quadratic_form `Q`.
-/
@[derive [inhabited, ring, algebra R]]
def clifford_algebra := ring_quot (clifford_algebra.rel Q)
namespace clifford_algebra
/--
The canonical linear map `M →ₗ[R] clifford_algebra Q`.
-/
def ι : M →ₗ[R] clifford_algebra Q :=
(ring_quot.mk_alg_hom R _).to_linear_map.comp (tensor_algebra.ι R)
/-- As well as being linear, `ι Q` squares to the quadratic form -/
@[simp]
theorem ι_sq_scalar (m : M) : ι Q m * ι Q m = algebra_map R _ (Q m) :=
begin
erw [←alg_hom.map_mul, ring_quot.mk_alg_hom_rel R (rel.of m), alg_hom.commutes],
refl,
end
variables {Q} {A : Type*} [semiring A] [algebra R A]
@[simp]
theorem comp_ι_sq_scalar (g : clifford_algebra Q →ₐ[R] A) (m : M) :
g (ι Q m) * g (ι Q m) = algebra_map _ _ (Q m) :=
by rw [←alg_hom.map_mul, ι_sq_scalar, alg_hom.commutes]
variables (Q)
/--
Given a linear map `f : M →ₗ[R] A` into an `R`-algebra `A`, which satisfies the condition:
`cond : ∀ m : M, f m * f m = Q(m)`, this is the canonical lift of `f` to a morphism of `R`-algebras
from `clifford_algebra Q` to `A`.
-/
@[simps symm_apply]
def lift :
{f : M →ₗ[R] A // ∀ m, f m * f m = algebra_map _ _ (Q m)} ≃ (clifford_algebra Q →ₐ[R] A) :=
{ to_fun := λ f,
ring_quot.lift_alg_hom R ⟨tensor_algebra.lift R (f : M →ₗ[R] A),
(λ x y (h : rel Q x y), by {
induction h,
rw [alg_hom.commutes, alg_hom.map_mul, tensor_algebra.lift_ι_apply, f.prop], })⟩,
inv_fun := λ F, ⟨F.to_linear_map.comp (ι Q), λ m, by rw [
linear_map.comp_apply, alg_hom.to_linear_map_apply, comp_ι_sq_scalar]⟩,
left_inv := λ f, by { ext, simp [ι] },
right_inv := λ F, by { ext, simp [ι] } }
variables {Q}
@[simp]
theorem ι_comp_lift (f : M →ₗ[R] A) (cond : ∀ m, f m * f m = algebra_map _ _ (Q m)) :
(lift Q ⟨f, cond⟩).to_linear_map.comp (ι Q) = f :=
(subtype.mk_eq_mk.mp $ (lift Q).symm_apply_apply ⟨f, cond⟩)
@[simp]
theorem lift_ι_apply (f : M →ₗ[R] A) (cond : ∀ m, f m * f m = algebra_map _ _ (Q m)) (x) :
lift Q ⟨f, cond⟩ (ι Q x) = f x :=
(linear_map.ext_iff.mp $ ι_comp_lift f cond) x
@[simp]
theorem lift_unique (f : M →ₗ[R] A) (cond : ∀ m : M, f m * f m = algebra_map _ _ (Q m))
(g : clifford_algebra Q →ₐ[R] A) :
g.to_linear_map.comp (ι Q) = f ↔ g = lift Q ⟨f, cond⟩ :=
begin
convert (lift Q).symm_apply_eq,
rw lift_symm_apply,
simp only,
end
attribute [irreducible] clifford_algebra ι lift
@[simp]
theorem lift_comp_ι (g : clifford_algebra Q →ₐ[R] A) :
lift Q ⟨g.to_linear_map.comp (ι Q), comp_ι_sq_scalar _⟩ = g :=
begin
convert (lift Q).apply_symm_apply g,
rw lift_symm_apply,
refl,
end
/-- See note [partially-applied ext lemmas]. -/
@[ext]
theorem hom_ext {A : Type*} [semiring A] [algebra R A] {f g : clifford_algebra Q →ₐ[R] A} :
f.to_linear_map.comp (ι Q) = g.to_linear_map.comp (ι Q) → f = g :=
begin
intro h,
apply (lift Q).symm.injective,
rw [lift_symm_apply, lift_symm_apply],
simp only [h],
end
/-- If `C` holds for the `algebra_map` of `r : R` into `clifford_algebra Q`, the `ι` of `x : M`,
and is preserved under addition and multiplication, then it holds for all of `clifford_algebra Q`.
-/
-- This proof closely follows `tensor_algebra.induction`
@[elab_as_eliminator]
lemma induction {C : clifford_algebra Q → Prop}
(h_grade0 : ∀ r, C (algebra_map R (clifford_algebra Q) r))
(h_grade1 : ∀ x, C (ι Q x))
(h_mul : ∀ a b, C a → C b → C (a * b))
(h_add : ∀ a b, C a → C b → C (a + b))
(a : clifford_algebra Q) :
C a :=
begin
-- the arguments are enough to construct a subalgebra, and a mapping into it from M
let s : subalgebra R (clifford_algebra Q) := {
carrier := C,
mul_mem' := h_mul,
add_mem' := h_add,
algebra_map_mem' := h_grade0, },
let of : { f : M →ₗ[R] s // ∀ m, f m * f m = algebra_map _ _ (Q m) } :=
⟨(ι Q).cod_restrict s.to_submodule h_grade1,
λ m, subtype.eq $ ι_sq_scalar Q m ⟩,
-- the mapping through the subalgebra is the identity
have of_id : alg_hom.id R (clifford_algebra Q) = s.val.comp (lift Q of),
{ ext,
simp [of], },
-- finding a proof is finding an element of the subalgebra
convert subtype.prop (lift Q of a),
exact alg_hom.congr_fun of_id a,
end
/-- A Clifford algebra with a zero quadratic form is isomorphic to an `exterior_algebra` -/
def as_exterior : clifford_algebra (0 : quadratic_form R M) ≃ₐ[R] exterior_algebra R M :=
alg_equiv.of_alg_hom
(clifford_algebra.lift 0 ⟨(exterior_algebra.ι R), by simp⟩)
(exterior_algebra.lift R ⟨(ι (0 : quadratic_form R M)), by simp⟩)
(by { ext, simp, })
(by { ext, simp, })
end clifford_algebra
namespace tensor_algebra
variables {Q}
/-- The canonical image of the `tensor_algebra` in the `clifford_algebra`, which maps
`tensor_algebra.ι R x` to `clifford_algebra.ι Q x`. -/
def to_clifford : tensor_algebra R M →ₐ[R] clifford_algebra Q :=
tensor_algebra.lift R (clifford_algebra.ι Q)
@[simp] lemma to_clifford_ι (m : M) : (tensor_algebra.ι R m).to_clifford = clifford_algebra.ι Q m :=
by simp [to_clifford]
end tensor_algebra
|
#=
This is where I create denoising functions for the plug and play methods
=#
using BSON: @load
using MLDatasets: FashionMNIST
using NNlib
using Flux
using Zygote
using LinearAlgebra
#=
Here I copy Babhru's VAE denoiser
=#
# This function determines the architecture
function create_vae()
# Define the encoder and decoder networks
encoder_features = Chain(
Dense(784,500, relu),
Dense(500,500, relu)
)
encoder_μ = Chain(encoder_features, Dense(500, 20))
encoder_logvar = Chain(encoder_features, Dense(500, 20))
decoder = Chain(
Dense(20, 500, relu, bias = false),
Dense(500, 500, relu, bias = false),
Dense(500, 784, sigmoid, bias = false)
)
return encoder_μ, encoder_logvar, decoder
end
# This function loads the parameters to the model
# For now I'm using a pre-trained model
function load_model(load_dir::String, epoch::Int)
print("Loading model...")
@load joinpath(load_dir, "model-$epoch.bson") encoder_μ encoder_logvar decoder
println("Done")
return encoder_μ, encoder_logvar, decoder
end
function reconstruct_images(encoder_μ, encoder_logvar, decoder, x)
# Forward propagate through mean encoder and std encoders
μ = encoder_μ(x)
logvar = encoder_logvar(x)
# Apply reparameterisation trick to sample latent
z = μ + randn(Float32, size(logvar)) .* exp.(0.5f0 * logvar)
# Reconstruct from latent sample
x̂ = decoder(z)
return clamp.(x̂, 0 ,1)
end
function decoder_loss_function(z_variable, decoder, y_noisy)
return 0.5*norm(y_noisy - decoder(z_variable))^2
end
# TO DO: rewrite as iterator
function gradient_descent(loss_function, z0, decoder, y_noisy, maxit, stepsize_numerator)
zk = z0
for t in 0:maxit
loss_k = loss_function(zk, decoder, y_noisy)
#println("at iteration $t loss is $loss_k \n")
#resize_and_save_single_MNIST_image(clamp.(decoder(zk), 0.0, 1.0), "./scripts/temp/run_denoising", "during_GD_$t")
zk .= zk - (stepsize_numerator/sqrt(t+1))*Zygote.gradient(z -> loss_function(z, decoder, y_noisy), zk)[1]
end
return zk
end
struct gradient_descent_state
z_curr
loss_value
gradient
end
mutable struct denoising_problem
z # optimization variable
b # fit D(z) to observed image vector b
loss_function # z -> 1/2||b - D(z)||_{2}^2
loss_value # 1/2||b - D(z_curr)||_{2}^2
decoder # D
end
function decoder_denoiser!(x,y,encoder_μ, encoder_logvar, decoder,num_iter)
#TO DO: load model outside of denoiser otherwise it's slow
#model_epoch_number = 20
#model_dir = "./src/utilities/saved_models/MNIST"
#encoder_μ, encoder_logvar, decoder = load_model(model_dir,model_epoch_number)
#resize_and_save_single_MNIST_image(y_curr,"./scripts/temp/denoising_decoder","before_denoising")
loss_function = decoder_loss_function
# as an initial point for gradient descent, take the current iterate (y) and apply encoder
μ = encoder_μ(y)
logvar = encoder_logvar(y)
z0 = μ + randn(Float32, size(logvar)) .* exp.(0.5f0 * logvar)
# num_iter = 20 #number of iterations of gradient descent
z_out = gradient_descent(loss_function,z0,decoder,y,num_iter,0.05)
x .= decoder(z_out)
end
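# Hypothetical usage sketch (added for illustration; the model directory, epoch number and image
# size are assumptions echoing the commented-out defaults above, not tested code):
# encoder_μ, encoder_logvar, decoder = load_model("./src/utilities/saved_models/MNIST", 20)
# y = rand(Float32, 784) # stand-in for a noisy flattened 28x28 image
# x = similar(y)
# decoder_denoiser!(x, y, encoder_μ, encoder_logvar, decoder, 20)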
|
classdef PTKCoordinateSystem
% PTKCoordinateSystem. Legacy support class for backwards compatibility. Replaced by MimCoordinateSystem
%
% Licence
% -------
% Part of the TD MIM Toolkit. https://github.com/tomdoel
% Author: Tom Doel, 2013. www.tomdoel.com
% Distributed under the MIT licence. Please see website for details.
%
properties (Constant)
PTK = MimCoordinateSystem.PTK; % Coordinates in mm relative to the top-left corner of the image volume
Dicom = MimCoordinateSystem.Dicom % Coordinates in mm relative to the scanner origin
DicomUntranslated = MimCoordinateSystem.DicomUntranslated % Origin in the centre of the first voxel of the image
end
end
|
module Numeral.Natural.Oper.Modulo.Proofs.Elim where
import Lvl
open import Data.Boolean.Stmt
open import Logic.Propositional
open import Numeral.Natural
open import Numeral.Natural.Inductions
open import Numeral.Natural.Oper
open import Numeral.Natural.Oper.Comparisons
open import Numeral.Natural.Oper.DivMod.Proofs
open import Numeral.Natural.Oper.FlooredDivision
open import Numeral.Natural.Oper.Modulo
open import Numeral.Natural.Oper.Modulo.Proofs
open import Numeral.Natural.Oper.Proofs.Order
open import Numeral.Natural.Relation.Order
open import Numeral.Natural.Relation.Order.Proofs
open import Relator.Equals
open import Relator.Equals.Proofs
open import Structure.Relator
open import Structure.Relator.Ordering
open import Structure.Relator.Properties
open import Type
mod-elim : ∀{ℓ} → (P : {ℕ} → ℕ → Type{ℓ}) → ∀{b} ⦃ _ : IsTrue(positive?(b)) ⦄ → (∀{a n} → (a < b) → P{a + (n ⋅ b)}(a)) → (∀{a} → P{a}(a mod b))
mod-elim P {𝐒 b} proof {a} with [<][≥]-dichotomy {a}{𝐒 b}
... | [∨]-introₗ lt = substitute₂(\x y → P{x}(y))
(reflexivity(_≡_))
(symmetry(_≡_) (mod-lesser-than-modulus ⦃ [≤]-without-[𝐒] lt ⦄))
(proof{a}{0} lt)
... | [∨]-introᵣ ge = substitute₂(\x y → P{x}(y))
([↔]-to-[→] ([−₀][+]-nullify2ᵣ {(a ⌊/⌋ 𝐒(b)) ⋅ 𝐒(b)}{a}) (subtransitivityᵣ(_≤_)(_≡_) ([≤]-of-[+]ₗ {(a ⌊/⌋ 𝐒(b)) ⋅ 𝐒(b)}{a mod 𝐒(b)}) ([⌊/⌋][mod]-is-division-with-remainder {a}{b})))
(symmetry(_≡_) ([⌊/⌋][⋅]-inverseOperatorᵣ-error {a}{b}))
(proof{a −₀ ((a ⌊/⌋ 𝐒(b)) ⋅ 𝐒(b))}{a ⌊/⌋ 𝐒(b)} (subtransitivityₗ(_<_)(_≡_) (symmetry(_≡_) ([⌊/⌋][⋅]-inverseOperatorᵣ-error {a}{b})) (mod-maxᵣ{a}{𝐒 b})))
|
infix:50 " ≅ " => HEq
theorem ex1 {α : Sort u} {a b : α} (h : a ≅ b) : a = b :=
match h with
| HEq.refl _ => rfl
theorem ex2 {α : Sort u2} {a : α} {motive : {β : Sort u2} → β → Sort u1} (m : motive a) {β : Sort u2} {b : β} (h : a ≅ b) : motive b :=
match h, m with
| HEq.refl _, m => m
theorem ex3 {α : Sort u} {a : α} {p : α → Sort v} {b : α} (h₁ : a ≅ b) (h₂ : p a) : p b :=
match h₁, h₂ with
| HEq.refl _, h₂ => h₂
theorem ex4 {α β : Sort u} {a : α} {b : β} (h : a ≅ b) : b ≅ a :=
match h with
| HEq.refl _ => HEq.refl _
theorem ex5 {α : Sort u} {a a' : α} (h : a = a') : a ≅ a' :=
match h with
| rfl => HEq.refl _
theorem ex6 {α β : Sort u} (h : α = β) (a : α) : cast h a ≅ a :=
match h with
| rfl => HEq.refl _
theorem ex7 {α β σ : Sort u} {a : α} {b : β} {c : σ} (h₁ : a ≅ b) (h₂ : b ≅ c) : a ≅ c :=
match h₁, h₂ with
| HEq.refl _, HEq.refl _ => HEq.refl _
|
-- WARNING: This file was generated automatically by Vehicle
-- and should not be modified manually!
-- Metadata
-- - Agda version: 2.6.2
-- - AISEC version: 0.1.0.1
-- - Time generated: ???
{-# OPTIONS --allow-exec #-}
open import Vehicle
open import Vehicle.Data.Tensor
open import Data.Rational as ℚ using (ℚ)
open import Data.List
open import Relation.Binary.PropositionalEquality
module autoencoderError-temp-output where
postulate encode : Tensor ℚ (5 ∷ []) → Tensor ℚ (2 ∷ [])
postulate decode : Tensor ℚ (2 ∷ []) → Tensor ℚ (5 ∷ [])
abstract
identity : ∀ (x : Tensor ℚ (5 ∷ [])) → decode (encode x) ≡ x
identity = checkSpecification record
{ proofCache = "/home/matthew/Code/AISEC/vehicle/proofcache.vclp"
} |
VendState : Type
VendState = (Nat, Nat)
data Input = COIN
| VEND
| CHANGE
| REFILL Nat
data MachineCmd : Type ->
VendState ->
VendState ->
Type where
InsertCoin : MachineCmd () (pounds, chocs) (S pounds, chocs)
Vend : MachineCmd () (S pounds, S chocs) (pounds, chocs)
GetCoins : MachineCmd () (pounds, chocs) (Z, chocs)
Refill : (bars : Nat) -> MachineCmd () (Z, chocs) (Z, bars + chocs)
Display : String -> MachineCmd () state state
GetInput : MachineCmd (Maybe Input) state state
Pure : ty -> MachineCmd ty state state
(>>=) : MachineCmd a state1 state2 ->
(a -> MachineCmd b state2 state3) ->
MachineCmd b state1 state3
data MachineIO : VendState -> Type where
Do : MachineCmd a state1 state2 ->
(a -> Inf (MachineIO state2)) -> MachineIO state1
namespace MachineDo
(>>=) : MachineCmd a state1 state2 -> (a -> Inf (MachineIO state2)) ->
MachineIO state1
(>>=) = Do
mutual
vend : MachineIO (pounds, chocs)
vend {pounds = Z} = do Display "Insert a coin"
machineLoop
vend {chocs = Z} = do Display "Out of stock"
machineLoop
vend {pounds = (S k)} {chocs = (S j)} = do Vend
Display "Enjoy!"
machineLoop
refill : (num : Nat) -> MachineIO (pounds, chocs)
refill {pounds = Z} num = do Refill num
machineLoop
refill _ = do Display "Can't refill: Coins in machine"
machineLoop
machineLoop : MachineIO (pounds, chocs)
machineLoop =
do Just x <- GetInput
| Nothing => do Display "Invalid input"
machineLoop
case x of
COIN => do InsertCoin
machineLoop
VEND => vend
CHANGE => do GetCoins
Display "Change returned"
machineLoop
REFILL num => refill num
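-- Illustrative well-typed command sequence (comment-only sketch added here): starting with no
-- coins and one chocolate bar, inserting a coin and then vending leaves the machine empty.
-- It would be sequenced with the MachineCmd (>>=) above:
-- testVend : MachineCmd () (0, 1) (0, 0)
-- testVend = do InsertCoin
-- Vend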
|
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE QuasiQuotes #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE TemplateHaskell #-}
module Myelin.PyNN.PyNN where
import Prelude hiding (id)
import Control.Monad.Except
import Control.Monad.State.Lazy
import Control.Lens
import Data.Aeson
import Data.List (intercalate)
import qualified Data.Map.Strict as Map
import Data.String.Interpolate
import qualified Data.ByteString.Lazy.Char8 as BS
import Numeric.LinearAlgebra
import Text.Regex as Regex
import Myelin.Model
import Myelin.Neuron
import Myelin.SNN
data PyNNPreample
= PyNNPreample
{ configuration :: String
}
deriving (Eq, Show)
data PyNNModel = PyNNModel
{ _declarations :: [String]
, _layers :: Map.Map String String -- ^ Layers between the nodes
, _populations :: Map.Map String String -- ^ Nodes indexed by their Python variables
}
deriving (Eq, Show)
makeLenses ''PyNNModel
-- | An empty initial model
emptyPyNNModel :: PyNNModel
emptyPyNNModel = PyNNModel [] Map.empty Map.empty
type PyNNState = ExceptT String (State PyNNModel)
-- | Attempts to translate a network into a Python code string
translate :: Task -> PyNNPreample -> Either String String
translate (Task target network simulationTime) preample =
let (result, (PyNNModel {..})) = runState (runExceptT $ translate' network) emptyPyNNModel
in case result of
Left translationError -> Left $ "Failed to translate SNN model to Python: " ++ translationError
Right _ ->
let populationLines = unlines $ Map.elems _populations
layerLines = unlines $ Map.elems _layers
declarationLines = unlines $ _declarations
preampleLines = pyNNPreample target preample
in Right $ concat [preampleLines, populationLines, layerLines, declarationLines, training]
where training = [i|
optimiser = v.GradientDescentOptimiser(0.1, simulation_time=#{simulationTime})
if __name__ == "__main__":
v.Main(model).train(optimiser)
|]
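-- Hypothetical usage sketch (comment only; the network value, target and preamble text are
-- assumptions, not taken from this module):
-- case translate (Task SpiNNaker myNetwork 50.0) (PyNNPreample "pynn.setup(timestep=1.0)") of
-- Left err -> putStrLn ("translation failed: " ++ err)
-- Right code -> writeFile "model.py" code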
translate' :: Network -> PyNNState ()
translate' Network {..} = do
_ <- mapM pyNNNode _inputs
_ <- mapM pyNNNode _nodes
outputStrings <- mapM pyNNNode _outputs
_ <- mapM pyNNEdge _edges
layers <- fmap _layers get
let decodeLayer = "l_decode = v.Decode(" ++ populationReference ((last $ _outputs) ^.id) ++ ")"
let layersString = intercalate ", " $ Map.keys layers
let model = "model = v.Model(" ++ layersString ++ ", l_decode)"
declarations <>= [decodeLayer, model]
-- | The Python PyNN preamble for import statements
pyNNPreample :: ExecutionTarget -> PyNNPreample -> String
pyNNPreample target (PyNNPreample {..}) = let
pyNNTarget =
case target of
Nest { .. } -> "nest"
SpiNNaker -> "spiNNaker"
in [i|import numpy as np
import volrpynn.#{pyNNTarget} as v
import pyNN.#{pyNNTarget} as pynn
#{configuration}
|]
pyNNNode :: Node -> PyNNState String
pyNNNode node = case node of
Population {..} -> do
pyNNPopulation _neuronType _numNeurons _id
_ -> throwError $ "Unknown node type " ++ (show node)
-- SpikeSourceArray { .. } ->
-- SpikeSourcePoisson { .. } -> do
-- | Statefully encodes a pyNNPopulation, returning the variable name for that
-- population
pyNNPopulation :: NeuronType -> Integer -> Int -> PyNNState String
pyNNPopulation tpe numNeurons id =
case pyNNPopulationString tpe numNeurons id of
Right codeString -> populationVariable id codeString
Left errorString -> throwError errorString
typeRegex = Regex.mkRegex "\"type\":\"\\w*\","
-- | Encodes a PyNN population as a string without state
pyNNPopulationString :: NeuronType -> Integer -> Int -> Either String String
pyNNPopulationString tpe numNeurons id =
let params = BS.unpack $ encode tpe
stripType = Regex.subRegex typeRegex params ""
in case tpe of
IFCondExp { .. } -> Right $ "pynn.Population(" ++ (show numNeurons) ++ ", pynn.IF_cond_exp(**" ++ stripType ++ "))"
_ -> Left $ "Unknown neuron type " ++ (show tpe)
-- | Creates and stores an Edge as a PyNN projection
pyNNEdge :: Edge -> PyNNState String
pyNNEdge projection = do
(PyNNProjection {..}, inputs, outputs) <- pyNNProjection projection
let popRef1 = concatReferences inputs
let popRef2 = concatReferences outputs
let code = "v." ++ layerType ++ "(" ++ popRef1 ++", " ++ popRef2 ++ (weightString weight) ++ (biasString bias) ++ ")"
layerVariable code
where
concatReferences :: [Node] -> String
concatReferences nodes =
case length nodes of
1 -> populationReference (nodes !! 0 ^. id)
_ -> let strings = intercalate ", " $ map populationReference $ map (\node -> node ^. id) nodes
in "(" ++ strings ++ ")"
weightString (Just weight) = ", weights=" ++ weight
weightString Nothing = ""
biasString (Just bias) = ", biases=" ++ bias
biasString Nothing = ""
data PyNNProjection = PyNNProjection
{ bias :: Maybe String
, layerType :: String
, weight :: Maybe String
}
-- | Create a PyNN Connector definition
pyNNProjection :: Edge-> PyNNState (PyNNProjection, [Node], [Node])
pyNNProjection (DenseProjection (Static Excitatory (AllToAll bias weight)) nodeIn nodeOut) = do
b <- pyNNBias bias (_numNeurons nodeOut)
w <- pyNNWeight weight (_numNeurons nodeIn) (_numNeurons nodeOut)
return $ (PyNNProjection (Just b) "Dense" (Just w), [nodeIn], [nodeOut])
pyNNProjection (MergeProjection _ (nodeIn1, nodeIn2) nodeOut) =
-- Merge layers carry no weights or biases, so the input sizes need no further validation here
return (PyNNProjection Nothing "Merge" Nothing, [nodeIn1, nodeIn2], [nodeOut])
pyNNProjection (ReplicateProjection (Static Excitatory (AllToAll bias weight)) nodeIn (nodeOut1, nodeOut2)) = do
let inputSize = _numNeurons nodeIn
let output1Size = _numNeurons nodeOut1
let output2Size = _numNeurons nodeOut2
outputSize <- if (output1Size == output2Size)
then return $ output1Size
else throwError $ "Replicate connections require two outputs of the same size, found " ++ (show (output1Size, output2Size))
w1 <- pyNNWeight weight inputSize output1Size
w2 <- pyNNWeight weight inputSize output1Size
let w = [i|(#{w1}, #{w2})|]
b <- pyNNBias bias output1Size
return $ (PyNNProjection (Just b) "Replicate" (Just w), [nodeIn], [nodeOut1, nodeOut2])
pyNNProjection p = throwError $ "Unknown projection effect " ++ (show p)
-- | Converts biases to a Python bias setting
pyNNBias :: Biases -> Integer -> PyNNState String
pyNNBias (BiasGenerator (Constant n)) _ = return (show n)
pyNNBias (BiasGenerator (GaussianRandom mean scale)) toSize
= return [i|np.random.normal(#{mean}, #{scale}, (#{toSize}))|]
pyNNBias (Biases list) toSize = return [i|np.array(#{show list})|]
-- | Converts weights to a PyNN weight setting
pyNNWeight :: Weights -> Integer -> Integer -> PyNNState String
pyNNWeight (WeightGenerator (Constant n)) _ _ = return (show n)
pyNNWeight (WeightGenerator (GaussianRandom mean scale)) fromSize toSize
= return [i|np.random.normal(#{mean}, #{scale}, (#{fromSize}, #{toSize}))|]
pyNNWeight (Weights matrix) fromSize toSize =
let (rows, columns) = join bimap toInteger $ size matrix
str = show $ toLists matrix
in if (rows == toSize && columns == fromSize)
then return [i|np.array(#{str})|]
else throwError $ [i|Expected sizes (#{toSize},#{fromSize}), but got (#{rows},#{columns})|]
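-- For example (comment sketch, values are illustrative): with 3 input and 2 output neurons an
-- explicit weight matrix must have shape (2, 3), while `WeightGenerator (Constant 1.0)` simply
-- renders the literal "1.0".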
populationReference :: Int -> String
populationReference nodeId = "p" ++ (show nodeId)
populationVariable :: Int -> String -> PyNNState String
populationVariable popId code = do
let name = "p" ++ (show popId)
let variable = pythonVariable name code
populations %= (Map.insert name variable)
return name
layerVariable :: String -> PyNNState String
layerVariable code = do
projections <- use layers
let name = "layer" ++ (show $ Map.size projections)
let variable = pythonVariable name code
layers %= (Map.insert name variable)
return name
pythonVariable :: String -> String -> String
pythonVariable name value = name ++ " = " ++ value
|
[STATEMENT]
lemma quot_Mem: "\<guillemotleft>x IN y\<guillemotright> = HPair (HTuple 0) (HPair (\<guillemotleft>x\<guillemotright>) (\<guillemotleft>y\<guillemotright>))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<guillemotleft>x IN y\<guillemotright> = HPair (HTuple 0) (HPair \<guillemotleft>x\<guillemotright> \<guillemotleft>y\<guillemotright>)
[PROOF STEP]
by (simp add: quot_fm_def quot_tm_def) |
lemma coeffs_poly_cutoff [code abstract]: "coeffs (poly_cutoff n p) = strip_while ((=) 0) (take n (coeffs p))" |
/-
Copyright (c) 2020 Johan Commelin. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Johan Commelin
-/
import data.mv_polynomial.rename
/-!
# `comap` operation on `mv_polynomial`
This file defines the `comap` function on `mv_polynomial`.
`mv_polynomial.comap` is a low-tech example of a map of "algebraic varieties," modulo the fact that
`mathlib` does not yet define varieties.
## Notation
As in other polynomial files, we typically use the notation:
+ `σ : Type*` (indexing the variables)
+ `R : Type*` `[comm_semiring R]` (the coefficients)
-/
namespace mv_polynomial
variables {σ : Type*} {τ : Type*} {υ : Type*} {R : Type*} [comm_semiring R]
/--
Given an algebra hom `f : mv_polynomial σ R →ₐ[R] mv_polynomial τ R`
and a variable evaluation `v : τ → R`,
`comap f v` produces a variable evaluation `σ → R`.
-/
noncomputable def comap (f : mv_polynomial σ R →ₐ[R] mv_polynomial τ R) :
(τ → R) → (σ → R) :=
λ x i, aeval x (f (X i))
@[simp] lemma comap_apply (f : mv_polynomial σ R →ₐ[R] mv_polynomial τ R) (x : τ → R) (i : σ) :
comap f x i = aeval x (f (X i)) := rfl
@[simp] lemma comap_id_apply (x : σ → R) : comap (alg_hom.id R (mv_polynomial σ R)) x = x :=
by { funext i, simp only [comap, alg_hom.id_apply, id.def, aeval_X], }
variables (σ R)
lemma comap_id : comap (alg_hom.id R (mv_polynomial σ R)) = id :=
by { funext x, exact comap_id_apply x }
variables {σ R}
lemma comap_comp_apply (f : mv_polynomial σ R →ₐ[R] mv_polynomial τ R)
(g : mv_polynomial τ R →ₐ[R] mv_polynomial υ R) (x : υ → R) :
comap (g.comp f) x = comap f (comap g x) :=
begin
funext i,
transitivity (aeval x (aeval (λ i, g (X i)) (f (X i)))),
{ apply eval₂_hom_congr rfl rfl,
rw alg_hom.comp_apply,
suffices : g = aeval (λ i, g (X i)), { rw ← this, },
exact aeval_unique g },
{ simp only [comap, aeval_eq_eval₂_hom, map_eval₂_hom, alg_hom.comp_apply],
refine eval₂_hom_congr _ rfl rfl,
ext r, apply aeval_C },
end
lemma comap_comp (f : mv_polynomial σ R →ₐ[R] mv_polynomial τ R)
(g : mv_polynomial τ R →ₐ[R] mv_polynomial υ R) :
comap (g.comp f) = comap f ∘ comap g :=
by { funext x, exact comap_comp_apply _ _ _ }
lemma comap_eq_id_of_eq_id (f : mv_polynomial σ R →ₐ[R] mv_polynomial σ R)
(hf : ∀ φ, f φ = φ) (x : σ → R) :
comap f x = x :=
by { convert comap_id_apply x, ext1 φ, rw [hf, alg_hom.id_apply] }
lemma comap_rename (f : σ → τ) (x : τ → R) : comap (rename f) x = x ∘ f :=
by { ext i, simp only [rename_X, comap_apply, aeval_X] }
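-- Illustrative example (added sketch, not a mathlib declaration): along a renaming of variables,
-- `comap` is just precomposition of the valuation with `f`.
example (f : σ → τ) (x : τ → R) (i : σ) : comap (rename f) x i = x (f i) :=
by rw [comap_rename]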
/--
If two polynomial types over the same coefficient ring `R` are equivalent,
there is a bijection between the types of functions from their variable types to `R`.
-/
noncomputable def comap_equiv (f : mv_polynomial σ R ≃ₐ[R] mv_polynomial τ R) :
(τ → R) ≃ (σ → R) :=
{ to_fun := comap f,
inv_fun := comap f.symm,
left_inv := by { intro x, rw [← comap_comp_apply], apply comap_eq_id_of_eq_id, intro,
simp only [alg_hom.id_apply, alg_equiv.comp_symm], },
right_inv := by { intro x, rw [← comap_comp_apply], apply comap_eq_id_of_eq_id, intro,
simp only [alg_hom.id_apply, alg_equiv.symm_comp] }, }
@[simp] lemma comap_equiv_coe (f : mv_polynomial σ R ≃ₐ[R] mv_polynomial τ R) :
(comap_equiv f : (τ → R) → (σ → R)) = comap f := rfl
@[simp] lemma comap_equiv_symm_coe (f : mv_polynomial σ R ≃ₐ[R] mv_polynomial τ R) :
((comap_equiv f).symm : (σ → R) → (τ → R)) = comap f.symm := rfl
end mv_polynomial
|
(****************************************************************************)
(* Copyright 2020 The Project Oak Authors *)
(* *)
(* Licensed under the Apache License, Version 2.0 (the "License") *)
(* you may not use this file except in compliance with the License. *)
(* You may obtain a copy of the License at *)
(* *)
(* http://www.apache.org/licenses/LICENSE-2.0 *)
(* *)
(* Unless required by applicable law or agreed to in writing, software *)
(* distributed under the License is distributed on an "AS IS" BASIS, *)
(* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. *)
(* See the License for the specific language governing permissions and *)
(* limitations under the License. *)
(****************************************************************************)
Require Import Coq.Lists.List Coq.Arith.Arith.
Require Import Cava.Util.List.
Require Import Psatz.
Section Spec.
Context {byte : Type}.
Local Notation word := (list byte) (only parsing).
Local Notation state := (list word) (only parsing).
Definition shift_row_once (w : word) :=
match w with
| nil => nil
| h :: t => t ++ h :: nil
end.
Fixpoint shift_row (w : word) (r : nat) :=
match r with
| O => w
| S r' => shift_row (shift_row_once w) r'
end.
Definition shift_rows_start (st : state) (n : nat) :=
map2 shift_row st (List.seq n (length st)).
Definition shift_rows (st : state) :=
shift_rows_start st 0.
Definition inv_shift_row (w : word) (r : nat) :=
rev (shift_row (rev w) r).
Definition inv_shift_rows (st : state) :=
map2 inv_shift_row st (List.seq 0 (length st)).
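(* Illustration (comment added for orientation; the concrete values mirror the BasicTests section
at the end of the file): over nat "bytes", shift_row [0;1;2;3] 1 = [1;2;3;0], and
inv_shift_row [1;2;3;0] 1 recovers [0;1;2;3]. *)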
Section Properties.
Lemma shift_row_nil : forall shift,
shift_row nil shift = nil.
Proof.
induction shift; [reflexivity|].
simpl.
rewrite IHshift.
reflexivity.
Qed.
Lemma inv_shift_row_hd w h : inv_shift_row (w ++ h :: nil) 1 = h :: w.
Proof.
unfold inv_shift_row.
rewrite rev_app_distr.
simpl.
rewrite rev_app_distr.
simpl.
rewrite rev_involutive.
reflexivity.
Qed.
Lemma inv_shift_row_O w : inv_shift_row w 0 = w.
Proof.
unfold inv_shift_row.
simpl.
apply rev_involutive.
Qed.
Lemma shift_row_plus x a b : shift_row x (a + b) = shift_row (shift_row x a) b.
generalize dependent x.
generalize dependent a.
induction a; intros.
{ simpl.
reflexivity. }
{ simpl.
rewrite IHa.
reflexivity. }
Qed.
Lemma inv_shift_row_plus x a b : inv_shift_row x (a + b) = inv_shift_row (inv_shift_row x a) b.
unfold inv_shift_row.
rewrite rev_involutive.
rewrite shift_row_plus.
reflexivity.
Qed.
Lemma inverse_shift_row_1 a : inv_shift_row (shift_row a 1) 1 = a.
induction a; [reflexivity|].
simpl.
rewrite inv_shift_row_hd.
reflexivity.
Qed.
Lemma inverse_shift_row a n : inv_shift_row (shift_row a n) n = a.
Proof.
generalize dependent a.
induction n; intros.
{ simpl.
rewrite inv_shift_row_O.
reflexivity. }
{ replace (S n) with (n+1) by lia.
rewrite shift_row_plus.
rewrite Nat.add_comm.
rewrite inv_shift_row_plus.
rewrite inverse_shift_row_1.
apply IHn. }
Qed.
Theorem inverse_shift_rows (x : state) : inv_shift_rows (shift_rows x) = x.
Proof.
unfold shift_rows.
unfold inv_shift_rows.
unfold shift_rows_start.
rewrite map2_length.
rewrite seq_length.
rewrite Min.min_idempotent.
rewrite map2_map2_r.
erewrite map2_ext by (intros; rewrite inverse_shift_row; reflexivity).
refine (map2_id_l _ _ _).
rewrite seq_length.
lia.
Qed.
Theorem shift_rows_length_outer (x : state) :
length (shift_rows x) = length x.
Proof.
cbv [shift_rows shift_rows_start].
autorewrite with push_length.
lia.
Qed.
Lemma shift_row_once_length (row : list byte) :
length (shift_row_once row) = length row.
Proof.
cbv [shift_row_once]. destruct row; length_hammer.
Qed.
Lemma shift_row_length (row : list byte) n :
length (shift_row row n) = length row.
Proof.
revert row; induction n; intros; [ reflexivity | ].
cbn [shift_row]. rewrite IHn, shift_row_once_length.
reflexivity.
Qed.
Theorem shift_rows_length_inner (x : state) n :
(forall r, In r x -> length r = n) ->
forall r, In r (shift_rows x) -> length r = n.
Proof.
cbv [shift_rows shift_rows_start]; intros Hx ? Hin.
apply in_map2_impl in Hin. destruct Hin as [? [? ?]].
intuition; subst.
rewrite shift_row_length. auto.
Qed.
End Properties.
End Spec.
#[export] Hint Resolve shift_rows_length_inner shift_rows_length_outer : length.
Section BasicTests.
Import ListNotations.
(* use nat as the "byte" type so shifting is easy to read *)
Definition nat_rows := [[0; 1; 2; 3]; [4; 5; 6; 7]; [8; 9; 10; 11]; [12; 13; 14; 15]].
Definition shifted_nat_rows := [[0; 1; 2; 3]; [5; 6; 7; 4]; [10; 11; 8; 9]; [15; 12; 13; 14]].
Goal (shift_rows nat_rows = shifted_nat_rows).
Proof. vm_compute. reflexivity. Qed.
Goal (inv_shift_rows shifted_nat_rows = nat_rows).
Proof. vm_compute. reflexivity. Qed.
Goal (inv_shift_rows (shift_rows nat_rows) = nat_rows).
Proof. vm_compute. reflexivity. Qed.
(* and do 3x4 to make sure col/row indices aren't ever switched, which can complicate proofs *)
Definition nat_rows' := [[0; 1; 2; 3]; [4; 5; 6; 7]; [8; 9; 10; 11]].
Definition shifted_nat_rows' := [[0; 1; 2; 3]; [5; 6; 7; 4]; [10; 11; 8; 9]].
Goal (shift_rows nat_rows' = shifted_nat_rows').
Proof. vm_compute. reflexivity. Qed.
Goal (inv_shift_rows shifted_nat_rows' = nat_rows').
Proof. vm_compute. reflexivity. Qed.
(* and 4x3 as well *)
Definition nat_rows'' := [[0; 1; 2]; [4; 5; 6]; [8; 9; 10]; [12; 13; 14]].
Definition shifted_nat_rows'' := [[0; 1; 2]; [5; 6; 4]; [10; 8; 9]; [12; 13; 14]].
Goal (shift_rows nat_rows'' = shifted_nat_rows'').
Proof. vm_compute. reflexivity. Qed.
Goal (inv_shift_rows shifted_nat_rows'' = nat_rows'').
Proof. vm_compute. reflexivity. Qed.
End BasicTests.
|
Formal statement is: lemma holomorphic_on_If_Un [holomorphic_intros]: assumes "f holomorphic_on A" "g holomorphic_on B" "open A" "open B" assumes "\<And>z. z \<in> A \<Longrightarrow> z \<in> B \<Longrightarrow> f z = g z" shows "(\<lambda>z. if z \<in> A then f z else g z) holomorphic_on (A \<union> B)" (is "?h holomorphic_on _") Informal statement is: If $f$ and $g$ are holomorphic on open sets $A$ and $B$, respectively, and $f$ and $g$ agree on $A \cap B$, then the function $h$ defined by $h(z) = f(z)$ if $z \in A$ and $h(z) = g(z)$ if $z \in B$ is holomorphic on $A \cup B$.
(** This file contains all of Coq's default logical types and a few basic things
related to them. Because we pass -nois to the compiler, we need to explicitly
export these things to be able to use them. However, I don't like a lot of the
names that Coq made for them, so I give a bunch of new names here. *)
Require Coq.Init.Logic.
Require Export Coq.Init.Ltac.
Require Export Coq.Init.Notations.
(* Even though we never use it, not requiring this makes things break? *)
Require Utf8.
Require Export notations.
Set Implicit Arguments.
Notation "x → y" := (forall (_ : x), y)
(at level 99, y at level 200, right associativity): type_scope.
Notation "'equal'" := Coq.Init.Logic.eq.
Export Coq.Init.Logic (ex).
Export Coq.Init.Logic (ex_intro).
Export Coq.Init.Logic (iff).
Export Coq.Init.Logic (not).
Export Coq.Init.Logic (inhabits).
Export Coq.Init.Logic (inhabited).
Export Coq.Init.Logic (all).
Export Coq.Init.Logic (f_equal).
Export Coq.Init.Logic (f_equal2).
Export Coq.Init.Logic (f_equal3).
Export Coq.Init.Logic (f_equal4).
Export Coq.Init.Logic (True).
Definition true := Coq.Init.Logic.I.
Export Coq.Init.Logic (False).
Export Coq.Init.Logic (False_rect).
Export Coq.Init.Logic (and).
Notation "'make_and'" := Coq.Init.Logic.conj.
Notation "A ∧ B" := (and A B).
Section Conjunction.
Variables A B : Prop.
Theorem land : A ∧ B → A.
Proof.
destruct 1; trivial.
Qed.
Theorem rand : A ∧ B → B.
Proof.
destruct 1; trivial.
Qed.
End Conjunction.
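(* Example usage (comment-only sketch added here): given h : A ∧ B, the projections above recover
each side, i.e. land h : A and rand h : B. *)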
Export Coq.Init.Logic (or).
Notation "'make_lor'" := Coq.Init.Logic.or_introl.
Notation "'make_ror'" := Coq.Init.Logic.or_intror.
Notation "A ∨ B" := (or A B) : type_scope.
Notation "A ↔ B" := (iff A B) : type_scope.
Notation "¬ x" := (not x) : type_scope.
Notation "x = y" := (equal x y) : type_scope.
Notation "x ≠ y" := (¬ (x = y)) : type_scope.
Notation "'exists' x .. y , p" := (ex (fun x => .. (ex (fun y => p)) ..))
(at level 200, x binder, right associativity,
format "'[' 'exists' '/ ' x .. y , '/ ' p ']'")
: type_scope.
(* Logic *)
Notation "∀ x .. y , P" := (forall x, .. (forall y, P) ..)
(at level 200, x binder, y binder, right associativity,
format "'[ ' '[ ' ∀ x .. y ']' , '/' P ']'") : type_scope.
Notation "∃ x .. y , P" := (exists x, .. (exists y, P) ..)
(at level 200, x binder, y binder, right associativity,
format "'[ ' '[ ' ∃ x .. y ']' , '/' P ']'") : type_scope.
(* Abstraction *)
Notation "'λ' x .. y , t" := (fun x => .. (fun y => t) ..)
(at level 200, x binder, y binder, right associativity,
format "'[ ' '[ ' 'λ' x .. y ']' , '/' t ']'").
#[universes(template)]
Inductive ex_type (A : Type) (P : A → Prop) : Type :=
ex_type_constr : ∀ x : A, P x → ex_type P.
Inductive strong_or (A B : Prop) : Set :=
| strong_or_left : A → {A} + {B}
| strong_or_right : B → {A} + {B}
where "{ A } + { B }" := (strong_or A B).
Arguments strong_or_left {A B} _, [A] B _.
Arguments strong_or_right {A B} _ , A [B] _.
#[universes(template)]
Inductive semi_or (A:Type) (B:Prop) : Type :=
| semi_or_left : A → A + {B}
| semi_or_right : B → A + {B}
where "A + { B }" := (semi_or A B).
Arguments semi_or_left {A B} _ , [A] B _.
Arguments semi_or_right {A B} _ , A [B] _.
|
variable <- "안녕 세상아"
print (variable)
|
State Before: ι : Type u_1
I✝ J✝ : Box ι
x y : ι → ℝ
I J : WithBot (Box ι)
⊢ ↑I = ↑J ↔ I = J State After: no goals Tactic: simp only [Subset.antisymm_iff, ← le_antisymm_iff, withBotCoe_subset_iff] |
/-
Copyright (c) 2017 Johannes Hölzl. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Johannes Hölzl, Mario Carneiro, Floris van Doorn
! This file was ported from Lean 3 source module set_theory.cardinal.basic
! leanprover-community/mathlib commit 4c19a16e4b705bf135cf9a80ac18fcc99c438514
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathlib.Data.Fintype.BigOperators
import Mathlib.Data.Finsupp.Defs
import Mathlib.Data.Nat.PartENat
import Mathlib.Data.Set.Countable
import Mathlib.Logic.Small.Basic
import Mathlib.Order.ConditionallyCompleteLattice.Basic
import Mathlib.Order.SuccPred.Basic
import Mathlib.SetTheory.Cardinal.SchroederBernstein
import Mathlib.Tactic.Positivity
/-!
# Cardinal Numbers
We define cardinal numbers as a quotient of types under the equivalence relation of equinumerity.
## Main definitions
* `Cardinal` the type of cardinal numbers (in a given universe).
* `Cardinal.mk α` or `#α` is the cardinality of `α`. The notation `#` lives in the locale
`Cardinal`.
* Addition `c₁ + c₂` is defined by `Cardinal.add_def α β : #α + #β = #(α ⊕ β)`.
* Multiplication `c₁ * c₂` is defined by `Cardinal.mul_def : #α * #β = #(α × β)`.
* The order `c₁ ≤ c₂` is defined by `Cardinal.le_def α β : #α ≤ #β ↔ Nonempty (α ↪ β)`.
* Exponentiation `c₁ ^ c₂` is defined by `Cardinal.power_def α β : #α ^ #β = #(β → α)`.
* `Cardinal.aleph0` or `ℵ₀` is the cardinality of `ℕ`. This definition is universe polymorphic:
`Cardinal.aleph0.{u} : Cardinal.{u}` (contrast with `ℕ : Type`, which lives in a specific
universe). In some cases the universe level has to be given explicitly.
* `Cardinal.sum` is the sum of an indexed family of cardinals, i.e. the cardinality of the
corresponding sigma type.
* `Cardinal.prod` is the product of an indexed family of cardinals, i.e. the cardinality of the
corresponding pi type.
* `Cardinal.powerlt a b` or `a ^< b` is defined as the supremum of `a ^ c` for `c < b`.
## Main instances
* Cardinals form a `CanonicallyOrderedCommSemiring` with the aforementioned sum and product.
* Cardinals form a `SuccOrder`. Use `Order.succ c` for the smallest cardinal greater than `c`.
* The less than relation on cardinals forms a well-order.
* Cardinals form a `ConditionallyCompleteLinearOrderBot`. Bounded sets for cardinals in universe
`u` are precisely the sets indexed by some type in universe `u`, see
`Cardinal.bddAbove_iff_small`. One can use `supₛ` for the cardinal supremum, and `infₛ` for the
minimum of a set of cardinals.
## Main Statements
* Cantor's theorem: `Cardinal.cantor c : c < 2 ^ c`.
* König's theorem: `Cardinal.sum_lt_prod`
## Implementation notes
* There is a type of cardinal numbers in every universe level:
`Cardinal.{u} : Type (u + 1)` is the quotient of types in `Type u`.
The operation `Cardinal.lift` lifts cardinal numbers to a higher level.
* Cardinal arithmetic specifically for infinite cardinals (like `κ * κ = κ`) is in the file
`SetTheory/CardinalOrdinal.lean`.
* There is an instance `Pow Cardinal`, but this will only fire if Lean already knows that both
the base and the exponent live in the same universe. As a workaround, you can add
```
local infixr:0 "^'" => @Pow.pow Cardinal Cardinal Cardinal.instPowCardinal
```
to a file. This notation will work even if Lean doesn't know yet that the base and the exponent
live in the same universe (but no exponents in other types can be used).
(Porting note: This last point might need to be updated.)
## References
* <https://en.wikipedia.org/wiki/Cardinal_number>
## Tags
cardinal number, cardinal arithmetic, cardinal exponentiation, aleph,
Cantor's theorem, König's theorem, Konig's theorem
-/
open Function Set Order
open BigOperators Classical
noncomputable section
universe u v w
variable {α β : Type u}
/-- The equivalence relation on types given by equivalence (bijective correspondence) of types.
Quotienting by this equivalence relation gives the cardinal numbers.
-/
instance Cardinal.isEquivalent : Setoid (Type u) where
r α β := Nonempty (α ≃ β)
iseqv := ⟨
fun α => ⟨Equiv.refl α⟩,
fun ⟨e⟩ => ⟨e.symm⟩,
fun ⟨e₁⟩ ⟨e₂⟩ => ⟨e₁.trans e₂⟩⟩
#align cardinal.is_equivalent Cardinal.isEquivalent
/-- `Cardinal.{u}` is the type of cardinal numbers in `Type u`,
defined as the quotient of `Type u` by existence of an equivalence
(a bijection with explicit inverse). -/
def Cardinal : Type (u + 1) :=
Quotient Cardinal.isEquivalent
#align cardinal Cardinal
namespace Cardinal
/-- The cardinal number of a type -/
def mk : Type u → Cardinal :=
Quotient.mk'
#align cardinal.mk Cardinal.mk
-- mathport name: cardinal.mk
@[inherit_doc]
scoped prefix:0 "#" => Cardinal.mk
instance canLiftCardinalType : CanLift Cardinal.{u} (Type u) mk fun _ => True :=
⟨fun c _ => Quot.inductionOn c fun α => ⟨α, rfl⟩⟩
#align cardinal.can_lift_cardinal_Type Cardinal.canLiftCardinalType
@[elab_as_elim]
theorem inductionOn {p : Cardinal → Prop} (c : Cardinal) (h : ∀ α, p (#α)) : p c :=
Quotient.inductionOn c h
#align cardinal.induction_on Cardinal.inductionOn
@[elab_as_elim]
theorem inductionOn₂ {p : Cardinal → Cardinal → Prop} (c₁ : Cardinal) (c₂ : Cardinal)
(h : ∀ α β, p (#α) (#β)) : p c₁ c₂ :=
Quotient.inductionOn₂ c₁ c₂ h
#align cardinal.induction_on₂ Cardinal.inductionOn₂
@[elab_as_elim]
theorem inductionOn₃ {p : Cardinal → Cardinal → Cardinal → Prop} (c₁ : Cardinal) (c₂ : Cardinal)
(c₃ : Cardinal) (h : ∀ α β γ, p (#α) (#β) (#γ)) : p c₁ c₂ c₃ :=
Quotient.inductionOn₃ c₁ c₂ c₃ h
#align cardinal.induction_on₃ Cardinal.inductionOn₃
protected theorem eq : (#α) = (#β) ↔ Nonempty (α ≃ β) :=
Quotient.eq'
#align cardinal.eq Cardinal.eq
@[simp]
theorem mk'_def (α : Type u) : @Eq Cardinal ⟦α⟧ (#α) :=
rfl
#align cardinal.mk_def Cardinal.mk'_def
@[simp]
theorem mk_out (c : Cardinal) : (#c.out) = c :=
Quotient.out_eq _
#align cardinal.mk_out Cardinal.mk_out
/-- The representative of the cardinal of a type is equivalent to the original type. -/
def outMkEquiv {α : Type v} : (#α).out ≃ α :=
Nonempty.some <| Cardinal.eq.mp (by simp)
#align cardinal.out_mk_equiv Cardinal.outMkEquiv
theorem mk_congr (e : α ≃ β) : (#α) = (#β) :=
Quot.sound ⟨e⟩
#align cardinal.mk_congr Cardinal.mk_congr
alias mk_congr ← _root_.Equiv.cardinal_eq
#align equiv.cardinal_eq Equiv.cardinal_eq
/-- Lift a function between `Type _`s to a function between `Cardinal`s. -/
def map (f : Type u → Type v) (hf : ∀ α β, α ≃ β → f α ≃ f β) : Cardinal.{u} → Cardinal.{v} :=
Quotient.map f fun α β ⟨e⟩ => ⟨hf α β e⟩
#align cardinal.map Cardinal.map
@[simp]
theorem map_mk (f : Type u → Type v) (hf : ∀ α β, α ≃ β → f α ≃ f β) (α : Type u) :
map f hf (#α) = (#f α) :=
rfl
#align cardinal.map_mk Cardinal.map_mk
/-- Lift a binary operation `Type _ → Type _ → Type _` to a binary operation on `Cardinal`s. -/
def map₂ (f : Type u → Type v → Type w) (hf : ∀ α β γ δ, α ≃ β → γ ≃ δ → f α γ ≃ f β δ) :
Cardinal.{u} → Cardinal.{v} → Cardinal.{w} :=
Quotient.map₂ f fun α β ⟨e₁⟩ γ δ ⟨e₂⟩ => ⟨hf α β γ δ e₁ e₂⟩
#align cardinal.map₂ Cardinal.map₂
/-- The universe lift operation on cardinals. You can specify the universes explicitly with
`lift.{u v} : Cardinal.{v} → Cardinal.{max v u}` -/
def lift (c : Cardinal.{v}) : Cardinal.{max v u} :=
map ULift.{u, v} (fun _ _ e => Equiv.ulift.trans <| e.trans Equiv.ulift.symm) c
#align cardinal.lift Cardinal.lift
@[simp]
theorem mk_uLift (α) : (#ULift.{v, u} α) = lift.{v} (#α) :=
rfl
#align cardinal.mk_ulift Cardinal.mk_uLift
-- Porting note : simpNF is not happy with universe levels, but this is needed as simp lemma
-- further down in this file
/-- `lift.{(max u v) u}` equals `lift.{v u}`. Using `set_option pp.universes true` will make it much
easier to understand what's happening when using this lemma. -/
@[simp, nolint simpNF]
theorem lift_umax : lift.{max u v, u} = lift.{v, u} :=
funext fun a => inductionOn a fun _ => (Equiv.ulift.trans Equiv.ulift.symm).cardinal_eq
#align cardinal.lift_umax Cardinal.lift_umax
-- Porting note : simpNF is not happy with universe levels, but this is needed as simp lemma
-- further down in this file
/-- `lift.{(max v u) u}` equals `lift.{v u}`. Using `set_option pp.universes true` will make it much
easier to understand what's happening when using this lemma. -/
@[simp, nolint simpNF]
theorem lift_umax' : lift.{max v u, u} = lift.{v, u} :=
lift_umax
#align cardinal.lift_umax' Cardinal.lift_umax'
-- Porting note : simpNF is not happy with universe levels, but this is needed as simp lemma
-- further down in this file
/-- A cardinal lifted to a lower or equal universe equals itself. -/
@[simp, nolint simpNF]
theorem lift_id' (a : Cardinal.{max u v}) : lift.{u} a = a :=
inductionOn a fun _ => mk_congr Equiv.ulift
#align cardinal.lift_id' Cardinal.lift_id'
/-- A cardinal lifted to the same universe equals itself. -/
@[simp]
theorem lift_id (a : Cardinal) : lift.{u, u} a = a :=
lift_id'.{u, u} a
#align cardinal.lift_id Cardinal.lift_id
/-- A cardinal lifted to the zero universe equals itself. -/
-- Porting note : simp can prove this
-- @[simp]
theorem lift_uzero (a : Cardinal.{u}) : lift.{0} a = a :=
lift_id'.{0, u} a
#align cardinal.lift_uzero Cardinal.lift_uzero
@[simp]
theorem lift_lift (a : Cardinal) : lift.{w} (lift.{v} a) = lift.{max v w} a :=
inductionOn a fun _ => (Equiv.ulift.trans <| Equiv.ulift.trans Equiv.ulift.symm).cardinal_eq
#align cardinal.lift_lift Cardinal.lift_lift
/-- We define the order on cardinal numbers by `#α ≤ #β` if and only if
there exists an embedding (injective function) from α to β. -/
instance : LE Cardinal.{u} :=
⟨fun q₁ q₂ =>
Quotient.liftOn₂ q₁ q₂ (fun α β => Nonempty <| α ↪ β) fun _ _ _ _ ⟨e₁⟩ ⟨e₂⟩ =>
propext ⟨fun ⟨e⟩ => ⟨e.congr e₁ e₂⟩, fun ⟨e⟩ => ⟨e.congr e₁.symm e₂.symm⟩⟩⟩
instance partialOrder : PartialOrder Cardinal.{u} where
le := (· ≤ ·)
le_refl := by
rintro ⟨α⟩
exact ⟨Embedding.refl _⟩
le_trans := by
rintro ⟨α⟩ ⟨β⟩ ⟨γ⟩ ⟨e₁⟩ ⟨e₂⟩
exact ⟨e₁.trans e₂⟩
le_antisymm := by
rintro ⟨α⟩ ⟨β⟩ ⟨e₁⟩ ⟨e₂⟩
exact Quotient.sound (e₁.antisymm e₂)
theorem le_def (α β : Type u) : (#α) ≤ (#β) ↔ Nonempty (α ↪ β) :=
Iff.rfl
#align cardinal.le_def Cardinal.le_def
theorem mk_le_of_injective {α β : Type u} {f : α → β} (hf : Injective f) : (#α) ≤ (#β) :=
⟨⟨f, hf⟩⟩
#align cardinal.mk_le_of_injective Cardinal.mk_le_of_injective
theorem _root_.Function.Embedding.cardinal_le {α β : Type u} (f : α ↪ β) : (#α) ≤ (#β) :=
⟨f⟩
#align function.embedding.cardinal_le Function.Embedding.cardinal_le
theorem mk_le_of_surjective {α β : Type u} {f : α → β} (hf : Surjective f) : (#β) ≤ (#α) :=
⟨Embedding.ofSurjective f hf⟩
#align cardinal.mk_le_of_surjective Cardinal.mk_le_of_surjective
theorem le_mk_iff_exists_set {c : Cardinal} {α : Type u} : c ≤ (#α) ↔ ∃ p : Set α, (#p) = c :=
⟨inductionOn c fun _ ⟨⟨f, hf⟩⟩ => ⟨Set.range f, (Equiv.ofInjective f hf).cardinal_eq.symm⟩,
fun ⟨_, e⟩ => e ▸ ⟨⟨Subtype.val, fun _ _ => Subtype.eq⟩⟩⟩
#align cardinal.le_mk_iff_exists_set Cardinal.le_mk_iff_exists_set
theorem mk_subtype_le {α : Type u} (p : α → Prop) : (#Subtype p) ≤ (#α) :=
⟨Embedding.subtype p⟩
#align cardinal.mk_subtype_le Cardinal.mk_subtype_le
theorem mk_set_le (s : Set α) : (#s) ≤ (#α) :=
mk_subtype_le s
#align cardinal.mk_set_le Cardinal.mk_set_le
theorem out_embedding {c c' : Cardinal} : c ≤ c' ↔ Nonempty (c.out ↪ c'.out) := by
trans
rw [← Quotient.out_eq c, ← Quotient.out_eq c']
rfl
#align cardinal.out_embedding Cardinal.out_embedding
theorem lift_mk_le {α : Type u} {β : Type v} :
lift.{max v w} (#α) ≤ lift.{max u w} (#β) ↔ Nonempty (α ↪ β) :=
⟨fun ⟨f⟩ => ⟨Embedding.congr Equiv.ulift Equiv.ulift f⟩, fun ⟨f⟩ =>
⟨Embedding.congr Equiv.ulift.symm Equiv.ulift.symm f⟩⟩
#align cardinal.lift_mk_le Cardinal.lift_mk_le
/-- A variant of `Cardinal.lift_mk_le` with specialized universes.
Because Lean often can not realize it should use this specialization itself,
we provide this statement separately so you don't have to solve the specialization problem either.
-/
theorem lift_mk_le' {α : Type u} {β : Type v} : lift.{v} (#α) ≤ lift.{u} (#β) ↔ Nonempty (α ↪ β) :=
lift_mk_le.{u, v, 0}
#align cardinal.lift_mk_le' Cardinal.lift_mk_le'
theorem lift_mk_eq {α : Type u} {β : Type v} :
lift.{max v w} (#α) = lift.{max u w} (#β) ↔ Nonempty (α ≃ β) :=
Quotient.eq'.trans
⟨fun ⟨f⟩ => ⟨Equiv.ulift.symm.trans <| f.trans Equiv.ulift⟩, fun ⟨f⟩ =>
⟨Equiv.ulift.trans <| f.trans Equiv.ulift.symm⟩⟩
#align cardinal.lift_mk_eq Cardinal.lift_mk_eq
/-- A variant of `Cardinal.lift_mk_eq` with specialized universes.
Because Lean often can not realize it should use this specialization itself,
we provide this statement separately so you don't have to solve the specialization problem either.
-/
theorem lift_mk_eq' {α : Type u} {β : Type v} : lift.{v} (#α) = lift.{u} (#β) ↔ Nonempty (α ≃ β) :=
lift_mk_eq.{u, v, 0}
#align cardinal.lift_mk_eq' Cardinal.lift_mk_eq'
@[simp]
theorem lift_le {a b : Cardinal.{u}} : lift.{v, u} a ≤ lift.{v, u} b ↔ a ≤ b :=
inductionOn₂ a b fun α β => by
rw [← lift_umax]
exact lift_mk_le.{u, u, v}
#align cardinal.lift_le Cardinal.lift_le
-- Porting note: changed `simps` to `simps!` because the linter told to do so.
/-- `Cardinal.lift` as an `OrderEmbedding`. -/
@[simps! (config := { fullyApplied := false })]
def liftOrderEmbedding : Cardinal.{v} ↪o Cardinal.{max v u} :=
OrderEmbedding.ofMapLEIff lift.{u, v} fun _ _ => lift_le
#align cardinal.lift_order_embedding Cardinal.liftOrderEmbedding
theorem lift_injective : Injective lift.{u, v} :=
liftOrderEmbedding.injective
#align cardinal.lift_injective Cardinal.lift_injective
@[simp]
theorem lift_inj {a b : Cardinal.{u}} : lift.{v, u} a = lift.{v, u} b ↔ a = b :=
lift_injective.eq_iff
#align cardinal.lift_inj Cardinal.lift_inj
@[simp]
theorem lift_lt {a b : Cardinal.{u}} : lift.{v, u} a < lift.{v, u} b ↔ a < b :=
liftOrderEmbedding.lt_iff_lt
#align cardinal.lift_lt Cardinal.lift_lt
theorem lift_strictMono : StrictMono lift := fun _ _ => lift_lt.2
#align cardinal.lift_strict_mono Cardinal.lift_strictMono
theorem lift_monotone : Monotone lift :=
lift_strictMono.monotone
#align cardinal.lift_monotone Cardinal.lift_monotone
instance : Zero Cardinal.{u} :=
⟨#PEmpty⟩
instance : Inhabited Cardinal.{u} :=
⟨0⟩
theorem mk_eq_zero (α : Type u) [IsEmpty α] : (#α) = 0 :=
(Equiv.equivPEmpty α).cardinal_eq
#align cardinal.mk_eq_zero Cardinal.mk_eq_zero
@[simp]
theorem lift_zero : lift 0 = 0 :=
mk_congr (Equiv.equivPEmpty _)
#align cardinal.lift_zero Cardinal.lift_zero
@[simp]
theorem lift_eq_zero {a : Cardinal.{v}} : lift.{u} a = 0 ↔ a = 0 :=
lift_injective.eq_iff' lift_zero
#align cardinal.lift_eq_zero Cardinal.lift_eq_zero
theorem mk_eq_zero_iff {α : Type u} : (#α) = 0 ↔ IsEmpty α :=
⟨fun e =>
let ⟨h⟩ := Quotient.exact e
h.isEmpty,
@mk_eq_zero α⟩
#align cardinal.mk_eq_zero_iff Cardinal.mk_eq_zero_iff
theorem mk_ne_zero_iff {α : Type u} : (#α) ≠ 0 ↔ Nonempty α :=
(not_iff_not.2 mk_eq_zero_iff).trans not_isEmpty_iff
#align cardinal.mk_ne_zero_iff Cardinal.mk_ne_zero_iff
@[simp]
theorem mk_ne_zero (α : Type u) [Nonempty α] : (#α) ≠ 0 :=
mk_ne_zero_iff.2 ‹_›
#align cardinal.mk_ne_zero Cardinal.mk_ne_zero
instance : One Cardinal.{u} :=
⟨#PUnit⟩
instance : Nontrivial Cardinal.{u} :=
⟨⟨1, 0, mk_ne_zero _⟩⟩
theorem mk_eq_one (α : Type u) [Unique α] : (#α) = 1 :=
(Equiv.equivPUnit α).cardinal_eq
#align cardinal.mk_eq_one Cardinal.mk_eq_one
theorem le_one_iff_subsingleton {α : Type u} : (#α) ≤ 1 ↔ Subsingleton α :=
⟨fun ⟨f⟩ => ⟨fun _ _ => f.injective (Subsingleton.elim _ _)⟩, fun ⟨h⟩ =>
⟨⟨fun _ => PUnit.unit, fun _ _ _ => h _ _⟩⟩⟩
#align cardinal.le_one_iff_subsingleton Cardinal.le_one_iff_subsingleton
@[simp]
theorem mk_le_one_iff_set_subsingleton {s : Set α} : (#s) ≤ 1 ↔ s.Subsingleton :=
le_one_iff_subsingleton.trans s.subsingleton_coe
#align cardinal.mk_le_one_iff_set_subsingleton Cardinal.mk_le_one_iff_set_subsingleton
alias mk_le_one_iff_set_subsingleton ↔ _ _root_.Set.Subsingleton.cardinal_mk_le_one
#align set.subsingleton.cardinal_mk_le_one Set.Subsingleton.cardinal_mk_le_one
instance : Add Cardinal.{u} :=
⟨map₂ Sum fun _ _ _ _ => Equiv.sumCongr⟩
theorem add_def (α β : Type u) : (#α) + (#β) = (#Sum α β) :=
rfl
#align cardinal.add_def Cardinal.add_def
-- Porting note: Should this be changed to
-- `⟨fun n => lift (#(Fin n))⟩` in the future?
instance : NatCast Cardinal.{u} :=
⟨Nat.unaryCast⟩
@[simp]
theorem mk_sum (α : Type u) (β : Type v) : (#Sum α β) = lift.{v, u} (#α) + lift.{u, v} (#β) :=
mk_congr (Equiv.ulift.symm.sumCongr Equiv.ulift.symm)
#align cardinal.mk_sum Cardinal.mk_sum
@[simp]
theorem mk_option {α : Type u} : (#Option α) = (#α) + 1 :=
(Equiv.optionEquivSumPUnit α).cardinal_eq
#align cardinal.mk_option Cardinal.mk_option
@[simp]
theorem mk_pSum (α : Type u) (β : Type v) : (#PSum α β) = lift.{v} (#α) + lift.{u} (#β) :=
(mk_congr (Equiv.psumEquivSum α β)).trans (mk_sum α β)
#align cardinal.mk_psum Cardinal.mk_pSum
@[simp]
theorem mk_fintype (α : Type u) [h : Fintype α] : (#α) = Fintype.card α := by
refine Fintype.induction_empty_option ?_ ?_ ?_ α (h_fintype := h)
· intro α β h e hα
letI := Fintype.ofEquiv β e.symm
rwa [mk_congr e, Fintype.card_congr e] at hα
· rfl
· intro α h hα
simp [hα]
rfl
#align cardinal.mk_fintype Cardinal.mk_fintype
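-- Illustrative consequences (remark added here, not an original comment): `mk_fintype`
-- specializes to concrete finite types, e.g. `(#Bool) = 2` and `(#Prop) = 2`, recorded below as
-- `mk_bool` and `mk_Prop`.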
instance : Mul Cardinal.{u} :=
⟨map₂ Prod fun _ _ _ _ => Equiv.prodCongr⟩
theorem mul_def (α β : Type u) : (#α) * (#β) = (#α × β) :=
rfl
#align cardinal.mul_def Cardinal.mul_def
@[simp]
theorem mk_prod (α : Type u) (β : Type v) : (#α × β) = lift.{v, u} (#α) * lift.{u, v} (#β) :=
mk_congr (Equiv.ulift.symm.prodCongr Equiv.ulift.symm)
#align cardinal.mk_prod Cardinal.mk_prod
private theorem mul_comm' (a b : Cardinal.{u}) : a * b = b * a :=
inductionOn₂ a b fun α β => mk_congr <| Equiv.prodComm α β
-- #align cardinal.mul_comm' Cardinal.mul_comm'
/-- The cardinal exponential. `#α ^ #β` is the cardinal of `β → α`. -/
instance instPowCardinal : Pow Cardinal.{u} Cardinal.{u} :=
⟨map₂ (fun α β => β → α) fun _ _ _ _ e₁ e₂ => e₂.arrowCongr e₁⟩
-- Porting note: This "workaround" does not work and break everything.
-- I changed it now from `^` to `^'` to prevent a clash
-- with `HPow`, but somebody should figure out
-- if this is still relevant in Lean4.
-- mathport name: cardinal.pow
local infixr:0 "^'" => @HPow.hPow Cardinal Cardinal Cardinal.instPowCardinal
-- -- mathport name: cardinal.pow.nat
local infixr:80 " ^ℕ " => @HPow.hPow Cardinal ℕ Cardinal instHPow
theorem power_def (α β) : ((#α) ^ (#β)) = (#β → α) :=
rfl
#align cardinal.power_def Cardinal.power_def
theorem mk_arrow (α : Type u) (β : Type v) : (#α → β) = (lift.{u} (#β)^lift.{v} (#α)) :=
mk_congr (Equiv.ulift.symm.arrowCongr Equiv.ulift.symm)
#align cardinal.mk_arrow Cardinal.mk_arrow
@[simp]
theorem lift_power (a b : Cardinal.{u}) : lift.{v} (a ^ b) = ((lift.{v} a) ^ (lift.{v} b)) :=
inductionOn₂ a b fun _ _ =>
mk_congr <| Equiv.ulift.trans (Equiv.ulift.arrowCongr Equiv.ulift).symm
#align cardinal.lift_power Cardinal.lift_power
@[simp]
theorem power_zero {a : Cardinal} : (a ^ 0) = 1 :=
inductionOn a fun α => mk_congr <| Equiv.pemptyArrowEquivPUnit α
#align cardinal.power_zero Cardinal.power_zero
@[simp]
theorem power_one {a : Cardinal} : (a ^ 1) = a :=
inductionOn a fun α => mk_congr <| Equiv.punitArrowEquiv α
#align cardinal.power_one Cardinal.power_one
theorem power_add {a b c : Cardinal} : (a ^ (b + c)) = (a ^ b) * (a ^ c) :=
inductionOn₃ a b c fun α β γ => mk_congr <| Equiv.sumArrowEquivProdArrow β γ α
#align cardinal.power_add Cardinal.power_add
instance commSemiring : CommSemiring Cardinal.{u} where
zero := 0
one := 1
add := (· + ·)
mul := (· * ·)
zero_add a := inductionOn a fun α => mk_congr <| Equiv.emptySum PEmpty α
add_zero a := inductionOn a fun α => mk_congr <| Equiv.sumEmpty α PEmpty
add_assoc a b c := inductionOn₃ a b c fun α β γ => mk_congr <| Equiv.sumAssoc α β γ
add_comm a b := inductionOn₂ a b fun α β => mk_congr <| Equiv.sumComm α β
zero_mul a := inductionOn a fun α => mk_congr <| Equiv.pemptyProd α
mul_zero a := inductionOn a fun α => mk_congr <| Equiv.prodPEmpty α
one_mul a := inductionOn a fun α => mk_congr <| Equiv.punitProd α
mul_one a := inductionOn a fun α => mk_congr <| Equiv.prodPUnit α
mul_assoc a b c := inductionOn₃ a b c fun α β γ => mk_congr <| Equiv.prodAssoc α β γ
mul_comm := mul_comm'
left_distrib a b c := inductionOn₃ a b c fun α β γ => mk_congr <| Equiv.prodSumDistrib α β γ
right_distrib a b c := inductionOn₃ a b c fun α β γ => mk_congr <| Equiv.sumProdDistrib α β γ
npow n c := c^n
npow_zero := @power_zero
npow_succ n c := show (c ^ (n + 1)) = c * (c ^ n) by rw [power_add, power_one, mul_comm']
/-! Porting note: Deprecated section. Remove. -/
section deprecated
set_option linter.deprecated false
@[deprecated]
theorem power_bit0 (a b : Cardinal) : (a ^ bit0 b) = (a ^ b) * (a ^ b) :=
power_add
#align cardinal.power_bit0 Cardinal.power_bit0
@[deprecated]
theorem power_bit1 (a b : Cardinal) : (a ^ bit1 b) = (a ^ b) * (a ^ b) * a := by
rw [bit1, ← power_bit0, power_add, power_one]
#align cardinal.power_bit1 Cardinal.power_bit1
end deprecated
@[simp]
theorem one_power {a : Cardinal} : (1 ^ a) = 1 :=
inductionOn a fun α => (Equiv.arrowPUnitEquivPUnit α).cardinal_eq
#align cardinal.one_power Cardinal.one_power
-- Porting note : simp can prove this
-- @[simp]
theorem mk_bool : (#Bool) = 2 := by simp
#align cardinal.mk_bool Cardinal.mk_bool
-- Porting note : simp can prove this
-- @[simp]
theorem mk_Prop : (#Prop) = 2 := by simp
#align cardinal.mk_Prop Cardinal.mk_Prop
@[simp]
theorem zero_power {a : Cardinal} : a ≠ 0 → (0 ^ a) = 0 :=
inductionOn a fun _ heq =>
mk_eq_zero_iff.2 <|
isEmpty_pi.2 <|
let ⟨a⟩ := mk_ne_zero_iff.1 heq
⟨a, inferInstance⟩
#align cardinal.zero_power Cardinal.zero_power
theorem power_ne_zero {a : Cardinal} (b) : a ≠ 0 → (a ^ b) ≠ 0 :=
inductionOn₂ a b fun _ _ h =>
let ⟨a⟩ := mk_ne_zero_iff.1 h
mk_ne_zero_iff.2 ⟨fun _ => a⟩
#align cardinal.power_ne_zero Cardinal.power_ne_zero
theorem mul_power {a b c : Cardinal} : ((a * b) ^ c) = (a ^ c) * (b ^ c) :=
inductionOn₃ a b c fun α β γ => mk_congr <| Equiv.arrowProdEquivProdArrow α β γ
#align cardinal.mul_power Cardinal.mul_power
theorem power_mul {a b c : Cardinal} : (a ^ (b * c)) = ((a ^ b) ^ c) := by
rw [mul_comm b c]
exact inductionOn₃ a b c fun α β γ => mk_congr <| Equiv.curry γ β α
#align cardinal.power_mul Cardinal.power_mul
@[simp]
theorem pow_cast_right (a : Cardinal.{u}) (n : ℕ) : (a^(↑n : Cardinal.{u})) = a ^ℕ n :=
rfl
#align cardinal.pow_cast_right Cardinal.pow_cast_right
@[simp]
theorem lift_one : lift 1 = 1 :=
mk_congr <| Equiv.ulift.trans Equiv.punitEquivPUnit
#align cardinal.lift_one Cardinal.lift_one
@[simp]
theorem lift_add (a b : Cardinal.{u}) : lift.{v} (a + b) = lift.{v} a + lift.{v} b :=
inductionOn₂ a b fun _ _ =>
mk_congr <| Equiv.ulift.trans (Equiv.sumCongr Equiv.ulift Equiv.ulift).symm
#align cardinal.lift_add Cardinal.lift_add
@[simp]
theorem lift_mul (a b : Cardinal.{u}) : lift.{v} (a * b) = lift.{v} a * lift.{v} b :=
inductionOn₂ a b fun _ _ =>
mk_congr <| Equiv.ulift.trans (Equiv.prodCongr Equiv.ulift Equiv.ulift).symm
#align cardinal.lift_mul Cardinal.lift_mul
/-! Porting note: Deprecated section. Remove. -/
section deprecated
set_option linter.deprecated false
@[simp, deprecated]
theorem lift_bit0 (a : Cardinal) : lift.{v} (bit0 a) = bit0 (lift.{v} a) :=
lift_add a a
#align cardinal.lift_bit0 Cardinal.lift_bit0
@[simp, deprecated]
theorem lift_bit1 (a : Cardinal) : lift.{v} (bit1 a) = bit1 (lift.{v} a) := by simp [bit1]
#align cardinal.lift_bit1 Cardinal.lift_bit1
end deprecated
-- Porting note: Proof used to be simp, needed to remind simp that 1 + 1 = 2
theorem lift_two : lift.{u, v} 2 = 2 := by simp [←one_add_one_eq_two]
#align cardinal.lift_two Cardinal.lift_two
@[simp]
theorem mk_set {α : Type u} : (#Set α) = (2 ^ (#α)) := by simp [←one_add_one_eq_two, Set, mk_arrow]
#align cardinal.mk_set Cardinal.mk_set
/-- A variant of `Cardinal.mk_set` expressed in terms of a `Set` instead of a `Type`. -/
@[simp]
theorem mk_powerset {α : Type u} (s : Set α) : (#↥(𝒫 s)) = (2 ^ (#↥s)) :=
(mk_congr (Equiv.Set.powerset s)).trans mk_set
#align cardinal.mk_powerset Cardinal.mk_powerset
theorem lift_two_power (a) : lift.{v} (2 ^ a) = (2 ^ lift.{v} a) := by simp [←one_add_one_eq_two]
#align cardinal.lift_two_power Cardinal.lift_two_power
section OrderProperties
open Sum
protected theorem zero_le : ∀ a : Cardinal, 0 ≤ a := by
rintro ⟨α⟩
exact ⟨Embedding.ofIsEmpty⟩
#align cardinal.zero_le Cardinal.zero_le
private theorem add_le_add' : ∀ {a b c d : Cardinal}, a ≤ b → c ≤ d → a + c ≤ b + d := by
rintro ⟨α⟩ ⟨β⟩ ⟨γ⟩ ⟨δ⟩ ⟨e₁⟩ ⟨e₂⟩; exact ⟨e₁.sumMap e₂⟩
-- #align cardinal.add_le_add' Cardinal.add_le_add'
instance add_covariantClass : CovariantClass Cardinal Cardinal (· + ·) (· ≤ ·) :=
⟨fun _ _ _ => add_le_add' le_rfl⟩
#align cardinal.add_covariant_class Cardinal.add_covariantClass
instance add_swap_covariantClass : CovariantClass Cardinal Cardinal (swap (· + ·)) (· ≤ ·) :=
⟨fun _ _ _ h => add_le_add' h le_rfl⟩
#align cardinal.add_swap_covariant_class Cardinal.add_swap_covariantClass
instance : CanonicallyOrderedCommSemiring Cardinal.{u} :=
{ Cardinal.commSemiring,
Cardinal.partialOrder with
bot := 0
bot_le := Cardinal.zero_le
add_le_add_left := fun a b => add_le_add_left
exists_add_of_le := by
intro a b
exact inductionOn₂ a b fun α β ⟨⟨f, hf⟩⟩ =>
have : Sum α (range fᶜ : Set β) ≃ β :=
(Equiv.sumCongr (Equiv.ofInjective f hf) (Equiv.refl _)).trans <|
Equiv.Set.sumCompl (range f)
⟨#↥(range fᶜ), mk_congr this.symm⟩
le_self_add := fun a b => (add_zero a).ge.trans <| add_le_add_left (Cardinal.zero_le _) _
eq_zero_or_eq_zero_of_mul_eq_zero := by
intro a b
exact inductionOn₂ a b fun α β => by
simpa only [mul_def, mk_eq_zero_iff, isEmpty_prod] using id }
theorem zero_power_le (c : Cardinal.{u}) : ((0 : Cardinal.{u})^c) ≤ 1 := by
by_cases h : c = 0
· rw [h, power_zero]
· rw [zero_power h]
apply zero_le
#align cardinal.zero_power_le Cardinal.zero_power_le
theorem power_le_power_left : ∀ {a b c : Cardinal}, a ≠ 0 → b ≤ c → (a^b) ≤ (a^c) := by
rintro ⟨α⟩ ⟨β⟩ ⟨γ⟩ hα ⟨e⟩
let ⟨a⟩ := mk_ne_zero_iff.1 hα
exact ⟨@Function.Embedding.arrowCongrLeft _ _ _ ⟨a⟩ e⟩
#align cardinal.power_le_power_left Cardinal.power_le_power_left
theorem self_le_power (a : Cardinal) {b : Cardinal} (hb : 1 ≤ b) : a ≤ (a^b) :=
by
rcases eq_or_ne a 0 with (rfl | ha)
· exact zero_le _
· convert power_le_power_left ha hb
exact power_one.symm
#align cardinal.self_le_power Cardinal.self_le_power
/-- **Cantor's theorem** -/
theorem cantor (a : Cardinal.{u}) : a < (2^a) :=
by
induction' a using Cardinal.inductionOn with α
rw [← mk_set]
refine' ⟨⟨⟨singleton, fun a b => singleton_eq_singleton_iff.1⟩⟩, _⟩
rintro ⟨⟨f, hf⟩⟩
exact cantor_injective f hf
#align cardinal.cantor Cardinal.cantor
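-- Illustrative sketch (not part of the original file): Cantor's theorem at the cardinal of `ℕ`
-- gives the familiar statement that the continuum `2 ^ (#ℕ)` is strictly bigger than `(#ℕ)`.
example : (#ℕ) < (2 ^ (#ℕ)) := cantor (#ℕ)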
instance : NoMaxOrder Cardinal.{u} :=
{ Cardinal.partialOrder with exists_gt := fun a => ⟨_, cantor a⟩ }
instance : CanonicallyLinearOrderedAddMonoid Cardinal.{u} :=
{ (inferInstance : CanonicallyOrderedAddMonoid Cardinal),
-- Porting note: Needed to add .{u} below
Cardinal.partialOrder.{u} with
le_total := by
rintro ⟨α⟩ ⟨β⟩
apply Embedding.total
decidable_le := Classical.decRel _}
-- short-circuit type class inference
instance : DistribLattice Cardinal.{u} := inferInstance
theorem one_lt_iff_nontrivial {α : Type u} : 1 < (#α) ↔ Nontrivial α := by
rw [← not_le, le_one_iff_subsingleton, ← not_nontrivial_iff_subsingleton, Classical.not_not]
#align cardinal.one_lt_iff_nontrivial Cardinal.one_lt_iff_nontrivial
theorem power_le_max_power_one {a b c : Cardinal} (h : b ≤ c) : (a^b) ≤ max (a^c) 1 :=
by
by_cases ha : a = 0
· simp [ha, zero_power_le]
· exact (power_le_power_left ha h).trans (le_max_left _ _)
#align cardinal.power_le_max_power_one Cardinal.power_le_max_power_one
theorem power_le_power_right {a b c : Cardinal} : a ≤ b → (a^c) ≤ (b^c) :=
inductionOn₃ a b c fun _ _ _ ⟨e⟩ => ⟨Embedding.arrowCongrRight e⟩
#align cardinal.power_le_power_right Cardinal.power_le_power_right
theorem power_pos {a : Cardinal} (b) (ha : 0 < a) : 0 < (a^b) :=
(power_ne_zero _ ha.ne').bot_lt
#align cardinal.power_pos Cardinal.power_pos
end OrderProperties
protected theorem lt_wf : @WellFounded Cardinal.{u} (· < ·) :=
⟨fun a =>
byContradiction fun h => by
let ι := { c : Cardinal // ¬Acc (· < ·) c }
let f : ι → Cardinal := Subtype.val
haveI hι : Nonempty ι := ⟨⟨_, h⟩⟩
obtain ⟨⟨c : Cardinal, hc : ¬Acc (· < ·) c⟩, ⟨h_1 : ∀ j, (f ⟨c, hc⟩).out ↪ (f j).out⟩⟩ :=
Embedding.min_injective fun i => (f i).out
apply hc (Acc.intro _ fun j h' => byContradiction fun hj => h'.2 _)
-- Porting note: Needed to add this intro
intro j _ hj
have : (#_) ≤ (#_) := ⟨h_1 ⟨j, hj⟩⟩
simpa only [mk_out] using this⟩
#align cardinal.lt_wf Cardinal.lt_wf
instance : WellFoundedRelation Cardinal.{u} :=
⟨(· < ·), Cardinal.lt_wf⟩
instance : WellFoundedLT Cardinal.{u} :=
⟨Cardinal.lt_wf⟩
instance wo : @IsWellOrder Cardinal.{u} (· < ·) where
#align cardinal.wo Cardinal.wo
instance : ConditionallyCompleteLinearOrderBot Cardinal :=
IsWellOrder.conditionallyCompleteLinearOrderBot _
@[simp]
theorem infₛ_empty : infₛ (∅ : Set Cardinal.{u}) = 0 :=
dif_neg Set.not_nonempty_empty
#align cardinal.Inf_empty Cardinal.infₛ_empty
/-- Note that the successor of `c` is not the same as `c + 1` except in the case of finite `c`. -/
instance : SuccOrder Cardinal :=
SuccOrder.ofSuccLeIff (fun c => infₛ { c' | c < c' })
-- Porting note: Needed to insert `by apply` in the next line
⟨by apply lt_of_lt_of_le <| cinfₛ_mem <| exists_gt _,
-- Porting note used to be just `cinfₛ_le'`
fun h ↦ by apply cinfₛ_le'; exact h⟩
theorem succ_def (c : Cardinal) : succ c = infₛ { c' | c < c' } :=
rfl
#align cardinal.succ_def Cardinal.succ_def
theorem add_one_le_succ (c : Cardinal.{u}) : c + 1 ≤ succ c := by
-- Porting note: rewrote the next three lines to avoid defeq abuse.
have : Set.Nonempty { c' | c < c' } := exists_gt c
simp_rw [succ_def, le_cinfₛ_iff'' this, mem_setOf]
intro b hlt
rcases b, c with ⟨⟨β⟩, ⟨γ⟩⟩
cases' le_of_lt hlt with f
have : ¬Surjective f := fun hn => (not_le_of_lt hlt) (mk_le_of_surjective hn)
simp only [Surjective, not_forall] at this
rcases this with ⟨b, hb⟩
calc
(#γ) + 1 = (#Option γ) := mk_option.symm
_ ≤ (#β) := (f.optionElim b hb).cardinal_le
#align cardinal.add_one_le_succ Cardinal.add_one_le_succ
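-- For infinite cardinals the inequality above is strict: e.g. `ℵ₀ + 1 = ℵ₀ < succ ℵ₀`
-- (see `aleph0_add_nat` below), so `succ c` and `c + 1` agree only for finite `c`,
-- as noted in the docstring of the `SuccOrder` instance.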
theorem succ_pos : ∀ c : Cardinal, 0 < succ c :=
bot_lt_succ
#align cardinal.succ_pos Cardinal.succ_pos
theorem succ_ne_zero (c : Cardinal) : succ c ≠ 0 :=
(succ_pos _).ne'
#align cardinal.succ_ne_zero Cardinal.succ_ne_zero
/-- The indexed sum of cardinals is the cardinality of the
indexed disjoint union, i.e. sigma type. -/
def sum {ι} (f : ι → Cardinal) : Cardinal :=
mk (Σi, (f i).out)
#align cardinal.sum Cardinal.sum
theorem le_sum {ι} (f : ι → Cardinal) (i) : f i ≤ sum f := by
rw [← Quotient.out_eq (f i)]
exact ⟨⟨fun a => ⟨i, a⟩, fun a b h => by injection h⟩⟩
#align cardinal.le_sum Cardinal.le_sum
@[simp]
theorem mk_sigma {ι} (f : ι → Type _) : (#Σi, f i) = sum fun i => #f i :=
mk_congr <| Equiv.sigmaCongrRight fun _ => outMkEquiv.symm
#align cardinal.mk_sigma Cardinal.mk_sigma
@[simp]
theorem sum_const (ι : Type u) (a : Cardinal.{v}) :
(sum fun _ : ι => a) = lift.{v} (#ι) * lift.{u} a :=
inductionOn a fun α =>
mk_congr <|
calc
(Σ _ : ι, Quotient.out (#α)) ≃ ι × Quotient.out (#α) := Equiv.sigmaEquivProd _ _
_ ≃ ULift ι × ULift α := Equiv.ulift.symm.prodCongr (outMkEquiv.trans Equiv.ulift.symm)
#align cardinal.sum_const Cardinal.sum_const
theorem sum_const' (ι : Type u) (a : Cardinal.{u}) : (sum fun _ : ι => a) = (#ι) * a := by simp
#align cardinal.sum_const' Cardinal.sum_const'
@[simp]
theorem sum_add_distrib {ι} (f g : ι → Cardinal) : sum (f + g) = sum f + sum g := by
have := mk_congr (Equiv.sigmaSumDistrib (Quotient.out ∘ f) (Quotient.out ∘ g))
simp only [comp_apply, mk_sigma, mk_sum, mk_out, lift_id] at this
exact this
#align cardinal.sum_add_distrib Cardinal.sum_add_distrib
@[simp]
theorem sum_add_distrib' {ι} (f g : ι → Cardinal) :
(Cardinal.sum fun i => f i + g i) = sum f + sum g :=
sum_add_distrib f g
#align cardinal.sum_add_distrib' Cardinal.sum_add_distrib'
@[simp]
theorem lift_sum {ι : Type u} (f : ι → Cardinal.{v}) :
Cardinal.lift.{w} (Cardinal.sum f) = Cardinal.sum fun i => Cardinal.lift.{w} (f i) :=
Equiv.cardinal_eq <|
Equiv.ulift.trans <|
Equiv.sigmaCongrRight fun a =>
-- Porting note: Inserted universe hint .{_,_,v} below
Nonempty.some <| by rw [← lift_mk_eq.{_,_,v}, mk_out, mk_out, lift_lift]
#align cardinal.lift_sum Cardinal.lift_sum
theorem sum_le_sum {ι} (f g : ι → Cardinal) (H : ∀ i, f i ≤ g i) : sum f ≤ sum g :=
⟨(Embedding.refl _).sigmaMap fun i =>
Classical.choice <| by have := H i; rwa [← Quot.out_eq (f i), ← Quot.out_eq (g i)] at this⟩
#align cardinal.sum_le_sum Cardinal.sum_le_sum
theorem mk_le_mk_mul_of_mk_preimage_le {c : Cardinal} (f : α → β) (hf : ∀ b : β, (#f ⁻¹' {b}) ≤ c) :
(#α) ≤ (#β) * c := by
simpa only [← mk_congr (@Equiv.sigmaFiberEquiv α β f), mk_sigma, ← sum_const'] using
sum_le_sum _ _ hf
#align cardinal.mk_le_mk_mul_of_mk_preimage_le Cardinal.mk_le_mk_mul_of_mk_preimage_le
theorem lift_mk_le_lift_mk_mul_of_lift_mk_preimage_le {α : Type u} {β : Type v} {c : Cardinal}
(f : α → β) (hf : ∀ b : β, lift.{v} (#f ⁻¹' {b}) ≤ c) : lift.{v} (#α) ≤ lift.{u} (#β) * c :=
(mk_le_mk_mul_of_mk_preimage_le fun x : ULift.{v} α => ULift.up.{u} (f x.1)) <|
ULift.forall.2 fun b =>
(mk_congr <|
(Equiv.ulift.image _).trans
(Equiv.trans
(by
rw [Equiv.image_eq_preimage]
/- Porting note: Need to insert the following `have` b/c bad fun coercion
behaviour for Equivs -/
have : FunLike.coe (Equiv.symm (Equiv.ulift (α := α))) = ULift.up (α := α) := rfl
rw [this]
simp [Set.preimage]
exact Equiv.refl _)
Equiv.ulift.symm)).trans_le
(hf b)
#align
cardinal.lift_mk_le_lift_mk_mul_of_lift_mk_preimage_le
Cardinal.lift_mk_le_lift_mk_mul_of_lift_mk_preimage_le
/-- The range of an indexed cardinal function, whose outputs live in a higher universe than the
inputs, is always bounded above. -/
theorem bddAbove_range {ι : Type u} (f : ι → Cardinal.{max u v}) : BddAbove (Set.range f) :=
⟨_, by
rintro a ⟨i, rfl⟩
-- Porting note: Added universe reference below
exact le_sum.{v,u} f i⟩
#align cardinal.bdd_above_range Cardinal.bddAbove_range
instance (a : Cardinal.{u}) : Small.{u} (Set.Iic a) :=
by
rw [← mk_out a]
apply @small_of_surjective (Set a.out) (Iic (#a.out)) _ fun x => ⟨#x, mk_set_le x⟩
rintro ⟨x, hx⟩
simpa using le_mk_iff_exists_set.1 hx
instance (a : Cardinal.{u}) : Small.{u} (Set.Iio a) :=
small_subset Iio_subset_Iic_self
/-- A set of cardinals is bounded above iff it's small, i.e. it corresponds to a usual ZFC set. -/
theorem bddAbove_iff_small {s : Set Cardinal.{u}} : BddAbove s ↔ Small.{u} s :=
⟨fun ⟨a, ha⟩ => @small_subset _ (Iic a) s (fun x h => ha h) _,
by
rintro ⟨ι, ⟨e⟩⟩
suffices (range fun x : ι => (e.symm x).1) = s
by
rw [← this]
apply bddAbove_range.{u, u}
ext x
refine' ⟨_, fun hx => ⟨e ⟨x, hx⟩, _⟩⟩
· rintro ⟨a, rfl⟩
exact (e.symm a).2
· simp_rw [Equiv.symm_apply_apply]⟩
#align cardinal.bdd_above_iff_small Cardinal.bddAbove_iff_small
theorem bddAbove_of_small (s : Set Cardinal.{u}) [h : Small.{u} s] : BddAbove s :=
bddAbove_iff_small.2 h
#align cardinal.bdd_above_of_small Cardinal.bddAbove_of_small
theorem bddAbove_image (f : Cardinal.{u} → Cardinal.{max u v}) {s : Set Cardinal.{u}}
(hs : BddAbove s) : BddAbove (f '' s) := by
rw [bddAbove_iff_small] at hs ⊢
-- Porting note: added universes below
exact small_lift.{_,v,_} _
#align cardinal.bdd_above_image Cardinal.bddAbove_image
theorem bddAbove_range_comp {ι : Type u} {f : ι → Cardinal.{v}} (hf : BddAbove (range f))
(g : Cardinal.{v} → Cardinal.{max v w}) : BddAbove (range (g ∘ f)) :=
by
rw [range_comp]
exact bddAbove_image.{v,w} g hf
#align cardinal.bdd_above_range_comp Cardinal.bddAbove_range_comp
theorem supᵢ_le_sum {ι} (f : ι → Cardinal) : supᵢ f ≤ sum f :=
csupᵢ_le' <| le_sum.{u_2,u_1} _
#align cardinal.supr_le_sum Cardinal.supᵢ_le_sum
-- Porting note: Added universe hint .{v,_} below
theorem sum_le_supᵢ_lift {ι : Type u}
(f : ι → Cardinal.{max u v}) : sum f ≤ Cardinal.lift.{v,_} (#ι) * supᵢ f :=
by
rw [← (supᵢ f).lift_id, ← lift_umax, lift_umax.{max u v, u}, ← sum_const]
exact sum_le_sum _ _ (le_csupᵢ <| bddAbove_range.{u, v} f)
#align cardinal.sum_le_supr_lift Cardinal.sum_le_supᵢ_lift
theorem sum_le_supᵢ {ι : Type u} (f : ι → Cardinal.{u}) : sum f ≤ (#ι) * supᵢ f :=
by
rw [← lift_id (#ι)]
exact sum_le_supᵢ_lift f
#align cardinal.sum_le_supr Cardinal.sum_le_supᵢ
theorem sum_nat_eq_add_sum_succ (f : ℕ → Cardinal.{u}) :
Cardinal.sum f = f 0 + Cardinal.sum fun i => f (i + 1) :=
by
refine' (Equiv.sigmaNatSucc fun i => Quotient.out (f i)).cardinal_eq.trans _
simp only [mk_sum, mk_out, lift_id, mk_sigma]
#align cardinal.sum_nat_eq_add_sum_succ Cardinal.sum_nat_eq_add_sum_succ
-- Porting note: LHS is not in normal form.
-- @[simp]
/-- A variant of `csupᵢ_of_empty` but with `0` on the RHS for convenience -/
protected theorem supᵢ_of_empty {ι} (f : ι → Cardinal) [IsEmpty ι] : supᵢ f = 0 :=
csupᵢ_of_empty f
#align cardinal.supr_of_empty Cardinal.supᵢ_of_empty
-- Porting note: simpNF is not happy with universe levels.
@[simp, nolint simpNF]
theorem lift_mk_shrink (α : Type u) [Small.{v} α] :
Cardinal.lift.{max u w} (#Shrink.{v} α) = Cardinal.lift.{max v w} (#α) :=
-- Porting note: Added .{v,u,w} universe hint below
lift_mk_eq.{v,u,w}.2 ⟨(equivShrink α).symm⟩
#align cardinal.lift_mk_shrink Cardinal.lift_mk_shrink
@[simp]
theorem lift_mk_shrink' (α : Type u) [Small.{v} α] :
Cardinal.lift.{u} (#Shrink.{v} α) = Cardinal.lift.{v} (#α) :=
lift_mk_shrink.{u, v, 0} α
#align cardinal.lift_mk_shrink' Cardinal.lift_mk_shrink'
@[simp]
theorem lift_mk_shrink'' (α : Type max u v) [Small.{v} α] :
Cardinal.lift.{u} (#Shrink.{v} α) = (#α) := by
rw [← lift_umax', lift_mk_shrink.{max u v, v, 0} α, ← lift_umax, lift_id]
#align cardinal.lift_mk_shrink'' Cardinal.lift_mk_shrink''
/-- The indexed product of cardinals is the cardinality of the Pi type
(dependent product). -/
def prod {ι : Type u} (f : ι → Cardinal) : Cardinal :=
#∀ i, (f i).out
#align cardinal.prod Cardinal.prod
@[simp]
theorem mk_pi {ι : Type u} (α : ι → Type v) : (#∀ i, α i) = prod fun i => #α i :=
mk_congr <| Equiv.piCongrRight fun _ => outMkEquiv.symm
#align cardinal.mk_pi Cardinal.mk_pi
@[simp]
theorem prod_const (ι : Type u) (a : Cardinal.{v}) :
(prod fun _ : ι => a) = (lift.{u} a^lift.{v} (#ι)) :=
inductionOn a fun _ =>
mk_congr <| Equiv.piCongr Equiv.ulift.symm fun _ => outMkEquiv.trans Equiv.ulift.symm
#align cardinal.prod_const Cardinal.prod_const
theorem prod_const' (ι : Type u) (a : Cardinal.{u}) : (prod fun _ : ι => a) = (a^(#ι)) :=
inductionOn a fun _ => (mk_pi _).symm
#align cardinal.prod_const' Cardinal.prod_const'
@[simp]
theorem prod_eq_zero {ι} (f : ι → Cardinal.{u}) : prod f = 0 ↔ ∃ i, f i = 0 :=
by
lift f to ι → Type u using fun _ => trivial
simp only [mk_eq_zero_iff, ← mk_pi, isEmpty_pi]
#align cardinal.prod_eq_zero Cardinal.prod_eq_zero
theorem prod_ne_zero {ι} (f : ι → Cardinal) : prod f ≠ 0 ↔ ∀ i, f i ≠ 0 := by simp [prod_eq_zero]
#align cardinal.prod_ne_zero Cardinal.prod_ne_zero
@[simp]
theorem lift_prod {ι : Type u} (c : ι → Cardinal.{v}) :
lift.{w} (prod c) = prod fun i => lift.{w} (c i) :=
by
lift c to ι → Type v using fun _ => trivial
simp only [← mk_pi, ← mk_uLift]
exact mk_congr (Equiv.ulift.trans <| Equiv.piCongrRight fun i => Equiv.ulift.symm)
#align cardinal.lift_prod Cardinal.lift_prod
theorem prod_eq_of_fintype {α : Type u} [h : Fintype α] (f : α → Cardinal.{v}) :
prod f = Cardinal.lift.{u} (∏ i, f i) := by
revert f
refine' Fintype.induction_empty_option _ _ _ α (h_fintype := h)
· intro α β hβ e h f
letI := Fintype.ofEquiv β e.symm
rw [← e.prod_comp f, ← h]
exact mk_congr (e.piCongrLeft _).symm
· intro f
rw [Fintype.univ_pempty, Finset.prod_empty, lift_one, Cardinal.prod, mk_eq_one]
· intro α hα h f
rw [Cardinal.prod, mk_congr Equiv.piOptionEquivProd, mk_prod, lift_umax'.{v, u}, mk_out, ←
Cardinal.prod, lift_prod, Fintype.prod_option, lift_mul, ← h fun a => f (some a)]
simp only [lift_id]
#align cardinal.prod_eq_of_fintype Cardinal.prod_eq_of_fintype
-- Porting note: Inserted .{u,v} below
@[simp]
theorem lift_infₛ (s : Set Cardinal) : lift.{u,v} (infₛ s) = infₛ (lift.{u,v} '' s) :=
by
rcases eq_empty_or_nonempty s with (rfl | hs)
· simp
· exact lift_monotone.map_cinfₛ hs
#align cardinal.lift_Inf Cardinal.lift_infₛ
-- Porting note: Inserted .{u,v} below
@[simp]
theorem lift_infᵢ {ι} (f : ι → Cardinal) : lift.{u,v} (infᵢ f) = ⨅ i, lift.{u,v} (f i) :=
by
unfold infᵢ
convert lift_infₛ (range f)
simp_rw [←comp_apply (f := lift), range_comp]
#align cardinal.lift_infi Cardinal.lift_infᵢ
theorem lift_down {a : Cardinal.{u}} {b : Cardinal.{max u v}} :
b ≤ lift.{v,u} a → ∃ a', lift.{v,u} a' = b :=
inductionOn₂ a b fun α β => by
rw [← lift_id (#β), ← lift_umax, ← lift_umax.{u, v}, lift_mk_le.{_,_,v}]
exact fun ⟨f⟩ =>
⟨#Set.range f,
Eq.symm <| lift_mk_eq.{_, _, v}.2
⟨Function.Embedding.equivOfSurjective (Embedding.codRestrict _ f Set.mem_range_self)
fun ⟨a, ⟨b, e⟩⟩ => ⟨b, Subtype.eq e⟩⟩⟩
#align cardinal.lift_down Cardinal.lift_down
-- Porting note: Inserted .{u,v} below
theorem le_lift_iff {a : Cardinal.{u}} {b : Cardinal.{max u v}} :
b ≤ lift.{v,u} a ↔ ∃ a', lift.{v,u} a' = b ∧ a' ≤ a :=
⟨fun h =>
let ⟨a', e⟩ := lift_down h
⟨a', e, lift_le.1 <| e.symm ▸ h⟩,
fun ⟨_, e, h⟩ => e ▸ lift_le.2 h⟩
#align cardinal.le_lift_iff Cardinal.le_lift_iff
-- Porting note: Inserted .{u,v} below
theorem lt_lift_iff {a : Cardinal.{u}} {b : Cardinal.{max u v}} :
b < lift.{v,u} a ↔ ∃ a', lift.{v,u} a' = b ∧ a' < a :=
⟨fun h =>
let ⟨a', e⟩ := lift_down h.le
⟨a', e, lift_lt.1 <| e.symm ▸ h⟩,
fun ⟨_, e, h⟩ => e ▸ lift_lt.2 h⟩
#align cardinal.lt_lift_iff Cardinal.lt_lift_iff
-- Porting note: Inserted .{u,v} below
@[simp]
theorem lift_succ (a) : lift.{v,u} (succ a) = succ (lift.{v,u} a) :=
le_antisymm
(le_of_not_gt fun h => by
rcases lt_lift_iff.1 h with ⟨b, e, h⟩
rw [lt_succ_iff, ← lift_le, e] at h
exact h.not_lt (lt_succ _))
(succ_le_of_lt <| lift_lt.2 <| lt_succ a)
#align cardinal.lift_succ Cardinal.lift_succ
-- Porting note: simpNF is not happy with universe levels.
-- Porting note: Inserted .{u,v} below
@[simp, nolint simpNF]
theorem lift_umax_eq {a : Cardinal.{u}} {b : Cardinal.{v}} :
lift.{max v w} a = lift.{max u w} b ↔ lift.{v} a = lift.{u} b := by
rw [← lift_lift.{u,v,w}, ← lift_lift.{v,u,w}, lift_inj]
#align cardinal.lift_umax_eq Cardinal.lift_umax_eq
-- Porting note: Inserted .{u,v} below
@[simp]
theorem lift_min {a b : Cardinal} : lift.{u,v} (min a b) = min (lift.{u,v} a) (lift.{u,v} b) :=
lift_monotone.map_min
#align cardinal.lift_min Cardinal.lift_min
-- Porting note: Inserted .{u,v} below
@[simp]
theorem lift_max {a b : Cardinal} : lift.{u,v} (max a b) = max (lift.{u,v} a) (lift.{u,v} b) :=
lift_monotone.map_max
#align cardinal.lift_max Cardinal.lift_max
/-- The lift of a supremum is the supremum of the lifts. -/
theorem lift_supₛ {s : Set Cardinal} (hs : BddAbove s) : lift.{u} (supₛ s) = supₛ (lift.{u} '' s) :=
by
apply ((le_csupₛ_iff' (bddAbove_image.{_,u} _ hs)).2 fun c hc => _).antisymm (csupₛ_le' _)
· intro c hc
by_contra h
obtain ⟨d, rfl⟩ := Cardinal.lift_down (not_le.1 h).le
simp_rw [lift_le] at h hc
rw [csupₛ_le_iff' hs] at h
exact h fun a ha => lift_le.1 <| hc (mem_image_of_mem _ ha)
· rintro i ⟨j, hj, rfl⟩
exact lift_le.2 (le_csupₛ hs hj)
#align cardinal.lift_Sup Cardinal.lift_supₛ
/-- The lift of a supremum is the supremum of the lifts. -/
theorem lift_supᵢ {ι : Type v} {f : ι → Cardinal.{w}} (hf : BddAbove (range f)) :
lift.{u} (supᵢ f) = ⨆ i, lift.{u} (f i) := by
rw [supᵢ, supᵢ, lift_supₛ hf, ← range_comp]
simp [Function.comp]
#align cardinal.lift_supr Cardinal.lift_supᵢ
/-- To prove that the lift of a supremum is bounded by some cardinal `t`,
it suffices to show that the lift of each cardinal is bounded by `t`. -/
theorem lift_supᵢ_le {ι : Type v} {f : ι → Cardinal.{w}} {t : Cardinal} (hf : BddAbove (range f))
(w : ∀ i, lift.{u} (f i) ≤ t) : lift.{u} (supᵢ f) ≤ t :=
by
rw [lift_supᵢ hf]
exact csupᵢ_le' w
#align cardinal.lift_supr_le Cardinal.lift_supᵢ_le
@[simp]
theorem lift_supᵢ_le_iff {ι : Type v} {f : ι → Cardinal.{w}} (hf : BddAbove (range f))
{t : Cardinal} : lift.{u} (supᵢ f) ≤ t ↔ ∀ i, lift.{u} (f i) ≤ t := by
rw [lift_supᵢ hf]
exact csupᵢ_le_iff' (bddAbove_range_comp.{_,_,u} hf _)
#align cardinal.lift_supr_le_iff Cardinal.lift_supᵢ_le_iff
universe v' w'
/-- To prove an inequality between the lifts to a common universe of two different supremums,
it suffices to show that the lift of each cardinal from the smaller supremum
is bounded by the lift of some cardinal from the larger supremum.
-/
theorem lift_supᵢ_le_lift_supᵢ {ι : Type v} {ι' : Type v'} {f : ι → Cardinal.{w}}
{f' : ι' → Cardinal.{w'}} (hf : BddAbove (range f)) (hf' : BddAbove (range f')) {g : ι → ι'}
(h : ∀ i, lift.{w'} (f i) ≤ lift.{w} (f' (g i))) : lift.{w'} (supᵢ f) ≤ lift.{w} (supᵢ f') :=
by
rw [lift_supᵢ hf, lift_supᵢ hf']
exact csupᵢ_mono' (bddAbove_range_comp.{_,_,w} hf' _) fun i => ⟨_, h i⟩
#align cardinal.lift_supr_le_lift_supr Cardinal.lift_supᵢ_le_lift_supᵢ
/-- A variant of `lift_supᵢ_le_lift_supᵢ` with universes specialized via `w = v` and `w' = v'`.
This is sometimes necessary to avoid universe unification issues. -/
theorem lift_supᵢ_le_lift_supᵢ' {ι : Type v} {ι' : Type v'} {f : ι → Cardinal.{v}}
{f' : ι' → Cardinal.{v'}} (hf : BddAbove (range f)) (hf' : BddAbove (range f')) (g : ι → ι')
(h : ∀ i, lift.{v'} (f i) ≤ lift.{v} (f' (g i))) : lift.{v'} (supᵢ f) ≤ lift.{v} (supᵢ f') :=
lift_supᵢ_le_lift_supᵢ hf hf' h
#align cardinal.lift_supr_le_lift_supr' Cardinal.lift_supᵢ_le_lift_supᵢ'
/-- `ℵ₀` is the smallest infinite cardinal. -/
def aleph0 : Cardinal.{u} :=
lift (#ℕ)
#align cardinal.aleph_0 Cardinal.aleph0
-- mathport name: cardinal.aleph_0
@[inherit_doc]
scoped notation "ℵ₀" => Cardinal.aleph0
theorem mk_nat : (#ℕ) = ℵ₀ :=
(lift_id _).symm
#align cardinal.mk_nat Cardinal.mk_nat
theorem aleph0_ne_zero : ℵ₀ ≠ 0 :=
mk_ne_zero _
#align cardinal.aleph_0_ne_zero Cardinal.aleph0_ne_zero
theorem aleph0_pos : 0 < ℵ₀ :=
pos_iff_ne_zero.2 aleph0_ne_zero
#align cardinal.aleph_0_pos Cardinal.aleph0_pos
@[simp]
theorem lift_aleph0 : lift ℵ₀ = ℵ₀ :=
lift_lift _
#align cardinal.lift_aleph_0 Cardinal.lift_aleph0
@[simp]
theorem aleph0_le_lift {c : Cardinal.{u}} : ℵ₀ ≤ lift.{v} c ↔ ℵ₀ ≤ c := by
rw [← lift_aleph0.{u,v}, lift_le]
#align cardinal.aleph_0_le_lift Cardinal.aleph0_le_lift
@[simp]
theorem lift_le_aleph0 {c : Cardinal.{u}} : lift.{v} c ≤ ℵ₀ ↔ c ≤ ℵ₀ := by
rw [← lift_aleph0.{u,v}, lift_le]
#align cardinal.lift_le_aleph_0 Cardinal.lift_le_aleph0
/-! ### Properties about the cast from `ℕ` -/
-- Porting note : simp can prove this
-- @[simp]
theorem mk_fin (n : ℕ) : (#Fin n) = n := by simp
#align cardinal.mk_fin Cardinal.mk_fin
@[simp]
theorem lift_natCast (n : ℕ) : lift.{u} (n : Cardinal.{v}) = n := by induction n <;> simp [*]
#align cardinal.lift_nat_cast Cardinal.lift_natCast
@[simp]
theorem lift_eq_nat_iff {a : Cardinal.{u}} {n : ℕ} : lift.{v} a = n ↔ a = n :=
lift_injective.eq_iff' (lift_natCast n)
#align cardinal.lift_eq_nat_iff Cardinal.lift_eq_nat_iff
@[simp]
theorem nat_eq_lift_iff {n : ℕ} {a : Cardinal.{u}} :
(n : Cardinal) = lift.{v} a ↔ (n : Cardinal) = a := by
rw [← lift_natCast.{v,u} n, lift_inj]
#align cardinal.nat_eq_lift_iff Cardinal.nat_eq_lift_iff
theorem lift_mk_fin (n : ℕ) : lift (#Fin n) = n := by simp
#align cardinal.lift_mk_fin Cardinal.lift_mk_fin
theorem mk_coe_finset {α : Type u} {s : Finset α} : (#s) = ↑(Finset.card s) := by simp
#align cardinal.mk_coe_finset Cardinal.mk_coe_finset
theorem mk_finset_of_fintype [Fintype α] : (#Finset α) = 2 ^ℕ Fintype.card α := by
simp [Pow.pow]
#align cardinal.mk_finset_of_fintype Cardinal.mk_finset_of_fintype
@[simp]
theorem mk_finsupp_lift_of_fintype (α : Type u) (β : Type v) [Fintype α] [Zero β] :
(#α →₀ β) = lift.{u} (#β) ^ℕ Fintype.card α := by
simpa using (@Finsupp.equivFunOnFinite α β _ _).cardinal_eq
#align cardinal.mk_finsupp_lift_of_fintype Cardinal.mk_finsupp_lift_of_fintype
theorem mk_finsupp_of_fintype (α β : Type u) [Fintype α] [Zero β] :
(#α →₀ β) = (#β) ^ℕ Fintype.card α := by simp
#align cardinal.mk_finsupp_of_fintype Cardinal.mk_finsupp_of_fintype
theorem card_le_of_finset {α} (s : Finset α) : (s.card : Cardinal) ≤ (#α) :=
@mk_coe_finset _ s ▸ mk_set_le _
#align cardinal.card_le_of_finset Cardinal.card_le_of_finset
-- Porting note: was `simp`. LHS is not normal form.
-- @[simp, norm_cast]
@[norm_cast]
theorem natCast_pow {m n : ℕ} : (↑(m ^ n) : Cardinal) = (m^n) := by
induction n <;> simp [pow_succ', power_add, *, Pow.pow]
#align cardinal.nat_cast_pow Cardinal.natCast_pow
-- Porting note : simp can prove this
-- @[simp, norm_cast]
@[norm_cast]
theorem natCast_le {m n : ℕ} : (m : Cardinal) ≤ n ↔ m ≤ n := by
rw [← lift_mk_fin, ← lift_mk_fin, lift_le, le_def, Function.Embedding.nonempty_iff_card_le,
Fintype.card_fin, Fintype.card_fin]
#align cardinal.nat_cast_le Cardinal.natCast_le
-- Porting note : simp can prove this
-- @[simp, norm_cast]
@[norm_cast]
theorem natCast_lt {m n : ℕ} : (m : Cardinal) < n ↔ m < n := by
rw [lt_iff_le_not_le, ← not_le]
simp only [natCast_le, not_le, and_iff_right_iff_imp]
exact fun h ↦ le_of_lt h
#align cardinal.nat_cast_lt Cardinal.natCast_lt
instance : CharZero Cardinal :=
⟨StrictMono.injective fun _ _ => natCast_lt.2⟩
theorem natCast_inj {m n : ℕ} : (m : Cardinal) = n ↔ m = n :=
Nat.cast_inj
#align cardinal.nat_cast_inj Cardinal.natCast_inj
theorem natCast_injective : Injective ((↑) : ℕ → Cardinal) :=
Nat.cast_injective
#align cardinal.nat_cast_injective Cardinal.natCast_injective
@[simp, norm_cast]
theorem nat_succ (n : ℕ) : (n.succ : Cardinal) = succ ↑n :=
(add_one_le_succ _).antisymm (succ_le_of_lt <| natCast_lt.2 <| Nat.lt_succ_self _)
#align cardinal.nat_succ Cardinal.nat_succ
@[simp]
theorem succ_zero : succ (0 : Cardinal) = 1 := by norm_cast
#align cardinal.succ_zero Cardinal.succ_zero
theorem card_le_of {α : Type u} {n : ℕ} (H : ∀ s : Finset α, s.card ≤ n) : (#α) ≤ n :=
by
refine' le_of_lt_succ (lt_of_not_ge fun hn => _)
rw [← Cardinal.nat_succ, ← lift_mk_fin n.succ] at hn
cases' hn with f
refine' (H <| Finset.univ.map f).not_lt _
rw [Finset.card_map, ← Fintype.card, Fintype.card_ulift, Fintype.card_fin]
exact n.lt_succ_self
#align cardinal.card_le_of Cardinal.card_le_of
theorem cantor' (a) {b : Cardinal} (hb : 1 < b) : a < (b^a) :=
by
rw [← succ_le_iff, (by norm_cast : succ (1 : Cardinal) = 2)] at hb
exact (cantor a).trans_le (power_le_power_right hb)
#align cardinal.cantor' Cardinal.cantor'
theorem one_le_iff_pos {c : Cardinal} : 1 ≤ c ↔ 0 < c := by
rw [← succ_zero, succ_le_iff]
#align cardinal.one_le_iff_pos Cardinal.one_le_iff_pos
theorem one_le_iff_ne_zero {c : Cardinal} : 1 ≤ c ↔ c ≠ 0 := by
rw [one_le_iff_pos, pos_iff_ne_zero]
#align cardinal.one_le_iff_ne_zero Cardinal.one_le_iff_ne_zero
theorem nat_lt_aleph0 (n : ℕ) : (n : Cardinal.{u}) < ℵ₀ :=
succ_le_iff.1
(by
rw [← nat_succ, ← lift_mk_fin, aleph0, lift_mk_le.{0, 0, u}]
exact ⟨⟨(↑), fun a b => Fin.ext⟩⟩)
#align cardinal.nat_lt_aleph_0 Cardinal.nat_lt_aleph0
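-- Illustrative sketch (not part of the original file): every natural number, viewed as a
-- cardinal, is strictly below `ℵ₀`.
example : ((42 : ℕ) : Cardinal) < ℵ₀ := nat_lt_aleph0 42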
@[simp]
theorem one_lt_aleph0 : 1 < ℵ₀ := by simpa using nat_lt_aleph0 1
#align cardinal.one_lt_aleph_0 Cardinal.one_lt_aleph0
theorem one_le_aleph0 : 1 ≤ ℵ₀ :=
one_lt_aleph0.le
#align cardinal.one_le_aleph_0 Cardinal.one_le_aleph0
theorem lt_aleph0 {c : Cardinal} : c < ℵ₀ ↔ ∃ n : ℕ, c = n :=
⟨fun h => by
rcases lt_lift_iff.1 h with ⟨c, rfl, h'⟩
rcases le_mk_iff_exists_set.1 h'.1 with ⟨S, rfl⟩
suffices S.Finite by
lift S to Finset ℕ using this
simp
contrapose! h'
haveI := Infinite.to_subtype h'
exact ⟨Infinite.natEmbedding S⟩, fun ⟨n, e⟩ => e.symm ▸ nat_lt_aleph0 _⟩
#align cardinal.lt_aleph_0 Cardinal.lt_aleph0
theorem aleph0_le {c : Cardinal} : ℵ₀ ≤ c ↔ ∀ n : ℕ, ↑n ≤ c :=
⟨fun h n => (nat_lt_aleph0 _).le.trans h, fun h =>
le_of_not_lt fun hn => by
rcases lt_aleph0.1 hn with ⟨n, rfl⟩
exact (Nat.lt_succ_self _).not_le (natCast_le.1 (h (n + 1)))⟩
#align cardinal.aleph_0_le Cardinal.aleph0_le
@[simp]
theorem range_natCast : range ((↑) : ℕ → Cardinal) = Iio ℵ₀ :=
ext fun x => by simp only [mem_Iio, mem_range, eq_comm, lt_aleph0]
#align cardinal.range_nat_cast Cardinal.range_natCast
theorem mk_eq_nat_iff {α : Type u} {n : ℕ} : (#α) = n ↔ Nonempty (α ≃ Fin n) := by
rw [← lift_mk_fin, ← lift_uzero (#α), lift_mk_eq']
#align cardinal.mk_eq_nat_iff Cardinal.mk_eq_nat_iff
theorem lt_aleph0_iff_finite {α : Type u} : (#α) < ℵ₀ ↔ Finite α := by
simp only [lt_aleph0, mk_eq_nat_iff, finite_iff_exists_equiv_fin]
#align cardinal.lt_aleph_0_iff_finite Cardinal.lt_aleph0_iff_finite
theorem lt_aleph0_iff_fintype {α : Type u} : (#α) < ℵ₀ ↔ Nonempty (Fintype α) :=
lt_aleph0_iff_finite.trans (finite_iff_nonempty_fintype _)
#align cardinal.lt_aleph_0_iff_fintype Cardinal.lt_aleph0_iff_fintype
theorem lt_aleph0_of_finite (α : Type u) [Finite α] : (#α) < ℵ₀ :=
lt_aleph0_iff_finite.2 ‹_›
#align cardinal.lt_aleph_0_of_finite Cardinal.lt_aleph0_of_finite
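-- Illustrative sketch (not part of the original file): any type with a `Finite` instance,
-- such as `Bool`, has cardinality below `ℵ₀`.
example : (#Bool) < ℵ₀ := lt_aleph0_of_finite Bool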
-- Porting note : simp can prove this
-- @[simp]
theorem lt_aleph0_iff_set_finite {S : Set α} : (#S) < ℵ₀ ↔ S.Finite :=
lt_aleph0_iff_finite.trans finite_coe_iff
#align cardinal.lt_aleph_0_iff_set_finite Cardinal.lt_aleph0_iff_set_finite
alias lt_aleph0_iff_set_finite ↔ _ _root_.Set.Finite.lt_aleph0
#align set.finite.lt_aleph_0 Set.Finite.lt_aleph0
@[simp]
theorem lt_aleph0_iff_subtype_finite {p : α → Prop} : (#{ x // p x }) < ℵ₀ ↔ { x | p x }.Finite :=
lt_aleph0_iff_set_finite
#align cardinal.lt_aleph_0_iff_subtype_finite Cardinal.lt_aleph0_iff_subtype_finite
theorem mk_le_aleph0_iff : (#α) ≤ ℵ₀ ↔ Countable α := by
rw [countable_iff_nonempty_embedding, aleph0, ← lift_uzero (#α), lift_mk_le']
#align cardinal.mk_le_aleph_0_iff Cardinal.mk_le_aleph0_iff
@[simp]
theorem mk_le_aleph0 [Countable α] : (#α) ≤ ℵ₀ :=
mk_le_aleph0_iff.mpr ‹_›
#align cardinal.mk_le_aleph_0 Cardinal.mk_le_aleph0
-- Porting note : simp can prove this
-- @[simp]
theorem le_aleph0_iff_set_countable {s : Set α} : (#s) ≤ ℵ₀ ↔ s.Countable := by
rw [mk_le_aleph0_iff, countable_coe_iff]
#align cardinal.le_aleph_0_iff_set_countable Cardinal.le_aleph0_iff_set_countable
alias le_aleph0_iff_set_countable ↔ _ _root_.Set.Countable.le_aleph0
#align set.countable.le_aleph_0 Set.Countable.le_aleph0
@[simp]
theorem le_aleph0_iff_subtype_countable {p : α → Prop} :
(#{ x // p x }) ≤ ℵ₀ ↔ { x | p x }.Countable :=
le_aleph0_iff_set_countable
#align cardinal.le_aleph_0_iff_subtype_countable Cardinal.le_aleph0_iff_subtype_countable
instance canLiftCardinalNat : CanLift Cardinal ℕ (↑) fun x => x < ℵ₀ :=
⟨fun _ hx =>
let ⟨n, hn⟩ := lt_aleph0.mp hx
⟨n, hn.symm⟩⟩
#align cardinal.can_lift_cardinal_nat Cardinal.canLiftCardinalNat
theorem add_lt_aleph0 {a b : Cardinal} (ha : a < ℵ₀) (hb : b < ℵ₀) : a + b < ℵ₀ :=
match a, b, lt_aleph0.1 ha, lt_aleph0.1 hb with
| _, _, ⟨m, rfl⟩, ⟨n, rfl⟩ => by rw [← Nat.cast_add]; apply nat_lt_aleph0
#align cardinal.add_lt_aleph_0 Cardinal.add_lt_aleph0
theorem add_lt_aleph0_iff {a b : Cardinal} : a + b < ℵ₀ ↔ a < ℵ₀ ∧ b < ℵ₀ :=
⟨fun h => ⟨(self_le_add_right _ _).trans_lt h, (self_le_add_left _ _).trans_lt h⟩, fun ⟨h1, h2⟩ =>
add_lt_aleph0 h1 h2⟩
#align cardinal.add_lt_aleph_0_iff Cardinal.add_lt_aleph0_iff
theorem aleph0_le_add_iff {a b : Cardinal} : ℵ₀ ≤ a + b ↔ ℵ₀ ≤ a ∨ ℵ₀ ≤ b := by
simp only [← not_lt, add_lt_aleph0_iff, not_and_or]
#align cardinal.aleph_0_le_add_iff Cardinal.aleph0_le_add_iff
/-- See also `Cardinal.nsmul_lt_aleph0_iff_of_ne_zero` if you already have `n ≠ 0`. -/
theorem nsmul_lt_aleph0_iff {n : ℕ} {a : Cardinal} : n • a < ℵ₀ ↔ n = 0 ∨ a < ℵ₀ :=
by
cases n with
| zero => simpa using nat_lt_aleph0 0
| succ n =>
simp only [Nat.succ_ne_zero, false_or_iff]
induction' n with n ih
· simp
rw [succ_nsmul, add_lt_aleph0_iff, ih, and_self_iff]
#align cardinal.nsmul_lt_aleph_0_iff Cardinal.nsmul_lt_aleph0_iff
/-- See also `Cardinal.nsmul_lt_aleph0_iff` for a hypothesis-free version. -/
theorem nsmul_lt_aleph0_iff_of_ne_zero {n : ℕ} {a : Cardinal} (h : n ≠ 0) : n • a < ℵ₀ ↔ a < ℵ₀ :=
nsmul_lt_aleph0_iff.trans <| or_iff_right h
#align cardinal.nsmul_lt_aleph_0_iff_of_ne_zero Cardinal.nsmul_lt_aleph0_iff_of_ne_zero
theorem mul_lt_aleph0 {a b : Cardinal} (ha : a < ℵ₀) (hb : b < ℵ₀) : a * b < ℵ₀ :=
match a, b, lt_aleph0.1 ha, lt_aleph0.1 hb with
| _, _, ⟨m, rfl⟩, ⟨n, rfl⟩ => by rw [← Nat.cast_mul]; apply nat_lt_aleph0
#align cardinal.mul_lt_aleph_0 Cardinal.mul_lt_aleph0
theorem mul_lt_aleph0_iff {a b : Cardinal} : a * b < ℵ₀ ↔ a = 0 ∨ b = 0 ∨ a < ℵ₀ ∧ b < ℵ₀ :=
by
refine' ⟨fun h => _, _⟩
· by_cases ha : a = 0
· exact Or.inl ha
right
by_cases hb : b = 0
· exact Or.inl hb
right
rw [← Ne, ← one_le_iff_ne_zero] at ha hb
constructor
· rw [← mul_one a]
refine' (mul_le_mul' le_rfl hb).trans_lt h
· rw [← one_mul b]
refine' (mul_le_mul' ha le_rfl).trans_lt h
rintro (rfl | rfl | ⟨ha, hb⟩) <;> simp only [*, mul_lt_aleph0, aleph0_pos, zero_mul, mul_zero]
#align cardinal.mul_lt_aleph_0_iff Cardinal.mul_lt_aleph0_iff
/-- See also `Cardinal.aleph0_le_mul_iff'`. -/
theorem aleph0_le_mul_iff {a b : Cardinal} : ℵ₀ ≤ a * b ↔ a ≠ 0 ∧ b ≠ 0 ∧ (ℵ₀ ≤ a ∨ ℵ₀ ≤ b) :=
by
let h := (@mul_lt_aleph0_iff a b).not
rwa [not_lt, not_or, not_or, not_and_or, not_lt, not_lt] at h
#align cardinal.aleph_0_le_mul_iff Cardinal.aleph0_le_mul_iff
/-- See also `Cardinal.aleph0_le_mul_iff`. -/
theorem aleph0_le_mul_iff' {a b : Cardinal.{u}} : ℵ₀ ≤ a * b ↔ a ≠ 0 ∧ ℵ₀ ≤ b ∨ ℵ₀ ≤ a ∧ b ≠ 0 :=
by
have : ∀ {a : Cardinal.{u}}, ℵ₀ ≤ a → a ≠ 0 := fun a => ne_bot_of_le_ne_bot aleph0_ne_zero a
simp only [aleph0_le_mul_iff, and_or_left, and_iff_right_of_imp this, @and_left_comm (a ≠ 0)]
simp only [and_comm, or_comm]
#align cardinal.aleph_0_le_mul_iff' Cardinal.aleph0_le_mul_iff'
theorem mul_lt_aleph0_iff_of_ne_zero {a b : Cardinal} (ha : a ≠ 0) (hb : b ≠ 0) :
a * b < ℵ₀ ↔ a < ℵ₀ ∧ b < ℵ₀ := by simp [mul_lt_aleph0_iff, ha, hb]
#align cardinal.mul_lt_aleph_0_iff_of_ne_zero Cardinal.mul_lt_aleph0_iff_of_ne_zero
theorem power_lt_aleph0 {a b : Cardinal} (ha : a < ℵ₀) (hb : b < ℵ₀) : (a^b) < ℵ₀ :=
match a, b, lt_aleph0.1 ha, lt_aleph0.1 hb with
| _, _, ⟨m, rfl⟩, ⟨n, rfl⟩ => by rw [← natCast_pow]; apply nat_lt_aleph0
#align cardinal.power_lt_aleph_0 Cardinal.power_lt_aleph0
theorem eq_one_iff_unique {α : Type _} : (#α) = 1 ↔ Subsingleton α ∧ Nonempty α :=
calc
(#α) = 1 ↔ (#α) ≤ 1 ∧ 1 ≤ (#α) := le_antisymm_iff
_ ↔ Subsingleton α ∧ Nonempty α :=
le_one_iff_subsingleton.and (one_le_iff_ne_zero.trans mk_ne_zero_iff)
#align cardinal.eq_one_iff_unique Cardinal.eq_one_iff_unique
theorem infinite_iff {α : Type u} : Infinite α ↔ ℵ₀ ≤ (#α) := by
rw [← not_lt, lt_aleph0_iff_finite, not_finite_iff_infinite]
#align cardinal.infinite_iff Cardinal.infinite_iff
@[simp]
theorem aleph0_le_mk (α : Type u) [Infinite α] : ℵ₀ ≤ (#α) :=
infinite_iff.1 ‹_›
#align cardinal.aleph_0_le_mk Cardinal.aleph0_le_mk
@[simp]
theorem mk_eq_aleph0 (α : Type _) [Countable α] [Infinite α] : (#α) = ℵ₀ :=
mk_le_aleph0.antisymm <| aleph0_le_mk _
#align cardinal.mk_eq_aleph_0 Cardinal.mk_eq_aleph0
theorem denumerable_iff {α : Type u} : Nonempty (Denumerable α) ↔ (#α) = ℵ₀ :=
⟨fun ⟨h⟩ => mk_congr ((@Denumerable.eqv α h).trans Equiv.ulift.symm), fun h =>
by
cases' Quotient.exact h with f
exact ⟨Denumerable.mk' <| f.trans Equiv.ulift⟩⟩
#align cardinal.denumerable_iff Cardinal.denumerable_iff
-- Porting note : simp can prove this
-- @[simp]
theorem mk_denumerable (α : Type u) [Denumerable α] : (#α) = ℵ₀ :=
denumerable_iff.1 ⟨‹_›⟩
#align cardinal.mk_denumerable Cardinal.mk_denumerable
@[simp]
theorem aleph0_add_aleph0 : ℵ₀ + ℵ₀ = ℵ₀ :=
mk_denumerable _
#align cardinal.aleph_0_add_aleph_0 Cardinal.aleph0_add_aleph0
theorem aleph0_mul_aleph0 : ℵ₀ * ℵ₀ = ℵ₀ :=
mk_denumerable _
#align cardinal.aleph_0_mul_aleph_0 Cardinal.aleph0_mul_aleph0
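-- The two identities above package the classical bijections `ℕ ⊕ ℕ ≃ ℕ` and `ℕ × ℕ ≃ ℕ`:
-- the sum and the product of two denumerable types are again denumerable, which is exactly
-- what `mk_denumerable` records.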
@[simp]
theorem nat_mul_aleph0 {n : ℕ} (hn : n ≠ 0) : ↑n * ℵ₀ = ℵ₀ :=
le_antisymm (lift_mk_fin n ▸ mk_le_aleph0) <|
le_mul_of_one_le_left (zero_le _) <| by
rwa [← Nat.cast_one, natCast_le, Nat.one_le_iff_ne_zero]
#align cardinal.nat_mul_aleph_0 Cardinal.nat_mul_aleph0
@[simp]
theorem aleph0_mul_nat {n : ℕ} (hn : n ≠ 0) : ℵ₀ * n = ℵ₀ := by rw [mul_comm, nat_mul_aleph0 hn]
#align cardinal.aleph_0_mul_nat Cardinal.aleph0_mul_nat
@[simp]
theorem add_le_aleph0 {c₁ c₂ : Cardinal} : c₁ + c₂ ≤ ℵ₀ ↔ c₁ ≤ ℵ₀ ∧ c₂ ≤ ℵ₀ :=
⟨fun h => ⟨le_self_add.trans h, le_add_self.trans h⟩, fun h =>
aleph0_add_aleph0 ▸ add_le_add h.1 h.2⟩
#align cardinal.add_le_aleph_0 Cardinal.add_le_aleph0
@[simp]
theorem aleph0_add_nat (n : ℕ) : ℵ₀ + n = ℵ₀ :=
(add_le_aleph0.2 ⟨le_rfl, (nat_lt_aleph0 n).le⟩).antisymm le_self_add
#align cardinal.aleph_0_add_nat Cardinal.aleph0_add_nat
@[simp]
theorem nat_add_aleph0 (n : ℕ) : ↑n + ℵ₀ = ℵ₀ := by rw [add_comm, aleph0_add_nat]
#align cardinal.nat_add_aleph_0 Cardinal.nat_add_aleph0
/-- This function sends finite cardinals to the corresponding natural number, and infinite
cardinals to `0`. -/
def toNat : ZeroHom Cardinal ℕ :=
⟨fun c => if h : c < aleph0.{v} then Classical.choose (lt_aleph0.1 h) else 0,
by
have h : 0 < ℵ₀ := nat_lt_aleph0 0
dsimp only
rw [dif_pos h, ← Cardinal.natCast_inj, ← Classical.choose_spec (lt_aleph0.1 h),
Nat.cast_zero]⟩
#align cardinal.to_nat Cardinal.toNat
theorem toNat_apply_of_lt_aleph0 {c : Cardinal} (h : c < ℵ₀) :
toNat c = Classical.choose (lt_aleph0.1 h) :=
dif_pos h
#align cardinal.to_nat_apply_of_lt_aleph_0 Cardinal.toNat_apply_of_lt_aleph0
theorem toNat_apply_of_aleph0_le {c : Cardinal} (h : ℵ₀ ≤ c) : toNat c = 0 :=
dif_neg h.not_lt
#align cardinal.to_nat_apply_of_aleph_0_le Cardinal.toNat_apply_of_aleph0_le
theorem cast_toNat_of_lt_aleph0 {c : Cardinal} (h : c < ℵ₀) : ↑(toNat c) = c := by
rw [toNat_apply_of_lt_aleph0 h, ← Classical.choose_spec (lt_aleph0.1 h)]
#align cardinal.cast_to_nat_of_lt_aleph_0 Cardinal.cast_toNat_of_lt_aleph0
theorem cast_toNat_of_aleph0_le {c : Cardinal} (h : ℵ₀ ≤ c) : ↑(toNat c) = (0 : Cardinal) := by
rw [toNat_apply_of_aleph0_le h, Nat.cast_zero]
#align cardinal.cast_to_nat_of_aleph_0_le Cardinal.cast_toNat_of_aleph0_le
theorem toNat_le_iff_le_of_lt_aleph0 {c d : Cardinal} (hc : c < ℵ₀) (hd : d < ℵ₀) :
toNat c ≤ toNat d ↔ c ≤ d := by
rw [← natCast_le, cast_toNat_of_lt_aleph0 hc, cast_toNat_of_lt_aleph0 hd]
#align cardinal.to_nat_le_iff_le_of_lt_aleph_0 Cardinal.toNat_le_iff_le_of_lt_aleph0
theorem toNat_lt_iff_lt_of_lt_aleph0 {c d : Cardinal} (hc : c < ℵ₀) (hd : d < ℵ₀) :
toNat c < toNat d ↔ c < d := by
rw [← natCast_lt, cast_toNat_of_lt_aleph0 hc, cast_toNat_of_lt_aleph0 hd]
#align cardinal.to_nat_lt_iff_lt_of_lt_aleph_0 Cardinal.toNat_lt_iff_lt_of_lt_aleph0
theorem toNat_le_of_le_of_lt_aleph0 {c d : Cardinal} (hd : d < ℵ₀) (hcd : c ≤ d) :
toNat c ≤ toNat d :=
(toNat_le_iff_le_of_lt_aleph0 (hcd.trans_lt hd) hd).mpr hcd
#align cardinal.to_nat_le_of_le_of_lt_aleph_0 Cardinal.toNat_le_of_le_of_lt_aleph0
theorem toNat_lt_of_lt_of_lt_aleph0 {c d : Cardinal} (hd : d < ℵ₀) (hcd : c < d) :
toNat c < toNat d :=
(toNat_lt_iff_lt_of_lt_aleph0 (hcd.trans hd) hd).mpr hcd
#align cardinal.to_nat_lt_of_lt_of_lt_aleph_0 Cardinal.toNat_lt_of_lt_of_lt_aleph0
@[simp]
theorem toNat_cast (n : ℕ) : Cardinal.toNat n = n :=
by
rw [toNat_apply_of_lt_aleph0 (nat_lt_aleph0 n), ← natCast_inj]
exact (Classical.choose_spec (lt_aleph0.1 (nat_lt_aleph0 n))).symm
#align cardinal.to_nat_cast Cardinal.toNat_cast
/-- `toNat` has a right-inverse: coercion. -/
theorem toNat_rightInverse : Function.RightInverse ((↑) : ℕ → Cardinal) toNat :=
toNat_cast
#align cardinal.to_nat_right_inverse Cardinal.toNat_rightInverse
theorem toNat_surjective : Surjective toNat :=
toNat_rightInverse.surjective
#align cardinal.to_nat_surjective Cardinal.toNat_surjective
theorem exists_nat_eq_of_le_nat {c : Cardinal} {n : ℕ} (h : c ≤ n) : ∃ m, m ≤ n ∧ c = m :=
let he := cast_toNat_of_lt_aleph0 (h.trans_lt <| nat_lt_aleph0 n)
⟨toNat c, natCast_le.1 (he.trans_le h), he.symm⟩
#align cardinal.exists_nat_eq_of_le_nat Cardinal.exists_nat_eq_of_le_nat
@[simp]
theorem mk_toNat_of_infinite [h : Infinite α] : toNat (#α) = 0 :=
dif_neg (infinite_iff.1 h).not_lt
#align cardinal.mk_to_nat_of_infinite Cardinal.mk_toNat_of_infinite
@[simp]
theorem aleph0_toNat : toNat ℵ₀ = 0 :=
toNat_apply_of_aleph0_le le_rfl
#align cardinal.aleph_0_to_nat Cardinal.aleph0_toNat
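-- Illustrative sketch (not part of the original file): `toNat` recovers a finite cardinal as a
-- natural number and collapses infinite cardinals such as `ℵ₀` to `0`.
example : toNat ((5 : ℕ) : Cardinal) = 5 := toNat_cast 5
example : toNat ℵ₀ = 0 := aleph0_toNat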
theorem mk_toNat_eq_card [Fintype α] : toNat (#α) = Fintype.card α := by simp
#align cardinal.mk_to_nat_eq_card Cardinal.mk_toNat_eq_card
-- Porting note : simp can prove this
-- @[simp]
theorem zero_toNat : toNat 0 = 0 := by rw [← toNat_cast 0, Nat.cast_zero]
#align cardinal.zero_to_nat Cardinal.zero_toNat
@[simp]
theorem one_toNat : toNat 1 = 1 := by rw [← toNat_cast 1, Nat.cast_one]
#align cardinal.one_to_nat Cardinal.one_toNat
theorem toNat_eq_iff {c : Cardinal} {n : ℕ} (hn : n ≠ 0) : toNat c = n ↔ c = n :=
⟨fun h =>
(cast_toNat_of_lt_aleph0
(lt_of_not_ge (hn ∘ h.symm.trans ∘ toNat_apply_of_aleph0_le))).symm.trans
(congr_arg _ h),
fun h => (congr_arg toNat h).trans (toNat_cast n)⟩
#align cardinal.to_nat_eq_iff Cardinal.toNat_eq_iff
@[simp]
theorem toNat_eq_one {c : Cardinal} : toNat c = 1 ↔ c = 1 := by
rw [toNat_eq_iff one_ne_zero, Nat.cast_one]
#align cardinal.to_nat_eq_one Cardinal.toNat_eq_one
theorem toNat_eq_one_iff_unique {α : Type _} : toNat (#α) = 1 ↔ Subsingleton α ∧ Nonempty α :=
toNat_eq_one.trans eq_one_iff_unique
#align cardinal.to_nat_eq_one_iff_unique Cardinal.toNat_eq_one_iff_unique
@[simp]
theorem toNat_lift (c : Cardinal.{v}) : toNat (lift.{u, v} c) = toNat c :=
by
apply natCast_injective
cases' lt_or_ge c ℵ₀ with hc hc
· rw [cast_toNat_of_lt_aleph0, ← lift_natCast.{u,v}, cast_toNat_of_lt_aleph0 hc]
rwa [← lift_aleph0.{v,u}, lift_lt]
· rw [cast_toNat_of_aleph0_le, ← lift_natCast.{u,v}, cast_toNat_of_aleph0_le hc, lift_zero]
rwa [← lift_aleph0.{v,u}, lift_le]
#align cardinal.to_nat_lift Cardinal.toNat_lift
theorem toNat_congr {β : Type v} (e : α ≃ β) : toNat (#α) = toNat (#β) := by
-- Porting note: Inserted universe hint below
rw [← toNat_lift, (lift_mk_eq.{_,_,v}).mpr ⟨e⟩, toNat_lift]
#align cardinal.to_nat_congr Cardinal.toNat_congr
@[simp]
theorem toNat_mul (x y : Cardinal) : toNat (x * y) = toNat x * toNat y :=
by
rcases eq_or_ne x 0 with (rfl | hx1)
· rw [zero_mul, zero_toNat, zero_mul]
rcases eq_or_ne y 0 with (rfl | hy1)
· rw [mul_zero, zero_toNat, mul_zero]
cases' lt_or_le x ℵ₀ with hx2 hx2
· cases' lt_or_le y ℵ₀ with hy2 hy2
· lift x to ℕ using hx2
lift y to ℕ using hy2
rw [← Nat.cast_mul, toNat_cast, toNat_cast, toNat_cast]
· rw [toNat_apply_of_aleph0_le hy2, mul_zero, toNat_apply_of_aleph0_le]
exact aleph0_le_mul_iff'.2 (Or.inl ⟨hx1, hy2⟩)
· rw [toNat_apply_of_aleph0_le hx2, zero_mul, toNat_apply_of_aleph0_le]
exact aleph0_le_mul_iff'.2 (Or.inr ⟨hx2, hy1⟩)
#align cardinal.to_nat_mul Cardinal.toNat_mul
/-- `Cardinal.toNat` as a `MonoidWithZeroHom`. -/
@[simps]
def toNatHom : Cardinal →*₀ ℕ where
toFun := toNat
map_zero' := zero_toNat
map_one' := one_toNat
map_mul' := toNat_mul
#align cardinal.to_nat_hom Cardinal.toNatHom
theorem toNat_finset_prod (s : Finset α) (f : α → Cardinal) :
toNat (∏ i in s, f i) = ∏ i in s, toNat (f i) :=
map_prod toNatHom _ _
#align cardinal.to_nat_finset_prod Cardinal.toNat_finset_prod
@[simp]
theorem toNat_add_of_lt_aleph0 {a : Cardinal.{u}} {b : Cardinal.{v}} (ha : a < ℵ₀) (hb : b < ℵ₀) :
toNat (lift.{v, u} a + lift.{u, v} b) = toNat a + toNat b :=
by
apply Cardinal.natCast_injective
replace ha : lift.{v, u} a < ℵ₀ := by
rw [← lift_aleph0.{u,v}]
exact lift_lt.2 ha
replace hb : lift.{u, v} b < ℵ₀ := by
rw [← lift_aleph0.{v,u}]
exact lift_lt.2 hb
rw [Nat.cast_add, ← toNat_lift.{v, u} a, ← toNat_lift.{u, v} b, cast_toNat_of_lt_aleph0 ha,
cast_toNat_of_lt_aleph0 hb, cast_toNat_of_lt_aleph0 (add_lt_aleph0 ha hb)]
#align cardinal.to_nat_add_of_lt_aleph_0 Cardinal.toNat_add_of_lt_aleph0
/-- This function sends finite cardinals to the corresponding natural number, and infinite
cardinals to `⊤`. -/
def toPartENat : Cardinal →+ PartENat
where
toFun c := if c < ℵ₀ then toNat c else ⊤
map_zero' := by simp [if_pos (zero_lt_one.trans one_lt_aleph0)]
map_add' x y := by
by_cases hx : x < ℵ₀
· obtain ⟨x0, rfl⟩ := lt_aleph0.1 hx
by_cases hy : y < ℵ₀
· obtain ⟨y0, rfl⟩ := lt_aleph0.1 hy
simp only [add_lt_aleph0 hx hy, hx, hy, toNat_cast, if_true]
rw [← Nat.cast_add, toNat_cast, Nat.cast_add]
· simp_rw [if_neg hy, PartENat.add_top]
contrapose! hy
simp only [ne_eq, ite_eq_right_iff,
PartENat.natCast_ne_top, not_forall, exists_prop, and_true] at hy
exact le_add_self.trans_lt hy
· simp_rw [if_neg hx, if_neg, PartENat.top_add]
contrapose! hx
simp only [ne_eq, ite_eq_right_iff,
PartENat.natCast_ne_top, not_forall, exists_prop, and_true] at hx
exact le_self_add.trans_lt hx
#align cardinal.to_part_enat Cardinal.toPartENat
theorem toPartENat_apply_of_lt_aleph0 {c : Cardinal} (h : c < ℵ₀) : toPartENat c = toNat c :=
if_pos h
#align cardinal.to_part_enat_apply_of_lt_aleph_0 Cardinal.toPartENat_apply_of_lt_aleph0
theorem toPartENat_apply_of_aleph0_le {c : Cardinal} (h : ℵ₀ ≤ c) : toPartENat c = ⊤ :=
if_neg h.not_lt
#align cardinal.to_part_enat_apply_of_aleph_0_le Cardinal.toPartENat_apply_of_aleph0_le
@[simp]
theorem toPartENat_cast (n : ℕ) : toPartENat n = n := by
rw [toPartENat_apply_of_lt_aleph0 (nat_lt_aleph0 n), toNat_cast]
#align cardinal.to_part_enat_cast Cardinal.toPartENat_cast
@[simp]
theorem mk_toPartENat_of_infinite [h : Infinite α] : toPartENat (#α) = ⊤ :=
toPartENat_apply_of_aleph0_le (infinite_iff.1 h)
#align cardinal.mk_to_part_enat_of_infinite Cardinal.mk_toPartENat_of_infinite
@[simp]
theorem aleph0_toPartENat : toPartENat ℵ₀ = ⊤ :=
toPartENat_apply_of_aleph0_le le_rfl
#align cardinal.aleph_0_to_part_enat Cardinal.aleph0_toPartENat
theorem toPartENat_surjective : Surjective toPartENat := fun x =>
PartENat.casesOn x ⟨ℵ₀, toPartENat_apply_of_aleph0_le le_rfl⟩ fun n => ⟨n, toPartENat_cast n⟩
#align cardinal.to_part_enat_surjective Cardinal.toPartENat_surjective
theorem mk_toPartENat_eq_coe_card [Fintype α] : toPartENat (#α) = Fintype.card α := by simp
#align cardinal.mk_to_part_enat_eq_coe_card Cardinal.mk_toPartENat_eq_coe_card
theorem mk_int : (#ℤ) = ℵ₀ :=
mk_denumerable ℤ
#align cardinal.mk_int Cardinal.mk_int
theorem mk_pNat : (#ℕ+) = ℵ₀ :=
mk_denumerable ℕ+
#align cardinal.mk_pnat Cardinal.mk_pNat
/-- **König's theorem** -/
theorem sum_lt_prod {ι} (f g : ι → Cardinal) (H : ∀ i, f i < g i) : sum f < prod g :=
lt_of_not_ge fun ⟨F⟩ =>
by
have : Inhabited (∀ i : ι, (g i).out) :=
by
refine' ⟨fun i => Classical.choice <| mk_ne_zero_iff.1 _⟩
rw [mk_out]
exact (H i).ne_bot
let G := invFun F
have sG : Surjective G := invFun_surjective F.2
choose C hc using
show ∀ i, ∃ b, ∀ a, G ⟨i, a⟩ i ≠ b by
intro i
simp only [not_exists.symm, not_forall.symm]
refine' fun h => (H i).not_le _
rw [← mk_out (f i), ← mk_out (g i)]
exact ⟨Embedding.ofSurjective _ h⟩
exact
let ⟨⟨i, a⟩, h⟩ := sG C
hc i a (congr_fun h _)
#align cardinal.sum_lt_prod Cardinal.sum_lt_prod
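-- A sketch of a well-known consequence (not spelled out in the original file): Cantor's theorem
-- `cantor` can be recovered from König's theorem by indexing over `a.out` and taking
-- `f := fun _ => 1` and `g := fun _ => 2`, since `sum (fun _ : a.out => 1) = a * 1 = a`
-- (by `sum_const'`) and `prod (fun _ : a.out => 2) = 2 ^ a` (by `prod_const'`).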
-- Porting note : simp can prove this
-- @[simp]
theorem mk_empty : (#Empty) = 0 :=
mk_eq_zero _
#align cardinal.mk_empty Cardinal.mk_empty
-- Porting note : simp can prove this
-- @[simp]
theorem mk_pEmpty : (#PEmpty) = 0 :=
mk_eq_zero _
#align cardinal.mk_pempty Cardinal.mk_pEmpty
-- Porting note : simp can prove this
-- @[simp]
theorem mk_pUnit : (#PUnit) = 1 :=
mk_eq_one PUnit
#align cardinal.mk_punit Cardinal.mk_pUnit
theorem mk_unit : (#Unit) = 1 :=
mk_pUnit
#align cardinal.mk_unit Cardinal.mk_unit
-- Porting note : simp can prove this
-- @[simp]
theorem mk_singleton {α : Type u} (x : α) : (#({x} : Set α)) = 1 :=
mk_eq_one _
#align cardinal.mk_singleton Cardinal.mk_singleton
-- Porting note : simp can prove this
-- @[simp]
theorem mk_pLift_true : (#PLift True) = 1 :=
mk_eq_one _
#align cardinal.mk_plift_true Cardinal.mk_pLift_true
-- Porting note : simp can prove this
-- @[simp]
theorem mk_pLift_false : (#PLift False) = 0 :=
mk_eq_zero _
#align cardinal.mk_plift_false Cardinal.mk_pLift_false
@[simp]
theorem mk_vector (α : Type u) (n : ℕ) : (#Vector α n) = (#α) ^ℕ n :=
(mk_congr (Equiv.vectorEquivFin α n)).trans <| by simp
#align cardinal.mk_vector Cardinal.mk_vector
theorem mk_list_eq_sum_pow (α : Type u) : (#List α) = sum fun n : ℕ => (#α) ^ℕ n :=
calc
(#List α) = (#Σn, Vector α n) := mk_congr (Equiv.sigmaFiberEquiv List.length).symm
_ = sum fun n : ℕ => (#α) ^ℕ n := by simp
#align cardinal.mk_list_eq_sum_pow Cardinal.mk_list_eq_sum_pow
theorem mk_quot_le {α : Type u} {r : α → α → Prop} : (#Quot r) ≤ (#α) :=
mk_le_of_surjective Quot.exists_rep
#align cardinal.mk_quot_le Cardinal.mk_quot_le
theorem mk_quotient_le {α : Type u} {s : Setoid α} : (#Quotient s) ≤ (#α) :=
mk_quot_le
#align cardinal.mk_quotient_le Cardinal.mk_quotient_le
theorem mk_subtype_le_of_subset {α : Type u} {p q : α → Prop} (h : ∀ ⦃x⦄, p x → q x) :
(#Subtype p) ≤ (#Subtype q) :=
⟨Embedding.subtypeMap (Embedding.refl α) h⟩
#align cardinal.mk_subtype_le_of_subset Cardinal.mk_subtype_le_of_subset
-- Porting note : simp can prove this
-- @[simp]
theorem mk_emptyCollection (α : Type u) : (#(∅ : Set α)) = 0 :=
mk_eq_zero _
#align cardinal.mk_emptyc Cardinal.mk_emptyCollection
theorem mk_emptyCollection_iff {α : Type u} {s : Set α} : (#s) = 0 ↔ s = ∅ :=
by
constructor
· intro h
rw [mk_eq_zero_iff] at h
exact eq_empty_iff_forall_not_mem.2 fun x hx => h.elim' ⟨x, hx⟩
· rintro rfl
exact mk_emptyCollection _
#align cardinal.mk_emptyc_iff Cardinal.mk_emptyCollection_iff
@[simp]
theorem mk_univ {α : Type u} : (#@univ α) = (#α) :=
mk_congr (Equiv.Set.univ α)
#align cardinal.mk_univ Cardinal.mk_univ
theorem mk_image_le {α β : Type u} {f : α → β} {s : Set α} : (#f '' s) ≤ (#s) :=
mk_le_of_surjective surjective_onto_image
#align cardinal.mk_image_le Cardinal.mk_image_le
theorem mk_image_le_lift {α : Type u} {β : Type v} {f : α → β} {s : Set α} :
lift.{u} (#f '' s) ≤ lift.{v} (#s) :=
lift_mk_le.{v, u, 0}.mpr ⟨Embedding.ofSurjective _ surjective_onto_image⟩
#align cardinal.mk_image_le_lift Cardinal.mk_image_le_lift
theorem mk_range_le {α β : Type u} {f : α → β} : (#range f) ≤ (#α) :=
mk_le_of_surjective surjective_onto_range
#align cardinal.mk_range_le Cardinal.mk_range_le
theorem mk_range_le_lift {α : Type u} {β : Type v} {f : α → β} :
lift.{u} (#range f) ≤ lift.{v} (#α) :=
lift_mk_le.{v, u, 0}.mpr ⟨Embedding.ofSurjective _ surjective_onto_range⟩
#align cardinal.mk_range_le_lift Cardinal.mk_range_le_lift
theorem mk_range_eq (f : α → β) (h : Injective f) : (#range f) = (#α) :=
mk_congr (Equiv.ofInjective f h).symm
#align cardinal.mk_range_eq Cardinal.mk_range_eq
theorem mk_range_eq_of_injective {α : Type u} {β : Type v} {f : α → β} (hf : Injective f) :
lift.{u} (#range f) = lift.{v} (#α) :=
lift_mk_eq'.mpr ⟨(Equiv.ofInjective f hf).symm⟩
#align cardinal.mk_range_eq_of_injective Cardinal.mk_range_eq_of_injective
theorem mk_range_eq_lift {α : Type u} {β : Type v} {f : α → β} (hf : Injective f) :
lift.{max u w} (#range f) = lift.{max v w} (#α) :=
lift_mk_eq.{v,u,w}.mpr ⟨(Equiv.ofInjective f hf).symm⟩
#align cardinal.mk_range_eq_lift Cardinal.mk_range_eq_lift
theorem mk_image_eq {α β : Type u} {f : α → β} {s : Set α} (hf : Injective f) : (#f '' s) = (#s) :=
mk_congr (Equiv.Set.image f s hf).symm
#align cardinal.mk_image_eq Cardinal.mk_image_eq
theorem mk_unionᵢ_le_sum_mk {α ι : Type u} {f : ι → Set α} : (#⋃ i, f i) ≤ sum fun i => #f i :=
calc
(#⋃ i, f i) ≤ (#Σi, f i) := mk_le_of_surjective (Set.sigmaToUnionᵢ_surjective f)
_ = sum fun i => #f i := mk_sigma _
#align cardinal.mk_Union_le_sum_mk Cardinal.mk_unionᵢ_le_sum_mk
theorem mk_unionᵢ_eq_sum_mk {α ι : Type u} {f : ι → Set α}
(h : ∀ i j, i ≠ j → Disjoint (f i) (f j)) : (#⋃ i, f i) = sum fun i => #f i :=
calc
(#⋃ i, f i) = (#Σi, f i) := mk_congr (Set.unionEqSigmaOfDisjoint h)
_ = sum fun i => #f i := mk_sigma _
#align cardinal.mk_Union_eq_sum_mk Cardinal.mk_unionᵢ_eq_sum_mk
theorem mk_unionᵢ_le {α ι : Type u} (f : ι → Set α) : (#⋃ i, f i) ≤ (#ι) * ⨆ i, #f i :=
mk_unionᵢ_le_sum_mk.trans (sum_le_supᵢ _)
#align cardinal.mk_Union_le Cardinal.mk_unionᵢ_le
theorem mk_unionₛ_le {α : Type u} (A : Set (Set α)) : (#⋃₀ A) ≤ (#A) * ⨆ s : A, #s :=
by
rw [unionₛ_eq_unionᵢ]
apply mk_unionᵢ_le
#align cardinal.mk_sUnion_le Cardinal.mk_unionₛ_le
theorem mk_bunionᵢ_le {ι α : Type u} (A : ι → Set α) (s : Set ι) :
(#⋃ x ∈ s, A x) ≤ (#s) * ⨆ x : s, #A x.1 :=
by
rw [bunionᵢ_eq_unionᵢ]
apply mk_unionᵢ_le
#align cardinal.mk_bUnion_le Cardinal.mk_bunionᵢ_le
theorem finset_card_lt_aleph0 (s : Finset α) : (#(↑s : Set α)) < ℵ₀ :=
lt_aleph0_of_finite _
#align cardinal.finset_card_lt_aleph_0 Cardinal.finset_card_lt_aleph0
theorem mk_set_eq_nat_iff_finset {α} {s : Set α} {n : ℕ} :
(#s) = n ↔ ∃ t : Finset α, (t : Set α) = s ∧ t.card = n :=
by
constructor
· intro h
lift s to Finset α using lt_aleph0_iff_set_finite.1 (h.symm ▸ nat_lt_aleph0 n)
simpa using h
· rintro ⟨t, rfl, rfl⟩
exact mk_coe_finset
#align cardinal.mk_set_eq_nat_iff_finset Cardinal.mk_set_eq_nat_iff_finset
theorem mk_eq_nat_iff_finset {n : ℕ} :
(#α) = n ↔ ∃ t : Finset α, (t : Set α) = univ ∧ t.card = n :=
by rw [← mk_univ, mk_set_eq_nat_iff_finset]
#align cardinal.mk_eq_nat_iff_finset Cardinal.mk_eq_nat_iff_finset
theorem mk_eq_nat_iff_fintype {n : ℕ} : (#α) = n ↔ ∃ h : Fintype α, @Fintype.card α h = n :=
by
rw [mk_eq_nat_iff_finset]
constructor
· rintro ⟨t, ht, hn⟩
exact ⟨⟨t, eq_univ_iff_forall.1 ht⟩, hn⟩
· rintro ⟨⟨t, ht⟩, hn⟩
exact ⟨t, eq_univ_iff_forall.2 ht, hn⟩
#align cardinal.mk_eq_nat_iff_fintype Cardinal.mk_eq_nat_iff_fintype
theorem mk_union_add_mk_inter {α : Type u} {S T : Set α} :
(#(S ∪ T : Set α)) + (#(S ∩ T : Set α)) = (#S) + (#T) :=
Quot.sound ⟨Equiv.Set.unionSumInter S T⟩
#align cardinal.mk_union_add_mk_inter Cardinal.mk_union_add_mk_inter
/-- The cardinality of a union is at most the sum of the cardinalities
of the two sets. -/
theorem mk_union_le {α : Type u} (S T : Set α) : (#(S ∪ T : Set α)) ≤ (#S) + (#T) :=
@mk_union_add_mk_inter α S T ▸ self_le_add_right (#(S ∪ T : Set α)) (#(S ∩ T : Set α))
#align cardinal.mk_union_le Cardinal.mk_union_le
theorem mk_union_of_disjoint {α : Type u} {S T : Set α} (H : Disjoint S T) :
(#(S ∪ T : Set α)) = (#S) + (#T) :=
Quot.sound ⟨Equiv.Set.union H.le_bot⟩
#align cardinal.mk_union_of_disjoint Cardinal.mk_union_of_disjoint
theorem mk_insert {α : Type u} {s : Set α} {a : α} (h : a ∉ s) :
(#(insert a s : Set α)) = (#s) + 1 :=
by
rw [← union_singleton, mk_union_of_disjoint, mk_singleton]
simpa
#align cardinal.mk_insert Cardinal.mk_insert
theorem mk_sum_compl {α} (s : Set α) : (#s) + (#(sᶜ : Set α)) = (#α) :=
mk_congr (Equiv.Set.sumCompl s)
#align cardinal.mk_sum_compl Cardinal.mk_sum_compl
theorem mk_le_mk_of_subset {α} {s t : Set α} (h : s ⊆ t) : (#s) ≤ (#t) :=
⟨Set.embeddingOfSubset s t h⟩
#align cardinal.mk_le_mk_of_subset Cardinal.mk_le_mk_of_subset
theorem mk_subtype_mono {p q : α → Prop} (h : ∀ x, p x → q x) :
(#{ x // p x }) ≤ (#{ x // q x }) :=
⟨embeddingOfSubset _ _ h⟩
#align cardinal.mk_subtype_mono Cardinal.mk_subtype_mono
theorem le_mk_diff_add_mk (S T : Set α) : (#S) ≤ (#(S \ T : Set α)) + (#T) :=
(mk_le_mk_of_subset <| subset_diff_union _ _).trans <| mk_union_le _ _
#align cardinal.le_mk_diff_add_mk Cardinal.le_mk_diff_add_mk
theorem mk_diff_add_mk {S T : Set α} (h : T ⊆ S) : (#(S \ T : Set α)) + (#T) = (#S) := by
refine (mk_union_of_disjoint <| ?_).symm.trans <| by rw [diff_union_of_subset h]
-- Porting note: `apply` works here, `exact` does not
apply disjoint_sdiff_self_left
#align cardinal.mk_diff_add_mk Cardinal.mk_diff_add_mk
theorem mk_union_le_aleph0 {α} {P Q : Set α} :
(#(P ∪ Q : Set α)) ≤ ℵ₀ ↔ (#P) ≤ ℵ₀ ∧ (#Q) ≤ ℵ₀ := by
simp only [le_aleph0_iff_subtype_countable, mem_union, setOf_mem_eq, Set.union_def,
← countable_union]
#align cardinal.mk_union_le_aleph_0 Cardinal.mk_union_le_aleph0
theorem mk_image_eq_lift {α : Type u} {β : Type v} (f : α → β) (s : Set α) (h : Injective f) :
lift.{u} (#f '' s) = lift.{v} (#s) :=
lift_mk_eq.{v, u, 0}.mpr ⟨(Equiv.Set.image f s h).symm⟩
#align cardinal.mk_image_eq_lift Cardinal.mk_image_eq_lift
theorem mk_image_eq_of_injOn_lift {α : Type u} {β : Type v} (f : α → β) (s : Set α)
(h : InjOn f s) : lift.{u} (#f '' s) = lift.{v} (#s) :=
lift_mk_eq.{v, u, 0}.mpr ⟨(Equiv.Set.imageOfInjOn f s h).symm⟩
#align cardinal.mk_image_eq_of_inj_on_lift Cardinal.mk_image_eq_of_injOn_lift
theorem mk_image_eq_of_injOn {α β : Type u} (f : α → β) (s : Set α) (h : InjOn f s) :
(#f '' s) = (#s) :=
mk_congr (Equiv.Set.imageOfInjOn f s h).symm
#align cardinal.mk_image_eq_of_inj_on Cardinal.mk_image_eq_of_injOn
theorem mk_subtype_of_equiv {α β : Type u} (p : β → Prop) (e : α ≃ β) :
(#{ a : α // p (e a) }) = (#{ b : β // p b }) :=
mk_congr (Equiv.subtypeEquivOfSubtype e)
#align cardinal.mk_subtype_of_equiv Cardinal.mk_subtype_of_equiv
theorem mk_sep (s : Set α) (t : α → Prop) : (#({ x ∈ s | t x } : Set α)) = (#{ x : s | t x.1 }) :=
mk_congr (Equiv.Set.sep s t)
#align cardinal.mk_sep Cardinal.mk_sep
theorem mk_preimage_of_injective_lift {α : Type u} {β : Type v} (f : α → β) (s : Set β)
(h : Injective f) : lift.{v} (#f ⁻¹' s) ≤ lift.{u} (#s) := by
-- Porting note: Needed to insert `by exact` below
rw [lift_mk_le.{u, v, 0}]; use Subtype.coind (fun x => f x.1) fun x => by exact x.2
apply Subtype.coind_injective; exact h.comp Subtype.val_injective
#align cardinal.mk_preimage_of_injective_lift Cardinal.mk_preimage_of_injective_lift
theorem mk_preimage_of_subset_range_lift {α : Type u} {β : Type v} (f : α → β) (s : Set β)
(h : s ⊆ range f) : lift.{u} (#s) ≤ lift.{v} (#f ⁻¹' s) :=
by
rw [lift_mk_le.{v, u, 0}]
refine' ⟨⟨_, _⟩⟩
· rintro ⟨y, hy⟩
rcases Classical.subtype_of_exists (h hy) with ⟨x, rfl⟩
exact ⟨x, hy⟩
rintro ⟨y, hy⟩ ⟨y', hy'⟩; dsimp
rcases Classical.subtype_of_exists (h hy) with ⟨x, rfl⟩
rcases Classical.subtype_of_exists (h hy') with ⟨x', rfl⟩
simp; intro hxx'; rw [hxx']
#align cardinal.mk_preimage_of_subset_range_lift Cardinal.mk_preimage_of_subset_range_lift
theorem mk_preimage_of_injective_of_subset_range_lift {β : Type v} (f : α → β) (s : Set β)
(h : Injective f) (h2 : s ⊆ range f) : lift.{v} (#f ⁻¹' s) = lift.{u} (#s) :=
le_antisymm (mk_preimage_of_injective_lift f s h) (mk_preimage_of_subset_range_lift f s h2)
#align
cardinal.mk_preimage_of_injective_of_subset_range_lift
Cardinal.mk_preimage_of_injective_of_subset_range_lift
theorem mk_preimage_of_injective (f : α → β) (s : Set β) (h : Injective f) :
(#f ⁻¹' s) ≤ (#s) := by
rw [← lift_id (#↑(f ⁻¹' s)), ← lift_id (#↑(s))]
exact mk_preimage_of_injective_lift f s h
#align cardinal.mk_preimage_of_injective Cardinal.mk_preimage_of_injective
theorem mk_preimage_of_subset_range (f : α → β) (s : Set β) (h : s ⊆ range f) :
(#s) ≤ (#f ⁻¹' s) := by
rw [← lift_id (#↑(f ⁻¹' s)), ← lift_id (#↑(s))]
exact mk_preimage_of_subset_range_lift f s h
#align cardinal.mk_preimage_of_subset_range Cardinal.mk_preimage_of_subset_range
theorem mk_preimage_of_injective_of_subset_range (f : α → β) (s : Set β) (h : Injective f)
(h2 : s ⊆ range f) : (#f ⁻¹' s) = (#s) := by
convert mk_preimage_of_injective_of_subset_range_lift.{u, u} f s h h2 using 1 <;> rw [lift_id]
#align
cardinal.mk_preimage_of_injective_of_subset_range
Cardinal.mk_preimage_of_injective_of_subset_range
theorem mk_subset_ge_of_subset_image_lift {α : Type u} {β : Type v} (f : α → β) {s : Set α}
{t : Set β} (h : t ⊆ f '' s) : lift.{u} (#t) ≤ lift.{v} (#({ x ∈ s | f x ∈ t } : Set α)) :=
by
rw [image_eq_range] at h
convert mk_preimage_of_subset_range_lift _ _ h using 1
rw [mk_sep]
rfl
#align cardinal.mk_subset_ge_of_subset_image_lift Cardinal.mk_subset_ge_of_subset_image_lift
theorem mk_subset_ge_of_subset_image (f : α → β) {s : Set α} {t : Set β} (h : t ⊆ f '' s) :
(#t) ≤ (#({ x ∈ s | f x ∈ t } : Set α)) :=
by
rw [image_eq_range] at h
convert mk_preimage_of_subset_range _ _ h using 1
rw [mk_sep]
rfl
#align cardinal.mk_subset_ge_of_subset_image Cardinal.mk_subset_ge_of_subset_image
theorem le_mk_iff_exists_subset {c : Cardinal} {α : Type u} {s : Set α} :
c ≤ (#s) ↔ ∃ p : Set α, p ⊆ s ∧ (#p) = c :=
by
rw [le_mk_iff_exists_set, ← Subtype.exists_set_subtype]
apply exists_congr; intro t; rw [mk_image_eq]; apply Subtype.val_injective
#align cardinal.le_mk_iff_exists_subset Cardinal.le_mk_iff_exists_subset
theorem two_le_iff : (2 : Cardinal) ≤ (#α) ↔ ∃ x y : α, x ≠ y := by
rw [← Nat.cast_two, nat_succ, succ_le_iff, Nat.cast_one, one_lt_iff_nontrivial, nontrivial_iff]
#align cardinal.two_le_iff Cardinal.two_le_iff
theorem two_le_iff' (x : α) : (2 : Cardinal) ≤ (#α) ↔ ∃ y : α, y ≠ x := by
rw [two_le_iff, ← nontrivial_iff, nontrivial_iff_exists_ne x]
#align cardinal.two_le_iff' Cardinal.two_le_iff'
theorem mk_eq_two_iff : (#α) = 2 ↔ ∃ x y : α, x ≠ y ∧ ({x, y} : Set α) = univ :=
by
simp only [← @Nat.cast_two Cardinal, mk_eq_nat_iff_finset, Finset.card_eq_two]
constructor
· rintro ⟨t, ht, x, y, hne, rfl⟩
exact ⟨x, y, hne, by simpa using ht⟩
· rintro ⟨x, y, hne, h⟩
exact ⟨{x, y}, by simpa using h, x, y, hne, rfl⟩
#align cardinal.mk_eq_two_iff Cardinal.mk_eq_two_iff
theorem mk_eq_two_iff' (x : α) : (#α) = 2 ↔ ∃! y, y ≠ x :=
by
rw [mk_eq_two_iff]; constructor
· rintro ⟨a, b, hne, h⟩
simp only [eq_univ_iff_forall, mem_insert_iff, mem_singleton_iff] at h
rcases h x with (rfl | rfl)
exacts[⟨b, hne.symm, fun z => (h z).resolve_left⟩, ⟨a, hne, fun z => (h z).resolve_right⟩]
· rintro ⟨y, hne, hy⟩
exact ⟨x, y, hne.symm, eq_univ_of_forall fun z => or_iff_not_imp_left.2 (hy z)⟩
#align cardinal.mk_eq_two_iff' Cardinal.mk_eq_two_iff'
theorem exists_not_mem_of_length_lt {α : Type _} (l : List α) (h : ↑l.length < (#α)) :
∃ z : α, z ∉ l := by
contrapose! h
calc
(#α) = (#(Set.univ : Set α)) := mk_univ.symm
_ ≤ (#l.toFinset) := mk_le_mk_of_subset fun x _ => List.mem_toFinset.mpr (h x)
_ = l.toFinset.card := Cardinal.mk_coe_finset
_ ≤ l.length := Cardinal.natCast_le.mpr (List.toFinset_card_le l)
#align cardinal.exists_not_mem_of_length_lt Cardinal.exists_not_mem_of_length_lt
theorem three_le {α : Type _} (h : 3 ≤ (#α)) (x : α) (y : α) : ∃ z : α, z ≠ x ∧ z ≠ y :=
by
have : ↑(3 : ℕ) ≤ (#α); simpa using h
have : ↑(2 : ℕ) < (#α); rwa [← succ_le_iff, ← Cardinal.nat_succ]
have := exists_not_mem_of_length_lt [x, y] this
simpa [not_or] using this
#align cardinal.three_le Cardinal.three_le
/-- The function `a ^< b`, defined as the supremum of `a ^ c` for `c < b`. -/
def powerlt (a b : Cardinal.{u}) : Cardinal.{u} :=
⨆ c : Iio b, a^c
#align cardinal.powerlt Cardinal.powerlt
-- mathport name: «expr ^< »
@[inherit_doc]
infixl:80 " ^< " => powerlt
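-- For illustration: `powerlt_zero` below gives `a ^< 0 = 0`, and for `a ≠ 0`
-- `powerlt_succ` gives `a ^< succ b = a ^ b`.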
theorem le_powerlt {b c : Cardinal.{u}} (a) (h : c < b) : (a^c) ≤ a ^< b :=
by
apply @le_csupᵢ _ _ _ (fun y : Iio b => a^y) _ ⟨c, h⟩
rw [← image_eq_range]
exact bddAbove_image.{u, u} _ bddAbove_Iio
#align cardinal.le_powerlt Cardinal.le_powerlt
theorem powerlt_le {a b c : Cardinal.{u}} : a ^< b ≤ c ↔ ∀ x < b, (a^x) ≤ c :=
by
rw [powerlt, csupᵢ_le_iff']
· simp
· rw [← image_eq_range]
exact bddAbove_image.{u, u} _ bddAbove_Iio
#align cardinal.powerlt_le Cardinal.powerlt_le
theorem powerlt_le_powerlt_left {a b c : Cardinal} (h : b ≤ c) : a ^< b ≤ a ^< c :=
powerlt_le.2 fun _ hx => le_powerlt a <| hx.trans_le h
#align cardinal.powerlt_le_powerlt_left Cardinal.powerlt_le_powerlt_left
theorem powerlt_mono_left (a) : Monotone fun c => a ^< c := fun _ _ => powerlt_le_powerlt_left
#align cardinal.powerlt_mono_left Cardinal.powerlt_mono_left
theorem powerlt_succ {a b : Cardinal} (h : a ≠ 0) : a ^< succ b = (a^b) :=
(powerlt_le.2 fun _ h' => power_le_power_left h <| le_of_lt_succ h').antisymm <|
le_powerlt a (lt_succ b)
#align cardinal.powerlt_succ Cardinal.powerlt_succ
theorem powerlt_min {a b c : Cardinal} : a ^< min b c = min (a ^< b) (a ^< c) :=
(powerlt_mono_left a).map_min
#align cardinal.powerlt_min Cardinal.powerlt_min
theorem powerlt_max {a b c : Cardinal} : a ^< max b c = max (a ^< b) (a ^< c) :=
(powerlt_mono_left a).map_max
#align cardinal.powerlt_max Cardinal.powerlt_max
theorem zero_powerlt {a : Cardinal} (h : a ≠ 0) : 0 ^< a = 1 :=
by
apply (powerlt_le.2 fun c _ => zero_power_le _).antisymm
rw [← power_zero]
exact le_powerlt 0 (pos_iff_ne_zero.2 h)
#align cardinal.zero_powerlt Cardinal.zero_powerlt
@[simp]
theorem powerlt_zero {a : Cardinal} : a ^< 0 = 0 :=
-- Porting note: used to expect that `convert` would leave an instance argument as a goal
@Cardinal.supᵢ_of_empty _ _
(Subtype.isEmpty_of_false fun x => mem_Iio.not.mpr (Cardinal.zero_le x).not_lt)
#align cardinal.powerlt_zero Cardinal.powerlt_zero
end Cardinal
-- namespace Tactic
-- open Cardinal Positivity
-- Porting note: Meta code, do not port directly
-- /-- Extension for the `positivity` tactic: The cardinal power of a positive cardinal is
-- positive. -/
-- @[positivity]
-- unsafe def positivity_cardinal_pow : expr → tactic strictness
-- | q(@Pow.pow _ _ $(inst) $(a) $(b)) => do
-- let strictness_a ← core a
-- match strictness_a with
-- | positive p => positive <$> mk_app `` power_pos [b, p]
-- | _ => failed
-- |-- We already know that `0 ≤ x` for all `x : Cardinal`
-- _ =>
-- failed
-- #align tactic.positivity_cardinal_pow tactic.positivity_cardinal_pow
-- end Tactic
|
import data.real.basic
import order.interval
import order.lattice
import topology.algebra.order.compact
import topology.instances.real
import data.nat.factorial.basic
import data.int.interval
import tactic.slim_check
-- import number_theory.geometry_of_numbers
import measure_theory.measure.haar
import linear_algebra.finite_dimensional
import analysis.normed_space.pointwise
import measure_theory.group.pointwise
.
open set
open_locale nat
lemma mediant_btwn {K : Type*} [linear_ordered_field K] {a b c d : K} (hc : 0 < c) (hd : 0 < d)
(h : a / c < b / d) :
a / c < (a + b) / (c + d) ∧
(a + b) / (c + d) < b / d :=
begin
rw div_lt_div_iff hc hd at h,
rw [div_lt_div_iff (add_pos hc hd) hd, div_lt_div_iff hc (add_pos hc hd)],
split;
linarith,
end
/--
if b / d < a / c with ad - bc = 1
we have (a + b) / (c + d) in between
so (a + b) / (c + d) < a / c has same property ie
a * (c + d) - (a + b) * c = 1
and
b / d < (a + b) / (c + d) has same property ie
(a + b) * d - b * (c + d) = 1
-/
lemma mediant_unimod {a b c d : ℤ} (ha : a * d - b * c = 1) : a * (c + d) - (a + b) * c = 1 ∧
(a + b) * d - b * (c + d) = 1 :=
begin
ring_nf,
rw mul_comm,
simp [ha],
end
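-- A quick concrete instance of `mediant_unimod`, for illustration only: with
-- a = 1, b = 1, c = 2, d = 3 we have b / d = 1/3 < 1/2 = a / c and
-- a * d - b * c = 3 - 2 = 1, and the two unimodularity identities become:
example : (1 : ℤ) * (2 + 3) - (1 + 1) * 2 = 1 := dec_trivial -- a*(c+d) - (a+b)*c
example : ((1 : ℤ) + 1) * 3 - 1 * (2 + 3) = 1 := dec_trivial -- (a+b)*d - b*(c+d)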
lemma mediant_reduced {a b c d : ℤ} (ha : a * d - b * c = 1) :
(a + b).nat_abs.coprime (c + d).nat_abs :=
begin
-- have := calc b + d = (b + d) * 1 : by simp
-- ... = (b + d) * (a * d - b * c) : by rw ← ha
-- ... = b * ((a + b) * d - b * (c + d)) + d * (a * (c + d) - (a + b) * c) : by ring,
change (a + b).gcd (c + d) = 1,
have := dvd_sub ((int.gcd_dvd_right (a + b) (c+d)).mul_left a)
((int.gcd_dvd_left (a + b) (c+d)).mul_right c),
rw (mediant_unimod ha).1 at this,
rw ← int.nat_abs_dvd_iff_dvd at this,
simpa using this,
end
-- See Apostol Modular forms and Dirichlet series in number theory p. 98
lemma mediant_denom {a c : ℤ} {b d : ℕ} (hb : 0 < b) (ha : c * b - d * a = 1) : ((a + c : ℚ) / (b + d)).denom = b + d :=
begin
-- TODO this is a horrible proof, serious cleanup needed
rw mul_comm _ a at ha,
rw add_comm b d,
have := rat.denom_div_eq_of_coprime _ (mediant_reduced ha),
rw add_comm c a at this,
norm_cast at this,
rw ← this,
congr,
simp only [add_comm, nat.cast_add],
rw rat.mk_eq_div,
norm_cast,
norm_cast,
linarith,
end
lemma farey_set_finite (n : ℕ) (x y : ℝ) :
{r : ℝ | ∃ q : ℚ, (q.denom ≤ n ∧ x ≤ q ∧ ↑q ≤ y) ∧ ↑q = r}.finite :=
begin
have nfact_pos : 0 < (n! : ℝ),
{ norm_cast,
rw pos_iff_ne_zero,
exact (nat.factorial_ne_zero n), },
have : (((*) n! : ℝ → ℝ) '' _).finite,
swap,
apply this.of_finite_image,
apply function.injective.inj_on,
exact mul_right_injective₀ nfact_pos.ne.symm,
{ have : {t : ℝ | ∃ q : ℤ, (x * n! ≤ q ∧ ↑q ≤ y * n!) ∧ (↑q : ℝ) = t}.finite,
{ apply set.finite.image,
change {r : ℤ | _}.finite,
apply (finite_Icc ⌊x * ↑n!⌋ ⌈y * n!⌉).subset,
rintros y ⟨hyl, hyr⟩,
rw [mem_Icc],
split,
have := (int.floor_le _).trans hyl, -- TODO can this be a le_floor type iff lemma?
exact_mod_cast this,
have := hyr.trans (int.le_ceil _), -- TODO can this be a le_floor type iff lemma?
exact_mod_cast this, },
apply this.subset,
intros x hx,
rcases hx with ⟨hx_w, ⟨r, ⟨ha, hb, hc⟩, hd⟩, rfl⟩,
simp only [←hd, mem_set_of_eq, eq_self_iff_true] at *,
have key : (r * n!).denom = 1,
{ rw rat.mul_denom,
simp only [rat.coe_nat_denom, mul_one, rat.coe_nat_num],
suffices : (r.num * ↑n!).nat_abs.gcd r.denom = r.denom,
simp only [this, nat.div_self r.pos],
apply nat.gcd_eq_right,
simp only [int.nat_abs_mul, int.nat_abs_of_nat],
apply dvd_mul_of_dvd_right,
apply nat.dvd_factorial r.pos ha, },
use (r * n!).num,
have := rat.coe_int_num_of_denom_eq_one key, -- TODO a general version of this for any coe
apply_fun (coe : ℚ → ℝ) at this,
push_cast at this,
simp only [this],
repeat { split },
rwa mul_le_mul_right nfact_pos,
rwa mul_le_mul_right nfact_pos,
rw mul_comm, },
end
|
Definition teq_dbselect_Sn :=
fun (i n : nat) (eqT gtT : Term) =>
nat_ind (fun i0 : nat => dbselect i0 n eqT gtT (var n) = dbselect i0 n eqT gtT (var (S (dbprev n))))
(nat_ind
(fun n0 : nat => dbselect 0 n0 eqT gtT (var n0) = dbselect 0 n0 eqT gtT (var (S (dbprev n0))))
eq_refl
(fun (n0 : nat)
(_ : dbselect 0 n0 eqT gtT (var n0) = dbselect 0 n0 eqT gtT (var (S (dbprev n0)))) => eq_refl)
n)
(fun (i0 : nat) (IHi : dbselect i0 n eqT gtT (var n) = dbselect i0 n eqT gtT (var (S (dbprev n))))
=>
nat_ind
(fun n0 : nat =>
dbselect i0 n0 eqT gtT (var n0) = dbselect i0 n0 eqT gtT (var (S (dbprev n0))) ->
dbselect (S i0) n0 eqT gtT (var n0) = dbselect (S i0) n0 eqT gtT (var (S (dbprev n0))))
(fun _ : dbselect i0 0 eqT gtT (var 0) = dbselect i0 0 eqT gtT (var (S (dbprev 0))) => eq_refl)
(fun (n0 : nat)
(_ : dbselect i0 n0 eqT gtT (var n0) = dbselect i0 n0 eqT gtT (var (S (dbprev n0))) ->
dbselect (S i0) n0 eqT gtT (var n0) = dbselect (S i0) n0 eqT gtT (var (S (dbprev n0))))
(_ : dbselect i0 (S n0) eqT gtT (var (S n0)) =
dbselect i0 (S n0) eqT gtT (var (S (dbprev (S n0))))) => eq_refl) n IHi) i
: forall (i n : nat) (eqT gtT : Term),
dbselect i n eqT gtT (var n) = dbselect i n eqT gtT (var (S (dbprev n))).
|
-- Andreas, 2012-09-13
module RelevanceSubtyping where
-- this naturally type-checks:
one : {A B : Set} → (.A → B) → A → B
one f x = f x
-- this type-checks because of subtyping
one' : {A B : Set} → (.A → B) → A → B
one' f = f
|
axiom P : Prop → Prop
@[congr]
axiom P_congr (a b : Prop) (h : a ↔ b) : P a ↔ P b
theorem ex1 {p q : Prop} (h : p ↔ q) (h' : P q) : P p := by
simp [h]
assumption
#print ex1
theorem ex2 {p q : Prop} (h : p = q) (h' : P q) : P p := by
simp [h]
assumption
#print ex2
|
(* -*- mode: coq; coq-prog-args: ("-quick") -*- *)
Module M.
Definition foo := nonexistent.
End M.
|
/-
Copyright (c) 2019 Neil Strickland. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Neil Strickland
Sums of finite geometric series
-/
import Mathlib.PrePort
import Mathlib.Lean3Lib.init.default
import Mathlib.algebra.group_with_zero.power
import Mathlib.algebra.big_operators.order
import Mathlib.algebra.big_operators.ring
import Mathlib.algebra.big_operators.intervals
import Mathlib.PostPort
universes u u_1
namespace Mathlib
/-- Sum of the finite geometric series $\sum_{i=0}^{n-1} x^i$. -/
def geom_series {α : Type u} [semiring α] (x : α) (n : ℕ) : α :=
finset.sum (finset.range n) fun (i : ℕ) => x ^ i
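-- For example, `geom_series x 3 = x ^ 0 + x ^ 1 + x ^ 2 = 1 + x + x ^ 2`.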
theorem geom_series_def {α : Type u} [semiring α] (x : α) (n : ℕ) :
geom_series x n = finset.sum (finset.range n) fun (i : ℕ) => x ^ i :=
rfl
@[simp] theorem geom_series_zero {α : Type u} [semiring α] (x : α) : geom_series x 0 = 0 := rfl
@[simp] theorem geom_series_one {α : Type u} [semiring α] (x : α) : geom_series x 1 = 1 := sorry
@[simp] theorem op_geom_series {α : Type u} [ring α] (x : α) (n : ℕ) :
opposite.op (geom_series x n) = geom_series (opposite.op x) n :=
sorry
/-- Sum of the finite geometric series $\sum_{i=0}^{n-1} x^i y^{n-1-i}$. -/
def geom_series₂ {α : Type u} [semiring α] (x : α) (y : α) (n : ℕ) : α :=
finset.sum (finset.range n) fun (i : ℕ) => x ^ i * y ^ (n - 1 - i)
theorem geom_series₂_def {α : Type u} [semiring α] (x : α) (y : α) (n : ℕ) :
geom_series₂ x y n = finset.sum (finset.range n) fun (i : ℕ) => x ^ i * y ^ (n - 1 - i) :=
rfl
@[simp] theorem geom_series₂_zero {α : Type u} [semiring α] (x : α) (y : α) :
geom_series₂ x y 0 = 0 :=
rfl
@[simp] theorem geom_series₂_one {α : Type u} [semiring α] (x : α) (y : α) :
geom_series₂ x y 1 = 1 :=
sorry
@[simp] theorem geom_series₂_with_one {α : Type u} [semiring α] (x : α) (n : ℕ) :
geom_series₂ x 1 n = geom_series x n :=
sorry
/-- $x^n-y^n = (x-y) \sum x^ky^{n-1-k}$ reformulated without `-` signs. -/
protected theorem commute.geom_sum₂_mul_add {α : Type u} [semiring α] {x : α} {y : α}
(h : commute x y) (n : ℕ) : geom_series₂ (x + y) y n * x + y ^ n = (x + y) ^ n :=
sorry
theorem geom_series₂_self {α : Type u_1} [comm_ring α] (x : α) (n : ℕ) :
geom_series₂ x x n = ↑n * x ^ (n - 1) :=
sorry
/-- $x^n-y^n = (x-y) \sum x^ky^{n-1-k}$ reformulated without `-` signs. -/
theorem geom_sum₂_mul_add {α : Type u} [comm_semiring α] (x : α) (y : α) (n : ℕ) :
geom_series₂ (x + y) y n * x + y ^ n = (x + y) ^ n :=
commute.geom_sum₂_mul_add (commute.all x y) n
theorem geom_sum_mul_add {α : Type u} [semiring α] (x : α) (n : ℕ) :
geom_series (x + 1) n * x + 1 = (x + 1) ^ n :=
eq.mp
(Eq._oldrec (Eq.refl (geom_series₂ (x + 1) 1 n * x + 1 = (x + 1) ^ n))
(geom_series₂_with_one (x + 1) n))
(eq.mp (Eq._oldrec (Eq.refl (geom_series₂ (x + 1) 1 n * x + 1 ^ n = (x + 1) ^ n)) (one_pow n))
(commute.geom_sum₂_mul_add (commute.one_right x) n))
theorem geom_sum₂_mul_comm {α : Type u} [ring α] {x : α} {y : α} (h : commute x y) (n : ℕ) :
geom_series₂ x y n * (x - y) = x ^ n - y ^ n :=
sorry
theorem geom_sum₂_mul {α : Type u} [comm_ring α] (x : α) (y : α) (n : ℕ) :
geom_series₂ x y n * (x - y) = x ^ n - y ^ n :=
geom_sum₂_mul_comm (commute.all x y) n
theorem geom_sum_mul {α : Type u} [ring α] (x : α) (n : ℕ) :
geom_series x n * (x - 1) = x ^ n - 1 :=
eq.mp
(Eq._oldrec (Eq.refl (geom_series₂ x 1 n * (x - 1) = x ^ n - 1)) (geom_series₂_with_one x n))
(eq.mp (Eq._oldrec (Eq.refl (geom_series₂ x 1 n * (x - 1) = x ^ n - 1 ^ n)) (one_pow n))
(geom_sum₂_mul_comm (commute.one_right x) n))
theorem mul_geom_sum {α : Type u} [ring α] (x : α) (n : ℕ) :
(x - 1) * geom_series x n = x ^ n - 1 :=
sorry
theorem geom_sum_mul_neg {α : Type u} [ring α] (x : α) (n : ℕ) :
geom_series x n * (1 - x) = 1 - x ^ n :=
sorry
theorem mul_neg_geom_sum {α : Type u} [ring α] (x : α) (n : ℕ) :
(1 - x) * geom_series x n = 1 - x ^ n :=
sorry
theorem geom_sum {α : Type u} [division_ring α] {x : α} (h : x ≠ 1) (n : ℕ) :
geom_series x n = (x ^ n - 1) / (x - 1) :=
sorry
theorem geom_sum_Ico_mul {α : Type u} [ring α] (x : α) {m : ℕ} {n : ℕ} (hmn : m ≤ n) :
(finset.sum (finset.Ico m n) fun (i : ℕ) => x ^ i) * (x - 1) = x ^ n - x ^ m :=
sorry
theorem geom_sum_Ico_mul_neg {α : Type u} [ring α] (x : α) {m : ℕ} {n : ℕ} (hmn : m ≤ n) :
(finset.sum (finset.Ico m n) fun (i : ℕ) => x ^ i) * (1 - x) = x ^ m - x ^ n :=
sorry
theorem geom_sum_Ico {α : Type u} [division_ring α] {x : α} (hx : x ≠ 1) {m : ℕ} {n : ℕ}
(hmn : m ≤ n) :
(finset.sum (finset.Ico m n) fun (i : ℕ) => x ^ i) = (x ^ n - x ^ m) / (x - 1) :=
sorry
theorem geom_sum_inv {α : Type u} [division_ring α] {x : α} (hx1 : x ≠ 1) (hx0 : x ≠ 0) (n : ℕ) :
geom_series (x⁻¹) n = x - 1⁻¹ * (x - x⁻¹ ^ n * x) :=
sorry
theorem ring_hom.map_geom_series {α : Type u} {β : Type u_1} [semiring α] [semiring β] (x : α)
(n : ℕ) (f : α →+* β) : coe_fn f (geom_series x n) = geom_series (coe_fn f x) n :=
sorry
theorem ring_hom.map_geom_series₂ {α : Type u} {β : Type u_1} [semiring α] [semiring β] (x : α)
(y : α) (n : ℕ) (f : α →+* β) :
coe_fn f (geom_series₂ x y n) = geom_series₂ (coe_fn f x) (coe_fn f y) n :=
sorry
end Mathlib |
import polycodable_init
@[user_attribute]
meta def polyfun : user_attribute :=
{ name := `polyfun,
descr := "lemmas usable to prove polynomial time" }
attribute [polyfun]
polytime_fun.id
polytime_fun.const
@[polyfun]
lemma polytime_fun.id' {α} [ptree.pencodable α] : polytime_fun (λ x : α, x) := polytime_fun.id
namespace tactic
meta def polytime_fun_lemmas : list name :=
[``polytime_fun, ``polytime_fun₂, ``polytime_fun₃]
meta def polytime_fun_comp_lemmas : list name :=
[``polytime_fun.comp, ``polytime_fun.comp₂, ``polytime_fun.comp₃]
meta def unfold_polytime (md : transparency) : tactic unit :=
do dunfold_target (``function.uncurry :: polytime_fun_lemmas.tail),
try dsimp_target
-- In order to help resolve polytime_fun of propositions (which are converted to bool's)
meta def simp_to_bool : tactic unit :=
`[simp only [bool.to_bool_not, bool.to_bool_and, bool.to_bool_or, bool.to_bool_coe]]
-- Please help, idk how to write tactics
meta def is_polycodable (e : expr) : tactic bool :=
(do
e' ← infer_type e,
cache ← mk_instance_cache e',
(cache', s) ← instance_cache.get cache ``ptree.pencodable,
return tt) <|> (return ff)
meta def get_num_params : tactic ℕ :=
do `(polytime_fun %%s) ← target,
guard s.is_lambda,
mv ← mk_meta_var s.binding_domain,
e ← instantiate_mvars (s.instantiate_lambdas [mv]),
f ← mfilter is_polycodable e.get_app_args,
return f.length
meta def apply_polyfun.comp (md : transparency) : tactic ℕ :=
do fail_if_success `[exact polytime_fun.const _],
fail_if_success (to_expr ``(polytime_fun.pair) >>= λ e, apply e {md := md}),
old_goal ← target,
n ← get_num_params, guard (0 < n ∧ n ≤ polytime_fun_lemmas.length),
s ← resolve_name (polytime_fun_comp_lemmas.inth (n-1)),
s' ← to_expr s,
apply s' {md := md},
try `[ any_goals { apply_instance, } ], -- why is this necessary??
(fail_if_success (unfold_polytime md >> target >>= λ t, unify t old_goal md)) <|>
focus1 (apply_rules [] [``polyfun] 50 { md := md } >> done),
return (n-1)
meta def polyfun_tactics (md : transparency := reducible) : list (tactic string) :=
[
apply_rules [] [``polyfun] 50 { md := md }
>> pure "apply_rules with polyfun",
unfold_polytime md >> pure "dunfold_target polytime_fun_lemmas.tail",
simp_to_bool >> pure "simp only [bool.to_bool_not, bool.to_bool_and, bool.to_bool_or]",
apply_polyfun.comp md >>= λ n, pure ("apply " ++ (to_string $ polytime_fun_comp_lemmas.inth (n-1)))
]
namespace interactive
setup_tactic_parser
meta def polyfun
(bang : parse $ optional (tk "!")) (trace : parse $ optional (tk "?")) (cfg : tidy.cfg := {}) :
tactic unit :=
let md := if bang.is_some then semireducible else reducible,
polyfun_core := tactic.tidy { tactics := polyfun_tactics md, ..cfg },
trace_fn := if trace.is_some then show_term else id in
trace_fn polyfun_core
end interactive
end tactic
section
attribute [polyfun]
polytime_fun.fst
polytime_fun.snd
polytime_fun.pair
polytime_fun.node
polytime_fun.polytime_code
polytime_fun.ptree_left
polytime_fun.ptree_right
polytime_fun.encode
polytime_fun.decode'
-- section
-- parameters {α β γ δ : Type*} [polycodable α] [polycodable β] [polycodable γ] [polycodable δ]
-- example {f : α → β} (hf : polytime_fun f) : polytime_fun f := by polyfun
-- example {f : α → β} : polytime_fun f := by { try { polyfun }, sorry, }
-- example {f : α → β → γ} : polytime_fun₂ f := by { polyfun, }
-- @[irreducible]
-- def f : α → β → γ := sorry
-- lemma f_polyfun : polytime_fun₂ f := sorry
-- local attribute [polyfun] f_polyfun
-- example : polytime_fun₂ f := by { polyfun, }
-- example : polytime_fun (λ x : α × β, f x.1 x.2) := by { polyfun, }
-- end
end
|
(*-------------------------------------------*
| DFP package |
| June 2005 |
| December 2005 (modified) |
| |
| DFP on CSP-Prover ver.3.0 |
| September 2006 (modified) |
| April 2007 (modified) |
| |
| Yoshinao Isobe (AIST JAPAN) |
*-------------------------------------------*)
theory DFP_Proof_Rule1
imports DFP_Block
begin
(* The following simplification rules are deleted in this theory file *)
(* because they unexpectedly rewrite UnionT and InterT. *)
(* Union (B ` A) = (UN x:A. B x) *)
(* Inter (B ` A) = (INT x:A. B x) *)
(*
declare Union_image_eq [simp del]
declare Inter_image_eq [simp del]
*)
declare Sup_image_eq [simp del]
declare Inf_image_eq [simp del]
(* The following simplification rules are deleted in this theory file *)
(* because they unexpectedly rewrite (notick | t = []t) *)
(* *)
(* disj_not1: (~ P | Q) = (P --> Q) *)
declare disj_not1 [simp del]
(*****************************************************************
1.
2.
3.
4.
*****************************************************************)
(*--------------------------------------------------*
| Theorem 1 [Roscoe_Dathi_1987 P.8] |
*--------------------------------------------------*)
theorem Theorem1_Roscoe_Dathi_1987:
"[| (I,FXf) isFailureOf (I,PXf) ; I ~= {} ; finite I ;
triple_disjoint (I,FXf) ; BusyNetwork (I,FXf) ;
EX f::('i => 'a failure => ('pi::order)).
ALL t Yf. (t,Yf) isStateOf (I,FXf) -->
(ALL i j. (I,FXf) >> i --[(t,Yf), (VocabularyOf (I,FXf))]-->o j
--> f j (t rest-tr (snd (FXf j)), Yf j)
< f i (t rest-tr (snd (FXf i)), Yf i)) |]
==> DeadlockFreeNetwork (I,PXf)"
apply (simp add: DeadlockFree_notDeadlockState)
apply (case_tac "ALL t Yf. ~ (t, Yf) isDeadlockStateOf (I, FXf)", simp)
(* by contradiction *)
apply (simp)
apply (elim conjE exE)
apply (subgoal_tac "(t,Yf) isStateOf (I,FXf)")
apply (simp add: Lemma1_Roscoe_Dathi_1987)
apply (subgoal_tac "EX j:I. ALL i:I. ~(f i (t rest-tr (snd (FXf i)), Yf i)
< f j (t rest-tr (snd (FXf j)), Yf j))")
apply (elim bexE)
apply (drule_tac x="j" in bspec, simp)
apply (simp add: isBlockedIn_def)
apply (elim conjE exE)
apply (drule_tac x="x" in spec)
apply (simp)
apply (drule_tac x="t" in spec)
apply (drule_tac x="Yf" in spec)
apply (simp)
apply (drule_tac x="j" in spec)
apply (drule_tac x="x" in spec)
apply (simp)
apply (drule_tac x="x" in bspec, simp add: isRequestOf_def)
apply (simp)
apply (simp add: nonempty_finite_set_exists_min_fun)
apply (simp add: isDeadlockStateOf_def)
done
(*--------------------------------------------------*
| Lemma 2 [Roscoe_Dathi_1987 P.9] |
*--------------------------------------------------*)
lemma Lemma2_Roscoe_Dathi_1987:
"[| I ~= {} ; finite I ;
triple_disjoint (I,FXf) ; BusyNetwork (I,FXf) ;
EX f::('i => 'a failure => ('pi::order)).
ALL i:I. ALL j:I. i ~= j -->
(ALL t Yf.
(t,Yf) isStateOf ({i,j},FXf) &
({i,j},FXf) >> i --[(t,Yf), (VocabularyOf (I,FXf))]-->o j
--> f j (t rest-tr (snd (FXf j)), Yf j)
< f i (t rest-tr (snd (FXf i)), Yf i)) |]
==> EX f::('i => 'a failure => ('pi::order)).
ALL t Yf. (t,Yf) isStateOf (I,FXf) -->
(ALL i j. (I,FXf) >> i --[(t,Yf), (VocabularyOf (I,FXf))]-->o j
--> f j (t rest-tr (snd (FXf j)), Yf j)
< f i (t rest-tr (snd (FXf i)), Yf i))"
apply (elim exE)
apply (rule_tac x="f" in exI)
apply (intro allI impI)
apply (drule_tac x="i" in bspec)
apply (simp add: in_index_I)
apply (drule_tac x="j" in bspec)
apply (simp add: in_index_I)
apply (simp add: in_index_I)
apply (drule_tac x="t rest-tr ((snd (FXf i) Un snd (FXf j)))" in spec)
apply (drule_tac x="Yf" in spec)
apply (drule mp)
apply (rule conjI)
apply (rule isStateOf_subsetI)
apply (simp)
apply (simp add: in_index_I)
apply (blast)
apply (rule isUngrantedRequestOfwrt_subsetI)
apply (simp_all add: in_index_I)
apply (simp add: in_index_I)
apply (fast)
apply (fast)
apply (subgoal_tac
"t rest-tr ((snd (FXf i) Un snd (FXf j))) rest-tr (snd (FXf i))
= t rest-tr (snd (FXf i))")
apply (subgoal_tac
"t rest-tr ((snd (FXf i) Un snd (FXf j))) rest-tr (snd (FXf j))
= t rest-tr (snd (FXf j))")
apply (simp)
apply (rule rest_tr_of_rest_tr_subset)
apply (force)
apply (rule rest_tr_of_rest_tr_subset)
apply (force)
done
(*--------------------------------------------------*
| Rule 1 [Roscoe_Dathi_1987 P.9] |
*--------------------------------------------------*)
lemma Rule1_Roscoe_Dathi_1987:
"[| (I,FXf) isFailureOf (I,PXf) ; I ~= {} ; finite I ;
triple_disjoint (I,FXf) ; BusyNetwork (I,FXf) ;
EX f::('i => 'a failure => ('pi::order)).
ALL i:I. ALL j:I. i ~= j -->
(ALL t Yf.
({i,j},FXf) >> i --[(t,Yf), (VocabularyOf (I,FXf))]-->o j
--> f j (t rest-tr (snd (FXf j)), Yf j)
< f i (t rest-tr (snd (FXf i)), Yf i)) |]
==> DeadlockFreeNetwork (I,PXf)"
apply (rule Theorem1_Roscoe_Dathi_1987)
apply (simp_all)
apply (rule Lemma2_Roscoe_Dathi_1987)
apply (simp_all)
apply (simp add: isUngrantedRequestOfwrt_def)
apply (simp add: isUngrantedRequestOf_def)
apply (simp add: isRequestOf_def)
done
lemma Rule1_Roscoe_Dathi_1987_I:
"[| (I,FXf) isFailureOf V ; I ~= {} ; finite I ;
triple_disjoint (I,FXf) ; BusyNetwork (I,FXf) ;
EX f::('i => 'a failure => ('pi::order)).
ALL i:I. ALL j:I. i ~= j -->
(ALL t Yf.
({i,j},FXf) >> i --[(t,Yf), (VocabularyOf (I,FXf))]-->o j
--> f j (t rest-tr (snd (FXf j)), Yf j)
< f i (t rest-tr (snd (FXf i)), Yf i)) |]
==> DeadlockFreeNetwork V"
apply (insert decompo_V[of V])
apply (rotate_tac -1, erule exE)
apply (rotate_tac -1, erule exE)
apply (subgoal_tac "Ia = I")
apply (simp add: Rule1_Roscoe_Dathi_1987)
apply (simp add: isFailureOf_def)
done
(*** looks test ***)
theorem Rule1_Roscoe_Dathi_1987_simp:
"[| VF = {(F i, X i) | i:I}Fnet ; V = {(P i, X i) | i:I}net ;
VF isFailureOf V ; I ~= {} ; finite I ;
triple_disjoint VF ; BusyNetwork VF ;
EX f::('i => 'a failure => ('pi::order)).
ALL i:I. ALL j:I. i ~= j -->
(ALL t Y.
{(F i, X i) | i:{i,j}}Fnet >>
i --[(t,Y), (VocabularyOf VF)]-->o j
--> f j (t rest-tr (X j), Y j)
< f i (t rest-tr (X i), Y i)) |]
==> DeadlockFreeNetwork V"
apply (simp)
apply (rule Rule1_Roscoe_Dathi_1987_I)
apply (simp_all)
apply (elim conjE exE)
apply (rule_tac x="f" in exI)
apply (auto)
done
(****************** to add it again ******************)
declare disj_not1 [simp]
(*
declare Union_image_eq [simp]
declare Inter_image_eq [simp]
*)
declare Sup_image_eq [simp]
declare Inf_image_eq [simp]
end
|
[STATEMENT]
lemma sum_upto_moebius_mu_integral': "x > 1 \<Longrightarrow> (A has_integral x * A x - M x) {1..x}"
and sum_upto_moebius_mu_integrable': "a \<ge> 1 \<Longrightarrow> A integrable_on {a..b}"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}) &&& (1 \<le> a \<Longrightarrow> A integrable_on {a..b})
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. 1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}
2. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
{
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. 1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}
2. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
fix a b :: real
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. 1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}
2. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
assume ab: "a \<ge> 1" "a < b"
[PROOF STATE]
proof (state)
this:
1 \<le> a
a < b
goal (2 subgoals):
1. 1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}
2. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
have "((\<lambda>t. A t * 1) has_integral A b * b - A a * a -
(\<Sum>n\<in>real -` {a<..b}. moebius_mu n / n * n)) {a..b}"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. ((\<lambda>t. A t * 1) has_integral A b * b - A a * a - (\<Sum>n\<in>real -` {a<..b}. moebius_mu n / real n * real n)) {a..b}
[PROOF STEP]
unfolding M_def A_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. ((\<lambda>t. sum_upto (\<lambda>n. moebius_mu n / real n) t * 1) has_integral sum_upto (\<lambda>n. moebius_mu n / real n) b * b - sum_upto (\<lambda>n. moebius_mu n / real n) a * a - (\<Sum>n\<in>real -` {a<..b}. moebius_mu n / real n * real n)) {a..b}
[PROOF STEP]
using ab
[PROOF STATE]
proof (prove)
using this:
1 \<le> a
a < b
goal (1 subgoal):
1. ((\<lambda>t. sum_upto (\<lambda>n. moebius_mu n / real n) t * 1) has_integral sum_upto (\<lambda>n. moebius_mu n / real n) b * b - sum_upto (\<lambda>n. moebius_mu n / real n) a * a - (\<Sum>n\<in>real -` {a<..b}. moebius_mu n / real n * real n)) {a..b}
[PROOF STEP]
by (intro partial_summation_strong [where X = "{}"])
(auto intro!: derivative_eq_intros continuous_intros
simp flip: has_real_derivative_iff_has_vector_derivative)
[PROOF STATE]
proof (state)
this:
((\<lambda>t. A t * 1) has_integral A b * b - A a * a - (\<Sum>n\<in>real -` {a<..b}. moebius_mu n / real n * real n)) {a..b}
goal (2 subgoals):
1. 1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}
2. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
}
[PROOF STATE]
proof (state)
this:
\<lbrakk>1 \<le> ?aa2; ?aa2 < ?ba2\<rbrakk> \<Longrightarrow> ((\<lambda>t. A t * 1) has_integral A ?ba2 * ?ba2 - A ?aa2 * ?aa2 - (\<Sum>n\<in>real -` {?aa2<..?ba2}. moebius_mu n / real n * real n)) {?aa2..?ba2}
goal (2 subgoals):
1. 1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}
2. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
note * = this
[PROOF STATE]
proof (state)
this:
\<lbrakk>1 \<le> ?aa2; ?aa2 < ?ba2\<rbrakk> \<Longrightarrow> ((\<lambda>t. A t * 1) has_integral A ?ba2 * ?ba2 - A ?aa2 * ?aa2 - (\<Sum>n\<in>real -` {?aa2<..?ba2}. moebius_mu n / real n * real n)) {?aa2..?ba2}
goal (2 subgoals):
1. 1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}
2. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
{
[PROOF STATE]
proof (state)
this:
\<lbrakk>1 \<le> ?aa2; ?aa2 < ?ba2\<rbrakk> \<Longrightarrow> ((\<lambda>t. A t * 1) has_integral A ?ba2 * ?ba2 - A ?aa2 * ?aa2 - (\<Sum>n\<in>real -` {?aa2<..?ba2}. moebius_mu n / real n * real n)) {?aa2..?ba2}
goal (2 subgoals):
1. 1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}
2. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
fix x :: real
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. 1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}
2. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
assume x: "x > 1"
[PROOF STATE]
proof (state)
this:
1 < x
goal (2 subgoals):
1. 1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}
2. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
have [simp]: "A 1 = 1"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. A 1 = 1
[PROOF STEP]
by (simp add: A_def)
[PROOF STATE]
proof (state)
this:
A 1 = 1
goal (2 subgoals):
1. 1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}
2. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
have "(\<Sum>n\<in>real -` {1<..x}. moebius_mu n / n * n) =
(\<Sum>n\<in>insert 1 (real -` {1<..x}). moebius_mu n / n * n) - 1"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<Sum>n\<in>real -` {1<..x}. moebius_mu n / real n * real n) = (\<Sum>n\<in>insert 1 (real -` {1<..x}). moebius_mu n / real n * real n) - 1
[PROOF STEP]
using finite_vimage_real_of_nat_greaterThanAtMost[of 1 x]
[PROOF STATE]
proof (prove)
using this:
finite (real -` {1<..x})
goal (1 subgoal):
1. (\<Sum>n\<in>real -` {1<..x}. moebius_mu n / real n * real n) = (\<Sum>n\<in>insert 1 (real -` {1<..x}). moebius_mu n / real n * real n) - 1
[PROOF STEP]
by (subst sum.insert) auto
[PROOF STATE]
proof (state)
this:
(\<Sum>n\<in>real -` {1<..x}. moebius_mu n / real n * real n) = (\<Sum>n\<in>insert 1 (real -` {1<..x}). moebius_mu n / real n * real n) - 1
goal (2 subgoals):
1. 1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}
2. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
(\<Sum>n\<in>real -` {1<..x}. moebius_mu n / real n * real n) = (\<Sum>n\<in>insert 1 (real -` {1<..x}). moebius_mu n / real n * real n) - 1
goal (2 subgoals):
1. 1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}
2. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
have "insert 1 (real -` {1<..x}) = {n. n > 0 \<and> real n \<le> x}"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. insert 1 (real -` {1<..x}) = {n. 0 < n \<and> real n \<le> x}
[PROOF STEP]
using x
[PROOF STATE]
proof (prove)
using this:
1 < x
goal (1 subgoal):
1. insert 1 (real -` {1<..x}) = {n. 0 < n \<and> real n \<le> x}
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
insert 1 (real -` {1<..x}) = {n. 0 < n \<and> real n \<le> x}
goal (2 subgoals):
1. 1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}
2. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
insert 1 (real -` {1<..x}) = {n. 0 < n \<and> real n \<le> x}
goal (2 subgoals):
1. 1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}
2. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
have "(\<Sum>n | 0 < n \<and> real n \<le> x. moebius_mu n / real n * real n) = M x"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<Sum>n | 0 < n \<and> real n \<le> x. moebius_mu n / real n * real n) = M x
[PROOF STEP]
unfolding M_def sum_upto_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<Sum>n | 0 < n \<and> real n \<le> x. moebius_mu n / real n * real n) = sum moebius_mu {i. 0 < i \<and> real i \<le> x}
[PROOF STEP]
by (intro sum.cong) auto
[PROOF STATE]
proof (state)
this:
(\<Sum>n | 0 < n \<and> real n \<le> x. moebius_mu n / real n * real n) = M x
goal (2 subgoals):
1. 1 < x \<Longrightarrow> (A has_integral x * A x - M x) {1..x}
2. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
finally
[PROOF STATE]
proof (chain)
picking this:
(\<Sum>n\<in>real -` {1<..x}. moebius_mu n / real n * real n) = M x - 1
[PROOF STEP]
show "(A has_integral x * A x - M x) {1..x}"
[PROOF STATE]
proof (prove)
using this:
(\<Sum>n\<in>real -` {1<..x}. moebius_mu n / real n * real n) = M x - 1
goal (1 subgoal):
1. (A has_integral x * A x - M x) {1..x}
[PROOF STEP]
using *[of 1 x] x
[PROOF STATE]
proof (prove)
using this:
(\<Sum>n\<in>real -` {1<..x}. moebius_mu n / real n * real n) = M x - 1
\<lbrakk>1 \<le> 1; 1 < x\<rbrakk> \<Longrightarrow> ((\<lambda>t. A t * 1) has_integral A x * x - A 1 * 1 - (\<Sum>n\<in>real -` {1<..x}. moebius_mu n / real n * real n)) {1..x}
1 < x
goal (1 subgoal):
1. (A has_integral x * A x - M x) {1..x}
[PROOF STEP]
by (simp add: mult_ac)
[PROOF STATE]
proof (state)
this:
(A has_integral x * A x - M x) {1..x}
goal (1 subgoal):
1. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
}
[PROOF STATE]
proof (state)
this:
1 < ?xa2 \<Longrightarrow> (A has_integral ?xa2 * A ?xa2 - M ?xa2) {1..?xa2}
goal (1 subgoal):
1. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
{
[PROOF STATE]
proof (state)
this:
1 < ?xa2 \<Longrightarrow> (A has_integral ?xa2 * A ?xa2 - M ?xa2) {1..?xa2}
goal (1 subgoal):
1. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
fix a b :: real
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
assume ab: "a \<ge> 1"
[PROOF STATE]
proof (state)
this:
1 \<le> a
goal (1 subgoal):
1. 1 \<le> a \<Longrightarrow> A integrable_on {a..b}
[PROOF STEP]
show "A integrable_on {a..b}"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. A integrable_on {a..b}
[PROOF STEP]
using *[of a b] ab
[PROOF STATE]
proof (prove)
using this:
\<lbrakk>1 \<le> a; a < b\<rbrakk> \<Longrightarrow> ((\<lambda>t. A t * 1) has_integral A b * b - A a * a - (\<Sum>n\<in>real -` {a<..b}. moebius_mu n / real n * real n)) {a..b}
1 \<le> a
goal (1 subgoal):
1. A integrable_on {a..b}
[PROOF STEP]
by (cases a b rule: linorder_cases) (auto intro: integrable_negligible)
[PROOF STATE]
proof (state)
this:
A integrable_on {a..b}
goal:
No subgoals!
[PROOF STEP]
}
[PROOF STATE]
proof (state)
this:
1 \<le> ?aa2 \<Longrightarrow> A integrable_on {?aa2..?ba2}
goal:
No subgoals!
[PROOF STEP]
qed |
[STATEMENT]
lemma wprepare_loop_goon_on_rightbmost_Bk_False[simp]: "\<lbrakk>lm \<noteq> []; wprepare_loop_start_on_rightmost m lm (b, Bk # a # lista)\<rbrakk>
\<Longrightarrow> wprepare_loop_goon_on_rightmost m lm (Bk # b, a # lista)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>lm \<noteq> []; wprepare_loop_start_on_rightmost m lm (b, Bk # a # lista)\<rbrakk> \<Longrightarrow> wprepare_loop_goon_on_rightmost m lm (Bk # b, a # lista)
[PROOF STEP]
apply(simp only: wprepare_loop_start_on_rightmost.simps
wprepare_loop_goon_on_rightmost.simps, auto simp: tape_of_nl_rev)
[PROOF STATE]
proof (prove)
goal (2 subgoals):
1. \<And>rn. \<lbrakk>lm \<noteq> []; b = <rev lm> @ Bk # Bk # Oc \<up> m @ [Oc]; Bk # a # lista = Bk \<up> rn\<rbrakk> \<Longrightarrow> Oc \<up> m @ [Oc] = Oc # Oc \<up> m
2. \<And>rn. \<lbrakk>lm \<noteq> []; b = <rev lm> @ Bk # Bk # Oc \<up> m @ [Oc]; Bk # a # lista = Bk \<up> rn\<rbrakk> \<Longrightarrow> \<exists>rn. a # lista = Bk \<up> rn
[PROOF STEP]
apply(simp add: replicate_Suc[THEN sym] exp_ind tape_of_nl_rev del: replicate_Suc)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<And>rn. \<lbrakk>lm \<noteq> []; b = <rev lm> @ Bk # Bk # Oc \<up> m @ [Oc]; Bk # a # lista = Bk \<up> rn\<rbrakk> \<Longrightarrow> \<exists>rn. a # lista = Bk \<up> rn
[PROOF STEP]
by (meson Cons_replicate_eq) |
[STATEMENT]
lemma (in ccpo) Sup_image_mono:
assumes ccpo: "class.ccpo luba orda lessa"
and mono: "monotone orda (\<le>) f"
and chain: "Complete_Partial_Order.chain orda A"
and "A \<noteq> {}"
shows "Sup (f ` A) \<le> (f (luba A))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<Squnion> (f ` A) \<le> f (luba A)
[PROOF STEP]
proof(rule ccpo_Sup_least)
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. Complete_Partial_Order.chain (\<le>) (f ` A)
2. \<And>x. x \<in> f ` A \<Longrightarrow> x \<le> f (luba A)
[PROOF STEP]
from chain
[PROOF STATE]
proof (chain)
picking this:
Complete_Partial_Order.chain orda A
[PROOF STEP]
show "Complete_Partial_Order.chain (\<le>) (f ` A)"
[PROOF STATE]
proof (prove)
using this:
Complete_Partial_Order.chain orda A
goal (1 subgoal):
1. Complete_Partial_Order.chain (\<le>) (f ` A)
[PROOF STEP]
by(rule chain_imageI)(rule monotoneD[OF mono])
[PROOF STATE]
proof (state)
this:
Complete_Partial_Order.chain (\<le>) (f ` A)
goal (1 subgoal):
1. \<And>x. x \<in> f ` A \<Longrightarrow> x \<le> f (luba A)
[PROOF STEP]
fix x
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>x. x \<in> f ` A \<Longrightarrow> x \<le> f (luba A)
[PROOF STEP]
assume "x \<in> f ` A"
[PROOF STATE]
proof (state)
this:
x \<in> f ` A
goal (1 subgoal):
1. \<And>x. x \<in> f ` A \<Longrightarrow> x \<le> f (luba A)
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
x \<in> f ` A
[PROOF STEP]
obtain y where "x = f y" "y \<in> A"
[PROOF STATE]
proof (prove)
using this:
x \<in> f ` A
goal (1 subgoal):
1. (\<And>y. \<lbrakk>x = f y; y \<in> A\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
x = f y
y \<in> A
goal (1 subgoal):
1. \<And>x. x \<in> f ` A \<Longrightarrow> x \<le> f (luba A)
[PROOF STEP]
from \<open>y \<in> A\<close>
[PROOF STATE]
proof (chain)
picking this:
y \<in> A
[PROOF STEP]
have "orda y (luba A)"
[PROOF STATE]
proof (prove)
using this:
y \<in> A
goal (1 subgoal):
1. orda y (luba A)
[PROOF STEP]
by(rule ccpo.ccpo_Sup_upper[OF ccpo chain])
[PROOF STATE]
proof (state)
this:
orda y (luba A)
goal (1 subgoal):
1. \<And>x. x \<in> f ` A \<Longrightarrow> x \<le> f (luba A)
[PROOF STEP]
hence "f y \<le> f (luba A)"
[PROOF STATE]
proof (prove)
using this:
orda y (luba A)
goal (1 subgoal):
1. f y \<le> f (luba A)
[PROOF STEP]
by(rule monotoneD[OF mono])
[PROOF STATE]
proof (state)
this:
f y \<le> f (luba A)
goal (1 subgoal):
1. \<And>x. x \<in> f ` A \<Longrightarrow> x \<le> f (luba A)
[PROOF STEP]
thus "x \<le> f (luba A)"
[PROOF STATE]
proof (prove)
using this:
f y \<le> f (luba A)
goal (1 subgoal):
1. x \<le> f (luba A)
[PROOF STEP]
using \<open>x = f y\<close>
[PROOF STATE]
proof (prove)
using this:
f y \<le> f (luba A)
x = f y
goal (1 subgoal):
1. x \<le> f (luba A)
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
x \<le> f (luba A)
goal:
No subgoals!
[PROOF STEP]
qed |
variable (p : Prop)
open Classical
theorem dne (h : ¬¬p) : p :=
Or.elim (em p)
(fun (hp : p) => hp)
(fun (hnp: ¬p) => absurd hnp h)
theorem step (h : ¬(p ∨ ¬ p)) : ¬p :=
fun (hp : p) => h (Or.intro_left (¬p) (hp))
theorem exclmid : p ∨ ¬p :=
dne (p ∨ ¬p) (
fun (h : ¬(p ∨ ¬p)) =>
h (Or.intro_right (p) (step p h))
)
#check dne
#check p ∨ ¬p
#check dne (p ∨ ¬p) |
Formal statement is: lemma topological_basis_iff: assumes "\<And>B'. B' \<in> B \<Longrightarrow> open B'" shows "topological_basis B \<longleftrightarrow> (\<forall>O'. open O' \<longrightarrow> (\<forall>x\<in>O'. \<exists>B'\<in>B. x \<in> B' \<and> B' \<subseteq> O'))" (is "_ \<longleftrightarrow> ?rhs") Informal statement is: A set $B$ is a topological basis if and only if for every open set $O$, every point $x \in O$ is contained in some element of $B$ that is contained in $O$. |
module Oscar.Relation where
open import Oscar.Level
_⟨_⟩→_ : ∀ {a} {A : Set a} {b} → A → (A → Set b) → A → Set b
m ⟨ B ⟩→ n = B m → B n
Transitive : ∀ {a} {A : Set a} {b} (B : A → A → Set b) → Set (a ⊔ b)
Transitive B = ∀ {y z} → B y z → ∀ {x} → B x y → B x z
module _ {𝔬} {⋆ : Set 𝔬} {𝔪} {_↦_ : ⋆ → ⋆ → Set 𝔪} (_∙_ : Transitive _↦_) {𝔮} (_≞_ : ∀ {x} {y} → x ↦ y → x ↦ y → Set 𝔮) where
Extensional : Set (𝔬 ⊔ 𝔪 ⊔ 𝔮)
Extensional =
∀ {x y} {f₁ f₂ : x ↦ y}
→ f₁ ≞ f₂ → ∀ {z} {g₁ g₂ : y ↦ z}
→ g₁ ≞ g₂
→ (g₁ ∙ f₁) ≞ (g₂ ∙ f₂)
Associative : Set (𝔬 ⊔ 𝔪 ⊔ 𝔮)
Associative =
∀ {w x}
(f : w ↦ x)
{y}
(g : x ↦ y)
{z}
(h : y ↦ z)
→ ((h ∙ g) ∙ f) ≞ (h ∙ (g ∙ f))
|
(** * Biased coin program. *)
Set Implicit Arguments.
Set Contextual Implicit.
From Coq Require Import Streams Basics QArith String Lqa.
Local Open Scope program_scope.
Local Open Scope string_scope.
From ITree Require Import
ITree ITreeFacts.
Import ITreeNotations.
Local Open Scope itree_scope.
From zar Require Import
compile cotree cocwp cpGCL cpo cwp equidistribution eR misc itree order tactics tree.
Local Open Scope cpGCL_scope.
Require Import prelude.
Definition coin (out : string) (p : Q) : cpGCL :=
CChoice (const p) (fun b => if b then out <-- true else out <-- false).
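(* For example, [coin "b" (1 # 2)] flips a fair coin and stores the outcome in
   the program variable "b"; by [coin_correct] below, the probability that "b"
   ends up [true] is [Q2eR (1 # 2)], i.e. 1/2. *)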
Lemma wf_coin (out : string) (p : Q) :
(0 <= p <= 1)%Q ->
wf_cpGCL (coin out p).
Proof.
intros [H0 H1]; constructor; intro; auto.
destruct b; constructor.
Qed.
(** The probability of assigning `true` to the output variable is
equal to p. *)
Theorem coin_correct (out : string) (p : Q) :
(p <= 1)%Q ->
cwp (coin out p) (fun s => if as_bool (s out) then 1 else 0) empty = Q2eR p.
Proof.
intro Hp.
unfold cwp, coin, wp, wlp, const; simpl; eRauto.
unfold upd; simpl.
rewrite String.eqb_refl; simpl; eRauto.
Qed.
Section coin_equidistribution.
Context (env : SamplingEnvironment) (P : St -> bool) (samples : nat -> St).
Context (out : string) (p : Q) (Hp : (0 <= p <= 1)%Q).
Hypothesis bitstreams_samples :
forall i, iproduces (eq (samples i)) (env.(bitstreams) i)
(cpGCL_to_itree (coin out p) empty).
Theorem coin_samples_equidistributed :
converges (freq (is_true ∘ P) ∘ prefix samples)
(cwp (coin out p) (fun s => if P s then 1 else 0) empty).
Proof.
eapply cpGCL_samples_equidistributed; eauto; apply wf_coin; auto.
Qed.
End coin_equidistribution.
(** Extracting the sampler. *)
From Coq Require Import ExtrOcamlBasic ExtrOcamlString.
Definition sampler (p : Q) : itree boolE bool :=
ITree.map (fun s => as_bool (s "b")) (cpGCL_to_itree (coin "b" p) empty).
Extraction "extract/coin/coin.ml" sampler.
|
from pavlov.stats.timeseries.formatters import channel
import aljpy
import re
import time
import numpy as np
import pandas as pd
from bokeh import models as bom
from bokeh import plotting as bop
from bokeh import io as boi
from bokeh import layouts as bol
from contextlib import contextmanager
from .. import runs, tests
from . import registry
from collections import defaultdict
class Plotter:
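    """Live-updating grid of Bokeh figures for the stats of one run.

    Each call to ``refresh`` re-scans the run's stat readers, groups any newly
    appearing prefixes into subplots, rebuilds the grid when the grouping
    changes, and pushes the updated figures to the notebook output handle.
    """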
def __init__(self, run=-1, rule='60s', **kwargs):
self.run = run
self.readers = registry.StatsReaders(run, **kwargs)
self.groups = {}
self.plotters = {}
self.handle = None
self.rule = rule
bop.output_notebook(hide_banner=True)
self.refresh()
def refresh_groups(self):
self.readers.refresh()
reinit = False
for prefix in self.readers:
s = registry.parse_prefix(prefix)
if s.group not in self.groups:
self.groups[s.group] = []
if prefix not in self.groups[s.group]:
self.groups[s.group].append(prefix)
reinit = True
return reinit
def initialize(self, **kwargs):
plotters = {}
for subplot, prefixes in self.groups.items():
readers = [self.readers[p] for p in prefixes]
plotters[subplot] = readers[0].plotter(readers, **kwargs)
self.plotters = plotters
children = [p.figure for p in self.plotters.values()]
grid = bol.gridplot(children, ncols=5, plot_width=300, plot_height=300, merge_tools=False)
from IPython.display import clear_output
clear_output(wait=True)
self.handle = bop.show(grid, notebook_handle=True)
def refresh(self):
reinit = self.refresh_groups()
if reinit:
self.initialize(rule=self.rule)
for subplot, plotter in self.plotters.items():
plotter.refresh()
if self.handle:
boi.push_notebook(handle=self.handle)
def review(run=-1, **kwargs):
Plotter(run, **kwargs)
def view(run=-1, **kwargs):
plotter = Plotter(run, **kwargs)
while True:
plotter.refresh()
time.sleep(1.)
@tests.mock_dir
@tests.mock_time
def demo():
from . import mean, mean_std, mean_percent
run = runs.new_run()
with registry.to_run(run):
plotter = Plotter(run)
plotter.refresh()
time.sleep(1)
tests.set_time(30)
mean('single', 1)
plotter.refresh()
tests.set_time(90)
time.sleep(1)
mean('single', 2)
plotter.refresh()
tests.set_time(150)
time.sleep(1)
mean('double.one', 2)
mean('double.two', 3)
mean_std('ms', 1, 1)
mean_percent('mp', 1, 1)
plotter.refresh()
tests.set_time(210)
time.sleep(1)
mean('single', 4)
mean('double.one', 5)
mean('double.two', 6)
mean('new', 7)
mean_std('ms', 1, 1)
plotter.refresh() |
{-# OPTIONS --without-K #-}
{- INDEX: Some constructions with non-recursive HITs,
and related results.
This formalization covers the hardest results of my
paper titled "Constructions with non-recursive HITs".
To be precise: I have formalized the result that all
functions in the sequence of approximations (see Section 6)
are weakly constant, and that the colimit is thus
propositional. This requires a lot of lemmas. Some of
these lemmas are trivial on paper and only tedious in Agda,
but many lemmas are (at least for me) even on paper far
from trivial.
Formalized are Section 2, Section 3 (without the example
applications), especially the first main result; all of
Section 4 except Lemma 4.9, the definitions and principles
   of Section 5, all of Section 6 except the last corollaries.
The parts of the paper that are not formalized are
(A) the examples in Section 3
(B) the statements of remarks
(C) Lemma 4.9, 5.4, 5.5, and 5.6
(D) Theorem 5.7, Corollary 5.8, Theorem 6.2
(E) the discussions and results in the concluding Section 7
All of these are relatively easy compared with the results
that are formalized. The items in (C) could be implemented
easily, but are not very interesting on their own.
The same is true for the second and third item in (D),
which however depend on Theorem 5.7.
Theorem 5.7 itself could be interesting to formalize.
However, it relies on the main result of
Capriotti, Kraus, Vezzosi,
"Functions out of Higher Truncations".
This result is formalized, but unfortunately in another
library; thus, we omit Theorem 5.7 (for now) in the current
formalization. (E) would require (D) first.
This development type-checks with Agda 2.4.2.5 and similar
versions (I assume with 2.4.2.x in general; same as the
HoTT library).
-}
module nicolai.pseudotruncations.NONRECURSIVE-INDEX where
{- Some preliminary definitions/lemmas, and an explanation
why we need to work with the spheres defined by suspension-
iteration of the form
Σ¹⁺ⁿ :≡ Σⁿ ∘ Σ -}
open import nicolai.pseudotruncations.Preliminary-definitions
open import nicolai.pseudotruncations.Liblemmas
{- SECTION 2
The Sequential colimit. I am aware that there is some
overlap with lib/types/NatColim.agda -}
open import nicolai.pseudotruncations.SeqColim
{- Here is some preparation for Section 3 -}
open import nicolai.pseudotruncations.wconst-preparation
{- The rather lengthy argument that some heptagon commutes;
very tedious; this is still preparation for Section 3 -}
open import nicolai.pseudotruncations.heptagon
{- SECTION 3 (without the sample applications)
One result of the paper: If we have a sequence of weakly
constant functions, then the colimit is propositional -}
open import nicolai.pseudotruncations.wconstSequence
{- SECTION 4
   The correspondence between loops and maps from spheres:
a lot of tedious technical content. This was hard work for me!
The results are in two files; first, essentially the fact that
the 'pointed' 0-sphere [i.e. (bool, true)] is "as good as"
the unit type if we consider pointed maps out of it.
Second, the main lemmas. -}
open import nicolai.pseudotruncations.pointed-O-Sphere
open import nicolai.pseudotruncations.LoopsAndSpheres
{- SECTION 5 (mainly the definition and some auxiliary lemmas)
Definition of pseudo-truncations -}
open import nicolai.pseudotruncations.PseudoTruncs
{- SECTION 6
The sequence of approximations with increasing "connectedness-
level", and the proof that every map is weakly constant,
and the corollary that its colimit is propositional. -}
open import nicolai.pseudotruncations.PseudoTruncs-wconst-seq
|
import Data.List
import Data.List1
import Data.String
import System.File
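-- Reads comma-separated integer positions from input.txt and prints (wrapped
-- in `Just`) the smallest total cost of aligning them to one position, where
-- moving a position x to a target y costs |x - y|; only values occurring in
-- the input are tried as targets.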
fuel : List Int -> Int -> Int
fuel [] y = 0
fuel (x :: xs) y = abs (x - y) + fuel xs y
run : String -> IO ()
run s = do let l = catMaybes $ parseInteger {a=Int} <$> (forget $ split (== ',') s)
let u = (\(c ::: _) => c) <$> (group $ sort $ l)
let fs = head' $ sort $ fuel l <$> u
putStrLn $ show fs
main : IO ()
main = do Right s <- readFile "input.txt"
| Left err => putStrLn $ show err
run s
|
module ode_params
use, intrinsic :: iso_c_binding
implicit none
integer(c_long), parameter :: neq = 1
end module ode_params
! Right hand side of ODE for CVODE to solve.
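! The RHS below sets f(t,y) = 2*t, i.e. the integrated ODE is dy/dt = 2*t,
! whose exact solution is y(t) = y(0) + t**2 (useful for checking the output).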
module rhs_mod
implicit none
contains
integer(c_int) function RhsFn(tn, sunvec_y, sunvec_f, user_data) &
result(ierr) bind(C,name='RhsFn')
use, intrinsic :: iso_c_binding
use fnvector_serial
use cvode_interface
use ode_params
implicit none
real(c_double), value :: tn
type(c_ptr), value :: sunvec_y
type(c_ptr), value :: sunvec_f
type(c_ptr), value :: user_data
    ! pointers to data in SUNDIALS vectors
real(c_double), pointer :: yvec(:)
real(c_double), pointer :: fvec(:)
! get data arrays from SUNDIALS vectors
call N_VGetData_Serial(sunvec_f, neq, fvec)
    fvec(1) = 2.0*tn
    ! return success; CVODE treats a nonzero value as an error flag
    ierr = 0
end function RhsFn
end module rhs_mod
|
Set Implicit Arguments.
Set Maximal Implicit Insertion.
Set Contextual Implicit.
Set Universe Polymorphism.
From Equations Require Import Equations.
From Coq Require Import
Relation_Definitions
RelationClasses
.
Require Import Fix.
Require Import GHC.Base.
Require Import ClassesOfFunctors.DictDerive.
Require Import ClassesOfFunctors.Laws.
Require Import ClassesOfFunctors.AppKleenePlus.
Require Import Adverb.Composable.Adverb.
Open Scope composable_adverb_scope.
Require Import Tactics.Tactics.
(* begin hide *)
Local Ltac solve :=
Tactics.program_simplify; Equations.CoreTactics.equations_simpl; try Tactics.program_solve_wf;
repeat destruct_match; reflexivity.
Local Obligation Tactic := solve.
(* end hide *)
(* begin repeatedly_adv *)
Variant ReifiedKleenePlus (K : Set -> Set) (R : Set) : Set :=
| KPlus : K R -> ReifiedKleenePlus K R.
(* end repeatedly_adv *)
Arguments KPlus {_} {_}.
Program Instance Functor1__ReifiedKleenePlus : Functor1 ReifiedKleenePlus :=
{| fmap1 := fun _ _ _ f a =>
match a with
| KPlus a => KPlus (f _ a)
end
|}.
Section Instances.
Variable D : (Set -> Set) -> Set -> Set.
Context `{ReifiedKleenePlus -≪ D} `{Functor1 D}.
(** TODO: adverb simulation. *)
Local Definition kplus {A : Set} (a : Fix1 D A) : Fix1 D A :=
@inF1 _ _ _ (inj1 (KPlus a)).
Context `{Functor (Fix1 D)}.
Context `{Applicative (Fix1 D)}.
Global Instance FunctorKleenePlus__Repeatedly : AppKleenePlus (Fix1 D) :=
fun _ k => k {| kleenePlus__ := fun _ => kplus |}.
End Instances.
|
Whether you are looking for cheap hotels in Hairizberg, the best family-friendly hotels for children and the elderly in Hairizberg, or getaway hotels in Hairizberg for a large group, Hotels.com makes hotel hunting quick and easy for a memorable trip ahead. If you are planning a family trip to Hairizberg, a special and romantic hotel stay for couples in Hairizberg, a relaxing or quick weekend getaway in Hairizberg, or even a corporate business function in Hairizberg, Hotels.com suggests the best accommodations that fit your exact wishlist.
2. Provides insights on the staying experience with 66 genuine reviews.
The detailed location mapping allows you to find your ideal Hairizberg hotel closest to tourist attractions. Navigate the map image on this page or the search results to identify the must-visit places and landmarks in Hairizberg, followed by hotels in that area. You can further refine your searches by specific neighborhoods and transport options such as train stations, airports or public transport to help you travel around with ease.
Start booking Hairizberg hotels with Hotels.com today. |