\documentclass[revision-guide.tex]{subfiles}
%% Current Author:
\setcounter{chapter}{11}
\begin{document}
\chapter{Electric Fields}
\begin{content}
\item concept of an electric field
\item uniform electric fields
\item capacitance
\item electric potential
\item electric field of a point charge
\end{content}
\subsection{Candidates should be able to:}
\spec{explain what is meant by an electric field and recall and use $E=\frac{F}{q}$ for electric field strength}
An electric field is a region of space in which a charged body experiences an electric force. Electric field strength is defined as the force per unit charge, written in symbols as:
\[ E=\frac{F}{q} \]
Electric field strength therefore has units of \si{\newton\per\coulomb}.
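For example (illustrative values), a charge of $q = 2.0\times10^{-9}\,\si{\coulomb}$ placed where the field strength is $E = 5.0\times10^{4}\,\si{\newton\per\coulomb}$ experiences a force
\[ F = qE = 2.0\times10^{-9} \times 5.0\times10^{4} = 1.0\times10^{-4}\,\si{\newton} \]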
\spec{recall that applying a potential difference to two parallel plates stores charge on the plates and produces a uniform electric field in the central region between them}
If a potential difference is applied across two parallel plates a distance $d$ apart, then charge is stored on the plates and an electric field exists between them. Except at the edges, the field is \emph{uniform}. This is shown in figure \ref{parallelplates} by the fact that the field lines are parallel.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}
\draw[thick] (0,0) -- (10,0);
\draw[thick] (0,3) -- (10,3);
\foreach \x in {1.2,2,...,8.8} {
\draw[-{Latex[length=5mm, width=2mm]},red] (\x,3) -- (\x,1.25);
\draw[red] (\x,1.25) -- (\x,0);
}
\foreach \x in {0,0.4,...,10} {
\draw (\x,3.2) node {+};
\draw (\x, -0.2) node {-};
}
\end{tikzpicture}
\end{center}
\caption{Uniform field between parallel plates}
\label{parallelplates}
\end{figure}
Note that field lines show the direction in which a force would act on a \emph{positive} charge, so they point from positive to negative.
\spec{derive and use the equations Fd = QV and $E=\frac{V}{d}$ for a charge moving through a potential difference in a uniform electric field}
From the definition of potential difference, the work done moving a charge $Q$ through a potential difference $V$ is \[ W = QV \] This work is also equal to the force multiplied by the distance moved, $Fd$. If the movement is through a uniform field we can equate these two to give
\[ Fd = QV \]
The definition of electric field strength gives $F=QE$ and therefore substitution and re-arrangement give \[ E = \frac{V}{d} \]
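For example (illustrative values), a p.d. of $500\,\si{\volt}$ across plates $2.0\,\si{\centi\metre}$ apart gives
\[ E = \frac{V}{d} = \frac{500}{0.020} = 2.5\times10^{4}\,\si{\volt\per\metre} \]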
\spec{recall that the charge stored on parallel plates is proportional to the potential difference between them}
This is an application of Gauss's law, which relates the electric field over a closed surface to the charge enclosed within it. It is enough to recall this fact at Pre-U.
\spec{recall and use $C = \frac{Q}{V}$ for capacitance}
This is the definition of capacitance and should be learnt. It can also be calculated from the gradient of a graph of $Q$ against $V$.
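For example (illustrative values), a capacitor holding $6.0\,\si{\micro\coulomb}$ of charge at a p.d. of $3.0\,\si{\volt}$ has
\[ C = \frac{Q}{V} = \frac{6.0\times10^{-6}}{3.0} = 2.0\,\si{\micro\farad} \]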
\spec{derive, recall and use $W = \frac{1}{2}QV$ for the energy stored by a capacitor, derive the equation from the area under a graph of charge stored against potential difference, and derive and use related equations such as $W = \frac{1}{2}CV^2$}
If a capacitor is partially charged with a charge $Q$, then increasing its potential difference by $\delta V$ requires a small amount of work $\delta W$ such that
\[ \delta W = Q \delta V \]
On a graph of $Q$ against $V$ this work corresponds to the area of a thin strip. The total energy required to charge a capacitor from uncharged to a p.d. of $V$ is therefore the area under the graph from zero to $V$, hence
\[ W = \frac{1}{2}QV \]
We can substitute for each quantity in turn to give the following variations of the equation:
\[ W = \frac{1}{2}QV = \frac{1}{2}CV^2 = \frac{1}{2}\frac{Q^2}{C} \]
This can, of course, also be done by integration:
\[ W = \int_0^V Q \, dV = \int_0^V CV \, dV = \frac{1}{2}CV^2 \]
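As a worked example (illustrative values), a $100\,\si{\micro\farad}$ capacitor charged to $12\,\si{\volt}$ stores
\[ W = \frac{1}{2}CV^2 = \frac{1}{2} \times 100\times10^{-6} \times 12^2 = 7.2\times10^{-3}\,\si{\joule} \]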
\spec{analyse graphs of the variation with time of potential difference, charge and current for a capacitor discharging through a resistor}
When a capacitor discharges through a resistor, the potential difference across the capacitor drives a current around the circuit. Since this current removes charge from the capacitor, the rate of change of charge on the capacitor is proportional to the charge remaining on it.
\[ \frac{dQ}{dt} = -I = -\frac{V}{R} = -\frac{Q}{RC} \]
This is a first-order differential equation with solution
\[ Q = Q_0 e^{-\frac{t}{RC}} \]
The substitutions $Q=CV$ and then $I = \frac{V}{R}$ can be used to obtain similar equations for potential difference and current. The graph of each quantity is an exponential decay; the key features are the initial value (e.g. $V_0$) and the fact that all three quantities tend to zero.
\spec{define and use the time constant of a discharging capacitor}
The time constant, $\tau$, is defined as follows:
\[ \tau = RC \]
This means that in one time constant the current, voltage and charge on a capacitor have declined to $\frac{1}{e}$ of their initial values.
A common rule-of-thumb in electronics is that a capacitor takes five time constants to discharge. Substituting $t = 5\tau = 5RC$ into the equation gives
\[ V = V_0 e^{-\frac{t}{RC}} = V_0 e^{-5} \approx 0.0067\, V_0 \]
so the voltage across the capacitor has declined to less than 1\% of its initial value.
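For instance (illustrative values), with $R = 10\,\si{\kilo\ohm}$ and $C = 470\,\si{\micro\farad}$ the time constant is $\tau = RC = 4.7\,\si{\second}$, so by the rule-of-thumb the capacitor is effectively discharged after about $5\tau \approx 24\,\si{\second}$.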
\spec{analyse the discharge of a capacitor using equations of the form $x=x_0e^{\frac{-t}{RC}}$}
Much of this has been covered above. One important point to note is that the analysis of capacitor discharge is usually carried out by plotting a graph of $\ln{V}$ against time. Taking natural logarithms of the discharge equation gives
\[ \ln{V} = -\frac{t}{RC} + \ln{V_0} \]
Therefore the gradient of the graph is $-\frac{1}{RC}$ and the intercept $\ln{V_0}$.
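For example (illustrative values), if such a plot has gradient $-0.50\,\si{\per\second}$ then $RC = 2.0\,\si{\second}$; if $C = 100\,\si{\micro\farad}$ is known, it follows that $R = 20\,\si{\kilo\ohm}$.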
\spec{understand that the direction and electric field strength of an electric field may be represented by field lines (lines of force), and recall the patterns of field lines that represent uniform and radial electric fields}
Field lines are a way of visualising a field. They show the direction in which a positively charged particle would be pushed, so for an electric field they run from positive to negative charge. The density of the field lines represents the strength of the field. For example, in figure \ref{solenoid} the region \textbf{A} contains a strong uniform field, which is stronger than the field at \textbf{B} because the field lines are more closely spaced.
\begin{figure}[ht]
\begin{tikzpicture}
\foreach \x in {1,2,3,4,5} {
\draw (\x-0.6,0) circle (0.2cm);
\draw (\x-0.6,3) circle (0.2cm);
}
% top loop
\draw[-{Latex[length=5mm,width=2mm]}] (0,2.5) -- (2.5,2.5);
\draw (2.5,2.5) -- (5,2.5);
\draw (5,2.5) .. controls (7,2.5) and (7,4) .. (5,4);
\draw[-{Latex[length=5mm,width=2mm]}] (5,4) -- (2.5,4);
\draw (2.5,4) -- (0,4);
\draw (0,4) .. controls (-2,4) and (-2,2.5) .. (0,2.5);
% 2nd loop
\draw[-{Latex[length=5mm,width=2mm]}] (0,2) -- (2.5,2);
\draw (2.5,2) -- (5,2);
\draw (5,2) .. controls (8,2) and (8,5) .. (5,5);
\draw[-{Latex[length=5mm,width=2mm]}] (5,5) -- (2.5,5);
\draw (2.5,5) -- (0,5);
\draw (0,5) .. controls (-3,5) and (-3,2) .. (0,2);
% middle loop
\draw[-{Latex[length=5mm,width=2mm]}] (-2,1.5) -- (2.5,1.5);
\draw (2.5,1.5) -- (7,1.5);
% 4th loop
\draw[-{Latex[length=5mm,width=2mm]}] (0,1) -- (2.5,1);
\draw (2.5,1) -- (5,1);
\draw (5,1) .. controls (8,1) and (8,-2) .. (5,-2);
\draw[-{Latex[length=5mm,width=2mm]}] (5,-2) -- (2.5,-2);
\draw (2.5,-2) -- (0,-2);
\draw (0,-2) .. controls (-3,-2) and (-3,1) .. (0,1);
% bottom loop
\draw[-{Latex[length=5mm,width=2mm]}] (0,0.5) -- (2.5,0.5);
\draw (2.5,0.5) -- (5,0.5);
\draw (5,0.5) .. controls (7,0.5) and (7,-1) .. (5,-1);
\draw[-{Latex[length=5mm,width=2mm]}] (5,-1) -- (2.5,-1);
\draw (2.5,-1) -- (0,-1);
\draw (0,-1) .. controls (-2,-1) and (-2,0.5) .. (0,0.5);
% Labels
\draw (2.5,1.5) node[anchor=south] {\textbf{A}};
\draw (2.5,4.5) node {\textbf{B}};
\end{tikzpicture}
\caption{Field around a solenoid}
\label{solenoid}
\end{figure}
\spec{understand electric potential and equipotentials}
Electric potential is defined as the potential energy per unit charge due to an electric field. A point infinitely far from all charges, where there is no field, is defined as having zero potential. An equipotential is a line or surface on which the potential has a constant value, so no work is done against the electric force when moving along an equipotential. On diagrams, equipotentials always cross field lines at right angles.
\spec{understand the relationship between electric field and potential gradient, and recall and use $E = -\frac{dV}{dx}$}
The strength of the electric field at any point is equal to the negative of the potential gradient. This can be seen most easily in a uniform field. If a charge $q$ moves a distance $\Delta x$ in the direction of the force in a uniform field $E$, the work done on it by the field is $F\Delta x = qE\Delta x$. This work is done at the expense of the charge's potential energy, so the change in energy per unit charge is $\Delta V = -\frac{F}{q}\Delta x$. As $\frac{F}{q}$ is the field strength $E$, this rearranges to give $E = -\frac{\Delta V}{\Delta x}$, which can be generalised for a non-uniform field as
\[ E = - \frac{dV}{dx} \]
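As a consistency check, the standard result for the potential of a point charge, $V = \frac{Q}{4\pi\epsilon_0 r}$, gives on differentiation
\[ E = -\frac{dV}{dr} = \frac{Q}{4\pi\epsilon_0 r^2} \]
which is exactly the point-charge field derived below.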
\spec{recognise and use \[ F = \frac{Q_1 Q_2}{4\pi\epsilon_0 r^2} \] for point charges}
The equation above is known as Coulomb's law and enables the calculation of the force between two point charges separated by a distance $r$.
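For example (illustrative values), the force between two electrons separated by $1.0\times10^{-9}\,\si{\metre}$ is
\[ F = \frac{(1.6\times10^{-19})^2}{4\pi\epsilon_0 \times (1.0\times10^{-9})^2} \approx 2.3\times10^{-10}\,\si{\newton} \]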
\spec{derive and use $E = \frac{Q}{4\pi\epsilon_0 r^2}$ for the electric field due to a point charge}
This follows from the definition of electric field strength, taking $Q_2$ to be a test charge placed in the field of $Q_1 = Q$:
\[ E = \frac{F}{Q_2} = \frac{1}{Q_2} \frac{Q_1 Q_2}{4\pi\epsilon_0 r^2} = \frac{Q}{4\pi\epsilon_0 r^2} \]
\spec{*use integration to derive $W = \frac{Q_1 Q_2}{4\pi\epsilon_0 r}$ from $F=\frac{Q_1 Q_2}{4\pi\epsilon_0 r^2}$ for point charges}
Since the potential is defined to be zero at infinity, the electrostatic potential energy, $W$, of a pair of point charges is equal to the work done by the field in moving one charge from a separation $r$ to infinity. Since work done is equal to $\int F \, dx$ it follows that
\[ W = \int_{r}^{\infty} \frac{Q_1 Q_2}{4\pi\epsilon_0 r^2} \, dr = \left[ -\frac{Q_1 Q_2}{4\pi\epsilon_0 r} \right]_{r}^{\infty} = \frac{Q_1 Q_2}{4\pi\epsilon_0 r} \]
This can alternatively be derived by considering the work done bringing a charged particle in from infinity to a distance $r$. In that case the force acts in the opposite direction to the displacement, so the integrand gains a minus sign and the limits of integration are reversed, leading to the same answer.
\spec{*recognise and use $W = \frac{Q_1 Q_2}{4\pi\epsilon_0 r }$ for the electrostatic potential energy for point charges.}
This is simply an application of the equation above. It allows the calculation of changes in potential energy and hence of transfers to other forms of energy (e.g. kinetic).
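For example (illustrative values), if two protons are released from rest at a separation of $1.0\times10^{-10}\,\si{\metre}$, the potential energy
\[ W = \frac{(1.6\times10^{-19})^2}{4\pi\epsilon_0 \times 1.0\times10^{-10}} \approx 2.3\times10^{-18}\,\si{\joule} \]
is converted entirely into kinetic energy shared between the two particles.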
\end{document}
/-
Copyright (c) 2017 Microsoft Corporation. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
universes u w
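/-- A `buffer` is a dynamically sized sequence, represented as a length `n`
    paired with an `array n α`. -/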
def buffer (α : Type u) := Σ n, array n α
def mk_buffer {α : Type u} : buffer α :=
⟨0, {data := λ i, fin.elim0 i}⟩
def array.to_buffer {α : Type u} {n : nat} (a : array n α) : buffer α :=
⟨n, a⟩
namespace buffer
variables {α : Type u} {β : Type w}
def nil : buffer α :=
mk_buffer
def size (b : buffer α) : nat :=
b.1
def to_array (b : buffer α) : array (b.size) α :=
b.2
def push_back : buffer α → α → buffer α
| ⟨n, a⟩ v := ⟨n+1, a.push_back v⟩
def pop_back : buffer α → buffer α
| ⟨0, a⟩ := ⟨0, a⟩
| ⟨n+1, a⟩ := ⟨n, a.pop_back⟩
def read : Π (b : buffer α), fin b.size → α
| ⟨n, a⟩ i := a.read i
def write : Π (b : buffer α), fin b.size → α → buffer α
| ⟨n, a⟩ i v := ⟨n, a.write i v⟩
def read' [inhabited α] : buffer α → nat → α
| ⟨n, a⟩ i := a.read' i
def write' : buffer α → nat → α → buffer α
| ⟨n, a⟩ i v := ⟨n, a.write' i v⟩
lemma read_eq_read' [inhabited α] (b : buffer α) (i : nat) (h : i < b.size) :
read b ⟨i, h⟩ = read' b i :=
by cases b; unfold read read'; simp [array.read_eq_read']
lemma write_eq_write' (b : buffer α) (i : nat) (h : i < b.size) (v : α) :
write b ⟨i, h⟩ v = write' b i v :=
by cases b; unfold write write'; simp [array.write_eq_write']
def to_list (b : buffer α) : list α :=
b.to_array.to_list
protected def to_string (b : buffer char) : string :=
b.to_array.to_list.as_string
def append_list {α : Type u} : buffer α → list α → buffer α
| b [] := b
| b (v::vs) := append_list (b.push_back v) vs
def append_string (b : buffer char) (s : string) : buffer char :=
b.append_list s.to_list
lemma lt_aux_1 {a b c : nat} (h : a + c < b) : a < b :=
lt_of_le_of_lt (nat.le_add_right a c) h
lemma lt_aux_2 {n : nat} (h : 0 < n) : n - 1 < n :=
nat.sub_lt h (nat.succ_pos 0)
lemma lt_aux_3 {n i} (h : i + 1 < n) : n - 2 - i < n :=
have n > 0, from lt_trans (nat.zero_lt_succ i) h,
have n - 2 < n, from nat.sub_lt this (dec_trivial),
lt_of_le_of_lt (nat.sub_le _ _) this
def append_array {α : Type u} {n : nat} (nz : 0 < n) :
buffer α → array n α → ∀ i : nat, i < n → buffer α
| ⟨m, b⟩ a 0 _ :=
let i : fin n := ⟨n - 1, lt_aux_2 nz⟩ in
⟨m+1, b.push_back (a.read i)⟩
| ⟨m, b⟩ a (j+1) h :=
let i : fin n := ⟨n - 2 - j, lt_aux_3 h⟩ in
append_array ⟨m+1, b.push_back (a.read i)⟩ a j (lt_aux_1 h)
protected def append {α : Type u} : buffer α → buffer α → buffer α
| b ⟨0, a⟩ := b
| b ⟨n+1, a⟩ := append_array (nat.zero_lt_succ _) b a n (nat.lt_succ_self _)
def iterate : Π b : buffer α, β → (fin b.size → α → β → β) → β
| ⟨_, a⟩ b f := a.iterate b f
def foreach : Π b : buffer α, (fin b.size → α → α) → buffer α
| ⟨n, a⟩ f := ⟨n, a.foreach f⟩
/-- Monadically map a function over the buffer. -/
@[inline]
def mmap {m} [monad m] (b : buffer α) (f : α → m β) : m (buffer β) :=
do b' ← b.2.mmap f, return b'.to_buffer
/-- Map a function over the buffer. -/
@[inline]
def map : buffer α → (α → β) → buffer β
| ⟨n, a⟩ f := ⟨n, a.map f⟩
def foldl : buffer α → β → (α → β → β) → β
| ⟨_, a⟩ b f := a.foldl b f
def rev_iterate : Π (b : buffer α), β → (fin b.size → α → β → β) → β
| ⟨_, a⟩ b f := a.rev_iterate b f
def take (b : buffer α) (n : nat) : buffer α :=
if h : n ≤ b.size then ⟨n, b.to_array.take n h⟩ else b
def take_right (b : buffer α) (n : nat) : buffer α :=
if h : n ≤ b.size then ⟨n, b.to_array.take_right n h⟩ else b
def drop (b : buffer α) (n : nat) : buffer α :=
if h : n ≤ b.size then ⟨_, b.to_array.drop n h⟩ else b
def reverse (b : buffer α) : buffer α :=
⟨b.size, b.to_array.reverse⟩
protected def mem (v : α) (a : buffer α) : Prop := ∃i, read a i = v
instance : has_mem α (buffer α) := ⟨buffer.mem⟩
instance : has_append (buffer α) :=
⟨buffer.append⟩
instance [has_repr α] : has_repr (buffer α) :=
⟨repr ∘ to_list⟩
meta instance [has_to_format α] : has_to_format (buffer α) :=
⟨to_fmt ∘ to_list⟩
meta instance [has_to_tactic_format α] : has_to_tactic_format (buffer α) :=
⟨tactic.pp ∘ to_list⟩
end buffer
def list.to_buffer {α : Type u} (l : list α) : buffer α :=
mk_buffer.append_list l
@[reducible] def char_buffer := buffer char
/-- Convert a format object into a character buffer with the provided
formatting options. -/
meta constant format.to_buffer : format → options → buffer char
def string.to_char_buffer (s : string) : char_buffer :=
buffer.nil.append_string s
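-- A minimal usage sketch (assuming only the definitions above): round-trip a
-- list through a buffer and push an element onto the end.
-- #eval ((list.to_buffer [1, 2, 3]).push_back 4).to_list -- expected: [1, 2, 3, 4]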
// Warning! This file is autogenerated.
#include <boost/text/grapheme_break.hpp>
#include <gtest/gtest.h>
#include <algorithm>
TEST(grapheme, breaks_9)
{
// ÷ AC00 × 200D ÷
// ÷ [0.2] HANGUL SYLLABLE GA (LV) × [9.0] ZERO WIDTH JOINER (ZWJ_ExtCccZwj) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac00, 0x200d }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
}
// ÷ AC00 × 0308 × 200D ÷
// ÷ [0.2] HANGUL SYLLABLE GA (LV) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) × [9.0] ZERO WIDTH JOINER (ZWJ_ExtCccZwj) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac00, 0x308, 0x200d }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
}
// ÷ AC00 ÷ 0378 ÷
// ÷ [0.2] HANGUL SYLLABLE GA (LV) ÷ [999.0] <reserved-0378> (Other) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac00, 0x378 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ AC00 × 0308 ÷ 0378 ÷
// ÷ [0.2] HANGUL SYLLABLE GA (LV) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [999.0] <reserved-0378> (Other) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac00, 0x308, 0x378 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ AC00 ÷ D800 ÷
// ÷ [0.2] HANGUL SYLLABLE GA (LV) ÷ [5.0] <surrogate-D800> (Control) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac00, 0xd800 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ AC00 × 0308 ÷ D800 ÷
// ÷ [0.2] HANGUL SYLLABLE GA (LV) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [5.0] <surrogate-D800> (Control) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac00, 0x308, 0xd800 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 ÷ 0020 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) ÷ [999.0] SPACE (Other) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0x20 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 ÷ 0020 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [999.0] SPACE (Other) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0x20 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 ÷ 000D ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) ÷ [5.0] <CARRIAGE RETURN (CR)> (CR) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0xd }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 ÷ 000D ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [5.0] <CARRIAGE RETURN (CR)> (CR) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0xd }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 ÷ 000A ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) ÷ [5.0] <LINE FEED (LF)> (LF) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0xa }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 ÷ 000A ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [5.0] <LINE FEED (LF)> (LF) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0xa }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 ÷ 0001 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) ÷ [5.0] <START OF HEADING> (Control) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0x1 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 ÷ 0001 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [5.0] <START OF HEADING> (Control) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0x1 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 × 034F ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING GRAPHEME JOINER (Extend) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0x34f }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 × 034F ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) × [9.0] COMBINING GRAPHEME JOINER (Extend) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0x34f }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 ÷ 1F1E6 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) ÷ [999.0] REGIONAL INDICATOR SYMBOL LETTER A (RI) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0x1f1e6 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 ÷ 1F1E6 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [999.0] REGIONAL INDICATOR SYMBOL LETTER A (RI) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0x1f1e6 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 ÷ 0600 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) ÷ [999.0] ARABIC NUMBER SIGN (Prepend) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0x600 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 ÷ 0600 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [999.0] ARABIC NUMBER SIGN (Prepend) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0x600 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 × 0903 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.1] DEVANAGARI SIGN VISARGA (SpacingMark) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0x903 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 × 0903 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) × [9.1] DEVANAGARI SIGN VISARGA (SpacingMark) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0x903 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 ÷ 1100 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) ÷ [999.0] HANGUL CHOSEONG KIYEOK (L) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0x1100 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 ÷ 1100 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [999.0] HANGUL CHOSEONG KIYEOK (L) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0x1100 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 ÷ 1160 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) ÷ [999.0] HANGUL JUNGSEONG FILLER (V) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0x1160 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 ÷ 1160 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [999.0] HANGUL JUNGSEONG FILLER (V) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0x1160 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 × 11A8 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [8.0] HANGUL JONGSEONG KIYEOK (T) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0x11a8 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 ÷ 11A8 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [999.0] HANGUL JONGSEONG KIYEOK (T) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0x11a8 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 ÷ AC00 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) ÷ [999.0] HANGUL SYLLABLE GA (LV) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0xac00 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 ÷ AC00 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [999.0] HANGUL SYLLABLE GA (LV) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0xac00 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 ÷ AC01 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) ÷ [999.0] HANGUL SYLLABLE GAG (LVT) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0xac01 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 ÷ AC01 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [999.0] HANGUL SYLLABLE GAG (LVT) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0xac01 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 ÷ 231A ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) ÷ [999.0] WATCH (ExtPict) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0x231a }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 ÷ 231A ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [999.0] WATCH (ExtPict) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0x231a }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 × 0300 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING GRAVE ACCENT (Extend_ExtCccZwj) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0x300 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 × 0300 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) × [9.0] COMBINING GRAVE ACCENT (Extend_ExtCccZwj) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0x300 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 × 200D ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] ZERO WIDTH JOINER (ZWJ_ExtCccZwj) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0x200d }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 × 200D ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) × [9.0] ZERO WIDTH JOINER (ZWJ_ExtCccZwj) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0x200d }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 ÷ 0378 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) ÷ [999.0] <reserved-0378> (Other) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0x378 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 ÷ 0378 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [999.0] <reserved-0378> (Other) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0x378 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ AC01 ÷ D800 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) ÷ [5.0] <surrogate-D800> (Control) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0xac01, 0xd800 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ AC01 × 0308 ÷ D800 ÷
// ÷ [0.2] HANGUL SYLLABLE GAG (LVT) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [5.0] <surrogate-D800> (Control) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0xac01, 0x308, 0xd800 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ 231A ÷ 0020 ÷
// ÷ [0.2] WATCH (ExtPict) ÷ [999.0] SPACE (Other) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0x231a, 0x20 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ 231A × 0308 ÷ 0020 ÷
// ÷ [0.2] WATCH (ExtPict) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [999.0] SPACE (Other) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0x231a, 0x308, 0x20 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ 231A ÷ 000D ÷
// ÷ [0.2] WATCH (ExtPict) ÷ [5.0] <CARRIAGE RETURN (CR)> (CR) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0x231a, 0xd }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ 231A × 0308 ÷ 000D ÷
// ÷ [0.2] WATCH (ExtPict) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [5.0] <CARRIAGE RETURN (CR)> (CR) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0x231a, 0x308, 0xd }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ 231A ÷ 000A ÷
// ÷ [0.2] WATCH (ExtPict) ÷ [5.0] <LINE FEED (LF)> (LF) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0x231a, 0xa }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ 231A × 0308 ÷ 000A ÷
// ÷ [0.2] WATCH (ExtPict) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [5.0] <LINE FEED (LF)> (LF) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0x231a, 0x308, 0xa }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
// ÷ 231A ÷ 0001 ÷
// ÷ [0.2] WATCH (ExtPict) ÷ [5.0] <START OF HEADING> (Control) ÷ [0.3]
{
std::array<uint32_t, 2> cps = {{ 0x231a, 0x1 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 1);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 1, cps.end()) - cps.begin(), 2);
}
// ÷ 231A × 0308 ÷ 0001 ÷
// ÷ [0.2] WATCH (ExtPict) × [9.0] COMBINING DIAERESIS (Extend_ExtCccZwj) ÷ [5.0] <START OF HEADING> (Control) ÷ [0.3]
{
std::array<uint32_t, 3> cps = {{ 0x231a, 0x308, 0x1 }};
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 0, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 1, cps.end()) - cps.begin(), 0);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 0, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 2, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
EXPECT_EQ(boost::text::prev_grapheme_break(cps.begin(), cps.begin() + 3, cps.end()) - cps.begin(), 2);
EXPECT_EQ(boost::text::next_grapheme_break(cps.begin() + 2, cps.end()) - cps.begin(), 3);
}
}
|
State Before: C : Type u
inst✝⁸ : Category C
J : GrothendieckTopology C
D : Type w₁
inst✝⁷ : Category D
E : Type w₂
inst✝⁶ : Category E
F : D ⥤ E
inst✝⁵ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) D
inst✝⁴ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) E
inst✝³ : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ D
inst✝² : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ E
inst✝¹ : (X : C) → PreservesColimitsOfShape (Cover J X)ᵒᵖ F
inst✝ : (X : C) → (W : Cover J X) → (P : Cᵒᵖ ⥤ D) → PreservesLimit (MulticospanIndex.multicospan (Cover.index W P)) F
P : Cᵒᵖ ⥤ D
⊢ whiskerRight (toSheafify J P) F ≫ (sheafifyCompIso J F P).hom = toSheafify J (P ⋙ F) State After: C : Type u
inst✝⁸ : Category C
J : GrothendieckTopology C
D : Type w₁
inst✝⁷ : Category D
E : Type w₂
inst✝⁶ : Category E
F : D ⥤ E
inst✝⁵ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) D
inst✝⁴ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) E
inst✝³ : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ D
inst✝² : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ E
inst✝¹ : (X : C) → PreservesColimitsOfShape (Cover J X)ᵒᵖ F
inst✝ : (X : C) → (W : Cover J X) → (P : Cᵒᵖ ⥤ D) → PreservesLimit (MulticospanIndex.multicospan (Cover.index W P)) F
P : Cᵒᵖ ⥤ D
⊢ whiskerRight (toSheafify J P) F ≫ (plusCompIso J F (plusObj J P)).hom ≫ plusMap J (plusCompIso J F P).hom =
toSheafify J (P ⋙ F) Tactic: dsimp [sheafifyCompIso] State Before: C : Type u
inst✝⁸ : Category C
J : GrothendieckTopology C
D : Type w₁
inst✝⁷ : Category D
E : Type w₂
inst✝⁶ : Category E
F : D ⥤ E
inst✝⁵ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) D
inst✝⁴ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) E
inst✝³ : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ D
inst✝² : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ E
inst✝¹ : (X : C) → PreservesColimitsOfShape (Cover J X)ᵒᵖ F
inst✝ : (X : C) → (W : Cover J X) → (P : Cᵒᵖ ⥤ D) → PreservesLimit (MulticospanIndex.multicospan (Cover.index W P)) F
P : Cᵒᵖ ⥤ D
⊢ whiskerRight (toSheafify J P) F ≫ (plusCompIso J F (plusObj J P)).hom ≫ plusMap J (plusCompIso J F P).hom =
toSheafify J (P ⋙ F) State After: C : Type u
inst✝⁸ : Category C
J : GrothendieckTopology C
D : Type w₁
inst✝⁷ : Category D
E : Type w₂
inst✝⁶ : Category E
F : D ⥤ E
inst✝⁵ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) D
inst✝⁴ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) E
inst✝³ : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ D
inst✝² : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ E
inst✝¹ : (X : C) → PreservesColimitsOfShape (Cover J X)ᵒᵖ F
inst✝ : (X : C) → (W : Cover J X) → (P : Cᵒᵖ ⥤ D) → PreservesLimit (MulticospanIndex.multicospan (Cover.index W P)) F
P : Cᵒᵖ ⥤ D
⊢ whiskerRight (toPlus J P) F ≫
whiskerRight (plusMap J (toPlus J P)) F ≫
(plusCompIso J F (plusObj J P)).hom ≫ plusMap J (plusCompIso J F P).hom =
toSheafify J (P ⋙ F) Tactic: erw [whiskerRight_comp, Category.assoc] State Before: C : Type u
inst✝⁸ : Category C
J : GrothendieckTopology C
D : Type w₁
inst✝⁷ : Category D
E : Type w₂
inst✝⁶ : Category E
F : D ⥤ E
inst✝⁵ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) D
inst✝⁴ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) E
inst✝³ : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ D
inst✝² : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ E
inst✝¹ : (X : C) → PreservesColimitsOfShape (Cover J X)ᵒᵖ F
inst✝ : (X : C) → (W : Cover J X) → (P : Cᵒᵖ ⥤ D) → PreservesLimit (MulticospanIndex.multicospan (Cover.index W P)) F
P : Cᵒᵖ ⥤ D
⊢ whiskerRight (toPlus J P) F ≫
whiskerRight (plusMap J (toPlus J P)) F ≫
(plusCompIso J F (plusObj J P)).hom ≫ plusMap J (plusCompIso J F P).hom =
toSheafify J (P ⋙ F) State After: C : Type u
inst✝⁸ : Category C
J : GrothendieckTopology C
D : Type w₁
inst✝⁷ : Category D
E : Type w₂
inst✝⁶ : Category E
F : D ⥤ E
inst✝⁵ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) D
inst✝⁴ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) E
inst✝³ : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ D
inst✝² : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ E
inst✝¹ : (X : C) → PreservesColimitsOfShape (Cover J X)ᵒᵖ F
inst✝ : (X : C) → (W : Cover J X) → (P : Cᵒᵖ ⥤ D) → PreservesLimit (MulticospanIndex.multicospan (Cover.index W P)) F
P : Cᵒᵖ ⥤ D
⊢ whiskerRight (toPlus J P) F ≫
((plusCompIso J F P).hom ≫ plusMap J (whiskerRight (toPlus J P) F)) ≫ plusMap J (plusCompIso J F P).hom =
toSheafify J (P ⋙ F) Tactic: slice_lhs 2 3 => rw [plusCompIso_whiskerRight] State Before: C : Type u
inst✝⁸ : Category C
J : GrothendieckTopology C
D : Type w₁
inst✝⁷ : Category D
E : Type w₂
inst✝⁶ : Category E
F : D ⥤ E
inst✝⁵ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) D
inst✝⁴ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) E
inst✝³ : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ D
inst✝² : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ E
inst✝¹ : (X : C) → PreservesColimitsOfShape (Cover J X)ᵒᵖ F
inst✝ : (X : C) → (W : Cover J X) → (P : Cᵒᵖ ⥤ D) → PreservesLimit (MulticospanIndex.multicospan (Cover.index W P)) F
P : Cᵒᵖ ⥤ D
⊢ whiskerRight (toPlus J P) F ≫
((plusCompIso J F P).hom ≫ plusMap J (whiskerRight (toPlus J P) F)) ≫ plusMap J (plusCompIso J F P).hom =
toSheafify J (P ⋙ F) State After: C : Type u
inst✝⁸ : Category C
J : GrothendieckTopology C
D : Type w₁
inst✝⁷ : Category D
E : Type w₂
inst✝⁶ : Category E
F : D ⥤ E
inst✝⁵ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) D
inst✝⁴ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) E
inst✝³ : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ D
inst✝² : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ E
inst✝¹ : (X : C) → PreservesColimitsOfShape (Cover J X)ᵒᵖ F
inst✝ : (X : C) → (W : Cover J X) → (P : Cᵒᵖ ⥤ D) → PreservesLimit (MulticospanIndex.multicospan (Cover.index W P)) F
P : Cᵒᵖ ⥤ D
⊢ toPlus J (P ⋙ F) ≫ plusMap J (toPlus J (P ⋙ F)) = toSheafify J (P ⋙ F) Tactic: rw [Category.assoc, ← J.plusMap_comp, whiskerRight_toPlus_comp_plusCompIso_hom, ←
Category.assoc, whiskerRight_toPlus_comp_plusCompIso_hom] State Before: C : Type u
inst✝⁸ : Category C
J : GrothendieckTopology C
D : Type w₁
inst✝⁷ : Category D
E : Type w₂
inst✝⁶ : Category E
F : D ⥤ E
inst✝⁵ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) D
inst✝⁴ : ∀ (α β : Type (max v u)) (fst snd : β → α), HasLimitsOfShape (WalkingMulticospan fst snd) E
inst✝³ : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ D
inst✝² : ∀ (X : C), HasColimitsOfShape (Cover J X)ᵒᵖ E
inst✝¹ : (X : C) → PreservesColimitsOfShape (Cover J X)ᵒᵖ F
inst✝ : (X : C) → (W : Cover J X) → (P : Cᵒᵖ ⥤ D) → PreservesLimit (MulticospanIndex.multicospan (Cover.index W P)) F
P : Cᵒᵖ ⥤ D
⊢ toPlus J (P ⋙ F) ≫ plusMap J (toPlus J (P ⋙ F)) = toSheafify J (P ⋙ F) State After: no goals Tactic: rfl
|
%RELMTEST
%script to test two earthquake rate hypotheses using earthquake data
%
region='Northern California Aftershock model';
nsim=100;
mt=2;
park1=vRatesH;
park2=vRatesN;
%load H1;
%load H2;
%park1=H1; %Hypothesis with variable b-values
%park2=H2; %Null hypothesis, uniform b-value
clear test null;
% xmin(i)=park1(j,1);
% xmax(i)=park1(j,2);
% ymin(i)=park1(j,3);
% ymax(i)=park1(j,4);
% zmin(i)=park1(j,5);
% zmax(i)=park1(j,6);
% magmin(i)=park1(j,7);
% magmax(i)=park1(j,8);
% lamda1(i)=park1(j,9);
% weight(i)=park1(j,10);
nquake=park1(:,11);
[m,n] = size(park1);
magmin=park1(:,7);
lamda1=park1(:,9);
lamda2=park2(:,9);
weight1=park1(:,10);
weight2=park2(:,10);
weight=weight1.*weight2.*(magmin>mt);
%
% Remove rows of matrix for which weight is zero
%
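% (An equivalent vectorized sketch, for reference:
%    idx  = weight > 0;
%    w    = weight(idx)';  nq   = nquake(idx)';
%    lam1 = lamda1(idx)';  lam2 = lamda2(idx)';  mmin = magmin(idx)';  )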
j=0;
for i = 1:m
if weight(i)>0
j=j+1;
w(j)=weight(i);
nq(j)=nquake(i);
lam1(j)=lamda1(i);
lam2(j)=lamda2(i);
mmin(j)=magmin(i);
end
end
nq=w.*nq;
Nquake=sum(nq);
lam1=w.*lam1;
lam2=w.*lam2;
clear park1 park2 lamda1 lamda2 nquake magmin weight weight1 weight2;
%
%make a weighted magnitude-frequency plot
%
mf=[mmin;nq;lam1;lam2]';
mfsort=sortrows(mf);
mag=mfsort(:,1);
Fobs=flip(cumsum(flip(mfsort(:,2))));
Fth1=flip(cumsum(flip(mfsort(:,3))));
Fth2=flip(cumsum(flip(mfsort(:,4))));
figure%(1)
semilogy(mag,Fobs,'r',mag,Fth1,'g',mag,Fth2,'b');
grid;
axis([3,8,.0001,100]);
%
% Evaluate whether total number of quakes is consistent with H1
%
%
Nhat=sum(lam1)
peq=poisspdf(Nquake, Nhat); % probability of exactly Nquake
Ple=poisscdf(Nquake, Nhat); % probability of less than or equal to Nquake
Pless=Ple-peq; % probability of less than Nquake
Pmore=1-Ple % probability of more than Nquake
P1_equal=peq;
P1_less=Pless;
P1_more=Pmore;
Nhat1=Nhat;
lamcum1=cumsum(lam1)/Nhat1;
% Evaluate whether total number of quakes is consistent with H2
Nhat=sum(lam2)
peq=poisspdf(Nquake, Nhat); % probability of exactly Nquake
Ple=poisscdf(Nquake, Nhat); % probability of less than or equal to Nquake
Pless=Ple-peq; % probability of less than Nquake
Pmore=1-Ple % probability of more than Nquake
P2_equal=peq;
P2_less=Pless;
P2_more=Pmore;
Nhat2=Nhat;
lamcum2=cumsum(lam2)/Nhat2;
%
% simulate catalogs according to H1,
% and evaluate likelihood scores of the simulated catalogs (nsquake) and the real catalog using lamda1 and lamda2
%
nsquake=simulate(Nquake,lam1, nsim);
[LLR1, rank11,rank12] = Rtest(lam1, lam2, nq, nsquake, w);
%
% simulate catalogs according to H2,
% and evaluate likelihood scores of the simulated catalogs (nsquake) and the real catalog using lamda1 and lamda2
%
nsquake=simulate(Nquake,lam2, nsim);
[LLR2, rank21,rank22] = Rtest(lam1, lam2, nq, nsquake, w);
%
%Plot cumulative likelihood scores for two hypotheses
%
alpha = sum(LLR2>0)/nsim
beta = sum(LLR1<0)/nsim
index=[1:nsim]/nsim;
x=[0,0];y=[0,1];
figure_w_normalized_uicontrolunits(2);
plot(LLR1,index,'g',LLR2,index,'r',x,y,'b');
xlabel('Likelihood ratio (Variable b/Constant b)');
ylabel('Fraction of cases');
title('Green assumes variable-b hypothesis; Red assumes constant-b hypothesis');
region, mt,Nquake, Nhat1,Nhat2,P1_less,P1_more,P2_less,P2_more, alpha, beta, rank11,rank12, rank21, rank22
%[rank1,rank2]
%[Nhat1,Nhat2]
%[P1_less, P2_less]
%[P1_more, P2_more]
|
From Test Require Import tactic.
Section FOFProblem.
Variable Universe : Set.
Variable UniverseElement : Universe.
Variable wd_ : Universe -> Universe -> Prop.
Variable col_ : Universe -> Universe -> Universe -> Prop.
Variable col_swap1_1 : (forall A B C : Universe, (col_ A B C -> col_ B A C)).
Variable col_swap2_2 : (forall A B C : Universe, (col_ A B C -> col_ B C A)).
Variable col_triv_3 : (forall A B : Universe, col_ A B B).
Variable wd_swap_4 : (forall A B : Universe, (wd_ A B -> wd_ B A)).
Variable col_trans_5 : (forall P Q A B C : Universe, ((wd_ P Q /\ (col_ P Q A /\ (col_ P Q B /\ col_ P Q C))) -> col_ A B C)).
Theorem pipo_6 : (forall O E Eprime A B C Aprime Bprime Cprime : Universe, ((wd_ O Eprime /\ (wd_ A O /\ (wd_ B O /\ (wd_ C O /\ (wd_ Aprime O /\ (wd_ Bprime O /\ (wd_ Cprime O /\ (wd_ O E /\ (wd_ O Eprime /\ (wd_ E Eprime /\ (col_ O E A /\ (col_ O E B /\ (col_ O E C /\ (col_ O Eprime Aprime /\ (col_ O Eprime Bprime /\ (col_ O Eprime Cprime /\ (col_ O E A /\ (col_ O E B /\ (col_ O E C /\ col_ O Eprime Bprime))))))))))))))))))) -> col_ O A B)).
Proof.
time tac.
Qed.
End FOFProblem.
|
lemma open_scaling[intro]: fixes s :: "'a::real_normed_vector set" assumes "c \<noteq> 0" and "open s" shows "open((\<lambda>x. c *\<^sub>R x) ` s)"
|
[STATEMENT]
lemma scopeExtPar:
fixes P :: pi
and Q :: pi
and x :: name
assumes xFreshP: "x \<sharp> P"
shows "<\<nu>x>(P \<parallel> Q) \<sim>\<^sup>s P \<parallel> <\<nu>x>Q"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. <\<nu>x>(P \<parallel> Q) \<sim>\<^sup>s P \<parallel> <\<nu>x>Q
[PROOF STEP]
proof(auto simp add: substClosed_def)
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>\<sigma>. <\<nu>x>(P \<parallel> Q)[<\<sigma>>] \<sim> P[<\<sigma>>] \<parallel> <\<nu>x>Q[<\<sigma>>]
[PROOF STEP]
fix s :: "(name \<times> name) list"
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>\<sigma>. <\<nu>x>(P \<parallel> Q)[<\<sigma>>] \<sim> P[<\<sigma>>] \<parallel> <\<nu>x>Q[<\<sigma>>]
[PROOF STEP]
have "\<exists>c::name. c \<sharp> (P, Q, s)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<exists>c. c \<sharp> (P, Q, s)
[PROOF STEP]
by(blast intro: name_exists_fresh)
[PROOF STATE]
proof (state)
this:
\<exists>c. c \<sharp> (P, Q, s)
goal (1 subgoal):
1. \<And>\<sigma>. <\<nu>x>(P \<parallel> Q)[<\<sigma>>] \<sim> P[<\<sigma>>] \<parallel> <\<nu>x>Q[<\<sigma>>]
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
\<exists>c. c \<sharp> (P, Q, s)
[PROOF STEP]
obtain c::name where cFreshP: "c \<sharp> P" and cFreshQ: "c \<sharp> Q" and cFreshs: "c \<sharp> s"
[PROOF STATE]
proof (prove)
using this:
\<exists>c. c \<sharp> (P, Q, s)
goal (1 subgoal):
1. (\<And>c. \<lbrakk>c \<sharp> P; c \<sharp> Q; c \<sharp> s\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by(force simp add: fresh_prod)
[PROOF STATE]
proof (state)
this:
c \<sharp> P
c \<sharp> Q
c \<sharp> s
goal (1 subgoal):
1. \<And>\<sigma>. <\<nu>x>(P \<parallel> Q)[<\<sigma>>] \<sim> P[<\<sigma>>] \<parallel> <\<nu>x>Q[<\<sigma>>]
[PROOF STEP]
have "<\<nu>x>(P \<parallel> Q) = <\<nu>c>(P \<parallel> ([(x, c)] \<bullet> Q))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. <\<nu>x>(P \<parallel> Q) = <\<nu>c>(P \<parallel> ([(x, c)] \<bullet> Q))
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. <\<nu>x>(P \<parallel> Q) = <\<nu>c>(P \<parallel> ([(x, c)] \<bullet> Q))
[PROOF STEP]
from cFreshP cFreshQ
[PROOF STATE]
proof (chain)
picking this:
c \<sharp> P
c \<sharp> Q
[PROOF STEP]
have "c \<sharp> P \<parallel> Q"
[PROOF STATE]
proof (prove)
using this:
c \<sharp> P
c \<sharp> Q
goal (1 subgoal):
1. c \<sharp> P \<parallel> Q
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
c \<sharp> P \<parallel> Q
goal (1 subgoal):
1. <\<nu>x>(P \<parallel> Q) = <\<nu>c>(P \<parallel> ([(x, c)] \<bullet> Q))
[PROOF STEP]
hence "<\<nu>x>(P \<parallel> Q) = <\<nu>c>([(x, c)] \<bullet> (P \<parallel> Q))"
[PROOF STATE]
proof (prove)
using this:
c \<sharp> P \<parallel> Q
goal (1 subgoal):
1. <\<nu>x>(P \<parallel> Q) = <\<nu>c>([(x, c)] \<bullet> P \<parallel> Q)
[PROOF STEP]
by(simp add: alphaRes)
[PROOF STATE]
proof (state)
this:
<\<nu>x>(P \<parallel> Q) = <\<nu>c>([(x, c)] \<bullet> P \<parallel> Q)
goal (1 subgoal):
1. <\<nu>x>(P \<parallel> Q) = <\<nu>c>(P \<parallel> ([(x, c)] \<bullet> Q))
[PROOF STEP]
with xFreshP cFreshP
[PROOF STATE]
proof (chain)
picking this:
x \<sharp> P
c \<sharp> P
<\<nu>x>(P \<parallel> Q) = <\<nu>c>([(x, c)] \<bullet> P \<parallel> Q)
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
x \<sharp> P
c \<sharp> P
<\<nu>x>(P \<parallel> Q) = <\<nu>c>([(x, c)] \<bullet> P \<parallel> Q)
goal (1 subgoal):
1. <\<nu>x>(P \<parallel> Q) = <\<nu>c>(P \<parallel> ([(x, c)] \<bullet> Q))
[PROOF STEP]
by(simp add: name_fresh_fresh)
[PROOF STATE]
proof (state)
this:
<\<nu>x>(P \<parallel> Q) = <\<nu>c>(P \<parallel> ([(x, c)] \<bullet> Q))
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
<\<nu>x>(P \<parallel> Q) = <\<nu>c>(P \<parallel> ([(x, c)] \<bullet> Q))
goal (1 subgoal):
1. \<And>\<sigma>. <\<nu>x>(P \<parallel> Q)[<\<sigma>>] \<sim> P[<\<sigma>>] \<parallel> <\<nu>x>Q[<\<sigma>>]
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
<\<nu>x>(P \<parallel> Q) = <\<nu>c>(P \<parallel> ([(x, c)] \<bullet> Q))
goal (1 subgoal):
1. \<And>\<sigma>. <\<nu>x>(P \<parallel> Q)[<\<sigma>>] \<sim> P[<\<sigma>>] \<parallel> <\<nu>x>Q[<\<sigma>>]
[PROOF STEP]
from cFreshQ
[PROOF STATE]
proof (chain)
picking this:
c \<sharp> Q
[PROOF STEP]
have "<\<nu>x>Q = <\<nu>c>([(x, c)] \<bullet> Q)"
[PROOF STATE]
proof (prove)
using this:
c \<sharp> Q
goal (1 subgoal):
1. <\<nu>x>Q = <\<nu>c>([(x, c)] \<bullet> Q)
[PROOF STEP]
by(simp add: alphaRes)
[PROOF STATE]
proof (state)
this:
<\<nu>x>Q = <\<nu>c>([(x, c)] \<bullet> Q)
goal (1 subgoal):
1. \<And>\<sigma>. <\<nu>x>(P \<parallel> Q)[<\<sigma>>] \<sim> P[<\<sigma>>] \<parallel> <\<nu>x>Q[<\<sigma>>]
[PROOF STEP]
ultimately
[PROOF STATE]
proof (chain)
picking this:
<\<nu>x>(P \<parallel> Q) = <\<nu>c>(P \<parallel> ([(x, c)] \<bullet> Q))
<\<nu>x>Q = <\<nu>c>([(x, c)] \<bullet> Q)
[PROOF STEP]
show "(<\<nu>x>(P \<parallel> Q))[<s>] \<sim> P[<s>] \<parallel> (<\<nu>x>Q)[<s>]"
[PROOF STATE]
proof (prove)
using this:
<\<nu>x>(P \<parallel> Q) = <\<nu>c>(P \<parallel> ([(x, c)] \<bullet> Q))
<\<nu>x>Q = <\<nu>c>([(x, c)] \<bullet> Q)
goal (1 subgoal):
1. <\<nu>x>(P \<parallel> Q)[<s>] \<sim> P[<s>] \<parallel> <\<nu>x>Q[<s>]
[PROOF STEP]
using cFreshs cFreshP
[PROOF STATE]
proof (prove)
using this:
<\<nu>x>(P \<parallel> Q) = <\<nu>c>(P \<parallel> ([(x, c)] \<bullet> Q))
<\<nu>x>Q = <\<nu>c>([(x, c)] \<bullet> Q)
c \<sharp> s
c \<sharp> P
goal (1 subgoal):
1. <\<nu>x>(P \<parallel> Q)[<s>] \<sim> P[<s>] \<parallel> <\<nu>x>Q[<s>]
[PROOF STEP]
by(force intro: Strong_Late_Bisim_SC.scopeExtPar)
[PROOF STATE]
proof (state)
this:
<\<nu>x>(P \<parallel> Q)[<s>] \<sim> P[<s>] \<parallel> <\<nu>x>Q[<s>]
goal:
No subgoals!
[PROOF STEP]
qed
|
{-# LANGUAGE OverloadedLists #-}
module Main where
import Numeric.GSL.ODE
import Numeric.LinearAlgebra
-- Differential equation
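-- Van der Pol oscillator, x'' - mu*(1 - x^2)*x' + x = 0, rewritten as the
-- first-order system [x', v'] = [v, -x + mu*v*(1 - x^2)]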
f :: Double -> [Double] -> [Double]
f t [x, v] = [v, - x + mu * v * (1 - x ^ 2)]
-- Mu: scalar damping strength
mu :: Double
mu = 0.1
-- Boundary conditions
ts :: Vector Double
ts = linspace 1000 (0, 50)
-- Use default solver: Embedded Runge-Kutta-Fehlberg (4, 5) method.
vanderpol1 :: [Vector Double]
vanderpol1 = toColumns $ odeSolve f [1, 0] ts
-- Use Runge-Kutta (2,3) solver
vanderpol2 :: [Vector Double]
vanderpol2 = toColumns $ odeSolveV RK2 hi epsAbs epsRel (l2v f) [1, 0] ts
where
epsAbs = 1.49012e-08
epsRel = epsAbs
hi = (ts ! 1 - ts ! 0) / 100
l2v f = \t -> fromList . f t . toList
main :: IO ()
main = do
print vanderpol1
print vanderpol2
|
function test_old_ft_topoplotTFR
% MEM 2gb
% WALLTIME 00:20:00
% DEPENDENCY_FT_TOPOPLOTTFR
% This script tests the ft_topoplotTFR function and should display a figure
% with the ctf275 layout showing power decreases at the parietal lobes
% load time-frequency data
datadir = dccnpath('/home/common/matlab/fieldtrip/data/test/');
fprintf('loading data\n');
%load observe_comm_moves_freqmtmconvol.mat
load(fullfile(datadir,'observe_comm_moves_freqmtmconvol.mat'));
% topoplot a low frequency band
figure;
cfg = [];
cfg.baselinetype = 'relchange';
cfg.baseline = [-2 -.5];
cfg.xlim = [.5 3];
cfg.ylim = [8 12];
cfg.zlim = [-1 1];
ft_topoplotTFR(cfg, obs_lo);
% topoplot a high frequency band
figure;
cfg = [];
cfg.baselinetype = 'relchange';
cfg.baseline = [-2 -.5];
cfg.xlim = [.5 3];
cfg.ylim = [30 40];
cfg.zlim = [-1 1];
ft_topoplotTFR(cfg, obs_hi);
|
# Original author: D. Eppstein, UC Irvine, August 12, 2003.
# The original code at https://www.ics.uci.edu/~eppstein/PADS/ is public domain.
"""Functions for reading and writing graphs in the *sparse6* format.
The *sparse6* file format is a space-efficient format for large sparse
graphs. For small graphs or large dense graphs, use the *graph6* file
format.
For more information, see the `sparse6`_ homepage.
.. _sparse6: https://users.cecs.anu.edu.au/~bdm/data/formats.html
"""
import networkx as nx
from networkx.exception import NetworkXError
from networkx.utils import open_file, not_implemented_for
from networkx.readwrite.graph6 import data_to_n, n_to_data
__all__ = ["from_sparse6_bytes", "read_sparse6", "to_sparse6_bytes", "write_sparse6"]
def _generate_sparse6_bytes(G, nodes, header):
"""Yield bytes in the sparse6 encoding of a graph.
`G` is an undirected simple graph. `nodes` is the list of nodes for
which the node-induced subgraph will be encoded; if `nodes` is the
list of all nodes in the graph, the entire graph will be
encoded. `header` is a Boolean that specifies whether to generate
the header ``b'>>sparse6<<'`` before the remaining data.
This function generates `bytes` objects in the following order:
1. the header (if requested),
2. the encoding of the number of nodes,
3. each character, one-at-a-time, in the encoding of the requested
node-induced subgraph,
4. a newline character.
This function raises :exc:`ValueError` if the graph is too large for
the sparse6 format (that is, has ``2 ** 36`` or more nodes).
"""
n = len(G)
if n >= 2 ** 36:
raise ValueError(
"sparse6 is only defined if number of nodes is less " "than 2 ** 36"
)
if header:
yield b">>sparse6<<"
yield b":"
for d in n_to_data(n):
yield str.encode(chr(d + 63))
k = 1
while 1 << k < n:
k += 1
def enc(x):
"""Big endian k-bit encoding of x"""
return [1 if (x & 1 << (k - 1 - i)) else 0 for i in range(k)]
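# For example, with k = 3: enc(5) == [1, 0, 1]  (5 == 0b101)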
edges = sorted((max(u, v), min(u, v)) for u, v in G.edges())
bits = []
curv = 0
for (v, u) in edges:
if v == curv: # current vertex edge
bits.append(0)
bits.extend(enc(u))
elif v == curv + 1: # next vertex edge
curv += 1
bits.append(1)
bits.extend(enc(u))
else: # skip to vertex v and then add edge to u
curv = v
bits.append(1)
bits.extend(enc(v))
bits.append(0)
bits.extend(enc(u))
if k < 6 and n == (1 << k) and ((-len(bits)) % 6) >= k and curv < (n - 1):
# Padding special case: small k, n=2^k,
# more than k bits of padding needed,
# current vertex is not (n-1) --
# appending 1111... would add a loop on (n-1)
bits.append(0)
bits.extend([1] * ((-len(bits)) % 6))
else:
bits.extend([1] * ((-len(bits)) % 6))
data = [
(bits[i + 0] << 5)
+ (bits[i + 1] << 4)
+ (bits[i + 2] << 3)
+ (bits[i + 3] << 2)
+ (bits[i + 4] << 1)
+ (bits[i + 5] << 0)
for i in range(0, len(bits), 6)
]
for d in data:
yield str.encode(chr(d + 63))
yield b"\n"
def from_sparse6_bytes(string):
"""Read an undirected graph in sparse6 format from string.
Parameters
----------
string : string
Data in sparse6 format
Returns
-------
G : Graph
Raises
------
NetworkXError
If the string is unable to be parsed in sparse6 format
Examples
--------
>>> G = nx.from_sparse6_bytes(b":A_")
>>> sorted(G.edges())
[(0, 1), (0, 1), (0, 1)]
See Also
--------
read_sparse6, write_sparse6
References
----------
.. [1] Sparse6 specification
<https://users.cecs.anu.edu.au/~bdm/data/formats.html>
"""
if string.startswith(b">>sparse6<<"):
string = string[11:]
if not string.startswith(b":"):
raise NetworkXError("Expected leading colon in sparse6")
chars = [c - 63 for c in string[1:]]
n, data = data_to_n(chars)
k = 1
while 1 << k < n:
k += 1
def parseData():
"""Returns stream of pairs b[i], x[i] for sparse6 format."""
chunks = iter(data)
d = None # partial data word
dLen = 0 # how many unparsed bits are left in d
while 1:
if dLen < 1:
try:
d = next(chunks)
except StopIteration:
return
dLen = 6
dLen -= 1
b = (d >> dLen) & 1 # grab top remaining bit
x = d & ((1 << dLen) - 1) # partially built up value of x
xLen = dLen # how many bits included so far in x
while xLen < k: # now grab full chunks until we have enough
try:
d = next(chunks)
except StopIteration:
return
dLen = 6
x = (x << 6) + d
xLen += 6
x = x >> (xLen - k) # shift back the extra bits
dLen = xLen - k
yield b, x
v = 0
G = nx.MultiGraph()
G.add_nodes_from(range(n))
multigraph = False
for b, x in parseData():
if b == 1:
v += 1
# padding with ones can cause overlarge number here
if x >= n or v >= n:
break
elif x > v:
v = x
else:
if G.has_edge(x, v):
multigraph = True
G.add_edge(x, v)
if not multigraph:
G = nx.Graph(G)
return G
def to_sparse6_bytes(G, nodes=None, header=True):
"""Convert an undirected graph to bytes in sparse6 format.
Parameters
----------
G : Graph (undirected)
nodes: list or iterable
Nodes are labeled 0...n-1 in the order provided. If None the ordering
given by ``G.nodes()`` is used.
header: bool
If True add '>>sparse6<<' bytes to head of data.
Raises
------
NetworkXNotImplemented
If the graph is directed.
ValueError
If the graph has at least ``2 ** 36`` nodes; the sparse6 format
is only defined for graphs of order less than ``2 ** 36``.
Examples
--------
>>> nx.to_sparse6_bytes(nx.path_graph(2))
b'>>sparse6<<:An\\n'
See Also
--------
from_sparse6_bytes, read_sparse6, write_sparse6
Notes
-----
The returned bytes end with a newline character.
The format does not support edge or node labels.
References
----------
.. [1] Sparse6 specification
<https://users.cecs.anu.edu.au/~bdm/data/formats.html>
"""
if nodes is not None:
G = G.subgraph(nodes)
G = nx.convert_node_labels_to_integers(G, ordering="sorted")
return b"".join(_generate_sparse6_bytes(G, nodes, header))
@open_file(0, mode="rb")
def read_sparse6(path):
"""Read an undirected graph in sparse6 format from path.
Parameters
----------
path : file or string
File or filename to write.
Returns
-------
G : Graph/Multigraph or list of Graphs/MultiGraphs
If the file contains multiple lines then a list of graphs is returned
Raises
------
NetworkXError
If the string is unable to be parsed in sparse6 format
Examples
--------
You can read a sparse6 file by giving the path to the file::
>>> import tempfile
>>> with tempfile.NamedTemporaryFile() as f:
... _ = f.write(b">>sparse6<<:An\\n")
... _ = f.seek(0)
... G = nx.read_sparse6(f.name)
>>> list(G.edges())
[(0, 1)]
You can also read a sparse6 file by giving an open file-like object::
>>> import tempfile
>>> with tempfile.NamedTemporaryFile() as f:
... _ = f.write(b">>sparse6<<:An\\n")
... _ = f.seek(0)
... G = nx.read_sparse6(f)
>>> list(G.edges())
[(0, 1)]
See Also
--------
from_sparse6_bytes, write_sparse6
References
----------
.. [1] Sparse6 specification
<https://users.cecs.anu.edu.au/~bdm/data/formats.html>
"""
glist = []
for line in path:
line = line.strip()
if not len(line):
continue
glist.append(from_sparse6_bytes(line))
if len(glist) == 1:
return glist[0]
else:
return glist
@not_implemented_for("directed")
@open_file(1, mode="wb")
def write_sparse6(G, path, nodes=None, header=True):
"""Write graph G to given path in sparse6 format.
Parameters
----------
G : Graph (undirected)
path : file or string
File or filename to write
nodes: list or iterable
Nodes are labeled 0...n-1 in the order provided. If None the ordering
given by G.nodes() is used.
header: bool
If True add '>>sparse6<<' string to head of data
Raises
------
NetworkXError
If the graph is directed
Examples
--------
You can write a sparse6 file by giving the path to the file::
>>> import tempfile
>>> with tempfile.NamedTemporaryFile() as f:
... nx.write_sparse6(nx.path_graph(2), f.name)
... print(f.read())
b'>>sparse6<<:An\\n'
You can also write a sparse6 file by giving an open file-like object::
>>> with tempfile.NamedTemporaryFile() as f:
... nx.write_sparse6(nx.path_graph(2), f)
... _ = f.seek(0)
... print(f.read())
b'>>sparse6<<:An\\n'
See Also
--------
read_sparse6, from_sparse6_bytes
Notes
-----
The format does not support edge or node labels.
References
----------
.. [1] Sparse6 specification
<https://users.cecs.anu.edu.au/~bdm/data/formats.html>
"""
if nodes is not None:
G = G.subgraph(nodes)
G = nx.convert_node_labels_to_integers(G, ordering="sorted")
for b in _generate_sparse6_bytes(G, nodes, header):
path.write(b)
|
Well, the only thing I have to complain about is the size of the cap: it is friggin' huge. My head just doesn't fit in it, so it takes more than a few tries to get it to stay in. If I have it too high it shows through the top hair, and if it's too low there's no possible way I can make it stay comfortably (because, like I said, it's too big). I don't know, maybe my head is just small... But other than that I'm very happy with them. My husband is stationed in Alaska, and ever since we moved up here my hair started falling out because of the weather, so these are a great alternative. They're soft and shiny (but not so shiny that they don't look real), and my coworkers and friends didn't even notice I was wearing extensions, lol. And the color matches just fine.
The extensions came early (woo!); it took about 3 weeks to get them. These extensions are absolutely real hair, but they won't be super-thick, $200-quality hair. And that's fine! You need to expect that they won't be the most amazing extensions in the world, but for the price, I have no complaints. My hair is thinner anyway, so a ton of extra volume would look silly. The color is great; it matches my light blonde hair perfectly. Also worth mentioning: my favorite part is that the clips are SOLID. I wore them all night through the bar scene and never had to readjust or re-clip them in the bathroom. If you want subtle volume and lots of length, definitely invest in these extensions.
|
function s = collapse(t,dims,fun)
%COLLAPSE Collapse sparse tensor along specified dimensions.
%
% S = COLLAPSE(T,DIMS) sums the entries of T along all dimensions
% specified in DIMS. If DIMS is negative, then T is summed across
% all dimensions *not* specified by -DIMS.
%
% S = COLLAPSE(T) is shorthand for S = COLLAPSE(T,1:ndims(T)).
%
% S = COLLAPSE(T,DIMS,FUN) accumulates the entries of T using the
% accumulation function @FUN.
%
% Examples
% subs = [1 1 1; 1 1 3; 2 2 4; 4 4 4]
% vals = [10.5; 1.5; 2.5; 3.5]
% X = sptensor(subs,vals,[4 4 4]);
% Y = collapse(X,[2 3]) %<-- sum of entries in each mode-1 slice
% Y = collapse(ones(X),[1 2]) %<-- nnz in each mode-3 slide
% Y = collapse(ones(X),[1 2],@max) %<-- 1 if mode-3 has any entry
% Y = collapse(ones(X),-3,@max); %<-- equivalent
%
% See also SPTENSOR, SPTENSOR/SCALE.
%
%MATLAB Tensor Toolbox.
%Copyright 2015, Sandia Corporation.
% This is the MATLAB Tensor Toolbox by T. Kolda, B. Bader, and others.
% http://www.sandia.gov/~tgkolda/TensorToolbox.
% Copyright (2015) Sandia Corporation. Under the terms of Contract
% DE-AC04-94AL85000, there is a non-exclusive license for use of this
% work by or on behalf of the U.S. Government. Export of this data may
% require a license from the United States Government.
% The full license terms can be found in the file LICENSE.txt
if ~exist('fun', 'var')
fun = @sum;
end
if ~exist('dims', 'var')
dims = 1:ndims(t);
end
dims = tt_dimscheck(dims,ndims(t));
remdims = setdiff(1:ndims(t),dims);
% Check for the case where we accumulate over *all* dimensions
if isempty(remdims)
s = fun(t.vals);
return;
end
% Calculate the size of the result
newsiz = size(t,remdims);
% Check for the case where the result is just a dense vector
if numel(remdims) == 1
if ~isempty(t.subs)
s = accumarray(t.subs(:,remdims), t.vals, [newsiz 1], fun);
else
s = zeros(newsiz,1);
end
return;
end
% Create the result
if ~isempty(t.subs)
s = sptensor(t.subs(:,remdims), t.vals, newsiz, fun);
else
s = sptensor([],[],newsiz);
end
|
[STATEMENT]
lemma less_convert:"\<lbrakk> a = b; c < b \<rbrakk> \<Longrightarrow> c < a"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>a = b; c < b\<rbrakk> \<Longrightarrow> c < a
[PROOF STEP]
by auto
|
function [nll,g,H,T] = LogisticLoss(w,X,y)
% w(feature,1)
% X(instance,feature)
% y(instance,1)
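%
% Negative log-likelihood for logistic regression (assuming labels y_i in {-1,+1}):
%   nll(w) = sum_i log(1 + exp(-y_i * x_i' * w))
% evaluated via logsumexp for numerical stability. g and H are the analytic
% gradient and Hessian; T is the (p x p x p) third-derivative tensor.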
[n,p] = size(X);
Xw = X*w;
yXw = y.*Xw;
nll = sum(mylogsumexp([zeros(n,1) -yXw]));
if nargout > 1
if nargout > 2
sig = 1./(1+exp(-yXw));
g = -X.'*(y.*(1-sig));
else
%g = -X.'*(y./(1+exp(yXw)));
g = -(X.'*(y./(1+exp(yXw))));
end
end
if nargout > 2
H = X.'*diag(sparse(sig.*(1-sig)))*X;
end
if nargout > 3
T = zeros(p,p,p);
for j1 = 1:p
for j2 = 1:p
for j3 = 1:p
T(j1,j2,j3) = sum(y(:).^3.*X(:,j1).*X(:,j2).*X(:,j3).*sig.*(1-sig).*(1-2*sig));
end
end
end
end
|
From iris_examples.logrel_heaplang Require Export ltyping.
From iris.heap_lang.lib Require Import assert.
From iris.algebra Require Import auth.
From iris.base_logic.lib Require Import invariants.
From iris.heap_lang Require Import notation proofmode.
(* Semantic typing of a symbol ADT (taken from Dreyer's POPL'18 talk) *)
Definition symbol_adt_inc : val := λ: "x" <>, FAA "x" #1.
Definition symbol_adt_check : val := λ: "x" "y", assert: "y" < !"x".
Definition symbol_adt : val := λ: <>,
let: "x" := Alloc #0 in (symbol_adt_inc "x", symbol_adt_check "x").
Definition symbol_adt_ty `{heapG Σ} : lty Σ :=
(() → ∃ A, (() → A) * (A → ()))%lty.
(* The required ghost theory *)
Class symbolG Σ := { symbol_inG :> inG Σ (authR mnatUR) }.
Definition symbolΣ : gFunctors := #[GFunctor (authR mnatUR)].
Instance subG_symbolΣ {Σ} : subG symbolΣ Σ → symbolG Σ.
Proof. solve_inG. Qed.
Section symbol_ghosts.
Context `{!symbolG Σ}.
Definition counter (γ : gname) (n : nat) : iProp Σ := own γ (● (n : mnat)).
Definition symbol (γ : gname) (n : nat) : iProp Σ := own γ (◯ (S n : mnat)).
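(* The authoritative element (● n) tracks the current counter value, while a
   fragment (◯ (S n)) is a persistent lower-bound witness: owning [symbol γ n]
   certifies that the counter is strictly greater than n, which is what
   [symbol_obs] below establishes. *)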
Global Instance counter_timeless γ n : Timeless (counter γ n).
Proof. apply _. Qed.
Global Instance symbol_timeless γ n : Timeless (symbol γ n).
Proof. apply _. Qed.
Lemma counter_exclusive γ n1 n2 : counter γ n1 -∗ counter γ n2 -∗ False.
Proof.
apply bi.wand_intro_r. rewrite -own_op own_valid /=. by iDestruct 1 as %[].
Qed.
Global Instance symbol_persistent γ n : Persistent (symbol γ n).
Proof. apply _. Qed.
Lemma counter_alloc n : (|==> ∃ γ, counter γ n)%I.
Proof.
iMod (own_alloc (● (n:mnat) ⋅ ◯ (n:mnat))) as (γ) "[Hγ Hγf]";
first by apply auth_both_valid.
iExists γ. by iFrame.
Qed.
Lemma counter_inc γ n : counter γ n ==∗ counter γ (S n) ∗ symbol γ n.
Proof.
rewrite -own_op.
apply own_update, auth_update_alloc, mnat_local_update. omega.
Qed.
Lemma symbol_obs γ s n : counter γ n -∗ symbol γ s -∗ ⌜(s < n)%nat⌝.
Proof.
iIntros "Hc Hs".
iDestruct (own_valid_2 with "Hc Hs") as %[?%mnat_included _]%auth_both_valid.
iPureIntro. omega.
Qed.
End symbol_ghosts.
Typeclasses Opaque counter symbol.
Section ltyped_symbol_adt.
Context `{heapG Σ, symbolG Σ}.
Definition symbol_adtN := nroot .@ "symbol_adt".
Definition symbol_inv (γ : gname) (l : loc) : iProp Σ :=
(∃ n : nat, l ↦ #n ∗ counter γ n)%I.
Definition lty_symbol (γ : gname) : lty Σ := Lty (λ w,
∃ n : nat, ⌜w = #n⌝ ∧ symbol γ n)%I.
Lemma ltyped_symbol_adt Γ : Γ ⊨ symbol_adt : symbol_adt_ty.
Proof.
iIntros (vs) "!# _ /=". iApply wp_value.
iIntros "!#" (v ->). wp_lam. wp_alloc l as "Hl"; wp_pures.
iMod (counter_alloc 0) as (γ) "Hc".
iMod (inv_alloc symbol_adtN _ (symbol_inv γ l) with "[Hl Hc]") as "#?".
{ iExists 0%nat. by iFrame. }
do 2 (wp_lam; wp_pures).
iExists (lty_symbol γ), _, _; repeat iSplit=> //.
- repeat rewrite /lty_car /=. iIntros "!#" (? ->). wp_pures.
iInv symbol_adtN as (n) ">[Hl Hc]". wp_faa.
iMod (counter_inc with "Hc") as "[Hc #Hs]".
iModIntro; iSplitL; last eauto.
iExists (S n). rewrite Nat2Z.inj_succ -Z.add_1_r. iFrame.
- repeat rewrite /lty_car /=. iIntros "!#" (v).
iDestruct 1 as (n ->) "#Hs". wp_pures. iApply wp_assert.
wp_bind (!_)%E. iInv symbol_adtN as (n') ">[Hl Hc]". wp_load.
iDestruct (symbol_obs with "Hc Hs") as %?. iModIntro. iSplitL.
{ iExists n'. iFrame. }
wp_op. rewrite bool_decide_true; last lia. eauto.
Qed.
End ltyped_symbol_adt.
|
/-
Copyright (c) 2019 Chris Hughes. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author: Chris Hughes
-/
import Mathlib.PrePort
import Mathlib.Lean3Lib.init.default
import Mathlib.data.rat.default
import Mathlib.set_theory.cardinal
import Mathlib.PostPort
namespace Mathlib
namespace rat
protected instance infinite : infinite ℚ := infinite.of_injective coe nat.cast_injective
protected instance denumerable : denumerable ℚ :=
let T : Type :=
Subtype fun (x : ℤ × ℕ) => 0 < prod.snd x ∧ nat.coprime (int.nat_abs (prod.fst x)) (prod.snd x);
let _inst : infinite T := sorry;
let _inst_1 : encodable T := encodable.subtype;
let _inst_2 : denumerable T := denumerable.of_encodable_of_infinite T;
denumerable.of_equiv T denumerable_aux
end rat
namespace cardinal
theorem mk_rat : mk ℚ = omega := iff.mp denumerable_iff (Nonempty.intro rat.denumerable)
end Mathlib
|
module Main
-- Using :di we can see the internal structure. MkFoo should be a newtype,
-- MkBar not
data Foo : Type where
MkFoo : String -> Foo
data Bar : Type where
[noNewtype]
MkBar : String -> Bar
|
#=
The system default rtol is sqrt(eps(Float64)) == ldexp(0.5, -25) ≈ 1.5e-8.
releps(Float64) below uses ldexp(0.5, -31) ≈ 2.3e-10 instead: 64x (6 bits)
tighter than the system default, so roughly 32.5 significant bits must match.
=#
releps(::Type{Float64}) = ldexp(0.5, -31)
releps(::Type{Float32}) = ldexp(inv(sqrt(2.0f0)), -17)
releps(::Type{Double64}) = ldexp(0.5, -62)
releps(::Type{Double32}) = ldexp(inv(sqrt(2.0f0)), -34)
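# A minimal usage sketch (the numeric values are illustrative):
#   isapprox(Double64(1)/3, 1/3)            # true: the operands differ by ~2e-17,
#                                           # far inside releps(Float64) ≈ 2.3e-10
#   isapprox(Double64(1)/3, 1/3 + 1.0e-8)   # false: 1e-8 exceeds rtol*|x| ≈ 7.8e-11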
function Base.isapprox(x::DoubleFloat{T}, y::T; atol::Real=0.0, rtol::Real=atol>0 ? 0 : releps(T), nans::Bool=false, norm::Function=norm) where {T<:IEEEFloat}
return isapprox(x, DoubleFloat{T}(y), atol=atol, rtol=rtol, nans=nans, norm=norm)
end
function Base.isapprox(x::T, y::DoubleFloat{T}; atol::Real=0.0, rtol::Real=atol>0 ? 0 : releps(T), nans::Bool=false, norm::Function=norm) where {T<:IEEEFloat}
return isapprox(DoubleFloat{T}(x), y, atol=atol, rtol=rtol, nans=nans, norm=norm)
end
function Base.isapprox(x::DoubleFloat{T}, y::DoubleFloat{T}; atol::Real=0.0, rtol::Real=atol>0 ? 0 : releps(DoubleFloat{T}), nans::Bool=false, norm::Function=norm) where {T<:IEEEFloat}
x == y || (isfinite(x) && isfinite(y) && abs(x-y) <= max(max(1.0e-32, atol), rtol*max(abs(x), abs(y)))) || (nans && isnan(x) && isnan(y))
end
Base.isapprox(x::DoubleFloat{T}, y::F; atol::Real=0.0, rtol::Real=atol>0.0 ? 0.0 : eps(max(abs(x), abs(y)))^(37/64), nans::Bool=false, norm::Function=norm) where {T<:IEEEFloat, F<:Real} =
isapprox(promote(x, y)..., atol=atol, rtol=rtol, nans=nans, norm=norm)
Base.isapprox(x::F, y::DoubleFloat{T}; atol::Real=0.0, rtol::Real=atol>0.0 ? 0.0 : eps(max(abs(x), abs(y)))^(37/64), nans::Bool=false, norm::Function=norm) where {T<:IEEEFloat, F<:Real} =
isapprox(promote(x, y)..., atol=atol, rtol=rtol, nans=nans, norm=norm)
function Base.lerpi(j::Integer, d::Integer, a::DoubleFloat{T}, b::DoubleFloat{T}) where {T}
t = DoubleFloat{T}(j)/d
a = fma(-t, a, a)
return fma(t, b, a)
end
function Base.clamp(x::DoubleFloat{T}, lo::DoubleFloat{T}, hi::DoubleFloat{T}) where {T}
lo <= x <= hi && return x
lo <= x && return hi
return lo
end
function Base.clamp(x::DoubleFloat{T}, lo::T, hi::T) where {T}
lo <= x <= hi && return x
lo <= x && return hi
return lo
end
function Base.clamp(x::T, lo::DoubleFloat{T}, hi::DoubleFloat{T}) where {T}
lo <= x <= hi && return x
lo <= x && return hi
return lo
end
abs2(x::DoubleFloat{T}) where {T} =
let absx = abs(x)
absx * absx
end
# for compatibility with old or unrevised outside linalg functions
function Base.:(+)(v::Vector{DoubleFloat{T}}, x::T) where {T}
return v .+ x
end
function Base.:(-)(v::Vector{DoubleFloat{T}}, x::T) where {T}
return v .- x
end
function Base.:(+)(m::Matrix{DoubleFloat{T}}, x::T) where {T}
return m .+ x
end
function Base.:(-)(m::Matrix{DoubleFloat{T}}, x::T) where {T}
return m .- x
end
|
!----------------------------------------------------------------------
!----------------------------------------------------------------------
! enz : A two-cell, one-substrate enzyme model
!----------------------------------------------------------------------
!----------------------------------------------------------------------
SUBROUTINE FUNC(NDIM,U,ICP,PAR,IJAC,F,DFDU,DFDP)
! ---------- ----
IMPLICIT NONE
INTEGER, INTENT(IN) :: NDIM, ICP(*), IJAC
DOUBLE PRECISION, INTENT(IN) :: U(NDIM), PAR(*)
DOUBLE PRECISION, INTENT(OUT) :: F(NDIM)
DOUBLE PRECISION, INTENT(INOUT) :: DFDU(NDIM,NDIM), DFDP(NDIM,*)
DOUBLE PRECISION R,s,s1,s2,s0,mu,rho,kappa
R(s)=s/(1+s+kappa*s**2)
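! R(s) = s/(1 + s + kappa*s**2) is a substrate-inhibited (Haldane-type) rate
! law; the two cells exchange substrate through the linear coupling terms
! (s2 - s1) and (s1 - s2) below.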
s1=U(1)
s2=U(2)
s0=PAR(1)
mu=PAR(2)
rho=PAR(3)
kappa=PAR(4)
F(1)=(s0 -s1) + (s2-s1) - rho * R(s1)
F(2)=(s0+mu-s2) + (s1-s2) - rho * R(s2)
END SUBROUTINE FUNC
SUBROUTINE STPNT(NDIM,U,PAR,T)
! ---------- -----
IMPLICIT NONE
INTEGER, INTENT(IN) :: NDIM
DOUBLE PRECISION, INTENT(INOUT) :: U(NDIM),PAR(*)
DOUBLE PRECISION, INTENT(IN) :: T
PAR(1)=0
PAR(2)=0
PAR(3)=100
PAR(4)=1
U(1)=0
U(2)=0
END SUBROUTINE STPNT
SUBROUTINE BCND
END SUBROUTINE BCND
SUBROUTINE ICND
END SUBROUTINE ICND
SUBROUTINE FOPT
END SUBROUTINE FOPT
SUBROUTINE PVLS
END SUBROUTINE PVLS
|
||| Additional data types related to ordering notions
module Data.Order
%default total
||| Trichotomous formalises the fact that three relations are mutually exclusive.
||| It is meant to be used with relations that complement each other so that the
||| `Trichotomous lt eq gt` relation is the total relation.
public export
data Trichotomous : (lt, eq, gt : a -> a -> Type) -> (a -> a -> Type) where
MkLT : {0 lt, eq, gt : a -> a -> Type} ->
lt v w -> Not (eq v w) -> Not (gt v w) -> Trichotomous lt eq gt v w
MkEQ : {0 lt, eq, gt : a -> a -> Type} ->
Not (lt v w) -> eq v w -> Not (gt v w) -> Trichotomous lt eq gt v w
MkGT : {0 lt, eq, gt : a -> a -> Type} ->
Not (lt v w) -> Not (eq v w) -> gt v w -> Trichotomous lt eq gt v w
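-- For example (with hypothetical ordering relations on Nat), a value of type
-- `Trichotomous LT Equal GT m n` pins down exactly one of m < n, m = n, m > n,
-- packaged together with refutations of the other two cases.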
|
# Weibull-style saturation curve y1 and its mirror image y2, plotted with ggplot2
library(ggplot2)

x <- seq(0, 4, 0.05)
a <- 2        # asymptote
lambda <- 3   # scale parameter
k <- 2        # shape parameter
aprime <- 0
y1 <- a * (1 - exp(-(x / lambda)^k)) * (1 - aprime) - a
y2 <- -y1
df <- data.frame(xv = x, yv1 = y1, yv2 = y2)
ggplot(df, aes(xv)) +
  geom_line(aes(y = yv1, colour = "yv1")) +
  geom_line(aes(y = yv2, colour = "yv2"))
|
Formal statement is: lemma sums_emeasure: "disjoint_family F \<Longrightarrow> (\<And>i. F i \<in> sets M) \<Longrightarrow> (\<lambda>i. emeasure M (F i)) sums emeasure M (\<Union>i. F i)" Informal statement is: If $F$ is a disjoint family of measurable sets, then the sum of the measures of the sets in $F$ is equal to the measure of the union of the sets in $F$.
|
The limit of $f$ at $a$ within $S$ is $l$ if and only if for every $\epsilon > 0$, there exists a $\delta > 0$ such that for all $x \in S$, if $0 < |x - a| \leq \delta$, then $|f(x) - l| < \epsilon$.
|
/-
Copyright (c) 2015 Nathaniel Thomas. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Nathaniel Thomas, Jeremy Avigad, Johannes Hölzl, Mario Carneiro
Modules over a ring.
-/
import algebra.ring algebra.big_operators group_theory.subgroup group_theory.group_action
open function
universes u v w x
variables {α : Type u} {β : Type v} {γ : Type w} {δ : Type x}
-- /-- Typeclass for types with a scalar multiplication operation, denoted `•` (`\bu`) -/
-- class has_scalar (α : Type u) (γ : Type v) := (smul : α → γ → γ)
-- infixr ` • `:73 := has_scalar.smul
/-- A semimodule is a generalization of vector spaces to a scalar semiring.
It consists of a scalar semiring `α` and an additive monoid of "vectors" `β`,
connected by a "scalar multiplication" operation `r • x : β`
(where `r : α` and `x : β`) with some natural associativity and
distributivity axioms similar to those on a ring. -/
class semimodule (α : Type u) (β : Type v) [semiring α]
[add_comm_monoid β] extends distrib_mul_action α β :=
(add_smul : ∀(r s : α) (x : β), (r + s) • x = r • x + s • x)
(zero_smul : ∀x : β, (0 : α) • x = 0)
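-- For instance, iterated addition (`add_monoid.smul`) makes every
-- `add_comm_monoid` an `ℕ`-semimodule; see the instance near the end of this file.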
section semimodule
variables [R:semiring α] [add_comm_monoid β] [semimodule α β] (r s : α) (x y : β)
include R
theorem add_smul : (r + s) • x = r • x + s • x := semimodule.add_smul r s x
variables (α)
@[simp] theorem zero_smul : (0 : α) • x = 0 := semimodule.zero_smul α x
lemma smul_smul : r • s • x = (r * s) • x := (mul_smul _ _ _).symm
instance smul.is_add_monoid_hom {r : α} : is_add_monoid_hom (λ x : β, r • x) :=
by refine_struct {..}; simp [smul_add]
end semimodule
/-- A module is a generalization of vector spaces to a scalar ring.
It consists of a scalar ring `α` and an additive group of "vectors" `β`,
connected by a "scalar multiplication" operation `r • x : β`
(where `r : α` and `x : β`) with some natural associativity and
distributivity axioms similar to those on a ring. -/
class module (α : Type u) (β : Type v) [ring α] [add_comm_group β] extends semimodule α β
structure module.core (α β) [ring α] [add_comm_group β] extends has_scalar α β :=
(smul_add : ∀(r : α) (x y : β), r • (x + y) = r • x + r • y)
(add_smul : ∀(r s : α) (x : β), (r + s) • x = r • x + s • x)
(mul_smul : ∀(r s : α) (x : β), (r * s) • x = r • s • x)
(one_smul : ∀x : β, (1 : α) • x = x)
def module.of_core {α β} [ring α] [add_comm_group β] (M : module.core α β) : module α β :=
by letI := M.to_has_scalar; exact
{ zero_smul := λ x,
have (0 : α) • x + (0 : α) • x = (0 : α) • x + 0, by rw ← M.add_smul; simp,
add_left_cancel this,
smul_zero := λ r,
have r • (0:β) + r • 0 = r • 0 + 0, by rw ← M.smul_add; simp,
add_left_cancel this,
..M }
section module
variables [ring α] [add_comm_group β] [module α β] (r s : α) (x y : β)
@[simp] theorem neg_smul : -r • x = - (r • x) :=
eq_neg_of_add_eq_zero (by rw [← add_smul, add_left_neg, zero_smul])
variables (α)
theorem neg_one_smul (x : β) : (-1 : α) • x = -x := by simp
variables {α}
@[simp] theorem smul_neg : r • (-x) = -(r • x) :=
by rw [← neg_one_smul α, ← mul_smul, mul_neg_one, neg_smul]
theorem smul_sub (r : α) (x y : β) : r • (x - y) = r • x - r • y :=
by simp [smul_add]; rw smul_neg
theorem sub_smul (r s : α) (y : β) : (r - s) • y = r • y - s • y :=
by simp [add_smul]
end module
instance semiring.to_semimodule [r : semiring α] : semimodule α α :=
{ smul := (*),
smul_add := mul_add,
add_smul := add_mul,
mul_smul := mul_assoc,
one_smul := one_mul,
zero_smul := zero_mul,
smul_zero := mul_zero, ..r }
@[simp] lemma smul_eq_mul [semiring α] {a a' : α} : a • a' = a * a' := rfl
instance ring.to_module [r : ring α] : module α α :=
{ ..semiring.to_semimodule }
class is_linear_map (α : Type u) {β : Type v} {γ : Type w}
[ring α] [add_comm_group β] [add_comm_group γ] [module α β] [module α γ]
(f : β → γ) : Prop :=
(add : ∀x y, f (x + y) = f x + f y)
(smul : ∀(c : α) x, f (c • x) = c • f x)
structure linear_map (α : Type u) (β : Type v) (γ : Type w)
[ring α] [add_comm_group β] [add_comm_group γ] [module α β] [module α γ] :=
(to_fun : β → γ)
(add : ∀x y, to_fun (x + y) = to_fun x + to_fun y)
(smul : ∀(c : α) x, to_fun (c • x) = c • to_fun x)
infixr ` →ₗ `:25 := linear_map _
notation β ` →ₗ[`:25 α `] ` γ := linear_map α β γ
namespace linear_map
variables [ring α] [add_comm_group β] [add_comm_group γ] [add_comm_group δ]
variables [module α β] [module α γ] [module α δ]
variables (f g : β →ₗ[α] γ)
include α
instance : has_coe_to_fun (β →ₗ[α] γ) := ⟨_, to_fun⟩
theorem is_linear : is_linear_map α f := {..f}
@[extensionality] theorem ext {f g : β →ₗ[α] γ} (H : ∀ x, f x = g x) : f = g :=
by cases f; cases g; congr'; exact funext H
theorem ext_iff {f g : β →ₗ[α] γ} : f = g ↔ ∀ x, f x = g x :=
⟨by rintro rfl; simp, ext⟩
@[simp] lemma map_add (x y : β) : f (x + y) = f x + f y := f.add x y
@[simp] lemma map_smul (c : α) (x : β) : f (c • x) = c • f x := f.smul c x
@[simp] lemma map_zero : f 0 = 0 :=
by rw [← zero_smul α, map_smul f 0 0, zero_smul]
instance : is_add_group_hom f := ⟨map_add f⟩
@[simp] lemma map_neg (x : β) : f (- x) = - f x :=
by rw [← neg_one_smul α, map_smul, neg_one_smul]
@[simp] lemma map_sub (x y : β) : f (x - y) = f x - f y :=
by simp [map_neg, map_add]
@[simp] lemma map_sum {ι} {t : finset ι} {g : ι → β} :
f (t.sum g) = t.sum (λi, f (g i)) :=
(finset.sum_hom f).symm
def comp (f : γ →ₗ[α] δ) (g : β →ₗ[α] γ) : β →ₗ[α] δ := ⟨f ∘ g, by simp, by simp⟩
@[simp] lemma comp_apply (f : γ →ₗ[α] δ) (g : β →ₗ[α] γ) (x : β) : f.comp g x = f (g x) := rfl
def id : β →ₗ[α] β := ⟨id, by simp, by simp⟩
@[simp] lemma id_apply (x : β) : @id α β _ _ _ x = x := rfl
end linear_map
namespace is_linear_map
variables [ring α] [add_comm_group β] [add_comm_group γ]
variables [module α β] [module α γ]
include α
def mk' (f : β → γ) (H : is_linear_map α f) : β →ₗ γ := {to_fun := f, ..H}
@[simp] theorem mk'_apply {f : β → γ} (H : is_linear_map α f) (x : β) :
mk' f H x = f x := rfl
lemma is_linear_map_neg :
is_linear_map α (λ (z : β), -z) :=
is_linear_map.mk neg_add (λ x y, (smul_neg x y).symm)
lemma is_linear_map_smul {α R : Type*} [add_comm_group α] [comm_ring R] [module R α] (c : R):
is_linear_map R (λ (z : α), c • z) :=
begin
refine is_linear_map.mk (smul_add c) _,
intros _ _,
simp [smul_smul],
ac_refl
end
--TODO: move
lemma is_linear_map_smul' {α R : Type*} [add_comm_group α] [comm_ring R] [module R α] (a : α):
is_linear_map R (λ (c : R), c • a) :=
begin
refine is_linear_map.mk (λ x y, add_smul x y a) _,
intros _ _,
simp [smul_smul]
end
end is_linear_map
/-- A submodule of a module is one which is closed under vector operations.
This is a sufficient condition for the subset of vectors in the submodule
to themselves form a module. -/
structure submodule (α : Type u) (β : Type v) [ring α]
[add_comm_group β] [module α β] : Type v :=
(carrier : set β)
(zero : (0:β) ∈ carrier)
(add : ∀ {x y}, x ∈ carrier → y ∈ carrier → x + y ∈ carrier)
(smul : ∀ (c:α) {x}, x ∈ carrier → c • x ∈ carrier)
namespace submodule
variables [ring α] [add_comm_group β] [add_comm_group γ]
variables [module α β] [module α γ]
variables (p p' : submodule α β)
variables {r : α} {x y : β}
instance : has_coe (submodule α β) (set β) := ⟨submodule.carrier⟩
instance : has_mem β (submodule α β) := ⟨λ x p, x ∈ (p : set β)⟩
@[simp] theorem mem_coe : x ∈ (p : set β) ↔ x ∈ p := iff.rfl
theorem ext' {s t : submodule α β} (h : (s : set β) = t) : s = t :=
by cases s; cases t; congr'
protected theorem ext'_iff {s t : submodule α β} : (s : set β) = t ↔ s = t :=
⟨ext', λ h, h ▸ rfl⟩
@[extensionality] theorem ext {s t : submodule α β}
(h : ∀ x, x ∈ s ↔ x ∈ t) : s = t := ext' $ set.ext h
@[simp] lemma zero_mem : (0 : β) ∈ p := p.zero
lemma add_mem (h₁ : x ∈ p) (h₂ : y ∈ p) : x + y ∈ p := p.add h₁ h₂
lemma smul_mem (r : α) (h : x ∈ p) : r • x ∈ p := p.smul r h
lemma neg_mem (hx : x ∈ p) : -x ∈ p := by rw ← neg_one_smul α; exact p.smul_mem _ hx
lemma sub_mem (hx : x ∈ p) (hy : y ∈ p) : x - y ∈ p := p.add_mem hx (p.neg_mem hy)
lemma neg_mem_iff : -x ∈ p ↔ x ∈ p :=
⟨λ h, by simpa using neg_mem p h, neg_mem p⟩
lemma add_mem_iff_left (h₁ : y ∈ p) : x + y ∈ p ↔ x ∈ p :=
⟨λ h₂, by simpa using sub_mem _ h₂ h₁, λ h₂, add_mem _ h₂ h₁⟩
lemma add_mem_iff_right (h₁ : x ∈ p) : x + y ∈ p ↔ y ∈ p :=
⟨λ h₂, by simpa using sub_mem _ h₂ h₁, add_mem _ h₁⟩
lemma sum_mem {ι : Type w} [decidable_eq ι] {t : finset ι} {f : ι → β} :
(∀c∈t, f c ∈ p) → t.sum f ∈ p :=
finset.induction_on t (by simp [p.zero_mem]) (by simp [p.add_mem] {contextual := tt})
instance : has_add p := ⟨λx y, ⟨x.1 + y.1, add_mem _ x.2 y.2⟩⟩
instance : has_zero p := ⟨⟨0, zero_mem _⟩⟩
instance : has_neg p := ⟨λx, ⟨-x.1, neg_mem _ x.2⟩⟩
instance : has_scalar α p := ⟨λ c x, ⟨c • x.1, smul_mem _ c x.2⟩⟩
@[simp] lemma coe_add (x y : p) : (↑(x + y) : β) = ↑x + ↑y := rfl
@[simp] lemma coe_zero : ((0 : p) : β) = 0 := rfl
@[simp] lemma coe_neg (x : p) : ((-x : p) : β) = -x := rfl
@[simp] lemma coe_smul (r : α) (x : p) : ((r • x : p) : β) = r • ↑x := rfl
instance : add_comm_group p :=
by refine {add := (+), zero := 0, neg := has_neg.neg, ..};
{ intros, apply set_coe.ext, simp }
instance submodule_is_add_subgroup : is_add_subgroup (p : set β) :=
{ zero_mem := p.zero,
add_mem := p.add,
neg_mem := λ _, p.neg_mem }
lemma coe_sub (x y : p) : (↑(x - y) : β) = ↑x - ↑y := by simp
instance : module α p :=
by refine {smul := (•), ..};
{ intros, apply set_coe.ext, simp [smul_add, add_smul, mul_smul] }
protected def subtype : p →ₗ[α] β :=
by refine {to_fun := coe, ..}; simp [coe_smul]
@[simp] theorem subtype_apply (x : p) : p.subtype x = x := rfl
end submodule
@[reducible] def ideal (α : Type u) [comm_ring α] := submodule α α
namespace ideal
variables [comm_ring α] (I : ideal α) {a b : α}
protected lemma zero_mem : (0 : α) ∈ I := I.zero_mem
protected lemma add_mem : a ∈ I → b ∈ I → a + b ∈ I := I.add_mem
lemma neg_mem_iff : -a ∈ I ↔ a ∈ I := I.neg_mem_iff
lemma add_mem_iff_left : b ∈ I → (a + b ∈ I ↔ a ∈ I) := I.add_mem_iff_left
lemma add_mem_iff_right : a ∈ I → (a + b ∈ I ↔ b ∈ I) := I.add_mem_iff_right
protected lemma sub_mem : a ∈ I → b ∈ I → a - b ∈ I := I.sub_mem
lemma mul_mem_left : b ∈ I → a * b ∈ I := I.smul_mem _
lemma mul_mem_right (h : a ∈ I) : a * b ∈ I := mul_comm b a ▸ I.mul_mem_left h
end ideal
/-- A vector space is the same as a module, except the scalar ring is actually
a field. (This adds commutativity of the multiplication and existence of inverses.)
This is the traditional generalization of spaces like `ℝ^n`, which have a natural
addition operation and a way to multiply them by real numbers, but no multiplication
operation between vectors. -/
class vector_space (α : Type u) (β : Type v) [discrete_field α] [add_comm_group β] extends module α β
instance discrete_field.to_vector_space {α : Type*} [discrete_field α] : vector_space α α :=
{ .. ring.to_module }
/-- Subspace of a vector space. Defined to equal `submodule`. -/
@[reducible] def subspace (α : Type u) (β : Type v)
[discrete_field α] [add_comm_group β] [vector_space α β] : Type v :=
submodule α β
instance subspace.vector_space {α β}
{f : discrete_field α} [add_comm_group β] [vector_space α β]
(p : subspace α β) : vector_space α p := {..submodule.module p}
namespace submodule
variables {R:discrete_field α} [add_comm_group β] [add_comm_group γ]
variables [vector_space α β] [vector_space α γ]
variables (p p' : submodule α β)
variables {r : α} {x y : β}
include R
set_option class.instance_max_depth 36
theorem smul_mem_iff (r0 : r ≠ 0) : r • x ∈ p ↔ x ∈ p :=
⟨λ h, by simpa [smul_smul, inv_mul_cancel r0] using p.smul_mem (r⁻¹) h,
p.smul_mem r⟩
end submodule
namespace add_comm_monoid
open add_monoid
variables {M : Type*} [add_comm_monoid M]
instance : semimodule ℕ M :=
{ smul := smul,
smul_add := λ _ _ _, smul_add _ _ _,
add_smul := λ _ _ _, add_smul _ _ _,
mul_smul := λ _ _ _, mul_smul _ _ _,
one_smul := one_smul,
zero_smul := zero_smul,
smul_zero := smul_zero }
end add_comm_monoid
namespace add_comm_group
variables {M : Type*} [add_comm_group M]
instance : module ℤ M :=
{ smul := gsmul,
smul_add := λ _ _ _, gsmul_add _ _ _,
add_smul := λ _ _ _, add_gsmul _ _ _,
mul_smul := λ _ _ _, gsmul_mul _ _ _,
one_smul := one_gsmul,
zero_smul := zero_gsmul,
smul_zero := gsmul_zero }
end add_comm_group
|
%-------------------------
% Resume in Latex
% Author : Mohit Sharma
% License : MIT
%------------------------
\documentclass[letterpaper,10.5pt]{article}
\usepackage{latexsym}
\usepackage[empty]{fullpage}
\usepackage{titlesec}
\usepackage{marvosym}
\usepackage[usenames,dvipsnames]{color}
\usepackage{verbatim}
\usepackage{enumitem}
\usepackage[hidelinks]{hyperref}
\usepackage{fancyhdr}
\usepackage{xcolor}
\usepackage{siunitx}
\pagestyle{fancy}
\fancyhf{} % clear all header and footer fields
\fancyfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
% Adjust margins
\addtolength{\oddsidemargin}{-0.5in}
\addtolength{\evensidemargin}{-0.5in}
\addtolength{\textwidth}{1in}
\addtolength{\topmargin}{-0.5in}
\addtolength{\textheight}{1.0in}
\urlstyle{same}
\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}
% Sections formatting
\titleformat{\section}{
\vspace{-4pt}\scshape\raggedright\large
}{}{0em}{}[\color{orange} \titlerule \vspace{-5pt}]
%-------------------------
% Custom commands
\newcommand{\resumeItem}[2]{
\item\small{
\textbf{#1}{: #2 \vspace{-2pt}}
}
}
\newcommand{\resumePubItem}[3]{
\item\small{
\textbf{#1 \null\hfill{#2}}{#3} \vspace{-2pt}
}
}
\newcommand{\resumeSubheading}[4]{
\vspace{-1pt}\item
\begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r}
\textbf{#1} & #2 \\
\textit{\small#3} & \textit{\small #4} \\
\end{tabular*}\vspace{-5pt}
}
\newcommand{\resumeSubItem}[2]{\resumeItem{#1}{#2}\vspace{-4pt}}
\newcommand{\resumePublicationItem}[3]{\resumePubItem{#1}{#2}\newline{#3}\vspace{-4pt}}
\renewcommand{\labelitemii}{$\circ$}
\newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=*]}
\newcommand{\resumeSubHeadingListEnd}{\end{itemize}}
\newcommand{\resumeItemListStart}{\begin{itemize}}
\newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}}
\newcommand{\resumeTilde}{\raise.17ex\hbox{$\scriptstyle\sim$}}
%-------------------------------------------
%%%%%% CV STARTS HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%----------HEADING-----------------
\begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r}
\textbf{\href{https://sharmamohit.com/} {\color[HTML]{DF691A} Mohit Sharma}} & \color[HTML]{DF691A} Email : \href{mailto:[email protected]}{[email protected]}\\
\href{https://sharmamohit.com/}{https://www.sharmamohit.com} & Mobile : +1-778-587-6241\\
\end{tabular*}
%-----------SUMMARY-------------------
\section{\color[HTML]{DF691A} Summary}
Innovative DevOps engineer with a strong Linux background and over 8 years of experience designing, automating and managing mission-critical infrastructure deployments by applying SRE principles and other DevOps practices. Expert in Python scripting with an emphasis on real-time, high-speed data pipelines and distributed computing across networks.
%-----------EXPERIENCE-----------------
\section{\color[HTML]{DF691A} Experience}
\resumeSubHeadingListStart
\resumeSubheading
{Workday}{Victoria, BC, Canada}
{Software Development Engineer III, DevOps}{May 2021 - Present}
\textit{\small{Software Development Engineer II, DevOps}} \null\hfill \textit{\small{May 2019 - May 2021}}
\resumeItemListStart
\resumeItem{Development and Operations}{Created an automated AWS deployment pipeline to deploy a microservice application cross-account and cross-region,
reducing operations toil by several hours.}
\newline
{Actively manage, improve, and monitor infrastructure resources in the data center and on AWS, including but not limited to
EC2, ECS, Route53, S3, RDS, Lambda and ES, using tools such as Terraform, Wavefront, the ELK stack and a Slack bot.}
\newline
{Introduced team to using policies (OPA) for managing infrastructure giving developers more control over their cloud resources.}
\newline
{Writing Ansible roles to help manage hosts and perform application deployments.}
\newline
{Writing Jenkinsfiles and improving shared libraries for automated build and deployment of several applications and services using Jenkins.}
\newline
{Migrating application services to an in-house flavor of the Kubernetes platform.}
\resumeItem{Knowledge Share}{Spearheaded an initiative between operations and multiple dev teams to empower the developers with the knowledge
they would need to take ownership of their services.}
\resumeItem{Scrum Master}{Helping team self organize and self manage by incorporating servant leadership principles.}
\resumeItemListEnd
\resumeSubheading
{NYU CUSP}{Brooklyn, NY, USA}
{Associate Research Scientist}{May 2015 - May 2019}
\textit{\small{Assistant Research Scientist}} \null\hfill \textit{\small{June 2014 - May 2015}}
\resumeItemListStart
\resumeItem{Dockkeeper}
{Developed a scalable and secure container scheduling and monitoring tool that leverages the Docker ecosystem and Prometheus for provisioning services on physical hosts. This helped eliminate VM license fees of over \$35,000 per year and improved host efficiency by over 55\%. \\Deployed a multi-node Kubernetes cluster for exposing load-balanced web applications on the web.}
\resumeItem{UOInfra}
{Architected the NYU/CUSP Urban Observatory's multi-site physical infrastructure consisting of multiple dense compute and storage nodes with over half a petabyte of storage space, provisioned for multi-user mini-HPC environments, using Ansible and Packer.\\
Deployed a 27-screen viz wall using a cluster of networked Raspberry Pis, enabling researchers to interpret their visualizable data while keeping the total price to one eighth of that of a commercial solution.}
\resumeItem{SONYC}
{Developed a secure, mission-critical IoT platform and implemented a CI/CD framework for deploying and maintaining over 100 urban noise monitoring sensors in NYC. This project won the \href{https://www.nsf.gov/awardsearch/showAward?AWD_ID=1544753}{\$4.6 million} CPS Frontier award from NSF.}
\resumeItemListEnd
\resumeSubHeadingListEnd
%-----------EDUCATION-----------------
\section{\color[HTML]{DF691A} Education}
\resumeSubHeadingListStart
\resumeSubheading
{NYU Polytechnic School of Engineering}{Brooklyn, NY, USA}
{Master of Science in Telecommunication Networks}{Aug. 2012 -- May. 2014}
\textit{\small{Thesis - \href{https://sharmamohit.com/project/citysynth/}{CitySynth: Imaging with a Network of Devices}}}
%\resumeSubheading
% {Birla Institute of Technology and Science}{Pilani, India}
% {Bachelor of Engineering in Electrical and Electronics; GPA: 3.66 (9.15/10.0)}{Aug. 2008 -- July. 2012}
\resumeSubHeadingListEnd
%-----------CERTIFICATIONS-----------
\section{\color[HTML]{DF691A} Certifications}
\resumeSubHeadingListStart
\resumeSubItem{CKA Certified Kubernetes Administrator} \null\hfill \textit{\small{August 2020 - August 2023}}
\newline
{Certificate Id: \href{https://www.youracclaim.com/badges/ce11744e-269f-459d-836a-9ce4c11ff2c9}{LF-bu4edd0v34}}
\resumeSubHeadingListEnd
%-----------PROJECTS-----------------
\section{\color[HTML]{DF691A} Projects}
\resumeSubHeadingListStart
\resumeSubItem{CUIC}
{Open-source Python library for interfacing with GigE Vision broadband, thermographic and hyperspectral cameras, using the Advanced Message Queuing Protocol (AMQP) to acquire images and perform pre-processing on the fly.}
\resumeSubItem{UCSLHUB}
{Developed a resilient and scalable back-end infrastructure using Docker Swarm, JupyterHub and Keycloak for hosting CUSP's UCSL bootcamp, which is accessed by hundreds of students every year.}
\resumeSubItem{HOMELAB}
{Running a homelab with Ansible- and Terraform-managed VMware ESXi, KVM and vSphere instances behind a pfSense firewall, used for testing and development including emulation of distributed datacenters for UOInfra.}
\resumeSubHeadingListEnd
\section{\color[HTML]{DF691A} Publications, Teaching Experience and more..}
\resumeSubHeadingListStart
\resumeItem{View it online}{\href{https://sharmamohit.com/work/}{https://sharmamohit.com/work/}}
\newline
\resumeSubHeadingListEnd
%%-----------PUBLICATIONS-----------------
%\section{\color[HTML]{DF691A} Publications}
%\resumeSubHeadingListStart
%\resumePublicationItem{\href{https://ieeexplore.ieee.org/document/8646419}{Persistent Hyperspectral Observations of the Urban Lightscape}}{\textit{IEEE GlobalSIP}, 2018}{Training a supervised classifier to automatically determine location of light sources on persistent hyperspectral imaging of the New York City urban lightscape, with \resumeTilde 7.2 x \num{e-4} \SI{}{\micro\metre} spectral resolution, surveyed over 25 consecutive summer nights over a 6 minute time resolution using Dockkeeper infrastructure.}
%\resumePublicationItem{\href{http://www.mdpi.com/1424-8220/16/12/2047/html}{A Hyperspectral Survey of New York City Lighting Technology}, 2016}{\textit{Sensors}, 16, 12}
%{Using a scanning, single channel spectrograph to identify the lighting technologies in use in the NYC}
%\resumePublicationItem{\href{http://dl.acm.org/citation.cfm?id=2993570}{Hypertemporal imaging of NYC Grid Dynamics}, 2016}{\textit{BuildSys '16}}
%{Demonstrating the concept of capturing the 120 Hz flicker of lights across a NYC skyline as a proxy to indicate the health of distribution transformers}
%\resumePublicationItem{\href{http://www.sciencedirect.com/science/article/pii/S0306437915001167}{Dynamics of Urban Lightscape}, 2015}{\textit{Information System}, 54, 115}
%{Using a network of cameras to understand \textit{the pulse of the city}}
%\resumeSubHeadingListEnd
%
%%-----------Teaching-----------------
%\section{\color[HTML]{DF691A} Teaching Experience}
%\resumeSubHeadingListStart
%\resumePublicationItem{ \href{https://sharmamohit.com/\#teaching}{\textbf{Urban Computing Skills Lab}} at NYU}{\textit{2014 - 2019}}
%{Instructor for summer boot camp course on introduction to Python and SciPy packages.}
%\resumePublicationItem{ \href{https://sharmamohit.com/\#teaching}{\textbf{Advanced Topics in Urban Informatics}} at NYU}{\textit{2016, 2017, 2019}}
%{Instructor for a 3 week intensive course on topics including Wireless Sensor Networks, IoT and Microservices}
%%\resumePublicationItem {CUSP City Challenge Week}{\textit{2016}}{Advised 14 graduate students on a 2 day challenge of Analyzing Crime in NYC}
%\resumePublicationItem{Advised NYU/CUSP graduate student Denis Khryashchev}{\textit{2015 - 2016}}{project: \href{https://www.overleaf.com/read/ssrzkqkznpkw}{\textit{Social Pattern Detection by scanning GSM downlink spectrum}}}
%\resumeSubHeadingListEnd
\end{document}
|
Visit Komodo Island, Singapore, Colombo, Sri Lanka, Mina Qaboos, Oman, Dubai, Aqaba, Malta (La Valletta), Barcelona, Cadiz, Amsterdam, Warnemunde, Riga, Tallinn, Estonia, Helsinki, Finland, St Petersburg, Russia, Stockholm, Sweden, Copenhagen, Denmark and Dover.
Komodo lizards quietly thrived in the harsh climate of Indonesia's Lesser Sunda Islands for millions of years until their existence was discovered about 100 years ago. When Dutch sailors first encountered the creatures, they returned with reports of fire-breathing dragons. Reaching 10 feet in length and weighing over 300 pounds, Komodo dragons are the world's largest and heaviest lizards. The best place to view these magnificent and endangered creatures is on Komodo Island, the largest island in Komodo National Park, a UNESCO World Heritage Site and Man and the Biosphere Reserve. Although Komodo National Park is famous for its most recognized inhabitant, it is also noted for its diverse marine habitat. Some 1,000 species of fish, 260 species of reef-building coral, and manta rays, sharks, dolphins, whales and sea turtles live in the park's coral reefs, mangroves, sea grass beds and semi-enclosed bays.
Singapore - the very name summons visions of the mysterious East. The commercial center of Southeast Asia, this island city-state of four million people is a metropolis of modern high-rise buildings, Chinese shop-houses with red-tiled roofs, sturdy Victorian buildings, Buddhist temples and Arab bazaars. Founded in 1819 by Sir Stamford Raffles of the fabled East India Company, the city is a melting pot of people and cultures. Malay, Chinese, English and Tamil are official languages. Buddhism, Taoism, Islam, Hinduism and Christianity are the major faiths. Singapore is an ever-fascinating island boasting colorful traditions, luxurious hotels and some of the finest duty-free shopping in the world. Lying just 85 miles north of the Equator at the tip of the Malay Peninsula, the island was a haven for Malay pirates and Chinese and Arab traders.
Sri Lanka conjures up the exotic and the mysterious. Once known as Ceylon, the island boasts a fantastic landscape that ranges from primeval rain forest to the bustling modern streets of Colombo, the capital. A visitor to Sri Lanka has a wealth of options. Relax on some of the world's finest beaches. Explore the temples, halls and palaces of the last Sinhalese kingdom at Kandy. Or take a guided tour of an elephant orphanage. Colombo also offers an array of charms, from the Royal Botanic Gardens, once a royal pleasure garden, to the Pettah Bazaar, where vendors hawk everything under the sun. Colombo and Sri Lanka were shaped by Hindu, Buddhist, Muslim and European influences. Colombo also serves as a gateway for Overland Adventures to India.
Oman's capital was once a major trading centre controlled and influenced by the Portuguese. Those intrepid explorers and traders are long gone. Today, visitors flock to Oman thanks to its azure skies, towering desert mountains and crystalline waters. Muscat itself is an Arabian fable sprung to life. Old 16th-century forts guard the bay and the palace, while the vibrant souqs offer daggers, superb silver jewellery, and traditional crafts and costumes.
Dubai has always served as a bridge between East and West. In the past, Dubai's trade links stretched from Western Europe to Southeast Asia and China. The result was the creation of one of the most protean societies in the world. Nestled in the very heart of Islam, Dubai remains unique in its embrace of the West. Bedouin may still roam the desert, but Dubai also plays host to international tennis and golf tournaments. Tourists flock to its shores while development continues at a frenetic pace, from massive artificial islands to the astounding Burj Al Arab Hotel. Dubai is actually two cities in one: the Khor Dubai, an inlet of the Persian Gulf, separates Deira, the old city, from Bur Dubai.
The port of Aqaba has been an important strategic and commercial center for over three millennia. Originally called Elath, this home of the Edomites became in Roman times a trading center where goods from as far away as China found entry to Africa, Europe and the Middle East. Today Aqaba is Jordan's only seaport, and the city serves as an intriguing gateway for travelers. In the surrounding desert lie the lost city of Petra, a city that may date to 6,000 B.C., and Wadi Rum, where an English soldier mystic named T.E. Lawrence found his destiny as "Lawrence of Arabia." Perched at the apex of the Gulf of Aqaba, Aqaba offers internationally renowned diving opportunities and the richest marine life in the entire Red Sea. The old fortress on the waterfront dates to the 14th century. Passengers should drink only bottled water while ashore. Please respect local customs and dress accordingly, avoiding exposed shoulders and knees.
Transiting through the Suez Canal is sure to be one of the lifelong memories of your cruise. The thought of a canal linking the Mediterranean and Red Sea extends back in history as far as 2100 B.C. Napoleon Bonaparte, pursuing his dreams of conquest, entertained the notion in 1798. But it was French engineer Ferdinand de Lesseps who finally proved that a canal across the Suez was practicable. Work on the canal began in 1858. Eleven years later the opening of the Suez Canal was an international event. The world had acquired a quicker route to Asia, as well as a Verdi opera called Aida. Of course the Suez Canal was a source of immediate controversy. The British wrested control of the canal from Egypt in 1882. Egypt regained control during its revolution of 1952. In 1956, the British, allied with the French and Israelis, nearly took the canal back. The Arab-Israeli Six Day War of 1967 closed the canal until 1973, when another war and intense international negotiations led to its return to Egyptian control.
Malta is the largest in a group of seven islands that occupy a strategic position between Europe and Africa. The island's history is long and turbulent. Everyone from the Normans to the Nazis has vied for control of this small, honey-colored rock. For centuries the island was the possession of the knightly Order of St. John, the Knights Hospitaller. Valletta, Malta's current capital, was planned by the Order's Grandmaster Jean de la Valette to secure the island's eastern coast from Turkish incursions. Founded in 1566, Valletta has bustling streets lined with superb Baroque buildings and churches. Malta has a long history: the megalithic stone temples at Gozo may be the oldest freestanding structures on Earth. Malta has two official languages, Maltese (constitutionally the national language) and English. Malta was admitted to the European Union in 2004 and in 2008 became part of the eurozone.
Berlin is a worthy rival to London or Paris in terms of history, art and culture. The city's highlights include the restored Reichstag Building with its magnificent glass dome and the stunning Pergamon Museum. Warnemünde is a seaside resort near the harbor entrance to Rostock, one of the city-states that formed the medieval Hanseatic League; originally a fishing village, it later became a spa and resort. Explore the old Cold War hot spots and view the Brandenburg Gate, restored to its original magnificence. Or stroll along the Kurfurstendamm and take coffee in a local cafe. Warnemünde is also your gateway to Mecklenburg and the German countryside.
Capital of Latvia and the largest city of the Baltic Republics, Riga has long been a center of commerce and culture. Founded in the 13th century, the city rose to prominence as a member of the Hanseatic League, the great German-Baltic trading consortium that dominated Northern Europe during the Middle Ages. In the long struggle for Latvian independence, Riga has been ruled by Germans, Swedes and Russians. Today this "Little Paris of the Baltic" is a UNESCO World Heritage Site renowned for its architecture, including one of the finest collections of Art Nouveau buildings in Northern Europe. The city's German heritage contributed to its rich architecture; Riga's Art Nouveau buildings are outstanding examples of the German style known as Jugendstil.
Perhaps their country's harsh climate encouraged the Finns' love and respect for design and the arts. Whatever the cause, there's no denying that Helsinki is one of the most vibrant and beautiful cities in Scandinavia. Hailed as the "Daughter of the Baltic," Finland's capital is a city of graceful neoclassical buildings, striking modern architecture and spacious boulevards dotted with squares and parks. In the past century, Finland has nurtured some of the major creative talents of Western culture, from the composer Sibelius to architects Eliel & Eero Saarinen and Alvar Aalto. The center of Finnish commerce and culture, Helsinki is home to some 616,000 people. Much of the city's neoclassical architecture dates from the period of Tsarist rule, which began in 1809 after political control of Finland passed from Sweden to Russia. Finland gained its independence in 1917.
St. Petersburg has provided a historic stage since the day Peter the Great ordained its construction on the banks of the Neva. In its relatively short history (the city is younger than New York), St. Petersburg has witnessed the rise and fall of Imperial Russia, three shattering revolutions, and civil war. The city survived a long and tragic siege during World War II; indeed, St. Petersburg became a symbol of Russian resistance to the Nazi invasion. Russia's "Window on the West," St. Petersburg remains one of the world's most beautiful metropolises. Perched on the banks of the Neva, the city is crisscrossed by canals. Two great architects helped bring Peter the Great's vision of St. Petersburg to life: Rastrelli and Carlo Rossi. The rich architecture that resulted features a mixture of styles, from ornate Russian Baroque churches to neoclassical palaces. St. Petersburg has also been the cultural soul of Russia, a repository of priceless art and a home to poets, musicians and composers ranging from Pushkin to Shostakovich. Peter the Great instilled his near-mania for architecture and building in his successors, making the then capital of Imperial Russia one of the architectural treasures of the world.
Often described as the "Capital of Scandinavia," Stockholm traces its origins back seven centuries, when it was founded on the island of Gamla Stan and became the capital of Sweden. Today, the city covers 14 separate islands connected by bays, channels and inlets. The skyline is a sea of copper roofs grown green with patina; towers, spires and graceful cupolas stand sentinel over the historic Old Town (Gamla Stan). With its population of nearly a million people, Stockholm is one of the world's most beautiful, clean and orderly cities. With a history stretching over seven centuries, Stockholm is not just a beautiful city but also Sweden's center of art and culture.
Copenhagen was founded during the 12th century. The city owes much of its charm to the buildings erected by Denmark's monarchs, and boasts a treasure trove of late-Renaissance and Rococo architecture. Copenhagen deserves its accolade as the Venice of the North. Founded on a series of islands and islets, the city today is laced with graceful canals and boasts some of the most delightful architecture in Northern Europe. See the fabled statue of Hans Christian Andersen's Little Mermaid, a symbol of the city. Stroll along the old harbor of Nyhavn, lined with cafés, restaurants and 500-year-old gabled houses. Browse the superb shops on the world-famous Stroget or view the Rococo palaces lining Amalienborg Square. Best of all, savor the taste of local delicacies while wandering the paths of Tivoli Gardens, one of Europe's most celebrated pleasure gardens.
Prices quoted valid for sale until 11 May 2019 for travel during the period specified (if applicable) unless otherwise stated or sold out prior.
2 night cruise sailing from Sydney, Australia aboard the Sea Princess.
2 night cruise sailing from Brisbane aboard the Sea Princess.
3 night cruise sailing from Sydney, Australia aboard the Sea Princess.
3 night cruise sailing from Fremantle aboard the Sea Princess.
4 night cruise sailing from Brisbane aboard the Sea Princess.
4 night cruise sailing from Fremantle aboard the Sea Princess.
|
function g = dnetGradient(params, model)
% DNETGRADIENT Density Network gradient wrapper.
% FORMAT
% DESC is a wrapper function for the gradient of the negative log
% likelihood of a Density Network model with respect to the latent positions
% and parameters.
% ARG params : vector of parameters and latent positions where the
% gradient is to be evaluated.
% ARG model : the model structure into which the latent positions
% and the parameters will be placed.
% RETURN g : the gradient of the negative log likelihood with
% respect to the latent positions and the parameters at the given
% point.
%
% SEEALSO : dnetLogLikeGradients, dnetExpandParam
%
% COPYRIGHT : Neil D. Lawrence, 2008
% MLTOOLS
model = dnetExpandParam(model, params);
g = - dnetLogLikeGradients(model);
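% Usage sketch (hypothetical workflow; assumes the usual MLTOOLS pairing of
% extract/expand param functions):
%   params = dnetExtractParam(model);   % pack latent positions and parameters
%   g = dnetGradient(params, model);    % gradient for a generic gradient-based optimizer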
|
#!/usr/bin/env Rscript
library("optparse")
GetSep<-function(x){
listsep=c(',', ' ', '\t')
listsepnum<-c('COM', 'SPA', 'TAB')
if(x %in% listsep)return(x)
x<-toupper(x)
x<-substr(x,1,3)
if(x %in% listsepnum)return(listsep[listsepnum==x])
cat('\nnot found sep ', x, '\nexit\n')
q(save='no',status=1)
}
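# Examples: GetSep(','), GetSep('tab') and GetSep('SPACE') all resolve to a
# literal separator character; any other value aborts with an error message.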
option_list = list(
make_option(c("-f", "--file"), type="character", default=NULL,
help="dataset file name", metavar="character"),
make_option(c("-o", "--out"), type="character", default="out.txt",
help="output file name [default= %default]", metavar="character"),
make_option(c("-r", "--head_rs"), type="character", default="out.txt",
help="head rs [default= %default]", metavar="character"),
make_option(c("-b", "--head_bp"), type="character", default="out.txt",
help="head rs [default= %default]", metavar="character"),
make_option(c("-c", "--head_chr"), type="character", default="out.txt",
help="head rs [default= %default]", metavar="character"),
make_option(c("-s", "--sep"), type="character", default="out.txt",
help="head rs [default= %default]", metavar="character")
);
opt_parser = OptionParser(option_list=option_list);
args = parse_args(opt_parser);
FileI=args[['file']]
FileOutRs=paste(args[['out']],'.rs',sep='')
FileOutPos=paste(args[['out']],'.pos',sep='')
Sep=GetSep(args[['sep']])
cat(Sep)
ChrHead=args[['head_chr']]
BpHead=args[['head_bp']]
RsHead=args[['head_rs']]
#Data<-read.table('gwas_catalog.tsv', header=T, sep='\t', comment.char="",quote="")
Data<-read.table(FileI, header=T,sep=Sep,comment.char="", quote="", stringsAsFactors=F)
Data2<-Data[, c(ChrHead,BpHead,RsHead)];names(Data2)<-c('Chr', 'Pos', 'rsid')
Tmp<-Data[ , c(ChrHead,BpHead,BpHead,RsHead)];names(Tmp)<-c('Chr', 'PosBegin', 'PosEnd', 'rsid')
ListRs<-unlist(unlist(strsplit(Tmp[,'rsid'],split=';')))
writeLines(ListRs, con=FileOutRs)
#Tmp$Chr<-as.character(Tmp$Chr)
#bal=grep('chr',Tmp$Chr, invert=T)
#Tmp$Chr[bal]<-paste("chr",as.character(Tmp$Chr[bal]),sep="")
#write.table(unique(Tmp), quote=F, row.names=F, col.names=F,file=FileOut)
#writeLines(unlist(as.vector(strsplit(as.character(Data$SNPS),split="[,;]"))), con="rsTosearch")
PosBegin<-sapply(strsplit(as.character(Data2$Pos),split=";"),function(x){
min(as.integer(x), na.rm=T)
})
PosEnd<-sapply(strsplit(as.character(Data2$Pos),split=";"),function(x){
max(as.integer(x), na.rm=T)
})
Chr<-sapply(strsplit(as.character(Data2$Chr),split=";"),function(x){
Un<-unique(x[as.integer(x)>0])
if(length(Un)==1)return(Un)
else return(NA)
}
)
Data2$PosBeginI<-PosBegin
Data2$PosEndI<-PosEnd
Data2$ChrI<-Chr
Data2$RsId<-Data2$rsid
Data2$Num<-1:nrow(Data2)
Bal<- !is.na(Data2$Chr) & !is.na(Data2$PosBegin) & !is.infinite(Data2$PosBegin)
bal=grep('chr',Data2$ChrI, invert=T)
Data2$ChrI[bal]<-paste("chr",as.character(Data2$ChrI[bal]),sep="")
Data2$Strand<-'+'
Data2ToP<-Data2[Bal, c("ChrI","PosBeginI","PosEndI", 'Strand',"Num")]
write.table(Data2ToP, file=FileOutPos, sep="\t", quote=F, row.names=F, col.names=F)
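# Example invocation sketch (hypothetical script name and column headers,
# matching the commented gwas_catalog.tsv test above):
#   Rscript extract_pos.R -f gwas_catalog.tsv -o out -r SNPS -b CHR_POS -c CHR_ID -s TAB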
|
[STATEMENT]
lemma t_ins_input: "t_ins_inv x xt \<lparr>folding = False, item = x, subtrees = [xt]\<rparr>"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. t_ins_inv x xt \<lparr>t_type.folding = False, item = x, subtrees = [xt]\<rparr>
[PROOF STEP]
by simp
|
library("futile.logger",quietly = T)
set_logger <- function(config){
logfile = paste(config[["input"]][["gomap_dir"]], "/logs/", config[["input"]][["basename"]], '-R.log',sep = "")
a = flog.appender(appender.file(logfile),name="ROOT")
}
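# Usage sketch (hypothetical config structure):
#   config <- list(input = list(gomap_dir = "/tmp/gomap", basename = "run1"))
#   set_logger(config)   # logs to /tmp/gomap/logs/run1-R.log via the ROOT appender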
|
import data.int.parity
import tactic.apply_fun
import data.real.basic
import algebra.group.pi
import lib.tactiques
namespace tactic.interactive
setup_tactic_parser
open tactic
meta def verifie : tactic unit :=
`[ { repeat { unfold limite_suite},
repeat { unfold continue_en },
push_neg,
try { simp only [exists_prop] },
try { exact iff.rfl },
done } <|> fail "Ce n'est pas cela. Essayez encore." ]
end tactic.interactive
notation `|`:1024 x:1 `|`:1 := abs x
namespace m154
lemma inferieur_ssi {x y : ℝ} : x ≤ y ↔ 0 ≤ y - x :=
sub_nonneg.symm
lemma pos_pos {x y : ℝ} (hx : 0 ≤ x) (hy : 0 ≤ y) : 0 ≤ x*y :=
mul_nonneg hx hy
lemma neg_neg {x y : ℝ} (hx : x ≤ 0) (hy : y ≤ 0) : 0 ≤ x*y :=
mul_nonneg_of_nonpos_of_nonpos hx hy
lemma inferieur_ssi' {x y : ℝ} : x ≤ y ↔ x - y ≤ 0 :=
by rw [show x-y = -(y-x), by ring, inferieur_ssi, neg_le, neg_zero]
end m154
open nat
def pgcd := nat.gcd
lemma divise_refl (a : ℕ) : a ∣ a :=
dvd_refl a
lemma divise_pgcd_ssi {a b c : ℕ} : c ∣ pgcd a b ↔ c ∣ a ∧ c ∣ b :=
dvd_gcd_iff
lemma divise_antisym {a b : ℕ} : a ∣ b → b ∣ a → a = b :=
dvd_antisymm
lemma divise_def (a b : ℤ) : a ∣ b ↔ ∃ k, b = a*k :=
iff.rfl
def pair (n : ℤ) := ∃ k, n = 2*k
def impair (n : ℤ) := ∃ k, n = 2*k + 1
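-- For example, `pair 6` holds with witness k = 3, and `impair 7` holds with k = 3.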
lemma pair_ou_impair (n : ℤ) : pair n ∨ impair n :=
by by_cases h : n % 2 = 0 ; [left, {right ; rw int.mod_two_ne_zero at h}] ;
rw [← int.mod_add_div n 2, h] ; use n/2 ; ring
lemma non_pair_et_impair (n : ℤ) : ¬ (pair n ∧ impair n) :=
begin
rintro ⟨h, h'⟩,
change even n at h,
rw int.even_iff at h,
rcases h' with ⟨k, rfl⟩,
simp only [int.add_mul_mod_self_left, add_comm, euclidean_domain.mod_eq_zero] at h,
cases h with l hl,
rw eq_comm at hl,
have := int.eq_one_of_mul_eq_one_right (by linarith) hl,
linarith
end
lemma abs_inferieur_ssi (x y : ℝ) : |x| ≤ y ↔ -y ≤ x ∧ x ≤ y :=
abs_le
lemma abs_diff (x y : ℝ) : |x - y| = |y - x| :=
abs_sub_comm x y
lemma pos_abs (x : ℝ) : |x| > 0 ↔ x ≠ 0 :=
abs_pos
variables {α : Type*} [linear_order α]
lemma superieur_max_ssi (p q r : α) : r ≥ max p q ↔ r ≥ p ∧ r ≥ q :=
max_le_iff
lemma inferieur_max_gauche (p q : α) : p ≤ max p q :=
le_max_left _ _
lemma inferieur_max_droite (p q : α) : q ≤ max p q :=
le_max_right _ _
lemma egal_si_abs_diff_neg {a b : ℝ} : |a - b| ≤ 0 → a = b :=
eq_of_abs_sub_nonpos
lemma egal_si_abs_eps (x y : ℝ) : (∀ ε > 0, |x - y| ≤ ε) → x = y :=
begin
intro h,
apply egal_si_abs_diff_neg,
by_contradiction H,
push_neg at H,
specialize h ( |x-y|/2) (by linarith),
linarith
end
lemma ineg_triangle (x y : ℝ) : |x + y| ≤ |x| + |y| :=
abs_add x y
namespace m154
def limite_suite (u : ℕ → ℝ) (l : ℝ) : Prop :=
∀ ε > 0, ∃ N, ∀ n ≥ N, |u n - l| ≤ ε
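-- Example: a constant sequence converges to its value; `limite_suite (λ n, l) l`
-- holds by taking N = 0, since |l - l| = 0 ≤ ε for every ε > 0.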
lemma unicite_limite {u l l'}: limite_suite u l → limite_suite u l' → l = l' :=
begin
-- sorry
intros hl hl',
apply egal_si_abs_eps,
intros ε ε_pos,
specialize hl (ε/2) (by linarith),
cases hl with N hN,
specialize hl' (ε/2) (by linarith),
cases hl' with N' hN',
specialize hN (max N N') (inferieur_max_gauche _ _),
specialize hN' (max N N') (inferieur_max_droite _ _),
calc |l - l'| = |(l-u (max N N')) + (u (max N N') -l')| : by ring_nf
... ≤ |l - u (max N N')| + |u (max N N') - l'| : by apply ineg_triangle
... = |u (max N N') - l| + |u (max N N') - l'| : by rw abs_diff
... ≤ ε/2 + ε/2 : by linarith
... = ε : by ring,
end
end m154
open m154
def extraction (φ : ℕ → ℕ) := ∀ n m, n < m → φ n < φ m
-- In what follows, φ will always denote a function from ℕ to ℕ
variable { φ : ℕ → ℕ}
/-- A real number `a` is a cluster point of a sequence `u` if there
exists a subsequence of `u` that converges to `a`. -/
def valeur_adherence (u : ℕ → ℝ) (a : ℝ) :=
∃ φ, extraction φ ∧ limite_suite (u ∘ φ) a
/-- Every extraction is greater than or equal to the identity. -/
lemma extraction_superieur_id : extraction φ → ∀ n, n ≤ φ n :=
begin
intros hyp n,
induction n with n hn,
exact nat.zero_le _,
exact nat.succ_le_of_lt (by linarith [hyp n (n+1) (by linarith)]),
end
open filter
lemma extraction_machine (ψ : ℕ → ℕ) (hψ : ∀ n, ψ n ≥ n) : ∃ f : ℕ → ℕ, extraction (ψ ∘ f) ∧ ∀ n, f n ≥ n :=
begin
refine ⟨λ n, nat.rec_on n 0 (λ n ih, ψ ih + 1), λ m n h, _, λ n, _⟩,
{ induction h; dsimp [(∘)],
{ exact hψ _ },
{ exact lt_trans h_ih (hψ _) } },
{ induction n, {apply le_refl},
exact nat.succ_le_succ (le_trans n_ih (hψ _)) }
end
|
[STATEMENT]
lemma prv_sdsj_cases:
assumes "F \<subseteq> fmla" "finite F" "\<psi> \<in> fmla"
and "prv (sdsj F)" and "\<And> \<phi>. \<phi> \<in> F \<Longrightarrow> prv (imp \<phi> \<psi>)"
shows "prv \<psi>"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. prv \<psi>
[PROOF STEP]
by (meson assms prv_imp_mp prv_sdsj_imp sdsj)
|
{-# OPTIONS --rewriting #-}
module Term.Core where
open import Context
open import Type
open import Function
open import Relation.Binary.PropositionalEquality hiding ([_])
open import Data.Empty
open import Data.Unit.Base
open import Data.Sum.Base
{-# BUILTIN REWRITE _≡_ #-}
{-# REWRITE foldlᶜ-▻-ε #-}
{-# REWRITE mapᶜ-▻▻ #-}
{-# REWRITE keep-id #-}
{-# REWRITE dupl-keep-vs #-}
{-# REWRITE renᵗ-id #-}
{-# REWRITE renᵗ-∘ #-}
{-# REWRITE subᵗ-idᵉ #-}
{-# REWRITE subᵗ-renᵗ #-}
{-# REWRITE subᵗ-Var #-}
{-# REWRITE inject-foldr-⇒-renᵗ #-}
{-# REWRITE inject-foldr-⇒-subᵗ #-}
infix 3 _⊢_
infix 4 Λ_ ƛ_
infixl 7 _·_
infix 9 _[_]
unfold : ∀ {Θ κ} -> Θ ⊢ᵗ (κ ⇒ᵏ ⋆) ⇒ᵏ κ ⇒ᵏ ⋆ -> Θ ⊢ᵗ κ -> Θ ⊢ᵗ ⋆
unfold ψ α = normalize $ ψ ∙ Lam (μ (shiftᵗ ψ) (Var vz)) ∙ α
data _⊢_ : ∀ {Θ} -> Con (Star Θ) -> Star Θ -> Set where
var : ∀ {Θ Γ} {α : Star Θ} -> α ∈ Γ -> Γ ⊢ α
Λ_ : ∀ {Θ Γ σ} {α : Star (Θ ▻ σ)} -> shiftᶜ Γ ⊢ α -> Γ ⊢ π σ α
-- A type instantiation that normalizes at the type level.
_[_] : ∀ {Θ Γ σ} {β : Star (Θ ▻ σ)} -> Γ ⊢ π σ β -> (α : Θ ⊢ᵗ σ) -> Γ ⊢ β [ α ]ᵗ
-- A type instantiation that does not normalize at the type level.
_<_> : ∀ {Θ Γ σ} {β : Star (Θ ▻ σ)} -> Γ ⊢ π σ β -> (α : Θ ⊢ᵗ σ) -> Γ ⊢ β < α >ᵗ
-- A shorthand for `_< Var vz >` with more convenient computation at the type level.
_<> : ∀ {Θ σ Γ} {β : Star (Θ ▻ σ ▻ σ)} -> Γ ⊢ π σ β -> Γ ⊢ unshiftᵗ β
ƛ_ : ∀ {Θ Γ} {α β : Star Θ} -> Γ ▻ α ⊢ β -> Γ ⊢ α ⇒ β
_·_ : ∀ {Θ Γ} {α β : Star Θ} -> Γ ⊢ α ⇒ β -> Γ ⊢ α -> Γ ⊢ β
iwrap : ∀ {Θ Γ κ ψ} {α : Θ ⊢ᵗ κ} -> Γ ⊢ unfold ψ α -> Γ ⊢ μ ψ α
unwrap : ∀ {Θ Γ κ ψ} {α : Θ ⊢ᵗ κ} -> Γ ⊢ μ ψ α -> Γ ⊢ unfold ψ α
Term⁺ : Star⁺ -> Set
Term⁺ α = ∀ {Θ} {Γ : Con (Star Θ)} -> Γ ⊢ normalize α
Term⁻ : ∀ {Θ} -> Star Θ -> Set
Term⁻ α = ∀ {Γ} -> Γ ⊢ normalize α
bind : ∀ {Θ Γ} {α β : Star Θ} -> Γ ⊢ α -> Γ ▻ α ⊢ β -> Γ ⊢ β
bind term body = (ƛ body) · term
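-- `bind term body` is a non-recursive let: it β-reduces to `body` with its last
-- variable bound to `term`.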
ren : ∀ {Θ Γ Δ} {α : Star Θ} -> Γ ⊆ Δ -> Γ ⊢ α -> Δ ⊢ α
ren ι (var v) = var (renᵛ ι v)
ren ι (Λ body) = Λ (ren (mapᶜ-⊆ ι) body)
ren ι (fun [ α ]) = ren ι fun [ α ]
ren ι (fun < α >) = ren ι fun < α >
ren ι (fun <>) = ren ι fun <>
ren ι (ƛ body) = ƛ (ren (keep ι) body)
ren ι (fun · arg) = ren ι fun · ren ι arg
ren ι (iwrap term) = iwrap (ren ι term)
ren ι (unwrap term) = unwrap (ren ι term)
shiftⁿ : ∀ {Θ Γ} {α : Star Θ} Δ -> Γ ⊢ α -> Γ ▻▻ Δ ⊢ α
shiftⁿ = ren ∘ extʳ
shift : ∀ {Θ Γ} {α β : Star Θ} -> Γ ⊢ α -> Γ ▻ β ⊢ α
shift = shiftⁿ (ε ▻ _)
instNonRec : ∀ {Θ Γ Δ} {α : Star Θ} -> Seq (Γ ⊢_) Δ -> Γ ▻▻ Δ ⊢ α -> Γ ⊢ α
instNonRec ø body = body
instNonRec {Δ = Δ ▻ _} (seq ▶ term) body = instNonRec seq (bind (shiftⁿ Δ term) body)
module Y where
self : Type⁺ (⋆ ⇒ᵏ ⋆)
self = Lam μ (Lam Lam (Var (vs vz) ∙ Var vz ⇒ Var vz)) (Var vz)
unfoldˢ : Term⁺ (π ⋆ $ self ∙ Var vz ⇒ self ∙ Var vz ⇒ Var vz)
unfoldˢ = Λ ƛ unwrap (var vz)
unrollˢ : Term⁺ (π ⋆ $ self ∙ Var vz ⇒ Var vz)
unrollˢ = Λ ƛ unfoldˢ [ Var vz ] · var vz · var vz
fix : Term⁺ (π ⋆ $ (Var vz ⇒ Var vz) ⇒ Var vz)
fix = Λ ƛ unrollˢ [ Var vz ] · iwrap (ƛ var (vs vz) · (unrollˢ [ Var vz ] · var vz))
open Y using (fix)
tuple : ∀ {Θ} -> Star (Θ ▻ ⋆) -> Star Θ
tuple Fv = π ⋆ $ Fv ⇒ Var vz
endo : ∀ {Θ} -> Star (Θ ▻ ⋆) -> Star Θ
endo Fv = π ⋆ $ Fv ⇒ Fv
fixBy
: ∀ {Θ} {Γ : Con (Star Θ)}
-> (Fv : Star (Θ ▻ ⋆))
-> Γ ⊢ (tuple Fv ⇒ tuple Fv) ⇒ endo Fv ⇒ tuple Fv
fixBy Fv =
ƛ fix < endo Fv ⇒ tuple Fv > ·
(ƛ ƛ Λ ƛ (var (vs vs vs vz) ·
(Λ ƛ (var (vs vs vs vz) · var (vs vs vz)) <> ·
-- `bind` is used here to make the REWRITE rules fire (yeah, it's silly).
(bind (var (vs vs vz) <>) (var vz) · var vz))) <> · var vz)
ƛⁿ
: ∀ {Θ Ξ Γ} {f : Star Ξ -> Star Θ} {β}
-> ∀ Δ -> Γ ▻▻ mapᶜ f Δ ⊢ β -> Γ ⊢ conToTypeBy f Δ β
ƛⁿ ε term = term
ƛⁿ (Δ ▻ τ) body = ƛⁿ Δ (ƛ body)
ƛⁿ'
: ∀ {Θ Γ} {β : Star Θ}
-> ∀ Δ -> Γ ▻▻ Δ ⊢ β -> Γ ⊢ conToTypeBy id Δ β
ƛⁿ' ε term = term
ƛⁿ' (Δ ▻ τ) body = ƛⁿ' Δ (ƛ body)
applyⁿ
: ∀ {Θ Ξ Γ} {β : Star Θ} {f : Star Ξ -> Star Θ}
-> ∀ Δ -> Γ ⊢ conToTypeBy f Δ β -> Seq (λ τ -> Γ ⊢ f τ) Δ -> Γ ⊢ β
applyⁿ _ y ø = y
applyⁿ _ f (seq ▶ x) = applyⁿ _ f seq · x
deeply
: ∀ {Θ Ξ Γ} {f : Star Ξ -> Star Θ} {β γ}
-> ∀ Δ
-> Γ ▻▻ mapᶜ f Δ ⊢ β ⇒ γ
-> Γ ⊢ conToTypeBy f Δ β
-> Γ ⊢ conToTypeBy f Δ γ
deeply ε g y = g · y
deeply (Δ ▻ τ) g f = deeply Δ (ƛ ƛ shiftⁿ (ε ▻ _ ▻ _) (ƛ g) · var vz · (var (vs vz) · var vz)) f
conToTypeShift : ∀ {Θ} -> Con (Star Θ) -> Star (Θ ▻ ⋆)
conToTypeShift Δ = conToTypeBy shiftᵗ Δ (Var vz)
-- This is not easy, but makes perfect sense and should be doable.
postulate
shift-⊢ : ∀ {Θ Γ β} {α : Star Θ} -> Γ ⊢ α -> shiftᶜ {τ = β} Γ ⊢ shiftᵗ α
recursive
: ∀ {Θ} {Γ : Con (Star (Θ ▻ ⋆))}
-> (Δ : Con (Star Θ))
-> Γ ⊢ π ⋆ (conToTypeBy (shiftᵗⁿ (ε ▻ _ ▻ _)) Δ (Var vz) ⇒ Var vz)
-> Seq (λ β -> Γ ⊢ shiftᵗ β) Δ
recursive ε h = ø
recursive (Δ ▻ τ) h
= recursive Δ (Λ ƛ shift (shift-⊢ h <>) · deeply Δ (ƛ ƛ var (vs vz)) (var vz))
▶ h < shiftᵗ τ > · (ƛⁿ Δ (ƛ var vz)) where
byCon : ∀ {Θ} (Δ : Con (Star Θ)) -> let F = conToTypeShift Δ in ∀ {Γ} -> Γ ⊢ tuple F ⇒ tuple F
byCon Δ = ƛ Λ ƛ applyⁿ Δ (var vz) (recursive Δ (var (vs vz)))
fixCon : ∀ {Θ} (Δ : Con (Star Θ)) -> let F = conToTypeShift Δ in ∀ {Γ} -> Γ ⊢ endo F ⇒ tuple F
fixCon Δ = fixBy (conToTypeShift Δ) · byCon Δ
record Tuple {Θ} (Γ Δ : Con (Star Θ)) : Set where
constructor PackTuple
field tupleTerm : Γ ⊢ tuple (conToTypeShift Δ)
mutualFix : ∀ {Θ} {Γ Δ : Con (Star Θ)} -> Seq (Γ ▻▻ Δ ⊢_) Δ -> Tuple Γ Δ
mutualFix {Γ = Γ} {Δ} seq
= PackTuple
$ fixCon Δ
· (Λ ƛ ƛⁿ Δ (applyⁿ Δ (var (vsⁿ (shiftᶜ Δ))) $
mapˢ (ren (keepⁿ (shiftᶜ Δ) ∘ extʳ $ ε ▻ _) ∘′ shift-⊢) seq))
-- Note that we do not perform the "pass the whole tuple in and extract each of its
-- elements separately" trick, because we do know the type of the result (it's `α`).
bindTuple : ∀ {Θ Γ Δ} {α : Star Θ} -> Tuple Γ Δ -> Γ ▻▻ Δ ⊢ α -> Γ ⊢ α
bindTuple {Θ} {Γ} {Δ} {α} (PackTuple tup) body = tup < α > · (ƛⁿ Δ body)
instRec : ∀ {Θ Γ Δ} {α : Star Θ} -> Seq (Γ ▻▻ Δ ⊢_) Δ -> Γ ▻▻ Δ ⊢ α -> Γ ⊢ α
instRec = bindTuple ∘ mutualFix
|
from utils.data_loader import prepare_data_seq
from utils import config
from model.transformer import Transformer
from model.transformer_mulexpert import Transformer_experts
from model.common_layer import evaluate, count_parameters, make_infinite
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.init import xavier_uniform_
import torch.utils.data as data
from tqdm import tqdm
import os
import time
import numpy as np
import math
from collections import deque
DIALOG_SIZE = 3
class Dataset(data.Dataset):
"""Custom data.Dataset compatible with data.DataLoader."""
def __init__(self, data, vocab):
"""Reads source and target sequences from txt files."""
self.vocab = vocab
self.data = data
def __len__(self):
return 1
def __getitem__(self, index):
# here we ignore index since we only have one input
item = {}
item["context_text"] = [x for x in self.data if x!="None"]
X_dial = [config.CLS_idx]
X_mask = [config.CLS_idx]
for i, sentence in enumerate(item["context_text"]):
X_dial += [self.vocab.word2index[word] if word in self.vocab.word2index else config.UNK_idx for word in sentence.split()]
spk = self.vocab.word2index["USR"] if i % 2 == 0 else self.vocab.word2index["SYS"]
X_mask += [spk for _ in range(len(sentence.split()))]
assert len(X_dial) == len(X_mask)
item["context"] = X_dial
item["mask"] = X_mask
item["len"] = len(X_dial)
return item
def collate_fn(data):
input_batch = torch.LongTensor([data[0]["context"]])
input_mask = torch.LongTensor([data[0]["mask"]])
if config.USE_CUDA:
input_batch = input_batch.cuda()
input_mask = input_mask.cuda()
d = {}
d["input_batch"] = input_batch
d["input_lengths"] = torch.LongTensor([data[0]["len"]])
d["mask_input"] = input_mask
d["program_label"] = torch.LongTensor([9]) #fake label
if config.USE_CUDA:
d["program_label"] = d["program_label"].cuda()
return d
def make_batch(inp, vocab):
d = Dataset(inp, vocab)
loader = torch.utils.data.DataLoader(dataset=d, batch_size=1, shuffle=False, collate_fn=collate_fn)
return next(iter(loader))
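# Usage sketch: build a single-example batch from the current dialogue context, e.g.
#   batch = make_batch(deque(["None", "None", "hello there"], maxlen=3), vocab)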
data_loader_tra, data_loader_val, data_loader_tst, vocab, program_number = prepare_data_seq(batch_size=config.batch_size)
if(config.model == "trs"):
model = Transformer(vocab,decoder_number=program_number, model_file_path=config.save_path, is_eval=True)
elif(config.model == "experts"):
model = Transformer_experts(vocab,decoder_number=program_number, model_file_path=config.save_path, is_eval=True)
if (config.USE_CUDA):
model.cuda()
model = model.eval()
print('Start to chat')
context = deque(DIALOG_SIZE * ['None'], maxlen=DIALOG_SIZE)
while(True):
msg = input(">>> ")
if(len(str(msg).strip()) != 0):
context.append(str(msg).strip())
#print(context)
batch = make_batch(context, vocab)
sent_g = model.decoder_greedy(batch,max_dec_step=30)
#sent_t = model.decoder_topk(batch, max_dec_step=30)
print(">>>",sent_g[0])
context.append(sent_g[0])
|
(* Author: Dmitriy Traytel *)
header "Initial Normalization of the Input"
(*<*)
theory Init_Normalization
imports Pi_Regular_Exp "~~/src/HOL/Library/Simps_Case_Conv"
begin
(*>*)
fun toplevel_inters where
"toplevel_inters (Inter r s) = toplevel_inters r \<union> toplevel_inters s"
| "toplevel_inters r = {r}"
lemma toplevel_inters_nonempty[simp]:
"toplevel_inters r \<noteq> {}"
by (induct r) auto
lemma toplevel_inters_finite[simp]:
"finite (toplevel_inters r)"
by (induct r) auto
context alphabet
begin
lemma toplevel_inters_wf:
"wf n s = (\<forall>r\<in>toplevel_inters s. wf n r)"
by (induct s) auto
end
context project
begin
lemma toplevel_inters_lang:
"r \<in> toplevel_inters s \<Longrightarrow> lang n s \<subseteq> lang n r"
by (induct s) auto
lemma toplevel_inters_lang_INT:
"lang n s = (\<Inter>r\<in>toplevel_inters s. lang n r)"
by (induct s) auto
lemma toplevel_inters_in_lang:
"w \<in> lang n s = (\<forall>r\<in>toplevel_inters s. w \<in> lang n r)"
by (induct s) auto
lemma lang_flatten_INTERSECT_finite[simp]:
"finite X \<Longrightarrow> w \<in> lang n (flatten INTERSECT X) =
(if X = {} then w \<in> lists (\<Sigma> n) else (\<forall>r \<in> X. w \<in> lang n r))"
unfolding lang_INTERSECT using sorted_list_of_set[of X] by auto
end
fun merge_distinct where
"merge_distinct [] xs = xs"
| "merge_distinct xs [] = xs"
| "merge_distinct (a # xs) (b # ys) =
(if a = b then merge_distinct xs (b # ys)
else if a < b then a # merge_distinct xs (b # ys)
else b # merge_distinct (a # xs) ys)"
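(* merge_distinct merges two sorted duplicate-free lists into one, e.g.
merge_distinct [a, c] [b, c] = [a, b, c] whenever a < b < c. *)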
lemma set_merge_distinct[simp]: "set (merge_distinct xs ys) = set xs \<union> set ys"
by (induct xs ys rule: merge_distinct.induct) auto
lemma sorted_merge_distinct[simp]: "\<lbrakk>sorted xs; sorted ys\<rbrakk> \<Longrightarrow> sorted (merge_distinct xs ys)"
by (induct xs ys rule: merge_distinct.induct) (auto simp: sorted_Cons)
lemma distinct_merge_distinct[simp]: "\<lbrakk>sorted xs; distinct xs; sorted ys; distinct ys\<rbrakk> \<Longrightarrow>
distinct (merge_distinct xs ys)"
by (induct xs ys rule: merge_distinct.induct) (auto simp: sorted_Cons)
lemma sorted_list_of_set_merge_distinct[simp]: "\<lbrakk>sorted xs; distinct xs; sorted ys; distinct ys\<rbrakk> \<Longrightarrow>
merge_distinct xs ys = sorted_list_of_set (set xs \<union> set ys)"
by (auto intro: sorted_distinct_set_unique)
fun zip_with_option where
"zip_with_option f (Some a) (Some b) = Some (f a b)"
| "zip_with_option _ _ _ = None"
lemma zip_with_option_eq_Some[simp]:
"zip_with_option f x y = Some z \<longleftrightarrow> (\<exists>a b. z = f a b \<and> x = Some a \<and> y = Some b)"
by (induct f x y rule: zip_with_option.induct) auto
fun Pluss where
"Pluss (Plus r s) = zip_with_option merge_distinct (Pluss r) (Pluss s)"
| "Pluss Zero = Some []"
| "Pluss Full = None"
| "Pluss r = Some [r]"
lemma Pluss_None[symmetric]: "Pluss r = None \<longleftrightarrow> Full \<in> toplevel_summands r"
by (induct r) auto
lemma Pluss_Some: "Pluss r = Some xs \<longleftrightarrow>
(Full \<notin> set xs \<and> xs = sorted_list_of_set (toplevel_summands r - {Zero}))"
proof (induct r arbitrary: xs)
case (Plus r s)
show ?case
proof safe
assume "Pluss (Plus r s) = Some xs"
then obtain a b where *: "Pluss r = Some a" "Pluss s = Some b" "xs = merge_distinct a b" by auto
with Plus(1)[of a] Plus(2)[of b]
show "xs = sorted_list_of_set (toplevel_summands (Plus r s) - {Zero})" by (simp add: Un_Diff)
assume "Full \<in> set xs" with Plus(1)[of a] Plus(2)[of b] * show False by (simp add: Pluss_None)
next
assume "Full \<notin> set (sorted_list_of_set (toplevel_summands (Plus r s) - {Zero}))"
with Plus(1)[of "sorted_list_of_set (toplevel_summands r - {Zero})"]
Plus(2)[of "sorted_list_of_set (toplevel_summands s - {Zero})"]
show "Pluss (Plus r s) = Some (sorted_list_of_set (toplevel_summands (Plus r s) - {Zero}))"
by (simp add: Un_Diff)
qed
qed force+
fun Inters where
"Inters (Inter r s) = zip_with_option merge_distinct (Inters r) (Inters s)"
| "Inters Zero = None"
| "Inters Full = Some []"
| "Inters r = Some [r]"
lemma Inters_None[symmetric]: "Inters r = None \<longleftrightarrow> Zero \<in> toplevel_inters r"
by (induct r) auto
lemma Inters_Some: "Inters r = Some xs \<longleftrightarrow>
(Zero \<notin> set xs \<and> xs = sorted_list_of_set (toplevel_inters r - {Full}))"
proof (induct r arbitrary: xs)
case (Inter r s)
show ?case
proof safe
assume "Inters (Inter r s) = Some xs"
then obtain a b where *: "Inters r = Some a" "Inters s = Some b" "xs = merge_distinct a b" by auto
with Inter(1)[of a] Inter(2)[of b]
show "xs = sorted_list_of_set (toplevel_inters (Inter r s) - {Full})" by (simp add: Un_Diff)
assume "Zero \<in> set xs" with Inter(1)[of a] Inter(2)[of b] * show False by (simp add: Inters_None)
next
assume "Zero \<notin> set (sorted_list_of_set (toplevel_inters (Inter r s) - {Full}))"
with Inter(1)[of "sorted_list_of_set (toplevel_inters r - {Full})"]
Inter(2)[of "sorted_list_of_set (toplevel_inters s - {Full})"]
show "Inters (Inter r s) = Some (sorted_list_of_set (toplevel_inters (Inter r s) - {Full}))"
by (simp add: Un_Diff)
qed
qed force+
definition inPlus where
"inPlus r s = (case Pluss (Plus r s) of None \<Rightarrow> Full | Some rs \<Rightarrow> PLUS rs)"
lemma inPlus_alt: "inPlus r s = (let X = toplevel_summands (Plus r s) - {Zero} in
flatten PLUS (if Full \<in> X then {Full} else X))"
proof (cases "Pluss r" "Pluss s" rule: option.exhaust[case_product option.exhaust])
case Some_Some then show ?thesis by (simp add: inPlus_def Pluss_None) (simp add: Pluss_Some Un_Diff)
qed (simp_all add: inPlus_def Pluss_None)
fun inTimes where
"inTimes Zero _ = Zero"
| "inTimes _ Zero = Zero"
| "inTimes One r = r"
| "inTimes r One = r"
| "inTimes (Times r s) t = Times r (inTimes s t)"
| "inTimes r s = Times r s"
fun inStar where
"inStar Zero = One"
| "inStar Full = Full"
| "inStar One = One"
| "inStar (Star r) = Star r"
| "inStar r = Star r"
definition inInter where
"inInter r s = (case Inters (Inter r s) of None \<Rightarrow> Zero | Some rs \<Rightarrow> INTERSECT rs)"
lemma inInter_alt: "inInter r s = (let X = toplevel_inters (Inter r s) - {Full} in
flatten INTERSECT (if Zero \<in> X then {Zero} else X))"
proof (cases "Inters r" "Inters s" rule: option.exhaust[case_product option.exhaust])
case Some_Some then show ?thesis by (simp add: inInter_def Inters_None) (simp add: Inters_Some Un_Diff)
qed (simp_all add: inInter_def Inters_None)
fun inNot where
"inNot Zero = Full"
| "inNot Full = Zero"
| "inNot (Not r) = r"
| "inNot (Plus r s) = Inter (inNot r) (inNot s)"
| "inNot (Inter r s) = Plus (inNot r) (inNot s)"
| "inNot r = Not r"
fun inPr where
"inPr Zero = Zero"
| "inPr One = One"
| "inPr (Plus r s) = Plus (inPr r) (inPr s)"
| "inPr r = Pr r"
primrec inorm where
"inorm Zero = Zero"
| "inorm Full = Full"
| "inorm One = One"
| "inorm (Atom a) = Atom a"
| "inorm (Plus r s) = Plus (inorm r) (inorm s)"
| "inorm (Times r s) = Times (inorm r) (inorm s)"
| "inorm (Star r) = inStar (inorm r)"
| "inorm (Not r) = inNot (inorm r)"
| "inorm (Inter r s) = inInter (inorm r) (inorm s)"
| "inorm (Pr r) = inPr (inorm r)"
context alphabet begin
lemma wf_inPlus[simp]: "\<lbrakk>wf n r; wf n s\<rbrakk> \<Longrightarrow> wf n (inPlus r s)"
by (subst (asm) (1 2) toplevel_summands_wf) (auto simp: inPlus_alt)
lemma wf_inTimes[simp]: "\<lbrakk>wf n r; wf n s\<rbrakk> \<Longrightarrow> wf n (inTimes r s)"
by (induct r s rule: inTimes.induct) auto
lemma wf_inStar[simp]: "wf n r \<Longrightarrow> wf n (inStar r)"
by (induct r rule: inStar.induct) auto
lemma wf_inNot[simp]: "wf n r \<Longrightarrow> wf n (inNot r)"
by (induct r rule: inNot.induct) auto
lemma wf_inPr[simp]: "wf (Suc n) r \<Longrightarrow> wf n (inPr r)"
by (induct r rule: inPr.induct) auto
lemma wf_inorm[simp]: "wf n r \<Longrightarrow> wf n (inorm r)"
by (induct r arbitrary: n) auto
end
context project begin
lemma lang_inTimes[simp]: "\<lbrakk>wf n r; wf n s\<rbrakk> \<Longrightarrow> lang n (inTimes r s) = lang n (Times r s)"
by (induct r s rule: inTimes.induct) (auto simp: conc_assoc)
lemma lang_inStar[simp]: "wf n r \<Longrightarrow> lang n (inStar r) = lang n (Star r)"
by (induct r rule: inStar.induct)
(auto intro: star_if_lang dest: subsetD[OF star_subset_lists, rotated])
lemma Zero_toplevel_inters[dest]: "Zero \<in> toplevel_inters r \<Longrightarrow> lang n r = {}"
by (metis lang.simps(1) subset_empty toplevel_inters_lang)
lemma toplevel_inters_Full: "\<lbrakk>toplevel_inters r = {Full}; wf n r\<rbrakk> \<Longrightarrow> lang n r = lists (\<Sigma> n)"
by (metis antisym lang.simps(2) subsetI toplevel_inters.simps(3) toplevel_inters_in_lang)
lemma toplevel_inters_subset_singleton[simp]: "toplevel_inters r \<subseteq> {s} \<longleftrightarrow> toplevel_inters r = {s}"
by (metis subset_refl subset_singletonD toplevel_inters_nonempty)
lemma lang_inInter[simp]: "\<lbrakk>wf n r; wf n s\<rbrakk> \<Longrightarrow> lang n (inInter r s) = lang n (Inter r s)"
using lang_subset_lists[of n, unfolded lang.simps(2)[symmetric]]
toplevel_inters_nonempty[of r] toplevel_inters_nonempty[of s]
apply (auto 0 2 simp: inInter_alt toplevel_inters_in_lang[of _ n r] toplevel_inters_in_lang[of _ n s]
toplevel_inters_wf[of n r] toplevel_inters_wf[of n s] Ball_def simp del: toplevel_inters_nonempty
dest!: toplevel_inters_Full[of _ n] split: if_splits)
by fastforce+
lemma lang_inNot[simp]: "wf n r \<Longrightarrow> lang n (inNot r) = lang n (Not r)"
by (induct r rule: inNot.induct) (auto dest: lang_subset_lists)
lemma lang_inPr[simp]: "wf (Suc n) r \<Longrightarrow> lang n (inPr r) = lang n (Pr r)"
by (induct r rule: inPr.induct) auto
lemma lang_inorm[simp]: "wf n r \<Longrightarrow> lang n (inorm r) = lang n r"
by (induct r arbitrary: n) auto
end
(*<*)
end
(*>*)
|
(* This Isabelle theory is produced using the TIP tool offered at the following website:
https://github.com/tip-org/tools
This file was originally provided as part of TIP benchmark at the following website:
https://github.com/tip-org/benchmarks
Yutaka Nagashima at CIIRC, CTU changed the TIP output theory file slightly
to make it compatible with Isabelle2017.*)
theory TIP_sort_TSortIsSort
imports "../../Test_Base"
begin
datatype 'a list = nil2 | cons2 "'a" "'a list"
datatype Tree = TNode "Tree" "int" "Tree" | TNil
fun insert :: "int => int list => int list" where
"insert x (nil2) = cons2 x (nil2)"
| "insert x (cons2 z xs) =
(if x <= z then cons2 x (cons2 z xs) else cons2 z (insert x xs))"
fun isort :: "int list => int list" where
"isort (nil2) = nil2"
| "isort (cons2 y xs) = insert y (isort xs)"
fun flatten :: "Tree => int list => int list" where
"flatten (TNode p z q) y = flatten p (cons2 z (flatten q y))"
| "flatten (TNil) y = y"
fun add :: "int => Tree => Tree" where
"add x (TNode p z q) =
(if x <= z then TNode (add x p) z q else TNode p z (add x q))"
| "add x (TNil) = TNode TNil x TNil"
fun toTree :: "int list => Tree" where
"toTree (nil2) = TNil"
| "toTree (cons2 y xs) = add y (toTree xs)"
fun tsort :: "int list => int list" where
"tsort x = flatten (toTree x) (nil2)"
theorem property0 :
"((tsort xs) = (isort xs))"
oops
end
|
/**
* @file JacobianHelpers.hpp
* @brief Header file for utility functions to help computing Jacobians.
* @author Jianzhu Huai
*/
#ifndef INCLUDE_SWIFT_VIO_JACOBIAN_HELPERS_HPP_
#define INCLUDE_SWIFT_VIO_JACOBIAN_HELPERS_HPP_
#include <Eigen/Core>
#include <Eigen/Geometry>
#include <okvis/kinematics/Transformation.hpp>
namespace okvis {
namespace ceres {
template <int globalDim, int localDim, int numResiduals>
inline void zeroJacobian(int index, double** jacobians, double** jacobiansMinimal) {
using JacType = typename std::conditional<
(globalDim > 1),
Eigen::Matrix<double, numResiduals, globalDim, Eigen::RowMajor>,
Eigen::Matrix<double, numResiduals, globalDim> >::type;
using MinimalJacType = typename std::conditional<
(localDim > 1),
Eigen::Matrix<double, numResiduals, localDim, Eigen::RowMajor>,
Eigen::Matrix<double, numResiduals, localDim> >::type;
if (jacobians[index] != NULL) {
Eigen::Map<JacType> J0(jacobians[index]);
J0.setZero();
if (jacobiansMinimal != NULL) {
if (jacobiansMinimal[index] != NULL) {
Eigen::Map<MinimalJacType> J0_minimal_mapped(jacobiansMinimal[index]);
J0_minimal_mapped.setZero();
}
}
}
}
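// Usage sketch (hypothetical error term): zero the analytic Jacobian in
// parameter-block slot 0 of a 2-residual cost w.r.t. a 7-parameter pose with a
// 6-dimensional tangent space:
//   zeroJacobian<7, 6, 2>(0, jacobians, jacobiansMinimal);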
} // namespace ceres
} // namespace okvis
#endif // INCLUDE_SWIFT_VIO_JACOBIAN_HELPERS_HPP_
|
#include <chrono>
#include <windows.h>
#include <CommCtrl.h>
#include <functional>
#include <numeric>
#include <fmt/format.h>
#include "cli-parser.hpp"
#include "update-client.hpp"
#include "logger/log.h"
#include "crash-reporter.hpp"
#include "utils.hpp"
#include <atomic>
#include <thread>
#include <fstream>
#include <boost/algorithm/string.hpp>
namespace fs = std::filesystem;
namespace chrono = std::chrono;
using chrono::high_resolution_clock;
using chrono::duration_cast;
struct update_parameters params;
/* Some basic constants that are adjustable at compile time */
const double average_bw_time_span = 250;
const int max_bandwidth_in_average = 8;
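/* With a 250 ms tick and up to 8 samples kept in the average, the displayed
bandwidth is roughly a 2-second moving average. */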
const int ui_padding = 10;
const int ui_basic_height = 40;
bool update_completed = false;
void ShowError(LPCWSTR lpMsg)
{
if (params.interactive)
{
MessageBoxW(NULL, lpMsg, TEXT("Error while updating"), MB_ICONEXCLAMATION | MB_OK);
}
}
void ShowInfo(LPCWSTR lpMsg)
{
if (params.interactive)
{
MessageBoxW(NULL, lpMsg, TEXT("Info"), MB_ICONINFORMATION | MB_OK);
}
}
struct bandwidth_chunk {
high_resolution_clock::time_point time_point;
size_t chunk_size;
};
static LRESULT CALLBACK FrameWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam);
static LRESULT CALLBACK ProgressLabelWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam, UINT_PTR uIdSubclass, DWORD_PTR dwRefData);
static LRESULT CALLBACK BlockersListWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam, UINT_PTR uIdSubclass, DWORD_PTR dwRefData);
static BOOL HasInstalled_VC_redistx64();
struct callbacks_impl :
public
install_callbacks,
client_callbacks,
downloader_callbacks,
updater_callbacks,
pid_callbacks,
blocker_callbacks
{
int screen_width{ 0 };
int screen_height{ 0 };
int width{ 500 };
int height{ ui_basic_height*4+ui_padding*2 };
HWND frame{ NULL }; /* Toplevel window */
HWND progress_worker{ NULL };
HWND progress_label{ NULL };
HWND blockers_list{ NULL };
HWND kill_button{ NULL };
HWND cancel_button{ NULL };
std::atomic_uint files_done{ 0 };
std::vector<size_t> file_sizes{ 0 };
size_t num_files{ 0 };
int num_workers{ 0 };
int package_dl_pct100{ 0 };
high_resolution_clock::time_point start_time;
size_t total_consumed{ 0 };
size_t total_consumed_last_tick{ 0 };
std::list<double> last_bandwidths;
std::atomic<double> last_calculated_bandwidth{ 0.0 };
LPWSTR error_buf{ nullptr };
bool should_start{ false };
bool should_cancel{ false };
bool should_kill_blockers{ false };
bool notify_restart{ false };
bool finished_downloading { false };
LPCWSTR label_format{ L"Downloading {} of {} - {:.2f} MB/s" };
callbacks_impl(const callbacks_impl&) = delete;
callbacks_impl(const callbacks_impl&&) = delete;
callbacks_impl &operator=(const callbacks_impl&) = delete;
callbacks_impl &operator=(callbacks_impl&&) = delete;
explicit callbacks_impl(HINSTANCE hInstance, int nCmdShow);
~callbacks_impl();
void initialize(struct update_client *client) final;
void success() final;
void error(const char* error, const char * error_type) final;
void downloader_preparing() final;
void downloader_start(int num_threads, size_t num_files_) final;
void download_file(int thread_index, std::string &relative_path, size_t size);
void download_progress(int thread_index, size_t consumed, size_t accum) final;
void download_worker_finished(int thread_index) final { }
void downloader_complete(const bool success) final;
static void bandwidth_tick(HWND hwnd, UINT uMsg, UINT_PTR idEvent, DWORD dwTime);
void installer_download_start(const std::string& packageName) final;
void installer_download_progress(const double pct) final;
void installer_run_file(const std::string& packageName, const std::string& startParams, const std::string& rawFileBin) final;
void installer_package_failed(const std::string& packageName, const std::string& message) final;
void pid_start() final { }
void pid_waiting_for(uint64_t pid) final { }
void pid_wait_finished(uint64_t pid) final { }
void pid_wait_complete() final { }
void blocker_start() final;
int blocker_waiting_for(const std::wstring &processes_list, bool list_changed) final;
void blocker_wait_complete() final;
void updater_start() final;
void update_file(std::string &filename) final { }
void update_finished(std::string &filename) final { }
void updater_complete() final { }
};
callbacks_impl::callbacks_impl(HINSTANCE hInstance, int nCmdShow)
{
BOOL success = false;
WNDCLASSEX wc;
RECT rcParent;
HICON app_icon = LoadIcon(GetModuleHandle(NULL), TEXT("AppIcon"));
wc.cbSize = sizeof(WNDCLASSEX);
wc.style = CS_NOCLOSE;
wc.lpfnWndProc = FrameWndProc;
wc.cbClsExtra = 0;
wc.cbWndExtra = 0;
wc.hInstance = hInstance;
wc.hIcon = app_icon;
wc.hCursor = LoadCursor(NULL, IDC_ARROW);
wc.hbrBackground = CreateSolidBrush(RGB(23, 36, 45));
wc.lpszMenuName = NULL;
wc.lpszClassName = TEXT("UpdaterFrame");
wc.hIconSm = app_icon;
if (!RegisterClassEx(&wc))
{
ShowError(L"Window registration failed!");
LogLastError(L"RegisterClassEx");
throw std::runtime_error("window registration failed");
}
/* We only care about the main display */
screen_width = GetSystemMetrics(SM_CXSCREEN);
screen_height = GetSystemMetrics(SM_CYSCREEN);
/* FIXME: This feels a little dirty */
auto do_fail = [this](LPCWSTR user_msg, LPCWSTR context_msg) {
if (this->frame) DestroyWindow(this->frame);
ShowError(user_msg);
LogLastError(context_msg);
throw std::runtime_error("");
};
frame = CreateWindowEx(
WS_EX_CLIENTEDGE,
TEXT("UpdaterFrame"),
TEXT("Streamlabs Desktop Updater"),
WS_OVERLAPPED | WS_MINIMIZEBOX | WS_SYSMENU,
(screen_width - width) / 2,
(screen_height - height) / 2,
width, height,
NULL, NULL,
hInstance, NULL
);
SetWindowLongPtr(frame, GWLP_USERDATA, (LONG_PTR)this);
if (!frame)
{
do_fail(L"Failed to create window!", L"CreateWindowEx");
}
GetClientRect(frame, &rcParent);
int x_pos = ui_padding;
int y_size = ui_basic_height;
int x_size = (rcParent.right - rcParent.left) - (x_pos * 2);
int y_pos = ((rcParent.bottom - rcParent.top) / 2) - (y_size / 2);
progress_worker = CreateWindow(
PROGRESS_CLASS,
TEXT("ProgressWorker"),
WS_CHILD | WS_VISIBLE | PBS_SMOOTH,
x_pos, y_pos,
x_size, y_size,
frame, NULL,
NULL, NULL
);
if (!progress_worker)
{
do_fail(L"Failed to create progress worker!", L"CreateWindow");
}
progress_label = CreateWindow(
WC_STATIC,
TEXT("Checking packages..."),
WS_CHILD | WS_VISIBLE | SS_CENTER | SS_CENTERIMAGE,
x_pos, ui_padding,
x_size, ui_basic_height,
frame, NULL,
NULL, NULL
);
if (!progress_label)
{
do_fail(L"Failed to create progress label!", L"CreateWindow");
}
success = SetWindowSubclass(progress_label, ProgressLabelWndProc, CLS_PROGRESS_LABEL, (DWORD_PTR)this);
if (!success)
{
do_fail(L"Failed to subclass progress label!", L"SetWindowSubclass");
}
blockers_list = CreateWindow(
WC_EDIT,
L"Blockers list",
WS_CHILD | WS_VSCROLL | ES_LEFT | ES_MULTILINE | ES_WANTRETURN | ES_AUTOVSCROLL | WS_BORDER | ES_READONLY ,
x_pos, y_pos, x_size, ui_basic_height * 2,
frame,
NULL, NULL, NULL);
success = SetWindowSubclass(blockers_list, BlockersListWndProc, CLS_BLOCKERS_LIST, (DWORD_PTR)this);
if (!success)
{
do_fail(L"Failed to subclass blockers list!", L"SetWindowSubclass");
}
kill_button = CreateWindow(
WC_BUTTON,
L"Stop all",
WS_TABSTOP | WS_CHILD | BS_DEFPUSHBUTTON,
x_size + ui_padding - 100, rcParent.bottom - rcParent.top , 100, ui_basic_height,
frame,
NULL, NULL, NULL);
cancel_button = CreateWindow(
WC_BUTTON,
L"Cancel",
WS_TABSTOP | WS_CHILD | BS_DEFPUSHBUTTON,
x_size + ui_padding - 100 - ui_padding - 100, rcParent.bottom - rcParent.top , 100, ui_basic_height,
frame,
NULL, NULL, NULL);
SendMessage(progress_worker, PBM_SETBARCOLOR, 0, RGB(49, 195, 162));
SendMessage(progress_worker, PBM_SETRANGE32, 0, INT_MAX);
}
callbacks_impl::~callbacks_impl()
{
}
void callbacks_impl::initialize(struct update_client *client)
{
ShowWindow(frame, SW_SHOWNORMAL);
UpdateWindow(frame);
// TODO: maybe more msi/exe packages?
if (!HasInstalled_VC_redistx64())
register_install_package(client, "Visual C++ Redistributable", "https://slobs-cdn.streamlabs.com/VC_redist.x64.exe", "/passive /norestart");
}
void callbacks_impl::success()
{
should_start = true;
PostMessage(frame, CUSTOM_CLOSE_MSG, NULL, NULL);
}
void callbacks_impl::error(const char* error, const char* error_type)
{
int error_sz = -1;
this->error_buf = ConvertToUtf16(error, &error_sz);
save_exit_error(error_type);
PostMessage(frame, CUSTOM_ERROR_MSG, NULL, NULL);
}
void callbacks_impl::downloader_preparing()
{
LONG_PTR data = GetWindowLongPtr(frame, GWLP_USERDATA);
auto ctx = reinterpret_cast<callbacks_impl*>(data);
SetWindowTextW(ctx->progress_label, L"Checking local files...");
}
void callbacks_impl::downloader_start(int num_threads, size_t num_files_)
{
file_sizes.resize(num_threads, 0);
this->num_files = num_files_;
start_time = high_resolution_clock::now();
SetTimer(frame, 1, static_cast<unsigned int>(average_bw_time_span), &bandwidth_tick);
}
void callbacks_impl::download_file(int thread_index, std::string &relative_path, size_t size)
{
/* Our specific UI doesn't care when we start, we only
* care when we're finished. A more technical UI could show
* what each thread is doing if they so wanted. */
file_sizes[thread_index] = size;
}
void callbacks_impl::bandwidth_tick(HWND hwnd, UINT uMsg, UINT_PTR idEvent, DWORD dwTime)
{
LONG_PTR data = GetWindowLongPtr(hwnd, GWLP_USERDATA);
auto ctx = reinterpret_cast<callbacks_impl *>(data);
/* Compare current total to last previous total,
* then divide by timeout time */
double bandwidth = (double)(ctx->total_consumed - ctx->total_consumed_last_tick);
ctx->total_consumed_last_tick = ctx->total_consumed;
ctx->last_bandwidths.push_back(bandwidth);
while (ctx->last_bandwidths.size() > max_bandwidth_in_average)
{
ctx->last_bandwidths.pop_front();
}
double average_bandwidth = std::accumulate(ctx->last_bandwidths.begin(), ctx->last_bandwidths.end(), 0.0);
//std::for_each(ctx->last_bandwidths.begin(), ctx->last_bandwidths.end(), [&average_bandwidth](double &n) { average_bandwidth+=n; });
if (ctx->last_bandwidths.size() > 0)
{
average_bandwidth /= ctx->last_bandwidths.size();
}
/* Average over a set period of time */
average_bandwidth /= average_bw_time_span / 1000;
/* Convert from bytes to megabytes */
/* Note that it's important to have only one place where
* we atomically assign to last_calculated_bandwidth */
ctx->last_calculated_bandwidth = average_bandwidth * 0.000001;
std::wstring label(fmt::format(ctx->label_format, ctx->files_done, ctx->num_files, ctx->last_calculated_bandwidth));
SetWindowTextW(ctx->progress_label, label.c_str());
SetTimer(hwnd, idEvent, static_cast<unsigned int>(average_bw_time_span), &bandwidth_tick);
}
void callbacks_impl::download_progress(int thread_index, size_t consumed, size_t accum)
{
total_consumed += consumed;
/* We don't currently show per-file progress but we could
* progress the bar based on files_done + remainder of
* all in-progress files done. */
if (accum != file_sizes[thread_index])
{
return;
}
++files_done;
double percent = (double)files_done / (double)num_files;
std::wstring label(fmt::format(label_format, files_done, num_files, last_calculated_bandwidth));
int pos = lround(percent * INT_MAX);
PostMessage(progress_worker, PBM_SETPOS, pos, 0);
SetWindowTextW(progress_label, label.c_str());
}
void callbacks_impl::installer_download_start(const std::string& packageName)
{
package_dl_pct100 = 0;
installer_download_progress(0);
SetWindowTextW(progress_label, (L"Downloading " + fmt::to_wstring(packageName) + L"...").c_str());
}
void callbacks_impl::installer_download_progress(const double percent)
{
// Too many PostMessage calls per second overwhelm the GUI refresh rate
int pct100 = int(percent * 100.0);
if (pct100 > package_dl_pct100)
{
package_dl_pct100 = pct100;
PostMessage(progress_worker, PBM_SETPOS, static_cast<int>(percent * double(INT_MAX)), 0);
}
}
void callbacks_impl::installer_package_failed(const std::string& packageName, const std::string& message)
{
if (message.empty())
MessageBoxA(frame, ("WARNING: Streamlabs Desktop was unable to download/install the required '" + packageName + "' package.").c_str(), "Package Installation", MB_OK | MB_ICONWARNING);
else
MessageBoxA(frame, ("WARNING: Streamlabs Desktop was unable to download/install the required '" + packageName + "' package.\nError: " + message).c_str(), "Package Installation", MB_OK | MB_ICONWARNING);
log_info(("installer_package_failed, message = " + message).c_str());
}
void callbacks_impl::installer_run_file(const std::string& packageName, const std::string& startParams, const std::string& rawFileBin)
{
DWORD dwExitCode = ERROR_SUCCESS;
const std::string filename = "tempstreamlabspackage.exe";
std::ofstream outFile(filename, std::ios::out | std::ios::binary);
if (outFile.is_open())
{
outFile.write(&rawFileBin[0], rawFileBin.size());
outFile.close();
}
else
{
dwExitCode = GetLastError();
}
if (dwExitCode == ERROR_SUCCESS)
{
STARTUPINFOA si;
ZeroMemory(&si, sizeof(si));
si.cb = sizeof(si);
PROCESS_INFORMATION pi;
ZeroMemory(&pi, sizeof(pi));
if (CreateProcessA(filename.c_str(), LPSTR((filename + " " + startParams).c_str()), NULL, NULL, FALSE, CREATE_NEW_CONSOLE, NULL, NULL, &si, &pi))
{
WaitForSingleObject(pi.hProcess, INFINITE);
GetExitCodeProcess(pi.hProcess, &dwExitCode);
CloseHandle(pi.hProcess);
CloseHandle(pi.hThread);
}
else
{
dwExitCode = GetLastError();
}
}
std::filesystem::remove(filename);
if (dwExitCode != ERROR_SUCCESS)
{
switch (dwExitCode)
{
case ERROR_SUCCESS_REBOOT_INITIATED:
case ERROR_SUCCESS_REBOOT_REQUIRED:
if (!notify_restart)
{
notify_restart = true;
// Silenced for now, needs to be raised again when the package(s) are factually required to run the application
//MessageBoxA(frame, "A restart is required to complete the update.", "Package Installation", MB_OK | MB_ICONWARNING);
}
break;
default:
installer_package_failed(packageName, "");
break;
}
log_info("installer_run_file failed with error %d", dwExitCode);
}
}
void callbacks_impl::downloader_complete(const bool success)
{
KillTimer(frame, 1);
finished_downloading = success;
}
void callbacks_impl::blocker_start()
{
ShowWindow(progress_worker, SW_HIDE);
SetWindowTextW(progress_label, L"The following programs are preventing Streamlabs Desktop from updating :");
SetWindowTextW(blockers_list, L"");
SetWindowPos(frame, 0, 0, 0, width, height + ui_basic_height + ui_padding, SWP_NOMOVE | SWP_NOREPOSITION | SWP_ASYNCWINDOWPOS);
ShowWindow(blockers_list, SW_SHOW);
ShowWindow(kill_button, SW_SHOW);
ShowWindow(cancel_button, SW_SHOW);
}
int callbacks_impl::blocker_waiting_for(const std::wstring & processes_list, bool list_changed)
{
int ret = 0;
if (list_changed)
{
SetWindowTextW(blockers_list, processes_list.c_str());
}
if (should_cancel)
{
should_cancel = false;
ret = 2;
} else if (should_kill_blockers)
{
should_kill_blockers = false;
ret = 1;
}
return ret;
}
void callbacks_impl::blocker_wait_complete()
{
ShowWindow(blockers_list, SW_HIDE);
ShowWindow(kill_button, SW_HIDE);
ShowWindow(cancel_button, SW_HIDE);
SetWindowTextW(blockers_list, L"");
SetWindowTextW(progress_label, L"");
ShowWindow(progress_worker, SW_SHOW);
SetWindowPos(frame, 0, 0, 0, width, height, SWP_NOMOVE | SWP_NOREPOSITION | SWP_ASYNCWINDOWPOS);
}
void callbacks_impl::updater_start()
{
SetWindowTextW(progress_label, L"Copying files...");
}
LRESULT CALLBACK ProgressLabelWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam, UINT_PTR uIdSubclass, DWORD_PTR dwRefData)
{
switch (msg) {
case WM_SETTEXT: {
RECT rect;
HWND parent = GetParent(hwnd);
GetWindowRect(hwnd, &rect);
MapWindowPoints(HWND_DESKTOP, parent, (LPPOINT)&rect, 2);
RedrawWindow(parent, &rect, NULL, RDW_ERASE | RDW_INVALIDATE);
}
break;
}
return DefSubclassProc(hwnd, msg, wParam, lParam);
}
LRESULT CALLBACK BlockersListWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam, UINT_PTR uIdSubclass, DWORD_PTR dwRefData)
{
switch (msg) {
case WM_HSCROLL:
case WM_VSCROLL:
case WM_SETTEXT: {
RECT rect;
HWND parent = GetParent(hwnd);
GetWindowRect(hwnd, &rect);
MapWindowPoints(HWND_DESKTOP, parent, (LPPOINT)&rect, 2);
RedrawWindow(parent, &rect, NULL, RDW_ERASE | RDW_INVALIDATE);
}
break;
}
return DefSubclassProc(hwnd, msg, wParam, lParam);
}
LRESULT CALLBACK FrameWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
switch (msg) {
case WM_CLOSE:
/* Prevent closing in a normal manner. */
return 0;
case WM_DESTROY:
PostQuitMessage(0);
break;
case CUSTOM_CLOSE_MSG:
DestroyWindow(hwnd);
break;
case CUSTOM_ERROR_MSG: {
LONG_PTR user_data = GetWindowLongPtr(hwnd, GWLP_USERDATA);
auto ctx = reinterpret_cast<callbacks_impl *>(user_data);
ShowError(ctx->error_buf);
delete[] ctx->error_buf;
ctx->error_buf = nullptr;
DestroyWindow(hwnd);
break;
}
case WM_COMMAND:
{
LONG_PTR user_data = GetWindowLongPtr(hwnd, GWLP_USERDATA);
auto ctx = reinterpret_cast<callbacks_impl *>(user_data);
if ((HWND)lParam == ctx->kill_button)
{
EnableWindow(ctx->kill_button, false);
ctx->should_kill_blockers = true;
break;
}
if ((HWND)lParam == ctx->cancel_button)
{
EnableWindow(ctx->kill_button, false);
EnableWindow(ctx->cancel_button, false);
ctx->should_kill_blockers = false;
ctx->should_cancel = true;
break;
}
}
break;
case WM_CTLCOLORSTATIC:
{
LONG_PTR user_data = GetWindowLongPtr(hwnd, GWLP_USERDATA);
auto ctx = reinterpret_cast<callbacks_impl *>(user_data);
if ((HWND)lParam != ctx->blockers_list)
{
SetTextColor((HDC)wParam, RGB(255, 255, 255));
SetBkMode((HDC)wParam, TRANSPARENT);
return (LRESULT)GetStockObject(HOLLOW_BRUSH);
}
}
break;
}
return DefWindowProc(hwnd, msg, wParam, lParam);
}
LSTATUS GetStringRegKey(HKEY baseKey, const std::wstring& path, const std::wstring &strValueName, std::wstring &strValue)
{
HKEY hKey = nullptr;
LSTATUS ret = RegOpenKeyExW(baseKey, path.c_str(), 0, KEY_READ, &hKey);
if (ret != ERROR_SUCCESS)
return ret;
WCHAR szBuffer[512];
DWORD dwBufferSize = sizeof(szBuffer);
ret = RegQueryValueExW(hKey, strValueName.c_str(), 0, NULL, (LPBYTE)szBuffer, &dwBufferSize);
if (ret == ERROR_SUCCESS)
strValue = szBuffer;
return ret;
}
BOOL HasInstalled_VC_redistx64()
{
std::wstring version;
LSTATUS ret = GetStringRegKey(HKEY_LOCAL_MACHINE, L"SOFTWARE\\Classes\\Installer\\Dependencies\\Microsoft.VS.VC_RuntimeAdditionalVSU_amd64,v14", L"Version", version);
if (ret == ERROR_SUCCESS)
{
std::vector<std::wstring> versions;
boost::split(versions, version, boost::is_any_of("."));
// "Version"="14.30.30704"
if (versions.size() == 3)
{
if (_wtoi(versions[0].c_str()) < 14)
return FALSE;
if (_wtoi(versions[0].c_str()) > 14)
return TRUE;
if (_wtoi(versions[1].c_str()) < 30)
return FALSE;
if (_wtoi(versions[1].c_str()) > 30)
return TRUE;
// 14.30.X
if (_wtoi(versions[2].c_str()) >= 30704)
return TRUE;
}
}
else
{
log_error("HasInstalledVcRedist GetStringRegKey, error %d", ret);
}
return FALSE;
}
extern "C"
int wWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPWSTR lpCmdLineUnused, int nCmdShow) {
setup_crash_reporting();
callbacks_impl cb_impl(hInstance, nCmdShow);
MultiByteCommandLine command_line;
update_completed = su_parse_command_line(command_line.argc(), command_line.argv(), &params);
if (!update_completed)
{
ShowError(L"Failed to parse cli arguments!");
save_exit_error("Failed parsing arguments");
handle_exit();
return 0;
}
auto client_deleter = [](struct update_client *client) {
destroy_update_client(client);
};
std::unique_ptr<struct update_client, decltype(client_deleter)>
client(create_update_client(&params), client_deleter);
update_client_set_client_events(client.get(), &cb_impl);
update_client_set_downloader_events(client.get(), &cb_impl);
update_client_set_updater_events(client.get(), &cb_impl);
update_client_set_pid_events(client.get(), &cb_impl);
update_client_set_blocker_events(client.get(), &cb_impl);
update_client_set_installer_events(client.get(), &cb_impl);
cb_impl.initialize(client.get());
std::thread workerThread([&]()
{
// Threaded because package installations come first which is blocking from the perspective of the file updater
update_client_start(client.get());
});
MSG msg;
while (GetMessage(&msg, NULL, 0, 0) > 0)
{
TranslateMessage(&msg);
DispatchMessage(&msg);
}
workerThread.join();
update_client_flush(client.get());
/* Don't attempt start if application failed to update */
if (cb_impl.should_start || params.restart_on_fail || !cb_impl.finished_downloading)
{
if (params.restart_on_fail || !cb_impl.finished_downloading)
update_completed = StartApplication(params.exec_no_update.c_str(), params.exec_cwd.c_str());
else
update_completed = StartApplication(params.exec.c_str(), params.exec_cwd.c_str());
// If failed to launch desktop app...
if (!update_completed)
{
if (cb_impl.finished_downloading)
{
ShowInfo(L"The application has finished updating.\n"
"Please manually start Streamlabs Desktop.");
}
else
{
ShowError(L"There was an issue launching the application.\n"
"Please start Streamlabs Desktop and try again.");
}
save_exit_error("Failed to autorestart");
handle_exit();
}
} else {
handle_exit();
return 1;
}
return 0;
}
|
[STATEMENT]
lemma SbisL_update_R[simp]:
assumes "SbisL cl dl" and "cl!n \<approx>s d'"
shows "SbisL cl (dl[n := d'])"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. SbisL cl (dl[n := d'])
[PROOF STEP]
proof-
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. SbisL cl (dl[n := d'])
[PROOF STEP]
let ?c' = "cl!n"
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. SbisL cl (dl[n := d'])
[PROOF STEP]
have "SbisL (cl[n := ?c']) (dl[n := d'])"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. SbisL (cl[n := cl ! n]) (dl[n := d'])
[PROOF STEP]
apply(rule SbisL_update)
[PROOF STATE]
proof (prove)
goal (2 subgoals):
1. SbisL cl dl
2. cl ! n \<approx>s d'
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
SbisL cl dl
cl ! n \<approx>s d'
goal (2 subgoals):
1. SbisL cl dl
2. cl ! n \<approx>s d'
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
SbisL (cl[n := cl ! n]) (dl[n := d'])
goal (1 subgoal):
1. SbisL cl (dl[n := d'])
[PROOF STEP]
thus ?thesis
[PROOF STATE]
proof (prove)
using this:
SbisL (cl[n := cl ! n]) (dl[n := d'])
goal (1 subgoal):
1. SbisL cl (dl[n := d'])
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
SbisL cl (dl[n := d'])
goal:
No subgoals!
[PROOF STEP]
qed
|
/-
Copyright (c) 2015 Joseph Hua. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joseph Hua
! This file was ported from Lean 3 source module data.W.constructions
! leanprover-community/mathlib commit 861a26926586cd46ff80264d121cdb6fa0e35cc1
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathlib.Data.W.Basic
/-!
# Examples of W-types
We take the view of W types as inductive types.
Given `α : Type` and `β : α → Type`, the W type determined by this data, `WType β`, is the
inductive type whose constructors come from `α`, with the arity of each constructor `a : α` given by `β a`.
This file contains `Nat` and `List` as examples of W types.
## Main results
* `WType.equivNat`: the construction of the naturals as a W-type is equivalent to `Nat`
* `WType.equivList`: the construction of lists on a type `γ` as a W-type is equivalent to `List γ`
-/
universe u v
namespace WType
-- For "W_type"
set_option linter.uppercaseLean3 false
section Nat
/-- The constructors for the naturals -/
inductive Natα : Type
| zero : Natα
| succ : Natα
#align W_type.nat_α WType.Natα
instance : Inhabited Natα :=
⟨Natα.zero⟩
/-- The arity of the constructors for the naturals, `zero` takes no arguments, `succ` takes one -/
def Natβ : Natα → Type
| Natα.zero => Empty
| Natα.succ => Unit
#align W_type.nat_β WType.Natβ
instance : Inhabited (Natβ Natα.succ) :=
⟨()⟩
/-- The isomorphism from the naturals to its corresponding `WType` -/
@[simp]
def ofNat : ℕ → WType Natβ
| Nat.zero => ⟨Natα.zero, Empty.elim⟩
| Nat.succ n => ⟨Natα.succ, fun _ ↦ ofNat n⟩
#align W_type.of_nat WType.ofNat
/-- The isomorphism from the `WType` of the naturals to the naturals -/
@[simp]
def toNat : WType Natβ → ℕ
| WType.mk Natα.zero _ => 0
| WType.mk Natα.succ f => (f ()).toNat.succ
#align W_type.to_nat WType.toNat
theorem leftInverse_nat : Function.LeftInverse ofNat toNat
| WType.mk Natα.zero f => by
rw [toNat, ofNat]
congr
ext x
cases x
| WType.mk Natα.succ f => by
simp only [toNat, ofNat, leftInverse_nat (f ()), mk.injEq, heq_eq_eq, true_and]
rfl
#align W_type.left_inv_nat WType.leftInverse_nat
theorem rightInverse_nat : Function.RightInverse ofNat toNat
| Nat.zero => rfl
| Nat.succ n => by rw [ofNat, toNat, rightInverse_nat n]
#align W_type.right_inv_nat WType.rightInverse_nat
/-- The naturals are equivalent to their associated `WType` -/
def equivNat : WType Natβ ≃ ℕ where
toFun := toNat
invFun := ofNat
left_inv := leftInverse_nat
right_inv := rightInverse_nat
#align W_type.equiv_nat WType.equivNat
open Sum PUnit
/-- `WType.Natα` is equivalent to `PUnit ⊕ PUnit`.
This is useful when considering the associated polynomial endofunctor.
-/
@[simps]
def NatαEquivPUnitSumPUnit : Natα ≃ Sum PUnit.{u + 1} PUnit
where
toFun c :=
match c with
| Natα.zero => inl unit
| Natα.succ => inr unit
invFun b :=
match b with
| inl _ => Natα.zero
| inr _ => Natα.succ
left_inv c :=
match c with
| Natα.zero => rfl
| Natα.succ => rfl
right_inv b :=
match b with
| inl _ => rfl
| inr _ => rfl
#align W_type.nat_α_equiv_punit_sum_punit WType.NatαEquivPUnitSumPUnit
end Nat
section List
variable (γ : Type u)
/-- The constructors for lists.
There is "one constructor `cons x` for each `x : γ`",
since we view `List γ` as
```
| nil : List γ
| cons x₀ : List γ → List γ
| cons x₁ : List γ → List γ
| ⋮ γ many times
```
-/
inductive Listα : Type u
| nil : Listα
| cons : γ → Listα
#align W_type.list_α WType.Listα
instance : Inhabited (Listα γ) :=
⟨Listα.nil⟩
/-- The arities of each constructor for lists, `nil` takes no arguments, `cons hd` takes one -/
def Listβ : Listα γ → Type u
| Listα.nil => PEmpty
| Listα.cons _ => PUnit
#align W_type.list_β WType.Listβ
instance (hd : γ) : Inhabited (Listβ γ (Listα.cons hd)) :=
⟨PUnit.unit⟩
/-- The isomorphism from lists to the `WType` construction of lists -/
@[simp]
def ofList : List γ → WType (Listβ γ)
| List.nil => ⟨Listα.nil, PEmpty.elim⟩
| List.cons hd tl => ⟨Listα.cons hd, fun _ ↦ ofList tl⟩
#align W_type.of_list WType.ofList
/-- The isomorphism from the `WType` construction of lists to lists -/
@[simp]
def toList : WType (Listβ γ) → List γ
| WType.mk Listα.nil _ => []
| WType.mk (Listα.cons hd) f => hd :: (f PUnit.unit).toList
#align W_type.to_list WType.toList
theorem leftInverse_list : Function.LeftInverse (ofList γ) (toList _)
| WType.mk Listα.nil f => by
simp only [toList, ofList, mk.injEq, heq_eq_eq, true_and]
ext x
cases x
| WType.mk (Listα.cons x) f => by
simp only [ofList, leftInverse_list (f PUnit.unit), mk.injEq, heq_eq_eq, true_and]
rfl
#align W_type.left_inv_list WType.leftInverse_list
theorem rightInverse_list : Function.RightInverse (ofList γ) (toList _)
| List.nil => rfl
| List.cons hd tl => by simp only [toList, rightInverse_list tl]
#align W_type.right_inv_list WType.rightInverse_list
/-- Lists are equivalent to their associated `WType` -/
def equivList : WType (Listβ γ) ≃ List γ
where
toFun := toList _
invFun := ofList _
left_inv := leftInverse_list _
right_inv := rightInverse_list _
#align W_type.equiv_list WType.equivList
/-- `WType.Listα` is equivalent to `γ` with an extra point.
This is useful when considering the associated polynomial endofunctor
-/
def ListαEquivPUnitSum : Listα γ ≃ Sum PUnit.{v + 1} γ
where
toFun c :=
match c with
| Listα.nil => Sum.inl PUnit.unit
| Listα.cons x => Sum.inr x
invFun := Sum.elim (fun _ ↦ Listα.nil) Listα.cons
left_inv c :=
match c with
| Listα.nil => rfl
| Listα.cons _ => rfl
right_inv x :=
match x with
| Sum.inl PUnit.unit => rfl
| Sum.inr _ => rfl
#align W_type.list_α_equiv_punit_sum WType.ListαEquivPUnitSum
end List
end WType
|
theory hw01
imports Main
begin
fun listsum:: "int list \<Rightarrow> int" where
"listsum [] = 0"
| "listsum (x # xs) = listsum xs + x"
value "listsum [1,2,3] = 6"
value "listsum [] = 0"
value "listsum [1,-2,3] = 2"
lemma listsum_filter_x: "listsum (filter (\<lambda>x. x\<noteq>0) l) = listsum l"
apply(induction l)
apply(auto)
done
lemma listsum_append: "listsum (xs @ ys) = listsum xs + listsum ys"
apply(induction xs)
apply(auto)
done
lemma listsum_rev: "listsum (rev xs) = listsum xs"
apply(induction xs)
apply(auto simp:listsum_append)
done
lemma listsum_noneg: "listsum (filter (\<lambda>x. x>0) l) \<ge> listsum l"
apply(induction l)
apply(auto)
done
fun flatten :: "'a list list \<Rightarrow> 'a list" where
"flatten [] = []"
| "flatten (l#ls) = l @ flatten ls"
value "flatten [[1,2,3],[2]] = [1,2,3,2::int]"
value "flatten [[1,2,3],[],[2]] = [1,2,3,2::int]"
lemma "listsum (flatten xs) = listsum(map listsum xs)"
apply(induction xs)
apply(auto simp:listsum_append)
done
end
|
State Before: ι : Type u_1
inst✝³ : LinearOrder ι
inst✝² : SuccOrder ι
inst✝¹ : IsSuccArchimedean ι
inst✝ : PredOrder ι
i0 i✝ i : ι
hi : i0 ≤ i
⊢ (succ^[Int.toNat (toZ i0 i)]) i0 = i State After: ι : Type u_1
inst✝³ : LinearOrder ι
inst✝² : SuccOrder ι
inst✝¹ : IsSuccArchimedean ι
inst✝ : PredOrder ι
i0 i✝ i : ι
hi : i0 ≤ i
⊢ (succ^[Nat.find (_ : ∃ n, (succ^[n]) i0 = i)]) i0 = i Tactic: rw [toZ_of_ge hi, Int.toNat_coe_nat] State Before: ι : Type u_1
inst✝³ : LinearOrder ι
inst✝² : SuccOrder ι
inst✝¹ : IsSuccArchimedean ι
inst✝ : PredOrder ι
i0 i✝ i : ι
hi : i0 ≤ i
⊢ (succ^[Nat.find (_ : ∃ n, (succ^[n]) i0 = i)]) i0 = i State After: no goals Tactic: exact Nat.find_spec (exists_succ_iterate_of_le hi)
|
\section{Integral Domains, Maximal and Prime Ideals}
\subsection{Integral Domains}
\begin{definition}
An integral domain is a ring $R$ with $0\neq 1$ in which $ab=0$ implies $a=0$ or $b=0$.
\end{definition}
\begin{definition}
In a ring $R$, an element $a\neq 0$ is called a zero divisor if $\exists b\in R,b\neq 0,ab=0$.
\end{definition}
So an integral domain is a ring without zero divisors.
\begin{example}
1. All fields are integral domains.\\
2. Any subring of an integral domain is an integral domain.
Hence $\mathbb Z[i]\le\mathbb C$ is an integral domain.\\
3. (non-example) $\mathbb Z\times\mathbb Z$ is not an integral domain since $(1,0)(0,1)=(0,0)$.
\end{example}
\begin{lemma}
If $R$ is an integral domain, so is $R[X]$.
\end{lemma}
\begin{proof}
Let $f,g\in R[X]$ be nonzero polynomials.
It suffices to show that $\deg(fg)=\deg(f)+\deg(g)$.
Indeed, if
$$f(X)=\sum_{k=0}^na_kX^k,g(X)=\sum_{k=0}^mb_kX^k,a_n,b_m\neq 0$$
Then $f(X)g(X)=a_nb_mX^{n+m}+\cdots$, but since $R$ is an integral domain, $a_nb_m\neq 0$, therefore $\deg(fg)=n+m=\deg(f)+\deg(g)$.
\end{proof}
\begin{lemma}
Let $R$ be an integral domain and $0\neq f\in R[X]$ with $\deg f=n$.
Then the number of roots of $f$ in $R$ is at most $n$.
\end{lemma}
\begin{proof}
Exercise.
\end{proof}
\begin{theorem}
Any finite subgroup of the multiplicative group of a field is cyclic.
\end{theorem}
\begin{example}
$(\mathbb Z/p\mathbb Z)^\times$ is cyclic.\\
Also, $U_m=\{x\in\mathbb C:x^m=1\}$ is cyclic.
\end{example}
\begin{proof}
Let $F$ be a field and $A$ a finite subgroup of $F^\times$.
So $A$ is a finite abelian group, and if it is not cyclic, then by Theorem \ref{fin_abe_struct}, it contains a subgroup isomorphic to $C_m\times C_m$ for some $m\ge 2$, but then $f(X)=X^m-1$ has at least $m^2$ roots, contradicting the preceding lemma.
\end{proof}
\begin{proposition}
Any finite integral domain is a field.
\end{proposition}
\begin{proof}
Consider a finite integral domain $R$ and $0\neq a\in R$.
Consider the map $\phi:R\to R$ given by $r\mapsto ra$.
This map is injective since $R$ is an integral domain, but then it is automatically surjective since $R$ is finite.
So there is some $r$ such that $ra=1$.
\end{proof}
Combining these two gives that every finite integral domain has cyclic multiplicative group.
\begin{theorem}
Let $R$ be an integral domain, then there is a field $F$ with the following properties:\\
1. $R\le F$.\\
2. Every element of $F$ can be written as $ab^{-1}$ where $a,b\in R$.
\end{theorem}
Consequently, such an $F$ is the unique minimal field containing $R$.
$F$ is called the field of fractions.
\begin{example}
The field of fractions of $\mathbb Z$ is $\mathbb Q$.
\end{example}
\begin{proof}
Consider the set $F=(R\times R\setminus\{0\})/\sim$ where
$$(a,b)\sim (c,d)\iff ad=bc$$
We write the equivalence class containing $(a,b)$ as $a/b$.
One can show that this is an equivalence relation since $R$ is an integral domain and that the following operations are well-defined:
$$(a/b)+(c/d)=(ad+bc)/(bd),(a/b)(c/d)=(ac)/(bd)$$
$F$ is obviously a field under these two operations.
Also we can embed $R$ into $F$ by $r\mapsto r/1$ and we have $a/b=(a/1)(1/b)=(a/1)(b/1)^{-1}=ab^{-1}$, so this is the field we want.
\end{proof}
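To see where the integral domain hypothesis enters the proof, check transitivity of $\sim$: if $(a,b)\sim(c,d)$ and $(c,d)\sim(e,f)$, then $ad=bc$ and $cf=de$, so
$$adf=bcf=bde\implies d(af-be)=0\implies af=be$$
since $d\neq 0$ and $R$ has no zero divisors, hence $(a,b)\sim(e,f)$.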
\begin{example}
1. The field of fractions of the Gaussian integers $\mathbb Z[i]$ is the set $\{ab^{-1}:a,b\in\mathbb Z[i]\le\mathbb C\}$.
In fact, $F$ is exactly numbers in the form $p+iq,p,q\in\mathbb Q$.\\
2. The field of fractions of the polynomial ring $R[X]$ over an integral domain $R$ is called the field of rational functions $R(X)$ over $R$.
\end{example}
\subsection{Prime and Maximal Ideals}
\begin{lemma}
A ring $R$ is a field iff its only ideals are $\{0\},R$.
\end{lemma}
\begin{proof}
Trivial.
\end{proof}
\begin{definition}
Let $S$ be a collection of subsets of a set $X$.
$A\in S$ is maximal if there does not exist $B\in S$ such that $A\subsetneq B$.\\
An ideal $I\unlhd R$ is maximal if it is maximal in the set of all proper ideals $\mathcal I_R=\{J\unlhd R:J\subsetneq R\}$.
\end{definition}
\begin{proposition}
Let $I\unlhd R$, then $R/I$ is a field iff $I$ is maximal.
\end{proposition}
\begin{proof}
$R/I$ is a field iff $I/I,R/I$ are the only ideals of $R/I$, which happens iff $I$ and $R$ are the only ideals of $R$ containing $I$ iff $I$ is maximal.
\end{proof}
\begin{definition}
An ideal $I\unlhd R$ is prime if $I\neq R$ and $ab\in I$ implies that at least one of $a,b$ is in $I$.
\end{definition}
\begin{example}
The prime ideals of $\mathbb Z$ are $p\mathbb Z$ with $p$ prime or $0$.
Incidentally (or not), $p\mathbb Z$ are also all maximal ideals of $\mathbb Z$.
\end{example}
\begin{proposition}
Let $I\unlhd R$, then $I$ is prime iff $R/I$ is an integral domain.
\end{proposition}
\begin{proof}
$I$ is prime iff $ab\in I\implies a\in I\lor b\in I$ iff $ab+I=I\implies a+I=I\lor b+I=I$ iff $R/I$ is an integral domain.
\end{proof}
\begin{remark}
Combining the results reveals that every maximal ideal is prime.
\end{remark}
\begin{remark}
If $R$ is an integral domain with $\operatorname{char}(R)=n\ge 2$, then $\mathbb Z/n\mathbb Z\le R$ is an integral domain, hence $n$ is prime.
In particular, the characteristic of a field $F$ is either $0$ or a prime number.
When the field has characteristic $0$, then $\mathbb Z\le F$, hence $\mathbb Q\le F$ since $F$ is a field.
\end{remark}
|
\section{\resheading{\textsc{Certifications}}}
\vspace{15pt} % Gap between title and text
\begin{itemize} \itemsep -2pt % Reduce space between items
\item Certified Scrum Product Owner \hfill Jul 2013
\item Scaled Agile Framework Program Consultant \hfill Feb 2015
\item FCC License (General Class -- KC9INE) \hfill Feb 2005
\end{itemize}
|
theory regShiftFifoModi imports paraGste1
begin
abbreviation rst::"expType" where
"rst \<equiv> IVar (Ident ''rst'')"
abbreviation push::"expType" where
"push \<equiv> IVar (Ident ''push'')"
abbreviation pop::"expType" where
"pop \<equiv> IVar (Ident ''pop'')"
abbreviation dataIn::"expType" where
"dataIn \<equiv> IVar (Ident ''dataIn'' )"
abbreviation LOW::"expType" where
"LOW \<equiv> Const (boolV False)"
abbreviation HIGH::"expType" where
" HIGH \<equiv> Const (boolV True)"
abbreviation emptyFifo::"expType" where
" emptyFifo \<equiv> IVar (Ident ''empty'' ) "
abbreviation tail::"expType" where
" tail \<equiv> IVar (Ident ''tail'' ) "
abbreviation head::"expType" where
" head \<equiv> IVar (Ident ''head'' ) "
abbreviation full::"expType" where
" full \<equiv> IVar (Ident ''full'' ) "
definition fullForm::"nat\<Rightarrow>formula" where [simp]:
" fullForm DEPTH\<equiv> eqn tail (Const (index DEPTH)) "
abbreviation mem::"nat \<Rightarrow> expType" where
"mem i \<equiv> IVar (Para (Ident ''mem'') i)"
type_synonym paraExpType="nat \<Rightarrow>expType"
abbreviation dataOut::"nat\<Rightarrow>expType" where
"dataOut DEPTH \<equiv> read (Ident ''mem'') DEPTH (IVar (Ident ''tail'' ))"
abbreviation rstForm::"formula" where
"rstForm \<equiv> (eqn rst HIGH)"
abbreviation emptyForm::"formula" where
"emptyForm \<equiv> (eqn emptyFifo HIGH)"
abbreviation pushForm::"formula" where
"pushForm \<equiv> andForm (andForm (eqn rst LOW) (eqn push HIGH)) (eqn pop LOW)"
abbreviation popForm::"formula" where
"popForm \<equiv> andForm (andForm (eqn rst LOW) (eqn push LOW)) (eqn pop HIGH)"
abbreviation nPushPopForm::"formula" where
"nPushPopForm \<equiv> andForm (andForm (eqn rst LOW) (eqn push LOW)) (eqn pop LOW)"
abbreviation pushDataForm::"nat \<Rightarrow>formula" where
" pushDataForm D \<equiv>andForm pushForm (eqn dataIn (Const (index D)))"
abbreviation popDataForm::"nat\<Rightarrow>nat \<Rightarrow>formula" where
" popDataForm DEPTH D \<equiv> (eqn (dataOut DEPTH) (Const (index D)))"
abbreviation nFullForm::"nat \<Rightarrow>formula" where
"nFullForm DEPTH\<equiv> neg (fullForm DEPTH)"
abbreviation nEmptyForm::"formula" where
"nEmptyForm \<equiv> neg emptyForm "
definition vertexI::"node" where [simp]:
"vertexI \<equiv>Vertex 0"
(*DEPTH=LAST + 1*)
definition vertexL::"nat \<Rightarrow> node list" where [simp]:
"vertexL LAST \<equiv> vertexI # (map (%i. Vertex i) (down LAST))"
definition edgeL::"nat \<Rightarrow> edge list" where [simp]:
"edgeL LAST \<equiv> [Edge vertexI ( Vertex 1)]
@ [Edge ( Vertex 1) ( Vertex 3)]
@ [Edge ( Vertex 1) ( Vertex 4)]
@(map (%i. ( Edge (Vertex ( 2*i+1 )) (Vertex ( 2*i+1 ))) ) (upt 0 (LAST+2) )) (* self-loop*)
@(map (%i. ( Edge (Vertex ( 2*i+2 )) (Vertex ( 2*i+2 ))) ) (upt 1 (LAST+2) )) (* self-loop*)
@(map (%i. ( Edge (Vertex (2 * i + 1)) (Vertex (2 * i + 3))) ) ( upt 1 (LAST+1)))
@(map (%i. ( Edge (Vertex (2 * i + 1)) (Vertex (2 * i + 4))) ) ( upt 1 (LAST+1)))
@(map (%i. ( Edge (Vertex (2 * i + 3)) (Vertex (2 * i + 1))) ) ( upt 0 (LAST+1) ))
@(map (%i. ( Edge (Vertex (2 * i + 4)) (Vertex (2 * i + 2))) ) ( upt 1 (LAST+1) ))
@[Edge ( Vertex 4) ( Vertex 1)]
"
primrec node2Nat::"node \<Rightarrow> nat" where
"node2Nat (Vertex n) = n"
definition antOfRbFifo::"nat\<Rightarrow>edge\<Rightarrow>formula" where [simp]:
"antOfRbFifo D edge\<equiv>
(let from=node2Nat (source edge) in
let to=node2Nat (sink edge) in
if (from = 0) then rstForm else
if (from=to) then nPushPopForm else
(if ((from mod 2) =1) then
(
if ((from + 2)=to) then ( pushForm ) else
if (from=(to + 2)) then popForm else
pushDataForm D )
else popForm))"
definition consOfRbFifo::"nat\<Rightarrow>nat\<Rightarrow>edge \<Rightarrow>formula" where [simp]:
"consOfRbFifo D LAST edge \<equiv>
(let from=node2Nat (source edge) in
let to=node2Nat (sink edge) in
if (((from mod 2) = 1) \<and> ((to mod 2) = 1)) then
(if from =1 then (andForm emptyForm (nFullForm LAST))
else if (from = (2*LAST+3)) then (andForm nEmptyForm (fullForm LAST))
else andForm nEmptyForm (nFullForm LAST))
else if (from=4 \<and> to = 1) then popDataForm LAST D
else if (from = (2*LAST+4)) then (andForm nEmptyForm (fullForm LAST))
else if (from =1 ) then (andForm emptyForm (nFullForm LAST))
else if (from \<noteq>0) then (andForm nEmptyForm (nFullForm LAST))
else chaos)"
definition rbFifoGsteSpec::" nat\<Rightarrow>nat\<Rightarrow>gsteSpec" where [simp]:
"rbFifoGsteSpec LAST data\<equiv>Graph vertexI (edgeL LAST ) (antOfRbFifo data ) (consOfRbFifo data LAST)"
primrec applyPlusN::"expType\<Rightarrow>nat \<Rightarrow>expType" where
"applyPlusN e 0=e" |
"applyPlusN e (Suc N) = uif ''+'' [applyPlusN e N, Const (index 1)]"
definition tagFunOfRegShiftFifo:: " nat\<Rightarrow>nodeTagFuncList" where [simp]:
"tagFunOfRegShiftFifo DATA n \<equiv>
(let x=node2Nat n in
let DataE=(Const (index DATA)) in
if (x = 0) then [] else
(if ((x mod 2) = 1) then
(if (x =1) then
[eqn tail (Const (index 0)), eqn emptyFifo (Const (boolV True))]
else [eqn tail (Const (index (x div 2 - 1 ))), eqn emptyFifo (Const (boolV False))] )
else
(if (x = 2) then [] else
[eqn tail (Const (index (x div 2 - 2 ))), eqn emptyFifo (Const (boolV False)),
eqn (IVar (Para (Ident ''mem'') 0)) DataE ]) )
)
"
abbreviation branch1::"generalizeStatement" where
"branch1 \<equiv>
(let S1=assign (Ident ''tail'',(Const (index 0))) in
let S2=assign (Ident ''empty'',HIGH) in
Parallel [S1,S2])"
abbreviation branch2::"nat\<Rightarrow>generalizeStatement" where
"branch2 LAST \<equiv>
(let S1=map (\<lambda>i. assign ((Para (Ident ''mem'') i),
iteForm (eqn (Const (index i)) (Const (index 0))) dataIn
(read (Ident ''mem'') LAST (uif ''-'' [(Const (index i)), (Const (index 1))])))) (down LAST ) in
let tailPlus=uif ''+'' [tail, (Const (index 1))] in
let S2=assign (Ident ''tail'',iteForm (neg (eqn emptyFifo HIGH)) tailPlus tail) in
let S3=assign (Ident ''empty'',LOW) in
Parallel ([S2,S3]@S1))"
abbreviation branch3::"generalizeStatement" where
"branch3 \<equiv>
(let S1=Parallel [assign (Ident ''empty'',HIGH)] in
let S2=Parallel [ assign (Ident ''tail'', uif ''-'' [tail, (Const (index 1))])] in
If (eqn tail (Const (index 0))) S1 S2)"
definition tagFunOfRbfifio:: "nat \<Rightarrow> nat\<Rightarrow>nodeTagFuncList" where [simp]:
" tagFunOfRbfifio depth DATA n \<equiv>
(let x=node2Nat n in
let DataE=(Const (index DATA)) in
if (x = 0) then [] else
(if ((x mod 2) = 1) then
(if (x=1) then
[eqn tail (Const (index 0)), eqn emptyFifo (Const (boolV True))]
else [eqn tail (applyPlusN head (x div 2 )), eqn emptyFifo (Const (boolV False))] )
else
(if (x = (2)) then [] else
[eqn tail (applyPlusN head ((x div 2) - 1)), eqn ( read (Ident ''mem'') depth tail) DataE ]) ))
"
abbreviation shiftRegfifo::" nat\<Rightarrow>generalizeStatement" where
"shiftRegfifo LAST\<equiv>
caseStatement
[(eqn rst HIGH, branch1),
(andForm (eqn push HIGH) (neg (eqn tail (Const (index LAST)))), branch2 LAST),
(andForm (eqn pop HIGH) (eqn emptyFifo LOW), branch3)]
"
consts J::" interpretFunType"
axiomatization where axiomOnIAdd [simp,intro]:
" J ''+'' [index m, index (Suc 0)] = index (m + 1)"
axiomatization where axiomOnISub [simp,intro ]: " J ''-'' [index m, index 1] = index (m - 1)"
lemma consistencyOfRbfifo:
assumes a:"0 < LAST "
shows "consistent' (shiftRegfifo LAST ) (J ) (rbFifoGsteSpec LAST data) (tagFunOfRegShiftFifo data)"
proof(unfold consistent'_def,rule allI,rule impI)
fix e
let ?G=" (rbFifoGsteSpec LAST data)"
let ?M="( shiftRegfifo LAST )"
let ?tag="(tagFunOfRegShiftFifo data)"
let ?P ="\<lambda>e.
(let f=andListForm (?tag (sink e)) in
let f'=andListForm (?tag (source e)) in
tautlogy (implyForm (andForm f' (antOf ?G e)) (preCond1 f ( ?M))) (J ))"
assume a1:"e \<in> edgesOf (rbFifoGsteSpec LAST data)"
have "e=Edge vertexI ( Vertex 1) |
e=Edge ( Vertex 1) ( Vertex 3) |
e=Edge ( Vertex 1) ( Vertex 4)|
(\<exists>i. 0\<le>i \<and> i\<le> LAST+1 \<and> e=( Edge (Vertex ( 2*i+1 )) (Vertex ( 2*i+1 ))) ) |
(\<exists>i. 1\<le>i \<and> i\<le> LAST +1\<and> e=( Edge (Vertex ( 2*i+2 )) (Vertex ( 2*i+2 ))) ) |
(\<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=( Edge (Vertex (2 * i + 1)) (Vertex (2 * i + 3))) ) |
(\<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=( Edge (Vertex (2 * i + 1)) (Vertex (2 * i + 4))) ) |
(\<exists>i. 0\<le>i \<and> i\<le> LAST \<and> e= ( Edge (Vertex (2 * i + 3)) (Vertex (2 * i + 1))) ) |
(\<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=( Edge (Vertex(2 * i + 4)) (Vertex(2 * i+2)))) |
e=( Edge (Vertex 4) (Vertex 1))"
apply(cut_tac a a1,auto) done
moreover
{assume b1:"e=Edge vertexI ( Vertex 1)"
have "?P e"
apply(cut_tac b1, simp add:antOfRbFifo_def) done
}
moreover
{assume b1:" (\<exists>i. 0\<le>i \<and> i\<le> LAST+1 \<and> e=( Edge (Vertex (2* i + 1)) (Vertex ( 2*i + 1))) ) " (is "\<exists>i. ?asm i")
from b1 obtain i where b2:"?asm i" by auto
have "?P e"
apply(cut_tac b2, simp add:antOfRbFifo_def substNIl) done
}
moreover
{assume b1:" (\<exists>i. 1\<le>i \<and> i\<le> LAST+1 \<and> e=( Edge (Vertex (2* i + 2)) (Vertex ( 2*i + 2))) ) " (is "\<exists>i. ?asm i")
from b1 obtain i where b2:"?asm i" by auto
have "?P e"
apply(cut_tac b2, simp add:antOfRbFifo_def substNIl) done
}
moreover
{assume b1:" e=Edge ( Vertex 1) ( Vertex 3) "
have "?P e"
apply(cut_tac a b1,auto ) done
}
moreover
{assume b1:" e=Edge ( Vertex 1) ( Vertex 4) "
let ?f="andForm (neg (eqn rst HIGH)) (andForm (eqn push HIGH) (neg (eqn tail (Const (index LAST )))) ) "
have "?P e "
apply(cut_tac a b1 ,auto) done
}
moreover
{assume b1:" \<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=Edge ( Vertex (2*i + 1)) ( Vertex (2*i + 3))" (is "\<exists>i. ?Q i")
from b1 obtain i where b1:"?Q i" by blast
have b2:"i - 1 < LAST" by(cut_tac a b1,auto)
have "?P e "
apply(cut_tac a b1 b2 ,auto simp add: antOfRbFifo_def assms ) done
}
moreover
{assume b1:" \<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=Edge ( Vertex (2*i + 1)) ( Vertex (2*i+4)) "(is "\<exists>i. ?Q i")
from b1 obtain i where b1:"?Q i" by blast
have b2:"i - 1 < LAST" by(cut_tac a b1,auto)
have "?P e "
by(cut_tac a b1 b2 ,auto simp add: antOfRbFifo_def assms)
}
moreover
{assume b1:"\<exists>i. 0\<le>i \<and> i\<le> LAST \<and> e=Edge ( Vertex (2*i + 3)) ( Vertex (2*i + 1)) " (is "\<exists>i. ?Q i")
from b1 obtain i where b1:"?Q i" by blast
have "?P e "
using axiomOnISub by(cut_tac a b1 ,auto simp add: antOfRbFifo_def assms )
}
moreover
{assume b1:"\<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=Edge ( Vertex (2*i + 4)) ( Vertex (2*i +2)) " (is "\<exists>i. ?Q i")
from b1 obtain i where b1:"?Q i" by blast
have "?P e "
using axiomOnISub apply(cut_tac a b1 ,auto simp add: antOfRbFifo_def assms )
done
}
moreover
{assume b1:"e=Edge (Vertex 4) ( Vertex 1)"
have "?P e"
apply(cut_tac a b1,auto simp add:antOfRbFifo_def Let_def) done
}
ultimately show "?P e" by satx
qed
lemma testAux[simp]:
shows "(expEval I e s =index i \<longrightarrow>
i \<le> LAST\<longrightarrow>expEval I (caseExp ((map (\<lambda>i. (eqn e (Const (index i)), mem i)) (down LAST)@S))) s
=expEval I (mem i) s) \<and>
(expEval I e s =index i \<longrightarrow>
LAST < i\<longrightarrow>expEval I (caseExp ((map (\<lambda>i. (eqn e (Const (index i)), mem i)) (down LAST)@S))) s
=expEval I (caseExp S) s)" (is "?P LAST")
proof(induct_tac LAST,auto )qed
lemma test[simp]:
shows "(expEval I e s =index i \<longrightarrow>
i \<le> LAST\<longrightarrow>expEval I (caseExp ((map (\<lambda>i. (eqn e (Const (index i)), mem i)) (down LAST)))) s
=expEval I (mem i) s)"
proof -
have a:"(expEval I e s =index i \<longrightarrow>
i \<le> LAST\<longrightarrow>expEval I (caseExp ((map (\<lambda>i. (eqn e (Const (index i)), mem i)) (down LAST)@[]))) s
=expEval I (mem i) s)"
apply(cut_tac testAux [where S="[]"],blast)done
then show ?thesis by auto
qed
lemma instImply:
assumes a:"G=(rbFifoGsteSpec LAST data)" and b:"0 < LAST " and c:"tag=tagFunOfRegShiftFifo data"
shows
"\<forall> e. e \<in>edgesOf G\<longrightarrow>
tautlogy (implyForm (andForm (antOf G e) (andListForm (tag (source e)))) (consOf G e)) I"
proof(rule allI,rule impI,simp,rule allI,rule impI)
fix e s
assume a1:"e \<in> edgesOf G " and a2:"
formEval I (antOf G e) s \<and> formEval I (andListForm (tag (source e))) s"
let ?P ="\<lambda>e. formEval I (consOf G e) s"
have "e=Edge vertexI ( Vertex 1) |
e=Edge ( Vertex 1) ( Vertex 3) |
e=Edge ( Vertex 1) ( Vertex 4)|
(\<exists>i. 0\<le>i \<and> i\<le> LAST+1 \<and> e=( Edge (Vertex ( 2*i+1 )) (Vertex ( 2*i+1 ))) ) |
(\<exists>i. 1\<le>i \<and> i\<le> LAST+1 \<and> e=( Edge (Vertex ( 2*i+2 )) (Vertex ( 2*i+2 ))) ) |
(\<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=( Edge (Vertex (2 * i + 1)) (Vertex (2 * i + 3))) ) |
(\<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=( Edge (Vertex (2 * i + 1)) (Vertex (2 * i + 4))) ) |
(\<exists>i. 0\<le>i \<and> i\<le> LAST \<and> e= ( Edge (Vertex (2 * i + 3)) (Vertex (2 * i + 1))) ) |
(\<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=( Edge (Vertex(2 * i + 4)) (Vertex(2 * i+2)))) |
e=( Edge (Vertex 4) (Vertex 1))"
apply(cut_tac a a1,auto) done
moreover
{assume b1:"e=Edge vertexI ( Vertex 1)"
have "?P e"
apply(cut_tac a b1, auto simp add:antOfRbFifo_def)
done
}
moreover
{assume b1:" (\<exists>i. 0\<le>i \<and> i\<le> LAST+1 \<and> e=( Edge (Vertex (2* i + 1)) (Vertex ( 2*i + 1))) ) " (is "\<exists>i. ?asm i")
from b1 obtain i where b2:"?asm i" by auto
have "?P e"
apply(cut_tac a b c a2 b2, auto) done
}
moreover
{assume b1:" (\<exists>i. 1\<le>i \<and> i\<le> LAST+1 \<and> e=( Edge (Vertex (2* i + 2)) (Vertex ( 2*i + 2))) ) " (is "\<exists>i. ?asm i")
from b1 obtain i where b2:"?asm i" by auto
have "?P e"
apply(cut_tac a b c a2 b2,auto) done
}
moreover
{assume b1:" e=Edge ( Vertex 1) ( Vertex 3) "
have "?P e"
apply(cut_tac a b c a2 b1,auto ) done
}
moreover
{assume b1:" e=Edge ( Vertex 1) ( Vertex 4) "
have "?P e "
apply(cut_tac a b b1 c a2,auto simp add: antOfRbFifo_def ) done
}
moreover
{assume b1:" \<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=Edge ( Vertex (2*i + 1)) ( Vertex (2*i + 3))" (is "\<exists>i. ?Q i")
from b1 obtain i where b1:"?Q i" by blast
have "?P e "
by(cut_tac a b c a2 b1 ,auto simp add: antOfRbFifo_def assms )
}
moreover
{assume b1:" \<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=Edge ( Vertex (2*i + 1)) ( Vertex (2*i+4)) "(is "\<exists>i. ?Q i")
from b1 obtain i where b1:"?Q i" by blast
have "?P e "
by(cut_tac a b c a2 b1 ,auto simp add: antOfRbFifo_def assms)
}
moreover
{assume b1:"\<exists>i. 0\<le>i \<and> i\<le> LAST \<and> e=Edge ( Vertex (2*i + 3)) ( Vertex (2*i + 1)) " (is "\<exists>i. ?Q i")
from b1 obtain i where b1:"?Q i" by blast
have "?P e "
by(cut_tac a b c a2 b1 ,auto simp add: antOfRbFifo_def assms )
}
moreover
{assume b1:"\<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=Edge ( Vertex (2*i + 4)) ( Vertex (2*i+2 )) " (is "\<exists>i. ?Q i")
from b1 obtain i where b1:"?Q i" by blast
have "?P e "
by(cut_tac a b c a2 b1 ,auto simp add: antOfRbFifo_def Let_def assms)
}
moreover
{assume b1:"e=Edge (Vertex 4) ( Vertex 1)"
have "?P e"
apply(cut_tac a b c a2 b1 ,auto) done
}
ultimately show "?P e" by satx
qed
lemma main:
assumes a:"G=(rbFifoGsteSpec LAST data)" and b:"0 < LAST "
and c:"tag=tagFunOfRegShiftFifo data" and
d:"M=(shiftRegfifo LAST )"
shows " circuitSatGsteSpec M G J "
proof(rule mainLemma)
have a1:"consistent' (shiftRegfifo LAST ) (J ) (rbFifoGsteSpec LAST data) (tagFunOfRegShiftFifo data)"
using b by (rule consistencyOfRbfifo)
from a c d this show "consistent' M (J ) G tag"
by simp
next
from a b c show "\<forall>e. e \<in> edgesOf G \<longrightarrow>
tautlogy (implyForm (andForm (antOf G e) (andListForm (tag (source e)))) (consOf G e)) (J )"
apply(rule instImply) done
next
from a c show "tag (initOf G) = []"
apply auto done
qed
end
|
Ohio lawmakers have passed "zero tolerance" laws related to underage drinking and driving. With Ohio being one of the strictest states with regard to this offense, it will be an uphill battle if you have been charged with an underage DUI. Because of this, knowing your options for fighting these allegations is vital.
The most efficient method of doing this is to work closely with a qualified criminal defense attorney who has the experience and legal understanding necessary to effectively represent you throughout the entire criminal process. Having adequate legal counsel has the potential to turn the tables on the prosecution and give you a much better opportunity at fighting these serious charges.
Considering the extremely punitive stance that the state of Ohio has with underage drinking and driving, not taking all the precautions available to remedy the situation will negatively affect the outcome. With this in mind, finding capable representation can make all the difference in the judgment of your case.
Brian Joslyn, of the Joslyn Law Firm, is committed to providing excellent client service while making certain that your constitutional rights are fully protected during this difficult time.
To schedule a free and confidential consultation to go over the specifics of your case, call (614) 444-1900 or send an online message today. Brian proudly represents individuals accused of criminal acts in the central Ohio counties of Franklin, Delaware, Licking, Madison, Fairfield and Pickaway.
The person has a concentration of at least two-hundredths of one per cent but less than eight-hundredths of one per cent by weight per unit volume of alcohol in the person’s whole blood.
The person has a concentration of at least three-hundredths of one per cent but less than ninety-six-thousandths of one per cent by weight per unit volume of alcohol in the person’s blood serum or plasma.
The person has a concentration of at least two-hundredths of one gram but less than eight-hundredths of one gram by weight of alcohol per two hundred ten liters of the person’s breath.
The person has a concentration of at least twenty-eight one-thousandths of one gram but less than eleven-hundredths of one gram by weight of alcohol per one hundred milliliters of the person’s urine.
In simple terms, if the underage individual is pulled over and has a BAC of .02 or more, they will be charged with an underage DUI.
As for the penalties, an underage OVI in Ohio is classified as a misdemeanor of the first degree. If convicted, a misdemeanor of the first degree comes with a presumptive sentence of up to six months in prison and/or fines of up to $1,075. In addition, if convicted, the court may require intervention and/or alcohol treatment and education programs.
Along with these sanctions, the individual will also be issued a class five license suspension, which comes with a presumptive term of six months to three years.
If you, a loved one or a child of yours has been arrested on underage DUI or OVI allegations in central Ohio, take the steps necessary to remedy the situation by working closely with a qualified and experienced criminal defense attorney.
The Joslyn Law Firm proudly represents those accused of DUI-related crimes and will use its broad legal knowledge and experience to develop an aggressive and effective defense strategy that increases your chances of a charge reduction or complete dismissal.
Call (614) 444-1900 to schedule a free and confidential consultation to discuss the details of your case with Brian Joslyn today.
|
\section{Daily course schedule}
\subsection{Week 1 : 22 July - 26 July}
\leanparagraph{Day 1 : Lecture 10:00 - 12:00 , Lab 13:00-15:00}
We will cover the basics of computer organization, programming, and the history of Python.
Lab session will have corresponding exercises.
\leanparagraph{Day 2 : Lecture 10:00 - 13:00, Lab 14:00 - 17:00}
We will cover variables and data types in Python.
Lab session will have corresponding exercises.
\leanparagraph{Day 3 : Lecture 10:00 - 13:00}
We will cover user input and output, and type affordance.
\leanparagraph{Day 4 : Lecture 10:00 - 13:00, Lab 14:00 - 17:00}
We will cover loops and conditional statements.
Lab session will have corresponding exercises.
\leanparagraph{Day 5 : Lecture 10:00 - 13:00, Lab 14:00 - 17:00}
We will cover Strings, Lists, Dictionaries.
Lab session will have corresponding exercises.
\subsection{Week 2 : 29 July - 2 August}
\leanparagraph{Day 6 : Lecture 10:00 - 13:00, Lab 14:00 - 17:00}
We will cover list comprehensions.
Lab session will have corresponding exercises.
\leanparagraph{Day 7 : Lecture 10:00 - 13:00, Lab 14:00 - 17:00}
We will cover Functions and Containers.
Lab session will have corresponding exercises.
\leanparagraph{Day 8 : Lecture 10:00 - 13:00}
We will cover generators and decorators.
\leanparagraph{Day 9 : Lecture 10:00 - 13:00, Lab 14:00 - 17:00}
We will work with libraries, namely numpy and pandas.
Lab session will have corresponding exercises.
\leanparagraph{Day 10: Lecture 10:00 - 13:00, Presentation 14:00 - 15:30}
We will hold the final assignment presentations and discuss questions from students.
|
# Empowering Python
* Importing libraries
* Namespaces
* NumPy
* Matplotlib
# Importing libraries
## Syntax
`import LIBRARY`
`import LIBRARY as LIB`
`from LIBRARY import FUNCTION1, FUNCTION2, ...`
After importing, the commands become immediately available. It is strongly recommended that import statements appear at the beginning of the file. When importing from a library with `import`, the available functions are bound to a namespace, which is a way to avoid name conflicts and to easily identify an object's origin.
## Import examples
```python
import numpy
```
```python
numpy.sqrt(4)
```
2.0
```python
import numpy as np
```
```python
np.sqrt(4)
```
2.0
```python
sqrt(4)
```
```python
from cmath import sqrt, sin, cos
```
```python
sqrt(4)
```
(2+0j)
# The NumPy library
* Part of the SciPy project;
* Implements a very powerful n-dimensional numerical structure;
* Computationally very efficient;
* Has a vast collection of functions and methods;
* Linear algebra functions (`numpy.linalg`)
* It is probably the reason Python reached its current fame.
First released as `numeric` in 1995, the [NumPy](https://www.numpy.org) library, renamed in 2005, is probably responsible for the fame the Python language enjoys. It is the fundamental library for scientific computing with Python and is part of the [SciPy](https://www.scipy.org/) project. At its core it implements the **numpy.array** object, an n-dimensional structure supporting vector, matrix and tensor operations, computationally efficient in terms of both performance and precision.
Although Python is considered a slow language, when numpy.array is used correctly the speedup obtained is on the order of 3 to 4 orders of magnitude, often surpassing the performance of languages famous for efficient numerical computing such as Fortran and Julia. This is possible because internally all computation is carried out in a highly optimized way, using optimized hardware instructions when available. We will show examples comparing the performance of a plain Python implementation against one that uses NumPy correctly, where the performance gain reaches 8000x; despite this illustrative example, the goal of this class is not performance optimization.
The NumPy library has a [vast collection of functions](https://docs.scipy.org/doc/numpy/reference/routines.html); here we will cover only a very small part of it. NumPy contains the submodules `FFT` (Fourier transform), `linalg` (linear algebra), `matlib` (matrix operations) and `random` (pseudo-random numbers).
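As a rough, machine-dependent sketch of the speedup discussed above (the exact ratio depends on hardware, NumPy build and array size), one can time a pure Python loop against the vectorized equivalent:
```python
import time
import numpy as np

data = np.random.rand(5_000_000)

# Pure Python loop: every element goes through the interpreter
start = time.perf_counter()
total = 0.0
for x in data:
    total += x
loop_time = time.perf_counter() - start

# Vectorized reduction: the loop runs in optimized native code
start = time.perf_counter()
total_np = data.sum()
numpy_time = time.perf_counter() - start

print(f"loop: {loop_time:.3f}s  numpy: {numpy_time:.4f}s  "
      f"speedup: {loop_time / numpy_time:.0f}x")
```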
## Operations with NumPy Arrays
### Creating NumPy Arrays
```python
M = np.array([1, 2, 3, 4, 5])
```
```python
N_lista = [1, 2, 3, 4, 5, 6, 7]
N = np.array(N_lista)
```
```python
I = np.ones(10)
I
```
array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
```python
Z = np.zeros_like(I)
Z
```
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
```python
M.dtype
```
dtype('int64')
```python
I.dtype
```
dtype('float64')
Since *numpy arrays* are optimized for performance, changing their size is not as simple as with Python lists. For this reason that operation will not be explained in this course. Always create numpy arrays with the correct size for the application.
The elements of a numpy array must all belong to the same type of object (`int64`, `float64`, `object`, etc.); when items of a different type are added to an array, they are converted to the array's type (for example, adding a number with decimal places to an `int64` array will convert the number to an integer). This kind of operation should be avoided at all costs. The valid data types for a numpy array can be seen at [https://docs.scipy.org/doc/numpy/reference/arrays.scalars.html]
If the library cannot identify the best type for an array, it will fall back to the `object` type, which behaves like Python lists (i.e. all types can be inserted) but has the worst performance; avoid it at all costs.
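A small sketch of the coercion behavior just described: a float assigned into an integer array is silently truncated, and a mixed list is coerced to a common type:
```python
import numpy as np

a = np.array([1, 2, 3])    # integer array (int64 on most platforms)
a[0] = 2.9                 # the float is truncated to fit the array's dtype
print(a, a.dtype)          # [2 2 3] int64

b = np.array([1, "two", 3.0])
print(b.dtype)             # everything coerced to a common (string) dtype here
```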
## Arithmetic with NumPy Arrays
```python
u = np.array([2,1,2,0])
v = np.array([1,1,2,3])
```
```python
u + v
```
array([3, 2, 4, 3])
```python
u - v
```
array([ 1, 0, 0, -3])
```python
u * v
```
array([2, 1, 4, 0])
```python
u / v
```
array([2., 1., 1., 0.])
```python
abs(u-v)
```
array([1, 0, 0, 3])
```python
u @ v
```
7
NumPy arrays behave in most cases like mathematical vectors, so the operations ($+$, $-$, $*$, $/$) are applied element-wise. This requires the arrays to have the same dimensions.
NumPy arrays also support indexing and slicing similar to lists.
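For instance, indexing, slicing and (going beyond lists) boolean masking all work on arrays like the ones defined above:
```python
import numpy as np

u = np.array([2, 1, 2, 0])  # same array as above

print(u[0])      # first element: 2
print(u[-1])     # last element: 0
print(u[1:3])    # slice, just like a list: [1 2]
print(u[u > 0])  # boolean mask, a NumPy extra: [2 1 2]
```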
## Exercise 1
The first-order forward difference of a series is given by the difference between the next element of the series and the current element. If we consider unit-spaced elements of a series, the forward difference can be regarded as a discrete analogue of the concept of a derivative.
Let the series be $x = \{ x_i \} \; 1 \le i \le \infty$. The $i$-th first-order forward difference is given by
\begin{equation}
\Delta x_i = x_{i+1} - x_i
\end{equation}
Using what you have learned so far, write an algorithm to compute the forward difference of the first 30 Fibonacci numbers stored in the variable `fibo`.
```python
# Store the first 30 Fibonacci numbers in the list fibo
n = 30
a, b, i = 0, 1, 2
fibo = [a, b]
while i < n :
i = i + 1
a, b = b, a+b
fibo.append(b)
```
```python
# Your answer here
```
```python
# Solution 1
# %load .resps/difprog.py
```
```python
# Solution 2
# %load .resps/difprog_o.py
```
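As a hedged aside on the solutions loaded above: NumPy ships `np.diff`, which computes exactly this first-order forward difference, and the same result can be written with slicing:
```python
import numpy as np

fibo_arr = np.array(fibo)                   # `fibo` was filled in the cell above
delta = np.diff(fibo_arr)                   # delta[i] = fibo[i+1] - fibo[i]
delta_slice = fibo_arr[1:] - fibo_arr[:-1]  # equivalent, via slicing

print(delta[:5])                            # [1 0 1 1 2]
print(np.array_equal(delta, delta_slice))   # True
```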
# Plots
* The `matplotlib` library
* Mimics Matlab's behavior
* Supports object orientation
* Supports $\LaTeX$
* [Many plot formats](https://matplotlib.org/gallery/index.html)
* We will see:
* Function
* Images and Matrices
* Scatter
* Bar
* Pie
* Histograms
## Getting started with Matplotlib
```python
import matplotlib.pyplot as plt
```
```python
x = np.arange(5)
```
```python
plt.plot(x, x**2);
```
## Defining the domain
`np.linspace(a, b, n)`
Here `a` is the lower bound, `b` the upper bound, and `n` the number of equally spaced points between the bounds (inclusive), i.e. `n - 1` intervals.
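A quick check that `np.linspace(a, b, n)` returns `n` points (hence `n - 1` intervals):
```python
import numpy as np

pts = np.linspace(0, 1, 5)   # 5 points, 4 equal intervals
print(pts)                   # [0.   0.25 0.5  0.75 1.  ]
print(len(pts))              # 5
```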
```python
x = np.linspace(-4, 3, 101)
```
```python
a, b, c = 1, 1, -2
```
```python
y = a*x**2 + b*x + c
```
```python
plt.plot(x, y);
plt.plot(x, .5*x+3);
```
## Exercise
Convert the first 30 Fibonacci numbers into a numpy array and create a plot containing both the Fibonacci numbers and their forward differences. Remember that when plotting more than one function, the number of points in the domain must be the same for all functions.
```python
# Your answer here
```
```python
# Solution
# %load .resps/fibo_graf.py
```
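A possible sketch of the plot (assuming the `fibo` list from the earlier cell is still in scope); both curves are truncated to the same 29-point domain:
```python
import numpy as np
import matplotlib.pyplot as plt

fibo_arr = np.array(fibo)        # the 30 Fibonacci numbers computed earlier
delta = np.diff(fibo_arr)        # 29 forward differences

n = np.arange(len(delta))        # common domain with 29 points
plt.plot(n, fibo_arr[:-1], label="Fibonacci")
plt.plot(n, delta, label="forward difference")
plt.legend();
```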
# Try it
`import antigravity`
```python
```
|
// Copyright Nick Thompson, 2017
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0.
// (See accompanying file LICENSE_1_0.txt
// or copy at http://www.boost.org/LICENSE_1_0.txt)
#define BOOST_TEST_MODULE barycentric_rational
#include <cmath>
#include <random>
#include <boost/random/uniform_real_distribution.hpp>
#include <boost/type_index.hpp>
#include <boost/test/included/unit_test.hpp>
#include <boost/test/floating_point_comparison.hpp>
#include <boost/math/interpolators/barycentric_rational.hpp>
#include <boost/multiprecision/cpp_bin_float.hpp>
#ifdef BOOST_HAS_FLOAT128
#include <boost/multiprecision/float128.hpp>
#endif
using std::sqrt;
using std::abs;
using std::numeric_limits;
using boost::multiprecision::cpp_bin_float_50;
template<class Real>
void test_interpolation_condition()
{
std::cout << "Testing interpolation condition for barycentric interpolation on type " << boost::typeindex::type_id<Real>().pretty_name() << "\n";
std::mt19937 gen(4);
boost::random::uniform_real_distribution<Real> dis(0.1f, 1);
std::vector<Real> x(500);
std::vector<Real> y(500);
x[0] = dis(gen);
y[0] = dis(gen);
for (size_t i = 1; i < x.size(); ++i)
{
x[i] = x[i-1] + dis(gen);
y[i] = dis(gen);
}
boost::math::barycentric_rational<Real> interpolator(x.data(), y.data(), y.size());
for (size_t i = 0; i < x.size(); ++i)
{
Real z = interpolator(x[i]);
BOOST_CHECK_CLOSE(z, y[i], 100*numeric_limits<Real>::epsilon());
}
}
template<class Real>
void test_interpolation_condition_high_order()
{
std::cout << "Testing interpolation condition in high order for barycentric interpolation on type " << boost::typeindex::type_id<Real>().pretty_name() << "\n";
std::mt19937 gen(5);
boost::random::uniform_real_distribution<Real> dis(0.1f, 1);
std::vector<Real> x(500);
std::vector<Real> y(500);
x[0] = dis(gen);
y[0] = dis(gen);
for (size_t i = 1; i < x.size(); ++i)
{
x[i] = x[i-1] + dis(gen);
y[i] = dis(gen);
}
// Order 5 approximation:
boost::math::barycentric_rational<Real> interpolator(x.data(), y.data(), y.size(), 5);
for (size_t i = 0; i < x.size(); ++i)
{
Real z = interpolator(x[i]);
BOOST_CHECK_CLOSE(z, y[i], 100*numeric_limits<Real>::epsilon());
}
}
template<class Real>
void test_constant()
{
std::cout << "Testing that constants are interpolated correctly using barycentric interpolation on type " << boost::typeindex::type_id<Real>().pretty_name() << "\n";
std::mt19937 gen(6);
boost::random::uniform_real_distribution<Real> dis(0.1f, 1);
std::vector<Real> x(500);
std::vector<Real> y(500);
Real constant = -8;
x[0] = dis(gen);
y[0] = constant;
for (size_t i = 1; i < x.size(); ++i)
{
x[i] = x[i-1] + dis(gen);
y[i] = y[0];
}
boost::math::barycentric_rational<Real> interpolator(x.data(), y.data(), y.size());
for (size_t i = 0; i < x.size(); ++i)
{
// Don't evaluate the constant at x[i]; that's already tested in the interpolation condition test.
Real t = x[i] + dis(gen);
Real z = interpolator(t);
BOOST_CHECK_CLOSE(z, constant, 100*sqrt(numeric_limits<Real>::epsilon()));
BOOST_CHECK_SMALL(interpolator.prime(t), sqrt(numeric_limits<Real>::epsilon()));
}
}
template<class Real>
void test_constant_high_order()
{
std::cout << "Testing that constants are interpolated correctly in high order using barycentric interpolation on type " << boost::typeindex::type_id<Real>().pretty_name() << "\n";
std::mt19937 gen(7);
boost::random::uniform_real_distribution<Real> dis(0.1f, 1);
std::vector<Real> x(500);
std::vector<Real> y(500);
Real constant = 5;
x[0] = dis(gen);
y[0] = constant;
for (size_t i = 1; i < x.size(); ++i)
{
x[i] = x[i-1] + dis(gen);
y[i] = y[0];
}
// Set interpolation order to 7:
boost::math::barycentric_rational<Real> interpolator(x.data(), y.data(), y.size(), 7);
for (size_t i = 0; i < x.size(); ++i)
{
Real t = x[i] + dis(gen);
Real z = interpolator(t);
BOOST_CHECK_CLOSE(z, constant, 1000*sqrt(numeric_limits<Real>::epsilon()));
BOOST_CHECK_SMALL(interpolator.prime(t), 100*sqrt(numeric_limits<Real>::epsilon()));
}
}
template<class Real>
void test_runge()
{
std::cout << "Testing interpolation of Runge's 1/(1+25x^2) function using barycentric interpolation on type " << boost::typeindex::type_id<Real>().pretty_name() << "\n";
std::mt19937 gen(8);
boost::random::uniform_real_distribution<Real> dis(0.005f, 0.01f);
std::vector<Real> x(500);
std::vector<Real> y(500);
x[0] = -2;
y[0] = 1/(1+25*x[0]*x[0]);
for (size_t i = 1; i < x.size(); ++i)
{
x[i] = x[i-1] + dis(gen);
y[i] = 1/(1+25*x[i]*x[i]);
}
boost::math::barycentric_rational<Real> interpolator(x.data(), y.data(), y.size(), 5);
for (size_t i = 0; i < x.size(); ++i)
{
Real t = x[i];
Real z = interpolator(t);
BOOST_CHECK_CLOSE(z, y[i], 0.03);
Real z_prime = interpolator.prime(t);
Real num = -50*t;
Real denom = (1+25*t*t)*(1+25*t*t);
if (abs(num/denom) > 0.00001)
{
BOOST_CHECK_CLOSE_FRACTION(z_prime, num/denom, 0.03);
}
}
Real tol = 0.0001;
for (size_t i = 0; i < x.size(); ++i)
{
Real t = x[i] + dis(gen);
Real z = interpolator(t);
BOOST_CHECK_CLOSE(z, 1/(1+25*t*t), tol);
Real z_prime = interpolator.prime(t);
Real num = -50*t;
Real denom = (1+25*t*t)*(1+25*t*t);
Real runge_prime = num/denom;
if (abs(runge_prime) > 0 && abs(z_prime - runge_prime)/abs(runge_prime) > tol)
{
std::cout << "Error too high for t = " << t << " which is a distance " << t - x[i] << " from node " << i << "/" << x.size() << " associated with data (" << x[i] << ", " << y[i] << ")\n";
BOOST_CHECK_CLOSE_FRACTION(z_prime, runge_prime, tol);
}
}
}
template<class Real>
void test_weights()
{
std::cout << "Testing weights are calculated correctly using barycentric interpolation on type " << boost::typeindex::type_id<Real>().pretty_name() << "\n";
std::mt19937 gen(9);
boost::random::uniform_real_distribution<Real> dis(0.005, 0.01);
std::vector<Real> x(500);
std::vector<Real> y(500);
x[0] = -2;
y[0] = 1/(1+25*x[0]*x[0]);
for (size_t i = 1; i < x.size(); ++i)
{
x[i] = x[i-1] + dis(gen);
y[i] = 1/(1+25*x[i]*x[i]);
}
boost::math::detail::barycentric_rational_imp<Real> interpolator(x.data(), x.data() + x.size(), y.data(), 0);
for (size_t i = 0; i < x.size(); ++i)
{
Real w = interpolator.weight(i);
if (i % 2 == 0)
{
BOOST_CHECK_CLOSE(w, 1, 0.00001);
}
else
{
BOOST_CHECK_CLOSE(w, -1, 0.00001);
}
}
// d = 1:
interpolator = boost::math::detail::barycentric_rational_imp<Real>(x.data(), x.data() + x.size(), y.data(), 1);
for (size_t i = 1; i < x.size() -1; ++i)
{
Real w = interpolator.weight(i);
Real w_expect = 1/(x[i] - x[i - 1]) + 1/(x[i+1] - x[i]);
if (i % 2 == 0)
{
BOOST_CHECK_CLOSE(w, -w_expect, 0.00001);
}
else
{
BOOST_CHECK_CLOSE(w, w_expect, 0.00001);
}
}
}
BOOST_AUTO_TEST_CASE(barycentric_rational)
{
test_weights<double>();
test_constant<float>();
test_constant<double>();
test_constant<long double>();
test_constant<cpp_bin_float_50>();
test_constant_high_order<float>();
test_constant_high_order<double>();
test_constant_high_order<long double>();
test_constant_high_order<cpp_bin_float_50>();
test_interpolation_condition<float>();
test_interpolation_condition<double>();
test_interpolation_condition<long double>();
test_interpolation_condition<cpp_bin_float_50>();
test_interpolation_condition_high_order<float>();
test_interpolation_condition_high_order<double>();
test_interpolation_condition_high_order<long double>();
test_interpolation_condition_high_order<cpp_bin_float_50>();
test_runge<double>();
test_runge<long double>();
test_runge<cpp_bin_float_50>();
#ifdef BOOST_HAS_FLOAT128
test_interpolation_condition<boost::multiprecision::float128>();
test_constant<boost::multiprecision::float128>();
test_constant_high_order<boost::multiprecision::float128>();
test_interpolation_condition_high_order<boost::multiprecision::float128>();
test_runge<boost::multiprecision::float128>();
#endif
}
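// Usage note (a sketch added for illustration, not part of the original test
// suite; it only uses the constructor and member functions exercised above):
//
//   std::vector<double> x{0.0, 1.0, 2.0};          // hypothetical nodes
//   std::vector<double> y{1.0, 3.0, 2.0};          // hypothetical values
//   boost::math::barycentric_rational<double> b(x.data(), y.data(), y.size());
//   double value = b(0.5);        // evaluate the interpolant at 0.5
//   double slope = b.prime(0.5);  // evaluate its derivative at 0.5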
|
Of Course BPA Can Be Found Inside You!
It doesn't take a rocket scientist to understand how toxins from plastics can be found in the umbilical cords of our newborn babies, in our blood and in our tissues. After generations of exposure, the toxins that can be found in EVERY plastic bottle leach into your water, and they always find their way into the body. When the N.I.H. (National Institutes of Health, the primary agency of the United States government responsible for biomedical and public health research) labels our children as "PRE-POLLUTED," then you KNOW we have bigger problems than most people want to acknowledge.
The average temperature of water that is processed through reverse osmosis is 190°. This means that the source water has already reached temperatures that cause the chemicals to leach out from the plastic BEFORE you buy and refrigerate it! In other words, the dangers of consuming fluids from a plastic bottle begin at the bottling plant and there’s nothing you can do to prevent exposure to these chemicals whatsoever.
It takes approximately 5 liters of water to produce (and fill) a 1 liter bottle of water using the reverse osmosis process. If you understand the volatility of our natural resources, this should be of great concern to you!
The reverse osmosis process doesn't discriminate – it is very good at removing the bad things (at least before the water hits the bottle, that is) but it removes all of the natural minerals right along with them. Water without minerals has absolutely NO BENEFIT to the human body whatsoever. This is true for distilled water too!
It doesn’t matter if the plastic is made with BPA or if it’s BPA-free.
There is NO such thing as safe plastic…not for you and your family, and not for the environment. Just say “NO” to all plastics…especially those used for your water consumption!
If you decide to take a positive step in the right direction by making the decision to do away with all those harmful plastics impacting YOUR health (and the health of future generations), reach out to me. There is a better, healthier way and it doesn't involve a toxic water bottle filled with clear fluids. Your body will thank you for it, future generations will thank you for it AND the environment will thank you for it too.
I'm an environmental advocate specializing in water. A mini "Erin Brockovich"…instead of fighting for justice, I have solutions for businesses and families. Families like the Donald Trump family, the Bill Gates family, the Seattle Seahawks families, the NY Yankees families and families like yours and mine. If you'd like more information about the solutions that I have to offer, reach out to me.
Lynn Gardner, [email protected] or call 202 241 1542.
With bottled water out-selling carbonated drinks for the first time, there is no limit to the options being made available to consumers by money-hungry providers, and most of it is NOT what it claims to be! Bottled water dominates more square footage in the grocery store than any other product! More and more "alkaline" waters and more and more "hydrogen infused" options are hitting the shelves, and the $30.00 per gallon price tag doesn't seem to be slowing anybody down! Wise consumers are looking at affordable, long-term options so that they can have an endless supply of alkalized water in their own home, and they are turning to consumer ionization appliances as a result. These wise consumers also understand that by purchasing a long-term solution, they save thousands and thousands of dollars over the long run.
Longevity. Consumers are rapidly catching on to the benefits of drinking cleaner, healthier, more alkaline water, and the ionizer industry is beginning to boom as a result. Unfortunately, there are many start-up and 3rd party ionizer companies chasing the market, promising the moon and selling the truth short online. Look for a company that has been in the market long enough to be established AND long enough to have proven to stand behind their product and their warranties, one that offers reliable and reasonable service inside of the U.S. and one that can speak with experience about the life-cycle expectations of your unit. A quality ionizer SHOULD last for 20+ years with simple, routine maintenance. Look for a company that has the staying power to be there for you long after your investment and one that already has a track record to "prove" how long your investment should last!
Reputable. There are good things and not-so-good things about the Internet. One of the not-so-good things is that it’s easy to “look good on paper”. Dealing with strangers on the Internet over ANY investment but particularly a sizable investment is never a good idea. Many of the 3rd party ionizer companies that you see online have already undergone name changes a few times and some have the worst Better Business Bureau ratings possible; that’s saying a whole lot when so many of them have been in business for just a few years. It’s not a good sign OR a good investment to put your money with a company that has already proven to be a problem for other consumers. Look for a company that has been around the block a few times and one that you can go directly to for any concerns or problems that you have.
USA Focused. Most (if not all) ionizers are manufactured outside of the US. The majority of the units that you see on the market today are manufactured by the same Korean manufacturer under different labels. The company that you may buy from is likely a 3rd party representative and not the manufacturer at all, and in that case, they don't manufacture, service or warranty the units at all. When service is needed, your unit must be shipped back to the original manufacturer in Korea, and at your expense. Look for a USA focused ionizer company that will actually stand behind and service your investment here in the United States. In other words, look for a direct manufacturer and NOT a 3rd party sales entity.
Sight Unseen. If you're going to make the investment in ionization technology for you and your family, it's not a good idea to purchase sight unseen. You should have the chance to see it (and try it) for yourself before you buy. Although one company may give you the opportunity to see and try for yourself, most of them don't. Each ionizer on the market is vastly different from another, and it's not reasonable for you to expect that what you find online is like what you've already seen and tried for yourself! Consumers don't buy vehicles that way…"test driving" one brand and then buying a completely different vehicle without ever seeing or driving it first. That is a risky approach no matter how large or how small the price tag is, and it makes no financial sense whatsoever. Look for a company that will allow you to see (and try) the product for yourself before you buy!
Warranty. There are warranties and then there are warranties, and it's important to select a company that will not only stand behind the warranty, but also one that will be able to honor it within the U.S. Wise consumers recognize that there is no such thing as a lifetime, unlimited warranty without a "catch" of some kind. Late-night infomercials often use this strategy to entice sales, but they generally sell low-cost items, and studies have proven that 98.9% of consumers will not bother to ship back a low-ticket item even with that "money back guarantee". If necessary, ask to see the full warranty before you purchase (versus a warranty certificate) and don't just "buy in" to a company that says they'll stand behind the product "forever" and ever no matter what. Every "lifetime" warranty on the planet has exclusions of some sort, and "lifetime" warranty ionizer companies are no different. If they go out of business, you have no warranty, and if you're required to ship your product back to the manufacturer in Korea (avg shipping $600 each way), it makes no financial sense whatsoever. Look for a company that has a solid (and realistic) warranty along with U.S. service centers prepared to offer timely and efficient servicing whenever it's needed.
Investing in ionization water technology is (by far) the single most significant investment that you can make for your health and well-being and it makes perfect financial sense for the long run…if you buy the right one, that is! When it comes to anything consumable, it pays to do your homework AND it never pays to skimp.
If you’d like more information on ionization technology, reach out to me.
Experts Blame Dehydration for Stroke Damage? It’s About Time!
The headline of an article published by Everyday Health reads – "Dehydration Leads to Greater Stroke Damage". More accurately, it should read "Dehydration Leads to Greater Stroke Risk". Dehydration is your enemy and it's time to give it the blame that it deserves. The human brain is 85% water and the blood is 90% water. When the body is deprived of water, the consequences go well beyond what most people understand. We've replaced the body's NEED for water with a beverage-of-choice and we no longer recognize the signs of dehydration.
Dr. Paul Bendheim, a clinical professor of neurology from the University of Arizona College of Medicine in Tucson, says “there are no hard-and-fast rules for staying well-hydrated, despite recommendations to drink eight glasses of water each day“. That’s a no-brainer! Obviously the amount of water each individual needs depends on a number of factors; activity, weight and status of health just to name a few. Dr. Bendheim goes on to say, “prior research found that the majority of people are dehydrated when they have a stroke, but it’s not clear why”. How in the world can it not be clear why people are dehydrated? It’s interesting to note that most heart attacks and strokes happen in the middle of the night when the body is the most dehydrated. It’s a very good thing to drink water before you retire for the night!
The 8 Glasses of Water a Day Rule is just THE MINIMUM needed to keep your organs functioning…but it sure isn’t enough to keep your blood hydrated and your veins free and clear!
If you’re active, if you’re sick or if you’re overweight, you need a whole lot more than just eight, 8-ounce glasses of water a day to equip the body to battle things like stroke or high blood pressure, diabetes, arthritis, and a multitude of other conditions! Water is positively key to your prevention and recovery…you were wired that way!
Seniors are generally more dehydrated than younger people primarily because we lose our sense of thirst as we age. We may need to be reminded to drink instead of waiting for the brain to trigger thirst. Lots of people set alarms to be reminded to take medications, perhaps it’s time to set up a reliable reminder to prompt you to drink water too. Relying on your sense of thirst won’t do the trick. By the time the average person “feels” thirsty, they are already chronically dehydrated and ready for disaster.
Now that you know that stroke (among LOTS OF OTHER THINGS) can be triggered or worsened by dehydration, don’t you think it’s time to do something about it?
How much research do you need to read before you take action to properly hydrate your body with W A T E R? It all comes down to basic biology. No amount of consuming any other fluids can replace the body’s need for water. In fact, most choice beverages actually cause the body to work harder just to rid itself of toxins introduced by replacing water with another beverage.
If water is so important, don’t you think the kind of water you drink might be important too? Of course it is! It doesn’t make much sense to consume water that has no minerals whatsoever (causes the body to deplete itself of minerals from within which leads to degeneration) and it doesn’t make much sense to drink water that is laced with the chemicals necessary to make human waste potable. After you make the decision to consume the amount of water your body needs; you’ll need to decide what kind of SAFE and HEALTHY water you’ll be introducing to your body. There aren’t a whole lot of options when you look at it that way! Electrolyzed Reduced Water (ERW) is the ONLY way to go for me and my family. After doing our own research; the choice was easy for us. How much research (and testimonials) do you need to see before you embrace Electrolyzed Reduced Water as being the most beneficial water for the human body?
I know that my days are numbered but I don’t buy in to the idea that I’m supposed to rot prior to my death, so I’m giving my body what it needs to avoid things like stroke, cancer, arthritis, diabetes, etc., instead of looking for a pill to fix my problem later. After all, there is no cure for any disease whatsoever. With that in mind, it’s time to FIGHT FOR HEALTH!
Do yourself, your brain (and the rest of your body) and your family a favor. HYDRATE NOW!
Your Bones Are ALIVE And They Need Your Attention!
Most of us take our bones for granted unless we’re faced with a crisis like a broken bone or a fractured hip. Our bones are very much alive and they need our attention if we want them to “stand up” to the job that we ask them to do day-after-day and year-after-year.
More than 250,000 people over the age of 65 break a hip each year in the United States, according to the Centers for Disease Control and Prevention. It's interesting to note that most of the time people don't fall and break a hip…they break a hip and fall. How's that for brittle bones? Demographic reports suggest that 10,000 people a day are turning 65 and that this will continue for the next 17 years! It looks like we have a lot of bone problems on the horizon if we don't take action to protect them!
Contrary to popular belief, men are at risk too. About 1 in 5 men will break a bone due to osteoporosis. In fact, men claim 1 out of every 3 broken hips worldwide. Sadly, both health and quality of life can plummet after a hip fracture. It’s alarming to note that 37% of men die within a year after they break a hip — twice as many as women, found a 2014 study published by the International Osteoporosis Foundation. This speaks volumes to the poor condition of the overall health of those battling brittle bones. It’s time to give your bones (and all of the rest of you) the attention they deserve!
Are You “Bad to the Bone”?
Our bones consist of 25% water. If you aren’t efficiently hydrating your body, your bones (and the rest of you) will pay the ultimate price. This is just one of the reasons that the elderly often have hip and knee problems. We lose our sense of thirst as we age and if we don’t make a conscious effort to drink lots of healthy water (naturally laced with minerals), we will be living in a state of chronic dehydration and the bones will begin to break down…and that’s just the beginning!
Red blood cells don’t last long in the bloodstream so they need to be replaced continuously by new cells, which originate in your bone marrow. This area in the center of the bone is a spongy tissue that churns out billions of new, mature blood cells each day (wow!) just to keep us supplied with the oxygen and energy we need. If we refuse to give the body the amount of (and the quality of) water it needs, this entire system begins to fail. Drinking plenty of healthy water is a very small price to pay to sustain health.
How Are You Neglecting Your Bones?
1. You overlook calcium. Adults need about 1,000 milligrams of calcium a day to keep the bones in good shape. A huge source of calcium is healthy, mineral-laced water. Unfortunately, most people are replacing healthy water with useless bottled water (all minerals have been removed). So in addition to not consuming enough healthy water to stay hydrated, you're robbing minerals from your bones to make up for the minerals that are not in your drinking water. Does that make sense to you? I sure hope not! Change your water…change your life!
Calcium is just part of the equation though. In addition to calcium, daily magnesium, protein, and vitamin D are essential. Vitamin D increases calcium absorption by as much as 50 percent. Since we avoid the sun like the plague these days, it's important for many of us to supplement with a daily dose of Vitamin D.
2. Digestive conditions increase your risk of having weak bones. These include inflammatory bowel disease, irritable bowel syndrome, lactose intolerance, and anorexia nervosa (according to the National Institute of Health Osteoporosis Resource Center ). It is important to note that the leading colon specialist in the world, Dr. Hiromi Shinya, recommends proper hydration with Kangen Water® for colon health. He has examined the colon of thousands of patients before and after a Kangen Water® protocol and the results are astounding. If you suffer with digestive issues, you need to write your own prescription for Kangen Water®.
How Are You Damaging Your Bones?
1. Treating your GERD/acid reflux with meds. Bone loss can result from the use of antacids that contain aluminum, which can deplete your calcium supply; this includes products like Alamag, Maalox, or Mylanta. Even the commonly used drugs that treat heartburn — Prilosec, Nexium, and Prevacid — may lead to bone loss, according to the National Osteoporosis Foundation. Alkalize the body and say "good bye" to these digestive issues instead of opting for meds that may be destroying your bones!
2. Taking steroid medications like Cortisone, Prednisone, or Dexamethasone can cause debilitating bone loss over time, notes the National Osteoporosis Foundation (NOF). You may have taken steroid meds — also called glucocorticoids or corticosteroids — to treat rheumatoid arthritis, asthma or allergies, lupus, Crohn's disease or cancer. The root cause of these conditions is inflammation and chronic dehydration. Drink ionized, alkalized, electrically charged water and you'll likely be able to nix the steroids as your body begins to mend itself. EXPERIENCE IT 100% RISK FREE FOR 30 DAYS AND SEE FOR YOURSELF.
3. Phenobarbital is often used to treat seizures as well as anxiety and insomnia. How many people do you know that “need a little something” to help them sleep or to manage stress? The drug is sometimes habit-forming and often misused. Dilantin, another anti-seizure medication, is also linked to bone loss.
4. Moderating your alcohol intake is important for bone health. Too much alcohol keeps you from getting the nutrients strong bones need. Abusing alcohol leads to bone weakness and a greater risk of breaking bones. If you consume alcohol remember; less is always best.
So you see, your bones are very much ALIVE and badly in need of your attention.
Are you going to give them the attention they deserve or will you hold out for surgery and pain killers? Just askin’.
If you decide that it’s time to take control of your own health, reach out to me! I’ve got a solution that may even outlive you and it’s a game-changer when it comes to your health!
WANT TO SEE WHAT I BELIEVE ABOUT WATER?
I Believe from Kangen Pro Tools on Vimeo.
I Lied About Alkaline Water!
It is said that “the truth will set you free” so here goes…I lied about alkaline water. I have been busy preaching the benefits of alkaline water without understanding that long-term use of alkaline water can be detrimental to your health! It is vitally important to good health that we keep the body in an alkaline state, however the method that we use to accomplish this can “make or break” us. An alkaline diet is a great start but it’s just not enough to maintain the balance needed. Water is a crucial component.
There is a HUGE difference between alkaline water and alkalized water. How can you tell the difference? That's easy! Alkaline water is enhanced with minerals, and it can be tested with a standard pool-type pH strip because pH strips react to a chemical change. Alkalized water will not change the color of a pH strip because there are NO added minerals or chemicals used in the process. Alkalized water is produced by the introduction of electricity, and therefore it must be tested using pH drops.
Alkalized water is water that has been modified with the use of very high voltage electricity. If the electrical influence is strong enough, it will create what is referred to as "dissociation of water".
Dr. Michael Donaldson (below) has become an expert on the impact of alkalized water on the human body and he explains the differences between alkaline and alkalized water much better than I can. I encourage you to preview this brief explanation. This is key to your health!
The bad news is that I offered an “uneducated” version of the need for alkaline water. The good news is that the source that I’ve been recommending all along produces ALKALIZED water NOT alkaline water. Just when you thought it couldn’t get any better….
CLICK HERE FOR ALKALIZED WATER!
If you’ve ever wondered how food suppliers could possibly get by with deceiving the public about what we’re really eating just look in the mirror! We are their secret weapon.
Since water is the #1 requirement for health (and life) it’s important to understand what you are (and are not) drinking too! Don’t let your willful ignorance get in the way of making wise decisions about the water you and your family drink!
The Water Quality Association has awarded Kangen Water® the Gold Seal for water quality (aka healthy water) year after year.
With over 30 million people across the globe already drinking Kangen Water® isn’t it time you got on board too?
CLICK HERE for water endorsed by the WQA!
Too Cheap to Protect Your Health? You’ll Be Sorry!
Some things in life are clearly superior to others of like kind. Here is one of my favorites.
Stradivarius violins…there are none in the world like the violins mastered by the Stradivari family in the late 1600’s. These instruments are famous for the quality of sound they produce. Many attribute the difference to the wood used, the region the wood came from, the environmental conditions during the growth period of the wood and the minerals used to “finish” the wood. An original Stradivarius violin is not like any other in the world and it is deserving of the praise it receives.
I remember taking a tour of a temperature controlled private facility that houses only the finest things in the world. This facility has armed guards 24/7 and every security measure known to mankind to protect the magnificent items stored there. In one room was an original Stradivarius violin. It was the only thing in this room and it had its own guard. Carefully stored in a temperature and humidity controlled glass case this violin was gently removed from its case twice a day and played by a master violinist just to be sure that it was being cared for in the way it was intended to be and that it is maintained to “perfection” at all times. It was a magnificent sight to see. It’s worth? Millions.
What's my point? My point is that your health is worth far more than the finest Stradivarius, and yet we skimp, we cut corners, we ignore the obvious and then freak out when the body "cries wolf" and quits working. Frankly, some people are just too cheap to protect their own health. They prefer to spend their money on indulgence instead of maintenance. Rest assured, that precious Stradivarius would not be so special if it were not provided the finest "care" at all costs 24 hours a day. Your body is no different. To maintain "perfection" you can't continue to ignore what it needs.
We are what we breathe, what we eat and what we drink. Quality is worth every penny you pay for it and THEN some. Do whatever you have to do and sacrifice whatever you have to sacrifice to buy non-GMO and organic foods. Take the highest quality supplements you can get your hands on AND last, but certainly not least, hydrate your body with the absolute healthiest water you can find. None of these things are cheap but you definitely get what you pay for when it comes to quality products for your health. I think you’re worth it! Do you?
CLICK HERE for the healthiest water you can get your hands on!
|
REBOL [
Title: "Korrigiert Text-list, Demo"
Date: 27-Jan-2001/23:39:17+1:00
Name: "text-list'"
Version: 0.9
File: %textlist-patch1.r
Home: http://jove.prohosting.com/~screbol/
Author: "volker"
Owner: "volker"
Rights: "gpl"
Needs: [view 0.10.38]
Tabs: none
Usage: none
Purpose: none
Comment: none
History: [27-Jan-2000 ""]
Language: "german"
]
patched: context [
l: last-shown-lines: styles: text-list': update-slider:
none
[
]
[
%tlp2.r
]
styles: stylize [
text-list': text-list with [
"add size-change scrolling"
last-shown-lines: -1
update-slider: does [
either 0 = length? data [sld/redrag 1] [
sld/redrag lc / length? data]
]
append init [
sub-area/feel/redraw: does [
l: length? data
if l <> last-shown-lines [
last-shown-lines: l
update-slider
]
]
]
"leere liste erlauben."
words/data:
func [new args] [
if not empty? second args [
new/text: first new/texts: second args
]
next args
]
]
]
]
|
[GOAL]
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
⊢ fst f g ≫ f = snd f g ≫ g
[PROOFSTEP]
ext ⟨_, h⟩
[GOAL]
case w.mk
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
val✝ : ↑X.toTop × ↑Y.toTop
h : val✝ ∈ {xy | ↑f xy.fst = ↑g xy.snd}
⊢ ↑(fst f g ≫ f) { val := val✝, property := h } = ↑(snd f g ≫ g) { val := val✝, property := h }
[PROOFSTEP]
exact h
[GOAL]
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
Z : CompHaus
a : Z ⟶ X
b : Z ⟶ Y
w : a ≫ f = b ≫ g
z : ↑Z.toTop
⊢ (↑a z, ↑b z) ∈ {xy | ↑f xy.fst = ↑g xy.snd}
[PROOFSTEP]
apply_fun (fun q => q z) at w
[GOAL]
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
Z : CompHaus
a : Z ⟶ X
b : Z ⟶ Y
z : ↑Z.toTop
w : ↑(a ≫ f) z = ↑(b ≫ g) z
⊢ (↑a z, ↑b z) ∈ {xy | ↑f xy.fst = ↑g xy.snd}
[PROOFSTEP]
exact w
[GOAL]
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
Z : CompHaus
a : Z ⟶ X
b : Z ⟶ Y
w : a ≫ f = b ≫ g
⊢ Continuous fun z => { val := (↑a z, ↑b z), property := (_ : ↑(a ≫ f) z = ↑(b ≫ g) z) }
[PROOFSTEP]
apply Continuous.subtype_mk
[GOAL]
case h
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
Z : CompHaus
a : Z ⟶ X
b : Z ⟶ Y
w : a ≫ f = b ≫ g
⊢ Continuous fun x => (↑a x, ↑b x)
[PROOFSTEP]
rw [continuous_prod_mk]
[GOAL]
case h
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
Z : CompHaus
a : Z ⟶ X
b : Z ⟶ Y
w : a ≫ f = b ≫ g
⊢ (Continuous fun x => ↑a x) ∧ Continuous fun x => ↑b x
[PROOFSTEP]
exact ⟨a.continuous, b.continuous⟩
[GOAL]
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
Z : CompHaus
a b : Z ⟶ pullback f g
hfst : a ≫ fst f g = b ≫ fst f g
hsnd : a ≫ snd f g = b ≫ snd f g
⊢ a = b
[PROOFSTEP]
ext z
[GOAL]
case w
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
Z : CompHaus
a b : Z ⟶ pullback f g
hfst : a ≫ fst f g = b ≫ fst f g
hsnd : a ≫ snd f g = b ≫ snd f g
z : (forget CompHaus).obj Z
⊢ ↑a z = ↑b z
[PROOFSTEP]
apply_fun (fun q => q z) at hfst hsnd
[GOAL]
case w
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
Z : CompHaus
a b : Z ⟶ pullback f g
z : (forget CompHaus).obj Z
hfst : ↑(a ≫ fst f g) z = ↑(b ≫ fst f g) z
hsnd : ↑(a ≫ snd f g) z = ↑(b ≫ snd f g) z
⊢ ↑a z = ↑b z
[PROOFSTEP]
apply Subtype.ext
[GOAL]
case w.a
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
Z : CompHaus
a b : Z ⟶ pullback f g
z : (forget CompHaus).obj Z
hfst : ↑(a ≫ fst f g) z = ↑(b ≫ fst f g) z
hsnd : ↑(a ≫ snd f g) z = ↑(b ≫ snd f g) z
⊢ ↑(↑a z) = ↑(↑b z)
[PROOFSTEP]
apply Prod.ext
[GOAL]
case w.a.h₁
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
Z : CompHaus
a b : Z ⟶ pullback f g
z : (forget CompHaus).obj Z
hfst : ↑(a ≫ fst f g) z = ↑(b ≫ fst f g) z
hsnd : ↑(a ≫ snd f g) z = ↑(b ≫ snd f g) z
⊢ (↑(↑a z)).fst = (↑(↑b z)).fst
[PROOFSTEP]
exact hfst
[GOAL]
case w.a.h₂
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
Z : CompHaus
a b : Z ⟶ pullback f g
z : (forget CompHaus).obj Z
hfst : ↑(a ≫ fst f g) z = ↑(b ≫ fst f g) z
hsnd : ↑(a ≫ snd f g) z = ↑(b ≫ snd f g) z
⊢ (↑(↑a z)).snd = (↑(↑b z)).snd
[PROOFSTEP]
exact hsnd
[GOAL]
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
⊢ pullback.fst f g = (pullbackIsoPullback f g).hom ≫ Limits.pullback.fst
[PROOFSTEP]
dsimp [pullbackIsoPullback]
[GOAL]
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
⊢ pullback.fst f g =
(Limits.IsLimit.conePointUniqueUpToIso (pullback.isLimit f g) (Limits.limit.isLimit (Limits.cospan f g))).hom ≫
Limits.pullback.fst
[PROOFSTEP]
simp only [Limits.limit.conePointUniqueUpToIso_hom_comp, pullback.cone_pt, pullback.cone_π]
[GOAL]
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
⊢ pullback.snd f g = (pullbackIsoPullback f g).hom ≫ Limits.pullback.snd
[PROOFSTEP]
dsimp [pullbackIsoPullback]
[GOAL]
X Y B : CompHaus
f : X ⟶ B
g : Y ⟶ B
⊢ pullback.snd f g =
(Limits.IsLimit.conePointUniqueUpToIso (pullback.isLimit f g) (Limits.limit.isLimit (Limits.cospan f g))).hom ≫
Limits.pullback.snd
[PROOFSTEP]
simp only [Limits.limit.conePointUniqueUpToIso_hom_comp, pullback.cone_pt, pullback.cone_π]
[GOAL]
α : Type
inst✝ : Fintype α
X : α → CompHaus
B : CompHaus
e : (a : α) → X a ⟶ B
⊢ Continuous fun x =>
match x with
| { fst := a, snd := x } => ↑(e a) x
[PROOFSTEP]
apply continuous_sigma
[GOAL]
case hf
α : Type
inst✝ : Fintype α
X : α → CompHaus
B : CompHaus
e : (a : α) → X a ⟶ B
⊢ ∀ (i : α),
Continuous fun a =>
match { fst := i, snd := a } with
| { fst := a, snd := x } => ↑(e a) x
[PROOFSTEP]
intro a
[GOAL]
case hf
α : Type
inst✝ : Fintype α
X : α → CompHaus
B : CompHaus
e : (a : α) → X a ⟶ B
a : α
⊢ Continuous fun a_1 =>
match { fst := a, snd := a_1 } with
| { fst := a, snd := x } => ↑(e a) x
[PROOFSTEP]
exact (e a).continuous
[GOAL]
α : Type
inst✝ : Fintype α
X : α → CompHaus
B : CompHaus
f g : finiteCoproduct X ⟶ B
h : ∀ (a : α), ι X a ≫ f = ι X a ≫ g
⊢ f = g
[PROOFSTEP]
ext ⟨a, x⟩
[GOAL]
case w.mk
α : Type
inst✝ : Fintype α
X : α → CompHaus
B : CompHaus
f g : finiteCoproduct X ⟶ B
h : ∀ (a : α), ι X a ≫ f = ι X a ≫ g
a : α
x : ↑(X a).toTop
⊢ ↑f { fst := a, snd := x } = ↑g { fst := a, snd := x }
[PROOFSTEP]
specialize h a
[GOAL]
case w.mk
α : Type
inst✝ : Fintype α
X : α → CompHaus
B : CompHaus
f g : finiteCoproduct X ⟶ B
a : α
x : ↑(X a).toTop
h : ι X a ≫ f = ι X a ≫ g
⊢ ↑f { fst := a, snd := x } = ↑g { fst := a, snd := x }
[PROOFSTEP]
apply_fun (fun q => q x) at h
[GOAL]
case w.mk
α : Type
inst✝ : Fintype α
X : α → CompHaus
B : CompHaus
f g : finiteCoproduct X ⟶ B
a : α
x : ↑(X a).toTop
h : ↑(ι X a ≫ f) x = ↑(ι X a ≫ g) x
⊢ ↑f { fst := a, snd := x } = ↑g { fst := a, snd := x }
[PROOFSTEP]
exact h
[GOAL]
α : Type
inst✝ : Fintype α
X : α → CompHaus
s : Limits.Cocone (Discrete.functor X)
m : (cocone X).pt ⟶ s.pt
hm : ∀ (j : Discrete α), NatTrans.app (cocone X).ι j ≫ m = NatTrans.app s.ι j
a : α
⊢ ι (fun a => X a) a ≫ m = ι (fun a => X a) a ≫ (fun s => desc (fun a => X a) fun a => NatTrans.app s.ι { as := a }) s
[PROOFSTEP]
specialize hm ⟨a⟩
[GOAL]
α : Type
inst✝ : Fintype α
X : α → CompHaus
s : Limits.Cocone (Discrete.functor X)
m : (cocone X).pt ⟶ s.pt
a : α
hm : NatTrans.app (cocone X).ι { as := a } ≫ m = NatTrans.app s.ι { as := a }
⊢ ι (fun a => X a) a ≫ m = ι (fun a => X a) a ≫ (fun s => desc (fun a => X a) fun a => NatTrans.app s.ι { as := a }) s
[PROOFSTEP]
ext t
[GOAL]
case w
α : Type
inst✝ : Fintype α
X : α → CompHaus
s : Limits.Cocone (Discrete.functor X)
m : (cocone X).pt ⟶ s.pt
a : α
hm : NatTrans.app (cocone X).ι { as := a } ≫ m = NatTrans.app s.ι { as := a }
t : (forget CompHaus).obj (X a)
⊢ ↑(ι (fun a => X a) a ≫ m) t =
↑(ι (fun a => X a) a ≫ (fun s => desc (fun a => X a) fun a => NatTrans.app s.ι { as := a }) s) t
[PROOFSTEP]
apply_fun (fun q => q t) at hm
[GOAL]
case w
α : Type
inst✝ : Fintype α
X : α → CompHaus
s : Limits.Cocone (Discrete.functor X)
m : (cocone X).pt ⟶ s.pt
a : α
t : (forget CompHaus).obj (X a)
hm : ↑(NatTrans.app (cocone X).ι { as := a } ≫ m) t = ↑(NatTrans.app s.ι { as := a }) t
⊢ ↑(ι (fun a => X a) a ≫ m) t =
↑(ι (fun a => X a) a ≫ (fun s => desc (fun a => X a) fun a => NatTrans.app s.ι { as := a }) s) t
[PROOFSTEP]
exact hm
[GOAL]
α : Type
inst✝ : Fintype α
X : α → CompHaus
a : α
⊢ Limits.Sigma.ι X a ≫ (coproductIsoCoproduct X).inv = finiteCoproduct.ι X a
[PROOFSTEP]
dsimp [coproductIsoCoproduct]
[GOAL]
α : Type
inst✝ : Fintype α
X : α → CompHaus
a : α
⊢ Limits.Sigma.ι X a ≫
(Limits.IsColimit.coconePointUniqueUpToIso (finiteCoproduct.isColimit X)
(Limits.colimit.isColimit (Discrete.functor X))).inv =
finiteCoproduct.ι X a
[PROOFSTEP]
simp only [Limits.colimit.comp_coconePointUniqueUpToIso_inv, finiteCoproduct.cocone_pt, finiteCoproduct.cocone_ι,
Discrete.natTrans_app]
[GOAL]
α : Type
inst✝ : Fintype α
X : α → CompHaus
a : α
⊢ Function.Injective ↑(ι X a)
[PROOFSTEP]
intro x y hxy
[GOAL]
α : Type
inst✝ : Fintype α
X : α → CompHaus
a : α
x y : (forget CompHaus).obj (X a)
hxy : ↑(ι X a) x = ↑(ι X a) y
⊢ x = y
[PROOFSTEP]
exact eq_of_heq (Sigma.ext_iff.mp hxy).2
[GOAL]
α : Type
inst✝ : Fintype α
X : α → CompHaus
B : CompHaus
π : (a : α) → X a ⟶ B
a : α
⊢ ∀ (x : (forget CompHaus).obj (X a)), ↑(desc X π) (↑(ι X a) x) = ↑(π a) x
[PROOFSTEP]
intro x
[GOAL]
α : Type
inst✝ : Fintype α
X : α → CompHaus
B : CompHaus
π : (a : α) → X a ⟶ B
a : α
x : (forget CompHaus).obj (X a)
⊢ ↑(desc X π) (↑(ι X a) x) = ↑(π a) x
[PROOFSTEP]
change (ι X a ≫ desc X π) _ = _
[GOAL]
α : Type
inst✝ : Fintype α
X : α → CompHaus
B : CompHaus
π : (a : α) → X a ⟶ B
a : α
x : (forget CompHaus).obj (X a)
⊢ ↑(ι X a ≫ desc X π) x = ↑(π a) x
[PROOFSTEP]
simp only [ι_desc]
-- `elementwise` should work here, but doesn't
|
[STATEMENT]
lemma natceiling_lessD: "nat(ceiling x) < n \<Longrightarrow> x < real n"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. nat \<lceil>x\<rceil> < n \<Longrightarrow> x < real n
[PROOF STEP]
by linarith
|
State Before: n : ℕ
⊢ factors (n + 2) = minFac (n + 2) :: factors ((n + 2) / minFac (n + 2))
State After: no goals
Tactic: rw [factors]
|
import .love02_backward_proofs_demo
/- # LoVe Exercise 2: Backward Proofs -/
set_option pp.beta true
set_option pp.generalized_field_notation false
namespace LoVe
namespace backward_proofs
/- ## Question 1: Connectives and Quantifiers
1.1. Carry out the following proofs using basic tactics.
Hint: Some strategies for carrying out such proofs are described at the end of
Section 2.3 in the Hitchhiker's Guide. -/
lemma I (a : Prop) :
a → a :=
begin
intro ha,
exact ha
end
lemma K (a b : Prop) :
a → b → b :=
begin
intros ha hb,
exact hb
end
lemma C (a b c : Prop) :
(a → b → c) → b → a → c :=
begin
intros hg hb ha,
apply hg,
exact ha,
exact hb
end
lemma proj_1st (a : Prop) :
a → a → a :=
begin
intros ha ha',
exact ha
end
/- Please give a different answer than for `proj_1st`: -/
lemma proj_2nd (a : Prop) :
a → a → a :=
begin
intros ha ha',
exact ha'
end
lemma some_nonsense (a b c : Prop) :
(a → b → c) → a → (a → c) → b → c :=
begin
intros hg ha hf hb,
apply hg,
exact ha,
exact hb
end
/- 1.2. Prove the contraposition rule using basic tactics. -/
lemma contrapositive (a b : Prop) :
(a → b) → ¬ b → ¬ a :=
begin
intros hab hnb ha,
apply hnb,
apply hab,
apply ha
end
/- 1.3. Prove the distributivity of `∀` over `∧` using basic tactics.
Hint: This exercise is tricky, especially the right-to-left direction. Some
forward reasoning, like in the proof of `and_swap₂` in the lecture, might be
necessary. -/
lemma forall_and {α : Type} (p q : α → Prop) :
(∀x, p x ∧ q x) ↔ (∀x, p x) ∧ (∀x, q x) :=
begin
apply iff.intro,
{ intro h,
apply and.intro,
{ intro x,
apply and.elim_left,
apply h },
{ intro x,
apply and.elim_right,
apply h } },
{ intros h x,
apply and.intro,
{ apply and.elim_left h },
{ apply and.elim_right h } }
end
/- ## Question 2: Natural Numbers
2.1. Prove the following recursive equations on the first argument of the
`mul` operator defined in lecture 1. -/
#check mul
lemma mul_zero (n : ℕ) :
mul 0 n = 0 :=
begin
induction' n,
{ refl },
{ simp [add, mul, ih] }
end
lemma mul_succ (m n : ℕ) :
mul (nat.succ m) n = add (mul m n) n :=
begin
induction' n,
{ refl },
{ simp [add, add_succ, add_assoc, mul, ih] }
end
/- 2.2. Prove commutativity and associativity of multiplication using the
`induction'` tactic. Choose the induction variable carefully. -/
lemma mul_comm (m n : ℕ) :
mul m n = mul n m :=
begin
induction' m,
{ simp [mul, mul_zero] },
{ simp [mul, mul_succ, ih],
cc }
end
lemma mul_assoc (l m n : ℕ) :
mul (mul l m) n = mul l (mul m n) :=
begin
induction' n,
{ refl },
{ simp [mul, mul_add, ih] }
end
/- 2.3. Prove the symmetric variant of `mul_add` using `rw`. To apply
commutativity at a specific position, instantiate the rule by passing some
arguments (e.g., `mul_comm _ l`). -/
lemma add_mul (l m n : ℕ) :
mul (add l m) n = add (mul n l) (mul n m) :=
begin
rw mul_comm _ n,
rw mul_add
end
/- ## Question 3 (**optional**): Intuitionistic Logic
Intuitionistic logic is extended to classical logic by assuming a classical
axiom. There are several possibilities for the choice of axiom. In this
question, we are concerned with the logical equivalence of three different
axioms: -/
def excluded_middle :=
∀a : Prop, a ∨ ¬ a
def peirce :=
∀a b : Prop, ((a → b) → a) → a
def double_negation :=
∀a : Prop, (¬¬ a) → a
/- For the proofs below, please avoid using lemmas from Lean's `classical`
namespace, because this would defeat the purpose of the exercise.
3.1 (**optional**). Prove the following implication using tactics.
Hint: You will need `or.elim` and `false.elim`. You can use
`rw excluded_middle` to unfold the definition of `excluded_middle`,
and similarly for `peirce`. -/
lemma peirce_of_em :
excluded_middle → peirce :=
begin
rw excluded_middle,
rw peirce,
intro hem,
intros a b haba,
apply or.elim (hem a),
{ intro,
assumption },
{ intro hna,
apply haba,
intro ha,
apply false.elim,
apply hna,
assumption }
end
/- 3.2 (**optional**). Prove the following implication using tactics. -/
lemma dn_of_peirce :
peirce → double_negation :=
begin
rw peirce,
rw double_negation,
intros hpeirce a hnna,
apply hpeirce a false,
intro hna,
apply false.elim,
apply hnna,
exact hna
end
/- We leave the missing implication for the homework: -/
namespace sorry_lemmas
lemma em_of_dn :
double_negation → excluded_middle :=
sorry
end sorry_lemmas
end backward_proofs
end LoVe
|
import data.fintype
import lib.sets
import lib.lists
#print set
example {α} {s : set α} (x : subtype s) : subtype s := x
/-
instance finite_all_unit : finite (λ (_ : unit), true)
:= finite.mk [()] (λ x _, by {cases x, simp})
def finite_sub [fin : finite (all α)] (s : set α) [decidable_pred s] [decidable_eq α] : finite s :=
let ⟨xs, h₀⟩ := fin in
⟨list.filter s xs, λ x h₁, begin
have h₂ : x ∈ xs := h₀ x true.intro,
clear h₀,
induction xs with y ys ih,
{simp at *, assumption},
{by_cases eq_hd: y = x,
{rw [eq_hd, filter_hd],
unfold has_mem.mem, delta list.mem,
simp [h₁], assumption },
{have xys : x ∈ ys := by {
simp [‹¬y = x›] at h₂,
cases h₂; simp * at *},
by_cases s y; simp *}}
end⟩
class has_ub (s : set ℕ) := (ub : ℕ) (h : ∀ n, s n → n ≤ ub)
instance finite_ub_nat {s : set ℕ} [decidable_pred s] [ub : has_ub s] : finite s :=
finite.mk (list.filter s (iota (nat.succ ub.ub))) (by {
intros x sx,
have h := ub.h,
specialize h x sx,
revert h,
generalize : has_ub.ub s = u,
intro,
})
def set_sum {α : Type u} [has_add α] [has_zero α] (s : set α) [f : finite s] : α :=
finite.cases_on f (λ l _, list.foldl (λ a b, a + b) 0 l)
-/
|
(** Anthony Bordg, June 2017 ********************************************
Contents:
- Bilinear morphisms between modules over a ring ([bilinearfun])
- Algebras over a commutative ring ([algebra]) and associative, commutative, unital algebras ([assoc_comm_unital_algebra]), see Serge Lang,
Algebra, III.1, p.121 in the revised third edition.
- Morphisms between (non-associative) algebras over a commutative ring ([algebrafun])
- The opposite algebra ([algebra_opp])
- Subalgebras of an algebra ([subalgebra])
***********************************************)
Require Import UniMath.Algebra.Rigs_and_Rings.
Require Import UniMath.Algebra.Modules.
Require Import Types_and_groups_with_operators.
(** * Bilinear morphisms between modules over a ring *)
Definition isbilinear {R : rng} {M N P : module R} (f : M -> N -> P) : UU :=
(∏ x : M, ismodulefun (λ y : N, f x y) ) × (∏ y : N, ismodulefun (λ x : M, f x y)).
Definition bilinearfun {R : rng} (M N P : module R) : UU := ∑ f : M -> N -> P, isbilinear f.
Definition pr1bilinearfun {R : rng} {M N P : module R} (f : bilinearfun M N P) : M -> N -> P := pr1 f.
Coercion pr1bilinearfun : bilinearfun >-> Funclass.
(** * Algebras over a commutative ring *)
Section algebras.
Variable R : commrng.
(** Non-associative algebras over a commutative ring *)
Definition algebra : UU := ∑ M : module R, bilinearfun M M M.
Definition pr1algebra (A : algebra) : module R := pr1 A.
Coercion pr1algebra : algebra >-> module.
Definition algebra_pair (M : module R) (f : bilinearfun M M M) : algebra := tpair (λ X : module R, bilinearfun X X X) M f.
Definition mult_algebra (A : algebra) : binop A := pr1 (pr2 A).
Definition isbilinear_mult_algebra (A : algebra) : isbilinear (mult_algebra A) := pr2 (pr2 A).
Notation "x * y" := (mult_algebra _ x y) : algebras_scope.
Delimit Scope algebras_scope with algebras.
(** Commutative algebras over a commutative ring *)
Definition iscomm_algebra (A : algebra) : UU := iscomm (mult_algebra A).
Definition commalgebra : UU := ∑ A : algebra, iscomm_algebra A.
Definition commalgebra_pair (A : algebra) (is : iscomm_algebra A) : commalgebra := tpair _ A is.
Definition commalgebra_to_algebra (A : commalgebra) : algebra := pr1 A.
Coercion commalgebra_to_algebra : commalgebra >-> algebra.
(** Associative algebras over a commutative ring *)
Definition isassoc_algebra (A : algebra) : UU := isassoc (mult_algebra A).
Definition assocalgebra : UU := ∑ A : algebra, isassoc_algebra A.
Definition assocalgebra_pair (A : algebra) (is : isassoc_algebra A) : assocalgebra := tpair _ A is.
Definition assocalgebra_to_algebra (A : assocalgebra) : algebra := pr1 A.
Coercion assocalgebra_to_algebra : assocalgebra >-> algebra.
(** Unital algebras over a commutative ring *)
Definition isunital_algebra (A : algebra) : UU := isunital (mult_algebra A).
Definition unitalalgebra : UU := ∑ A : algebra, isunital_algebra A.
Definition unitalalgebra_pair (A : algebra) (is : isunital_algebra A) : unitalalgebra := tpair _ A is.
Definition unitalalgebra_to_algebra (A : unitalalgebra) : algebra := pr1 A.
Coercion unitalalgebra_to_algebra : unitalalgebra >-> algebra.
(** Unital associative algebras over a commutative ring *)
Definition unital_assoc_algebra : UU := ∑ A : algebra, (isassoc_algebra A) × (isunital_algebra A).
Definition unital_assoc_algebra_to_algebra (A : unital_assoc_algebra) : algebra := pr1 A.
Coercion unital_assoc_algebra_to_algebra : unital_assoc_algebra >-> algebra.
(** Associative, commutative, unital algebras over a ring *)
Definition assoc_comm_unital_algebra : UU := ∑ A : unital_assoc_algebra, iscomm_algebra A.
(** Morphisms between (non-associative) algebras over a commutative ring *)
Local Open Scope algebras.
Definition algebrafun (A B : algebra) : UU := ∑ f : modulefun A B, ∏ x y : A, f (x * y) = f x * f y.
(** * The opposite algebra *)
Definition mult_opp (A : algebra) : A -> A -> A := λ x y : A, y * x.
Definition isbilinear_mult_opp (A : algebra) : isbilinear (mult_opp A).
Proof.
apply dirprodpair.
- intro a. apply dirprodpair.
+ intros x x'. apply (pr1 (pr2 (isbilinear_mult_algebra A) a) x x').
+ intros r b. apply (pr2 (pr2 (isbilinear_mult_algebra A) a) r b).
- intro a. apply dirprodpair.
+ intros x x'. apply (pr1 (pr1 (isbilinear_mult_algebra A) a) x x').
+ intros r b. apply (pr2 (pr1 (isbilinear_mult_algebra A) a) r b).
Defined.
Definition bilinear_mult_opp (A : algebra) : bilinearfun A A A := tpair _ (mult_opp A) (isbilinear_mult_opp A).
Definition algebra_opp (A : algebra) : algebra := tpair (λ X : module R, bilinearfun X X X) A (bilinear_mult_opp A).
(** * Subalgebras of an algebra *)
Definition subalgebra (A : algebra) : UU := ∑ B : submodule (pr1 A), isstable_by_action (mult_algebra A) (pr1 B).
Definition subalgebra_to_module {A : algebra} (B : subalgebra A) : module R := submodule_to_module (pr1 B).
Definition subalgebra_to_mult {A : algebra} (B : subalgebra A) : binop (subalgebra_to_module B).
Proof.
intros x y.
split with (mult_algebra A (pr1 x) (pr1 y)).
exact (pr2 B (pr1 x) (pr1 y) (pr2 y)).
Defined.
Definition isbilinear_subalgebra_to_mult {A : algebra } (B : subalgebra A) : isbilinear (subalgebra_to_mult B).
Proof.
apply dirprodpair.
- intro x. unfold ismodulefun.
apply dirprodpair.
+ unfold isbinopfun. intros x0 x'.
use total2_paths2_f.
apply (dirprod_pr1 (pr2 (pr2 A))).
apply propproperty.
+ intros r y.
use total2_paths2_f.
apply (dirprod_pr1 (pr2 (pr2 A))).
apply propproperty.
- intro y. apply dirprodpair.
+ intros x x'.
use total2_paths2_f.
apply (dirprod_pr2 (pr2 (pr2 A))).
apply propproperty.
+ intros r x.
use total2_paths2_f.
apply (dirprod_pr2 (pr2 (pr2 A))).
apply propproperty.
Defined.
Definition subalgebra_to_bilinearfun {A : algebra} (B : subalgebra A) :
bilinearfun (subalgebra_to_module B) (subalgebra_to_module B) (subalgebra_to_module B) :=
tpair _ (subalgebra_to_mult B) (isbilinear_subalgebra_to_mult B).
Definition subalgebra_to_algebra {A : algebra} (B : subalgebra A) : algebra :=
algebra_pair (subalgebra_to_module B) (subalgebra_to_bilinearfun B).
End algebras.
|
section pred_logic
variables X Y Z : Prop
/- *** FORALL and ARROW *** -/
-- → and ∀
def arrow_all_equiv := (∀ (x : X), Y) ↔ (X → Y)
/-
To prove either (∀ (x : X), Y) or (X → Y), you first assume
that you're given an arbitrary but specific proof of X, and
in that context, you show that you can derive a proof (thus
deducing the truth) of Y. It's exactly the same reasoning in
each case. This is the *introduction* rule for ∀ and →.
-/
/-
In fact, in constructive logic, X → Y is simply a notation
*defined* as ∀ (x : X), Y. What each of these propositions
states in constructive logic is that "From *any* proof, x,
of X, we can derive a proof of Y." In fact, in Lean, these
propositions are not only equivalent but equal.
-/
#check X → Y -- Lean confirms this is a proposition
#check ∀ (x : X), Y -- Lean understands this to say X → Y!
/- OPTIONAL
As an aside, here's a proof that these propositions are
actually equal. This proof uses an inference rule, rfl, for
equality that we've not yet studied. Don't worry about the
"rfl" for now, but trust that we're giving a correct proof
of the equality of these two propositions in Lean
-/
theorem all_imp_equal : (∀ (x : X), Y) = (X → Y) := rfl
/-
The reason it's super-helpful to know these propositions
are equivalent is that it tells you that you can *use* a
proof of a ∀ proposition or of a → proposition in exactly
the same way. So let's turn to the *elimination* rules for
→ and ∀.
-/
def arrow_elim := (X → Y) → X → Y
def all_elim := (∀ (x : X), Y) → X → Y
/-
The idea underlying these rules date to ancient times.
They both say "if from the truth or a proof of X you
can derive a proof or the truth of Y, and if you also
have a proof, or know the truth, of X, then you can (in
constructive logic) derive a proof of Y (or deduce the
truth of Y."
Here's an example. What we want to say in logic is
that if every ball is blue and b is some specific
ball then b is blue. The elimination rule for ∀ and
→ applies a generalization to a specific instance to
deduce that the generalized statement specialized to
a particular instance is true.
Note: In this example, Y is a proposition obtained by
plugging "x" into a one-argument predicate. So suppose
(∀ (x : X), Y) is read as "for any Ball x, x is blue."
Here X is "Ball;" x is an arbitrary but specific Ball;
and Y is read as "x is blue."
Now suppose that, in this context, you're given a
*particular* ball, (b : X). What the overall rules
says is that you now conclude that "b is blue."
The elimination rule works by *applying* a proof of
a universal generalization (showing that something
is true of *every* object of a particular kind) to
a *specific* object of that kind, to deduce that the
generalized statement is also true of that specific
object.
If every ball is blue, and if b is a ball, then b
must be blue. Another way to say it that makes a
bit more sense for the (X → Y) notation is that
"if being any ball, x, implies that x is blue, and
if b is some particular ball, then b is blue.
-/
/-
As an example, consider a predicate, (isBlue _), where you can fill
in the blank/argument with any Ball-type object. If b is a specific
Ball-type object, then (isBlue b) is a proposition, representing the
English-language claim that b is blue. Here's how we represent this
predicate in Lean.
-/
variable Ball : Type -- Ball is a type of object
variable isBlue : Ball → Prop
/-
First we declare Ball to be the name of a type of object (like int or
bool). Then we define isBlue to be a construct (think function!)
that when given any object of type Ball as an argument yields a
proposition. To see how this works, suppose we have some specific
balls, b1 and b2.
-/
variables (b1 b2 : Ball)
/-
Now let's use isBlue to make some propositions!
-/
#check isBlue -- a predicate
#check isBlue b1 -- a proposition about b1
#check isBlue b2 -- a proposition about b2
#check (∀ (x : Ball), isBlue x) -- generalization
variable all_balls_blue : (∀ (x : Ball), isBlue x) -- proof of it
#check all_balls_blue b1 -- proof b1 is blue
#check all_balls_blue b2 -- proof b2 is blue
/-
Here's an English-language version.
Suppose b1 and b2 are objects of some type, Ball, and that isBlue
is one-place predicate taking any Ball, b, as an argument, and that
reduces to a proposition, denoted (isBlue b), that we understand as
asserting that the particular ball, b, is blue. Next (295), we take
all_balls_blue as a proof that all balls are blue. Finally (296 and
297), we see that we can can use this proof/truth by *applying* it
to any particular ball, b, to obtain a proof/truth that b is blue.
For any type S, given any X : ∀ (s : S), T and any s : S, the ∀
and → elimination rule(s) say that you can derive a value/proof of
type T; moreover, this operation is basically done by *applying* X,
viewed as a function from parameter value to proposition, to the
actual parameter, s (in Lean denoted as (X s)), to obtain a value
(proof) of (type) T. Modus ponens is like function application. In
constructive logic, a proof of the ∀ proposition *is* a function.
Here you begin to see how profound it is that proofs in constructive
logic tell you not only that a proposition is true but why. Here a
proof of X → Y or of ∀ (x : X), Y, is a program that when given any
value/proof of X as an argument returns a value/proof of Y. If you
can produce a function that turns any proof of X into a proof of Y,
then you've shown that whenever X is true, so is Y; and that's just
what X → Y is meant to say (similarly for ∀ (x : X), Y).
-/
/-
Walk-away message: Applying a proof/truth of a universal
generalization to a specific object yields a proof of the
generalization *specialized* to that particular object. That
is the ∀ elimination rule in the higher-order predicate logic of Lean.
-/
/-
Finally, let's compare our elimination rule, in the higher-order
predicate logic of Lean, with its first-order logic counterpart.
There are two big differences. First, in first-order logic, you
have to present the rule outside of the logic: you can't write
rules like this, ∀ (X Y : Prop), X → Y → (X ∧ Y), in first-order
logic, because in first-order logic you can't quantify over types,
propositions, predicates, or functions. Here we do just this with the
"∀ (X Y : Prop)." By contrast, in the higher-order logic of Lean,
we can represent the rules of first-order logic with no problem:
e.g., "∀ (X Y : Prop), X → Y → (X ∧ Y)."
Second, as we've discussed, using Lean's higher-order logic, you
can think of a proof of "∀ (X Y : Prop), X → Y → (X ∧ Y)" as a
function. Each variable bound by a ∀ and each implication premise
is an argument, with the type of the return value at the end of
the line. So, here, a proof of this proposition can be taken as
a function that takes two propositions, X and Y as arguments, then
a proof (value) of (type) X, then a proof (value) of type Y, and
that finally returns a proof (value) of (type) X ∧ Y. Whereas the proof of
∀ (X Y : Prop), X → Y → (X ∧ Y) is a function, the returned proof
of (X ∧ Y) is a pair-like data structure. Proofs in constructive
logic are *computational*, and you can even compute with them, as
you do when you *apply* a proof of a certain kind to an argument
to obtain a resulting proof/value.
-/
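/-
To make the function-and-pair reading concrete, here's a quick
sketch (using fresh names P and Q so as not to clash with the
section variables): the proof term is a function of four arguments
whose result, (and.intro p q), is the pair-like ∧ value.
-/
def and_intro_fn : ∀ (P Q : Prop), P → Q → (P ∧ Q) :=
λ P Q p q, and.intro p q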
/-
Quiz questions:
First-order logic. I know that every natural number is
beautiful (∀ n, NaturalNumber(n) → Beautiful(n) : true),
and I want to prove (7 is beautiful : true). Prove it.
Name the inference rule and identify the arguments you
give it to prove it.
Constructive logic. Suppose I have a proof, pf, that every
natural number is beautiful (∀ (n : ℕ), beautiful n), and I
need a proof that 7 is beautiful. How can I get the proof
I need? Answer in both English and with a Lean expression.
Formalize this story: All people are mortal, and Plato
is a person, therefore Plato is Mortal.
-/
/- Quick exercise. Give a proof of this (in English, and
give it a try in Lean as well).
-/
def arrow_trans := (X → Y) → (Y → Z) → (X → Z)
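/-
One possible answer, sketched here. In English: given a way to turn
any proof of X into a proof of Y, and a way to turn any proof of Y
into a proof of Z, chain them. In Lean the proof is literally
function composition:
-/
example : (X → Y) → (Y → Z) → (X → Z) :=
λ xy yz x, yz (xy x)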
end pred_logic
|
[STATEMENT]
lemma zero_vector_left_zero:
assumes "zero_vector x"
shows "x * y = x * bot"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. x * y = x * bot
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. x * y = x * bot
[PROOF STEP]
have "x * y \<le> x * bot"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. x * y \<le> x * bot
[PROOF STEP]
by (metis assms mult_isotone top.extremum vector_mult_closed zero_vector zero_vector_def)
[PROOF STATE]
proof (state)
this:
x * y \<le> x * bot
goal (1 subgoal):
1. x * y = x * bot
[PROOF STEP]
thus ?thesis
[PROOF STATE]
proof (prove)
using this:
x * y \<le> x * bot
goal (1 subgoal):
1. x * y = x * bot
[PROOF STEP]
by (simp add: order.antisym mult_right_isotone)
[PROOF STATE]
proof (state)
this:
x * y = x * bot
goal:
No subgoals!
[PROOF STEP]
qed
|
% language=uk
\environment luatex-style
\environment luatex-logos
\startcomponent luatex-math
\startchapter[reference=math,title={Math}]
The handling of mathematics in \LUATEX\ differs quite a bit from how \TEX82 (and
therefore \PDFTEX) handles math. First, \LUATEX\ adds primitives and extends some
others so that \UNICODE\ input can be used easily. Second, all of \TEX82's
internal special values (for example for operator spacing) have been made
accessible and changeable via control sequences. Third, there are extensions that
make it easier to use \OPENTYPE\ math fonts. And finally, there are some
extensions that have been proposed or considered in the past that are now added
to the engine.
\section{The current math style}
It is possible to discover the math style that will be used for a formula in an
expandable fashion (while the math list is still being read). To make this
possible, \LUATEX\ adds the new primitive: \type {\mathstyle}. This is a \quote
{convert command} like e.g. \type {\romannumeral}: its value can only be read,
not set.
\subsection{\type {\mathstyle}}
The returned value is between 0 and 7 (in math mode), or $-1$ (all other modes).
For easy testing, the eight math style commands have been altered so that they can
be used as numeric values, so you can write code like this:
\starttyping
\ifnum\mathstyle=\textstyle
\message{normal text style}
\else \ifnum\mathstyle=\crampedtextstyle
\message{cramped text style}
\fi \fi
\stoptyping
\subsection{\type {\Ustack}}
There are a few math commands in \TEX\ where the style that will be used is not
known straight from the start. These commands (\type {\over}, \type {\atop},
\type {\overwithdelims}, \type {\atopwithdelims}) would therefore normally return
wrong values for \type {\mathstyle}. To fix this, \LUATEX\ introduces a special
prefix command: \type {\Ustack}:
\starttyping
$\Ustack {a \over b}$
\stoptyping
The \type {\Ustack} command will scan the next brace and start a new math group
with the correct (numerator) math style.
\section{Unicode math characters}
Character handling is now extended up to the full \UNICODE\ range (the \type {\U}
prefix), which is compatible with \XETEX.
The math primitives from \TEX\ are kept as they are, except for the ones that
convert from input to math commands: \type {mathcode} and \type {delcode}. These
two now allow for a 21-bit character argument on the left hand side of the equals
sign.
Some of the new \LUATEX\ primitives read more than one separate value. This is
shown in the tables below by a plus sign in the second column.
The input for such primitives would look like this:
\starttyping
\def\overbrace{\Umathaccent 0 1 "23DE }
\stoptyping
The altered \TEX82 primitives are:
\starttabulate[|l|l|r|c|l|r|]
\NC \bf primitive \NC \bf min \NC \bf max \NC \kern 2em \NC \bf min \NC \bf max \NC \NR
\NC \type {\mathcode} \NC 0 \NC 10FFFF \NC = \NC 0 \NC 8000 \NC \NR
\NC \type {\delcode} \NC 0 \NC 10FFFF \NC = \NC 0 \NC FFFFFF \NC \NR
\stoptabulate
The unaltered ones are:
\starttabulate[|l|l|r|]
\NC \bf primitive \NC \bf min \NC \bf max \NC \NR
\NC \type {\mathchardef} \NC 0 \NC 8000 \NC \NR
\NC \type {\mathchar} \NC 0 \NC 7FFF \NC \NR
\NC \type {\mathaccent} \NC 0 \NC 7FFF \NC \NR
\NC \type {\delimiter} \NC 0 \NC 7FFFFFF \NC \NR
\NC \type {\radical} \NC 0 \NC 7FFFFFF \NC \NR
\stoptabulate
For practical reasons \type {\mathchardef} will silently accept values larger
than \type {0x8000} and interpret them as \type {\Umathcharnumdef}. This is needed
to satisfy older macro packages.
The following new primitives are compatible with \XETEX:
% somewhat fuzzy:
\starttabulate[|l|l|r|c|l|r|]
\NC \bf primitive \NC \bf min \NC \bf max \NC \kern 2em \NC \bf min \NC \bf max \NC \NR
\NC \type {\Umathchardef} \NC 0+0+0 \NC 7+FF+10FFFF\rlap{\high{1}} \NC \NC \NC \NC \NR
\NC \type {\Umathcharnumdef}\rlap{\high{5}} \NC -80000000 \NC 7FFFFFFF\rlap{\high{3}} \NC \NC \NC \NC \NR
\NC \type {\Umathcode} \NC 0 \NC 10FFFF \NC = \NC 0+0+0 \NC 7+FF+10FFFF\rlap{\high{1}} \NC \NR
\NC \type {\Udelcode} \NC 0 \NC 10FFFF \NC = \NC 0+0 \NC FF+10FFFF\rlap{\high{2}} \NC \NR
\NC \type {\Umathchar} \NC 0+0+0 \NC 7+FF+10FFFF \NC \NC \NC \NC \NR
\NC \type {\Umathaccent} \NC 0+0+0 \NC 7+FF+10FFFF\rlap{\high{2,4}} \NC \NC \NC \NC \NR
\NC \type {\Udelimiter} \NC 0+0+0 \NC 7+FF+10FFFF\rlap{\high{2}} \NC \NC \NC \NC \NR
\NC \type {\Uradical} \NC 0+0 \NC FF+10FFFF\rlap{\high{2}} \NC \NC \NC \NC \NR
\NC \type {\Umathcharnum} \NC -80000000 \NC 7FFFFFFF\rlap{\high{3}} \NC \NC \NC \NC \NR
\NC \type {\Umathcodenum} \NC 0 \NC 10FFFF \NC = \NC -80000000 \NC 7FFFFFFF\rlap{\high{3}} \NC \NR
\NC \type {\Udelcodenum} \NC 0 \NC 10FFFF \NC = \NC -80000000 \NC 7FFFFFFF\rlap{\high{3}} \NC \NR
\stoptabulate
Specifications typically look like:
\starttyping
\Umathchardef\xx="1"0"456
\Umathcode 123="1"0"789
\stoptyping
Note 1: The new primitives that deal with delimiter|-|style objects do not set up a
\quote {large family}. Selecting a suitable size for display purposes is expected
to be dealt with by the font via the \type {\Umathoperatorsize} parameter (more
information can be found in a following section).
Note 2: For these three primitives, all information is packed into a single
signed integer. For the first two (\type {\Umathcharnum} and \type
{\Umathcodenum}), the lowest 21 bits are the character code, the 3 bits above
that represent the math class, and the family data is kept in the topmost bits
(this means that the values for math families 128--255 are actually negative).
For \type {\Udelcodenum} there is no math class. The math family information is
stored in the bits directly on top of the character code. Using these three
commands is not as natural as using the two- and three|-|value commands, so
unless you know exactly what you are doing and absolutely require the speedup
resulting from the faster input scanning, it is better to use the verbose
commands instead.
Note 3: The \type {\Umathaccent} command accepts optional keywords to control
various details regarding math accents. See \in {section} [mathacc] below for
details.
New primitives that exist in \LUATEX\ only (all of these will be explained
in following sections):
\starttabulate[|l|l|l|l|]
\NC \bf primitive \NC \bf value range (in hex) \NC \NR
\NC \type {\Uroot} \NC 0+0--FF+10FFFF$^2$ \NC \NR
\NC \type {\Uoverdelimiter} \NC 0+0--FF+10FFFF$^2$ \NC \NR
\NC \type {\Uunderdelimiter} \NC 0+0--FF+10FFFF$^2$ \NC \NR
\NC \type {\Udelimiterover} \NC 0+0--FF+10FFFF$^2$ \NC \NR
\NC \type {\Udelimiterunder} \NC 0+0--FF+10FFFF$^2$ \NC \NR
\stoptabulate
\section{Cramped math styles}
\LUATEX\ has four new primitives to set the cramped math styles directly:
\starttyping
\crampeddisplaystyle
\crampedtextstyle
\crampedscriptstyle
\crampedscriptscriptstyle
\stoptyping
These additional commands are not all that valuable on their own, but they come
in handy as arguments to the math parameter settings that will be added shortly.
In Eijkhout's \quotation {\TEX\ by Topic} the rules for handling styles in scripts
are described as follows:
\startitemize
\startitem
In any style superscripts and subscripts are taken from the next smaller style.
Exception: in display style they are taken in script style.
\stopitem
\startitem
Subscripts are always in the cramped variant of the style; superscripts are only
cramped if the original style was cramped.
\stopitem
\startitem
In an \type {..\over..} formula in any style the numerator and denominator are
taken from the next smaller style.
\stopitem
\startitem
The denominator is always in cramped style; the numerator is only in cramped
style if the original style was cramped.
\stopitem
\startitem
Formulas under a \type {\sqrt} or \type {\overline} are in cramped style.
\stopitem
\stopitemize
In \LUATEX\ one can set the styles in more detail which means that you sometimes
have to set both normal and cramped styles to get the effect you want. If we
force styles in the script using \type {\scriptstyle} and \type {\crampedscriptstyle}
we get this:
\startbuffer[demo]
\starttabulate
\NC default \NC $b_{x=xx}^{x=xx}$ \NC \NR
\NC script \NC $b_{\scriptstyle x=xx}^{\scriptstyle x=xx}$ \NC \NR
\NC crampedscript \NC $b_{\crampedscriptstyle x=xx}^{\crampedscriptstyle x=xx}$ \NC \NR
\stoptabulate
\stopbuffer
\getbuffer[demo]
Now we set the following parameters
\startbuffer[setup]
\Umathordrelspacing\scriptstyle=30mu
\Umathordordspacing\scriptstyle=30mu
\stopbuffer
\typebuffer[setup]
This gives:
\start\getbuffer[setup,demo]\stop
But, as this is not what is expected (visually), we should say:
\startbuffer[setup]
\Umathordrelspacing\scriptstyle=30mu
\Umathordordspacing\scriptstyle=30mu
\Umathordrelspacing\crampedscriptstyle=30mu
\Umathordordspacing\crampedscriptstyle=30mu
\stopbuffer
\typebuffer[setup]
Now we get:
\start\getbuffer[setup,demo]\stop
\section{Math parameter settings}
In \LUATEX, the font dimension parameters that \TEX\ used in math typesetting are
now accessible via primitive commands. In fact, refactoring of the math engine
has resulted in many more parameters than were accessible before.
\starttabulate
\NC \bf primitive name \NC \bf description \NC \NR
\NC \type {\Umathquad} \NC the width of 18 mu's \NC \NR
\NC \type {\Umathaxis} \NC height of the vertical center axis of
the math formula above the baseline \NC \NR
\NC \type {\Umathoperatorsize} \NC minimum size of large operators in display mode \NC \NR
\NC \type {\Umathoverbarkern} \NC vertical clearance above the rule \NC \NR
\NC \type {\Umathoverbarrule} \NC the width of the rule \NC \NR
\NC \type {\Umathoverbarvgap} \NC vertical clearance below the rule \NC \NR
\NC \type {\Umathunderbarkern} \NC vertical clearance below the rule \NC \NR
\NC \type {\Umathunderbarrule} \NC the width of the rule \NC \NR
\NC \type {\Umathunderbarvgap} \NC vertical clearance above the rule \NC \NR
\NC \type {\Umathradicalkern} \NC vertical clearance above the rule \NC \NR
\NC \type {\Umathradicalrule} \NC the width of the rule \NC \NR
\NC \type {\Umathradicalvgap} \NC vertical clearance below the rule \NC \NR
\NC \type {\Umathradicaldegreebefore}\NC the forward kern that takes place before placement of
the radical degree \NC \NR
\NC \type {\Umathradicaldegreeafter} \NC the backward kern that takes place after placement of
the radical degree \NC \NR
\NC \type {\Umathradicaldegreeraise} \NC this is the percentage of the total height and depth of
the radical sign that the degree is raised by; it is
expressed in \type {percents}, so 60\% is expressed as the
integer $60$ \NC \NR
\NC \type {\Umathstackvgap} \NC vertical clearance between the two
elements in a \type {\atop} stack \NC \NR
\NC \type {\Umathstacknumup} \NC numerator shift upward in \type {\atop} stack \NC \NR
\NC \type {\Umathstackdenomdown} \NC denominator shift downward in \type {\atop} stack \NC \NR
\NC \type {\Umathfractionrule} \NC the width of the rule in a \type {\over} \NC \NR
\NC \type {\Umathfractionnumvgap} \NC vertical clearance between the numerator and the rule \NC \NR
\NC \type {\Umathfractionnumup} \NC numerator shift upward in \type {\over} \NC \NR
\NC \type {\Umathfractiondenomvgap} \NC vertical clearance between the denominator and the rule \NC \NR
\NC \type {\Umathfractiondenomdown} \NC denominator shift downward in \type {\over} \NC \NR
\NC \type {\Umathfractiondelsize} \NC minimum delimiter size for \type {\...withdelims} \NC \NR
\NC \type {\Umathlimitabovevgap} \NC vertical clearance for limits above operators \NC \NR
\NC \type {\Umathlimitabovebgap} \NC vertical baseline clearance for limits above operators \NC \NR
\NC \type {\Umathlimitabovekern} \NC space reserved at the top of the limit \NC \NR
\NC \type {\Umathlimitbelowvgap} \NC vertical clearance for limits below operators \NC \NR
\NC \type {\Umathlimitbelowbgap} \NC vertical baseline clearance for limits below operators \NC \NR
\NC \type {\Umathlimitbelowkern} \NC space reserved at the bottom of the limit \NC \NR
\NC \type {\Umathoverdelimitervgap} \NC vertical clearance for limits above delimiters \NC \NR
\NC \type {\Umathoverdelimiterbgap} \NC vertical baseline clearance for limits above delimiters \NC \NR
\NC \type {\Umathunderdelimitervgap} \NC vertical clearance for limits below delimiters \NC \NR
\NC \type {\Umathunderdelimiterbgap} \NC vertical baseline clearance for limits below delimiters \NC \NR
\NC \type {\Umathsubshiftdrop} \NC subscript drop for boxes and subformulas \NC \NR
\NC \type {\Umathsubshiftdown} \NC subscript drop for characters \NC \NR
\NC \type {\Umathsupshiftdrop} \NC superscript drop (raise, actually) for boxes and subformulas \NC \NR
\NC \type {\Umathsupshiftup} \NC superscript raise for characters \NC \NR
\NC \type {\Umathsubsupshiftdown} \NC subscript drop in the presence of a superscript \NC \NR
\NC \type {\Umathsubtopmax} \NC the top of standalone subscripts cannot be higher than this
above the baseline \NC \NR
\NC \type {\Umathsupbottommin} \NC the bottom of standalone superscripts cannot be less than
this above the baseline \NC \NR
\NC \type {\Umathsupsubbottommax} \NC the bottom of the superscript of a combined super- and subscript
must be at least as high as this above the baseline \NC \NR
\NC \type {\Umathsubsupvgap} \NC vertical clearance between super- and subscript \NC \NR
\NC \type {\Umathspaceafterscript} \NC additional space added after a super- or subscript \NC \NR
\NC \type {\Umathconnectoroverlapmin}\NC minimum overlap between parts in an extensible recipe \NC \NR
\stoptabulate
Each of the parameters in this section can be set by a command like this:
\starttyping
\Umathquad\displaystyle=1em
\stoptyping
They obey grouping, and you can use \type {\the\Umathquad\displaystyle} if
needed.
\section{Skips around display math}
The injection of \type {\abovedisplayskip} and \type {\belowdisplayskip} is not
symmetrical. The skip above is always inserted, even when zero, but the skip below
is only inserted when larger than zero. Especially the latter makes it sometimes
hard to fully control spacing. Therefore \LUATEX\ comes with a new directive: \type
{\mathdisplayskipmode}. The following values apply:
\starttabulate
\NC 0 \NC normal \TEX\ behaviour: always above, only below when larger than zero \NC \NR
\NC 1 \NC always \NC \NR
\NC 2 \NC only when not zero \NC \NR
\NC 3 \NC never, not even when not zero \NC \NR
\stoptabulate
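For example, to suppress both skips entirely one can say (a minimal usage
sketch):
\starttyping
\mathdisplayskipmode=3
\stoptyping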
\section{Font-based Math Parameters}
While it is nice to have these math parameters available for tweaking, it would
be tedious to have to set each of them by hand. For this reason, \LUATEX\
initializes a bunch of these parameters whenever you assign a font identifier to
a math family based on either the traditional math font dimensions in the font
(for assignments to math family~2 and~3 using \TFM|-|based fonts like \type
{cmsy} and \type {cmex}), or based on the named values in a potential \type
{MathConstants} table when the font is loaded via Lua. If there is a \type
{MathConstants} table, this takes precedence over font dimensions, and in that
case no attention is paid to which family is being assigned to: the \type
{MathConstants} table in the last assigned family sets all parameters.
In the table below, the one|-|letter style abbreviations and symbolic tfm font
dimension names match those used in the \TEX book. Assignments to \type
{\textfont} set the values for the cramped and uncramped display and text styles,
\type {\scriptfont} sets the script styles, and \type {\scriptscriptfont} sets
the scriptscript styles, so we have eight parameters for three font sizes. In the
\TFM\ case, assignments only happen in family~2 and family~3 (and of course only
for the parameters for which there are font dimensions).
Besides the parameters below, \LUATEX\ also looks at the \quote {space} font
dimension parameter. For math fonts, this should be set to zero.
\start
\switchtobodyfont[8pt]
\starttabulate[|l|l|l|p|]
\NC \bf variable \NC \bf style \NC \bf default value opentype \NC \bf default value tfm \NC \NR
\NC \type {\Umathaxis} \NC -- \NC AxisHeight \NC axis_height \NC \NR
\NC \type {\Umathoperatorsize} \NC D, D' \NC DisplayOperatorMinHeight \NC $^6$ \NC \NR
\NC \type {\Umathfractiondelsize} \NC D, D' \NC FractionDelimiterDisplayStyleSize$^9$ \NC delim1 \NC \NR
\NC \NC T, T', S, S', SS, SS' \NC FractionDelimiterSize$^9$ \NC delim2 \NC \NR
\NC \type {\Umathfractiondenomdown} \NC D, D' \NC FractionDenominatorDisplayStyleShiftDown \NC denom1 \NC \NR
\NC \NC T, T', S, S', SS, SS' \NC FractionDenominatorShiftDown \NC denom2 \NC \NR
\NC \type {\Umathfractiondenomvgap} \NC D, D' \NC FractionDenominatorDisplayStyleGapMin \NC 3*default_rule_thickness \NC \NR
\NC \NC T, T', S, S', SS, SS' \NC FractionDenominatorGapMin \NC default_rule_thickness \NC \NR
\NC \type {\Umathfractionnumup} \NC D, D' \NC FractionNumeratorDisplayStyleShiftUp \NC num1 \NC \NR
\NC \NC T, T', S, S', SS, SS' \NC FractionNumeratorShiftUp \NC num2 \NC \NR
\NC \type {\Umathfractionnumvgap} \NC D, D' \NC FractionNumeratorDisplayStyleGapMin \NC 3*default_rule_thickness \NC \NR
\NC \NC T, T', S, S', SS, SS' \NC FractionNumeratorGapMin \NC default_rule_thickness \NC \NR
\NC \type {\Umathfractionrule} \NC -- \NC FractionRuleThickness \NC default_rule_thickness \NC \NR
\NC \type {\Umathskewedfractionhgap} \NC -- \NC SkewedFractionHorizontalGap \NC math_quad/2 \NC \NR
\NC \type {\Umathskewedfractionvgap} \NC -- \NC SkewedFractionVerticalGap \NC math_x_height \NC \NR
\NC \type {\Umathlimitabovebgap} \NC -- \NC UpperLimitBaselineRiseMin \NC big_op_spacing3 \NC \NR
\NC \type {\Umathlimitabovekern} \NC -- \NC 0$^1$ \NC big_op_spacing5 \NC \NR
\NC \type {\Umathlimitabovevgap} \NC -- \NC UpperLimitGapMin \NC big_op_spacing1 \NC \NR
\NC \type {\Umathlimitbelowbgap} \NC -- \NC LowerLimitBaselineDropMin \NC big_op_spacing4 \NC \NR
\NC \type {\Umathlimitbelowkern} \NC -- \NC 0$^1$ \NC big_op_spacing5 \NC \NR
\NC \type {\Umathlimitbelowvgap} \NC -- \NC LowerLimitGapMin \NC big_op_spacing2 \NC \NR
\NC \type {\Umathoverdelimitervgap} \NC -- \NC StretchStackGapBelowMin \NC big_op_spacing1 \NC \NR
\NC \type {\Umathoverdelimiterbgap} \NC -- \NC StretchStackTopShiftUp \NC big_op_spacing3 \NC \NR
\NC \type {\Umathunderdelimitervgap} \NC-- \NC StretchStackGapAboveMin \NC big_op_spacing2 \NC \NR
\NC \type {\Umathunderdelimiterbgap} \NC-- \NC StretchStackBottomShiftDown \NC big_op_spacing4 \NC \NR
\NC \type {\Umathoverbarkern} \NC -- \NC OverbarExtraAscender \NC default_rule_thickness \NC \NR
\NC \type {\Umathoverbarrule} \NC -- \NC OverbarRuleThickness \NC default_rule_thickness \NC \NR
\NC \type {\Umathoverbarvgap} \NC -- \NC OverbarVerticalGap \NC 3*default_rule_thickness \NC \NR
\NC \type {\Umathquad} \NC -- \NC <font_size(f)>$^1$ \NC math_quad \NC \NR
\NC \type {\Umathradicalkern} \NC -- \NC RadicalExtraAscender \NC default_rule_thickness \NC \NR
\NC \type {\Umathradicalrule} \NC -- \NC RadicalRuleThickness \NC <not set>$^2$ \NC \NR
\NC \type {\Umathradicalvgap} \NC D, D' \NC RadicalDisplayStyleVerticalGap \NC (default_rule_thickness+\crlf
(abs(math_x_height)/4))$^3$ \NC \NR
\NC \NC T, T', S, S', SS, SS' \NC RadicalVerticalGap \NC (default_rule_thickness+\crlf
(abs(default_rule_thickness)/4))$^3$ \NC \NR
\NC \type {\Umathradicaldegreebefore} \NC -- \NC RadicalKernBeforeDegree \NC <not set>$^2$ \NC \NR
\NC \type {\Umathradicaldegreeafter} \NC -- \NC RadicalKernAfterDegree \NC <not set>$^2$ \NC \NR
\NC \type {\Umathradicaldegreeraise} \NC -- \NC RadicalDegreeBottomRaisePercent \NC <not set>$^{2,7}$ \NC \NR
\NC \type {\Umathspaceafterscript} \NC -- \NC SpaceAfterScript \NC script_space$^4$ \NC \NR
\NC \type {\Umathstackdenomdown} \NC D, D' \NC StackBottomDisplayStyleShiftDown \NC denom1 \NC \NR
\NC \NC T, T', S, S', SS, SS' \NC StackBottomShiftDown \NC denom2 \NC \NR
\NC \type {\Umathstacknumup} \NC D, D' \NC StackTopDisplayStyleShiftUp \NC num1 \NC \NR
\NC \NC T, T', S, S', SS, SS' \NC StackTopShiftUp \NC num3 \NC \NR
\NC \type {\Umathstackvgap} \NC D, D' \NC StackDisplayStyleGapMin \NC 7*default_rule_thickness \NC \NR
\NC \NC T, T', S, S', SS, SS' \NC StackGapMin \NC 3*default_rule_thickness \NC \NR
\NC \type {\Umathsubshiftdown} \NC -- \NC SubscriptShiftDown \NC sub1 \NC \NR
\NC \type {\Umathsubshiftdrop} \NC -- \NC SubscriptBaselineDropMin \NC sub_drop \NC \NR
\NC \type {\Umathsubsupshiftdown} \NC -- \NC SubscriptShiftDownWithSuperscript$^8$ \NC \NC \NR
\NC \NC \NC \quad\ or SubscriptShiftDown \NC sub2 \NC \NR
\NC \type {\Umathsubtopmax} \NC -- \NC SubscriptTopMax \NC (abs(math_x_height * 4) / 5) \NC \NR
\NC \type {\Umathsubsupvgap} \NC -- \NC SubSuperscriptGapMin \NC 4*default_rule_thickness \NC \NR
\NC \type {\Umathsupbottommin} \NC -- \NC SuperscriptBottomMin \NC (abs(math_x_height) / 4) \NC \NR
\NC \type {\Umathsupshiftdrop} \NC -- \NC SuperscriptBaselineDropMax \NC sup_drop \NC \NR
\NC \type {\Umathsupshiftup} \NC D \NC SuperscriptShiftUp \NC sup1 \NC \NR
\NC \NC T, S, SS, \NC SuperscriptShiftUp \NC sup2 \NC \NR
\NC \NC D', T', S', SS' \NC SuperscriptShiftUpCramped \NC sup3 \NC \NR
\NC \type {\Umathsupsubbottommax} \NC -- \NC SuperscriptBottomMaxWithSubscript \NC (abs(math_x_height * 4) / 5) \NC \NR
\NC \type {\Umathunderbarkern} \NC -- \NC UnderbarExtraDescender \NC default_rule_thickness \NC \NR
\NC \type {\Umathunderbarrule} \NC -- \NC UnderbarRuleThickness \NC default_rule_thickness \NC \NR
\NC \type {\Umathunderbarvgap} \NC -- \NC UnderbarVerticalGap \NC 3*default_rule_thickness \NC \NR
\NC \type {\Umathconnectoroverlapmin} \NC -- \NC MinConnectorOverlap \NC 0$^5$ \NC \NR
\stoptabulate
\stop
Note 1: \OPENTYPE\ fonts set \type {\Umathlimitabovekern} and \type
{\Umathlimitbelowkern} to zero and set \type {\Umathquad} to the font size of the
used font, because these are not supported in the \type {MATH} table.
Note 2: Traditional \TFM\ fonts do not set \type {\Umathradicalrule} because
\TEX82\ uses the height of the radical instead. When this parameter is indeed not
set when \LUATEX\ has to typeset a radical, a backward compatibility mode will
kick in that assumes that an oldstyle \TEX\ font is used. Also, they do not set
\type {\Umathradicaldegreebefore}, \type {\Umathradicaldegreeafter}, and \type
{\Umathradicaldegreeraise}. These are then automatically initialized to
$5/18$quad, $-10/18$quad, and 60.
Note 3: If \TFM\ fonts are used, then the \type {\Umathradicalvgap} is not set
until the first time \LUATEX\ has to typeset a formula because this needs
parameters from both family~2 and family~3. This provides a partial backward
compatibility with \TEX82, but that compatibility is only partial: once the \type
{\Umathradicalvgap} is set, it will not be recalculated any more.
Note 4: When \TFM\ fonts are used a similar situation arises with respect to
\type {\Umathspaceafterscript}: it is not set until the first time \LUATEX\ has
to typeset a formula. This provides some backward compatibility with \TEX82. But
once the \type {\Umathspaceafterscript} is set, \type {\scriptspace} will never
be looked at again.
Note 5: Traditional \TFM\ fonts set \type {\Umathconnectoroverlapmin} to zero
because \TEX82\ always stacks extensibles without any overlap.
Note 6: The \type {\Umathoperatorsize} is only used in \type {\displaystyle}, and
is only set in \OPENTYPE\ fonts. In \TFM\ font mode, it is artificially set to
one scaled point more than the initial attempt's size, so that the \quote
{first next} size will always be tried, just like in \TEX82.
Note 7: The \type {\Umathradicaldegreeraise} is a special case because it is the
only parameter that is expressed in a percentage instead of as a number of scaled
points.
Note 8: \type {SubscriptShiftDownWithSuperscript} does not actually exist in the
\quote {standard} \OPENTYPE\ math font Cambria, but it is useful enough to be
added.
Note 9: \type {FractionDelimiterDisplayStyleSize} and \type
{FractionDelimiterSize} do not actually exist in the \quote {standard} \OPENTYPE\
math font Cambria, but were useful enough to be added.
\section{Math spacing setting}
Besides the parameters mentioned in the previous sections, there are also 64 new
primitives to control the math spacing table (as explained in Chapter~18 of the
\TEX book). The primitive names are a simple matter of combining two math atom
types, but for completeness' sake, here is the whole list:
\starttwocolumns
\starttyping
\Umathordordspacing
\Umathordopspacing
\Umathordbinspacing
\Umathordrelspacing
\Umathordopenspacing
\Umathordclosespacing
\Umathordpunctspacing
\Umathordinnerspacing
\Umathopordspacing
\Umathopopspacing
\Umathopbinspacing
\Umathoprelspacing
\Umathopopenspacing
\Umathopclosespacing
\Umathoppunctspacing
\Umathopinnerspacing
\Umathbinordspacing
\Umathbinopspacing
\Umathbinbinspacing
\Umathbinrelspacing
\Umathbinopenspacing
\Umathbinclosespacing
\Umathbinpunctspacing
\Umathbininnerspacing
\Umathrelordspacing
\Umathrelopspacing
\Umathrelbinspacing
\Umathrelrelspacing
\Umathrelopenspacing
\Umathrelclosespacing
\Umathrelpunctspacing
\Umathrelinnerspacing
\Umathopenordspacing
\Umathopenopspacing
\Umathopenbinspacing
\Umathopenrelspacing
\Umathopenopenspacing
\Umathopenclosespacing
\Umathopenpunctspacing
\Umathopeninnerspacing
\Umathcloseordspacing
\Umathcloseopspacing
\Umathclosebinspacing
\Umathcloserelspacing
\Umathcloseopenspacing
\Umathcloseclosespacing
\Umathclosepunctspacing
\Umathcloseinnerspacing
\Umathpunctordspacing
\Umathpunctopspacing
\Umathpunctbinspacing
\Umathpunctrelspacing
\Umathpunctopenspacing
\Umathpunctclosespacing
\Umathpunctpunctspacing
\Umathpunctinnerspacing
\Umathinnerordspacing
\Umathinneropspacing
\Umathinnerbinspacing
\Umathinnerrelspacing
\Umathinneropenspacing
\Umathinnerclosespacing
\Umathinnerpunctspacing
\Umathinnerinnerspacing
\stoptyping
\stoptwocolumns
These parameters are of type \type {\muskip}, so setting a parameter can be done
like this:
\starttyping
\Umathopordspacing\displaystyle=4mu plus 2mu
\stoptyping
They are all initialized by \type {initex} to the values mentioned in the table
in Chapter~18 of the \TEX book.
Note 1: For ease of use as well as for backward compatibility, \type
{\thinmuskip}, \type {\medmuskip} and \type {\thickmuskip} are treated
specially. In their case a pointer to the corresponding internal parameter is
saved, not the actual \type {\muskip} value. This means that any later changes to
one of these three parameters will be taken into account.
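A minimal sketch of the consequence (the values are arbitrary):
\starttyping
\Umathopordspacing\displaystyle=\thinmuskip
\thinmuskip=5mu plus 3mu
\stoptyping
The later change to \type {\thinmuskip} is picked up by the spacing parameter,
because a pointer rather than a snapshot of the value was stored.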
Note 2: Careful readers will realise that there are also primitives for the items
marked \type {*} in the \TEX book. These will never actually be used, as those
combinations of atoms cannot happen, but it seemed better not to break
orthogonality. They are initialized to zero.
\section[mathacc]{Math accent handling}
\LUATEX\ supports both top accents and bottom accents in math mode, and math
accents stretch automatically (if this is supported by the font the accent comes
from, of course). Bottom and combined accents as well as fixed-width math accents
are controlled by optional keywords following \type {\Umathaccent}.
The keyword \type {bottom} after \type {\Umathaccent} signals that a bottom accent
is needed, and the keyword \type {both} signals that both a top and a bottom
accent are needed (in this case two accents need to be specified, of course).
Then the set of three integers defining the accent is read. This set of integers
can be prefixed by the \type {fixed} keyword to indicate that a non-stretching
variant is requested (in case of both accents, this step is repeated).
A simple example:
\starttyping
\Umathaccent both fixed 0 0 "20D7 fixed 0 0 "20D7 {example}
\stoptyping
If a math top accent has to be placed and the accentee is a character and has a
non-zero \type {top_accent} value, then this value will be used to place the
accent instead of the \type {\skewchar} kern used by \TEX82.
The \type {top_accent} value represents a vertical line somewhere in the
accentee. The accent will be shifted horizontally such that its own \type
{top_accent} line coincides with the one from the accentee. If the \type
{top_accent} value of the accent is zero, then half the width of the accent
followed by its italic correction is used instead.
The vertical placement of a top accent depends on the \type {x_height} of the
font of the accentee (as explained in the \TEX book), but if value that turns out
to be zero and the font had a \type {MathConstants} table, then \type
{AccentBaseHeight} is used instead.
The vertical placement of a bottom accent is straight below the accentee, no
correction takes place.
Possible locations are \type {top}, \type {bottom}, \type {both} and \type
{center}. When no location is given \type {top} is assumed. An additional
parameter \type {fraction} can be specified followed by a number; a value of, for
instance, 1200 means that the criterion is 1.2 times the width of the nucleus. The
fraction only applies to the stepwise selected shapes and is mostly meant for the
\type {overlay} location. It also works for the other locations but then it
concerns the width.
\section{Math root extension}
The new primitive \type {\Uroot} allows the construction of a radical noad
including a degree field. Its syntax is an extension of \type {\Uradical}:
\starttyping
\Uradical <fam integer> <char integer> <radicand>
\Uroot <fam integer> <char integer> <degree> <radicand>
\stoptyping
The placement of the degree is controlled by the math parameters \type
{\Umathradicaldegreebefore}, \type {\Umathradicaldegreeafter}, and \type
{\Umathradicaldegreeraise}. The degree will be typeset in \type
{\scriptscriptstyle}.
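For instance, a cube root can be entered as follows (a sketch that assumes the
math font provides the radical sign \type {U+221A} in family~0):
\starttyping
$\Uroot 0 "221A {3} {x+1}$
\stoptyping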
\section{Math kerning in super- and subscripts}
The character fields in a \LUA|-|loaded \OPENTYPE\ math font can have a \quote
{mathkern} table. The format of this table is the same as the \quote {mathkern}
table that is returned by the \type {fontloader} library, except that all height
and kern values have to be specified in actual scaled points.
When a super- or subscript has to be placed next to a math item, \LUATEX\ checks
whether the super- or subscript and the nucleus are both simple character items.
If they are, and if the fonts of both character items are \OPENTYPE\ fonts (as
opposed to legacy \TEX\ fonts), then \LUATEX\ will use the \OPENTYPE\ math
algorithm for deciding on the horizontal placement of the super- or subscript.
This works as follows:
\startitemize
\startitem
The vertical position of the script is calculated.
\stopitem
\startitem
The default horizontal position is flat next to the base character.
\stopitem
\startitem
For superscripts, the italic correction of the base character is added.
\stopitem
\startitem
For a superscript, two vertical values are calculated: the bottom of the
script (after shifting up), and the top of the base. For a subscript, the two
values are the top of the (shifted down) script, and the bottom of the base.
\stopitem
\startitem
For each of these two locations:
\startitemize
\startitem
find the math kern value at this height for the base (for a subscript
placement, this is the bottom_right corner, for a superscript
placement the top_right corner)
\stopitem
\startitem
find the math kern value at this height for the script (for a
subscript placement, this is the top_left corner, for a superscript
placement the bottom_left corner)
\stopitem
\startitem
add the found values together to get a preliminary result.
\stopitem
\stopitemize
\stopitem
\startitem
The horizontal kern to be applied is the smallest of the two results from
previous step.
\stopitem
\stopitemize
The math kern value at a specific height is the kern value that is specified by the
next higher height and kern pair, or the highest one in the character (if there is no
value high enough in the character), or simply zero (if the character has no math kern
pairs at all).
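In pseudo|-|\LUA\ terms the lookup described above amounts to the following
sketch (not the engine's actual code; \type {kerns} here is a list of height
and kern records sorted by height):
\starttyping
function mathkern_at(kerns, height)
    if #kerns == 0 then return 0 end -- no math kern pairs at all
    for _, p in ipairs(kerns) do
        if p.height > height then return p.kern end
    end
    return kerns[#kerns].kern -- no pair is high enough: take the highest
end
\stoptyping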
\section{Scripts on horizontally extensible items like arrows}
The primitives \type {\Uunderdelimiter} and \type {\Uoverdelimiter} allow the
placement of a subscript or superscript on an automatically extensible item and
\type {\Udelimiterunder} and \type {\Udelimiterover} allow the placement of an
automatically extensible item as a subscript or superscript on a nucleus. The
input:
% these produce radical noads .. in fact the code base has the numbers wrong for
% quite a while, so no one seems to use this
\startbuffer
$\Uoverdelimiter 0 "2194 {\hbox{\strut overdelimiter}}$
$\Uunderdelimiter 0 "2194 {\hbox{\strut underdelimiter}}$
$\Udelimiterover 0 "2194 {\hbox{\strut delimiterover}}$
$\Udelimiterunder 0 "2194 {\hbox{\strut delimiterunder}}$
\stopbuffer
\typebuffer will render this:
\blank \startnarrower \getbuffer \stopnarrower \blank
The vertical placements are controlled by \type {\Umathunderdelimiterbgap}, \type
{\Umathunderdelimitervgap}, \type {\Umathoverdelimiterbgap}, and \type
{\Umathoverdelimitervgap} in a similar way as limit placements on large operators.
The superscript in \type {\Uoverdelimiter} is typeset in a suitable script style;
the subscript in \type {\Uunderdelimiter} is cramped as well.
These primitives accept an optional \type {width} specification. When it is used,
the (also optional) keywords \type {left}, \type {middle} and \type {right}
determine what happens when a requested size can't be met (which can happen when
we step to successively larger variants).
An extra primitive \type {\Uhextensible} is available that can be used like this:
\startbuffer
$\Uhextensible width 10cm 0 "2194$
\stopbuffer
\typebuffer This will render this:
\blank \startnarrower \getbuffer \stopnarrower \blank
Here you can also pass options, like:
\startbuffer
$\Uhextensible width 1pt middle 0 "2194$
\stopbuffer
\typebuffer This gives:
\blank \startnarrower \getbuffer \stopnarrower \blank
\LUATEX\ internally uses a structure that supports \OPENTYPE\ \quote
{MathVariants} as well as \TFM\ \quote {extensible recipes}. In most cases where
font metrics are involved we have a different code path for traditional fonts and
\OPENTYPE\ fonts.
\section {Extracting values}
You can extract the components of a math character. Say that we have defined:
\starttyping
\Umathcode 1 2 3 4
\stoptyping
then
\starttyping
[\Umathcharclass1] [\Umathcharfam1] [\Umathcharslot1]
\stoptyping
will return:
\starttyping
[2] [3] [4]
\stoptyping
These commands are provided as a convenience. Before they became available you
could do the following:
\starttyping
\def\Umathcharclass{\directlua{tex.print(tex.getmathcode(token.scan_int())[1])}}
\def\Umathcharfam {\directlua{tex.print(tex.getmathcode(token.scan_int())[2])}}
\def\Umathcharslot {\directlua{tex.print(tex.getmathcode(token.scan_int())[3])}}
\stoptyping
\section{Fractions}
The \type {\abovewithdelims} command accepts a keyword \type {exact}. When it is
given, the extra space relative to the rule thickness is not added. One can of course
use the \type {\Umathfraction..gap} commands to influence the spacing. Also the
rule is still positioned around the math axis.
\starttyping
$$ { {a} \abovewithdelims() exact 4pt {b} }$$
\stoptyping
The math parameter table contains some parameters that specify a horizontal and
vertical gap for skewed fractions. Of course some guessing is needed in order to
implement something that uses them. And so we now provide a primitive similar to the
other fraction related ones but with a few options so that one can influence the
rendering. Of course a user can also mess around a bit with the parameters
\type {\Umathskewedfractionhgap} and \type {\Umathskewedfractionvgap}.
The syntax used here is:
\starttyping
{ {1} \Uskewed / <options> {2} }
{ {1} \Uskewedwithdelims / () <options> {2} }
\stoptyping
where the options can be \type {noaxis} and \type {exact}. By default we add half
the axis to the shifts and by default we zero the width of the middle character.
For Latin Modern the result looks as follows:
\def\ShowA#1#2#3{$x + { {#1} \Uskewed / #3 {#2} } + x$}
\def\ShowB#1#2#3{$x + { {#1} \Uskewedwithdelims / () #3 {#2} } + x$}
\start
\switchtobodyfont[modern]
\starttabulate[||||||]
\NC \NC
\ShowA{a}{b}{} \NC
\ShowA{1}{2}{} \NC
\ShowB{a}{b}{} \NC
\ShowB{1}{2}{} \NC
\NR
\NC \type{exact} \NC
\ShowA{a}{b}{exact} \NC
\ShowA{1}{2}{exact} \NC
\ShowB{a}{b}{exact} \NC
\ShowB{1}{2}{exact} \NC
\NR
\NC \type{noaxis} \NC
\ShowA{a}{b}{noaxis} \NC
\ShowA{1}{2}{noaxis} \NC
\ShowB{a}{b}{noaxis} \NC
\ShowB{1}{2}{noaxis} \NC
\NR
\NC \type{exact noaxis} \NC
\ShowA{a}{b}{exact noaxis} \NC
\ShowA{1}{2}{exact noaxis} \NC
\ShowB{a}{b}{exact noaxis} \NC
\ShowB{1}{2}{exact noaxis} \NC
\NR
\stoptabulate
\stop
\section {Other Math changes}
\subsection {Verbose versions of single-character math commands}
\LUATEX\ defines six new primitives that have the same function as
\type {^}, \type {_}, \type {$}, and \type {$$}: %$
\starttabulate[|l|l|l|l|]
\NC \bf primitive \NC \bf explanation \NC \NR
\NC \type {\Usuperscript} \NC Duplicates the functionality of \type {^} \NC \NR
\NC \type {\Usubscript} \NC Duplicates the functionality of \type {_} \NC \NR
\NC \type {\Ustartmath} \NC Duplicates the functionality of \type {$}, % $
when used in non-math mode. \NC \NR
\NC \type {\Ustopmath} \NC Duplicates the functionality of \type {$}, % $
when used in inline math mode. \NC \NR
\NC \type {\Ustartdisplaymath} \NC Duplicates the functionality of \type {$$}, % $$
when used in non-math mode. \NC \NR
\NC \type {\Ustopdisplaymath} \NC Duplicates the functionality of \type {$$}, % $$
when used in display math mode. \NC \NR
\stoptabulate
The \type {\Ustopmath} and \type {\Ustopdisplaymath} primitives check if the current
math mode is the correct one (inline vs.\ displayed), but you can freely intermix
the four mathon|/|mathoff commands with explicit dollar sign(s).
\subsection{Allowed math commands in non-math modes}
The commands \type {\mathchar} and \type {\Umathchar}, and control sequences that
are the result of \type {\mathchardef} or \type {\Umathchardef}, are also
acceptable in the horizontal and vertical modes. In those cases, the \type
{\textfont} from the requested math family is used.
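A minimal sketch (the name and the class, family and character values are
arbitrary examples):
\starttyping
\Umathchardef\MyForAll="0"0"2200
text \MyForAll\ text
\stoptyping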
\section{Math surrounding skips}
Inline math is surrounded by (optional) \type {\mathsurround} spacing but that is a fixed
dimension. There is now an additional parameter \type {\mathsurroundskip}. When set to a
non|-|zero value (or zero with some stretch or shrink) this parameter will replace
\type {\mathsurround}. By using an additional parameter instead of changing the nature
of \type {\mathsurround}, we can remain compatible.
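A minimal usage sketch:
\starttyping
\mathsurroundskip=2pt plus 1pt minus 1pt
\stoptyping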
% \section{Math todo}
%
% The following items are still todo.
%
% \startitemize
% \startitem
% Pre-scripts.
% \stopitem
% \startitem
% Multi-story stacks.
% \stopitem
% \startitem
% Flattened accents for high characters (maybe).
% \stopitem
% \startitem
% Better control over the spacing around displays and handling of equation numbers.
% \stopitem
% \startitem
% Support for multi|-|line displays using \MATHML\ style alignment points.
% \stopitem
% \stopitemize
\subsection {Delimiters: \type{\Uleft}, \type {\Umiddle} and \type {\Uright}}
Normally you will force delimiters to certain sizes by putting an empty box or
rule next to them. The resulting delimiter will either be a character from the
stepwise size range or an extensible. The latter can be positioned quite
differently than the characters, as it depends on the fit as well as on whether
the used characters in the font have depth or height. Commands like (plain
\TEX's) \type {\big} use this feature. In \LUATEX\ we provide a bit more control
by three variants that support the optional parameters \type {height}, \type
{depth} and \type {axis}. The following example uses this:
\startbuffer
\Uleft height 30pt depth 10pt \Udelimiter "0 "0 "000028
\quad x\quad
\Umiddle height 40pt depth 15pt \Udelimiter "0 "0 "002016
\quad x\quad
\Uright height 30pt depth 10pt \Udelimiter "0 "0 "000029
\quad \quad \quad
\Uleft height 30pt depth 10pt axis \Udelimiter "0 "0 "000028
\quad x\quad
\Umiddle height 40pt depth 15pt axis \Udelimiter "0 "0 "002016
\quad x\quad
\Uright height 30pt depth 10pt axis \Udelimiter "0 "0 "000029
\stopbuffer
\typebuffer
\startlinecorrection
\ruledhbox{\mathematics{\getbuffer}}
\stoplinecorrection
The keyword \type {exact} can be used as a directive that the real dimensions
should be applied when the criteria can't be met, which can happen when we're
still stepping through the successively larger variants. When no dimensions are
given, the \type {noaxis} command can be used to prevent shifting over the axis.
You can influence the final class with the keyword \type {class}, which will
influence the spacing.
\subsection{Fixed scripts}
We have three parameters that are used for this fixed anchoring:
\starttabulate[|l|l|]
\NC $d$ \NC \type {\Umathsubshiftdown} \NC \NR
\NC $u$ \NC \type {\Umathsupshiftup} \NC \NR
\NC $s$ \NC \type {\Umathsubsupshiftdown} \NC \NR
\stoptabulate
When we set \type {\mathscriptsmode} to a value other than zero these are used
for calculating fixed positions. This is something that is needed for instance
for chemistry. You can manipulate the mentioned variables to achieve different
effects.
\def\SampleMath#1%
{$\mathscriptsmode#1\mathupright CH_2 + CH^+_2 + CH^2_2$}
\starttabulate[|c|c|c|l|]
\NC \bf mode \NC \bf down \NC \bf up \NC \NC \NR
\NC 0 \NC dynamic \NC dynamic \NC \SampleMath{0} \NC \NR
\NC 1 \NC $d$ \NC $u$ \NC \SampleMath{1} \NC \NR
\NC 2 \NC $s$ \NC $u$ \NC \SampleMath{2} \NC \NR
\NC 3 \NC $s$ \NC $u + s - d$ \NC \SampleMath{3} \NC \NR
\NC 4 \NC $d + (s-d)/2$ \NC $u + (s-d)/2$ \NC \SampleMath{4} \NC \NR
\NC 5 \NC $d$ \NC $u + s - d$ \NC \SampleMath{5} \NC \NR
\stoptabulate
The value of this parameter obeys grouping but applies to the whole current
formula.
% if needed we can put the value in stylenodes but maybe more should go there
\subsection {Tracing}
Because there are quite a few math|-|related parameters and values, it is possible
to limit tracing. Only when \type {tracingassigns} and|/|or \type
{tracingrestores} are set to~2 or more will they be traced.
\subsection {Math options}
The logic in the math engine is rather complex and there are often no universal
solutions (read: what works out well for one font, fails for another). Therefore
some variations in the implementation will be driven by options for which a new
primitive \type {\mathoption} has been introduced (so that we don't end up with
many new commands). The approach of options also permits us to see what effect a
specific solution has.
\subsubsection {\type {\mathoption noitaliccompensation}}
This option compensates placement for characters with a built|-|in italic
correction.
\startbuffer
{\showboxes\int}\quad
{\showboxes\int_{|}^{|}}\quad
{\showboxes\int\limits_{|}^{|}}
\stopbuffer
\typebuffer
Gives (with computer modern that has such italics):
\startlinecorrection[blank]
\switchtobodyfont[modern]
\startcombination[nx=2,ny=2,distance=5em]
{\mathoption noitaliccompensation 0\relax \mathematics{\getbuffer}}
{\nohyphens\type{0:inline}}
{\mathoption noitaliccompensation 0\relax \mathematics{\displaymath\getbuffer}}
{\nohyphens\type{0:display}}
{\mathoption noitaliccompensation 1\relax \mathematics{\getbuffer}}
{\nohyphens\type{1:inline}}
{\mathoption noitaliccompensation 1\relax \mathematics{\displaymath\getbuffer}}
{\nohyphens\type{1:display}}
\stopcombination
\stoplinecorrection
\subsubsection {\type {\mathoption nocharitalic}}
When two characters follow each other, italic correction can interfere. The
following example shows what this option does:
\startbuffer
\catcode"1D443=11
\catcode"1D444=11
\catcode"1D445=11
P( PP PQR
\stopbuffer
\typebuffer
Gives (with computer modern that has such italics):
\startlinecorrection[blank]
\switchtobodyfont[modern]
\startcombination[nx=2,ny=2,distance=5em]
{\mathoption nocharitalic 0\relax \mathematics{\getbuffer}}
{\nohyphens\type{0:inline}}
{\mathoption nocharitalic 0\relax \mathematics{\displaymath\getbuffer}}
{\nohyphens\type{0:display}}
{\mathoption nocharitalic 1\relax \mathematics{\getbuffer}}
{\nohyphens\type{1:inline}}
{\mathoption nocharitalic 1\relax \mathematics{\displaymath\getbuffer}}
{\nohyphens\type{1:display}}
\stopcombination
\stoplinecorrection
\subsubsection {\type {\mathoption useoldfractionscaling}}
This option has been introduced as a solution for tracker item 604, which concerns
fuzzy cases around fraction|-|related settings that may or may not be present in
new fonts.
\stopchapter
\stopcomponent
|
State Before: α : Type u_1
β : Type u_2
γ : Type ?u.195438
ι : Sort ?u.195441
κ : ι → Sort ?u.195446
inst✝¹ : Preorder α
inst✝ : Preorder β
s : Set α
t : Set β
⊢ lowerClosure (s ×ˢ t) = lowerClosure s ×ˢ lowerClosure t State After: case a.h
α : Type u_1
β : Type u_2
γ : Type ?u.195438
ι : Sort ?u.195441
κ : ι → Sort ?u.195446
inst✝¹ : Preorder α
inst✝ : Preorder β
s : Set α
t : Set β
x✝ : α × β
⊢ x✝ ∈ ↑(lowerClosure (s ×ˢ t)) ↔ x✝ ∈ ↑(lowerClosure s ×ˢ lowerClosure t) Tactic: ext State Before: case a.h
α : Type u_1
β : Type u_2
γ : Type ?u.195438
ι : Sort ?u.195441
κ : ι → Sort ?u.195446
inst✝¹ : Preorder α
inst✝ : Preorder β
s : Set α
t : Set β
x✝ : α × β
⊢ x✝ ∈ ↑(lowerClosure (s ×ˢ t)) ↔ x✝ ∈ ↑(lowerClosure s ×ˢ lowerClosure t) State After: no goals Tactic: simp [Prod.le_def, @and_and_and_comm _ (_ ∈ t)]
|
There are no indications missing teenager Scott Redman is still alive, according to South Australian police investigating his disappearance following a high-speed chase earlier this year.
Police have now declared Mr Redman's disappearance a major crime and have launched a fresh search for the 19-year-old, who has not been seen since April.
Investigators are focusing their search efforts on a 12-square-kilometre area west of Kimba on Eyre Peninsula.
Police believe Mr Redman and an associate travelled to the area after abandoning an SUV which was involved in a brief police chase.
The black Kia Sorento was being pursued on the Eyre Highway near Kimba about 3:50pm on Saturday, April 21.
Police terminated the chase a short time later, and said the SUV then turned onto a dirt road.
The other alleged occupant was arrested two days later after hitch-hiking from Middleback Range, and the SUV was found abandoned at Secret Rocks about 40 kilometres east of Kimba on April 25.
But police have been unable to find any trace of Mr Redman, despite several searches.
"Investigators have not been able to find any indication that he is still alive," police said in a statement.
Mounted police will today be assisted by officers from the Major Crime Investigation Branch, STAR Group and the State Tactical Response Group, as well as local police.
"Police are determined to do everything possible to locate Scott and return him to his family, and are committing significant resources to this search in the hope of finding him," Detective Superintendent Des Bray said.
In May, police said they held "grave fears" for Mr Redman's wellbeing, after monitoring his social media accounts and speaking to friends and family.
But at that time it was thought he could have travelled interstate, with police stating that "Scott doesn't want to necessarily be found".
"It is possible that Mr Redman has also caught a lift with someone, but we have no evidence of that at this time," Inspector Mark Hubbard said in May.
Police have not provided detail on why Mr Redman was being pursued in the first place.
|
import matplotlib.pyplot as plt
import numpy as np
import cv2
import glob
def load_images_from_folder(folder, imagesType):
    """Load every image matching the first glob pattern in imagesType that yields files."""
    images = []
    for pattern in imagesType:  # e.g. '*.jpg', '*.png'
        filenames = glob.glob(folder + '/' + pattern)
        if len(filenames) > 0:
            for filename in filenames:
                # Read as 8-bit BGR (OpenCV's default channel order).
                images.append(cv2.imread(filename).astype(np.uint8))
            break  # stop after the first pattern that matched any files
    return images
def toLog(val):
    # Natural logarithm as a float64 array (used for log-exposure values).
    return np.array(np.log(val), dtype=np.float64)
def plot_ResponseCurves(ag):
    # ag holds the recovered log response curves per channel, indexed
    # in BGR order; exponentiate to plot exposure against pixel value.
    px = list(range(0, 256))
    plt.figure(constrained_layout=False, figsize=(5, 5))
    plt.title("Response curves for BGR", fontsize=20)
    plt.plot(px, np.exp(ag[2]), 'r')
    plt.plot(px, np.exp(ag[1]), 'g')
    plt.plot(px, np.exp(ag[0]), 'b')
    plt.ylabel("log Exposure X", fontsize=20)
    plt.xlabel("Pixel value Z", fontsize=20)
    plt.show()
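# A minimal usage sketch. `solve_response_curves` is hypothetical here;
# any Debevec-style solver returning three 256-entry log-exposure
# arrays (indexed B, G, R) would fit the plotting helper above.
if __name__ == "__main__":
    images = load_images_from_folder("exposures", ["*.jpg", "*.png"])
    # ag = solve_response_curves(images)  # hypothetical solver
    # plot_ResponseCurves(ag)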
|
-- MIT License
-- Copyright (c) 2021 Luca Ciccone and Luca Padovani
-- Permission is hereby granted, free of charge, to any person
-- obtaining a copy of this software and associated documentation
-- files (the "Software"), to deal in the Software without
-- restriction, including without limitation the rights to use,
-- copy, modify, merge, publish, distribute, sublicense, and/or sell
-- copies of the Software, and to permit persons to whom the
-- Software is furnished to do so, subject to the following
-- conditions:
-- The above copyright notice and this permission notice shall be
-- included in all copies or substantial portions of the Software.
-- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-- EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
-- OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-- NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
-- HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
-- WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
-- FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
-- OTHER DEALINGS IN THE SOFTWARE.
{-# OPTIONS --guardedness #-}
open import Data.Product
open import Data.List using (List; []; _∷_; _∷ʳ_; _++_)
open import Data.List.Properties using (∷-injective)
open import Relation.Nullary
open import Relation.Binary.PropositionalEquality using (_≡_; _≢_; refl; cong)
open import Common
module Trace {ℙ : Set} (message : Message ℙ)
where
open import Action message public
open import SessionType message
Trace : Set
Trace = List Action
--| CO-TRACES |--
co-trace : Trace -> Trace
co-trace = Data.List.map co-action
co-trace-involution : (φ : Trace) -> co-trace (co-trace φ) ≡ φ
co-trace-involution [] = refl
co-trace-involution (α ∷ φ) rewrite co-action-involution α | co-trace-involution φ = refl
co-trace-++ : (φ ψ : Trace) -> co-trace (φ ++ ψ) ≡ co-trace φ ++ co-trace ψ
co-trace-++ [] _ = refl
co-trace-++ (α ∷ φ) ψ = cong (co-action α ∷_) (co-trace-++ φ ψ)
co-trace-injective : ∀{φ ψ} -> co-trace φ ≡ co-trace ψ -> φ ≡ ψ
co-trace-injective {[]} {[]} eq = refl
co-trace-injective {x ∷ φ} {x₁ ∷ ψ} eq with ∷-injective eq
... | eq1 , eq2 rewrite co-action-injective eq1 | co-trace-injective eq2 = refl
--| PREFIX RELATION |--
data _⊑_ : Trace -> Trace -> Set where
none : ∀{φ} -> [] ⊑ φ
some : ∀{φ ψ α} -> φ ⊑ ψ -> (α ∷ φ) ⊑ (α ∷ ψ)
⊑-refl : (φ : Trace) -> φ ⊑ φ
⊑-refl [] = none
⊑-refl (_ ∷ φ) = some (⊑-refl φ)
⊑-tran : ∀{φ ψ χ} -> φ ⊑ ψ -> ψ ⊑ χ -> φ ⊑ χ
⊑-tran none _ = none
⊑-tran (some le1) (some le2) = some (⊑-tran le1 le2)
⊑-++ : ∀{φ χ} -> φ ⊑ (φ ++ χ)
⊑-++ {[]} = none
⊑-++ {_ ∷ φ} = some (⊑-++ {φ})
⊑-precong-++ : ∀{φ ψ χ} -> ψ ⊑ χ -> (φ ++ ψ) ⊑ (φ ++ χ)
⊑-precong-++ {[]} le = le
⊑-precong-++ {_ ∷ _} le = some (⊑-precong-++ le)
⊑-co-trace : ∀{φ ψ} -> φ ⊑ ψ -> co-trace φ ⊑ co-trace ψ
⊑-co-trace none = none
⊑-co-trace (some le) = some (⊑-co-trace le)
⊑-trace : ∀{φ ψ} -> ψ ⊑ φ -> ∃[ φ' ] (φ ≡ ψ ++ φ')
⊑-trace {φ} none = φ , refl
⊑-trace {α ∷ φ} (some le) with ⊑-trace le
... | φ' , eq rewrite eq = φ' , refl
absurd-++-≡ : ∀{φ ψ : Trace}{α} -> (φ ++ α ∷ ψ) ≢ []
absurd-++-≡ {[]} ()
absurd-++-≡ {_ ∷ _} ()
absurd-++-⊑ : ∀{φ α ψ} -> ¬ (φ ++ α ∷ ψ) ⊑ []
absurd-++-⊑ {[]} ()
absurd-++-⊑ {_ ∷ _} ()
--| STRICT PREFIX RELATION |--
data _⊏_ : Trace -> Trace -> Set where
none : ∀{α φ} -> [] ⊏ (α ∷ φ)
some : ∀{φ ψ α} -> φ ⊏ ψ -> (α ∷ φ) ⊏ (α ∷ ψ)
⊏-irreflexive : ∀{φ} -> ¬ φ ⊏ φ
⊏-irreflexive (some lt) = ⊏-irreflexive lt
⊏-++ : ∀{φ ψ α} -> φ ⊏ (φ ++ (ψ ∷ʳ α))
⊏-++ {[]} {[]} = none
⊏-++ {[]} {_ ∷ _} = none
⊏-++ {_ ∷ φ} = some (⊏-++ {φ})
⊏->≢ : ∀{φ ψ} -> φ ⊏ ψ -> φ ≢ ψ
⊏->≢ (some lt) refl = ⊏-irreflexive lt
⊏->⊑ : ∀{φ ψ} -> φ ⊏ ψ -> φ ⊑ ψ
⊏->⊑ none = none
⊏->⊑ (some lt) = some (⊏->⊑ lt)
|
State Before: C : Type u_1
inst✝² : Category C
D : Type ?u.741
inst✝¹ : Category D
E : Type ?u.748
inst✝ : Category E
J : GrothendieckTopology C
K : GrothendieckTopology D
L : GrothendieckTopology E
U✝ : C
S✝ : Sieve ((𝟭 C).obj U✝)
h : S✝ ∈ GrothendieckTopology.sieves J ((𝟭 C).obj U✝)
⊢ Sieve.functorPullback (𝟭 C) S✝ ∈ GrothendieckTopology.sieves J U✝ State After: no goals Tactic: simpa using h
|
\chapter{Type declarations of standard Common Lisp functions}
This module contains portable type declarations for all standard
Common Lisp functions. It could be used by implementers of Common
Lisp compilers to accomplish error checking and type inferencing.
|
Formal statement is: lemma lim_mono: fixes X Y :: "nat \<Rightarrow> 'a::linorder_topology" assumes "\<And>n. N \<le> n \<Longrightarrow> X n \<le> Y n" and "X \<longlonglongrightarrow> x" and "Y \<longlonglongrightarrow> y" shows "x \<le> y" Informal statement is: If $X_n \leq Y_n$ for all $n \geq N$ and $\lim_{n \to \infty} X_n = x$ and $\lim_{n \to \infty} Y_n = y$, then $x \leq y$.
|
c=======================================================================
c
c GENCHI
c
c Chi-square distribution generator
c
c     Generates a random deviate from the chi-square distribution
c     with DF degrees of freedom.
c
c-----------------------------------------------------------------------
c
c Copyright (c) 2014 NumX
c All rights reserved.
c
c This software is the confidential and proprietary information
c of NumX. You shall not disclose such Confidential
c     Information and shall use it only in accordance with the terms
c of the licence agreement you entered into with NumX.
c
c author: Yann Vernaz
c
c-----------------------------------------------------------------------
DOUBLE PRECISION FUNCTION genchi(df)
c-----------------------------------------------------------------------
c
c INPUT :
c     DF : degrees of freedom of the chi-square distribution (DF > 0),
c          double precision
c
c     Method - uses the relation between the chi-square and gamma
c              distributions: if G ~ Gamma(DF/2) with unit scale,
c              then 2*G ~ chi-square(DF).
c
c----------------------------------------------------------------------
c
c scalar arguments
DOUBLE PRECISION df
c
c external functions
c DOUBLE PRECISION gengam
c EXTERNAL gengam
DOUBLE PRECISION sgamma
EXTERNAL sgamma
c
c executable statements
c changed this to call sgamma directly
c 10 genchi = 2.0*gengam(1.0,df/2.0)
   10 genchi = 2.0d0*sgamma(df/2.0d0)
RETURN
END
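c
c     Illustrative usage sketch (an addition, not part of the original
c     file; assumes SGAMMA is linked in): draw one chi-square deviate
c     with 4 degrees of freedom.
c
      PROGRAM tgenchi
      DOUBLE PRECISION genchi, x
      EXTERNAL genchi
      x = genchi(4.0d0)
      WRITE (*,*) 'chi-square(4) deviate: ', x
      END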
|
/-
Copyright (c) 2020 Oliver Nash. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Oliver Nash, Antoine Labelle
! This file was ported from Lean 3 source module linear_algebra.contraction
! leanprover-community/mathlib commit 657df4339ae6ceada048c8a2980fb10e393143ec
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathbin.LinearAlgebra.Dual
import Mathbin.LinearAlgebra.Matrix.ToLin
/-!
# Contractions
Given modules $M, N$ over a commutative ring $R$, this file defines the natural linear maps:
$M^* \otimes M \to R$, $M \otimes M^* \to R$, and $M^* \otimes N \to Hom(M, N)$, as well as proving
some basic properties of these maps.
## Tags
contraction, dual module, tensor product
-/
variable {ι : Type _} (R M N P Q : Type _)
attribute [local ext] TensorProduct.ext
section Contraction
open TensorProduct LinearMap Matrix Module
open TensorProduct BigOperators
section CommSemiring
variable [CommSemiring R]
variable [AddCommMonoid M] [AddCommMonoid N] [AddCommMonoid P] [AddCommMonoid Q]
variable [Module R M] [Module R N] [Module R P] [Module R Q]
variable [DecidableEq ι] [Fintype ι] (b : Basis ι R M)
/-- The natural left-handed pairing between a module and its dual. -/
def contractLeft : Module.Dual R M ⊗ M →ₗ[R] R :=
(uncurry _ _ _ _).toFun LinearMap.id
#align contract_left contractLeft
/-- The natural right-handed pairing between a module and its dual. -/
def contractRight : M ⊗ Module.Dual R M →ₗ[R] R :=
(uncurry _ _ _ _).toFun (LinearMap.flip LinearMap.id)
#align contract_right contractRight
/-- The natural map associating a linear map to the tensor product of two modules. -/
def dualTensorHom : Module.Dual R M ⊗ N →ₗ[R] M →ₗ[R] N :=
let M' := Module.Dual R M
(uncurry R M' N (M →ₗ[R] N) : _ → M' ⊗ N →ₗ[R] M →ₗ[R] N) LinearMap.smulRightₗ
#align dual_tensor_hom dualTensorHom
variable {R M N P Q}
@[simp]
theorem contractLeft_apply (f : Module.Dual R M) (m : M) : contractLeft R M (f ⊗ₜ m) = f m :=
rfl
#align contract_left_apply contractLeft_apply
@[simp]
theorem contractRight_apply (f : Module.Dual R M) (m : M) : contractRight R M (m ⊗ₜ f) = f m :=
rfl
#align contract_right_apply contractRight_apply
@[simp]
theorem dualTensorHom_apply (f : Module.Dual R M) (m : M) (n : N) :
dualTensorHom R M N (f ⊗ₜ n) m = f m • n :=
rfl
#align dual_tensor_hom_apply dualTensorHom_apply
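-- Illustrative sanity check (an added example, mirroring the simp lemma
-- above): on a pure tensor, `dualTensorHom R M N (f ⊗ₜ n)` acts as the
-- map `fun m => f m • n`, definitionally.
example (f : Module.Dual R M) (m : M) (n : N) :
    dualTensorHom R M N (f ⊗ₜ n) m = f m • n :=
  rfl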
@[simp]
theorem transpose_dualTensorHom (f : Module.Dual R M) (m : M) :
Dual.transpose (dualTensorHom R M M (f ⊗ₜ m)) = dualTensorHom R _ _ (Dual.eval R M m ⊗ₜ f) :=
by
ext (f' m')
simp only [dual.transpose_apply, coe_comp, Function.comp_apply, dualTensorHom_apply,
LinearMap.map_smulₛₗ, RingHom.id_apply, Algebra.id.smul_eq_mul, dual.eval_apply, smul_apply]
exact mul_comm _ _
#align transpose_dual_tensor_hom transpose_dualTensorHom
@[simp]
theorem dualTensorHom_prodMap_zero (f : Module.Dual R M) (p : P) :
((dualTensorHom R M P) (f ⊗ₜ[R] p)).Prod_map (0 : N →ₗ[R] Q) =
dualTensorHom R (M × N) (P × Q) ((f ∘ₗ fst R M N) ⊗ₜ inl R P Q p) :=
by
ext <;>
simp only [coe_comp, coe_inl, Function.comp_apply, prod_map_apply, dualTensorHom_apply,
fst_apply, Prod.smul_mk, zero_apply, smul_zero]
#align dual_tensor_hom_prod_map_zero dualTensorHom_prodMap_zero
@[simp]
theorem zero_prodMap_dualTensorHom (g : Module.Dual R N) (q : Q) :
(0 : M →ₗ[R] P).Prod_map ((dualTensorHom R N Q) (g ⊗ₜ[R] q)) =
dualTensorHom R (M × N) (P × Q) ((g ∘ₗ snd R M N) ⊗ₜ inr R P Q q) :=
by
ext <;>
simp only [coe_comp, coe_inr, Function.comp_apply, prod_map_apply, dualTensorHom_apply,
snd_apply, Prod.smul_mk, zero_apply, smul_zero]
#align zero_prod_map_dual_tensor_hom zero_prodMap_dualTensorHom
theorem map_dualTensorHom (f : Module.Dual R M) (p : P) (g : Module.Dual R N) (q : Q) :
TensorProduct.map (dualTensorHom R M P (f ⊗ₜ[R] p)) (dualTensorHom R N Q (g ⊗ₜ[R] q)) =
dualTensorHom R (M ⊗[R] N) (P ⊗[R] Q) (dualDistrib R M N (f ⊗ₜ g) ⊗ₜ[R] p ⊗ₜ[R] q) :=
by ext (m n);
simp only [compr₂_apply, mk_apply, map_tmul, dualTensorHom_apply, dual_distrib_apply, ←
smul_tmul_smul]
#align map_dual_tensor_hom map_dualTensorHom
@[simp]
theorem comp_dualTensorHom (f : Module.Dual R M) (n : N) (g : Module.Dual R N) (p : P) :
dualTensorHom R N P (g ⊗ₜ[R] p) ∘ₗ dualTensorHom R M N (f ⊗ₜ[R] n) =
g n • dualTensorHom R M P (f ⊗ₜ p) :=
by
ext m;
simp only [coe_comp, Function.comp_apply, dualTensorHom_apply, LinearMap.map_smul,
RingHom.id_apply, smul_apply]
rw [smul_comm]
#align comp_dual_tensor_hom comp_dualTensorHom
/-- As a matrix, `dual_tensor_hom` evaluated on a basis element of `M* ⊗ N` is a matrix with a
single one and zeros elsewhere -/
theorem toMatrix_dualTensorHom {m : Type _} {n : Type _} [Fintype m] [Fintype n] [DecidableEq m]
[DecidableEq n] (bM : Basis m R M) (bN : Basis n R N) (j : m) (i : n) :
toMatrix bM bN (dualTensorHom R M N (bM.Coord j ⊗ₜ bN i)) = stdBasisMatrix i j 1 :=
by
ext (i' j')
by_cases hij : i = i' ∧ j = j' <;>
simp [LinearMap.toMatrix_apply, Finsupp.single_eq_pi_single, hij]
rw [and_iff_not_or_not, Classical.not_not] at hij; cases hij <;> simp [hij]
#align to_matrix_dual_tensor_hom toMatrix_dualTensorHom
end CommSemiring
section CommRing
variable [CommRing R]
variable [AddCommGroup M] [AddCommGroup N] [AddCommGroup P] [AddCommGroup Q]
variable [Module R M] [Module R N] [Module R P] [Module R Q]
variable [DecidableEq ι] [Fintype ι] (b : Basis ι R M)
variable {R M N P Q}
/-- If `M` is free, the natural linear map $M^* ⊗ N → Hom(M, N)$ is an equivalence. This function
provides this equivalence in return for a basis of `M`. -/
@[simps apply]
noncomputable def dualTensorHomEquivOfBasis : Module.Dual R M ⊗[R] N ≃ₗ[R] M →ₗ[R] N :=
LinearEquiv.ofLinear (dualTensorHom R M N)
(∑ i, TensorProduct.mk R _ N (b.dualBasis i) ∘ₗ LinearMap.applyₗ (b i))
(by
ext (f m)
simp only [applyₗ_apply_apply, coe_fn_sum, dualTensorHom_apply, mk_apply, id_coe, id.def,
Fintype.sum_apply, Function.comp_apply, Basis.coe_dualBasis, coe_comp, Basis.coord_apply, ←
f.map_smul, (dualTensorHom R M N).map_sum, ← f.map_sum, b.sum_repr])
(by
ext (f m)
simp only [applyₗ_apply_apply, coe_fn_sum, dualTensorHom_apply, mk_apply, id_coe, id.def,
Fintype.sum_apply, Function.comp_apply, Basis.coe_dualBasis, coe_comp, compr₂_apply,
tmul_smul, smul_tmul', ← sum_tmul, Basis.sum_dual_apply_smul_coord])
#align dual_tensor_hom_equiv_of_basis dualTensorHomEquivOfBasis
@[simp]
theorem dualTensorHomEquivOfBasis_toLinearMap :
(dualTensorHomEquivOfBasis b : Module.Dual R M ⊗[R] N ≃ₗ[R] M →ₗ[R] N).toLinearMap =
dualTensorHom R M N :=
rfl
#align dual_tensor_hom_equiv_of_basis_to_linear_map dualTensorHomEquivOfBasis_toLinearMap
@[simp]
theorem dualTensorHomEquivOfBasis_symm_cancel_left (x : Module.Dual R M ⊗[R] N) :
(dualTensorHomEquivOfBasis b).symm (dualTensorHom R M N x) = x := by
rw [← dualTensorHomEquivOfBasis_apply b, LinearEquiv.symm_apply_apply]
#align dual_tensor_hom_equiv_of_basis_symm_cancel_left dualTensorHomEquivOfBasis_symm_cancel_left
@[simp]
theorem dualTensorHomEquivOfBasis_symm_cancel_right (x : M →ₗ[R] N) :
dualTensorHom R M N ((dualTensorHomEquivOfBasis b).symm x) = x := by
rw [← dualTensorHomEquivOfBasis_apply b, LinearEquiv.apply_symm_apply]
#align dual_tensor_hom_equiv_of_basis_symm_cancel_right dualTensorHomEquivOfBasis_symm_cancel_right
variable (R M N P Q)
variable [Module.Free R M] [Module.Finite R M] [Nontrivial R]
open Classical
/-- If `M` is finite free, the natural map $M^* ⊗ N → Hom(M, N)$ is an
equivalence. -/
@[simp]
noncomputable def dualTensorHomEquiv : Module.Dual R M ⊗[R] N ≃ₗ[R] M →ₗ[R] N :=
dualTensorHomEquivOfBasis (Module.Free.chooseBasis R M)
#align dual_tensor_hom_equiv dualTensorHomEquiv
end CommRing
end Contraction
section HomTensorHom
open TensorProduct
open Module TensorProduct LinearMap
section CommRing
variable [CommRing R]
variable [AddCommGroup M] [AddCommGroup N] [AddCommGroup P] [AddCommGroup Q]
variable [Module R M] [Module R N] [Module R P] [Module R Q]
variable [Free R M] [Finite R M] [Free R N] [Finite R N] [Nontrivial R]
/-- When `M` is a finite free module, the map `ltensor_hom_to_hom_ltensor` is an equivalence. Note
that `ltensor_hom_equiv_hom_ltensor` is not defined directly in terms of
`ltensor_hom_to_hom_ltensor`, but the equivalence between the two is given by
`ltensor_hom_equiv_hom_ltensor_to_linear_map` and `ltensor_hom_equiv_hom_ltensor_apply`. -/
noncomputable def ltensorHomEquivHomLtensor : P ⊗[R] (M →ₗ[R] Q) ≃ₗ[R] M →ₗ[R] P ⊗[R] Q :=
congr (LinearEquiv.refl R P) (dualTensorHomEquiv R M Q).symm ≪≫ₗ
TensorProduct.leftComm R P _ Q ≪≫ₗ
dualTensorHomEquiv R M _
#align ltensor_hom_equiv_hom_ltensor ltensorHomEquivHomLtensor
/-- When `M` is a finite free module, the map `rtensor_hom_to_hom_rtensor` is an equivalence. Note
that `rtensor_hom_equiv_hom_rtensor` is not defined directly in terms of
`rtensor_hom_to_hom_rtensor`, but the equivalence between the two is given by
`rtensor_hom_equiv_hom_rtensor_to_linear_map` and `rtensor_hom_equiv_hom_rtensor_apply`. -/
noncomputable def rtensorHomEquivHomRtensor : (M →ₗ[R] P) ⊗[R] Q ≃ₗ[R] M →ₗ[R] P ⊗[R] Q :=
congr (dualTensorHomEquiv R M P).symm (LinearEquiv.refl R Q) ≪≫ₗ TensorProduct.assoc R _ P Q ≪≫ₗ
dualTensorHomEquiv R M _
#align rtensor_hom_equiv_hom_rtensor rtensorHomEquivHomRtensor
@[simp]
theorem ltensorHomEquivHomLtensor_toLinearMap :
(ltensorHomEquivHomLtensor R M P Q).toLinearMap = ltensorHomToHomLtensor R M P Q :=
by
let e := congr (LinearEquiv.refl R P) (dualTensorHomEquiv R M Q)
have h : Function.Surjective e.to_linear_map := e.surjective
refine' (cancel_right h).1 _
ext (p f q m)
dsimp [ltensorHomEquivHomLtensor]
simp only [ltensorHomEquivHomLtensor, dualTensorHomEquiv, compr₂_apply, mk_apply, coe_comp,
LinearEquiv.coe_toLinearMap, Function.comp_apply, map_tmul, LinearEquiv.coe_coe,
dualTensorHomEquivOfBasis_apply, LinearEquiv.trans_apply, congr_tmul, LinearEquiv.refl_apply,
dualTensorHomEquivOfBasis_symm_cancel_left, left_comm_tmul, dualTensorHom_apply,
ltensor_hom_to_hom_ltensor_apply, tmul_smul]
#align ltensor_hom_equiv_hom_ltensor_to_linear_map ltensorHomEquivHomLtensor_toLinearMap
@[simp]
theorem rtensorHomEquivHomRtensor_toLinearMap :
(rtensorHomEquivHomRtensor R M P Q).toLinearMap = rtensorHomToHomRtensor R M P Q :=
by
let e := congr (dualTensorHomEquiv R M P) (LinearEquiv.refl R Q)
have h : Function.Surjective e.to_linear_map := e.surjective
refine' (cancel_right h).1 _
ext (f p q m)
simp only [rtensorHomEquivHomRtensor, dualTensorHomEquiv, compr₂_apply, mk_apply, coe_comp,
LinearEquiv.coe_toLinearMap, Function.comp_apply, map_tmul, LinearEquiv.coe_coe,
dualTensorHomEquivOfBasis_apply, LinearEquiv.trans_apply, congr_tmul,
dualTensorHomEquivOfBasis_symm_cancel_left, LinearEquiv.refl_apply, assoc_tmul,
dualTensorHom_apply, rtensor_hom_to_hom_rtensor_apply, smul_tmul']
#align rtensor_hom_equiv_hom_rtensor_to_linear_map rtensorHomEquivHomRtensor_toLinearMap
variable {R M N P Q}
@[simp]
theorem ltensorHomEquivHomLtensor_apply (x : P ⊗[R] (M →ₗ[R] Q)) :
ltensorHomEquivHomLtensor R M P Q x = ltensorHomToHomLtensor R M P Q x := by
rw [← LinearEquiv.coe_toLinearMap, ltensorHomEquivHomLtensor_toLinearMap]
#align ltensor_hom_equiv_hom_ltensor_apply ltensorHomEquivHomLtensor_apply
@[simp]
theorem rtensorHomEquivHomRtensor_apply (x : (M →ₗ[R] P) ⊗[R] Q) :
rtensorHomEquivHomRtensor R M P Q x = rtensorHomToHomRtensor R M P Q x := by
rw [← LinearEquiv.coe_toLinearMap, rtensorHomEquivHomRtensor_toLinearMap]
#align rtensor_hom_equiv_hom_rtensor_apply rtensorHomEquivHomRtensor_apply
variable (R M N P Q)
/-- When `M` and `N` are free `R` modules, the map `hom_tensor_hom_map` is an equivalence. Note that
`hom_tensor_hom_equiv` is not defined directly in terms of `hom_tensor_hom_map`, but the equivalence
between the two is given by `hom_tensor_hom_equiv_to_linear_map` and `hom_tensor_hom_equiv_apply`.
-/
noncomputable def homTensorHomEquiv : (M →ₗ[R] P) ⊗[R] (N →ₗ[R] Q) ≃ₗ[R] M ⊗[R] N →ₗ[R] P ⊗[R] Q :=
rtensorHomEquivHomRtensor R M P _ ≪≫ₗ
(LinearEquiv.refl R M).arrowCongr (ltensorHomEquivHomLtensor R N _ Q) ≪≫ₗ
lift.equiv R M N _
#align hom_tensor_hom_equiv homTensorHomEquiv
@[simp]
theorem homTensorHomEquiv_toLinearMap :
(homTensorHomEquiv R M N P Q).toLinearMap = homTensorHomMap R M N P Q :=
by
ext (f g m n)
simp only [homTensorHomEquiv, compr₂_apply, mk_apply, LinearEquiv.coe_toLinearMap,
LinearEquiv.trans_apply, lift.equiv_apply, LinearEquiv.arrowCongr_apply, LinearEquiv.refl_symm,
LinearEquiv.refl_apply, rtensorHomEquivHomRtensor_apply, ltensorHomEquivHomLtensor_apply,
ltensor_hom_to_hom_ltensor_apply, rtensor_hom_to_hom_rtensor_apply, hom_tensor_hom_map_apply,
map_tmul]
#align hom_tensor_hom_equiv_to_linear_map homTensorHomEquiv_toLinearMap
variable {R M N P Q}
@[simp]
theorem homTensorHomEquiv_apply (x : (M →ₗ[R] P) ⊗[R] (N →ₗ[R] Q)) :
homTensorHomEquiv R M N P Q x = homTensorHomMap R M N P Q x := by
rw [← LinearEquiv.coe_toLinearMap, homTensorHomEquiv_toLinearMap]
#align hom_tensor_hom_equiv_apply homTensorHomEquiv_apply
end CommRing
end HomTensorHom
|
What controls us? Is it Fear? Anger? Hope? Every day these emotions influence our decisions, some more than others. Wren Burton, one of the top architects in Chicago, has never let anything stand in the way of providing a better life for her family, but her choices haven't come without sacrifice. And when the power shuts down across the city, plunging the masses into chaos and splitting her family in two, tough decisions are only the beginning.
|
import logic.basic -- Needed for imp_false
section question11
variables P Q R : Prop
example (h : P ∧ (Q ∧ R)) : (P ∧ Q) ∧ R :=
begin
sorry
end
end question11
section question6
variables P Q R : Prop
example : (P ∧ R) ∨ (Q ∧ R) → (P ∨ Q) ∧ R :=
begin
sorry
end
end question6
section question9
variables A B C : Prop
example (h : C → A ∨ B) : (C ∧ ¬A) → B :=
begin
sorry
end
-- HINT: for this question, you may find it useful to to use `exfalso, contradiction` somewhere in the proof
end question9
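/- Possible solutions (an illustrative sketch using only core tactics;
   the `sorry`s above are the intended exercises): -/
section solutions
variables P Q R A B C : Prop
example (h : P ∧ (Q ∧ R)) : (P ∧ Q) ∧ R :=
⟨⟨h.1, h.2.1⟩, h.2.2⟩
example : (P ∧ R) ∨ (Q ∧ R) → (P ∨ Q) ∧ R :=
begin
  intro h,
  cases h with hpr hqr,
  { exact ⟨or.inl hpr.1, hpr.2⟩ },
  { exact ⟨or.inr hqr.1, hqr.2⟩ },
end
example (h : C → A ∨ B) : (C ∧ ¬A) → B :=
begin
  intro hc,
  cases h hc.1 with ha hb,
  { exfalso, exact hc.2 ha },
  { exact hb },
end
end solutions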
|
Formal statement is: corollary\<^marker>\<open>tag unimportant\<close> Cauchy_theorem_disc: "\<lbrakk>finite K; continuous_on (cball a e) f; \<And>x. x \<in> ball a e - K \<Longrightarrow> f field_differentiable at x; valid_path g; path_image g \<subseteq> cball a e; pathfinish g = pathstart g\<rbrakk> \<Longrightarrow> (f has_contour_integral 0) g" Informal statement is: If $f$ is a continuous function on the closed disc of radius $e$ centered at $a$, and $f$ is differentiable on the open disc of radius $e$ centered at $a$ except at finitely many points, then the integral of $f$ around the boundary of the closed disc is zero.
|
open import Data.Bool as Bool using (Bool; false; true; if_then_else_; not)
open import Data.String using (String)
open import Data.Nat using (ℕ; _+_; _≟_; _≡ᵇ_; suc; _>_; _<_; _∸_) -- _≡ᵇ_ needed by DeleteNat below
open import Relation.Nullary.Decidable using (⌊_⌋)
open import Data.List as l using (List; filter; map; take; foldl; length; []; _∷_)
open import Data.List.Properties
-- open import Data.List.Extrema using (max)
open import Data.Maybe using (to-witness)
open import Data.Fin using (fromℕ; _-_; zero; Fin)
open import Data.Fin.Properties using (≤-totalOrder)
open import Data.Product as Prod using (∃; ∃₂; _×_; _,_; Σ)
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; cong)
open Eq.≡-Reasoning
open import Level using (Level)
open import Data.Vec as v using (Vec; fromList; toList; last; length; []; _∷_; [_]; _∷ʳ_; _++_; lookup; head; initLast; filter; map)
open import Data.Vec.Bounded as vb using ([]; _∷_; fromVec; filter; Vec≤)
open import Relation.Binary.PropositionalEquality as P
using (_≡_; _≢_; refl; _≗_; cong₂)
open import Data.Nat.Properties using (+-comm)
open import Relation.Unary using (Pred; Decidable)
open import Relation.Nullary using (does)
open import Data.Vec.Bounded.Base using (padRight; ≤-cast)
import Data.Nat.Properties as ℕₚ
open import Relation.Nullary.Decidable.Core using (dec-false)
open import Function using (_∘_)
open import Data.List.Extrema ℕₚ.≤-totalOrder
-- TODO add to std-lib
vecLast : ∀ {a} {A : Set a} {l} {n : ℕ} (xs : Vec A n) → last (xs ∷ʳ l) ≡ l
vecLast [] = refl
vecLast (_ ∷ xs) = P.trans (prop (xs ∷ʳ _)) (vecLast xs)
where
prop : ∀ {a} {A : Set a} {n x} (xs : Vec A (suc n)) → last (x v.∷ xs) ≡ last xs
prop xs with initLast xs
... | _ , _ , refl = refl
-- operations
-- AddTodo
-- DeleteTodo
-- CompleteTodo
-- ClearCompleted
-- CompleteAllTodos
-- horizontal properties
-- AddTodo
-- non-commutative
-- DeleteTodo
-- idempotent
-- CompleteTodo
-- idempotent
-- ClearCompleted
-- idempotent
-- CompleteAllTodos
-- idempotent
-- EditTodo
-- EditTodo - EditTodo Todo list length doesn't change
-- vertical properties
-- AddTodo
-- AddTodoSetsNewCompletedToFalse
-- AddTodoSetsNewIdToNonExistingId
-- AddTodoSetsNewTextToText
-- doesn't change id of other Todos
-- doesn't change text of other Todos
-- doesn't change completed of other Todos
-- DeleteTodo
-- DeleteTodoRemoveTodoWithId
-- DeleteTodoRemoves1Element
-- only way to add todo is with AddTodo and AddTodo gives non existing id to new todo
-- doesn't change id of other Todos
-- doesn't change text of other Todos
-- doesn't change completed of other Todos
-- CompleteTodo
-- CompleteTodoSetsTodoWithIdCompletedToTrue
-- doesn't touch any other Todo
-- doesn't change id of any Todo
-- doesn't change text of any Todo
-- ClearCompleted
-- doesn't remove Todos where completed = false
-- doesn't change id of any Todo
-- doesn't change completed of any Todo
-- doesn't change text of any Todo
-- CompleteAllTodos
-- all Todos have completed = true
-- doesn't change id of any Todo
-- doesn't change text of any Todo
-- EditTodo
-- modifies Todo with given id's text
-- doesn't change the id
-- doesn't change completed
-- doesn't modify other Todos
record Todo : Set where
field
text : String
completed : Bool
id : ℕ
AddTodo : ∀ {n : ℕ} → (Vec Todo n) → String → (Vec Todo (1 + n))
AddTodo todos text =
todos ∷ʳ
record
{ id = 1 -- argmax (λ todo → λ e → e) todos) + 1
; completed = false
; text = text
}
ListAddTodo : List Todo → String → List Todo
ListAddTodo todos text =
todos l.∷ʳ
record
{ id = (max 0 (l.map (λ e → Todo.id e) todos)) + 1
; completed = false
; text = text
}
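-- Example (an addition; holds by computation): adding to the empty
-- list yields a single fresh Todo with id 1.
_ : ListAddTodo l.[] "buy milk" ≡
    record { text = "buy milk" ; completed = false ; id = 1 } l.∷ l.[]
_ = refl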
ListAddTodoAddsNewListItem :
(todos : List Todo) (text : String) →
l.length (ListAddTodo todos text) ≡ l.length todos + 1
ListAddTodoAddsNewListItem todos text = length-++ todos
listVec : Vec ℕ 1
listVec = v.fromList (2 l.∷ l.[])
listLast : ℕ
listLast = v.last (v.fromList (2 l.∷ l.[]))
listLastIs2 : v.last (v.fromList (2 l.∷ l.[])) ≡ 2
listLastIs2 = refl
record Id : Set where
field
id : ℕ
natListToVec : (xs : List ℕ) → Vec ℕ (l.length xs)
natListToVec nats = v.fromList nats
-- natListLast : List ℕ → ℕ
-- natListLast nats = v.last {(l.length nats) ∸ 1} (v.fromList nats)
-- natListLast : List ℕ → ℕ
-- natListLast [] = 0
-- natListLast nats@(x ∷ xs) = v.last (v.fromList nats)
natListFromList : v.fromList (1 l.∷ l.[]) ≡ (1 v.∷ v.[])
natListFromList = refl
ListOf1sConcatFromList : v.fromList (1 l.∷ l.[] l.++ 1 l.∷ l.[]) ≡ (1 v.∷ v.[] v.++ 1 v.∷ v.[])
ListOf1sConcatFromList = refl
open import Data.Nat.Properties
{-# BUILTIN SIZE Size #-}
private
variable
a : Level
A : Set a
i : Size
vec-length-++ :
∀ {n : ℕ} (xs : Vec A n) {ys} →
v.length (xs v.++ ys) ≡ v.length xs + v.length ys
vec-length-++ xs {ys} = refl
-- vec-fromList-++ :
-- (as bs : List A) →
-- v.fromList (as l.++ bs) ≡ v.fromList as v.++ v.fromList bs
-- vec-fromList-++ [] bs = v.[]
-- vec-fromList-++ (a ∷ as) bs = ?
-- natListConcatFromList : (nats1 : List ℕ) → (nats2 : List ℕ) → v.fromList (nats1 l.++ nats2) ≡ (nats1 v.++ nats2)
-- natListConcatFromList nats = ?
-- natListConcatFromList : (nats : List ℕ) → v.fromList (nats l.++ (1 l.∷ l.[])) ≡ ((v.fromList nats) v.++ (1 v.∷ v.[]))
-- natListConcatFromList = {! !}
natListLast : List ℕ → ℕ
natListLast [] = 0
natListLast (x ∷ []) = x
natListLast (_ ∷ y ∷ l) = natListLast (y l.∷ l)
natListConcatLast : ∀ l → natListLast (l l.++ l.[ 1 ]) ≡ 1
natListConcatLast [] = refl
natListConcatLast (_ ∷ []) = refl
natListConcatLast (_ ∷ _ ∷ l) = natListConcatLast (_ l.∷ l)
-- TodoListLast is needed below; a direct recursive definition (added
-- here, mirroring natListLast) keeps the proofs definitional. The
-- empty case returns a placeholder Todo.
TodoListLast : List Todo → Todo
TodoListLast [] = record { text = "" ; completed = false ; id = 0 }
TodoListLast (x ∷ []) = x
TodoListLast (_ ∷ y ∷ l) = TodoListLast (y l.∷ l)
TodoListConcatLast : ∀ l →
  TodoListLast (l l.++ l.[
    record
      { id = 1
      ; completed = false
      ; text = "text"
      }
    ])
  ≡
  record
    { id = 1
    ; completed = false
    ; text = "text"
    }
TodoListConcatLast [] = refl
TodoListConcatLast (_ ∷ []) = refl
TodoListConcatLast (_ ∷ _ ∷ l) = TodoListConcatLast (_ l.∷ l)
TodoListConcatLastCompleted :
∀ l →
Todo.completed (TodoListLast (l l.++ l.[
record
{ id = 1
; completed = false
; text = "text"
}
])) ≡ false
TodoListConcatLastCompleted [] = refl
TodoListConcatLastCompleted (_ ∷ []) = refl
TodoListConcatLastCompleted (_ ∷ _ ∷ l) = TodoListConcatLastCompleted (_ l.∷ l)
-- _ : v.last (v.fromList (2 l.∷ l.[] l.++ 1 l.∷ l.[])) ≡ 1
-- _ = refl
-- natListConcatLast : (nats : List ℕ) → natListLast (nats l.++ 1 l.∷ l.[]) ≡ 1
-- natListConcatLast [] = refl
-- natListConcatLast nats@(x ∷ xs) =
-- begin
-- natListLast (nats l.++ 1 l.∷ l.[])
-- ≡⟨⟩
-- v.last (v.fromList ((x l.∷ xs) l.++ 1 l.∷ l.[]))
-- ≡⟨⟩
-- ?
-- ≡⟨⟩
-- 1
-- ∎
-- TodoListLast : List Todo → Todo
-- TodoListLast [] = record {}
-- TodoListLast todos@(x ∷ xs) = v.last (v.fromList todos)
-- ListWith2ToVec : v.fromList (2 l.∷ l.[]) ≡ 2 v.∷ v.[]
-- ListWith2ToVec = refl
-- idListLastIdIs2 : Id.id (v.last (v.fromList (record {id = 2} l.∷ l.[]))) ≡ 2
-- idListLastIdIs2 = refl
-- todoListLastIdIs2 : Todo.id (v.last (v.fromList (record {id = 2; text = ""; completed = false} l.∷ l.[]))) ≡ 2
-- todoListLastIdIs2 = refl
-- ListTodoLastTextIsText :
-- (todos : List Todo) (text : String) →
-- Todo.text (v.last (v.fromList (
-- record
-- { text = text
-- ; completed = false
-- ; id = max 0 (l.map Todo.id todos) + 1
-- } l.∷ l.[]
-- ))) ≡ text
-- ListTodoLastTextIsText todos text = refl
-- -- ListAddTodoLastAddedElementIsTodo :
-- -- (todos : List Todo) (text : String) →
-- -- Todo.text (TodoListLast (ListAddTodo todos text)) ≡ text
-- -- ListAddTodoLastAddedElementIsTodo [] text = refl
-- -- ListAddTodoLastAddedElementIsTodo todos@(x ∷ xs) text =
-- -- begin
-- -- Todo.text (TodoListLast (ListAddTodo todos text))
-- -- ≡⟨⟩
-- -- Todo.text (
-- -- TodoListLast (
-- -- todos
-- -- l.++
-- -- (
-- -- record
-- -- { text = text
-- -- ; completed = false
-- -- ; id = max 0 (l.map Todo.id todos) + 1
-- -- }
-- -- l.∷
-- -- l.[]
-- -- )
-- -- )
-- -- )
-- -- ≡⟨⟩
-- -- Todo.text (
-- -- record
-- -- { text = text
-- -- ; completed = false
-- -- ; id = max 0 (l.map Todo.id todos) + 1
-- -- }
-- -- )
-- -- ≡⟨ ? ⟩
-- -- text
-- -- ∎
-- -- open import Data.Nat.Base
-- -- infixr 5 _vb∷ʳ_
-- -- _vb∷ʳ_ : ∀ {n} → Vec≤ A n → A → Vec≤ A (suc n)
-- -- (as , p) vb∷ʳ a = as , s≤s p v.∷ʳ a
-- -- Vec≤AddTodo : ∀ {n : ℕ} → (Vec≤ Todo n) → String → (Vec≤ Todo (1 + n))
-- -- Vec≤AddTodo todos text =
-- -- todos vb∷ʳ
-- -- record
-- -- { id = 1 -- argmax (λ todo → λ e → e) todos) + 1
-- -- ; completed = false
-- -- ; text = text
-- -- }
-- vecLength-++ :
-- ∀ {n m} (xs : Vec A n) {ys : Vec A m} →
-- v.length (xs ++ ys) ≡ v.length xs + v.length ys
-- vecLength-++ [] = refl
-- vecLength-++ (x ∷ xs) = cong suc (vecLength-++ xs)
-- AddTodoAddsNewListItem :
-- ∀ {n : ℕ} → (todos : Vec Todo n) (text : String) →
-- v.length (AddTodo todos text) ≡ v.length todos + 1
-- AddTodoAddsNewListItem [] text = refl
-- AddTodoAddsNewListItem (x v.∷ xs) text =
-- begin
-- v.length (AddTodo (x v.∷ xs) text)
-- ≡⟨⟩
-- 1 + v.length (x v.∷ xs)
-- ≡⟨ +-comm 1 (v.length (x v.∷ xs))⟩
-- v.length (x v.∷ xs) + 1
-- ∎
-- ListTodoAddTodoAddsNewListItem :
-- (todos : List Todo) (text : String) →
-- l.length (ListAddTodo todos text) ≡ l.length todos + 1
-- ListTodoAddTodoAddsNewListItem [] text = refl
-- ListTodoAddTodoAddsNewListItem todos text =
-- +-comm 1 (l.length todos)
-- AddTodoLastAddedElementIsTodo :
-- ∀ {n} (todos : Vec Todo n) (text : String) →
-- last (AddTodo todos text) ≡
-- record
-- { id = 1
-- ; completed = false
-- ; text = text
-- }
-- AddTodoLastAddedElementIsTodo todos text = vecLast todos
-- -- should set (new element).completed to false
-- AddTodoSetsNewCompletedToFalse :
-- ∀ {n} (todos : Vec Todo n) (text : String) →
-- Todo.completed (last (AddTodo todos text)) ≡ false
-- AddTodoSetsNewCompletedToFalse todos text
-- rewrite
-- (AddTodoLastAddedElementIsTodo todos text) =
-- refl
-- -- should set (new element).id to an id not existing already in the list
-- AddTodoSetsNewIdTo1 :
-- ∀ {n} (todos : Vec Todo n) (text : String) →
-- Todo.id (last (AddTodo todos text)) ≡ 1
-- AddTodoSetsNewIdTo1 todos text
-- rewrite
-- (AddTodoLastAddedElementIsTodo todos text) =
-- refl
-- -- TODO should not touch other elements in the list
-- {-# COMPILE JS AddTodo =
-- function (todos) {
-- return function (text) {
-- return [
-- ...todos,
-- {
-- id: todos.reduce((maxId, todo) => Math.max(todo.id, maxId), -1) + 1,
-- completed: false,
-- text: text
-- }
-- ]
-- }
-- }
-- #-}
-- -- DeleteTodo : (List Todo) → ℕ → (List Todo)
-- -- DeleteTodo todos id' = filter (λ todo → Todo.id todo ≟ id') todos
-- open import Relation.Nullary
-- dec-¬ : ∀ {a} {P : Set a} → Dec P → Dec (¬ P)
-- dec-¬ (yes p) = no λ prf → prf p
-- dec-¬ (no ¬p) = yes ¬p
-- VecFilter : Vec≤.vec (v.filter (λ e → e ≟ 2) (2 v.∷ 1 v.∷ v.[])) ≡ (2 v.∷ v.[])
-- VecFilter = refl
-- -- VecFilter' : Vec≤.vec (v.filter (λ e → dec-¬ (e ≟ 2)) (2 v.∷ 1 v.∷ v.[])) ≡ (1 v.∷ v.[])
-- -- VecFilter' = refl
-- -- VecFilter'' : {n : ℕ} → Vec≤.vec (v.filter (λ e → e ≟ 2) (n v.∷ v.[])) ≡ 2 v.∷ v.[]
-- -- VecFilter'' = ?
-- -- VecFilter''' : {n : ℕ} → v.filter (λ e → e ≟ n) (n v.∷ v.[]) ≡ n vb.∷ vb.[]
-- -- VecFilter''' = {! !}
-- -- ListFilter : l.filter (λ e → e ≟ 2) (2 l.∷ l.[]) ≡ 2 l.∷ l.[]
-- -- ListFilter = refl
-- -- ListFilter' : {n : ℕ} → l.filter (λ e → e ≟ n) (n l.∷ l.[]) ≡ n l.∷ l.[]
-- -- ListFilter' = {! !}
-- -- DeleteNat : ∀ {n m} → (Vec ℕ n) → ℕ → (Vec ℕ m)
-- -- DeleteNat nats nat = Vec≤.vec (v.filter (λ e → e ≟ nat) nats)
-- DeleteNat : List ℕ → ℕ → List ℕ
-- DeleteNat nats nat = l.filter (λ e → dec-¬ (e ≟ nat)) nats
-- DeleteNat-idem :
-- (nats : List ℕ) →
-- (nat : ℕ) →
-- DeleteNat (DeleteNat nats nat) nat ≡ DeleteNat nats nat
-- DeleteNat-idem nats nat = filter-idem (λ e → dec-¬ (e ≟ nat)) nats
-- -- begin
-- -- DeleteNat (DeleteNat nats nat) nat
-- -- ≡⟨⟩
-- -- DeleteNat (l.filter (λ e → dec-¬ (e ≟ nat)) nats) nat
-- -- ≡⟨⟩
-- -- l.filter (λ e → dec-¬ (e ≟ nat)) (l.filter (λ e → dec-¬ (e ≟ nat)) nats)
-- -- ≡⟨⟩
-- -- (l.filter (λ e → dec-¬ (e ≟ nat)) ∘ l.filter (λ e → dec-¬ (e ≟ nat))) nats
-- -- ≡⟨ filter-idem (λ e → dec-¬ (e ≟ nat)) nats ⟩
-- -- l.filter (λ e → dec-¬ (e ≟ nat)) nats
-- -- ≡⟨⟩
-- -- DeleteNat nats nat
-- -- ∎
-- private
-- variable
-- p : Level
-- -- VecTodoDeleteTodo :
-- -- ∀ {n} →
-- -- (Vec Todo n)
-- -- → ℕ →
-- -- (Vec Todo n)
-- -- VecTodoDeleteTodo todos id' =
-- -- Vec≤.vec (v.filter (λ todo → dec-¬ (Todo.id todo ≟ id')) todos)
-- ListTodoDeleteTodo :
-- ∀ {n} →
-- (List Todo)
-- → ℕ →
-- (List Todo)
-- ListTodoDeleteTodo todos id' =
-- l.filter (λ todo → dec-¬ (Todo.id todo ≟ id')) todos
-- ListTodoDeleteTodo-idem :
-- (todos : List Todo) →
-- (id' : ℕ) →
-- ListTodoDeleteTodo (ListTodoDeleteTodo todos id') id' ≡ ListTodoDeleteTodo todos id'
-- ListTodoDeleteTodo-idem todos id' =
-- filter-idem (λ e → dec-¬ (Todo.id e ≟ id')) todos
-- -- Vec≤DeleteTodo : ∀ {n} → (Vec≤ Todo n) → ℕ → (Vec≤ Todo n)
-- -- Vec≤DeleteTodo todos id' = vb.filter (λ todo → Todo.id todo ≟ id') todos
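-- `filter'` (an added helper, assumed from the StackOverflow thread
-- linked below): a Boolean-predicate filter, since the std-lib
-- `filter` expects a decidable predicate rather than `A → Bool`.
filter' : ∀ {a} {A : Set a} → (A → Bool) → List A → List A
filter' p [] = []
filter' p (x ∷ xs) = if p x then x ∷ filter' p xs else filter' p xs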
DeleteNat : (List ℕ) → ℕ → (List ℕ)
DeleteNat nats id = filter' (λ n → not (n ≡ᵇ id)) nats
-- https://stackoverflow.com/questions/65622605/agda-std-lib-list-check-that-a-filtered-list-is-empty/65622709#65622709
DeleteNatRemoveNatWithId :
(nats : List ℕ) (id : ℕ) →
filter' (λ n → n ≡ᵇ id) (DeleteNat nats id) ≡ l.[]
DeleteNatRemoveNatWithId [] id = refl
DeleteNatRemoveNatWithId (x ∷ xs) id with (x ≡ᵇ id) | P.inspect (_≡ᵇ id) x
... | true | P.[ eq ] = DeleteNatRemoveNatWithId xs id
... | false | P.[ eq ] rewrite eq = DeleteNatRemoveNatWithId xs id
-- DONT WORK
-- DeleteNatRemoveNatWithId (x ∷ xs) id with x ≡ᵇ id | inspect (l._∷ (filter' (λ n → not (n ≡ᵇ id)) xs)) x
-- ... | true | P.[ eq ] = DeleteNatRemoveNatWithId xs id
-- ... | false | P.[ eq ] rewrite eq = {! !}
-- cong (x List.∷_) (DeleteNatRemoveNatWithId xs id)
-- x List.∷ filter' (λ n → n ≡ᵇ id) (DeleteNat xs id) ≡ x List.∷ List.[]
-- cong (List._∷_) (DeleteNatRemoveNatWithId xs id)
-- List._∷_ (filter' (λ n → n ≡ᵇ id) (DeleteNat xs id)) ≡ List._∷_ List.[]
-- cong₂ (List._∷_) refl (DeleteNatRemoveNatWithId xs id)
-- _x_1193 List.∷ filter' (λ n → n ≡ᵇ id) (DeleteNat xs id) ≡ _x_1193 List.∷ List.[]
-- cong₂ (x List.∷_) refl (DeleteNatRemoveNatWithId xs id)
-- cong (filter' (λ n → n ≡ᵇ id) (x List.∷_)) (DeleteNatRemoveNatWithId xs id)
-- cong (filter' (λ n → n ≡ᵇ id)) (DeleteNatRemoveNatWithId xs id)
-- filter' (λ n → n ≡ᵇ id) (filter' (λ n → n ≡ᵇ id) (DeleteNat xs id)) ≡ List.[]
-- DeleteTodo is well-defined
-- DeleteTodoRemoveTodoWithId :
-- (todos : List Todo) (id : ℕ) →
-- l.filter (λ todo → Todo.id todo ≟ id) (DeleteTodo todos id) ≡ l.[]
-- DeleteTodoRemoveTodoWithId [] id = refl
-- DeleteTodoRemoveTodoWithId (x ∷ xs) id with (Todo.id x ≟ id) | inspect (_≡ᵇ id) (Todo.id x)
-- ... | yes px | P.[ eq ] = DeleteTodoRemoveTodoWithId xs id
-- ... | no npx | P.[ eq ] rewrite eq = {! !}
-- -- filterProof : v.filter (λ e → e ≟ 2) (2 v.∷ v.[]) ≡ (2 vb.∷ vb.[])
-- -- filterProof = refl
-- -- filterProof' : v.filter (λ e → dec-¬ (e ≟ 2)) (2 v.∷ v.[]) ≡ vb.[]
-- -- filterProof' = refl
-- -- dropWhileProof : v.dropWhile (λ e → e ≟ 2) (2 v.∷ 3 v.∷ v.[]) ≡ 3 vb.∷ vb.[]
-- -- dropWhileProof = refl
-- -- dropWhileProof' : v.dropWhile (λ e → e ≟ 3) (2 v.∷ 3 v.∷ v.[]) ≡ (2 vb.∷ 3 vb.∷ vb.[])
-- -- dropWhileProof' = refl
-- -- todoFilterProof :
-- -- v.filter
-- -- (λ todo → dec-¬ ((Todo.id todo) ≟ 2))
-- -- (
-- -- record
-- -- { id = 2
-- -- ; completed = false
-- -- ; text = ""
-- -- }
-- -- v.∷
-- -- record
-- -- { id = 1
-- -- ; completed = false
-- -- ; text = ""
-- -- }
-- -- v.∷
-- -- v.[]
-- -- )
-- -- ≡
-- -- (
-- -- record
-- -- { id = 1
-- -- ; completed = false
-- -- ; text = ""
-- -- }
-- -- vb.∷
-- -- vb.[]
-- -- )
-- -- todoFilterProof = refl
-- -- should remove element from the list unless there are no elements
-- -- should remove element with given id
-- -- DeleteTodoRemoveTodoById :
-- -- ∀ {n : ℕ} (id' : ℕ) (todos : Vec Todo n) →
-- -- padRight
-- -- record
-- -- { id = 1 -- argmax (λ todo → λ e → e) todos) + 1
-- -- ; completed = false
-- -- ; text = ""
-- -- }
-- -- (v.filter (λ todo → dec-¬ ((Todo.id todo) ≟ id')) (DeleteTodo todos id'))
-- -- ≡ DeleteTodo todos id'
-- -- DeleteTodoRemoveTodoById id' v.[] = refl
-- -- DeleteTodoRemoveTodoById id' (x v.∷ xs) = {! !} (DeleteTodoRemoveTodoById xs)
-- -- {-# COMPILE JS DeleteTodo =
-- -- function (todos) {
-- -- return function (id) {
-- -- return todos.filter(function (todo) {
-- -- return todo.id !== id
-- -- });
-- -- }
-- -- }
-- -- #-}
-- -- EditTodo: can't use updateAt since id doesn't necessarily correspond to Vec index
-- VecTodoEditTodo : ∀ {n} → (Vec Todo n) → ℕ → String → (Vec Todo n)
-- VecTodoEditTodo todos id text =
-- v.map (λ todo →
-- if (⌊ Todo.id todo ≟ id ⌋)
-- then record todo { text = text }
-- else todo)
-- todos
-- ListTodoEditTodo : (List Todo) → ℕ → String → (List Todo)
-- ListTodoEditTodo todos id text =
-- l.map (λ todo →
-- if (⌊ Todo.id todo ≟ id ⌋)
-- then record todo { text = text }
-- else todo)
-- todos
-- ListTodoEditTodo-idem :
-- (todos : List Todo) →
-- (id' : ℕ) →
-- (text : String) →
-- ListTodoEditTodo (ListTodoEditTodo todos id' text) id' text ≡ ListTodoEditTodo todos id' text
-- ListTodoEditTodo-idem todos id' text =
-- begin
-- ListTodoEditTodo (ListTodoEditTodo todos id' text) id' text
-- ≡⟨ {! !} ⟩
-- ListTodoEditTodo todos id' text
-- ∎
-- -- {-# COMPILE JS EditTodo =
-- -- function (todos) {
-- -- return function (id) {
-- -- return function (text) {
-- -- return todos.map(function (todo) {
-- -- if (todo.id === id) {
-- -- todo.text = text;
-- -- }
-- -- return todo;
-- -- });
-- -- }
-- -- }
-- -- }
-- -- #-}
-- VecTodoCompleteTodo : ∀ {n} → (Vec Todo n) → ℕ → (Vec Todo n)
-- VecTodoCompleteTodo todos id =
-- v.map (λ todo →
-- if (⌊ Todo.id todo ≟ id ⌋)
-- then record todo { completed = true }
-- else todo)
-- todos
-- ListTodoCompleteTodo : (List Todo) → ℕ → (List Todo)
-- ListTodoCompleteTodo todos id =
-- l.map (λ todo →
-- if (⌊ Todo.id todo ≟ id ⌋)
-- then record todo { completed = true }
-- else todo)
-- todos
-- ListTodoCompleteTodo-idem :
-- (todos : List Todo) →
-- (id' : ℕ) →
-- ListTodoCompleteTodo (ListTodoCompleteTodo todos id') id' ≡ ListTodoCompleteTodo todos id'
-- ListTodoCompleteTodo-idem todos id' =
-- begin
-- ListTodoCompleteTodo (ListTodoCompleteTodo todos id') id'
-- ≡⟨ {! !} ⟩
-- ListTodoCompleteTodo todos id'
-- ∎
-- -- {-# COMPILE JS CompleteTodo =
-- -- function (todos) {
-- -- return function (id) {
-- -- return todos.map(function (todo) {
-- -- if (todo.id === id) {
-- -- todo.completed = true;
-- -- }
-- -- return todo;
-- -- });
-- -- }
-- -- }
-- -- #-}
-- CompleteAllTodos : ∀ {n} → (Vec Todo n) → (Vec Todo n)
-- CompleteAllTodos todos =
-- v.map (λ todo →
-- record todo { completed = true })
-- todos
-- ListTodoCompleteAllTodos : (List Todo) → (List Todo)
-- ListTodoCompleteAllTodos todos =
-- l.map (λ todo →
-- record todo { completed = true })
-- todos
-- ListTodoCompleteAllTodos-idem :
-- (todos : List Todo) →
-- ListTodoCompleteAllTodos (ListTodoCompleteAllTodos todos) ≡ ListTodoCompleteAllTodos todos
-- ListTodoCompleteAllTodos-idem todos = {! !}
-- -- {-# COMPILE JS CompleteAllTodos =
-- -- function (todos) {
-- -- return todos.map(function(todo) {
-- -- todo.completed = true;
-- -- return todo;
-- -- });
-- -- }
-- -- #-}
-- VecTodoClearCompleted : ∀ {n} → (Vec Todo n) → (Vec Todo n)
-- VecTodoClearCompleted todos =
-- padRight
-- record
-- { id = 1 -- argmax (λ todo → λ e → e) todos) + 1
-- ; completed = false
-- ; text = ""
-- }
-- (v.filter (λ todo → dec-¬ ((Todo.completed todo) Bool.≟ true)) todos)
-- ListTodoClearCompleted : (List Todo) → (List Todo)
-- ListTodoClearCompleted todos =
-- (l.filter (λ todo → dec-¬ ((Todo.completed todo) Bool.≟ true)) todos)
-- ListTodoClearCompleted-idem :
-- (todos : List Todo) →
-- ListTodoClearCompleted (ListTodoClearCompleted todos) ≡ ListTodoClearCompleted todos
-- ListTodoClearCompleted-idem todos =
-- filter-idem (λ e → dec-¬ (Todo.completed e Bool.≟ true)) todos
-- -- should remove all elements where completed = true
-- -- should not change other elements
-- -- should not change (all elements).text
-- -- should not change (all elements).id
-- -- {-# COMPILE JS ClearCompleted =
-- -- function (todos) {
-- -- return todos.filter(function(todo) {
-- -- return !todo.completed;
-- -- });
-- -- }
-- -- #-}
-- -- add-todos-length-increased-by-1 : ∀ (todos : List Todo) → length (AddTodo todos "test")
-- -- add-todos-length-increased-by-1 = ?
-- -- delete-todos-length-decreased-by-1-except-if-length-0 : ()
-- -- edit-todos-length-not-changed : ()
-- -- complete-todos-length-not-changed : ()
-- -- complete-all-todos-length-not-changed : ()
-- -- clear-completed-todos-not-have-completed : ()
-- -- should not generate duplicate ids after CLEAR_COMPLETE
-- -- data Action : Set where
-- -- ADD_TODO DELETE_TODO EDIT_TODO COMPLETE_TODO COMPLETE_ALL_TODOS CLEAR_COMPLETED : Action
-- -- Reducer : Todos → Action → Todos
-- -- Reducer todos ADD_TODO = AddTodo todos id
-- -- Reducer todos DELETE_TODO = DeleteTodo todos id
-- -- Reducer todos EDIT_TODO = EditTodo todos id
-- -- Reducer todos COMPLETE_TODO = CompleteTodo todos id
-- -- Reducer todos COMPLETE_ALL_TODOS = CompleteAllTodos todos id
-- -- Reducer todos CLEAR_COMPLETED = ClearCompleted todos id
|
(* Author: Tobias Nipkow *)
section \<open>Unbalanced Tree Implementation of Set\<close>
theory Tree_Set
imports
"HOL-Library.Tree"
Cmp
Set_Specs
begin
definition empty :: "'a tree" where
"empty = Leaf"
fun isin :: "'a::linorder tree \<Rightarrow> 'a \<Rightarrow> bool" where
"isin Leaf x = False" |
"isin (Node l a r) x =
(case cmp x a of
LT \<Rightarrow> isin l x |
EQ \<Rightarrow> True |
GT \<Rightarrow> isin r x)"
hide_const (open) insert
fun insert :: "'a::linorder \<Rightarrow> 'a tree \<Rightarrow> 'a tree" where
"insert x Leaf = Node Leaf x Leaf" |
"insert x (Node l a r) =
(case cmp x a of
LT \<Rightarrow> Node (insert x l) a r |
EQ \<Rightarrow> Node l a r |
GT \<Rightarrow> Node l a (insert x r))"
text \<open>Deletion by replacing:\<close>
fun split_min :: "'a tree \<Rightarrow> 'a * 'a tree" where
"split_min (Node l a r) =
(if l = Leaf then (a,r) else let (x,l') = split_min l in (x, Node l' a r))"
fun delete :: "'a::linorder \<Rightarrow> 'a tree \<Rightarrow> 'a tree" where
"delete x Leaf = Leaf" |
"delete x (Node l a r) =
(case cmp x a of
LT \<Rightarrow> Node (delete x l) a r |
GT \<Rightarrow> Node l a (delete x r) |
EQ \<Rightarrow> if r = Leaf then l else let (a',r') = split_min r in Node l a' r')"
text \<open>Deletion by joining:\<close>
fun join :: "('a::linorder)tree \<Rightarrow> 'a tree \<Rightarrow> 'a tree" where
"join t Leaf = t" |
"join Leaf t = t" |
"join (Node t1 a t2) (Node t3 b t4) =
(case join t2 t3 of
Leaf \<Rightarrow> Node t1 a (Node Leaf b t4) |
Node u2 x u3 \<Rightarrow> Node (Node t1 a u2) x (Node u3 b t4))"
fun delete2 :: "'a::linorder \<Rightarrow> 'a tree \<Rightarrow> 'a tree" where
"delete2 x Leaf = Leaf" |
"delete2 x (Node l a r) =
(case cmp x a of
LT \<Rightarrow> Node (delete2 x l) a r |
GT \<Rightarrow> Node l a (delete2 x r) |
EQ \<Rightarrow> join l r)"
subsection "Functional Correctness Proofs"
lemma isin_set: "sorted(inorder t) \<Longrightarrow> isin t x = (x \<in> set (inorder t))"
by (induction t) (auto simp: isin_simps)
lemma inorder_insert:
"sorted(inorder t) \<Longrightarrow> inorder(insert x t) = ins_list x (inorder t)"
by(induction t) (auto simp: ins_list_simps)
lemma split_minD:
"split_min t = (x,t') \<Longrightarrow> t \<noteq> Leaf \<Longrightarrow> x # inorder t' = inorder t"
by(induction t arbitrary: t' rule: split_min.induct)
(auto simp: sorted_lems split: prod.splits if_splits)
lemma inorder_delete:
"sorted(inorder t) \<Longrightarrow> inorder(delete x t) = del_list x (inorder t)"
by(induction t) (auto simp: del_list_simps split_minD split: prod.splits)
interpretation S: Set_by_Ordered
where empty = empty and isin = isin and insert = insert and delete = delete
and inorder = inorder and inv = "\<lambda>_. True"
proof (standard, goal_cases)
case 1 show ?case by (simp add: empty_def)
next
case 2 thus ?case by(simp add: isin_set)
next
case 3 thus ?case by(simp add: inorder_insert)
next
case 4 thus ?case by(simp add: inorder_delete)
qed (rule TrueI)+
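text \<open>A small sanity check (an illustrative addition, not part of the
original theory): membership after two insertions into the empty tree.\<close>
lemma "isin (insert (2::nat) (insert 1 empty)) 2"
by (simp add: empty_def cmp_def)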
lemma inorder_join:
"inorder(join l r) = inorder l @ inorder r"
by(induction l r rule: join.induct) (auto split: tree.split)
lemma inorder_delete2:
"sorted(inorder t) \<Longrightarrow> inorder(delete2 x t) = del_list x (inorder t)"
by(induction t) (auto simp: inorder_join del_list_simps)
interpretation S2: Set_by_Ordered
where empty = empty and isin = isin and insert = insert and delete = delete2
and inorder = inorder and inv = "\<lambda>_. True"
proof (standard, goal_cases)
case 1 show ?case by (simp add: empty_def)
next
case 2 thus ?case by(simp add: isin_set)
next
case 3 thus ?case by(simp add: inorder_insert)
next
case 4 thus ?case by(simp add: inorder_delete2)
qed (rule TrueI)+
end
|
section \<open>Relators\<close>
theory Relators
imports "../Lib/Refine_Lib"
begin
text \<open>
We define the concept of relators. The relation between a concrete type and
an abstract type is expressed by a relation of type \<open>('c\<times>'a) set\<close>.
For each composed type, say \<open>'a list\<close>, we can define a {\em relator},
that takes as argument a relation for the element type, and returns a relation
for the list type. For most datatypes, there exists a {\em natural relator}.
For algebraic datatypes, this is the relator that preserves the structure
of the datatype, and changes the components. For example,
\<open>list_rel::('c\<times>'a) set \<Rightarrow> ('c list\<times>'a list) set\<close> is the natural
relator for lists.
However, relators can also be used to change the representation, and thus
relate an implementation with an abstract type. For example, the relator
\<open>list_set_rel::('c\<times>'a) set \<Rightarrow> ('c list\<times>'a set) set\<close> relates lists
with the set of their elements.
In this theory, we define some basic notions for relators, and
then define natural relators for all HOL-types, including the function type.
For each relator, we also show a single-valuedness property, and initialize a
solver for single-valued properties.
\<close>
subsection \<open>Basic Definitions\<close>
text \<open>
For smoother handling of relator unification, we require relator arguments to
be applied by a special operator, such that we avoid higher-order
unification problems. We try to set up some syntax to make this more
transparent, and give relators a type-like prefix-syntax.
\<close>
definition relAPP
:: "(('c1\<times>'a1) set \<Rightarrow> _) \<Rightarrow> ('c1\<times>'a1) set \<Rightarrow> _"
where "relAPP f x \<equiv> f x"
syntax "_rel_APP" :: "args \<Rightarrow> 'a \<Rightarrow> 'b" ("\<langle>_\<rangle>_" [0,900] 900)
translations
"\<langle>x,xs\<rangle>R" == "\<langle>xs\<rangle>(CONST relAPP R x)"
"\<langle>x\<rangle>R" == "CONST relAPP R x"
ML \<open>
structure Refine_Relators_Thms = struct
structure rel_comb_def_rules = Named_Thms (
val name = @{binding refine_rel_defs}
val description = "Refinement Framework: " ^
"Relator definitions"
);
end
\<close>
setup Refine_Relators_Thms.rel_comb_def_rules.setup
subsection \<open>Basic HOL Relators\<close>
subsubsection \<open>Function\<close>
definition fun_rel where
fun_rel_def_internal: "fun_rel A B \<equiv> { (f,f'). \<forall>(a,a')\<in>A. (f a, f' a')\<in>B }"
abbreviation fun_rel_syn (infixr "\<rightarrow>" 60) where "A\<rightarrow>B \<equiv> \<langle>A,B\<rangle>fun_rel"
lemma fun_rel_def[refine_rel_defs]:
"A\<rightarrow>B \<equiv> { (f,f'). \<forall>(a,a')\<in>A. (f a, f' a')\<in>B }"
by (simp add: relAPP_def fun_rel_def_internal)
lemma fun_relI[intro!]: "\<lbrakk>\<And>a a'. (a,a')\<in>A \<Longrightarrow> (f a,f' a')\<in>B\<rbrakk> \<Longrightarrow> (f,f')\<in>A\<rightarrow>B"
by (auto simp: fun_rel_def)
lemma fun_relD:
shows " ((f,f')\<in>(A\<rightarrow>B)) \<Longrightarrow>
(\<And>x x'. \<lbrakk> (x,x')\<in>A \<rbrakk> \<Longrightarrow> (f x, f' x')\<in>B)"
apply rule
by (auto simp: fun_rel_def)
lemma fun_relD1:
assumes "(f,f')\<in>Ra\<rightarrow>Rr"
assumes "f x = r"
shows "\<forall>x'. (x,x')\<in>Ra \<longrightarrow> (r,f' x')\<in>Rr"
using assms by (auto simp: fun_rel_def)
lemma fun_relD2:
assumes "(f,f')\<in>Ra\<rightarrow>Rr"
assumes "f' x' = r'"
shows "\<forall>x. (x,x')\<in>Ra \<longrightarrow> (f x,r')\<in>Rr"
using assms by (auto simp: fun_rel_def)
lemma fun_relE1:
assumes "(f,f')\<in>Id \<rightarrow> Rv"
assumes "t' = f' x"
shows "(f x,t')\<in>Rv" using assms
by (auto elim: fun_relD)
lemma fun_relE2:
assumes "(f,f')\<in>Id \<rightarrow> Rv"
assumes "t = f x"
shows "(t,f' x)\<in>Rv" using assms
by (auto elim: fun_relD)
subsubsection \<open>Terminal Types\<close>
abbreviation unit_rel :: "(unit\<times>unit) set" where "unit_rel == Id"
abbreviation "nat_rel \<equiv> Id::(nat\<times>_) set"
abbreviation "int_rel \<equiv> Id::(int\<times>_) set"
abbreviation "bool_rel \<equiv> Id::(bool\<times>_) set"
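text \<open>For illustration (an added example): the successor function
refines itself over the identity relation on naturals.\<close>
lemma "(Suc, Suc) \<in> nat_rel \<rightarrow> nat_rel"
  by (auto simp: fun_rel_def)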
subsubsection \<open>Product\<close>
definition prod_rel where
prod_rel_def_internal: "prod_rel R1 R2
\<equiv> { ((a,b),(a',b')) . (a,a')\<in>R1 \<and> (b,b')\<in>R2 }"
abbreviation prod_rel_syn (infixr "\<times>\<^sub>r" 70) where "a\<times>\<^sub>rb \<equiv> \<langle>a,b\<rangle>prod_rel"
lemma prod_rel_def[refine_rel_defs]:
"(\<langle>R1,R2\<rangle>prod_rel) \<equiv> { ((a,b),(a',b')) . (a,a')\<in>R1 \<and> (b,b')\<in>R2 }"
by (simp add: prod_rel_def_internal relAPP_def)
lemma prod_relI: "\<lbrakk>(a,a')\<in>R1; (b,b')\<in>R2\<rbrakk> \<Longrightarrow> ((a,b),(a',b'))\<in>\<langle>R1,R2\<rangle>prod_rel"
by (auto simp: prod_rel_def)
lemma prod_relE:
assumes "(p,p')\<in>\<langle>R1,R2\<rangle>prod_rel"
obtains a b a' b' where "p=(a,b)" and "p'=(a',b')"
and "(a,a')\<in>R1" and "(b,b')\<in>R2"
using assms
by (auto simp: prod_rel_def)
lemma prod_rel_simp[simp]:
"((a,b),(a',b'))\<in>\<langle>R1,R2\<rangle>prod_rel \<longleftrightarrow> (a,a')\<in>R1 \<and> (b,b')\<in>R2"
by (auto intro: prod_relI elim: prod_relE)
lemma in_Domain_prod_rel_iff[iff]: "(a,b)\<in>Domain (A\<times>\<^sub>rB) \<longleftrightarrow> a\<in>Domain A \<and> b\<in>Domain B"
by (auto simp: prod_rel_def)
lemma prod_rel_comp: "(A \<times>\<^sub>r B) O (C \<times>\<^sub>r D) = (A O C) \<times>\<^sub>r (B O D)"
unfolding prod_rel_def
by auto
subsubsection \<open>Option\<close>
definition option_rel where
option_rel_def_internal:
"option_rel R \<equiv> { (Some a,Some a') | a a'. (a,a')\<in>R } \<union> {(None,None)}"
lemma option_rel_def[refine_rel_defs]:
"\<langle>R\<rangle>option_rel \<equiv> { (Some a,Some a') | a a'. (a,a')\<in>R } \<union> {(None,None)}"
by (simp add: option_rel_def_internal relAPP_def)
lemma option_relI:
"(None,None)\<in>\<langle>R\<rangle> option_rel"
"\<lbrakk> (a,a')\<in>R \<rbrakk> \<Longrightarrow> (Some a, Some a')\<in>\<langle>R\<rangle>option_rel"
by (auto simp: option_rel_def)
lemma option_relE:
assumes "(x,x')\<in>\<langle>R\<rangle>option_rel"
obtains "x=None" and "x'=None"
| a a' where "x=Some a" and "x'=Some a'" and "(a,a')\<in>R"
using assms by (auto simp: option_rel_def)
lemma option_rel_simp[simp]:
"(None,a)\<in>\<langle>R\<rangle>option_rel \<longleftrightarrow> a=None"
"(c,None)\<in>\<langle>R\<rangle>option_rel \<longleftrightarrow> c=None"
"(Some x,Some y)\<in>\<langle>R\<rangle>option_rel \<longleftrightarrow> (x,y)\<in>R"
by (auto intro: option_relI elim: option_relE)
subsubsection \<open>Sum\<close>
definition sum_rel where sum_rel_def_internal:
"sum_rel Rl Rr
\<equiv> { (Inl a, Inl a') | a a'. (a,a')\<in>Rl } \<union>
{ (Inr a, Inr a') | a a'. (a,a')\<in>Rr }"
lemma sum_rel_def[refine_rel_defs]:
"\<langle>Rl,Rr\<rangle>sum_rel \<equiv>
{ (Inl a, Inl a') | a a'. (a,a')\<in>Rl } \<union>
{ (Inr a, Inr a') | a a'. (a,a')\<in>Rr }"
by (simp add: sum_rel_def_internal relAPP_def)
lemma sum_rel_simp[simp]:
"\<And>a a'. (Inl a, Inl a') \<in> \<langle>Rl,Rr\<rangle>sum_rel \<longleftrightarrow> (a,a')\<in>Rl"
"\<And>a a'. (Inr a, Inr a') \<in> \<langle>Rl,Rr\<rangle>sum_rel \<longleftrightarrow> (a,a')\<in>Rr"
"\<And>a a'. (Inl a, Inr a') \<notin> \<langle>Rl,Rr\<rangle>sum_rel"
"\<And>a a'. (Inr a, Inl a') \<notin> \<langle>Rl,Rr\<rangle>sum_rel"
unfolding sum_rel_def by auto
lemma sum_relI:
"(l,l')\<in>Rl \<Longrightarrow> (Inl l, Inl l') \<in> \<langle>Rl,Rr\<rangle>sum_rel"
"(r,r')\<in>Rr \<Longrightarrow> (Inr r, Inr r') \<in> \<langle>Rl,Rr\<rangle>sum_rel"
by simp_all
lemma sum_relE:
assumes "(x,x')\<in>\<langle>Rl,Rr\<rangle>sum_rel"
obtains
l l' where "x=Inl l" and "x'=Inl l'" and "(l,l')\<in>Rl"
| r r' where "x=Inr r" and "x'=Inr r'" and "(r,r')\<in>Rr"
using assms by (auto simp: sum_rel_def)
subsubsection \<open>Lists\<close>
definition list_rel where list_rel_def_internal:
"list_rel R \<equiv> {(l,l'). list_all2 (\<lambda>x x'. (x,x')\<in>R) l l'}"
lemma list_rel_def[refine_rel_defs]:
"\<langle>R\<rangle>list_rel \<equiv> {(l,l'). list_all2 (\<lambda>x x'. (x,x')\<in>R) l l'}"
by (simp add: list_rel_def_internal relAPP_def)
lemma list_rel_induct[induct set,consumes 1, case_names Nil Cons]:
assumes "(l,l')\<in>\<langle>R\<rangle> list_rel"
assumes "P [] []"
assumes "\<And>x x' l l'. \<lbrakk> (x,x')\<in>R; (l,l')\<in>\<langle>R\<rangle>list_rel; P l l' \<rbrakk>
\<Longrightarrow> P (x#l) (x'#l')"
shows "P l l'"
using assms unfolding list_rel_def
apply simp
by (rule list_all2_induct)
lemma list_rel_eq_listrel: "list_rel = listrel"
apply (rule ext)
apply safe
proof goal_cases
case (1 x a b) thus ?case
unfolding list_rel_def_internal
apply simp
apply (induct a b rule: list_all2_induct)
apply (auto intro: listrel.intros)
done
next
case 2 thus ?case
apply (induct)
apply (auto simp: list_rel_def_internal)
done
qed
lemma list_relI:
"([],[])\<in>\<langle>R\<rangle>list_rel"
"\<lbrakk> (x,x')\<in>R; (l,l')\<in>\<langle>R\<rangle>list_rel \<rbrakk> \<Longrightarrow> (x#l,x'#l')\<in>\<langle>R\<rangle>list_rel"
by (auto simp: list_rel_def)
lemma list_rel_simp[simp]:
"([],l')\<in>\<langle>R\<rangle>list_rel \<longleftrightarrow> l'=[]"
"(l,[])\<in>\<langle>R\<rangle>list_rel \<longleftrightarrow> l=[]"
"([],[])\<in>\<langle>R\<rangle>list_rel"
"(x#l,x'#l')\<in>\<langle>R\<rangle>list_rel \<longleftrightarrow> (x,x')\<in>R \<and> (l,l')\<in>\<langle>R\<rangle>list_rel"
by (auto simp: list_rel_def)
lemma list_relE1:
assumes "(l,[])\<in>\<langle>R\<rangle>list_rel" obtains "l=[]" using assms by auto
lemma list_relE2:
assumes "([],l)\<in>\<langle>R\<rangle>list_rel" obtains "l=[]" using assms by auto
lemma list_relE3:
assumes "(x#xs,l')\<in>\<langle>R\<rangle>list_rel" obtains x' xs' where
"l'=x'#xs'" and "(x,x')\<in>R" and "(xs,xs')\<in>\<langle>R\<rangle>list_rel"
using assms
apply (cases l')
apply auto
done
lemma list_relE4:
assumes "(l,x'#xs')\<in>\<langle>R\<rangle>list_rel" obtains x xs where
"l=x#xs" and "(x,x')\<in>R" and "(xs,xs')\<in>\<langle>R\<rangle>list_rel"
using assms
apply (cases l)
apply auto
done
lemmas list_relE = list_relE1 list_relE2 list_relE3 list_relE4
lemma list_rel_imp_same_length:
"(l, l') \<in> \<langle>R\<rangle>list_rel \<Longrightarrow> length l = length l'"
unfolding list_rel_eq_listrel relAPP_def
by (rule listrel_eq_len)
lemma list_rel_split_right_iff:
"(x#xs,l)\<in>\<langle>R\<rangle>list_rel \<longleftrightarrow> (\<exists>y ys. l=y#ys \<and> (x,y)\<in>R \<and> (xs,ys)\<in>\<langle>R\<rangle>list_rel)"
by (cases l) auto
lemma list_rel_split_left_iff:
"(l,y#ys)\<in>\<langle>R\<rangle>list_rel \<longleftrightarrow> (\<exists>x xs. l=x#xs \<and> (x,y)\<in>R \<and> (xs,ys)\<in>\<langle>R\<rangle>list_rel)"
by (cases l) auto
subsubsection \<open>Sets\<close>
text \<open>Pointwise refinement: The abstract set is the image of
the concrete set, and the concrete set only contains elements that
have an abstract counterpart\<close>
definition set_rel where
set_rel_def_internal:
"set_rel R \<equiv> {(A,B). (\<forall>x\<in>A. \<exists>y\<in>B. (x,y)\<in>R) \<and> (\<forall>y\<in>B. \<exists>x\<in>A. (x,y)\<in>R)}"
term set_rel
lemma set_rel_def[refine_rel_defs]:
"\<langle>R\<rangle>set_rel \<equiv> {(A,B). (\<forall>x\<in>A. \<exists>y\<in>B. (x,y)\<in>R) \<and> (\<forall>y\<in>B. \<exists>x\<in>A. (x,y)\<in>R)}"
by (simp add: set_rel_def_internal relAPP_def)
lemma set_rel_alt: "\<langle>R\<rangle>set_rel = {(A,B). A \<subseteq> R\<inverse>``B \<and> B \<subseteq> R``A}"
unfolding set_rel_def by auto
lemma set_relI[intro?]:
assumes "\<And>x. x\<in>A \<Longrightarrow> \<exists>y\<in>B. (x,y)\<in>R"
assumes "\<And>y. y\<in>B \<Longrightarrow> \<exists>x\<in>A. (x,y)\<in>R"
shows "(A,B)\<in>\<langle>R\<rangle>set_rel"
using assms unfolding set_rel_def by blast
text \<open>Original definition of \<open>set_rel\<close> in the refinement framework,
abandoned in favour of the more symmetric definition above:\<close>
definition old_set_rel where old_set_rel_def_internal:
"old_set_rel R \<equiv> {(S,S'). S'=R``S \<and> S\<subseteq>Domain R}"
lemma old_set_rel_def[refine_rel_defs]:
"\<langle>R\<rangle>old_set_rel \<equiv> {(S,S'). S'=R``S \<and> S\<subseteq>Domain R}"
by (simp add: old_set_rel_def_internal relAPP_def)
text \<open>The old definition coincides with the new one for single-valued
element relations. This is probably the reason why the old definition worked
for most applications.\<close>
lemma old_set_rel_sv_eq: "single_valued R \<Longrightarrow> \<langle>R\<rangle>old_set_rel = \<langle>R\<rangle>set_rel"
unfolding set_rel_def old_set_rel_def single_valued_def
by blast
lemma set_rel_simp[simp]:
"({},{})\<in>\<langle>R\<rangle>set_rel"
by (auto simp: set_rel_def)
lemma set_rel_empty_iff[simp]:
"({},y)\<in>\<langle>A\<rangle>set_rel \<longleftrightarrow> y={}"
"(x,{})\<in>\<langle>A\<rangle>set_rel \<longleftrightarrow> x={}"
by (auto simp: set_rel_def; fastforce)+
lemma set_relD1: "(s,s')\<in>\<langle>R\<rangle>set_rel \<Longrightarrow> x\<in>s \<Longrightarrow> \<exists>x'\<in>s'. (x,x')\<in>R"
unfolding set_rel_def by blast
lemma set_relD2: "(s,s')\<in>\<langle>R\<rangle>set_rel \<Longrightarrow> x'\<in>s' \<Longrightarrow> \<exists>x\<in>s. (x,x')\<in>R"
unfolding set_rel_def by blast
lemma set_relE1[consumes 2]:
assumes "(s,s')\<in>\<langle>R\<rangle>set_rel" "x\<in>s"
obtains x' where "x'\<in>s'" "(x,x')\<in>R"
using set_relD1[OF assms] ..
lemma set_relE2[consumes 2]:
assumes "(s,s')\<in>\<langle>R\<rangle>set_rel" "x'\<in>s'"
obtains x where "x\<in>s" "(x,x')\<in>R"
using set_relD2[OF assms] ..
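text \<open>A tiny example (an illustrative addition): over the identity
element relation, @{term set_rel} relates equal sets.\<close>
lemma "({1,2::nat}, {1,2::nat}) \<in> \<langle>Id\<rangle>set_rel"
  by (auto simp: set_rel_def)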
subsection \<open>Automation\<close>
subsubsection \<open>A solver for relator properties\<close>
lemma relprop_triggers:
"\<And>R. single_valued R \<Longrightarrow> single_valued R"
"\<And>R. R=Id \<Longrightarrow> R=Id"
"\<And>R. R=Id \<Longrightarrow> Id=R"
"\<And>R. Range R = UNIV \<Longrightarrow> Range R = UNIV"
"\<And>R. Range R = UNIV \<Longrightarrow> UNIV = Range R"
"\<And>R R'. R\<subseteq>R' \<Longrightarrow> R\<subseteq>R'"
by auto
ML \<open>
structure relator_props = Named_Thms (
val name = @{binding relator_props}
val description = "Additional relator properties"
)
structure solve_relator_props = Named_Thms (
val name = @{binding solve_relator_props}
val description = "Relator properties that solve goal"
)
\<close>
setup relator_props.setup
setup solve_relator_props.setup
declaration \<open>
Tagged_Solver.declare_solver
@{thms relprop_triggers}
@{binding relator_props_solver}
"Additional relator properties solver"
(fn ctxt => (REPEAT_ALL_NEW (CHANGED o (
match_tac ctxt (solve_relator_props.get ctxt) ORELSE'
match_tac ctxt (relator_props.get ctxt)
))))
\<close>
declaration \<open>
Tagged_Solver.declare_solver
[]
@{binding force_relator_props_solver}
"Additional relator properties solver (instantiate schematics)"
(fn ctxt => (REPEAT_ALL_NEW (CHANGED o (
resolve_tac ctxt (solve_relator_props.get ctxt) ORELSE'
match_tac ctxt (relator_props.get ctxt)
))))
\<close>
lemma
relprop_id_orient[relator_props]: "R=Id \<Longrightarrow> Id=R" and
relprop_eq_refl[solve_relator_props]: "t = t"
by auto
lemma
relprop_UNIV_orient[relator_props]: "R=UNIV \<Longrightarrow> UNIV=R"
by auto
subsubsection \<open>ML-Level utilities\<close>
ML \<open>
signature RELATORS = sig
val mk_relT: typ * typ -> typ
val dest_relT: typ -> typ * typ
val mk_relAPP: term -> term -> term
val list_relAPP: term list -> term -> term
val strip_relAPP: term -> term list * term
val mk_fun_rel: term -> term -> term
val list_rel: term list -> term -> term
val rel_absT: term -> typ
val rel_concT: term -> typ
val mk_prodrel: term * term -> term
val is_prodrel: term -> bool
val dest_prodrel: term -> term * term
val strip_prodrel_left: term -> term list
val list_prodrel_left: term list -> term
val declare_natural_relator:
(string*string) -> Context.generic -> Context.generic
val remove_natural_relator: string -> Context.generic -> Context.generic
val natural_relator_of: Proof.context -> string -> string option
val mk_natural_relator: Proof.context -> term list -> string -> term option
val setup: theory -> theory
end
structure Relators :RELATORS = struct
val mk_relT = HOLogic.mk_prodT #> HOLogic.mk_setT
fun dest_relT (Type (@{type_name set},[Type (@{type_name prod},[cT,aT])]))
= (cT,aT)
| dest_relT ty = raise TYPE ("dest_relT",[ty],[])
fun mk_relAPP x f = let
val xT = fastype_of x
val fT = fastype_of f
val rT = range_type fT
in
Const (@{const_name relAPP},fT-->xT-->rT)$f$x
end
val list_relAPP = fold mk_relAPP
fun strip_relAPP R = let
fun aux @{mpat "\<langle>?R\<rangle>?S"} l = aux S (R::l)
| aux R l = (l,R)
in aux R [] end
val rel_absT = fastype_of #> HOLogic.dest_setT #> HOLogic.dest_prodT #> snd
val rel_concT = fastype_of #> HOLogic.dest_setT #> HOLogic.dest_prodT #> fst
fun mk_fun_rel r1 r2 = let
val (r1T,r2T) = (fastype_of r1,fastype_of r2)
val (c1T,a1T) = dest_relT r1T
val (c2T,a2T) = dest_relT r2T
val (cT,aT) = (c1T --> c2T, a1T --> a2T)
val rT = mk_relT (cT,aT)
in
list_relAPP [r1,r2] (Const (@{const_name fun_rel},r1T-->r2T-->rT))
end
val list_rel = fold_rev mk_fun_rel
fun mk_prodrel (A,B) = @{mk_term "?A \<times>\<^sub>r ?B"}
fun is_prodrel @{mpat "_ \<times>\<^sub>r _"} = true | is_prodrel _ = false
fun dest_prodrel @{mpat "?A \<times>\<^sub>r ?B"} = (A,B) | dest_prodrel t = raise TERM("dest_prodrel",[t])
fun strip_prodrel_left @{mpat "?A \<times>\<^sub>r ?B"} = strip_prodrel_left A @ [B]
| strip_prodrel_left @{mpat (typs) "unit_rel"} = []
| strip_prodrel_left R = [R]
val list_prodrel_left = Refine_Util.list_binop_left @{term unit_rel} mk_prodrel
structure natural_relators = Generic_Data (
type T = string Symtab.table
val empty = Symtab.empty
val extend = I
val merge = Symtab.join (fn _ => fn (_,cn) => cn)
)
fun declare_natural_relator tcp =
natural_relators.map (Symtab.update tcp)
fun remove_natural_relator tname =
natural_relators.map (Symtab.delete_safe tname)
fun natural_relator_of ctxt =
Symtab.lookup (natural_relators.get (Context.Proof ctxt))
(* [R1,\<dots>,Rn] T is mapped to \<langle>R1,\<dots>,Rn\<rangle> Trel *)
fun mk_natural_relator ctxt args Tname =
case natural_relator_of ctxt Tname of
NONE => NONE
| SOME Cname => SOME let
val argsT = map fastype_of args
val (cTs, aTs) = map dest_relT argsT |> split_list
val aT = Type (Tname,aTs)
val cT = Type (Tname,cTs)
val rT = mk_relT (cT,aT)
in
list_relAPP args (Const (Cname,argsT--->rT))
end
fun
natural_relator_from_term (t as Const (name,T)) = let
fun err msg = raise TERM (msg,[t])
val (argTs,bodyT) = strip_type T
val (conTs,absTs) = argTs |> map (HOLogic.dest_setT #> HOLogic.dest_prodT) |> split_list
val (bconT,babsT) = bodyT |> HOLogic.dest_setT |> HOLogic.dest_prodT
val (Tcon,bconTs) = dest_Type bconT
val (Tcon',babsTs) = dest_Type babsT
val _ = Tcon = Tcon' orelse err "Type constructors do not match"
val _ = conTs = bconTs orelse err "Concrete types do not match"
val _ = absTs = babsTs orelse err "Abstract types do not match"
in
(Tcon,name)
end
| natural_relator_from_term t =
raise TERM ("Expected constant",[t]) (* TODO: Localize this! *)
local
fun decl_natrel_aux t context = let
fun warn msg = let
val tP =
Context.cases Syntax.pretty_term_global Syntax.pretty_term
context t
val m = Pretty.block [
Pretty.str "Ignoring invalid natural_relator declaration:",
Pretty.brk 1,
Pretty.str msg,
Pretty.brk 1,
tP
] |> Pretty.string_of
val _ = warning m
in context end
in
declare_natural_relator (natural_relator_from_term t) context
handle
TERM (msg,_) => warn msg
| exn => if Exn.is_interrupt exn then Exn.reraise exn else warn ""
end
in
val natural_relator_attr = Scan.repeat1 Args.term >> (fn ts =>
Thm.declaration_attribute ( fn _ => fold decl_natrel_aux ts)
)
end
val setup = I
#> Attrib.setup
@{binding natural_relator} natural_relator_attr "Declare natural relator"
end
\<close>
setup Relators.setup
subsection \<open>Setup\<close>
subsubsection "Natural Relators"
declare [[natural_relator
unit_rel int_rel nat_rel bool_rel
fun_rel prod_rel option_rel sum_rel list_rel
]]
(*declaration {* let open Relators in
fn _ =>
declare_natural_relator (@{type_name unit},@{const_name unit_rel})
#> declare_natural_relator (@{type_name fun},@{const_name fun_rel})
#> declare_natural_relator (@{type_name prod},@{const_name prod_rel})
#> declare_natural_relator (@{type_name option},@{const_name option_rel})
#> declare_natural_relator (@{type_name sum},@{const_name sum_rel})
#> declare_natural_relator (@{type_name list},@{const_name list_rel})
end
*}*)
ML_val \<open>
Relators.mk_natural_relator
@{context}
[@{term "Ra::('c\<times>'a) set"},@{term "\<langle>Rb\<rangle>option_rel"}]
@{type_name prod}
|> the
|> Thm.cterm_of @{context}
;
Relators.mk_fun_rel @{term "\<langle>Id\<rangle>option_rel"} @{term "\<langle>Id\<rangle>list_rel"}
|> Thm.cterm_of @{context}
\<close>
subsubsection "Additional Properties"
lemmas [relator_props] =
single_valued_Id
subset_refl
refl
(* TODO: Move *)
lemma eq_UNIV_iff: "S=UNIV \<longleftrightarrow> (\<forall>x. x\<in>S)" by auto
lemma fun_rel_sv[relator_props]:
assumes RAN: "Range Ra = UNIV"
assumes SV: "single_valued Rv"
shows "single_valued (Ra \<rightarrow> Rv)"
proof (intro single_valuedI ext impI allI)
fix f g h x'
assume R1: "(f,g)\<in>Ra\<rightarrow>Rv"
and R2: "(f,h)\<in>Ra\<rightarrow>Rv"
from RAN obtain x where AR: "(x,x')\<in>Ra" by auto
from fun_relD[OF R1 AR] have "(f x,g x') \<in> Rv" .
moreover from fun_relD[OF R2 AR] have "(f x,h x') \<in> Rv" .
ultimately show "g x' = h x'" using SV by (auto dest: single_valuedD)
qed
lemmas [relator_props] = Range_Id
lemma fun_rel_id[relator_props]: "\<lbrakk>R1=Id; R2=Id\<rbrakk> \<Longrightarrow> R1 \<rightarrow> R2 = Id"
by (auto simp: fun_rel_def)
lemma fun_rel_id_simp[simp]: "Id\<rightarrow>Id = Id" by tagged_solver
lemma fun_rel_comp_dist[relator_props]:
"(R1\<rightarrow>R2) O (R3\<rightarrow>R4) \<subseteq> ((R1 O R3) \<rightarrow> (R2 O R4))"
by (auto simp: fun_rel_def)
lemma fun_rel_mono[relator_props]: "\<lbrakk> R1\<subseteq>R2; R3\<subseteq>R4 \<rbrakk> \<Longrightarrow> R2\<rightarrow>R3 \<subseteq> R1\<rightarrow>R4"
by (force simp: fun_rel_def)
lemma prod_rel_sv[relator_props]:
"\<lbrakk>single_valued R1; single_valued R2\<rbrakk> \<Longrightarrow> single_valued (\<langle>R1,R2\<rangle>prod_rel)"
by (auto intro: single_valuedI dest: single_valuedD simp: prod_rel_def)
lemma prod_rel_id[relator_props]: "\<lbrakk>R1=Id; R2=Id\<rbrakk> \<Longrightarrow> \<langle>R1,R2\<rangle>prod_rel = Id"
by (auto simp: prod_rel_def)
lemma prod_rel_id_simp[simp]: "\<langle>Id,Id\<rangle>prod_rel = Id" by tagged_solver
lemma prod_rel_mono[relator_props]:
"\<lbrakk> R2\<subseteq>R1; R3\<subseteq>R4 \<rbrakk> \<Longrightarrow> \<langle>R2,R3\<rangle>prod_rel \<subseteq> \<langle>R1,R4\<rangle>prod_rel"
by (auto simp: prod_rel_def)
lemma prod_rel_range[relator_props]: "\<lbrakk>Range Ra=UNIV; Range Rb=UNIV\<rbrakk>
\<Longrightarrow> Range (\<langle>Ra,Rb\<rangle>prod_rel) = UNIV"
apply (auto simp: prod_rel_def)
by (metis Range_iff UNIV_I)+
lemma option_rel_sv[relator_props]:
"\<lbrakk>single_valued R\<rbrakk> \<Longrightarrow> single_valued (\<langle>R\<rangle>option_rel)"
by (auto intro: single_valuedI dest: single_valuedD simp: option_rel_def)
lemma option_rel_id[relator_props]:
"R=Id \<Longrightarrow> \<langle>R\<rangle>option_rel = Id" by (auto simp: option_rel_def)
lemma option_rel_id_simp[simp]: "\<langle>Id\<rangle>option_rel = Id" by tagged_solver
lemma option_rel_mono[relator_props]: "R\<subseteq>R' \<Longrightarrow> \<langle>R\<rangle>option_rel \<subseteq> \<langle>R'\<rangle>option_rel"
by (auto simp: option_rel_def)
lemma option_rel_range: "Range R = UNIV \<Longrightarrow> Range (\<langle>R\<rangle>option_rel) = UNIV"
apply (auto simp: option_rel_def Range_iff)
by (metis Range_iff UNIV_I option.exhaust)
lemma option_rel_inter[simp]: "\<langle>R1 \<inter> R2\<rangle>option_rel = \<langle>R1\<rangle>option_rel \<inter> \<langle>R2\<rangle>option_rel"
by (auto simp: option_rel_def)
lemma option_rel_constraint[simp]:
"(x,x)\<in>\<langle>UNIV\<times>C\<rangle>option_rel \<longleftrightarrow> (\<forall>v. x=Some v \<longrightarrow> v\<in>C)"
by (auto simp: option_rel_def)
lemma sum_rel_sv[relator_props]:
"\<lbrakk>single_valued Rl; single_valued Rr\<rbrakk> \<Longrightarrow> single_valued (\<langle>Rl,Rr\<rangle>sum_rel)"
by (auto intro: single_valuedI dest: single_valuedD simp: sum_rel_def)
lemma sum_rel_id[relator_props]: "\<lbrakk>Rl=Id; Rr=Id\<rbrakk> \<Longrightarrow> \<langle>Rl,Rr\<rangle>sum_rel = Id"
apply (auto elim: sum_relE)
apply (case_tac b)
apply simp_all
done
lemma sum_rel_id_simp[simp]: "\<langle>Id,Id\<rangle>sum_rel = Id" by tagged_solver
lemma sum_rel_mono[relator_props]:
"\<lbrakk> Rl\<subseteq>Rl'; Rr\<subseteq>Rr' \<rbrakk> \<Longrightarrow> \<langle>Rl,Rr\<rangle>sum_rel \<subseteq> \<langle>Rl',Rr'\<rangle>sum_rel"
by (auto simp: sum_rel_def)
lemma sum_rel_range[relator_props]:
"\<lbrakk> Range Rl=UNIV; Range Rr=UNIV \<rbrakk> \<Longrightarrow> Range (\<langle>Rl,Rr\<rangle>sum_rel) = UNIV"
apply (auto simp: sum_rel_def Range_iff)
by (metis Range_iff UNIV_I sumE)
lemma list_rel_sv_iff:
"single_valued (\<langle>R\<rangle>list_rel) \<longleftrightarrow> single_valued R"
apply (intro iffI[rotated] single_valuedI allI impI)
apply (clarsimp simp: list_rel_def)
proof -
fix x y z
assume SV: "single_valued R"
assume "list_all2 (\<lambda>x x'. (x, x') \<in> R) x y" and
"list_all2 (\<lambda>x x'. (x, x') \<in> R) x z"
thus "y=z"
apply (induct arbitrary: z rule: list_all2_induct)
apply simp
apply (case_tac z)
apply force
apply (force intro: single_valuedD[OF SV])
done
next
fix x y z
assume SV: "single_valued (\<langle>R\<rangle>list_rel)"
assume "(x,y)\<in>R" "(x,z)\<in>R"
hence "([x],[y])\<in>\<langle>R\<rangle>list_rel" and "([x],[z])\<in>\<langle>R\<rangle>list_rel"
by (auto simp: list_rel_def)
with single_valuedD[OF SV] show "y=z" by blast
qed
lemma list_rel_sv[relator_props]:
"single_valued R \<Longrightarrow> single_valued (\<langle>R\<rangle>list_rel)"
by (simp add: list_rel_sv_iff)
lemma list_rel_id[relator_props]: "\<lbrakk>R=Id\<rbrakk> \<Longrightarrow> \<langle>R\<rangle>list_rel = Id"
by (auto simp add: list_rel_def list_all2_eq[symmetric])
lemma list_rel_id_simp[simp]: "\<langle>Id\<rangle>list_rel = Id" by tagged_solver
lemma list_rel_mono[relator_props]:
assumes A: "R\<subseteq>R'"
shows "\<langle>R\<rangle>list_rel \<subseteq> \<langle>R'\<rangle>list_rel"
proof clarsimp
fix l l'
assume "(l,l')\<in>\<langle>R\<rangle>list_rel"
thus "(l,l')\<in>\<langle>R'\<rangle>list_rel"
apply induct
using A
by auto
qed
lemma list_rel_range[relator_props]:
assumes A: "Range R = UNIV"
shows "Range (\<langle>R\<rangle>list_rel) = UNIV"
proof (clarsimp simp: eq_UNIV_iff)
fix l
show "l\<in>Range (\<langle>R\<rangle>list_rel)"
apply (induct l)
using A[unfolded eq_UNIV_iff]
by (auto simp: Range_iff intro: list_relI)
qed
lemma bijective_imp_sv:
"bijective R \<Longrightarrow> single_valued R"
"bijective R \<Longrightarrow> single_valued (R\<inverse>)"
by (simp_all add: bijective_alt)
(* TODO: Move *)
declare bijective_Id[relator_props]
declare bijective_Empty[relator_props]
text \<open>Pointwise refinement for set types:\<close>
lemma set_rel_sv[relator_props]:
"single_valued R \<Longrightarrow> single_valued (\<langle>R\<rangle>set_rel)"
unfolding single_valued_def set_rel_def by blast
lemma set_rel_id[relator_props]: "R=Id \<Longrightarrow> \<langle>R\<rangle>set_rel = Id"
by (auto simp add: set_rel_def)
lemma set_rel_id_simp[simp]: "\<langle>Id\<rangle>set_rel = Id" by tagged_solver
lemma set_rel_csv[relator_props]:
"\<lbrakk> single_valued (R\<inverse>) \<rbrakk>
\<Longrightarrow> single_valued ((\<langle>R\<rangle>set_rel)\<inverse>)"
unfolding single_valued_def set_rel_def converse_iff
by fast
subsection \<open>Invariant and Abstraction\<close>
text \<open>
  Quite often, a relation can be described as the combination of an
  abstraction function and an invariant, such that the invariant describes valid
  values on the concrete domain, and the abstraction function maps valid
  concrete values to their corresponding abstract values.
\<close>
definition build_rel where
"build_rel \<alpha> I \<equiv> {(c,a) . a=\<alpha> c \<and> I c}"
abbreviation "br\<equiv>build_rel"
lemmas br_def[refine_rel_defs] = build_rel_def
lemma in_br_conv: "(c,a)\<in>br \<alpha> I \<longleftrightarrow> a=\<alpha> c \<and> I c"
by (auto simp: br_def)
lemma brI[intro?]: "\<lbrakk> a=\<alpha> c; I c \<rbrakk> \<Longrightarrow> (c,a)\<in>br \<alpha> I"
by (simp add: br_def)
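text \<open>A small example, added for illustration (not part of the original development):
  natural numbers refine the integers via the abstraction function @{term int} and a
  trivial invariant.\<close>
lemma "(3::nat, 3::int) \<in> br int (\<lambda>_. True)"
  by (simp add: in_br_conv)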
lemma br_id[simp]: "br id (\<lambda>_. True) = Id"
unfolding build_rel_def by auto
lemma br_chain:
"(build_rel \<beta> J) O (build_rel \<alpha> I) = build_rel (\<alpha>\<circ>\<beta>) (\<lambda>s. J s \<and> I (\<beta> s))"
unfolding build_rel_def by auto
lemma br_sv[simp, intro!,relator_props]: "single_valued (br \<alpha> I)"
unfolding build_rel_def
apply (rule single_valuedI)
apply auto
done
lemma converse_br_sv_iff[simp]:
"single_valued (converse (br \<alpha> I)) \<longleftrightarrow> inj_on \<alpha> (Collect I)"
by (auto intro!: inj_onI single_valuedI dest: single_valuedD inj_onD
simp: br_def) []
lemmas [relator_props] = single_valued_relcomp
lemma br_comp_alt: "br \<alpha> I O R = { (c,a) . I c \<and> (\<alpha> c,a)\<in>R }"
by (auto simp add: br_def)
lemma br_comp_alt':
"{(c,a) . a=\<alpha> c \<and> I c} O R = { (c,a) . I c \<and> (\<alpha> c,a)\<in>R }"
by auto
lemma single_valued_as_brE:
assumes "single_valued R"
obtains \<alpha> invar where "R=br \<alpha> invar"
apply (rule that[of "\<lambda>x. THE y. (x,y)\<in>R" "\<lambda>x. x\<in>Domain R"])
using assms unfolding br_def
by (auto dest: single_valuedD
intro: the_equality[symmetric] theI)
lemma sv_add_invar:
"single_valued R \<Longrightarrow> single_valued {(c, a). (c, a) \<in> R \<and> I c}"
by (auto dest: single_valuedD intro: single_valuedI)
lemma br_Image_conv[simp]: "br \<alpha> I `` S = {\<alpha> x | x. x\<in>S \<and> I x}"
by (auto simp: br_def)
subsection \<open>Miscellaneous\<close>
lemma rel_cong: "(f,g)\<in>Id \<Longrightarrow> (x,y)\<in>Id \<Longrightarrow> (f x, g y)\<in>Id" by simp
lemma rel_fun_cong: "(f,g)\<in>Id \<Longrightarrow> (f x, g x)\<in>Id" by simp
lemma rel_arg_cong: "(x,y)\<in>Id \<Longrightarrow> (f x, f y)\<in>Id" by simp
subsection \<open>Conversion between Predicate and Set Based Relators\<close>
text \<open>
Autoref uses set-based relators of type @{typ \<open>('a\<times>'b) set\<close>}, while the
transfer and lifting package of Isabelle/HOL uses predicate based relators
of type @{typ \<open>'a \<Rightarrow> 'b \<Rightarrow> bool\<close>}. This section defines some utilities
to convert between the two.
\<close>
definition "rel2p R x y \<equiv> (x,y)\<in>R"
definition "p2rel P \<equiv> {(x,y). P x y}"
lemma rel2pD: "\<lbrakk>rel2p R a b\<rbrakk> \<Longrightarrow> (a,b)\<in>R" by (auto simp: rel2p_def)
lemma p2relD: "\<lbrakk>(a,b) \<in> p2rel R\<rbrakk> \<Longrightarrow> R a b" by (auto simp: p2rel_def)
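text \<open>For illustration (not part of the original development), the conversion is
  definitionally transparent:\<close>
lemma "rel2p {(1::nat, 2::nat)} 1 2"
  by (simp add: rel2p_def)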
lemma rel2p_inv[simp]:
"rel2p (p2rel P) = P"
"p2rel (rel2p R) = R"
by (auto simp: rel2p_def[abs_def] p2rel_def)
named_theorems rel2p
named_theorems p2rel
lemma rel2p_dflt[rel2p]:
"rel2p Id = (=)"
"rel2p (A\<rightarrow>B) = rel_fun (rel2p A) (rel2p B)"
"rel2p (A\<times>\<^sub>rB) = rel_prod (rel2p A) (rel2p B)"
"rel2p (\<langle>A,B\<rangle>sum_rel) = rel_sum (rel2p A) (rel2p B)"
"rel2p (\<langle>A\<rangle>option_rel) = rel_option (rel2p A)"
"rel2p (\<langle>A\<rangle>list_rel) = list_all2 (rel2p A)"
by (auto
simp: rel2p_def[abs_def]
intro!: ext
simp: fun_rel_def rel_fun_def
simp: sum_rel_def elim: rel_sum.cases
simp: option_rel_def elim: option.rel_cases
simp: list_rel_def
simp: set_rel_def rel_set_def Image_def
)
lemma p2rel_dflt[p2rel]:
"p2rel (=) = Id"
"p2rel (rel_fun A B) = p2rel A \<rightarrow> p2rel B"
"p2rel (rel_prod A B) = p2rel A \<times>\<^sub>r p2rel B"
"p2rel (rel_sum A B) = \<langle>p2rel A, p2rel B\<rangle>sum_rel"
"p2rel (rel_option A) = \<langle>p2rel A\<rangle>option_rel"
"p2rel (list_all2 A) = \<langle>p2rel A\<rangle>list_rel"
by (auto
simp: p2rel_def[abs_def]
simp: fun_rel_def rel_fun_def
simp: sum_rel_def elim: rel_sum.cases
simp: option_rel_def elim: option.rel_cases
simp: list_rel_def
)
lemma [rel2p]: "rel2p (\<langle>A\<rangle>set_rel) = rel_set (rel2p A)"
unfolding set_rel_def rel_set_def rel2p_def[abs_def]
by blast
lemma [p2rel]: "left_unique A \<Longrightarrow> p2rel (rel_set A) = (\<langle>p2rel A\<rangle>set_rel)"
unfolding set_rel_def rel_set_def p2rel_def[abs_def]
by blast
lemma rel2p_comp: "rel2p A OO rel2p B = rel2p (A O B)"
by (auto simp: rel2p_def[abs_def] intro!: ext)
lemma rel2p_inj[simp]: "rel2p A = rel2p B \<longleftrightarrow> A=B"
by (auto simp: rel2p_def[abs_def]; meson)
subsection \<open>More Properties\<close>
(* TODO: Do compp-lemmas for other standard relations *)
lemma list_rel_compp: "\<langle>A O B\<rangle>list_rel = \<langle>A\<rangle>list_rel O \<langle>B\<rangle>list_rel"
using list.rel_compp[of "rel2p A" "rel2p B"]
by (auto simp: rel2p(2-)[symmetric] rel2p_comp) (* TODO: Not very systematic proof *)
lemma option_rel_compp: "\<langle>A O B\<rangle>option_rel = \<langle>A\<rangle>option_rel O \<langle>B\<rangle>option_rel"
using option.rel_compp[of "rel2p A" "rel2p B"]
by (auto simp: rel2p(2-)[symmetric] rel2p_comp) (* TODO: Not very systematic proof *)
lemma prod_rel_compp: "\<langle>A O B, C O D\<rangle>prod_rel = \<langle>A,C\<rangle>prod_rel O \<langle>B,D\<rangle>prod_rel"
using prod.rel_compp[of "rel2p A" "rel2p B" "rel2p C" "rel2p D"]
by (auto simp: rel2p(2-)[symmetric] rel2p_comp) (* TODO: Not very systematic proof *)
lemma sum_rel_compp: "\<langle>A O B, C O D\<rangle>sum_rel = \<langle>A,C\<rangle>sum_rel O \<langle>B,D\<rangle>sum_rel"
using sum.rel_compp[of "rel2p A" "rel2p B" "rel2p C" "rel2p D"]
by (auto simp: rel2p(2-)[symmetric] rel2p_comp) (* TODO: Not very systematic proof *)
lemma set_rel_compp: "\<langle>A O B\<rangle>set_rel = \<langle>A\<rangle>set_rel O \<langle>B\<rangle>set_rel"
using rel_set_OO[of "rel2p A" "rel2p B"]
by (auto simp: rel2p(2-)[symmetric] rel2p_comp) (* TODO: Not very systematic proof *)
lemma map_in_list_rel_conv:
shows "(l, map \<alpha> l) \<in> \<langle>br \<alpha> I\<rangle>list_rel \<longleftrightarrow> (\<forall>x\<in>set l. I x)"
by (induction l) (auto simp: in_br_conv)
lemma br_set_rel_alt: "(s',s)\<in>\<langle>br \<alpha> I\<rangle>set_rel \<longleftrightarrow> (s=\<alpha>`s' \<and> (\<forall>x\<in>s'. I x))"
by (auto simp: set_rel_def br_def)
(* TODO: Find proof that does not depend on br, and move to Misc *)
lemma finite_Image_sv: "single_valued R \<Longrightarrow> finite s \<Longrightarrow> finite (R``s)"
by (erule single_valued_as_brE) simp
lemma finite_set_rel_transfer: "\<lbrakk>(s,s')\<in>\<langle>R\<rangle>set_rel; single_valued R; finite s\<rbrakk> \<Longrightarrow> finite s'"
unfolding set_rel_alt
by (blast intro: finite_subset[OF _ finite_Image_sv])
lemma finite_set_rel_transfer_back: "\<lbrakk>(s,s')\<in>\<langle>R\<rangle>set_rel; single_valued (R\<inverse>); finite s'\<rbrakk> \<Longrightarrow> finite s"
unfolding set_rel_alt
by (blast intro: finite_subset[OF _ finite_Image_sv])
end
|
Cambridge’s Dimple Creation procedure aims to create natural-looking dimples through a simple and minimally-invasive surgery.
Our experienced doctor first analyses the patient’s facial structure to determine the ideal position of the dimple. After marking it out, an incision is made on the inside of the cheek, and a tiny section of muscle and fat is removed. The incision is then stitched together with suture threads that dissolve in a matter of days.
The procedure is generally completed in less than an hour, with minimal swelling.
|
#=
test_df:
- Julia version: 1.5.0
- Author: shisa
- Date: 2020-08-14
=#
module TestDf
# packages
using Test
# external modules
include("create_df.jl")
include("operate_df.jl")
# methods
function test()
@testset "DataFrames" begin
@testset "DfCreator" begin
@test_nowarn DfCreator.main()
end
@testset "DfOperator" begin
@test_nowarn DfOperator.main()
end
end
end
end
if abspath(PROGRAM_FILE) == @__FILE__
using .TestDf
TestDf.test()
end
|
\documentclass{article}
% The preceding line is only needed to identify funding in the first footnote. If that is unneeded, please comment it out.
\usepackage{cite}
\usepackage{amsmath,amssymb,amsfonts}
\usepackage{algorithmic}
\usepackage{graphicx}
\usepackage{textcomp}
\usepackage{xcolor}
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
\begin{document}
\begin{titlepage}
\begin{center}
\vspace{4cm}
\large
\textbf{
A Survey of Quantum Programming Languages: History, Methods, and Tools \\
Donald A. Sofge \\
Proceedings of the Second International Conference on Quantum, Nano, and Micro Technologies \\
2008 \\
Dylan Miracle \\
ICS 698-02 \\
Spring 2021 \\
Feb 17, 2021 \\
Dr. Jigang Liu
}
\end{center}
\end{titlepage}
\title{A Survey of Quantum Programming Languages: History, Methods, and Tools}
\author{Dylan Miracle\\
\textit{Department of Computer Science} \\
\textit{Metropolitan State University}\\
St. Paul, Minnesota, USA \\
[email protected]
}
\maketitle
\section{Background}
This paper is old by the standards of quantum computing, but it gives a look at the early days of quantum language development. Theoretical work on quantum information dates back to von Neumann in the 1930s, but actual computational frameworks did not begin to develop until the work of Feynman in 1982 and Deutsch in 1985. Deutsch proposed a quantum Turing machine, a model that is used to exhibit a quantum algorithm with substantial speedup over its classical counterpart.
\section{Main idea/conclusion}
Quantum programming languages, like classical languages, can be imperative or functional. Several approaches to developing quantum algorithms have been taken, including the quantum Turing machine, linear logic machines, and the quantum random access machine (QRAM). The gate model of quantum algorithms builds on QRAM fundamentals and is the most developed.
\section{Support facts/algorithms/methods}
The origins of quantum computing date to the ideas of Feynman, who in 1982 proposed the quantum computer as a means of simulating other quantum systems such as molecules. Classical simulations of quantum systems require exponential resources. In the example of a molecule this can be understood because, as we add more atoms, we must track the interactions among all the atoms in the molecule, and the dimension of the joint quantum state grows exponentially with the number of particles.
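To make this scaling concrete (a standard illustration added here, not a claim from the paper under review): the state of $n$ two-level quantum systems is a unit vector
\[
|\psi\rangle = \sum_{x \in \{0,1\}^n} \alpha_x\, |x\rangle, \qquad \sum_{x} |\alpha_x|^2 = 1,
\]
so a classical simulation must track $2^n$ complex amplitudes, a number that doubles with each additional particle.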
Several important quantum algorithms have been developed using the gate model of quantum computing. These include the Deutsch--Jozsa algorithm, a toy problem that a quantum computer can solve better than any classical computer.
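For a sense of the separation (again a standard fact, not taken from the paper): for a function $f\colon \{0,1\}^n \to \{0,1\}$ promised to be either constant or balanced, a deterministic classical algorithm needs $2^{n-1}+1$ queries to $f$ in the worst case, whereas the Deutsch--Jozsa algorithm decides the promise with a single query.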
\section{Arguments/disagreements/concerns}
The complexity classes of quantum computation are thought to differ from those of classical computing, but this has not been proven. All this work could end up yielding no appreciable gains for most problems, and the technological lead that digital computers hold means that we would be better off using a classical computer for most problems if a quantum computer cannot reduce their complexity.
While a taxonomy separating functional and imperative languages is useful as an analogy to digital computing, there is probably a more logical way to split quantum languages. One interesting taxonomy would be by the levels of abstraction available in different languages. For example, QASM only allows direct control of quantum gates, while Qiskit gives access to application modules for different industries, APIs that connect to hardware, and built-in simulators. A taxonomy that takes into account the depth or levels of abstraction available in different languages would be useful.
\section{Interesting findings}
Early languages focused on the quantum Turing machine. This model was introduced by Deutsch and extended to study the complexity theory of quantum computing. It was, however, not a very useful machine for implementing quantum algorithms and has been supplanted by the gate model of quantum computing for algorithm design.
The author proposes a taxonomy of quantum programming languages: imperative, functional, and other quantum programming language paradigms. This is useful to a classical programmer, as we are already familiar with the imperative/functional divide.
\section{Quotations}
Quantum programming languages may be taxonomically divided into (A) imperative quantum programming languages, (B) functional quantum programming languages, and (C) others (may include mathematical formalisms not intended for computer execution).
The difficulties in formulating useful, effective, and in some sense universally capable quantum programming languages arise from several root causes. First, quantum mechanics itself (and by extension quantum information theory) is incomplete. Specifically missing is a theory of measurement. Quantum theory is quite successful in describing the evolution of quantum states, and even in predicting probabilistic outcomes after measurements have been made, but the process of state collapse is (with a few exceptional cases) not covered. So issues such as decoherence, diffusion, entanglement between particles (or entangled state, of whatever physical instantiation), and communication (including teleportation) are not well defined from a quantum information (and by extension quantum computation) perspective. Work with semantic formalisms and linear logic attempt to redress this by providing a firmer basis in a more complete logic consistent with quantum mechanics.
\end{document}
|
[STATEMENT]
lemma beforeM:
"P \<turnstile> C sees M,b: Ts\<rightarrow>T = body in D \<Longrightarrow>
compP\<^sub>2 P,D,M,0 \<rhd> compE\<^sub>2 body @ [Return]"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. P \<turnstile> C sees M, b : Ts\<rightarrow>T = body in D \<Longrightarrow> compP\<^sub>2 P,D,M,0 \<rhd> compE\<^sub>2 body @ [Return]
[PROOF STEP]
by(drule sees_method_idemp) (simp add:before_def compMb\<^sub>2_def)
|
abstract type CopperPlatePowerModel <: PM.AbstractActivePowerFormulation end
abstract type StandardPTDFForm <: PM.DCPlosslessForm end
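# The two abstract types above carry no data; they exist purely so that model-building
# functions can dispatch on the formulation. A minimal sketch, added for illustration
# (`network_detail` is hypothetical and not part of PowerModels):
network_detail(::Type{CopperPlatePowerModel}) = "single system-wide power balance, no network"
network_detail(::Type{StandardPTDFForm}) = "linearized line flows computed from a PTDF matrix"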
#= This code is from PowerModels' network definitions. Added here just for reference.
export
# exact non-convex models
ACPPowerModel, StandardACPForm,
ACRPowerModel, StandardACRForm,
ACTPowerModel, StandardACTForm,
# linear approximations
DCPPowerModel, DCPlosslessForm,
NFAPowerModel, NFAForm,
# quadratic approximations
DCPLLPowerModel, StandardDCPLLForm,
LPACCPowerModel, AbstractLPACCForm,
# quadratic relaxations
SOCWRPowerModel, SOCWRForm,
QCWRPowerModel, QCWRForm,
SOCWRConicPowerModel, SOCWRConicForm,
QCWRTriPowerModel, QCWRTriForm,
SOCBFPowerModel, SOCBFForm,
SOCBFConicPowerModel, SOCBFConicForm,
# sdp relaxations
SDPWRMPowerModel, SDPWRMForm,
SparseSDPWRMPowerModel, SparseSDPWRMForm
##### Top Level Abstract Types #####
"active power only models"
abstract type AbstractActivePowerFormulation <: AbstractPowerFormulation end
"variants that target conic solvers"
abstract type AbstractConicPowerFormulation <: AbstractPowerFormulation end
"for branch flow models"
abstract type AbstractBFForm <: AbstractPowerFormulation end
"for variants of branch flow models that target QP or NLP solvers"
abstract type AbstractBFQPForm <: AbstractBFForm end
"for variants of branch flow models that target conic solvers"
abstract type AbstractBFConicForm <: AbstractBFForm end
##### Exact Non-Convex Models #####
""
abstract type AbstractACPForm <: AbstractPowerFormulation end
""
abstract type StandardACPForm <: AbstractACPForm end
"""
AC power flow formulation with polar bus voltage variables.
The seminal reference of AC OPF:
```
@article{carpentier1962contribution,
title={Contribution to the economic dispatch problem},
author={Carpentier, J},
journal={Bulletin de la Societe Francoise des Electriciens},
volume={3},
number={8},
pages={431--447},
year={1962}
}
```
History and discussion:
```
@techreport{Cain2012,
author = {Cain, Mary B and {O' Neill}, Richard P and Castillo, Anya},
title = {{History of optimal power flow and formulations}},
year = {2012}
pages = {1--36},
url = {https://www.ferc.gov/industries/electric/indus-act/market-planning/opf-papers/acopf-1-history-formulation-testing.pdf}
}
```
"""
const ACPPowerModel = GenericPowerModel{StandardACPForm}
""
ACPPowerModel(data::Dict{String,Any}; kwargs...) = GenericPowerModel(data, StandardACPForm; kwargs...)
""
abstract type AbstractACRForm <: AbstractPowerFormulation end
""
abstract type StandardACRForm <: AbstractACRForm end
"""
AC power flow formulation with rectangular bus voltage variables.
```
@techreport{Cain2012,
author = {Cain, Mary B and {O' Neill}, Richard P and Castillo, Anya},
pages = {1--36},
title = {{History of optimal power flow and formulations}},
url = {https://www.ferc.gov/industries/electric/indus-act/market-planning/opf-papers/acopf-1-history-formulation-testing.pdf}
year = {2012}
}
```
"""
const ACRPowerModel = GenericPowerModel{StandardACRForm}
"default rectangular AC constructor"
ACRPowerModel(data::Dict{String,Any}; kwargs...) = GenericPowerModel(data, StandardACRForm; kwargs...)
""
abstract type AbstractACTForm <: AbstractPowerFormulation end
""
abstract type StandardACTForm <: AbstractACTForm end
"""
AC power flow formulation (nonconvex) with variables for voltage angle, voltage magnitude squared, and real and imaginary part of voltage crossproducts. A tangens constraint is added to represent meshed networks in an exact manner.
```
@ARTICLE{4349090,
author={R. A. Jabr},
title={A Conic Quadratic Format for the Load Flow Equations of Meshed Networks},
journal={IEEE Transactions on Power Systems},
year={2007},
month={Nov},
volume={22},
number={4},
pages={2285-2286},
doi={10.1109/TPWRS.2007.907590},
ISSN={0885-8950}
}
```
"""
const ACTPowerModel = GenericPowerModel{StandardACTForm}
"default AC constructor"
ACTPowerModel(data::Dict{String,Any}; kwargs...) = GenericPowerModel(data, StandardACTForm; kwargs...)
##### Linear Approximations #####
""
abstract type AbstractDCPForm <: AbstractActivePowerFormulation end
"active power only formulations where p[(i,j)] = -p[(j,i)]"
abstract type DCPlosslessForm <: AbstractDCPForm end
"""
Linearized 'DC' power flow formulation with polar voltage variables.
```
@ARTICLE{4956966,
author={B. Stott and J. Jardim and O. Alsac},
journal={IEEE Transactions on Power Systems},
title={DC Power Flow Revisited},
year={2009},
month={Aug},
volume={24},
number={3},
pages={1290-1300},
doi={10.1109/TPWRS.2009.2021235},
ISSN={0885-8950}
}
```
"""
const DCPPowerModel = GenericPowerModel{DCPlosslessForm}
"default DC constructor"
DCPPowerModel(data::Dict{String,Any}; kwargs...) = GenericPowerModel(data, DCPlosslessForm; kwargs...)
abstract type NFAForm <: DCPlosslessForm end
"""
An active-power-only network flow approximation, also known as the transportation model.
"""
const NFAPowerModel = GenericPowerModel{NFAForm}
"default DC constructor"
NFAPowerModel(data::Dict{String,Any}; kwargs...) = GenericPowerModel(data, NFAForm; kwargs...)
##### Quadratic Approximations #####
""
abstract type AbstractDCPLLForm <: AbstractDCPForm end
""
abstract type StandardDCPLLForm <: AbstractDCPLLForm end
""
const DCPLLPowerModel = GenericPowerModel{StandardDCPLLForm}
"default DC constructor"
DCPLLPowerModel(data::Dict{String,Any}; kwargs...) = GenericPowerModel(data, StandardDCPLLForm; kwargs...)
""
abstract type AbstractLPACForm <: AbstractPowerFormulation end
abstract type AbstractLPACCForm <: AbstractLPACForm end
"""
The LPAC Cold-Start AC Power Flow Approximation.
Note that the LPAC Cold-Start model requires the least amount of information
but is also the least accurate variant of the LPAC formulations. If a
nominal AC operating point is available, the LPAC Warm-Start model will provide
improved accuracy.
The original publication suggests using polyhedral outer approximations for
the cosine and line thermal limit constraints. Given the recent improvements in
MIQCQP solvers, this implementation uses quadratic functions for those
constraints.
```
@article{doi:10.1287/ijoc.2014.0594,
author = {Coffrin, Carleton and Van Hentenryck, Pascal},
title = {A Linear-Programming Approximation of AC Power Flows},
journal = {INFORMS Journal on Computing},
volume = {26},
number = {4},
pages = {718-734},
year = {2014},
doi = {10.1287/ijoc.2014.0594},
eprint = {https://doi.org/10.1287/ijoc.2014.0594}
}
```
"""
const LPACCPowerModel = GenericPowerModel{AbstractLPACCForm}
"default LPACC constructor"
LPACCPowerModel(data::Dict{String,Any}; kwargs...) =
GenericPowerModel(data, AbstractLPACCForm; kwargs...)
##### Quadratic Relaxations #####
""
abstract type AbstractWRForm <: AbstractPowerFormulation end
""
abstract type AbstractWRConicForm <: AbstractConicPowerFormulation end
""
abstract type SOCWRConicForm <: AbstractWRConicForm end
""
abstract type SOCWRForm <: AbstractWRForm end
"""
Second-order cone relaxation of bus injection model of AC OPF.
The implementation casts this as a convex quadratically constrained problem.
```
@article{1664986,
author={R. A. Jabr},
title={Radial distribution load flow using conic programming},
journal={IEEE Transactions on Power Systems},
year={2006},
month={Aug},
volume={21},
number={3},
pages={1458-1459},
doi={10.1109/TPWRS.2006.879234},
ISSN={0885-8950}
}
```
"""
const SOCWRPowerModel = GenericPowerModel{SOCWRForm}
"""
Second-order cone relaxation of bus injection model of AC OPF.
This implementation casts the problem as a convex conic problem.
"""
const SOCWRConicPowerModel = GenericPowerModel{SOCWRConicForm}
"default SOC constructor"
SOCWRPowerModel(data::Dict{String,Any}; kwargs...) = GenericPowerModel(data, SOCWRForm; kwargs...)
SOCWRConicPowerModel(data::Dict{String,Any}; kwargs...) = GenericPowerModel(data, SOCWRConicForm; kwargs...)
""
abstract type QCWRForm <: AbstractWRForm end
"""
"Quadratic-Convex" relaxation of AC OPF
```
@Article{Hijazi2017,
author="Hijazi, Hassan and Coffrin, Carleton and Hentenryck, Pascal Van",
title="Convex quadratic relaxations for mixed-integer nonlinear programs in power systems",
journal="Mathematical Programming Computation",
year="2017",
month="Sep",
volume="9",
number="3",
pages="321--367",
issn="1867-2957",
doi="10.1007/s12532-016-0112-z",
url="https://doi.org/10.1007/s12532-016-0112-z"
}
```
"""
const QCWRPowerModel = GenericPowerModel{QCWRForm}
"default QC constructor"
QCWRPowerModel(data::Dict{String,Any}; kwargs...) = GenericPowerModel(data, QCWRForm; kwargs...)
""
abstract type QCWRTriForm <: QCWRForm end
"""
"Quadratic-Convex" relaxation of AC OPF with convex hull of triple product
```
@Article{Hijazi2017,
author="Hijazi, Hassan and Coffrin, Carleton and Hentenryck, Pascal Van",
title="Convex quadratic relaxations for mixed-integer nonlinear programs in power systems",
journal="Mathematical Programming Computation",
year="2017",
month="Sep",
volume="9",
number="3",
pages="321--367",
issn="1867-2957",
doi="10.1007/s12532-016-0112-z",
url="https://doi.org/10.1007/s12532-016-0112-z"
}
```
"""
const QCWRTriPowerModel = GenericPowerModel{QCWRTriForm}
"default QC trilinear model constructor"
QCWRTriPowerModel(data::Dict{String,Any}; kwargs...) = GenericPowerModel(data, QCWRTriForm; kwargs...)
""
abstract type SOCBFForm <: AbstractBFQPForm end
"""
Second-order cone relaxation of branch flow model
The implementation casts this as a convex quadratically constrained problem.
```
@INPROCEEDINGS{6425870,
author={M. Farivar and S. H. Low},
title={Branch flow model: Relaxations and convexification},
booktitle={2012 IEEE 51st IEEE Conference on Decision and Control (CDC)},
year={2012},
month={Dec},
pages={3672-3679},
doi={10.1109/CDC.2012.6425870},
ISSN={0191-2216}
}
```
Extended as discussed in:
```
@misc{1506.04773,
author = {Carleton Coffrin and Hassan L. Hijazi and Pascal Van Hentenryck},
title = {DistFlow Extensions for AC Transmission Systems},
year = {2018},
eprint = {arXiv:1506.04773},
url = {https://arxiv.org/abs/1506.04773}
}
```
"""
const SOCBFPowerModel = GenericPowerModel{SOCBFForm}
"default SOC constructor"
SOCBFPowerModel(data::Dict{String,Any}; kwargs...) = GenericPowerModel(data, SOCBFForm; kwargs...)
""
abstract type SOCBFConicForm <: AbstractBFConicForm end
""
const SOCBFConicPowerModel = GenericPowerModel{SOCBFConicForm}
"default SOC constructor"
SOCBFConicPowerModel(data::Dict{String,Any}; kwargs...) = GenericPowerModel(data, SOCBFConicForm; kwargs...)
###### SDP Relaxations ######
""
abstract type AbstractWRMForm <: AbstractConicPowerFormulation end
""
abstract type SDPWRMForm <: AbstractWRMForm end
"""
Semi-definite relaxation of AC OPF
Originally proposed by:
```
@article{BAI2008383,
author = "Xiaoqing Bai and Hua Wei and Katsuki Fujisawa and Yong Wang",
title = "Semidefinite programming for optimal power flow problems",
journal = "International Journal of Electrical Power & Energy Systems",
volume = "30",
number = "6",
pages = "383 - 392",
year = "2008",
issn = "0142-0615",
doi = "https://doi.org/10.1016/j.ijepes.2007.12.003",
url = "http://www.sciencedirect.com/science/article/pii/S0142061507001378",
}
```
First paper to use "W" variables in the BIM of AC OPF:
```
@INPROCEEDINGS{6345272,
author={S. Sojoudi and J. Lavaei},
title={Physics of power networks makes hard optimization problems easy to solve},
booktitle={2012 IEEE Power and Energy Society General Meeting},
year={2012},
month={July},
pages={1-8},
doi={10.1109/PESGM.2012.6345272},
ISSN={1932-5517}
}
```
"""
const SDPWRMPowerModel = GenericPowerModel{SDPWRMForm}
""
SDPWRMPowerModel(data::Dict{String,Any}; kwargs...) = GenericPowerModel(data, SDPWRMForm; kwargs...)
abstract type SparseSDPWRMForm <: SDPWRMForm end
"""
Sparsity-exploiting semidefinite relaxation of AC OPF
Proposed in:
```
@article{doi:10.1137/S1052623400366218,
author = {Fukuda, M. and Kojima, M. and Murota, K. and Nakata, K.},
title = {Exploiting Sparsity in Semidefinite Programming via Matrix Completion I: General Framework},
journal = {SIAM Journal on Optimization},
volume = {11},
number = {3},
pages = {647-674},
year = {2001},
doi = {10.1137/S1052623400366218},
URL = {https://doi.org/10.1137/S1052623400366218},
eprint = {https://doi.org/10.1137/S1052623400366218}
}
```
Original application to OPF by:
```
@ARTICLE{6064917,
author={R. A. Jabr},
title={Exploiting Sparsity in SDP Relaxations of the OPF Problem},
journal={IEEE Transactions on Power Systems},
volume={27},
number={2},
pages={1138-1139},
year={2012},
month={May},
doi={10.1109/TPWRS.2011.2170772},
ISSN={0885-8950}
}
```
"""
const SparseSDPWRMPowerModel = GenericPowerModel{SparseSDPWRMForm}
""
SparseSDPWRMPowerModel(data::Dict{String,Any}; kwargs...) = GenericPowerModel(data, SparseSDPWRMForm; kwargs...)
##### Union Types #####
#
# These types should not be exported because they exist only to prevent code
# replication
#
# Note that Union types are discouraged in Julia,
# https://docs.julialang.org/en/v1/manual/style-guide/#Avoid-strange-type-Unions-1
# and should be used with discretion.
#
# If you are about to add a union type, first double check if refactoring the
# type hierarchy can resolve the issue instead.
#
AbstractWRForms = Union{AbstractACTForm, AbstractWRForm, AbstractWRConicForm, AbstractWRMForm}
AbstractWForms = Union{AbstractWRForms, AbstractBFForm}
AbstractPForms = Union{AbstractACPForm, AbstractACTForm, AbstractDCPForm, AbstractLPACForm}
"union of all conic form branches"
AbstractConicForms = Union{AbstractConicPowerFormulation, AbstractBFConicForm}
=#
|
\section{Rewards and the Epoch Boundary}
\label{sec:epoch}
\newcommand{\UTxOEpState}{\type{UTxOEpState}}
\newcommand{\Acnt}{\type{Acnt}}
\newcommand{\PlReapState}{\type{PlReapState}}
\newcommand{\NewPParamEnv}{\type{NewPParamEnv}}
\newcommand{\Snapshots}{\type{Snapshots}}
\newcommand{\SnapshotEnv}{\type{SnapshotEnv}}
\newcommand{\SnapshotState}{\type{SnapshotState}}
\newcommand{\NewPParamState}{\type{NewPParamState}}
\newcommand{\EpochState}{\type{EpochState}}
\newcommand{\BlocksMade}{\type{BlocksMade}}
\newcommand{\Snapshot}{\type{Snapshot}}
\newcommand{\RewardUpdate}{\type{RewardUpdate}}
\newcommand{\obligation}[4]{\fun{obligation}~ \var{#1}~ \var{#2}~ \var{#3}~ \var{#4}}
\newcommand{\reward}[7]{\fun{reward}
~ \var{#1}~ \var{#2}~ \var{#3}~ \var{#4}~ \var{#5}~ \var{#6}~ \var{#7}}
\newcommand{\rewardOnePool}[9]{\fun{rewardOnePool}
~\var{#1}~\var{#2}~\var{#3}~\var{#4}~\var{#5}~\var{#6}~\var{#7}~\var{#8}~\var{#9}}
\newcommand{\isActive}[4]{\fun{isActive}~ \var{#1}~ \var{#2}~ \var{#3}~ \var{#4}}
\newcommand{\activeStake}[5]{\fun{activeStake}~ \var{#1}~ \var{#2}~ \var{#3}~ \var{#4}~ \var{#5}}
\newcommand{\poolRefunds}[3]{\fun{poolRefunds}~ \var{#1}~ \var{#2}~ \var{#3}}
\newcommand{\poolStake}[3]{\fun{poolStake}~ \var{#1}~ \var{#2}~ \var{#3}}
\newcommand{\stakeDistr}[3]{\fun{stakeDistr}~ \var{#1}~ \var{#2}~ \var{#3}}
\newcommand{\lReward}[4]{\fun{r_{operator}}~ \var{#1}~ \var{#2}~ \var{#3}~ {#4}}
\newcommand{\mReward}[4]{\fun{r_{member}}~ \var{#1}~ \var{#2}~ \var{#3}~ {#4}}
\newcommand{\poolReward}[5]{\fun{poolReward}~\var{#1}~{#2}~\var{#3}~\var{#4}~\var{#5}}
\newcommand{\createRUpd}[2]{\fun{createRUpd}~\var{#1}~\var{#2}}
In order to handle rewards and staking, we must change the stake distribution
calculation so that it adds up only the Ada in the UTxO before performing any
further computation. In Figure~\ref{fig:functions:stake-distribution}
below, we do so using the function $\fun{utxoAda}$, which returns the amount of Ada
held at an address.
%%
%% Figure Functions for Stake Distribution
%%
\begin{figure}[htb]
\emph{Helper function}
%
\begin{align*}
& \fun{utxoAda} \in \UTxO \to \Addr \to \Coin \\
& \fun{utxoAda}~{\var{utxo}}~\var{addr} ~=~\sum_{\var{out} \in \range \var{utxo}, \fun{getAddr}~\var{out} = \var{addr}} \fun{getCoin}~\var{out}
\end{align*}
%
\emph{Stake Distribution (using functions and maps as relations)}
%
\begin{align*}
& \fun{stakeDistr} \in \UTxO \to \DState \to \PState \to \Snapshot \\
& \fun{stakeDistr}~{utxo}~{dstate}~{pstate} = \\
& ~~~~ \big((\dom{\var{activeDelegs}})
\restrictdom\left(\fun{aggregate_{+}}~\var{stakeRelation}\right),
~\var{delegations},~\var{poolParams}\big)\\
& \where \\
& ~~~~ (~\var{rewards},~\var{delegations},~\var{ptrs},~\wcard,~\wcard,~\wcard)
= \var{dstate} \\
& ~~~~ (~\var{poolParams},~\wcard,~\wcard) = \var{pstate} \\
& ~~~~ \var{stakeRelation} = \left(
\left(\fun{stakeCred_b}^{-1}\cup\left(\fun{addrPtr}\circ\var{ptr}\right)^{-1}\right)
\circ\left(\hldiff{\fun{utxoAda}}~{\var{utxo}}\right)
\right)
\cup \left(\fun{stakeCred_r}^{-1}\circ\var{rewards}\right) \\
& ~~~~ \var{activeDelegs} =
(\dom{rewards}) \restrictdom \var{delegations} \restrictrange (\dom{poolParams})
\end{align*}
\caption{Stake Distribution Function}
\label{fig:functions:stake-distribution}
\end{figure}
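As a small worked example (ours, for illustration): if $\var{utxo}$ consists of three outputs holding $5$ Ada at $\var{addr}_1$, $3$ Ada at $\var{addr}_2$ and $2$ Ada at $\var{addr}_1$, then $\fun{utxoAda}~\var{utxo}~\var{addr}_1 = 5 + 2 = 7$, and it is this aggregate, rather than the individual outputs, that enters $\var{stakeRelation}$.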
|
using Test
using DataStructures
using problems.linkedlist
@testset "dedupe 1 (immutable)" begin
@test dedupe(list()) == list()
@testset "case 1" begin
actual = collect(dedupe(list(4, 6, 3, 1, 3, 2, 6, 1)))
expected = [4, 6, 3, 1, 2]
@test actual == expected
end
@testset "case 2" begin
actual = collect(dedupe(list(4, 6, 3, 1, 3, 2, 6, 1, 7, 1, 2)))
expected = [4, 6, 3, 1, 2, 7]
@test actual == expected
end
@testset "case 3" begin
actual = collect(dedupe(list(4, 6, 3, 1, 3, 2, 6, 1, 7, 1, 2)))
expected = [4, 6, 3, 1, 2, 7]
@test actual == expected
end
@testset "case 4" begin
actual = collect(dedupe(list(1, 1, 1, 2, 3, 1, 1, 2, 3, 1, 4, 4)))
expected = [1, 2, 3, 4]
@test actual == expected
end
@testset "case 5" begin
actual = collect(dedupe(list(1, 1, 'a', 2, 3, 1, 1, 2, 3, 'b', 4, 4)))
expected = [1, 'a', 2, 3, 'b', 4]
@test actual == expected
end
end
@testset "dedupe 2 (mutable)" begin
@test dedupe(MutableLinkedList()) == MutableLinkedList()
@testset "case 1" begin
lst = to_mutable_list([4, 6, 3, 1, 3, 2, 6, 1])
actual = collect(dedupe(lst))
expected = [4, 6, 3, 1, 2]
@test actual == expected
end
@testset "case 2" begin
lst = to_mutable_list([4, 6, 3, 1, 3, 2, 6, 1, 7, 1, 2])
actual = collect(dedupe(lst))
expected = [4, 6, 3, 1, 2, 7]
@test actual == expected
end
@testset "case 3" begin
lst = to_mutable_list([4, 6, 3, 1, 3, 2, 6, 1, 7, 1, 2])
actual = collect(dedupe(lst))
expected = [4, 6, 3, 1, 2, 7]
@test actual == expected
end
@testset "case 4" begin
lst = to_mutable_list([1, 1, 1, 2, 3, 1, 1, 2, 3, 1, 4, 4])
actual = collect(dedupe(lst))
expected = [1, 2, 3, 4]
@test actual == expected
end
@testset "case 5" begin
lst = to_mutable_list([1, 1, 'a', 2, 3, 1, 1, 2, 3, 'b', 4, 4])
actual = collect(dedupe(lst))
expected = [1, 'a', 2, 3, 'b', 4]
@test actual == expected
end
end
@testset "k-th last" begin
@testset "case 1" begin
lst = list(4, 6, 3, 1, 3, 2, 6, 1)
actual = kth_last(lst, 0)
expected = 1
@test actual == expected
end
@testset "case 2" begin
lst = list(4, 6, 3, 1, 3, 2, 6, 1)
actual = kth_last(lst, 1)
expected = 6
@test actual == expected
end
@testset "case 3" begin
lst = list(4, 6, 3, 1, 3, 2, 6, 1)
actual = kth_last(lst, 2)
expected = 2
@test actual == expected
end
@testset "case 4" begin
lst = list(4, 6, 3, 1, 3, 2, 6, 1)
actual = kth_last(lst, 7)
expected = 4
@test actual == expected
end
@testset "case 5" begin
lst = list(4, 6, 3, 1, 3, 2, 6, 1)
actual = kth_last(lst, 6)
expected = 6
@test actual == expected
end
end
@testset "delete middle" begin
@testset "case 1" begin
lst = list(4, 6, 3, 1, 3, 2, 6, 1)
actual = delete_middle(lst, 4)
expected = list(4, 6, 3, 3, 2, 6, 1)
@test actual == expected
end
@testset "case 2" begin
lst = list(4, 6, 2, 3, 1, 3, 2, 6, 1)
actual = delete_middle(lst, 3)
expected = list(4, 6, 3, 1, 3, 2, 6, 1)
@test actual == expected
end
@testset "case 3" begin
lst = list(4, 6, 2, 3, 1, 3, 2, 6, 1, 7)
actual = delete_middle(lst, 6)
expected = list(4, 6, 2, 3, 1, 2, 6, 1, 7)
@test actual == expected
end
@testset "case 4" begin
lst = list(4, 6, 2, 3, 1, 3, 2, 6, 1, 7, 8)
actual = delete_middle(lst, 2)
expected = list(4, 2, 3, 1, 3, 2, 6, 1, 7, 8)
@test actual == expected
end
@testset "case 5" begin
lst = list(4, 6, 2, 3, 1, 3, 2, 6, 1, 7, 8, 10)
actual = delete_middle(lst, 10)
expected = list(4, 6, 2, 3, 1, 3, 2, 6, 1, 8, 10)
@test actual == expected
end
end
@testset "partition" begin
@testset "case 1" begin
lst = list(4, 6, 3, 1, 3, 2, 6, 1)
actual = partition(lst, 4)
expected = list(1, 2, 3, 1, 3, 6, 6, 4)
@test actual == expected
end
@testset "case 2" begin
lst = list(4, 6, 2, 3, 1, 3, 2, 6, 1)
actual = partition(lst, 3)
expected = list(1, 2, 1, 2, 6, 3, 3, 6, 4)
@test actual == expected
end
@testset "case 3" begin
lst = list(4, 6, 2, 3, 1, 3, 2, 6, 1, 7)
actual = partition(lst, 6)
expected = list(1, 2, 3, 1, 3, 2, 4, 7, 6, 6)
@test actual == expected
end
@testset "case 4" begin
lst = list(4, 6, 2, 3, 1, 3, 2, 6, 1, 7, 8)
actual = partition(lst, 2)
expected = list(1, 1, 8, 7, 6, 2, 3, 3, 2, 6, 4)
@test actual == expected
end
@testset "case 5" begin
lst = list(4, 6, 2, 3, 1, 3, 2, 6, 1, 7, 8, 10)
actual = partition(lst, 10)
expected = list(8, 7, 1, 6, 2, 3, 1, 3, 2, 6, 4, 10)
@test actual == expected
end
end
|
A new badge was revealed in May 2007, for the 2007–08 season and beyond. The new badge includes a star to represent the European Cup win in 1982, and has a light blue background behind Villa's 'lion rampant'. The traditional motto "Prepared" remains in the badge, and the name Aston Villa has been shortened to AVFC, FC having been omitted from the previous badge. The lion is now unified as opposed to the fragmented lions of the past. Randy Lerner petitioned fans to help with the design of the new badge.
|
If $N$ is a set of measure zero, then any subset of $N$ also has measure zero.
|
(* Title: HOL/Auth/n_g2kAbsAfter_lemma_inv__17_on_rules.thy
Author: Yongjian Li and Kaiqiang Duan, State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
Copyright 2016 State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
*)
header{*The n_g2kAbsAfter Protocol Case Study*}
theory n_g2kAbsAfter_lemma_inv__17_on_rules imports n_g2kAbsAfter_lemma_on_inv__17
begin
section{*All lemmas on causal relation between inv__17*}
lemma lemma_inv__17_on_rules:
assumes b1: "r \<in> rules N" and b2: "(f=inv__17 )"
shows "invHoldForRule s f r (invariants N)"
proof -
have c1: "(\<exists> d. d\<le>N\<and>r=n_n_Store_i1 d)\<or>
(\<exists> d. d\<le>N\<and>r=n_n_AStore_i1 d)\<or>
(r=n_n_SendReqS_j1 )\<or>
(r=n_n_SendReqEI_i1 )\<or>
(r=n_n_SendReqES_i1 )\<or>
(r=n_n_RecvReq_i1 )\<or>
(r=n_n_SendInvE_i1 )\<or>
(r=n_n_SendInvS_i1 )\<or>
(r=n_n_SendInvAck_i1 )\<or>
(r=n_n_RecvInvAck_i1 )\<or>
(r=n_n_SendGntS_i1 )\<or>
(r=n_n_SendGntE_i1 )\<or>
(r=n_n_RecvGntS_i1 )\<or>
(r=n_n_RecvGntE_i1 )\<or>
(r=n_n_ASendReqIS_j1 )\<or>
(r=n_n_ASendReqSE_j1 )\<or>
(r=n_n_ASendReqEI_i1 )\<or>
(r=n_n_ASendReqES_i1 )\<or>
(r=n_n_SendReqEE_i1 )\<or>
(r=n_n_ARecvReq_i1 )\<or>
(r=n_n_ASendInvE_i1 )\<or>
(r=n_n_ASendInvS_i1 )\<or>
(r=n_n_ASendInvAck_i1 )\<or>
(r=n_n_ARecvInvAck_i1 )\<or>
(r=n_n_ASendGntS_i1 )\<or>
(r=n_n_ASendGntE_i1 )\<or>
(r=n_n_ARecvGntS_i1 )\<or>
(r=n_n_ARecvGntE_i1 )"
apply (cut_tac b1, auto) done
moreover {
assume d1: "(\<exists> d. d\<le>N\<and>r=n_n_Store_i1 d)"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_Store_i1Vsinv__17) done
}
moreover {
assume d1: "(\<exists> d. d\<le>N\<and>r=n_n_AStore_i1 d)"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_AStore_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_SendReqS_j1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendReqS_j1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_SendReqEI_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendReqEI_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_SendReqES_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendReqES_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_RecvReq_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_RecvReq_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_SendInvE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendInvE_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_SendInvS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendInvS_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_SendInvAck_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendInvAck_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_RecvInvAck_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_RecvInvAck_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_SendGntS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendGntS_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_SendGntE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendGntE_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_RecvGntS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_RecvGntS_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_RecvGntE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_RecvGntE_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_ASendReqIS_j1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendReqIS_j1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_ASendReqSE_j1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendReqSE_j1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_ASendReqEI_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendReqEI_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_ASendReqES_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendReqES_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_SendReqEE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendReqEE_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_ARecvReq_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ARecvReq_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_ASendInvE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendInvE_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_ASendInvS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendInvS_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_ASendInvAck_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendInvAck_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_ARecvInvAck_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ARecvInvAck_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_ASendGntS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendGntS_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_ASendGntE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendGntE_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_ARecvGntS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ARecvGntS_i1Vsinv__17) done
}
moreover {
assume d1: "(r=n_n_ARecvGntE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ARecvGntE_i1Vsinv__17) done
}
ultimately show "invHoldForRule s f r (invariants N)"
by satx
qed
end
|
/-
Copyright (c) 2022 Thomas Browning. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Thomas Browning
-/
import group_theory.complement
import group_theory.group_action.basic
import group_theory.index
/-!
# The Transfer Homomorphism
In this file we construct the transfer homomorphism.
## Main definitions
- `diff ϕ S T` : The difference of two left transversals `S` and `T` under the homomorphism `ϕ`.
- `transfer ϕ` : The transfer homomorphism induced by `ϕ`.
-/
open_locale big_operators
variables {G : Type*} [group G] {H : subgroup G} {A : Type*} [comm_group A] (ϕ : H →* A)
namespace subgroup
namespace left_transversals
open finset mul_action
open_locale pointwise
variables (R S T : left_transversals (H : set G)) [fintype (G ⧸ H)]
/-- The difference of two left transversals -/
@[to_additive "The difference of two left transversals"]
noncomputable def diff : A :=
let α := mem_left_transversals.to_equiv S.2, β := mem_left_transversals.to_equiv T.2 in
∏ q, ϕ ⟨(α q)⁻¹ * β q, quotient.exact' ((α.symm_apply_apply q).trans (β.symm_apply_apply q).symm)⟩
@[to_additive] lemma diff_mul_diff : diff ϕ R S * diff ϕ S T = diff ϕ R T :=
prod_mul_distrib.symm.trans (prod_congr rfl (λ q hq, (ϕ.map_mul _ _).symm.trans (congr_arg ϕ
(by simp_rw [subtype.ext_iff, coe_mul, coe_mk, mul_assoc, mul_inv_cancel_left]))))
@[to_additive] lemma diff_self : diff ϕ T T = 1 :=
mul_right_eq_self.mp (diff_mul_diff ϕ T T T)
@[to_additive] lemma diff_inv : (diff ϕ S T)⁻¹ = diff ϕ T S :=
inv_eq_of_mul_eq_one_right $ (diff_mul_diff ϕ S T S).trans $ diff_self ϕ S
@[to_additive] lemma smul_diff_smul (g : G) : diff ϕ (g • S) (g • T) = diff ϕ S T :=
prod_bij' (λ q _, g⁻¹ • q) (λ _ _, mem_univ _) (λ _ _, congr_arg ϕ (by simp_rw [coe_mk,
smul_apply_eq_smul_apply_inv_smul, smul_eq_mul, mul_inv_rev, mul_assoc, inv_mul_cancel_left]))
(λ q _, g • q) (λ _ _, mem_univ _) (λ q _, smul_inv_smul g q) (λ q _, inv_smul_smul g q)
end left_transversals
end subgroup
namespace monoid_hom
variables [fintype (G ⧸ H)]
open subgroup subgroup.left_transversals
/-- Given `ϕ : H →* A` from `H : subgroup G` to a commutative group `A`,
the transfer homomorphism is `transfer ϕ : G →* A`. -/
@[to_additive "Given `ϕ : H →+ A` from `H : add_subgroup G` to an additive commutative group `A`,
the transfer homomorphism is `transfer ϕ : G →+ A`."]
noncomputable def transfer : G →* A :=
let T : left_transversals (H : set G) := inhabited.default in
{ to_fun := λ g, diff ϕ T (g • T),
map_one' := by rw [one_smul, diff_self],
map_mul' := λ g h, by rw [mul_smul, ←diff_mul_diff, smul_diff_smul] }
variables (T : left_transversals (H : set G))
@[to_additive] lemma transfer_def (g : G) : transfer ϕ g = diff ϕ T (g • T) :=
by rw [transfer, ←diff_mul_diff, ←smul_diff_smul, mul_comm, diff_mul_diff]; refl
end monoid_hom
|
function orientation = FlyOrient(subset_frame, threshold)
%FLYORIENT
% Usage:
% orientation = FlyOrient(subset_frame, threshold)
%
% This function takes in a subset frame around the fly (calculated by
% FindFly) and discards the 3D data by placing points where the pixel intensity
% is larger than the user-chosen threshold. Then, Principal Components
% Analysis (by way of the pca1 function) is performed on the resulting
% scatter plot to find the direction of maximum variance --- this direction
% is taken to be the fly's (ambiguous) orientation.
% orientation is a vector of two angles (differing by pi) that define the
% body axis. The first element is an angle in the upper half plane; the
% second element is an angle in the lower half plane.
% Written by Dan Valente
% 11 October 2006
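%
% Example usage (hypothetical data, for illustration only):
%   frame = rand(64);                    % grayscale subset frame around the fly
%   orient = FlyOrient(frame, 0.8);      % [angle in UHP, angle in LHP], in radians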
%Normalize frame data by pixel of maximum intensity
subset_frame = subset_frame/max(max(subset_frame));
% Put dots where fly is and do PCA on reduced data set
[rows, cols] = find(subset_frame >= threshold);
rows = length(subset_frame(:,1))-rows+1;
x = [cols';rows'];
[xnew, PC, V, data] = pca1(x);
% Find orientation vectors (two, mirrored across diagonal), and group into
% upper half and lower half planes.
a1 = PC(1,1);
b1 = PC(2,1);
a2 = -PC(1,1);
b2 = -PC(2,1);
% Since (a2,b2) = -(a1,b1), at least one of b1, b2 is non-negative, so the
% two branches below cover all cases (when b1 = 0 both angles lie on the
% horizontal axis and either branch works).
if (b1 >= 0)
    orientUHP = atan2(b1,a1);
    orientLHP = atan2(b2,a2);
else
    orientUHP = atan2(b2,a2);
    orientLHP = atan2(b1,a1);
end
% The vector we will return
orientation = [orientUHP orientLHP];
return;
|
/-
Copyright (c) 2023 María Inés de Frutos-Fernández. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author : María Inés de Frutos-Fernández
-/
import tactic -- imports all of Lean's tactics
/-!
# Propositional logic in Lean
`P : Prop` means that `P` is a proposition.
`h : P` means that `h` is a proof that `P` holds.
In the `Tactic state` pane of the `Lean infoview` window, the goal we want to prove is shown
after the `⊢` symbol. Above that line we have the active hypotheses.
Lean 3 uses the following notation for the logical connectives:
* `→` ("implies" -- typed `\to` or `\r`)
* `¬` ("not" -- typed `\not` or `\n`)
* `∧` ("and" -- typed `\and` or `\an`)
* `↔` ("if and only if" -- typed `\iff` or `\lr`)
* `∨` ("or" -- typed `\or` or `\v`)
NOTE: in VSCode, to find out how a UNICODE symbol was typed, just hover the cursor
over it.
# Tactics
To complete the exercises, you will need to use the following Lean tactics, whose
behaviour is described in the file `tacticas.lean`. The exercises in the first section
can be solved using only `intro`, `exact`, and `apply`. The comments at the
beginning of each section indicate which new tactics are needed.
* `intro`
* `exact`
* `apply`
* `triv`
* `exfalso`
* `change`
* `by_contra`
* `cases`
* `split`
* `refl`
* `rw`
* `have`
* `left`
* `right`
-/
-- `P`, `Q`, `R` and `S` denote propositions.
variables (P Q R S : Prop)
/- Convention: we will use variables whose name begins with `h` (such as `hP` or `h1`) for
proofs or hypotheses. -/
/-
## Implication
The `sorry` tactic is used to avoid the error that Lean would otherwise raise when we
do not provide the proof of a result.
In the following examples, replace the `sorry` with a proof that uses the tactics
`intro`, `exact` and `apply`. Remember to add a comma at the end of each instruction.
-/
section implicacion
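/- As a worked illustration (deliberately not one of the exercises below): given a
hypothesis `hP : P`, the tactic `exact hP` closes the goal `⊢ P`. -/
example (hP : P) : P :=
begin
  exact hP,
end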
/-- Every proposition follows from itself -/
example : P → P :=
begin
sorry
end
/- NOTE: The convention in Lean is that `P → Q → R` means `P → (Q → R)` (that is, the
implicit parentheses associate to the right).
In particular, in this example we are asked to prove `P → (Q → P)`.
As general advice, if we are not sure whether a certain operation associates to the
right or to the left, we can check by hovering the cursor over the corresponding line
in the `Tactic state`.
-/
example : P → Q → P :=
begin
sorry
end
/-- "Modus Ponens": dado `P` y `P → Q`, podemos deducir `Q`. -/
lemma modus_ponens : P → (P → Q) → Q :=
begin
sorry
end
/-- `→` is transitive. That is, if `P → Q` and `Q → R` are true, then so is `P → R`. -/
example : (P → Q) → (Q → R) → (P → R) :=
begin
sorry,
end
example : (P → Q → R) → (P → Q) → (P → R) :=
begin
sorry
end
/-
Finish the examples in this section if you want more practice with `intro`, `exact` and
`apply`; otherwise, you can move on to the section `verdadero_falso`.
-/
example : (P → Q) → ((P → Q) → P) → Q :=
begin
sorry
end
example : ((P → Q) → R) → ((Q → R) → P) → ((R → P) → Q) → P :=
begin
sorry
end
example : ((Q → P) → P) → (Q → R) → (R → P) → P :=
begin
sorry
end
example : (((P → Q) → Q) → Q) → (P → Q) :=
begin
sorry
end
example :
(((P → Q → Q) → ((P → Q) → Q)) → R) →
((((P → P) → Q) → (P → P → Q)) → R) →
(((P → P → Q) → ((P → P) → Q)) → R) → R :=
begin
sorry
end
end implicacion
section verdadero_falso
/-!
# True and false
We introduce two new tactics:
* `triv`: proves `⊢ true`.
* `exfalso`: replaces the goal to be proved with `false`.
-/
example : true :=
begin
sorry
end
example : true → true :=
begin
sorry
end
example : false → true :=
begin
sorry
end
example : false → false :=
begin
sorry
end
example : (true → false) → false :=
begin
sorry
end
example : false → P :=
begin
sorry
end
example : true → false → true → false → true → false :=
begin
sorry
end
example : P → ((P → false) → false) :=
begin
sorry
end
example : (P → false) → P → Q :=
begin
sorry
end
example : (true → false) → P :=
begin
sorry
end
end verdadero_falso
section negacion
/-!
# Negation
In Lean, `¬ P` *is defined as* `P → false`. Therefore, `¬ P` and `P → false`
are *equal by definition* (we will discuss this later in the course).
The following tactics may be useful in this section:
* `change`
* `by_contra`
-/
example : ¬ true → false :=
begin
sorry
end
example : false → ¬ true :=
begin
sorry
end
example : ¬ false → true :=
begin
sorry
end
example : true → ¬ false :=
begin
sorry
end
example : false → ¬ P :=
begin
sorry
end
example : P → ¬ P → false :=
begin
sorry
end
example : P → ¬ (¬ P) :=
begin
sorry
end
example : (P → Q) → (¬ Q → ¬ P) :=
begin
sorry
end
example : ¬ ¬ false → false :=
begin
sorry
end
example : ¬ ¬ P → P :=
begin
sorry
end
example : (¬ Q → ¬ P) → (P → Q) :=
begin
sorry,
end
end negacion
section conjuncion
/-!
# Conjunction
We add the tactics:
* `cases`
* `split`
-/
example : P ∧ Q → P :=
begin
sorry
end
example : P ∧ Q → Q :=
begin
sorry
end
example : (P → Q → R) → (P ∧ Q → R) :=
begin
sorry
end
example : P → Q → P ∧ Q :=
begin
sorry
end
/-- `∧` is symmetric. -/
example : P ∧ Q → Q ∧ P :=
begin
sorry
end
example : P → P ∧ true :=
begin
sorry
end
example : false → P ∧ false :=
begin
sorry
end
/-- `∧` is transitive. -/
example : (P ∧ Q) → (Q ∧ R) → (P ∧ R) :=
begin
sorry,
end
example : ((P ∧ Q) → R) → (P → Q → R) :=
begin
sorry,
end
end conjuncion
section doble_implicacion
/-!
# Double implication
New tactics:
* `refl`
* `rw`
* `have`
-/
example : P ↔ P :=
begin
sorry
end
example : (P ↔ Q) → (Q ↔ P) :=
begin
sorry
end
example : (P ↔ Q) ↔ (Q ↔ P) :=
begin
sorry
end
example : (P ↔ Q) → (Q ↔ R) → (P ↔ R) :=
begin
sorry
end
example : P ∧ Q ↔ Q ∧ P :=
begin
sorry
end
example : ((P ∧ Q) ∧ R) ↔ (P ∧ (Q ∧ R)) :=
begin
sorry
end
example : P ↔ (P ∧ true) :=
begin
sorry
end
example : false ↔ (P ∧ false) :=
begin
sorry
end
example : (P ↔ Q) → (R ↔ S) → (P ∧ R ↔ Q ∧ S) :=
begin
sorry
end
/- One way to prove this theorem is by using `by_cases hP : P`. However, it is also
possible to give a constructive proof, using the `have` tactic. -/
example : ¬ (P ↔ ¬ P) :=
begin
sorry,
end
end doble_implicacion
section disyuncion
/-!
# Disjunction
New tactics:
* `left` and `right`
* `cases` (new functionality)
-/
example : P → P ∨ Q :=
begin
sorry
end
example : Q → P ∨ Q :=
begin
sorry,
end
example : P ∨ Q → (P → R) → (Q → R) → R :=
begin
sorry
end
/- `∨` is symmetric. -/
example : P ∨ Q → Q ∨ P :=
begin
sorry
end
/- `∨` is associative. -/
example : (P ∨ Q) ∨ R ↔ P ∨ (Q ∨ R) :=
begin
sorry,
end
example : (P → R) → (Q → S) → P ∨ Q → R ∨ S :=
begin
sorry,
end
example : (P → Q) → P ∨ R → Q ∨ R :=
begin
sorry,
end
example : (P ↔ R) → (Q ↔ S) → (P ∨ Q ↔ R ∨ S) :=
begin
sorry,
end
-- De Morgan's laws.
example : ¬ (P ∨ Q) ↔ ¬ P ∧ ¬ Q :=
begin
sorry
end
example : ¬ (P ∧ Q) ↔ ¬ P ∨ ¬ Q :=
begin
sorry
end
end disyuncion
|
lemma translation_invert: fixes a :: "'a::ab_group_add" assumes "(\<lambda>x. a + x) ` A = (\<lambda>x. a + x) ` B" shows "A = B"
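  (* A possible proof sketch: translate both images back by -a, then collapse the
     composed translations via image_image and group simplification. *)
proof -
  have "(\<lambda>x. -a + x) ` ((\<lambda>x. a + x) ` A) = (\<lambda>x. -a + x) ` ((\<lambda>x. a + x) ` B)"
    using assms by simp
  then show ?thesis
    by (simp add: image_image)
qed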
|
\section{Conclusions and Future Work}
\label{sec:conclusion}
In this paper, we have described a full-featured EDSL for high-assurance
embedded systems programming. Ivory's type system ensures safe C development,
and being an EDSL, it allows programmers the flexibility to create high-level
constructs in a type-safe fashion. %
We have demonstrated the feasibility of developing large embedded systems in
Ivory ourselves, and there is a growing user community. For a detailed
experience report of using Ivory, and EDSLs for embedded programming in general,
see our previous work~\cite{smaccm}.
As Ivory's type system is embedded in GHC's, the properties that Ivory's type
system can encode are limited to what can be expressed in GHC: for instance,
while procedures are guaranteed to be consistent in their return type, the fact
that every code path actually executes a return statement must be checked in a separate phase. In practice,
this limitation results in the discovery of errors later in the compilation
pipeline than would be the case in a standalone compiler. Conversely, Ivory can
take advantage of new developments in GHC's type system. For instance, there
are plans to integrate SMT solving into GHC's constraint solver, which would
enable more expressive array operations in the Ivory core language, as well as
enabling a richer set of derived operations.
Use of Ivory has exposed a number of avenues for future work. As mentioned in
\autoref{sec:semantics}, we are investigating the addition of nested references.
We also plan to investigate decoupling regions from function bodies, thus giving
finer-grained control over memory lifetimes. Finally, we are considering making
regions first-class, allowing allocation to take place in a parent region. On
the verification side, we are considering developing a weakest-precondition
style verification tool for Ivory programs, and extending the assertion language
with separation-logic predicates.
%Furthermore, Ivory's restricted core language should lend itself to static
%analysis,
|
// Copyright 2019 Erik Teichmann <[email protected]>
#ifndef INCLUDE_SAM_SYSTEM_EULER_SYSTEM_HPP_
#define INCLUDE_SAM_SYSTEM_EULER_SYSTEM_HPP_
#include <vector>
#include <boost/numeric/odeint/integrate/null_observer.hpp>
#include "./generic_system.hpp"
namespace sam {
/*! \brief A system that is integrated with an Euler method of order O(dt).
*
* A system that is integrated with an Euler method of order O(dt). The Euler
* method is defined as
* \f[ x_{n+1} = x_{n} + f_n dt, \f]
* where \f$ f_n \f$ is the derivative at timestep \f$ n \f$.
*/
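// A minimal usage sketch. `MyODE` is a hypothetical functor type; the only
// requirement assumed here is the call signature used by EulerMethod below,
// i.e. operator()(const state_type& x, state_type& dxdt, double t):
//
//   sam::EulerSystem<MyODE> system(system_size, dimension);
//   system.Integrate(0.01, 1000);  // 1000 Euler steps with dt = 0.01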
template<typename ODE, typename state_type = std::vector<double>>
class EulerSystem: public GenericSystem<ODE, state_type> {
public:
template<typename... Ts>
explicit EulerSystem(unsigned int system_size, unsigned int dimension,
Ts... parameters);
template<typename observer_type = boost::numeric::odeint::null_observer>
void Integrate(double dt, unsigned int number_steps,
observer_type observer
= boost::numeric::odeint::null_observer());
private:
template<typename system_type, typename observer_type>
double EulerMethod(system_type system, state_type& x, double t, double dt,
unsigned int number_steps, observer_type observer);
};
// Implementation
template<typename ODE, typename state_type>
template<typename... Ts>
EulerSystem<ODE, state_type>::EulerSystem(unsigned int system_size,
unsigned int dimension,
Ts... parameters)
: GenericSystem<ODE, state_type>(system_size, dimension, parameters...) {}
template<typename ODE, typename state_type>
template<typename observer_type>
void EulerSystem<ODE, state_type>::Integrate(double dt,
unsigned int number_steps,
observer_type observer) {
this->t_ = EulerMethod(*(this->ode_), this->x_, this->t_, dt, number_steps,
observer);
}
template<typename ODE, typename state_type>
template<typename system_type, typename observer_type>
double EulerSystem<ODE, state_type>::EulerMethod(system_type system,
state_type& x, double t,
double dt,
unsigned int number_steps,
observer_type observer) {
observer(x, t);
for (unsigned int i = 0; i < number_steps; ++i) {
// TODO(boundter): copy the value? how to best initialize?
state_type dx = x;
system(x, dx, t);
// TODO(boundter): Rather use iterators
for (size_t j = 0; j < x.size(); ++j) x[j] += dx[j]*dt;
t += dt;
observer(x, t);
}
return t;
}
} // namespace sam
#endif // INCLUDE_SAM_SYSTEM_EULER_SYSTEM_HPP_
|
# Answer key for exam II of Biomechanics I - 2017
> http://demotu.org/ensino/biomecanica-i/
The solutions to the exam questions are presented in Python.
```python
from sympy import Symbol, symbols, Matrix,latex
from sympy import cos, sin
from sympy.physics.mechanics import dynamicsymbols, init_vprinting
from IPython.display import display, Math
init_vprinting()
import numpy as np
t = Symbol('t')
l1, l2 = symbols('ell_1 ell_2', positive=True)
alpha1, alpha2 = dynamicsymbols('alpha1 alpha2')
```
## Daytime class
**1** Consider the following positions of markers placed on a leg, measured by a motion capture system: lateral malleolus (ml = [2.92, 10.10, 18.85]), medial malleolus (mm = [2.71, 10.22, 26.52]), fibular head (fib = [5.05, 41.90, 15.41]), and medial condyle of the tibia (tib = [8.29, 41.88, 26.52]). These positions are in the order x, y, z and are described in the laboratory coordinate system, where x points to the front of the subject, y points up and z points to the side. The ankle and knee joint centres are located, respectively, at the geometric centres between the markers ml and mm and between the markers fib and tib. An anatomical coordinate system for the leg can be defined with: a quasi-vertical axis pointing up and passing through the ankle and knee joint centres; a quasi-anteroposterior axis given by the cross product of the quasi-vertical axis and a vector in the medio-lateral direction through mm and ml; the last axis given by the cross product of the two previous axes; and the origin at the ankle joint centre.
a. [1.0] Compute the anatomical coordinate system for the leg as described above.
b. [1.0] From the coordinate system, find the rotation matrix that takes the local coordinates to the global ones and the rotation matrix that takes the global coordinates to the local ones.
c. [2.0] Find the Euler angles $\alpha$, $\beta$, $\gamma$, considering that the rotation was performed about the axes of the global reference system in the order YZX. The rotation matrix for this rotation sequence is given below:
```python
ml = np.array([2.92, 10.10, 18.85])
mm = np.array([2.71, 10.22, 26.52])
fib = np.array([5.05, 41.90, 15.41])
tib = np.array([8.29, 41.88, 26.52])
tornoz = (ml + mm)/2
joelho = (fib + tib)/2
v1 = joelho - tornoz # first axis
v2 = np.cross(v1, mm - ml) # second axis
v3 = np.cross(v2, v1) # third axis
# Vector normalization
e1 = v1/np.linalg.norm(v1)
e2 = v2/np.linalg.norm(v2)
e3 = v3/np.linalg.norm(v3)
print('Origem:', '\nO =', tornoz)
print('Versores:', '\ne1 =', e1, '\ne2 =', e2, '\ne3 =', e3)
```
Origem:
O = [ 2.815 10.16 22.685]
Versores:
e1 = [ 0.12043275 0.99126617 -0.05373394]
e2 = [ 0.99246903 -0.11900497 0.02903508]
e3 = [-0.02238689 0.05682604 0.99813307]
```python
Rlg = np.array([e1,e2,e3])
Rgl = Rlg.T
print('Rlg = ' + str(Rlg))
print('\nRgl = ' + str(Rgl))
```
Rlg = [[ 0.12043275 0.99126617 -0.05373394]
[ 0.99246903 -0.11900497 0.02903508]
[-0.02238689 0.05682604 0.99813307]]
Rgl = [[ 0.12043275 0.99246903 -0.02238689]
[ 0.99126617 -0.11900497 0.05682604]
[-0.05373394 0.02903508 0.99813307]]
$\alpha = \arctan\left(\frac{Rgl[2,1]}{Rgl[1,1]}\right)$
$\beta = \arctan\left(\frac{Rgl[0,2]}{Rgl[0,0]}\right)$
$\gamma = \arctan\left(\frac{-Rgl[0,1]}{\sqrt{(Rgl[1,1]^2+Rgl[2,1]^2)}}\right)$
```python
alpha = np.arctan2(Rgl[2,1],Rgl[1,1])*180/np.pi
beta = np.arctan2(Rgl[0,2],Rgl[0,0])*180/np.pi
gamma = np.arctan2(-Rgl[0,1],np.sqrt(Rgl[1,1]**2+Rgl[2,1]**2))*180/np.pi
print('alpha = ' + str(alpha) + ' graus')
print('beta = ' + str(beta) + ' graus')
print('gamma = ' + str(gamma) + ' graus')
```
alpha = 166.288730743 graus
beta = -10.5303538359 graus
gamma = -82.9638360669 graus
**2** (3 points) A person holds one end of a rod of length $\ell = 2$ with their hand. Both the person's hand and the rod move in a plane parallel to the ground. The person's hand follows the trajectory $\vec{r}_M(t) = (4 + 0,3t)\hat{i} + (6 + 0,1t)\hat{j}$ over time, measured relative to a fixed point. The angular velocity of the rod during the movement was $\vec{\omega} = 3\hat{k}$ rad/s. The initial position of the end B of the rod that is not being held by the person's hand is $\vec{r}_B(0) = 6\hat{i} + 6\hat{j}$.
a. [0.5] Compute the velocity of the person's hand.
b. [0.5] Compute the position vector of point B relative to the person's hand at the initial instant.
c. [0.5] Compute the position vector of point B relative to the person's hand along the trajectory.
d. [1.0] Compute the velocity of point B of the rod over time.
e. [0.5] Compute the acceleration of point B of the rod over time.
Hand velocity:
$\vec{v}_M = 0,3\hat{i}+0,1\hat{j} $
Position of point B at the initial instant:
$\vec{r_{B/M}}(0) = \vec{r_{B}}(0) - \vec{r_{M}}(0) = 6\hat{i}+6\hat{j} - 4\hat{i} - 6\hat{j} = 2\hat{i}$
Position of point B:
$\vec{r_{B/M}} = l\cos(\omega t)\hat{i} + l\sin(\omega t) \hat{j} = 2\cos(3t)\hat{i} + 2\sin(3t) \hat{j} $
Velocity of point B:
$\vec{v_{B}} = \vec{v_{M}} + \vec{\omega}\times\vec{r_{B/M}} = 0,3\hat{i}+0,1\hat{j} + 3\hat{k}\times(2\cos(3t)\hat{i} + 2\sin(3t) \hat{j})=$
$=0,3\hat{i}+0,1\hat{j} - 6\sin(3t)\hat{i}+6\cos(3t)\hat{j} = (0,3- 6\sin(3t))\hat{i}+(0,1 +6\cos(3t))\hat{j}$
Acceleration of point B:
$\vec{a_{B}} = \frac{d\vec{v_{B}}}{dt} = -18\cos(3t)\hat{i}-18\sin(3t)\hat{j}$
*(Figure: planar two-link kinematic chain referred to in question 3.)*
**3** Consider the kinematic chain shown in the figure. Compute the expression for:
a) (1.0) The position of point P in terms of the joint angles.
b) (1.0) The Jacobian for this kinematic chain.
c) (1.0) The linear velocity of point P in terms of the joint angles.
The position of point P in terms of the joint angles:
```python
rp = Matrix([1 + l1*sin(alpha1) - l2*sin(alpha2),
1 + l1*cos(alpha1) + l2*cos(alpha2)])
rp
```
Where $\alpha_1$ and $\alpha_2$ are taken as positive angles as indicated in the figure.
The Jacobian for this kinematic chain:
```python
J = rp.jacobian([alpha1, alpha2])
J
```
Linear velocity of point P in terms of the joint angles:
```python
w = Matrix([alpha1, alpha2]).diff(t)
vel = J*w
vel
```
## Evening class
**1** (4 points) Consider the following positions of markers placed on a thigh, measured by a motion capture system: lateral epicondyle (el = [2.92, 10.10, 18.85]) and medial epicondyle (em = [2.71, 10.22, 26.52]). In addition, the position of the trochanter of this femur was estimated (tr = [5.05, 41.90, 15.41]). These positions are in the order x, y, z and are described in the laboratory coordinate system, where x points to the front of the subject, y points up and z points to the side. The knee joint centre is located at the geometric centre between the markers el and em, and the hip joint centre coincides with the trochanter. An anatomical coordinate system for the thigh can be defined with: a quasi-vertical axis pointing up and passing through the knee and hip joint centres; a quasi-anteroposterior axis pointing forward, given by the cross product of the quasi-vertical axis and a vector in the medio-lateral direction through el and em; the last axis given by the cross product of the two previous axes; and the origin at the knee joint centre.
a. [1.0] Compute the anatomical coordinate system for the thigh as described above.
b. [1.0] From the coordinate system, find the rotation matrix that takes the local coordinates to the global ones and the rotation matrix that takes the global coordinates to the local ones.
c. [2.0] Find the Euler angles $\alpha$, $\beta$, $\gamma$, considering that the rotation was performed about the axes of the global reference system in the order ZXY. The rotation matrix for this rotation sequence is given below:
```python
el = np.array([2.92, 10.10, 18.85])
em = np.array([2.71, 10.22, 26.52])
tr = np.array([5.05, 41.90, 15.41])
quadri = tr
joelho = (el + em)/2
v1 = quadri - joelho # first axis
v2 = np.cross(v1, em - el) # second axis
v3 = np.cross(v2, v1) # third axis
# Vector normalization
e1 = v1/np.linalg.norm(v1)
e2 = v2/np.linalg.norm(v2)
e3 = v3/np.linalg.norm(v3)
print('Origem:', '\nO =', joelho)
print('Versores:', '\ne1 =', e1, '\ne2 =', e2, '\ne3 =', e3)
```
Origem:
O = [ 2.815 10.16 22.685]
Versores:
e1 = [ 0.06847494 0.97243612 -0.22288824]
e2 = [ 0.99756392 -0.06375548 0.02831018]
e3 = [-0.0133195 0.22428381 0.97443284]
```python
Rlg = np.array([e1,e2,e3])
Rgl = Rlg.T
print('Rlg = ' + str(Rlg))
print('\nRgl = ' + str(Rgl))
```
Rlg = [[ 0.06847494 0.97243612 -0.22288824]
[ 0.99756392 -0.06375548 0.02831018]
[-0.0133195 0.22428381 0.97443284]]
Rgl = [[ 0.06847494 0.99756392 -0.0133195 ]
[ 0.97243612 -0.06375548 0.22428381]
[-0.22288824 0.02831018 0.97443284]]
$\alpha = \arctan\left(\frac{-Rgl[1,2]}{\sqrt{(Rgl[0,2]^2+Rgl[2,2]^2)}}\right)$
$\beta = \arctan\left(\frac{Rgl[0,2]}{Rgl[2,2]}\right)$
$\gamma = \arctan\left(\frac{Rgl[1,0]}{Rgl[1,1]}\right)$
```python
alpha = np.arctan2(-Rgl[1,2],np.sqrt(Rgl[0,2]**2+Rgl[2,2]**2))*180/np.pi
beta = np.arctan2(Rgl[0,2],Rgl[2,2])*180/np.pi
gamma = np.arctan2(Rgl[1,0],Rgl[1,1])*180/np.pi
print('alpha = ' + str(alpha) + ' graus')
print('beta = ' + str(beta) + ' graus')
print('gamma = ' + str(gamma) + ' graus')
```
alpha = -12.9607669696 graus
beta = -0.783125663366 graus
gamma = 93.7510938527 graus
**2** (3 points) A person holds one end of a rod of length $\ell = 1$ with their hand. Both the person's hand and the rod move in a plane parallel to the ground. The person's hand follows the trajectory $\vec{r}_M(t) = (5 + 0,2t)\hat{i} + (10 + 0,5t)\hat{j}$ over time, measured relative to a fixed point. The angular velocity of the rod during the movement was $\vec{\omega} = 2\hat{k}$ rad/s. The initial position of the end B of the rod that is not being held by the person's hand is $\vec{r}_B(0) = 6\hat{i} + 10\hat{j}$.
a. [0.5] Compute the velocity of the person's hand.
b. [0.5] Compute the position vector of point B relative to the person's hand at the initial instant.
c. [0.5] Compute the position vector of point B relative to the person's hand along the trajectory.
d. [1.0] Compute the velocity of point B of the rod over time.
e. [0.5] Compute the acceleration of point B of the rod over time.
Hand velocity:
$\vec{v}_M = 0,2\hat{i}+0,5\hat{j} $
Position of point B at the initial instant:
$\vec{r_{B/M}}(0) = \vec{r_{B}}(0) - \vec{r_{M}}(0) = 6\hat{i}+10\hat{j} - 5\hat{i} - 10\hat{j} = 1\hat{i}$
Position of point B:
$\vec{r_{B/M}} = l\cos(\omega t)\hat{i} + l\sin(\omega t) \hat{j} = \cos(2t)\hat{i} + \sin(2t) \hat{j} $
Velocity of point B:
$\vec{v_{B}} = \vec{v_{M}} + \vec{\omega}\times\vec{r_{B/M}} = 0,2\hat{i}+0,5\hat{j} + 2\hat{k}\times(\cos(2t)\hat{i} + \sin(2t) \hat{j})=$
$=0,2\hat{i}+0,5\hat{j} - 2\sin(2t)\hat{i}+2\cos(2t)\hat{j} = (0,2- 2\sin(2t))\hat{i}+(0,5 +2\cos(2t))\hat{j}$
Acceleration of point B:
$\vec{a_{B}} = \frac{d\vec{v_{B}}}{dt} = -4\cos(2t)\hat{i}-4\sin(2t)\hat{j}$
*(Figure: planar two-link kinematic chain referred to in question 3.)*
**3** Consider the kinematic chain shown in the figure. Compute the expression for:
a) (1.0) The position of point P in terms of the joint angles.
b) (1.0) The Jacobian for this kinematic chain.
c) (1.0) The linear velocity of point P in terms of the joint angles.
The position of point P in terms of the joint angles:
```python
rp = Matrix([l1*sin(alpha1) + l2*sin(alpha2),
-l1*cos(alpha1) - l2*cos(alpha2)])
rp
```
Where $\alpha_1$ and $\alpha_2$ are taken as positive angles as indicated in the figure.
The Jacobian for this kinematic chain:
```python
J = rp.jacobian([alpha1, alpha2])
J
```
Linear velocity of point P in terms of the joint angles:
```python
w = Matrix([alpha1, alpha2]).diff(t)
vel = J*w
vel
```
|
module Issue396b where
import Common.Irrelevance
data A : Set where
-- just an irrelevant field
record PrfA : Set where
field
.f : A
Foo : Set -> Set1
Foo R = (P : R → Set) → ((x : R) → P x → P x) →
(x y : R) → P x → P y
foo : Foo PrfA
foo P hyp x y = hyp x
-- Error was:
-- x != y of type ⊤
-- when checking that the expression hyp x has type P x → P y
record Top : Set where
-- only singleton components
record R : Set where
field
p1 : PrfA
.p2 : A
p3 : Top
bla : Foo R
bla P hyp x y = hyp x
|
! Copyright (c) 2017-2018 Etienne Descamps
! All rights reserved.
!
! Redistribution and use in source and binary forms, with or without modification,
! are permitted provided that the following conditions are met:
!
! 1. Redistributions of source code must retain the above copyright notice,
! this list of conditions and the following disclaimer.
!
! 2. Redistributions in binary form must reproduce the above copyright notice,
! this list of conditions and the following disclaimer in the documentation and/or
! other materials provided with the distribution.
!
! 3. Neither the name of the copyright holder nor the names of its contributors may be
! used to endorse or promote products derived from this software without specific prior
! written permission.
!
! THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
! ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
! WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
! IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
! INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
! (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
! LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
! THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
! NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
! EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Program test_emgmm
use ieee_arithmetic
use iso_c_binding
use mlf_utils
use mlf_kmeans
use mlf_intf
use mlf_emgmm
use mlf_matrix
use mlf_rand
use mlf_gaussian
use mlf_hdf5
use test_common
implicit none
integer :: ND, NC, NX, info
integer(kind=8) :: Nstep
real(c_double) :: sigmaC, sigmaX
info = mlf_init()
if(COMMAND_ARGUMENT_COUNT()<6) then
print *, "Error missing arguments: ND NC NX Nstep sigmaC sigmaX"
stop
endif
ND = GetIntParameter(1)
NC = GetIntParameter(2)
NX = GetIntParameter(3)
Nstep = GetIntParameter(4)
sigmaC = GetRealParameter(5)
sigmaX = GetRealParameter(6)
print *,"ND: ", ND, " NC: ", NC, " NX: ", NX, " Nstep: ", Nstep, " sigmaC: ", sigmaC, " sigmaX: ", sigmaX
call testOrtho()
call testMN()
call testGMM()
info = mlf_quit()
contains
subroutine testOrtho()
real(c_double) :: C(ND,ND)
call randOrthogonal(C)
print *, "Random Ortho:"
call PrintMatrix(C)
print *, "Shall be Id:"
call PrintMatrix(matmul(transpose(C), C))
end subroutine testOrtho
subroutine testMN()
real(c_double), allocatable :: invC(:,:), C12(:,:), C(:,:), X(:,:)
real(c_double), allocatable :: weight(:), P(:), Mu(:)
real(c_double) :: lnd
integer :: i
ALLOCATE(weight(ND), P(NX), Mu(ND), X(ND,NX), C12(ND,ND), C(ND,ND))
P = 1
weight = (/ (1d0/real(i), i=1,ND) /)
weight = sqrt(sigmaX*weight/Mean(weight))
call randOrthogonal(C12, weight)
print *, "Random Org:"
call PrintMatrix(C12)
print *, "Covariance Matrix:"
C = matmul(C12, transpose(C12))
call PrintMatrix(C)
allocate(invC(ND,ND))
i = InverseSymMatrix(C, invC, lnd, .TRUE.)
print *, "Inverse covariance matrix: lnd: ", lnd, " i: ", i
call PrintMatrix(invC, matmul(C,invC))
call randN(X, C12 = C12)
lnd = mlf_MaxGaussian(X, P, Mu, C)
print *, "Evaluated Mu"
print *, Mu
print *, "Evaluated Covariance Matrix:"
call PrintMatrix(C)
i = mlf_EvalGaussian(X, P, Mu, C, 1d0, lnd)
print *, "Evaluated Probabilities: lnd: ", lnd
print *, P
end subroutine testMN
subroutine testGMM()
real(c_double), allocatable :: XC(:,:), X(:,:), C12(:,:,:), weight(:)
real(c_double) :: dt
type(mlf_algo_emgmm) :: em_algo
integer :: i, info, r
integer, allocatable :: idx(:), idc(:)
type(mlf_hdf5_file) :: mh5
ALLOCATE(XC(ND,NC), X(ND,NX*NC), C12(ND,ND, NC), weight(ND), idx(NX*NC), idc(NC))
! Generates random centres
call RandN(XC, sigmaC)
! Weights used as diagonal matrix (multiplied by orthogonal matrix)
weight = [(1d0/real(i), i=1,ND)]
weight = sqrt(sigmaX*weight/Mean(weight))
do i = 1,NC
call randOrthogonal(C12(:,:,i), weight)
call randN(X(:,1+(i-1)*NX:i*NX), X0 = XC(:,i), C12 = C12(:,:,i))
end do
idx = [(i, i=1,NX*NC)]
call randPerm(idx)
X = X(:,idx)
info = em_algo%init(X, nC, 1d0)
if(info<0) then
print *,"Error initialization"
RETURN
endif
r = em_algo%step(dt, Nstep)
print *,"Time: ", dt
print *, "Matrix Mu: (determined by GMM_EMAlgo)/ XC"
call mlf_match_points(em_algo%Mu, XC, idc)
call PrintMatrix(em_algo%Mu, XC(:, idc))
do i=1,NC
print *, "Matrix Covar:", i
call PrintMatrix(em_algo%Cov(:,:,i), matmul(C12(:,:,idc(i)), transpose(C12(:,:,idc(i)))))
end do
info = mh5%createFile("emgmm.h5")
info = mh5%pushState(em_algo)
call mh5%finalize()
call em_algo%finalize()
end subroutine testGMM
End Program test_emgmm
|
# Generating C Code to implement Method of Lines Timestepping for Explicit Runge Kutta Methods
## Authors: Zach Etienne & Brandon Clark
## This tutorial module generates three blocks of C Code in order to perform Method of Lines timestepping.
**Module Status:** <font color='green'><b> Validated </b></font>
**Validation Notes:** This tutorial module has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). All Runge-Kutta Butcher tables were validated using truncated Taylor series in [a separate module](Tutorial-RK_Butcher_Table_Validation.ipynb). Finally, C-code implementation of RK4 was validated against a trusted version. C-code implementations of other RK methods seem to work as expected in the context of solving the scalar wave equation in Cartesian coordinates.
### NRPy+ Source Code for this module:
* [MoLtimestepping/C_Code_Generation.py](../edit/MoLtimestepping/C_Code_Generation.py)
* [MoLtimestepping/RK_Butcher_Table_Dictionary.py](../edit/MoLtimestepping/RK_Butcher_Table_Dictionary.py) ([**Tutorial**](Tutorial-RK_Butcher_Table_Dictionary.ipynb)) Stores the Butcher tables for the explicit Runge Kutta methods
## Introduction:
When numerically solving a partial differential equation initial-value problem, subject to suitable boundary conditions, we implement the Method of Lines to "integrate" the solution forward in time.
### The Method of Lines:
Once we have the initial data for a PDE, we "evolve it forward in time", using the [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html). In short, the Method of Lines enables us to handle
1. the **spatial derivatives** of an initial value problem PDE using **standard finite difference approaches**, and
2. the **temporal derivatives** of an initial value problem PDE using **standard strategies for solving ordinary differential equations (ODEs), like Runge Kutta methods** so long as the initial value problem PDE can be written in the first-order-in-time form
$$\partial_t \vec{f} = \mathbf{M}\ \vec{f},$$
where $\mathbf{M}$ is an $N\times N$ matrix containing only *spatial* differential operators that act on the $N$-element column vector $\vec{f}$. $\mathbf{M}$ may not contain $t$ or time derivatives explicitly; only *spatial* partial derivatives are allowed to appear inside $\mathbf{M}$.
You may find the next module [Tutorial-ScalarWave](Tutorial-ScalarWave.ipynb) extremely helpful as an example for implementing the Method of Lines for solving the Scalar Wave equation in Cartesian coordinates.
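As a standard worked example (the same one treated in that tutorial): the scalar wave equation $\partial_t^2 u = c^2 \nabla^2 u$ is second order in time, but defining $v \equiv \partial_t u$ recasts it in the required first-order form
$$\partial_t \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ c^2 \nabla^2 & 0 \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix},$$
so that $\vec{f} = (u, v)^T$ and $\mathbf{M}$ contains only the spatial Laplacian operator.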
### Generating the C code:
This module describes how three C code blocks are written to implement Method of Lines timestepping for a specified RK method. The first block is dedicated to allocating memory for the appropriate number of grid function lists needed for the given RK method. The second block will implement the Runge Kutta numerical scheme based on the corresponding Butcher table. The third block will free up the previously allocated memory after the Method of Lines run is complete. These blocks of code are stored within the following three header files respectively
1. `MoLtimestepping/RK_Allocate_Memory.h`
1. `MoLtimestepping/RK_MoL.h`
1. `MoLtimestepping/RK_Free_Memory.h`
The generated code is then included in future Start-to-Finish example tutorial modules when solving PDEs numerically.
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This module is organized as follows
1. [Step 1](#initializenrpy): Initialize needed Python/NRPy+ modules
1. [Step 2](#diagonal): Checking if Butcher Table is Diagonal
1. [Step 3](#ccode): Generating the C Code
1. [Step 3.a](#allocate): Allocating Memory, `MoLtimestepping/RK_Allocate_Memory.h`
1. [Step 3.b](#rkmol): Implementing the Runge Kutta Scheme for Method of Lines Timestepping, `MoLtimestepping/RK_MoL.h`
1. [Step 3.c](#free): Freeing Allocated Memory, `MoLtimestepping/RK_Free_Memory.h`
1. [Step 4](#code_validation): Code Validation against `MoLtimestepping.RK_Butcher_Table_Generating_C_Code` NRPy+ module
1. [Step 5](#latex_pdf_output): Output this module to $\LaTeX$-formatted PDF
<a id='initializenrpy'></a>
# Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
Let's start by importing all the needed modules from Python/NRPy+:
```python
import sympy as sp
import NRPy_param_funcs as par
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
```
<a id='diagonal'></a>
# Step 2: Checking if a Butcher table is Diagonal \[Back to [top](#toc)\]
$$\label{diagonal}$$
A diagonal Butcher table takes the form
$$\begin{array}{c|cccccc}
0 & \\
a_1 & a_1 & \\
a_2 & 0 & a_2 & \\
a_3 & 0 & 0 & a_3 & \\
\vdots & \vdots & \ddots & \ddots & \ddots \\
a_s & 0 & 0 & 0 & \cdots & a_s \\ \hline
& b_1 & b_2 & b_3 & \cdots & b_{s-1} & b_s
\end{array}$$
where $s$ is the number of required predictor-corrector steps for a given RK method (see [Butcher, John C. (2008)](https://onlinelibrary.wiley.com/doi/book/10.1002/9780470753767)). One known diagonal RK method is the classic RK4 represented in Butcher table form as:
$$\begin{array}{c|cccc}
0 & \\
1/2 & 1/2 & \\
1/2 & 0 & 1/2 & \\
1 & 0 & 0 & 1 & \\ \hline
& 1/6 & 1/3 & 1/3 & 1/6
\end{array} $$
Diagonal Butcher tables are convenient for saving memory. For a diagonal RK method, each new stage $k_i$ depends only on the immediately preceding stage, so earlier stages need not be kept in memory; the savings are significant on large three-dimensional spatial grids.
```python
def diagonal(key):
diagonal = True # Start with the Butcher table is diagonal
Butcher = Butcher_dict[key][0]
L = len(Butcher)-1 # Establish the number of rows to check for the diagonal trait, all but the last row
row_idx = 0 # Initialize the Butcher table row index
for i in range(L): # Check all the desired rows
for j in range(1,row_idx): # Check each element before the diagonal element in a row
if Butcher[i][j] != sp.sympify(0): # If any element is non-zero, then the table is not diagonal
diagonal = False
break
row_idx += 1 # Update to check the next row
return diagonal
# State whether each Butcher table is diagonal or not
for key, value in Butcher_dict.items():
if diagonal(key) == True:
print("The RK method "+str(key)+" is diagonal!")
else:
print("The RK method "+str(key)+" is NOT diagonal!")
```
The RK method Euler is diagonal!
The RK method RK2 Heun is diagonal!
The RK method RK2 MP is diagonal!
The RK method RK2 Ralston is diagonal!
The RK method RK3 is NOT diagonal!
The RK method RK3 Heun is diagonal!
The RK method RK3 Ralston is diagonal!
The RK method SSPRK3 is NOT diagonal!
The RK method RK4 is diagonal!
The RK method DP5 is NOT diagonal!
The RK method DP5alt is NOT diagonal!
The RK method CK5 is NOT diagonal!
The RK method DP6 is NOT diagonal!
The RK method L6 is NOT diagonal!
The RK method DP8 is NOT diagonal!
<a id='ccode'></a>
# Step 3: Generating the C Code \[Back to [top](#toc)\]
$$\label{ccode}$$
The following sections build up the C code for implementing the Method of Lines timestepping algorithm for solving PDEs. To see what the C code looks like for a particular method, simply change the `RK_method` below, otherwise it will default to `"RK4"`.
<a id='allocate'></a>
## Step 3.a: Allocating Memory, `MoLtimestepping/RK_Allocate_Memory.h` \[Back to [top](#toc)\]
$$\label{allocate}$$
We define the function `RK_Allocate()` which generates the C code for allocating the memory for the appropriate number of grid function lists given a Runge Kutta method. The function writes the C code to the header file `MoLtimestepping/RK_Allocate_Memory.h`.
```python
# Choose a method to see the C code print out for
RK_method = "RK3 Ralston"
```
```python
def RK_Allocate(RK_method="RK4"):
with open("MoLtimestepping/RK_Allocate_Memory"+str(RK_method).replace(" ", "_")+".h", "w") as file:
file.write("// Code snippet allocating gridfunction memory for \""+str(RK_method)+"\" method:\n")
# No matter the method we define gridfunctions "y_n_gfs" to store the initial data
file.write("REAL *restrict y_n_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);\n")
if diagonal(RK_method) == True and "RK3" in RK_method:
file.write("""REAL *restrict k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *restrict k2_or_y_nplus_a32_k2_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *restrict diagnostic_output_gfs = k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs;""")
else:
if diagonal(RK_method) == False: # Allocate memory for non-diagonal Butcher tables
# Determine the number of k_i steps based on length of Butcher Table
num_k = len(Butcher_dict[RK_method][0])-1
# For non-diagonal tables an intermediate gridfunction "next_y_input" is needed for rhs evaluations
file.write("REAL *restrict next_y_input_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);\n")
for i in range(num_k): # Need to allocate all k_i steps for a given method
file.write("REAL *restrict k"+str(i+1)+"_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);\n")
file.write("REAL *restrict diagnostic_output_gfs = k1_gfs;\n")
else: # Allocate memory for diagonal Butcher tables, which use a "y_nplus1_running_total gridfunction"
file.write("REAL *restrict y_nplus1_running_total_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);\n")
if RK_method != 'Euler': # Allocate memory for diagonal Butcher tables that aren't Euler
# Need k_odd for k_1,3,5... and k_even for k_2,4,6...
file.write("REAL *restrict k_odd_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);\n")
file.write("REAL *restrict k_even_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);\n")
file.write("REAL *restrict diagnostic_output_gfs = y_nplus1_running_total_gfs;\n")
RK_Allocate(RK_method)
print("This is the memory allocation C code for the "+str(RK_method)+" method: \n")
with open("MoLtimestepping/RK_Allocate_Memory"+str(RK_method).replace(" ", "_")+".h", "r") as file:
print(file.read())
```
This is the memory allocation C code for the RK3 Ralston method:
// Code snippet allocating gridfunction memory for "RK3 Ralston" method:
REAL *restrict y_n_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *restrict k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *restrict k2_or_y_nplus_a32_k2_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *restrict diagnostic_output_gfs = k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs;
<a id='rkmol'></a>
## Step 3.b: Implementing the Runge Kutta Scheme for Method of Lines Timestepping, `MoLtimestepping/RK_MoL.h` \[Back to [top](#toc)\]
$$\label{rkmol}$$
We define the function `RK_MoL()` which generates the C code for implementing Method of Lines using a specified Runge Kutta scheme. The function writes the C code to the header file `MoLtimestepping/RK_MoL.h`.
```python
def RK_MoL(RK_method,RHS_string, post_RHS_string):
Butcher = Butcher_dict[RK_method][0] # Get the desired Butcher table from the dictionary
num_steps = len(Butcher)-1 # Specify the number of required steps to update solution
indent = " "
with open("MoLtimestepping/RK_MoL"+str(RK_method).replace(" ", "_")+".h", "w") as file:
file.write("// Code snippet implementing "+RK_method+" algorithm for Method of Lines timestepping\n")
# Diagonal RK3 only!!!
if diagonal(RK_method) == True and "RK3" in RK_method:
# In a diagonal RK3 method, only 3 gridfunctions need be defined. Below implements this approach.
file.write("""
// In a diagonal RK3 method like this one, only 3 gridfunctions need be defined. Below implements this approach.
// Using y_n_gfs as input, compute k1 and apply boundary conditions
"""+RHS_string.replace("RK_INPUT_GFS" ,"y_n_gfs").
replace("RK_OUTPUT_GFS","k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs")+"""
LOOP_ALL_GFS_GPS(i) {
// Store k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs now as
// the update for the next rhs evaluation y_n + a21*k1*dt:
k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs[i] = ("""+sp.ccode(Butcher[1][1]).replace("L","")+""")*k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs[i]*dt + y_n_gfs[i];
}
// Apply boundary conditions to y_n + a21*k1*dt:
"""+post_RHS_string.replace("RK_OUTPUT_GFS","k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs")+"""
// Compute k2 using yn + a21*k1*dt
"""+RHS_string.replace("RK_INPUT_GFS" ,"k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs").
replace("RK_OUTPUT_GFS","k2_or_y_nplus_a32_k2_gfs")+"""
LOOP_ALL_GFS_GPS(i) {
// Reassign k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs to be
// the running total y_{n+1}
k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs[i] = ("""+sp.ccode(Butcher[3][1]).replace("L","")+""")*(k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs[i] - y_n_gfs[i])/("""+sp.ccode(Butcher[1][1]).replace("L","")+""") + y_n_gfs[i];
// Add a32*k2*dt to the running total
k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs[i]+= ("""+sp.ccode(Butcher[3][2]).replace("L","")+""")*k2_or_y_nplus_a32_k2_gfs[i]*dt;
// Store k2_or_y_nplus_a32_k2_gfs now as y_n + a32*k2*dt
k2_or_y_nplus_a32_k2_gfs[i] = ("""+sp.ccode(Butcher[2][2]).replace("L","")+""")*k2_or_y_nplus_a32_k2_gfs[i]*dt + y_n_gfs[i];
}
// Apply boundary conditions to both y_n + a32*k2 (stored in k2_or_y_nplus_a32_k2_gfs)
// ... and the y_{n+1} running total, as they have not been applied yet to k2-related gridfunctions:
"""+post_RHS_string.replace("RK_OUTPUT_GFS","k2_or_y_nplus_a32_k2_gfs")+"""
"""+post_RHS_string.replace("RK_OUTPUT_GFS","k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs")+"""
// Compute k3
"""+RHS_string.replace("RK_INPUT_GFS" ,"k2_or_y_nplus_a32_k2_gfs").
replace("RK_OUTPUT_GFS","y_n_gfs")+"""
LOOP_ALL_GFS_GPS(i) {
// Add k3 to the running total and save to y_n
y_n_gfs[i] = k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs[i] + ("""+sp.ccode(Butcher[3][3]).replace("L","")+""")*y_n_gfs[i]*dt;
}
// Apply boundary conditions to the running total
"""+post_RHS_string.replace("RK_OUTPUT_GFS","y_n_gfs")+"\n")
else:
y_n = "y_n_gfs"
if diagonal(RK_method) == False:
for s in range(num_steps):
next_y_input = "next_y_input_gfs"
# If we're on the first step (s=0), we use y_n gridfunction as input.
# Otherwise next_y_input is input. Output is just the reverse.
if s==0: # If on first step:
file.write(RHS_string.replace("RK_INPUT_GFS",y_n).replace("RK_OUTPUT_GFS","k"+str(s+1)+"_gfs")+"\n")
else: # If on second step or later:
file.write(RHS_string.replace("RK_INPUT_GFS",next_y_input).replace("RK_OUTPUT_GFS","k"+str(s+1)+"_gfs")+"\n")
file.write("LOOP_ALL_GFS_GPS(i) {\n")
RK_update_string = ""
if s == num_steps-1: # If on final step:
RK_update_string += indent + y_n+"[i] += dt*("
else: # If on anything but the final step:
RK_update_string += indent + next_y_input+"[i] = "+y_n+"[i] + dt*("
for m in range(s+1):
if Butcher[s+1][m+1] != 0:
if Butcher[s+1][m+1] != 1:
RK_update_string += " + k"+str(m+1)+"_gfs[i]*("+sp.ccode(Butcher[s+1][m+1]).replace("L","")+")"
else:
RK_update_string += " + k"+str(m+1)+"_gfs[i]"
RK_update_string += " );\n}\n"
file.write(RK_update_string)
if s == num_steps-1: # If on final step:
file.write(post_RHS_string.replace("RK_OUTPUT_GFS",y_n)+"\n")
else: # If on anything but the final step:
file.write(post_RHS_string.replace("RK_OUTPUT_GFS",next_y_input)+"\n")
else:
y_nplus1_running_total = "y_nplus1_running_total_gfs"
if RK_method == 'Euler': # Euler's method doesn't require any k_i, and gets its own unique algorithm
file.write(RHS_string.replace("RK_INPUT_GFS",y_n).replace("RK_OUTPUT_GFS",y_nplus1_running_total)+"\n")
file.write("LOOP_ALL_GFS_GPS(i) {\n")
file.write(indent + y_n+"[i] += "+y_nplus1_running_total+"[i]*dt;\n")
file.write("}\n")
file.write(post_RHS_string.replace("RK_OUTPUT_GFS",y_n)+"\n")
else:
for s in range(num_steps):
# If we're on the first step (s=0), we use y_n gridfunction as input.
# and k_odd as output.
if s == 0:
rhs_input = "y_n_gfs"
rhs_output = "k_odd_gfs"
# For the remaining steps the inputs and outputs alternate between k_odd and k_even
elif s%2 == 0:
rhs_input = "k_even_gfs"
rhs_output = "k_odd_gfs"
else:
rhs_input = "k_odd_gfs"
rhs_output = "k_even_gfs"
file.write(RHS_string.replace("RK_INPUT_GFS",rhs_input).replace("RK_OUTPUT_GFS",rhs_output)+"\n")
file.write("LOOP_ALL_GFS_GPS(i) {\n")
if s == num_steps-1: # If on the final step
if Butcher[num_steps][s+1] !=0:
if Butcher[num_steps][s+1] !=1:
file.write(indent+y_n+"[i] += "+y_nplus1_running_total+"[i] + "+rhs_output+"[i]*dt*("+sp.ccode(Butcher[num_steps][s+1]).replace("L","")+");\n")
else:
file.write(indent+y_n+"[i] += "+y_nplus1_running_total+"[i] + "+rhs_output+"[i]*dt;\n")
file.write("}\n")
file.write(post_RHS_string.replace("RK_OUTPUT_GFS",y_n)+"\n")
else: # For anything besides the final step
if s == 0:
file.write(indent+y_nplus1_running_total+"[i] = "+rhs_output+"[i]*dt*("+sp.ccode(Butcher[num_steps][s+1]).replace("L","")+");\n")
file.write(indent+rhs_output+"[i] = "+y_n+"[i] + "+rhs_output+"[i]*dt*("+sp.ccode(Butcher[s+1][s+1]).replace("L","")+");\n")
else:
if Butcher[num_steps][s+1] !=0:
if Butcher[num_steps][s+1] !=1:
file.write(indent+y_nplus1_running_total+"[i] += "+rhs_output+"[i]*dt*("+sp.ccode(Butcher[num_steps][s+1]).replace("L","")+");\n")
else:
file.write(indent+y_nplus1_running_total+"[i] += "+rhs_output+"[i]*dt;\n")
if Butcher[s+1][s+1] !=0:
if Butcher[s+1][s+1] !=1:
file.write(indent+rhs_output+"[i] = "+y_n+"[i] + "+rhs_output+"[i]*dt*("+sp.ccode(Butcher[s+1][s+1]).replace("L","")+");\n")
else:
file.write(indent+rhs_output+"[i] = "+y_n+"[i] + "+rhs_output+"[i]*dt;\n")
file.write("}\n")
file.write(post_RHS_string.replace("RK_OUTPUT_GFS",rhs_output)+"\n")
RK_MoL(RK_method,"rhs_eval(Nxx,Nxx_plus_2NGHOSTS,dxx, RK_INPUT_GFS, RK_OUTPUT_GFS);",
"")
print("This is the MoL timestepping RK scheme C code for the "+str(RK_method)+" method: \n")
with open("MoLtimestepping/RK_MoL"+str(RK_method).replace(" ", "_")+".h", "r") as file:
print(file.read())
```
This is the MoL timestepping RK scheme C code for the RK3 Ralston method:
// Code snippet implementing RK3 Ralston algorithm for Method of Lines timestepping
// In a diagonal RK3 method like this one, only 3 gridfunctions need be defined. Below implements this approach.
// Using y_n_gfs as input, compute k1 and apply boundary conditions
rhs_eval(Nxx,Nxx_plus_2NGHOSTS,dxx, y_n_gfs, k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs);
LOOP_ALL_GFS_GPS(i) {
// Store k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs now as
// the update for the next rhs evaluation y_n + a21*k1*dt:
k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs[i] = (1.0/2.0)*k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs[i]*dt + y_n_gfs[i];
}
// Apply boundary conditions to y_n + a21*k1*dt:
// Compute k2 using yn + a21*k1*dt
rhs_eval(Nxx,Nxx_plus_2NGHOSTS,dxx, k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs, k2_or_y_nplus_a32_k2_gfs);
LOOP_ALL_GFS_GPS(i) {
// Reassign k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs to be
// the running total y_{n+1}
k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs[i] = (2.0/9.0)*(k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs[i] - y_n_gfs[i])/(1.0/2.0) + y_n_gfs[i];
// Add a32*k2*dt to the running total
k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs[i]+= (1.0/3.0)*k2_or_y_nplus_a32_k2_gfs[i]*dt;
// Store k2_or_y_nplus_a32_k2_gfs now as y_n + a32*k2*dt
k2_or_y_nplus_a32_k2_gfs[i] = (3.0/4.0)*k2_or_y_nplus_a32_k2_gfs[i]*dt + y_n_gfs[i];
}
// Apply boundary conditions to both y_n + a32*k2 (stored in k2_or_y_nplus_a32_k2_gfs)
// ... and the y_{n+1} running total, as they have not been applied yet to k2-related gridfunctions:
// Compute k3
rhs_eval(Nxx,Nxx_plus_2NGHOSTS,dxx, k2_or_y_nplus_a32_k2_gfs, y_n_gfs);
LOOP_ALL_GFS_GPS(i) {
// Add k3 to the running total and save to y_n
y_n_gfs[i] = k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs[i] + (4.0/9.0)*y_n_gfs[i]*dt;
}
// Apply boundary conditions to the running total
<a id='free'></a>
## Step 3.c: Freeing Allocated Memory, `MoLtimestepping/RK_Free_Memory.h` \[Back to [top](#toc)\]
$$\label{free}$$
We define the function `RK_free()` which generates the C code for freeing the memory that was being occupied by the grid functions lists that had been allocated. The function writes the C code to the header file `MoLtimestepping/RK_Free_Memory.h`
```python
def RK_free(RK_method):
L = len(Butcher_dict[RK_method][0])-1 # Useful when freeing k_i gridfunctions
with open("MoLtimestepping/RK_Free_Memory"+str(RK_method).replace(" ", "_")+".h", "w") as file:
file.write("// CODE SNIPPET FOR FREEING ALL ALLOCATED MEMORY FOR "+str(RK_method)+" METHOD:\n")
if diagonal(RK_method) == True and "RK3" in RK_method:
file.write("""
free(k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs);
free(k2_or_y_nplus_a32_k2_gfs);
free(y_n_gfs);""")
else:
file.write("free(y_n_gfs);\n")
if diagonal(RK_method) == False: # Free memory for allocations made for non-diagonal cases
file.write("free(next_y_input_gfs);\n")
for i in range(L):
file.write("free(k"+str(i+1)+"_gfs);\n")
else: # Free memory for allocations made for diagonal cases
file.write("free(y_nplus1_running_total_gfs);\n")
if RK_method != 'Euler':
file.write("free(k_odd_gfs);\n")
file.write("free(k_even_gfs);\n")
RK_free(RK_method)
print("This is the freeing allocated memory C code for the "+str(RK_method)+" method: \n")
with open("MoLtimestepping/RK_Free_Memory"+str(RK_method).replace(" ", "_")+".h", "r") as file:
print(file.read())
```
This is the freeing allocated memory C code for the RK3 Ralston method:
// CODE SNIPPET FOR FREEING ALL ALLOCATED MEMORY FOR RK3 Ralston METHOD:
free(k1_or_y_nplus_a21_k1_or_y_nplus1_running_total_gfs);
free(k2_or_y_nplus_a32_k2_gfs);
free(y_n_gfs);
<a id='code_validation'></a>
# Step 4: Code Validation against `MoLtimestepping.RK_Butcher_Table_Generating_C_Code` NRPy+ module \[Back to [top](#toc)\]
$$\label{code_validation}$$
As a code validation check, we verify agreement in the dictionary of Butcher tables between
1. this tutorial and
2. the NRPy+ [MoLtimestepping.RK_Butcher_Table_Generating_C_Code](../edit/MoLtimestepping/RK_Butcher_Table_Generating_C_Code.py) module.
We generate the header files for each RK method and check for agreement with the NRPy+ module.
```python
import sys
import MoLtimestepping.C_Code_Generation as MoLC
print("\n\n ### BEGIN VALIDATION TESTS ###")
import filecmp
fileprefix1 = "MoLtimestepping/RK_Allocate_Memory"
fileprefix2 = "MoLtimestepping/RK_MoL"
fileprefix3 = "MoLtimestepping/RK_Free_Memory"
for key, value in Butcher_dict.items():
MoLC.MoL_C_Code_Generation(key,
"rhs_eval(Nxx,Nxx_plus_2NGHOSTS,dxx, RK_INPUT_GFS, RK_OUTPUT_GFS);",
"apply_bcs(Nxx,Nxx_plus_2NGHOSTS, RK_OUTPUT_GFS);")
RK_Allocate(key)
RK_MoL(key,
"rhs_eval(Nxx,Nxx_plus_2NGHOSTS,dxx, RK_INPUT_GFS, RK_OUTPUT_GFS);",
"apply_bcs(Nxx,Nxx_plus_2NGHOSTS, RK_OUTPUT_GFS);")
RK_free(key)
if filecmp.cmp(fileprefix1+str(key).replace(" ", "_")+".h" , fileprefix1+".h") == False:
print("VALIDATION TEST FAILED ON files: "+fileprefix1+str(key).replace(" ", "_")+".h and "+ fileprefix1+".h")
sys.exit(1)
elif filecmp.cmp(fileprefix2+str(key).replace(" ", "_")+".h" , fileprefix2+".h") == False:
print("VALIDATION TEST FAILED ON files: "+fileprefix2+str(key).replace(" ", "_")+".h and "+ fileprefix2+".h")
sys.exit(1)
elif filecmp.cmp(fileprefix3+str(key).replace(" ", "_")+".h" , fileprefix3+".h") == False:
print("VALIDATION TEST FAILED ON files: "+fileprefix3+str(key).replace(" ", "_")+".h and "+ fileprefix3+".h")
sys.exit(1)
else:
print("VALIDATION TEST PASSED on all files from "+str(key)+" method")
print("### END VALIDATION TESTS ###")
```
### BEGIN VALIDATION TESTS ###
VALIDATION TEST PASSED on all files from Euler method
VALIDATION TEST PASSED on all files from RK2 Heun method
VALIDATION TEST PASSED on all files from RK2 MP method
VALIDATION TEST PASSED on all files from RK2 Ralston method
VALIDATION TEST PASSED on all files from RK3 method
VALIDATION TEST PASSED on all files from RK3 Heun method
VALIDATION TEST PASSED on all files from RK3 Ralston method
VALIDATION TEST PASSED on all files from SSPRK3 method
VALIDATION TEST PASSED on all files from RK4 method
VALIDATION TEST PASSED on all files from DP5 method
VALIDATION TEST PASSED on all files from DP5alt method
VALIDATION TEST PASSED on all files from CK5 method
VALIDATION TEST PASSED on all files from DP6 method
VALIDATION TEST PASSED on all files from L6 method
VALIDATION TEST PASSED on all files from DP8 method
### END VALIDATION TESTS ###
<a id='latex_pdf_output'></a>
# Step 5: Output this module to $\LaTeX$-formatted PDF \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-RK_Butcher_Table_Generating_C_Code.pdf](Tutorial-RK_Butcher_Table_Generating_C_Code.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```python
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-Method_of_Lines-C_Code_Generation.ipynb
!pdflatex -interaction=batchmode Tutorial-Method_of_Lines-C_Code_Generation.tex
!pdflatex -interaction=batchmode Tutorial-Method_of_Lines-C_Code_Generation.tex
!pdflatex -interaction=batchmode Tutorial-Method_of_Lines-C_Code_Generation.tex
!rm -f Tut*.out Tut*.aux Tut*.log
```
[NbConvertApp] Converting notebook Tutorial-Method_of_Lines-C_Code_Generation.ipynb to latex
[NbConvertApp] Writing 86658 bytes to Tutorial-Method_of_Lines-C_Code_Generation.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
|
If $p$ is a prime number and $a$ is an integer not divisible by $p$, then $p$ and $a^n$ are coprime for all $n \geq 1$.
|
M1822 .69 cal (flintlock) 5,625
|
module Everything where
-- basic utilities
open import Library
open import Isomorphism
-- basic category theory
open import Categories
open import Categories.Sets
open import Categories.Families
open import Categories.Initial
open import Categories.Terminal
open import Categories.CoProducts
open import Categories.PushOuts
open import Categories.Setoids -- should be replaced by standard libary def
open import Functors
open import Functors.Fin
open import Functors.FullyFaithful
open import Naturals
-- basic examples
open import Monoids
open import FunctorCat
-- ordinary monads
open import Monads
open import Monads.MonadMorphs
open import Adjunctions
open import Adjunctions.Adj2Mon
open import Monads.Kleisli
open import Monads.Kleisli.Functors
open import Monads.Kleisli.Adjunction
open import Monads.EM
open import Monads.EM.Functors
open import Monads.EM.Adjunction
open import Monads.CatofAdj
open import Monads.CatofAdj.InitAdj
open import Monads.CatofAdj.TermAdjObj
open import Monads.CatofAdj.TermAdjHom
open import Monads.CatofAdj.TermAdjUniq
open import Monads.CatofAdj.TermAdj
-- relative monads
open import RMonads
open import RMonads.RMonadMorphs
open import RAdjunctions
open import RAdjunctions.RAdj2RMon
open import RMonads.REM
open import RMonads.REM.Functors
open import RMonads.REM.Adjunction
open import RMonads.RKleisli
open import RMonads.RKleisli.Functors
open import RMonads.RKleisli.Adjunction
open import RMonads.Restriction
open import RMonads.SpecialCase
open import RMonads.CatofRAdj
open import RMonads.CatofRAdj.InitRAdj
open import RMonads.CatofRAdj.TermRAdjObj
open import RMonads.CatofRAdj.TermRAdjHom
open import RMonads.CatofRAdj.TermRAdj
open import RMonads.Modules
-- rmonad examples
open import WellScopedTerms
open import WellScopedTermsModel
open import WellTypedTerms
open import WellTypedTermsModel
open import Lawvere
|
data DoorState = DoorClosed | DoorOpen
data DoorCmd : Type ->
DoorState ->
DoorState ->
Type where
Open : DoorCmd () DoorClosed DoorOpen
Close : DoorCmd () DoorOpen DoorClosed
RingBell : DoorCmd () DoorClosed DoorClosed
Pure : ty -> DoorCmd ty state state
(>>=) : DoorCmd a state1 state2 ->
(a -> DoorCmd b state2 state3) ->
DoorCmd b state1 state3
doorProg : DoorCmd () DoorClosed DoorClosed
doorProg = do RingBell
Open
Close
{-doorProgBad : DoorCmd ()
doorProgBad = do Open
Open
RingBell-}
|