# 1. 1D Linear Convection
We consider the 1D linear convection equation with a constant velocity $c$:
$$
\partial_t u + c \partial_x u = 0
$$
If we denote the initial condition by $u_0(x) := u(x,0)$, then the exact solution is
$$
u(x,t) = u_0(x-ct)
$$
```python
# needed imports
from numpy import zeros, ones, linspace, zeros_like
from matplotlib.pyplot import plot, show
%matplotlib inline
```
```python
# Initial condition
import numpy as np
u0 = lambda x: np.exp(-(x-.3)**2/.05**2)
grid = linspace(0., 1., 401)
u = u0(grid)
```
```python
plot(grid, u) ; show()
```
### Time scheme
We use an implicit (backward Euler) discretization in time:
$$\frac{u^{n+1}-u^n}{\Delta t} + c \partial_x u^{n+1} = 0 $$
$$ \left(I + c \Delta t \partial_x \right) u^{n+1} = u^n $$
### Weak formulation
Multiplying by a test function $v$ and integrating by parts (dropping boundary terms) gives
$$
\langle v, u^{n+1} \rangle - c \Delta t ~ \langle \partial_x v, u^{n+1} \rangle = \langle v, u^n \rangle
$$
Expanding $u^{n+1}$ and $u^n$ over the finite-element basis $\{b_j\}$, we get the linear system
$$A U^{n+1} = M U^n$$
where
$$
M_{ij} = \langle b_i, b_j \rangle
$$
$$
A_{ij} = \langle b_i, b_j \rangle - c \Delta t ~ \langle \partial_x b_i, b_j \rangle
$$
## Abstract Model using SymPDE
```python
from sympde.core import Constant
from sympde.expr import BilinearForm, LinearForm, integral
from sympde.topology import ScalarFunctionSpace, Line, element_of, dx
from sympde.topology import dx1 # TODO: this is a bug right now
```
```python
# ... abstract model
domain = Line()
V = ScalarFunctionSpace('V', domain)
x = domain.coordinates
u,v = [element_of(V, name=i) for i in ['u', 'v']]
c = Constant('c')
dt = Constant('dt')
# bilinear form
# expr = v*u - c*dt*dx(v)*u # TODO BUG not working
expr = v*u - c*dt*dx1(v)*u
a = BilinearForm((u,v), integral(domain , expr))
# bilinear form for the mass matrix
expr = u*v
m = BilinearForm((u,v), integral(domain , expr))
# linear form for initial condition
from sympy import exp
expr = exp(-(x-.3)**2/.05**2)*v
l = LinearForm(v, integral(domain, expr))
```
## Discretization using Psydac
```python
from psydac.api.discretization import discretize
```
```python
c = 1 # wave speed
T = 0.25 # final time
dt = 0.001 # time step
niter = int(T / dt) # number of time steps
degree = [3] # spline degree
ncells = [64] # number of elements
```
```python
# Create computational domain from topological domain
domain_h = discretize(domain, ncells=ncells, comm=None)
# Discrete spaces
Vh = discretize(V, domain_h, degree=degree)
# Discretize the bilinear forms
ah = discretize(a, domain_h, [Vh, Vh])
mh = discretize(m, domain_h, [Vh, Vh])
# Discretize the linear form for the initial condition
lh = discretize(l, domain_h, Vh)
```
```python
# assemble matrices and convert them to scipy
M = mh.assemble().tosparse()
A = ah.assemble(c=c, dt=dt).tosparse()
# assemble the rhs and convert it to numpy array
rhs = lh.assemble().toarray()
```
```python
from scipy.sparse.linalg import cg, gmres
```
```python
# L2 projection of the initial condition: solve M un = rhs
un, status = cg(M, rhs, tol=1.e-8, maxiter=5000)  # status == 0 signals convergence
```
```python
from simplines import plot_field_1d
plot_field_1d(Vh.knots[0], Vh.degree[0], un, nx=401)
```
```python
# time loop: at each step solve A U^{n+1} = M U^n
for i in range(0, niter):
    b = M.dot(un)
    un, status = gmres(A, b, tol=1.e-8, maxiter=5000)
```
```python
plot_field_1d(Vh.knots[0], Vh.degree[0], un, nx=401)
```
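As a sanity check, we can compare the result with the exact solution $u_0(x - cT)$, i.e. the initial profile advected over the final time. A minimal sketch, assuming the variables `u0`, `grid`, `c` and `T` defined in the cells above are still in scope:
```python
# exact solution at t = T: the initial profile shifted by c*T
plot(grid, u0(grid - c*T)) ; show()
```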
|
# -*- coding: utf-8 -*-
"""
Created on Mon Jun 8 16:47:07 2020
@author: franc
"""
import pandas as pd
import numpy as np
base = pd.read_csv('credit_data.csv')
base.describe()
base.loc[base['age'] < 0]
# Option 1: drop the whole column
# (left commented out, since the rest of the script still uses the 'age' column)
# base.drop(columns='age', inplace=True)
# Option 2: drop only the corrupted rows
base.drop(base[base.age < 0].index, inplace=True)
# Option 3: fill in the values manually
# Option 4: fill in the values with the mean
base.mean()
base['age'].mean()
base['age'][base.age > 0].mean()
# use loc both to look values up and to modify them in place
base.loc[base.age < 0, 'age'] = 40.92
# Handling missing values
pd.isnull(base['age'])
base.loc[pd.isnull(base['age'])]
# create a copy of the dataset with only the predictor values of columns 1 to 3
# (iloc slices the dataset by integer position)
previsores = base.iloc[:, 1:4].values
# store the values of the target class
classe = base.iloc[:, 4].values
from sklearn.impute import SimpleImputer
# define imputer as a SimpleImputer object, which replaces NaN values with the mean
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
# fit the imputer to the predictor columns
imputer = imputer.fit(previsores[:, 0:3])
# replace all missing values in previsores with the column means
previsores[:, 0:3] = imputer.transform(previsores[:, 0:3])
# Feature scaling
from sklearn.preprocessing import StandardScaler
# define the scaler object of type StandardScaler
scaler = StandardScaler()
# fit and transform the predictor values to a common standardized scale
previsores = scaler.fit_transform(previsores)
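# A typical next step (not in the original script; an illustrative sketch) is to
# split the preprocessed arrays into training and test sets:
from sklearn.model_selection import train_test_split
# hold out 25% of the records for testing; fixed seed for reproducibility
previsores_train, previsores_test, classe_train, classe_test = train_test_split(
    previsores, classe, test_size=0.25, random_state=0)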
|
Angelique Donovan, aka Angel Donovan, is part of the Management Team for Academy Lane Apartments.
Hello, I am the Property Manager at Academy Lane Apartments, previously Americana Arms. My husband Jason and I were moved up from the Los Angeles area by Virtu Property Management when they purchased the property in June 2012. My husband is also part of the Management Team, as the Head of Maintenance. Since we took over the property, we have been working on improvements, not only to what you see from the street but also structural repairs, insulation in all the interior and exterior walls, and most recently all the new paint/trim and carpet in the interior hallways.
Come check out all our hard work!
Besides managing the property, we also have a total of 4 kids. His daughters are 15; one lives with us, and the other lives and goes to school in LA with her mother. We also have 2 kids together, a son and a daughter, 8 and 11, who attend North Davis Elementary... We love this town and have enjoyed the changes so far.
2013-03-19 16:11:20: Welcome to the Wiki/Business Owner Welcome to the Wiki! Nice page and pics. Users/PeteB
|
section \<open>Priority Maps implemented with List and Map\<close>
theory IICF_Abs_Heapmap
imports IICF_Abs_Heap "HOL-Library.Rewrite" "../../Intf/IICF_Prio_Map"
begin
type_synonym ('k,'v) ahm = "'k list \<times> ('k \<rightharpoonup> 'v)"
subsection \<open>Basic Setup\<close>
text \<open>First, we define a mapping to list-based heaps\<close>
definition hmr_\<alpha> :: "('k,'v) ahm \<Rightarrow> 'v heap" where
"hmr_\<alpha> \<equiv> \<lambda>(pq,m). map (the o m) pq"
definition "hmr_invar \<equiv> \<lambda>(pq,m). distinct pq \<and> dom m \<supseteq> set pq"
definition "hmr_rel \<equiv> br hmr_\<alpha> hmr_invar"
lemmas hmr_rel_defs = hmr_rel_def hmr_\<alpha>_def hmr_invar_def
lemma hmr_empty_invar[simp]: "hmr_invar ([],Map.empty)"
by (auto simp: hmr_invar_def)
locale hmstruct = h: heapstruct prio for prio :: "'v \<Rightarrow> 'b::linorder"
begin
text \<open>Next, we define a mapping to priority maps.\<close>
definition heapmap_\<alpha> :: "('k,'v) ahm \<Rightarrow> ('k \<rightharpoonup> 'v)" where
"heapmap_\<alpha> \<equiv> \<lambda>(pq,m). m |` set pq"
definition heapmap_invar :: "('k,'v) ahm \<Rightarrow> bool" where
"heapmap_invar \<equiv> \<lambda>hm. hmr_invar hm \<and> h.heap_invar (hmr_\<alpha> hm)"
definition "heapmap_rel \<equiv> br heapmap_\<alpha> heapmap_invar"
lemmas heapmap_rel_defs = heapmap_rel_def br_def heapmap_\<alpha>_def heapmap_invar_def
lemma [refine_dref_RELATES]: "RELATES hmr_rel" by (simp add: RELATES_def)
lemma h_heap_invarI[simp]: "heapmap_invar hm \<Longrightarrow> h.heap_invar (hmr_\<alpha> hm)"
by (simp add: heapmap_invar_def)
lemma hmr_invarI[simp]: "heapmap_invar hm \<Longrightarrow> hmr_invar hm"
unfolding heapmap_invar_def by blast
lemma set_hmr_\<alpha>[simp]: "hmr_invar hm \<Longrightarrow> set (hmr_\<alpha> hm) = ran (heapmap_\<alpha> hm)"
apply (clarsimp simp: hmr_\<alpha>_def hmr_invar_def heapmap_\<alpha>_def)
by (smt Int_absorb1 comp_apply dom_restrict image_cong ran_is_image restrict_in)
lemma in_h_hmr_\<alpha>_conv[simp]: "hmr_invar hm \<Longrightarrow> x \<in># h.\<alpha> (hmr_\<alpha> hm) \<longleftrightarrow> x \<in> ran (heapmap_\<alpha> hm)"
apply (clarsimp simp: hmr_\<alpha>_def hmr_invar_def heapmap_\<alpha>_def in_multiset_in_set ran_is_image)
by (smt Int_absorb1 comp_apply image_cong restrict_in)
subsection \<open>Basic Operations\<close>
(* length, val_of_op, update, butlast, append, empty *)
text \<open>In this section, we define the basic operations on heapmaps,
and their relations to heaps and maps.\<close>
subsubsection \<open>Length\<close>
text \<open>Length of the list that represents the heap\<close>
definition hm_length :: "('k,'v) ahm \<Rightarrow> nat" where
"hm_length \<equiv> \<lambda>(pq,_). length pq"
lemma hm_length_refine: "(hm_length, length) \<in> hmr_rel \<rightarrow> nat_rel"
apply (intro fun_relI)
unfolding hm_length_def
by (auto simp: hmr_rel_defs in_br_conv)
lemma hm_length_hmr_\<alpha>[simp]: "length (hmr_\<alpha> hm) = hm_length hm"
by (auto simp: hm_length_def hmr_\<alpha>_def split: prod.splits)
lemmas [refine] = hm_length_refine[param_fo]
subsubsection \<open>Valid\<close>
text \<open>Check whether index is valid\<close>
definition "hm_valid hm i \<equiv> i>0 \<and> i\<le> hm_length hm"
lemma hm_valid_refine: "(hm_valid,h.valid)\<in>hmr_rel \<rightarrow> nat_rel \<rightarrow> bool_rel"
apply (intro fun_relI)
unfolding hm_valid_def h.valid_def
by (parametricity add: hm_length_refine)
lemma hm_valid_hmr_\<alpha>[simp]: "h.valid (hmr_\<alpha> hm) = hm_valid hm"
by (intro ext) (auto simp: h.valid_def hm_valid_def)
subsubsection \<open>Has-Child\<close>
definition "hm_has_child_op hm k \<longleftrightarrow> 2*k \<le> hm_length hm"
definition "hm_left_child_op (k::nat) \<equiv> 2*k"
definition "hm_next_child_op (k::nat) \<equiv> k+1"
definition "hm_has_next_child_op hm k \<equiv> k+1 \<le> hm_length hm"
subsubsection \<open>Key-Of\<close>
definition hm_key_of :: "('k,'v) ahm \<Rightarrow> nat \<Rightarrow> 'k" where
"hm_key_of \<equiv> \<lambda>(pq,m) i. pq!(i - 1)"
definition hm_key_of_op :: "('k,'v) ahm \<Rightarrow> nat \<Rightarrow> 'k nres" where
"hm_key_of_op \<equiv> \<lambda>(pq,m) i. doN {ASSERT (i>0); mop_list_get pq (i - 1)}"
lemma hm_key_of_op_unfold:
shows "hm_key_of_op hm i = doN {ASSERT (hm_valid hm i); RETURN (hm_key_of hm i)}"
unfolding hm_valid_def hm_length_def hm_key_of_op_def hm_key_of_def
by (auto split: prod.splits simp: pw_eq_iff refine_pw_simps)
lemma val_of_hmr_\<alpha>[simp]: "hm_valid hm i \<Longrightarrow> h.val_of (hmr_\<alpha> hm) i
= the (heapmap_\<alpha> hm (hm_key_of hm i))"
by (auto
simp: hmr_\<alpha>_def h.val_of_def heapmap_\<alpha>_def hm_key_of_def hm_valid_def hm_length_def
split: prod.splits)
lemma hm_\<alpha>_key_ex[simp]:
"\<lbrakk>hmr_invar hm; hm_valid hm i\<rbrakk> \<Longrightarrow> (heapmap_\<alpha> hm (hm_key_of hm i) \<noteq> None)"
unfolding heapmap_invar_def hmr_invar_def hm_valid_def heapmap_\<alpha>_def
hm_key_of_def hm_length_def
apply (clarsimp split: prod.splits)
by (meson domD in_set_conv_nth nz_le_conv_less subset_code(1))
subsubsection \<open>Lookup\<close>
abbreviation (input) hm_lookup where "hm_lookup \<equiv> heapmap_\<alpha>"
definition "hm_the_lookup_op k hm \<equiv> doN {
ASSERT (heapmap_\<alpha> hm k \<noteq> None \<and> hmr_invar hm);
RETURN (the (heapmap_\<alpha> hm k))}"
(*definition "hm_the_lookup_op' hm k \<equiv> do {
let (pq,ml) = hm;
(*ASSERT (heapmap_\<alpha> (hm_impl1_\<alpha> hm) k \<noteq> None \<and> hm_impl1_invar hm);*)
v \<leftarrow> mop_map_lookup ml k;
RETURN v
}"
lemma hm_the_lookup_op'_refine:
"(hm_the_lookup_op', hm_the_lookup_op) \<in> hmr_rel \<rightarrow> nat_rel \<rightarrow> \<langle>Id\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
unfolding hm_the_lookup_op'_def hm_the_lookup_op_def
apply refine_vcg
apply (auto)
apply (auto simp: hm_impl1_rel_defs heapmap_\<alpha>_def hmr_invar_def split: if_split_asm)
done
*)
subsubsection \<open>Exchange\<close>
text \<open>Exchange two indices\<close>
definition "hm_exch_op \<equiv> \<lambda>(pq,m) i j. do {
ASSERT (hm_valid (pq,m) i);
ASSERT (hm_valid (pq,m) j);
ASSERT (hmr_invar (pq,m));
pq \<leftarrow> mop_list_swap pq (i - 1) (j - 1);
RETURN (pq,m)
}"
lemma hm_exch_op_invar: "hm_exch_op hm i j \<le>\<^sub>n SPEC hmr_invar"
unfolding hm_exch_op_def h.exch_op_def h.val_of_op_def h.update_op_def
apply simp
apply refine_vcg
apply (auto simp: hm_valid_def map_swap hm_length_def hmr_rel_defs)
done
lemma hm_exch_op_refine: "(hm_exch_op,h.exch_op) \<in> hmr_rel \<rightarrow> nat_rel \<rightarrow> nat_rel \<rightarrow> \<langle>hmr_rel\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
unfolding hm_exch_op_def h.exch_op_def h.val_of_op_def h.update_op_def
apply simp
apply refine_vcg
apply (auto simp: hm_valid_def map_swap hm_length_def hmr_rel_defs in_br_conv)
done
lemmas hm_exch_op_refine'[refine] = hm_exch_op_refine[param_fo, THEN nres_relD]
definition hm_exch :: "('k,'v) ahm \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> ('k,'v) ahm"
where "hm_exch \<equiv> \<lambda>(pq,m) i j. (swap pq (i-1) (j-1),m)"
lemma hm_exch_op_\<alpha>_correct: "hm_exch_op hm i j \<le>\<^sub>n SPEC (\<lambda>hm'.
hm_valid hm i \<and> hm_valid hm j \<and> hm'=hm_exch hm i j
)"
unfolding hm_exch_op_def
apply refine_vcg
apply (auto simp add: hm_key_of_def hm_exch_def swap_def)
done
lemma hm_exch_\<alpha>[simp]: "\<lbrakk>hm_valid hm i; hm_valid hm j\<rbrakk>
\<Longrightarrow> heapmap_\<alpha> (hm_exch hm i j) = (heapmap_\<alpha> hm)"
by (auto simp: heapmap_\<alpha>_def hm_exch_def hm_valid_def hm_length_def split: prod.splits)
lemma hm_exch_valid[simp]: "hm_valid (hm_exch hm i j) = hm_valid hm"
by (intro ext) (auto simp: hm_valid_def hm_length_def hm_exch_def split: prod.splits)
lemma hm_exch_length[simp]: "hm_length (hm_exch hm i j) = hm_length hm"
by (auto simp: hm_length_def hm_exch_def split: prod.splits)
lemma hm_exch_same[simp]: "hm_exch hm i i = hm"
by (auto simp: hm_exch_def split: prod.splits)
lemma hm_key_of_exch_conv[simp]:
"\<lbrakk>hm_valid hm i; hm_valid hm j; hm_valid hm k\<rbrakk> \<Longrightarrow>
hm_key_of (hm_exch hm i j) k = (
if k=i then hm_key_of hm j
else if k=j then hm_key_of hm i
else hm_key_of hm k
)"
unfolding hm_exch_def hm_valid_def hm_length_def hm_key_of_def
by (auto split: prod.splits simp: swap_nth)
lemma hm_key_of_exch_matching[simp]:
"\<lbrakk>hm_valid hm i; hm_valid hm j\<rbrakk> \<Longrightarrow> hm_key_of (hm_exch hm i j) i = hm_key_of hm j"
"\<lbrakk>hm_valid hm i; hm_valid hm j\<rbrakk> \<Longrightarrow> hm_key_of (hm_exch hm i j) j = hm_key_of hm i"
by simp_all
subsubsection \<open>Index\<close>
text \<open>Obtaining the index of a key\<close>
definition "hm_index \<equiv> \<lambda>(pq,m) k. index pq k + 1"
lemma hm_index_valid[simp]: "\<lbrakk>hmr_invar hm; heapmap_\<alpha> hm k \<noteq> None\<rbrakk> \<Longrightarrow> hm_valid hm (hm_index hm k)"
by (auto
simp: hm_valid_def heapmap_\<alpha>_def hmr_invar_def hm_index_def hm_length_def Suc_le_eq
simp: restrict_map_def split: if_splits)
lemma hm_index_key_of[simp]: "\<lbrakk>hmr_invar hm; heapmap_\<alpha> hm k \<noteq> None\<rbrakk> \<Longrightarrow> hm_key_of hm (hm_index hm k) = k"
by (auto
simp: hm_valid_def heapmap_\<alpha>_def hmr_invar_def hm_index_def hm_length_def hm_key_of_def Suc_le_eq
simp: restrict_map_def split: if_splits)
definition "hm_index_op \<equiv> \<lambda>(pq,m) k.
do {
ASSERT (hmr_invar (pq,m) \<and> heapmap_\<alpha> (pq,m) k \<noteq> None);
i \<leftarrow> mop_list_index pq k;
RETURN (i+1)
}"
lemma hm_index_op_correct:
assumes "hmr_invar hm"
assumes "heapmap_\<alpha> hm k \<noteq> None"
shows "hm_index_op hm k \<le> SPEC (\<lambda>r. r= hm_index hm k)"
using assms unfolding hm_index_op_def
apply refine_vcg
apply (auto simp: heapmap_\<alpha>_def hmr_invar_def hm_index_def index_nth_id)
done
lemmas [refine_vcg] = hm_index_op_correct
subsubsection \<open>Update\<close>
text \<open>Updating the heap at an index\<close>
definition hm_update_op :: "('k,'v) ahm \<Rightarrow> nat \<Rightarrow> 'v \<Rightarrow> ('k,'v) ahm nres" where
"hm_update_op \<equiv> \<lambda>(pq,m) i v. do {
ASSERT (hm_valid (pq,m) i \<and> hmr_invar (pq,m));
k \<leftarrow> mop_list_get pq (i - 1);
RETURN (pq, m(k \<mapsto> v))
}"
lemma hm_update_op_invar: "hm_update_op hm k v \<le>\<^sub>n SPEC hmr_invar"
unfolding hm_update_op_def h.update_op_def
apply refine_vcg
by (auto simp: hmr_rel_defs map_distinct_upd_conv hm_valid_def hm_length_def)
lemma hm_update_op_refine: "(hm_update_op, h.update_op) \<in> hmr_rel \<rightarrow> nat_rel \<rightarrow> Id \<rightarrow> \<langle>hmr_rel\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
unfolding hm_update_op_def h.update_op_def mop_list_get_alt mop_list_set_alt
apply refine_vcg
apply (auto simp: hmr_rel_defs map_distinct_upd_conv hm_valid_def hm_length_def in_br_conv)
done
lemmas [refine] = hm_update_op_refine[param_fo, THEN nres_relD]
lemma hm_update_op_\<alpha>_correct:
assumes "hmr_invar hm"
assumes "heapmap_\<alpha> hm k \<noteq> None"
shows "hm_update_op hm (hm_index hm k) v \<le>\<^sub>n SPEC (\<lambda>hm'. heapmap_\<alpha> hm' = (heapmap_\<alpha> hm)(k\<mapsto>v))"
using assms
unfolding hm_update_op_def
apply refine_vcg
apply (auto simp: heapmap_rel_defs hmr_rel_defs hm_index_def restrict_map_def split: if_splits)
done
subsubsection \<open>Butlast\<close>
text \<open>Remove last element\<close>
definition hm_butlast_op :: "('k,'v) ahm \<Rightarrow> ('k,'v) ahm nres" where
"hm_butlast_op \<equiv> \<lambda>(pq,m). do {
ASSERT (hmr_invar (pq,m));
ASSERT (pq\<noteq>[]);
k \<leftarrow> mop_list_get pq (length pq - 1);
pq \<leftarrow> mop_list_butlast pq;
RETURN (pq,m)
}"
lemma hm_butlast_op_refine: "(hm_butlast_op, h.butlast_op) \<in> hmr_rel \<rightarrow> \<langle>hmr_rel\<rangle>nres_rel"
supply [simp del] = map_upd_eq_restrict
apply (intro fun_relI nres_relI)
unfolding hm_butlast_op_def h.butlast_op_def
apply simp
apply refine_vcg
apply (clarsimp_all simp: hmr_rel_defs map_butlast distinct_butlast in_br_conv)
apply (auto simp: neq_Nil_rev_conv) []
done
lemmas [refine] = hm_butlast_op_refine[param_fo, THEN nres_relD]
lemma hm_butlast_op_\<alpha>_correct: "hm_butlast_op hm \<le>\<^sub>n SPEC (
\<lambda>hm'. heapmap_\<alpha> hm' = (heapmap_\<alpha> hm)( hm_key_of hm (hm_length hm) := None ))"
proof -
have AUX: "xs\<noteq>[]
\<Longrightarrow> set (butlast xs) = (if xs!(length xs - 1) \<in> set (butlast xs) then set xs else set xs - {xs!(length xs - 1)})"
for xs :: "'a list"
apply (cases xs rule: rev_cases)
apply auto
done
have AUX: "\<lbrakk> distinct xs; xs\<noteq>[] \<rbrakk> \<Longrightarrow> set (butlast xs) = set xs - {xs!(length xs - 1)}"
for xs :: "'a list"
apply (cases xs rule: rev_cases)
by auto
show ?thesis
unfolding hm_butlast_op_def
apply refine_vcg
by (auto simp: heapmap_\<alpha>_def hm_key_of_def hm_length_def hmr_invar_def AUX)
qed
subsubsection \<open>Append\<close>
text \<open>Append new element at end of heap\<close>
definition hm_append_op :: "('k,'v) ahm \<Rightarrow> 'k \<Rightarrow> 'v \<Rightarrow> ('k,'v) ahm nres"
where "hm_append_op \<equiv> \<lambda>(pq,m) k v. do {
ASSERT (k \<notin> set pq);
ASSERT (hmr_invar (pq,m));
pq \<leftarrow> mop_list_append pq k;
let m = m (k \<mapsto> v);
RETURN (pq,m)
}"
lemma hm_append_op_invar: "hm_append_op hm k v \<le>\<^sub>n SPEC hmr_invar"
unfolding hm_append_op_def h.append_op_def
apply refine_vcg
unfolding heapmap_\<alpha>_def hmr_rel_defs
apply (auto simp: )
done
lemma hm_append_op_refine: "\<lbrakk> heapmap_\<alpha> hm k = None; (hm,h)\<in>hmr_rel \<rbrakk>
\<Longrightarrow> (hm_append_op hm k v, h.append_op h v) \<in> \<langle>hmr_rel\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
unfolding hm_append_op_def h.append_op_def
apply refine_vcg
unfolding heapmap_\<alpha>_def hmr_rel_defs
apply (auto simp: in_br_conv)
done
lemmas hm_append_op_refine'[refine] = hm_append_op_refine[param_fo, THEN nres_relD]
lemma hm_append_op_\<alpha>_correct:
"hm_append_op hm k v \<le>\<^sub>n SPEC (\<lambda>hm'. heapmap_\<alpha> hm' = (heapmap_\<alpha> hm) (k \<mapsto> v))"
unfolding hm_append_op_def
apply refine_vcg
by (auto simp: heapmap_\<alpha>_def)
subsection \<open>Auxiliary Operations\<close>
text \<open>Auxiliary operations on heapmaps, which are derived
from the basic operations, but do not correspond to
operations of the priority map interface\<close>
text \<open>We start with some setup\<close>
lemma heapmap_hmr_relI: "(hm,h)\<in>heapmap_rel \<Longrightarrow> (hm,hmr_\<alpha> hm) \<in> hmr_rel"
by (auto simp: heapmap_rel_defs hmr_rel_defs)
lemma heapmap_hmr_relI': "heapmap_invar hm \<Longrightarrow> (hm,hmr_\<alpha> hm) \<in> hmr_rel"
by (auto simp: heapmap_rel_defs hmr_rel_defs)
text \<open>The basic principle how we prove correctness of our operations:
Invariant preservation is shown by relating the operations to
operations on heaps. Then, only correctness on the abstraction
remains to be shown, assuming the operation does not fail.
\<close>
lemma heapmap_nres_relI':
assumes "hm \<le> \<Down>hmr_rel h'"
assumes "h' \<le> SPEC (h.heap_invar)"
assumes "hm \<le>\<^sub>n SPEC (\<lambda>hm'. RETURN (heapmap_\<alpha> hm') \<le> h)"
shows "hm \<le> \<Down>heapmap_rel h"
using assms
unfolding heapmap_rel_defs hmr_rel_def
by (auto simp: pw_le_iff pw_leof_iff refine_pw_simps)
lemma heapmap_nres_relI'':
assumes "hm \<le> \<Down>hmr_rel h'"
assumes "h' \<le> SPEC \<Phi>"
assumes "\<And>h'. \<Phi> h' \<Longrightarrow> h.heap_invar h'"
assumes "hm \<le>\<^sub>n SPEC (\<lambda>hm'. RETURN (heapmap_\<alpha> hm') \<le> h)"
shows "hm \<le> \<Down>heapmap_rel h"
apply (rule heapmap_nres_relI')
apply fact
apply (rule order_trans, fact)
apply (clarsimp; fact)
apply fact
done
subsubsection \<open>Val-of\<close>
text \<open>Indexing into the heap\<close>
definition hm_val_of_op :: "('k,'v) ahm \<Rightarrow> nat \<Rightarrow> 'v nres" where
"hm_val_of_op \<equiv> \<lambda>hm i. do {
k \<leftarrow> hm_key_of_op hm i;
v \<leftarrow> hm_the_lookup_op k hm;
RETURN v
}"
lemma hm_val_of_op_refine: "(hm_val_of_op,h.val_of_op) \<in> (hmr_rel \<rightarrow> nat_rel \<rightarrow> \<langle>Id\<rangle>nres_rel)"
apply (intro fun_relI nres_relI)
unfolding hm_val_of_op_def h.val_of_op_def
hm_key_of_op_def hm_key_of_def hm_valid_def hm_length_def
hm_the_lookup_op_def
apply clarsimp
apply (rule refine_IdD)
apply refine_vcg
apply (auto simp: hmr_rel_defs in_br_conv heapmap_\<alpha>_def)
by (meson domD nth_mem subsetCE)
lemmas [refine] = hm_val_of_op_refine[param_fo, THEN nres_relD]
subsubsection \<open>Prio-of\<close>
text \<open>Priority of key\<close>
definition "hm_prio_of_op h i \<equiv> do {v \<leftarrow> hm_val_of_op h i; RETURN (prio v)}"
lemma hm_prio_of_op_refine: "(hm_prio_of_op, h.prio_of_op) \<in> hmr_rel \<rightarrow> nat_rel \<rightarrow> \<langle>Id\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
unfolding hm_prio_of_op_def h.prio_of_op_def
apply refine_rcg
by auto
lemmas hm_prio_of_op_refine'[refine] = hm_prio_of_op_refine[param_fo, THEN nres_relD]
subsubsection \<open>Swim\<close>
definition hm_swim_op :: "('k,'v) ahm \<Rightarrow> nat \<Rightarrow> ('k,'v) ahm nres" where
"hm_swim_op h i \<equiv> do {
RECT (\<lambda>swim (h,i). do {
ASSERT (hm_valid h i \<and> h.swim_invar (hmr_\<alpha> h) i);
if hm_valid h (h.parent i) then do {
ppi \<leftarrow> hm_prio_of_op h (h.parent i);
pi \<leftarrow> hm_prio_of_op h i;
if (\<not>ppi \<le> pi) then do {
h \<leftarrow> hm_exch_op h i (h.parent i);
swim (h, h.parent i)
} else
RETURN h
} else
RETURN h
}) (h,i)
}"
lemma hm_swim_op_refine: "(hm_swim_op, h.swim_op) \<in> hmr_rel \<rightarrow> nat_rel \<rightarrow> \<langle>hmr_rel\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
unfolding hm_swim_op_def h.swim_op_def
apply refine_rcg
apply refine_dref_type
apply (clarsimp_all simp: hm_valid_refine[param_fo, THEN IdD])
apply (simp add: hmr_rel_def in_br_conv)
done
lemmas hm_swim_op_refine'[refine] = hm_swim_op_refine[param_fo, THEN nres_relD]
lemma hm_swim_op_nofail_imp_valid:
"nofail (hm_swim_op hm i) \<Longrightarrow> hm_valid hm i \<and> h.swim_invar (hmr_\<alpha> hm) i"
unfolding hm_swim_op_def
apply (subst (asm) RECT_unfold, refine_mono)
by (auto simp: refine_pw_simps)
lemma hm_swim_op_\<alpha>_correct: "hm_swim_op hm i \<le>\<^sub>n SPEC (\<lambda>hm'. heapmap_\<alpha> hm' = heapmap_\<alpha> hm)"
apply (rule leof_add_nofailI)
apply (drule hm_swim_op_nofail_imp_valid)
unfolding hm_swim_op_def
apply (rule RECT_rule_leof[where
pre="\<lambda>(hm',i). hm_valid hm' i \<and> heapmap_\<alpha> hm' = heapmap_\<alpha> hm"
and V = "inv_image less_than snd"
])
apply simp
apply simp
unfolding hm_prio_of_op_def hm_val_of_op_def
hm_exch_op_def hm_key_of_op_def hm_the_lookup_op_def
apply (refine_vcg)
apply (clarsimp_all simp add: hm_valid_def hm_length_def)
apply rprems
apply (auto simp: heapmap_\<alpha>_def h.parent_def)
done
subsubsection \<open>Sink\<close>
definition hm_sink_op
where
"hm_sink_op h k \<equiv> RECT (\<lambda>D (h,k). do {
ASSERT (k>0 \<and> k\<le>hm_length h);
if hm_has_child_op h k then do {
let j = hm_left_child_op k;
pj \<leftarrow> hm_prio_of_op h j;
j \<leftarrow> (
if hm_has_next_child_op h j then do {
let j' = hm_next_child_op j;
psj \<leftarrow> hm_prio_of_op h j';
if pj>psj then RETURN j' else RETURN j
} else RETURN j);
pj \<leftarrow> hm_prio_of_op h j;
pk \<leftarrow> hm_prio_of_op h k;
if (pk > pj) then do {
h \<leftarrow> hm_exch_op h k j;
D (h,j)
} else
RETURN h
} else RETURN h
}) (h,k)"
lemma hm_sink_op_refine: "(hm_sink_op, h.sink_op) \<in> hmr_rel \<rightarrow> nat_rel \<rightarrow> \<langle>hmr_rel\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
unfolding hm_sink_op_def h.sink_op_opt_eq[symmetric] h.sink_op_opt_def
unfolding hm_has_child_op_def hm_has_next_child_op_def
hm_left_child_op_def hm_next_child_op_def
apply (rewrite at "let _ = length _ in _" Let_def)
apply (rewrite at "let _ = _ + 1 in _" Let_def)
apply refine_rcg
apply refine_dref_type
unfolding hmr_rel_def heapmap_rel_def
apply (auto simp: in_br_conv)
done
lemmas hm_sink_op_refine'[refine] = hm_sink_op_refine[param_fo, THEN nres_relD]
lemma hm_sink_op_nofail_imp_valid: "nofail (hm_sink_op hm i) \<Longrightarrow> hm_valid hm i"
unfolding hm_sink_op_def
apply (subst (asm) RECT_unfold, refine_mono)
by (auto simp: refine_pw_simps hm_valid_def)
lemma hm_sink_op_\<alpha>_correct: "hm_sink_op hm i \<le>\<^sub>n SPEC (\<lambda>hm'. heapmap_\<alpha> hm' = heapmap_\<alpha> hm)"
apply (rule leof_add_nofailI)
apply (drule hm_sink_op_nofail_imp_valid)
unfolding hm_sink_op_def
unfolding hm_left_child_op_def hm_has_child_op_def hm_has_next_child_op_def hm_next_child_op_def
apply (rule RECT_rule_leof[where
pre="\<lambda>(hm',i). hm_valid hm' i \<and> heapmap_\<alpha> hm' = heapmap_\<alpha> hm \<and> hm_length hm' = hm_length hm"
and V = "measure (\<lambda>(hm',i). hm_length hm' - i)"
])
apply simp
apply simp
unfolding hm_prio_of_op_def hm_val_of_op_def hm_exch_op_def
hm_key_of_op_def hm_the_lookup_op_def
apply (refine_vcg)
apply (clarsimp_all simp add: hm_valid_def hm_length_def) (* Takes long *)
apply rprems
apply (clarsimp_all simp: heapmap_\<alpha>_def h.parent_def split: prod.splits)
apply (auto)
done
subsubsection \<open>Repair\<close>
definition "hm_repair_op hm i \<equiv> do {
hm \<leftarrow> hm_sink_op hm i;
hm \<leftarrow> hm_swim_op hm i;
RETURN hm
}"
lemma hm_repair_op_refine: "(hm_repair_op, h.repair_op) \<in> hmr_rel \<rightarrow> nat_rel \<rightarrow> \<langle>hmr_rel\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
unfolding hm_repair_op_def h.repair_op_def
by refine_rcg
lemmas hm_repair_op_refine'[refine] = hm_repair_op_refine[param_fo, THEN nres_relD]
lemma hm_repair_op_\<alpha>_correct: "hm_repair_op hm i \<le>\<^sub>n SPEC (\<lambda>hm'. heapmap_\<alpha> hm' = heapmap_\<alpha> hm)"
unfolding hm_repair_op_def
apply (refine_vcg
hm_swim_op_\<alpha>_correct[THEN leof_trans]
hm_sink_op_\<alpha>_correct[THEN leof_trans])
by auto
subsection \<open>Operations\<close>
text \<open>In this section, we define the operations that implement the priority-map interface\<close>
subsubsection \<open>Empty\<close>
definition hm_empty_op :: "('k,'v) ahm nres"
where "hm_empty_op \<equiv> do { m\<leftarrow>mop_map_empty; RETURN ([],m) }"
lemma hm_empty_op_aref: "(hm_empty_op,mop_map_empty) \<in> \<langle>heapmap_rel\<rangle>nres_rel"
unfolding hm_empty_op_def
apply refine_vcg
by (auto simp: heapmap_rel_defs hmr_rel_defs intro: nres_relI)
subsubsection \<open>Insert\<close>
definition hm_insert_op :: "'k \<Rightarrow> 'v \<Rightarrow> ('k,'v) ahm \<Rightarrow> ('k,'v) ahm nres" where
"hm_insert_op \<equiv> \<lambda>k v h. do {
ASSERT (h.heap_invar (hmr_\<alpha> h));
h \<leftarrow> hm_append_op h k v;
let l = hm_length h;
h \<leftarrow> hm_swim_op h l;
RETURN h
}"
lemma hm_insert_op_refine[refine]: "\<lbrakk> heapmap_\<alpha> hm k = None; (hm,h)\<in>hmr_rel \<rbrakk> \<Longrightarrow>
hm_insert_op k v hm \<le> \<Down>hmr_rel (h.insert_op v h)"
unfolding hm_insert_op_def h.insert_op_def
apply refine_rcg
by (auto simp: hmr_rel_def br_def)
lemma hm_insert_op_aref:
"(hm_insert_op,mop_map_update_new) \<in> Id \<rightarrow> Id \<rightarrow> heapmap_rel \<rightarrow> \<langle>heapmap_rel\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
unfolding mop_map_update_new_alt
apply (rule ASSERT_refine_right)
apply (rule heapmap_nres_relI''[OF hm_insert_op_refine h.insert_op_correct])
apply (unfold heapmap_rel_def in_br_conv; clarsimp)
apply (erule heapmap_hmr_relI)
apply (unfold heapmap_rel_def in_br_conv; clarsimp)
apply (unfold heapmap_rel_def in_br_conv; clarsimp)
unfolding hm_insert_op_def
apply (refine_vcg
hm_append_op_\<alpha>_correct[THEN leof_trans]
hm_swim_op_\<alpha>_correct[THEN leof_trans])
apply (unfold heapmap_rel_def in_br_conv; clarsimp)
done
subsubsection \<open>Is-Empty\<close>
lemma hmr_\<alpha>_empty_iff[simp]:
"hmr_invar hm \<Longrightarrow> hmr_\<alpha> hm = [] \<longleftrightarrow> heapmap_\<alpha> hm = Map.empty"
by (auto
simp: hmr_\<alpha>_def heapmap_invar_def heapmap_\<alpha>_def hmr_invar_def restrict_map_def fun_eq_iff
split: prod.split if_splits)
definition hm_is_empty_op :: "('k,'v) ahm \<Rightarrow> bool nres" where
"hm_is_empty_op \<equiv> \<lambda>hm. do {
ASSERT (hmr_invar hm);
let l = hm_length hm;
RETURN (l=0)
}"
lemma hm_is_empty_op_refine: "(hm_is_empty_op, h.is_empty_op) \<in> hmr_rel \<rightarrow> \<langle>bool_rel\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
unfolding hm_is_empty_op_def h.is_empty_op_def
apply refine_rcg
subgoal by (auto simp: hmr_rel_defs in_br_conv)
subgoal by (parametricity add: hm_length_refine)
done
lemma hm_is_empty_op_aref: "(hm_is_empty_op, mop_map_is_empty) \<in> heapmap_rel \<rightarrow> \<langle>bool_rel\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
unfolding hm_is_empty_op_def
apply refine_vcg
apply (auto simp: hmr_rel_defs in_br_conv heapmap_rel_defs hm_length_def)
by (metis Int_absorb1 dom_restrict length_greater_0_conv ndomIff nth_mem)
subsubsection \<open>Lookup\<close>
definition hm_lookup_op :: "'k \<Rightarrow> ('k,'v) ahm \<Rightarrow> 'v option nres"
where "hm_lookup_op \<equiv> \<lambda>k hm. doN {ASSERT (heapmap_invar hm); RETURN (hm_lookup hm k)}"
lemma hm_lookup_op_aref: "(hm_lookup_op,RETURN oo op_map_lookup) \<in> Id \<rightarrow> heapmap_rel \<rightarrow> \<langle>\<langle>Id\<rangle>option_rel\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
unfolding hm_lookup_op_def heapmap_rel_def in_br_conv
apply refine_vcg
apply simp_all
done
lemma hm_the_lookup_op_aref: "(hm_the_lookup_op,mop_map_the_lookup) \<in> Id \<rightarrow> heapmap_rel \<rightarrow> \<langle>Id\<rangle>nres_rel"
unfolding hm_the_lookup_op_def
by (auto intro!: nres_relI simp: pw_eq_iff refine_pw_simps heapmap_rel_def in_br_conv)
subsubsection \<open>Contains-Key\<close>
definition "hm_contains_key_op \<equiv> \<lambda>k (pq,m). doN {ASSERT (heapmap_invar (pq,m)); mop_list_contains k pq}"
lemma hm_contains_key_op_aref: "(hm_contains_key_op,mop_map_contains_key) \<in> Id \<rightarrow> heapmap_rel \<rightarrow> \<langle>bool_rel\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
unfolding hm_contains_key_op_def heapmap_rel_defs
apply refine_vcg
by (auto simp: hmr_invar_def)
subsubsection \<open>Decrease-Key\<close>
definition "hm_decrease_key_op \<equiv> \<lambda>k v hm. do {
ASSERT (heapmap_invar hm);
ASSERT (heapmap_\<alpha> hm k \<noteq> None \<and> prio v \<le> prio (the (heapmap_\<alpha> hm k)));
i \<leftarrow> hm_index_op hm k;
hm \<leftarrow> hm_update_op hm i v;
hm_swim_op hm i
}"
definition (in heapstruct) "decrease_key_op i v h \<equiv> do {
ASSERT (valid h i \<and> prio v \<le> prio_of h i);
h \<leftarrow> update_op h i v;
swim_op h i
}"
lemma (in heapstruct) decrease_key_op_invar:
"\<lbrakk>heap_invar h; valid h i; prio v \<le> prio_of h i\<rbrakk> \<Longrightarrow> decrease_key_op i v h \<le> SPEC heap_invar"
unfolding decrease_key_op_def
apply refine_vcg
by (auto simp: swim_invar_decr)
lemma index_op_inline_refine:
assumes "heapmap_invar hm"
assumes "heapmap_\<alpha> hm k \<noteq> None"
assumes "f (hm_index hm k) \<le> m"
shows "do {i \<leftarrow> hm_index_op hm k; f i} \<le> m"
using hm_index_op_correct[of hm k] assms
by (auto simp: pw_le_iff refine_pw_simps)
lemma hm_decrease_key_op_refine:
"\<lbrakk>(hm,h)\<in>hmr_rel; (hm,m)\<in>heapmap_rel; m k = Some v'\<rbrakk>
\<Longrightarrow> hm_decrease_key_op k v hm \<le>\<Down>hmr_rel (h.decrease_key_op (hm_index hm k) v h)"
unfolding hm_decrease_key_op_def h.decrease_key_op_def
(*apply (rewrite at "Let (hm_index hm k) _" Let_def)*)
apply (refine_rcg index_op_inline_refine)
unfolding hmr_rel_def heapmap_rel_def in_br_conv
apply (clarsimp_all)
done
lemma hm_index_op_inline_leof:
assumes "f (hm_index hm k) \<le>\<^sub>n m"
shows "do {i \<leftarrow> hm_index_op hm k; f i} \<le>\<^sub>n m"
using hm_index_op_correct[of hm k] assms unfolding hm_index_op_def
by (auto simp: pw_le_iff pw_leof_iff refine_pw_simps split: prod.splits)
lemma hm_decrease_key_op_\<alpha>_correct:
"heapmap_invar hm \<Longrightarrow> hm_decrease_key_op k v hm \<le>\<^sub>n SPEC (\<lambda>hm'. heapmap_\<alpha> hm' = heapmap_\<alpha> hm(k\<mapsto>v))"
unfolding hm_decrease_key_op_def
apply (refine_vcg
hm_update_op_\<alpha>_correct[THEN leof_trans]
hm_swim_op_\<alpha>_correct[THEN leof_trans]
hm_index_op_inline_leof
)
apply simp_all
done
lemma hm_decrease_key_op_aref:
"(hm_decrease_key_op, PR_CONST (mop_pm_decrease_key prio)) \<in> Id \<rightarrow> Id \<rightarrow> heapmap_rel \<rightarrow> \<langle>heapmap_rel\<rangle>nres_rel"
unfolding PR_CONST_def
apply (intro fun_relI nres_relI)
apply (frule heapmap_hmr_relI)
unfolding mop_pm_decrease_key_alt
apply (rule ASSERT_refine_right; clarsimp)
apply (rule heapmap_nres_relI')
apply (rule hm_decrease_key_op_refine; assumption)
unfolding heapmap_rel_def hmr_rel_def in_br_conv
apply (rule h.decrease_key_op_invar; simp; fail )
apply (refine_vcg hm_decrease_key_op_\<alpha>_correct[THEN leof_trans]; simp; fail)
done
subsubsection \<open>Increase-Key\<close>
definition "hm_increase_key_op \<equiv> \<lambda>k v hm. do {
ASSERT (heapmap_invar hm);
ASSERT (heapmap_\<alpha> hm k \<noteq> None \<and> prio v \<ge> prio (the (heapmap_\<alpha> hm k)));
i \<leftarrow> hm_index_op hm k;
hm \<leftarrow> hm_update_op hm i v;
hm_sink_op hm i
}"
definition (in heapstruct) "increase_key_op i v h \<equiv> do {
ASSERT (valid h i \<and> prio v \<ge> prio_of h i);
h \<leftarrow> update_op h i v;
sink_op h i
}"
lemma (in heapstruct) increase_key_op_invar:
"\<lbrakk>heap_invar h; valid h i; prio v \<ge> prio_of h i\<rbrakk> \<Longrightarrow> increase_key_op i v h \<le> SPEC heap_invar"
unfolding increase_key_op_def
apply refine_vcg
by (auto simp: sink_invar_incr)
lemma hm_increase_key_op_refine:
"\<lbrakk>(hm,h)\<in>hmr_rel; (hm,m)\<in>heapmap_rel; m k = Some v'\<rbrakk>
\<Longrightarrow> hm_increase_key_op k v hm \<le>\<Down>hmr_rel (h.increase_key_op (hm_index hm k) v h)"
unfolding hm_increase_key_op_def h.increase_key_op_def
(*apply (rewrite at "Let (hm_index hm k) _" Let_def)*)
apply (refine_rcg index_op_inline_refine)
unfolding hmr_rel_def heapmap_rel_def in_br_conv
apply (clarsimp_all)
done
lemma hm_increase_key_op_\<alpha>_correct:
"heapmap_invar hm \<Longrightarrow> hm_increase_key_op k v hm \<le>\<^sub>n SPEC (\<lambda>hm'. heapmap_\<alpha> hm' = heapmap_\<alpha> hm(k\<mapsto>v))"
unfolding hm_increase_key_op_def
apply (refine_vcg
hm_update_op_\<alpha>_correct[THEN leof_trans]
hm_sink_op_\<alpha>_correct[THEN leof_trans]
hm_index_op_inline_leof)
apply simp_all
done
lemma hm_increase_key_op_aref:
"(hm_increase_key_op, PR_CONST (mop_pm_increase_key prio)) \<in> Id \<rightarrow> Id \<rightarrow> heapmap_rel \<rightarrow> \<langle>heapmap_rel\<rangle>nres_rel"
unfolding PR_CONST_def
apply (intro fun_relI nres_relI)
apply (frule heapmap_hmr_relI)
unfolding mop_pm_increase_key_alt
apply (rule ASSERT_refine_right; clarsimp)
apply (rule heapmap_nres_relI')
apply (rule hm_increase_key_op_refine; assumption)
unfolding heapmap_rel_def hmr_rel_def in_br_conv
apply (rule h.increase_key_op_invar; simp; fail )
apply (refine_vcg hm_increase_key_op_\<alpha>_correct[THEN leof_trans]; simp)
done
subsubsection \<open>Change-Key\<close>
definition "hm_change_key_op \<equiv> \<lambda>k v hm. do {
ASSERT (heapmap_invar hm);
ASSERT (heapmap_\<alpha> hm k \<noteq> None);
i \<leftarrow> hm_index_op hm k;
hm \<leftarrow> hm_update_op hm i v;
hm_repair_op hm i
}"
definition (in heapstruct) "change_key_op i v h \<equiv> do {
ASSERT (valid h i);
h \<leftarrow> update_op h i v;
repair_op h i
}"
lemma (in heapstruct) change_key_op_invar:
"\<lbrakk>heap_invar h; valid h i\<rbrakk> \<Longrightarrow> change_key_op i v h \<le> SPEC heap_invar"
unfolding change_key_op_def
apply (refine_vcg)
apply hypsubst
apply refine_vcg
by (auto simp: sink_invar_incr)
lemma hm_change_key_op_refine:
"\<lbrakk>(hm,h)\<in>hmr_rel; (hm,m)\<in>heapmap_rel; m k = Some v'\<rbrakk>
\<Longrightarrow> hm_change_key_op k v hm \<le>\<Down>hmr_rel (h.change_key_op (hm_index hm k) v h)"
unfolding hm_change_key_op_def h.change_key_op_def
(*apply (rewrite at "Let (hm_index hm k) _" Let_def)*)
apply (refine_rcg index_op_inline_refine)
unfolding hmr_rel_def heapmap_rel_def in_br_conv
apply (clarsimp_all)
done
lemma hm_change_key_op_\<alpha>_correct:
"heapmap_invar hm \<Longrightarrow> hm_change_key_op k v hm \<le>\<^sub>n SPEC (\<lambda>hm'. heapmap_\<alpha> hm' = heapmap_\<alpha> hm(k\<mapsto>v))"
unfolding hm_change_key_op_def
apply (refine_vcg
hm_update_op_\<alpha>_correct[THEN leof_trans]
hm_repair_op_\<alpha>_correct[THEN leof_trans]
hm_index_op_inline_leof)
unfolding heapmap_rel_def in_br_conv
apply simp
apply simp
done
lemma hm_change_key_op_aref:
"(hm_change_key_op, mop_map_update_ex) \<in> Id \<rightarrow> Id \<rightarrow> heapmap_rel \<rightarrow> \<langle>heapmap_rel\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
apply (frule heapmap_hmr_relI)
unfolding mop_map_update_ex_alt
apply (rule ASSERT_refine_right; clarsimp)
apply (rule heapmap_nres_relI')
apply (rule hm_change_key_op_refine; assumption)
unfolding heapmap_rel_def hmr_rel_def in_br_conv
apply (rule h.change_key_op_invar; simp; fail )
apply ((refine_vcg hm_change_key_op_\<alpha>_correct[THEN leof_trans]; simp))
done
subsubsection \<open>Set\<close>
text \<open>Realized as generic algorithm!\<close> (* TODO: Implement as such! *)
lemma (in -) op_pm_set_gen_impl: "RETURN ooo op_map_update = (\<lambda>k v m. do {
c \<leftarrow> RETURN (op_map_contains_key k m);
if c then
mop_map_update_ex k v m
else
mop_map_update_new k v m
})"
apply (intro ext)
unfolding op_map_contains_key_def mop_map_update_ex_def mop_map_update_new_def
by simp
definition "hm_set_op k v hm \<equiv> do {
c \<leftarrow> hm_contains_key_op k hm;
if c then
hm_change_key_op k v hm
else
hm_insert_op k v hm
}"
(* TODO: Move. Near RETURN_to_SPEC_rule *)
thm RETURN_to_SPEC_rule
lemma SPEC_to_RETURN_rule: "m \<le> RETURN x \<Longrightarrow> m \<le> SPEC ((=) x)"
by (auto simp: pw_le_iff)
lemma hm_set_op_aref:
"(hm_set_op, mop_map_update) \<in> Id \<rightarrow> Id \<rightarrow> heapmap_rel \<rightarrow> \<langle>heapmap_rel\<rangle>nres_rel"
unfolding op_pm_set_gen_impl
apply (intro fun_relI nres_relI)
unfolding hm_set_op_def o_def
using hm_contains_key_op_aref[param_fo]
using hm_change_key_op_aref[param_fo]
using hm_insert_op_aref[param_fo]
by (fastforce simp: pw_le_iff pw_nres_rel_iff refine_pw_simps) (** Takes long *)
subsubsection \<open>Pop-Min\<close>
definition hm_pop_min_op :: "('k,'v) ahm \<Rightarrow> (('k\<times>'v) \<times> ('k,'v) ahm) nres" where
"hm_pop_min_op hm \<equiv> do {
ASSERT (heapmap_invar hm);
ASSERT (hm_valid hm 1);
k \<leftarrow> hm_key_of_op hm 1;
v \<leftarrow> hm_the_lookup_op k hm;
let l = hm_length hm;
hm \<leftarrow> hm_exch_op hm 1 l;
hm \<leftarrow> hm_butlast_op hm;
if (l\<noteq>1) then do {
hm \<leftarrow> hm_sink_op hm 1;
RETURN ((k,v),hm)
} else RETURN ((k,v),hm)
}"
lemma hm_pop_min_op_refine:
"(hm_pop_min_op, h.pop_min_op) \<in> hmr_rel \<rightarrow> \<langle>UNIV \<times>\<^sub>r hmr_rel\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
unfolding hm_pop_min_op_def h.pop_min_op_def
(* Project away stuff of second component *)
unfolding ignore_snd_refine_conv hm_the_lookup_op_def hm_key_of_op_unfold
apply (simp cong: if_cong add: Let_def)
apply (simp add: unused_bind_conv h.val_of_op_def refine_pw_simps)
(* Prove refinement *)
apply refine_rcg
unfolding hmr_rel_def in_br_conv
apply (unfold heapmap_invar_def;simp)
apply (auto simp: in_br_conv)
done
text \<open>We demonstrate two different approaches for proving correctness
here.
The first approach uses the relation to plain heaps only to establish
the invariant.
The second approach also uses the relation to heaps to establish
correctness of the result.
The first approach seems to be more robust against badly set-up
simpsets, which may occur in early stages of development.
Assuming a working simpset, the second approach may be less work,
and the proof may look more elegant.
\<close>
text_raw \<open>\paragraph{First approach}\<close>
text \<open>Transfer heapmin-property to heapmap-domain\<close>
lemma heapmap_min_prop:
assumes INV: "heapmap_invar hm"
assumes V': "heapmap_\<alpha> hm k = Some v'"
assumes NE: "hm_valid hm (Suc 0)"
shows "prio (the (heapmap_\<alpha> hm (hm_key_of hm (Suc 0)))) \<le> prio v'"
proof -
\<comment> \<open>Transform into the domain of heaps\<close>
obtain pq m where [simp]: "hm=(pq,m)" by (cases hm)
from NE have [simp]: "pq\<noteq>[]" by (auto simp: hm_valid_def hm_length_def)
have CNV_LHS: "prio (the (heapmap_\<alpha> hm (hm_key_of hm (Suc 0))))
= h.prio_of (hmr_\<alpha> hm) (Suc 0)"
by (auto simp: heapmap_\<alpha>_def hm_key_of_def hmr_\<alpha>_def h.val_of_def)
from INV have INV': "h.heap_invar (hmr_\<alpha> hm)"
unfolding heapmap_invar_def by auto
from V' INV obtain i where IDX: "h.valid (hmr_\<alpha> hm) i"
and CNV_RHS: "prio v' = h.prio_of (hmr_\<alpha> hm) i"
apply (clarsimp simp: heapmap_\<alpha>_def heapmap_invar_def hmr_invar_def hmr_\<alpha>_def
h.valid_def h.val_of_def restrict_map_eq)
by (metis (no_types, hide_lams) Suc_leI comp_apply diff_Suc_Suc
diff_zero index_less_size_conv neq0_conv nth_index nth_map
old.nat.distinct(2) option.sel)
from h.heap_min_prop[OF INV' IDX] show ?thesis
unfolding CNV_LHS CNV_RHS .
qed
text \<open>With the above lemma, the correctness proof is straightforward\<close>
lemma heapmap_nres_rel_prodI:
assumes "hmx \<le> \<Down>(UNIV \<times>\<^sub>r hmr_rel) h'x"
assumes "h'x \<le> SPEC (\<lambda>(_,h'). h.heap_invar h')"
assumes "hmx \<le>\<^sub>n SPEC (\<lambda>(r,hm'). RETURN (r,heapmap_\<alpha> hm') \<le> \<Down>(R\<times>\<^sub>rId) hx)"
shows "hmx \<le> \<Down>(R\<times>\<^sub>rheapmap_rel) hx"
using assms
unfolding heapmap_rel_def hmr_rel_def br_def heapmap_invar_def
apply (auto simp: pw_le_iff pw_leof_iff refine_pw_simps; blast)
done
lemma hm_pop_min_op_aref: "(hm_pop_min_op, PR_CONST (mop_pm_pop_min prio)) \<in> heapmap_rel \<rightarrow> \<langle>(Id\<times>\<^sub>rId)\<times>\<^sub>rheapmap_rel\<rangle>nres_rel"
unfolding PR_CONST_def
apply (intro fun_relI nres_relI)
apply (frule heapmap_hmr_relI)
unfolding mop_pm_pop_min_alt
apply (intro ASSERT_refine_right)
apply (rule heapmap_nres_rel_prodI)
apply (rule hm_pop_min_op_refine[param_fo, THEN nres_relD]; assumption)
unfolding heapmap_rel_def hmr_rel_def in_br_conv
apply (refine_vcg; simp)
apply (refine_vcg hm_pop_min_\<alpha>_correct[THEN leof_trans]; simp split: prod.splits)
done
text_raw \<open>\paragraph{Second approach}\<close>
(* Alternative approach: Also use knowledge about result
in multiset domain. Obtaining property seems infeasible at first attempt! *)
definition "hm_kv_of_op hm i \<equiv> do {
ASSERT (hm_valid hm i \<and> hmr_invar hm);
k \<leftarrow> hm_key_of_op hm i;
v \<leftarrow> hm_the_lookup_op k hm;
RETURN (k, v)
}"
definition "kvi_rel hm i \<equiv> {((k,v),v) | k v. hm_key_of hm i = k}"
lemma hm_kv_op_refine[refine]:
assumes "(hm,h)\<in>hmr_rel"
shows "hm_kv_of_op hm i \<le> \<Down>(kvi_rel hm i) (h.val_of_op h i)"
unfolding hm_kv_of_op_def h.val_of_op_def kvi_rel_def
hm_key_of_op_unfold hm_the_lookup_op_def
apply simp
apply refine_vcg
using assms
apply (auto
simp: hm_valid_def hm_length_def hmr_rel_defs in_br_conv heapmap_\<alpha>_def hm_key_of_def
split: prod.splits)
by (meson contra_subsetD domIff not_None_eq nth_mem)
definition hm_pop_min_op' :: "('k,'v) ahm \<Rightarrow> (('k\<times>'v) \<times> ('k,'v) ahm) nres" where
"hm_pop_min_op' hm \<equiv> do {
ASSERT (heapmap_invar hm);
ASSERT (hm_valid hm 1);
kv \<leftarrow> hm_kv_of_op hm 1;
let l = hm_length hm;
hm \<leftarrow> hm_exch_op hm 1 l;
hm \<leftarrow> hm_butlast_op hm;
if (l\<noteq>1) then do {
hm \<leftarrow> hm_sink_op hm 1;
RETURN (kv,hm)
} else RETURN (kv,hm)
}"
lemma hm_pop_min_op_refine':
"\<lbrakk> (hm,h)\<in>hmr_rel \<rbrakk> \<Longrightarrow> hm_pop_min_op' hm \<le> \<Down>(kvi_rel hm 1 \<times>\<^sub>r hmr_rel) (h.pop_min_op h)"
unfolding hm_pop_min_op'_def h.pop_min_op_def
(* Project away stuff of second component *)
unfolding ignore_snd_refine_conv
(* Prove refinement *)
apply refine_rcg
unfolding hmr_rel_def heapmap_rel_def
apply (unfold heapmap_invar_def; simp add: in_br_conv)
apply (simp_all add: in_br_conv)
done
lemma heapmap_nres_rel_prodI':
assumes "hmx \<le> \<Down>(S \<times>\<^sub>r hmr_rel) h'x"
assumes "h'x \<le> SPEC \<Phi>"
assumes "\<And>h' r. \<Phi> (r,h') \<Longrightarrow> h.heap_invar h'"
assumes "hmx \<le>\<^sub>n SPEC (\<lambda>(r,hm'). (\<exists>r'. (r,r')\<in>S \<and> \<Phi> (r',hmr_\<alpha> hm')) \<and> hmr_invar hm' \<longrightarrow> RETURN (r,heapmap_\<alpha> hm') \<le> \<Down>(R\<times>\<^sub>rId) hx)"
shows "hmx \<le> \<Down>(R\<times>\<^sub>rheapmap_rel) hx"
using assms
unfolding heapmap_rel_def hmr_rel_def heapmap_invar_def
apply (auto
simp: pw_le_iff pw_leof_iff refine_pw_simps in_br_conv
)
by meson
lemma ex_in_kvi_rel_conv:
"(\<exists>r'. (r,r')\<in>kvi_rel hm i \<and> \<Phi> r') \<longleftrightarrow> (fst r = hm_key_of hm i \<and> \<Phi> (snd r))"
unfolding kvi_rel_def
apply (cases r)
apply auto
done
lemma hm_pop_min_aref': "(hm_pop_min_op', mop_pm_pop_min prio) \<in> heapmap_rel \<rightarrow> \<langle>(Id\<times>\<^sub>rId) \<times>\<^sub>r heapmap_rel\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
apply (frule heapmap_hmr_relI)
unfolding mop_pm_pop_min_alt
apply (intro ASSERT_refine_right)
apply (rule heapmap_nres_rel_prodI')
apply (erule hm_pop_min_op_refine')
apply (unfold heapmap_rel_def hmr_rel_def in_br_conv) []
apply (rule h.pop_min_op_correct)
apply simp
apply simp
apply simp
apply (clarsimp simp: ex_in_kvi_rel_conv split: prod.splits)
unfolding hm_pop_min_op'_def hm_kv_of_op_def hm_key_of_op_unfold
hm_the_lookup_op_def
apply (refine_vcg
hm_exch_op_\<alpha>_correct[THEN leof_trans]
hm_butlast_op_\<alpha>_correct[THEN leof_trans]
hm_sink_op_\<alpha>_correct[THEN leof_trans]
)
unfolding heapmap_rel_def hmr_rel_def in_br_conv
apply (auto intro: ranI)
done
subsubsection \<open>Remove\<close>
definition "hm_remove_op k hm \<equiv> do {
ASSERT (heapmap_invar hm);
ASSERT (k \<in> dom (heapmap_\<alpha> hm));
i \<leftarrow> hm_index_op hm k;
let l = hm_length hm;
hm \<leftarrow> hm_exch_op hm i l;
hm \<leftarrow> hm_butlast_op hm;
if i \<noteq> l then
hm_repair_op hm i
else
RETURN hm
}"
definition (in heapstruct) "remove_op i h \<equiv> do {
ASSERT (heap_invar h);
ASSERT (valid h i);
let l = length h;
h \<leftarrow> exch_op h i l;
h \<leftarrow> butlast_op h;
if i \<noteq> l then
repair_op h i
else
RETURN h
}"
lemma (in -) swap_empty_iff[iff]: "swap l i j = [] \<longleftrightarrow> l=[]"
by (auto simp: swap_def)
lemma (in heapstruct)
butlast_exch_last: "butlast (exch h i (length h)) = update (butlast h) i (last h)"
unfolding exch_def update_def
apply (cases h rule: rev_cases)
apply (auto simp: swap_def butlast_list_update)
done
lemma (in heapstruct) remove_op_invar:
"\<lbrakk> heap_invar h; valid h i \<rbrakk> \<Longrightarrow> remove_op i h \<le> SPEC heap_invar"
unfolding remove_op_def
apply refine_vcg
apply (auto simp: valid_def) []
apply (auto simp: valid_def exch_def) []
apply (simp add: butlast_exch_last)
apply refine_vcg
apply auto []
apply auto []
apply (auto simp: valid_def) []
apply auto []
apply auto []
done
lemma hm_remove_op_refine[refine]:
"\<lbrakk> (hm,m)\<in>heapmap_rel; (hm,h)\<in>hmr_rel; heapmap_\<alpha> hm k \<noteq> None\<rbrakk> \<Longrightarrow>
hm_remove_op k hm \<le> \<Down>hmr_rel (h.remove_op (hm_index hm k) h)"
unfolding hm_remove_op_def h.remove_op_def heapmap_rel_def
(*apply (rewrite at "Let (hm_index hm k) _" Let_def)*)
apply (refine_rcg index_op_inline_refine)
unfolding hmr_rel_def
apply (auto simp: in_br_conv)
done
lemma hm_remove_op_aref:
"(hm_remove_op,mop_map_delete_ex) \<in> Id \<rightarrow> heapmap_rel \<rightarrow> \<langle>heapmap_rel\<rangle>nres_rel"
apply (intro fun_relI nres_relI)
unfolding mop_map_delete_ex_alt
apply (rule ASSERT_refine_right)
apply (frule heapmap_hmr_relI)
apply (rule heapmap_nres_relI')
apply (rule hm_remove_op_refine; assumption?)
apply (unfold heapmap_rel_def in_br_conv; auto)
unfolding heapmap_rel_def hmr_rel_def in_br_conv
apply (refine_vcg h.remove_op_invar; clarsimp; fail)
apply (refine_vcg hm_remove_op_\<alpha>_correct[THEN leof_trans]; simp; fail)
done
subsubsection \<open>Peek-Min\<close>
definition hm_peek_min_op :: "('k,'v) ahm \<Rightarrow> ('k\<times>'v) nres" where
"hm_peek_min_op hm \<equiv> hm_kv_of_op hm 1"
lemma hm_peek_min_op_aref:
"(hm_peek_min_op, PR_CONST (mop_pm_peek_min prio)) \<in> heapmap_rel \<rightarrow> \<langle>Id\<times>\<^sub>rId\<rangle>nres_rel"
unfolding PR_CONST_def
apply (intro fun_relI nres_relI)
proof -
fix hm and m :: "'k \<rightharpoonup> 'v"
assume A: "(hm,m)\<in>heapmap_rel"
from A have [simp]: "h.heap_invar (hmr_\<alpha> hm)" "hmr_invar hm" "m=heapmap_\<alpha> hm"
unfolding heapmap_rel_def in_br_conv heapmap_invar_def
by simp_all
have "hm_peek_min_op hm \<le> \<Down> (kvi_rel hm 1) (h.peek_min_op (hmr_\<alpha> hm))"
unfolding hm_peek_min_op_def h.peek_min_op_def
apply (refine_rcg hm_kv_op_refine)
using A
apply (simp add: heapmap_hmr_relI)
done
also have "\<lbrakk>hmr_\<alpha> hm \<noteq> []\<rbrakk> \<Longrightarrow> (h.peek_min_op (hmr_\<alpha> hm))
\<le> SPEC (\<lambda>v. v\<in>ran (heapmap_\<alpha> hm) \<and> (\<forall>v'\<in>ran (heapmap_\<alpha> hm). prio v \<le> prio v'))"
apply refine_vcg
by simp_all
finally show "hm_peek_min_op hm \<le> \<Down> (Id \<times>\<^sub>r Id) (mop_pm_peek_min prio m)"
unfolding mop_pm_peek_min_alt
apply (simp add: pw_le_iff refine_pw_simps hm_peek_min_op_def hm_kv_of_op_def
hm_key_of_op_unfold hm_the_lookup_op_def)
apply (fastforce simp: kvi_rel_def ran_def)
done
qed
end
end
|
(*
Copyright (C) 2017 M.A.L. Marques
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*)
(* type: mgga_exc *)
(* prefix:
mgga_c_m08_params *params;
assert(p->params != NULL);
params = (mgga_c_m08_params * )(p->params);
*)
$define lda_c_pw_params
$define lda_c_pw_modified_params
$include "lda_c_pw.mpl"
$define gga_c_pbe_params
$include "gga_c_pbe.mpl"
(* the prefactor of t was chosen to get the right K_FACTOR_C in mgga_series_w *)
m08_f := (rs, z, xt, xs0, xs1, ts0, ts1) ->
+ mgga_series_w(params_a_m08_a, 12, 2^(2/3)*t_total(z, ts0, ts1))
* f_pw(rs, z)
+ mgga_series_w(params_a_m08_b, 12, 2^(2/3)*t_total(z, ts0, ts1))
* (f_pbe(rs, z, xt, xs0, xs1) - f_pw(rs, z)):
f := (rs, z, xt, xs0, xs1, us0, us1, ts0, ts1) ->
m08_f(rs, z, xt, xs0, xs1, ts0, ts1):
|
# Copyright (c) 2018-2021, Carnegie Mellon University
# See LICENSE for details
Declare(CodeBlock, BRAMPermStreamOne, TPrmMulti);
Class(TPrmMulti, Tagged_tSPL_Container, rec(
abbrevs := [ (tspl_list, which) -> [tspl_list, which] ],
dims := self >> Cond(IsBound(self.params[1][1].domain), [self.params[1][1].domain(), self.params[1][1].range()], self.params[1][1].dims()),
terminate := meth(self)
local prms, res, len, p, i;
prms := self.params[1];
len := Length(prms);
res := prms[len];
for i in Reversed([1..len-1]) do
p := Cond(IsBound(prms[i].terminate), prms[i].terminate(), Prm(prms[i]));
res := COND(eq(self.params[2], (i-1)), p, res);
od;
return res.terminate();
end,
transpose := self >> TPrmMulti(List(self.params[1], i->i.transpose()), self.params[2]).withTags(self.getTags()),
isReal := True,
doNotMeasure := true,
noCodelet := true,
normalizedArithCost := self >> 0
));
#F PermutationModMatrix ( <perm>, <n>, <m> )
#F constructs for the permutation <perm> on <n> points an
#F <m> x <m> matrix M. The (i,j) entry of M contains the number
#F of pairs (k, k^<perm>) with k mod m = i and k^<perm> mod m = j.
#F <m> has to divide <n>.
#F We view <perm> as a permutation of the points {0,.., n-1} even
#F though in gap it is represented as permutation on {1,..,n}
#F (i.e. shifted by 1).
#F
PermutationModMatrix := function ( p, n, m)
local M, i;
# error checking
if not IsPerm(p) then
Error("<p> is not a permutation");
fi;
if not ( IsInt(n) and IsInt(m) ) then
Error("<n> and <m> must be integers");
fi;
if not n mod m = 0 then
Error("<m> must divide <n>");
fi;
if not ( p = () or LargestMovedPointPerm(p) <= n ) then
Error("<n> is not a valid degree for <perm>");
fi;
# initialize m x m matrix M
M := List([1..m], r -> List([1..m], c -> 0));
# construct M
for i in [1..n] do
M[(i-1) mod m + 1][(i^p-1) mod m + 1] :=
M[(i-1) mod m + 1][(i^p-1) mod m + 1] + 1;
od;
return M;
end;
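#F Illustrative example (not from the original comments):
#F PermutationModMatrix((1,2,3,4), 4, 2) = [[0,2],[2,0]],
#F since the 4-cycle maps every even 0-based point to an odd one and vice versa.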
## ============================================================
##
## PermMatrixRowVals(p) takes permutation matrix p and returns a list
## of the '1' location in each row of the matrix.
##
## Thus, the value a in list element b means that the input in
## location b will be permuted to location a in the output.
##
## ============================================================
PermMatrixRowVals := meth(x)
return List(Prm(FPerm(x)).func.lambda().tolist(), i->i.ev());
end;
## ============================================================
##
## PermMatrixRowValsOld(p) takes permutation matrix p and returns a list
## of the '1' location in each row of the matrix.
##
## Thus, the value a in list element b means that the input in
## location b will be permuted to location a in the output.
##
## ============================================================
PermMatrixRowValsOld := meth(x)
local length, i, res;
res := [];
length := Length(x);
for i in [1 .. length] do
res[i] := BasisVecToNum(x[i]);
od;
return res;
end;
## ============================================================
##
## min_list(l) returns the smallest value in l, a list of lists.
##
## ============================================================
min_list := meth(l)
local t, i;
t := [];
for i in [1 .. Length(l)] do
t[i] := Minimum(l[i]);
od;
return Minimum(t);
end;
## ============================================================
##
## max_list(l) returns the largest value in l, a list of lists.
##
## ============================================================
max_list := meth(l)
local t, i;
t := [];
for i in [1 .. Length(l)] do
t[i] := Maximum(l[i]);
od;
return Maximum(t);
end;
## ============================================================
##
## RandomPerm(n) returns a random permutation on n points.
##
## RandomPermSPL(n) is not used because 1/3 of the time it
## returns a stride permutation, and 1/3 of the time it
## returns J(n).
##
## ============================================================
RandomPerm := meth(n)
return Perm(Random(SymmetricGroup(n)), n);
end;
## ============================================================
##
## IsPermMult(p) returns 1 if matrix p is a permutation or a
## constant multiple of a permutation, and 0 otherwise.
##
## ============================================================
IsPermMult := meth(p)
local n, i, j, total;
n := Length(p);
for i in [1..n] do
total := 0;
for j in [1..n] do
if (p[i][j] <> 0) then
total := total+1;
if (total > 1) then
return 0;
fi;
fi;
od;
od;
return 1;
end;
## ============================================================
## OnesVector(n)
## Returns a vector of 1s of length n
## ============================================================
OnesVector := meth(n)
local res, i;
res := [];
for i in [1..n] do
res[i] := 1;
od;
return res;
end;
## ============================================================
## Given the mux addresses, i.e., the multiplexor settings
## that say "given an output location, where does it get its
## input from on each cycle," determine the inverse, i.e.,
## "given an input location, determine where its output will
## go for each cycle.
##
## This function is intended as a helper function, used in
## StreamPermGen.createCode().
## ============================================================
PermHelperFindPortAddresses := meth(muxaddr)
local ports, i, row, j, rowres;
ports := [];
for i in [1..Length(muxaddr)] do
rowres := [];
row := muxaddr[i];
for j in [1..Length(row)] do
rowres[row[j]+1] := j-1;
od;
Append(ports, [rowres]);
od;
return ports;
end;
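# Example (illustrative, not part of the original source): if on some
# cycle muxaddr = [ [2,0,1] ], i.e., output port 0 reads from input
# port 2, port 1 from port 0, and port 2 from port 1, then
#
#   gap> PermHelperFindPortAddresses( [ [2,0,1] ] );
#   [ [ 1, 2, 0 ] ]
#
# i.e., input port 0 feeds output port 1, input 1 feeds output 2, and
# input 2 feeds output 0.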
_optsInColumn := meth(col, rows_left)
local res, i, size;
size := Length(col);
res := [];
for i in [1..size] do
if ((rows_left[i] = 1) and (col[i] > 0)) then
Append(res, [i]);
fi;
od;
return res;
end;
# Used in debugging.
#_findPermBacktracks := 0;
try_column := meth(whichCol, board, rows_left)
local size, col, options, i, spec_rows_left, tres;
size := Length(board[1]);
col := TransposedMat(board)[whichCol];
options := _optsInColumn(col, rows_left);
for i in options do
if (whichCol = size) then
return [i];
fi;
rows_left[i] := 0;
tres := try_column(whichCol+1, board, rows_left);
if (tres <> [-1]) then
Append(tres, [i]); # backwards
return tres;
fi;
rows_left[i] := 1;
od;
# Debugging
#_findPermBacktracks := _findPermBacktracks+1;
return [-1];
end;
_randomPermGood := meth(board)
local rows_left, permVec, size, permMat, col, row, choices, failed, choice, i;
size := Length(board[1]);
rows_left := List([1..size], i->1);
permVec := [];
failed := false;
col := 1;
while ((col <= size) and (not failed)) do
# Collect list of choices in this column.
choices := [];
for row in [1..size] do
if ((rows_left[row] <> 0) and (board[row][col] > 0)) then
Append(choices, [row]);
fi;
od;
if (Length(choices) = 0) then
failed := true;
else
choice := Random(choices);
rows_left[choice] := 0;
Append(permVec, [choice]);
fi;
col := col + 1;
od;
if (failed) then
# If we fail, then we can assume that the square contains at least one 0 entry. So,
# we can send back the all-ones matrix, and be guaranteed that P - [1] < 0 for some entry.
return(List([1..size], r -> List([1..size], c -> 1)));
else
permMat := [];
for i in [1..size] do
permMat[i] := BasisVec(size, permVec[i]-1);
od;
permMat := TransposedMat(permMat);
return permMat;
fi;
end;
_findPerm := meth(board)
local rows_left, permVec, size, permMat, i;
size := Length(board[1]);
rows_left := List([1..size], i->1);
permVec := Reversed(try_column(1, board, rows_left));
permMat := [];
for i in [1..size] do
permMat[i] := BasisVec(size, permVec[i]-1);
od;
permMat := TransposedMat(permMat);
return permMat;
end;
_minisat_path := "";
_findPermUsingSat := meth(board)
local size, r, c, zeros, dir, i, zerofile, outfile, res, permmat;
size := Length(board[1]);
zeros := [];
for r in [1..size] do
for c in [1..size] do
if (board[r][c] = 0) then
Append(zeros, [ (r-1)*size + c ]);
fi;
od;
od;
dir := Concat("/tmp/spiral/", String(GetPid()));
MakeDir(dir);
zerofile := Concat(dir, "/zeros");
outfile := Concat(dir, "/out");
PrintTo(zerofile, "");
for i in [1..Length(zeros)] do
AppendTo(zerofile, zeros[i], "\n");
od;
if (_minisat_path = "") then
Error("Error: path to SAT solver not set: paradigms.stream._minisat_path is not bound.");
fi;
# IntExec(Concat("/Users/pam/minisat/core/minisat ", String(size), " ", zerofile, " ", outfile));
IntExec(Concat(_minisat_path, " ", String(size), " ", zerofile, " ", outfile));
res := ReadVal(outfile);
permmat := [];
for i in [1..size] do
permmat[i] := BasisVec(size, res[i]);
od;
return permmat;
end;
_SumList := meth(m)
local res, i;
res := 0;
for i in [1..Length(m)] do
res := res + m[i];
od;
return res;
end;
_whichMeth := 1;
Declare(routePerm);
## ============================================================
## listofperms = FactorSemiMagicSquare(square)
##
## Given a semi-magic square of size n, find a (non-unique)
## decomposition into a sum of permutations. Returns the
## decomposition as a list of permutations (with each perm
## being represented in the "row value" form).
##
## Each semi-magic square can be decomposed into at most
## (n-1)^2 + 1 different permutations. See:
## Leep et al. Marriage, Magic, and Solitaire. The American
## Mathematical Monthly (1999) vol. 106 (5) pp. 419-429.
##
## This method uses randomly generated permutations. A better
## method would construct them based upon the square. A more
## realistic approach would be a hybrid of the two.
##
## This function is intended as a helper function, used in
## StreamPermGen.createCode().
## ============================================================
FactorSemiMagicSquare := meth(square)
local length, multplr, rndprm, j, res, rows_left, cols_left, startTime, endTime, tries, which, ct;
length := Length(square);
res := [];
# Debugging
startTime := TimeInSecs();
tries := 0;
ct := 0;
which := stream._whichMeth;
## While the semi-magic square is not all zero...
while (max_list(square) > 0) do
## What is left? Is it a multiple of a permutation matrix?
## (This step not needed for correctness, but is a performance
## optimization.)
if (IsPermMult(square) = 1) then
## If what is left is a constant times a perm matrix,
## set that perm as the random step, and complete the
## final step.
rndprm := square / max_list(square);
multplr := 0;
while (min_list(square - multplr*rndprm) >= 0) do
multplr := multplr+1;
od;
multplr := multplr-1;
square := square - multplr*rndprm;
for j in [1..multplr] do
#! fails sometimes Append(res, [routePerm(PermMatrixRowValsOld(rndprm))]);
Append(res, [PermMatrixRowValsOld(rndprm)]);
od;
ct := ct + 1;
else
## Generate a random permutation.
# set 'which' as: paradigms.stream.whichMeth := 0;
# rndprm := _randomPermGood(square);
tries := tries+1;
# if (which = 0) then
# rndprm := MatSPL(RandomPerm(length));
# #tries := tries + 1;
# else
# if (which = 1) then
# rndprm := _findPerm(square);
# else
rndprm := _findPermUsingSat(square);
# fi;
# fi;
fi;
## How many times can this permutation be
## subtracted from the square?
multplr := 0;
while (min_list(square - multplr*rndprm) >= 0) do
multplr := multplr+1;
od;
multplr := multplr-1;
## If this permutation has been subtracted from the square,
## record the decision, and check if we are done.
if (multplr > 0) then
## Subtract the permutation the correct number of times.
square := square - multplr*rndprm;
## Record the decision
for j in [1..multplr] do
Append(res, [PermMatrixRowValsOld(rndprm)]);
od;
endTime := TimeInSecs();
# Print(which, ", ", (endTime - startTime), ", ", tries, "\n");
startTime := endTime;
tries := 0;
ct := ct + 1;
fi;
od;
return res;
end;
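# Example (illustrative, not part of the original source): the
# semi-magic square [ [2,1], [1,2] ] (all row and column sums equal 3)
# decomposes as 2*I(2) + 1*J(2), so one possible result, in row-value
# form, is [ [0,1], [0,1], [1,0] ]; the decomposition is not unique.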
## ============================================================
## row = PermHelperFindRow(permrows, x (readdport),
## y (writeport), streaming-width)
##
## Given a permutation as a set of row values, and a selected
## read and write port (x, y), find which permutation word can
## be scheduled that will lead to a read from port x to a
## write in port y.
##
## As the algorithm progresses, words that have been scheduled
## are removed from permrows and replaced with -1.
##
## This function is intended as a helper function, used in
## StreamPermGen.createCode().
## ============================================================
PermHelperFindRow := meth(p, x, y, w)
local a, b;
for b in [1 .. Length(p)] do
a := p[b];
# The a <> -1 isn't needed for correctness but it makes the
# code more clear.
if ((a <> -1) and (Mod(a,w) = x) and (Mod(b-1,w) = y)) then
return b;
fi;
od;
Error("Cannot find a word to schedule.");
end;
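# Example (illustrative, not part of the original source): with
# permrows = [2,0,3,1] and w = 2, PermHelperFindRow(permrows, 0, 1, 2)
# returns b = 2, since word b = 2 holds a = permrows[2] = 0, which is
# read from port a mod w = 0 and written to port (b-1) mod w = 1.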
## ============================================================
## [rd_addr, wr_addr] = PermHelperFindAddresses(permrows,
## muxaddr, w);
##
## Given a permutation as a set of row values and a cycle-by-
## cycle read/write port mapping (muxaddr), and streaming width
## w, determine the associated memory read and write addresses.
##
## This function is intended as a helper function, used in
## StreamPermGen.createCode().
## ============================================================
PermHelperFindAddresses := meth(permrows, muxaddr, w)
local rd_addr, wr_addr, i, x, y, b, a, rdcyc, wrcyc, wrcyc2, ports;
ports := PermHelperFindPortAddresses(muxaddr);
rd_addr := []; wr_addr := [];
for i in [1..Length(permrows)/w] do
rdcyc := []; wrcyc := []; wrcyc2 := [];
for x in [1..w] do
y := ports[i][x]+1;
b := PermHelperFindRow(permrows, x-1, y-1, w);
a := permrows[b];
Append(rdcyc, [floor(a/w).v]);
Append(wrcyc, [floor((b-1)/w).v]);
Append(wrcyc2, []);
permrows[b] := -1;
od;
Append(rd_addr, [rdcyc]);
# Here, wrcyc tells the write address for each *read port* at this
# time. I need to permute these values so it tells the write address
# for each *write port* at this time.
# So, I just need to assign wrcyc to wrcyc2 based upon this mapping.
for x in [1..w] do
wrcyc2[x] := wrcyc[muxaddr[i][x]+1];
od;
Append(wr_addr, [wrcyc2]);
od;
return [rd_addr, wr_addr];
end;
## ==================================================================
## BRAMPermGeneral([rdaddr], [swnetctrl], [wraddr], [perm], it)
## where rdaddr is the set of read addresses, swnetctrl is the
## control bits for the \Omega^{-1}\Omega network, wraddr is the
## set of write addresses, and perm is the permutation as an SPL
## object. it is the iteration variable that specifies which perm
## to perform.
##
## This represents a streaming multi-permutation structure that has
## been factored using our "general" method.
##
## This is not ideal for permutations that are linear on the bits.
##
## Do not attempt to build this object by hand. Instead use
## StreamPerm(perm, streamsize) to build a generic streaming
## perm. Then, StreamPerm.createCode() will construct the
## BRAMPermGeneral given the permutation and streaming width.
## ==================================================================
Class(BRAMPermGeneral, BaseMat, SumsBase, rec(
abbrevs := [(r,m,w,p,it)-> [r,m,w,p,it]],
new := (self, r, m, w, p, it) >> SPL(WithBases(self, rec(
_children := [r, m, w, p, it],
dimensions := [ Length(r[1])*Length(r[1][1]), Length(r[1])*Length(r[1][1])],
streamSize := Length(r[1][1]),
))),
rChildren := self >> self._children,
rSetChild := meth(self, n, what) self._children[n] := what; end,
child := (self, n) >> self._children[n],
children := self >> self._children,
createCode := self >> self,
print := (self,i,is) >> Print(self.name, "(", self.dims()[1],
", ", self.streamSize, ", ", self.child(1), ", ", self.child(2),
", ", self.child(3), ", ", self.child(5), ")"),
toAMat := meth(self)
local prms, len, it, i, res;
prms := self.child(4);
len := Length(prms);
it := self.child(5);
res := prms[len];
for i in Reversed([1..len-1]) do
res := COND(eq(it, i-1), prms[i], res);
od;
return res.toAMat();
end,
dims := self >> [Length(self._children[1][1])*Length(self._children[1][1][1]),
Length(self._children[1][1])*Length(self._children[1][1][1])],
sums := self >> self
));
PadToTwoPowSize := meth(muxaddr)
local n, n2p, i, res, dif, line;
n := Length(muxaddr[1]);
if (2^Log2Int(n) = n) then
return muxaddr;
fi;
n2p := 2^(Log2Int(n)+1);
dif := n2p-n;
res := [];
for i in [1..Length(muxaddr)] do
line := Concatenation(muxaddr[i], [n..n2p-1]);
Append(res, [line]);
od;
return res;
end;
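# Example (illustrative, not part of the original source): rows of
# length 3 are padded up to the next power of two (4) by appending the
# fixed points [n..n2p-1] = [3]:
#
#   gap> PadToTwoPowSize( [ [2,0,1] ] );
#   [ [ 2, 0, 1, 3 ] ]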
RandomVectorVector := meth(domain, inner, outer)
local res, i, j;
res := [];
for i in [1..outer] do
res[i] := [];
for j in [1..inner] do
res[i][j] := RandomList([0..domain-1]);
od;
od;
return res;
end;
## ======================================================
## A container for a streaming multi-permutation structure
##
## StreamPerm([<SPL>], <m>, <w>, <i>)
##
## where the permutations performed are I(<m>) x <SPL>, [<SPL>]
## is a list of the permutations as SPL objects, <w> is the
## streaming width, and <i> selects between the <SPL>.
## ======================================================
Class(StreamPerm, BaseMat, SumsBase, rec(
abbrevs := [(s, m, w, i) -> [s, m, w, i]],
new := (self, prm, m, ss, i) >> SPL(WithBases(self, rec(
_children := [prm, m, ss, i],
streamSize := ss,
par := m,
it := i,
dimensions := Cond(IsBound(prm[1].domain), [m * prm[1].domain(), m * prm[1].range()], m * prm[1].dims()),
))),
rChildren := self >> self._children,
rSetChild := meth(self, n, what) self._children[n] := what; end,
child := (self, n) >> self._children[n],
children := self >> self._children,
createCode := meth(self)
local lin, prm, w, P, length, permobj, permrows, modmatrix, muxaddr, muxaddr2, ports, addrs,
rdaddr, wraddr, network, res, p3, p1, r, k, M, N, x, i, j, l, K, size, count, ptrans,
z, G, q1, q3, q3b, p1b, dm, plist, bitlist, prms, Pt, thisPerm, thisPrm, prmCode, it,
prmTens, dm_no_tens, prm_no_tens;
prm := List(self.child(1), p->Tensor(I(self.child(2)), p));
prm_no_tens := self.child(1);
w := self.child(3);
it := self.child(4);
P := [];
# If the permutation is a composition of multiple perms, it is more efficient to
# compute the bit representation of each individually, and then combine them.
for i in [1..Length(self.child(1))] do
thisPerm := self.child(1)[i];
thisPrm := prm[i];
if (ObjId(thisPrm) = L) then
thisPrm := TL(thisPerm.params[1], thisPerm.params[2]);
fi;
if (ObjId(thisPrm) = Tensor and ObjId(thisPrm.child(1)) = I and ObjId(thisPrm.child(2)) = L) then
thisPrm := TL(thisPrm.child(2).params[1], thisPrm.child(2).params[2], thisPrm.child(1).params[1], 1);
fi;
if (ObjId(thisPerm) = Compose) then
plist := thisPerm.children();
# If we just have an L, turn it into a TL so we can use its .permBits() function
plist := List(plist, i -> Cond(ObjId(i) = L, TL(i.params[1], i.params[2]), i));
# If we have I x L, turn it into TL
plist := List(plist, i -> Cond(ObjId(i) = Tensor and ObjId(i.child(1)) = I and ObjId(i.child(2)) = L, TL(i.child(2).params[1], i.child(2).params[2], i.child(1).params[1], 1), i));
bitlist := List(plist, i ->
Cond(IsBound(i.permBits),
i.permBits(),
Cond(ObjId(i) = Tensor and ObjId(i.child(1)) = I and IsBound(i.child(2).permBits),
DirectSumMat(MatSPL(I(Log2Int(i.child(1).params[1])))*GF(2).one, i.child(2).permBits()),
PermMatrixToBits(MatSPL(i))
)
)
);
if (-1 in bitlist) then
Pt := -1;
else
Pt := Product(bitlist);
if (self.child(2) > 1) then
Pt := DirectSumMat(MatSPL(I(Log2Int(self.child(2))))*GF(2).one, Pt);
fi;
fi;
else
Pt := Cond(IsBound(thisPrm.permBits),
# If P.permBits defined, use it
thisPrm.permBits(),
# If P = Tensor(I, Q) and Q.permBits defined, use it.
Cond(ObjId(thisPrm) = Tensor and ObjId(thisPrm.child(1)) = I and IsBound(thisPrm.child(2).permBits),
DirectSumMat(MatSPL(I(Log2Int(thisPrm.child(1).params[1])))*GF(2).one, thisPrm.child(2).permBits()),
# Otherise, use PermMatrixToBits function (slow)
PermMatrixToBits(MatSPL(thisPrm))
)
);
fi;
Append(P, [Pt]);
od;
# So, make a list of Ps, one for each perm, then set lin only if all are <> -1.
lin := Cond(ForAny(P, i->i=-1) or 2^Log2Int(w) <> w or Length(prm) > 1, 0, 1);
# Find the overall dimensions and the dimension of the "non-tensor" part
dm := Cond(IsBound(prm[1].domain), prm[1].domain(), prm[1].dims()[1]);
# PM: This is not very robust.
dm_no_tens := Cond(IsBound(self.child(1)[1].domain), self.child(1)[1].domain(), self.child(1)[1].dims()[1]);
# If the permutations are on w words, then we can generate code.
if (self.child(3) = dm_no_tens) then
## If this perm is multiple permutation functions composed, separate them to make the
## code generation work as expected.
prmCode := List(prm_no_tens, j ->
Cond(ObjId(j) = Compose,
Compose(List(j.children(), i->Prm(FPerm(i)))),
Prm(FPerm(j))
)
);
res := prmCode[Length(prmCode)];
for i in Reversed([1..Length(prmCode)-1]) do
res := COND(eq(it, (i-1)), prmCode[i], res);
od;
if (self.child(2) > 1) then
return STensor(CodeBlock(res).createCode(), self.child(2), self.child(3));
fi;
return CodeBlock(res).createCode();
fi;
# If our perm is not linear on the bits, or if we have multiple permutations, or
# if we have manually overridden this to avoid the patented permutations:
if ((lin = 0) or (stream.avoidPermPatent = 1)) then
# We want to ignore the tensor product until the end
prm := self.child(1);
prmTens := List(self.child(1), p->Tensor(I(self.child(2)), p));
it := self.child(4);
if (self.child(3) = 1) then
return BRAMPermStreamOne(
# List(prmTens, i->PermMatrixRowValsOld(MatSPL(i))),
List(prmTens, i->PermMatrixRowVals(i)),
prmTens,
it);
fi;
length := self._children[1][1].dims()[1];
permobj := List(self._children[1], i -> PermSPL(i));
permrows := List(permobj, i -> PermMatrixRowValsOld(MatPerm(i, length)));
modmatrix := List(permobj, i -> PermutationModMatrix(i, length, self.child(3)));
muxaddr := List(modmatrix, i -> FactorSemiMagicSquare(i));
muxaddr2 := List(muxaddr, i -> PadToTwoPowSize(i));
network := List(muxaddr2, j -> List([1..Length(j)], i -> routePerm(j[i])));
addrs := List([1..Length(permobj)], i -> PermHelperFindAddresses(permrows[i], muxaddr[i], self.child(3)));
rdaddr := List(addrs, i -> i[1]);
wraddr := List(addrs, i -> i[2]);
if (self.child(2) > 1) then
return STensor(BRAMPermGeneral(rdaddr, network, wraddr, self._children[1], it), self.child(2), w);
else
return BRAMPermGeneral(rdaddr, network, wraddr, self._children[1], it);
fi;
else
# If we are here, we only have one permutation
P := P[1];
k := Log2Int(self.child(3));
size := Log2Int(self.dimensions[1]);
# If our streaming width is 1, then we are done. P = I*P.
if (self.child(3) = 1) then
return BRAMPerm(MatSPL(I(size))*GF(2).one, P, 1);
fi;
# if P is the bit form of I tensor P', we can remove the top-left corner bits
# (up to (size-k) bits) and replace them with a streaming tensor product of a
# smaller permutation.
count := GetTopLeftOnes(P,k);
P := P{[count+1..Length(P)]}{[count+1..Length(P)]};
size := size-count;
p3 := extractP3(P, k);
p1 := extractP1(P, k);
r := Rank(p1);
if (r = k) then
M := P;
N := MatSPL(I(Length(M))) * GF(2).one;
else # This is algorithm 5.2 of J. ACM 2009 paper.
# Is P a permutation matrix?
if (CheckPerm(P) = 1) then
j := NonZeroRows(p3);
i := ZeroRows(p1);
else
p1b := TransposedMat(Copy(p1));
G := MyTriangulizeMat(p1b);
G := TransposedMat(G);
G := G * GF(2).one;
q1 := p1 * G;
q3 := p3 * G;
z := LinearIndependentColumns(TransposedMat(q1));
i := [];
for l in [1 .. k] do
if l in z then
;
else
Add(i, l);
fi;
od;
q3b := [];
for l in [1 .. Length(q3)] do
Append(q3b, [q3[l]{[r+1..k]}]);
od;
j := LinearIndependentColumns(TransposedMat(q3b));
fi;
K := buildH2(i, j, size, k);
N := buildH(size, k, K) * GF(2).one;
M := N*P;
fi;
if (Maximum(Maximum(N * M - P)) <> 0 * GF(2).one) then
Print("ERROR: Could not factor permutation matrix.\n");
fi;
if (count = 0) then
return BRAMPerm(N, M, self.child(3));
else
return STensor(BRAMPerm(N,M,self.child(3)), 2^count, self.child(3));
fi;
fi;
end,
print := (self,i,is) >> Print(self.name, "(", self._children[1], ", ", self.child(2), ", ", self.child(3), ", ", self._children[4], ")"),
toAMat := meth(self)
local prms, len, it, i, res;
prms := self._children[1];
len := Length(prms);
it := self._children[4];
res := Tensor(I(self.child(2)), prms[len]);
for i in Reversed([1..len-1]) do
res := COND(eq(it, (i-1)), Tensor(I(self.child(2)), prms[i]), res);
od;
return res.toAMat();
end,
dims := self >> self.child(2) * self._children[1][1].dims(),
sums := self >> self
));
## ======================================================
## A container for a streaming RC(permutation)
##
## RCStreamPerm([<SPL>], <m>, <w>, <i>)
##
## where the permutations performed are I(<m>) x <SPL> x I(2),
## [<SPL>] is a list of the permutations as SPL objects, <w> is
## the streaming width, and <i> selects which of the perms.
## ======================================================
Class(RCStreamPerm, BaseMat, SumsBase, rec(
abbrevs := [(s, m, w, i) -> [s, m, w, i]],
new := (self, prm, m, ss, i) >> SPL(WithBases(self, rec(
_children := [prm, m, ss, i],
streamSize := ss,
par := m,
it := i,
dimensions := Cond(IsBound(prm[1].domain), [2 * m * prm[1].domain(), 2 * m * prm[1].range()], 2 * m * prm[1].dims()),
))),
rChildren := self >> self._children,
rSetChild := meth(self, n, what) self._children[n] := what; end,
child := (self, n) >> self._children[n],
children := self >> self._children,
createCode := meth(self)
local r;
r := StreamPerm(self.child(1), self.par, self.streamSize, self.it).createCode();
if (ObjId(r) = STensor) then
return STensor(RC(r.child(1)), r.p, r.bs*2);
else
return RC(r);
fi;
end,
print := (self,i,is) >> Print(self.name, "(", self._children[1], ", ", self.par, ", ", self.streamSize, ", ", self._children[4], ")"),
toAMat := self >> RC(StreamPerm(self._children[1], self.par, self.streamSize, self.it)).toAMat(),
dims := self >> 2 * self.par * self._children[1][1].dims(),
sums := self >> self
));
## ======================================================
## A container for a general streaming permutation (not
## necessarily linear on the bits)
##
## StreamPermGen(<SPL>, <q>)
##
## where <SPL> is the permutation as an SPL object, and
## <q> is the streaming width.
## ======================================================
## NB: Deprecated. Use StreamPerm, which covers all cases
# Class(StreamPermGen, BaseMat, SumsBase, rec(
# abbrevs := [(s, q) -> [s, q]],
# new := (self, prm, ss) >> SPL(WithBases(self, rec(
# _children := [prm],
# streamSize := ss,
# dimensions := Cond(IsBound(prm.domain), [prm.domain(), prm.range()], prm.dims()),
# ))),
# rChildren := self >> self._children,
# rSetChild := meth(self, n, what) self._children[n] := what; end,
# child := (self, n) >> self._children[1],
# children := self >> self._children,
# createCode := meth(self)
# local length, permobj, permrows, modmatrix, muxaddr, muxaddr2, ports,
# addrs, rdaddr, wraddr, network, dms, prms;
# dms := Cond(IsBound(self.child(1).domain), [self.child(1).domain(), self.child(1).range()], self.child(1).dims());
# if (self.streamSize = 1) then
# return BRAMPermStreamOne(
# [PermMatrixRowVals(MatSPL(self._children[1]))],
# [self._children[1]], 0);
# fi;
# if (self.streamSize = dms[1]) then
# ## If this perm is multiple permutation functions composed, separate them to make the
# ## code generation work as expected.
# if (ObjId(self.child(1)) = Compose) then
# prms := Compose(List(self.child(1).children(), i->Prm(FPerm(i))));
# else
# prms := Prm(FPerm(self.child(1)));
# fi;
# return CodeBlock(prms).createCode();
# fi;
# length := self._children[1].dims()[1];
# permobj := PermSPL(self._children[1]);
# permrows := PermMatrixRowVals(MatPerm(permobj, length));
# modmatrix := PermutationModMatrix(permobj, length, self.streamSize);
# muxaddr := FactorSemiMagicSquare(modmatrix);
# muxaddr2 := PadToTwoPowSize(muxaddr);
# # Print("muxaddr = ", muxaddr, "\nmuxaddr2 = ", muxaddr2);
# network := List([1..Length(muxaddr2)], i -> routePerm(muxaddr2[i]));
# addrs := PermHelperFindAddresses(permrows, muxaddr,
# self.streamSize);
# rdaddr := addrs[1];
# wraddr := addrs[2];
# #! rdaddr := RandomVectorVector(length/self.streamSize, self.streamSize, length/self.streamSize);
# #! wraddr := RandomVectorVector(length/self.streamSize, self.streamSize, length/self.streamSize);
# #! muxaddr := RandomVectorVector(self.streamSize, self.streamSize, length/self.streamSize);
# # Previously, we used 'muxaddr.'
# # return BRAMPermGeneral(rdaddr, muxaddr, wraddr, self._children[1]);
# # Now I am switching to the omega^-1 omega network control bits
# return BRAMPermGeneral(rdaddr, network, wraddr, self._children[1]);
# end,
# print := (self,i,is) >> Print(self.name, "(", self._children[1], ", ",
# self.streamSize, ")"),
# toAMat := self >> self._children[1].toAMat(),
# dims := self >> self._children[1].dims(),
# sums := self >> self
# ));
## ======================================================
## A container for a streaming multi-permutation structure
##
## StreamPermGen(<l>, <i>, <q>)
##
## where <l> is a list of permutations as SPL objects,
## <i> is a variable that selects which permutation to
## perform, and <q> is the streaming width.
## ======================================================
## Deprecated. Use StreamPerm, which supports multiple perms (or will soon!)
## !!!!!!!!!!! IMPORTANT !!!!!!!!
# Do not delete this code until I have successfully rolled its multiple perm
# support into StreamPermGen.
## !!!!!!!!!!! IMPORTANT !!!!!!!!
# Class(StreamPermGenMult, BaseMat, SumsBase, rec(
# abbrevs := [(s, i, q) -> [s, i, q]],
# new := (self, prm, i, ss) >> SPL(WithBases(self, rec(
# _children := [prm, i],
# streamSize := ss,
# dimensions := Cond(IsBound(prm[1].domain), [prm[1].domain(), prm[1].range()], prm[1].dims()),
# ))),
# rChildren := self >> self._children,
# rSetChild := meth(self, n, what) self._children[n] := what; end,
# child := (self, n) >> self._children[n],
# children := self >> self._children,
# createCode := meth(self)
# local length, permobj, permrows, modmatrix, muxaddr, muxaddr2, ports,
# addrs, rdaddr, wraddr, network, dms, prms, it, prmcode, i, res;
# dms := Cond(IsBound(self.child(1)[1].domain), [self.child(1)[1].domain(), self.child(1)[1].range()], self.child(1)[1].dims());
# it := self.child(2);
# prms := self.child(1);
# if (self.streamSize = 1) then
# return BRAMPermStreamOne(List(prms, i->PermMatrixRowVals(MatSPL(i))),
# prms,
# it);
# fi;
# if (self.streamSize = dms[1]) then
# ## If this perm is multiple permutation functions composed, separate them to make the
# ## code generation work as expected.
# prmcode := List(self.child(1), j ->
# Cond(ObjId(j)=Compose,
# Compose(List(self.child(1).children(), i->Prm(FPerm(i)))),
# Prm(FPerm(j))
# )
# );
# res := prmcode[Length(prmcode)];
# for i in Reversed([1..Length(prmcode)-1]) do
# res := COND(eq(it, i), prmcode[i], res);
# od;
# return CodeBlock(res).createCode();
# fi;
# length := self._children[1].dims()[1];
# permobj := PermSPL(self._children[1]);
# permrows := PermMatrixRowVals(MatPerm(permobj, length));
# modmatrix := PermutationModMatrix(permobj, length, self.streamSize);
# muxaddr := FactorSemiMagicSquare(modmatrix);
# muxaddr2 := PadToTwoPowSize(muxaddr);
# network := List([1..Length(muxaddr2)], i -> routePerm(muxaddr2[i]));
# addrs := PermHelperFindAddresses(permrows, muxaddr,
# self.streamSize);
# rdaddr := addrs[1];
# wraddr := addrs[2];
# # Previously, we used 'muxaddr.'
# # return BRAMPermGeneral(rdaddr, muxaddr, wraddr, self._children[1]);
# # Now I am switching to the omega^-1 omega network control bits
# return BRAMPermGeneral(rdaddr, network, wraddr, self._children[1]);
# end,
# print := (self,i,is) >> Print(self.name, "(", self._children[1], ", ", self._children[2], ", ",
# self.streamSize, ")"),
# toAMat := meth(self)
# local prms, len, it, i, res;
# prms := self._children[1];
# len := Length(prms);
# it := self._children[2];
# res := prms[len];
# for i in Reversed([1..len-1]) do
# res := COND(eq(it, i), prms[i], res);
# od;
# return res.toAMat();
# end,
# # toAMat := self >> COND(eq(self._children[2], 0), self._children[1][1], self._children[1][2]).toAMat(),
# dims := self >> self._children[1][1].dims(),
# sums := self >> self
# ));
## ==================================================================
## BRAMPermStreamOne([rd-addr, ...], [<SPL>, ... ], i)
## A container for a streaming multi-permutation structure where width = 1.
## This merits its own class because the hardware implementation
## is greatly simplified.
##
## The permutations are given as a list of vectors of read addresses, and
## as a list of SPL objects.
##
## i is the variable that selects which permutation to perform
## ==================================================================
Class(BRAMPermStreamOne, BaseMat, SumsBase, rec(
abbrevs := [(r, p, i)-> [r, p, i]],
new := (self, r, p, i) >> SPL(WithBases(self, rec(
_children := [r, p, i],
dimensions := [ Length(r[1]), Length(r[1])]
))),
rChildren := self >> self._children,
rSetChild := meth(self, n, what) self._children[n] := what; end,
child := (self, n) >> self._children[n],
children := self >> self._children,
createCode := self >> self,
print := (self,i,is) >> Print(self.name, "(", self.child(1), ", ", self.child(3), ")"),
# toAMat := self >> self.child(2).toAMat(),
toAMat := meth(self)
local prms, len, it, i, res;
prms := self._children[2];
len := Length(prms);
it := self._children[3];
res := prms[len];
for i in Reversed([1..len-1]) do
res := COND(eq(it, i-1), prms[i], res);
od;
return res.toAMat();
end,
dims := self >> [Length(self._children[1][1]),
Length(self._children[1][1])],
sums := self >> self
));
Declare(InitStreamUnrollHw, SumStreamStrategy, StreamStrategy);
# CountPermSwitches(t)
# Returns the number of switches needed for the permutation t.
# We assume t is a TPrm().withTags([AStream(w)]);
CountPermSwitches := function (t)
local i, M, M2, M2_t, N, N2, N2_t, n, k, r, s, s2, s3, s_m, s_n, opts;
opts := InitStreamUnrollHw();
r := RandomRuleTree(t, opts);
s := SumsRuleTreeStrategy(r, SumStreamStrategy, opts);
s2 := ApplyStrategy(s, StreamStrategy, UntilDone, opts);
s3 := s2.createCode();
k := Log2Int(s3.streamSize);
M := s3.child(2);
n := Length(M);
# Extract M2. First just get the bottom k rows.
M2_t := Sublist(M, [n-k+1 .. n]);
M2 := [];
# Now take the first k columns of those rows
for i in [1..k] do
Append(M2, [Sublist(M2_t[i], [1..k])]);
od;
s_m := Rank(M2);
N := s3.child(1);
# Extract N2. First just get the bottom k rows.
N2_t := Sublist(N, [n-k+1 .. n]);
N2 := [];
# Now take the first k columns of those rows
for i in [1..k] do
Append(N2, [Sublist(N2_t[i], [1..k])]);
od;
s_n := Rank(N2);
return (s_m + s_n) * s3.streamSize/2;
end;
|
Load LFindLoad.
From lfind Require Import LFind.
From QuickChick Require Import QuickChick.
From adtind Require Import goal29.
Derive Show for natural.
Derive Arbitrary for natural.
Instance Dec_Eq_natural : Dec_Eq natural.
Proof. dec_eq. Qed.
Derive Show for lst.
Derive Arbitrary for lst.
Instance Dec_Eq_lst : Dec_Eq lst.
Proof. dec_eq. Qed.
Lemma lfind_hyp_test : (@eq lst (rev (rev (Nil))) (Nil)).
Admitted.
QuickChick lfind_hyp_test.
|
Formal statement is: lemma Lim_within_id: "(id \<longlongrightarrow> a) (at a within s)" Informal statement is: The identity function converges to $a$ at $a$ within $s$.
|
Formal statement is: proposition separate_closed_compact: fixes s t :: "'a::heine_borel set" assumes "closed s" and "compact t" and "s \<inter> t = {}" shows "\<exists>d>0. \<forall>x\<in>s. \<forall>y\<in>t. d \<le> dist x y" Informal statement is: If $s$ is a closed set and $t$ is a compact set, and $s$ and $t$ are disjoint, then there exists a positive real number $d$ such that for all $x \in s$ and $y \in t$, we have $d \<le> \|x - y\|$.
|
header {* Some preliminaries on equivalence relations and quotients *}
(* author: Andrei Popescu *)
theory Equiv_Relation2 imports Preliminaries
begin
(* Recall the following constants and lemmas:
term Eps
term "A//r"
lemmas equiv_def
lemmas refl_on_def
-- note that "reflexivity on" also assumes inclusion of the relation's field into A
*)
definition proj :: "'a rel \<Rightarrow> 'a \<Rightarrow> 'a set"
where "proj r x = r `` {x}"
abbreviation
EpsSet where "EpsSet X \<equiv> Eps (% x. x \<in> X)"
definition univ :: "('a \<Rightarrow> 'b) \<Rightarrow> ('a set \<Rightarrow> 'b)"
where "univ f X == f (EpsSet X)"
lemma proj_preserves:
"x \<in> A \<Longrightarrow> proj r x \<in> A//r"
unfolding proj_def by (rule quotientI)
lemma proj_in_iff:
assumes "equiv A r"
shows "(proj r x \<in> A//r) = (x \<in> A)"
apply(rule iffI) apply(auto simp add: proj_preserves)
unfolding proj_def quotient_def proof auto
fix y assume y: "y \<in> A" and "r `` {x} = r `` {y}"
moreover have "y \<in> r `` {y}" using assms y unfolding equiv_def refl_on_def by auto
ultimately have "(x,y) \<in> r" by auto
thus "x \<in> A" using assms unfolding equiv_def refl_on_def by auto
qed
lemma proj_iff:
"\<lbrakk>equiv A r; {x,y} \<subseteq> A\<rbrakk> \<Longrightarrow> (proj r x = proj r y) = ((x,y) \<in> r)"
unfolding proj_def by (auto simp add: eq_equiv_class_iff)
lemma in_proj: "\<lbrakk>equiv A r; x \<in> A\<rbrakk> \<Longrightarrow> x \<in> proj r x"
unfolding proj_def equiv_def refl_on_def by auto
lemma proj_image: "(proj r) ` A = A//r"
unfolding proj_def[abs_def] quotient_def by auto
lemma in_quotient_imp_non_empty:
assumes "equiv A r" and "X \<in> A//r"
shows "X \<noteq> {}"
proof-
obtain x where "x \<in> A \<and> X = r `` {x}" using assms unfolding quotient_def by auto
hence "x \<in> X" using assms equiv_class_self by fastforce
thus ?thesis by auto
qed
lemma in_quotient_imp_in_rel:
"\<lbrakk>equiv A r; X \<in> A//r; {x,y} \<subseteq> X\<rbrakk> \<Longrightarrow> (x,y) \<in> r"
using quotient_eq_iff by fastforce
lemma in_quotient_imp_closed:
"\<lbrakk>equiv A r; X \<in> A//r; x \<in> X; (x,y) \<in> r\<rbrakk> \<Longrightarrow> y \<in> X"
unfolding quotient_def equiv_def trans_def by auto
lemma in_quotient_imp_subset:
"\<lbrakk>equiv A r; X \<in> A//r\<rbrakk> \<Longrightarrow> X \<subseteq> A"
using in_quotient_imp_in_rel equiv_type by fastforce
lemma equiv_Eps_in:
assumes ECH: "equiv A r" and X: "X \<in> A//r"
shows "EpsSet X \<in> X"
proof(rule "someI2_ex", auto)
show "\<exists> x. x \<in> X" using assms in_quotient_imp_non_empty by fastforce
qed
lemma equiv_Eps_preserves:
assumes ECH: "equiv A r" and X: "X \<in> A//r"
shows "EpsSet X \<in> A"
proof(rule "someI2_ex")
show "\<exists> x. x \<in> X" using assms in_quotient_imp_non_empty by fastforce
next
fix x assume "x \<in> X"
moreover have "X \<subseteq> A" using assms in_quotient_imp_subset by fastforce
ultimately show "x \<in> A" by auto
qed
lemma proj_Eps:
assumes "equiv A r" and "X \<in> A//r"
shows "proj r (EpsSet X) = X"
unfolding proj_def proof(auto)
fix x assume x: "x \<in> X"
thus "(Eps (% x. x \<in> X), x) \<in> r" using assms equiv_Eps_in in_quotient_imp_in_rel by fastforce
next
fix x assume "(EpsSet X,x) \<in> r"
thus "x \<in> X" using assms equiv_Eps_in in_quotient_imp_closed by metis
qed
lemma Eps_proj:
assumes "equiv A r" and "x \<in> A"
shows "(EpsSet (proj r x), x) \<in> r"
proof-
have 1: "proj r x \<in> A//r" using assms proj_preserves by fastforce
hence "EpsSet (proj r x) \<in> proj r x" using assms equiv_Eps_in by auto
moreover have "x \<in> proj r x" using assms in_proj by fastforce
ultimately show ?thesis using assms 1 in_quotient_imp_in_rel by fastforce
qed
lemma equiv_Eps_iff:
assumes "equiv A r" and "{X,Y} \<subseteq> A//r"
shows "((EpsSet X, EpsSet Y) \<in> r) = (X = Y)"
proof-
have "EpsSet X \<in> X \<and> EpsSet Y \<in> Y" using assms equiv_Eps_in by auto
thus ?thesis using assms quotient_eq_iff by fastforce
qed
lemma equiv_Eps_inj_on:
assumes "equiv A r"
shows "inj_on EpsSet (A//r)"
unfolding inj_on_def proof clarify
fix X Y assume X: "X \<in> A//r" and Y: "Y \<in> A//r" and Eps: "EpsSet X = EpsSet Y"
hence "EpsSet X \<in> A" using assms equiv_Eps_preserves by auto
hence "(EpsSet X, EpsSet Y) \<in> r"
using assms Eps unfolding quotient_def equiv_def refl_on_def by auto
thus "X= Y" using X Y assms equiv_Eps_iff by auto
qed
lemma univ_commute[simp]:
assumes ECH: "equiv A r" and RES: "f respects r" and x: "x \<in> A"
shows "(univ f) (proj r x) = f x"
unfolding univ_def proof-
have prj: "proj r x \<in> A//r" using x proj_preserves by fastforce
hence "EpsSet (proj r x) \<in> A" using ECH equiv_Eps_preserves by fastforce
moreover have "proj r (EpsSet (proj r x)) = proj r x" using ECH prj proj_Eps by fastforce
ultimately have "(x, EpsSet (proj r x)) \<in> r" using x ECH proj_iff by fastforce
thus "f (EpsSet (proj r x)) = f x" using RES unfolding congruent_def by auto
qed
lemma univ_unique:
assumes ECH: "equiv A r" and
RES: "f respects r" and COM: "\<forall> x \<in> A. G (proj r x) = f x"
shows "\<forall> X \<in> A//r. G X = univ f X"
proof
fix X assume "X \<in> A//r"
then obtain x where x: "x \<in> A" and X: "X = proj r x" using ECH proj_image[of r A] by blast
have "G X = f x" unfolding X using x COM by simp
thus "G X = univ f X" unfolding X using ECH RES x univ_commute by fastforce
qed
lemma univ_preserves:
assumes ECH: "equiv A r" and RES: "f respects r" and
PRES: "\<forall> x \<in> A. f x \<in> B"
shows "\<forall> X \<in> A//r. univ f X \<in> B"
proof
fix X assume "X \<in> A//r"
then obtain x where x: "x \<in> A" and X: "X = proj r x" using ECH proj_image[of r A] by blast
hence "univ f X = f x" using assms univ_commute by fastforce
thus "univ f X \<in> B" using x PRES by auto
qed
end
|
Require Import Coq.omega.Omega.
Require Import Platform.AutoSep.
Fixpoint gen_str n : string :=
match n with
| O => EmptyString
| S n' => String "0" (gen_str n')
end.
Fixpoint gen_ns n :=
match n with
| O => nil
| S n' => gen_str n' :: gen_ns n'
end.
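(* Example (illustrative, not part of the original source):
gen_str 3 = "000" and gen_ns 3 = "00" :: "0" :: "" :: nil. *)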
Lemma gen_ns_len : forall n, length (gen_ns n) = n.
induction n; simpl; intuition.
Qed.
Lemma gen_str_inj : forall a b, gen_str a = gen_str b -> a = b.
induction a; induction b; simpl; intuition.
Qed.
Lemma fold_gen_str : forall n, String "0" (gen_str n) = gen_str (S n).
eauto.
Qed.
Lemma longer_str_not_in : forall r n, (n <= r)%nat -> ~ List.In (gen_str r) (gen_ns n).
induction r; induction n; simpl; intuition.
rewrite fold_gen_str in *.
eapply gen_str_inj in H1.
intuition.
Qed.
Hint Resolve longer_str_not_in.
Lemma gen_ns_NoDup : forall n, NoDup (gen_ns n).
Hint Constructors NoDup.
induction n; simpl; intuition.
Qed.
Hint Resolve gen_ns_NoDup.
Lemma behold_the_array_ls : forall len p, p =?> len ===> Ex ls, [| length ls = len |] * array ls p.
intros; unfold array; rewrite <- (gen_ns_len len); eapply Himp_trans; [ eapply behold_the_array | rewrite gen_ns_len; sepLemma; rewrite length_toArray; rewrite gen_ns_len ]; eauto.
Qed.
Lemma buf_2_fwd : forall p len, (2 <= len)%nat -> p =?> len ===> p =?> 2 * (p ^+ $8) =?> (len - 2).
destruct len; simpl; intros; try omega.
destruct len; simpl; intros; try omega.
sepLemma; eapply allocated_shift_base; [ words | intuition ].
Qed.
Definition hints_buf_2_fwd : TacPackage.
prepare buf_2_fwd tt.
Defined.
Definition hints_array : TacPackage.
prepare behold_the_array_ls tt.
Defined.
Definition hints_buf_2_fwd_array : TacPackage.
prepare (buf_2_fwd, behold_the_array_ls) tt.
Defined.
|
BP acquired the Outer Continental Shelf lease of Keathley Canyon block 102 reference <unk>, NOAA station <unk>, on October 22, 2003, in Phase 2 of the Western Gulf of Mexico (<unk>/<unk>) Sale 187. Lower Tertiary rock formations are some of the oldest and most technically challenging offshore rock formations currently drilled for oil, dating to between 23 and 66 million years ago. The plan of exploration was filed in June 2008.
|
module Sub12LTE108t39
import ProofColDivSeqBase
import ProofColDivSeqPostulate
%default total
-- 6(18t+6)+3 --DB[3,-1,-2]--> 6(2t)+3
export
lte108t39 : (l : Nat) -> LTE (S (plus l l)) (S (S (S ((S ((l+l+l)+(l+l+l))) + (S ((l+l+l)+(l+l+l))) + (S ((l+l+l)+(l+l+l)))))))
lte108t39 Z = (lteSuccRight . LTESucc) LTEZero
lte108t39 (S l) =
let lemma = lte108t39 l in
rewrite (sym (plusSuccRightSucc l l)) in
rewrite (sym (plusSuccRightSucc (plus l l) l)) in
rewrite (sym (plusSuccRightSucc (plus (plus l l) l) (S (S (plus (plus l l) l))))) in
rewrite (sym (plusSuccRightSucc (plus (plus l l) l) (S (plus (plus l l) l)))) in
rewrite (sym (plusSuccRightSucc (plus (plus l l) l) (plus (plus l l) l))) in
rewrite (sym (plusSuccRightSucc (plus (plus (plus l l) l) (plus (plus l l) l)) (S (S (S (S (S (S (plus (plus (plus l l) l) (plus (plus l l) l)))))))))) in
rewrite (sym (plusSuccRightSucc (plus (plus (plus l l) l) (plus (plus l l) l)) (S (S (S (S (S (plus (plus (plus l l) l) (plus (plus l l) l))))))))) in
rewrite (sym (plusSuccRightSucc (plus (plus (plus l l) l) (plus (plus l l) l)) (S (S (S (S (plus (plus (plus l l) l) (plus (plus l l) l)))))))) in
rewrite (sym (plusSuccRightSucc (plus (plus (plus l l) l) (plus (plus l l) l)) (S (S (S (plus (plus (plus l l) l) (plus (plus l l) l))))))) in
rewrite (sym (plusSuccRightSucc (plus (plus (plus l l) l) (plus (plus l l) l)) (S (S (plus (plus (plus l l) l) (plus (plus l l) l)))))) in
rewrite (sym (plusSuccRightSucc (plus (plus (plus l l) l) (plus (plus l l) l)) (S (plus (plus (plus l l) l) (plus (plus l l) l))))) in
rewrite (sym (plusSuccRightSucc (plus (plus (plus (plus l l) l) (plus (plus l l) l)) (S (plus (plus (plus l l) l) (plus (plus l l) l)))) (S (S (S (S (S (S (plus (plus (plus l l) l) (plus (plus l l) l)))))))))) in
rewrite (sym (plusSuccRightSucc (plus (plus (plus (plus l l) l) (plus (plus l l) l)) (S (plus (plus (plus l l) l) (plus (plus l l) l)))) (S (S (S (S (S (plus (plus (plus l l) l) (plus (plus l l) l))))))))) in
rewrite (sym (plusSuccRightSucc (plus (plus (plus (plus l l) l) (plus (plus l l) l)) (S (plus (plus (plus l l) l) (plus (plus l l) l)))) (S (S (S (S (plus (plus (plus l l) l) (plus (plus l l) l)))))))) in
rewrite (sym (plusSuccRightSucc (plus (plus (plus (plus l l) l) (plus (plus l l) l)) (S (plus (plus (plus l l) l) (plus (plus l l) l)))) (S (S (S (plus (plus (plus l l) l) (plus (plus l l) l))))))) in
rewrite (sym (plusSuccRightSucc (plus (plus (plus (plus l l) l) (plus (plus l l) l)) (S (plus (plus (plus l l) l) (plus (plus l l) l)))) (S (S (plus (plus (plus l l) l) (plus (plus l l) l)))))) in
rewrite (sym (plusSuccRightSucc (plus (plus (plus (plus l l) l) (plus (plus l l) l)) (S (plus (plus (plus l l) l) (plus (plus l l) l)))) (S (plus (plus (plus l l) l) (plus (plus l l) l))))) in
(lteSuccRight . lteSuccRight . lteSuccRight . lteSuccRight . lteSuccRight . lteSuccRight . lteSuccRight . lteSuccRight . lteSuccRight . lteSuccRight . lteSuccRight . lteSuccRight . lteSuccRight . lteSuccRight . lteSuccRight . lteSuccRight . LTESucc . LTESucc) lemma
|
\name{decorate_column_title}
\alias{decorate_column_title}
\title{
Decorate Heatmap Column Titles
}
\description{
Decorate Heatmap Column Titles
}
\usage{
decorate_column_title(..., envir = new.env(parent = parent.frame()))
}
\arguments{
\item{...}{Pass to \code{\link{decorate_title}}.}
\item{envir}{Where to look for variables inside \code{code}.}
}
\details{
This is a helper function which calls \code{\link{decorate_title}} with the \code{which} argument pre-defined.
}
\value{
The function returns no value.
}
\author{
Zuguang Gu <[email protected]>
}
\examples{
# There is no example
NULL
}
|
Theorem not_both_true_and_false : forall P : Prop,
~ (P /\ ~P).
Proof.
intros P.
unfold not.
intros H1.
destruct H1 as [H_left H_right].
apply H_right in H_left.
apply H_left.
Qed.
|
#!/usr/bin/python
'''
Search Data Lab for potential hosts of these events in DES DR1 and LS DR7.
'''
import os
import json
from collections import OrderedDict
import numpy as np
from tqdm import tqdm
from astropy.coordinates import SkyCoord
from dl import authClient as ac, queryClient as qc
from getpass import getpass
if __name__ == '__main__':
# initialize datalab
token = ac.login(input('Data Lab user name: '), getpass('Password: '))
# read candidates
with open('candidate-events.json', 'r') as fp:
candidate_events = json.load(fp, object_pairs_hook=OrderedDict)
if os.path.isfile('candidate-hosts-dl.json'):
with open('candidate-hosts-dl.json', 'r') as fp:
candidate_hosts = json.load(fp, object_pairs_hook=OrderedDict)
else:
candidate_hosts = OrderedDict()
# 'radius' of the box.
box_radius = 60. / 60. / 60. # 60 asec in degrees
# for each event: search for
I_counter = 0
for cand_i, cand_info_i in tqdm(candidate_events.items(),
total=len(candidate_events)):
if cand_i in candidate_hosts:
continue
# some events do not have complete RA/Dec info.
if not (cand_info_i['ra'] and cand_info_i['dec']):
continue
crd_i = SkyCoord(ra=cand_info_i['ra'],
dec=cand_info_i['dec'],
unit=('hour', 'deg'))
cos_delta_i = np.cos(crd_i.dec.radian)
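# Dividing the RA half-width by cos(dec) keeps the search box
# approximately square on the sky, since lines of constant RA
# converge toward the celestial poles.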
# DES
des_query = '''
SELECT coadd_object_id AS objid, ra, dec
FROM des_dr1.galaxies
WHERE ra BETWEEN %f AND %f
AND dec BETWEEN %f AND %f
''' % (
crd_i.ra.deg - box_radius / cos_delta_i,
crd_i.ra.deg + box_radius / cos_delta_i,
crd_i.dec.deg - box_radius,
crd_i.dec.deg + box_radius
)
des_qr = qc.query(token, sql=des_query)
# Legacy Survey DR7
ls_query = '''
SELECT ref_id, ra, dec
FROM ls_dr7.galaxy
WHERE ra BETWEEN %f AND %f
AND dec BETWEEN %f AND %f
''' % (
crd_i.ra.deg - box_radius / cos_delta_i,
crd_i.ra.deg + box_radius / cos_delta_i,
crd_i.dec.deg - box_radius,
crd_i.dec.deg + box_radius
)
ls_qr = qc.query(token, sql=ls_query)
# put into dict.
candidate_hosts[cand_i] = OrderedDict([
('DES', des_qr),
('LS', ls_qr),
])
I_counter += 1
if not (I_counter % 269): # save into a file.
with open('candidate-hosts-dl.json', 'w') as fp:
json.dump(candidate_hosts, fp, indent=4)
with open('candidate-hosts-dl.json', 'w') as fp:
json.dump(candidate_hosts, fp, indent=4)
|
State Before: α : Type u_1
inst✝ : DecidableEq α
s✝ s t : Multiset α
⊢ ndinter s t = 0 ↔ Disjoint s t State After: α : Type u_1
inst✝ : DecidableEq α
s✝ s t : Multiset α
⊢ ndinter s t ⊆ 0 ↔ Disjoint s t Tactic: rw [← subset_zero] State Before: α : Type u_1
inst✝ : DecidableEq α
s✝ s t : Multiset α
⊢ ndinter s t ⊆ 0 ↔ Disjoint s t State After: no goals Tactic: simp [subset_iff, Disjoint]
|
{-# OPTIONS --sized-types #-}
module SBList {A : Set}(_≤_ : A → A → Set) where
open import Bound.Total A
open import Bound.Total.Order _≤_
open import Data.List
open import Data.Product
open import Size
data SBList : {ι : Size} → Bound → Bound → Set where
nil : {ι : Size}{b t : Bound}
→ LeB b t
→ SBList {↑ ι} b t
cons : {ι : Size}{b t : Bound}
(x : A)
→ LeB b (val x)
→ LeB (val x) t
→ SBList {ι} b t
→ SBList {↑ ι} b t
bound : List A → SBList bot top
bound [] = nil lebx
bound (x ∷ xs) = cons x lebx lext (bound xs)
unbound : {b t : Bound} → SBList b t → List A
unbound (nil _) = []
unbound (cons x _ _ xs) = x ∷ unbound xs
unbound× : {ι : Size}{b t b' t' : Bound} → SBList {ι} b t × SBList {ι} b' t' → List A × List A
unbound× (xs , ys) = (unbound xs , unbound ys)
|
[STATEMENT]
lemma tendsto_ln_powr_over_powr:
assumes "(a::real) > 0" "b > 0"
shows "((\<lambda>x. ln x powr a / x powr b) \<longlongrightarrow> 0) at_top"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. ((\<lambda>x. ln x powr a / x powr b) \<longlongrightarrow> 0) at_top
[PROOF STEP]
proof-
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. ((\<lambda>x. ln x powr a / x powr b) \<longlongrightarrow> 0) at_top
[PROOF STEP]
have "eventually (\<lambda>x. ln x powr a / x powr b = (ln x / x powr (b/a)) powr a) at_top"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<forall>\<^sub>F x in at_top. ln x powr a / x powr b = (ln x / x powr (b / a)) powr a
[PROOF STEP]
using assms eventually_gt_at_top[of "1::real"]
[PROOF STATE]
proof (prove)
using this:
0 < a
0 < b
eventually ((<) 1) at_top
goal (1 subgoal):
1. \<forall>\<^sub>F x in at_top. ln x powr a / x powr b = (ln x / x powr (b / a)) powr a
[PROOF STEP]
by (elim eventually_mono) (simp add: powr_divide powr_powr)
[PROOF STATE]
proof (state)
this:
\<forall>\<^sub>F x in at_top. ln x powr a / x powr b = (ln x / x powr (b / a)) powr a
goal (1 subgoal):
1. ((\<lambda>x. ln x powr a / x powr b) \<longlongrightarrow> 0) at_top
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
\<forall>\<^sub>F x in at_top. ln x powr a / x powr b = (ln x / x powr (b / a)) powr a
goal (1 subgoal):
1. ((\<lambda>x. ln x powr a / x powr b) \<longlongrightarrow> 0) at_top
[PROOF STEP]
have "eventually (\<lambda>x. 0 < ln x / x powr (b / a)) at_top"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<forall>\<^sub>F x in at_top. 0 < ln x / x powr (b / a)
[PROOF STEP]
using eventually_gt_at_top[of "1::real"]
[PROOF STATE]
proof (prove)
using this:
eventually ((<) 1) at_top
goal (1 subgoal):
1. \<forall>\<^sub>F x in at_top. 0 < ln x / x powr (b / a)
[PROOF STEP]
by (elim eventually_mono) simp
[PROOF STATE]
proof (state)
this:
\<forall>\<^sub>F x in at_top. 0 < ln x / x powr (b / a)
goal (1 subgoal):
1. ((\<lambda>x. ln x powr a / x powr b) \<longlongrightarrow> 0) at_top
[PROOF STEP]
with assms
[PROOF STATE]
proof (chain)
picking this:
0 < a
0 < b
\<forall>\<^sub>F x in at_top. 0 < ln x / x powr (b / a)
[PROOF STEP]
have "((\<lambda>x. (ln x / x powr (b/a)) powr a) \<longlongrightarrow> 0) at_top"
[PROOF STATE]
proof (prove)
using this:
0 < a
0 < b
\<forall>\<^sub>F x in at_top. 0 < ln x / x powr (b / a)
goal (1 subgoal):
1. ((\<lambda>x. (ln x / x powr (b / a)) powr a) \<longlongrightarrow> 0) at_top
[PROOF STEP]
by (intro tendsto_zero_powrI tendsto_ln_over_powr) (simp_all add: eventually_mono)
[PROOF STATE]
proof (state)
this:
((\<lambda>x. (ln x / x powr (b / a)) powr a) \<longlongrightarrow> 0) at_top
goal (1 subgoal):
1. ((\<lambda>x. ln x powr a / x powr b) \<longlongrightarrow> 0) at_top
[PROOF STEP]
ultimately
[PROOF STATE]
proof (chain)
picking this:
\<forall>\<^sub>F x in at_top. ln x powr a / x powr b = (ln x / x powr (b / a)) powr a
((\<lambda>x. (ln x / x powr (b / a)) powr a) \<longlongrightarrow> 0) at_top
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
\<forall>\<^sub>F x in at_top. ln x powr a / x powr b = (ln x / x powr (b / a)) powr a
((\<lambda>x. (ln x / x powr (b / a)) powr a) \<longlongrightarrow> 0) at_top
goal (1 subgoal):
1. ((\<lambda>x. ln x powr a / x powr b) \<longlongrightarrow> 0) at_top
[PROOF STEP]
by (subst tendsto_cong) simp_all
[PROOF STATE]
proof (state)
this:
((\<lambda>x. ln x powr a / x powr b) \<longlongrightarrow> 0) at_top
goal:
No subgoals!
[PROOF STEP]
qed
|
------------------------------------------------------------------------------
-- Propositional equality on inductive PA
------------------------------------------------------------------------------
{-# OPTIONS --exact-split #-}
{-# OPTIONS --no-sized-types #-}
{-# OPTIONS --no-universe-polymorphism #-}
{-# OPTIONS --without-K #-}
-- This file contains some definitions which are reexported by
-- PA.Inductive.Base.
module PA.Inductive.Relation.Binary.PropositionalEquality where
open import Common.FOL.FOL using ( ¬_ )
open import PA.Inductive.Base.Core
infix 4 _≡_ _≢_
------------------------------------------------------------------------------
-- The identity type on PA.
data _≡_ (x : ℕ) : ℕ → Set where
refl : x ≡ x
-- Inequality.
_≢_ : ℕ → ℕ → Set
x ≢ y = ¬ x ≡ y
{-# ATP definition _≢_ #-}
-- Identity properties
sym : ∀ {x y} → x ≡ y → y ≡ x
sym refl = refl
trans : ∀ {x y z} → x ≡ y → y ≡ z → x ≡ z
trans refl h = h
subst : (A : ℕ → Set) → ∀ {x y} → x ≡ y → A x → A y
subst A refl Ax = Ax
cong : (f : ℕ → ℕ) → ∀ {x y} → x ≡ y → f x ≡ f y
cong f refl = refl
cong₂ : (f : ℕ → ℕ → ℕ) → ∀ {x x' y y'} → x ≡ y → x' ≡ y' → f x x' ≡ f y y'
cong₂ f refl refl = refl
|
[STATEMENT]
lemma fps_binomial_add_mult: "fps_binomial (c+d) = fps_binomial c * fps_binomial d" (is "?l = ?r")
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. fps_binomial (c + d) = fps_binomial c * fps_binomial d
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. fps_binomial (c + d) = fps_binomial c * fps_binomial d
[PROOF STEP]
let ?P = "?r - ?l"
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. fps_binomial (c + d) = fps_binomial c * fps_binomial d
[PROOF STEP]
let ?b = "fps_binomial"
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. fps_binomial (c + d) = fps_binomial c * fps_binomial d
[PROOF STEP]
let ?db = "\<lambda>x. fps_deriv (?b x)"
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. fps_binomial (c + d) = fps_binomial c * fps_binomial d
[PROOF STEP]
have "fps_deriv ?P = ?db c * ?b d + ?b c * ?db d - ?db (c + d)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. fps_deriv (fps_binomial c * fps_binomial d - fps_binomial (c + d)) = fps_deriv (fps_binomial c) * fps_binomial d + fps_binomial c * fps_deriv (fps_binomial d) - fps_deriv (fps_binomial (c + d))
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
fps_deriv (fps_binomial c * fps_binomial d - fps_binomial (c + d)) = fps_deriv (fps_binomial c) * fps_binomial d + fps_binomial c * fps_deriv (fps_binomial d) - fps_deriv (fps_binomial (c + d))
goal (1 subgoal):
1. fps_binomial (c + d) = fps_binomial c * fps_binomial d
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
fps_deriv (fps_binomial c * fps_binomial d - fps_binomial (c + d)) = fps_deriv (fps_binomial c) * fps_binomial d + fps_binomial c * fps_deriv (fps_binomial d) - fps_deriv (fps_binomial (c + d))
goal (1 subgoal):
1. fps_binomial (c + d) = fps_binomial c * fps_binomial d
[PROOF STEP]
have "\<dots> = inverse (1 + fps_X) *
(fps_const c * ?b c * ?b d + fps_const d * ?b c * ?b d - fps_const (c+d) * ?b (c + d))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. fps_deriv (fps_binomial c) * fps_binomial d + fps_binomial c * fps_deriv (fps_binomial d) - fps_deriv (fps_binomial (c + d)) = inverse (1 + fps_X) * (fps_const c * fps_binomial c * fps_binomial d + fps_const d * fps_binomial c * fps_binomial d - fps_const (c + d) * fps_binomial (c + d))
[PROOF STEP]
unfolding fps_binomial_deriv
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. fps_const c * fps_binomial c / (1 + fps_X) * fps_binomial d + fps_binomial c * (fps_const d * fps_binomial d / (1 + fps_X)) - fps_const (c + d) * fps_binomial (c + d) / (1 + fps_X) = inverse (1 + fps_X) * (fps_const c * fps_binomial c * fps_binomial d + fps_const d * fps_binomial c * fps_binomial d - fps_const (c + d) * fps_binomial (c + d))
[PROOF STEP]
by (simp add: fps_divide_def field_simps)
[PROOF STATE]
proof (state)
this:
fps_deriv (fps_binomial c) * fps_binomial d + fps_binomial c * fps_deriv (fps_binomial d) - fps_deriv (fps_binomial (c + d)) = inverse (1 + fps_X) * (fps_const c * fps_binomial c * fps_binomial d + fps_const d * fps_binomial c * fps_binomial d - fps_const (c + d) * fps_binomial (c + d))
goal (1 subgoal):
1. fps_binomial (c + d) = fps_binomial c * fps_binomial d
[PROOF STEP]
also
[PROOF STATE]
proof (state)
this:
fps_deriv (fps_binomial c) * fps_binomial d + fps_binomial c * fps_deriv (fps_binomial d) - fps_deriv (fps_binomial (c + d)) = inverse (1 + fps_X) * (fps_const c * fps_binomial c * fps_binomial d + fps_const d * fps_binomial c * fps_binomial d - fps_const (c + d) * fps_binomial (c + d))
goal (1 subgoal):
1. fps_binomial (c + d) = fps_binomial c * fps_binomial d
[PROOF STEP]
have "\<dots> = (fps_const (c + d)/ (1 + fps_X)) * ?P"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. inverse (1 + fps_X) * (fps_const c * fps_binomial c * fps_binomial d + fps_const d * fps_binomial c * fps_binomial d - fps_const (c + d) * fps_binomial (c + d)) = fps_const (c + d) / (1 + fps_X) * (fps_binomial c * fps_binomial d - fps_binomial (c + d))
[PROOF STEP]
by (simp add: field_simps fps_divide_unit fps_const_add[symmetric] del: fps_const_add)
[PROOF STATE]
proof (state)
this:
inverse (1 + fps_X) * (fps_const c * fps_binomial c * fps_binomial d + fps_const d * fps_binomial c * fps_binomial d - fps_const (c + d) * fps_binomial (c + d)) = fps_const (c + d) / (1 + fps_X) * (fps_binomial c * fps_binomial d - fps_binomial (c + d))
goal (1 subgoal):
1. fps_binomial (c + d) = fps_binomial c * fps_binomial d
[PROOF STEP]
finally
[PROOF STATE]
proof (chain)
picking this:
fps_deriv (fps_binomial c * fps_binomial d - fps_binomial (c + d)) = fps_const (c + d) / (1 + fps_X) * (fps_binomial c * fps_binomial d - fps_binomial (c + d))
[PROOF STEP]
have th0: "fps_deriv ?P = fps_const (c+d) * ?P / (1 + fps_X)"
[PROOF STATE]
proof (prove)
using this:
fps_deriv (fps_binomial c * fps_binomial d - fps_binomial (c + d)) = fps_const (c + d) / (1 + fps_X) * (fps_binomial c * fps_binomial d - fps_binomial (c + d))
goal (1 subgoal):
1. fps_deriv (fps_binomial c * fps_binomial d - fps_binomial (c + d)) = fps_const (c + d) * (fps_binomial c * fps_binomial d - fps_binomial (c + d)) / (1 + fps_X)
[PROOF STEP]
by (simp add: fps_divide_def)
[PROOF STATE]
proof (state)
this:
fps_deriv (fps_binomial c * fps_binomial d - fps_binomial (c + d)) = fps_const (c + d) * (fps_binomial c * fps_binomial d - fps_binomial (c + d)) / (1 + fps_X)
goal (1 subgoal):
1. fps_binomial (c + d) = fps_binomial c * fps_binomial d
[PROOF STEP]
have "?P = fps_const (?P$0) * ?b (c + d)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. fps_binomial c * fps_binomial d - fps_binomial (c + d) = fps_const ((fps_binomial c * fps_binomial d - fps_binomial (c + d)) $ 0) * fps_binomial (c + d)
[PROOF STEP]
unfolding fps_binomial_ODE_unique[symmetric]
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. fps_deriv (fps_binomial c * fps_binomial d - fps_binomial (c + d)) = fps_const (c + d) * (fps_binomial c * fps_binomial d - fps_binomial (c + d)) / (1 + fps_X)
[PROOF STEP]
using th0
[PROOF STATE]
proof (prove)
using this:
fps_deriv (fps_binomial c * fps_binomial d - fps_binomial (c + d)) = fps_const (c + d) * (fps_binomial c * fps_binomial d - fps_binomial (c + d)) / (1 + fps_X)
goal (1 subgoal):
1. fps_deriv (fps_binomial c * fps_binomial d - fps_binomial (c + d)) = fps_const (c + d) * (fps_binomial c * fps_binomial d - fps_binomial (c + d)) / (1 + fps_X)
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
fps_binomial c * fps_binomial d - fps_binomial (c + d) = fps_const ((fps_binomial c * fps_binomial d - fps_binomial (c + d)) $ 0) * fps_binomial (c + d)
goal (1 subgoal):
1. fps_binomial (c + d) = fps_binomial c * fps_binomial d
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
fps_binomial c * fps_binomial d - fps_binomial (c + d) = fps_const ((fps_binomial c * fps_binomial d - fps_binomial (c + d)) $ 0) * fps_binomial (c + d)
[PROOF STEP]
have "?P = 0"
[PROOF STATE]
proof (prove)
using this:
fps_binomial c * fps_binomial d - fps_binomial (c + d) = fps_const ((fps_binomial c * fps_binomial d - fps_binomial (c + d)) $ 0) * fps_binomial (c + d)
goal (1 subgoal):
1. fps_binomial c * fps_binomial d - fps_binomial (c + d) = 0
[PROOF STEP]
by (simp add: fps_mult_nth)
[PROOF STATE]
proof (state)
this:
fps_binomial c * fps_binomial d - fps_binomial (c + d) = 0
goal (1 subgoal):
1. fps_binomial (c + d) = fps_binomial c * fps_binomial d
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
fps_binomial c * fps_binomial d - fps_binomial (c + d) = 0
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
fps_binomial c * fps_binomial d - fps_binomial (c + d) = 0
goal (1 subgoal):
1. fps_binomial (c + d) = fps_binomial c * fps_binomial d
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
fps_binomial (c + d) = fps_binomial c * fps_binomial d
goal:
No subgoals!
[PROOF STEP]
qed
|
{-# OPTIONS --prop --without-K --rewriting #-}
-- Theorems about noninterference.
open import Calf.CostMonoid
module Calf.Noninterference (costMonoid : CostMonoid) where
open import Calf.Prelude
open import Calf.Metalanguage
open import Calf.Step costMonoid
open import Calf.PhaseDistinction
open import Calf.Types.Eq
open import Data.Product
open import Relation.Binary.PropositionalEquality as P
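-- oblivious: a function into the extension-indexed type ◯⁺ B cannot observe
-- cost steps, i.e. f (step (F A) c e) is propositionally equal to f e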
oblivious : ∀ {A B} (f : cmp (F A) → val (◯⁺ B)) →
∀ c e → f (step (F A) c e) ≡ f e
oblivious {A} {B} f c e = funext/Ω (λ u →
begin
f (step (F A) c e) u ≡⟨ cong (λ e → f e u) (step/ext (F A) e c u) ⟩
f e u
∎
)
where open ≡-Reasoning
unique : ∀ {A} → (a : val (● A)) → (u : ext) → a ≡ ∗ u
unique {A} a u =
eq/ref
(●/ind {A} a (λ a → F (eq (● A) a (∗ u)))
(λ a → ret (eq/intro (η≡∗ a u)))
(λ u → ret (eq/intro refl))
(λ a u → eq/uni _ _ u))
constant : ∀ {A B} (f : val (● A) → val (◯⁺ B)) →
Σ (val (◯⁺ B)) λ b → f ≡ λ _ → b
constant f =
(λ u → f (∗ u) u) , funext (λ a → funext/Ω (λ u →
P.cong (λ a → f a u) (unique a u)))
optimization : ∀ {C B : tp pos} {A : val C → tp pos}
(f : val (Σ++ C λ c → ● (A c)) → val (◯⁺ B)) →
Σ (val C → val (◯⁺ B)) λ f' → ∀ c a → f (c , a) ≡ f' c
optimization {C} {B} {A} f =
(λ c →
let g : val (● (A c)) → val (◯⁺ B)
g a = f (c , a) in
let (b , h) = constant {A c} {B} g in
b) ,
λ c a →
let g : val (● (A c)) → val (◯⁺ B)
g a = f (c , a) in
let (b , h) = constant {A c} {B} g in
P.cong-app h a
|
/- Tactic combinators -/
example : p → q → r → p ∧ ((p ∧ q) ∧ r) ∧ (q ∧ r ∧ p) := by
intros
repeat (any_goals constructor)
all_goals assumption
example : p → q → r → p ∧ ((p ∧ q) ∧ r) ∧ (q ∧ r ∧ p) := by
intros
repeat (any_goals (first | assumption | constructor))
|
import System.Concurrency
-- Test that put/get works for multiple values.
main : IO ()
main =
do c <- makeChannel
t <- fork $ ignore (for [0..2] $ \n => channelPut c n)
v0 <- channelGet c
v1 <- channelGet c
v2 <- channelGet c
threadWait t
if v0 == 0 && v1 == 1 && v2 == 2
then putStrLn "Success!"
else putStrLn "At least one value changed in transmission."
|
This research, involving 77 young people, examines young people's experiences of bullying: not just those who are bullied, but also young people who bully and young people who witness bullying in day-to-day life.
It specifically looks at racist and identity-related bullying and outlines how young people are bullied by adults as well as by other young people, both within school and within the community. It makes concrete recommendations about how to reduce and combat identity-related bullying and how to tackle bullying that takes place within the community.
|
Rosedale Auto Service is sometimes asked by our customers in the Baltimore area, "What is a serpentine belt?" When asked, one of the skilled auto mechanics on the Rosedale Auto Service team will tell our customers that if they took a look at the front of their vehicle's engine, they would notice either a single belt woven through several pulleys or a system of belts connected to pulleys. When only one belt is present, that belt would be a serpentine belt. The serpentine belt is distinguished from other belts in that it is one continuous belt that connects to multiple devices. On older model vehicles, multiple belts were employed to drive the various engine compartment devices. But after the serpentine belt was developed in 1979, the majority of, if not all, vehicle manufacturers began using serpentine belts due to their ease of use and overall efficiency. One of the primary benefits of a serpentine belt is apparent when compared with older model vehicles, where components such as the alternator and the air and power steering pumps were each driven by their own belt, so if one belt broke the vehicle's owner might not notice that a single component had stopped working. With a serpentine belt, by contrast, all the pulleys of the various components are connected, meaning a driver will know quickly if the serpentine belt breaks because ALL the components, including power steering, will immediately suffer or shut down completely. If you live in the greater Baltimore area, are having a problem with your serpentine belt, and want the advice of a qualified and skilled auto repair specialist, please contact Rosedale Auto Service!
|
# Libraries
library(tidyverse)
library(viridis)
library(patchwork)
library(hrbrthemes)
library(circlize)
library(chorddiag) #devtools::install_github("mattflor/chorddiag")
# Load the correlation matrix from a local CSV file
data <- read.table("/Users/qsr/Desktop/chord/MCI_correlation.csv", header=TRUE)
# short names
colnames(data) <- c("hippoR",
"hippoL",
"tempoR",
"tempoL",
"cerebeR",
"cerebeL",
"brainstem",
"insulaR",
"insulaL",
"occiR",
"occiL",
"frontR",
"frontL",
"parieR",
"parieL",
"ventri")
rownames(data) <- colnames(data)
# I need a long format
data_long <- data %>%
rownames_to_column %>%
gather(key = 'key', value = 'value', -rowname)
# parameters
circos.clear()
circos.par(start.degree = 90, gap.degree = 4, track.margin = c(-0.1, 0.1), points.overflow.warning = FALSE)
par(mar = rep(0, 4))
# color palette
mycolor <- viridis(16, alpha = 1, begin = 0, end = 1, option = "D")
mycolor <- mycolor[sample(1:16)]
# Base plot
chordDiagram(
x = data_long,
grid.col = mycolor,
transparency = 0.25,
directional = 0,
diffHeight = -0.04,
annotationTrack = "grid",
annotationTrackHeight = c(0.05, 0.1),
link.arr.type = "big.arrow",
link.sort = TRUE,
link.largest.ontop = TRUE)
# Add text and axis
circos.trackPlotRegion(
track.index = 1,
bg.border = NA,
panel.fun = function(x, y) {
xlim = get.cell.meta.data("xlim")
sector.index = get.cell.meta.data("sector.index")
# Add names to the sector.
circos.text(
x = mean(xlim),
y = 3.2,
labels = sector.index,
facing = "bending",
cex = 0.8
)
}
)
|
(*
* Copyright 2014, NICTA
*
* This software may be distributed and modified according to the terms of
* the BSD 2-Clause license. Note that NO WARRANTY is provided.
* See "LICENSE_BSD2.txt" for details.
*
* @TAG(NICTA_BSD)
*)
theory Sep_Solve_Example
imports Sep_Solve
begin
(* sep_solve invokes sep_cancel and sep_mp repeatedly to solve the goal, and fails if it can't
completely discharge it *)
axiomatization
Moo :: "'a :: stronger_sep_algebra => bool" and
Bar :: "'a :: stronger_sep_algebra => bool"
where Moo_Bar[sep_cancel] : "Moo s \<Longrightarrow> Bar s"
lemma "((Bar \<and>* Q \<and>* R \<longrightarrow>* B) \<and>* Moo \<and>* A \<and>* R \<and>* Q) s \<Longrightarrow> (A \<and>* B) s"
apply (sep_solve)
done
(* encouraging better proof style with different command for schematics in assumption *)
schematic_goal "((Bar \<and>* Q \<and>* R \<longrightarrow>* B) \<and>* Moo \<and>* A \<and>* R \<and>* ?Q) s \<Longrightarrow> (A \<and>* B) s"
apply (sep_schem)
done
end
|
"""
groupby(f, xs)
group values `x ∈ xs` by the result of applying `f`.
Returns a `Dict` with keys `yi` and values `[xi1, xi2,...]` such that `f(xij) = yi`.
# Example
```julia-repl
julia> groupby(isodd,1:10)
Dict{Bool,Array{Int64,1}} with 2 entries:
false => [2, 4, 6, 8, 10]
  true  => [1, 3, 5, 7, 9]
```
"""
function groupby(f::Function, xs)
    # consume xs statefully so that iteration continues after the first element is taken
sxs = Iterators.Stateful(xs)
T = eltype(sxs)
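    # key the Dict with a concrete value type taken from the first element;
    # note: `first(sxs)` throws on an empty collection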
x = first(sxs)
y = f(x)
dict = Dict{typeof(y), Vector{T}}()
dict[y] = T[x]
for x in sxs
y = f(x)
haskey(dict, y) || (dict[y] = T[])
push!(dict[y], x)
end
return dict
end
"""
gatherby(f, xs)
Like `groupby` but returns only the values, i.e. the elements of `xs` grouped into
`Vector`s such that `f` gives the same result on every element of a given
`Vector`.
# Example
```julia-repl
julia> gatherby(isodd, 1:10)
2-element Array{Array{Int64,1},1}:
[2, 4, 6, 8, 10]
 [1, 3, 5, 7, 9]
```
"""
gatherby(f::Function, xs) = collect(values(groupby(f,xs)))
|
Load LFindLoad.
From lfind Require Import LFind.
From QuickChick Require Import QuickChick.
From adtind Require Import goal33.
Derive Show for natural.
Derive Arbitrary for natural.
Instance Dec_Eq_natural : Dec_Eq natural.
Proof. dec_eq. Qed.
Lemma conj1synthconj4 : forall (lv0 : natural) (lv1 : natural), (@eq natural (plus Zero lv0) lv1).
Admitted.
QuickChick conj1synthconj4.
|
If the operating system will not boot and no other troubleshooting can be performed, a recovery of the (C:) drive can be performed by starting the computer and pressing the F10 key.
NOTE: If an Edit Boot Options window is displayed, press the ENTER key to continue.
The Restore Complete System option deletes all partitions and any data saved on those partitions, and restores the (C:) drive to its original, factory-installed condition. This option also allows changing the size of the (C:) drive.
|
subroutine dkqg_tbqdk_g(p,msq)
implicit none
************************************************************************
* Author: R.K. Ellis *
* January, 2012. *
* calculate the element squared and subtraction terms *
* for the process *
* *
* [nwz=+1] *
* q(-p1) +g(-p2)=nu(p3)+e+(p4)+b(p5)+bb(p6)+q'(p7) *
* +g(p8) radiated from top in decay *
* *
* [nwz=-1] *
* q(-p1) +g(-p2)=e-(p3)+nu~(p4)+bb(p5)+b(p6)+q'(p7) *
* +g(p8) radiated from antitop in decay *
* *
* Top is kept strictly on-shell although all spin correlations *
* are retained. *
* Mass of bottom quark in decay is included. *
* *
* NOTE: this routine is a replacement for dkqg_tbqdk_g_old.f, *
* including the effect of the b-quark mass. In the massless *
* case it is approximately 3 times faster than that routine *
* *
************************************************************************
include 'constants.f'
include 'ewcouple.f'
include 'qcdcouple.f'
include 'masses.f'
include 'ckm.f'
include 'nwz.f'
integer j,k,hb,hc,ht,ha,h2,hg
double precision msq(-nf:nf,-nf:nf),p(mxpart,4)
double precision fac,msq_qg,msq_gq,msq_qbg,msq_gqb
double complex prop
double complex mtop(2,2,2),manti(2,2,2),
& mqg(2,2,2),mgq(2,2,2),mqbg(2,2,2),mgqb(2,2,2),
& mtotqg(2,2,2,2),mtotgq(2,2,2,2),mtotqbg(2,2,2,2),mtotgqb(2,2,2,2)
C----set all elements to zero
msq(:,:)=0d0
if (nwz .eq. +1) then
call singletoponshell(1,2,7,p,1,mqg)
call singletoponshell(2,1,7,p,1,mgq)
call singletoponshell(7,2,1,p,1,mqbg)
call singletoponshell(7,1,2,p,1,mgqb)
call tdecayg(p,3,4,5,8,mtop)
else
call singleatoponshell(1,2,7,p,-1,mqg)
call singleatoponshell(2,1,7,p,-1,mgq)
call singleatoponshell(7,2,1,p,-1,mqbg)
call singleatoponshell(7,1,2,p,-1,mgqb)
call adecayg(p,3,4,5,8,manti)
endif
c--- q-g amplitudes
do hb=1,2
do h2=1,2
do hg=1,2
do hc=1,2
mtotqg(hb,hg,h2,hc)=czip
mtotgq(hb,hg,h2,hc)=czip
mtotqbg(hb,hg,h2,hc)=czip
mtotgqb(hb,hg,h2,hc)=czip
if (nwz .eq. +1) then
do ht=1,2
mtotqg(hb,hg,h2,hc)=mtotqg(hb,hg,h2,hc)
& +mtop(hb,hg,ht)*mqg(ht,h2,hc)
mtotgq(hb,hg,h2,hc)=mtotgq(hb,hg,h2,hc)
& +mtop(hb,hg,ht)*mgq(ht,h2,hc)
mtotqbg(hb,hg,h2,hc)=mtotqbg(hb,hg,h2,hc)
& +mtop(hb,hg,ht)*mqbg(ht,h2,hc)
mtotgqb(hb,hg,h2,hc)=mtotgqb(hb,hg,h2,hc)
& +mtop(hb,hg,ht)*mgqb(ht,h2,hc)
enddo
else
do ha=1,2
mtotqg(hb,hg,h2,hc)=mtotqg(hb,hg,h2,hc)
& +mqg(hb,h2,ha)*manti(ha,hg,hc)
mtotgq(hb,hg,h2,hc)=mtotgq(hb,hg,h2,hc)
& +mgq(hb,h2,ha)*manti(ha,hg,hc)
mtotqbg(hb,hg,h2,hc)=mtotqbg(hb,hg,h2,hc)
& +mqbg(hb,h2,ha)*manti(ha,hg,hc)
mtotgqb(hb,hg,h2,hc)=mtotgqb(hb,hg,h2,hc)
& +mgqb(hb,h2,ha)*manti(ha,hg,hc)
enddo
endif
enddo
enddo
enddo
enddo
prop=dcmplx(zip,mt*twidth)
fac=V*xn*gwsq**4*gsq/abs(prop)**2*gsq*V/xn
c--- include factor for hadronic decays
c if ((case .eq. 'tt_bbh') .or. (case .eq. 'tt_hdk')) fac=2d0*xn*fac
msq_qg=0d0
msq_gq=0d0
msq_qbg=0d0
msq_gqb=0d0
do hb=1,2
do hg=1,2
do h2=1,2
do hc=1,2
msq_qg=msq_qg+fac*aveqg*abs(mtotqg(hb,hg,h2,hc))**2
msq_gq=msq_gq+fac*aveqg*abs(mtotgq(hb,hg,h2,hc))**2
msq_qbg=msq_qbg+fac*aveqg*abs(mtotqbg(hb,hg,h2,hc))**2
msq_gqb=msq_gqb+fac*aveqg*abs(mtotgqb(hb,hg,h2,hc))**2
enddo
enddo
enddo
enddo
C---fill qb-q, gg and q-qb elements
do j=-nf,nf
do k=-nf,nf
if ((j .gt. 0) .and. (k .eq. 0)) then
msq(j,k)=Vsum(j)*msq_qg
elseif ((j .lt. 0) .and. (k .eq. 0)) then
msq(j,k)=Vsum(j)*msq_qbg
elseif ((j .eq. 0) .and. (k .gt. 0)) then
msq(j,k)=Vsum(k)*msq_gq
elseif ((j .eq. 0) .and. (k .lt. 0)) then
msq(j,k)=Vsum(k)*msq_gqb
endif
enddo
enddo
return
end
|
export init_global_grid
import MPI
"""
init_global_grid(nx, ny, nz)
me, dims, nprocs, coords, comm_cart = init_global_grid(nx, ny, nz; <keyword arguments>)
Initialize a Cartesian grid of MPI processes (and also MPI itself by default), implicitly defining a global grid.
# Arguments
- {`nx`|`ny`|`nz`}`::Integer`: the number of elements of the local grid in dimension {x|y|z}.
- {`dimx`|`dimy`|`dimz`}`::Integer=0`: the desired number of processes in dimension {x|y|z}. By default, (value `0`) the process topology is created as compact as possible with the given constraints. This is handled by the MPI implementation which is installed on your system. For more information, refer to the specifications of `MPI_Dims_create` in the corresponding documentation.
- {`periodx`|`periody`|`periodz`}`::Integer=0`: whether the grid is periodic (`1`) or not (`0`) in dimension {x|y|z}.
- `quiet::Bool=false`: whether to suppress printing information like the size of the global grid (`true`) or not (`false`).
!!! note "Advanced keyword arguments"
- {`overlapx`|`overlapy`|`overlapz`}`::Integer=2`: the number of elements adjacent local grids overlap in dimension {x|y|z}. By default (value `2`), an array `A` of size (`nx`, `ny`, `nz`) on process 1 (`A_1`) overlaps the corresponding array `A` on process 2 (`A_2`) by `2` indices if the two processes are adjacent. E.g., if `overlapx=2` and process 2 is the right neighbor of process 1 in dimension x, then `A_1[end-1:end,:,:]` overlaps `A_2[1:2,:,:]`. That means, after every call `update_halo!(A)`, we have `all(A_1[end-1:end,:,:] .== A_2[1:2,:,:])` (`A_1[end,:,:]` is the halo of process 1 and `A_2[1,:,:]` is the halo of process 2). The analog applies for the dimensions y and z.
- `disp::Integer=1`: the displacement argument to `MPI.Cart_shift` in order to determine the neighbors.
- `reorder::Integer=1`: the reorder argument to `MPI.Cart_create` in order to create the Cartesian process topology.
- `comm::MPI.Comm=MPI.COMM_WORLD`: the input communicator argument to `MPI.Cart_create` in order to create the Cartesian process topology.
- `init_MPI::Bool=true`: whether to initialize MPI (`true`) or not (`false`).
For more information, refer to the documentation of MPI.jl / MPI.
# Return values
- `me`: the MPI rank of the process.
- `dims`: the number of processes in each dimension.
- `nprocs`: the number of processes.
- `coords`: the Cartesian coordinates of the process.
- `comm_cart`: the MPI communicator of the created Cartesian process topology.
# Typical use cases
init_global_grid(nx, ny, nz) # Basic call (no optional in and output arguments).
me, = init_global_grid(nx, ny, nz) # Capture 'me' (note the ','!).
me, dims = init_global_grid(nx, ny, nz) # Capture 'me' and 'dims'.
init_global_grid(nx, ny, nz; dimx=2, dimy=2) # Fix the number of processes in the dimensions x and y of the Cartesian grid of MPI processes to 2 (the number of processes can vary only in the dimension z).
init_global_grid(nx, ny, nz; periodz=1) # Make the boundaries in dimension z periodic.
See also: [`finalize_global_grid`](@ref)
"""
function init_global_grid(nx::Integer, ny::Integer, nz::Integer; dimx::Integer=0, dimy::Integer=0, dimz::Integer=0, periodx::Integer=0, periody::Integer=0, periodz::Integer=0, overlapx::Integer=2, overlapy::Integer=2, overlapz::Integer=2, disp::Integer=1, reorder::Integer=1, comm::MPI.Comm=MPI.COMM_WORLD, init_MPI::Bool=true, quiet::Bool=false)
nxyz = [nx, ny, nz];
dims = [dimx, dimy, dimz];
periods = [periodx, periody, periodz];
overlaps = [overlapx, overlapy, overlapz];
cudaaware_MPI = [false, false, false]
if haskey(ENV, "IGG_CUDAAWARE_MPI") cudaaware_MPI .= (parse(Int64, ENV["IGG_CUDAAWARE_MPI"]) > 0); end
if none(cudaaware_MPI)
if haskey(ENV, "IGG_CUDAAWARE_MPI_DIMX") cudaaware_MPI[1] = (parse(Int64, ENV["IGG_CUDAAWARE_MPI_DIMX"]) > 0); end
if haskey(ENV, "IGG_CUDAAWARE_MPI_DIMY") cudaaware_MPI[2] = (parse(Int64, ENV["IGG_CUDAAWARE_MPI_DIMY"]) > 0); end
if haskey(ENV, "IGG_CUDAAWARE_MPI_DIMZ") cudaaware_MPI[3] = (parse(Int64, ENV["IGG_CUDAAWARE_MPI_DIMZ"]) > 0); end
end
if (nx==1) error("Invalid arguments: nx can never be 1.") end
if (ny==1 && nz>1) error("Invalid arguments: ny cannot be 1 if nz is greater than 1.") end
if (any((nxyz .== 1) .& (dims .>1 ))) error("Incoherent arguments: if nx, ny, or nz is 1, then the corresponding dimx, dimy or dimz must not be set (or set 0 or 1)."); end
if (any((nxyz .< 2 .* overlaps .- 1) .& (periods .> 0))) error("Incoherent arguments: if nx, ny, or nz is smaller than 2*overlapx-1, 2*overlapy-1 or 2*overlapz-1, respectively, then the corresponding periodx, periody or periodz must not be set (or set 0)."); end
dims[(nxyz.==1).&(dims.==0)] .= 1; # Setting any of nxyz to 1, means that the corresponding dimension must also be 1 in the global grid. Thus, the corresponding dims entry must be 1.
if (init_MPI) # NOTE: init MPI only, once the input arguments have been checked.
if (MPI.Initialized()) error("MPI is already initialized. Set the argument 'init_MPI=false'."); end
MPI.Init();
else
if (!MPI.Initialized()) error("MPI has not been initialized beforehand. Remove the argument 'init_MPI=false'."); end # Ensure that MPI is always initialized after init_global_grid().
end
nprocs = MPI.Comm_size(comm);
MPI.Dims_create!(nprocs, dims);
comm_cart = MPI.Cart_create(comm, dims, periods, reorder);
me = MPI.Comm_rank(comm_cart);
coords = MPI.Cart_coords(comm_cart);
neighbors = fill(MPI.MPI_PROC_NULL, NNEIGHBORS_PER_DIM, NDIMS_MPI);
for i = 1:NDIMS_MPI
neighbors[:,i] .= MPI.Cart_shift(comm_cart, i-1, disp);
end
nxyz_g = dims.*(nxyz.-overlaps) .+ overlaps.*(periods.==0); # E.g. for dimension x with ol=2 and periodx=0: dimx*(nx-2)+2
set_global_grid(GlobalGrid(nxyz_g, nxyz, dims, overlaps, nprocs, me, coords, neighbors, periods, disp, reorder, comm_cart, cudaaware_MPI, quiet));
if (!quiet && me==0) println("Global grid: $(nxyz_g[1])x$(nxyz_g[2])x$(nxyz_g[3]) (nprocs: $nprocs, dims: $(dims[1])x$(dims[2])x$(dims[3]))"); end
init_timing_functions();
return me, dims, nprocs, coords, comm_cart; # The typical use case requires only these variables; the remaining can be obtained calling get_global_grid() if needed.
end
# Make sure that timing functions which must be fast at the first user call are already compiled now.
function init_timing_functions()
tic();
toc();
end
|
#pragma once
#include <boost/optional.hpp>
#include <boost/thread/thread.hpp>
#include <cga/lib/config.hpp>
#include <cga/lib/numbers.hpp>
#include <cga/lib/utility.hpp>
#include <atomic>
#include <condition_variable>
#include <memory>
#include <thread>
namespace cga
{
class block;
bool work_validate (cga::block_hash const &, uint64_t, uint64_t * = nullptr);
bool work_validate (cga::block const &, uint64_t * = nullptr);
uint64_t work_value (cga::block_hash const &, uint64_t);
class opencl_work;
class work_item
{
public:
cga::uint256_union item;
std::function<void(boost::optional<uint64_t> const &)> callback;
uint64_t difficulty;
};
class work_pool
{
public:
work_pool (unsigned, std::function<boost::optional<uint64_t> (cga::uint256_union const &)> = nullptr);
~work_pool ();
void loop (uint64_t);
void stop ();
void cancel (cga::uint256_union const &);
void generate (cga::uint256_union const &, std::function<void(boost::optional<uint64_t> const &)>, uint64_t = cga::work_pool::publish_threshold);
uint64_t generate (cga::uint256_union const &, uint64_t = cga::work_pool::publish_threshold);
std::atomic<int> ticket;
bool done;
std::vector<boost::thread> threads;
std::list<cga::work_item> pending;
std::mutex mutex;
std::condition_variable producer_condition;
std::function<boost::optional<uint64_t> (cga::uint256_union const &)> opencl;
cga::observer_set<bool> work_observers;
// Local work threshold for rate-limiting publishing blocks. ~5 seconds of work.
static uint64_t const publish_test_threshold = 0xff00000000000000;
static uint64_t const publish_full_threshold = 0xffffffc000000000;
static uint64_t const publish_threshold = cga::is_test_network ? publish_test_threshold : publish_full_threshold;
};
std::unique_ptr<seq_con_info_component> collect_seq_con_info (work_pool & work_pool, const std::string & name);
}
|
import Starknet
export
fib : (i: Felt) -> Felt
fib n = if n == 0 || n == 1
then n
else fib (n - 1) + fib (n - 2)
export
main : Cairo ()
main =
do
val <- storageRead 123
storageWrite 123 (val + fib 5)
topic <- createMemory
writeMemory 0 123 topic
value <- createMemory
writeMemory 0 val value
emitEvent 1 topic 1 value
|
module Invincy.Parsing
import Data.Vect
import Invincy.Core
%access public export
mutual
data Result : (i, r : Type) -> Type where
Done : Stream t s => s -> r -> Result s r
Partial : Stream t s => Inf (Parser s r) -> Result s r
Failure : Stream t s => String -> Result s r
data Parser : (i, r : Type) -> Type where
MkParser : Stream t s => (s -> Result s r) -> Parser s r
runParser : Parser s r -> s -> Result s r
runParser (MkParser f) = f
implementation Stream t s => Functor (Result s) where
map f (Done i r) = Done i (f r)
map f (Partial k) = assert_total (Partial (MkParser $ map f . runParser k))
map _ f@(Failure s) = f
implementation Functor (Parser s) where
map f (MkParser k) = MkParser $ (\x => map f $ k x)
implementation Stream t s => Applicative (Parser s) where
pure x = MkParser (\i => Done i x)
(MkParser f) <*> g = MkParser $ \i => case f i of
Done i' f' => case runParser g i' of
Done i'' r => Done i'' (f' r)
Partial k => Partial (MkParser $ map f' . runParser k)
Failure f => Failure f
Partial k => Partial $ k <*> g
Failure f => Failure f
infixl 2 <*>|
(<*>|) : Stream t s => Parser s (a -> b) -> Lazy (Parser s a) -> Parser s b
(<*>|) f g = f <*> g
implementation Stream t s => Monad (Parser s) where
f >>= g = MkParser $ \i => case runParser f i of
Done i' f' => runParser (g f') i'
Partial k => Partial $ k >>= g
f@(Failure s) => f
interface Applicative f => LazyAlternative (f : Type -> Type) where
empty : f a
(<|>) : f a -> Lazy (f a) -> f a
implementation Stream t s => LazyAlternative (Parser s) where
empty = MkParser . const $ Failure "an alternative is empty"
f <|> g = MkParser $ \i => case (runParser f i) of
Failure _ => runParser g i
-- this could probably be improved
Partial k => Partial $
let cont = MkParser (\i' => runParser k i');
next = MkParser (\i' => runParser g (i <+> i'))
in cont <|> next
done => done
fail : Stream t s => String -> Parser s r
fail = MkParser . const . Failure
infixl 3 <?>
(<?>) : Stream t s => Parser s a -> String -> Parser s a
(<?>) p s = p <|> fail s
item : Stream t s => Parser s t
item = MkParser $ \i => case uncons i of
Nothing => Partial item
Just (x, xs) => Done xs x
sat : Stream t s => (t -> Bool) -> Parser s t
sat p = do
i <- item
if p i then pure i else fail "sat"
ignore : Stream t s => Parser s a -> Parser s ()
ignore p = p *> pure ()
val : (Eq t, Stream t s) => t -> Parser s ()
val x = ignore (sat (== x))
raw : (Eq t, Stream t s) => s -> Parser s ()
raw l = case uncons l of
Nothing => pure ()
Just (x, xs) => do
sat (== x)
raw xs
pure ()
oneOf : (Eq t, Stream t s) => List t -> Parser s t
oneOf l = sat (flip elem l)
option : Stream t s => Parser s a -> Parser s (Maybe a)
option p = (Just <$> p) <|> (pure Nothing)
manyTill : Stream t s => Parser s a -> Parser s b -> Parser s (List a)
manyTill p t = option t >>= maybe ((<+>) . pure <$> p <*>| manyTill p t) (const $ pure neutral)
many : Stream t s => Parser s a -> Parser s (List a)
many p = option p >>= maybe (pure neutral) (\x => (<+>) . pure <$> pure x <*>| many p)
some : Stream t s => Parser s a -> Parser s (x:List a ** NonEmpty x)
some p = do
first <- p
rest <- many p
pure $ (first :: rest ** IsNonEmpty)
sepBy1 : Stream t s => Parser s a -> Parser s () -> Parser s (x:List a ** NonEmpty x)
sepBy1 x s = do
first <- x
rest <- many (s *> x)
pure $ (first :: rest ** IsNonEmpty)
sepBy : Stream t s => Parser s a -> Parser s () -> Parser s (List a)
sepBy x s = maybe [] getWitness <$> option (sepBy1 x s)
match : Stream t s => Parser s a -> Parser s (s, a)
match p = MkParser $ \i => match' i $ runParser p i
where
match' : s -> Result s a -> Result s (s, a)
match' raw (Done i r) =
let il = toList i
rawl = toList raw
in Done i (fromList (take (minus (length rawl) (length il)) rawl), r)
match' raw (Failure e) = Failure e
match' raw (Partial k) = Partial $ (\(raw', x) => (raw <+> raw', x)) <$> match k
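-- feed: supply further input to a partial result, or append it to the
-- leftover input of an already finished one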
feed : Stream t s => Result s r -> s -> Result s r
feed (Partial k) i = runParser k i
feed (Done i r) i' = Done (i <+> i') r
feed (Failure s) _ = Failure s
digit : Stream Char s => Parser s Char
digit = oneOf ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
integer : Stream Char s => Parser s Integer
integer = cast . pack . getWitness <$> some digit
-- feed (feed (runParser ((raw ['f', 'o', 'o'] *> pure False) <|> (raw ['f', 'o', 'r'] *> pure True)) ['f']) ['o']) ['o']
-- feed (feed (runParser ((raw ['f', 'o', 'o'] *> pure False) <|> (raw ['f', 'o', 'r'] *> pure True)) ['f']) ['o']) ['r']
parseWith : Monad m => Parser s r -> m s -> m (Result s r)
parseWith p r = do
v <- r
case runParser p v of
Partial k => parseWith k r
other => pure other
|
{-# OPTIONS --without-K --exact-split --safe #-}
module Fragment.Setoid.Morphism.Setoid where
open import Fragment.Setoid.Morphism.Base
open import Level using (Level; _⊔_)
open import Relation.Binary using (Setoid; IsEquivalence; Rel)
private
variable
a b ℓ₁ ℓ₂ : Level
module _ {S : Setoid a ℓ₁} {T : Setoid b ℓ₂} where
open Setoid T
infix 4 _≗_
_≗_ : Rel (S ↝ T) (a ⊔ ℓ₂)
f ≗ g = ∀ {x} → ∣ f ∣ x ≈ ∣ g ∣ x
≗-refl : ∀ {f} → f ≗ f
≗-refl = refl
≗-sym : ∀ {f g} → f ≗ g → g ≗ f
≗-sym f≗g {x} = sym (f≗g {x})
≗-trans : ∀ {f g h} → f ≗ g → g ≗ h → f ≗ h
≗-trans f≗g g≗h {x} = trans (f≗g {x}) (g≗h {x})
≗-isEquivalence : IsEquivalence _≗_
≗-isEquivalence = record { refl = λ {f} → ≗-refl {f}
; sym = λ {f g} → ≗-sym {f} {g}
; trans = λ {f g h} → ≗-trans {f} {g} {h}
}
_↝_/≗ : Setoid a ℓ₁ → Setoid b ℓ₂ → Setoid _ _
S ↝ T /≗ = record { Carrier = S ↝ T
; _≈_ = _≗_
; isEquivalence = ≗-isEquivalence
}
|
/*
Copyright (c) 2003, Arvid Norberg
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the distribution.
* Neither the name of the author nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef TORRENT_FILE_HPP_INCLUDED
#define TORRENT_FILE_HPP_INCLUDED
#include <memory>
#include <stdexcept>
#ifdef _MSC_VER
#pragma warning(push, 1)
#endif
#include <boost/noncopyable.hpp>
#include <boost/filesystem/path.hpp>
#ifdef _MSC_VER
#pragma warning(pop)
#endif
#include "libtorrent/size_type.hpp"
#include "libtorrent/config.hpp"
namespace libtorrent
{
namespace fs = boost::filesystem;
struct TORRENT_EXPORT file_error: std::runtime_error
{
file_error(std::string const& msg): std::runtime_error(msg) {}
};
class TORRENT_EXPORT file: public boost::noncopyable
{
public:
class seek_mode
{
friend class file;
private:
seek_mode(int v): m_val(v) {}
int m_val;
};
static const seek_mode begin;
static const seek_mode end;
class open_mode
{
friend class file;
public:
open_mode(): m_mask(0) {}
open_mode operator|(open_mode m) const
{ return open_mode(m.m_mask | m_mask); }
open_mode operator&(open_mode m) const
{ return open_mode(m.m_mask & m_mask); }
open_mode operator|=(open_mode m)
{
m_mask |= m.m_mask;
return *this;
}
bool operator==(open_mode m) const { return m_mask == m.m_mask; }
bool operator!=(open_mode m) const { return m_mask != m.m_mask; }
private:
open_mode(int val): m_mask(val) {}
int m_mask;
};
static const open_mode in;
static const open_mode out;
file();
file(fs::path const& p, open_mode m);
~file();
void open(fs::path const& p, open_mode m);
void close();
void set_size(size_type size);
size_type write(const char*, size_type num_bytes);
size_type read(char*, size_type num_bytes);
size_type seek(size_type pos, seek_mode m = begin);
size_type tell();
private:
struct impl;
const std::auto_ptr<impl> m_impl;
};
}
#endif // TORRENT_FILE_HPP_INCLUDED
|
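# Seam-carving style dynamic program: starting from the first row, ret[i, j]
# accumulates the minimal total weight of any path of column-adjacent cells
# from the top row down to cell (i, j).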
function cumulative(weights::Array{Int, 2})
ret = copy(weights)
dim = size(weights)
for i in 2 : dim[1]
for j in 1 : dim[2]
min = ret[i - 1, j]
if j > 1 && ret[i - 1, j - 1] < min
min = ret[i - 1, j - 1]
end
if j < dim[2] && ret[i - 1, j + 1] < min
min = ret[i - 1, j + 1]
end
ret[i, j] += min
end
end
return ret
end
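# Walk the cumulative table back from the cheapest bottom-row cell to recover
# one minimal path; returns (row, column) pairs from the bottom row upward.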
function back_track(weights::Array{Int, 2})
dim = size(weights)
buffer::Array{Int, 1} = ones(Int, dim[1])
buffer[1] = 1
for i in 2 : dim[2]
if weights[dim[1], i] < weights[dim[1], buffer[1]]
buffer[1] = i
end
end
for i in dim[1] - 1 : -1 : 1
k = dim[1] - i + 1
j = buffer[k - 1]
buffer[k] = j
if j > 1 && weights[i, j - 1] < weights[i, buffer[k]]
buffer[k] = j - 1
end
if j < dim[2] && weights[i, j + 1] < weights[i, buffer[k]]
buffer[k] = j + 1
end
end
ret::Array{Tuple{Int, Int}, 1} = Array{Tuple{Int, Int}, 1}(undef, dim[1])
for i in 1 : dim[1]
ret[i] = dim[1] - i + 1, buffer[i]
end
return ret
end
|
\section{Botnets}
\label{sec:botnets}
A \textit{botnet} is a network made up of unwitting, remotely controlled computers, typically used for malicious purposes.
A \textit{bot} is the malware that makes the infected host remotely controllable by a server — namely a \textit{controller} — instructed by an attacker. Once started, the bot contacts the controller to join the \textit{botnet} and polls it for commands to execute.
Some botnets consist of hundreds of thousands, or even millions, of computers. Since they allow so many different computers to act in unison, a botnet can be used to perform \textit{distributed denial-of-service (DDoS) attacks}, \textit{massive spam campaigns}, \textit{click frauds}, \textit{Bitcoin mining}, or to \textit{distribute other malware}, e.g. keyloggers and crypto lockers \cite{anderson2008security}. \Cref{fig:botnet-showcase} depicts a simple but common botnet-based application. Even this brief description hints at the economic implications of botnets.
\begin{figure}[tp]
\centering
\includegraphics[scale=0.6]{./fig/botnetshowcase.png}
\caption{A botnet showcase. (1) The attacker spreads bots via an infection vector, e.g. a trojan horse. (2) The infected hosts join the botnet, becoming remotely controllable by the attacker via a command\&control server. (3) The computational power of the botnet is rented out for whatever malicious purpose, earning the attacker large profits. (4) The attacker instructs the botnet to perform a spamming campaign, as requested by its client.}
\label{fig:botnet-showcase}
\end{figure}
The \textit{command and control (C\&C)} is the component responsible for the distribution of commands to bots. Botnets can be controlled in several different ways, depending on how the C\&C is implemented. In its simplest form, the C\&C implements a \textit{centralized client-server architecture} (i.e. a \textit{C\&C server}), where bots poll a web server for commands to execute. Such a botnet is easy to stop: monitor which web servers a bot connects to, then take them down, leaving the bots unable to communicate with the attacker. In a more advanced form, the C\&C implements a \textit{peer-to-peer (P2P) architecture} (i.e. \textit{C\&C nodes}), where bots instruct (and are instructed by) other nearby bots, thus providing the botnet with a high degree of availability and redundancy. Since no single point of control can be identified, such botnets cannot be neutered merely by disabling specific nodes. Therefore, defensive strategies are based on issuing fake commands or on isolating the bots from each other. Things are made harder since botnets started communicating not only through encrypted channels but also via anonymous networks, such as Tor, where it is theoretically impossible to figure out the botnet topology.
The \textit{ZeroAccess botnet} is an important example. It is one of the largest known botnets, with a population upwards of 1.9 million computers, generating profits through Bitcoin mining and click frauds \cite{zeroaccess-symantec-blog,zeroaccess-symantec-definition}.
A key feature of the ZeroAccess botnet is its use of a P2P-based C\&C, making it highly resistant to take-down attempts.
|
= = In popular culture = =
|
# (5) #########
print("Similar to last run")
##
ct <- makeCluster(cores)
registerDoParallel(ct)
##
nt<-10
asum<-0
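## each %dopar% worker receives its own copy of asum (still 0), so every
## iteration just returns 0+ijk; bsum collects 1..nt and sum(bsum) is 55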
bsum<-foreach(ijk=1:nt , .combine=cbind ) %dopar% {
asum<-asum+ijk
}
print(sum(bsum))
stopCluster(ct)
#readline(prompt = "NEXT>")
|
from agent import Agent
from monitor import interact
import gym
import numpy as np
env = gym.make('Taxi-v3')
env = gym.wrappers.Monitor(env, './video/', video_callable=lambda episode_id: True, force=True)
agent = Agent()
avg_rewards, best_avg_reward = interact(env, agent)
|
# Characterization of Systems in the Time Domain
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelor's module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).*
## Eigenfunctions
An [eigenfunction](https://en.wikipedia.org/wiki/Eigenfunction) of a system is defined as the input signal $x(t)$ which produces the output signal $y(t) = \mathcal{H}\{ x(t) \} = \lambda \cdot x(t)$ with $\lambda \in \mathbb{C}$. The weight $\lambda$ associated with $x(t)$ is known as the scalar eigenvalue of the system. Hence, apart from a weighting factor, an eigenfunction is not modified when passing through the system.
[Complex exponential signals](../continuous_signals/standard_signals.ipynb#Complex-Exponential-Signal) $e^{s t}$ with $s \in \mathbb{C}$ are eigenfunctions of linear time-invariant (LTI) systems. This can be proven by applying the properties of LTI systems. Let's assume a generic LTI system with input signal $x(t) = e^{s t}$ and output signal $y(t) = \mathcal{H}\{ x(t) \}$. The response of the LTI system to the shifted input signal $x(t-\tau) = e^{s (t-\tau)}$ reads
\begin{equation}
y(t - \tau) = \mathcal{H}\{ x(t-\tau) \} = \mathcal{H}\{ e^{-s \tau} \cdot e^{s t} \}
\end{equation}
due to the implied shift-invariance. Now considering the implied linearity this can be reformulated as
\begin{equation}
y(t - \tau) = e^{-s \tau} \cdot \mathcal{H}\{ e^{s t} \} = e^{-s \tau} \cdot y(t)
\end{equation}
It is straightforward to show that $y(t) = \lambda e^{st}$ fulfills above equation
\begin{equation}
\lambda e^{s t} e^{-s \tau} = e^{-s \tau} \lambda e^{st}
\end{equation}
**Example**
An LTI system whose input/output relation is given by the following inhomogeneous linear ordinary differential equation (ODE) with constant coefficients is investigated
\begin{equation}
a_0 y(t) + a_1 \frac{d y(t)}{dt} + a_2 \frac{d^2 y(t)}{dt^2} = x(t)
\end{equation}
with $a_i \in \mathbb{R} \quad \forall i$. In the remainder, the output signal $y(t)$ of the system is computed by explicit solution of the ODE for $x(t) = e^{s t}$ as input signal. Integration constants are discarded for ease of illustration.
```python
import sympy as sym
sym.init_printing()
t, s, a0, a1, a2 = sym.symbols('t s a:3')
x = sym.exp(s * t)
y = sym.Function('y')(t)
ode = sym.Eq(a0*y + a1*y.diff(t) + a2*y.diff(t, 2), x)
solution = sym.dsolve(ode)
solution.subs({'C1': 0, 'C2': 0})
```
**Exercises**
* Is the complex exponential signal an eigenfunction of the system?
* Introduce $x(t) = e^{s t}$ and $y(t) = \lambda \cdot e^{s t}$ into the ODE and solve manually for the eigenvalue $\lambda$. How is the result related to the result derived above by solving the ODE? (A sketch follows below.)
* Can you generalize your findings to an ODE of arbitrary order?
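For the second exercise, the following is a minimal SymPy sketch (reusing the symbols `t`, `s`, `a0`, `a1`, `a2` defined above; the symbol `lam` for $\lambda$ is introduced here) that inserts the ansatz $x(t) = e^{s t}$ and $y(t) = \lambda \cdot e^{s t}$ into the ODE and solves for the eigenvalue. It recovers $\lambda = \frac{1}{a_0 + a_1 s + a_2 s^2}$, in agreement with the particular solution returned by `dsolve` above.

```python
import sympy as sym

t, s, a0, a1, a2 = sym.symbols('t s a:3')
lam = sym.symbols('lambda')

# ansatz: complex exponential input, weighted complex exponential output
x = sym.exp(s * t)
y = lam * sym.exp(s * t)

# insert the ansatz into the ODE and solve for the eigenvalue lambda
ode = sym.Eq(a0*y + a1*y.diff(t) + a2*y.diff(t, 2), x)
sym.solve(ode, lam)
```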
**Example**
The following inhomogeneous linear ODE with time-dependent coefficient is considered as an example for a **time-variant** but linear system
\begin{equation}
t \cdot \frac{d y(t)}{dt} = x(t)
\end{equation}
The output signal $y(t)$ of the system for a complex exponential signal at the input $x(t) = e^{st}$ is computed by explicit solution of the ODE. Again integration constants are discarded.
```python
ode = sym.Eq(t*y.diff(t), x)
solution = sym.dsolve(ode)
solution.subs('C1', 0)
```
Note that $\text{Ei}(\cdot)$ denotes the [exponential integral](http://docs.sympy.org/latest/modules/functions/special.html#sympy.functions.special.error_functions.Ei). The response $y(t)$ of the time-variant system is not equal to a weighted complex exponential signal $\lambda \cdot e^{s t}$. It can be concluded that complex exponentials are not eigenfunctions of this particular time-variant system.
**Example**
A final example considers the following non-linear inhomogeneous ODE with constant coefficients
\begin{equation}
\left( \frac{d y(t)}{dt} \right)^2 = x(t)
\end{equation}
as example for a **non-linear** but time-invariant system. Again, the output signal $y(t)$ of the system for a complex exponential signal at the input $x(t) = e^{st}$ is computed by explicit solution of the ODE. As before, integration constants are discarded.
```python
ode = sym.Eq(y.diff(t)**2, x)
solution = sym.dsolve(ode)
[si.subs('C1', 0) for si in solution]
```
Obviously, for this non-linear system complex exponential signals are not eigenfunctions.
## Transfer Function
The complex eigenvalue $\lambda$ constitutes the weight which a complex exponential signal $e^{st}$ (with complex frequency $s$) experiences when passing through an LTI system. It is commonly termed the [*transfer function*](https://en.wikipedia.org/wiki/Transfer_function) and is denoted by $H(s)=\lambda(s)$. Using this definition, the output signal $y(t)$ of an LTI system for a complex exponential signal at the input reads
\begin{equation}
y(t) = \mathcal{H} \{ e^{st} \} = H(s) \cdot e^{st}
\end{equation}
Note that the concept of the transfer function is directly linked to the linearity and time-invariance of a system. Only in this case are complex exponential signals eigenfunctions of the system, and only then does $H(s)$ describe the properties of an LTI system with respect to these.
Above equation can be rewritten in terms of the magnitude $| H(s) |$ and phase $\varphi(s) = \arg \{ H(s) \}$ of the complex transfer function $H(s)$
\begin{equation}
y(t) = | H(s) | \cdot e^{s t + j \varphi(s)}
\end{equation}
The magnitude $| H(s) |$ provides the frequency dependent attenuation/amplification of the eigenfunction $e^{st}$ by the system, while $\varphi(s)$ provides the introduced phase-shift.
## Link between Transfer Function and Impulse Response
In order to establish a link between the transfer function $H(s)$ and the impulse response $h(t)$, the output signal $y(t) = \mathcal{H} \{ x(t) \}$ of an LTI system with input signal $x(t)$ is considered. It is given by convolving the input signal with the impulse response
\begin{equation}
y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(t-\tau) \cdot h(\tau) \; d\tau
\end{equation}
For a complex exponential signal as input $x(t) = e^{st}$, the output of an LTI system is given as $y(t) = \mathcal{H} \{ e^{st} \} = H(s) \cdot e^{st}$. Introducing both signals into the convolution integral yields
\begin{equation}
H(s) \cdot e^{st} = \int_{-\infty}^{\infty} e^{st} e^{-s \tau} \cdot h(\tau) \; d\tau
\end{equation}
which, after canceling $e^{s t}$ (the integral does not depend on $t$), results in
\begin{equation}
H(s) = \int_{-\infty}^{\infty} h(\tau) \cdot e^{-s \tau} \; d\tau
\end{equation}
under the assumption that the integral converges.
The transfer function $H(s)$ can be computed from the impulse response $h(t)$ by integrating over the impulse response multiplied with the complex exponential function $e^{- s \tau}$. This constitutes an integral transformation, which is later introduced in more detail as [Laplace transform](https://en.wikipedia.org/wiki/Laplace_transform).
Usually the temporal variable $t$ is then used
\begin{equation}
H(s) = \int_{-\infty}^{\infty} h(t) \cdot e^{-s t} \; d t
\end{equation}
rather than $\tau$, which remained from the convolution integral.
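As a plausibility check, the following minimal sketch evaluates this integral for an assumed exemplary causal impulse response $h(t) = e^{-t}$ for $t \geq 0$ (this particular $h(t)$ is an illustrative assumption, not derived above). Restricting $s$ to positive values ensures convergence of the integral; the expected result is $H(s) = \frac{1}{s + 1}$.

```python
import sympy as sym

t = sym.symbols('t', positive=True)
s = sym.symbols('s', positive=True)  # assume Re(s) > 0 so the integral converges

h = sym.exp(-t)  # assumed exemplary causal impulse response h(t) = e^(-t) for t >= 0
H = sym.integrate(h * sym.exp(-s * t), (t, 0, sym.oo))
H
```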
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Continuous- and Discrete-Time Signals and Systems - Theory and Computational Examples*.
|
(* Title: HOL/Auth/n_g2kAbsAfter_lemma_inv__67_on_rules.thy
Author: Yongjian Li and Kaiqiang Duan, State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
Copyright 2016 State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
*)
header{*The n_g2kAbsAfter Protocol Case Study*}
theory n_g2kAbsAfter_lemma_inv__67_on_rules imports n_g2kAbsAfter_lemma_on_inv__67
begin
section{*All lemmas on causal relation between inv__67*}
lemma lemma_inv__67_on_rules:
assumes b1: "r \<in> rules N" and b2: "(f=inv__67 )"
shows "invHoldForRule s f r (invariants N)"
proof -
have c1: "(\<exists> d. d\<le>N\<and>r=n_n_Store_i1 d)\<or>
(\<exists> d. d\<le>N\<and>r=n_n_AStore_i1 d)\<or>
(r=n_n_SendReqS_j1 )\<or>
(r=n_n_SendReqEI_i1 )\<or>
(r=n_n_SendReqES_i1 )\<or>
(r=n_n_RecvReq_i1 )\<or>
(r=n_n_SendInvE_i1 )\<or>
(r=n_n_SendInvS_i1 )\<or>
(r=n_n_SendInvAck_i1 )\<or>
(r=n_n_RecvInvAck_i1 )\<or>
(r=n_n_SendGntS_i1 )\<or>
(r=n_n_SendGntE_i1 )\<or>
(r=n_n_RecvGntS_i1 )\<or>
(r=n_n_RecvGntE_i1 )\<or>
(r=n_n_ASendReqIS_j1 )\<or>
(r=n_n_ASendReqSE_j1 )\<or>
(r=n_n_ASendReqEI_i1 )\<or>
(r=n_n_ASendReqES_i1 )\<or>
(r=n_n_SendReqEE_i1 )\<or>
(r=n_n_ARecvReq_i1 )\<or>
(r=n_n_ASendInvE_i1 )\<or>
(r=n_n_ASendInvS_i1 )\<or>
(r=n_n_ASendInvAck_i1 )\<or>
(r=n_n_ARecvInvAck_i1 )\<or>
(r=n_n_ASendGntS_i1 )\<or>
(r=n_n_ASendGntE_i1 )\<or>
(r=n_n_ARecvGntS_i1 )\<or>
(r=n_n_ARecvGntE_i1 )"
apply (cut_tac b1, auto) done
moreover {
assume d1: "(\<exists> d. d\<le>N\<and>r=n_n_Store_i1 d)"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_Store_i1Vsinv__67) done
}
moreover {
assume d1: "(\<exists> d. d\<le>N\<and>r=n_n_AStore_i1 d)"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_AStore_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_SendReqS_j1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendReqS_j1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_SendReqEI_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendReqEI_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_SendReqES_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendReqES_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_RecvReq_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_RecvReq_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_SendInvE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendInvE_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_SendInvS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendInvS_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_SendInvAck_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendInvAck_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_RecvInvAck_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_RecvInvAck_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_SendGntS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendGntS_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_SendGntE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendGntE_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_RecvGntS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_RecvGntS_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_RecvGntE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_RecvGntE_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_ASendReqIS_j1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendReqIS_j1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_ASendReqSE_j1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendReqSE_j1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_ASendReqEI_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendReqEI_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_ASendReqES_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendReqES_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_SendReqEE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendReqEE_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_ARecvReq_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ARecvReq_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_ASendInvE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendInvE_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_ASendInvS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendInvS_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_ASendInvAck_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendInvAck_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_ARecvInvAck_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ARecvInvAck_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_ASendGntS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendGntS_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_ASendGntE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendGntE_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_ARecvGntS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ARecvGntS_i1Vsinv__67) done
}
moreover {
assume d1: "(r=n_n_ARecvGntE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ARecvGntE_i1Vsinv__67) done
}
ultimately show "invHoldForRule s f r (invariants N)"
by satx
qed
end
|
While on tour with the Wildhearts, Townsend formed a short-lived thrash metal project with Metallica's then-bassist Jason Newsted. The band, known as IR8, featured Newsted on vocals and bass, Townsend on guitar, and Tom Hunting of Exodus on drums. The group recorded a few songs together, although Townsend says that they never intended to go further than that. "People heard about it and thought we wanted to put out a CD, which is absolutely not true," he explains. "People took this project way too seriously." A demo tape was put together, but the material was not released until 2002, when Newsted published the IR8 vs. <unk> compilation.
|
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <string.h>
#include <gsl/gsl_integration.h>
#include <gsl/gsl_histogram.h>
#include"mpi.h"
const double PI = 3.14159;
double rz(double red);
struct galaxy {
double x, y, z, d, w;
};
int main (int argc, char **argv) {
FILE * DD;
FILE * PP;
FILE * RR;
const int Nr = 150;
const int Nm = 100;
const double RMIN = 0.0;
const double RMAX = 150.0;
const double RSTEP = 1.0;
const double MSTEP = 0.01;
const int maxNgal = 40000000;
int dummy;
char DDname[400];
char PPname[400];
char RRname[400];
int i;
int bit;
int bittot;
int *n;
sprintf(DDname, "%s", argv[1]);
sprintf(RRname, "%s", argv[2]);
sprintf(PPname, "%s", argv[3]);
struct galaxy * gald;
struct galaxy * galr;
double * DRcount;
double * DRcount_in_place;
gald = (struct galaxy*)malloc(maxNgal*sizeof(struct galaxy));
galr = (struct galaxy*)malloc(maxNgal*sizeof(struct galaxy));
DRcount = (double*)calloc(Nr*Nm,sizeof(double));
DRcount_in_place = (double*)calloc(Nr*Nm,sizeof(double));
struct galaxy * galpd, * galpr;
galpd = gald;
galpr = galr;
DD = fopen(DDname,"r");
RR = fopen(RRname,"r");
double ra, dec, red, dist;
double dd;
int Nd = 0;
double weight;
double ddummy;
int indexFKP;
double weightFKP;
double weight1, weight2, weight3, weight4;
char line [2000];
int N = 0;
double sumW = 0;
double sumW_in_place = 0;
/* loop bound is 0, so no header lines are skipped; raise the bound to skip headers */
for (int i = 0; i < 0; i ++) {
fgets (line, 2000, DD);
printf("%s\n", line);
}
while (fscanf(DD,"%lf %lf %lf\n",&ra,&dec,&red) != EOF) {
ra *= PI/180.0;
dec *= PI/180.0;
dist = rz(red);
galpd->x = dist*cos(dec)*cos(ra);
galpd->y = dist*cos(dec)*sin(ra);
galpd->z = dist*sin(dec);
galpd->d = dist;
// galpd->w = weight1 * weight2;
galpd++;
Nd++;
}
fclose(DD);
N = 0;
/* loop bound is 0, so no header lines are skipped; raise the bound to skip headers */
for (int i = 0; i < 0; i ++) {
fgets (line, 2000, RR);
printf("%s\n", line);
}
while (fscanf(RR,"%le %le %le\n",&ra,&dec,&red) != EOF) {
ra *= PI/180.0;
dec *= PI/180.0;
dist = rz(red);
galpr->x = dist*cos(dec)*cos(ra);
galpr->y = dist*cos(dec)*sin(ra);
galpr->z = dist*sin(dec);
galpr->d = dist;
// galpr->w = weight1 * weight2;
galpr++;
N++;
}
fclose(RR);
MPI_Init (&argc, &argv);
int numproc;
int totproc;
MPI_Comm_rank (MPI_COMM_WORLD, &numproc);
MPI_Comm_size (MPI_COMM_WORLD, &totproc);
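/* split the random catalog into one contiguous slice [n[rank], n[rank+1]) per rank */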
n = (int *) calloc(totproc + 1, sizeof(int));
n[0] = 1;
for (i = 1; i < totproc; i++) {
n[i] = (int)floor((double)N/totproc*i);
}
n[totproc] = N;
bit = n[numproc];
bittot = n[numproc + 1];
strcat(PPname, ".DR"); /* sprintf with the same buffer as source and destination is undefined */
double mu;
double d1, d2, d3;
double x1, y1, z1, x2, y2, z2, gd1, gd2, w1, w2;
double r2, rat;
double xb, yb, zb, db2;
double rr;
galpd = gald;
int j;
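/* data-random pair counting: for each pair, bin the separation r and
   mu = |cos(theta)| between the pair separation and the line of sight
   to the pair midpoint (anisotropic two-point correlation binning) */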
for (i = 0; i < Nd; i++) {
x1 = galpd->x; y1 = galpd->y; z1 = galpd->z; gd1 = galpd->d; //w1 = galpd->w;
galpr = galr + bit;
for (j = bit; j < bittot; j++) {
x2 = galpr->x; y2 = galpr->y; z2 = galpr->z; gd2 = galpr->d; //w2 = galpr->w;
//sumW += w1*w2;
sumW += 1.0;
d1 = x1 - x2;
d2 = y1 - y2;
d3 = z1 - z2;
r2 = d1*d1 + d2*d2 + d3*d3;
if (r2 > RMAX*RMAX || r2 < RMIN*RMIN) {
galpr++;
continue;
}
rat = gd1/gd2;
xb = x1 + x2*rat;
yb = y1 + y2*rat;
zb = z1 + z2*rat;
db2 = xb*xb + yb*yb + zb*zb;
mu = fabs((xb*d1 + yb*d2 + zb*d3)/sqrt(r2)/sqrt(db2));
rr = sqrt(r2);
int binr = (int)((rr - RMIN)/RSTEP);
int binm = (int)(mu/MSTEP);
if (binr >= 0 && binm >= 0 && binr < Nr && binm < Nm){
int ind = binr + Nr*binm;
// DRcount[ind] += w1*w2;
DRcount[ind] += 1.0;
}
galpr++;
}
galpd++;
}
free(gald);
free(galr);
free(n);
MPI_Reduce(DRcount, DRcount_in_place, Nr*Nm, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
MPI_Reduce(&sumW, &sumW_in_place, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
printf("weighted number = %le %d\n", sumW, numproc);
if (numproc == 0) {
PP = fopen(PPname,"w");
printf("%s\n", PPname);
fprintf(PP,"# weighted number: %lf\n",sumW_in_place);
fprintf(PP,"# RBINS: %d\n", Nr);
fprintf(PP,"# MBINS: %d\n", Nm);
int k, l;
for (k = 0; k < Nm; k++) {
for (l = 0; l < Nr; l++) {
fprintf(PP,"%lf ",DRcount_in_place[k*Nr + l]);
}
fprintf(PP,"\n");
}
fclose(PP);
}
free(DRcount);
free(DRcount_in_place);
MPI_Finalize ();
return 0;
}
double f (double x, void * p) {
double Om = 0.310;
double ff = 2997.92458/sqrt(Om*(1.0 + x)*(1.0 + x)*(1.0 + x) + 1.0 - Om);
return ff;
}
double rz (double red) {
gsl_integration_workspace * w = gsl_integration_workspace_alloc(1000);
double result, error;
gsl_function F;
F.function = &f;
gsl_integration_qags(&F, 0, red, 0, 1e-7, 1000, w, &result, &error);
gsl_integration_workspace_free (w);
return result;
}
|
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
"""Classes for using TensorFlow on Amazon SageMaker for inference."""
from __future__ import absolute_import
import logging
import sagemaker
from sagemaker import image_uris, s3
from sagemaker.deserializers import JSONDeserializer
from sagemaker.deprecations import removed_kwargs
from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONSerializer
class TensorFlowPredictor(Predictor):
"""A ``Predictor`` implementation for inference against TensorFlow Serving endpoints."""
def __init__(
self,
endpoint_name,
sagemaker_session=None,
serializer=JSONSerializer(),
deserializer=JSONDeserializer(),
model_name=None,
model_version=None,
**kwargs,
):
"""Initialize a ``TensorFlowPredictor``.
See :class:`~sagemaker.predictor.Predictor` for more info about parameters.
Args:
endpoint_name (str): The name of the endpoint to perform inference
on.
sagemaker_session (sagemaker.session.Session): Session object which
manages interactions with Amazon SageMaker APIs and any other
AWS services needed. If not specified, the estimator creates one
using the default AWS configuration chain.
serializer (callable): Optional. Default serializes input data to
json. Handles dicts, lists, and numpy arrays.
deserializer (callable): Optional. Default parses the response using
``json.load(...)``.
model_name (str): Optional. The name of the SavedModel model that
should handle the request. If not specified, the endpoint's
default model will handle the request.
model_version (str): Optional. The version of the SavedModel model
that should handle the request. If not specified, the latest
version of the model will be used.
"""
removed_kwargs("content_type", kwargs)
removed_kwargs("accept", kwargs)
super(TensorFlowPredictor, self).__init__(
endpoint_name,
sagemaker_session,
serializer,
deserializer,
)
attributes = []
if model_name:
attributes.append("tfs-model-name={}".format(model_name))
if model_version:
attributes.append("tfs-model-version={}".format(model_version))
self._model_attributes = ",".join(attributes) if attributes else None
def classify(self, data):
"""Placeholder docstring."""
return self._classify_or_regress(data, "classify")
def regress(self, data):
"""Placeholder docstring."""
return self._classify_or_regress(data, "regress")
def _classify_or_regress(self, data, method):
"""Placeholder docstring."""
if method not in ["classify", "regress"]:
raise ValueError("invalid TensorFlow Serving method: {}".format(method))
if self.content_type != "application/json":
raise ValueError("The {} api requires json requests.".format(method))
args = {"CustomAttributes": "tfs-method={}".format(method)}
return self.predict(data, args)
def predict(self, data, initial_args=None):
"""Placeholder docstring."""
args = dict(initial_args) if initial_args else {}
if self._model_attributes:
if "CustomAttributes" in args:
args["CustomAttributes"] += "," + self._model_attributes
else:
args["CustomAttributes"] = self._model_attributes
return super(TensorFlowPredictor, self).predict(data, args)
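# A minimal sketch of targeting a specific SavedModel with this predictor
# (the endpoint name and model identifiers below are hypothetical):
#
#     predictor = TensorFlowPredictor(
#         "my-tf-endpoint", model_name="my_model", model_version="2"
#     )
#     result = predictor.predict({"instances": [[1.0, 2.0]]})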
class TensorFlowModel(sagemaker.model.FrameworkModel):
"""A ``FrameworkModel`` implementation for inference with TensorFlow Serving."""
_framework_name = "tensorflow"
LOG_LEVEL_PARAM_NAME = "SAGEMAKER_TFS_NGINX_LOGLEVEL"
LOG_LEVEL_MAP = {
logging.DEBUG: "debug",
logging.INFO: "info",
logging.WARNING: "warn",
logging.ERROR: "error",
logging.CRITICAL: "crit",
}
LATEST_EIA_VERSION = [2, 3]
def __init__(
self,
model_data,
role,
entry_point=None,
image_uri=None,
framework_version=None,
container_log_level=None,
predictor_cls=TensorFlowPredictor,
**kwargs,
):
"""Initialize a Model.
Args:
model_data (str): The S3 location of a SageMaker model data
``.tar.gz`` file.
role (str): An AWS IAM role (either name or full ARN). The Amazon
SageMaker training jobs and APIs that create Amazon SageMaker
endpoints use this role to access training data and model
artifacts. After the endpoint is created, the inference code
might use the IAM role, if it needs to access an AWS resource.
entry_point (str): Path (absolute or relative) to the Python source
file which should be executed as the entry point to model
hosting. If ``source_dir`` is specified, then ``entry_point``
must point to a file located at the root of ``source_dir``.
image_uri (str): A Docker image URI (default: None). If not specified, a
default image for TensorFlow Serving will be used. If
                ``framework_version`` is ``None``, then ``image_uri`` is required;
                if both are ``None``, a ``ValueError`` is raised.
framework_version (str): Optional. TensorFlow Serving version you
want to use. Defaults to ``None``. Required unless ``image_uri`` is
provided.
container_log_level (int): Log level to use within the container
(default: logging.ERROR). Valid values are defined in the Python
logging module.
predictor_cls (callable[str, sagemaker.session.Session]): A function
to call to create a predictor with an endpoint name and
SageMaker ``Session``. If specified, ``deploy()`` returns the
result of invoking this function on the created endpoint name.
**kwargs: Keyword arguments passed to the superclass
:class:`~sagemaker.model.FrameworkModel` and, subsequently, its
superclass :class:`~sagemaker.model.Model`.
.. tip::
You can find additional parameters for initializing this class at
:class:`~sagemaker.model.FrameworkModel` and
:class:`~sagemaker.model.Model`.
"""
if framework_version is None and image_uri is None:
raise ValueError(
"Both framework_version and image_uri were None. "
"Either specify framework_version or specify image_uri."
)
self.framework_version = framework_version
super(TensorFlowModel, self).__init__(
model_data=model_data,
role=role,
image_uri=image_uri,
predictor_cls=predictor_cls,
entry_point=entry_point,
**kwargs,
)
self._container_log_level = container_log_level
def register(
self,
content_types,
response_types,
inference_instances,
transform_instances,
model_package_name=None,
model_package_group_name=None,
image_uri=None,
model_metrics=None,
metadata_properties=None,
marketplace_cert=False,
approval_status=None,
description=None,
drift_check_baselines=None,
):
"""Creates a model package for creating SageMaker models or listing on Marketplace.
Args:
content_types (list): The supported MIME types for the input data.
response_types (list): The supported MIME types for the output data.
inference_instances (list): A list of the instance types that are used to
generate inferences in real-time.
transform_instances (list): A list of the instance types on which a transformation
job can be run or on which an endpoint can be deployed.
model_package_name (str): Model Package name, exclusive to `model_package_group_name`,
using `model_package_name` makes the Model Package un-versioned (default: None).
model_package_group_name (str): Model Package Group name, exclusive to
`model_package_name`, using `model_package_group_name` makes the Model Package
versioned (default: None).
image_uri (str): Inference image uri for the container. Model class' self.image will
be used if it is None (default: None).
model_metrics (ModelMetrics): ModelMetrics object (default: None).
metadata_properties (MetadataProperties): MetadataProperties object (default: None).
marketplace_cert (bool): A boolean value indicating if the Model Package is certified
for AWS Marketplace (default: False).
approval_status (str): Model Approval Status, values can be "Approved", "Rejected",
or "PendingManualApproval" (default: "PendingManualApproval").
description (str): Model Package description (default: None).
drift_check_baselines (DriftCheckBaselines): DriftCheckBaselines object (default: None).
Returns:
A `sagemaker.model.ModelPackage` instance.
"""
instance_type = inference_instances[0]
self._init_sagemaker_session_if_does_not_exist(instance_type)
if image_uri:
self.image_uri = image_uri
if not self.image_uri:
self.image_uri = self.serving_image_uri(
region_name=self.sagemaker_session.boto_session.region_name,
instance_type=instance_type,
)
return super(TensorFlowModel, self).register(
content_types,
response_types,
inference_instances,
transform_instances,
model_package_name,
model_package_group_name,
image_uri,
model_metrics,
metadata_properties,
marketplace_cert,
approval_status,
description,
drift_check_baselines=drift_check_baselines,
)
def deploy(
self,
initial_instance_count,
instance_type,
serializer=None,
deserializer=None,
accelerator_type=None,
endpoint_name=None,
tags=None,
kms_key=None,
wait=True,
data_capture_config=None,
update_endpoint=None,
):
"""Deploy a Tensorflow ``Model`` to a SageMaker ``Endpoint``."""
if accelerator_type and not self._eia_supported():
msg = "The TensorFlow version %s doesn't support EIA." % self.framework_version
raise AttributeError(msg)
return super(TensorFlowModel, self).deploy(
initial_instance_count=initial_instance_count,
instance_type=instance_type,
serializer=serializer,
deserializer=deserializer,
accelerator_type=accelerator_type,
endpoint_name=endpoint_name,
tags=tags,
kms_key=kms_key,
wait=wait,
data_capture_config=data_capture_config,
update_endpoint=update_endpoint,
)
def _eia_supported(self):
"""Return true if TF version is EIA enabled"""
framework_version = [int(s) for s in self.framework_version.split(".")][:2]
return (
framework_version != [2, 1]
and framework_version != [2, 2]
and framework_version <= self.LATEST_EIA_VERSION
)
def prepare_container_def(self, instance_type=None, accelerator_type=None):
"""Prepare the container definition.
Args:
instance_type: Instance type of the container.
accelerator_type: Accelerator type, if applicable.
Returns:
A container definition for deploying a ``Model`` to an ``Endpoint``.
"""
if self.image_uri is None and instance_type is None:
raise ValueError(
"Must supply either an instance type (for choosing CPU vs GPU) or an image URI."
)
image_uri = self._get_image_uri(instance_type, accelerator_type)
env = self._get_container_env()
if self.entry_point:
key_prefix = sagemaker.fw_utils.model_code_key_prefix(
self.key_prefix, self.name, image_uri
)
bucket = self.bucket or self.sagemaker_session.default_bucket()
model_data = s3.s3_path_join("s3://", bucket, key_prefix, "model.tar.gz")
sagemaker.utils.repack_model(
self.entry_point,
self.source_dir,
self.dependencies,
self.model_data,
model_data,
self.sagemaker_session,
kms_key=self.model_kms_key,
)
else:
model_data = self.model_data
return sagemaker.container_def(image_uri, model_data, env)
def _get_container_env(self):
"""Placeholder docstring."""
if not self._container_log_level:
return self.env
if self._container_log_level not in self.LOG_LEVEL_MAP:
logging.warning("ignoring invalid container log level: %s", self._container_log_level)
return self.env
env = dict(self.env)
env[self.LOG_LEVEL_PARAM_NAME] = self.LOG_LEVEL_MAP[self._container_log_level]
return env
def _get_image_uri(self, instance_type, accelerator_type=None):
"""Placeholder docstring."""
if self.image_uri:
return self.image_uri
return image_uris.retrieve(
self._framework_name,
self.sagemaker_session.boto_region_name,
version=self.framework_version,
instance_type=instance_type,
accelerator_type=accelerator_type,
image_scope="inference",
)
def serving_image_uri(
self, region_name, instance_type, accelerator_type=None
): # pylint: disable=unused-argument
"""Create a URI for the serving image.
Args:
region_name (str): AWS region where the image is uploaded.
instance_type (str): SageMaker instance type. Used to determine device type
(cpu/gpu/family-specific optimized).
accelerator_type (str): The Elastic Inference accelerator type to
deploy to the instance for loading and making inferences to the
model (default: None). For example, 'ml.eia1.medium'.
Returns:
str: The appropriate image URI based on the given parameters.
"""
return self._get_image_uri(instance_type=instance_type, accelerator_type=accelerator_type)
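# A minimal end-to-end sketch (bucket, role, and instance type below are
# hypothetical; ``framework_version`` must correspond to a released
# TensorFlow Serving image):
#
#     model = TensorFlowModel(
#         model_data="s3://my-bucket/model.tar.gz",
#         role="MySageMakerRole",
#         framework_version="2.3",
#     )
#     predictor = model.deploy(initial_instance_count=1,
#                              instance_type="ml.c5.xlarge")
#     predictor.predict({"instances": [[1.0, 2.0]]})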
|
import numpy as np
import lxml.etree
from ..core.hoa import from_acn
from ..fileio.adm.adm import ADM
from ..fileio.adm.elements import AudioBlockFormatHoa, AudioChannelFormat, TypeDefinition, FormatDefinition
from ..fileio.adm.elements import AudioStreamFormat, AudioTrackFormat, AudioPackFormat, AudioObject, AudioTrackUID
from ..fileio.adm.chna import populate_chna_chunk
from ..fileio.adm.generate_ids import generate_ids
from ..fileio.adm.xml import adm_to_xml
from ..fileio import openBw64
from ..fileio.bw64.chunks import ChnaChunk, FormatInfoChunk
def add_args(subparsers):
subparser = subparsers.add_parser("ambix_to_bwf", help="make a BWF file from an ambix format HOA file")
subparser.add_argument("--norm", default="SN3D", help="normalization mode")
subparser.add_argument("--nfcDist", type=float, default=None, help="Near-Field Compensation Distance (float)")
subparser.add_argument("--screenRef", help="Screen Reference", action="store_true")
subparser.add_argument("--chna-only", help="use only CHNA with common definitions", action="store_true")
subparser.add_argument("input", help="input file")
subparser.add_argument("output", help="output BWF file")
subparser.set_defaults(command=ambix_to_bwf)
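# Assuming this subcommand is wired into the package's command-line front end
# (the executable name below is an assumption), an invocation would look like:
#
#     ear-utils ambix_to_bwf --norm SN3D input_ambix.wav output_bwf.wav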
def get_acn(n_channels, args):
return np.arange(n_channels)
def build_adm(acn, norm, nfcDist, screenRef):
adm = ADM()
track_uids = []
pack_format = AudioPackFormat(
audioPackFormatName="HOA",
type=TypeDefinition.HOA,
audioChannelFormats=[],
)
adm.addAudioPackFormat(pack_format)
    orders, degrees = from_acn(acn)
    for channel_no, (order, degree) in enumerate(zip(orders, degrees), 1):
block_format = AudioBlockFormatHoa(
order=int(order),
degree=int(degree),
normalization=norm,
nfcRefDist=nfcDist,
screenRef=screenRef,
)
name = "channel_{}".format(channel_no)
channel_format = AudioChannelFormat(
audioChannelFormatName=name,
type=TypeDefinition.HOA,
audioBlockFormats=[block_format],
)
adm.addAudioChannelFormat(channel_format)
pack_format.audioChannelFormats.append(channel_format)
stream_format = AudioStreamFormat(
audioStreamFormatName=name,
format=FormatDefinition.PCM,
audioChannelFormat=channel_format,
)
adm.addAudioStreamFormat(stream_format)
track_format = AudioTrackFormat(
audioTrackFormatName=name,
format=FormatDefinition.PCM,
audioStreamFormat=stream_format,
)
adm.addAudioTrackFormat(track_format)
track_uid = AudioTrackUID(
trackIndex=channel_no,
audioTrackFormat=track_format,
audioPackFormat=pack_format,
)
adm.addAudioTrackUID(track_uid)
track_uids.append(track_uid)
audio_object = AudioObject(
audioObjectName="HOA",
audioPackFormats=[pack_format],
audioTrackUIDs=track_uids,
)
adm.addAudioObject(audio_object)
return adm
def build_adm_common_defs(acns, norm):
from ..fileio.adm.common_definitions import load_common_definitions
adm = ADM()
load_common_definitions(adm)
    orders, degrees = from_acn(acns)
    pack_name = "3D_order{order}_{norm}_ACN".format(order=max(orders), norm=norm)
[pack_format] = [apf for apf in adm.audioPackFormats if apf.audioPackFormatName == pack_name]
for channel_no, acn in enumerate(acns, 1):
track_name = "PCM_{norm}_ACN_{acn}".format(norm=norm, acn=acn)
[track_format] = [tf for tf in adm.audioTrackFormats if tf.audioTrackFormatName == track_name]
adm.addAudioTrackUID(AudioTrackUID(
trackIndex=channel_no,
audioTrackFormat=track_format,
audioPackFormat=pack_format,
))
return adm
def ambix_to_bwf(args):
with openBw64(args.input) as infile:
acn = get_acn(infile.channels, args)
if args.chna_only:
assert args.nfcDist is None
assert not args.screenRef
adm = build_adm_common_defs(acn, args.norm)
else:
adm = build_adm(acn, args.norm, args.nfcDist, args.screenRef)
generate_ids(adm)
if args.chna_only:
axml = None
else:
xml = adm_to_xml(adm)
axml = lxml.etree.tostring(xml, pretty_print=True)
chna = ChnaChunk()
populate_chna_chunk(chna, adm)
fmtInfo = FormatInfoChunk(formatTag=1,
channelCount=infile.channels,
sampleRate=infile.sampleRate,
bitsPerSample=infile.bitdepth)
with openBw64(args.output, 'w', chna=chna, formatInfo=fmtInfo, axml=axml) as outfile:
while True:
samples = infile.read(1024)
if samples.shape[0] == 0: break
outfile.write(samples)
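# A minimal sketch of driving the converter programmatically instead of via
# the subcommand above (file names are hypothetical):
#
#     import argparse
#     args = argparse.Namespace(input="hoa_ambix.wav", output="hoa_bwf.wav",
#                               norm="SN3D", nfcDist=None, screenRef=False,
#                               chna_only=False)
#     ambix_to_bwf(args)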
|
\section{Linear Equations with Constant Coefficients}
\subsection{Auxiliary Equation}
A linear homogeneous DE with constant coefficients can be expressed as
\begin{equation}
a_{0} \frac{d^{n} y}{d x^{n}}+a_{1} \frac{d^{n-1} y}{d x^{n-1}}+\cdots+a_{n-1} \frac{d y}{d x}+a_{n} y=0
\end{equation}
This can be written in the form
\begin{equation}
f(D)y=0
\end{equation}
where $f(D)$ is a linear differential operator. If $m$ is a root of the
algebraic equation $f(m)=0$, then $f(D)e^{mx}=0$, so $y=e^{mx}$ is a solution
of the DE above. We call $f(m)=0$ the \emph{auxiliary equation} associated with the DE.
Since the DE is of order $n$, the auxiliary equation is of degree $n$ with
roots $m_1,\ldots,m_n$.
Assuming the roots are \textbf{real} and \textbf{distinct}, we thus have $n$
solutions $y_1=\exp(m_1x),\ldots,y_n=\exp(m_nx)$, which are then \textbf{linearly independent}.
The general solution is thus
\begin{equation}
y=c_1\exp(m_1x)+\cdots+c_n\exp(m_nx)
\end{equation}
with arbitrary constants $c_1,\ldots,c_n$.
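For example, the equation $y''-3y'+2y=0$ has auxiliary equation
$m^2-3m+2=(m-1)(m-2)=0$ with distinct real roots $m_1=1$ and $m_2=2$, so the
general solution is
\begin{equation}
y=c_1e^{x}+c_2e^{2x}
\end{equation}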
\subsubsection{Derivation}
Write $y^{(k)}$ for the $k$th derivative of $y$, so that the general form is
$$
a_ny^{(n)}+a_{n-1}y^{(n-1)}+\cdots+a_1y^\prime +a_0y=0
$$
If we take $y=e^{rx}$, then $y^{(n)}=r^ne^{rx}$; substituting into the general form and dividing through by $e^{rx}\neq 0$ gives
\begin{align*}
a_nr^ne^{rx}+a_{n-1}r^{n-1}e^{rx}+\cdots+a_1re^{rx}+a_0e^{rx}&=0\\
a_nr^n+a_{n-1}r^{n-1}+\cdots+a_1r+a_0&=0
\end{align*}
Solving for the roots $r$ in this characteristic equation helps us obtain the general solution.
\subsection{Auxiliary Equation Repeated Roots}
We need a method for obtaining $n$ linearly independent solutions when the auxiliary equation has $n$ equal roots.
Suppose the auxiliary equation $f(m)=0$ has $n$ equal roots $m_1=m_2=\cdots=m_n=b$.
The operator $f(D)$ then has a factor $(D-b)^n$, and we want to find $n$ linearly independent functions $y$ for which $(D-b)^ny=0$.
Use the substitution $y_k=x^ke^{bx}$ such that
\begin{equation}
(D-b)^n(x^ke^{bx})=0,\;k=0,1,2,\ldots,n-1
\end{equation}
The functions $y_k=x^ke^{bx}$ are linearly independent because the powers $x^0,x^1,\ldots,x^{n-1}$ are linearly independent.
So the general solution takes form
\begin{equation}
y=c_1e^{bx}+c_2xe^{bx}+\cdots+c_nx^{n-1}e^{bx}
\end{equation}
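For example, $y''-4y'+4y=0$ has auxiliary equation $(m-2)^2=0$ with double
root $b=2$, so the general solution is
\begin{equation}
y=c_1e^{2x}+c_2xe^{2x}
\end{equation}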
|
lemma Bseq_iff: "Bseq X \<longleftrightarrow> (\<exists>N. \<forall>n. norm (X n) \<le> real(Suc N))"
|
{-# OPTIONS --cubical --no-import-sorts --safe #-}
module Cubical.HITs.S3 where
open import Cubical.HITs.S3.Base public
-- open import Cubical.HITs.S3.Properties public
|
[STATEMENT]
lemma frac_eq: "frac x = x \<longleftrightarrow> 0 \<le> x \<and> x < 1"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (frac x = x) = ((0::'a) \<le> x \<and> x < (1::'a))
[PROOF STEP]
by (simp add: frac_unique_iff)
|
(* Title: HOL/Auth/n_g2kAbsAfter_lemma_inv__46_on_rules.thy
Author: Yongjian Li and Kaiqiang Duan, State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
Copyright 2016 State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
*)
header{*The n_g2kAbsAfter Protocol Case Study*}
theory n_g2kAbsAfter_lemma_inv__46_on_rules imports n_g2kAbsAfter_lemma_on_inv__46
begin
section{*All lemmas on causal relation between inv__46*}
lemma lemma_inv__46_on_rules:
assumes b1: "r \<in> rules N" and b2: "(f=inv__46 )"
shows "invHoldForRule s f r (invariants N)"
proof -
have c1: "(\<exists> d. d\<le>N\<and>r=n_n_Store_i1 d)\<or>
(\<exists> d. d\<le>N\<and>r=n_n_AStore_i1 d)\<or>
(r=n_n_SendReqS_j1 )\<or>
(r=n_n_SendReqEI_i1 )\<or>
(r=n_n_SendReqES_i1 )\<or>
(r=n_n_RecvReq_i1 )\<or>
(r=n_n_SendInvE_i1 )\<or>
(r=n_n_SendInvS_i1 )\<or>
(r=n_n_SendInvAck_i1 )\<or>
(r=n_n_RecvInvAck_i1 )\<or>
(r=n_n_SendGntS_i1 )\<or>
(r=n_n_SendGntE_i1 )\<or>
(r=n_n_RecvGntS_i1 )\<or>
(r=n_n_RecvGntE_i1 )\<or>
(r=n_n_ASendReqIS_j1 )\<or>
(r=n_n_ASendReqSE_j1 )\<or>
(r=n_n_ASendReqEI_i1 )\<or>
(r=n_n_ASendReqES_i1 )\<or>
(r=n_n_SendReqEE_i1 )\<or>
(r=n_n_ARecvReq_i1 )\<or>
(r=n_n_ASendInvE_i1 )\<or>
(r=n_n_ASendInvS_i1 )\<or>
(r=n_n_ASendInvAck_i1 )\<or>
(r=n_n_ARecvInvAck_i1 )\<or>
(r=n_n_ASendGntS_i1 )\<or>
(r=n_n_ASendGntE_i1 )\<or>
(r=n_n_ARecvGntS_i1 )\<or>
(r=n_n_ARecvGntE_i1 )"
apply (cut_tac b1, auto) done
moreover {
assume d1: "(\<exists> d. d\<le>N\<and>r=n_n_Store_i1 d)"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_Store_i1Vsinv__46) done
}
moreover {
assume d1: "(\<exists> d. d\<le>N\<and>r=n_n_AStore_i1 d)"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_AStore_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_SendReqS_j1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendReqS_j1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_SendReqEI_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendReqEI_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_SendReqES_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendReqES_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_RecvReq_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_RecvReq_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_SendInvE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendInvE_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_SendInvS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendInvS_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_SendInvAck_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendInvAck_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_RecvInvAck_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_RecvInvAck_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_SendGntS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendGntS_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_SendGntE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendGntE_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_RecvGntS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_RecvGntS_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_RecvGntE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_RecvGntE_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_ASendReqIS_j1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendReqIS_j1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_ASendReqSE_j1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendReqSE_j1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_ASendReqEI_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendReqEI_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_ASendReqES_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendReqES_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_SendReqEE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_SendReqEE_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_ARecvReq_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ARecvReq_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_ASendInvE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendInvE_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_ASendInvS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendInvS_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_ASendInvAck_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendInvAck_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_ARecvInvAck_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ARecvInvAck_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_ASendGntS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendGntS_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_ASendGntE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ASendGntE_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_ARecvGntS_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ARecvGntS_i1Vsinv__46) done
}
moreover {
assume d1: "(r=n_n_ARecvGntE_i1 )"
have "invHoldForRule s f r (invariants N)"
apply (cut_tac b2 d1, metis n_n_ARecvGntE_i1Vsinv__46) done
}
ultimately show "invHoldForRule s f r (invariants N)"
by satx
qed
end
|
(* (c) Copyright 2006-2016 Microsoft Corporation and Inria. *)
(* Distributed under the terms of CeCILL-B. *)
Require Import mathcomp.ssreflect.ssreflect.
From mathcomp
Require Import ssrfun ssrbool eqtype ssrnat seq choice path.
(******************************************************************************)
(* The Finite interface describes Types with finitely many elements, *)
(* supplying a duplicate-free sequence of all the elements. It is a subclass *)
(* of Countable and thus of Choice and Equality. As with Countable, the *)
(* interface explicitly includes these somewhat redundant superclasses to *)
(* ensure that Canonical instance inference remains consistent. Finiteness *)
(* could be stated more simply by bounding the range of the pickle function *)
(* supplied by the Countable interface, but this would yield a useless *)
(* computational interpretation due to the wasteful Peano integer encodings. *)
(* Because the Countable interface is closely tied to the Finite interface *)
(* and is not much used on its own, the Countable mixin is included inside *)
(* the Finite mixin; this makes it much easier to derive Finite variants of *)
(* interfaces, in this file for subFinType, and in the finalg library. *)
(* We define the following interfaces and structures: *)
(* finType == the packed class type of the Finite interface. *)
(* FinType T m == the packed finType class for type T and Finite mixin m. *)
(* Finite.axiom e <-> every x : T occurs exactly once in e : seq T. *)
(* FinMixin ax_e == the Finite mixin for T, encapsulating *)
(* ax_e : Finite.axiom e for some e : seq T. *)
(* UniqFinMixin uniq_e total_e == an alternative mixin constructor that uses *)
(* uniq_e : uniq e and total_e : e =i xpredT. *)
(* PcanFinMixin fK == the Finite mixin for T, given f : T -> fT and g with fT *)
(* a finType and fK : pcancel f g. *)
(* CanFinMixin fK == the Finite mixin for T, given f : T -> fT and g with fT *)
(* a finType and fK : cancel f g. *)
(* subFinType == the join interface type for subType and finType. *)
(* [finType of T for fT] == clone for T of the finType fT. *)
(* [finType of T] == clone for T of the finType inferred for T. *)
(* [subFinType of T] == a subFinType structure for T, when T already has both *)
(* finType and subType structures. *)
(* [finMixin of T by <:] == a finType structure for T, when T has a subType *)
(* structure over an existing finType. *)
(* We define or propagate the finType structure appropriately for all basic *)
(* types : unit, bool, option, prod, sum, sig and sigT. We also define a *)
(* generic type constructor for finite subtypes based on an explicit *)
(* enumeration: *)
(* seq_sub s == the subType of all x \in s, where s : seq T for some *)
(* eqType T; seq_sub s has a canonical finType instance *)
(* when T is a choiceType. *)
(* adhoc_seq_sub_choiceType s, adhoc_seq_sub_finType s == *)
(* non-canonical instances for seq_sub s, s : seq T, *)
(* which can be used when T is not a choiceType. *)
(* Bounded integers are supported by the following type and operations: *)
(* 'I_n, ordinal n == the finite subType of integers i < n, whose *)
(* enumeration is {0, ..., n.-1}. 'I_n coerces to nat, *)
(* so all the integer arithmetic functions can be used *)
(* with 'I_n. *)
(* Ordinal lt_i_n == the element of 'I_n with (nat) value i, given *)
(* lt_i_n : i < n. *)
(* nat_of_ord i == the nat value of i : 'I_n (this function is a *)
(* coercion so it is not usually displayed). *)
(* ord_enum n == the explicit increasing sequence of the i : 'I_n. *)
(* cast_ord eq_n_m i == the element j : 'I_m with the same value as i : 'I_n *)
(* given eq_n_m : n = m (indeed, i : nat and j : nat *)
(* are convertible). *)
(* widen_ord le_n_m i == a j : 'I_m with the same value as i : 'I_n, given *)
(* le_n_m : n <= m. *)
(* rev_ord i == the complement to n.-1 of i : 'I_n, such that *)
(* i + rev_ord i = n.-1. *)
(* inord k == the i : 'I_n.+1 with value k (n is inferred from the *)
(* context). *)
(* sub_ord k == the i : 'I_n.+1 with value n - k (n is inferred from *)
(* the context). *)
(* ord0 == the i : 'I_n.+1 with value 0 (n is inferred from the *)
(* context). *)
(* ord_max == the i : 'I_n.+1 with value n (n is inferred from the *)
(* context). *)
(* bump h k == k.+1 if k >= h, else k (this is a nat function). *)
(* unbump h k == k.-1 if k > h, else k (this is a nat function). *)
(* lift i j == the j' : 'I_n with value bump i j, where i : 'I_n *)
(* and j : 'I_n.-1. *)
(* unlift i j == None if i = j, else Some j', where j' : 'I_n.-1 has *)
(* value unbump i j, given i, j : 'I_n. *)
(* lshift n j == the i : 'I_(m + n) with value j : 'I_m. *)
(* rshift m k == the i : 'I_(m + n) with value m + k, k : 'I_n. *)
(* unsplit u == either lshift n j or rshift m k, depending on *)
(*                       whether u : 'I_m + 'I_n is inl j or inr k.          *)
(* split i == the u : 'I_m + 'I_n such that i = unsplit u; the *)
(* type 'I_(m + n) of i determines the split. *)
(* Finally, every type T with a finType structure supports the following *)
(* operations: *)
(* enum A == a duplicate-free list of all the x \in A, where A is a *)
(* collective predicate over T. *)
(* #|A| == the cardinal of A, i.e., the number of x \in A. *)
(* enum_val i == the i'th item of enum A, where i : 'I_(#|A|). *)
(* enum_rank x == the i : 'I_(#|T|) such that enum_val i = x. *)
(* enum_rank_in Ax0 x == some i : 'I_(#|A|) such that enum_val i = x if *)
(* x \in A, given Ax0 : x0 \in A. *)
(* A \subset B == all x \in A satisfy x \in B. *)
(* A \proper B == all x \in A satisfy x \in B but not the converse. *)
(* [disjoint A & B] == no x \in A satisfies x \in B. *)
(* image f A == the sequence of f x for all x : T such that x \in A *)
(*                       (where A is an applicative predicate), of length    *)
(*                       #|A|. The codomain of f can be any type, but        *)
(*                       image f A can only be used as a collective          *)
(*                       predicate if it is an eqType.                       *)
(* codom f == a sequence spanning the codomain of f (:= image f T). *)
(* [seq F | x : T in A] := image (fun x : T => F) A. *)
(* [seq F | x : T] := [seq F | x <- {: T}]. *)
(* [seq F | x in A], [seq F | x] == variants without casts. *)
(* iinv im_y == some x such that P x holds and f x = y, given *)
(* im_y : y \in image f P. *)
(*        invF inj_f y == the x such that f x = y, for inj_f : injective f with *)
(* f : T -> T. *)
(* dinjectiveb A f == the restriction of f : T -> R to A is injective *)
(*                       (this is a boolean predicate, R must be an eqType). *)
(* injectiveb f == f : T -> R is injective (boolean predicate). *)
(* pred0b A == no x : T satisfies x \in A. *)
(* [forall x, P] == P (in which x can appear) is true for all values of x; *)
(* x must range over a finType. *)
(* [exists x, P] == P is true for some value of x. *)
(* [forall (x | C), P] := [forall x, C ==> P]. *)
(* [forall x in A, P] := [forall (x | x \in A), P]. *)
(* [exists (x | C), P] := [exists x, C && P]. *)
(* [exists x in A, P] := [exists (x | x \in A), P]. *)
(* and typed variants [forall x : T, P], [forall (x : T | C), P], *)
(* [exists x : T, P], [exists x : T in A, P], etc. *)
(* -> The outer brackets can be omitted when nesting finitary quantifiers, *)
(* e.g., [forall i in I, forall j in J, exists a, f i j == a]. *)
(* 'forall_pP == view for [forall x, p _], for pP : reflect .. (p _). *)
(* 'exists_pP == view for [exists x, p _], for pP : reflect .. (p _). *)
(* [pick x | P] == Some x, for an x such that P holds, or None if there *)
(* is no such x. *)
(* [pick x : T] == Some x with x : T, provided T is nonempty, else None. *)
(* [pick x in A] == Some x, with x \in A, or None if A is empty. *)
(* [pick x in A | P] == Some x, with x \in A s.t. P holds, else None. *)
(*     [pick x | P & Q] := [pick x | P && Q].                                 *)
(* [pick x in A | P & Q] := [pick x in A | P && Q].                           *)
(* and (un)typed variants [pick x : T | P], [pick x : T in A], [pick x], etc. *)
(* [arg min_(i < i0 | P) M] == a value i : T minimizing M : nat, subject *)
(* to the condition P (i may appear in P and M), and *)
(* provided P holds for i0. *)
(* [arg max_(i > i0 | P) M] == a value i maximizing M subject to P and *)
(* provided P holds for i0. *)
(* [arg min_(i < i0 in A) M] == an i \in A minimizing M if i0 \in A. *)
(* [arg max_(i > i0 in A) M] == an i \in A maximizing M if i0 \in A. *)
(* [arg min_(i < i0) M] == an i : T minimizing M, given i0 : T. *)
(* [arg max_(i > i0) M] == an i : T maximizing M, given i0 : T. *)
(* These are special instances of *)
(* [arg[ord]_(i < i0 | P) F] == a value i : I, minimizing F wrt ord : rel T *)
(* such that for all j : T, ord (F i) (F j) *)
(* subject to the condition P, and provided P i0 *)
(* where I : finType, T : eqType and F : I -> T *)
(* [arg[ord]_(i < i0 in A) F] == an i \in A minimizing F wrt ord, if i0 \in A.*)
(* [arg[ord]_(i < i0) F] == an i : T minimizing F wrt ord, given i0 : T. *)
(******************************************************************************)
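(* For instance, with the instances declared later in this file in scope,    *)
(* one can write (a sketch, not part of the original summary):               *)
(*     Check [forall x : bool, x || ~~ x].   -- a decidable quantifier       *)
(*     Check [pick x : 'I_5 | 2 < x].        -- an option 'I_5               *)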
Set Implicit Arguments.
Unset Strict Implicit.
Unset Printing Implicit Defensive.
Module Finite.
Section RawMixin.
Variable T : eqType.
Definition axiom e := forall x : T, count_mem x e = 1.
Lemma uniq_enumP e : uniq e -> e =i T -> axiom e.
Proof. by move=> Ue sT x; rewrite count_uniq_mem ?sT. Qed.
Record mixin_of := Mixin {
mixin_base : Countable.mixin_of T;
mixin_enum : seq T;
_ : axiom mixin_enum
}.
End RawMixin.
Section Mixins.
Variable T : countType.
Definition EnumMixin :=
let: Countable.Pack _ (Countable.Class _ m) as cT := T
return forall e : seq cT, axiom e -> mixin_of cT in
@Mixin (EqType _ _) m.
Definition UniqMixin e Ue eT := @EnumMixin e (uniq_enumP Ue eT).
Variable n : nat.
Definition count_enum := pmap (@pickle_inv T) (iota 0 n).
Hypothesis ubT : forall x : T, pickle x < n.
Lemma count_enumP : axiom count_enum.
Proof.
apply: uniq_enumP (pmap_uniq (@pickle_invK T) (iota_uniq _ _)) _ => x.
by rewrite mem_pmap -pickleK_inv map_f // mem_iota ubT.
Qed.
Definition CountMixin := EnumMixin count_enumP.
End Mixins.
Section ClassDef.
Record class_of T := Class {
base : Choice.class_of T;
mixin : mixin_of (Equality.Pack base)
}.
Definition base2 T c := Countable.Class (@base T c) (mixin_base (mixin c)).
Local Coercion base : class_of >-> Choice.class_of.
Structure type : Type := Pack {sort; _ : class_of sort}.
Local Coercion sort : type >-> Sortclass.
Variables (T : Type) (cT : type).
Definition class := let: Pack _ c as cT' := cT return class_of cT' in c.
Definition clone c of phant_id class c := @Pack T c.
Let xT := let: Pack T _ := cT in T.
Notation xclass := (class : class_of xT).
Definition pack b0 (m0 : mixin_of (EqType T b0)) :=
fun bT b & phant_id (Choice.class bT) b =>
fun m & phant_id m0 m => Pack (@Class T b m).
Definition eqType := @Equality.Pack cT xclass.
Definition choiceType := @Choice.Pack cT xclass.
Definition countType := @Countable.Pack cT (base2 xclass).
End ClassDef.
Module Import Exports.
Coercion mixin_base : mixin_of >-> Countable.mixin_of.
Coercion base : class_of >-> Choice.class_of.
Coercion mixin : class_of >-> mixin_of.
Coercion base2 : class_of >-> Countable.class_of.
Coercion sort : type >-> Sortclass.
Coercion eqType : type >-> Equality.type.
Canonical eqType.
Coercion choiceType : type >-> Choice.type.
Canonical choiceType.
Coercion countType : type >-> Countable.type.
Canonical countType.
Notation finType := type.
Notation FinType T m := (@pack T _ m _ _ id _ id).
Notation FinMixin := EnumMixin.
Notation UniqFinMixin := UniqMixin.
Notation "[ 'finType' 'of' T 'for' cT ]" := (@clone T cT _ idfun)
(at level 0, format "[ 'finType' 'of' T 'for' cT ]") : form_scope.
Notation "[ 'finType' 'of' T ]" := (@clone T _ _ id)
(at level 0, format "[ 'finType' 'of' T ]") : form_scope.
End Exports.
Module Type EnumSig.
Parameter enum : forall cT : type, seq cT.
Axiom enumDef : enum = fun cT => mixin_enum (class cT).
End EnumSig.
Module EnumDef : EnumSig.
Definition enum cT := mixin_enum (class cT).
Definition enumDef := erefl enum.
End EnumDef.
Notation enum := EnumDef.enum.
End Finite.
Export Finite.Exports.
Canonical finEnum_unlock := Unlockable Finite.EnumDef.enumDef.
(* Workaround for the silly syntactic uniformity restriction on coercions; *)
(* this avoids a cross-dependency between finset.v and prime.v for the *)
(* definition of the \pi(A) notation. *)
Definition fin_pred_sort (T : finType) (pT : predType T) := pred_sort pT.
Identity Coercion pred_sort_of_fin : fin_pred_sort >-> pred_sort.
Definition enum_mem T (mA : mem_pred _) := filter mA (Finite.enum T).
Notation enum A := (enum_mem (mem A)).
Definition pick (T : finType) (P : pred T) := ohead (enum P).
Notation "[ 'pick' x | P ]" := (pick (fun x => P%B))
(at level 0, x ident, format "[ 'pick' x | P ]") : form_scope.
Notation "[ 'pick' x : T | P ]" := (pick (fun x : T => P%B))
(at level 0, x ident, only parsing) : form_scope.
Definition pick_true T (x : T) := true.
Notation "[ 'pick' x : T ]" := [pick x : T | pick_true x]
(at level 0, x ident, only parsing).
Notation "[ 'pick' x ]" := [pick x : _]
(at level 0, x ident, only parsing) : form_scope.
Notation "[ 'pic' 'k' x : T ]" := [pick x : T | pick_true _]
(at level 0, x ident, format "[ 'pic' 'k' x : T ]") : form_scope.
Notation "[ 'pick' x | P & Q ]" := [pick x | P && Q ]
(at level 0, x ident,
format "[ '[hv ' 'pick' x | P '/ ' & Q ] ']'") : form_scope.
Notation "[ 'pick' x : T | P & Q ]" := [pick x : T | P && Q ]
(at level 0, x ident, only parsing) : form_scope.
Notation "[ 'pick' x 'in' A ]" := [pick x | x \in A]
(at level 0, x ident, format "[ 'pick' x 'in' A ]") : form_scope.
Notation "[ 'pick' x : T 'in' A ]" := [pick x : T | x \in A]
(at level 0, x ident, only parsing) : form_scope.
Notation "[ 'pick' x 'in' A | P ]" := [pick x | x \in A & P ]
(at level 0, x ident,
format "[ '[hv ' 'pick' x 'in' A '/ ' | P ] ']'") : form_scope.
Notation "[ 'pick' x : T 'in' A | P ]" := [pick x : T | x \in A & P ]
(at level 0, x ident, only parsing) : form_scope.
Notation "[ 'pick' x 'in' A | P & Q ]" := [pick x in A | P && Q]
(at level 0, x ident, format
"[ '[hv ' 'pick' x 'in' A '/ ' | P '/ ' & Q ] ']'") : form_scope.
Notation "[ 'pick' x : T 'in' A | P & Q ]" := [pick x : T in A | P && Q]
(at level 0, x ident, only parsing) : form_scope.
(* We lock the definitions of card and subset to mitigate divergence of the *)
(* Coq term comparison algorithm. *)
Local Notation card_type := (forall T : finType, mem_pred T -> nat).
Local Notation card_def := (fun T mA => size (enum_mem mA)).
Module Type CardDefSig.
Parameter card : card_type. Axiom cardEdef : card = card_def.
End CardDefSig.
Module CardDef : CardDefSig.
Definition card : card_type := card_def. Definition cardEdef := erefl card.
End CardDef.
(* Should be Include, but for a silly restriction: can't Include at toplevel! *)
Export CardDef.
Canonical card_unlock := Unlockable cardEdef.
(* A is at level 99 to allow the notation #|G : H| in groups. *)
Notation "#| A |" := (card (mem A))
(at level 0, A at level 99, format "#| A |") : nat_scope.
Definition pred0b (T : finType) (P : pred T) := #|P| == 0.
Prenex Implicits pred0b.
Module FiniteQuant.
Variant quantified := Quantified of bool.
Delimit Scope fin_quant_scope with Q. (* Bogus, only used to declare scope. *)
Bind Scope fin_quant_scope with quantified.
Notation "F ^*" := (Quantified F) (at level 2).
Notation "F ^~" := (~~ F) (at level 2).
Section Definitions.
Variable T : finType.
Implicit Types (B : quantified) (x y : T).
Definition quant0b Bp := pred0b [pred x : T | let: F^* := Bp x x in F].
(* The first redundant argument protects the notation from Coq's K-term *)
(* display kludge; the second protects it from simpl and /=. *)
Definition ex B x y := B.
(* Binding the predicate value rather than projecting it prevents spurious *)
(* unfolding of the boolean connectives by unification. *)
Definition all B x y := let: F^* := B in F^~^*.
Definition all_in C B x y := let: F^* := B in (C ==> F)^~^*.
Definition ex_in C B x y := let: F^* := B in (C && F)^*.
End Definitions.
Notation "[ x | B ]" := (quant0b (fun x => B x)) (at level 0, x ident).
Notation "[ x : T | B ]" := (quant0b (fun x : T => B x)) (at level 0, x ident).
Module Exports.
Notation ", F" := F^* (at level 200, format ", '/ ' F") : fin_quant_scope.
Notation "[ 'forall' x B ]" := [x | all B]
(at level 0, x at level 99, B at level 200,
format "[ '[hv' 'forall' x B ] ']'") : bool_scope.
Notation "[ 'forall' x : T B ]" := [x : T | all B]
(at level 0, x at level 99, B at level 200, only parsing) : bool_scope.
Notation "[ 'forall' ( x | C ) B ]" := [x | all_in C B]
(at level 0, x at level 99, B at level 200,
format "[ '[hv' '[' 'forall' ( x '/ ' | C ) ']' B ] ']'") : bool_scope.
Notation "[ 'forall' ( x : T | C ) B ]" := [x : T | all_in C B]
(at level 0, x at level 99, B at level 200, only parsing) : bool_scope.
Notation "[ 'forall' x 'in' A B ]" := [x | all_in (x \in A) B]
(at level 0, x at level 99, B at level 200,
format "[ '[hv' '[' 'forall' x '/ ' 'in' A ']' B ] ']'") : bool_scope.
Notation "[ 'forall' x : T 'in' A B ]" := [x : T | all_in (x \in A) B]
(at level 0, x at level 99, B at level 200, only parsing) : bool_scope.
Notation ", 'forall' x B" := [x | all B]^*
(at level 200, x at level 99, B at level 200,
format ", '/ ' 'forall' x B") : fin_quant_scope.
Notation ", 'forall' x : T B" := [x : T | all B]^*
(at level 200, x at level 99, B at level 200, only parsing) : fin_quant_scope.
Notation ", 'forall' ( x | C ) B" := [x | all_in C B]^*
(at level 200, x at level 99, B at level 200,
format ", '/ ' '[' 'forall' ( x '/ ' | C ) ']' B") : fin_quant_scope.
Notation ", 'forall' ( x : T | C ) B" := [x : T | all_in C B]^*
(at level 200, x at level 99, B at level 200, only parsing) : fin_quant_scope.
Notation ", 'forall' x 'in' A B" := [x | all_in (x \in A) B]^*
(at level 200, x at level 99, B at level 200,
format ", '/ ' '[' 'forall' x '/ ' 'in' A ']' B") : bool_scope.
Notation ", 'forall' x : T 'in' A B" := [x : T | all_in (x \in A) B]^*
(at level 200, x at level 99, B at level 200, only parsing) : bool_scope.
Notation "[ 'exists' x B ]" := [x | ex B]^~
(at level 0, x at level 99, B at level 200,
format "[ '[hv' 'exists' x B ] ']'") : bool_scope.
Notation "[ 'exists' x : T B ]" := [x : T | ex B]^~
(at level 0, x at level 99, B at level 200, only parsing) : bool_scope.
Notation "[ 'exists' ( x | C ) B ]" := [x | ex_in C B]^~
(at level 0, x at level 99, B at level 200,
format "[ '[hv' '[' 'exists' ( x '/ ' | C ) ']' B ] ']'") : bool_scope.
Notation "[ 'exists' ( x : T | C ) B ]" := [x : T | ex_in C B]^~
(at level 0, x at level 99, B at level 200, only parsing) : bool_scope.
Notation "[ 'exists' x 'in' A B ]" := [x | ex_in (x \in A) B]^~
(at level 0, x at level 99, B at level 200,
format "[ '[hv' '[' 'exists' x '/ ' 'in' A ']' B ] ']'") : bool_scope.
Notation "[ 'exists' x : T 'in' A B ]" := [x : T | ex_in (x \in A) B]^~
(at level 0, x at level 99, B at level 200, only parsing) : bool_scope.
Notation ", 'exists' x B" := [x | ex B]^~^*
(at level 200, x at level 99, B at level 200,
format ", '/ ' 'exists' x B") : fin_quant_scope.
Notation ", 'exists' x : T B" := [x : T | ex B]^~^*
(at level 200, x at level 99, B at level 200, only parsing) : fin_quant_scope.
Notation ", 'exists' ( x | C ) B" := [x | ex_in C B]^~^*
(at level 200, x at level 99, B at level 200,
format ", '/ ' '[' 'exists' ( x '/ ' | C ) ']' B") : fin_quant_scope.
Notation ", 'exists' ( x : T | C ) B" := [x : T | ex_in C B]^~^*
(at level 200, x at level 99, B at level 200, only parsing) : fin_quant_scope.
Notation ", 'exists' x 'in' A B" := [x | ex_in (x \in A) B]^~^*
(at level 200, x at level 99, B at level 200,
format ", '/ ' '[' 'exists' x '/ ' 'in' A ']' B") : bool_scope.
Notation ", 'exists' x : T 'in' A B" := [x : T | ex_in (x \in A) B]^~^*
(at level 200, x at level 99, B at level 200, only parsing) : bool_scope.
End Exports.
End FiniteQuant.
Export FiniteQuant.Exports.
Definition disjoint T (A B : mem_pred _) := @pred0b T (predI A B).
Notation "[ 'disjoint' A & B ]" := (disjoint (mem A) (mem B))
(at level 0,
format "'[hv' [ 'disjoint' '/ ' A '/' & B ] ']'") : bool_scope.
Local Notation subset_type := (forall (T : finType) (A B : mem_pred T), bool).
Local Notation subset_def := (fun T A B => pred0b (predD A B)).
Module Type SubsetDefSig.
Parameter subset : subset_type. Axiom subsetEdef : subset = subset_def.
End SubsetDefSig.
Module Export SubsetDef : SubsetDefSig.
Definition subset : subset_type := subset_def.
Definition subsetEdef := erefl subset.
End SubsetDef.
Canonical subset_unlock := Unlockable subsetEdef.
Notation "A \subset B" := (subset (mem A) (mem B))
(at level 70, no associativity) : bool_scope.
Definition proper T A B := @subset T A B && ~~ subset B A.
Notation "A \proper B" := (proper (mem A) (mem B))
(at level 70, no associativity) : bool_scope.
(* image, xinv, inv, and ordinal operations will be defined later. *)
Section OpsTheory.
Variable T : finType.
Implicit Types A B C P Q : pred T.
Implicit Types x y : T.
Implicit Type s : seq T.
Lemma enumP : Finite.axiom (Finite.enum T).
Proof. by rewrite unlock; case T => ? [? []]. Qed.
Section EnumPick.
Variable P : pred T.
Lemma enumT : enum T = Finite.enum T.
Proof. exact: filter_predT. Qed.
Lemma mem_enum A : enum A =i A.
Proof. by move=> x; rewrite mem_filter andbC -has_pred1 has_count enumP. Qed.
Lemma enum_uniq : uniq (enum P).
Proof.
by apply/filter_uniq/count_mem_uniq => x; rewrite enumP -enumT mem_enum.
Qed.
Lemma enum0 : enum pred0 = Nil T. Proof. exact: filter_pred0. Qed.
Lemma enum1 x : enum (pred1 x) = [:: x].
Proof.
rewrite [enum _](all_pred1P x _ _); first by rewrite size_filter enumP.
by apply/allP=> y; rewrite mem_enum.
Qed.
Variant pick_spec : option T -> Type :=
| Pick x of P x : pick_spec (Some x)
| Nopick of P =1 xpred0 : pick_spec None.
Lemma pickP : pick_spec (pick P).
Proof.
rewrite /pick; case: (enum _) (mem_enum P) => [|x s] Pxs /=.
by right; apply: fsym.
by left; rewrite -[P _]Pxs mem_head.
Qed.
End EnumPick.
Lemma eq_enum P Q : P =i Q -> enum P = enum Q.
Proof. by move=> eqPQ; apply: eq_filter. Qed.
Lemma eq_pick P Q : P =1 Q -> pick P = pick Q.
Proof. by move=> eqPQ; rewrite /pick (eq_enum eqPQ). Qed.
Lemma cardE A : #|A| = size (enum A).
Proof. by rewrite unlock. Qed.
Lemma eq_card A B : A =i B -> #|A| = #|B|.
Proof. by move=> eqAB; rewrite !cardE (eq_enum eqAB). Qed.
Lemma eq_card_trans A B n : #|A| = n -> B =i A -> #|B| = n.
Proof. by move <-; apply: eq_card. Qed.
Lemma card0 : #|@pred0 T| = 0. Proof. by rewrite cardE enum0. Qed.
Lemma cardT : #|T| = size (enum T). Proof. by rewrite cardE. Qed.
Lemma card1 x : #|pred1 x| = 1.
Proof. by rewrite cardE enum1. Qed.
Lemma eq_card0 A : A =i pred0 -> #|A| = 0.
Proof. exact: eq_card_trans card0. Qed.
Lemma eq_cardT A : A =i predT -> #|A| = size (enum T).
Proof. exact: eq_card_trans cardT. Qed.
Lemma eq_card1 x A : A =i pred1 x -> #|A| = 1.
Proof. exact: eq_card_trans (card1 x). Qed.
Lemma cardUI A B : #|[predU A & B]| + #|[predI A & B]| = #|A| + #|B|.
Proof. by rewrite !cardE !size_filter count_predUI. Qed.
Lemma cardID B A : #|[predI A & B]| + #|[predD A & B]| = #|A|.
Proof.
rewrite -cardUI addnC [#|predI _ _|]eq_card0 => [|x] /=.
by apply: eq_card => x; rewrite !inE andbC -andb_orl orbN.
by rewrite !inE -!andbA andbC andbA andbN.
Qed.
Lemma cardC A : #|A| + #|[predC A]| = #|T|.
Proof. by rewrite !cardE !size_filter count_predC. Qed.
Lemma cardU1 x A : #|[predU1 x & A]| = (x \notin A) + #|A|.
Proof.
case Ax: (x \in A).
by apply: eq_card => y; rewrite inE /=; case: eqP => // ->.
rewrite /= -(card1 x) -cardUI addnC.
rewrite [#|predI _ _|]eq_card0 => [|y /=]; first exact: eq_card.
by rewrite !inE; case: eqP => // ->.
Qed.
Lemma card2 x y : #|pred2 x y| = (x != y).+1.
Proof. by rewrite cardU1 card1 addn1. Qed.
Lemma cardC1 x : #|predC1 x| = #|T|.-1.
Proof. by rewrite -(cardC (pred1 x)) card1. Qed.
Lemma cardD1 x A : #|A| = (x \in A) + #|[predD1 A & x]|.
Proof.
case Ax: (x \in A); last first.
by apply: eq_card => y; rewrite !inE /=; case: eqP => // ->.
rewrite /= -(card1 x) -cardUI addnC /=.
rewrite [#|predI _ _|]eq_card0 => [|y]; last by rewrite !inE; case: eqP.
by apply: eq_card => y; rewrite !inE; case: eqP => // ->.
Qed.
Lemma max_card A : #|A| <= #|T|.
Proof. by rewrite -(cardC A) leq_addr. Qed.
Lemma card_size s : #|s| <= size s.
Proof.
elim: s => [|x s IHs] /=; first by rewrite card0.
by rewrite cardU1 /=; case: (~~ _) => //; apply: leqW.
Qed.
Lemma card_uniqP s : reflect (#|s| = size s) (uniq s).
Proof.
elim: s => [|x s IHs]; first by left; apply: card0.
rewrite cardU1 /= /addn; case: {+}(x \in s) => /=.
by right=> card_Ssz; have:= card_size s; rewrite card_Ssz ltnn.
by apply: (iffP IHs) => [<-| [<-]].
Qed.
Lemma card0_eq A : #|A| = 0 -> A =i pred0.
Proof. by move=> A0 x; apply/idP => Ax; rewrite (cardD1 x) Ax in A0. Qed.
Lemma pred0P P : reflect (P =1 pred0) (pred0b P).
Proof. by apply: (iffP eqP); [apply: card0_eq | apply: eq_card0]. Qed.
Lemma pred0Pn P : reflect (exists x, P x) (~~ pred0b P).
Proof.
case: (pickP P) => [x Px | P0].
by rewrite (introN (pred0P P)) => [|P0]; [left; exists x | rewrite P0 in Px].
by rewrite -lt0n eq_card0 //; right=> [[x]]; rewrite P0.
Qed.
Lemma card_gt0P A : reflect (exists i, i \in A) (#|A| > 0).
Proof. by rewrite lt0n; apply: pred0Pn. Qed.
Lemma subsetE A B : (A \subset B) = pred0b [predD A & B].
Proof. by rewrite unlock. Qed.
Lemma subsetP A B : reflect {subset A <= B} (A \subset B).
Proof.
rewrite unlock; apply: (iffP (pred0P _)) => [AB0 x | sAB x /=].
by apply/implyP; apply/idPn; rewrite negb_imply andbC [_ && _]AB0.
by rewrite andbC -negb_imply; apply/negbF/implyP; apply: sAB.
Qed.
Lemma subsetPn A B :
reflect (exists2 x, x \in A & x \notin B) (~~ (A \subset B)).
Proof.
rewrite unlock; apply: (iffP (pred0Pn _)) => [[x] | [x Ax nBx]].
by case/andP; exists x.
by exists x; rewrite /= nBx.
Qed.
Lemma subset_leq_card A B : A \subset B -> #|A| <= #|B|.
Proof.
move=> sAB.
rewrite -(cardID A B) [#|predI _ _|](@eq_card _ A) ?leq_addr //= => x.
by rewrite !inE andbC; case Ax: (x \in A) => //; apply: subsetP Ax.
Qed.
Lemma subxx_hint (mA : mem_pred T) : subset mA mA.
Proof.
by case: mA => A; have:= introT (subsetP A A); rewrite !unlock => ->.
Qed.
Hint Resolve subxx_hint : core.
(* The parametrization by predType makes it easier to apply subxx. *)
Lemma subxx (pT : predType T) (pA : pT) : pA \subset pA.
Proof. by []. Qed.
Lemma eq_subset A1 A2 : A1 =i A2 -> subset (mem A1) =1 subset (mem A2).
Proof.
move=> eqA12 [B]; rewrite !unlock; congr (_ == 0).
by apply: eq_card => x; rewrite inE /= eqA12.
Qed.
Lemma eq_subset_r B1 B2 : B1 =i B2 ->
(@subset T)^~ (mem B1) =1 (@subset T)^~ (mem B2).
Proof.
move=> eqB12 [A]; rewrite !unlock; congr (_ == 0).
by apply: eq_card => x; rewrite !inE /= eqB12.
Qed.
Lemma eq_subxx A B : A =i B -> A \subset B.
Proof. by move/eq_subset->. Qed.
Lemma subset_predT A : A \subset T.
Proof. by apply/subsetP. Qed.
Lemma predT_subset A : T \subset A -> forall x, x \in A.
Proof. by move/subsetP=> allA x; apply: allA. Qed.
Lemma subset_pred1 A x : (pred1 x \subset A) = (x \in A).
Proof. by apply/subsetP/idP=> [-> // | Ax y /eqP-> //]; apply: eqxx. Qed.
Lemma subset_eqP A B : reflect (A =i B) ((A \subset B) && (B \subset A)).
Proof.
apply: (iffP andP) => [[sAB sBA] x| eqAB]; last by rewrite !eq_subxx.
by apply/idP/idP; apply: subsetP.
Qed.
Lemma subset_cardP A B : #|A| = #|B| -> reflect (A =i B) (A \subset B).
Proof.
move=> eqcAB; case: (subsetP A B) (subset_eqP A B) => //= sAB.
case: (subsetP B A) => [//|[]] x Bx; apply/idPn => Ax.
case/idP: (ltnn #|A|); rewrite {2}eqcAB (cardD1 x B) Bx /=.
apply: subset_leq_card; apply/subsetP=> y Ay; rewrite inE /= andbC.
by rewrite sAB //; apply/eqP => eqyx; rewrite -eqyx Ay in Ax.
Qed.
Lemma subset_leqif_card A B : A \subset B -> #|A| <= #|B| ?= iff (B \subset A).
Proof.
move=> sAB; split; [exact: subset_leq_card | apply/eqP/idP].
by move/subset_cardP=> sABP; rewrite (eq_subset_r (sABP sAB)).
by move=> sBA; apply: eq_card; apply/subset_eqP; rewrite sAB.
Qed.
Lemma subset_trans A B C : A \subset B -> B \subset C -> A \subset C.
Proof.
by move/subsetP=> sAB /subsetP=> sBC; apply/subsetP=> x /sAB; apply: sBC.
Qed.
Lemma subset_all s A : (s \subset A) = all (mem A) s.
Proof. exact: (sameP (subsetP _ _) allP). Qed.
Lemma properE A B : A \proper B = (A \subset B) && ~~(B \subset A).
Proof. by []. Qed.
Lemma properP A B :
reflect (A \subset B /\ (exists2 x, x \in B & x \notin A)) (A \proper B).
Proof.
by rewrite properE; apply: (iffP andP) => [] [-> /subsetPn].
Qed.
Lemma proper_sub A B : A \proper B -> A \subset B.
Proof. by case/andP. Qed.
Lemma proper_subn A B : A \proper B -> ~~ (B \subset A).
Proof. by case/andP. Qed.
Lemma proper_trans A B C : A \proper B -> B \proper C -> A \proper C.
Proof.
case/properP=> sAB [x Bx nAx] /properP[sBC [y Cy nBy]].
rewrite properE (subset_trans sAB) //=; apply/subsetPn; exists y => //.
by apply: contra nBy; apply: subsetP.
Qed.
Lemma proper_sub_trans A B C : A \proper B -> B \subset C -> A \proper C.
Proof.
case/properP=> sAB [x Bx nAx] sBC; rewrite properE (subset_trans sAB) //.
by apply/subsetPn; exists x; rewrite ?(subsetP _ _ sBC).
Qed.
Lemma sub_proper_trans A B C : A \subset B -> B \proper C -> A \proper C.
Proof.
move=> sAB /properP[sBC [x Cx nBx]]; rewrite properE (subset_trans sAB) //.
by apply/subsetPn; exists x => //; apply: contra nBx; apply: subsetP.
Qed.
Lemma proper_card A B : A \proper B -> #|A| < #|B|.
Proof.
by case/andP=> sAB nsBA; rewrite ltn_neqAle !(subset_leqif_card sAB) andbT.
Qed.
Lemma proper_irrefl A : ~~ (A \proper A).
Proof. by rewrite properE subxx. Qed.
Lemma properxx A : (A \proper A) = false.
Proof. by rewrite properE subxx. Qed.
Lemma eq_proper A B : A =i B -> proper (mem A) =1 proper (mem B).
Proof.
move=> eAB [C]; congr (_ && _); first exact: (eq_subset eAB).
by rewrite (eq_subset_r eAB).
Qed.
Lemma eq_proper_r A B : A =i B ->
(@proper T)^~ (mem A) =1 (@proper T)^~ (mem B).
Proof.
move=> eAB [C]; congr (_ && _); first exact: (eq_subset_r eAB).
by rewrite (eq_subset eAB).
Qed.
Lemma disjoint_sym A B : [disjoint A & B] = [disjoint B & A].
Proof. by congr (_ == 0); apply: eq_card => x; apply: andbC. Qed.
Lemma eq_disjoint A1 A2 : A1 =i A2 -> disjoint (mem A1) =1 disjoint (mem A2).
Proof.
by move=> eqA12 [B]; congr (_ == 0); apply: eq_card => x; rewrite !inE eqA12.
Qed.
Lemma eq_disjoint_r B1 B2 : B1 =i B2 ->
(@disjoint T)^~ (mem B1) =1 (@disjoint T)^~ (mem B2).
Proof.
by move=> eqB12 [A]; congr (_ == 0); apply: eq_card => x; rewrite !inE eqB12.
Qed.
Lemma subset_disjoint A B : (A \subset B) = [disjoint A & [predC B]].
Proof. by rewrite disjoint_sym unlock. Qed.
Lemma disjoint_subset A B : [disjoint A & B] = (A \subset [predC B]).
Proof.
by rewrite subset_disjoint; apply: eq_disjoint_r => x; rewrite !inE /= negbK.
Qed.
Lemma disjoint_trans A B C :
A \subset B -> [disjoint B & C] -> [disjoint A & C].
Proof. by rewrite 2!disjoint_subset; apply: subset_trans. Qed.
Lemma disjoint0 A : [disjoint pred0 & A].
Proof. exact/pred0P. Qed.
Lemma eq_disjoint0 A B : A =i pred0 -> [disjoint A & B].
Proof. by move/eq_disjoint->; apply: disjoint0. Qed.
Lemma disjoint1 x A : [disjoint pred1 x & A] = (x \notin A).
Proof.
apply/negbRL/(sameP (pred0Pn _)).
apply: introP => [Ax | notAx [_ /andP[/eqP->]]]; last exact: negP.
by exists x; rewrite !inE eqxx.
Qed.
Lemma eq_disjoint1 x A B :
A =i pred1 x -> [disjoint A & B] = (x \notin B).
Proof. by move/eq_disjoint->; apply: disjoint1. Qed.
Lemma disjointU A B C :
[disjoint predU A B & C] = [disjoint A & C] && [disjoint B & C].
Proof.
case: [disjoint A & C] / (pred0P (xpredI A C)) => [A0 | nA0] /=.
by congr (_ == 0); apply: eq_card => x; rewrite [x \in _]andb_orl A0.
apply/pred0P=> nABC; case: nA0 => x; apply/idPn=> /=; move/(_ x): nABC.
by rewrite [_ x]andb_orl; case/norP.
Qed.
Lemma disjointU1 x A B :
[disjoint predU1 x A & B] = (x \notin B) && [disjoint A & B].
Proof. by rewrite disjointU disjoint1. Qed.
Lemma disjoint_cons x s B :
[disjoint x :: s & B] = (x \notin B) && [disjoint s & B].
Proof. exact: disjointU1. Qed.
Lemma disjoint_has s A : [disjoint s & A] = ~~ has (mem A) s.
Proof.
rewrite -(@eq_has _ (mem (enum A))) => [|x]; last exact: mem_enum.
rewrite has_sym has_filter -filter_predI -has_filter has_count -eqn0Ngt.
by rewrite -size_filter /disjoint /pred0b unlock.
Qed.
Lemma disjoint_cat s1 s2 A :
[disjoint s1 ++ s2 & A] = [disjoint s1 & A] && [disjoint s2 & A].
Proof. by rewrite !disjoint_has has_cat negb_or. Qed.
End OpsTheory.
Hint Resolve subxx_hint : core.
Arguments pred0P {T P}.
Arguments pred0Pn {T P}.
Arguments subsetP {T A B}.
Arguments subsetPn {T A B}.
Arguments subset_eqP {T A B}.
Arguments card_uniqP {T s}.
Arguments properP {T A B}.
(**********************************************************************)
(* *)
(* Boolean quantifiers for finType *)
(* *)
(**********************************************************************)
Section QuantifierCombinators.
Variables (T : finType) (P : pred T) (PP : T -> Prop).
Hypothesis viewP : forall x, reflect (PP x) (P x).
Lemma existsPP : reflect (exists x, PP x) [exists x, P x].
Proof. by apply: (iffP pred0Pn) => -[x /viewP]; exists x. Qed.
Lemma forallPP : reflect (forall x, PP x) [forall x, P x].
Proof. by apply: (iffP pred0P) => /= allP x; have /viewP//=-> := allP x. Qed.
End QuantifierCombinators.
Notation "'exists_ view" := (existsPP (fun _ => view))
(at level 4, right associativity, format "''exists_' view").
Notation "'forall_ view" := (forallPP (fun _ => view))
(at level 4, right associativity, format "''forall_' view").
Section Quantifiers.
Variables (T : finType) (rT : T -> eqType).
Implicit Type (D P : pred T) (f : forall x, rT x).
Lemma forallP P : reflect (forall x, P x) [forall x, P x].
Proof. exact: 'forall_idP. Qed.
Lemma eqfunP f1 f2 : reflect (forall x, f1 x = f2 x) [forall x, f1 x == f2 x].
Proof. exact: 'forall_eqP. Qed.
Lemma forall_inP D P : reflect (forall x, D x -> P x) [forall (x | D x), P x].
Proof. exact: 'forall_implyP. Qed.
Lemma forall_inPP D P PP : (forall x, reflect (PP x) (P x)) ->
reflect (forall x, D x -> PP x) [forall (x | D x), P x].
Proof. by move=> vP; apply: (iffP (forall_inP _ _)) => /(_ _ _) /vP. Qed.
Lemma eqfun_inP D f1 f2 :
reflect {in D, forall x, f1 x = f2 x} [forall (x | x \in D), f1 x == f2 x].
Proof. exact: (forall_inPP _ (fun=> eqP)). Qed.
Lemma existsP P : reflect (exists x, P x) [exists x, P x].
Proof. exact: 'exists_idP. Qed.
Lemma exists_eqP f1 f2 :
reflect (exists x, f1 x = f2 x) [exists x, f1 x == f2 x].
Proof. exact: 'exists_eqP. Qed.
Lemma exists_inP D P : reflect (exists2 x, D x & P x) [exists (x | D x), P x].
Proof. by apply: (iffP 'exists_andP) => [[x []] | [x]]; exists x. Qed.
Lemma exists_inPP D P PP : (forall x, reflect (PP x) (P x)) ->
reflect (exists2 x, D x & PP x) [exists (x | D x), P x].
Proof. by move=> vP; apply: (iffP (exists_inP _ _)) => -[x?/vP]; exists x. Qed.
Lemma exists_eq_inP D f1 f2 :
reflect (exists2 x, D x & f1 x = f2 x) [exists (x | D x), f1 x == f2 x].
Proof. exact: (exists_inPP _ (fun=> eqP)). Qed.
Lemma eq_existsb P1 P2 : P1 =1 P2 -> [exists x, P1 x] = [exists x, P2 x].
Proof. by move=> eqP12; congr (_ != 0); apply: eq_card. Qed.
Lemma eq_existsb_in D P1 P2 :
(forall x, D x -> P1 x = P2 x) ->
[exists (x | D x), P1 x] = [exists (x | D x), P2 x].
Proof. by move=> eqP12; apply: eq_existsb => x; apply: andb_id2l => /eqP12. Qed.
Lemma eq_forallb P1 P2 : P1 =1 P2 -> [forall x, P1 x] = [forall x, P2 x].
Proof. by move=> eqP12; apply/negb_inj/eq_existsb=> /= x; rewrite eqP12. Qed.
Lemma eq_forallb_in D P1 P2 :
(forall x, D x -> P1 x = P2 x) ->
[forall (x | D x), P1 x] = [forall (x | D x), P2 x].
Proof.
by move=> eqP12; apply: eq_forallb => i; case Di: (D i); rewrite // eqP12.
Qed.
Lemma negb_forall P : ~~ [forall x, P x] = [exists x, ~~ P x].
Proof. by []. Qed.
Lemma negb_forall_in D P :
~~ [forall (x | D x), P x] = [exists (x | D x), ~~ P x].
Proof. by apply: eq_existsb => x; rewrite negb_imply. Qed.
Lemma negb_exists P : ~~ [exists x, P x] = [forall x, ~~ P x].
Proof. by apply/negbLR/esym/eq_existsb=> x; apply: negbK. Qed.
Lemma negb_exists_in D P :
~~ [exists (x | D x), P x] = [forall (x | D x), ~~ P x].
Proof. by rewrite negb_exists; apply/eq_forallb => x; rewrite [~~ _]fun_if. Qed.
End Quantifiers.
Arguments forallP {T P}.
Arguments eqfunP {T rT f1 f2}.
Arguments forall_inP {T D P}.
Arguments eqfun_inP {T rT D f1 f2}.
Arguments existsP {T P}.
Arguments exists_eqP {T rT f1 f2}.
Arguments exists_inP {T D P}.
Arguments exists_eq_inP {T rT D f1 f2}.
Notation "'exists_in_ view" := (exists_inPP _ (fun _ => view))
(at level 4, right associativity, format "''exists_in_' view").
Notation "'forall_in_ view" := (forall_inPP _ (fun _ => view))
(at level 4, right associativity, format "''forall_in_' view").
Section Extrema.
Variant extremum_spec {T : eqType} (ord : rel T) {I : finType}
(P : pred I) (F : I -> T) : I -> Type :=
ExtremumSpec (i : I) of P i & (forall j : I, P j -> ord (F i) (F j)) :
extremum_spec ord P F i.
Let arg_pred {T : eqType} ord {I : finType} (P : pred I) (F : I -> T) :=
[pred i | P i & [forall (j | P j), ord (F i) (F j)]].
Section Extremum.
Context {T : eqType} {I : finType} (ord : rel T).
Context (i0 : I) (P : pred I) (F : I -> T).
Hypothesis ord_refl : reflexive ord.
Hypothesis ord_trans : transitive ord.
Hypothesis ord_total : total ord.
Definition extremum := odflt i0 (pick (arg_pred ord P F)).
Hypothesis Pi0 : P i0.
Lemma extremumP : extremum_spec ord P F extremum.
Proof.
rewrite /extremum; case: pickP => [i /andP[Pi /'forall_implyP/= min_i] | no_i].
by split=> // j; apply/implyP.
have := sort_sorted ord_total [seq F i | i <- enum P].
set s := sort _ _ => ss; have s_gt0 : size s > 0
by rewrite size_sort size_map -cardE; apply/card_gt0P; exists i0.
pose t0 := nth (F i0) s 0; have: t0 \in s by rewrite mem_nth.
rewrite mem_sort => /mapP/sig2_eqW[it0]; rewrite mem_enum => it0P def_t0.
have /negP[/=] := no_i it0; rewrite [P _]it0P/=; apply/'forall_implyP=> j Pj.
have /(nthP (F i0))[k g_lt <-] : F j \in s by rewrite mem_sort map_f ?mem_enum.
by rewrite -def_t0 sorted_le_nth.
Qed.
End Extremum.
Notation "[ 'arg[' ord ]_( i < i0 | P ) F ]" :=
(extremum ord i0 (fun i => P%B) (fun i => F))
(at level 0, ord, i, i0 at level 10,
format "[ 'arg[' ord ]_( i < i0 | P ) F ]") : form_scope.
Notation "[ 'arg[' ord ]_( i < i0 'in' A ) F ]" :=
[arg[ord]_(i < i0 | i \in A) F]
(at level 0, ord, i, i0 at level 10,
format "[ 'arg[' ord ]_( i < i0 'in' A ) F ]") : form_scope.
Notation "[ 'arg[' ord ]_( i < i0 ) F ]" := [arg[ord]_(i < i0 | true) F]
(at level 0, ord, i, i0 at level 10,
format "[ 'arg[' ord ]_( i < i0 ) F ]") : form_scope.
Section ArgMinMax.
Variables (I : finType) (i0 : I) (P : pred I) (F : I -> nat) (Pi0 : P i0).
Definition arg_min := extremum leq i0 P F.
Definition arg_max := extremum geq i0 P F.
Lemma arg_minP : extremum_spec leq P F arg_min.
Proof. by apply: extremumP => //; [apply: leq_trans|apply: leq_total]. Qed.
Lemma arg_maxP : extremum_spec geq P F arg_max.
Proof.
apply: extremumP => //; first exact: leqnn.
by move=> n m p mn np; apply: leq_trans mn.
by move=> ??; apply: leq_total.
Qed.
End ArgMinMax.
End Extrema.
Notation "[ 'arg' 'min_' ( i < i0 | P ) F ]" :=
(arg_min i0 (fun i => P%B) (fun i => F))
(at level 0, i, i0 at level 10,
format "[ 'arg' 'min_' ( i < i0 | P ) F ]") : form_scope.
Notation "[ 'arg' 'min_' ( i < i0 'in' A ) F ]" :=
[arg min_(i < i0 | i \in A) F]
(at level 0, i, i0 at level 10,
format "[ 'arg' 'min_' ( i < i0 'in' A ) F ]") : form_scope.
Notation "[ 'arg' 'min_' ( i < i0 ) F ]" := [arg min_(i < i0 | true) F]
(at level 0, i, i0 at level 10,
format "[ 'arg' 'min_' ( i < i0 ) F ]") : form_scope.
Notation "[ 'arg' 'max_' ( i > i0 | P ) F ]" :=
(arg_max i0 (fun i => P%B) (fun i => F))
(at level 0, i, i0 at level 10,
format "[ 'arg' 'max_' ( i > i0 | P ) F ]") : form_scope.
Notation "[ 'arg' 'max_' ( i > i0 'in' A ) F ]" :=
[arg max_(i > i0 | i \in A) F]
(at level 0, i, i0 at level 10,
format "[ 'arg' 'max_' ( i > i0 'in' A ) F ]") : form_scope.
Notation "[ 'arg' 'max_' ( i > i0 ) F ]" := [arg max_(i > i0 | true) F]
(at level 0, i, i0 at level 10,
format "[ 'arg' 'max_' ( i > i0 ) F ]") : form_scope.
(**********************************************************************)
(* *)
(* Boolean injectivity test for functions with a finType domain *)
(* *)
(**********************************************************************)
Section Injectiveb.
Variables (aT : finType) (rT : eqType) (f : aT -> rT).
Implicit Type D : pred aT.
Definition dinjectiveb D := uniq (map f (enum D)).
Definition injectiveb := dinjectiveb aT.
Lemma dinjectivePn D :
reflect (exists2 x, x \in D & exists2 y, y \in [predD1 D & x] & f x = f y)
(~~ dinjectiveb D).
Proof.
apply: (iffP idP) => [injf | [x Dx [y Dxy eqfxy]]]; last first.
move: Dx; rewrite -(mem_enum D) => /rot_to[i E defE].
rewrite /dinjectiveb -(rot_uniq i) -map_rot defE /=; apply/nandP; left.
rewrite inE /= -(mem_enum D) -(mem_rot i) defE inE in Dxy.
rewrite andb_orr andbC andbN in Dxy.
by rewrite eqfxy map_f //; case/andP: Dxy.
pose p := [pred x in D | [exists (y | y \in [predD1 D & x]), f x == f y]].
case: (pickP p) => [x /= /andP[Dx /exists_inP[y Dxy /eqP eqfxy]] | no_p].
by exists x; last exists y.
rewrite /dinjectiveb map_inj_in_uniq ?enum_uniq // in injf => x y Dx Dy eqfxy.
apply: contraNeq (negbT (no_p x)) => ne_xy /=; rewrite -mem_enum Dx.
by apply/existsP; exists y; rewrite /= !inE eq_sym ne_xy -mem_enum Dy eqfxy /=.
Qed.
Lemma dinjectiveP D : reflect {in D &, injective f} (dinjectiveb D).
Proof.
rewrite -[dinjectiveb D]negbK.
case: dinjectivePn=> [noinjf | injf]; constructor.
case: noinjf => x Dx [y /andP[neqxy /= Dy] eqfxy] injf.
by case/eqP: neqxy; apply: injf.
move=> x y Dx Dy /= eqfxy; apply/eqP; apply/idPn=> nxy; case: injf.
by exists x => //; exists y => //=; rewrite inE /= eq_sym nxy.
Qed.
Lemma injectivePn :
reflect (exists x, exists2 y, x != y & f x = f y) (~~ injectiveb).
Proof.
apply: (iffP (dinjectivePn _)) => [[x _ [y nxy eqfxy]] | [x [y nxy eqfxy]]];
by exists x => //; exists y => //; rewrite inE /= andbT eq_sym in nxy *.
Qed.
Lemma injectiveP : reflect (injective f) injectiveb.
Proof. by apply: (iffP (dinjectiveP _)) => injf x y => [|_ _]; apply: injf. Qed.
End Injectiveb.
Definition image_mem T T' f mA : seq T' := map f (@enum_mem T mA).
Notation image f A := (image_mem f (mem A)).
Notation "[ 'seq' F | x 'in' A ]" := (image (fun x => F) A)
(at level 0, F at level 99, x ident,
format "'[hv' [ 'seq' F '/ ' | x 'in' A ] ']'") : seq_scope.
Notation "[ 'seq' F | x : T 'in' A ]" := (image (fun x : T => F) A)
(at level 0, F at level 99, x ident, only parsing) : seq_scope.
Notation "[ 'seq' F | x : T ]" :=
[seq F | x : T in sort_of_simpl_pred (@pred_of_argType T)]
(at level 0, F at level 99, x ident,
format "'[hv' [ 'seq' F '/ ' | x : T ] ']'") : seq_scope.
Notation "[ 'seq' F , x ]" := [seq F | x : _ ]
(at level 0, F at level 99, x ident, only parsing) : seq_scope.
Definition codom T T' f := @image_mem T T' f (mem T).
Section Image.
Variable T : finType.
Implicit Type A : pred T.
Section SizeImage.
Variables (T' : Type) (f : T -> T').
Lemma size_image A : size (image f A) = #|A|.
Proof. by rewrite size_map -cardE. Qed.
Lemma size_codom : size (codom f) = #|T|.
Proof. exact: size_image. Qed.
Lemma codomE : codom f = map f (enum T).
Proof. by []. Qed.
End SizeImage.
Variables (T' : eqType) (f : T -> T').
Lemma imageP A y : reflect (exists2 x, x \in A & y = f x) (y \in image f A).
Proof.
by apply: (iffP mapP) => [] [x Ax y_fx]; exists x; rewrite // mem_enum in Ax *.
Qed.
Lemma codomP y : reflect (exists x, y = f x) (y \in codom f).
Proof. by apply: (iffP (imageP _ y)) => [][x]; exists x. Qed.
Remark iinv_proof A y : y \in image f A -> {x | x \in A & f x = y}.
Proof.
move=> fy; pose b x := A x && (f x == y).
case: (pickP b) => [x /andP[Ax /eqP] | nfy]; first by exists x.
by case/negP: fy => /imageP[x Ax fx_y]; case/andP: (nfy x); rewrite fx_y.
Qed.
Definition iinv A y fAy := s2val (@iinv_proof A y fAy).
Lemma f_iinv A y fAy : f (@iinv A y fAy) = y.
Proof. exact: s2valP' (iinv_proof fAy). Qed.
Lemma mem_iinv A y fAy : @iinv A y fAy \in A.
Proof. exact: s2valP (iinv_proof fAy). Qed.
Lemma in_iinv_f A : {in A &, injective f} ->
forall x fAfx, x \in A -> @iinv A (f x) fAfx = x.
Proof.
by move=> injf x fAfx Ax; apply: injf => //; [apply: mem_iinv | apply: f_iinv].
Qed.
Lemma preim_iinv A B y fAy : preim f B (@iinv A y fAy) = B y.
Proof. by rewrite /= f_iinv. Qed.
Lemma image_f A x : x \in A -> f x \in image f A.
Proof. by move=> Ax; apply/imageP; exists x. Qed.
Lemma codom_f x : f x \in codom f.
Proof. by apply: image_f. Qed.
Lemma image_codom A : {subset image f A <= codom f}.
Proof. by move=> _ /imageP[x _ ->]; apply: codom_f. Qed.
Lemma image_pred0 : image f pred0 =i pred0.
Proof. by move=> x; rewrite /image_mem /= enum0. Qed.
Section Injective.
Hypothesis injf : injective f.
Lemma mem_image A x : (f x \in image f A) = (x \in A).
Proof. by rewrite mem_map ?mem_enum. Qed.
Lemma pre_image A : [preim f of image f A] =i A.
Proof. by move=> x; rewrite inE /= mem_image. Qed.
Lemma image_iinv A y (fTy : y \in codom f) :
(y \in image f A) = (iinv fTy \in A).
Proof. by rewrite -mem_image ?f_iinv. Qed.
Lemma iinv_f x fTfx : @iinv T (f x) fTfx = x.
Proof. by apply: in_iinv_f; first apply: in2W. Qed.
Lemma image_pre (B : pred T') : image f [preim f of B] =i [predI B & codom f].
Proof. by move=> y; rewrite /image_mem -filter_map /= mem_filter -enumT. Qed.
Lemma bij_on_codom (x0 : T) : {on [pred y in codom f], bijective f}.
Proof.
pose g y := iinv (valP (insigd (codom_f x0) y)).
by exists g => [x fAfx | y fAy]; first apply: injf; rewrite f_iinv insubdK.
Qed.
Lemma bij_on_image A (x0 : T) : {on [pred y in image f A], bijective f}.
Proof. exact: subon_bij (@image_codom A) (bij_on_codom x0). Qed.
End Injective.
Fixpoint preim_seq s :=
if s is y :: s' then
(if pick (preim f (pred1 y)) is Some x then cons x else id) (preim_seq s')
else [::].
Lemma map_preim (s : seq T') : {subset s <= codom f} -> map f (preim_seq s) = s.
Proof.
elim: s => //= y s IHs; case: pickP => [x /eqP fx_y | nfTy] fTs.
by rewrite /= fx_y IHs // => z s_z; apply: fTs; apply: predU1r.
by case/imageP: (fTs y (mem_head y s)) => x _ fx_y; case/eqP: (nfTy x).
Qed.
End Image.
Prenex Implicits codom iinv.
Arguments imageP {T T' f A y}.
Arguments codomP {T T' f y}.
Lemma flatten_imageP (aT : finType) (rT : eqType) A (P : pred aT) (y : rT) :
reflect (exists2 x, x \in P & y \in A x) (y \in flatten [seq A x | x in P]).
Proof.
by apply: (iffP flatten_mapP) => [][x Px]; exists x; rewrite ?mem_enum in Px *.
Qed.
Arguments flatten_imageP {aT rT A P y}.
Section CardFunImage.
Variables (T T' : finType) (f : T -> T').
Implicit Type A : pred T.
Lemma leq_image_card A : #|image f A| <= #|A|.
Proof. by rewrite (cardE A) -(size_map f) card_size. Qed.
Lemma card_in_image A : {in A &, injective f} -> #|image f A| = #|A|.
Proof.
move=> injf; rewrite (cardE A) -(size_map f); apply/card_uniqP.
by rewrite map_inj_in_uniq ?enum_uniq // => x y; rewrite !mem_enum; apply: injf.
Qed.
Lemma image_injP A : reflect {in A &, injective f} (#|image f A| == #|A|).
Proof.
apply: (iffP eqP) => [eqfA |]; last exact: card_in_image.
by apply/dinjectiveP; apply/card_uniqP; rewrite size_map -cardE.
Qed.
Hypothesis injf : injective f.
Lemma card_image A : #|image f A| = #|A|.
Proof. by apply: card_in_image; apply: in2W. Qed.
Lemma card_codom : #|codom f| = #|T|.
Proof. exact: card_image. Qed.
Lemma card_preim (B : pred T') : #|[preim f of B]| = #|[predI codom f & B]|.
Proof.
rewrite -card_image /=; apply: eq_card => y.
by rewrite [y \in _]image_pre !inE andbC.
Qed.
Hypothesis card_range : #|T| = #|T'|.
Lemma inj_card_onto y : y \in codom f.
Proof. by move: y; apply/subset_cardP; rewrite ?card_codom ?subset_predT. Qed.
Lemma inj_card_bij : bijective f.
Proof.
by exists (fun y => iinv (inj_card_onto y)) => y; rewrite ?iinv_f ?f_iinv.
Qed.
End CardFunImage.
Arguments image_injP {T T' f A}.
Section FinCancel.
Variables (T : finType) (f g : T -> T).
Section Inv.
Hypothesis injf : injective f.
Lemma injF_onto y : y \in codom f. Proof. exact: inj_card_onto. Qed.
Definition invF y := iinv (injF_onto y).
Lemma invF_f : cancel f invF. Proof. by move=> x; apply: iinv_f. Qed.
Lemma f_invF : cancel invF f. Proof. by move=> y; apply: f_iinv. Qed.
Lemma injF_bij : bijective f. Proof. exact: inj_card_bij. Qed.
End Inv.
Hypothesis fK : cancel f g.
Lemma canF_sym : cancel g f.
Proof. exact/(bij_can_sym (injF_bij (can_inj fK))). Qed.
Lemma canF_LR x y : x = g y -> f x = y.
Proof. exact: canLR canF_sym. Qed.
Lemma canF_RL x y : g x = y -> x = f y.
Proof. exact: canRL canF_sym. Qed.
Lemma canF_eq x y : (f x == y) = (x == g y).
Proof. exact: (can2_eq fK canF_sym). Qed.
Lemma canF_invF : g =1 invF (can_inj fK).
Proof. by move=> y; apply: (canLR fK); rewrite f_invF. Qed.
End FinCancel.
Section EqImage.
Variables (T : finType) (T' : Type).
Lemma eq_image (A B : pred T) (f g : T -> T') :
A =i B -> f =1 g -> image f A = image g B.
Proof.
by move=> eqAB eqfg; rewrite /image_mem (eq_enum eqAB) (eq_map eqfg).
Qed.
Lemma eq_codom (f g : T -> T') : f =1 g -> codom f = codom g.
Proof. exact: eq_image. Qed.
Lemma eq_invF f g injf injg : f =1 g -> @invF T f injf =1 @invF T g injg.
Proof.
by move=> eq_fg x; apply: (canLR (invF_f injf)); rewrite eq_fg f_invF.
Qed.
End EqImage.
(* Standard finTypes *)
Lemma unit_enumP : Finite.axiom [::tt]. Proof. by case. Qed.
Definition unit_finMixin := Eval hnf in FinMixin unit_enumP.
Canonical unit_finType := Eval hnf in FinType unit unit_finMixin.
Lemma card_unit : #|{: unit}| = 1. Proof. by rewrite cardT enumT unlock. Qed.
Lemma bool_enumP : Finite.axiom [:: true; false]. Proof. by case. Qed.
Definition bool_finMixin := Eval hnf in FinMixin bool_enumP.
Canonical bool_finType := Eval hnf in FinType bool bool_finMixin.
Lemma card_bool : #|{: bool}| = 2. Proof. by rewrite cardT enumT unlock. Qed.
Local Notation enumF T := (Finite.enum T).
Section OptionFinType.
Variable T : finType.
Definition option_enum := None :: map some (enumF T).
Lemma option_enumP : Finite.axiom option_enum.
Proof. by case=> [x|]; rewrite /= count_map (count_pred0, enumP). Qed.
Definition option_finMixin := Eval hnf in FinMixin option_enumP.
Canonical option_finType := Eval hnf in FinType (option T) option_finMixin.
Lemma card_option : #|{: option T}| = #|T|.+1.
Proof. by rewrite !cardT !enumT {1}unlock /= !size_map. Qed.
End OptionFinType.
Section TransferFinType.
Variables (eT : countType) (fT : finType) (f : eT -> fT).
Lemma pcan_enumP g : pcancel f g -> Finite.axiom (undup (pmap g (enumF fT))).
Proof.
move=> fK x; rewrite count_uniq_mem ?undup_uniq // mem_undup.
by rewrite mem_pmap -fK map_f // -enumT mem_enum.
Qed.
Definition PcanFinMixin g fK := FinMixin (@pcan_enumP g fK).
Definition CanFinMixin g (fK : cancel f g) := PcanFinMixin (can_pcan fK).
End TransferFinType.
Section SubFinType.
Variables (T : choiceType) (P : pred T).
Import Finite.
Structure subFinType := SubFinType {
subFin_sort :> subType P;
_ : mixin_of (sub_eqType subFin_sort)
}.
Definition pack_subFinType U :=
fun cT b m & phant_id (class cT) (@Class U b m) =>
fun sT m' & phant_id m' m => @SubFinType sT m'.
Implicit Type sT : subFinType.
Definition subFin_mixin sT :=
let: SubFinType _ m := sT return mixin_of (sub_eqType sT) in m.
Coercion subFinType_subCountType sT := @SubCountType _ _ sT (subFin_mixin sT).
Canonical subFinType_subCountType.
Coercion subFinType_finType sT :=
Pack (@Class sT (sub_choiceClass sT) (subFin_mixin sT)).
Canonical subFinType_finType.
Lemma codom_val sT x : (x \in codom (val : sT -> T)) = P x.
Proof.
by apply/codomP/idP=> [[u ->]|Px]; last exists (Sub x Px); rewrite ?valP ?SubK.
Qed.
End SubFinType.
(* This assumes that T has both finType and subCountType structures. *)
Notation "[ 'subFinType' 'of' T ]" := (@pack_subFinType _ _ T _ _ _ id _ _ id)
(at level 0, format "[ 'subFinType' 'of' T ]") : form_scope.
Section FinTypeForSub.
Variables (T : finType) (P : pred T) (sT : subCountType P).
Definition sub_enum : seq sT := pmap insub (enumF T).
Lemma mem_sub_enum u : u \in sub_enum.
Proof. by rewrite mem_pmap_sub -enumT mem_enum. Qed.
Lemma sub_enum_uniq : uniq sub_enum.
Proof. by rewrite pmap_sub_uniq // -enumT enum_uniq. Qed.
Lemma val_sub_enum : map val sub_enum = enum P.
Proof.
rewrite pmap_filter; last exact: insubK.
by apply: eq_filter => x; apply: isSome_insub.
Qed.
(* We can't declare a canonical structure here because we've already *)
(* stated that subType_sort and FinType.sort unify via the          *)
(* subType_finType structure. *)
Definition SubFinMixin := UniqFinMixin sub_enum_uniq mem_sub_enum.
Definition SubFinMixin_for (eT : eqType) of phant eT :=
eq_rect _ Finite.mixin_of SubFinMixin eT.
Variable sfT : subFinType P.
Lemma card_sub : #|sfT| = #|[pred x | P x]|.
Proof. by rewrite -(eq_card (codom_val sfT)) (card_image val_inj). Qed.
Lemma eq_card_sub (A : pred sfT) : A =i predT -> #|A| = #|[pred x | P x]|.
Proof. exact: eq_card_trans card_sub. Qed.
End FinTypeForSub.
(* This assumes that T has a subCountType structure over a type that *)
(* has a finType structure. *)
Notation "[ 'finMixin' 'of' T 'by' <: ]" :=
(SubFinMixin_for (Phant T) (erefl _))
(at level 0, format "[ 'finMixin' 'of' T 'by' <: ]") : form_scope.
(* Regression for the subFinType stack
Record myb : Type := MyB {myv : bool; _ : ~~ myv}.
Canonical myb_sub := Eval hnf in [subType for myv].
Definition myb_eqm := Eval hnf in [eqMixin of myb by <:].
Canonical myb_eq := Eval hnf in EqType myb myb_eqm.
Definition myb_chm := [choiceMixin of myb by <:].
Canonical myb_ch := Eval hnf in ChoiceType myb myb_chm.
Definition myb_cntm := [countMixin of myb by <:].
Canonical myb_cnt := Eval hnf in CountType myb myb_cntm.
Canonical myb_scnt := Eval hnf in [subCountType of myb].
Definition myb_finm := [finMixin of myb by <:].
Canonical myb_fin := Eval hnf in FinType myb myb_finm.
Canonical myb_sfin := Eval hnf in [subFinType of myb].
Print Canonical Projections.
Print myb_finm.
Print myb_cntm.
*)
Section CardSig.
Variables (T : finType) (P : pred T).
Definition sig_finMixin := [finMixin of {x | P x} by <:].
Canonical sig_finType := Eval hnf in FinType {x | P x} sig_finMixin.
Canonical sig_subFinType := Eval hnf in [subFinType of {x | P x}].
Lemma card_sig : #|{: {x | P x}}| = #|[pred x | P x]|.
Proof. exact: card_sub. Qed.
End CardSig.
(* Subtype for an explicit enumeration. *)
Section SeqSubType.
Variables (T : eqType) (s : seq T).
Record seq_sub : Type := SeqSub {ssval : T; ssvalP : in_mem ssval (@mem T _ s)}.
Canonical seq_sub_subType := Eval hnf in [subType for ssval].
Definition seq_sub_eqMixin := Eval hnf in [eqMixin of seq_sub by <:].
Canonical seq_sub_eqType := Eval hnf in EqType seq_sub seq_sub_eqMixin.
Definition seq_sub_enum : seq seq_sub := undup (pmap insub s).
Lemma mem_seq_sub_enum x : x \in seq_sub_enum.
Proof. by rewrite mem_undup mem_pmap -valK map_f ?ssvalP. Qed.
Lemma val_seq_sub_enum : uniq s -> map val seq_sub_enum = s.
Proof.
move=> Us; rewrite /seq_sub_enum undup_id ?pmap_sub_uniq //.
rewrite (pmap_filter (insubK _)); apply/all_filterP.
by apply/allP => x; rewrite isSome_insub.
Qed.
Definition seq_sub_pickle x := index x seq_sub_enum.
Definition seq_sub_unpickle n := nth None (map some seq_sub_enum) n.
Lemma seq_sub_pickleK : pcancel seq_sub_pickle seq_sub_unpickle.
Proof.
rewrite /seq_sub_unpickle => x.
by rewrite (nth_map x) ?nth_index ?index_mem ?mem_seq_sub_enum.
Qed.
Definition seq_sub_countMixin := CountMixin seq_sub_pickleK.
Fact seq_sub_axiom : Finite.axiom seq_sub_enum.
Proof. exact: Finite.uniq_enumP (undup_uniq _) mem_seq_sub_enum. Qed.
Definition seq_sub_finMixin := Finite.Mixin seq_sub_countMixin seq_sub_axiom.
(* Beware: these are not the canonical instances, as they are not consistent *)
(* with the generic sub_choiceType canonical instance. *)
Definition adhoc_seq_sub_choiceMixin := PcanChoiceMixin seq_sub_pickleK.
Definition adhoc_seq_sub_choiceType :=
Eval hnf in ChoiceType seq_sub adhoc_seq_sub_choiceMixin.
Definition adhoc_seq_sub_finType :=
[finType of seq_sub for FinType adhoc_seq_sub_choiceType seq_sub_finMixin].
End SeqSubType.
Section SeqFinType.
Variables (T : choiceType) (s : seq T).
Local Notation sT := (seq_sub s).
Definition seq_sub_choiceMixin := [choiceMixin of sT by <:].
Canonical seq_sub_choiceType := Eval hnf in ChoiceType sT seq_sub_choiceMixin.
Canonical seq_sub_countType := Eval hnf in CountType sT (seq_sub_countMixin s).
Canonical seq_sub_subCountType := Eval hnf in [subCountType of sT].
Canonical seq_sub_finType := Eval hnf in FinType sT (seq_sub_finMixin s).
Canonical seq_sub_subFinType := Eval hnf in [subFinType of sT].
Lemma card_seq_sub : uniq s -> #|{:sT}| = size s.
Proof.
by move=> Us; rewrite cardE enumT -(size_map val) unlock val_seq_sub_enum.
Qed.
End SeqFinType.
(**********************************************************************)
(* *)
(* Ordinal finType : {0, ... , n-1} *)
(* *)
(**********************************************************************)
Section OrdinalSub.
Variable n : nat.
Inductive ordinal : predArgType := Ordinal m of m < n.
Coercion nat_of_ord i := let: Ordinal m _ := i in m.
Canonical ordinal_subType := [subType for nat_of_ord].
Definition ordinal_eqMixin := Eval hnf in [eqMixin of ordinal by <:].
Canonical ordinal_eqType := Eval hnf in EqType ordinal ordinal_eqMixin.
Definition ordinal_choiceMixin := [choiceMixin of ordinal by <:].
Canonical ordinal_choiceType :=
Eval hnf in ChoiceType ordinal ordinal_choiceMixin.
Definition ordinal_countMixin := [countMixin of ordinal by <:].
Canonical ordinal_countType := Eval hnf in CountType ordinal ordinal_countMixin.
Canonical ordinal_subCountType := [subCountType of ordinal].
Lemma ltn_ord (i : ordinal) : i < n. Proof. exact: valP i. Qed.
Lemma ord_inj : injective nat_of_ord. Proof. exact: val_inj. Qed.
Definition ord_enum : seq ordinal := pmap insub (iota 0 n).
Lemma val_ord_enum : map val ord_enum = iota 0 n.
Proof.
rewrite pmap_filter; last exact: insubK.
by apply/all_filterP; apply/allP=> i; rewrite mem_iota isSome_insub.
Qed.
Lemma ord_enum_uniq : uniq ord_enum.
Proof. by rewrite pmap_sub_uniq ?iota_uniq. Qed.
Lemma mem_ord_enum i : i \in ord_enum.
Proof. by rewrite -(mem_map ord_inj) val_ord_enum mem_iota ltn_ord. Qed.
Definition ordinal_finMixin :=
Eval hnf in UniqFinMixin ord_enum_uniq mem_ord_enum.
Canonical ordinal_finType := Eval hnf in FinType ordinal ordinal_finMixin.
Canonical ordinal_subFinType := Eval hnf in [subFinType of ordinal].
End OrdinalSub.
Notation "''I_' n" := (ordinal n)
(at level 8, n at level 2, format "''I_' n").
Hint Resolve ltn_ord : core.
Section OrdinalEnum.
Variable n : nat.
Lemma val_enum_ord : map val (enum 'I_n) = iota 0 n.
Proof. by rewrite enumT unlock val_ord_enum. Qed.
Lemma size_enum_ord : size (enum 'I_n) = n.
Proof. by rewrite -(size_map val) val_enum_ord size_iota. Qed.
Lemma card_ord : #|'I_n| = n.
Proof. by rewrite cardE size_enum_ord. Qed.
Lemma nth_enum_ord i0 m : m < n -> nth i0 (enum 'I_n) m = m :> nat.
Proof.
by move=> ?; rewrite -(nth_map _ 0) (size_enum_ord, val_enum_ord) // nth_iota.
Qed.
Lemma nth_ord_enum (i0 i : 'I_n) : nth i0 (enum 'I_n) i = i.
Proof. by apply: val_inj; apply: nth_enum_ord. Qed.
Lemma index_enum_ord (i : 'I_n) : index i (enum 'I_n) = i.
Proof.
by rewrite -{1}(nth_ord_enum i i) index_uniq ?(enum_uniq, size_enum_ord).
Qed.
End OrdinalEnum.
Lemma widen_ord_proof n m (i : 'I_n) : n <= m -> i < m.
Proof. exact: leq_trans. Qed.
Definition widen_ord n m le_n_m i := Ordinal (@widen_ord_proof n m i le_n_m).
Lemma cast_ord_proof n m (i : 'I_n) : n = m -> i < m.
Proof. by move <-. Qed.
Definition cast_ord n m eq_n_m i := Ordinal (@cast_ord_proof n m i eq_n_m).
Lemma cast_ord_id n eq_n i : cast_ord eq_n i = i :> 'I_n.
Proof. exact: val_inj. Qed.
Lemma cast_ord_comp n1 n2 n3 eq_n2 eq_n3 i :
@cast_ord n2 n3 eq_n3 (@cast_ord n1 n2 eq_n2 i) =
cast_ord (etrans eq_n2 eq_n3) i.
Proof. exact: val_inj. Qed.
Lemma cast_ordK n1 n2 eq_n :
cancel (@cast_ord n1 n2 eq_n) (cast_ord (esym eq_n)).
Proof. by move=> i; apply: val_inj. Qed.
Lemma cast_ordKV n1 n2 eq_n :
cancel (cast_ord (esym eq_n)) (@cast_ord n1 n2 eq_n).
Proof. by move=> i; apply: val_inj. Qed.
Lemma cast_ord_inj n1 n2 eq_n : injective (@cast_ord n1 n2 eq_n).
Proof. exact: can_inj (cast_ordK eq_n). Qed.
Lemma rev_ord_proof n (i : 'I_n) : n - i.+1 < n.
Proof. by case: n i => [|n] [i lt_i_n] //; rewrite ltnS subSS leq_subr. Qed.
Definition rev_ord n i := Ordinal (@rev_ord_proof n i).
Lemma rev_ordK {n} : involutive (@rev_ord n).
Proof.
by case: n => [|n] [i lti] //; apply: val_inj; rewrite /= !subSS subKn.
Qed.
Lemma rev_ord_inj {n} : injective (@rev_ord n).
Proof. exact: inv_inj rev_ordK. Qed.
(* bijection between any finType T and the Ordinal finType of its cardinal *)
Section EnumRank.
Variable T : finType.
Implicit Type A : pred T.
Lemma enum_rank_subproof x0 A : x0 \in A -> 0 < #|A|.
Proof. by move=> Ax0; rewrite (cardD1 x0) Ax0. Qed.
Definition enum_rank_in x0 A (Ax0 : x0 \in A) x :=
insubd (Ordinal (@enum_rank_subproof x0 [eta A] Ax0)) (index x (enum A)).
Definition enum_rank x := @enum_rank_in x T (erefl true) x.
Lemma enum_default A : 'I_(#|A|) -> T.
Proof. by rewrite cardE; case: (enum A) => [|//] []. Qed.
Definition enum_val A i := nth (@enum_default [eta A] i) (enum A) i.
Prenex Implicits enum_val.
Lemma enum_valP A i : @enum_val A i \in A.
Proof. by rewrite -mem_enum mem_nth -?cardE. Qed.
Lemma enum_val_nth A x i : @enum_val A i = nth x (enum A) i.
Proof. by apply: set_nth_default; rewrite cardE in i *; apply: ltn_ord. Qed.
Lemma nth_image T' y0 (f : T -> T') A (i : 'I_#|A|) :
nth y0 (image f A) i = f (enum_val i).
Proof. by rewrite -(nth_map _ y0) // -cardE. Qed.
Lemma nth_codom T' y0 (f : T -> T') (i : 'I_#|T|) :
nth y0 (codom f) i = f (enum_val i).
Proof. exact: nth_image. Qed.
Lemma nth_enum_rank_in x00 x0 A Ax0 :
{in A, cancel (@enum_rank_in x0 A Ax0) (nth x00 (enum A))}.
Proof.
move=> x Ax; rewrite /= insubdK ?nth_index ?mem_enum //.
by rewrite cardE [_ \in _]index_mem mem_enum.
Qed.
Lemma nth_enum_rank x0 : cancel enum_rank (nth x0 (enum T)).
Proof. by move=> x; apply: nth_enum_rank_in. Qed.
Lemma enum_rankK_in x0 A Ax0 :
{in A, cancel (@enum_rank_in x0 A Ax0) enum_val}.
Proof. by move=> x; apply: nth_enum_rank_in. Qed.
Lemma enum_rankK : cancel enum_rank enum_val.
Proof. by move=> x; apply: enum_rankK_in. Qed.
Lemma enum_valK_in x0 A Ax0 : cancel enum_val (@enum_rank_in x0 A Ax0).
Proof.
move=> x; apply: ord_inj; rewrite insubdK; last first.
by rewrite cardE [_ \in _]index_mem mem_nth // -cardE.
by rewrite index_uniq ?enum_uniq // -cardE.
Qed.
Lemma enum_valK : cancel enum_val enum_rank.
Proof. by move=> x; apply: enum_valK_in. Qed.
Lemma enum_rank_inj : injective enum_rank.
Proof. exact: can_inj enum_rankK. Qed.
Lemma enum_val_inj A : injective (@enum_val A).
Proof. by move=> i; apply: can_inj (enum_valK_in (enum_valP i)) (i). Qed.
Lemma enum_val_bij_in x0 A : x0 \in A -> {on A, bijective (@enum_val A)}.
Proof.
move=> Ax0; exists (enum_rank_in Ax0) => [i _|]; last exact: enum_rankK_in.
exact: enum_valK_in.
Qed.
Lemma enum_rank_bij : bijective enum_rank.
Proof. by move: enum_rankK enum_valK; exists (@enum_val T). Qed.
Lemma enum_val_bij : bijective (@enum_val T).
Proof. by move: enum_rankK enum_valK; exists enum_rank. Qed.
(* Due to the limitations of the Coq unification patterns, P can only be *)
(* inferred from the premise of this lemma, not its conclusion. As a result *)
(* this lemma will only be usable in forward chaining style. *)
Lemma fin_all_exists U (P : forall x : T, U x -> Prop) :
(forall x, exists u, P x u) -> (exists u, forall x, P x (u x)).
Proof.
move=> ex_u; pose Q m x := enum_rank x < m -> {ux | P x ux}.
suffices: forall m, m <= #|T| -> exists w : forall x, Q m x, True.
case/(_ #|T|)=> // w _; pose u x := sval (w x (ltn_ord _)).
by exists u => x; rewrite {}/u; case: (w x _).
elim=> [|m IHm] ltmX; first by have w x: Q 0 x by []; exists w.
have{IHm} [w _] := IHm (ltnW ltmX); pose i := Ordinal ltmX.
have [u Pu] := ex_u (enum_val i); suffices w' x: Q m.+1 x by exists w'.
rewrite /Q ltnS leq_eqVlt (val_eqE _ i); case: eqP => [def_i _ | _ /w //].
by rewrite -def_i enum_rankK in u Pu; exists u.
Qed.
Lemma fin_all_exists2 U (P Q : forall x : T, U x -> Prop) :
(forall x, exists2 u, P x u & Q x u) ->
(exists2 u, forall x, P x (u x) & forall x, Q x (u x)).
Proof.
move=> ex_u; have (x): exists u, P x u /\ Q x u by have [u] := ex_u x; exists u.
by case/fin_all_exists=> u /all_and2[]; exists u.
Qed.
End EnumRank.
Arguments enum_val_inj {T A} [i1 i2] : rename.
Arguments enum_rank_inj {T} [x1 x2].
Prenex Implicits enum_val enum_rank enum_valK enum_rankK.
Lemma enum_rank_ord n i : enum_rank i = cast_ord (esym (card_ord n)) i.
Proof.
by apply: val_inj; rewrite insubdK ?index_enum_ord // card_ord [_ \in _]ltn_ord.
Qed.
Lemma enum_val_ord n i : enum_val i = cast_ord (card_ord n) i.
Proof.
by apply: canLR (@enum_rankK _) _; apply: val_inj; rewrite enum_rank_ord.
Qed.
(* The integer bump / unbump operations. *)
Definition bump h i := (h <= i) + i.
Definition unbump h i := i - (h < i).
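(* For instance, bump 2 1 = 1 and bump 2 2 = 3: bump h shifts indices so as *)
(* to skip the value h, and unbump h is its (partial) inverse.              *)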
Lemma bumpK h : cancel (bump h) (unbump h).
Proof.
rewrite /bump /unbump => i.
have [le_hi | lt_ih] := leqP h i; first by rewrite ltnS le_hi subn1.
by rewrite ltnNge ltnW ?subn0.
Qed.
Lemma neq_bump h i : h != bump h i.
Proof.
rewrite /bump eqn_leq; have [le_hi | lt_ih] := leqP h i.
by rewrite ltnNge le_hi andbF.
by rewrite leqNgt lt_ih.
Qed.
Lemma unbumpKcond h i : bump h (unbump h i) = (i == h) + i.
Proof.
rewrite /bump /unbump leqNgt -subSKn.
case: (ltngtP i h) => /= [-> | ltih | ->] //; last by rewrite ltnn.
by rewrite subn1 /= leqNgt !(ltn_predK ltih, ltih, add1n).
Qed.
Lemma unbumpK {h} : {in predC1 h, cancel (unbump h) (bump h)}.
Proof. by move=> i /negbTE-neq_h_i; rewrite unbumpKcond neq_h_i. Qed.
Lemma bump_addl h i k : bump (k + h) (k + i) = k + bump h i.
Proof. by rewrite /bump leq_add2l addnCA. Qed.
Lemma bumpS h i : bump h.+1 i.+1 = (bump h i).+1.
Proof. exact: addnS. Qed.
Lemma unbump_addl h i k : unbump (k + h) (k + i) = k + unbump h i.
Proof.
apply: (can_inj (bumpK (k + h))).
by rewrite bump_addl !unbumpKcond eqn_add2l addnCA.
Qed.
Lemma unbumpS h i : unbump h.+1 i.+1 = (unbump h i).+1.
Proof. exact: unbump_addl 1. Qed.
Lemma leq_bump h i j : (i <= bump h j) = (unbump h i <= j).
Proof.
rewrite /bump leq_subLR.
case: (leqP i h) (leqP h j) => [le_i_h | lt_h_i] [le_h_j | lt_j_h] //.
by rewrite leqW (leq_trans le_i_h).
by rewrite !(leqNgt i) ltnW (leq_trans _ lt_h_i).
Qed.
Lemma leq_bump2 h i j : (bump h i <= bump h j) = (i <= j).
Proof. by rewrite leq_bump bumpK. Qed.
Lemma bumpC h1 h2 i :
bump h1 (bump h2 i) = bump (bump h1 h2) (bump (unbump h2 h1) i).
Proof.
rewrite {1 5}/bump -leq_bump addnCA; congr (_ + (_ + _)).
rewrite 2!leq_bump /unbump /bump; case: (leqP h1 h2) => [le_h12 | lt_h21].
by rewrite subn0 ltnS le_h12 subn1.
by rewrite subn1 (ltn_predK lt_h21) (leqNgt h1) lt_h21 subn0.
Qed.
(* The lift operations on ordinals; to avoid a messy dependent type, *)
(* unlift is a partial operation (returns an option). *)
Lemma lift_subproof n h (i : 'I_n.-1) : bump h i < n.
Proof. by case: n i => [[]|n] //= i; rewrite -addnS (leq_add (leq_b1 _)). Qed.
Definition lift n (h : 'I_n) (i : 'I_n.-1) := Ordinal (lift_subproof h i).
Lemma unlift_subproof n (h : 'I_n) (u : {j | j != h}) : unbump h (val u) < n.-1.
Proof.
case: n h u => [|n h] [] //= j ne_jh.
rewrite -(leq_bump2 h.+1) bumpS unbumpK // /bump.
case: (ltngtP n h) => [|_|eq_nh]; rewrite ?(leqNgt _ h) ?ltn_ord //.
by rewrite ltn_neqAle [j <= _](valP j) {2}eq_nh andbT.
Qed.
Definition unlift n (h i : 'I_n) :=
omap (fun u : {j | j != h} => Ordinal (unlift_subproof u)) (insub i).
Variant unlift_spec n h i : option 'I_n.-1 -> Type :=
| UnliftSome j of i = lift h j : unlift_spec h i (Some j)
| UnliftNone of i = h : unlift_spec h i None.
Lemma unliftP n (h i : 'I_n) : unlift_spec h i (unlift h i).
Proof.
rewrite /unlift; case: insubP => [u nhi | ] def_i /=; constructor.
by apply: val_inj; rewrite /= def_i unbumpK.
by rewrite negbK in def_i; apply/eqP.
Qed.
Lemma neq_lift n (h : 'I_n) i : h != lift h i.
Proof. exact: neq_bump. Qed.
Lemma unlift_none n (h : 'I_n) : unlift h h = None.
Proof. by case: unliftP => // j Dh; case/eqP: (neq_lift h j). Qed.
Lemma unlift_some n (h i : 'I_n) :
h != i -> {j | i = lift h j & unlift h i = Some j}.
Proof.
rewrite eq_sym => /eqP neq_ih.
by case Dui: (unlift h i) / (unliftP h i) => [j Dh|//]; exists j.
Qed.
Lemma lift_inj n (h : 'I_n) : injective (lift h).
Proof. by move=> i1 i2 [/(can_inj (bumpK h))/val_inj]. Qed.
Arguments lift_inj {n h} [i1 i2] eq_i12h : rename.
Lemma liftK n (h : 'I_n) : pcancel (lift h) (unlift h).
Proof. by move=> i; case: (unlift_some (neq_lift h i)) => j /lift_inj->. Qed.
(* Shifting and splitting indices, for cutting and pasting arrays *)
Lemma lshift_subproof m n (i : 'I_m) : i < m + n.
Proof. by apply: leq_trans (valP i) _; apply: leq_addr. Qed.
Lemma rshift_subproof m n (i : 'I_n) : m + i < m + n.
Proof. by rewrite ltn_add2l. Qed.
Definition lshift m n (i : 'I_m) := Ordinal (lshift_subproof n i).
Definition rshift m n (i : 'I_n) := Ordinal (rshift_subproof m i).
Lemma split_subproof m n (i : 'I_(m + n)) : i >= m -> i - m < n.
Proof. by move/subSn <-; rewrite leq_subLR. Qed.
Definition split {m n} (i : 'I_(m + n)) : 'I_m + 'I_n :=
match ltnP (i) m with
| LtnNotGeq lt_i_m => inl _ (Ordinal lt_i_m)
| GeqNotLtn ge_i_m => inr _ (Ordinal (split_subproof ge_i_m))
end.
Variant split_spec m n (i : 'I_(m + n)) : 'I_m + 'I_n -> bool -> Type :=
| SplitLo (j : 'I_m) of i = j :> nat : split_spec i (inl _ j) true
| SplitHi (k : 'I_n) of i = m + k :> nat : split_spec i (inr _ k) false.
Lemma splitP m n (i : 'I_(m + n)) : split_spec i (split i) (i < m).
Proof.
rewrite /split {-3}/leq.
by case: (@ltnP i m) => cmp_i_m //=; constructor; rewrite ?subnKC.
Qed.
Definition unsplit {m n} (jk : 'I_m + 'I_n) :=
match jk with inl j => lshift n j | inr k => rshift m k end.
Lemma ltn_unsplit m n (jk : 'I_m + 'I_n) : (unsplit jk < m) = jk.
Proof. by case: jk => [j|k]; rewrite /= ?ltn_ord // ltnNge leq_addr. Qed.
Lemma splitK {m n} : cancel (@split m n) unsplit.
Proof. by move=> i; apply: val_inj; case: splitP. Qed.
Lemma unsplitK {m n} : cancel (@unsplit m n) split.
Proof.
move=> jk; have:= ltn_unsplit jk.
by do [case: splitP; case: jk => //= i j] => [|/addnI] => /ord_inj->.
Qed.
Section OrdinalPos.
Variable n' : nat.
Local Notation n := n'.+1.
Definition ord0 := Ordinal (ltn0Sn n').
Definition ord_max := Ordinal (ltnSn n').
Lemma leq_ord (i : 'I_n) : i <= n'. Proof. exact: valP i. Qed.
Lemma sub_ord_proof m : n' - m < n.
Proof. by rewrite ltnS leq_subr. Qed.
Definition sub_ord m := Ordinal (sub_ord_proof m).
Lemma sub_ordK (i : 'I_n) : n' - (n' - i) = i.
Proof. by rewrite subKn ?leq_ord. Qed.
Definition inord m : 'I_n := insubd ord0 m.
Lemma inordK m : m < n -> inord m = m :> nat.
Proof. by move=> lt_m; rewrite val_insubd lt_m. Qed.
Lemma inord_val (i : 'I_n) : inord i = i.
Proof. by rewrite /inord /insubd valK. Qed.
Lemma enum_ordS : enum 'I_n = ord0 :: map (lift ord0) (enum 'I_n').
Proof.
apply: (inj_map val_inj); rewrite val_enum_ord /= -map_comp.
by rewrite (map_comp (addn 1)) val_enum_ord -iota_addl.
Qed.
Lemma lift_max (i : 'I_n') : lift ord_max i = i :> nat.
Proof. by rewrite /= /bump leqNgt ltn_ord. Qed.
Lemma lift0 (i : 'I_n') : lift ord0 i = i.+1 :> nat. Proof. by []. Qed.
End OrdinalPos.
Arguments ord0 {n'}.
Arguments ord_max {n'}.
Arguments inord {n'}.
Arguments sub_ord {n'}.
Arguments sub_ordK {n'}.
Arguments inord_val {n'}.
(* Product of two fintypes which is a fintype *)
Section ProdFinType.
Variable T1 T2 : finType.
Definition prod_enum := [seq (x1, x2) | x1 <- enum T1, x2 <- enum T2].
Lemma predX_prod_enum (A1 : pred T1) (A2 : pred T2) :
count [predX A1 & A2] prod_enum = #|A1| * #|A2|.
Proof.
rewrite !cardE !size_filter -!enumT /prod_enum.
elim: (enum T1) => //= x1 s1 IHs; rewrite count_cat {}IHs count_map /preim /=.
by case: (x1 \in A1); rewrite ?count_pred0.
Qed.
Lemma prod_enumP : Finite.axiom prod_enum.
Proof.
by case=> x1 x2; rewrite (predX_prod_enum (pred1 x1) (pred1 x2)) !card1.
Qed.
Definition prod_finMixin := Eval hnf in FinMixin prod_enumP.
Canonical prod_finType := Eval hnf in FinType (T1 * T2) prod_finMixin.
Lemma cardX (A1 : pred T1) (A2 : pred T2) : #|[predX A1 & A2]| = #|A1| * #|A2|.
Proof. by rewrite -predX_prod_enum unlock size_filter unlock. Qed.
Lemma card_prod : #|{: T1 * T2}| = #|T1| * #|T2|.
Proof. by rewrite -cardX; apply: eq_card; case. Qed.
Lemma eq_card_prod (A : pred (T1 * T2)) : A =i predT -> #|A| = #|T1| * #|T2|.
Proof. exact: eq_card_trans card_prod. Qed.
End ProdFinType.
Section TagFinType.
Variables (I : finType) (T_ : I -> finType).
Definition tag_enum :=
flatten [seq [seq Tagged T_ x | x <- enumF (T_ i)] | i <- enumF I].
Lemma tag_enumP : Finite.axiom tag_enum.
Proof.
case=> i x; rewrite -(enumP i) /tag_enum -enumT.
elim: (enum I) => //= j e IHe.
rewrite count_cat count_map {}IHe; congr (_ + _).
rewrite -size_filter -cardE /=; case: eqP => [-> | ne_j_i].
by apply: (@eq_card1 _ x) => y; rewrite -topredE /= tagged_asE ?eqxx.
by apply: eq_card0 => y.
Qed.
Definition tag_finMixin := Eval hnf in FinMixin tag_enumP.
Canonical tag_finType := Eval hnf in FinType {i : I & T_ i} tag_finMixin.
Lemma card_tagged :
#|{: {i : I & T_ i}}| = sumn (map (fun i => #|T_ i|) (enum I)).
Proof.
rewrite cardE !enumT {1}unlock size_flatten /shape -map_comp.
by congr (sumn _); apply: eq_map => i; rewrite /= size_map -enumT -cardE.
Qed.
End TagFinType.
Section SumFinType.
Variables T1 T2 : finType.
Definition sum_enum :=
[seq inl _ x | x <- enumF T1] ++ [seq inr _ y | y <- enumF T2].
Lemma sum_enum_uniq : uniq sum_enum.
Proof.
rewrite cat_uniq -!enumT !(enum_uniq, map_inj_uniq); try by move=> ? ? [].
by rewrite andbT; apply/hasP=> [[_ /mapP[x _ ->] /mapP[]]].
Qed.
Lemma mem_sum_enum u : u \in sum_enum.
Proof. by case: u => x; rewrite mem_cat -!enumT map_f ?mem_enum ?orbT. Qed.
Definition sum_finMixin := Eval hnf in UniqFinMixin sum_enum_uniq mem_sum_enum.
Canonical sum_finType := Eval hnf in FinType (T1 + T2) sum_finMixin.
Lemma card_sum : #|{: T1 + T2}| = #|T1| + #|T2|.
Proof. by rewrite !cardT !enumT {1}unlock size_cat !size_map. Qed.
End SumFinType.
|
dS/dt = -bSI, dI/dt = bSI (using b for beta)
```python
from sympy import *
from sympy.abc import S,I,t,b
```
```python
# critical points
P=-b*S*I
Q=b*S*I
# set P(S,I)=0 and Q(S,I)=0
Peqn=Eq(P,0)
Qeqn=Eq(Q,0)
print(solve((Peqn,Qeqn),S,I))
# Jacobian matrix
J11=diff(P,S)
J12=diff(P,I)
J21=diff(Q,S)
J22=diff(Q,I)
J=Matrix([[J11,J12],[J21,J22]])
pprint(J)
```
[(0, I), (S, 0)]
⎡-I⋅b -S⋅b⎤
⎢ ⎥
⎣I⋅b S⋅b ⎦
```python
# J at the critical points
Jc1=J.subs([(S,0),(I,I)])
pprint(Jc1.eigenvals())
pprint(Jc1.eigenvects())
Jc2=J.subs([(S,S),(I,0)])
pprint(Jc2.eigenvals())
pprint(Jc2.eigenvects())
```
{0: 1, -I⋅b: 1}
⎡⎛ ⎡⎡0⎤⎤⎞ ⎛ ⎡⎡-1⎤⎤⎞⎤
⎢⎜0, 1, ⎢⎢ ⎥⎥⎟, ⎜-I⋅b, 1, ⎢⎢ ⎥⎥⎟⎥
⎣⎝ ⎣⎣1⎦⎦⎠ ⎝ ⎣⎣1 ⎦⎦⎠⎦
{0: 1, S⋅b: 1}
⎡⎛ ⎡⎡1⎤⎤⎞ ⎛ ⎡⎡-1⎤⎤⎞⎤
⎢⎜0, 1, ⎢⎢ ⎥⎥⎟, ⎜S⋅b, 1, ⎢⎢ ⎥⎥⎟⎥
⎣⎝ ⎣⎣0⎦⎦⎠ ⎝ ⎣⎣1 ⎦⎦⎠⎦
The critical points are non-hyperbolic (each Jacobian has a zero eigenvalue, so the linearization alone does not determine stability).
```python
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
import pylab as pl
import matplotlib
```
```python
b=1
def dx_dt(x,t):
return [ -b*x[0]*x[1] , b*x[1]*x[0] ]
# forward-time trajectories
ts=np.linspace(0,10,500)
ic=np.linspace(20000,100000,3)
for r in ic:
for s in ic:
x0=[r,s]
xs=odeint(dx_dt,x0,ts)
plt.plot(xs[:,0],xs[:,1],"-", color="orangered", lw=1.5)
# backward-time trajectories
ts=np.linspace(0,-10,500)
ic=np.linspace(20000,100000,3)
for r in ic:
for s in ic:
x0=[r,s]
xs=odeint(dx_dt,x0,ts)
plt.plot(xs[:,0],xs[:,1],"-", color="orangered", lw=1.5)
# axis labels and font style
plt.xlabel('S',fontsize=20)
plt.ylabel('I',fontsize=20)
plt.tick_params(labelsize=12)
plt.ticklabel_format(style="sci", scilimits=(0,0))
plt.xlim(0,100000)
plt.ylim(0,100000)
# vector field
X,Y=np.mgrid[0:100000:15j,0:100000:15j]
u=-b*X*Y
v=b*Y*X
pl.quiver(X,Y,u,v,color='dimgray')
plt.savefig("SIinf.pdf",bbox_inches='tight')
plt.show()
```
Bifurcation Analysis
The critical points of the system do not depend on the parameter beta, so they do not change as beta varies.
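Note also that dS/dt + dI/dt = 0, so the total population N = S + I is conserved, which is why the trajectories in the phase portrait are straight line segments S + I = const. A minimal numeric check of this (a sketch reusing the system above; the initial condition is chosen only for illustration):
```python
import numpy as np
from scipy.integrate import odeint

b = 1
def dx_dt(x, t):
    return [-b*x[0]*x[1], b*x[1]*x[0]]

ts = np.linspace(0, 10, 500)
x0 = [90000, 10000]        # illustrative initial condition [S0, I0]
xs = odeint(dx_dt, x0, ts)
N = xs[:, 0] + xs[:, 1]    # S + I along the trajectory
print(N.min(), N.max())    # both stay near 1e5: S + I is conserved
```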
|
[GOAL]
α : Type u_1
β : Type u_2
head✝ : α
l : List α
⊢ (head✝ :: l) ×ˢ [] = []
[PROOFSTEP]
simp [product_cons, product_nil]
[GOAL]
α : Type u_1
β : Type u_2
l₁ : List α
l₂ : List β
a : α
b : β
⊢ (a, b) ∈ l₁ ×ˢ l₂ ↔ a ∈ l₁ ∧ b ∈ l₂
[PROOFSTEP]
simp_all [SProd.sprod, product, mem_bind, mem_map, Prod.ext_iff, exists_prop, and_left_comm, exists_and_left,
exists_eq_left, exists_eq_right]
[GOAL]
α : Type u_1
β : Type u_2
l₁ : List α
l₂ : List β
⊢ length (l₁ ×ˢ l₂) = length l₁ * length l₂
[PROOFSTEP]
induction' l₁ with x l₁ IH
[GOAL]
case nil
α : Type u_1
β : Type u_2
l₂ : List β
⊢ length ([] ×ˢ l₂) = length [] * length l₂
[PROOFSTEP]
exact (zero_mul _).symm
[GOAL]
case cons
α : Type u_1
β : Type u_2
l₂ : List β
x : α
l₁ : List α
IH : length (l₁ ×ˢ l₂) = length l₁ * length l₂
⊢ length ((x :: l₁) ×ˢ l₂) = length (x :: l₁) * length l₂
[PROOFSTEP]
simp only [length, product_cons, length_append, IH, right_distrib, one_mul, length_map, add_comm]
[GOAL]
α : Type u_1
β : Type u_2
σ : α → Type u_3
head✝ : α
l : List α
⊢ (List.sigma (head✝ :: l) fun a => []) = []
[PROOFSTEP]
simp [sigma_cons, sigma_nil]
[GOAL]
α : Type u_1
β : Type u_2
σ : α → Type u_3
l₁ : List α
l₂ : (a : α) → List (σ a)
a : α
b : σ a
⊢ { fst := a, snd := b } ∈ List.sigma l₁ l₂ ↔ a ∈ l₁ ∧ b ∈ l₂ a
[PROOFSTEP]
simp [List.sigma, mem_bind, mem_map, exists_prop, exists_and_left, and_left_comm, exists_eq_left, heq_iff_eq,
exists_eq_right]
[GOAL]
α : Type u_1
β : Type u_2
σ : α → Type u_3
l₁ : List α
l₂ : (a : α) → List (σ a)
⊢ length (List.sigma l₁ l₂) = sum (map (fun a => length (l₂ a)) l₁)
[PROOFSTEP]
induction' l₁ with x l₁ IH
[GOAL]
case nil
α : Type u_1
β : Type u_2
σ : α → Type u_3
l₂ : (a : α) → List (σ a)
⊢ length (List.sigma [] l₂) = sum (map (fun a => length (l₂ a)) [])
[PROOFSTEP]
rfl
[GOAL]
case cons
α : Type u_1
β : Type u_2
σ : α → Type u_3
l₂ : (a : α) → List (σ a)
x : α
l₁ : List α
IH : length (List.sigma l₁ l₂) = sum (map (fun a => length (l₂ a)) l₁)
⊢ length (List.sigma (x :: l₁) l₂) = sum (map (fun a => length (l₂ a)) (x :: l₁))
[PROOFSTEP]
simp only [map, sigma_cons, length_append, length_map, IH, sum_cons]
|
// Copyright 2012 Cloudera Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "rpc/thrift-client.h"
#include <boost/assign.hpp>
#include <boost/lexical_cast.hpp>
#include <ostream>
#include <thrift/Thrift.h>
#include <gutil/strings/substitute.h>
#include "util/time.h"
#include "common/names.h"
using namespace apache::thrift::transport;
using namespace apache::thrift;
using namespace strings;
DECLARE_string(ssl_client_ca_certificate);
namespace impala {
Status ThriftClientImpl::Open() {
if (!socket_create_status_.ok()) return socket_create_status_;
try {
if (!transport_->isOpen()) {
transport_->open();
}
} catch (const TException& e) {
return Status(Substitute("Couldn't open transport for $0 ($1)",
lexical_cast<string>(address_), e.what()));
}
return Status::OK();
}
Status ThriftClientImpl::OpenWithRetry(uint32_t num_tries, uint64_t wait_ms) {
  uint32_t try_count = 0;
while (true) {
++try_count;
Status status = Open();
if (status.ok()) return status;
LOG(INFO) << "Unable to connect to " << address_;
if (num_tries == 0) {
LOG(INFO) << "(Attempt " << try_count << ", will retry indefinitely)";
} else {
if (num_tries != 1) {
// No point logging 'attempt 1 of 1'
LOG(INFO) << "(Attempt " << try_count << " of " << num_tries << ")";
}
if (try_count == num_tries) return status;
}
SleepForMs(wait_ms);
}
}
void ThriftClientImpl::Close() {
try {
if (transport_.get() != NULL && transport_->isOpen()) transport_->close();
} catch (const TException& e) {
LOG(INFO) << "Error closing connection to: " << address_ << ", ignoring (" << e.what()
<< ")";
// Forcibly close the socket (since the transport may have failed to get that far
// during close())
try {
if (socket_.get() != NULL) socket_->close();
} catch (const TException& e) {
LOG(INFO) << "Error closing socket to: " << address_ << ", ignoring (" << e.what()
<< ")";
}
}
}
Status ThriftClientImpl::CreateSocket() {
if (!ssl_) {
socket_.reset(new TSocket(address_.hostname, address_.port));
} else {
try {
TSSLSocketFactory factory;
// TODO: No need to do this every time we create a socket, the factory can be
// shared. But since there may be many certificates, this needs some slightly more
// complex infrastructure to do right.
factory.loadTrustedCertificates(FLAGS_ssl_client_ca_certificate.c_str());
socket_ = factory.createSocket(address_.hostname, address_.port);
} catch (const TException& e) {
return Status(Substitute("Failed to create socket: $0", e.what()));
}
}
return Status::OK();
}
}
|
And He came to earth as God’s only Son.
Now He is seated next to the Father in the heavenly sky.
That He is God’s Son indeed.
We must accept Him and do what He says is right.
That His promises are true so we will receive.
The works of the evil one who is full of lies.
When we accept Him we will receive the prize.
|
################################################################################################
# Dependencies
#source("Code/get_dynamic_data.r")
GetAlsfrsData <- function(raw.data.name="train") {
# For a specified dataset in the original contest format, returns
# a dataset containing only ALSFRS information.
#
# Args:
# raw.data.name: name of dataset
#
# Returns:
# A dataset in which each row corresponds to a unique
# record.id. The columns consist of subject.id, alsfrs.delta, and the ALSFRS
# question fields.
#
# Example usage:
# #source("Code/get_alsfrs_data.r")
# alsfrs.data<-GetAlsfrsData("train")
# Check if RDA version of ALSFRS data has been stored
rda.filename = paste("Data/alsfrs_",raw.data.name,".rda",sep="")
if (file.exists(rda.filename)) {
print(paste("loading saved alsfrs data from",rda.filename,sep=" "))
load(rda.filename)
print("finished loading")
return(alsfrs.data)
}
# Get raw data
data<-GetRawData(raw.data.name)
# Choose form from which to draw data
form <- "ALSFRS(R)"
# Choose field names to include
fields <- c("ALSFRS Delta", "1. Speech", "2. Salivation",
"3. Swallowing", "4. Handwriting", "5a. Cutting without Gastrostomy",
"5b. Cutting with Gastrostomy", "6. Dressing and Hygiene",
"7. Turning in Bed", "8. Walking", "9. Climbing Stairs",
"10. Respiratory", "R-1. Dyspnea", "R-2. Orthopnea",
"R-3. Respiratory Insufficiency")
# Choose column names for dataset
columns<-c("alsfrs.delta", "speech", "salivation",
"swallowing", "handwriting", "cutting.wo.gastrostomy", "cutting.w.gastrostomy",
"dressing", "turning", "walking", "climbing.stairs", "respiratory", "r1.dyspnea",
"r2.orthopnea", "r3.resp.insufficiency")
# Only consider desired form and fields to pass into GetDynamicData
data <- data[data$form == form & data$field %in% fields, ]
# Create dataset
alsfrs.data<-GetDynamicData(data,fields,columns)
# Sort rows by subject.id and then by alsfrs.delta
alsfrs.data<-alsfrs.data[order(alsfrs.data$subject.id,
alsfrs.data$alsfrs.delta),]
# Check for duplicate deltas and delete those rows
# Create combination of subject.ids and deltas
id<-paste(as.integer(alsfrs.data$subject.id),
as.integer(alsfrs.data$alsfrs.delta),sep=".")
# Check for duplicates in id
dup<-duplicated(id)
# Keep only nonduplicates - throws away second record with same delta
alsfrs.data<-alsfrs.data[!dup,]
# Compute ALSFRS score
alsfrs.data<-AlsfrsScore(alsfrs.data)
# Create a column that combines cutting with and without gastrostomy
cutting<-apply(cbind(alsfrs.data$cutting.wo.gastrostomy,alsfrs.data$cutting.w.gastrostomy),1,function(x)
max(x,na.rm=TRUE))
# If both cutting fields = NA, then result of max will be -Inf. Replace all -Inf with NA
cutting[is.infinite(cutting)]<-NA
# Add to data frame
alsfrs.data<-cbind(alsfrs.data,cutting)
# Save ALSFRS data to RDA file
save(alsfrs.data, file = rda.filename)
return(alsfrs.data)
}
|
\name{circos.update}
\alias{circos.update}
\title{
Create plotting regions for a whole track
}
\description{
Create plotting regions for a whole track
}
\usage{
circos.update(...)
}
\arguments{
\item{...}{pass to \code{\link{circos.updatePlotRegion}}}
}
\details{
shortcut function of \code{\link{circos.updatePlotRegion}}.
}
\examples{
# There is no example
NULL
}
|
## Sampling and Reconstruction
Considering that:
- [Digital Signal Processors](https://en.wikipedia.org/wiki/Digital_signal_processor) and other processors can only perform mathematical operations on a limited number of values.
- The signals treated so far are continuous in time and amplitude.
- Continuous signals therefore cannot be handled directly by processors.
A way must be found to "translate" the information in continuous signals so that processors can operate on them.
Sampling and quantization are the processes in charge of this "translation". [*Sampling*](https://en.wikipedia.org/wiki/Sampling_%28signal_processing%29) is the process of extracting the amplitudes of continuous signals only at certain instants. [*Quantization*](https://en.wikipedia.org/wiki/Quantization_%28signal_processing%29) is the process of mapping the continuous values onto a limited set of symbols determined by the processor's bits. Finally, sampled and quantized signals are known as **digital signals**.
### Ideal sampling
A continuous signal $x(t)$ is sampled by taking its amplitude values at certain instants, usually equidistant in time. This can be modeled as the multiplication of the signal $x(t)$ by a train of Dirac impulses; however, it is only a model, given the impossibility of physically generating Dirac impulses.
Assuming sampling with a constant interval $T$, the sampled signal $x_\text{s}(t)$ is represented as:
\begin{equation}
x_\text{s}(t) = \sum_{k = - \infty}^{\infty} x(t) \cdot \delta(t - k T) = \sum_{k = - \infty}^{\infty} x(k T) \cdot \delta(t - k T)
\end{equation}
Thus, the sampled signal corresponds to a series of Dirac impulses weighted by the amplitudes of the signal $x(t)$.
The multiplication of the signal by the Dirac impulse train can be written as:
\begin{equation}
x_\text{s}(t) = x(t) \cdot \frac{1}{T} {\bot \!\! \bot \!\! \bot} \left( \frac{t}{T} \right)
\end{equation}
The samples $x(k T)$ for $k \in \mathbb{Z}$ of the continuous-time signal form the [discrete-time signal](https://en.wikipedia.org/wiki/Discrete-time_signal) $x[k] := x(k T)$.
Can $x[k]$ always represent $x(t)$?
### Spectrum of a sampled signal
The spectrum $X_\text{s}(j \omega) = \mathcal{F} \{ x_\text{s}(t) \}$ of the sampled signal $x_\text{s}(t)$ is obtained via the Fourier transform.
\begin{equation}
x_\text{s}(t) = x(t) \cdot \frac{1}{T} {\bot \!\! \bot \!\! \bot} \left( \frac{t}{T} \right)
\end{equation}
\begin{equation}
\begin{split}
X_\text{s}(j \omega) &= \frac{1}{2 \pi} X(j \omega) * {\bot \!\! \bot \!\! \bot} \left( \frac{\omega}{\omega_\text{s}} \right) \\
&= \frac{1}{2 \pi} X(j \omega) * \frac{2 \pi}{T} \sum_{\mu = - \infty}^{\infty} \delta(\omega - \mu \omega_\text{s}) \\
&= \frac{1}{T} \sum_{\mu = - \infty}^{\infty} X \left(j (\omega - \mu \omega_\text{s}) \right)
\end{split}
\end{equation}
where $\omega_\text{s} = 2 \pi \, f_\text{s}$ is the angular sampling frequency and $f_\text{s} = \frac{1}{T}$ is the sampling frequency.
The resulting spectrum $X_\text{s}(j \omega)$ consists of replicas of the spectrum $X(j \omega)$ repeated every $\omega_\text{s}$; that is, periodic sampling of a signal produces a periodic spectrum (in the frequency domain).
Suppose a signal $x(t)$ containing only low frequencies. Its spectrum $X(j \omega)$ satisfies:
\begin{equation}
X(j \omega) = 0 \qquad \text{for } |\omega| > \omega_\text{u}
\end{equation}
where $\omega_\text{u}$ is the limit frequency above which there are no spectral components. The following figure shows an example of such a spectrum.
Since the spectrum of the sampled signal $X_\text{s}(j \omega)$ is periodic with period $\omega_\text{s}$, it corresponds to a superposition of spectra shifted by multiples of $\omega_\text{s}$, as shown in the following figure.
Note that if $\frac{\omega_\text{s}}{2} > \omega_\text{u}$, the repeated spectra do not overlap and the values are not altered; that is, the signal is properly sampled.
### Reconstruction
If $\omega_\text{u} < \frac{\omega_\text{s}}{2}$ holds, the spectrum $X(j \omega)$ can be recovered by keeping only the central portion of $X_\text{s}(j \omega)$. This is equivalent to applying an ideal low-pass filter with cutoff frequency $\frac{\omega_\text{s}}{2}$ to the sampled signal, as shown below.
- The blue line represents the spectrum of the sampled signal.
- The red line represents the spectrum of the ideal low-pass filter.
The transfer function $H(j \omega)$ of this filter is:
\begin{equation}
H(j \omega) = T \cdot \text{rect} \left( \frac{\omega}{\omega_\text{s}} \right)
\end{equation}
Its impulse response (in the time domain) is obtained via the inverse Fourier transform.
\begin{equation}
h(t) = \text{sinc} \left( \frac{\pi t}{T} \right)
\end{equation}
Thus, the reconstructed signal $y(t)$ is the result of the convolution between this impulse response and the impulse train of the sampled signal.
\begin{align}
y(t) &= x_\text{s}(t) * h(t) \\
&= \left( \sum_{k = - \infty}^{\infty} x(k T) \cdot \delta(t - k T) \right) * \text{sinc} \left( \frac{\pi t}{T} \right) \\
&= \sum_{k = - \infty}^{\infty} x(k T) \cdot \text{sinc} \left( \frac{\pi}{T} (t - k T) \right)
\end{align}
Thus, the reconstructed signal $y(t)$ is a superposition of sinc functions, shifted and weighted by the sampled values. This is illustrated below.
### Aliasing
If the criterion $\frac{\omega_\text{s}}{2} > \omega_\text{u}$ is not met, the repeated spectra overlap and their values change; the central part of the spectrum is altered and, consequently, the reconstructed signal is no longer the original one.
This phenomenon is called **aliasing**, because high-frequency signals take on the appearance of (become "aliases" of) low-frequency signals.
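A minimal numeric sketch of this effect, using the undersampling values from the example further below ($\omega_0 = 5$, $\omega_\text{s} = 7$): sampled at $\omega_\text{s}$, the signals $\cos(\omega_0 t)$ and $\cos((\omega_\text{s} - \omega_0) t)$ yield exactly the same samples, so after sampling they are indistinguishable.
```python
import numpy as np

w_0, w_s = 5, 7                    # undersampling: w_s < 2*w_0
T = 2 * np.pi / w_s                # sampling interval
k = np.arange(20)                  # sample indices
x1 = np.cos(w_0 * k * T)           # samples of the original signal
x2 = np.cos((w_s - w_0) * k * T)   # samples of its low-frequency alias
print(np.allclose(x1, x2))         # True: both give identical samples
```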
### Sampling theorem
Also known as the [*Nyquist–Shannon sampling theorem*](https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem). A signal $x(t)$ whose frequency components are band-limited by $\omega_\text{c}$ can be reconstructed from samples taken at an angular frequency $\omega_\text{s}$ provided that
\begin{equation}
\omega_\text{s} \geq 2 \cdot \omega_\text{c}
\end{equation}
This means the sampling frequency must be chosen according to the application and its typical signals, so that it is at least twice the maximum frequency present in them.
* oversampling $\omega_\text{s} > 2 \cdot \omega_\text{c}$
* critical sampling $\omega_\text{s} = 2 \cdot \omega_\text{c}$
* undersampling $\omega_\text{s} < 2 \cdot \omega_\text{c}$
In real applications, a sampling frequency somewhat above twice the maximum is chosen, as the numeric sketch below illustrates.
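For instance (values assumed only for illustration): audio band-limited to $f_\text{c} = 20$ kHz requires $f_\text{s} \geq 40$ kHz, which is why CD audio samples at 44.1 kHz:
```python
f_c = 20e3             # assumed maximum signal frequency, in Hz
f_nyquist = 2 * f_c    # minimum sampling rate allowed by the theorem
f_cd = 44.1e3          # CD audio sampling rate
print(f_nyquist, f_cd >= f_nyquist)   # 40000.0 True -> oversampling
```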
### Example: Sampling and reconstruction of a cosine signal
Consider a continuous-time signal $x(t) = \cos(\omega_0 t)$.
We define one function to take samples and another to reconstruct the signal.
```python
import sympy as sym
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
sym.init_printing()
t = sym.symbols('t', real=True)
k = sym.symbols('k', integer=True)
def ideal_sampling(x, k, w_s):
    # evaluate x(t) at the sampling instants t = kappa*T, with T = 2*pi/w_s
    kappa = sym.symbols('kappa')
    xs = sym.lambdify(kappa, x.subs(t, kappa * 2 * sym.pi / w_s))
    return [xs(kappa) for kappa in k]

def ideal_reconstruction(xs, k, w_s):
    # superpose sinc kernels shifted to k[n]*T and weighted by the samples xs[n]
    T = 2*sym.pi/w_s
    return sum(xs[n] * sym.sinc(sym.pi / T * (t - k[n] * T)) for n in range(len(k)))
```
And another function to plot the signals.
```python
def plot_signals(xs, y, w_s, k):
plt.stem(k*2*np.pi/w_s, xs)
plt.xlabel('$t$ in s')
plt.ylabel('$x_s[k] = x_s(kT)$')
plt.axis([0, 5, -1.2, 1.2])
sym.plot(y, (t, 0, 5), xlabel='$t$', ylabel='$y(t)$', ylim=(-1.2, 1.2))
```
Now define $x(t) = \cos(\omega_0 t)$ with $\omega_0 = 5$.
```python
w_0 = 5
x = sym.cos(w_0 * t)
plt.rcParams['figure.figsize'] = 7, 2
sym.plot(x, (t, 0, 5), xlabel=r'$t$', ylabel=r'$x(t)$')
```
If the sampling frequency satisfies $\omega_\text{s} > 2 \cdot \omega_0$, the signal is properly sampled.
Example with $\omega_\text{s} = 50$
```python
k = np.arange(-100, 100)
w_s = 50
xs = ideal_sampling(x, k, w_s)
y = ideal_reconstruction(xs, k, w_s)
plt.rcParams['figure.figsize'] = 7, 2
plot_signals(xs, y, w_s, k)
```
If the sampling frequency satisfies $\omega_\text{s} = 2 \cdot \omega_0$, the signal is sampled exactly at the limit of the theorem.
Example with $\omega_\text{s} = 10$
```python
w_s = 10
xs = ideal_sampling(x, k, w_s)
y = ideal_reconstruction(xs, k, w_s)
plt.rcParams['figure.figsize'] = 7, 2
plot_signals(xs, y, w_s, k)
```
If the sampling frequency satisfies $\omega_\text{s} < 2 \cdot \omega_0$, the signal is undersampled.
Example with $\omega_\text{s} = 7$
```python
w_s = 7
xs = ideal_sampling(x, k, w_s)
y = ideal_reconstruction(xs, k, w_s)
plt.rcParams['figure.figsize'] = 7, 2
plot_signals(xs, y, w_s, k)
```
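With $\omega_\text{s} = 7$ and $\omega_0 = 5$, the reconstruction oscillates at the alias frequency $\omega_\text{s} - \omega_0 = 2$, consistent with the `alias_frequency` helper above.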
What happens if $x(t) = \cos(5 t + 1.5)$?
```python
w_0 = 5
x = sym.cos(w_0 * t + 1.5)
plt.rcParams['figure.figsize'] = 7, 2
sym.plot(x, (t, 0, 5), xlabel=r'$t$', ylabel=r'$x(t)$')
```
```python
k = np.arange(-100, 100)
w_s = 50
xs = ideal_sampling(x, k, w_s)
y = ideal_reconstruction(xs, k, w_s)
plt.rcParams['figure.figsize'] = 7, 2
plot_signals(xs, y, w_s, k)
```
```python
w_s = 10
xs = ideal_sampling(x, k, w_s)
y = ideal_reconstruction(xs, k, w_s)
plt.rcParams['figure.figsize'] = 7, 2
plot_signals(xs, y, w_s, k)
```
```python
w_s = 7
xs = ideal_sampling(x, k, w_s)
y = ideal_reconstruction(xs, k, w_s)
plt.rcParams['figure.figsize'] = 7, 2
plot_signals(xs, y, w_s, k)
```
**Exercise to hand in (in pairs)**
- Analyze the spectrum of the signal $x(t) = \cos(5t) + 0.7\cos(7t)$.
- What is the highest frequency in $x(t)$?
- Define an undersampling frequency $\omega_\text{sub}$.
- Define a critical sampling frequency $\omega_\text{cri}$.
- Define an oversampling frequency $\omega_\text{sob}$.
- With each defined frequency ($\omega_\text{sub}$, $\omega_\text{cri}$ and $\omega_\text{sob}$), sample the signal $x(t)$ to obtain the discrete-time signals $x_{\omega_\text{sub}}$, $x_{\omega_\text{cri}}$ and $x_{\omega_\text{sob}}$, respectively.
- Reconstruct the signal from the sampled signals. Call the reconstructed signals $y_{\omega_\text{sub}}$, $y_{\omega_\text{cri}}$ and $y_{\omega_\text{sob}}$.
- Compare the spectrum of the original signal $x(t)$ with the spectra of the reconstructed signals $y_{\omega_\text{sub}}$, $y_{\omega_\text{cri}}$ and $y_{\omega_\text{sob}}$. **Take into account the sampling frequency associated with each case.**
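As a possible starting point for the spectrum comparison, an FFT-based sketch follows (the helper name and the use of `np.fft.rfft` are illustrative choices, not prescribed by the exercise):
```python
import numpy as np
import matplotlib.pyplot as plt

def plot_spectrum(x_samples, w_s):
    """Magnitude spectrum of uniform samples taken at w_s rad/s."""
    N = len(x_samples)
    X = np.fft.rfft(x_samples) / N                          # normalized FFT
    w = 2 * np.pi * np.fft.rfftfreq(N, d=2 * np.pi / w_s)   # axis in rad/s
    plt.stem(w, np.abs(X))
    plt.xlabel(r'$\omega$ in rad/s'); plt.ylabel(r'$|X|$')
```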
```python
```
|
If $f$ is holomorphic on an open, bounded, connected set $S$ and continuous on the closure of $S$, then there exists a point $w$ on the boundary of $S$ such that $|f(z)| \leq |f(w)|$ for all $z$ in the closure of $S$.
|
#!/usr/bin/Rscript
# Bhishan Poudel
# Jan 8, 2016
# clear; Rscript rmatrix.r; rm *~
# ref: http://www.programiz.com/r-programming/vector
# 5 data types in R : vectors, matrix, list, data frame and a factor
setwd("~/Copy/2016Spring/RProgramming/presentation/dataTypes/")
#########################################################################################################
# MATRIX CREATION
#########################################################################################################
cat("\nMatrix is a two dimensional data structure in R.")
cat("\nWe can check if a variable is a matrix or not with the class() function.")
cat("\nMatrix is very much similar to vectors but additionally contains the")
cat("\ndimension attribute. All attributes of an object can be checked with the")
cat("\nattributes() function (dimension can be checked directly ")
cat("\nwith the dim() function\n")
cat("\nCreating a Matrix \n")
matrix(1:9, nrow=3, ncol=3)
# same result is obtained by providing only one dimension
#matrix(1:9, nrow=3)
matrix(1:9, nrow=3, byrow=TRUE) # fill matrix row-wise
x <- matrix(1:9, nrow=3, dimnames=list(c("X","Y","Z"),c("A","B","C")))
x
cat("\nThese names can be accessed or changed with colnames() and rownames().\n")
colnames(x)
rownames(x)
# It is also possible to change names
colnames(x) <- c("C1","C2","C3")
rownames(x) <- c("R1","R2","R3")
x
cat("\nAnother way is using cbind() and rbind()\n")
cbind(c(1,2,3),c(4,5,6))
rbind(c(1,2,3),c(4,5,6))
cat("\ncreate a matrix from a vector by setting its dimension using dim(). \n")
x <- c(1,2,3,4,5,6); x; class(x)
dim(x) <- c(2,3); x; class(x)
#########################################################################################################
# ACCESSING MATRIX ELEMENTS
#########################################################################################################
cat("\nAccessing Elements in Matrix ")
cat("\nElements can be accessed as var[row, column]\n")
cat("\nUsing integer vector as index\n")
x <- matrix(1:9, nrow=3); x
x[c(1,2),c(2,3)] # select rows 1 & 2 and columns 2 & 3
x[c(3,2),] # leaving column field blank will select entire columns
x[,] # leaving row as well as column field blank will select entire matrix
x[-1,] # select all rows except first
x[1,]; class(x[1,])
cat("Using logical vector as index\n")
cat("Two logical vectors can be used to index a matrix.\n")
x <- cbind(c(4,6,1),c(8,0,2),c(3,7,9)); x
x[c(TRUE,FALSE)] # indexes recycled T F T F
x[c(TRUE,FALSE,TRUE),c(TRUE,TRUE,FALSE)]
x[c(TRUE,FALSE),c(2,3)]
# TF will be recycled, columns 2 and 3 will be read
cat("extracting matrix elements\n")
x[x>5] # select elements greater than 5
x[x%%2 == 0] # select even elements
#example 2
mymat = matrix(1:12,4,3)
mymat = matrix(1:12,ncol=3,byrow=F)
# reading from a data file
m <- matrix(scan('matrix.dat'),nrow=3,byrow=TRUE); m
cat("scanning the numbers from a file\n")
nums = scan('matrix.dat'); nums
######################################
a <- matrix(1:9, nrow = 3)
colnames(a) <- c("A", "B", "C"); a
a[1:2, ]
#> A B C
#> [1,] 1 4 7
#> [2,] 2 5 8
a[c(T, F, T), c("B", "A")]
# TFT will be recycled, first B will be read then A will be read
#> B A
#> [1,] 4 1
#> [2,] 6 3
a[0, -2]
#> A C
#########################################################################################################
# MATRIX MODIFICATION
#########################################################################################################
cat("\nModifying a Matrix \n")
x <- matrix(1:9, nrow = 3); x
x[2,2] <- 10; x # modify a single element
x[x<5] <- 0; x # modify elements less than 5
cat("\ntranspose a matrix \n")
t(x)
cat("\nWe can add row or column using rbind() and cbind() function \n")
cbind(x,c(1,2,3)) # add columns
rbind(x,c(1,2,3)) # add row
x <- x[1:2,]; x # remove last row
cat("\nDimension of matrix can be modified as well, using the dim() function. \n")
x <- matrix(1:6, nrow = 2); x
dim(x) <- c(3,2); x # change to 3X2 matrix
dim(x) <- c(1,6); x # change to 1X6 matrix
cat("\nMatrix Mathematics \n")
x <- matrix(1:9, nrow = 3,byrow=T); x
y <- matrix(1:9, nrow = 3,byrow=T); y
z <- x %*% y; z;
det(x)
rowSums(x)
rowMeans(x)
z1 <- cbind(x,y); z1
# finding xy' and x'y
cat("\nxy' is outer product and x'y is cross product\n")
a<- x %o% y ; a # xy'
a<- crossprod(x,y); a
cat("\nfinding inverse using solve\n")
A=rbind(c(1, 2), c(3, 4)); A
solve(A) # inverse of A
# find inverses using solve
A =rbind(c(1:3),c(0,4,5),c(1,0,6))
det(A)
solve(A)* det(A)
# inverse using ginv
cat("\ninverse of A using ginv is \n")
library( MASS)
ginv(A)*det(A)
#1 2 3 inverse is 1/22 * 24 -12 -2
#0 4 5 5 3 -5
#1 0 6 -4 2 4
cat("\n \n")
cat("\n \n")
# THE END
############################################################################################
#Matrix facilities
#In the following examples, A and B are matrices and x and b are vectors.
#Operator or Function Description
#A * B Element-wise multiplication
#A %*% B Matrix multiplication
#A %o% B Outer product. AB'
#crossprod(A,B)
#crossprod(A) A'B and A'A respectively.
#t(A) Transpose
#diag(x) Creates diagonal matrix with elements of x in the principal diagonal
#diag(A) Returns a vector containing the elements of the principal diagonal
#diag(k) If k is a scalar, this creates a k x k identity matrix. Go figure.
#solve(A, b) Returns vector x in the equation b = Ax (i.e., A-1b)
#solve(A) Inverse of A where A is a square matrix.
#ginv(A) Moore-Penrose Generalized Inverse of A.
#ginv(A) requires loading the MASS package.
#y<-eigen(A) y$val are the eigenvalues of A
#y$vec are the eigenvectors of A
#y<-svd(A) Single value decomposition of A.
#y$d = vector containing the singular values of A
#y$u = matrix with columns contain the left singular vectors of A
#y$v = matrix with columns contain the right singular vectors of A
#R <- chol(A) Choleski factorization of A. Returns the upper triangular factor, such that R'R = A.
#y <- qr(A) QR decomposition of A.
#y$qr has an upper triangle that contains the decomposition and a lower triangle that contains information on the Q decomposition.
#y$rank is the rank of A.
#y$qraux a vector which contains additional information on Q.
#y$pivot contains information on the pivoting strategy used.
#cbind(A,B,...) Combine matrices(vectors) horizontally. Returns a matrix.
#rbind(A,B,...) Combine matrices(vectors) vertically. Returns a matrix.
#rowMeans(A) Returns vector of row means.
#rowSums(A) Returns vector of row sums.
#colMeans(A) Returns vector of column means.
#colSums(A) Returns vector of column sums.
|
\name{print.CV_Data}
\alias{print.CV_Data}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
Printing simple information of the cross-validation data
}
\description{
This function prints simple information of the cross-validation data stored in a \code{CV_Data} object. This object is the field \code{$cv_data} of a \code{Full_PAFit_result} object, which in turn is the returning value of \code{\link{only_A_estimate}}, \code{\link{only_F_estimate}} or \code{\link{joint_estimate}}.
}
\usage{
\method{print}{CV_Data}(x,...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{x}{
An object of class \code{CV_Data}.
}
\item{\dots}{
Further arguments passed to or from other methods.
}
}
\value{
Prints simple information of the cross-validation data.
}
\author{
Thong Pham \email{[email protected]}
}
\examples{
## Since the runtime is long, we do not let this example run on CRAN
\dontrun{
library("PAFit")
set.seed(1)
# a network from Bianconi-Barabasi model
net <- generate_BB(N = 1000 , m = 50 ,
num_seed = 100 , multiple_node = 100,
s = 10)
net_stats <- get_statistics(net)
result <- joint_estimate(net, net_stats)
print(result$cv_data)
}
}
|
module Main where
import Data.Word
import Data.List.Split
import Data.Scientific as Scientific
import Data.Complex
import Control.Monad
import Codec.Picture
import Data.List
import Data.Time.Clock
import Data.Time.Calendar
import Graphics.X11.Xlib
import System.Exit (exitWith, ExitCode(..))
import Control.Concurrent
import Data.Bits
import GHC.Conc (numCapabilities)
import Graphical
import Mand
import GenerateSet
import GenerateInParallel
main :: IO ()
main = do
taking_input <- newMVar True
-- writePng ("benchmark.png") $ generateImage (\x y -> generatingFunction 0.0005 (-2) (2) (2) x y) 8000 8000
generateInParallel
-- forever $ loop taking_input
loop taking_input = do
--print $ head $ reverse $ general_list_R mand_iterationA ((scientific 24 (-2)) :+ scientific 2 (-1))
--print $ head $ reverse $ general_list_D mand_iteration (0.24 :+ (0.2))
--writePng ("benchmark.png") $ generateImage (\x y -> f 0.0005 (-2) (2) (2) x y) 8000 8000
input <- promptForImput taking_input "set for generating mandelbrot set, mov for the movement of the mandelbrot, grp for graphical, rang for range in a gif, sci for using scientific data type "
case input of
"set" -> generateSet taking_input
"mov" -> generateMovement
"grp" -> forkIO (startGraphical taking_input)>> return ()
"rang" -> range
"sci" -> generateSciSet
"series" -> generateSeriesOfSets
"test" -> tester
"acc" -> generateAccurateSet
"par" -> generateInParallel
generateAccurateSet = print "yet again not finished"
tester = do
writePng ("benchmark.png") $ generateImage (\x y -> generatingFunction 0.0005 (-2) (2) (2) x y) 8000 8000
--stepf srf sif realend x y
generateSeriesOfSets = print "incomplete"
generateSciSet = do
print "nothign yet"
-- print $ mi (makeSciComplex "1e-1" "2e-1") $ makeSciComplex "-9e-1" "4e-2"
--mi :: CS -> CS -> CS
--mi c z = c + z*z
makeSciComplex :: String -> String -> CS
makeSciComplex x y = (read x :: S) :+ (read x :: S)
range = do
print "in complete"
generateMovement = do
let list = [fromIntegral x/10:+ fromIntegral y/10 |x <-[(-400)..399], y <- [(-400)..399]]
let zlist = foldl (\acc x -> (0:+0):acc) [] [0..639999]
let list' = map' mand_iteration list list
--print list
--writePng "movv5_i1.png" $ generateImage (genMovImg $ map roundComplex2 list) 399 399
--writePng "movv5_i2.png" $ generateImage (genMovImg $ map roundComplex2 list') 399 399
--let list'1 = filterC $ map' mand_iteration list list'
--writePng "movv5_i3.png" $ generateImage (genMovImg $ map roundComplex2 list'1) 399 399
--let list'2 = filterC $ map' mand_iteration list list'1
--writePng "movv5_i4.png" $ generateImage (genMovImg $ map roundComplex2 list'2) 399 399
--let list'3 = filterC $ map' mand_iteration list list'2
--writePng "movv5_i5.png" $ generateImage (genMovImg $ map roundComplex2 list'3) 399 399
--let list'4 = filterC $ map' mand_iteration list list'3
mapM_ (curry' writePng) $ reverse $ snd $ mapAccumR nextMov (list, zlist) [0..10]
print "complete"
curry' :: (a -> b -> c) -> (a, b) -> c
curry' f (a,b) = f a b
nextMov (last, original) x = ((nextl, nextOrig), ("movv10_i"++(show x)++".png",generateImage (genMovImg $ map roundComplex2 nextl) 3999 3999))
where
(nextl, nextOrig) = filter2 (map' mand_iteration original last ) original removeLargeC
removeLargeC :: (Num a, Ord a )=> Complex a -> Bool
removeLargeC (a :+ b) = if abs a >2 || abs b > 2 then False else True
filterC = filter removeLargeC
filter2 :: [a] -> [b] -> (a -> Bool) -> ([a],[b])
filter2 x y f = (fst ziped,snd ziped)
where
ziped = foldr (\(a,b) (ac,bc) -> if f a then (a:ac,b:bc) else (ac,bc)) ([],[]) $ zip x y
genMovImg arr x y = if elem (fromIntegral (x-2000) /1000 :+ fromIntegral (y-2000)/1000) arr then PixelRGB8 0 0 0 else PixelRGB8 255 255 255
map' :: (a -> a -> b) -> [a] -> [a] -> [b]
map' _ [] [] = []
map' f (x:xs) (y:ys) = (f x y):(map' f xs ys)
g :: [Bool]
g = foldr (\x acc -> if mod x 10 == 0 then True:acc else False:acc) [] [0..399]
h :: [[Bool]]
h = foldr (\x acc -> if mod x 10 == 0 then g:acc else i:acc) [] [0..399]
i :: [Bool]
i = foldr (\_ acc -> False:acc) [] [0..399]
falseArr = foldr (\_ acc -> i:acc) [] [0..399]
toComplex :: RealFloat a => [[Bool]] -> [Complex a]
toComplex arr = foldr (\x acc1 -> (foldr (\y acc2 -> if arr !! x !! y then (fromIntegral (x-200)/100 :+ fromIntegral (y-200)/100):acc2 else acc2) [] [0..((length $ arr !! 0)-1)])++acc1) [] [0..((length arr) -1)]
toArr :: RealFloat a => [Complex a] -> [[Bool]]
toArr arr = f arr falseArr
where
rarr = map roundComplex2 arr
f :: RealFloat a=> [Complex a] -> [[Bool]]-> [[Bool]]
f [] fa = fa
f (x:xs) fa = f xs $ modifyarr fa newx newy
where
newx = round $ 1000*(realPart x) +2000
newy = round $ 1000*(imagPart x) +2000
modifyarr :: [[Bool]] -> Int -> Int -> [[Bool]]
modifyarr arr x y = foldr (\a acc -> (foldr (\b acc2 -> if x == a && b == y then True:acc2 else (arr !! x !! y):acc2) [] [0..((length $ arr !! 0)-1)]):acc) [] [0..((length arr)-1)]
roundComplex2 :: RealFloat a => Complex a -> Complex a
roundComplex2 (a :+ b) = (fromIntegral (round $ a*1000)/1000 :+ fromIntegral (round $ b*1000)/1000)
|
function eframes = vl_frame2oell(frames)
% VL_FRAMES2OELL Convert a geometric frame to an oriented ellipse
% EFRAME = VL_FRAME2OELL(FRAME) converts the generic FRAME to an
% oriented ellipses EFRAME. FRAME and EFRAME can be matrices, with
% one frame per column.
%
% A frame is either a point, a disc, an oriented disc, an ellipse,
% or an oriented ellipse. These are represented respectively by 2,
% 3, 4, 5 and 6 parameters each, as described in VL_PLOTFRAME(). An
% oriented ellipse is the most general geometric frame; hence, there
% is no loss of information in this conversion.
%
% If FRAME is an oriented disc or ellipse, then the conversion is
% immediate. If, however, FRAME is not oriented (it is either a
% point or an unoriented disc or ellipse), then an orientation must
% be assigned. The orientation is chosen in such a way that the
% affine transformation that maps the standard oriented frame into
% the output EFRAME does not rotate the Y axis. If frames represent
% detected visual features, this convention corresponds to assume
% that features are upright.
%
% If FRAME is a point, then the output is an ellipse with null area.
%
% See: <a href="matlab:vl_help('tut.frame')">feature frames</a>,
% VL_PLOTFRAME(), VL_HELP().
% Author: Andrea Vedaldi
% Copyright (C) 2013 Andrea Vedaldi and Brian Fulkerson.
% All rights reserved.
%
% This file is part of the VLFeat library and is made available under
% the terms of the BSD license (see the COPYING file).
[D,K] = size(frames) ;
eframes = zeros(6,K) ;
switch D
case 2
eframes(1:2,:) = frames(1:2,:) ;
case 3
eframes(1:2,:) = frames(1:2,:) ;
eframes(3,:) = frames(3,:) ;
eframes(6,:) = frames(3,:) ;
case 4
r = frames(3,:) ;
c = r.*cos(frames(4,:)) ;
s = r.*sin(frames(4,:)) ;
eframes(1:2,:) = frames(1:2,:) ;
eframes(3:6,:) = [c ; s ; -s ; c] ;
case 5
eframes(1:2,:) = frames(1:2,:) ;
eframes(3:6,:) = mapFromS(frames(3:5,:)) ;
case 6
eframes = frames ;
otherwise
error('FRAMES format is unknown.') ;
end
% --------------------------------------------------------------------
function A = mapFromS(S)
% --------------------------------------------------------------------
% Returns the (stacking of the) 2x2 matrix A that maps the unit circle
% into the ellipses satisfying the equation x' inv(S) x = 1. Here S
% is a stacked covariance matrix, with elements S11, S12 and S22.
%
% The goal is to find A such that AA' = S. In order to leave the Y
% direction unaffected (upright feature), the assumption is that
% A is lower triangular, A = [a 0 ; b c]. Hence
%
% AA' = [a^2, ab ; ab, b^2+c^2] = S.
A = zeros(4,size(S,2)) ;
a = sqrt(S(1,:));
b = S(2,:) ./ max(a, 1e-18) ;
A(1,:) = a ;
A(2,:) = b ;
A(4,:) = sqrt(max(S(3,:) - b.*b, 0)) ;
|
#' Adaptive COVID TCRB CDR3 data
#'
#' Unique TCRB CDR3 sequences from the Nolan et al. 2020. CDR3s were extracted via IgBLAST. The license for this data is Creative Commons Attribution 4.0 International License.
#'
#' @docType data
#'
#' @usage data(covid_cdr3)
#'
#' @format A character vector of length 133,034.
#'
#' @keywords datasets
#'
#' @references Nolan, Sean, et al. "A large-scale database of T-cell receptor beta (TCRβ) sequences and binding associations from natural and synthetic exposure to SARS-CoV-2." (2020).
#' (\href{https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7418738/}{PubMed})
#'
#' @source \href{https://clients.adaptivebiotech.com/pub/covid-2020}{ImmuneACCESS Data}
#'
#' @examples
#' data(covid_cdr3)
#' # Average CDR3 length
#' mean(nchar(covid_cdr3)) # [1] 43.56821
#'
"covid_cdr3"
|
Speculation ID: 1
Discarded a discarded buffer
|
module GwfRivSubs
use GwfRivModule, only: SGWF2RIV7PNT, SGWF2RIV7PSV
private
public :: GWF2RIV7AR, GWF2RIV7RP
contains
SUBROUTINE GWF2RIV7AR(IN,IGRID)
C ******************************************************************
C ALLOCATE ARRAY STORAGE FOR RIVERS AND READ PARAMETER DEFINITIONS.
C ******************************************************************
C
C SPECIFICATIONS:
C ------------------------------------------------------------------
USE GLOBAL, ONLY:IOUT,NCOL,NROW,NLAY,IFREFM
USE GWFRIVMODULE, ONLY:NRIVER,MXRIVR,NRIVVL,IRIVCB,IPRRIV,NPRIV,
1 IRIVPB,NNPRIV,RIVAUX,RIVR
use utl7module, only: U1DREL, U2DREL, !UBDSV1, UBDSV2, UBDSVA,
& urword, URDCOM, !UBDSV4, UBDSVB,
& ULSTRD
C
CHARACTER*200 LINE
double precision :: r
C ------------------------------------------------------------------
C
C1------Allocate scalar variables, which makes it possible for multiple
C1------grids to be defined.
ALLOCATE(NRIVER,MXRIVR,NRIVVL,IRIVCB,IPRRIV,NPRIV,IRIVPB,NNPRIV)
C
C2------IDENTIFY PACKAGE AND INITIALIZE NRIVER AND NNPRIV.
WRITE(IOUT,1)IN
1 FORMAT(1X,/1X,'RIV -- RIVER PACKAGE, VERSION 7, 5/2/2005',
1' INPUT READ FROM UNIT ',I4)
NRIVER=0
NNPRIV=0
C
C3------READ MAXIMUM NUMBER OF RIVER REACHES AND UNIT OR FLAG FOR
C3------CELL-BY-CELL FLOW TERMS.
CALL URDCOM(IN,IOUT,LINE)
CALL UPARLSTAL(IN,IOUT,LINE,NPRIV,MXPR)
IF(IFREFM.EQ.0) THEN
READ(LINE,'(2I10)') MXACTR,IRIVCB
LLOC=21
ELSE
LLOC=1
CALL URWORD(LINE,LLOC,ISTART,ISTOP,2,MXACTR,R,IOUT,IN)
CALL URWORD(LINE,LLOC,ISTART,ISTOP,2,IRIVCB,R,IOUT,IN)
END IF
WRITE(IOUT,3) MXACTR
3 FORMAT(1X,'MAXIMUM OF ',I6,' ACTIVE RIVER REACHES AT ONE TIME')
IF(IRIVCB.LT.0) WRITE(IOUT,7)
7 FORMAT(1X,'CELL-BY-CELL FLOWS WILL BE PRINTED WHEN ICBCFL NOT 0')
IF(IRIVCB.GT.0) WRITE(IOUT,8) IRIVCB
8 FORMAT(1X,'CELL-BY-CELL FLOWS WILL BE SAVED ON UNIT ',I4)
C
C4------READ AUXILIARY VARIABLES AND PRINT OPTION.
ALLOCATE (RIVAUX(20))
NAUX=0
IPRRIV=1
10 CALL URWORD(LINE,LLOC,ISTART,ISTOP,1,N,R,IOUT,IN)
IF(LINE(ISTART:ISTOP).EQ.'AUXILIARY' .OR.
1 LINE(ISTART:ISTOP).EQ.'AUX') THEN
CALL URWORD(LINE,LLOC,ISTART,ISTOP,1,N,R,IOUT,IN)
IF(NAUX.LT.20) THEN
NAUX=NAUX+1
RIVAUX(NAUX)=LINE(ISTART:ISTOP)
WRITE(IOUT,12) RIVAUX(NAUX)
12 FORMAT(1X,'AUXILIARY RIVER VARIABLE: ',A)
END IF
GO TO 10
ELSE IF(LINE(ISTART:ISTOP).EQ.'NOPRINT') THEN
WRITE(IOUT,13)
13 FORMAT(1X,'LISTS OF RIVER CELLS WILL NOT BE PRINTED')
IPRRIV = 0
GO TO 10
END IF
C
C5------ALLOCATE SPACE FOR RIVER ARRAYS.
C5------FOR EACH REACH, THERE ARE SIX INPUT DATA VALUES PLUS ONE
C5------LOCATION FOR CELL-BY-CELL FLOW.
NRIVVL=7+NAUX
IRIVPB=MXACTR+1
MXRIVR=MXACTR+MXPR
ALLOCATE (RIVR(NRIVVL,MXRIVR))
C
C6------READ NAMED PARAMETERS.
WRITE(IOUT,99) NPRIV
99 FORMAT(1X,//1X,I5,' River parameters')
IF(NPRIV.GT.0) THEN
LSTSUM=IRIVPB
DO 120 K=1,NPRIV
LSTBEG=LSTSUM
CALL UPARLSTRP(LSTSUM,MXRIVR,IN,IOUT,IP,'RIV','RIV',1,
& NUMINST, .true.)
NLST=LSTSUM-LSTBEG
IF (NUMINST.EQ.0) THEN
C6A-----READ PARAMETER WITHOUT INSTANCES
CALL ULSTRD(NLST,RIVR,LSTBEG,NRIVVL,MXRIVR,1,IN,
& IOUT,'REACH NO. LAYER ROW COL'//
& ' STAGE STRESS FACTOR BOTTOM EL.',
& RIVAUX,20,NAUX,IFREFM,NCOL,NROW,NLAY,5,5,IPRRIV)
ELSE
C6B-----READ INSTANCES
NINLST = NLST/NUMINST
DO 110 I=1,NUMINST
CALL UINSRP(I,IN,IOUT,IP,IPRRIV)
CALL ULSTRD(NINLST,RIVR,LSTBEG,NRIVVL,MXRIVR,1,IN,
& IOUT,'REACH NO. LAYER ROW COL'//
& ' STAGE STRESS FACTOR BOTTOM EL.',
& RIVAUX,20,NAUX,IFREFM,NCOL,NROW,NLAY,5,5,IPRRIV)
LSTBEG=LSTBEG+NINLST
110 CONTINUE
END IF
120 CONTINUE
END IF
C
C7------SAVE POINTERS TO DATA AND RETURN.
CALL SGWF2RIV7PSV(IGRID)
RETURN
END SUBROUTINE GWF2RIV7AR
!***********************************************************************
SUBROUTINE GWF2RIV7RP(IN,IGRID)
C ******************************************************************
C READ RIVER HEAD, CONDUCTANCE AND BOTTOM ELEVATION
C ******************************************************************
C
C SPECIFICATIONS:
C ------------------------------------------------------------------
USE GLOBAL, ONLY:IOUT,NCOL,NROW,NLAY,IFREFM
USE GWFRIVMODULE, ONLY:NRIVER,MXRIVR,NRIVVL,IPRRIV,NPRIV,
1 IRIVPB,NNPRIV,RIVAUX,RIVR
use utl7module, only: U1DREL, U2DREL, !UBDSV1, UBDSV2, UBDSVA,
& urword, URDCOM, !UBDSV4, UBDSVB,
& ULSTRD
use SimModule, only: ustop
C ------------------------------------------------------------------
CALL SGWF2RIV7PNT(IGRID)
C
C1------READ ITMP (NUMBER OF RIVER REACHES OR FLAG TO REUSE DATA) AND
C1------NUMBER OF PARAMETERS.
IF(NPRIV.GT.0) THEN
IF(IFREFM.EQ.0) THEN
READ(IN,'(2I10)') ITMP,NP
ELSE
READ(IN,*) ITMP,NP
END IF
ELSE
NP=0
IF(IFREFM.EQ.0) THEN
READ(IN,'(I10)') ITMP
ELSE
READ(IN,*) ITMP
END IF
END IF
C
C------CALCULATE SOME CONSTANTS
NAUX=NRIVVL-7
IOUTU = IOUT
IF (IPRRIV.EQ.0) IOUTU = -IOUT
C
C2------DETERMINE THE NUMBER OF NON-PARAMETER REACHES.
IF(ITMP.LT.0) THEN
WRITE(IOUT,7)
7 FORMAT(1X,/1X,
1 'REUSING NON-PARAMETER RIVER REACHES FROM LAST STRESS PERIOD')
ELSE
NNPRIV=ITMP
END IF
C
C3------IF THERE ARE NEW NON-PARAMETER REACHES, READ THEM.
MXACTR=IRIVPB-1
IF(ITMP.GT.0) THEN
IF(NNPRIV.GT.MXACTR) THEN
WRITE(IOUT,99) NNPRIV,MXACTR
99 FORMAT(1X,/1X,'THE NUMBER OF ACTIVE REACHES (',I6,
1 ') IS GREATER THAN MXACTR(',I6,')')
CALL USTOP(' ')
END IF
CALL ULSTRD(NNPRIV,RIVR,1,NRIVVL,MXRIVR,1,IN,IOUT,
1 'REACH NO. LAYER ROW COL'//
2 ' STAGE CONDUCTANCE BOTTOM EL.',
3 RIVAUX,20,NAUX,IFREFM,NCOL,NROW,NLAY,5,5,IPRRIV)
END IF
NRIVER=NNPRIV
C
C1C-----IF THERE ARE ACTIVE RIV PARAMETERS, READ THEM AND SUBSTITUTE
CALL PRESET('RIV')
IF(NP.GT.0) THEN
NREAD=NRIVVL-1
DO 30 N=1,NP
CALL UPARLSTSUB(IN,'RIV',IOUTU,'RIV',RIVR,NRIVVL,MXRIVR,NREAD,
1 MXACTR,NRIVER,5,5,
2 'REACH NO. LAYER ROW COL'//
3 ' STAGE CONDUCTANCE BOTTOM EL.',RIVAUX,20,NAUX)
30 CONTINUE
END IF
C
C3------PRINT NUMBER OF REACHES IN CURRENT STRESS PERIOD.
WRITE (IOUT,101) NRIVER
101 FORMAT(1X,/1X,I6,' RIVER REACHES')
C
C8------RETURN.
260 RETURN
END SUBROUTINE GWF2RIV7RP
end module GwfRivSubs
|
/* eigen/eigen_sort.c
*
* Copyright (C) 1996, 1997, 1998, 1999, 2000 Gerard Jungman
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or (at
* your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
/* Author: G. Jungman, Modified: B. Gough. */
#include <config.h>
#include <stdlib.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_eigen.h>
#include <gsl/gsl_complex.h>
#include <gsl/gsl_complex_math.h>
/* The eigen_sort below is not very good, but it is simple and
* self-contained. We can always implement an improved sort later. */
int
gsl_eigen_symmv_sort (gsl_vector * eval, gsl_matrix * evec,
gsl_eigen_sort_t sort_type)
{
if (evec->size1 != evec->size2)
{
GSL_ERROR ("eigenvector matrix must be square", GSL_ENOTSQR);
}
else if (eval->size != evec->size1)
{
GSL_ERROR ("eigenvalues must match eigenvector matrix", GSL_EBADLEN);
}
else
{
const size_t N = eval->size;
size_t i;
for (i = 0; i < N - 1; i++)
{
size_t j;
size_t k = i;
double ek = gsl_vector_get (eval, i);
/* search for something to swap */
for (j = i + 1; j < N; j++)
{
int test;
const double ej = gsl_vector_get (eval, j);
switch (sort_type)
{
case GSL_EIGEN_SORT_VAL_ASC:
test = (ej < ek);
break;
case GSL_EIGEN_SORT_VAL_DESC:
test = (ej > ek);
break;
case GSL_EIGEN_SORT_ABS_ASC:
test = (fabs (ej) < fabs (ek));
break;
case GSL_EIGEN_SORT_ABS_DESC:
test = (fabs (ej) > fabs (ek));
break;
default:
GSL_ERROR ("unrecognized sort type", GSL_EINVAL);
}
if (test)
{
k = j;
ek = ej;
}
}
if (k != i)
{
/* swap eigenvalues */
gsl_vector_swap_elements (eval, i, k);
/* swap eigenvectors */
gsl_matrix_swap_columns (evec, i, k);
}
}
return GSL_SUCCESS;
}
}
int
gsl_eigen_hermv_sort (gsl_vector * eval, gsl_matrix_complex * evec,
gsl_eigen_sort_t sort_type)
{
if (evec->size1 != evec->size2)
{
GSL_ERROR ("eigenvector matrix must be square", GSL_ENOTSQR);
}
else if (eval->size != evec->size1)
{
GSL_ERROR ("eigenvalues must match eigenvector matrix", GSL_EBADLEN);
}
else
{
const size_t N = eval->size;
size_t i;
for (i = 0; i < N - 1; i++)
{
size_t j;
size_t k = i;
double ek = gsl_vector_get (eval, i);
/* search for something to swap */
for (j = i + 1; j < N; j++)
{
int test;
const double ej = gsl_vector_get (eval, j);
switch (sort_type)
{
case GSL_EIGEN_SORT_VAL_ASC:
test = (ej < ek);
break;
case GSL_EIGEN_SORT_VAL_DESC:
test = (ej > ek);
break;
case GSL_EIGEN_SORT_ABS_ASC:
test = (fabs (ej) < fabs (ek));
break;
case GSL_EIGEN_SORT_ABS_DESC:
test = (fabs (ej) > fabs (ek));
break;
default:
GSL_ERROR ("unrecognized sort type", GSL_EINVAL);
}
if (test)
{
k = j;
ek = ej;
}
}
if (k != i)
{
/* swap eigenvalues */
gsl_vector_swap_elements (eval, i, k);
/* swap eigenvectors */
gsl_matrix_complex_swap_columns (evec, i, k);
}
}
return GSL_SUCCESS;
}
}
int
gsl_eigen_nonsymmv_sort (gsl_vector_complex * eval,
gsl_matrix_complex * evec,
gsl_eigen_sort_t sort_type)
{
if (evec->size1 != evec->size2)
{
GSL_ERROR ("eigenvector matrix must be square", GSL_ENOTSQR);
}
else if (eval->size != evec->size1)
{
GSL_ERROR ("eigenvalues must match eigenvector matrix", GSL_EBADLEN);
}
else
{
const size_t N = eval->size;
size_t i;
for (i = 0; i < N - 1; i++)
{
size_t j;
size_t k = i;
gsl_complex ek = gsl_vector_complex_get (eval, i);
/* search for something to swap */
for (j = i + 1; j < N; j++)
{
int test;
const gsl_complex ej = gsl_vector_complex_get (eval, j);
switch (sort_type)
{
case GSL_EIGEN_SORT_ABS_ASC:
test = (gsl_complex_abs (ej) < gsl_complex_abs (ek));
break;
case GSL_EIGEN_SORT_ABS_DESC:
test = (gsl_complex_abs (ej) > gsl_complex_abs (ek));
break;
case GSL_EIGEN_SORT_VAL_ASC:
case GSL_EIGEN_SORT_VAL_DESC:
default:
GSL_ERROR ("invalid sort type", GSL_EINVAL);
}
if (test)
{
k = j;
ek = ej;
}
}
if (k != i)
{
/* swap eigenvalues */
gsl_vector_complex_swap_elements (eval, i, k);
/* swap eigenvectors */
gsl_matrix_complex_swap_columns (evec, i, k);
}
}
return GSL_SUCCESS;
}
}
|
module Toolkit.Decidable.Equality.Views
import Decidable.Equality
import Toolkit.Decidable.Informative
%default total
public export
data AllEqual : (a,b,c : ty) -> Type where
AE : AllEqual a a a
public export
data Error = AB | AC
abNotEq : (a = b -> Void) -> AllEqual a b c -> Void
abNotEq f AE = f Refl
acNotEq : (a = c -> Void) -> AllEqual a b c -> Void
acNotEq f AE = f Refl
export
allEqual : DecEq type
=> (a,b,c : type)
-> DecInfo (Error) (AllEqual a b c)
allEqual a b c with (decEq a b)
allEqual a a c | (Yes Refl) with (decEq a c)
allEqual a a a | (Yes Refl) | (Yes Refl) = Yes AE
allEqual a a c | (Yes Refl) | (No contra)
= No (AC) (acNotEq contra)
allEqual a b c | (No contra)
= No (AB) (abNotEq contra)
-- [ EOF ]
|
module congruences
||| Congruence
public export
congruence : (ty1, ty2 : Type) -> (a : ty1) -> (b : ty1) -> (f : ty1 -> ty2) -> (a = b) -> ((f a) = (f b))
congruence ty1 ty2 a a f Refl = Refl
|
!*********************************************************************
!Written by Rachith Aiyappa at IISc, Bangalore
!Email [email protected] for any queries
!This module contains the functions for the N fish simulation
!*********************************************************************
!
module fish_func
contains
!*********************************************************************
!function generate_v
!This function generates kicking speeds of the respective focal fish
!Units of BL_tau
!*********************************************************************
function generate_v(r)
real,parameter :: pi = 3.14159265
real :: r
generate_v = sqrt((-16/pi)*log(1-r))
end function generate_v
!---------------------------------------------------------------------
!
!********************************************************************!
!function generate_tau
!This function generates kicking times of the respective focal fish
!Units of BL_tau
!********************************************************************!
function generate_tau(r)
real,parameter :: pi = 3.14159265
real :: r
generate_tau = sqrt((-4/pi)*log(1-r))
end function generate_tau
!--------------------------------------------------------------------
!
!********************************************************************
!function dist_calc
!This function calculates the distance from the focal fish
!********************************************************************
function dist_calc(col,focal_x,focal_y,x_temp,y_temp,N) result(d)
integer,intent(in) :: col,N
integer :: i
real, dimension(N,1) :: d
real,intent(in),dimension(N,1) :: x_temp,y_temp
real,intent(in) :: focal_x,focal_y
!write (*,*) focal_x,x_temp
do i=1,N,1
!write(*,*) d(i,col)
d(i,1) = sqrt(((focal_x-x_temp(i,1))**2) &
+ ((focal_y-y_temp(i,1))**2))
!write(*,*) d(i,col)
end do
!write(*,*) d
end function
!--------------------------------------------------------------------
!
!********************************************************************
!function del_phi_att
!This function calculate the change in heading angle due to attraction
!********************************************************************
function del_phi_att(gamma_att,d,psi,delta_phi)
!a is included to restore anticlockwise angle as positive
real,parameter :: a=-1.0
real,parameter :: l_att=5.0,d_zero=1.0
!real,parameter :: l_att=1.0 !proposed by Clement
real :: f_att,o_att,e_att
real,intent(in) :: gamma_att,d,psi,delta_phi
!write(*,*) gamma_att, d, psi, delta_phi
!write(*,*)
!original functions in the paper
f_att = ((d-d_zero)/d_zero)/(1.0+((d/l_att)**2.0))
o_att = sin(a*psi)*(1-(0.33*cos(a*psi)))
e_att = 1.0-(0.48*cos(a*delta_phi))- &
(0.31*cos(2.0*(a*delta_phi)))
!functions proposed by Clement
!f_att = (d/d_zero)/(1.0+((d/l_att)**2.0))
!o_att = sin(a*psi)*(1+cos(a*psi))
!e_att = 1
del_phi_att = gamma_att*f_att*o_att*e_att
end function del_phi_att
!---------------------------------------------------------------------
!********************************************************************
!function del_phi_ali
!This function calculate the change in heading angle due to alignment
!********************************************************************
function del_phi_ali(gamma_ali,d,psi,delta_phi)
!a is included to restore anticlockwise angle as positive
real,parameter :: a=-1.0
real,parameter :: l_ali = 5.0
real :: f_ali,o_ali,e_ali
real,intent(in) :: gamma_ali,d,psi,delta_phi
!write(*,*) gamma_ali, d, psi, delta_phi
!original functions in the paper
f_ali = exp(-1.0*((d/l_ali)**2.0))
e_ali = (1.0) + (0.6*cos(a*psi))-(0.32*cos(2.0*(a*psi)))
o_ali = (sin(a*delta_phi))*((1.0)+ &
(0.30*cos(2.0*(a*delta_phi))))
!functions proposed by Clement
!f_ali = exp(-1.0*((d/l_ali)**2.0))
!o_ali = sin(a*delta_phi)
!e_ali = 1+cos(a*psi)
del_phi_ali = gamma_ali*f_ali*o_ali*e_ali
end function del_phi_ali
!---------------------------------------------------------------------
!********************************************************************
!function gaus_rand
!This uses the Marsaglia polar method to create pseudo-random
!number pairs that have a normal distribution. The function
!saves one of the random numbers from the generated pair as a spare
!to be returned on the next call to the function.
!Returns a real scalar
!********************************************************************
function gaus_rand()
real :: gaus_rand
real :: mean=0,sd=1
!real,intent(inout) :: x,y
real :: r,x,y
real,save :: spare
logical, save :: has_spare
call init_random_seed()
!use a spare saved from a previous call if it exists
if(has_spare) then
has_spare = .false.
gaus_rand = mean + (sd*spare)
return
else
r = 1.0
do while (r .ge. 1.0)
!spreading distribution to [-1,1]
call random_number(x)
call random_number(y)
x = (x*2.0)-1.0
y = (y*2.0)-1.0
r = (x*x) + (y*y)
end do
r = sqrt((-2.0*log(r))/r)
gaus_rand = mean + (sd*x*r)
spare = y * r
has_spare = .true.
return
end if
end function
!*********************************************************************
!RANDOM SEED GENERATOR FOR THE RANDOM_NUMBER PROGRAM
!*********************************************************************
subroutine init_random_seed()
use iso_fortran_env, only: int64
implicit none
integer, allocatable :: seed(:)
integer :: i, n, un, istat, dt(8), pid
integer(int64) :: t
call random_seed(size = n)
allocate(seed(n))
! First try if the OS provides a random number generator
open(newunit=un, file="/dev/urandom", access="stream", &
form="unformatted", action="read", status="old", iostat=istat)
if (istat == 0) then
read(un) seed
close(un)
else
! Fallback to XOR:ing the current time and pid. The PID is
! useful in case one launches multiple instances of the same
! program in parallel.
call system_clock(t)
if (t == 0) then
call date_and_time(values=dt)
t = (dt(1) - 1970) * 365_int64 * 24 * 60 * 60 &
* 1000 &
+ dt(2) * 31_int64 * 24 * 60 * 60 * 1000 &
+ dt(3) * 24_int64 * 60 * 60 * 1000 &
+ dt(5) * 60 * 60 * 1000 &
+ dt(6) * 60 * 1000 + dt(7) * 1000 &
+ dt(8)
end if
pid = getpid()
t = ieor(t, int(pid, kind(t)))
do i = 1, n
seed(i) = lcg(t)
end do
end if
call random_seed(put=seed) !value of 'put' is the array used to initialize this thread's seed.
contains
! This simple PRNG might not be good enough for real work, but is
! sufficient for seeding a better PRNG.
function lcg(s)
integer :: lcg
integer(int64) :: s
if (s == 0) then
s = 104729
else
s = mod(s, 4294967296_int64)
end if
s = mod(s * 279470273_int64, &
4294967291_int64)
lcg = int(mod(s, int(huge(0), &
int64)), kind(0))
end function lcg
end subroutine init_random_seed
!*********************************************************************
end module fish_func
|
/*
* Copyright (c) 2014-present, Facebook, Inc.
* All rights reserved.
*
* This source code is licensed under the BSD-style license found in the
* LICENSE file in the root directory of this source tree. An additional grant
* of patent rights can be found in the PATENTS file in the same directory.
*
*/
#include <fcntl.h>
#include <grp.h>
#include <sys/file.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <time.h>
#include <unistd.h>
#include <istream>
#include <string>
#include <boost/algorithm/string/trim.hpp>
#include <boost/filesystem.hpp>
#include <boost/lexical_cast.hpp>
#include <boost/property_tree/json_parser.hpp>
#include <boost/tokenizer.hpp>
#include <osquery/filesystem.h>
#include <osquery/logger.h>
#include "osquery/events/linux/syslog.h"
namespace fs = boost::filesystem;
namespace pt = boost::property_tree;
namespace osquery {
FLAG(string,
syslog_pipe_path,
"/var/osquery/syslog_pipe",
"Path to the named pipe used for forwarding rsyslog events");
REGISTER(SyslogEventPublisher, "event_publisher", "syslog");
// rsyslog needs read/write access, osquery process needs read access
const mode_t kPipeMode = 0460;
const std::string kPipeGroupName = "syslog";
const char* kTimeFormat = "%Y-%m-%dT%H:%M:%S";
const std::vector<std::string> kCsvFields = {
"time", "host", "severity", "facility", "tag", "message"};
const size_t kMaxLogsPerRun = 10;
const size_t kErrorThreshold = 10;
Status SyslogEventPublisher::setUp() {
Status s;
if (!pathExists(FLAGS_syslog_pipe_path)) {
VLOG(1) << "Pipe does not exist. Creating pipe " << FLAGS_syslog_pipe_path;
s = createPipe(FLAGS_syslog_pipe_path);
if (!s.ok()) {
LOG(WARNING) << RLOG(1964)
<< "Problems encountered creating pipe: " << s.getMessage();
}
}
fs::file_status file_status = fs::status(FLAGS_syslog_pipe_path);
if (file_status.type() != fs::fifo_file) {
return Status(1, "Not a FIFO file: " + FLAGS_syslog_pipe_path);
}
// Try to acquire a lock on the pipe, to make sure we're the only osquery
// related process reading from it.
s = lockPipe(FLAGS_syslog_pipe_path);
if (!s.ok()) {
return s;
}
// Opening with both flags appears to be the only way to open the pipe
// without blocking for a writer. We won't ever write to the pipe, but we
// don't want to block here and will instead block waiting for a read in the
// run() method
readStream_.open(FLAGS_syslog_pipe_path,
std::ifstream::in | std::ifstream::out);
if (!readStream_.good()) {
return Status(1,
"Error opening pipe for reading: " + FLAGS_syslog_pipe_path);
}
VLOG(1) << "Successfully opened pipe for syslog ingestion: "
<< FLAGS_syslog_pipe_path;
return Status(0, "OK");
}
Status SyslogEventPublisher::createPipe(const std::string& path) {
if (mkfifo(FLAGS_syslog_pipe_path.c_str(), kPipeMode) != 0) {
return Status(1, "Error in mkfifo: " + std::string(strerror(errno)));
}
// Explicitly set the permissions since the umask will affect the
// permissions created by mkfifo
if (chmod(FLAGS_syslog_pipe_path.c_str(), kPipeMode) != 0) {
return Status(1, "Error in chmod: " + std::string(strerror(errno)));
}
// Try to set the group so that rsyslog will be able to write to the pipe
struct group* group = getgrnam(kPipeGroupName.c_str());
if (group == nullptr) {
VLOG(1) << "No group " << kPipeGroupName
<< " found. Not changing group for the pipe.";
return Status(0, "OK");
}
if (chown(FLAGS_syslog_pipe_path.c_str(), -1, group->gr_gid) == -1) {
return Status(1,
"Error in chown to group " + kPipeGroupName + ": " +
std::string(strerror(errno)));
}
return Status(0, "OK");
}
Status SyslogEventPublisher::lockPipe(const std::string& path) {
lockFd_ = open(path.c_str(), O_NONBLOCK);
if (lockFd_ == -1) {
return Status(
1, "Error in open for locking pipe: " + std::string(strerror(errno)));
}
if (flock(lockFd_, LOCK_EX | LOCK_NB) != 0) {
lockFd_ = -1;
return Status(
1, "Unable to acquire pipe lock: " + std::string(strerror(errno)));
}
return Status(0, "OK");
}
void SyslogEventPublisher::unlockPipe() {
if (lockFd_ != -1) {
if (flock(lockFd_, LOCK_UN) != 0) {
LOG(WARNING) << "Error unlocking pipe: " << std::string(strerror(errno));
}
}
}
Status SyslogEventPublisher::run() {
// This run function will be called by the event factory with ~100ms pause
// (see InterruptableRunnable::pause()) between runs. In case something goes
// weird and there is a huge amount of input, we limit how many logs we
// take in per run to avoid pegging the CPU.
for (size_t i = 0; i < kMaxLogsPerRun; ++i) {
if (readStream_.rdbuf()->in_avail() == 0) {
// If there is no pending data, we have flushed everything and can wait
// until the next time EventFactory calls run(). This also allows the
// thread to join when it is stopped by EventFactory.
return Status(0, "OK");
}
std::string line;
std::getline(readStream_, line);
auto ec = createEventContext();
Status status = populateEventContext(line, ec);
if (status.ok()) {
fire(ec);
if (errorCount_ > 0) {
--errorCount_;
}
} else {
LOG(ERROR) << status.getMessage() << " in line: " << line;
++errorCount_;
if (errorCount_ >= kErrorThreshold) {
return Status(1, "Too many errors in syslog parsing.");
}
}
}
return Status(0, "OK");
}
void SyslogEventPublisher::tearDown() {
unlockPipe();
}
Status SyslogEventPublisher::populateEventContext(const std::string& line,
SyslogEventContextRef& ec) {
boost::tokenizer<RsyslogCsvSeparator> tokenizer(line);
auto key = kCsvFields.begin();
for (std::string value : tokenizer) {
if (key == kCsvFields.end()) {
return Status(1, "Received more fields than expected");
}
boost::trim(value);
if (*key == "time") {
ec->time = parseTimeString(value);
} else if (*key == "tag" && !value.empty() && value.back() == ':') {
// rsyslog sends "tag" with a trailing colon that we don't need
ec->fields.emplace(*key, value.substr(0, value.size() - 1));
} else {
ec->fields.emplace(*key, value);
}
++key;
}
if (key == kCsvFields.end()) {
return Status(0, "OK");
} else {
return Status(1, "Received fewer fields than expected");
}
}
time_t SyslogEventPublisher::parseTimeString(const std::string& time_str) {
struct tm s_tm;
strptime(time_str.c_str(), kTimeFormat, &s_tm);
return timegm(&s_tm);
}
bool SyslogEventPublisher::shouldFire(const SyslogSubscriptionContextRef& sc,
const SyslogEventContextRef& ec) const {
return true;
}
}
|
""" Useful general-purpose preprocessing functions. """
import gc
import os.path
import pandas as pd
import numpy as np
import nose.tools
from numpy.random import RandomState
def normalise_z(features):
""" Normalise each feature to have zero mean and unit variance.
Parameters
----------
features : array, shape = [n_samples, n_features]
Each row is a sample point and each column is a feature.
Returns
-------
features_normalised : array, shape = [n_samples, n_features]
"""
mu = np.mean(features, axis=0)
sigma = np.std(features, axis=0)
return (features - mu) / sigma
def normalise_unit_var(features):
""" Normalise each feature to have unit variance.
Parameters
----------
features : array, shape = [n_samples, n_features]
Each row is a sample point and each column is a feature.
Returns
-------
features_normalised : array, shape = [n_samples, n_features]
"""
sigma = np.std(features, axis=0)
return features / sigma
def normalise_01(features):
""" Normalise each feature to unit interval.
Parameters
----------
features : array, shape = [n_samples, n_features]
Each row is a sample point and each column is a feature.
Returns
-------
features_normalised : array, shape = [n_samples, n_features]
"""
minimum = np.min(features, axis=0)
maximum = np.max(features, axis=0)
return (features - minimum) / (maximum - minimum)
def _get_train_test_size(train_size, test_size, n_samples):
""" Convert user train and test size inputs to integers. """
if test_size is None and train_size is None:
test_size = 0.3
train_size = 1.0 - test_size
if isinstance(test_size, float):
test_size = np.round(test_size * n_samples).astype(int)
if isinstance(train_size, float):
train_size = np.round(train_size * n_samples).astype(int)
if test_size is None:
test_size = n_samples - train_size
if train_size is None:
train_size = n_samples - test_size
return train_size, test_size
@nose.tools.nottest
def balanced_train_test_split(X, y, test_size=None, train_size=None, bootstrap=False,
random_state=None):
""" Split the data into a balanced training set and test set of some given size.
For a dataset with an unequal numer of samples in each class, one useful procedure
is to split the data into a training and a test set in such a way that the classes
are balanced.
Parameters
----------
X : array, shape = [n_samples, n_features]
Feature matrix.
y : array, shape = [n_features]
Target vector.
test_size : float or int (default=0.3)
If float, should be between 0.0 and 1.0 and represent the proportion of the dataset
to include in the test split. If int, represents the absolute number of test samples.
If None, the value is automatically set to the complement of the train size.
If train size is also None, test size is set to 0.3.
train_size : float or int (default=1-test_size)
If float, should be between 0.0 and 1.0 and represent the proportion of the dataset
to include in the train split. If int, represents the absolute number of train samples.
If None, the value is automatically set to the complement of the test size.
random_state : int, optional (default=None)
Pseudo-random number generator state used for random sampling.
Returns
-------
X_train : array
The feature vectors (stored as columns) in the training set.
X_test : array
The feature vectors (stored as columns) in the test set.
y_train : array
The target vector in the training set.
y_test : array
The target vector in the test set.
"""
# initialise the random number generator
rng = RandomState(random_state)
# make sure X and y are numpy arrays
X = np.asarray(X)
y = np.asarray(y)
# get information about the class distribution
classes, y_indices = np.unique(y, return_inverse=True)
n_classes = len(classes)
cls_count = np.bincount(y_indices)
# get the training and test size
train_size, test_size = _get_train_test_size(train_size, test_size, len(y))
# number of samples in each class that is included in the training and test set
n_train = np.round(train_size / n_classes).astype(int)
n_test = np.round(test_size / n_classes).astype(int)
n_total = n_train + n_test
# make sure we have enough samples to create a balanced split
min_count = min(cls_count)
if min_count < (n_train + n_test) and not bootstrap:
raise ValueError('The smallest class contains {} examples, which is not '
'enough to create a balanced split. Choose a smaller size '
'or enable bootstrapping.'.format(min_count))
# selected indices are stored here
train = []
test = []
# get the desired sample from each class
for i, cls in enumerate(classes):
if bootstrap:
shuffled = rng.choice(cls_count[i], n_total, replace=True)
else:
shuffled = rng.permutation(cls_count[i])
cls_i = np.where(y == cls)[0][shuffled]
train.extend(cls_i[:n_train])
test.extend(cls_i[n_train:n_total])
train = list(rng.permutation(train))
test = list(rng.permutation(test))
return X[train], X[test], y[train], y[test]
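# Example usage of balanced_train_test_split (illustrative sketch, not part
# of the original module):
#
#   >>> X = np.arange(20).reshape(10, 2)
#   >>> y = np.array([0] * 7 + [1] * 3)    # imbalanced classes
#   >>> X_tr, X_te, y_tr, y_te = balanced_train_test_split(
#   ...     X, y, train_size=4, test_size=2, random_state=0)
#   >>> np.bincount(y_tr), np.bincount(y_te)
#   (array([2, 2]), array([1, 1]))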
def csv_to_hdf(csv_path, no_files=1, hdf_path='store.h5', data_cols=None, expectedrows=7569900,
min_itemsize=40, table_name='table'):
""" Convert csv files to a HDF5 table.
Parameters
----------
csv_path : str
The path of the source csv files.
no_files : int
The number of csv parts.
hdf_path : str
The path of the output.
data_cols : array
The names of the columns. Should be the same as the first line in the first csv file.
expectedrows : int
The expected number of rows in the HDF5 table.
min_itemsize : int
The minimum string size.
table_name : str
The name of the HDF5 table.
"""
if os.path.isfile(hdf_path):
print('HDF5 Table already exists. No changes were made.')
return
store = pd.HDFStore(hdf_path, complevel=9, complib='zlib', fletcher32=True)
for i in np.arange(no_files):
csv_file = csv_path.format(i)
if i == 0:
data = pd.io.api.read_csv(csv_file)
else:
data = pd.io.api.read_csv(csv_file, header=None, names=data_cols)
store.append(table_name, data, index=False, expectedrows=expectedrows,
min_itemsize=min_itemsize)
del data
gc.collect()
store.close()
|
% function Cpsi = cwt_adm(type, opt)
%
% Calculate cwt admissibility constant int(|f(w)|^2/w, w=0..inf) as
% per Eq. (4.67) of [1].
%
% 1. Mallat, S., Wavelet Tour of Signal Processing 3rd ed.
%
%---------------------------------------------------------------------------------
% Synchrosqueezing Toolbox
% Authors: Eugene Brevdo (http://www.math.princeton.edu/~ebrevdo/)
%---------------------------------------------------------------------------------
function Cpsi = cwt_adm(type, opt)
if nargin<2, opt=struct(); end
switch type
case 'sombrero',
if ~isfield(opt,'s'), s = 1; else s = opt.s; end
Cpsi = (4/3)*s*sqrt(pi);
case 'shannon',
Cpsi = log(2);
otherwise
psihfn = wfiltfn(type, opt);
Cpsi = quadgk(@(x) (conj(psihfn(x)).*psihfn(x))./x, 0, Inf);
end
% Normalize
Cpsi = Cpsi / (4*pi);
end
|
# _*_ coding: utf-8 _*_
from aip import AipOcr
import wda
import cv2
import webbrowser
import time
import datetime
from urllib import parse
import numpy as np
import requests
# """ 你的 APPID AK SK """
APP_ID = '10701834'
API_KEY = 'TbOZqAG7Xu0HutH2hKGSSZOU'
SECRET_KEY = 'm3zG4I9KXqGnkhKcgWDxohQkMB5QqMLA'
client = AipOcr(APP_ID, API_KEY, SECRET_KEY)
# c = wda.Client('http://192.168.0.117:8100')
# s = c.session()
# print(s.window_size())
def printNowDatetime():
print(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
def getImage(url):
with open(url,'rb') as fp:
return fp.read()
def ocrImage(image):
# image = getImage('/Users/user/Desktop/testP.png')
""" 如果有可选参数 """
options = {}
options["language_type"] = "CHN_ENG"
options["detect_direction"] = "true"
options["detect_language"] = "true"
options["probability"] = "true"
""" 带参数调用通用文字识别, 图片参数为本地图片 """
response = client.basicGeneral(image, options)
print(response)
print(type(response))
words = response['words_result']
appendWord = ''
for item in words:
appendWord += item['words'] + ''
return appendWord
def cvCutImg(x,y,width,height,img):
return img[y:y+height, x:x+width]
def cvBytes_to_numpyNdarray(imgBytes):
img = np.asarray(bytearray(imgBytes), np.uint8)
img = cv2.imdecode(img, cv2.IMREAD_COLOR)
# cv2.imshow('mm', img)
# cv2.waitKey(0)
# img type is numpy.ndarray
# img = cv2.imread('/Users/user/Desktop/testP.png')
return img
def cvNumpyNdarray_to_bytes(img):
return np.ndarray.tobytes(img)
def chongdingdahui():
img = c.screenshot('screen01.png')
# img = getImage('chongdingdahui.jpg')
image = cvBytes_to_numpyNdarray(img)
cutImg = cvCutImg(25, 320, 700, 175, image)
cv2.imwrite('cut.png', cutImg)
image = getImage('cut.png')
ocrwd = ocrImage(image)
wd = parse.quote(ocrwd)
url = 'https://www.baidu.com/s?wd=' + wd
webbrowser.open(url)
def xiguashiping():
# img = c.screenshot('screen01.png')
img = getImage('xiguaishipin.jpg')
image = cvBytes_to_numpyNdarray(img)
cutImg = cvCutImg(40, 220, 670, 175, image)
cv2.imwrite('cut.png', cutImg)
image = getImage('cut.png')
ocrwd = ocrImage(image)
wd = parse.quote(ocrwd)
url = 'https://www.baidu.com/s?wd=' + wd
webbrowser.open(url)
if __name__ == "__main__":
print('--')
while True:
time.sleep(3)
printNowDatetime()
# chongdingdahui()
xiguashiping()
|
-- DataStore.idr
--
-- A simple Vect-based data store with
-- integer key and string values
module Main
import Data.Vect
infix 5 .+.
data Schema = SString
| SInt
| (.+.) Schema Schema
SchemaType : Schema -> Type
SchemaType SString = String
SchemaType SInt = Int
SchemaType (x .+. y) = (SchemaType x, SchemaType y)
||| A record-style DataStore type
record DataStore where
constructor MkData
schema : Schema
size : Nat
items : Vect size (SchemaType schema)
{-
addToStore : DataStore -> String -> DataStore
addToStore (MkData size items) newItem = MkData _ (addToData items)
where
addToData : Vect old String -> Vect (S old) String
addToData [] = [newItem]
addToData (x :: xs) = x :: (addToData xs)
||| commands for data store
data Command = Add String
| Get Integer
| Search String
| Size
| Quit
parseCommand : (cmd : String) -> (args : String) -> Maybe Command
parseCommand "add" str = Just (Add str)
parseCommand "get" val = case all isDigit (unpack val) of
False => Nothing
True => Just (Get (cast val))
parseCommand "search" str = Just (Search str)
parseCommand "size" "" = Just Size
parseCommand "quit" "" = Just Quit
parseCommand _ _ = Nothing
||| Parses given string and returns matching command
||| if successful (or Nothing in case of invalid string)
parse : (input : String) -> Maybe Command
parse input = case span (/= ' ') input of
(cmd, args) => parseCommand cmd (ltrim args)
getEntry : (pos : Integer) -> (store : DataStore) -> Maybe (String, DataStore)
getEntry pos store = let store_items = items store in
case integerToFin pos (size store) of
Nothing => Just ("Out of range\n", store)
(Just id) => Just (index id store_items ++ "\n", store)
findEntriesString : (idx : Nat) -> (str : String) -> (items : Vect len String) -> String
findEntriesString idx str [] = "\n"
findEntriesString idx str (x :: xs) = if (isInfixOf str x)
then (show idx) ++ ": " ++ x ++ "\n" ++ findEntriesString (idx + 1) str xs
else findEntriesString (idx + 1) str xs
findEntries : (str : String) -> (store : DataStore) -> String
findEntries str store = findEntriesString Z str (items store)
concatStringVect : (p : Nat ** Vect p (Nat, String)) -> String
concatStringVect (Z ** []) = "\n"
concatStringVect (S x ** ((id, str) :: items)) = (show id) ++ ": " ++ str ++ "\n" ++ concatStringVect (x ** items)
||| Processes given String command
processInput : DataStore -> String -> Maybe (String, DataStore)
processInput store input = case parse input of
Nothing => Just ("Unknown command: " ++ input ++ "\n", store)
Just (Add item) =>
Just ("ID " ++ show (size store) ++ "\n", addToStore store item)
Just (Get pos) => getEntry pos store
Just (Search str) =>
Just (findEntries str store, store)
Just Size =>
Just ("Size: " ++ cast (size store) ++ "\n", store)
Just Quit => Nothing
-- entry point
main : IO ()
main = replWith (MkData 0 []) "Command: " processInput
-}
|
// boost/endian/buffers.hpp ----------------------------------------------------------//
// (C) Copyright Darin Adler 2000
// (C) Copyright Beman Dawes 2006, 2009, 2014
// Distributed under the Boost Software License, Version 1.0.
// See http://www.boost.org/LICENSE_1_0.txt
// See library home page at http://www.boost.org/libs/endian
//--------------------------------------------------------------------------------------//
// Original design developed by Darin Adler based on classes developed by Mark
// Borgerding. Four original class templates were combined into a single endian
// class template by Beman Dawes, who also added the unrolled_byte_loops sign
// partial specialization to correctly extend the sign when cover integer size
// differs from endian representation size.
// TODO: When a compiler supporting constexpr becomes available, try possible uses.
#ifndef BOOST_ENDIAN_BUFFERS_HPP
#define BOOST_ENDIAN_BUFFERS_HPP
#if defined(_MSC_VER)
# pragma warning(push)
# pragma warning(disable:4365) // conversion ... signed/unsigned mismatch
#endif
#ifdef BOOST_ENDIAN_LOG
# include <iostream>
#endif
#if defined(__BORLANDC__) || defined( __CODEGEARC__)
# pragma pack(push, 1)
#endif
#include <boost/config.hpp>
#include <boost/predef/detail/endian_compat.h>
#include <boost/endian/conversion.hpp>
#include <boost/type_traits/is_signed.hpp>
#include <boost/cstdint.hpp>
#include <boost/static_assert.hpp>
#include <boost/core/scoped_enum.hpp>
#include <iosfwd>
#include <climits>
# if CHAR_BIT != 8
# error Platforms with CHAR_BIT != 8 are not supported
# endif
# ifdef BOOST_NO_CXX11_DEFAULTED_FUNCTIONS
# define BOOST_ENDIAN_DEFAULT_CONSTRUCT {} // C++03
# else
# define BOOST_ENDIAN_DEFAULT_CONSTRUCT = default; // C++0x
# endif
# if defined(BOOST_NO_CXX11_DEFAULTED_FUNCTIONS) && defined(BOOST_ENDIAN_FORCE_PODNESS)
# define BOOST_ENDIAN_NO_CTORS
# endif
//---------------------------------- synopsis ----------------------------------------//
namespace boost
{
namespace endian
{
BOOST_SCOPED_ENUM_START(align)
{no, yes
# ifdef BOOST_ENDIAN_DEPRECATED_NAMES
, unaligned = no, aligned = yes
# endif
}; BOOST_SCOPED_ENUM_END
template <BOOST_SCOPED_ENUM(order) Order, class T, std::size_t n_bits,
BOOST_SCOPED_ENUM(align) A = align::no>
class endian_buffer;
// aligned big endian signed integer buffers
typedef endian_buffer<order::big, int8_t, 8, align::yes> big_int8_buf_at;
typedef endian_buffer<order::big, int16_t, 16, align::yes> big_int16_buf_at;
typedef endian_buffer<order::big, int32_t, 32, align::yes> big_int32_buf_at;
typedef endian_buffer<order::big, int64_t, 64, align::yes> big_int64_buf_at;
// aligned big endian unsigned integer buffers
typedef endian_buffer<order::big, uint8_t, 8, align::yes> big_uint8_buf_at;
typedef endian_buffer<order::big, uint16_t, 16, align::yes> big_uint16_buf_at;
typedef endian_buffer<order::big, uint32_t, 32, align::yes> big_uint32_buf_at;
typedef endian_buffer<order::big, uint64_t, 64, align::yes> big_uint64_buf_at;
// aligned little endian signed integer buffers
typedef endian_buffer<order::little, int8_t, 8, align::yes> little_int8_buf_at;
typedef endian_buffer<order::little, int16_t, 16, align::yes> little_int16_buf_at;
typedef endian_buffer<order::little, int32_t, 32, align::yes> little_int32_buf_at;
typedef endian_buffer<order::little, int64_t, 64, align::yes> little_int64_buf_at;
// aligned little endian unsigned integer buffers
typedef endian_buffer<order::little, uint8_t, 8, align::yes> little_uint8_buf_at;
typedef endian_buffer<order::little, uint16_t, 16, align::yes> little_uint16_buf_at;
typedef endian_buffer<order::little, uint32_t, 32, align::yes> little_uint32_buf_at;
typedef endian_buffer<order::little, uint64_t, 64, align::yes> little_uint64_buf_at;
// aligned native endian typedefs are not provided because
// <cstdint> types are superior for this use case
// unaligned big endian signed integer buffers
typedef endian_buffer<order::big, int_least8_t, 8> big_int8_buf_t;
typedef endian_buffer<order::big, int_least16_t, 16> big_int16_buf_t;
typedef endian_buffer<order::big, int_least32_t, 24> big_int24_buf_t;
typedef endian_buffer<order::big, int_least32_t, 32> big_int32_buf_t;
typedef endian_buffer<order::big, int_least64_t, 40> big_int40_buf_t;
typedef endian_buffer<order::big, int_least64_t, 48> big_int48_buf_t;
typedef endian_buffer<order::big, int_least64_t, 56> big_int56_buf_t;
typedef endian_buffer<order::big, int_least64_t, 64> big_int64_buf_t;
// unaligned big endian unsigned integer buffers
typedef endian_buffer<order::big, uint_least8_t, 8> big_uint8_buf_t;
typedef endian_buffer<order::big, uint_least16_t, 16> big_uint16_buf_t;
typedef endian_buffer<order::big, uint_least32_t, 24> big_uint24_buf_t;
typedef endian_buffer<order::big, uint_least32_t, 32> big_uint32_buf_t;
typedef endian_buffer<order::big, uint_least64_t, 40> big_uint40_buf_t;
typedef endian_buffer<order::big, uint_least64_t, 48> big_uint48_buf_t;
typedef endian_buffer<order::big, uint_least64_t, 56> big_uint56_buf_t;
typedef endian_buffer<order::big, uint_least64_t, 64> big_uint64_buf_t;
// unaligned little endian signed integer buffers
typedef endian_buffer<order::little, int_least8_t, 8> little_int8_buf_t;
typedef endian_buffer<order::little, int_least16_t, 16> little_int16_buf_t;
typedef endian_buffer<order::little, int_least32_t, 24> little_int24_buf_t;
typedef endian_buffer<order::little, int_least32_t, 32> little_int32_buf_t;
typedef endian_buffer<order::little, int_least64_t, 40> little_int40_buf_t;
typedef endian_buffer<order::little, int_least64_t, 48> little_int48_buf_t;
typedef endian_buffer<order::little, int_least64_t, 56> little_int56_buf_t;
typedef endian_buffer<order::little, int_least64_t, 64> little_int64_buf_t;
// unaligned little endian unsigned integer buffers
typedef endian_buffer<order::little, uint_least8_t, 8> little_uint8_buf_t;
typedef endian_buffer<order::little, uint_least16_t, 16> little_uint16_buf_t;
typedef endian_buffer<order::little, uint_least32_t, 24> little_uint24_buf_t;
typedef endian_buffer<order::little, uint_least32_t, 32> little_uint32_buf_t;
typedef endian_buffer<order::little, uint_least64_t, 40> little_uint40_buf_t;
typedef endian_buffer<order::little, uint_least64_t, 48> little_uint48_buf_t;
typedef endian_buffer<order::little, uint_least64_t, 56> little_uint56_buf_t;
typedef endian_buffer<order::little, uint_least64_t, 64> little_uint64_buf_t;
# ifdef BOOST_BIG_ENDIAN
// unaligned native endian signed integer buffers
typedef big_int8_buf_t native_int8_buf_t;
typedef big_int16_buf_t native_int16_buf_t;
typedef big_int24_buf_t native_int24_buf_t;
typedef big_int32_buf_t native_int32_buf_t;
typedef big_int40_buf_t native_int40_buf_t;
typedef big_int48_buf_t native_int48_buf_t;
typedef big_int56_buf_t native_int56_buf_t;
typedef big_int64_buf_t native_int64_buf_t;
// unaligned native endian unsigned integer buffers
typedef big_uint8_buf_t native_uint8_buf_t;
typedef big_uint16_buf_t native_uint16_buf_t;
typedef big_uint24_buf_t native_uint24_buf_t;
typedef big_uint32_buf_t native_uint32_buf_t;
typedef big_uint40_buf_t native_uint40_buf_t;
typedef big_uint48_buf_t native_uint48_buf_t;
typedef big_uint56_buf_t native_uint56_buf_t;
typedef big_uint64_buf_t native_uint64_buf_t;
# else
// unaligned native endian signed integer buffers
typedef little_int8_buf_t native_int8_buf_t;
typedef little_int16_buf_t native_int16_buf_t;
typedef little_int24_buf_t native_int24_buf_t;
typedef little_int32_buf_t native_int32_buf_t;
typedef little_int40_buf_t native_int40_buf_t;
typedef little_int48_buf_t native_int48_buf_t;
typedef little_int56_buf_t native_int56_buf_t;
typedef little_int64_buf_t native_int64_buf_t;
// unaligned native endian unsigned integer buffers
typedef little_uint8_buf_t native_uint8_buf_t;
typedef little_uint16_buf_t native_uint16_buf_t;
typedef little_uint24_buf_t native_uint24_buf_t;
typedef little_uint32_buf_t native_uint32_buf_t;
typedef little_uint40_buf_t native_uint40_buf_t;
typedef little_uint48_buf_t native_uint48_buf_t;
typedef little_uint56_buf_t native_uint56_buf_t;
typedef little_uint64_buf_t native_uint64_buf_t;
# endif
// Stream inserter
template <class charT, class traits, BOOST_SCOPED_ENUM(order) Order, class T,
std::size_t n_bits, BOOST_SCOPED_ENUM(align) A>
std::basic_ostream<charT, traits>&
operator<<(std::basic_ostream<charT, traits>& os,
const endian_buffer<Order, T, n_bits, A>& x)
{
return os << x.value();
}
// Stream extractor
template <class charT, class traits, BOOST_SCOPED_ENUM(order) Order, class T,
std::size_t n_bits, BOOST_SCOPED_ENUM(align) A>
std::basic_istream<charT, traits>&
operator>>(std::basic_istream<charT, traits>& is,
endian_buffer<Order, T, n_bits, A>& x)
{
T i;
if (is >> i)
x = i;
return is;
}
//---------------------------------- end synopsis ------------------------------------//
namespace detail
{
// Unrolled loops for loading and storing streams of bytes.
template <typename T, std::size_t n_bytes,
bool sign=boost::is_signed<T>::value >
struct unrolled_byte_loops
{
typedef unrolled_byte_loops<T, n_bytes - 1, sign> next;
static T load_big(const unsigned char* bytes) BOOST_NOEXCEPT
{ return static_cast<T>(*(bytes - 1) | (next::load_big(bytes - 1) << 8)); }
static T load_little(const unsigned char* bytes) BOOST_NOEXCEPT
{ return static_cast<T>(*bytes | (next::load_little(bytes + 1) << 8)); }
static void store_big(char* bytes, T value) BOOST_NOEXCEPT
{
*(bytes - 1) = static_cast<char>(value);
next::store_big(bytes - 1, static_cast<T>(value >> 8));
}
static void store_little(char* bytes, T value) BOOST_NOEXCEPT
{
*bytes = static_cast<char>(value);
next::store_little(bytes + 1, static_cast<T>(value >> 8));
}
};
template <typename T>
struct unrolled_byte_loops<T, 1, false>
{
static T load_big(const unsigned char* bytes) BOOST_NOEXCEPT
{ return *(bytes - 1); }
static T load_little(const unsigned char* bytes) BOOST_NOEXCEPT
{ return *bytes; }
static void store_big(char* bytes, T value) BOOST_NOEXCEPT
{ *(bytes - 1) = static_cast<char>(value); }
static void store_little(char* bytes, T value) BOOST_NOEXCEPT
{ *bytes = static_cast<char>(value); }
};
template <typename T>
struct unrolled_byte_loops<T, 1, true>
{
static T load_big(const unsigned char* bytes) BOOST_NOEXCEPT
{ return *reinterpret_cast<const signed char*>(bytes - 1); }
static T load_little(const unsigned char* bytes) BOOST_NOEXCEPT
{ return *reinterpret_cast<const signed char*>(bytes); }
static void store_big(char* bytes, T value) BOOST_NOEXCEPT
{ *(bytes - 1) = static_cast<char>(value); }
static void store_little(char* bytes, T value) BOOST_NOEXCEPT
{ *bytes = static_cast<char>(value); }
};
template <typename T, std::size_t n_bytes>
inline
T load_big_endian(const void* bytes) BOOST_NOEXCEPT
{
return unrolled_byte_loops<T, n_bytes>::load_big
(static_cast<const unsigned char*>(bytes) + n_bytes);
}
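// Example: with the signed one-byte base case above, sign extension
// propagates through the recursion, so load_big_endian<int32_t, 2>
// applied to the bytes { 0xFF, 0xFE } yields -2.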
template <typename T, std::size_t n_bytes>
inline
T load_little_endian(const void* bytes) BOOST_NOEXCEPT
{
# if defined(__x86_64__) || defined(_M_X64) || defined(__i386) || defined(_M_IX86)
// On x86 (which is little endian), unaligned loads are permitted
if (sizeof(T) == n_bytes) // GCC 4.9, VC++ 14.0, and probably others, elide this
// test and generate code only for the applicable return
// case since sizeof(T) and n_bytes are known at compile
// time.
{
return *reinterpret_cast<T const *>(bytes);
}
# endif
return unrolled_byte_loops<T, n_bytes>::load_little
(static_cast<const unsigned char*>(bytes));
}
template <typename T, std::size_t n_bytes>
inline
void store_big_endian(void* bytes, T value) BOOST_NOEXCEPT
{
unrolled_byte_loops<T, n_bytes>::store_big
(static_cast<char*>(bytes) + n_bytes, value);
}
template <typename T, std::size_t n_bytes>
inline
void store_little_endian(void* bytes, T value) BOOST_NOEXCEPT
{
# if defined(__x86_64__) || defined(_M_X64) || defined(__i386) || defined(_M_IX86)
// On x86 (which is little endian), unaligned stores are permitted
if (sizeof(T) == n_bytes) // GCC 4.9, VC++ 14.0, and probably others, elide this
// test and generate code only for the applicable return
// case since sizeof(T) and n_bytes are known at compile
// time.
{
*reinterpret_cast<T *>(bytes) = value;
return;
}
# endif
unrolled_byte_loops<T, n_bytes>::store_little
(static_cast<char*>(bytes), value);
}
} // namespace detail
# ifdef BOOST_ENDIAN_LOG
bool endian_log(true);
# endif
// endian_buffer class template specializations --------------------------------------//
// Specializations that represent unaligned bytes.
// Taking an integer type as a parameter provides a nice way to pass both
// the size and signedness of the desired integer and get the appropriate
// corresponding integer type for the interface.
// Q: Should endian_buffer supply "value_type operator value_type() const noexcept"?
// A: No. The rationale for endian_buffers is to prevent high-cost hidden
// conversions. If an implicit conversion operator is supplied, hidden conversions
// can occur.
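// Illustrative usage sketch (ours, not from the original header): on a
// typical platform all conversions are explicit, e.g.
//   big_int32_buf_t x(0x01020304); // stored big endian as bytes 01 02 03 04
//   int_least32_t n = x.value();   // explicit read back in native order
//   x = 7;                         // assignment re-encodes in big endian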
// unaligned big endian_buffer specialization
template <typename T, std::size_t n_bits>
class endian_buffer< order::big, T, n_bits, align::no >
{
BOOST_STATIC_ASSERT( (n_bits/8)*8 == n_bits );
public:
typedef T value_type;
# ifndef BOOST_ENDIAN_NO_CTORS
endian_buffer() BOOST_ENDIAN_DEFAULT_CONSTRUCT
explicit endian_buffer(T val) BOOST_NOEXCEPT
{
# ifdef BOOST_ENDIAN_LOG
if ( endian_log )
std::cout << "big, unaligned, "
<< n_bits << "-bits, construct(" << val << ")\n";
# endif
detail::store_big_endian<T, n_bits/8>(m_value, val);
}
# endif
endian_buffer & operator=(T val) BOOST_NOEXCEPT
{
# ifdef BOOST_ENDIAN_LOG
if (endian_log)
std::cout << "big, unaligned, " << n_bits << "-bits, assign(" << val << ")\n";
# endif
detail::store_big_endian<T, n_bits/8>(m_value, val);
return *this;
}
value_type value() const BOOST_NOEXCEPT
{
# ifdef BOOST_ENDIAN_LOG
if ( endian_log )
std::cout << "big, unaligned, " << n_bits << "-bits, convert("
<< detail::load_big_endian<T, n_bits/8>(m_value) << ")\n";
# endif
return detail::load_big_endian<T, n_bits/8>(m_value);
}
const char* data() const BOOST_NOEXCEPT { return m_value; }
protected:
char m_value[n_bits/8];
};
// unaligned little endian_buffer specialization
template <typename T, std::size_t n_bits>
class endian_buffer< order::little, T, n_bits, align::no >
{
BOOST_STATIC_ASSERT( (n_bits/8)*8 == n_bits );
public:
typedef T value_type;
# ifndef BOOST_ENDIAN_NO_CTORS
endian_buffer() BOOST_ENDIAN_DEFAULT_CONSTRUCT
explicit endian_buffer(T val) BOOST_NOEXCEPT
{
# ifdef BOOST_ENDIAN_LOG
if ( endian_log )
std::cout << "little, unaligned, " << n_bits << "-bits, construct("
<< val << ")\n";
# endif
detail::store_little_endian<T, n_bits/8>(m_value, val);
}
# endif
endian_buffer & operator=(T val) BOOST_NOEXCEPT
{ detail::store_little_endian<T, n_bits/8>(m_value, val); return *this; }
value_type value() const BOOST_NOEXCEPT
{
# ifdef BOOST_ENDIAN_LOG
if ( endian_log )
std::cout << "little, unaligned, " << n_bits << "-bits, convert("
<< detail::load_little_endian<T, n_bits/8>(m_value) << ")\n";
# endif
return detail::load_little_endian<T, n_bits/8>(m_value);
}
const char* data() const BOOST_NOEXCEPT { return m_value; }
protected:
char m_value[n_bits/8];
};
// align::yes specializations; only n_bits == 16/32/64 supported
// aligned big endian_buffer specialization
template <typename T, std::size_t n_bits>
class endian_buffer<order::big, T, n_bits, align::yes>
{
BOOST_STATIC_ASSERT( (n_bits/8)*8 == n_bits );
BOOST_STATIC_ASSERT( sizeof(T) == n_bits/8 );
public:
typedef T value_type;
# ifndef BOOST_ENDIAN_NO_CTORS
endian_buffer() BOOST_ENDIAN_DEFAULT_CONSTRUCT
explicit endian_buffer(T val) BOOST_NOEXCEPT
{
# ifdef BOOST_ENDIAN_LOG
if ( endian_log )
std::cout << "big, aligned, " << n_bits
<< "-bits, construct(" << val << ")\n";
# endif
m_value = ::boost::endian::native_to_big(val);
}
# endif
endian_buffer& operator=(T val) BOOST_NOEXCEPT
{
m_value = ::boost::endian::native_to_big(val);
return *this;
}
//operator value_type() const BOOST_NOEXCEPT
//{
// return ::boost::endian::big_to_native(m_value);
//}
value_type value() const BOOST_NOEXCEPT
{
# ifdef BOOST_ENDIAN_LOG
if ( endian_log )
std::cout << "big, aligned, " << n_bits << "-bits, convert("
<< ::boost::endian::big_to_native(m_value) << ")\n";
# endif
return ::boost::endian::big_to_native(m_value);
}
const char* data() const BOOST_NOEXCEPT
{return reinterpret_cast<const char*>(&m_value);}
protected:
T m_value;
};
// aligned little endian_buffer specialization
template <typename T, std::size_t n_bits>
class endian_buffer<order::little, T, n_bits, align::yes>
{
BOOST_STATIC_ASSERT( (n_bits/8)*8 == n_bits );
BOOST_STATIC_ASSERT( sizeof(T) == n_bits/8 );
public:
typedef T value_type;
# ifndef BOOST_ENDIAN_NO_CTORS
endian_buffer() BOOST_ENDIAN_DEFAULT_CONSTRUCT
explicit endian_buffer(T val) BOOST_NOEXCEPT
{
# ifdef BOOST_ENDIAN_LOG
if ( endian_log )
std::cout << "little, aligned, " << n_bits
<< "-bits, construct(" << val << ")\n";
# endif
m_value = ::boost::endian::native_to_little(val);
}
# endif
endian_buffer& operator=(T val) BOOST_NOEXCEPT
{
m_value = ::boost::endian::native_to_little(val);
return *this;
}
value_type value() const BOOST_NOEXCEPT
{
# ifdef BOOST_ENDIAN_LOG
if ( endian_log )
std::cout << "little, aligned, " << n_bits << "-bits, convert("
<< ::boost::endian::little_to_native(m_value) << ")\n";
# endif
return ::boost::endian::little_to_native(m_value);
}
const char* data() const BOOST_NOEXCEPT
{return reinterpret_cast<const char*>(&m_value);}
protected:
T m_value;
};
} // namespace endian
} // namespace boost
#if defined(__BORLANDC__) || defined( __CODEGEARC__)
# pragma pack(pop)
#endif
#if defined(_MSC_VER)
# pragma warning(pop)
#endif
#endif // BOOST_ENDIAN_BUFFERS_HPP
|
import tactic
import logic.basic
import category.nursery
import category.liftable
universes u v w
namespace medium
variables w : Type
inductive put_m' (w : Type) (α : Type u)
| pure {} : α → put_m'
| write : w → (unit → put_m') → put_m'
abbreviation put_m : Type → Type u := λ w, put_m' w punit
variables {w}
def put_m'.bind {α β} : put_m' w α → (α → put_m' w β) → put_m' w β
| (put_m'.pure x) f := f x
| (put_m'.write w f) g := put_m'.write w $ λ u, put_m'.bind (f u) g
instance put_m'.monad : monad (put_m' w) :=
{ pure := λ α, put_m'.pure
, bind := @put_m'.bind w }
instance put_m'.is_lawful_monad : is_lawful_monad.{u} (put_m' w) :=
by { refine { .. }; intros;
try { refl };
dsimp [(<$>),(>>=)];
induction x;
try { refl },
all_goals
{ dsimp [put_m'.bind], congr, ext, apply x_ih }, }
def put_m'.eval : put_m w → list w
| (put_m'.pure x) := []
| (put_m'.write w f) := w :: put_m'.eval (f punit.star)
variable w
inductive get_m : Type u → Type (u+1)
| fail {} {α} : get_m α
| pure {} {α} : α → get_m α
| read {α} : (w → get_m α) → get_m α
| loop {α β γ : Type u} : (β → w → get_m (α ⊕ β)) → (α → get_m γ) → β → get_m γ
variables {w}
def get_m.bind : Π {α β}, get_m w α → (α → get_m w β) → get_m w β
| _ _ (get_m.fail) _ := get_m.fail
| _ _ (get_m.pure x) f := f x
| _ _ (get_m.read f) g := get_m.read $ λ w, get_m.bind (f w) g
| _ _ (get_m.loop f g x₀) h := get_m.loop f (λ r, get_m.bind (g r) h) x₀
def get_m.map : Π {α β : Type u}, (α → β) → get_m w α → get_m w β
| _ _ _ (get_m.fail) := get_m.fail
| _ _ f (get_m.pure x) := get_m.pure $ f x
| _ _ f (get_m.read g) := get_m.read $ λ w, get_m.map f (g w)
| _ _ h (get_m.loop f g x₀) := get_m.loop f (λ r, get_m.map h (g r)) x₀
@[simp]
def get_m.loop.rest {α β γ : Type u} (f : β → w → get_m w (α ⊕ β)) (g : α → get_m w γ) : α ⊕ β → get_m w γ
| (sum.inr x) := get_m.loop f g x
| (sum.inl x) := g x
instance get_m.functor : functor.{u} (get_m w) :=
{ map := @get_m.map _ }
def get_m.seq {α β : Type u} : Π (f : get_m w (α → β)) (x : get_m w α), get_m w β :=
λ (f : get_m w (α → β)) (x : get_m w α), get_m.bind f (λ f, f <$> x)
-- instance : applicative get_m :=
-- { to_functor := get_m.functor
-- , pure := λ α, get_m.pure
-- , seq := @get_m.seq }
open function
instance : is_lawful_functor.{u} (get_m w) :=
by { constructor; intros;
dsimp [(<$>),get_m.seq];
induction x;
try { refl };
simp [get_m.map,*]; ext }
instance : monad (get_m w) :=
{ to_functor := get_m.functor
, pure := @get_m.pure w
, bind := @get_m.bind w }
instance : is_lawful_monad.{u} (get_m w) :=
{ to_is_lawful_functor := by apply_instance,
bind_assoc := by { intros, dsimp [(>>=)],
induction x; try { refl }; simp [get_m.bind,*], },
bind_pure_comp_eq_map := by { intros, dsimp [(>>=),(<$>)],
induction x; try {refl}; simp [get_m.bind,get_m.map,*], },
map_pure := by intros; refl,
pure_seq_eq_map := by { intros, dsimp [(>>=),(<$>)],
induction x; try {refl}; simp [get_m.bind,get_m.map,*], },
pure_bind := by intros; refl }
def get_m.or_else {α} : get_m w α → get_m w α → get_m w α
| get_m.fail x := x
| x y := x
instance : alternative.{u} (get_m w) :=
{ failure := @get_m.fail _,
orelse := @get_m.or_else _ }
def get_m.eval : Π {α}, list w → get_m w α → option α
| _ [] (get_m.pure x) := pure x
| _ [] _ := none
| α (w :: ws) (get_m.read f) := get_m.eval ws (f w)
| α (ww :: ws) (get_m.loop f g x₀) :=
get_m.eval ws $
f x₀ ww >>= get_m.loop.rest f g
| α (w :: ws) _ := none
def write_word (x : w) : put_m'.{u} w punit :=
put_m'.write x (λ _, put_m'.pure punit.star)
def read_word : get_m.{u} w (ulift w) :=
get_m.read (get_m.pure ∘ ulift.up)
open ulift
def expect_word [decidable_eq w] (x : w) : get_m.{u} w punit :=
do w' ← read_word,
if x = down w' then pure punit.star
else failure
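-- Informal sanity checks for the primitives above:
--   `(write_word x).eval = [x]`     : writing one word yields a one-word stream
--   `read_word.eval [x] = some ⟨x⟩` : reading it back succeeds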
def read_write : Π {α : Type u}, get_m w α → put_m'.{u} w punit → option α
| ._ (get_m.pure x) (put_m'.pure _) := some x
| _ _ (put_m'.pure _) := none
| ._ (get_m.read f) (put_m'.write w g) := read_write (f w) (g punit.star)
| α (@get_m.loop _ α' β γ f g x₀) (put_m'.write ww h) :=
read_write
(f x₀ ww >>= get_m.loop.rest f g)
(h punit.star)
| _ _ (put_m'.write w g) := none
def read_write' : Π {α : Type u}, get_m w α → put_m'.{u} w punit → option (α × put_m'.{u} w punit)
| _ (get_m.read f) (put_m'.write w g) := read_write' (f w) (g punit.star)
| α (@get_m.loop _ α' β γ f g x₀) (put_m'.write ww h) :=
read_write'
(f x₀ ww >>= get_m.loop.rest f g)
(h punit.star)
-- | _ (get_m.pure x) m@(put_m'.write w g) := some (x,m)
| _ (get_m.pure x) m := some (x,m)
| _ _ (put_m'.pure _) := none
| _ (get_m.fail) (put_m'.write _ _) := none
-- | _ _ m := none
lemma read_read_write_write {α : Type u} (x : get_m w α) (m : put_m w) (i : α) :
read_write x m = some i ↔ read_write' x m = some (i,(pure punit.star : put_m' w _)) :=
begin
induction m generalizing x;
cases x; casesm* punit; simp [read_write,read_write',prod.ext_iff,pure,*],
end
def pipeline {α} (x : get_m w α) (y : α → put_m w) (i : α) : option α :=
read_write x (y i)
infix ` -<< `:60 := read_write
infix ` -<<< `:60 := read_write'
infix ` <-< `:60 := pipeline
lemma eq_star (x : punit) : x = punit.star :=
by cases x; refl
-- inductive agree : Π {α} (x : α), get_m α → put_m → put_m → Prop
-- | pure {α} (x : α) (m : put_m) : agree x (get_m.pure x) m m
-- | read_write {α} (x : α) (w : unsigned)
-- (f : unsigned → get_m α) (g : punit → put_m) (m : put_m) :
-- agree x (f w) (g punit.star) m →
-- agree x (get_m.read f) (put_m'.write w g) m
-- | loop_write {α} (x : α) {β γ} (σ₀ σ₁ : β) (w : unsigned)
-- (f : β → unsigned → get_m (γ ⊕ β)) (f' : γ → get_m α)
-- (g : punit → put_m) (m m' : put_m) :
-- agree (sum.inr σ₁) (f σ₀ w) (g punit.star) m' →
-- agree x (get_m.loop σ₁ f f') m' m →
-- agree x (get_m.loop σ₀ f f') (put_m'.write w g) m
-- | loop_exit_write {α} (x : α) {β γ} (σ₀ : β) (r : γ) (w : unsigned)
-- (f : β → unsigned → get_m (γ ⊕ β)) (f' : γ → get_m α)
-- (g : punit → put_m) (m m' : put_m) :
-- agree (sum.inl r) (f σ₀ w) (g punit.star) m' →
-- agree x (f' r) m' m →
-- agree x (get_m.loop σ₀ f f') (put_m'.write w g) m
-- lemma agree_spec {α} (g : get_m α) (m : put_m) (x : α) :
-- agree x g m (put_m'.pure punit.star) ↔ g -<< m = some x :=
-- begin
-- split; intro h,
-- { cases h,
-- refl, simp [read_write], }
-- end
-- lemma loop_bind {α β γ} (i : β)
-- (body : β → unsigned → get_m (α ⊕ β)) (f₀ : α → get_m γ) :
-- get_m.loop i body f₀ = get_m.read (body i) >>= _ := _
lemma read_write_loop_bind {α β γ φ : Type u} (i : α)
(body : α → w → get_m w (φ ⊕ α))
(f₀ : φ → get_m w β) (f₁ : β → get_m w γ)
(m : punit → put_m w) (ww : w) :
(get_m.loop body f₀ i >>= f₁) -<<< put_m'.write ww m =
(body i ww >>= get_m.loop.rest body f₀ >>= f₁) -<<< m punit.star :=
begin
rw bind_assoc,
simp [(>>=),get_m.bind,read_write'],
congr, ext, cases x; simp; refl,
end
-- lemma read_write_left_overs_bind {α} (i : α)
-- (x₀ : get_m α)
-- (x₁ x₂ : put_m) :
-- x₀ -<<< x₁ = some (i,x₂) →
lemma fail_read_write {α} (x₁ : put_m w) :
get_m.fail -<<< x₁ = @none (α × put_m w) :=
by cases x₁; refl
lemma pure_read_write {α} (x₁ : put_m w) (i : α) :
get_m.pure i -<<< x₁ = some (i, x₁) :=
by cases x₁; refl
lemma read_write_left_overs_bind {α} (f : punit → put_m' w punit) (i : α)
(x₀ : get_m w α)
(x₁ x₂ : put_m' w punit) :
x₀ -<<< x₁ = some (i,x₂) → x₀ -<<< (x₁ >>= f) = some (i,x₂ >>= f) :=
begin
induction x₁ generalizing x₀ x₂,
cases x₀; simp [(>>=),put_m'.bind,read_write',pure_read_write],
{ intros, subst x₂, tauto },
cases x₀; simp [(>>=),put_m'.bind,read_write'],
{ intros, substs x₂ i, split; refl },
{ apply x₁_ih, },
{ apply x₁_ih, },
end
lemma option_eq_forall_some {α} (x y : option α) :
x = y ↔ ∀ z, x = some z ↔ y = some z :=
begin
split; intro h, { rw h; intro, refl },
{ cases y, cases x, refl,
symmetry, rw ← h, rw h, },
end
lemma read_write_weakening {α : Type u}
(x₀ x₁ : put_m w) (y₀ y₁ : get_m w α)
(h : y₀ -<<< x₀ = y₁ -<<< x₁) :
y₀ -<< x₀ = y₁ -<< x₁ :=
begin
rw option_eq_forall_some,
intro, simp [read_read_write_write,h],
end
lemma read_write_mono' {α β : Type u} (i : α)
(x₀ : get_m w α) (f₀ : α → get_m w β)
(x₁ x₂ : put_m w)
(h : x₀ -<<< x₁ = some (i,x₂)) :
(x₀ >>= f₀) -<<< x₁ = f₀ i -<<< x₂ :=
begin
-- simp [(>>=)],
induction x₁ generalizing x₀ f₀;
try { cases x₀; cases h },
{ simp [(>>=),read_write',get_m.bind] },
{ cases x₀; try { cases h },
simp [(>>=),read_write',get_m.bind] at h ⊢,
simp [(>>=),read_write',get_m.bind] at h ⊢,
{ apply x₁_ih, assumption },
simp [read_write_loop_bind,x₁_ih],
rw [x₁_ih _ _ _ h], }
end
lemma read_write_mono {α β : Type u} {i : α}
{x₀ : get_m w α} {f₀ : α → get_m w β}
{x₁ : put_m w} {f₁ : punit → put_m w}
(h : x₀ -<< x₁ = some i) :
(x₀ >>= f₀) -<< (x₁ >>= f₁) = f₀ i -<< f₁ punit.star :=
begin
apply read_write_weakening,
apply read_write_mono',
rw [read_read_write_write] at h,
replace h := read_write_left_overs_bind f₁ _ _ _ _ h,
simp [h],
end
lemma read_write_mono_left {α β} {i : α}
{x₀ : get_m w α} {f₀ : α → get_m w β}
{x₁ : put_m w}
(h : x₀ -<< x₁ = some i) :
(x₀ >>= f₀) -<< x₁ = f₀ i -<< pure punit.star :=
by rw ← read_write_mono h; simp
@[simp]
lemma read_write_word {α} (x : w) (f : ulift w → get_m w α) (f' : punit → put_m w) :
(read_word >>= f) -<< (write_word x >>= f') = f ⟨x⟩ -<< f' punit.star := rfl
@[simp]
lemma read_write_word' {α} (x : w) (f : ulift w → get_m w α) (f' : put_m w) :
(read_word >>= f) -<< (write_word x >> f') = f ⟨x⟩ -<< f' := rfl
@[simp]
lemma read_write_word'' {α} (x : w) (f : ulift w → get_m w α) :
(read_word >>= f) -<< write_word x = f ⟨x⟩ -<< pure punit.star := rfl
@[simp]
lemma read_write_pure {α} (x : α) (y : punit) (f : ulift w → get_m w α) :
(pure x : get_m w α) -<< pure y = pure x := rfl
@[simp]
lemma read_write_loop_word {α β γ : Type u} (σ₀ : α) (x : w)
(f : α → w → get_m w (β ⊕ α)) (g : β → get_m w γ)
(f' : punit → put_m w) :
get_m.loop f g σ₀ -<< (write_word x >>= f') =
(f σ₀ x >>= get_m.loop.rest f g)
-<< f' punit.star := rfl
#check @read_write_loop_word
lemma eval_eval {α}
(x₀ : get_m w α) (x₁ : put_m w) :
x₀.eval x₁.eval = x₀ -<< x₁ :=
by induction x₁ generalizing x₀; cases x₀;
simp! [*,read_write]; refl
open ulift
lemma get_m.fold_bind {α β} (x : get_m w α) (f : α → get_m w β) :
get_m.bind x f = x >>= f := rfl
lemma map_read_write {α β} (f : α → β) (x : get_m w α) (y : put_m w) :
(f <$> x) -<< y = f <$> (x -<< y) :=
begin
rw [← bind_pure_comp_eq_map,← bind_pure_comp_eq_map],
symmetry,
simp [(>>=)],
induction y generalizing x,
{ cases x; refl },
{ cases x; simp [read_write]; try { refl };
simp [get_m.bind,read_write,y_ih],
congr' 1, cases h : x_a x_a_2 y_a, refl,
cases a; refl,
dsimp [(>>=),get_m.bind],
congr, ext, simp [get_m.fold_bind],
rw bind_assoc, congr, ext z, cases z; refl,
simp [get_m.fold_bind], rw bind_assoc, congr, ext x,
cases x; refl, }
end
def sum_ulift (α β : Type u) : (α ⊕ β) ≃ (ulift.{v} α ⊕ ulift.{v} β) :=
(equiv.sum_congr equiv.ulift.symm equiv.ulift.symm)
-- def get_m.up : Π {α : Type u} {β : Type.{max u v}} (Heq : α ≃ β), get_m α → get_m β
-- | _ _ Heq (get_m.pure x) := get_m.pure $ Heq x
-- | _ _ Heq (get_m.fail) := get_m.fail
-- | _ _ Heq (get_m.read f) := get_m.read (λ w, get_m.up Heq (f w))
-- | _ β' Heq (@get_m.loop α β γ f g x) :=
-- get_m.loop
-- (λ a b, get_m.up (sum_ulift α β) (f (down.{v} a) b))
-- (λ w, get_m.up Heq (g $ down w))
-- (up.{v} x)
def get_m.up : Π {α : Type u} {β : Type.{max u v}} (Heq : α → β), get_m w α → get_m w β :=
λ α β f x, (@get_m.rec_on _ (λ α _, Π β, (α → β) → get_m w β) α x
(λ α β f, get_m.fail)
(λ α x β f, get_m.pure $ f x)
(λ α next get_m_up β f, get_m.read $ λ w, get_m_up w _ f)
(λ α β γ body rest x₀ get_m_up₀ get_m_up₁ β' f,
get_m.loop
(λ a b, get_m_up₀ (down a) b (ulift.{v} α ⊕ ulift.{v} β)
(sum_ulift α β))
(λ r, get_m_up₁ (down r) _ f)
(up x₀)) β f)
section eqns
variables {α β' γ : Type u} {β : Type.{max u v}} (Heq : α → β) (x : get_m w α)
variables {i : α} {f : w → get_m w α}
variables {f' : β' → w → get_m w (γ ⊕ β')}
variables {g' : γ → get_m w α} {j : β'}
@[simp] lemma get_m.up.eqn_1 : get_m.up Heq (get_m.pure i : get_m w _) = get_m.pure (Heq i) := rfl
@[simp] lemma get_m.up.eqn_2 : get_m.up Heq (get_m.fail : get_m w α) = get_m.fail := rfl
@[simp] lemma get_m.up.eqn_3 : get_m.up Heq (get_m.read f) = get_m.read (λ w, get_m.up Heq (f w)) := rfl
@[simp] lemma get_m.up.eqn_4 :
get_m.up Heq (get_m.loop f' g' j) =
get_m.loop
(λ a b, get_m.up (sum_ulift γ β') (f' (down.{v} a) b))
(λ w, get_m.up Heq (g' $ down w))
(up.{v} j) := rfl
end eqns
def put_m.up {α : Type u} {β : Type v} (Heq : α → β) : put_m' w α → put_m' w β
| (put_m'.pure x) := put_m'.pure $ Heq x
| (put_m'.write w f) := put_m'.write w $ λ u, put_m.up $ f u
instance : liftable1 (put_m'.{u} w) (put_m'.{v} w) :=
{ up := λ α β (eq : α ≃ β) x, put_m.up eq x
, down := λ α β (eq : α ≃ β) x, put_m.up eq.symm x
, down_up := by intros; induction x; simp [put_m.up,*]
, up_down := by intros; induction x; simp [put_m.up,*] }
open pliftable (up')
lemma up_bind {α β : Type u} {β' : Type (max u v)} (x : get_m w α) (g : α → get_m w β) (f : β → β') :
(x >>= g).up f = x.up up.{v} >>= (λ i : ulift α, (g $ down i).up f) :=
begin
dsimp [(>>=)],
induction x generalizing f g; try { refl };
simp [get_m.bind,*]
end
lemma equiv_bind {m} [monad m] [is_lawful_monad m] {α α' β}
(Heq : α ≃ α') (x : m α) (f : α → m β) :
x >>= f = (Heq <$> x) >>= f ∘ Heq.symm :=
by simp [(∘)] with functor_norm
def sum.map {α α' β β'} (f : α → α') (g : β → β') : α ⊕ β → α' ⊕ β'
| (sum.inr x) := sum.inr $ g x
| (sum.inl x) := sum.inl $ f x
def equiv.ulift_sum {α β} : (ulift $ α ⊕ β) ≃ (ulift α ⊕ ulift β) :=
{ to_fun := λ x, sum.map up up (down x),
inv_fun := λ x, up $ sum.map down down x,
right_inv := by intro; casesm* [_ ⊕ _,ulift _]; refl,
left_inv := by intro; casesm* [_ ⊕ _,ulift _]; refl }
lemma map_get_m_up {α : Type u} {β γ} (x : get_m w α) (f : α → β) (g : β → γ) :
g <$> get_m.up f x = get_m.up (g ∘ f) x :=
begin
dsimp [(<$>)],
induction x; simp [get_m.map,*]; refl,
end
lemma up_read_write {α : Type u} {α' : Type (max u v)} (x : get_m w α) (y : put_m w) (f : α ≃ α') :
x.up f -<< up' (put_m' w) y = liftable1.up option f (x -<< y) :=
begin
dsimp [up',liftable1.up],
induction y generalizing x f,
cases x; simp; refl,
cases x; simp [up',liftable.up',liftable1.up,read_write,put_m.up,*], refl, refl, refl,
rw [read_write,← y_ih,up_bind],
apply congr,
{ apply congr_arg, rw equiv_bind (@equiv.ulift_sum.{u u v v v} x_α x_β) ,
congr,
{ rw map_get_m_up, congr, ext, cases x; refl },
simp [(∘)], ext, cases x;
dsimp [equiv.ulift_sum,sum.map], refl,
cases x, refl, apply_instance },
congr,
end
lemma up_read_write' {α : Type u} {α' : Type (max u v)}
{x : get_m w α} {y : put_m w} (f : α → α') (f' : α ≃ α')
(h : ∀ i, f i = f' i) :
x.up f -<< up' (put_m' w) y = liftable1.up option f' (x -<< y) :=
begin
rw ← up_read_write, congr, ext, apply h
end
end medium
|
{-# OPTIONS --without-K #-}
module decidable where
open import level using (Level)
open import sets.empty using (⊥; ¬_)
open import sets.unit using (⊤; tt)
-- Decidable relations.
data Dec {i} (P : Set i) : Set i where
yes : ( p : P) → Dec P
no : (¬p : ¬ P) → Dec P
True : ∀ {i}{P : Set i} → Dec P → Set
True (yes _) = ⊤
True (no _) = ⊥
witness : ∀ {i}{P : Set i}{d : Dec P} → True d → P
witness {d = yes x} _ = x
witness {d = no _} ()
decide : ∀ {i} {P : Set i} {d : Dec P} → P → True d
decide {d = yes p} = λ _ → tt
decide {d = no f} = f
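-- Example (informal): if d = yes p then True d = ⊤ and `witness {d = d} tt`
-- recovers p; if d = no ¬p then True d = ⊥, so no certificate exists.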
|
\chapter{Code fragment and transport}
This chapter shows fragments of code, and the associated data, from one of the
architectural tests in the repository. For each fragment
the ingress signals are shown, together with the corresponding packets
generated. It further shows how the packets are transported via the on-chip
transport fabric.
The fragments shown below are extracted from the test whilst it is
being executed. In order to give some context to the fragment of
interest, code prior to and after the fragment is also given.
\section{Illegal Opcode test}
In this example the test executes an illegal opcode (at the line labelled 14) and
traps. The output from the patched spike execution is shown in line 30.
The input signals to the encoder are shown in the lines labelled 38--46. The HART
will have set the signals shown in line 42 when the illegal
instruction is executed and, as can be seen, the instruction is not retired.
Lines labelled 53, 56 and 59 show the packets output from the encoder for this fragment.
\subsection{Code fragment}
\begin{lstlisting}[basicstyle=\tiny]
1: *************************************************************************************
2: ****************** Fragment 0x80000222 - 0x80000226:illegal_opcode ******************
3: *************************************************************************************
4: KEY: ">" means pre-fragment execution, "<" means post-fragment execution
5: ^^^^^^^^^^^^^^^^^^^^^^^^^^ Part 1 of 1 ^^^^^^^^^^^^^^^^^^^^^^^^^^
6:
7: elf:
8: > 0000000080000104 <j_exception_stimulus>:
9: > 80000104: 00000297 auipc t0,0x0
10: > 80000108: 11e28293 addi t0,t0,286 # 80000222 <bad_opcode>
11: > 8000010c: 8282 jr t0
12: > 80000154: 9282 jalr t0
13: 0000000080000222 <bad_opcode>:
14: 80000222: 0000 unimp
15: 80000224: 0000 unimp
16: 80000226: b709 j 80000128 <j_target_end_fail>
17: < 00000000800001b0 <machine_trap_entry>:
18: < 800001b0: a805 j 800001e0 <machine_trap_entry_0>
19: < 00000000800001e0 <machine_trap_entry_0>:
20: < 800001e0: 342023f3 csrr t2,mcause
21: < 800001e4: fff0031b addiw t1,zero,-1
22: < 800001e8: 137e slli t1,t1,0x3f
23:
24: trace_spike:
25: ******** Data from br_j_asm.spike_pc_trace line 5029 ********
26: > ADDRESS=80000154, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
27: > ADDRESS=80000104, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
28: > ADDRESS=80000108, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
29: > ADDRESS=8000010c, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
30: ADDRESS=80000222, PRIVILEGE=3, EXCEPTION=1, ECAUSE=2, TVAL=0, INTERRUPT=0
31: < ADDRESS=800001b0, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
32: < ADDRESS=800001e0, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
33: < ADDRESS=800001e4, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
34: < ADDRESS=800001e8, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
35:
36: encoder_input:
37: ******** Data from br_j_asm.encoder_input line 5029 ********
38: > UNINFERABLE_JUMP, cause=0, tval=0, priv=3, iaddr_0=80000154, context=0, ctype=0, ilastsize_0=2
39: > ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=80000104, context=0, ctype=0, ilastsize_0=4
40: > ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=80000108, context=0, ctype=0, ilastsize_0=4
41: > UNINFERABLE_JUMP, cause=0, tval=0, priv=3, iaddr_0=8000010c, context=0, ctype=0, ilastsize_0=2
42: EXCEPTION, cause=2, tval=0, priv=3, iaddr_0=80000222, context=0, ctype=0, ilastsize_0=2,
----------> NOT RETIRED
43: < ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=800001b0, context=0, ctype=0, ilastsize_0=2
44: < ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=800001e0, context=0, ctype=0, ilastsize_0=4
45: < ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=800001e4, context=0, ctype=0, ilastsize_0=4
46: < ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=800001e8, context=0, ctype=0, ilastsize_0=2
47:
48: te_inst:
49: ******** Data from br_j_asm.te_inst_annotated line 5071 ********
50: > next=80000154 curr=80000150 prev=8000014c
51: > next=80000104 curr=80000154 prev=80000150
52: > next=80000108 curr=80000104 prev=80000154
53: > format=1, address=80000104, branches=1, branch_map=0, irreport=0, notify=0, updiscon=0,
Reason[prev_updiscon] Payload[05 04 01 00 80 00]
54: > next=8000010c curr=80000108 prev=80000104
55: next=80000222 curr=8000010c prev=80000108
56: format=2, address=8000010c, irreport=0, notify=0, updiscon=0, Reason[exc_only]
Payload[32 04 00 00 02]
57: < next=800001b0 curr=80000222 prev=8000010c
58: < next=800001e0 curr=800001b0 prev=80000222
59: < format=3, subformat=EXCEPTION, addrepc=80000222, branch=1, context=0, ecause=2, interrupt=0, ehaddr=0
privilege=3, tval=00000000, Reason[exception, prev_updiscon]
Payload[77 00 00 00 00 41 44 00 00 20 00 00 00 00 00]
60: < format=3, subformat=START, address=800001b0, branch=1, context=0,
privilege=3, Reason[exception_prev, reported]
Payload[73 00 00 00 00 6c 00 00 10]
61: < next=800001e4 curr=800001e0 prev=800001b0
62: < next=800001e8 curr=800001e4 prev=800001e0
63:
\end{lstlisting}
\subsection{Packet data}
The output from the encoder for the fragment of interest is given in
line 56. The least significant byte is output first: 0x32 is
byte 0, 0x04 is byte 1, and the final value 0x02 is byte 4.
\subsection{Siemens transport}
The packet format is given in Figure~\ref{fig:packet-format}; the packet is therefore packed as follows:
\begin{itemize}
\item
Header - 1 byte
\item
Index - N bits. As an example use 6 bits and the value of 1.
\item
Optional Siemens timestamp - 2 bytes. This example has no timestamp
\item
A 2-bit packet type field with value '01', meaning instruction trace
\item
Payload - [32 04 00 00 02]
\end{itemize}
Since the Siemens transport is byte-stream based, the data seen will be:
\begin {alltt}
[0x05][0x41][0x32 0x04 0x00 0x00 0x02]
\end{alltt}
\subsection{ATB transport}
Assuming a 32-bit ATB transport, this results in the following ATB transfers:
\begin {alltt}
[ATID=1] [ATBYTES = 3] [ATDATA = 0x00043205]
[ATID=1] [ATBYTES = 1] [ATDATA = 0x00000200]
\end{alltt}
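To make the packing rule concrete, the following minimal sketch shows one way
of serialising a packet (header byte followed by payload bytes) into 32-bit
ATDATA words. It is illustrative only, and the function and variable names are
ours rather than part of any specification: the byte stream is emitted least
significant byte first, ATBYTES carries the number of valid bytes minus one,
and the index travels out-of-band as ATID.
\begin{lstlisting}[basicstyle=\tiny]
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Illustrative sketch: pack one te_inst packet (header byte + payload)
// into 32-bit ATB transfers. ATBYTES = number of valid bytes - 1; the
// index is carried out-of-band as ATID, not in the byte stream.
void emit_atb(unsigned atid, uint8_t header, const std::vector<uint8_t>& payload)
{
    std::vector<uint8_t> stream;
    stream.push_back(header);
    stream.insert(stream.end(), payload.begin(), payload.end());
    for (std::size_t i = 0; i < stream.size(); i += 4) {
        std::size_t n = std::min<std::size_t>(4, stream.size() - i);
        uint32_t atdata = 0;
        for (std::size_t b = 0; b < n; ++b)   // least significant byte first
            atdata |= static_cast<uint32_t>(stream[i + b]) << (8 * b);
        std::printf("[ATID=0x%X] [ATBYTES = %zu] [ATDATA = 0x%08X]\n",
                    atid, n - 1, atdata);
    }
}

int main()
{
    // Reproduces the transfers above: header 0x05, payload 32 04 00 00 02
    emit_atb(0x1, 0x05, {0x32, 0x04, 0x00, 0x00, 0x02});
}
\end{lstlisting}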
\section{Timer Long Loop}
\subsection{Code fragment}
\begin{lstlisting}[basicstyle=\tiny]
1: **************************************************************************************
2: ****************** Fragment 0x800001a2 - 0x800001b0:timer_long_loop ******************
3: **************************************************************************************
4: KEY: ">" means pre-fragment execution, "<" means post-fragment execution
5: ^^^^^^^^^^^^^^^^^^^^^^^^^^ Part 443 of 445 ^^^^^^^^^^^^^^^^^^^^^^^^^^
6:
7: elf:
8: > 80000194: fab50ce3 beq a0,a1,8000014c <timer_interrupt_return>
9: > 80000198: 40430333 sub t1,t1,tp
10: > 8000019c: 34402473 csrr s0,mip
11: > 800001a0: 8c21 xor s0,s0,s0
12: 800001a2: 300024f3 csrr s1,mstatus
13: 800001a6: 8ca5 xor s1,s1,s1
14: 800001a8: fe0310e3 bnez t1,80000188 <timer_interrupt_long_loop>
15: 800001ac: bfb5 j 80000128 <j_target_end_fail>
16: 800001ae: 0001 nop
17: 00000000800001b0 <machine_trap_entry>:
18: 800001b0: a805 j 800001e0 <machine_trap_entry_0>
19: < 00000000800001e0 <machine_trap_entry_0>:
20: < 800001e0: 342023f3 csrr t2,mcause
21: < 800001e4: fff0031b addiw t1,zero,-1
22: < 800001e8: 137e slli t1,t1,0x3f
23: < 800001ea: 031d addi t1,t1,7
24:
25: trace_spike:
26: ******** Data from br_j_asm.spike_pc_trace line 5000 ********
27: > ADDRESS=80000194, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
28: > ADDRESS=80000198, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
29: > ADDRESS=8000019c, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
30: > ADDRESS=800001a0, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
31: ADDRESS=800001a2, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
32: ADDRESS=800001a6, PRIVILEGE=3, EXCEPTION=1, ECAUSE=8000000000000007, TVAL=0, INTERRUPT=1
33: ADDRESS=800001b0, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
34: < ADDRESS=800001e0, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
35: < ADDRESS=800001e4, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
36: < ADDRESS=800001e8, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
37: < ADDRESS=800001ea, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
38:
39: encoder_input:
40: ******** Data from br_j_asm.encoder_input line 5000 ********
41: > NONTAKEN_BRANCH, cause=0, tval=0, priv=3, iaddr_0=80000194, context=0, ctype=0, ilastsize_0=4
42: > ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=80000198, context=0, ctype=0, ilastsize_0=4
43: > ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=8000019c, context=0, ctype=0, ilastsize_0=4
44: > ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=800001a0, context=0, ctype=0, ilastsize_0=2
45: ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=800001a2, context=0, ctype=0, ilastsize_0=4
46: INTERRUPT, cause=7, tval=0, priv=3, iaddr_0=800001a6, context=0, ctype=0, ilastsize_0=2,
----------> NOT RETIRED
47: ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=800001b0, context=0, ctype=0, ilastsize_0=2
48: < ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=800001e0, context=0, ctype=0, ilastsize_0=4
49: < ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=800001e4, context=0, ctype=0, ilastsize_0=4
50: < ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=800001e8, context=0, ctype=0, ilastsize_0=2
51: < ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=800001ea, context=0, ctype=0, ilastsize_0=2
52:
53: te_inst:
54: ******** Data from br_j_asm.te_inst_annotated line 5038 ********
55: > next=80000194 curr=80000192 prev=80000190
56: > next=80000198 curr=80000194 prev=80000192
57: > next=8000019c curr=80000198 prev=80000194
58: > next=800001a0 curr=8000019c prev=80000198
59: next=800001a2 curr=800001a0 prev=8000019c
60: next=800001a6 curr=800001a2 prev=800001a0
61: format=1, address=800001a2, branches=15, branch_map=21845, irreport=0, notify=0, updiscon=0,
Reason[exc_only] Payload[bd aa aa 68 00 00 20]
62: next=800001b0 curr=800001a6 prev=800001a2
63: < next=800001e0 curr=800001b0 prev=800001a6
64: < format=3, subformat=EXCEPTION, addrepc=800001b0, branch=1, context=0, ecause=7, interrupt=0,
ehaddr=1, privilege=3, Reason[prev_exception]
Payload[77 00 00 00 80 13 36 00 00 10]
65: < next=800001e4 curr=800001e0 prev=800001b0
66: < next=800001e8 curr=800001e4 prev=800001e0
67: < next=800001ea curr=800001e8 prev=800001e4
68:
\end{lstlisting}
\subsection{Siemens transport}
The packet format is given in Figure~\ref{fig:packet-format}; the packet is therefore packed as follows:
\begin{itemize}
\item
Header - 1 byte
\item
Index - N bits. As an example use 6 bits and the value of 0xA
\item
Optional Siemens timestamp - 2 bytes. This example has no timestamp
\item
A 2-bit packet type field with value '01', meaning instruction trace
\item
Payload - [0xBD 0xAA 0xAA 0x68 0x00 0x00 0x20]
\end{itemize}
\begin {alltt}
[0x7][0x29][0xBD 0xAA 0xAA 0x68 0x00 0x00 0x20]
\end{alltt}
\subsection{ATB transport}
Assuming a 32-bit ATB transport, this results in the following ATB transfers:
\begin {alltt}
[ATID=0xA] [ATBYTES = 3] [ATDATA = 0xAAAABD07]
[ATID=0xA] [ATBYTES = 3] [ATDATA = 0x20000068]
\end{alltt}
\section{Startup xrle}
\subsection{Code fragment}
\begin{lstlisting}[basicstyle=\tiny]
1: ***********************************************************************************
2: ****************** Fragment 0x20010522 - 0x20010528:startup_xrle ******************
3: ***********************************************************************************
4: KEY: ">" means pre-fragment execution, "<" means post-fragment execution
5: ^^^^^^^^^^^^^^^^^^^^^^^^^^ Part 1 of 1 ^^^^^^^^^^^^^^^^^^^^^^^^^^
6:
7: elf:
8: 20010522 <main>:
9: 20010522: 1141 addi sp,sp,-16
10: 20010524: c606 sw ra,12(sp)
11: 20010526: c422 sw s0,8(sp)
12: 20010528: 0800 addi s0,sp,16
13: < 2001052a: 800107b7 lui a5,0x80010
14: < 2001052e: 6721 lui a4,0x8
15: < 20010530: e8670713 addi a4,a4,-378 # 7e86 <__heap_size+0x7686>
16: < 20010534: 1ae7aa23 sw a4,436(a5) # 800101b4 <_sp+0xfffffbfc>
17:
18: trace_spike:
19: ******** Data from xrle.spike_pc_trace line 2 ********
20: ADDRESS=20010522, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
21: ADDRESS=20010524, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
22: ADDRESS=20010526, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
23: ADDRESS=20010528, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
24: < ADDRESS=2001052a, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
25: < ADDRESS=2001052e, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
26: < ADDRESS=20010530, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
27: < ADDRESS=20010534, PRIVILEGE=3, EXCEPTION=0, ECAUSE=0, TVAL=0, INTERRUPT=0
28:
29: encoder_input:
30: ******** Data from xrle.encoder_input line 2 ********
31: ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=20010522, context=0, ctype=0, ilastsize_0=2
32: ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=20010524, context=0, ctype=0, ilastsize_0=2
33: ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=20010526, context=0, ctype=0, ilastsize_0=2
34: ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=20010528, context=0, ctype=0, ilastsize_0=2
35: < ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=2001052a, context=0, ctype=0, ilastsize_0=4
36: < ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=2001052e, context=0, ctype=0, ilastsize_0=2
37: < ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=20010530, context=0, ctype=0, ilastsize_0=4
38: < ITYPE_NONE, cause=0, tval=0, priv=3, iaddr_0=20010534, context=0, ctype=0, ilastsize_0=4
39:
40: te_inst:
41: ******** Data from xrle.te_inst_annotated line 2 ********
42: > format=3, subformat=SUPPORT, enable=1, encoder_mode=0, options=4, qual_status=0 Payload[1f 04]
43: next=20010522
44: next=20010524 curr=20010522
45: format=3, subformat=START, address=20010522, branch=1, context=0,
privilege=3, Reason[ppccd]
Payload[73 00 00 00 00 91 82 00 10]
46: next=20010526 curr=20010524 prev=20010522
47: next=20010528 curr=20010526 prev=20010524
48: < next=2001052a curr=20010528 prev=20010526
49: < next=2001052e curr=2001052a prev=20010528
50: < next=20010530 curr=2001052e prev=2001052a
51: < next=20010534 curr=20010530 prev=2001052e
52:
\end{lstlisting}
\subsection{Siemens transport}
The packet format is given in Figure~\ref{fig:packet-format}; the packet is therefore packed as follows:
\begin{itemize}
\item
Header - 1 byte
\item
Index - N bits. As an example use 6 bits and the value of 0x5
\item
Optional Siemens timestamp - 2 bytes. This example has no timestamp
\item
A 2-bit packet type field with value '01', meaning instruction trace
\item
Payload - [0x73 0x00 0x00 0x00 0x00 0x91 0x82 0x00 0x10]
\end{itemize}
\begin {alltt}
[0x9][0x15][0x73 0x00 0x00 0x00 0x00 0x91 0x82 0x00 0x10]
\end{alltt}
\subsection{ATB transport}
Assuming a 32-bit ATB transport, this results in the following ATB transfers:
\begin {alltt}
[ATID=0x5] [ATBYTES = 3] [ATDATA = 0x00007309]
[ATID=0x5] [ATBYTES = 3] [ATDATA = 0x82910000]
[ATID=0x5] [ATBYTES = 1] [ATDATA = 0x00001000]
\end{alltt}
|
lemma homotopy_eqv_contractible_sets: fixes S :: "'a::real_normed_vector set" and T :: "'b::real_normed_vector set" assumes "contractible S" "contractible T" "S = {} \<longleftrightarrow> T = {}" shows "S homotopy_eqv T"
|
\documentclass[10pt,letterpaper]{article}
\usepackage{hyperref}
\usepackage{cogsci}
\usepackage[nodoi]{apacite}
\usepackage{pslatex}
\usepackage{pdfsync}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{topcapt}
\usepackage{color}
\usepackage[english]{babel}
\usepackage{array}
\usepackage{pbox}
\usepackage[usenames,dvipsnames]{xcolor}
\usepackage[section]{placeins}
\usepackage[font=small,labelfont=bf]{caption}
\title{Large-scale investigations of variability in children's first words}
\author{{\large \bf Rose M. Schneider} \\ \texttt{[email protected]}\\ Department of Psychology \\ Stanford University \\
\And {\large \bf Daniel Yurovsky} \\ \texttt{[email protected]} \\ Department of Psychology \\ Stanford University \\
\And {\large \bf Michael C. Frank} \\ \texttt{[email protected]} \\ Department of Psychology \\ Stanford University \\ }
\begin{document}
\maketitle
%ABSTRACT
\begin{abstract}
A child's first word is an important step towards language. Aggregated across children, the distribution of these first productive uses of language can act as a window into early cognitive and linguistic development. We investigate both the variability and predictability in children's first words across four new datasets. We find, first, that children's first words tend to emerge earlier than previously estimated: more than 75 percent of children produce their first word before their first birthday. Second, we find a high degree of consistency in the types of things children name in their first words, independent of the age at which they are produced. Finally, we show that the particular words that children produce first are predictable from two linguistic factors: input frequency and phonological complexity. Together, our results suggest a degree of independence between early conceptual and linguistic development.
\textbf{Keywords:}
Language acquisition, word learning, cognitive development
\end{abstract}
%%INTRODUCTION%%%%
\section{Introduction}
Over the course of their first years, children rapidly go from speechless infants to toddlers producing and learning language at an astounding rate \cite{fenson1994}. Marking the beginning of productive language, a child's first word is an important and measurable insight into what a child is willing and able to talk about at that point in development. Yet, in contrast to later behaviors, children's first words often emerge during intimate moments between children and caregivers, and are difficult for external observers to record or measure. Here we leverage large-scale data from parental reports to ask what children's first words reveal about two key issues in early language learning: the time-course of the emergence of language, and the relation between conceptual and linguistic development. In three sets of analyses, we explore when a first word is likely to emerge, the semantic category distributions of these words, and some factors that predict first words.
Infants begin to show an aptitude for language from a very early point in development. By 1 month, infants already prefer to listen to child-directed speech over adult-directed speech \cite{cooper1990}. Over their first year, infants are learning to recognize and segment the distinctive sounds and word forms of their native language \cite{kuhl2004,werker2005}. Additionally, by 6--9 months, many infants already show a tendency to look to named targets when they hear common nouns, suggesting early beginnings for form-meaning mapping as well \cite{tincoff2012,bergelson2012}. Infants' abilities to comprehend language thus appear to be reasonably well-developed prior to 12 months. Furthermore, as early as 12 weeks, children also begin producing the sounds of their native language in babble, suggesting an early beginning to linguistic production \cite{kuhl2004}. However, developmental norms suggest that the typical child will produce their first word at 12 months. Is this early lag between comprehension and production real, or only apparent?
What is the relationship between children's linguistic and conceptual development? Typically-developing monolingual children show correlations between some cognitive achievements and their language production; for example, acquisition of words about disappearance is correlated with comprehension of object permanence \cite{gopnik1986}. But at a larger scale, conceptual development appears to play a more limited role: 2--5-year-old international adoptees learning English for the first time show the same gross patterns of development in vocabulary composition as monolingual infants \cite{snedeker2007}. There are also striking convergences in early words across very different cultural contexts \cite{tardif2007}. Do patterns of first word productions---and their distribution across semantic categories---suggest any broader relationships between language acquisition and cognitive development?
Because very early language is difficult to observe in the lab, in this study we leverage parent reports to learn about children's first words, and what they can reveal about the relationship between early conceptual and linguistic development. A child's first word is highly memorable for parents, and many parents record this milestone in baby books. We define a true ``first word'' as the consistent use of a form to communicate a particular meaning, whether or not that form matches the adult target form. While recognizing that parents may not share this definition, intuitively, we believe this is what parents tend to think they are reporting when they report first words, and found support for this across our datasets in reports of first words as phonological approximations of adult targets.
Parent report has both substantial disadvantages and real advantages as a scientific measurement. One issue with any self-report measure is that there is no way to validate participants' responses. Another complication is that parents may be biased observers, and interpret word-like babble as productive communication. Additionally, the recollection of a first word may be subject to errors in memory recall, or other retrospective biases. Nevertheless, parent report is widely used as a measure of early child language, e.g. in the MacArthur-Bates Communicative Development Inventory (CDI), a vocabulary checklist that is both a reliable and valid measure of early vocabulary (\citeNP{fenson1994,fenson2007}, although the reliability of the earliest ages of the CDI has been questioned; \citeNP{feldman2000}). Additionally, self-reports are very easy to collect, making them ideal for large-scale investigations like the present study.
To address issues of bias in self-report, we gathered data from four sources. The first dataset was a survey filled out by parents who were members of a local children's museum. These parents were an ethnically diverse population with a higher education level than the general population and a demonstrated interest in their child's development, likely leading to a high level of engagement in their children's early language. Our second dataset was the Amazon Mechanical Turk parent population; this community is more diverse in terms of age, gender, education level, and socio-economic status (SES). Our third dataset came from parents in the psycholinguistic research community. We selected this population for its familiarity with the subject and because this community was most likely to have written records about first words. The data we received from all three of these surveys was generally very consistent, both within and across datasets, leading us to believe at the very least that any bias in one was likely in operation across all three.
Our final dataset was drawn from \href{http://wordbank.stanford.edu}{Wordbank}, a large, open repository of CDI form data that aggregates across several samples including the updated CDI norming sample \cite{fenson2007}. We chose this dataset because the CDI form asks a parent to report their child's current productive vocabulary, and thus is free from any retrospective reporting biases that may skew our other surveys. Because the CDI contains a fixed set of words, it constrains the space of possible first words but also facilitates comparative analyses by reducing the space to a small, representative set.
Drawing on these datasets, we investigate the time-course of the emergence of productive language and potential factors that might lead to individual differences in linguistic development. First, we analyze variability in the age of first word onset, finding that 75\% of children are reported as producing a word prior to 12 months. Second, we ask whether the range of first words varies with children's chronological age, allowing us to ask about the relationship between linguistic and conceptual development. This analysis yields no measurable differences, indicating that linguistic factors---rather than conceptual ones---likely constrain the set of first words. Finally, we show that two specific linguistic factors, input frequency and phonetic complexity, both predict the words that children are likely to say first.
\vspace{-.2em}
%%GENERAL DATA METHODS%%
\section{General Methods}
Data for the study come from four datasets. Three of the four were surveys specifically designed for this study.
\vspace{-.2em}
%CDM
\subsection{Dataset 1: Museum Member Survey}
\subsubsection{Participants}
We sent out a very brief survey on children's first words to subscribed members of a large local children's museum. We received responses for 502 children (215 female, 285 male, and 2 with no reported sex; M age = 11 mo., median = 10 mo.). Several responses were translated into English where possible; one response could not be translated and was excluded from further analysis.
\subsubsection{Method}
Parents completed a web-based survey. The survey asked parents to report their child's first word (excluding ``mama'' and ``dada''), the word referent, a description of the situation surrounding the first word, the child's age at time of utterance (10 mo. or younger, 11 mo., 12 mo., 13 mo., 14 mo.), the child's current age, and sex. Parents answered for only one child in this survey. We standardized responses and corrected obvious spelling errors. When the meaning of the word was not immediately apparent, we relied on the parent's description of the circumstances surrounding the word and/or the parent's classification of the word type.
\subsubsection{Exclusion of ``mama'' and ``dada''}
While many parents reported that their child's first word was ``mama'' or ``dada'' (or some equivalent or variant), we excluded these children from our analyses. First, parents may be motivated to hear these words very early in babble, even when the word is not being used in a meaningful or consistent way. Second, we were interested in the range of concepts represented in the words we analyzed. Therefore, we stressed in our surveys that parents were to report their children's first word \emph{other} than ``mama'' or ``dada'' to avoid this possibility and to detect a larger range of conceptual types. Additionally, in the MTurk dataset, we included a question asking whether the child's first word was ``mama'', ``dada,'' or another first word. In total, 1107/1650 (67\%) of children were reported to produce ``mama'' (N = 618) or ``dada'' (N = 489) first rather than another word (N = 543).
\vspace{-.2em}
%SURVEY2
\subsection{Dataset 2: Amazon Mechanical Turk}
\subsubsection{Participants}
We recruited 1000 parents from Amazon Mechanical Turk (MTurk) to complete an in-depth survey on their children's first words. We restricted the survey to parents in the United States. This survey allowed parents to answer for multiple children. We received responses for 1671 children (813 female, 858 male; M age = 10 mo., median = 10 mo.). Responses from 20 children were excluded from subsequent analyses because they had not yet spoken (M age = 2.7 mo., median = 2 mo.). Responses in other languages were translated into English where possible; one response was excluded. After exclusions, this dataset contained responses from 996 parents and 1650 children's first words. Caregiver education levels were highly diverse (Elementary = 3; Some high school = 15; High school = 166; Some college = 309; College = 346; Some graduate school = 26; Graduate school = 131; N = 996).
\subsubsection{Method}
This survey was an extended version of the Museum survey, allowing input for multiple children, and asking the respondent to list their highest level of education, the child's birth order, sex, first word (excluding ``mama'' and ``dada''), word type, addressee of the first word, age at time of the word (0--24+ months), current age (0--18+ years), and both the language of the first word and the language typically spoken at home.
Data were handled as in Dataset 1. Due to the larger sample size, more phonological and morphological variations appeared in parents' reports of children's productions. A final standardized form was selected, and the various original first word forms were recoded as that standardized form. For example, ``Dog dog,'' ``Doggy,'' ``Doggie,'' and ``Dogie'' were all coded as ``Dog.'' When necessary, we relied on the parent's description of the situation in this coding process.
\vspace{-.2em}
%%%SURVEY 3
\subsection{Dataset 3: Psycholinguists }
\subsubsection{Participants}
We sent out a brief survey on children's first words to subscribed members of a Psycholinguistics listserv. We received 52 responses from this survey (26 female, 26 male; M age = 11.16 mo., median = 11 mo.).
\subsubsection{Method}
Questions included on the survey were: The approximate phonological form of the first word, the age of utterance, when the parent recorded the word (if at all), the child's sex, the target word, the child's birth order (first or later born), and the child's current age. Data were handled similarly to Datasets 1 and 2.
\vspace{-.2em}
%%%%WORDBANK%%%%
\subsection{Dataset 4: Wordbank}
\subsubsection{Participants}
At the time of our analysis, the Wordbank database contained 8889 unique CDI Words and Gestures administrations. From these, we selected the 76 English-speaking children whose parents reported that they produced exactly one word (31 female, 45 male, M age = 10.63 mo., median = 11 mo.). Caregiver education levels were fairly diverse (Some high school = 4; High school = 24; Some college = 21; College = 17; Some graduate school = 1; Graduate school = 9).
\subsubsection{Data preparation}
As responses were taken directly from the CDI, no data preparation was necessary.
%Analyses
\begin{table}[tb]
\centering
\begin{tabular}{cccc}
\hline
{\bf MTurk} & {\bf Museum} & {\bf Psycholinguists} & {\bf Wordbank} \\
\hline
\textbf{Dog} & \textbf{Ball} & Up & Baa Baa \\
No & \textbf{Hi} & More & Uh--Oh \\
\textbf{Ball} & \textbf{Dog}& \textbf{Hi} & Yum Yum \\
Bottle & Uh--Oh& \textbf{Cat} & \textbf{Woof Woof} \\
\textbf{Hi} & Duck & \textbf{Bye} & \textbf{Hi} \\
\textbf{Bye} & Car & & Vroom \\
Kitty & No & & This \\
Baba & \textbf{Cat} & & Meow \\
\textbf{Cat} & \textbf{Bye}& & Bottle \\
Milk & Up, More & & \textbf{Ball} \\
\hline
\end{tabular}
\caption {\small \label{tab:top10} Top ten first words (excluding ``mama'' and ``dada'') from each of the four datasets we examined. Words appearing in more than two datasets are bolded. Only words with more than one instance are included.}
\vspace{-2.6em}
\end{table}
\section{Analyses}
Table \ref{tab:top10} shows the top ten words from each dataset. Overall, there is substantial consistency across the four datasets, with ``Hi'' appearing in all four, and ``Bye'', ``Ball'', ``Dog''/``Woof Woof'', and ``Cat'' appearing in three.
Below we report three primary analyses. Analysis 1 examines the age of first production, Analysis 2 describes the semantic categories of these words, and Analysis 3 predicts which words tend to be produced on the basis of phonological complexity and input frequency.
\vspace{-.2em}
\subsection{Analysis 1: Age of First Word}
Despite evidence for very early word comprehension \cite{tincoff2012,bergelson2012}, conventional wisdom holds that the first word emerges around 12 months. However, a child's first word is almost exclusively heard by a parent or other caretaker. Is this reported lag between comprehension and production real or apparent?
%FIGURE 1
\begin{figure}[tb]
\center{\includegraphics[width=.9\linewidth]{figures/Figure1.pdf}}
\caption{\label{fig:cdfs} Cumulative probability of a child having produced her first word across development. In all datasets, more than 75\% of children had produced their first word by their first birthday and more than half had produced their first word by 10 months. Shaded regions show 95\% confidence intervals computed by non-parametric bootstrap.}
\vspace{-2.6em}
\end{figure}
Using data from a total of 2,279 children, we plotted the cumulative probability of a child having produced a first word as a function of their age and dataset (Figure~\ref{fig:cdfs}). Across all four datasets, approximately 75\% of children had produced a first word prior to 12 months. This result was strikingly consistent across datasets, despite significant variance in the tails.
Data from the Museum survey were truncated due to a ``ten months or earlier'' response option and showed the least age variability, with respondents modally choosing the earliest option. Data from Wordbank were also truncated due to the 8 month cutoff for the use of the CDI (as well as data sparsity at the oldest ages); nevertheless, Wordbank data showed the earliest word productions. One possibility is that this reflects a bias towards reporting at least one word, given the process of going through the entire CDI checklist; another is that seeing the checklist allows parents to consider their child's early productions more thoroughly.
Data from the MTurk survey showed a broader distribution of ages, perhaps due to the greater diversity (as well as larger size) of this sample. Some children were reported to be producing words implausibly early (e.g., 4 months). These responses are very likely, though not certainly, the result of reporting errors or biases. To estimate retrospective reporting biases, we regressed the mean age of first words against the time since the event in our largest dataset (MTurk, which had the most fine-grained age data), but did not find a significant relationship, suggesting no measurable bias of this type. On the other end of the spectrum, some respondents reported first words appearing after 18 months, a timeline which might raise clinical concerns. Indeed, in a population as large as this one, there are almost certainly some children with speech-language delays or other developmental disorders. Thus, this dataset is potentially valuable for estimating the right tail of the distribution in a diverse population.
Finally, the Psycholinguist dataset shows a relatively later and steeper onset of word production than the other three (though it still reaches 75\% around 11 months). Given the high level of education of the respondents, it is likely that these children would have large early vocabularies \cite<e.g.>[et seq.]{hart1995}. On the other hand, the majority of these respondents recorded their child's first word at the time of production, decreasing concerns about retrospective report. Additionally, these respondents had training in psycholinguistics and were more likely to apply a more stringent standard (we shared our definition of a first word with respondents in the survey instructions). Thus we view the lack of very early respondents as prima facie evidence that first words before 9 months are rarer than our other surveys might lead us to believe.
In sum, we see some evidence for over-optimism (estimating first words earlier than we might expect) in a number of our datasets. It must be noted that, as our surveys were designed exclusively to ask about a child's first word, parents of later producers who had not spoken a first word yet may have been unwilling to take the survey, potentially skewing the age of production earlier. However, across the Museum, MTurk, and Psycholinguist datasets, more than 50\% of the children were currently older than 2 years, so it is possible that later-producers are included in these datasets. Additionally, we saw no evidence of retrospective reporting biases. A plausible account of this pattern is that first words---whether detected optimistically or realistically---are a memorable event whose date and context are recalled well. In addition, despite differences in the tails, there was a striking convergence between datasets in suggesting that most children in our sample produced a first word prior to their first birthday.
\vspace{-.2em}
\subsection{Analysis 2: Independence of Age and First Word}
The variability in children's age of first production gives us a natural tool for asking about the relationship between conceptual and linguistic development. All things being equal between age groups, younger children should be less conceptually sophisticated and hence might produce words for a more restricted range of concepts.\footnote{Of course, younger producers might be on average more conceptually sophisticated than older producers, but the current analysis assumes that other developmental factors (e.g. phonological development, language experience, etc.) also vary.} Alternatively, if the concepts that children most want to talk about are present early \cite{snedeker2007,snedeker2012,gleitman1990}, we should predict no difference in the distribution of first words for older and younger children.
We ask here whether older children show a different distribution of first words. We assigned words to the categories that appear on the CDI instrument (e.g., animals, games and routines, toys, people, etc.) and conducted our analysis over the category distribution of words (a loose proxy for their semantic distribution). We assigned CDI categories consistently across datasets for words that did not appear on the CDI word list. Ninety-one children were excluded because their first word could not be categorized.
Figure~\ref{fig:cdi_cats} shows the frequencies of the CDI categories split by age (\textless{12 mo.}, \textgreater{12 mo.}) and grouped by dataset. Especially in the two larger datasets, the distributions across categories were virtually indistinguishable. Animals, Games and Routines, Toys, and People were all frequent first word categories, and all seemed equally compelling as a first word for both early and later producers. Data for later speakers in Wordbank were sparse, because children were selected for this analysis when they were producing exactly one word, according to parental report on the CDI, and only 27 children met this criterion.
%CDI cats
\begin{figure*}[t]
\begin{center}
\includegraphics[width = .9\textwidth]{figures/Figure2.pdf}
\end{center}
\caption{Proportion of children's first words falling into each CDI category. The datasets showed a high degree of consistency, with most first words referring to animals or games and routines. These distributions were highly consistent between older and younger children, suggesting that first words are driven by linguistic rather than conceptual factors. The dashed line shows the baseline distribution of CDI categories.}
\label{fig:cdi_cats}
\vspace{-1.5em}
\end{figure*}
%table 2
\begin{table}[tb]
\centering
\begin{tabular}{ccc}
\hline
Data Set & CDI Category & First Word \\
\hline
MTurk & (-0.22, 0.04) & (-0.31, 0.13) \\
Museum & (-0.25, 0.28) & (-0.12, 0.52) \\
Psycholinguists & (0.00, 1.33) & (0.00, 1.11) \\
Wordbank & (-0.76, 0.84) & (-0.59, 0.69) \\
\hline
\end{tabular}
\caption{\label{tab:ent_diffs} 95\% confidence intervals on differences in the entropy of First CDI Category and First Word between older and younger children.}
\vspace{-2.65em}
\end{table}
To quantify differences in variability across age, we asked whether the entropy of children's word or category distributions differed between our older and younger children \cite{shannon1948}. Differences in entropy would signal differences in the breadth of the distribution across words or categories. Because entropy is sensitive to sample size, for each dataset we split children into older and younger groups, and then down-sampled the larger group to the size of the smaller one. We then computed the difference in entropy between the older group and the younger group at both the word and category level. For each dataset and for both words and CDI categories, we used non-parametric bootstrap resampling to identify whether the observed difference in entropy was significant at the $p = .05$ level. For all datasets and measures, the 95\% confidence interval for entropy differences included 0, indicating no significant difference in entropy across ages (Table~\ref{tab:ent_diffs}).
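Concretely, writing $\hat{p}$ for the empirical distribution over words (or categories) within an age group, the quantity compared is the Shannon entropy and its between-group difference,
\[
H(\hat{p}) = -\sum_{i} \hat{p}_i \log_2 \hat{p}_i, \qquad \Delta H = H(\hat{p}_{\mathrm{older}}) - H(\hat{p}_{\mathrm{younger}}),
\]
with $\Delta H$ recomputed on each size-matched bootstrap resample (the base-2 logarithm is our notational choice; any base yields the same sign for $\Delta H$).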
In our exploration of the data, we found one word that was both highly frequent and potentially linked to conceptual development: ``no.'' Negation is a complex construct, and various functions of negation (denial, refusal, nonexistence) are posited to emerge at different points in a child's development \cite{pea1982}. We coded instances of ``no'' (in the MTurk data, where the majority of instances were reported) based on parent descriptions of the situation surrounding the first word. Of 108 children producing ``no'' as a first word, 40\% did so as a refusal; there were no instances of ``no'' being used as denial, which is acquired later \cite{pea1982}.
In sum, despite producing a first word at different points in their conceptual development, both early and later producers in our sample chose to talk about the same semantic categories, and in many cases, the same things (see Table \ref{tab:top10}), although our dataset does not allow us to capture whether the meanings of these words change across development \cite{bates1976}. However, the similar distributions of semantic categories in early and later speakers suggest that first words tend to reflect concepts that are available early. Why then do children consistently pick certain words to talk about? In the next analysis, we examine the role of input frequency and phonological complexity in determining which words children produce first.
\vspace{-.2em}
\subsection{Analysis 3: Predicting First Words}
The goal of our analysis is to determine both why some first words were produced more frequently than others (e.g. ``dog'' vs. ``asleep''), and why some words were never first words at all (e.g. ``animal''). Because the set of words that were never produced is in principle unbounded, we needed to constrain our set of candidate first words to a small, representative, finite set. For this reason, and to ensure fair comparison across datasets, we restricted our set of words to the 385 words that appear on the CDI Words and Gestures form.
To estimate the approximate frequency with which children hear each of these words, we tabulated the number of times each appears in CHILDES (a large corpus of parent-child interactions; \citeNP{macwhinney2000}). To ensure a representative sample, we counted the number of appearances of each word in a child's mother's speech across all of the corpora in the North American subset. These frequencies were then log-transformed. To estimate phonetic complexity, we chose a simple, theory-independent measure: number of phonemes. For each of this same subset of words, we queried the MRC Psycholinguistic Database \cite{Wilson1988}. Number of phonemes is an imperfect measure of phonetic complexity---it misses differences in articulatory complexity that contribute to the relative difficulty of \emph{producing} different words (e.g. ``truck'' vs. ``bunny'')---but it does capture some of the variability among the CDI words.
To predict the number of observations of each of a set of categorical outcomes, the standard statistical model is Poisson regression, but this method behaves poorly when distributions violate its assumptions through high variance (overdispersion) and an excess of zeros. To adjust for these violations, we used a hurdle model \cite{mullahy1986}. This model predicts the observed counts through a combination of two processes: a binomial threshold (hurdle) that first determines whether a count is zero or positive, and a second component that determines the size of the count if it is non-zero. Because the datasets were of such different sizes, we fit a separate hurdle model to each and examined consistency in the estimated parameters across datasets.
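In a standard formulation of this model (the notation and link functions here are ours, given for concreteness rather than as a verbatim description of the fitted software), the count $Y_w$ for candidate word $w$ follows
\[
P(Y_w = 0) = 1 - \pi_w, \qquad P(Y_w = k \mid Y_w > 0) = \frac{e^{-\lambda_w}\lambda_w^{k}}{k!\,\left(1 - e^{-\lambda_w}\right)} \quad (k \geq 1),
\]
a zero-truncated Poisson above the hurdle, with $\operatorname{logit}(\pi_w)$ and $\log\lambda_w$ each modeled as linear in log input frequency and number of phonemes.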
%figure 3
\begin{figure}[h]
\center{\includegraphics[width=.8\linewidth]{figures/Figure3.pdf}}
\caption{\label{fig:hurdles} Parameter estimates for hurdle models predicting children's first words. Models showed a high degree of consistency across datasets: first words tend to be higher frequency and have fewer phonemes. Error bars represent 95\% confidence intervals. Intercepts are omitted for clarity.}
\vspace{-1.5em}
\end{figure}
% \subsubsection{Results and Discussion}
Across datasets, input frequency and phonetic complexity consistently predicted the number of children who produced each word as their first word. As we hypothesized, in almost all cases candidate words were more likely to be first words if they were higher frequency in children's input, and if they had fewer phonemes (Figure~\ref{fig:hurdles}). In conjunction with the analyses above, these results suggest a high degree of consistency in children's first productions, independent of conceptual development, and dependent instead on linguistic input and speech production fluency.
\section{General Discussion}
What can children's first words reveal about their conceptual and linguistic development? Using parent report data, we presented three analyses, touching on the timing of productive language emergence, the distribution of conceptual categories across developmentally early and late first words, and factors that play a role in predicting which words are produced first. More than three quarters of children produced a first word prior to their first birthday, but the particular concepts these words named did not vary across age. Instead, two non-conceptual factors---input frequency and phonetic complexity---predicted the number of children who produced a particular word first.
A child's first word is a highly salient moment whose memorability to caregivers also makes it ideally suited for use in parent-report measures. Nonetheless, even when analyzed at this scale, parent report is always limited by observer bias. Although we found no evidence of retrospective biases in parents of older children over-reporting early first words, we must remain aware that there are potential issues with memory recall or other parental biases, as in any self-report measure. We dealt with these issues in several ways, including by seeking converging evidence across multiple, distinct datasets and by testing explicitly for retrospective biases. Nevertheless, the possibility of bias is present, and future studies should consider the possibility of prospective report or dense recording techniques to extend and validate our findings.
We began by asking about the lag between comprehension and production in early language. One possible explanation for this lag is that there is a period during early infancy in which word knowledge consists of associations between auditory and visual stimuli and so there is no drive to communicate through production. In contrast to this hypothesis, our data suggest that many children are striving to communicate even quite early on. Consistent with other studies of early vocabulary \cite{tardif2007}, the productions that parents reported also included functional and communicative items---``hi,'' ``more,'' and ``no''---as well as common nouns. In sum, our work suggests that studying the very first emergence of productive speech is a rich method for adding to our understanding of language development.
\section{Acknowledgements}
Thanks to Ally Kraus for assistance with survey design and Jenni Martin and Rick Berg for help with the Museum data, as well as to the members of the Language and Cognition Lab.
\bibliographystyle{myapacite}
\setlength{\bibleftmargin}{.125in}
\setlength{\bibindent}{-\bibleftmargin}
\bibliography{fw_cogsci}
\end{document}
|
#ifndef CSM_FILTER_NODE_H
#define CSM_FILTER_NODE_H
#include <string>
#include <map>
#include <vector>
#include <boost/shared_ptr.hpp>
#include <QMetaType>
namespace CSMWorld
{
class IdTableBase;
}
namespace CSMFilter
{
/// \brief Root class for the filter node hierarchy
///
/// \note When the function documentation for this class mentions "this node", this should be
/// interpreted as "the node and all its children".
class Node
{
// not implemented
Node (const Node&);
Node& operator= (const Node&);
public:
Node();
virtual ~Node();
virtual bool test (const CSMWorld::IdTableBase& table, int row,
const std::map<int, int>& columns) const = 0;
///< \return Can the specified table row pass through the filter?
/// \param columns column ID to column index mapping
virtual std::vector<int> getReferencedColumns() const = 0;
///< Return a list of the IDs of the columns referenced by this node. The column mapping
/// passed into test as columns must contain all columns listed here.
virtual std::string toString (bool numericColumns) const = 0;
///< Return a string that represents this node.
///
/// \param numericColumns Use numeric IDs instead of string to represent columns.
};
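    // A minimal concrete node might look like the following (hypothetical
    // sketch for illustration only; the real leaf and n-ary node classes
    // live in other CSMFilter headers):
    //
    //     class TrueNode : public Node
    //     {
    //         public:
    //
    //             virtual bool test (const CSMWorld::IdTableBase&, int,
    //                 const std::map<int, int>&) const { return true; }
    //
    //             virtual std::vector<int> getReferencedColumns() const
    //             { return std::vector<int>(); }
    //
    //             virtual std::string toString (bool) const { return "true"; }
    //     };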
}
Q_DECLARE_METATYPE (boost::shared_ptr<CSMFilter::Node>)
#endif
|
using Test, MPI
using ClimateMachine.Mesh.Topologies
using ClimateMachine.Mesh.Grids
using ClimateMachine.Mesh.Geometry
using StaticArrays
MPI.Initialized() || MPI.Init()
@testset "LocalGeometry" begin
FT = Float64
ArrayType = Array
xmin = 0
ymin = 0
zmin = 0
xmax = 2000
ymax = 400
zmax = 2000
Ne = (20, 2, 20)
polynomialorder = 4
brickrange = (
range(FT(xmin); length = Ne[1] + 1, stop = xmax),
range(FT(ymin); length = Ne[2] + 1, stop = ymax),
range(FT(zmin); length = Ne[3] + 1, stop = zmax),
)
topl = StackedBrickTopology(
MPI.COMM_SELF,
brickrange,
periodicity = (true, true, false),
)
grid = DiscontinuousSpectralElementGrid(
topl,
FloatType = FT,
DeviceArray = ArrayType,
polynomialorder = polynomialorder,
)
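    # Expected node spacing per direction: each element has width
    # (len / Ne[i]) and contains `polynomialorder` node intervals; Savg is the
    # geometric-mean spacing and M the diagonal resolution metric (entries
    # S .^ -2), which every quadrature point's LocalGeometry should reproduce.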
S = (
(xmax - xmin) / (polynomialorder * Ne[1]),
(ymax - ymin) / (polynomialorder * Ne[2]),
(zmax - zmin) / (polynomialorder * Ne[3]),
)
Savg = cbrt(prod(S))
M = SDiagonal(S .^ -2)
for e in 1:size(grid.vgeo, 3)
for n in 1:size(grid.vgeo, 1)
g = LocalGeometry(Val(polynomialorder), grid.vgeo, n, e)
@test lengthscale(g) ≈ Savg
@test Geometry.resolutionmetric(g) ≈ M
end
end
end
|
Fairfield, N.J. – December 3, 2009 – Kyocera Mita America, Inc., one of the world's leading document solutions companies, today announced the expansion of its flagship TASKalfa multifunctional product (MFP) series with the launch of its latest black & white MFP: the TASKalfa 300i (30 pages-per-minute). The TASKalfa 300i completes Kyocera Mita’s award-winning black & white MFP product series, joining the most recent introductions, the TASKalfa 520i and 420i models (52 and 42 pages per minute, ppm respectively).
With its powerful yet sleek, space-saving and energy-efficient design, the TASKalfa 300i delivers big performance for the needs of any office environment. The TASKalfa 300i is built on Kyocera’s award-winning MFP engine technology, which utilizes the company’s patented long-life Amorphous Silicon drum and offers crisp 600 dpi image quality with fast, efficient print, copy and scan functionality and document portability.
With a 300,000 page preventative maintenance (PM) schedule, the TASKalfa 300i is a flexible and durable document imaging system that maximizes productivity and uptime for end-users.
The simplicity of the TASKalfa 300i’s 8.5 inch color touch screen control panel allows end-users to quickly and efficiently navigate functionality, imaging and finishing options. Benefits include full-color scanning, a USB host interface for portable document sharing, and optional fax and internet fax capabilities.
The TASKalfa 300i efficiently processes documents and data with its standard memory capacity of 2 GB of RAM and 160 GB hard disk drive. With the ever-increasing speed of business and the demand for document portability, Kyocera offers an advanced USB host interface that gives users the ability to conveniently print from and scan to a USB drive in PDF, JPEG, TIFF and XPS file formats – directly from the control panel. Scanned PDF documents can be password protected and encrypted from the control panel, and can be shared through features such as scan to e-mail, scan to SMB, scan to FTP, scan to USB, and WSD and TWAIN scanning.
The TASKalfa 300i comes standard with dual 500-sheet adjustable paper drawers and a 200-sheet multi-purpose tray. Maximum paper capacity tops 2,200 sheets and a wide range of paper stock is supported, from 16 - 32 lb Bond in the trays, and up to 110 lb Index from the multi-purpose tray.
For complex document finishing needs, users can choose from a space-saving internal finisher or a 1,000-sheet multi-position finisher, both designed to handle any workgroup’s finishing requirements.
The performance of Kyocera’s TASKalfa series is further optimized through Kyocera’s Hybrid Platform for Advanced Solutions (HyPAS). HyPAS is a powerful and scalable software solution platform designed to optimize Kyocera’s MFPs with integrated solutions that enable the TASKalfa 300i to seamlessly integrate with widely accepted software applications and operate in virtually any environment.
The TASKalfa 300i comes standard with Kyocera’s KX Driver, PDF Direct Print and unique PRESCRIBE Solution for on-demand black and white document creation and output. In addition, Kyocera offers advanced document workflow solutions such as KYOcapture and KYOcapture Express, which integrate with many of the industry’s leading document management systems, including SentryFile, Laserfiche, Documentum, and Microsoft SharePoint; and flexible network and device management tools such as KMnet Viewer, PrintQ Manager and Equitrac.
For organizations concerned about securing their data, Kyocera offers a standard IPv6 and IPsec compliant network interface, an optional Data Security Kit for hard disk drive overwrite and encryption, PrintQ Manager Secure Job Release and Secure Printing, and Custom Box for users - all of which offer password protected document security features. A Security Watermark is also available to deter unauthorized reproduction of documents.
The entire mid-range TASKalfa MFP series, including the 300i, 420i and 520i, connects to virtually any environment through Kyocera’s cross-platform compatibility, with the drivers, tools, utilities and software options to optimize workflow and help guarantee performance in any business. TASKalfa MFPs are compatible with Mac, Linux and Windows® operating systems, including the new Windows 7 operating system.
The TASKalfa 300i is available today through authorized Kyocera Mita dealers at a Manufacturer’s Suggested Retail Price of $8,493. To find the nearest dealer, or for more information, please visit Kyocera’s web site: www.kyoceramita.com/us.
|
import dlhalos_code.data_processing as tn
from pickle import dump
import numpy as np
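# Prepares training and validation particle-ID sets (and the fitted output
# scaler) for the halo-mass regression runs: one training set drawn uniformly
# at random across the training simulations, and one drawn uniformly across 80
# halo-mass bins, both with a high-mass cut at log-mass 13.4 (the
# `log_high_mass_limit` setting). All sets are pickled for later training jobs.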
if __name__ == "__main__":
path_random = "/mnt/beegfs/work/ati/pearl037/regression/training_set_13.4/20sims/random/"
path_uniform = "/mnt/beegfs/work/ati/pearl037/regression/training_set_13.4/20sims/uniform/"
path_sims = "/mnt/beegfs/work/ati/pearl037/"
all_sims = ["%i" % i for i in np.arange(22)]
all_sims.remove("3")
all_sims.remove("6")
all_sims.append("6")
s = tn.SimulationPreparation(all_sims, path=path_sims)
train_sims = all_sims[:-1]
val_sim = all_sims[-1]
    # Save training sets sampled at random from the 20 training simulations, with n=200,000
n_samples = 200000
saving_path = path_random + "200k/"
training_set = tn.InputsPreparation(train_sims, shuffle=True, scaler_type="minmax", return_rescaled_outputs=True,
output_range=(-1, 1), log_high_mass_limit=13.4,
load_ids=False, random_subset_each_sim=None,
random_style="random", random_subset_all=n_samples,
path=path_sims)
dump(training_set.particle_IDs, open(saving_path + 'training_set.pkl', 'wb'))
dump(training_set.labels_particle_IDS, open(saving_path + 'labels_training_set.pkl', 'wb'))
dump(training_set.scaler_output, open(saving_path + 'scaler_output.pkl', 'wb'))
v_set = tn.InputsPreparation([val_sim], scaler_type="minmax", load_ids=False, shuffle=True,
random_style="random", log_high_mass_limit=13.4,
random_subset_all=5000, random_subset_each_sim=1000000,
scaler_output=training_set.scaler_output, path=path_sims)
dump(v_set.particle_IDs, open(saving_path + 'validation_set.pkl', 'wb'))
dump(v_set.labels_particle_IDS, open(saving_path + 'labels_validation_set.pkl', 'wb'))
v_set2 = tn.InputsPreparation([val_sim], scaler_type="minmax", load_ids=False, log_high_mass_limit=13.4,
random_style="random", random_subset_all=50000, random_subset_each_sim=1000000,
scaler_output=training_set.scaler_output, path=path_sims)
dump(v_set2.particle_IDs, open(saving_path + 'larger_validation_set.pkl', 'wb'))
dump(v_set2.labels_particle_IDS, open(saving_path + 'larger_labels_validation_set.pkl', 'wb'))
del saving_path
del n_samples
del training_set
del v_set
del v_set2
    # Save training sets sampled uniformly from 80 mass bins, with n=5,000 per bin
saving_path = path_uniform + "5k_in_each_80bins/"
n_samples = 5000
training_set = tn.InputsPreparation(train_sims, shuffle=True, scaler_type="minmax", return_rescaled_outputs=True,
output_range=(-1, 1), load_ids=False, random_subset_each_sim=None,
log_high_mass_limit=13.4,
random_style="uniform", num_per_mass_bin=n_samples, num_bins=80,
path=path_sims)
dump(training_set.particle_IDs, open(saving_path + 'training_set.pkl', 'wb'))
dump(training_set.labels_particle_IDS, open(saving_path + 'labels_training_set.pkl', 'wb'))
dump(training_set.scaler_output, open(saving_path + 'scaler_output.pkl', 'wb'))
v_set = tn.InputsPreparation([val_sim], scaler_type="minmax", load_ids=False, shuffle=True,
random_style="random", random_subset_all=5000, random_subset_each_sim=1000000,
log_high_mass_limit=13.4,
scaler_output=training_set.scaler_output, path=path_sims)
dump(v_set.particle_IDs, open(saving_path + 'validation_set.pkl', 'wb'))
dump(v_set.labels_particle_IDS, open(saving_path + 'labels_validation_set.pkl', 'wb'))
v_set2 = tn.InputsPreparation([val_sim], scaler_type="minmax", load_ids=False,
log_high_mass_limit=13.4, random_style="random", random_subset_all=50000,
random_subset_each_sim=1000000,
scaler_output=training_set.scaler_output, path=path_sims)
dump(v_set2.particle_IDs, open(saving_path + 'larger_validation_set.pkl', 'wb'))
dump(v_set2.labels_particle_IDS, open(saving_path + 'larger_labels_validation_set.pkl', 'wb'))
|
[STATEMENT]
lemma ex_k_mod:
assumes coprime: "coprime (e :: nat) ((P-1)*(Q-1))"
and "P \<noteq> Q"
and "prime P"
and "prime Q"
and "d \<noteq> 0"
and " [e*d = 1] (mod (P-1))"
shows "\<exists> k. e*d = 1 + k*(P-1)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<exists>k. e * d = 1 + k * (P - 1)
[PROOF STEP]
proof-
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<exists>k. e * d = 1 + k * (P - 1)
[PROOF STEP]
have "e > 0"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. 0 < e
[PROOF STEP]
using assms(1) assms(2) prime_gt_0_nat
[PROOF STATE]
proof (prove)
using this:
coprime e ((P - 1) * (Q - 1))
P \<noteq> Q
prime ?p \<Longrightarrow> 0 < ?p
goal (1 subgoal):
1. 0 < e
[PROOF STEP]
by fastforce
[PROOF STATE]
proof (state)
this:
0 < e
goal (1 subgoal):
1. \<exists>k. e * d = 1 + k * (P - 1)
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
0 < e
[PROOF STEP]
have "e*d \<ge> 1"
[PROOF STATE]
proof (prove)
using this:
0 < e
goal (1 subgoal):
1. 1 \<le> e * d
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
0 < e
coprime e ((P - 1) * (Q - 1))
P \<noteq> Q
prime P
prime Q
d \<noteq> 0
[e * d = 1] (mod P - 1)
goal (1 subgoal):
1. 1 \<le> e * d
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
1 \<le> e * d
goal (1 subgoal):
1. \<exists>k. e * d = 1 + k * (P - 1)
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
1 \<le> e * d
[PROOF STEP]
obtain k where k: "e*d = 1 + k*(P-1)"
[PROOF STATE]
proof (prove)
using this:
1 \<le> e * d
goal (1 subgoal):
1. (\<And>k. e * d = 1 + k * (P - 1) \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
using assms(6) cong_to_1'_nat
[PROOF STATE]
proof (prove)
using this:
1 \<le> e * d
[e * d = 1] (mod P - 1)
[?a = 1] (mod ?n) = (?a = 0 \<and> ?n = 1 \<or> (\<exists>m. ?a = 1 + m * ?n))
goal (1 subgoal):
1. (\<And>k. e * d = 1 + k * (P - 1) \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
e * d = 1 + k * (P - 1)
goal (1 subgoal):
1. \<exists>k. e * d = 1 + k * (P - 1)
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
e * d = 1 + k * (P - 1)
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
e * d = 1 + k * (P - 1)
goal (1 subgoal):
1. \<exists>k. e * d = 1 + k * (P - 1)
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
\<exists>k. e * d = 1 + k * (P - 1)
goal:
No subgoals!
[PROOF STEP]
qed
|
%% patchMarchDistMapIterative
% Below is a demonstration of the features of the |patchMarchDistMapIterative| function
%%
clear; close all; clc;
%% Syntax
% |[varargout]=patchMarchDistMapIterative(varargin);|
%% Description
% UNDOCUMENTED. (Judging by the function name alone, it appears to compute a
% distance map over a patch surface by iterative marching; consult the
% function source for the authoritative behavior.)
%% Examples
%
%%
%
% <<gibbVerySmall.gif>>
%
% _*GIBBON*_
% <www.gibboncode.org>
%
% _Kevin Mattheus Moerman_, <[email protected]>
%%
% _*GIBBON footer text*_
%
% License: <https://github.com/gibbonCode/GIBBON/blob/master/LICENSE>
%
% GIBBON: The Geometry and Image-based Bioengineering add-On. A toolbox for
% image segmentation, image-based modeling, meshing, and finite element
% analysis.
%
% Copyright (C) 2006-2022 Kevin Mattheus Moerman and the GIBBON contributors
%
% This program is free software: you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published by
% the Free Software Foundation, either version 3 of the License, or
% (at your option) any later version.
%
% This program is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with this program. If not, see <http://www.gnu.org/licenses/>.
|
{-# OPTIONS --safe --cubical --postfix-projections #-}
module Relation.Binary where
open import Level
open import Function using (_∘_; flip; id)
open import Inspect using (inspect;〖_〗)
open import HLevels using (isSet)
open import Path as ≡ hiding (sym; refl)
open import Data.Bool using (Bool; true; false; bool)
open import Data.Bool.Properties using (false≢true)
open import Data.Empty using (⊥; ⊥-elim; ¬_)
open import Data.Sum using (either; inl; inr; _⊎_; is-l)
open import Relation.Nullary.Decidable using (Dec; yes; no; does)
open import Relation.Nullary.Decidable.Properties using (Dec→Stable)
open import Relation.Nullary.Discrete using (Discrete)
open import Relation.Nullary.Discrete.Properties using (Discrete→isSet)
open import Relation.Nullary.Stable using (Stable)
module _ (_~_ : A → A → Type b) where
Reflexive : Type _
Reflexive = ∀ {x} → x ~ x
Transitive : Type _
Transitive = ∀ {x y z} → x ~ y → y ~ z → x ~ z
Symmetric : Type _
Symmetric = ∀ {x y} → x ~ y → y ~ x
Decidable : Type _
Decidable = ∀ x y → Dec (x ~ y)
Antisymmetric : Type _
Antisymmetric = ∀ {x y} → x ~ y → y ~ x → x ≡ y
Connected : Type _
Connected = ∀ {x y} → ¬ (x ~ y) → ¬ (y ~ x) → x ≡ y
Asymmetric : Type _
Asymmetric = ∀ {x y} → x ~ y → ¬ (y ~ x)
Irreflexive : Type _
Irreflexive = ∀ {x} → ¬ (x ~ x)
Total : Type _
Total = ∀ x y → (x ~ y) ⊎ (y ~ x)
record Preorder {ℓ₁} (𝑆 : Type ℓ₁) ℓ₂ : Type (ℓ₁ ℓ⊔ ℓsuc ℓ₂) where
infix 4 _≤_
field
_≤_ : 𝑆 → 𝑆 → Type ℓ₂
refl : Reflexive _≤_
trans : Transitive _≤_
infix 4 _≰_ _≥_ _≱_
_≰_ _≥_ _≱_ : 𝑆 → 𝑆 → Type ℓ₂
x ≰ y = ¬ (x ≤ y)
x ≥ y = y ≤ x
x ≱ y = ¬ (y ≤ x)
record StrictPreorder {ℓ₁} (𝑆 : Type ℓ₁) ℓ₂ : Type (ℓ₁ ℓ⊔ ℓsuc ℓ₂) where
infix 4 _<_
field
_<_ : 𝑆 → 𝑆 → Type ℓ₂
trans : Transitive _<_
irrefl : Irreflexive _<_
asym : Asymmetric _<_
asym x<y y<x = irrefl (trans x<y y<x)
infix 4 _≮_ _>_ _≯_
_≮_ _>_ _≯_ : 𝑆 → 𝑆 → Type ℓ₂
x ≮ y = ¬ (x < y)
x > y = y < x
x ≯ y = ¬ (y < x)
record StrictPartialOrder {ℓ₁} (𝑆 : Type ℓ₁) ℓ₂ : Type (ℓ₁ ℓ⊔ ℓsuc ℓ₂) where
field strictPreorder : StrictPreorder 𝑆 ℓ₂
open StrictPreorder strictPreorder public
field conn : Connected _<_
record PartialOrder {ℓ₁} (𝑆 : Type ℓ₁) ℓ₂ : Type (ℓ₁ ℓ⊔ ℓsuc ℓ₂) where
field preorder : Preorder 𝑆 ℓ₂
open Preorder preorder public
field antisym : Antisymmetric _≤_
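-- Trichotomy: a tagged witness for one of three alternatives, used below to
-- express comparison results as Ordering x y = Tri (x < y) (x ≡ y) (x > y).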
data Tri (A : Type a) (B : Type b) (C : Type c) : Type (a ℓ⊔ b ℓ⊔ c) where
lt : (x<y : A) → Tri A B C
eq : (x≡y : B) → Tri A B C
gt : (x>y : C) → Tri A B C
record TotalOrder {ℓ₁} (𝑆 : Type ℓ₁) ℓ₂ ℓ₃ : Type (ℓ₁ ℓ⊔ ℓsuc ℓ₂ ℓ⊔ ℓsuc ℓ₃) where
field
strictPartialOrder : StrictPartialOrder 𝑆 ℓ₂
partialOrder : PartialOrder 𝑆 ℓ₃
open PartialOrder partialOrder renaming (trans to ≤-trans) public
open StrictPartialOrder strictPartialOrder renaming (trans to <-trans) public
infix 4 _<?_
field
_<?_ : Decidable _<_
≰⇒> : ∀ {x y} → x ≰ y → x > y
≮⇒≥ : ∀ {x y} → x ≮ y → x ≥ y
<⇒≤ : ∀ {x y} → x < y → x ≤ y
<⇒≤ = ≮⇒≥ ∘ asym
_<ᵇ_ : 𝑆 → 𝑆 → Bool
x <ᵇ y = does (x <? y)
<⇒≱ : ∀ {x y} → x < y → x ≱ y
<⇒≱ {x} {y} x<y x≥y = irrefl (subst (_< _) (antisym (<⇒≤ x<y) x≥y) x<y)
≤⇒≯ : ∀ {x y} → x ≤ y → x ≯ y
≤⇒≯ {x} {y} x≤y x>y = irrefl (subst (_< _) (antisym (≮⇒≥ (asym x>y)) x≤y) x>y)
infix 4 _≤ᵇ_ _≤?_ _≤|≥_ _≟_
_≤?_ : Decidable _≤_
x ≤? y with y <? x
... | yes y<x = no (<⇒≱ y<x)
... | no y≮x = yes (≮⇒≥ y≮x)
_≤ᵇ_ : 𝑆 → 𝑆 → Bool
x ≤ᵇ y = does (x ≤? y)
_≤|≥_ : Total _≤_
x ≤|≥ y with x <? y
... | yes x<y = inl (<⇒≤ x<y)
... | no x≮y = inr (≮⇒≥ x≮y)
_≟_ : Discrete 𝑆
x ≟ y with x <? y | y <? x
... | yes x<y | _ = no (λ x≡y → irrefl (subst (_< _) x≡y x<y))
... | _ | yes y<x = no (λ x≡y → irrefl (subst (_ <_) x≡y y<x))
... | no x≮y | no y≮x = yes (conn x≮y y≮x)
Ordering : (x y : 𝑆) → Type (ℓ₁ ℓ⊔ ℓ₂)
Ordering x y = Tri (x < y) (x ≡ y) (x > y)
compare : ∀ x y → Ordering x y
compare x y with x <? y | y <? x
... | yes x<y | _ = lt x<y
... | no x≮y | yes y<x = gt y<x
... | no x≮y | no y≮x = eq (conn x≮y y≮x)
total⇒isSet : isSet 𝑆
total⇒isSet = Discrete→isSet _≟_
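-- Builds a TotalOrder out of a PartialOrder and a totality proof for _≤_: the
-- strict order is defined by x < y = ¬ (y ≤ x), and decidability of _≤_
-- follows from totality together with antisymmetry.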
module FromPartialOrder {ℓ₁} {𝑆 : Type ℓ₁} {ℓ₂} (po : PartialOrder 𝑆 ℓ₂) (_≤|≥_ : Total (PartialOrder._≤_ po)) where
open PartialOrder po
partialOrder = po
≤-side : 𝑆 → 𝑆 → Bool
≤-side x y = is-l (x ≤|≥ y)
≤-dec : Decidable _≤_
≤-dec x y with x ≤|≥ y | y ≤|≥ x | inspect (≤-side x) y | inspect (≤-side y) x
≤-dec x y | inl x≤y | _ | _ | _ = yes x≤y
≤-dec x y | inr x≥y | inr y≥x | _ | _ = yes y≥x
≤-dec x y | inr x≥y | inl y≤x | 〖 x≥yᵇ 〗 | 〖 y≤xᵇ 〗 = no (x≢y ∘ flip antisym x≥y)
where
x≢y : x ≢ y
x≢y x≡y = false≢true (≡.sym x≥yᵇ ; cong₂ ≤-side x≡y (≡.sym x≡y) ; y≤xᵇ)
≮⇒≥ : ∀ {x y} → Stable (x ≤ y)
≮⇒≥ {x} {y} = Dec→Stable _ (≤-dec x y)
strictPartialOrder : StrictPartialOrder 𝑆 ℓ₂
strictPartialOrder .StrictPartialOrder.strictPreorder .StrictPreorder._<_ x y = ¬ (y ≤ x)
strictPartialOrder .StrictPartialOrder.conn x<y y<x = antisym (≮⇒≥ y<x) (≮⇒≥ x<y)
strictPartialOrder .StrictPartialOrder.strictPreorder .StrictPreorder.irrefl y≰x = y≰x refl
strictPartialOrder .StrictPartialOrder.strictPreorder .StrictPreorder.trans {x} {y} {z} y≰x z≰y z≤x with ≤-dec y z
... | yes y≤z = y≰x (trans y≤z z≤x)
... | no y≰z = either z≰y y≰z (z ≤|≥ y)
≰⇒> = id
_<?_ : Decidable _≱_
_<?_ x y with ≤-dec y x
... | yes y≤x = no λ y≰x → y≰x y≤x
... | no y≰x = yes y≰x
fromPartialOrder : (po : PartialOrder A b) (_≤|≥_ : Total (PartialOrder._≤_ po)) → TotalOrder _ _ _
fromPartialOrder po tot = record { FromPartialOrder po tot }
module FromStrictPartialOrder {ℓ₁} {𝑆 : Type ℓ₁} {ℓ₂} (spo : StrictPartialOrder 𝑆 ℓ₂) (<-dec : Decidable (StrictPartialOrder._<_ spo)) where
open StrictPartialOrder spo
strictPartialOrder = spo
_<?_ = <-dec
partialOrder : PartialOrder _ _
partialOrder .PartialOrder.preorder .Preorder._≤_ x y = ¬ (y < x)
partialOrder .PartialOrder.preorder .Preorder.refl x<x = asym x<x x<x
partialOrder .PartialOrder.preorder .Preorder.trans {x} {y} {z} y≮x z≮y z<x with x <? y
... | yes x<y = z≮y (trans z<x x<y)
... | no x≮y = z≮y (subst (z <_) (conn x≮y y≮x) z<x)
partialOrder .PartialOrder.antisym = flip conn
≰⇒> : ∀ {x y} → Stable (x < y)
≰⇒> {x} {y} = Dec→Stable (x < y) (x <? y)
≮⇒≥ = id
fromStrictPartialOrder : (spo : StrictPartialOrder A b) (_<?_ : Decidable (StrictPartialOrder._<_ spo)) → TotalOrder _ _ _
fromStrictPartialOrder spo _<?_ = record { FromStrictPartialOrder spo _<?_ }
record Equivalence {ℓ₁} (𝑆 : Type ℓ₁) ℓ₂ : Type (ℓ₁ ℓ⊔ ℓsuc ℓ₂) where
infix 4 _≋_
field
_≋_ : 𝑆 → 𝑆 → Type ℓ₂
sym : ∀ {x y} → x ≋ y → y ≋ x
refl : ∀ {x} → x ≋ x
trans : ∀ {x y z} → x ≋ y → y ≋ z → x ≋ z
≡-equivalence : ∀ {a} {A : Set a} → Equivalence A a
≡-equivalence = record
{ _≋_ = _≡_
; sym = ≡.sym
; refl = ≡.refl
; trans = _;_
}
|
| pc = 0xc002 | a = 0xfe | x = 0x00 | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 10110100 |
| pc = 0xc004 | a = 0xfe | x = 0x01 | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 00110100 |
| pc = 0xc006 | a = 0xfe | x = 0x01 | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 00110100 | MEM[0x0000] = 0xfe |
| pc = 0xc009 | a = 0xfe | x = 0x01 | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 00110100 | MEM[0x07fe] = 0xfe |
| pc = 0xc00b | a = 0xfe | x = 0x01 | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 00110100 | MEM[0x0001] = 0xfe |
| pc = 0xc00e | a = 0xfe | x = 0x01 | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 00110100 | MEM[0x07ff] = 0xfe |
| pc = 0xc010 | a = 0xfe | x = 0xfe | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 10110100 |
| pc = 0xc012 | a = 0xfe | x = 0xfe | y = 0xfe | sp = 0x01fd | p[NV-BDIZC] = 10110100 |
| pc = 0xc013 | a = 0xfe | x = 0xff | y = 0xfe | sp = 0x01fd | p[NV-BDIZC] = 10110100 |
| pc = 0xc014 | a = 0xfe | x = 0xff | y = 0xff | sp = 0x01fd | p[NV-BDIZC] = 10110100 |
| pc = 0xc015 | a = 0xfe | x = 0x00 | y = 0xff | sp = 0x01fd | p[NV-BDIZC] = 00110110 |
| pc = 0xc016 | a = 0xfe | x = 0x00 | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 00110110 |
| pc = 0xc018 | a = 0xfe | x = 0x00 | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 10110100 | MEM[0x0000] = 0xff |
| pc = 0xc01b | a = 0xfe | x = 0x00 | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 10110100 | MEM[0x07fe] = 0xff |
| pc = 0xc01d | a = 0xfe | x = 0x00 | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 00110110 | MEM[0x0000] = 0x00 |
| pc = 0xc020 | a = 0xfe | x = 0x00 | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 00110110 | MEM[0x07fe] = 0x00 |
| pc = 0xc022 | a = 0xfe | x = 0x01 | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 00110100 |
| pc = 0xc024 | a = 0xfe | x = 0x01 | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 10110100 | MEM[0x0001] = 0xff |
| pc = 0xc027 | a = 0xfe | x = 0x01 | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 10110100 | MEM[0x07ff] = 0xff |
| pc = 0xc029 | a = 0xfe | x = 0x01 | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 00110110 | MEM[0x0001] = 0x00 |
| pc = 0xc02c | a = 0xfe | x = 0x01 | y = 0x00 | sp = 0x01fd | p[NV-BDIZC] = 00110110 | MEM[0x07ff] = 0x00 |
|
[STATEMENT]
lemma approx_st_eq:
assumes x: "x \<in> HFinite" and y: "y \<in> HFinite" and xy: "x \<approx> y"
shows "st x = st y"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. st x = st y
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. st x = st y
[PROOF STEP]
have "st x \<approx> x" "st y \<approx> y" "st x \<in> \<real>" "st y \<in> \<real>"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (st x \<approx> x &&& st y \<approx> y) &&& st x \<in> \<real> &&& st y \<in> \<real>
[PROOF STEP]
by (simp_all add: st_approx_self st_SReal x y)
[PROOF STATE]
proof (state)
this:
st x \<approx> x
st y \<approx> y
st x \<in> \<real>
st y \<in> \<real>
goal (1 subgoal):
1. st x = st y
[PROOF STEP]
with xy
[PROOF STATE]
proof (chain)
picking this:
x \<approx> y
st x \<approx> x
st y \<approx> y
st x \<in> \<real>
st y \<in> \<real>
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
x \<approx> y
st x \<approx> x
st y \<approx> y
st x \<in> \<real>
st y \<in> \<real>
goal (1 subgoal):
1. st x = st y
[PROOF STEP]
by (fast elim: approx_trans approx_trans2 SReal_approx_iff [THEN iffD1])
[PROOF STATE]
proof (state)
this:
st x = st y
goal:
No subgoals!
[PROOF STEP]
qed
|
module Point where
open import Nat
open import Bool
-- A record can be seen as a one constructor datatype. In this case:
data Point' : Set where
mkPoint : (x : Nat)(y : Nat) -> Point'
getX : Point' -> Nat
getX (mkPoint x y) = x
getY : Point' -> Nat
getY (mkPoint x y) = y
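-- For comparison, the record formulation alluded to above could be written
-- as follows (a sketch; the field names x and y are our choice):

record Point : Set where
  field
    x : Nat
    y : Nat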
|
using SensorFeatureTracking
using Test
@testset "blockMatchingFlow" begin
##
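# Two synthetic frames: im1 holds two bright 11x11 blocks; in im2 the first
# block shifts by (-2,-2) and the second by (+2,+2), so each matching function
# (SSD, SAD, NCC) should displace the detected corner features accordingly.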
im1 = zeros(Int32,41,61)
im1[16:26,16:26] .= 100
im1[16:26,36:46] .= 200
reffeats = map(CartesianIndex{2}, [(16, 36), (26, 36), (16, 46), (26, 46)])
im2 = zeros(Int32,41,61)
im2[14:24,14:24] .= 70
im2[18:28,38:48] .= 200
feats = getApproxBestShiTomasi(im1,nfeatures=4, stepguess=0.8)
@test feats == reffeats
feats = getApproxBestShiTomasi(im1,nfeatures=8, stepguess=0.8)
reffeats_im2 = map(CartesianIndex{2},[(14, 14),(24, 14),(14, 24),(24, 24),(18, 38),(28, 38),(18, 48),(28, 48)])
trackerssd = BlockTracker(deepcopy(feats), search_size = 6)
block_tracker!(trackerssd, im1, im2)
@test reffeats_im2 == trackerssd.features
trackersad = BlockTracker(deepcopy(feats), search_size = 6, matchFunction = compute_sad)
block_tracker!(trackersad, im1, im2)
@test reffeats_im2 == trackersad.features
trackerncc = BlockTracker(deepcopy(feats), search_size = 6, matchFunction = compute_ncc)
block_tracker!(trackerncc, im1, im2)
@test reffeats_im2 == trackerncc.features
##
end
##
|
module Instances.ZZ
import public Data.ZZ
import Common.Util
import Common.Interfaces
import Specifications.OrderedGroup
import Specifications.DiscreteOrderedGroup
import Specifications.OrderedRing
import Symmetry.Abelian
import Instances.Notation
import public Instances.OrderZZ
%default total
%access public export
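-- Assembles the verified algebraic and order-theoretic structure of ZZ
-- (monoid, group, (discrete) ordered group, ring, ordered ring) from the
-- primitive lemmas imported above.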
implementation Ringops ZZ where
Zero = Pos 0
One = Pos 1
zzMonoid : specifyMonoid {s = ZZ}
zzMonoid = MkMonoid plusAssociativeZ plusZeroLeftNeutralZ plusZeroRightNeutralZ
zzGroup : specifyGroup {s = ZZ}
zzGroup = MkGroup zzMonoid plusNegateInverseRZ plusNegateInverseLZ
zzPartialOrder : specifyPartialOrder {leq = LTEZ}
zzPartialOrder = MkPartialOrder lteReflZ lteTransitiveZ lteAntisymmetricZ
zzTotalOrder : specifyTotalOrder {leq = LTEZ}
zzTotalOrder = MkTotalOrder zzPartialOrder lteTotalZ
zzPartiallyOrderedMagma : specifyPartiallyOrderedMagma {leq = LTEZ}
zzPartiallyOrderedMagma = MkPartiallyOrderedMagma zzPartialOrder
lteLeftTranslationInvariantZ $
abelianTranslationInvariantLR plusCommutativeZ lteLeftTranslationInvariantZ
zzPartiallyOrderedGroup : specifyPartiallyOrderedGroup {leq = LTEZ}
zzPartiallyOrderedGroup = MkPartiallyOrderedGroup zzGroup
zzPartiallyOrderedMagma
zzOrderedGroup : specifyOrderedGroup {leq = LTEZ}
zzOrderedGroup = MkOrderedGroup zzPartiallyOrderedGroup lteTotalZ
zzDiscreteOrderedGroup : specifyDiscreteOrderedGroup {leq = LTEZ}
zzDiscreteOrderedGroup = MkDiscreteOrderedGroup zzOrderedGroup
plusCommutativeZ lteDiscreteZ
zzRing : specifyRing {s = ZZ}
zzRing = MkRing
(MkPreRing
multDistributesOverPlusRightZ
multDistributesOverPlusLeftZ
plusCommutativeZ)
zzGroup
multAssociativeZ
zzOrderedRing : specifyOrderedRing {leq = LTEZ}
zzOrderedRing = MkOrderedRing
(MkPartiallyOrderedRing zzRing zzPartiallyOrderedMagma)
lteTotalZ
zzDiscreteOrderedRing : specifyDiscreteOrderedRing {leq = LTEZ}
zzDiscreteOrderedRing = MkDiscreteOrderedRing zzOrderedRing
lteDiscreteZ multOneLeftNeutralZ multOneRightNeutralZ (LtePosPos LTEZero)
|
{-# OPTIONS --without-K --safe #-}
module Categories.Category.Instance.FinSetoids where
-- Category of Finite Setoids, as a sub-category of Setoids
open import Level
open import Data.Fin.Base using (Fin)
open import Data.Nat.Base using (ℕ)
open import Data.Product using (Σ)
open import Function.Bundles using (Inverse)
open import Relation.Unary using (Pred)
open import Relation.Binary.Bundles using (Setoid; module Setoid)
import Relation.Binary.PropositionalEquality as ≡
open import Categories.Category.Core using (Category)
open import Categories.Category.Construction.ObjectRestriction
open import Categories.Category.Instance.Setoids
-- The predicate that will be used
IsFiniteSetoid : {c ℓ : Level} → Pred (Setoid c ℓ) (c ⊔ ℓ)
IsFiniteSetoid X = Σ ℕ (λ n → Inverse X (≡.setoid (Fin n)))
-- The actual Category
FinSetoids : (c ℓ : Level) → Category (suc (c ⊔ ℓ)) (c ⊔ ℓ) (c ⊔ ℓ)
FinSetoids c ℓ = ObjectRestriction (Setoids c ℓ) IsFiniteSetoid
|