If a sequence is Cauchy, then it is bounded.
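A standard proof sketch of this fact, written out here for a real sequence $(a_n)$:

```latex
\textbf{Proof sketch.} Let $(a_n)$ be Cauchy. Taking $\varepsilon = 1$, there is an
$N$ such that $|a_n - a_N| < 1$ for all $n \ge N$, hence $|a_n| \le |a_N| + 1$ for
all $n \ge N$. Therefore
\[
  |a_n| \le \max\{\,|a_1|, \dots, |a_{N-1}|, |a_N| + 1\,\} \quad \text{for all } n,
\]
so the sequence is bounded. \qed
```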
|
Making the commitment to own a dog is not to be taken lightly. Intelligent, loving and loyal, dogs bring great joy to our lives. Our indoor park facility brings you so much more opportunity than just letting your dog run off some steam.
Owning a dog is a way of life; if you are new to dog ownership, you will have to change your lifestyle to suit. Had dogs for years? You will be amazed at how much you can still learn. Joining the park gives you access to the safe off-lead freedom of an indoor dog park; in addition, you will have experienced, dog-loving people to guide you and help you through owning a dog. Learn how simple changes to your dog's diet can improve health, condition and temperament, and how simple training routines will not only improve your dog's behaviour, but will also tire your dog out faster than long walks and exercise.
Our understanding of dog ownership and how we get the most from our dogs has changed so much in the last 20 years. Come join us today for more than just a Play Park.
Thanks for getting to know us.
I spend most of my time at the Park, working in and amongst all our gorgeous dogs and supportive human members.
I see my role in Dogs Playpark as the drive behind the company, moving it forward and ensuring that we never lose sight of where and why this venture was started.
Our message is simple: 'Improving the lives of dogs with every interaction', and this is something we can all do for our dogs.
Better Diets - Better exercise - Better training - Better and Happier Lives.
After running my own training business for the past 4 years, and before that offering solely 121 private training, I have worked with over 1000 dogs with a variety of training and behavioural needs.
Prior to working with Dogs Playpark, I was invited into schools to help educate children on the correct way to handle and respect dogs, which is something we want to continue with the Playpark.
I have a high standard of ethics when it comes to training and handling animals. It's vital that anyone working with animals keeps up to date with modern dog training methods. This standard of ethics is reflected in all our staff's in-house training.
Long gone are the days when owners were encouraged to physically punish their pets. We are thankfully in an age where science has taught us, instead of thinking of our dog as being bad, to think of our dog as not understanding what is asked of him or her.
This leads us to think not of punishment for the dog, but instead of further training to help them understand.
If you have any Playpark or training enquiries, please feel free to contact me at any time.
I am Jas and I supervise the Playpark team and train the training staff. Alongside this I teach a number of classes within the park and offer 121 training for a variety of behaviour problems.
I am Jackie and I work on the reception and within our Parkside pet store.
I am on hand to give you advice on the Playpark, memberships and the training we offer. As well as this, I have a depth of in-house training to help me best advise you on canine nutrition and which food or treats are suited to your dog.
Hi, I'm Becca. I am one of the experienced trainers here at Dogs Playpark. I offer puppy and adult dog training, life skills workshops and 121 training sessions.
I am training with our behaviourist to be equipped to offer behavioural consultations and training for more advanced cases.
Hello, I'm Adele. You'll find me behind the scenes at Dogs Daycare ensuring the best quality care for your dogs while they stay with us throughout the week and weekend and giving out plenty of cuddles between enrichment games and fun exercise.
equipment for your dog as well as undergoing a training programme to be equipped to offer basic training classes for you and your dog.
Hi, I'm Cheryl, the grooming manager here at Parkside Groomers. I have been grooming dogs for over 16 years, working with thousands of dogs from Newfoundlands to Toy Poodles.
Hi, I am Helena. You'll see me supervising the play sessions within the Playpark, assisting the trainers with their 121 training sessions, and serving cake and coffee with a smile.
During our private hire sessions I will be the one to check you in and during our dog shows I will be the one welcoming you at the reception desk.
Hello! I'm Elise. I am one of the part-time Dogs Daycare assistants, and I work during the week with our daycare dogs.
I'm responsible for teaching the daycare dogs their 'trick of the day', and on weekends I compete with my own dogs in agility; I have even qualified for Crufts 2018! I have also qualified for the second stage of the Junior Team GB Squad in Agility.
|
get_dim_lyap(integrator::Union{DEIntegrator, DiscreteIterator}) =
size(init_tangent_state(integrator))[2]
function get_dim_lyap(prob::Union{LEProblem, CLVProblem})
Q0 = @view prob.tangent_prob.u0[:, 2:end]
return size(Q0, 2)
end
|
section "Tabulating the Balanced Predicate"
theory Root_Balanced_Tree_Tab
imports
Root_Balanced_Tree
"HOL-Decision_Procs.Approximation"
"HOL-Library.IArray"
begin
locale Min_tab =
fixes p :: "nat \<Rightarrow> nat \<Rightarrow> bool"
fixes tab :: "nat list"
assumes mono_p: "n \<le> n' \<Longrightarrow> p n h \<Longrightarrow> p n' h"
assumes p: "\<exists>n. p n h"
assumes tab_LEAST: "h < length tab \<Longrightarrow> tab!h = (LEAST n. p n h)"
begin
lemma tab_correct: "h < length tab \<Longrightarrow> p n h = (n \<ge> tab ! h)"
apply auto
using not_le_imp_less not_less_Least tab_LEAST apply auto[1]
by (metis LeastI mono_p p tab_LEAST)
end
definition bal_tab :: "nat list" where
"bal_tab = [0, 1, 1, 2, 4, 6, 10, 16, 25, 40, 64, 101, 161, 256, 406, 645, 1024,
1625, 2580, 4096, 6501, 10321, 16384, 26007, 41285, 65536, 104031, 165140,
262144, 416127, 660561, 1048576, 1664510, 2642245, 4194304, 6658042, 10568983,
16777216, 26632170, 42275935, 67108864, 106528681, 169103740, 268435456,
426114725, 676414963, 1073741824, 1704458900, 2705659852, 4294967296\<^cancel>\<open>,
6817835603\<close>]"
(*ML\<open>floor (Math.pow(2.0,5.0/1.5))\<close>*)
axiomatization where c_def: "c = 3/2"
fun is_floor :: "nat \<Rightarrow> nat \<Rightarrow> bool" where
"is_floor n h = (let m = floor((2::real) powr ((real(h)-1)/c)) in n \<le> m \<and> m \<le> n)"
text\<open>Note that @{prop"n \<le> m \<and> m \<le> n"} avoids the technical restriction of the
\<open>approximation\<close> method which does not support \<open>=\<close>, even on integers.\<close>
lemma bal_tab_correct:
"\<forall>i < length bal_tab. is_floor (bal_tab!i) i"
apply(simp add: bal_tab_def c_def All_less_Suc)
apply (approximation 50)
done
(* FIXME mv *)
lemma ceiling_least_real: "ceiling(r::real) = (LEAST i. r \<le> i)"
by (metis Least_equality ceiling_le le_of_int_ceiling)
lemma floor_greatest_real: "floor(r::real) = (GREATEST i. i \<le> r)"
by (metis Greatest_equality le_floor_iff of_int_floor_le)
lemma LEAST_eq_floor:
"(LEAST n. int h \<le> \<lceil>c * log 2 (real n + 1)\<rceil>) = floor((2::real) powr ((real(h)-1)/c))"
proof -
have "int h \<le> \<lceil>c * log 2 (real n + 1)\<rceil>
\<longleftrightarrow> 2 powr ((real(h)-1)/c) < real(n)+1" (is "?L = ?R") for n
proof -
have "?L \<longleftrightarrow> h < c * log 2 (real n + 1) + 1" by linarith
also have "\<dots> \<longleftrightarrow> (real h-1)/c < log 2 (real n + 1)"
using c1 by(simp add: field_simps)
also have "\<dots> \<longleftrightarrow> 2 powr ((real h-1)/c) < 2 powr (log 2 (real n + 1))"
by(simp del: powr_log_cancel)
also have "\<dots> \<longleftrightarrow> ?R"
by(simp)
finally show ?thesis .
qed
moreover have "((LEAST n::nat. r < n+1) = nat(floor r))" for r :: real
by(rule Least_equality) linarith+
ultimately show ?thesis by simp
qed
interpretation Min_tab
where p = bal_i and tab = bal_tab
proof(unfold bal_i_def, standard, goal_cases)
case (1 n n' h)
have "int h \<le> ceiling(c * log 2 (real n + 1))" by(rule 1[unfolded bal_i_def])
also have "\<dots> \<le> ceiling(c * log 2 (real n' + 1))"
using c1 "1"(1) by (simp add: ceiling_mono)
finally show ?case .
next
case (2 h)
show ?case
proof
show "int h \<le> \<lceil>c * log 2 (real (2 ^ h - 1) + 1)\<rceil>"
apply(simp add: of_nat_diff log_nat_power) using c1
by (metis ceiling_mono ceiling_of_nat order.order_iff_strict mult.left_neutral mult_eq_0_iff of_nat_0_le_iff real_mult_le_cancel_iff1)
qed
next
case 3
thus ?case using bal_tab_correct LEAST_eq_floor
by (simp add: eq_iff[symmetric]) (metis nat_int)
qed
text\<open>Now we replace the list by an immutable array:\<close>
definition bal_array :: "nat iarray" where
"bal_array = IArray bal_tab"
text\<open>A trick for code generation: how to get rid of the precondition:\<close>
lemma bal_i_code:
"bal_i n h =
(if h < IArray.length bal_array then IArray.sub bal_array h \<le> n else bal_i n h)"
by (simp add: bal_array_def tab_correct)
end
|
State Before: α : Type u_1
α' : Type ?u.1411
β : Type u_2
β' : Type ?u.1417
γ : Type u_3
γ' : Type ?u.1423
δ : Type ?u.1426
δ' : Type ?u.1429
ε : Type ?u.1432
ε' : Type ?u.1435
ζ : Type ?u.1438
ζ' : Type ?u.1441
ν : Type ?u.1444
f f' : α → β → γ
g g' : α → β → γ → δ
s s' : Set α
t t' : Set β
u u' : Set γ
v : Set δ
a a' : α
b b' : β
c c' : γ
d d' : δ
hs : s ⊆ s'
ht : t ⊆ t'
⊢ image2 f s t ⊆ image2 f s' t' State After: case intro.intro.intro.intro
α : Type u_1
α' : Type ?u.1411
β : Type u_2
β' : Type ?u.1417
γ : Type u_3
γ' : Type ?u.1423
δ : Type ?u.1426
δ' : Type ?u.1429
ε : Type ?u.1432
ε' : Type ?u.1435
ζ : Type ?u.1438
ζ' : Type ?u.1441
ν : Type ?u.1444
f f' : α → β → γ
g g' : α → β → γ → δ
s s' : Set α
t t' : Set β
u u' : Set γ
v : Set δ
a✝ a' : α
b✝ b' : β
c c' : γ
d d' : δ
hs : s ⊆ s'
ht : t ⊆ t'
a : α
b : β
ha : a ∈ s
hb : b ∈ t
⊢ f a b ∈ image2 f s' t' Tactic: rintro _ ⟨a, b, ha, hb, rfl⟩ State Before: case intro.intro.intro.intro
α : Type u_1
α' : Type ?u.1411
β : Type u_2
β' : Type ?u.1417
γ : Type u_3
γ' : Type ?u.1423
δ : Type ?u.1426
δ' : Type ?u.1429
ε : Type ?u.1432
ε' : Type ?u.1435
ζ : Type ?u.1438
ζ' : Type ?u.1441
ν : Type ?u.1444
f f' : α → β → γ
g g' : α → β → γ → δ
s s' : Set α
t t' : Set β
u u' : Set γ
v : Set δ
a✝ a' : α
b✝ b' : β
c c' : γ
d d' : δ
hs : s ⊆ s'
ht : t ⊆ t'
a : α
b : β
ha : a ∈ s
hb : b ∈ t
⊢ f a b ∈ image2 f s' t' State After: no goals Tactic: exact mem_image2_of_mem (hs ha) (ht hb)
|
#install.packages("twitteR")
#install.packages("ROAuth")
library("twitteR")
library("ROAuth")
library(httr)
# Set API Keys
api_key <- "XXXX"
api_secret <- "XXXXX"
access_token <- "XXXX"
access_token_secret <- "XXXXX"
setup_twitter_oauth(api_key, api_secret, access_token, access_token_secret)
search_string <- "#endsars"
no_of_tweets <- 7000
# Grab latest tweets
tweets <- searchTwitter(search_string, n = no_of_tweets, since = "2017-07-01", resultType = "mixed")
length(tweets)
h = twListToDF(tweets)
dim(h)
write.csv(x = h,file = 'endsars6000_1912d.csv')
# # Loop over tweets and extract text
# #library(plyr)
# feed = rapply(tweets, function(t) t$getText())
# user_screen_name = rapply(tweets, function(t) t$getScreenName())
# dat = get_timestamp(tweets)
# tweet_source = rapply(tweets, function(t) t$getStatusSource())
# retweet_status = rapply(tweets, function(t) t$getIsRetweet())
#
#
# get_timestamp = function(tweets_feed){
# d = vector('list')
# for( i in 1:length(tweets_feed)){
# d[[i]] = tweets_feed[[i]]$getCreated()
# }
# tstamp <- do.call('c',d)
# }
#
#
# endsars_df <- data.frame(tweet=feed, user_screen_name, timestamp = dat, source = gsub('<.*?>', '', tweet_source),
# retweet_status)
# dim(endsars_df)
# save(tweets,file = "twee6000_19-12.rda")
# write.csv(x = endsars_df,file = 'endsars1912d.csv')
|
program makecube
use precision
implicit none
integer iargc,nunit,filestatus,i,xmax,ymax,nkeys,nkeysmax,nfiles,funit, &
status,firstpix
integer, dimension(2) :: naxesin
integer, dimension(3) :: naxesout
real(double) :: dmin,dmax,bpix,dscale
real(double), allocatable, dimension(:,:) :: datain
character(80) :: filelist,fitsmodelin,fileout
character(80), allocatable, dimension(:) :: header
interface
subroutine getfits(Refname,naxes,Ref,Rmin,Rmax,nkeys,header,bpix)
use precision
implicit none
integer :: nkeys
integer, dimension(2), intent(inout) :: naxes
real(double), intent(inout) :: Rmin,Rmax,bpix
real(double), dimension(:,:), intent(inout) :: Ref
character(80), intent(inout) :: Refname
character(80), dimension(:), intent(inout) :: header
end subroutine getfits
subroutine writedatacube(naxesin,datain,funit,firstpix)
use precision
implicit none
integer, intent(inout) :: funit,firstpix
integer, dimension(2), intent(inout) :: naxesin
real(double), dimension(:,:), intent(inout) :: datain
end subroutine writedatacube
end interface
bpix=1.0d30 !marking bad pixels
xmax=2048 !dimensions of 2D FITS files to read in.
ymax=256
nkeysmax=700 !maximum size of header
allocate(header(nkeysmax))
if(iargc().lt.2)then
write(0,*) "Usage: makecube <filelist> <prefix_out>"
write(0,*) " <filelist> - text file with list of FITS files"
write(0,*) " <prefix_out> - prefix name for output (prefix.fits)"
stop
endif
!read in a model spectrum
call getarg(1,filelist)
nunit=10 !unit number for data spectrum
open(unit=nunit,file=filelist,iostat=filestatus,status='old')
if(filestatus>0)then !trap missing file errors
write(0,*) "Cannot open ",filelist
stop
endif
nfiles=0
do
read(nunit,'(A80)',iostat=filestatus) fitsmodelin
if(filestatus == 0) then
nfiles=nfiles+1
cycle
elseif(filestatus == -1) then
exit
else
write(0,*) "File Error!!"
write(0,900) "iostat: ",filestatus
stop
endif
enddo
write(0,*) "Number of FITS files to read in: ",nfiles
rewind(nunit) !re-read filelist from beginning
!allocate space for 2D FITS to read in
allocate(datain(xmax,ymax))
!read in datacube name
call getarg(2,fileout)
fileout=trim(fileout)//".fits"
firstpix=1
!read in the model spectra
i=0 !counter to count number of FITS files read in
!loop over each line in filelist
do
!read in line
read(nunit,'(A80)',iostat=filestatus) fitsmodelin
! write(0,*) fitsmodelin
!check status of read
if(filestatus == 0) then
i=i+1
!read in FITS file
call getfits(fitsmodelin,naxesin,datain,dmin,dmax,nkeys,header,bpix)
!write(0,*) i,dmin,dmax
!after reading in first file, we can init datacube
if(i.eq.1) then
!write(0,*) "Init datacube.."
!size of the datacube
naxesout(1)=naxesin(1)
naxesout(2)=naxesin(2)
naxesout(3)=nfiles
!initialize the datacube
call initdatacube(fileout,funit,naxesout)
!initialize data-scale
dscale=2.0**16.0/dmax
endif
!write(0,*) "Write Datacube"
datain=datain*dscale
!write(0,*) "scale: ",maxval(datain)
call writedatacube(naxesin,datain,funit,firstpix)
cycle
elseif(filestatus == -1) then
exit
else
write(0,*) "File Error!!"
write(0,900) "iostat: ",filestatus
900 format(A8,I3)
stop
endif
enddo
close(nunit)
!close fits file
call ftclos(funit,status)
call ftfiou(funit,status)
end program makecube
|
```python
%matplotlib inline
%load_ext iminizinc
import asyncio
from IPython.display import HTML
import ipywidgets as widgets
from ipywidgets import interact, interactive
from problems import nqueens, us_map_coloring
from draw_utils import draw_nqueens, draw_us_map
from slide_utils import SlideController
from backtracking_search import nqueens_backtracking
from utils import autoupdate_cells
```
```python
HTML('')
```
# Constraint Programming
## Introduction
Constraints naturally arise in a variety of interactions and fields of study such as game theory, social studies, operations research, engineering, and artificial intelligence. A constraint refers to the relationship between the state of objects, such as the constraint that the three angles of a triangle must sum to 180 degrees. Note that this constraint has not precisely stated each angle's value and still allows some flexibility. Said another way, the triangle constraint restricts the values that the three variables (each angle) can take, thus providing information that will be useful in finding values for the three angles.
Another example of a constrained problem comes from the recently-aired hit TV series *Buddies*, where a group of five (mostly mutual) friends would like to sit at a table with three chairs in specific arrangements at different times, but have requirements as to who they will and will not sit with.
Another example comes from scheduling: at the university level, there is a large number of classes that must be scheduled in various classrooms such that no professor or classroom is double booked. Further, there are some constraints on which classes can be scheduled for the same time, as some students will need to be registered for both.
Computers can be employed to solve these types of problems, but in general these tasks are computationally intractable and cannot be solved efficiently in all cases with a single algorithm \cite{Dechter2003}. However, by formalizing these types of problems in a constraint processing framework, we can identify classes of problems that can be solved using efficient algorithms.
Below, we discuss generally the three core concepts in constraint programming: **modeling**, **inference**, and **search**. Modeling is an important step that can greatly affect the ability to efficiently solve constrained problems, while inference (e.g., constraint propagation) and search are solution methods. Basic constraint propagation and state-space search are building blocks that state-of-the-art solvers incorporate.
### Modeling
A **constraint satisfaction problem** (CSP) is formalized by a *constraint network*, which is the triple $\mathcal{R} = \langle X,D,C\rangle$, where
- $X = \{x_i\}_{i=1}^n$ is the set of $n$ variables
- $D = \{D_i\}_{i=1}^n$ is the set of variable domains, where the domain of variable $x_k$ is $D_k$
- $C = \{C_i\}_{i=1}^m$ is the set of constraints on the values that each $x_i$ can take on. Specifically,
- Each constraint $C_i = \langle S_i,R_i\rangle$ specifies allowed variable assignments.
- $S_i \subset X$ contains the variables involved in the constraint, called the *scope* of the constraint.
- $R_i$ is the constraint's *relation* and represents the simultaneous legal value assignments of variables in the associated scope.
- For example, if the scope of the first constraint is $S_1 = \{x_3, x_8\}$, then the relation $R_1$ is a subset of the Cartesian product of those variables' domains: $R_1 \subset D_3 \times D_8$, and an element of the relation $R_1$ could be written as a 2-tuple $(a,b)\in R_1$.
Each variable in a CSP can be assigned a value from its domain. A **complete assignment** is one in which every variable is assigned and a **solution** to a CSP is a consistent (or legal w.r.t. the constraints) complete assignment.
Note that for a CSP model, *any* consistent complete assignment of the variables (i.e., where all constraints are satisfied) constitutes a valid solution; however, this assignment may not be the "best" solution. Notions of optimality can be captured by introducing an objective function which is used to find a valid solution with the lowest cost. This is referred to as a **constraint *optimization* problem** (COP). We will refer generally to CSPs with the understanding that a CSP can easily become a COP by introducing a heuristic.
In this notebook, we will restrict ourselves to CSPs that can be modeled as having **discrete, finite domains**. This helps us to manage the complexity of the constraints so that we can clearly discuss the different aspects of CSPs. Other variations exist such as having discrete but *infinite* domains, where constraints can no longer be enumerated as combinations of values but must be expressed as either linear or nonlinear inequality constraints, such as $T_1 + d_1 \leq T_2$. Therefore, infinite domains require a different constraint language and special algorithms only exist for linear constraints. Additionally, the domain of a CSP may be continuous. With this change, CSPs become mathematical programming problems which are often studied in operations research or optimization theory, for example.
### Modeling as a Graph
In a general CSP, the *arity* of each constraint (i.e., the number of variables involved) is arbitrary. We can have unary constraints on a single variable, binary constraints between two variables, or $n$-ary constraints between $n$ variables. However, having more than binary constraints adds complexity to the algorithms for solving CSPs. It can be shown that every finite-domain constraint can be reduced to a set of binary constraints by adding enough auxiliary variables \cite{AIMA}. Therefore, since we are only discussing CSPs with finite domains, we will assume that the CSPs we are working with have only unary and binary constraints, meaning that each constraint scope has at most two variables.
An important view of a binary constraint network that defines a CSP is as a graph, $\langle\mathcal{V},\mathcal{E}\rangle$. In particular, each vertex corresponds to a variable, $\mathcal{V} = X$, and the edges of the graph $\mathcal{E}$ correspond to various constraints between variables. Since we are only working with binary and unary constraint networks, it is easy to visualize a graph corresponding to a CSP. For constraint networks with more than binary constraints, the constraints must be represented with a hypergraph, where hypernodes are inserted that connect three or more variables together in a constraint.
For example, consider a CSP $\mathcal{R}$ with the following definition
\begin{align}
X &= \{x_1, x_2, x_3\} \\
D &= \{D_1, D_2, D_3\},\ \text{where}\; D_1 = \{0,5\},\ D_2 = \{1,2,3\},\ D_3 = \{7\} \\
C &= \{C_1, C_2, C_3\},
\end{align}
where
\begin{align}
C_1 &= \langle S_1, R_1 \rangle = \langle \{x_1\}, \{5\} \rangle \\
C_2 &= \langle S_2, R_2 \rangle = \langle \{x_1, x_2\}, \{(0, 1), (0,3), (5,1)\} \rangle \\
C_3 &= \langle S_3, R_3 \rangle = \langle \{x_2, x_3\}, \{(1, 7), (2, 7)\} \rangle.
\end{align}
The graphical model of this CSP is shown below.
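As a quick sanity check, this small network can be solved by brute force. The sketch below (plain Python, not part of the original text) encodes the domains and relations exactly as defined above, enumerates all complete assignments, and keeps the consistent ones.

```python
from itertools import product

# Domains and constraint relations of the example network R = <X, D, C>.
D1, D2, D3 = {0, 5}, {1, 2, 3}, {7}
R1 = {(5,)}                      # C1: unary constraint on x1
R2 = {(0, 1), (0, 3), (5, 1)}    # C2: binary constraint on (x1, x2)
R3 = {(1, 7), (2, 7)}            # C3: binary constraint on (x2, x3)

def consistent(x1, x2, x3):
    """A complete assignment is a solution iff it satisfies every relation."""
    return (x1,) in R1 and (x1, x2) in R2 and (x2, x3) in R3

solutions = [a for a in product(D1, D2, D3) if consistent(*a)]
print(solutions)  # the single consistent complete assignment
```

Working through the relations by hand gives the same answer: $C_1$ forces $x_1 = 5$, $C_2$ then forces $x_2 = 1$, and $C_3$ forces $x_3 = 7$.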
### Solving
The goal of formalizing a CSP as a constraint network model is to efficiently solve it using computational algorithms and tools. **Constraint programming** (CP) is a powerful tool to solve combinatorial constraint problems and is the study of computational systems based on constraints. Once the problem has been modeled as a formal CSP, a variety of computable algorithms could be used to find a solution that satisfies all constraints.
In general, there are two methods used to solve a CSP: search or inference. In previous 16.410/413 problems, **state-space search** was used to find the best path through some sort of graph or tree structure. Likewise, state-space search could be used to find a valid "path" through the CSP that satisfies each of the local constraints and is therefore a valid global solution. However, this approach would quickly become intractable as the number of variables and the size of each of their domains increase.
In light of this, the second solution method becomes more attractive. **Constraint propagation**, a specific type of inference, is used to reduce the number of legal values from a variable's domain by pruning values that would violate the constraints of the given variable. By making a variable locally consistent with its constraints, the domain of adjacent variables may potentially be further reduced as a result of missing values in the pairwise constraint of the two variables. In this way, by making the first variable consistent with its constraints, the constraints of neighboring variables can be re-evaluated, causing a further reduction of domains through the propagation of constraints. These ideas will later be formalized as $k$-consistency.
Constraint propagation may be combined with search, using the pros of both methods simultaneously. Alternatively, constraint propagation may be performed as a pre-processing pruning step so that search has a smaller state space to search over. Sometimes, constraint propagation is all that is required and a solution can be found without a search step at all.
After giving examples of modeling CSPs, this notebook will explore a variety of solution methods based on constraint propagation and search.
---
## Problem Models
Given a constrained problem, it is desirable to identify an appropriate constraint network model $\mathcal{R} = \langle X,D,C\rangle$ that can be used to find its solution. Modeling for CSPs is an important step that can dramatically affect the difficulty in enumerating the associated constraints or efficiency of finding a solution.
Using the general ideas and formalisms from the previous section, we consider two puzzle problems and model them as CSPs in the following sections.
### N-Queens
The N-Queens problem (depicted below for 4 queens) is a well-known puzzle among computer scientists and will be used as a recurring example throughout this notebook. The problem statement is as follows: given any integer $N$, the goal is to place $N$ queens on an $N\times N$ chessboard satisfying the constraint that no two queens threaten each other. A queen can threaten any other queen that is on the same row, column, or diagonal.
```python
# Example n-queens
draw_nqueens(nqueens(4))
```
Now let's try to understand the problem formally.
#### Attempt 1
To illustrate the effect of modeling, we first consider a (poor) model for the N-Queens constraint problem, given by the following definitions:
\begin{align}
X &= \{x_i\}_{i=1}^{N^2} && \text{(Chessboard positions)} \\
D &= \{D_i\}_{i=1}^{N^2},\ \text{where}\; D_i = \{0, 1,2,\dots,N\} && \text{(Empty or the $k^\text{th}$ queen)}
\end{align}
Without considering constraints, the size of the state space (i.e., the number of assignments) is an enormous $(N+1)^{N^2}$. For only $N=4$ queens, this becomes $5^{16} \approx 153$ billion states that could potentially be searched.
Expressing the constraints of this problem in terms of the variables and their domains also poses a challenge. Because of the way we have modeled this problem, there are six primary constraints to satisfy:
1. Exactly $N$ chess squares shall be filled (i.e., there are only $N$ queens and all of them must be used)
1. The $k^\text{th}$ queen, ($1\le k\le N$) shall only be used once.
1. No queens share a column
1. No queens share a row
1. No queens share a positive diagonal (i.e., a diagonal from bottom left to top right)
1. No queens share a negative diagonal (i.e., a diagonal from top left to bottom right)
To express these constraints mathematically, we first let $Y\triangleq\{1\le i\le N^2 \mid x_i\in X,x_i\ne 0\}$ be the set of chess square numbers that are non-empty and $Z \triangleq \{x\in X \mid x\ne 0\}$ be the set of queens in those chess squares (unordered). Writing $\mathrm{row}(y) \triangleq \left\lfloor\frac{y-1}{N}\right\rfloor$ and $\mathrm{col}(y) \triangleq (y-1) \bmod N$ for the (zero-based) row and column of square $y$, and with pointers back to which constraint they satisfy, the expressions are:
\begin{align}
|Z| = |Y| &= N && (C1) \\
z_i-z_j &\ne 0 && (C2) \\
\mathrm{col}(y_i) &\ne \mathrm{col}(y_j) && (C3) \\
\mathrm{row}(y_i) &\ne \mathrm{row}(y_j) && (C4) \\
\mathrm{row}(y_i) + \mathrm{col}(y_i) &\ne \mathrm{row}(y_j) + \mathrm{col}(y_j) && (C5) \\
\mathrm{row}(y_i) - \mathrm{col}(y_i) &\ne \mathrm{row}(y_j) - \mathrm{col}(y_j), && (C6)
\end{align}
where $z_i, z_j\in Z$ and $y_i,y_j\in Y$, $\forall i\ne j$, and applying $|\cdot|$ to a set gives its cardinality (i.e., size). We use $\lfloor\cdot\rfloor$ for the floor operator and $\bmod$ for the remainder after division. Notice how we are able to express all the constraints as pairwise (binary).
We can count the number of constraints in this model as a function of $N$. In each pairwise constraint (C2)-(C6), there are $N$ choose $2$ pairs. Since we have 5 different types of pairwise constraints, we have that the number of constraints, $\Gamma$, is
\begin{equation}
\Gamma(N) = 5 {N \choose 2} + 1 = \frac{5N!}{2!(N-2)!} + 1,
\end{equation}
where the plus one comes from the single constraint for (C1). Thus, $\Gamma(N=4) = 31$.
Examining the size of the state space in this model, we see the infeasibility of simply performing a state-space search and then performing a goal test that encodes the problem constraints. This motivates the idea of efficiently using constraints either before or during our solution search, which we will explore in the following sections.
#### Attempt 2
Motivated by the desire to do less work in searching and writing constraints, we consider another model of the N-Queens problem. We wish to decrease the size of the state space and number and difficulty of writing the constraints. Good modeling involves cleverly choosing variables and their semantics so that constraints are implicitly encoded, requiring less explicit constraints.
We can achieve this by encoding the following assumptions:
1. assume one queen per column;
1. an assignment determines which row the $i^\text{th}$ queen should be in.
With this understanding, we can write the constraint network as
\begin{align}
X &= \{x_i\}_{i=1}^{N} && \text{(Queen $i$ in the $i^\text{th}$ column)} \\
D &= \{D_i\}_{i=1}^{N},\ \text{where}\; D_i = \{1,2,\dots,N\} && \text{(The row in which the $i^\text{th}$ queen should be placed)}.
\end{align}
Now considering the size of the state space without constraints, we see that this intelligent encoding reduces the size to only $N^N$ assignments.
Writing down the constraints is also easier for this model. In fact, we only need to address constraints (C4)-(C6) from above, as (C1)-(C3) are taken care of by intelligently choosing our variables and their domains. The expressions, $\forall x_i,x_j\in X, i\ne j$, are
\begin{align}
x_i &\ne x_j && \text{(C4)} \\
|x_i-x_j| &\ne |i-j|. && \text{(C5, C6)}
\end{align}
With this reformulation, the number of constraints is
\begin{equation}
\Gamma(N) = 2 {N \choose 2} = \frac{N!}{(N-2)!}.
\end{equation}
Thus $\Gamma(N=4) = 12$.
We have successfully modeled the N-Queens problem with a reduced state space and with only two pairwise constraints. Both of these properties will allow the solvers discussed next to more efficiently find solutions to this CSP.
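To make the comparison concrete, the sketch below (plain Python, an illustration rather than part of the original text) brute-forces the Attempt 2 model for $N = 4$: it enumerates all $N^N$ assignments and keeps those satisfying the two pairwise constraints, and also prints the two state-space sizes side by side.

```python
from itertools import product

N = 4

def is_solution(x):
    """Check the pairwise constraints: distinct rows and no shared diagonal."""
    return all(x[i] != x[j] and abs(x[i] - x[j]) != abs(i - j)
               for i in range(N) for j in range(i + 1, N))

# Attempt 2: one variable per column, domain = rows 1..N.
solutions = [x for x in product(range(1, N + 1), repeat=N) if is_solution(x)]
print(len(solutions))               # number of valid placements for N = 4
print((N + 1) ** (N * N), N ** N)   # Attempt 1 vs Attempt 2 state-space sizes
```

For $N = 4$ this finds exactly the two classic solutions, placing queens in rows $(2,4,1,3)$ and $(3,1,4,2)$.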
### Map Coloring
Map coloring is another classic example of a CSP. Consider the map of Australia shown below (from \cite{AIMA}). The goal is to assign a color to Australia's seven states and territories such that no neighboring regions share the same color. We are further constrained by only being able to use three colors (e.g., <span style="color:red;font-weight:bold">R</span>, <span style="color:green;font-weight:bold">G</span>, <span style="color:blue;font-weight:bold">B</span>). Next to the map is the constraint graph representation of this specific map-coloring problem.
The constraint network model $\mathcal{R}=\langle X,D,C \rangle$ for the general map-coloring problem with $N$ regions and $M$ colors is defined as:
\begin{align}
X &= \{x_i\}_{i=1}^N && \text{(Each region)} \\
D &= \{D_i\}_{i=1}^N,\ \text{where}\; D_i = \{c_j\}_{j=1}^M, && \text{(Available colors)}
\end{align}
and the constraints are encoded as
\begin{align}
\forall x_i\in X: x_i &\ne n_j,\ \forall n_j\in\mathcal{N}(x_i), && \text{(Each region cannot have the same color as any of its neighbors)}
\end{align}
where the neighborhood of the region $x_i$ is defined as the set $\mathcal{N}(x_i) = \{x_j\in X| A_{ij}=1,i\ne j, \forall j\}$. The matrix $A\in\mathbb{Z}_{\ge 0}^{N\times N}$ is called the *adjacency matrix* of a graph with $N$ vertices and represents the variables that a given variable is connected to by constraints (i.e., edges). The notation $A_{mn}$ indexes into the matrix by row $m$ and column $n$.
We will use the map coloring problem as a COP example later on.
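The adjacency-matrix formulation above can be sketched in a few lines of Python. The snippet below is an illustration, not part of the original text: the region names and edge list encode the Australia map, and the proper 3-colorings are counted by brute force.

```python
from itertools import product

regions = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
edges = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
         ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]

# Build the adjacency matrix A from the edge list.
idx = {r: i for i, r in enumerate(regions)}
n = len(regions)
A = [[0] * n for _ in range(n)]
for r, s in edges:
    A[idx[r]][idx[s]] = A[idx[s]][idx[r]] = 1

def neighbors(i):
    """The neighborhood N(x_i) = {x_j : A_ij = 1, i != j}."""
    return [j for j in range(n) if A[i][j] == 1]

colors = ["R", "G", "B"]
solutions = [c for c in product(colors, repeat=n)
             if all(c[i] != c[j] for i in range(n)
                    for j in neighbors(i) if i < j)]
print(len(solutions))  # number of proper 3-colorings
```

Since South Australia touches every mainland region except Tasmania, its color can be fixed first; the remaining mainland regions then form a 2-colorable path, which is why only a handful of colorings exist.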
### First MiniZinc model
We are now ready to solve our first CSP! Let us now introduce [MiniZinc](https://www.minizinc.org/), a **high-level**, **solver-independent** language for expressing constraint programming problems and solving them. It has a large library of already-encoded constraints that we can exploit to encode our problem.
A very useful constraint is `alldifferent(array[int] of var int: x)`, one of the most studied and used constraints in constraint programming. As the name suggests, it takes an array of variables and constrains them to take pairwise different values.
Let's focus on the N-Queens problem as formulated in Attempt 2. The reader can notice that we can write (C4), (C5) and (C6) by leveraging the `alldifferent` constraint. As a result we get the following model.
```python
%%minizinc
include "globals.mzn";
int: n = 4;
array[1..n] of var 1..n: queens;
constraint all_different(queens);
constraint all_different([queens[i]+i | i in 1..n]);
constraint all_different([queens[i]-i | i in 1..n]);
solve satisfy;
```
Here we are asking MiniZinc to find any feasible solution (`solve satisfy`) given the constraints.
With a high-level language it is easy to describe and solve a CSP, while the solver abstracts away the complexity of the search process. Let's now focus on how a CSP is actually solved.
---
## Constraint Propagation Methods
As previously mentioned, the domain size of a CSP can be dramatically reduced by removing values from variable domains that would violate the relevant constraints. This idea is called **local consistency**. By representing a CSP as a binary constraint graph, making a graph locally consistent amounts to visiting the $i^\text{th}$ node and for each of the values in the domain $D_i$, removing the values of neighboring domains that would cause an illegal assignment.
A great example of the power of constraint propagation is seen in Sudoku puzzles. Simple puzzles are designed to be solved by constraint propagation alone. By enforcing local consistency throughout simple formulations of Sudoku common in newspapers, the unique solution is found without the need for search.
While there are multiple forms of consistency, we will forgo a discussion of node consistency (single node), path consistency (3 nodes), and generally **$k$-consistency** ($k$ nodes) to focus on arc consistency.
### Arc Consistency
The most well-known notion of local consistency is **arc consistency**, where the key idea is to remove values of variable domains that can never satisfy a specified constraint. The arc $\langle x_i, x_j \rangle$ between two variables $x_i$ and $x_j$ is said to be arc consistent if $\langle x_i, x_j \rangle$ and $\langle x_j, x_i \rangle$ are *directed* arc consistent.
The arc $\langle x_i, x_j \rangle$ is **directed arc consistent** (from $x_i$ to $x_j$) if $\forall a_i \in D_i \;
\exists a_j \in D_j$ s.t. $\langle a_i, a_j \rangle \in C_{ij}$. The notation $C_{ij}$ represents a constraint between variables $x_i$ and $x_j$ with a relation on their domains $D_i, D_j$. In other words, we write a constraint $\langle \{x_i, x_j\}, R \rangle$ as $C_{ij} = R$, where $R\subset D_i\times D_j$.
As an example, consider the following simple constraint network:
\begin{align}
X &= \{x_1, x_2\} \\
D &= \{D_1, D_2\},\ \text{where}\; D_1=\{1,3,5,7\}, D_2=\{2,4,6,8\} \\
C &= \{C_{12}\},
\end{align}
where $C_{12} = \{(1,2),(3,8),(7,4)\}$ lists legal assignment relationships between $x_1$ and $x_2$.
To make $\langle x_1, x_2 \rangle$ directed arc consistent, we would remove the values from $D_1$ that could never satisfy the constraint $C_{12}$. The original domains are shown on the left, while the directed arc consistent graph is shown on the right. Note that 6 is not removed from $D_2$ because directed arc consistency only considers consistency in one direction.
<table width="70%">
<tr style="background-color:white">
<td></td>
<td></td>
</tr>
</table>
Similarly, we can make $\langle x_2, x_1 \rangle$ directed arc consistent by removing 6 from $D_2$. This results in an arc consistent graph, shown below.
#### Sound but Incomplete
By making a CSP arc consistent, we are guaranteed that solutions to the CSP will be found in the reduced domain of the arc consistent CSP. However, we are not guaranteed that any arbitrary assignment of variables from the reduced domain will offer a valid CSP solution. In other words, arc consistency is sound (all solutions are arc-consistent solutions) but incomplete (not all arc-consistent solutions are valid solutions).
### Algorithms
To achieve arc consistency in a graph, we can formalize the ideas discussed above about removing values from domains that will never participate in a legal constraint. Two widespread algorithms are considered, known as `AC-1` and `AC-3`, which are the first and third versions described by Mackworth in \cite{Mackworth1977}.
In this section, we give the pseudocode for these algorithms and a discussion of their complexities and trade-offs.
#### The `REVISE` Algorithm
First, we formalize the procedure of achieving local consistency via the `REVISE` procedure, which is an algorithm that enforces directed arc consistency on a subnetwork. This is the algorithm that we used in the toy example above with $x_1$ and $x_2$.
```vhdl
1 procedure REVISE(xi,xj)
2 for each ai in Di
3 if there is no aj in Dj such that (ai,aj) is consistent,
4 delete ai from Di
5 end if
6 end for
7 end
```
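A direct Python transcription of `REVISE` (our own sketch; a constraint is stored as the set of allowed value pairs), applied to the toy network above:

```python
def revise(Di, Dj, allowed):
    """Enforce directed arc consistency from x_i to x_j.

    Di, Dj are sets of values; `allowed` is the set of legal (ai, aj) pairs.
    Removes from Di every value with no support in Dj; returns True iff
    Di was changed.
    """
    dead = {ai for ai in Di if not any((ai, aj) in allowed for aj in Dj)}
    Di -= dead
    return bool(dead)

# The toy network from above: D1={1,3,5,7}, D2={2,4,6,8}, C12={(1,2),(3,8),(7,4)}
D1, D2 = {1, 3, 5, 7}, {2, 4, 6, 8}
C12 = {(1, 2), (3, 8), (7, 4)}
revise(D1, D2, C12)
print(D1)  # 5 has no support in D2 and is removed -> {1, 3, 7}
```

Note that `D2` is untouched: directed arc consistency only prunes the tail of the arc.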
##### Complexity Analysis
The complexity of `REVISE` is $O(k^2)$, where $k$ bounds the domain size, i.e., $k=\max_i|D_i|$. The $k^2$ comes from the fact that there is a double `for loop`---the outer loop is on line 2 and the inner loop is on line 3.
#### The `AC-1` Algorithm
A first pass of enforcing arc consistency on an entire constraint network would be to revise each variable domain in a brute-force manner. This is the objective of the following `AC-1` procedure, which takes a CSP definition $\mathcal{R}=\langle X, D, C\rangle$ as input.
```vhdl
1 procedure AC1(csp)
2 loop
3 for each cij in C
4 REVISE(xi, xj)
5 REVISE(xj, xi)
6 end for
7 until no domain is changed
8 end
```
If after the `AC-1` procedure is run any of the variable domains are empty, then we conclude that the network has no solution. Otherwise, we are guaranteed an arc-consistent network.
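A runnable Python sketch of `AC-1` (our own representation: domains as sets, each binary constraint as a set of allowed pairs):

```python
def revise(Di, Dj, allowed):
    """Remove from Di values with no support in Dj; True iff Di changed."""
    dead = {a for a in Di if not any((a, b) in allowed for b in Dj)}
    Di -= dead
    return bool(dead)

def ac1(domains, constraints):
    """AC-1: sweep every constraint in both directions until quiescence.

    domains: dict var -> set of values (mutated in place).
    constraints: dict (i, j) -> set of allowed (ai, aj) pairs.
    Returns False iff some domain becomes empty (no solution).
    """
    changed = True
    while changed:
        changed = False
        for (i, j), rel in constraints.items():
            changed |= revise(domains[i], domains[j], rel)
            changed |= revise(domains[j], domains[i], {(b, a) for a, b in rel})
    return all(domains.values())

# The toy network again: after AC-1, 5 leaves D1 and 6 leaves D2.
domains = {1: {1, 3, 5, 7}, 2: {2, 4, 6, 8}}
ok = ac1(domains, {(1, 2): {(1, 2), (3, 8), (7, 4)}})
print(ok, domains)
```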
##### Complexity Analysis
Let $k$ bound the domain size as before and let $n=|X|$ be the number of variables and $e=|C|$ be the number of constraints. One cycle through all of the constraints (lines 3-6) takes $O(2\,e\,O_\text{REVISE}) = O(ek^2)$. In the worst case, only a single domain is changed in one cycle. In this case, the maximum number of repeats (line 7) will be the total number of values, $nk$. Therefore, the worst-case complexity of the `AC-1` procedure is $O(enk^3)$.
#### The `AC-3` Algorithm
Clearly, `AC-1` is straightforward to implement and generates an arc-consistent network, but at great expense. The question we must ask ourselves when using any brute-force method is: Can we do better?
A key observation about `AC-1` is that it processes all constraints even if only a single domain was reduced. This is unnecessary because changes in a domain typically only affect a local subgraph around the node in question.
The `AC-3` procedure is an improved version that maintains a queue of ordered pairs of variables that participate in a constraint (see lines 2-4). Each arc that is processed is removed from the queue (line 6). If the domain of the arc tail $x_i$ is revised, arcs that have $x_i$ as the head will need to be re-evaluated and are added back to the queue (lines 8-10).
```vhdl
1 procedure AC3(csp)
2 for each cij in C do
3 Q ← Q ∪ {<xi,xj>, <xj,xi>};
4 end for
5 while Q is not empty
6 select and delete any arc (xi,xj) from Q
7 REVISE(xi,xj)
8 if REVISE(xi,xj) caused a change in Di
9 Q ← Q ∪ {<xk,xi> | k ≠ i, k ≠ j, ∀k }
10 end if
11 end while
12 end
```
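A compact Python version of `AC-3` (our own sketch, keeping one directed relation per arc and a deque as the work queue):

```python
from collections import deque

def revise(Di, Dj, allowed):
    """Remove from Di values with no support in Dj; True iff Di changed."""
    dead = {a for a in Di if not any((a, b) in allowed for b in Dj)}
    Di -= dead
    return bool(dead)

def ac3(domains, constraints):
    """AC-3 over binary constraints given as (i, j) -> allowed pairs.

    domains: dict var -> set of values (mutated in place).
    Returns False iff some domain is wiped out.
    """
    # Store each constraint as two directed arcs.
    rel = {}
    for (i, j), pairs in constraints.items():
        rel[(i, j)] = set(pairs)
        rel[(j, i)] = {(b, a) for a, b in pairs}
    queue = deque(rel)
    while queue:
        i, j = queue.popleft()
        if revise(domains[i], domains[j], rel[(i, j)]):
            if not domains[i]:
                return False  # wiped-out domain: inconsistent network
            # Re-examine arcs pointing at x_i (except the one just used).
            queue.extend((k, i) for (k, m) in rel if m == i and k != j)
    return True

domains = {1: {1, 3, 5, 7}, 2: {2, 4, 6, 8}}
ok = ac3(domains, {(1, 2): {(1, 2), (3, 8), (7, 4)}})
print(ok, domains)
```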
##### Complexity Analysis
Using the same notation as before, the time complexity of `AC-3` is computed as follows. Building the initial `Q` is $O(e)$. We know that `REVISE` is $O(k^2)$ (line 7). This algorithm processes constraints at most $2k$ times since each time it is reintroduced into the queue (line 9), the domain of one of its associated variables has just been revised by at least one value, and there are at most $2k$ values. Therefore, the total time complexity of `AC-3` is $O(ek^3)$.
Note that the optimal algorithm has complexity $O(ek^2)$ since the worst case of merely verifying the arc consistency of a network requires $ek^2$ operations. There is an `AC-4` algorithm that achieves this performance by not using `REVISE` as a black box, but by exploiting the structures at the constraint level \cite{Dechter2003}.
### Example
Using our efficient CSP model (Attempt 2) from the previous section, consider the following 4-Queens problem, with the chessboard shown to the left and the corresponding constraint graph representation to the right. We have already placed the first queen in the first row, $x_1=1$.
<table width="70%">
<tr>
<td></td>
<td></td>
</tr>
</table>
We would like to use the `AC-3` algorithm to propagate constraints and eliminate inconsistent values in the domains of variables $x_2$, $x_3$ and $x_4$. Intuitively, we already know which values are inconsistent with our constraints (shown with $\times$ in the chessboard above). Follow the slides below to walk through the `AC-3` algorithm.
```python
ac3_slides = SlideController('images/4queens_slide%02d.png', 8)
```
Note how in this example, the efficiencies of `AC-3` were unnecessary: a single pass of `AC-1` would have achieved the same result. Although that was the case for this specific instance, by adding only the affected arcs back to the queue to be examined, `AC-3` is more computationally efficient in general.
---
## Search Methods
In the previous 4-Queens example, constraint propagation via `AC-3` was not enough to find a satisfying complete assignment to the CSP. In fact, if `AC-3` had been applied to the empty 4-Queens chessboard, no domains would have been pruned because all variables were already arc consistent. In these cases, we must assign the next variable a value by *guessing and testing*.
This trial and error method of guessing a variable and testing if it is consistent is formalized in **search methods** for solving CSPs. As mentioned previously, a simple state-space search would be intractable as the number of variables and their domains increase. However, we will first examine state-space search in more detail and then move to a more clever search algorithm called backtrack search (BT) that checks consistency along the way.
### Generic Search for CSPs
As we have studied before, a generic search problem can be specified by the following four elements: (1) state space, (2) initial states, (3) operator, and (4) goal test. In a CSP, consider the following definitions of these elements:
- state space
- partial assignment to variables at the current iteration of the search
- initial state
- no assignment
- operator
- add a new assignment to any unassigned variable, e.g., $x_i = a$, where $a\in D_i$.
- child extends parent assignments with the new assignment
- goal test
- all variables are assigned
- all constraints are satisfied
### Making Search More Efficient for CSPs
The inefficiency of using the generic state-space search approaches we have previously employed is caused by the size of the state space. Recall that a simple state-space search (using either breadth-first search or depth-first search) has worst case performance of $O(b^d)$, where $b$ is the branching factor and $d$ is the search depth, as illustrated below (from 16.410/413, Lecture 3).
In the above formulation of generic state-space search of CSPs, note that the branching factor is calculated as the sum of the maximum domain size $k$ for all variables $n$, i.e., $b = nk$. The search depth of a CSP is exactly $n$, because all variables must be assigned to be considered a solution. Therefore, the performance is exponential in the number of variables, $O([nk]^n)$.
This analysis fails to recognize that there are only $k^n$ possible complete assignments of the CSP. That is because the property of **commutativity** is ignored in the above formulation of CSP state-space search. CSPs are commutative because the order in which partial assignments are made does not affect the outcome. Therefore, by restricting the choice of assignment to a single variable at each node in the search tree, the runtime performance becomes only $O(k^n)$.
By combining this property with the idea that **extensions to inconsistent partial assignments are always inconsistent**, backtracking search shows how checking consistency after each assignment enables a more efficient CSP search.
<!--
With a better understanding of how expensive it can become to solve interesting problems with a simple state-space search, we are motivated to find a better searching algorithm. Two factors that contribute to the size of a search space are (1) variable ordering, and (2) consistency level.
We have already seen from the `AC-3` example on 4-Queens how enforcing arc-consistency on a network can result in the pruning of variable domains. This clearly reduces the search space of the CSP resulting in better performance from a search algorithm. Therefore, we will focus our discussion on the effects of **variable ordering**.
-->
### Backtracking Search
Backtracking (BT) search is based on depth-first search to choose values for one variable at a time, but it backtracks whenever there are no legal values left to assign. The state space is searched by extending the current partial solution with an assignment to unassigned variables. Starting with the first variable, the algorithm assigns a provisional value to each subsequent variable, checking value consistency along the way. If the algorithm encounters a variable for which no domain value is consistent with the previous assignments, a *dead-end* occurs. At this point, the search *backtracks* and the variable preceding the dead-end assignment is changed and the search continues. The algorithm returns when a solution is found, or when the search is exhausted with no solution.
#### Algorithm
The following recursive algorithm performs a backtracking search on a given CSP. The recursion base case occurs on line 3, which indicates the halting condition of the algorithm.
```vhdl
1 procedure backtrack(csp)
2 if csp.assignment is complete and feasible then
3 return assignment ; recursion base case
4 end if
5 var ← csp.get_unassigned_var()
6 for next value in csp.var_domain(var)
7 original_domain = csp.assign(var, value)
8 if csp.assignment is feasible then
9 result ← backtrack(csp)
10 if result ≠ failure then
11 return result
12 end if
13 csp.restore_domain(original_domain)
14 end if
15 csp.unassign(var, value)
16 return failure
17 end
```
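A minimal self-contained Python version of this procedure for the N-Queens model from Attempt 2 (our own sketch; `cols[i]` is the 0-based column of the queen in row `i`):

```python
def backtrack_nqueens(n, row=0, cols=()):
    """Backtracking search for N-Queens.

    Consistency is checked after every assignment; if no column is
    consistent for the current row (a dead end), the call returns None
    and the caller backtracks.
    """
    if row == n:                      # base case: complete, feasible assignment
        return cols
    for c in range(n):
        # Consistent with all earlier rows: different column and diagonals.
        if all(c != cj and abs(c - cj) != row - rj
               for rj, cj in enumerate(cols)):
            result = backtrack_nqueens(n, row + 1, cols + (c,))
            if result is not None:
                return result
    return None                       # dead end: trigger backtracking

print(backtrack_nqueens(4))  # (1, 3, 0, 2)
```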
#### Example
We can apply the backtrack search algorithm to the N-Queens problem. Note that this simple version of the algorithm makes finding a solution tractable for a handful of queens, but there are other improvements that can be made that are discussed in the following section.
```python
queens, exec_time = nqueens_backtracking(4)
draw_nqueens([queens.assignment])
print("Solution found in %0.4f seconds" % exec_time)
```
### Branch and Bound
Suppose we would like to find the *best* solution (in some sense) to the CSP. This amounts to solving the associated constraint optimization problem (COP), where our constraint network is now a 4-tuple, $\langle X, D_X, C, f \rangle$, where $X\in D_X$, $C: D_X \to \{\operatorname{True},\operatorname{False}\}$ and $f: D_X\to\mathbb{R}$ is a cost function. We would like to find the variable assignments $X$ that solve
\begin{array}{ll@{}ll}
\text{minimize} & f(X) &\\
\text{subject to}& C(X) &
\end{array}
By adding a cost function $f(X)$, we turn a CSP into a COP, and we can use the **branch and bound algorithm** to find the solution with the lowest cost.
To find the optimal solution of a COP we could surely explore the whole search tree and then pick the leaf with the smallest cost value. However, we can also integrate the optimization into the search process, allowing us to **prune** a branch even if no inconsistency has been detected yet.
The main idea behind branch and bound is the following: if the best solution found so far has cost $c$, then $c$ is an _upper bound_ on the cost of the optimal solution. So, if a partial solution has already incurred a cost of $x$ (cost so far) and the best we can possibly achieve for all remaining cost components is $y$, with $x + y \ge c$, then we do not need to continue in this branch.
Of course every time we prune a subtree we are implicitly making the search faster compared with full exploration. Therefore with a small overhead in the algorithm, we can improve (in the average case) the runtime.
#### Algorithm
```vhdl
1 procedure BranchAndBound(cop)
2 i ← 1; ai ← {} ; initialize variable counter and assignments
3 a_inc ← {}; f_inc ← ∞ ; initialize incumbent assignment and cost
4 Di´ ← Di ; copy domain of first variable
5 while 1 ≤ i ≤ n+1
6 if i = n+1 ; "unfathomed" consistent assignment
7 f_inc ← f(ai) and a_inc ← ai ; updated incumbent
8 i ← i - 2
9 else
10 instantiate xi ← SelectValueBB(f_inc) ; Add to assignments ai; update Di
11 if xi is null ; if no value was returned,
12 i ← i - 1 ; then backtrack
13 else
14 i ← i + 1 ; else step forward and
15 Di´ ← Di ; copy domain of next variable.
16 end if
17 end if
18 end while
19 return incumbent a_inc and f_inc ; Assignments exhausted, return incumbent
20 end
```
<br><br>
```vhdl
1 procedure SelectValueBB(f_inc)
2 while Di´ ≠ ∅
3 select an arbitrary element a ∈ Di´ and remove a from Di´
4 ai ← ai ∪ {xi = a}
5 if consistent(ai) and b(ai) < f_inc
6 return a;
7 end if
8 end while ; no consistent value
9 return null
10 end
```
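The pseudocode above can be condensed into a recursive Python sketch (our own; `bound` plays the role of `b(ai)` in `SelectValueBB`, an optimistic lower bound on the cost of completing a partial assignment):

```python
import math

def branch_and_bound(domains, consistent, cost, bound):
    """Minimize `cost` over consistent complete assignments.

    domains: list of candidate-value lists, one per variable.
    consistent(partial) -> bool; cost(full) -> number;
    bound(partial) -> optimistic estimate of the best completion cost.
    """
    best = (math.inf, None)

    def search(partial):
        nonlocal best
        if len(partial) == len(domains):      # complete assignment
            c = cost(partial)
            if c < best[0]:
                best = (c, tuple(partial))    # new incumbent
            return
        for v in domains[len(partial)]:
            partial.append(v)
            # Expand only if consistent AND the bound can still beat
            # the incumbent; otherwise prune this branch.
            if consistent(partial) and bound(partial) < best[0]:
                search(partial)
            partial.pop()

    search([])
    return best

# Toy COP: color 3 mutually adjacent regions; cost = number of distinct
# colors used (the number of distinct colors so far is a valid lower bound).
edges = [(0, 1), (1, 2), (0, 2)]
ok = lambda p: all(p[i] != p[j] for i, j in edges if j < len(p))
cost = lambda p: len(set(p))
best_cost, best = branch_and_bound([list(range(4))] * 3, ok, cost, cost)
print(best_cost, best)  # a triangle needs 3 colors
```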
#### Example
Now let's return to the map coloring problem. Imagine that we work at a company that wishes to print a colored map of the United States, so it needs to choose a color for each state. Let's also imagine that the available colors are:
```python
colors = [
'red',
'green',
'blue',
'#6f2da8', #Grape
'#ffbf00', #Amber
'#01796f', #Pine
'#813f0b', #Clay
'#ff2000', #yellow
'#ff66cc', #pink
'#d21f3c' #raspberry
]
```
The CEO asks the engineering department (they have one of course) to find a color assignment that satisfies the constraints as specified above in _Map Coloring_ and they arrive at the following solution:
```python
map_colors, num_colors = us_map_coloring(colors)
draw_us_map(map_colors)
```
Unfortunately, management is never happy and they complain that {{ num_colors }} colors are really too many. Can we do better? Yes, by adding an objective function $f$ that gives a cost proportional to the number of used colors, we can minimize $f$. This results in the following solution:
```python
map_colors, opt_num_colors = us_map_coloring(colors, optimize=True)
draw_us_map(map_colors)
```
Fortunately we saved {{ num_colors - opt_num_colors }} colors, well done!
---
## Extended Methods
The methods discussed in this section arise from viewing a CSP from different perspectives and from a combination of constraint propagation and search methods.
### BT Search with Forward Checking (BT-FC)
By interleaving inference from constraint propagation and search, we can obtain much more efficient solutions. A well-known way of doing this is by adding an arc consistency step to the backtracking algorithm. The result is called **forward checking**, which allows us to run search on graphs that have not already been pre-processed into arc consistent CSPs.
#### Algorithm
**Main Idea**: Maintain n domain copies for resetting, one for each search level i.
```vhdl
1 procedure BTwithFC(csp)
2 Di´ ← Di for 1 ≤ i ≤ n ; copy all domains
3 i ← 1; ai = {} ; init variable counter, assignments
4 while 1 ≤ i ≤ n
5 instantiate xi ← SelectValueFC() ; add to assignments, making ai
6 if xi is null ; if no value was returned
7 reset each Dk´ for k>i to
8 its value before xi
9 was last instantiated
10 i ← i - 1 ; backtrack
11 else
12 i ← i + 1 ; step forward
13 end if
14 end while
15 if i = 0
16 return "inconsistent"
17 else
18 return ai ; the instantiated values of {x1, ..., xn}
19 end
```
```vhdl
1 procedure SelectValueFC()
2 while Di´ ≠ ∅
3 select an arbitrary element a ∈ Di´ and remove a from Di´
4 for all k, i < k ≤ n
5 for all values b ∈ Dk´
6 if not consistent(a_{i-1}, xi=a, xk=b)
7 remove b from Dk´
8 end if
9 end for
10 if Dk´ = ∅ ; xi=a leads to a dead-end: do not select a
11 reset each Dk´, i<k≤n to its value before a was selected
12 else
13 return a
14 end if
15 end for
16 end while
17 return null
18 end
```
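Putting the two procedures together, here is a compact Python sketch of backtracking with forward checking on N-Queens (our own code, using 0-based rows and columns):

```python
def nqueens_fc(n):
    """N-Queens via backtracking with forward checking.

    domains[r] holds the columns still legal for row r; after each
    assignment the domains of later rows are pruned, and a wiped-out
    domain triggers immediate backtracking.
    """
    def attacks(r1, c1, r2, c2):
        return c1 == c2 or abs(c1 - c2) == abs(r1 - r2)

    def search(row, domains, partial):
        if row == n:
            return partial
        for c in domains[row]:
            # Forward-check: prune the later rows against (row, c).
            pruned = [
                [c2 for c2 in domains[r] if not attacks(row, c, r, c2)]
                for r in range(row + 1, n)
            ]
            if all(pruned):  # no later domain was wiped out
                result = search(row + 1,
                                domains[:row + 1] + pruned,
                                partial + [c])
                if result is not None:
                    return result
        return None

    return search(0, [list(range(n))] * n, [])

print(nqueens_fc(4))  # [1, 3, 0, 2]
```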
#### Example
The example code below runs a backtracking search with forward checking on the N-Queens problem. For the same value of $N$, note how a solution can be found much faster than without forward checking.
```python
queens, exec_time = nqueens_backtracking(4, with_forward_checking=True)
draw_nqueens([queens.assignment])
print("Solution found in %0.4f seconds" % exec_time)
```
### BT-FC with Dynamic Variable and Value Ordering
Traditional backtracking as it was introduced above uses a fixed ordering over variables and values. However, it is often better to choose ordering dynamically as the search proceeds. The idea is as follows. At each node during the search, choose:
- the most constrained variable; picking the variable with the fewest legal values in its domain will minimize the branching factor,
- the least constraining value; choosing a value that rules out the smallest number of values of variables connected to the chosen variable via constraints will leave most options for finding a satisfying assignment.
These two ordering heuristics cause the algorithm to choose the variable that fails first and the value that fails last. This helps minimize the search space by pruning larger parts of the tree early on.
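The two heuristics can be sketched in a few lines of Python (our own code; variable and value names in the example are illustrative):

```python
def most_constrained_variable(domains, assignment):
    """MRV heuristic: pick the unassigned variable with the fewest legal values."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

def least_constraining_value(var, domains, neighbors, allowed):
    """LCV heuristic: order var's values so that those ruling out the fewest
    neighbor values come first. allowed(a, b) tests the binary constraint."""
    def ruled_out(a):
        return sum(1 for n in neighbors[var] for b in domains[n]
                   if not allowed(a, b))
    return sorted(domains[var], key=ruled_out)

# Tiny map-coloring-flavored example:
domains = {'A': {'R', 'G'}, 'B': {'R', 'G', 'B'}}
print(most_constrained_variable(domains, {}))          # 'A' (2 values vs 3)

neighbors = {'A': ['B']}
allowed = lambda a, b: a != b                          # neighboring values differ
print(least_constraining_value('A', {'A': {'R', 'G'}, 'B': {'R', 'B'}},
                               neighbors, allowed))    # ['G', 'R']
```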
#### Example
The example code below demonstrates BT-FC with dynamic variable ordering using the most-constrained-variable heuristic. The run-time cost of finding a solution to the N-Queens problem is lower than both BT and BT-FC, allowing the problem to be solved for even larger $N$.
```python
queens, exec_time = nqueens_backtracking(4, with_forward_checking=True, var_ordering='smallest_domain')
draw_nqueens([queens.assignment])
print("Solution found in %0.4f seconds" % exec_time)
```
### Adaptive Consistency: Bucket Elimination
Another method of solving constraint problems entails eliminating variables through bucket elimination. This method can be understood through the lens of Gaussian elimination, where equations (i.e., constraints) are combined and then variables are eliminated. More formally, these operations can be thought of from the perspective of relations as **join** and **project** operations.
Bucket elimination uses the join and projection operations on the set of constraints to transform a constraint graph into a single variable. After solving for that variable, the remaining variables are solved for by back substitution, just as in Gaussian elimination.
The join and project operators are explained below using the map coloring problem, where an `AllDiff` constraint exists between each pair of neighboring variables. The constraint graph for the map coloring problem is shown below.
#### The Join Operator
The map coloring CSP can be trivially solved using the join operation on the constraints, which is defined as the consistent Cartesian product of the constraint relations.
Written as tables, the relations of each constraint $C_{12}$, $C_{23}$, and $C_{13}$ are
<table>
<tr><th style="text-align:center">$C_{12}$</th><th style="text-align:center">$C_{23}$</th><th style="text-align:center">$C_{13}$</th></tr>
<tr><td>
|$V_1$|$V_2$|
|-----|-----|
| R | G |
| G | R |
| B | R |
| B | G |
</td><td>
|$V_2$|$V_3$|
|-----|-----|
| R | G |
</td><td>
|$V_1$|$V_3$|
|-----|-----|
| R | G |
| B | G |
</td></tr>
</table>
These constraint relation tables are then joined together as
<table>
<tr><th style="text-align:center">$C_{12}\Join C_{23}$</th><th style="text-align:center">$C_{13}$</th></tr>
<tr><td>
|$V_1$|$V_2$|$V_3$|
|-----|-----|-----|
| G | R | G |
| B | R | G |
</td><td>
|$V_1$|$V_3$|
|-----|-----|
| R | G |
| B | G |
</td></tr>
</table>
<table>
<tr><th style="text-align:center;width:140px">$C_{12}\Join C_{23}\Join C_{13}$</th></tr>
<tr><td>
|$V_1$|$V_2$|$V_3$|
|-----|-----|-----|
| B | R | G |
</td></tr>
</table>
#### The Projection Operator
The projection operator is akin to the elimination step in Gaussian elimination and is useful for shrinking the size of the constraints. After joining all the constraints in the above example, we can project out all variables except one to obtain the value of that variable.
For example, the projection of $C_{12}\Join C_{23}\Join C_{13}$ onto $V_1$ is
<table>
<tr><th style="text-align:center;width:180px">$C_1 = \Pi_1 (C_{12}\Join C_{23}\Join C_{13})$</th></tr>
<tr><td>
|$V_1$|
|-----|
| B |
</td></tr>
</table>
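Both operators are straightforward to implement over relations stored as (schema, set-of-rows) pairs. The sketch below (our own representation) reproduces the tables above:

```python
def join(r1, r2):
    """Natural join of two relations.

    Each relation is (schema, rows): schema is a tuple of variable names,
    rows a set of value tuples. Rows combine when shared variables agree.
    """
    s1, rows1 = r1
    s2, rows2 = r2
    extra = [i for i, v in enumerate(s2) if v not in s1]
    schema = s1 + tuple(s2[i] for i in extra)
    shared = [(s1.index(v), j) for j, v in enumerate(s2) if v in s1]
    rows = {t1 + tuple(t2[i] for i in extra)
            for t1 in rows1 for t2 in rows2
            if all(t1[i] == t2[j] for i, j in shared)}
    return schema, rows

def project(rel, keep):
    """Project a relation onto the variables in `keep`."""
    schema, rows = rel
    idx = [schema.index(v) for v in keep]
    return tuple(keep), {tuple(t[i] for i in idx) for t in rows}

# The map-coloring relations from the tables above:
C12 = (("V1", "V2"), {("R", "G"), ("G", "R"), ("B", "R"), ("B", "G")})
C23 = (("V2", "V3"), {("R", "G")})
C13 = (("V1", "V3"), {("R", "G"), ("B", "G")})

full = join(join(C12, C23), C13)
print(full)                    # only (B, R, G) survives the full join
print(project(full, ["V1"]))   # projecting onto V1 yields B
```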
---
# Symmetries
<div style="text-align: right"> M. C. Escher </div>
## Introduction
A CSP often exhibits some symmetries, which are mappings that preserve satisfiability of the CSP. Symmetries are particularly disadvantageous when we are looking for **all possible solutions** of a CSP, since search can revisit equivalent states over and over again.
\begin{definition} \label{def:symmetry}
(Symmetry). For any CSP instance $P = \langle X, D, C \rangle$, a solution symmetry of $P$ is a permutation of the set $X\times D$ that preserves the set of solutions to $P$.
\end{definition}
In other words, a solution symmetry is a bijective mapping defined on the set of possible variable-value pairs of a CSP that maps solutions to solutions.
### Why is symmetry important?
A principal reason for identifying CSP symmetries is to **reduce search effort** by not exploring assignments that are symmetrically equivalent to assignments considered elsewhere in the search. If a problem has many solutions that are symmetric variants of a small subset of non-symmetric solutions, the search tree is larger, and if we are looking for all solutions, the search process is forced to visit every symmetric solution in that larger tree. If instead we can prune the subtrees containing symmetric solutions, the search effort is reduced drastically.
### Case Study: symmetries in N-Queens problem
We have already seen the N-Queens problem. Let us see all the solutions of a $4 \times 4$ chessboard.
```python
queens = nqueens(4)
draw_nqueens(queens, all_solutions=True)
```
There are exactly 2 solutions.
It's easy to notice that the two are the same solution if we flip (or rotate) the chessboard.
### Interactive examples
All the following code snippets are a refinement of the original N-Queens problem where we modify the problem to reduce the number of symmetries. Feel free to explore how the number of solutions to the N-Queens problem changes when we change symmetry breaking strategy and $N$.
You can use the following slider to change $N$, then press the button `Update cells...` to quickly update the results of the models.
```python
n = 5
def update_n(x):
global n
n = x
interact(update_n , x=widgets.IntSlider(value=n, min=1,max=12,step=1, description='Queens:'));
```
```python
## Update all cells dependent from the slider with the following button
button = widgets.Button(description="Update cells...")
display(button)
button.on_click(autoupdate_cells)
```
## Avoid symmetries
### Adding Constraints Before Search
In practice, symmetry in CSPs is usually identified by applying human insight: the programmer sees that some transformation would translate a hypothetical solution into another hypothetical solution. Then, the programmer can try to formalize some constraint that preserves solutions but removes some of the symmetries.
For $N$ = {{n}} the N-Queens problem has {{ len(nqueens(n)) }} solutions. One naive way to remove some of the symmetric solutions is to restrict the position for some of the queens, for example, we can say that the first queen should be on the top half of the chess board by imposing an additional constraint like
```
constraint queens[0] <= n div 2;
```
This constraint should remove approximately half of the symmetries. Let's try the new model!
```python
%%minizinc --all-solutions --statistics -m bind
include "globals.mzn";
int: n;
array[0..n-1] of var 0..n-1: queens;
constraint all_different(queens);
constraint all_different([queens[i]+i | i in 0..n-1]);
constraint all_different([queens[i]-i | i in 0..n-1]);
constraint queens[0] <= n div 2;
solve satisfy;
```
If you play with $N$ you will notice that for $N=4$ all solutions are retained. However, for $N>4$ symmetric solutions will begin to be pruned out.
This approach is fine, and done correctly it can greatly reduce the search space. Done incorrectly, however, the additional constraint can lose solutions.
To address the problem in a better way we need some formal tools.
### Chessboard symmetries
Looking at the chessboard, we notice that it has eight geometric symmetries. In particular they are:
- the identity (no transformation) $id$ (we always include the identity)
- horizontal reflection $r_x$
- vertical reflection $r_y$
- reflections along the two diagonal axes ($r_{d_1}$ and $r_{d_2}$)
- rotations through $90$°, $180$° and $270$° ($r_{90}$, $r_{180}$, $r_{270}$)
If we label the sixteen squares of a $4 \times 4$ chessboard with the numbers 1 to 16, we can graphically see how symmetries move cells.
Now it's easy to see that a symmetry is a **permutation** that acts on the squares: for example, if a queen is at $(2,1)$ (which corresponds to square $5$ in $id$), under the mapping $r_{90}$ it moves to $(4,2)$ (square $14$).
One useful form to write a permutation is in _Cauchy form_, for example for $r_{90}$
\begin{equation}
r_{90} : \left( \begin{array}{cccccccccccccccc}
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16\\
13 & 9 & 5 & 1 & 14 & 10 & 6 & 2 & 15 & 11 & 7 & 3 & 16 & 12 & 8 & 4
\end{array} \right)
\end{equation}
What this notation says is that the element in position $i$ in the top row is moved to the position shown below it in the bottom row. For example $1$ → $13$, $2$ → $9$, $3$ → $5$ and so on.
This form will help us compactly write constraints to remove unwanted permutations.
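In code, a Cauchy-form permutation is just an index table, and applying a symmetry to the set of occupied squares is a lookup (our own sketch):

```python
# r_90 in Cauchy form: the content of square i moves to square perm[i-1].
perm = [13, 9, 5, 1, 14, 10, 6, 2, 15, 11, 7, 3, 16, 12, 8, 4]

def apply_permutation(squares, perm):
    """Apply a symmetry given in Cauchy form to a set of 1-indexed squares
    (e.g. the squares occupied by queens)."""
    return {perm[s - 1] for s in squares}

# A queen on square 5, i.e. (2,1), moves to square 14, i.e. (4,2).
print(apply_permutation({5}, perm))  # {14}
```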
### The Lex-Leader Method
Puget proved that whenever a CSP has symmetry that can be expressed as permutations of the variables, it is possible to find a _reduced form_ with the symmetries eliminated by adding constraints to the original problem \cite{Puget2003}. Puget found such a reduction for three simple constraint problems and showed that this reduced CSP could be solved more efficiently than in its original form.
The intuition is rather simple: for each equivalence class of solutions (permutation), we predefine one to be the **canonical solution**. We achieve this by choosing a static variable ordering and imposing the **lexicographic order** for each permutation. This method is called **lex-leader**.
For example, let us consider a problem where we have three variables $x_1$, $x_2$, and $x_3$ subject to the `alldifferent` constraint and domain {A,B,C}. This problem has $3!$ solutions, where $3!-1$ are symmetric solutions. Let's say that our canonical solution is `ABC`, and we want to prevent `ACB` from being a solution, the lex-leader method would impose the following additional constraint:
$$ x_1\,x_2\,x_3 \preceq_{\text{lex}} x_1\,x_3\,x_2. $$
In fact, if $x = (\text{A},\text{C},\text{B})$ the constraint is not satisfied, written as
$$ \text{A}\text{C}\text{B}\,\, \npreceq_{\text{lex}} \text{A}\text{B}\text{C}. $$
Adding constraints like this for all $3!$ permutations will remove all symmetric solutions, leaving exactly one solution (`ABC`). All other solutions can be recovered by applying each symmetry.
In general, if we have a permutation $\pi$ that generates a symmetric solution that we wish to remove, we would impose an additional constraint, usually expressed as
$$ x_1 \ldots x_k \preceq_{\text{lex}} x_{\pi (1)} \ldots x_{\pi (k)}, $$
where $\pi(i)$ is the index of the variable after the permutation.
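The lex-leader filtering can be sketched in a few lines for the three-variable example above (our own code; since every reordering of the variables maps solutions to solutions here, we enumerate all $3!$ index permutations as the symmetries):

```python
from itertools import permutations

def lex_leader(solutions, symmetries):
    """Keep only canonical solutions: those lexicographically <= every
    one of their symmetric images."""
    return [s for s in solutions
            if all(s <= tuple(s[p] for p in perm)
                   for perm in symmetries)]

# All 3! solutions of the alldifferent problem over {A, B, C}:
solutions = list(permutations("ABC"))
symmetries = list(permutations(range(3)))
print(lex_leader(solutions, symmetries))  # only ('A', 'B', 'C') survives
```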
Unfortunately, for the N-Queens problem formulated as we have seen, this technique does not immediately apply, because some of its symmetries cannot be described as permutations of the `queens` array.
The trick to overcoming this limitation is to express the N-Queens problem in terms of Boolean variables for each square of the chessboard that model whether it contains a queen or not (i.e., Attempt 1 from above). Now all the symmetries can be modeled as permutations of this array using Cauchy form.
Since the main constraints of the N-Queens problem are much easier to express with the integer `queens` array, we use both models together connecting them using _channeling constraints_.
```python
%%minizinc --all-solutions --statistics -m bind
include "globals.mzn";
int: n;
array[0..n-1,0..n-1] of var bool: qb;
array[0..n-1] of var 0..n-1: q;
constraint all_different(q);
constraint all_different([q[i]+i | i in 0..n-1]);
constraint all_different([q[i]-i | i in 0..n-1]);
constraint % Channeling constraint
forall (i,j in 0..n-1) ( qb[i,j] <-> (q[i]=j) );
constraint % Lexicographic symmetry breaking constraints
lex_lesseq(array1d(qb), [ qb[j,i] | i in reverse(0..n-1), j in 0..n-1 ]) /\ % r_{90}
lex_lesseq(array1d(qb), [ qb[i,j] | i,j in reverse(0..n-1) ]) /\ % r_{180}
lex_lesseq(array1d(qb), [ qb[j,i] | i in 0..n-1, j in reverse(0..n-1) ]) /\ % r_{270}
lex_lesseq(array1d(qb), [ qb[i,j] | i in reverse(0..n-1), j in 0..n-1 ]) /\ % r_{x}
lex_lesseq(array1d(qb), [ qb[i,j] | i in 0..n-1, j in reverse(0..n-1) ]) /\ % r_{y}
lex_lesseq(array1d(qb), [ qb[j,i] | i,j in 0..n-1 ]) /\ % r_{d_1}
lex_lesseq(array1d(qb), [ qb[j,i] | i,j in reverse(0..n-1) ]); % r_{d_2}
solve satisfy;
```
In this model the constraint `lex_lesseq(array_1, array_2)` implements the lexicographic operator $\preceq_{\text{lex}}$ between `array_1` and `array_2`. Notice that each `array_2` represents the permutation of the board induced by one of the geometric symmetries of the chessboard (all except the identity).
Using the lex-leader method we reduced the number of solutions, but at the cost of adding many constraints.
### Double-Lex
When dealing with a matrix of decision variables, we often have that any permutation of the rows (or columns) of a solution is also a solution. This class of symmetries is called _row and column symmetries_.
We can certainly use the lex-leader method to break all symmetries, but in an $n \times m$ matrix with row and column symmetry we would add $n!\,m!$ constraints. Adding so many constraints can be counter-productive.
When breaking all symmetries proves too difficult, it is often possible to achieve good results by breaking a smaller
set of symmetries. One method to do this for row and column symmetries is **double-lex** (Flener et al. 2002). The idea is to impose the ordering on the rows and on the columns **independently**. This produces only $n + m − 2$ symmetry breaking constraints.
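A small brute-force sketch (plain Python; the $2 \times 3$ size is an arbitrary choice) illustrates the idea: ordering the rows and the columns independently keeps at least one representative of every symmetry class, without enumerating all $n!m!$ permutations:

```python
from itertools import product, permutations

# Double-lex on all 2x3 0/1 matrices under full row and column symmetry:
# keep a matrix iff its rows are lexicographically nondecreasing and its
# columns are too. This is sound, but not complete in general.
n, m = 2, 3

def rows(mat):
    return [tuple(mat[i * m:(i + 1) * m]) for i in range(n)]

def cols(mat):
    return [tuple(mat[i * m + j] for i in range(n)) for j in range(m)]

def double_lex(mat):
    r, c = rows(mat), cols(mat)
    return (all(r[i] <= r[i + 1] for i in range(n - 1)) and
            all(c[j] <= c[j + 1] for j in range(m - 1)))

def canon(mat):
    # smallest matrix reachable by any row/column permutation:
    # a canonical name for the matrix's symmetry class
    r = rows(mat)
    return min(tuple(r[i][j] for i in pr for j in pc)
               for pr in permutations(range(n))
               for pc in permutations(range(m)))

mats = list(product((0, 1), repeat=n * m))
survivors = [mat for mat in mats if double_lex(mat)]
classes = {canon(mat) for mat in mats}
print(len(mats), len(classes), len(survivors))
```

Every symmetry class retains at least one double-lex survivor (soundness), while the survivor count may exceed the class count (incompleteness).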
One example where the double-lex can be applied is the problem seen during the course assignments: _Buddies_. In that problem, we could permute each element on each row (i.e., seat assignment) independently while preserving the same solution. Similarly, we could also permute each column independently (i.e., swap the 20-minute segments). This is a typical case where the double-lex is effective and cheap to implement.
<div class="alert alert-block alert-info">
The double-lex method is not applicable to the N-Queens problem, because not all column (or row) permutations preserve the solution.
</div>
## Symmetry breaking constraints
### Soundness and completeness
Two important properties of symmetry breaking constraints are **soundness** and **completeness**, a set of symmetry breaking constraints
- is **sound** if and only if it leaves at least one solution in each symmetry class
- is **complete** if and only if it leaves at most one solution in each symmetry class
All the approaches we used so far on the N-Queens problem are sound and complete, since each leaves exactly one solution per equivalence class under the geometric symmetries. Other problems may have different symmetry groups, and it is very important that the constraints added to remove a given symmetry do not remove desirable solutions from the problem, i.e., that soundness is preserved.
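These two properties are easy to test exhaustively on small instances. The sketch below (plain Python; the grouping of solutions into classes is assumed given) checks both for the lex-leader constraints of the three-variable `alldifferent` example:

```python
from itertools import permutations

# Hypothetical checker: given the solutions of a problem, its symmetry
# classes, and a symmetry breaking predicate `keep`, test soundness
# (every class keeps at least one solution) and completeness (at most one).
def check(solutions, classes, keep):
    survivors = {s for s in solutions if keep(s)}
    per_class = [sum(1 for s in cls if s in survivors) for cls in classes]
    return all(k >= 1 for k in per_class), all(k <= 1 for k in per_class)

# 3-variable alldifferent over {A, B, C}: a single class of 6 solutions,
# broken with the full lex-leader constraint set.
sols = set(permutations("ABC"))
classes = [sols]
keep = lambda s: all(s <= tuple(s[i] for i in pi)
                     for pi in permutations(range(3)))
print(check(sols, classes, keep))  # (True, True): sound and complete
```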
### Intractability of Breaking Symmetry
It is worth mentioning that lex-leader requires one constraint for each element of the group.
In the case of a matrix with $m$ rows and $n$ columns under row and column symmetry, this is $m!\,n!$ constraints, which is impractical in general. There are therefore many cases where lex-leader is applicable but impractical.
\begin{theorem} \label{theo:simple_ordering_NP}
(Walsh 2011) Given any _simple_ ordering, there exists a symmetry group such that deciding if an assignment is smallest in its symmetry class according to this ordering is NP-hard.
\end{theorem}
In other words, Walsh proved that breaking symmetry completely by adding constraints to eliminate symmetric solutions is computationally intractable in general. More specifically, he proves that given any simple variable ordering, deciding if an assignment is the smallest in its symmetry class is NP-hard.
An alternative to full symmetry breaking is to break some symmetry by using just a subset of the lex-leader constraints, for example, the double-lex.
\begin{theorem} \label{theo:lex2_NP}
(Katsirelos, Narodytska, and Walsh 2010) Propagating the double-lex constraint is NP-hard.
\end{theorem}
In other words, this theorem states that, unless P = NP, there is no efficient algorithm that prunes the variable domains after an assignment when propagating double-lex.
Since symmetry breaking appears intractable in general, a major research direction is to identify special cases where the symmetry group is more tractable in practice.
## Reducing the Set of Symmetry Breaking Constraints
Lex-leader constraints can be simplified to remove redundancies. For example, imagine having the following lexicographic constraint:
$$ x_1\,x_2\,x_3\,x_4\,x_5\,x_6 \preceq_{\text{lex}} x_1\,x_3\,x_2\,x_4\,x_6\,x_5. $$
We can remove the first and fourth variables from each tuple, since clearly $x_1 = x_1$ and $x_4 = x_4$, obtaining
$$ x_2\,x_3\,x_5\,x_6 \preceq_{\text{lex}} x_3\,x_2\,x_6\,x_5. $$
But we can also notice that if $x_2 < x_3$ the constraint is satisfied no matter the other values, while if $x_2 > x_3$ it is violated; the comparison only proceeds past the first position when $x_2 = x_3$, in which case the second position compares $x_3$ against $x_2$ and is an equality as well, so it can be dropped. The same argument applies to $x_6$ and $x_5$ in the last two positions. Thus, the constraint is equivalent to
$$ x_2\,x_5 \preceq_{\text{lex}} x_3\,x_6. $$
Since $\preceq_{\text{lex}}$ is transitive, we can go further by treating the constraints as a set rather than individually, which helps reduce their size even more.
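The equivalence of the original constraint and its simplified form is easy to verify by brute force; a minimal sketch in plain Python over the domain $\{0, 1, 2\}$:

```python
from itertools import product

# Brute-force check of the simplification:
#   x1 x2 x3 x4 x5 x6  lex<=  x1 x3 x2 x4 x6 x5
# is equivalent to
#   x2 x5  lex<=  x3 x6.
def full(x1, x2, x3, x4, x5, x6):
    return (x1, x2, x3, x4, x5, x6) <= (x1, x3, x2, x4, x6, x5)

def simplified(x1, x2, x3, x4, x5, x6):
    return (x2, x5) <= (x3, x6)

agree = all(full(*x) == simplified(*x) for x in product(range(3), repeat=6))
print(agree)  # True: the two constraints coincide on all 3^6 assignments
```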
Unfortunately, the approach outlined here does not get around the fundamental problem of the exponential number of symmetries. However, the approach does illustrate how the set of constraints can be simplified, and we will see in the next section a particular case where the results are quite dramatic.
### Lex constraint decomposition
We can decompose a lex constraint of the form $x_1\ldots x_n \preceq_{\text{lex}} y_1\ldots y_n$ into a conjunction of clauses of the form
$$ (x_1 = y_1) \wedge \ldots \wedge (x_k = y_k) \rightarrow x_{k+1}\leq y_{k+1}, \qquad 0 \leq k < n. $$
We call clauses of this form lex implications.
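A quick brute-force sanity check (plain Python, small domain) confirms that the conjunction of these $n$ lex implications is equivalent to the lex constraint itself:

```python
from itertools import product

# Decompose x lex<= y over n variables into its n lex implications
#   (x1 = y1) /\ ... /\ (xk = yk) -> x_{k+1} <= y_{k+1},  k = 0..n-1,
# and check that their conjunction equals the lex constraint.
n, D = 3, range(3)

def implication(k, x, y):
    antecedent = all(x[i] == y[i] for i in range(k))
    return (not antecedent) or x[k] <= y[k]

ok = all((x <= y) == all(implication(k, x, y) for k in range(n))
         for x in product(D, repeat=n)
         for y in product(D, repeat=n))
print(ok)  # True
```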
In practice many lex-implications are redundant \cite{Codish2018}. Given this observation, we can ask ourselves the direct question: how many lex implications are required to express a complete symmetry break?
### Reduce implications
In a recent paper \cite{Codish2018}, the authors develop a method to find a complete and compact set of symmetry breaking constraints. Basically, the algorithm iterates over the set of lex implications and checks whether each of them is redundant.
Define $\phi$ to be the set of problem constraints expressed as a Boolean formula, and $\psi$ to be the set of lex implications used to break the symmetries of the solution space defined by $\phi$.
Given these two sets, the idea behind the reduction is quite intuitive: remove one clause from the formula, and check if there is a solution which would be forbidden by this clause. If this is not the case, the clause is redundant and can be removed.
We note that the number of clauses which can actually be removed depends on the order in which clauses are checked, thus the reduction splits in two phases:
- The first phase is shown in Algorithm 1. We rank the clauses by checking whether each clause $c\in\psi$ is redundant. If so, we compute a subset $\psi'\subseteq\psi$ of clauses which makes $c$ redundant, and increase the ranking of every clause within this set.
The rationale is that removing these clauses is more likely to make other clauses no longer redundant, and thus to increase the size of the final symmetry break.
- The second phase is shown in Algorithm 2. We sort the clauses by rank, so that clauses which were frequently the cause of redundancy appear as late as possible. We then remove each clause in turn if there is no assignment that satisfies the remaining clauses but is forbidden by it.
This approach can be applied to any lexicographic-based ordering.
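The sketch below mimics this procedure on a tiny instance, replacing the SAT call with exhaustive enumeration and skipping the ranking phase, so it is only a rough stand-in for the paper's algorithm: starting from the full lex-leader set of the three-variable `alldifferent` example, it drops a clause whenever no assignment satisfies all remaining clauses while violating the dropped one:

```python
from itertools import product, permutations

D = "ABC"

def phi(x):                       # problem constraints: alldifferent
    return len(set(x)) == len(x)

def lex_clause(pi):               # lex-leader clause for permutation pi
    return lambda x: x <= tuple(x[i] for i in pi)

clauses = [lex_clause(pi) for pi in permutations(range(3)) if pi != (0, 1, 2)]

kept = list(clauses)
for c in list(kept):
    rest = [d for d in kept if d is not c]
    # c is redundant if no assignment satisfies phi and all other kept
    # clauses while violating c (brute force instead of a SAT solver)
    redundant = not any(phi(x) and all(d(x) for d in rest) and not c(x)
                        for x in product(D, repeat=3))
    if redundant:
        kept.remove(c)

print(len(clauses), "->", len(kept))  # 5 -> 2
```

The two surviving clauses still leave `ABC` as the only solution, so the reduced set remains sound and complete on this instance.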
### Example
In their paper, \cite{Codish2018} show some results of their method applied to a generic matrix model:
<table>
<tr>
<th>n</th>
<th colspan="2">Double-Lex</th>
<th colspan="2">All Permutations</th>
</tr>
<tr>
<td></td>
<td>Original</td>
<td>Reduced</td>
<td>Original</td>
<td>Reduced</td>
</tr>
<tr>
<td>3</td>
<td>12</td>
<td>12</td>
<td>48</td>
<td>13</td>
</tr>
<tr>
<td>4</td>
<td>24</td>
<td>24</td>
<td>312</td>
<td>32</td>
</tr>
<tr>
<td>5</td>
<td>40</td>
<td>40</td>
<td>2440</td>
<td>71</td>
</tr>
<tr>
<td>6</td>
<td>60</td>
<td>60</td>
<td>21660</td>
<td>148</td>
</tr>
<tr>
<td>7</td>
<td>84</td>
<td>84</td>
<td>211764</td>
<td>310</td>
</tr>
</table>
It is interesting to note that the complete symmetry break admits a drastic reduction in the number of implications, while double-lex has no redundant implications at all.
### Conclusion
Removing redundant clauses can be costly due to the use of a SAT solver. However, if instances of the same size need to be solved several times with different data, the reduction can be done offline just once and can greatly improve search time.
# References
(<a id="cit-Dechter2003" href="#call-Dechter2003">Dechter, 2003</a>) Rina Dechter, ``_Constraint Processing_'', 2003.
(<a id="cit-AIMA" href="#call-AIMA">Russell and Norvig, 2003</a>) Stuart J. Russell and Peter Norvig, ``_Artificial Intelligence: A Modern Approach_'', 2003.
(<a id="cit-Mackworth1977" href="#call-Mackworth1977">Mackworth, 1977</a>) Mackworth Alan K., ``_Consistency in Networks of Relations_'', Artif. Intell., vol. 8, number 1, pp. 99--118, 1977. [online](http://dx.doi.org/10.1016/0004-3702(77)90007-8)
(<a id="cit-Puget2003" href="#call-Puget2003">Puget, 1993</a>) J.F. Puget, ``_On the satisfiability of symmetrical constrained satisfaction problems_'', Methodologies for Intelligent Systems, 1993.
(<a id="cit-Codish2018" href="#call-Codish2018">Codish, Ehlers <em>et al.</em>, 2018</a>) M. Codish, T. Ehlers, G. Gange <em>et al.</em>, ``_Breaking Symmetries with Lex Implications_'', FLOPS, 2018.
(* Title: JinjaThreads/Examples/ApprenticeChallenge.thy
Author: Andreas Lochbihler
*)
chapter \<open>Examples\<close>
section \<open>Apprentice challenge\<close>
theory ApprenticeChallenge
imports
"../Execute/Code_Generation"
begin
text \<open>This theory implements the apprentice challenge by Porter and Moore \<^cite>\<open>"MoorePorter2002TOPLAS"\<close>.\<close>
definition ThreadC :: "addr J_mb cdecl"
where
"ThreadC =
(Thread, Object, [],
[(run, [], Void, \<lfloor>([], unit)\<rfloor>),
(start, [], Void, Native),
(join, [], Void, Native),
(interrupt, [], Void, Native),
(isInterrupted, [], Boolean, Native)])"
definition Container :: cname
where "Container = STR ''Container''"
definition ContainerC :: "addr J_mb cdecl"
where "ContainerC = (Container, Object, [(STR ''counter'', Integer, \<lparr>volatile=False\<rparr>)], [])"
definition String :: cname
where "String = STR ''String''"
definition StringC :: "addr J_mb cdecl"
where
"StringC = (String, Object, [], [])"
definition Job :: cname
where "Job = STR ''Job''"
definition JobC :: "addr J_mb cdecl"
where
"JobC =
(Job, Thread, [(STR ''objref'', Class Container, \<lparr>volatile=False\<rparr>)],
[(STR ''incr'', [], Class Job, \<lfloor>([],
sync(Var (STR ''objref''))
((Var (STR ''objref''))\<bullet>STR ''counter''{STR ''''} := ((Var (STR ''objref''))\<bullet>STR ''counter''{STR ''''} \<guillemotleft>Add\<guillemotright> Val (Intg 1)));;
Var this)\<rfloor>),
(STR ''setref'', [Class Container], Void, \<lfloor>([STR ''o''],
LAss (STR ''objref'') (Var (STR ''o'')))\<rfloor>),
(run, [], Void, \<lfloor>([],
while (true) (Var this\<bullet>STR ''incr''([])))\<rfloor>)
])"
definition Apprentice :: cname
where "Apprentice = STR ''Apprentice''"
definition ApprenticeC :: "addr J_mb cdecl"
where
"ApprenticeC =
(Apprentice, Object, [],
[(STR ''main'', [Class String\<lfloor>\<rceil>], Void, \<lfloor>([STR ''args''],
{STR ''container'':Class Container=None;
(STR ''container'' := new Container);;
(while (true)
{STR ''job'':Class Job=None;
(STR ''job'' := new Job);;
(Var (STR ''job'')\<bullet>STR ''setref''([Var (STR ''container'')]));;
(Var (STR ''job'')\<bullet>Type.start([]))
}
)
})\<rfloor>)])"
definition ApprenticeChallenge
where
"ApprenticeChallenge = Program (SystemClasses @ [StringC, ThreadC, ContainerC, JobC, ApprenticeC])"
definition ApprenticeChallenge_annotated
where "ApprenticeChallenge_annotated = annotate_prog_code ApprenticeChallenge"
lemma "wf_J_prog ApprenticeChallenge_annotated"
by eval
lemmas [code_unfold] =
Container_def Job_def String_def Apprentice_def
definition main :: "String.literal" where "main = STR ''main''"
ML_val \<open>
val _ = tracing "started";
val program = @{code ApprenticeChallenge_annotated};
val _ = tracing "prg";
val compiled = @{code J2JVM} program;
val _ = tracing "compiled";
@{code exec_J_rr}
@{code "1 :: nat"}
program
@{code Apprentice}
@{code main}
[ @{code Null}];
val _ = tracing "J_rr";
@{code exec_JVM_rr}
@{code "1 :: nat"}
compiled
@{code Apprentice}
@{code main}
[ @{code Null}];
val _ = tracing "JVM_rr";
\<close>
end
#EasyPlotQwt: Render EasyPlot plots with guiqwt's Qwt interface
#-------------------------------------------------------------------------------
module EasyPlotQwt
import EasyPlot #Import only - avoid collisions
using MDDatasets
using Colors
import EasyPlot: render
#Type used to dispatch on a symbol & minimize namespace pollution:
#-------------------------------------------------------------------------------
struct DS{Symbol}; end; #Dispatchable symbol
DS(v::Symbol) = DS{v}()
include("pybase.jl")
include("base.jl")
include("display.jl")
#==Initialization
===============================================================================#
function __init__()
EasyPlot.registerdefaults(:Qwt,
maindisplay = PlotDisplay(guimode=true),
renderdisplay = EasyPlot.NullDisplay() #No support at the moment.
)
return
end
end
#Last line
open import Nat
open import Prelude
open import List
open import contexts
open import unions
module core where
-- types
data typ : Set where
_==>_ : typ → typ → typ
⟨⟩ : typ
⟨_×_⟩ : typ → typ → typ
D[_] : Nat → typ
-- arrow type constructors bind very tightly
infixr 25 _==>_
-- type contexts, hole contexts, and datatype environments
tctx = typ ctx
hctx = (tctx ∧ typ) ctx
denv = Σ[ dctx ∈ tctx ctx ]
∀{d1 d2 cctx1 cctx2 c} →
d1 ≠ d2 →
(d1 , cctx1) ∈ dctx →
(d2 , cctx2) ∈ dctx →
c # cctx1 ∨ c # cctx2
-- simple values
data val : Set where
⟨⟩ : val
⟨_,_⟩ : val → val → val
C[_]_ : Nat → val → val
-- examples
data ex : Set where
⟨⟩ : ex
⟨_,_⟩ : ex → ex → ex
C[_]_ : Nat → ex → ex
_↦_ : val → ex → ex
¿¿ : ex
-- simple value typing
data _⊢_::ⱽ_ : denv → val → typ → Set where
VTUnit : ∀{Σ'} → Σ' ⊢ ⟨⟩ ::ⱽ ⟨⟩
VTPair : ∀{Σ' v1 v2 τ1 τ2} →
Σ' ⊢ v1 ::ⱽ τ1 →
Σ' ⊢ v2 ::ⱽ τ2 →
Σ' ⊢ ⟨ v1 , v2 ⟩ ::ⱽ ⟨ τ1 × τ2 ⟩
VTCtor : ∀{Σ' d cctx c v τ} →
(d , cctx) ∈ π1 Σ' →
(c , τ) ∈ cctx →
Σ' ⊢ v ::ⱽ τ →
Σ' ⊢ C[ c ] v ::ⱽ D[ d ]
-- example typing
data _,_⊢_:·_ : hctx → denv → ex → typ → Set where
XTUnit : ∀{Δ Σ'} → Δ , Σ' ⊢ ⟨⟩ :· ⟨⟩
XTPair : ∀{Δ Σ' ex1 ex2 τ1 τ2} →
Δ , Σ' ⊢ ex1 :· τ1 →
Δ , Σ' ⊢ ex2 :· τ2 →
Δ , Σ' ⊢ ⟨ ex1 , ex2 ⟩ :· ⟨ τ1 × τ2 ⟩
XTCtor : ∀{Δ Σ' d cctx c ex τ} →
(d , cctx) ∈ π1 Σ' →
(c , τ) ∈ cctx →
Δ , Σ' ⊢ ex :· τ →
Δ , Σ' ⊢ C[ c ] ex :· D[ d ]
XTTop : ∀{Δ Σ' τ} → Δ , Σ' ⊢ ¿¿ :· τ
XTInOut : ∀{Δ Σ' v ex τ1 τ2} →
Σ' ⊢ v ::ⱽ τ1 →
Δ , Σ' ⊢ ex :· τ2 →
Δ , Σ' ⊢ v ↦ ex :· τ1 ==> τ2
-- the two possible prj indices
data prj-idx : Set where
P1 : prj-idx
P2 : prj-idx
prj : {A : Set} → prj-idx → A → A → A
prj P1 a1 a2 = a1
prj P2 a1 a2 = a2
mutual
record rule : Set where
inductive
constructor |C_=>_
field
parm : Nat
branch : exp
-- Expressions
data exp : Set where
fix_⦇·λ_=>_·⦈ : Nat → Nat → exp → exp
_∘_ : exp → exp → exp
X[_] : Nat → exp
⟨⟩ : exp
⟨_,_⟩ : exp → exp → exp
prj[_]_ : prj-idx → exp → exp
C[_]_ : Nat → exp → exp
case_of⦃·_·⦄ : exp → rule ctx → exp
??[_] : Nat → exp
PBE:assert : exp → exp → exp
-- u is fresh in e
data hole-name-new : (e : exp) → (u : Nat) → Set where
HNNFix : ∀{x f e u} → hole-name-new e u → hole-name-new (fix f ⦇·λ x => e ·⦈) u
HNNVar : ∀{x u} → hole-name-new (X[ x ]) u
HNNAp : ∀{e1 e2 u} → hole-name-new e1 u → hole-name-new e2 u → hole-name-new (e1 ∘ e2) u
HNNUnit : ∀{u} → hole-name-new ⟨⟩ u
HNNPair : ∀{e1 e2 u} → hole-name-new e1 u → hole-name-new e2 u → hole-name-new ⟨ e1 , e2 ⟩ u
HNNPrj : ∀{e i u} → hole-name-new e u → hole-name-new (prj[ i ] e) u
HNNCtor : ∀{c e u} → hole-name-new e u → hole-name-new (C[ c ] e) u
HNNCase : ∀{e rules u} →
hole-name-new e u →
(∀{c rule} → (c , rule) ∈ rules → hole-name-new (rule.branch rule) u) →
hole-name-new (case e of⦃· rules ·⦄) u
HNNHole : ∀{u' u} → u' ≠ u → hole-name-new (??[ u' ]) u
HNNAsrt : ∀{e1 e2 u} → hole-name-new e1 u → hole-name-new e2 u → hole-name-new (PBE:assert e1 e2) u
-- e1 and e2 do not have any hole names in common
data holes-disjoint : (e1 : exp) → (e2 : exp) → Set where
HDFix : ∀{x f e e'} → holes-disjoint e e' → holes-disjoint (fix f ⦇·λ x => e ·⦈) e'
HDVar : ∀{x e'} → holes-disjoint (X[ x ]) e'
HDAp : ∀{e1 e2 e'} → holes-disjoint e1 e' → holes-disjoint e2 e' → holes-disjoint (e1 ∘ e2) e'
HDUnit : ∀{e'} → holes-disjoint ⟨⟩ e'
HDPair : ∀{e1 e2 e'} → holes-disjoint e1 e' → holes-disjoint e2 e' → holes-disjoint ⟨ e1 , e2 ⟩ e'
HDPrj : ∀{i e e'} → holes-disjoint e e' → holes-disjoint (prj[ i ] e) e'
HDCtor : ∀{c e e'} → holes-disjoint e e' → holes-disjoint (C[ c ] e) e'
HDCase : ∀{e rules e'} →
holes-disjoint e e' →
(∀{c rule} → (c , rule) ∈ rules → holes-disjoint (rule.branch rule) e') →
holes-disjoint (case e of⦃· rules ·⦄) e'
HDHole : ∀{u e'} → hole-name-new e' u → holes-disjoint (??[ u ]) e'
HDAsrt : ∀{e1 e2 e'} → holes-disjoint e1 e' → holes-disjoint e2 e' → holes-disjoint (PBE:assert e1 e2) e'
-- e ecomplete iff e contains no holes
data _ecomplete : exp → Set where
ECFix : ∀{f x e} → e ecomplete → fix f ⦇·λ x => e ·⦈ ecomplete
ECVar : ∀{x} → X[ x ] ecomplete
ECAp : ∀{e1 e2} → e1 ecomplete → e2 ecomplete → (e1 ∘ e2) ecomplete
ECUnit : ⟨⟩ ecomplete
ECPair : ∀{e1 e2} → e1 ecomplete → e2 ecomplete → ⟨ e1 , e2 ⟩ ecomplete
ECPrj : ∀{i e} → e ecomplete → (prj[ i ] e) ecomplete
ECCtor : ∀{c e} → e ecomplete → (C[ c ] e) ecomplete
ECCase : ∀{e rules} →
e ecomplete →
(∀{c rule} → (c , rule) ∈ rules → (rule.branch rule) ecomplete) →
case e of⦃· rules ·⦄ ecomplete
ECAsrt : ∀{e1 e2} → e1 ecomplete → e2 ecomplete → (PBE:assert e1 e2) ecomplete
-- type assignment for expressions
data _,_,_⊢_::_ : hctx → denv → tctx → exp → typ → Set where
TFix : ∀{Δ Σ' Γ f x e τ1 τ2} →
Δ , Σ' , (Γ ,, (f , τ1 ==> τ2) ,, (x , τ1)) ⊢ e :: τ2 →
Δ , Σ' , Γ ⊢ fix f ⦇·λ x => e ·⦈ :: τ1 ==> τ2
TVar : ∀{Δ Σ' Γ x τ} → (x , τ) ∈ Γ → Δ , Σ' , Γ ⊢ X[ x ] :: τ
THole : ∀{Δ Σ' Γ u τ} → (u , (Γ , τ)) ∈ Δ → Δ , Σ' , Γ ⊢ ??[ u ] :: τ
TUnit : ∀{Δ Σ' Γ} → Δ , Σ' , Γ ⊢ ⟨⟩ :: ⟨⟩
TPair : ∀{Δ Σ' Γ e1 e2 τ1 τ2} →
holes-disjoint e1 e2 →
Δ , Σ' , Γ ⊢ e1 :: τ1 →
Δ , Σ' , Γ ⊢ e2 :: τ2 →
Δ , Σ' , Γ ⊢ ⟨ e1 , e2 ⟩ :: ⟨ τ1 × τ2 ⟩
TCtor : ∀{Δ Σ' Γ d cctx c e τ} →
(d , cctx) ∈ π1 Σ' →
(c , τ) ∈ cctx →
Δ , Σ' , Γ ⊢ e :: τ →
Δ , Σ' , Γ ⊢ C[ c ] e :: D[ d ]
TApp : ∀{Δ Σ' Γ f arg τ1 τ2} →
holes-disjoint f arg →
Δ , Σ' , Γ ⊢ f :: τ1 ==> τ2 →
Δ , Σ' , Γ ⊢ arg :: τ1 →
Δ , Σ' , Γ ⊢ f ∘ arg :: τ2
TPrj : ∀{Δ Σ' Γ i e τ1 τ2} →
Δ , Σ' , Γ ⊢ e :: ⟨ τ1 × τ2 ⟩ →
Δ , Σ' , Γ ⊢ prj[ i ] e :: prj i τ1 τ2
TCase : ∀{Δ Σ' Γ d cctx e rules τ} →
(d , cctx) ∈ π1 Σ' →
Δ , Σ' , Γ ⊢ e :: D[ d ] →
-- There must be a rule for each constructor, i.e. case exhaustiveness
(∀{c} → dom cctx c → dom rules c) →
(∀{c xc ec} →
(c , |C xc => ec) ∈ rules →
holes-disjoint ec e ∧
(∀{c' xc' ec'} → (c' , |C xc' => ec') ∈ rules → c ≠ c' → holes-disjoint ec ec') ∧
-- The constructor of each rule must be of the right datatype, and the branch must type-check
Σ[ τc ∈ typ ] (
(c , τc) ∈ cctx ∧
Δ , Σ' , (Γ ,, (xc , τc)) ⊢ ec :: τ)) →
Δ , Σ' , Γ ⊢ case e of⦃· rules ·⦄ :: τ
TAssert : ∀{Δ Σ' Γ e1 e2 τ} →
holes-disjoint e1 e2 →
Δ , Σ' , Γ ⊢ e1 :: τ →
Δ , Σ' , Γ ⊢ e2 :: τ →
Δ , Σ' , Γ ⊢ PBE:assert e1 e2 :: ⟨⟩
mutual
env : Set
env = result ctx
-- results - evaluation takes expressions to results, but results aren't necessarily final
data result : Set where
[_]fix_⦇·λ_=>_·⦈ : env → Nat → Nat → exp → result
⟨⟩ : result
⟨_,_⟩ : result → result → result
C[_]_ : Nat → result → result
[_]??[_] : env → Nat → result
_∘_ : result → result → result
prj[_]_ : prj-idx → result → result
[_]case_of⦃·_·⦄ : env → result → rule ctx → result
C⁻¹[_]_ : Nat → result → result
mutual
data _env-final : env → Set where
EFNone : ∅ env-final
EFInd : ∀{E x r} → E env-final → r final → (E ,, (x , r)) env-final
-- final results are those that cannot be evaluated further
data _final : result → Set where
FDet : ∀{r} → r det → r final
FIndet : ∀{r} → r indet → r final
-- final results that can be eliminated (or in the case of ⟨⟩, that don't need to be)
data _det : result → Set where
DFix : ∀{E f x e} → E env-final → [ E ]fix f ⦇·λ x => e ·⦈ det
DUnit : ⟨⟩ det
DPair : ∀{r1 r2} → r1 final → r2 final → ⟨ r1 , r2 ⟩ det
DCtor : ∀{c r} → r final → (C[ c ] r) det
-- indeterminate results are incomplete and cannot be further reduced except by resumption
data _indet : result → Set where
IDHole : ∀{E u} → E env-final → [ E ]??[ u ] indet
IDApp : ∀{r1 r2} → r1 indet → r2 final → (r1 ∘ r2) indet
IDPrj : ∀{i r} → r indet → (prj[ i ] r) indet
IDCase : ∀{E r rules} → E env-final → r indet → [ E ]case r of⦃· rules ·⦄ indet
mutual
-- type assignment for environments
data _,_,_⊢_ : hctx → denv → tctx → env → Set where
EnvId : ∀{Δ Σ'} → Δ , Σ' , ∅ ⊢ ∅
EnvInd : ∀{Δ Σ' Γ E x τx rx} →
Δ , Σ' , Γ ⊢ E →
Δ , Σ' ⊢ rx ·: τx →
Δ , Σ' , (Γ ,, (x , τx)) ⊢ (E ,, (x , rx))
-- type assignment for results
data _,_⊢_·:_ : hctx → denv → result → typ → Set where
RTFix : ∀{Δ Σ' Γ E f x e τ} →
Δ , Σ' , Γ ⊢ E →
Δ , Σ' , Γ ⊢ fix f ⦇·λ x => e ·⦈ :: τ →
Δ , Σ' ⊢ [ E ]fix f ⦇·λ x => e ·⦈ ·: τ
RTHole : ∀{Δ Σ' Γ E u τ} →
(u , (Γ , τ)) ∈ Δ →
Δ , Σ' , Γ ⊢ E →
Δ , Σ' ⊢ [ E ]??[ u ] ·: τ
RTUnit : ∀{Δ Σ'} → Δ , Σ' ⊢ ⟨⟩ ·: ⟨⟩
RTPair : ∀{Δ Σ' r1 r2 τ1 τ2} →
Δ , Σ' ⊢ r1 ·: τ1 →
Δ , Σ' ⊢ r2 ·: τ2 →
Δ , Σ' ⊢ ⟨ r1 , r2 ⟩ ·: ⟨ τ1 × τ2 ⟩
RTCtor : ∀{Δ Σ' d cctx c r τ} →
(d , cctx) ∈ π1 Σ' →
(c , τ) ∈ cctx →
Δ , Σ' ⊢ r ·: τ →
Δ , Σ' ⊢ C[ c ] r ·: D[ d ]
RTApp : ∀{Δ Σ' f arg τ1 τ2} →
Δ , Σ' ⊢ f ·: τ1 ==> τ2 →
Δ , Σ' ⊢ arg ·: τ1 →
Δ , Σ' ⊢ f ∘ arg ·: τ2
RTPrj : ∀{Δ Σ' i r τ1 τ2} →
Δ , Σ' ⊢ r ·: ⟨ τ1 × τ2 ⟩ →
Δ , Σ' ⊢ prj[ i ] r ·: prj i τ1 τ2
RTCase : ∀{Δ Σ' Γ E d cctx r rules τ} →
(d , cctx) ∈ π1 Σ' →
Δ , Σ' , Γ ⊢ E →
Δ , Σ' ⊢ r ·: D[ d ] →
-- There must be a rule for each constructor, i.e. case exhaustiveness
(∀{c} → dom cctx c → dom rules c) →
(∀{c xc ec} →
(c , |C xc => ec) ∈ rules →
-- The constructor of each rule must be of the right datatype, and the branch must type-check
Σ[ τc ∈ typ ] (
(c , τc) ∈ cctx ∧
Δ , Σ' , (Γ ,, (xc , τc)) ⊢ ec :: τ)) →
Δ , Σ' ⊢ [ E ]case r of⦃· rules ·⦄ ·: τ
RTUnwrapCtor : ∀{Δ Σ' d cctx c r τ} →
(d , cctx) ∈ π1 Σ' →
(c , τ) ∈ cctx →
Δ , Σ' ⊢ r ·: D[ d ] →
Δ , Σ' ⊢ C⁻¹[ c ] r ·: τ
excon = env ∧ ex
excons = List excon
assertions = List (result ∧ val)
hole-fillings = exp ctx
constraints = hole-fillings ∧ excons ctx
record goal : Set where
inductive
constructor _⊢??[_]:_⊨_
field
g-tctx : tctx
g-id : Nat
g-typ : typ
g-excons : excons
goals = List goal
-- value-to-example coercion
⌊_⌋ : val → ex
⌊ ⟨⟩ ⌋ = ⟨⟩
⌊ ⟨ v1 , v2 ⟩ ⌋ = ⟨ ⌊ v1 ⌋ , ⌊ v2 ⌋ ⟩
⌊ C[ c ] v ⌋ = C[ c ] ⌊ v ⌋
-- result-to-value coercion
data ⌈_⌉:=_ : result → val → Set where
CoerceUnit : ⌈ ⟨⟩ ⌉:= ⟨⟩
CoercePair : ∀{r1 r2 v1 v2} →
⌈ r1 ⌉:= v1 →
⌈ r2 ⌉:= v2 →
⌈ ⟨ r1 , r2 ⟩ ⌉:= ⟨ v1 , v2 ⟩
CoerceCtor : ∀{c r v} →
⌈ r ⌉:= v →
⌈ C[ c ] r ⌉:= C[ c ] v
-- excons typing
data _,_⊢_::ˣ_,_ : hctx → denv → excons → tctx → typ → Set where
TXNil : ∀{Δ Σ' Γ τ} → Δ , Σ' ⊢ [] ::ˣ Γ , τ
TXInd : ∀{Δ Σ' X E ex Γ τ} →
Δ , Σ' ⊢ X ::ˣ Γ , τ →
Δ , Σ' , Γ ⊢ E →
Δ , Σ' ⊢ ex :· τ →
Δ , Σ' ⊢ ((E , ex) :: X) ::ˣ Γ , τ
-- type assignment for hole fillings
data _,_⊢ᴴ_ : hctx → denv → hole-fillings → Set where
TFNil : ∀{Δ Σ'} → Δ , Σ' ⊢ᴴ ∅
TFInd : ∀{Δ Σ' F u Γ τ e} →
(u , Γ , τ) ∈ Δ →
Δ , Σ' ⊢ᴴ F →
Δ , Σ' , Γ ⊢ e :: τ →
Δ , Σ' ⊢ᴴ (F ,, (u , e))
{- TODO - we have to decide between this version and the one prior
_,_⊢ₕ_ : hctx → denv → hole-fillings → Set
Δ , Σ' ⊢ₕ F = ∀{u e} →
(u , e) ∈ F →
Σ[ Γ ∈ tctx ] Σ[ τ ∈ typ ] (
(u , Γ , τ) ∈ Δ ∧
Δ , Σ' , Γ ⊢ e :: τ)
-}
-- these are used to determine the "order" in which result consistency rules are checked
not-both-pair : (r r' : result) → Set
not-both-pair r r' = (∀{r1 r2} → r ≠ ⟨ r1 , r2 ⟩) ∨ (∀{r1 r2} → r' ≠ ⟨ r1 , r2 ⟩)
not-both-ctor : (r r' : result) → Set
not-both-ctor r r' = (∀{c r''} → r ≠ (C[ c ] r'')) ∨ (∀{c r''} → r' ≠ (C[ c ] r''))
-- result consistency
data _≡⌊_⌋_ : result → assertions → result → Set where
RCRefl : ∀{r} → r ≡⌊ [] ⌋ r
RCPair : ∀{r1 r2 r'1 r'2 A1 A2} →
(_==_ {A = result} ⟨ r1 , r2 ⟩ ⟨ r'1 , r'2 ⟩ → ⊥) →
r1 ≡⌊ A1 ⌋ r'1 →
r2 ≡⌊ A2 ⌋ r'2 →
⟨ r1 , r2 ⟩ ≡⌊ A1 ++ A2 ⌋ ⟨ r'1 , r'2 ⟩
RCCtor : ∀{c r r' A} →
(_==_ {A = result} (C[ c ] r) (C[ c ] r') → ⊥) →
r ≡⌊ A ⌋ r' →
(C[ c ] r) ≡⌊ A ⌋ (C[ c ] r')
RCAssert1 : ∀{r1 r2 v2 A} →
r1 ≠ r2 →
not-both-pair r1 r2 →
not-both-ctor r1 r2 →
⌈ r2 ⌉:= v2 →
A == (r1 , v2) :: [] →
r1 ≡⌊ A ⌋ r2
RCAssert2 : ∀{r1 r2 v1 A} →
r1 ≠ r2 →
not-both-pair r1 r2 →
not-both-ctor r1 r2 →
⌈ r1 ⌉:= v1 →
A == (r2 , v1) :: [] →
r1 ≡⌊ A ⌋ r2
-- Generic result consistency failure - this goes through if results are not consistent
data _≢_ : result → result → Set where
RCFPair1 : ∀{r1 r2 r'1 r'2} →
r1 ≢ r'1 →
⟨ r1 , r2 ⟩ ≢ ⟨ r'1 , r'2 ⟩
RCFPair2 : ∀{r1 r2 r'1 r'2} →
r2 ≢ r'2 →
⟨ r1 , r2 ⟩ ≢ ⟨ r'1 , r'2 ⟩
RCFCtorMM : ∀{c c' r r'} →
c ≠ c' →
(C[ c ] r) ≢ (C[ c' ] r')
RCFCtor : ∀{c r r'} →
r ≢ r' →
(C[ c ] r) ≢ (C[ c ] r')
RCFNoCoerce : ∀{r r'} →
r ≠ r' →
not-both-pair r r' →
not-both-ctor r r' →
(∀{v} → ⌈ r ⌉:= v → ⊥) →
(∀{v} → ⌈ r' ⌉:= v → ⊥) →
r ≢ r'
-- Various judgments accept "fuel", which defines whether or not they can recurse indefinitely,
-- and, if not, then the numerical limit. The limit is not on the recursion depth, but rather
-- on the number of "beta reductions", interpreted a bit loosely to include case evaluations.
-- - If ⌊ ⛽ ⌋ is ∞, then there is no beta reduction limit,
-- but the judgment will not be satisfied unless evaluation eventually terminates.
-- - If ⌊ ⛽ ⌋ is ⛽⟨ n ⟩, then the beta reduction limit is at most n,
-- but if the limit is reached, then "success" judgments will not go through,
-- but "failure" judgments will be satisfied automatically.
data Fuel : Set where
∞ : Fuel
⛽⟨_⟩ : Nat → Fuel
-- fuel depletion
data _⛽⇓_ : Fuel → Fuel → Set where
CF∞ : ∞ ⛽⇓ ∞
CF⛽ : ∀{n} → ⛽⟨ 1+ n ⟩ ⛽⇓ ⛽⟨ n ⟩
-- TODO we need a theorem that h-constraints cannot generate spurious hole names.
-- generally, all the steps from an exp to an something with holes should not contain hole names
-- not in the original exp, and the process as a whole should also not produce spurious names
-- this realization probably means there are other important theorems that have been missed
-- NOTE the core theorem in completeness.agda will do the trick for evaluation itself
-- TODO we should have theorems that constrain where don't cares and such can be found.
-- Don't cares should only be generated in backprop and should not appear anywhere else.
-- Generic big step evaluation
data _⊢_⌊_⌋⇒_⊣_ : env → exp → Fuel → result → assertions → Set where
EUnit : ∀{E ⛽} → E ⊢ ⟨⟩ ⌊ ⛽ ⌋⇒ ⟨⟩ ⊣ []
EPair : ∀{E ⛽ e1 e2 r1 r2 A1 A2} →
E ⊢ e1 ⌊ ⛽ ⌋⇒ r1 ⊣ A1 →
E ⊢ e2 ⌊ ⛽ ⌋⇒ r2 ⊣ A2 →
E ⊢ ⟨ e1 , e2 ⟩ ⌊ ⛽ ⌋⇒ ⟨ r1 , r2 ⟩ ⊣ A1 ++ A2
ECtor : ∀{E ⛽ c e r A} →
E ⊢ e ⌊ ⛽ ⌋⇒ r ⊣ A →
E ⊢ C[ c ] e ⌊ ⛽ ⌋⇒ (C[ c ] r) ⊣ A
EFix : ∀{E ⛽ f x e} → E ⊢ fix f ⦇·λ x => e ·⦈ ⌊ ⛽ ⌋⇒ [ E ]fix f ⦇·λ x => e ·⦈ ⊣ []
EVar : ∀{E ⛽ x r} → (x , r) ∈ E → E ⊢ X[ x ] ⌊ ⛽ ⌋⇒ r ⊣ []
EHole : ∀{E ⛽ u} → E ⊢ ??[ u ] ⌊ ⛽ ⌋⇒ [ E ]??[ u ] ⊣ []
EAppFix : ∀{E ⛽ ⛽↓ e1 e2 Ef f x ef r1 A1 r2 A2 r A} →
⛽ ⛽⇓ ⛽↓ →
E ⊢ e1 ⌊ ⛽ ⌋⇒ r1 ⊣ A1 →
r1 == [ Ef ]fix f ⦇·λ x => ef ·⦈ →
E ⊢ e2 ⌊ ⛽ ⌋⇒ r2 ⊣ A2 →
(Ef ,, (f , r1) ,, (x , r2)) ⊢ ef ⌊ ⛽↓ ⌋⇒ r ⊣ A →
E ⊢ e1 ∘ e2 ⌊ ⛽ ⌋⇒ r ⊣ A1 ++ A2 ++ A
EAppIndet : ∀{E ⛽ e1 e2 r1 A1 r2 A2} →
E ⊢ e1 ⌊ ⛽ ⌋⇒ r1 ⊣ A1 →
(∀{Ef f x ef} → r1 ≠ [ Ef ]fix f ⦇·λ x => ef ·⦈) →
E ⊢ e2 ⌊ ⛽ ⌋⇒ r2 ⊣ A2 →
E ⊢ e1 ∘ e2 ⌊ ⛽ ⌋⇒ (r1 ∘ r2) ⊣ A1 ++ A2
EPrj : ∀{E ⛽ i e r1 r2 A} →
E ⊢ e ⌊ ⛽ ⌋⇒ ⟨ r1 , r2 ⟩ ⊣ A →
E ⊢ prj[ i ] e ⌊ ⛽ ⌋⇒ prj i r1 r2 ⊣ A
EPrjIndet : ∀{E ⛽ i e r A} →
E ⊢ e ⌊ ⛽ ⌋⇒ r ⊣ A →
(∀{r1 r2} → r ≠ ⟨ r1 , r2 ⟩) →
E ⊢ prj[ i ] e ⌊ ⛽ ⌋⇒ prj[ i ] r ⊣ A
ECase : ∀{E ⛽ ⛽↓ e rules c xc ec r A' rc A} →
⛽ ⛽⇓ ⛽↓ →
(c , |C xc => ec) ∈ rules →
E ⊢ e ⌊ ⛽ ⌋⇒ (C[ c ] r) ⊣ A →
(E ,, (xc , r)) ⊢ ec ⌊ ⛽↓ ⌋⇒ rc ⊣ A' →
E ⊢ case e of⦃· rules ·⦄ ⌊ ⛽ ⌋⇒ rc ⊣ A ++ A'
ECaseIndet : ∀{E ⛽ e rules r A} →
E ⊢ e ⌊ ⛽ ⌋⇒ r ⊣ A →
(∀{c rc} → r ≠ (C[ c ] rc)) →
E ⊢ case e of⦃· rules ·⦄ ⌊ ⛽ ⌋⇒ [ E ]case r of⦃· rules ·⦄ ⊣ A
EAssert : ∀{E ⛽ e1 r1 A1 e2 r2 A2 A3} →
E ⊢ e1 ⌊ ⛽ ⌋⇒ r1 ⊣ A1 →
E ⊢ e2 ⌊ ⛽ ⌋⇒ r2 ⊣ A2 →
r1 ≡⌊ A3 ⌋ r2 →
E ⊢ PBE:assert e1 e2 ⌊ ⛽ ⌋⇒ ⟨⟩ ⊣ A1 ++ A2 ++ A3
-- Generic evaluation failure - this goes through if evaluation would fail due to failure
-- of some assertion that occurs during evaluation, or if the fuel runs out.
data _⊢_⌊_⌋⇒∅ : env → exp → Fuel → Set where
EFPair1 : ∀{E ⛽ e1 e2} →
E ⊢ e1 ⌊ ⛽ ⌋⇒∅ →
E ⊢ ⟨ e1 , e2 ⟩ ⌊ ⛽ ⌋⇒∅
EFPair2 : ∀{E ⛽ e1 e2} →
E ⊢ e2 ⌊ ⛽ ⌋⇒∅ →
E ⊢ ⟨ e1 , e2 ⟩ ⌊ ⛽ ⌋⇒∅
EFCtor : ∀{E ⛽ c e} →
E ⊢ e ⌊ ⛽ ⌋⇒∅ →
E ⊢ C[ c ] e ⌊ ⛽ ⌋⇒∅
EFAppFun : ∀{E ⛽ e1 e2} →
E ⊢ e1 ⌊ ⛽ ⌋⇒∅ →
E ⊢ e1 ∘ e2 ⌊ ⛽ ⌋⇒∅
EFAppArg : ∀{E ⛽ e1 e2} →
E ⊢ e2 ⌊ ⛽ ⌋⇒∅ →
E ⊢ e1 ∘ e2 ⌊ ⛽ ⌋⇒∅
EFAppEval : ∀{E ⛽ ⛽↓ e1 e2 Ef f x ef r1 A1 r2 A2} →
⛽ ⛽⇓ ⛽↓ →
E ⊢ e1 ⌊ ⛽ ⌋⇒ r1 ⊣ A1 →
r1 == [ Ef ]fix f ⦇·λ x => ef ·⦈ →
E ⊢ e2 ⌊ ⛽ ⌋⇒ r2 ⊣ A2 →
(Ef ,, (f , r1) ,, (x , r2)) ⊢ ef ⌊ ⛽↓ ⌋⇒∅ →
E ⊢ e1 ∘ e2 ⌊ ⛽ ⌋⇒∅
EFPrj : ∀{E ⛽ i e} →
E ⊢ e ⌊ ⛽ ⌋⇒∅ →
E ⊢ prj[ i ] e ⌊ ⛽ ⌋⇒∅
EFCaseScrut : ∀{E ⛽ e rules} →
E ⊢ e ⌊ ⛽ ⌋⇒∅ →
E ⊢ case e of⦃· rules ·⦄ ⌊ ⛽ ⌋⇒∅
EFCaseRule : ∀{E ⛽ ⛽↓ e rules c xc ec r A} →
⛽ ⛽⇓ ⛽↓ →
(c , |C xc => ec) ∈ rules →
E ⊢ e ⌊ ⛽ ⌋⇒ (C[ c ] r) ⊣ A →
(E ,, (xc , r)) ⊢ ec ⌊ ⛽↓ ⌋⇒∅ →
E ⊢ case e of⦃· rules ·⦄ ⌊ ⛽ ⌋⇒∅
EFAssert1 : ∀{E ⛽ e1 e2} →
E ⊢ e1 ⌊ ⛽ ⌋⇒∅ →
E ⊢ PBE:assert e1 e2 ⌊ ⛽ ⌋⇒∅
EFAssert2 : ∀{E ⛽ e1 e2} →
E ⊢ e2 ⌊ ⛽ ⌋⇒∅ →
E ⊢ PBE:assert e1 e2 ⌊ ⛽ ⌋⇒∅
EFAssert : ∀{E ⛽ e1 r1 A1 e2 r2 A2} →
E ⊢ e1 ⌊ ⛽ ⌋⇒ r1 ⊣ A1 →
E ⊢ e2 ⌊ ⛽ ⌋⇒ r2 ⊣ A2 →
r1 ≢ r2 →
E ⊢ PBE:assert e1 e2 ⌊ ⛽ ⌋⇒∅
EFLimit : ∀{E e} → E ⊢ e ⌊ ⛽⟨ 0 ⟩ ⌋⇒∅
-- resumption
mutual
data _⊢_⌊_⌋:⇨_:=_ : hole-fillings → env → Fuel → env → assertions → Set where
RENil : ∀{⛽ F} → F ⊢ ∅ ⌊ ⛽ ⌋:⇨ ∅ := []
REInd : ∀{⛽ F E E' x r r' A A'} →
F ⊢ E ⌊ ⛽ ⌋:⇨ E' := A →
F ⊢ r ⌊ ⛽ ⌋⇨ r' := A' →
F ⊢ E ,, (x , r) ⌊ ⛽ ⌋:⇨ (E' ,, (x , r')) := A ++ A'
data _⊢_⌊_⌋⇨_:=_ : hole-fillings → result → Fuel → result → assertions → Set where
RHoleResume : ∀{⛽ F E u r r' e A A'} →
(u , e) ∈ F →
E ⊢ e ⌊ ⛽ ⌋⇒ r ⊣ A →
F ⊢ r ⌊ ⛽ ⌋⇨ r' := A' →
F ⊢ [ E ]??[ u ] ⌊ ⛽ ⌋⇨ r' := A ++ A'
RHoleIndet : ∀{⛽ F E E' u A} →
u # F →
F ⊢ E ⌊ ⛽ ⌋:⇨ E' := A →
F ⊢ [ E ]??[ u ] ⌊ ⛽ ⌋⇨ [ E' ]??[ u ] := A
RUnit : ∀{⛽ F} → F ⊢ ⟨⟩ ⌊ ⛽ ⌋⇨ ⟨⟩ := []
RPair : ∀{⛽ F r1 r2 r1' r2' A1 A2} →
F ⊢ r1 ⌊ ⛽ ⌋⇨ r1' := A1 →
F ⊢ r2 ⌊ ⛽ ⌋⇨ r2' := A2 →
F ⊢ ⟨ r1 , r2 ⟩ ⌊ ⛽ ⌋⇨ ⟨ r1' , r2' ⟩ := A1 ++ A2
RCtor : ∀{⛽ F c r r' A} →
F ⊢ r ⌊ ⛽ ⌋⇨ r' := A →
F ⊢ C[ c ] r ⌊ ⛽ ⌋⇨ (C[ c ] r') := A
RApp : ∀{⛽ ⛽↓ F r1 r2 r r' Ef f x ef r1' r2' A1 A2 Af A'} →
⛽ ⛽⇓ ⛽↓ →
F ⊢ r1 ⌊ ⛽ ⌋⇨ r1' := A1 →
r1' == [ Ef ]fix f ⦇·λ x => ef ·⦈ →
F ⊢ r2 ⌊ ⛽ ⌋⇨ r2' := A2 →
(Ef ,, (f , r1') ,, (x , r2')) ⊢ ef ⌊ ⛽↓ ⌋⇒ r ⊣ Af →
F ⊢ r ⌊ ⛽↓ ⌋⇨ r' := A' →
F ⊢ r1 ∘ r2 ⌊ ⛽ ⌋⇨ r' := A1 ++ A2 ++ Af ++ A'
RAppIndet : ∀{⛽ F r1 r2 r1' r2' A1 A2} →
F ⊢ r1 ⌊ ⛽ ⌋⇨ r1' := A1 →
(∀{Ef f x ef} → r1' ≠ [ Ef ]fix f ⦇·λ x => ef ·⦈) →
F ⊢ r2 ⌊ ⛽ ⌋⇨ r2' := A2 →
F ⊢ r1 ∘ r2 ⌊ ⛽ ⌋⇨ (r1' ∘ r2') := A1 ++ A2
RFix : ∀{⛽ F E E' f x e A} →
F ⊢ E ⌊ ⛽ ⌋:⇨ E' := A →
F ⊢ [ E ]fix f ⦇·λ x => e ·⦈ ⌊ ⛽ ⌋⇨ [ E' ]fix f ⦇·λ x => e ·⦈ := A
RPrj : ∀{⛽ F i r r1 r2 A} →
F ⊢ r ⌊ ⛽ ⌋⇨ ⟨ r1 , r2 ⟩ := A →
F ⊢ prj[ i ] r ⌊ ⛽ ⌋⇨ prj i r1 r2 := A
RPrjIndet : ∀{⛽ F i r r' A} →
F ⊢ r ⌊ ⛽ ⌋⇨ r' := A →
(∀{r1 r2} → r' ≠ ⟨ r1 , r2 ⟩) →
F ⊢ prj[ i ] r ⌊ ⛽ ⌋⇨ prj[ i ] r' := A
RCase : ∀{⛽ F E r rules c xc ec r' rc A A'} →
(c , |C xc => ec) ∈ rules →
F ⊢ r ⌊ ⛽ ⌋⇨ (C[ c ] r') := A →
F ⊢ [ E ]fix xc ⦇·λ xc => ec ·⦈ ∘ (C⁻¹[ c ] r) ⌊ ⛽ ⌋⇨ rc := A' →
F ⊢ [ E ]case r of⦃· rules ·⦄ ⌊ ⛽ ⌋⇨ rc := A ++ A'
RCaseIndet : ∀{⛽ F E E' r rules r' A A'} →
F ⊢ r ⌊ ⛽ ⌋⇨ r' := A →
(∀{c rc} → r' ≠ (C[ c ] rc)) →
F ⊢ E ⌊ ⛽ ⌋:⇨ E' := A' →
F ⊢ [ E ]case r of⦃· rules ·⦄ ⌊ ⛽ ⌋⇨ [ E' ]case r' of⦃· rules ·⦄ := A ++ A'
RUnwrapCtor : ∀{⛽ F r c rc A} →
F ⊢ r ⌊ ⛽ ⌋⇨ C[ c ] rc := A →
F ⊢ C⁻¹[ c ] r ⌊ ⛽ ⌋⇨ rc := A
RUnwrapIndet : ∀{⛽ F c r r' A} →
F ⊢ r ⌊ ⛽ ⌋⇨ r' := A →
(∀{rc} → r' ≠ (C[ c ] rc)) →
F ⊢ C⁻¹[ c ] r ⌊ ⛽ ⌋⇨ C⁻¹[ c ] r' := A
  -- Generic resumption failure - this judgment holds if resumption would fail because
  -- some evaluation performed during resumption fails.
mutual
data _⊢_⌊_⌋:⇨∅ : hole-fillings → env → Fuel → Set where
RFERes : ∀{⛽ F E x r} →
F ⊢ r ⌊ ⛽ ⌋⇨∅ →
F ⊢ (E ,, (x , r)) ⌊ ⛽ ⌋:⇨∅
RFEEnv : ∀{⛽ F E x r} →
F ⊢ E ⌊ ⛽ ⌋:⇨∅ →
F ⊢ (E ,, (x , r)) ⌊ ⛽ ⌋:⇨∅
{- TODO we must choose between this approach and the one prior
RFE : ∀{⛽ F E x r} →
(x , r) ∈ E →
F ⊢ r ⌊ ⛽ ⌋⇨∅ →
F ⊢ E ⌊ ⛽ ⌋:⇨∅
-}
data _⊢_⌊_⌋⇨∅ : hole-fillings → result → Fuel → Set where
RFHoleEval : ∀{⛽ F E u e} →
(u , e) ∈ F →
E ⊢ e ⌊ ⛽ ⌋⇒∅ →
F ⊢ [ E ]??[ u ] ⌊ ⛽ ⌋⇨∅
RFHoleRes : ∀{⛽ F E u e r A} →
(u , e) ∈ F →
E ⊢ e ⌊ ⛽ ⌋⇒ r ⊣ A →
F ⊢ r ⌊ ⛽ ⌋⇨∅ →
F ⊢ [ E ]??[ u ] ⌊ ⛽ ⌋⇨∅
RFHoleIndet : ∀{⛽ F E u} →
u # F →
F ⊢ E ⌊ ⛽ ⌋:⇨∅ →
F ⊢ [ E ]??[ u ] ⌊ ⛽ ⌋⇨∅
RFPair1 : ∀{⛽ F r1 r2} →
F ⊢ r1 ⌊ ⛽ ⌋⇨∅ →
F ⊢ ⟨ r1 , r2 ⟩ ⌊ ⛽ ⌋⇨∅
RFPair2 : ∀{⛽ F r1 r2} →
F ⊢ r2 ⌊ ⛽ ⌋⇨∅ →
F ⊢ ⟨ r1 , r2 ⟩ ⌊ ⛽ ⌋⇨∅
RFCtor : ∀{⛽ F c r} →
F ⊢ r ⌊ ⛽ ⌋⇨∅ →
F ⊢ C[ c ] r ⌊ ⛽ ⌋⇨∅
RFAppFun : ∀{⛽ F r1 r2} →
F ⊢ r1 ⌊ ⛽ ⌋⇨∅ →
F ⊢ r1 ∘ r2 ⌊ ⛽ ⌋⇨∅
RFAppArg : ∀{⛽ F r1 r2} →
F ⊢ r2 ⌊ ⛽ ⌋⇨∅ →
F ⊢ r1 ∘ r2 ⌊ ⛽ ⌋⇨∅
RFAppEval : ∀{⛽ ⛽↓ F r1 r2 Ef f x ef r1' r2' A1 A2} →
⛽ ⛽⇓ ⛽↓ →
F ⊢ r1 ⌊ ⛽ ⌋⇨ r1' := A1 →
r1' == [ Ef ]fix f ⦇·λ x => ef ·⦈ →
F ⊢ r2 ⌊ ⛽ ⌋⇨ r2' := A2 →
(Ef ,, (f , r1') ,, (x , r2')) ⊢ ef ⌊ ⛽↓ ⌋⇒∅ →
F ⊢ r1 ∘ r2 ⌊ ⛽ ⌋⇨∅
RFAppRes : ∀{⛽ ⛽↓ F r1 r2 Ef f x ef r1' r2' r A1 A2 Af} →
⛽ ⛽⇓ ⛽↓ →
F ⊢ r1 ⌊ ⛽ ⌋⇨ r1' := A1 →
r1' == [ Ef ]fix f ⦇·λ x => ef ·⦈ →
F ⊢ r2 ⌊ ⛽ ⌋⇨ r2' := A2 →
(Ef ,, (f , r1') ,, (x , r2')) ⊢ ef ⌊ ⛽↓ ⌋⇒ r ⊣ Af →
F ⊢ r ⌊ ⛽↓ ⌋⇨∅ →
F ⊢ r1 ∘ r2 ⌊ ⛽ ⌋⇨∅
RFFix : ∀{⛽ F E f x e} →
F ⊢ E ⌊ ⛽ ⌋:⇨∅ →
F ⊢ [ E ]fix f ⦇·λ x => e ·⦈ ⌊ ⛽ ⌋⇨∅
RFPrj : ∀{⛽ F i r} →
F ⊢ r ⌊ ⛽ ⌋⇨∅ →
F ⊢ prj[ i ] r ⌊ ⛽ ⌋⇨∅
RFCaseScrut : ∀{⛽ F E r rules} →
F ⊢ r ⌊ ⛽ ⌋⇨∅ →
F ⊢ [ E ]case r of⦃· rules ·⦄ ⌊ ⛽ ⌋⇨∅
RFCase : ∀{⛽ F E r rules c xc ec r' A} →
(c , |C xc => ec) ∈ rules →
F ⊢ r ⌊ ⛽ ⌋⇨ (C[ c ] r') := A →
F ⊢ [ E ]fix xc ⦇·λ xc => ec ·⦈ ∘ (C⁻¹[ c ] r) ⌊ ⛽ ⌋⇨∅ →
F ⊢ [ E ]case r of⦃· rules ·⦄ ⌊ ⛽ ⌋⇨∅
RFCaseIndet : ∀{⛽ F E r rules r' A} →
F ⊢ r ⌊ ⛽ ⌋⇨ r' := A →
(∀{c rc} → r' ≠ (C[ c ] rc)) →
F ⊢ E ⌊ ⛽ ⌋:⇨∅ →
F ⊢ [ E ]case r of⦃· rules ·⦄ ⌊ ⛽ ⌋⇨∅
RFUnwrapCtor : ∀{⛽ F c r} →
F ⊢ r ⌊ ⛽ ⌋⇨∅ →
F ⊢ C⁻¹[ c ] r ⌊ ⛽ ⌋⇨∅
RFLimit : ∀{F r} → F ⊢ r ⌊ ⛽⟨ 0 ⟩ ⌋⇨∅
data Filter_:=_ : excons → excons → Set where
FilterNil : Filter [] := []
FilterYes : ∀{X X' E ex} →
Filter X := X' →
ex ≠ ¿¿ →
Filter (E , ex) :: X := ((E , ex) :: X')
FilterNo : ∀{X X' E} →
Filter X := X' →
Filter (E , ¿¿) :: X := X'
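Operationally, `Filter` just drops excons whose example is the wildcard ¿¿ and keeps the rest in order. A rough Python analogue of the three rules above, representing ¿¿ as `None` (an assumption of this sketch, not the formal encoding):

```python
def filter_excons(X):
    # FilterNil / FilterYes / FilterNo: drop pairs whose example is the
    # wildcard (modeled here as None); keep everything else in order.
    return [(E, ex) for (E, ex) in X if ex is not None]
```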
-- Assertion Satisfaction and Simplification
data _⌊_⌋⊨ᴬ_ : hole-fillings → Fuel → assertions → Set where
SANil : ∀{⛽ F} → F ⌊ ⛽ ⌋⊨ᴬ []
SAInd : ∀{⛽ F r v r' A A'} →
F ⌊ ⛽ ⌋⊨ᴬ A →
F ⊢ r ⌊ ⛽ ⌋⇨ r' := A' →
F ⌊ ⛽ ⌋⊨ᴬ A' →
⌈ r' ⌉:= v →
F ⌊ ⛽ ⌋⊨ᴬ ((r , v) :: A)
-- Example Satisfaction (of Results)
data _⊢_⌊_⌋⊨ᴿ_ : hole-fillings → result → Fuel → ex → Set where
XSTop : ∀{⛽ F r} → F ⊢ r ⌊ ⛽ ⌋⊨ᴿ ¿¿
XSUnit : ∀{⛽ F} → F ⊢ ⟨⟩ ⌊ ⛽ ⌋⊨ᴿ ⟨⟩
XSPair : ∀{⛽ F r1 r2 ex1 ex2} →
F ⊢ r1 ⌊ ⛽ ⌋⊨ᴿ ex1 →
F ⊢ r2 ⌊ ⛽ ⌋⊨ᴿ ex2 →
F ⊢ ⟨ r1 , r2 ⟩ ⌊ ⛽ ⌋⊨ᴿ ⟨ ex1 , ex2 ⟩
XSCtor : ∀{⛽ F r c ex} →
F ⊢ r ⌊ ⛽ ⌋⊨ᴿ ex →
F ⊢ C[ c ] r ⌊ ⛽ ⌋⊨ᴿ (C[ c ] ex)
XSInOut : ∀{⛽ F r1 r2 v2 ex r A} →
⌈ r2 ⌉:= v2 →
F ⊢ r1 ∘ r2 ⌊ ⛽ ⌋⇨ r := A →
F ⊢ r ⌊ ⛽ ⌋⊨ᴿ ex →
F ⌊ ⛽ ⌋⊨ᴬ A →
F ⊢ r1 ⌊ ⛽ ⌋⊨ᴿ (v2 ↦ ex)
-- Example Satisfaction (of Expressions)
data _⊢_⌊_⌋⊨ᴱ_ : hole-fillings → exp → Fuel → excons → Set where
SatNil : ∀{⛽ F e} → F ⊢ e ⌊ ⛽ ⌋⊨ᴱ []
SatTop : ∀{⛽ F e E X} →
F ⊢ e ⌊ ⛽ ⌋⊨ᴱ X →
F ⊢ e ⌊ ⛽ ⌋⊨ᴱ ((E , ¿¿) :: X)
SatInd : ∀{⛽ F e E ex X r r' A A'} →
ex ≠ ¿¿ →
F ⊢ e ⌊ ⛽ ⌋⊨ᴱ X →
E ⊢ e ⌊ ⛽ ⌋⇒ r ⊣ A →
F ⊢ r ⌊ ⛽ ⌋⇨ r' := A' →
F ⊢ r' ⌊ ⛽ ⌋⊨ᴿ ex →
F ⌊ ⛽ ⌋⊨ᴬ A ++ A' →
F ⊢ e ⌊ ⛽ ⌋⊨ᴱ ((E , ex) :: X)
data _⌊_⌋⊨ᵁ_ : hole-fillings → Fuel → excons ctx → Set where
CSNil : ∀{⛽ F} → F ⌊ ⛽ ⌋⊨ᵁ ∅
CSInd : ∀{⛽ F U u X} →
F ⌊ ⛽ ⌋⊨ᵁ U →
F ⊢ ??[ u ] ⌊ ⛽ ⌋⊨ᴱ X →
F ⌊ ⛽ ⌋⊨ᵁ (U ,, (u , X))
-- Constraint Satisfaction
_⌊_⌋⊨ᴷ_ : hole-fillings → Fuel → constraints → Set
F ⌊ ⛽ ⌋⊨ᴷ (F0 , U) =
(∀{u e} → (u , e) ∈ F0 → (u , e) ∈ F) ∧
F ⌊ ⛽ ⌋⊨ᵁ U
-- constraints merge
_⊕_:=_ : constraints → constraints → constraints → Set
(F1 , U1) ⊕ (F2 , U2) := (F' , U') = F1 ≈ F2 ∧ F1 ∪ F2 == F' ∧ U1 ⊎ U2 == U'
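Read as an algorithm, the merge `⊕` succeeds only when the two hole fillings agree on every hole they both fill (`F1 ≈ F2`), and then takes their union; the excon contexts are combined pointwise by `⊎`. A hypothetical Python sketch of the hole-filling half, using plain dicts (not the formal contexts):

```python
def merge_fillings(F1, F2):
    # F1 ≈ F2: the maps must agree on every shared hole,
    # otherwise no merged filling exists.
    for u in F1.keys() & F2.keys():
        if F1[u] != F2[u]:
            return None
    # F1 ∪ F2: the union of the two consistent maps.
    return {**F1, **F2}
```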
mutual
-- example unevaluation
data _,_,_⊢_⇐⌊_⌋_:=_ : hctx → denv → hole-fillings → result → Fuel → ex → constraints → Set where
UTop : ∀{⛽ Δ Σ' F r} → Δ , Σ' , F ⊢ r ⇐⌊ ⛽ ⌋ ¿¿ := (∅ , ∅)
UHole : ∀{⛽ Δ Σ' F E u ex} →
ex ≠ ¿¿ →
Δ , Σ' , F ⊢ [ E ]??[ u ] ⇐⌊ ⛽ ⌋ ex := (∅ , ■ (u , (E , ex) :: []))
UUnit : ∀{⛽ Δ Σ' F} → Δ , Σ' , F ⊢ ⟨⟩ ⇐⌊ ⛽ ⌋ ⟨⟩ := (∅ , ∅)
UCtor : ∀{⛽ Δ Σ' F c r ex K} →
Δ , Σ' , F ⊢ r ⇐⌊ ⛽ ⌋ ex := K →
Δ , Σ' , F ⊢ C[ c ] r ⇐⌊ ⛽ ⌋ C[ c ] ex := K
UPair : ∀{⛽ Δ Σ' F r1 r2 ex1 ex2 K1 K2 K'} →
Δ , Σ' , F ⊢ r1 ⇐⌊ ⛽ ⌋ ex1 := K1 →
Δ , Σ' , F ⊢ r2 ⇐⌊ ⛽ ⌋ ex2 := K2 →
K1 ⊕ K2 := K' →
Δ , Σ' , F ⊢ ⟨ r1 , r2 ⟩ ⇐⌊ ⛽ ⌋ ⟨ ex1 , ex2 ⟩ := K'
UFix : ∀{⛽ ⛽↓ Δ Σ' F E f x e rf v r ex K} →
⛽ ⛽⇓ ⛽↓ →
rf == [ E ]fix f ⦇·λ x => e ·⦈ →
⌈ r ⌉:= v →
Δ , Σ' , F ⊢ e ⌊ ⛽ ⌋⇌ ((E ,, (f , rf) ,, (x , r)) , ex) :: [] := K →
Δ , Σ' , F ⊢ rf ⇐⌊ ⛽ ⌋ v ↦ ex := K
UApp : ∀{⛽ Δ Σ' F r1 r2 ex v2 K} →
ex ≠ ¿¿ →
⌈ r2 ⌉:= v2 →
Δ , Σ' , F ⊢ r1 ⇐⌊ ⛽ ⌋ v2 ↦ ex := K →
Δ , Σ' , F ⊢ r1 ∘ r2 ⇐⌊ ⛽ ⌋ ex := K
UPrj1 : ∀{⛽ Δ Σ' F r ex K} →
ex ≠ ¿¿ →
Δ , Σ' , F ⊢ r ⇐⌊ ⛽ ⌋ ⟨ ex , ¿¿ ⟩ := K →
Δ , Σ' , F ⊢ prj[ P1 ] r ⇐⌊ ⛽ ⌋ ex := K
UPrj2 : ∀{⛽ Δ Σ' F r ex K} →
ex ≠ ¿¿ →
Δ , Σ' , F ⊢ r ⇐⌊ ⛽ ⌋ ⟨ ¿¿ , ex ⟩ := K →
Δ , Σ' , F ⊢ prj[ P2 ] r ⇐⌊ ⛽ ⌋ ex := K
UCase : ∀{⛽ ⛽↓ Δ Σ' F E r rules ex c xc ec K1 K2 K'} →
ex ≠ ¿¿ →
⛽ ⛽⇓ ⛽↓ →
(c , |C xc => ec) ∈ rules →
Δ , Σ' , F ⊢ r ⇐⌊ ⛽ ⌋ C[ c ] ¿¿ := K1 →
Δ , Σ' , F ⊢ ec ⌊ ⛽↓ ⌋⇌ ((E ,, (xc , C⁻¹[ c ] r)) , ex) :: [] := K2 →
K1 ⊕ K2 := K' →
Δ , Σ' , F ⊢ [ E ]case r of⦃· rules ·⦄ ⇐⌊ ⛽ ⌋ ex := K'
UCaseGuess : ∀{⛽ ⛽↓ Δ Σ' F F' E r rules ex c xc ec r' A K K' Kₘ₁ Kₘ₂} →
ex ≠ ¿¿ →
⛽ ⛽⇓ ⛽↓ →
(c , |C xc => ec) ∈ rules →
Δ , Σ' ⊢ᴴ F' →
F ## F' →
F ∪ F' ⊢ r ⌊ ⛽ ⌋⇨ C[ c ] r' := A →
Δ , Σ' ⊢Simplify A ⌊ ⛽ ⌋:= K →
Δ , Σ' , F ∪ F' ⊢ ec ⌊ ⛽↓ ⌋⇌ ((E ,, (xc , r')) , ex) :: [] := K' →
K ⊕ K' := Kₘ₁ →
(F' , ∅) ⊕ Kₘ₁ := Kₘ₂ →
Δ , Σ' , F ⊢ [ E ]case r of⦃· rules ·⦄ ⇐⌊ ⛽ ⌋ ex := Kₘ₂
UUnwrapCtor : ∀{⛽ Δ Σ' F c r ex K} →
ex ≠ ¿¿ →
Δ , Σ' , F ⊢ r ⇐⌊ ⛽ ⌋ C[ c ] ex := K →
Δ , Σ' , F ⊢ C⁻¹[ c ] r ⇐⌊ ⛽ ⌋ ex := K
-- Assertion Simplification
data _,_⊢Simplify_⌊_⌋:=_ : hctx → denv → assertions → Fuel → constraints → Set where
SNil : ∀{⛽ Δ Σ'} → Δ , Σ' ⊢Simplify [] ⌊ ⛽ ⌋:= (∅ , ∅)
SInd : ∀{⛽ Δ Σ' r v A K K' K''} →
Δ , Σ' ⊢Simplify A ⌊ ⛽ ⌋:= K →
r final →
Δ , Σ' , ∅ ⊢ r ⇐⌊ ⛽ ⌋ ⌊ v ⌋ := K' →
K ⊕ K' := K'' →
Δ , Σ' ⊢Simplify (r , v) :: A ⌊ ⛽ ⌋:= K''
-- Live Bidirectional Example Checking
data _,_,_⊢_⌊_⌋⇌_:=_ : hctx → denv → hole-fillings → exp → Fuel → excons → constraints → Set where
ChkNil : ∀{⛽ Δ Σ' F e} → Δ , Σ' , F ⊢ e ⌊ ⛽ ⌋⇌ [] := (∅ , ∅)
ChkInd : ∀{⛽ Δ Σ' F e E ex X r r' A A' K K' K'' Kₘ₁ Kₘ₂} →
Δ , Σ' , F ⊢ e ⌊ ⛽ ⌋⇌ X := K →
E ⊢ e ⌊ ⛽ ⌋⇒ r ⊣ A →
F ⊢ r ⌊ ⛽ ⌋⇨ r' := A' →
Δ , Σ' , F ⊢ r' ⇐⌊ ⛽ ⌋ ex := K' →
Δ , Σ' ⊢Simplify A ++ A' ⌊ ⛽ ⌋:= K'' →
K' ⊕ K'' := Kₘ₁ →
K ⊕ Kₘ₁ := Kₘ₂ →
Δ , Σ' , F ⊢ e ⌊ ⛽ ⌋⇌ ((E , ex) :: X) := Kₘ₂
-- TODO theorems for all the things, including resumption, synthesis, solve, satisfaction, consistency, Group, and Filter
-- Type-Directed Guessing
data _⊢⦇_⊢●:_⦈:=ᴳ_ : denv → tctx → typ → exp → Set where
GUnit : ∀{Σ' Γ} → Σ' ⊢⦇ Γ ⊢●: ⟨⟩ ⦈:=ᴳ ⟨⟩
GPair : ∀{Σ' Γ τ1 τ2 e1 e2} →
Σ' ⊢⦇ Γ ⊢●: τ1 ⦈:=ᴳ e1 →
Σ' ⊢⦇ Γ ⊢●: τ2 ⦈:=ᴳ e2 →
Σ' ⊢⦇ Γ ⊢●: ⟨ τ1 × τ2 ⟩ ⦈:=ᴳ ⟨ e1 , e2 ⟩
GCtor : ∀{Σ' Γ d cctx c τ e} →
(d , cctx) ∈ π1 Σ' →
(c , τ) ∈ cctx →
Σ' ⊢⦇ Γ ⊢●: τ ⦈:=ᴳ e →
Σ' ⊢⦇ Γ ⊢●: D[ d ] ⦈:=ᴳ (C[ c ] e)
GFix : ∀{Σ' Γ τ1 τ2 f x e} →
Σ' ⊢⦇ Γ ,, (f , τ1 ==> τ2) ,, (x , τ1) ⊢●: τ2 ⦈:=ᴳ e →
Σ' ⊢⦇ Γ ⊢●: τ1 ==> τ2 ⦈:=ᴳ fix f ⦇·λ x => e ·⦈
GCase : ∀{Σ' Γ τ e rules d cctx} →
(d , cctx) ∈ π1 Σ' →
(∀{c} → dom cctx c → dom rules c) →
(∀{c} → dom rules c → dom cctx c) →
Σ' ⊢⦇ Γ ⊢●: D[ d ] ⦈:=ᴳ e →
(∀{c x-c e-c τ-c} →
(c , |C x-c => e-c) ∈ rules →
(c , τ-c) ∈ cctx →
Σ' ⊢⦇ Γ ,, (x-c , τ-c) ⊢●: τ ⦈:=ᴳ e-c) →
Σ' ⊢⦇ Γ ⊢●: τ ⦈:=ᴳ case e of⦃· rules ·⦄
GVar : ∀{Σ' Γ τ x} →
(x , τ) ∈ Γ →
Σ' ⊢⦇ Γ ⊢●: τ ⦈:=ᴳ X[ x ]
GApp : ∀{Σ' Γ τ e1 e2 τ'} →
Σ' ⊢⦇ Γ ⊢●: τ' ==> τ ⦈:=ᴳ e1 →
Σ' ⊢⦇ Γ ⊢●: τ' ⦈:=ᴳ e2 →
Σ' ⊢⦇ Γ ⊢●: τ ⦈:=ᴳ (e1 ∘ e2)
GPrj : ∀{Σ' Γ τ1 τ2 i e} →
Σ' ⊢⦇ Γ ⊢●: ⟨ τ1 × τ2 ⟩ ⦈:=ᴳ e →
Σ' ⊢⦇ Γ ⊢●: prj i τ1 τ2 ⦈:=ᴳ (prj[ i ] e)
-- TODO theorem that if u # Δ , then u is new in a type-checked exp or result
-- TODO theorem that any hole in the exp produced by refinement is in the goals
-- Type-and-Example-Directed Refinement
data _⊢⦇_⊢●:_⊨_⦈:=ᴿ_⊣_ : denv → tctx → typ → excons → exp → goals → Set where
RefUnit : ∀{Σ' Γ X Xf} →
Filter X := Xf →
(∀{i E ex} → Xf ⟦ i ⟧ == Some (E , ex) → ex == ⟨⟩) →
Σ' ⊢⦇ Γ ⊢●: ⟨⟩ ⊨ X ⦈:=ᴿ ⟨⟩ ⊣ []
RefPair : ∀{Σ' Γ τ1 τ2 X u1 u2 G1 G2} {E-ex1-ex2s : List (env ∧ ex ∧ ex)} →
Filter X := map (λ {(E , ex1 , ex2) → E , ⟨ ex1 , ex2 ⟩}) E-ex1-ex2s →
G1 == Γ ⊢??[ u1 ]: τ1 ⊨ map (λ {(E , ex1 , ex2) → E , ex1}) E-ex1-ex2s →
G2 == Γ ⊢??[ u2 ]: τ2 ⊨ map (λ {(E , ex1 , ex2) → E , ex2}) E-ex1-ex2s →
Σ' ⊢⦇ Γ ⊢●: ⟨ τ1 × τ2 ⟩ ⊨ X ⦈:=ᴿ ⟨ ??[ u1 ] , ??[ u2 ] ⟩ ⊣ (G1 :: (G2 :: []))
RefCtor : ∀{Σ' Γ X X' d cctx c τ u G} →
(d , cctx) ∈ π1 Σ' →
(c , τ) ∈ cctx →
Filter X := map (λ {(E , ex) → E , C[ c ] ex}) X' →
G == Γ ⊢??[ u ]: τ ⊨ X' →
Σ' ⊢⦇ Γ ⊢●: D[ d ] ⊨ X ⦈:=ᴿ C[ c ] ??[ u ] ⊣ (G :: [])
RefFix : ∀{Σ' Γ X τ1 τ2 f x u X' G} {E-in-inᶜ-outs : List (env ∧ val ∧ result ∧ ex)} →
(∀{i E-i v-i r-i ex-i} →
E-in-inᶜ-outs ⟦ i ⟧ == Some (E-i , v-i , r-i , ex-i) →
⌈ r-i ⌉:= v-i) →
Filter X := map (λ {(E , in' , _ , out) → E , in' ↦ out}) E-in-inᶜ-outs →
X' == map (λ {(E , _ , in' , out) → (E ,, (f , [ E ]fix f ⦇·λ x => ??[ u ] ·⦈) ,, (x , in')) , out}) E-in-inᶜ-outs →
G == (Γ ,, (f , τ1 ==> τ2) ,, (x , τ1)) ⊢??[ u ]: τ2 ⊨ X' →
Σ' ⊢⦇ Γ ⊢●: τ1 ==> τ2 ⊨ X ⦈:=ᴿ fix f ⦇·λ x => ??[ u ] ·⦈ ⊣ (G :: [])
-- Type-and-Example-Directed Branching
data _⊢⦇_⊢●:_⊨_⦈⌊_⌋:=ᴮ_⊣_ : denv → tctx → typ → excons → Fuel → exp → goals → Set where
BCase : ∀{⛽ Σ' Γ X Xf τ e rules d cctx x τ+u+X⁺-ctx Gs} →
Filter X := Xf →
-- choose one fresh variable name that will be used for all cases
x # Γ →
-- τ+u+X⁺-ctx is just cctx extended with hole names and excons
cctx == ctxmap π1 τ+u+X⁺-ctx →
-- the rules can be defined from x and the hole names in τ+u+X⁺-ctx
rules == ctxmap (λ {(τ-c , u-c , X⁺-c) → |C x => ??[ u-c ]}) τ+u+X⁺-ctx →
-- the following premises appear in the paper
(d , cctx) ∈ π1 Σ' →
Σ' ⊢⦇ Γ ⊢●: D[ d ] ⦈:=ᴳ e →
Gs == map
(λ {(τ-c , u-c , X⁺-c) →
-- this corresponds to the goal definition in the paper
(Γ ,, (x , τ-c)) ⊢??[ u-c ]: τ ⊨
-- this corresponds to the definition of each X_i in the paper
map (λ {(E , ex , r) → (E ,, (x , r)) , ex}) X⁺-c})
(ctx⇒values τ+u+X⁺-ctx) →
-- the following premise checks that every X⁺-c obeys the rules in the paper premise
(∀{c τ-c u-c X⁺-c} →
(c , τ-c , u-c , X⁺-c) ∈ τ+u+X⁺-ctx →
-- for each excon (extended with a result r-j) of X⁺-c, ...
(∀{j E-j ex-j r-j} →
X⁺-c ⟦ j ⟧ == Some (E-j , ex-j , r-j) →
-- the excon is an element of Filter X, and ...
Σ[ k ∈ Nat ] (Xf ⟦ k ⟧ == Some (E-j , ex-j)) ∧
-- the scrutinee e will evaluate to constructor c applied to the specified argument r-j
E-j ⊢ e ⌊ ⛽ ⌋⇒ C[ c ] r-j ⊣ [])) →
-- the last premise in the paper - every excon in Filter X is an element of some X⁺-c for some c
(∀{k E-k ex-k} →
Xf ⟦ k ⟧ == Some (E-k , ex-k) →
Σ[ c ∈ Nat ] Σ[ τ-c ∈ typ ] Σ[ u-c ∈ Nat ] Σ[ X⁺-c ∈ List (env ∧ ex ∧ result)] Σ[ j ∈ Nat ] Σ[ r-j ∈ result ] (
(c , τ-c , u-c , X⁺-c) ∈ τ+u+X⁺-ctx ∧
X⁺-c ⟦ j ⟧ == Some (E-k , ex-k , r-j))) →
Σ' ⊢⦇ Γ ⊢●: τ ⊨ X ⦈⌊ ⛽ ⌋:=ᴮ case e of⦃· rules ·⦄ ⊣ Gs
-- Hole Filling
data _,_,_⊢⦇_⊢??[_]:_⊨_⦈⌊_⌋:=_,_ : hctx → denv → hole-fillings → tctx → Nat → typ → excons → Fuel → constraints → hctx → Set where
HFRefBranch : ∀{⛽ Δ Σ' F Γ u τ X e Gs K Δ'} →
-- this premise ensures that all holes are fresh
(∀{i j g-i g-j} →
Gs ⟦ i ⟧ == Some g-i →
Gs ⟦ j ⟧ == Some g-j →
goal.g-id g-i # Δ ∧
goal.g-id g-i ≠ u ∧
(i ≠ j → goal.g-id g-i ≠ goal.g-id g-j)) →
(Σ' ⊢⦇ Γ ⊢●: τ ⊨ X ⦈:=ᴿ e ⊣ Gs ∨
Σ' ⊢⦇ Γ ⊢●: τ ⊨ X ⦈⌊ ⛽ ⌋:=ᴮ e ⊣ Gs) →
K == (■ (u , e) , list⇒ctx (map (λ {(_ ⊢??[ u' ]: _ ⊨ X') → u' , X'}) Gs)) →
Δ' == list⇒ctx (map (λ {(Γ' ⊢??[ u' ]: τ' ⊨ _) → u' , Γ' , τ'}) Gs) →
Δ , Σ' , F ⊢⦇ Γ ⊢??[ u ]: τ ⊨ X ⦈⌊ ⛽ ⌋:= K , Δ'
HFGuessChk : ∀{⛽ Δ Σ' F Γ u τ X e K K'} →
Σ' ⊢⦇ Γ ⊢●: τ ⦈:=ᴳ e →
Δ , Σ' , (F ,, (u , e)) ⊢ e ⌊ ⛽ ⌋⇌ X := K →
(■ (u , e) , ∅) ⊕ K := K' →
Δ , Σ' , F ⊢⦇ Γ ⊢??[ u ]: τ ⊨ X ⦈⌊ ⛽ ⌋:= K' , ∅
HFDefer : ∀{⛽ Δ Σ' F Γ u τ X} →
X ≠ [] →
Filter X := [] →
Δ , Σ' , F ⊢⦇ Γ ⊢??[ u ]: τ ⊨ X ⦈⌊ ⛽ ⌋:= (■ (u , ??[ u ]) , ∅) , ∅
{- TODO - later, we need to fix this stuff up too
data _,_IterSolve_,_⌊_⌋:=_,_ : hctx → denv → hole-fillings → excons ctx → Fuel → hole-fillings → hctx → Set where
ISFin : ∀{⛽ Δ Σ' F-0 U F' Δ' u+F+Δs} →
(∀{u} → dom U u → dom Δ u) →
∥ u+F+Δs ∥ == 1+ ∥ U ∥ →
Σ[ u-0 ∈ Nat ] (u+F+Δs ⟦ 0 ⟧ == Some (u-0 , F-0 , Δ)) →
(∀{u W} →
(u , W) ∈ U →
Σ[ i ∈ Nat ] Σ[ F-i ∈ hole-fillings ] Σ[ Δ-i ∈ hctx ] (
1+ i < ∥ u+F+Δs ∥ ∧ u+F+Δs ⟦ i ⟧ == Some (u , F-i , Δ-i))) →
(∀{i u-i u-i+1 W-i F-i F-i+1 Δ-i Δ-i+1 Γ-i τ-i} →
1+ i < ∥ u+F+Δs ∥ →
u+F+Δs ⟦ i ⟧ == Some (u-i , F-i , Δ-i) →
u+F+Δs ⟦ 1+ i ⟧ == Some (u-i+1 , F-i+1 , Δ-i+1) →
(u-i , W-i) ∈ U →
(u-i , Γ-i , τ-i) ∈ Δ →
Σ[ F'-i ∈ hole-fillings ] Σ[ Δ'-i ∈ hctx ] (
(Δ-i , Σ' , F-i ⊢⦇ Γ-i ⊢??[ u-i ]: τ-i ⊨ W-i ⦈⌊ ⛽ ⌋:= (F'-i , ∅) , Δ'-i) ∧
F-i+1 == F-i ∪ F'-i ∧
Δ-i+1 == Δ-i ∪ Δ'-i)) →
Σ[ u-n ∈ Nat ] (u+F+Δs ⟦ ∥ U ∥ ⟧ == Some (u-n , F' , Δ')) →
Δ , Σ' IterSolve F-0 , U ⌊ ⛽ ⌋:= F' , Δ'
ISInd : ∀{⛽ Δ Σ' F-0 U F' Δ' u+F+U+Δs U' Δ-n F-n} →
(∀{u} → dom U u → dom Δ u) →
∥ u+F+U+Δs ∥ == 1+ ∥ U ∥ →
Σ[ u-0 ∈ Nat ] (u+F+U+Δs ⟦ 0 ⟧ == Some (u-0 , F-0 , ∅ , Δ)) →
(∀{u W} →
(u , W) ∈ U →
Σ[ i ∈ Nat ] Σ[ F-i ∈ hole-fillings ] Σ[ U-i ∈ excons ctx ] Σ[ Δ-i ∈ hctx ] (
1+ i < ∥ u+F+U+Δs ∥ ∧ u+F+U+Δs ⟦ i ⟧ == Some (u , F-i , U-i , Δ-i))) →
(∀{i u-i u-i+1 W-i F-i F-i+1 U-i U-i+1 Δ-i Δ-i+1 Γ-i τ-i} →
1+ i < ∥ u+F+U+Δs ∥ →
u+F+U+Δs ⟦ i ⟧ == Some (u-i , F-i , U-i , Δ-i) →
u+F+U+Δs ⟦ 1+ i ⟧ == Some (u-i+1 , F-i+1 , U-i+1 , Δ-i+1) →
(u-i , W-i) ∈ U →
(u-i , Γ-i , τ-i) ∈ Δ →
Σ[ F'-i ∈ hole-fillings ] Σ[ Δ'-i ∈ hctx ] (
(Δ-i , Σ' , F-i ⊢⦇ Γ-i ⊢??[ u-i ]: τ-i ⊨ W-i ⦈⌊ ⛽ ⌋:= (F'-i , U-i+1) , Δ'-i) ∧
F-i+1 == F-i ∪ F'-i ∧
Δ-i+1 == Δ-i ∪ Δ'-i)) →
U' == foldl _∪_ ∅ (map (π1 ⊙ (π2 ⊙ π2)) u+F+U+Δs) →
U' ≠ ∅ →
Σ[ u-n ∈ Nat ] Σ[ U-n ∈ excons ctx ] (u+F+U+Δs ⟦ ∥ U ∥ ⟧ == Some (u-n , F-n , U-n , Δ-n)) →
Δ-n , Σ' IterSolve F-n , U' ⌊ ⛽ ⌋:= F' , Δ' →
Δ , Σ' IterSolve F-0 , U ⌊ ⛽ ⌋:= F' , Δ'
data _,_Solve_⌊_⌋:=_ : hctx → denv → constraints → Fuel → hole-fillings → Set where
Solve : ∀{⛽ Δ Σ' F0 U F Δ'} →
Δ , Σ' IterSolve F0 , U ⌊ ⛽ ⌋:= F , Δ' →
Δ , Σ' Solve (F0 , U) ⌊ ⛽ ⌋:= F
-}
{- TODO
-- those external expressions without holes
data _ecomplete : hexp → Set where
ECConst : c ecomplete
ECAsc : ∀{τ e} → τ tcomplete → e ecomplete → (e ·: τ) ecomplete
ECVar : ∀{x} → (X x) ecomplete
ECLam1 : ∀{x e} → e ecomplete → (·λ x e) ecomplete
ECLam2 : ∀{x e τ} → e ecomplete → τ tcomplete → (·λ x [ τ ] e) ecomplete
ECAp : ∀{e1 e2} → e1 ecomplete → e2 ecomplete → (e1 ∘ e2) ecomplete
ECFst : ∀{e} → e ecomplete → (fst e) ecomplete
ECSnd : ∀{e} → e ecomplete → (snd e) ecomplete
ECPair : ∀{e1 e2} → e1 ecomplete → e2 ecomplete → ⟨ e1 , e2 ⟩ ecomplete
-- those internal expressions without holes
data _dcomplete : ihexp → Set where
DCVar : ∀{x} → (X x) dcomplete
DCConst : c dcomplete
DCLam : ∀{x τ d} → d dcomplete → τ tcomplete → (·λ x [ τ ] d) dcomplete
DCAp : ∀{d1 d2} → d1 dcomplete → d2 dcomplete → (d1 ∘ d2) dcomplete
DCCast : ∀{d τ1 τ2} → d dcomplete → τ1 tcomplete → τ2 tcomplete → (d ⟨ τ1 ⇒ τ2 ⟩) dcomplete
DCFst : ∀{d} → d dcomplete → (fst d) dcomplete
DCSnd : ∀{d} → d dcomplete → (snd d) dcomplete
DCPair : ∀{d1 d2} → d1 dcomplete → d2 dcomplete → ⟨ d1 , d2 ⟩ dcomplete
mutual
-- substitution typing
data _,_⊢_:s:_ : hctx → tctx → env → tctx → Set where
STAId : ∀{Γ Γ' Δ} →
((x : Nat) (τ : htyp) → (x , τ) ∈ Γ' → (x , τ) ∈ Γ) →
Δ , Γ ⊢ Id Γ' :s: Γ'
STASubst : ∀{Γ Δ σ y Γ' d τ } →
Δ , Γ ,, (y , τ) ⊢ σ :s: Γ' →
Δ , Γ ⊢ d :: τ →
Δ , Γ ⊢ Subst d y σ :s: Γ'
-- type assignment
data _,_⊢_::_ : (Δ : hctx) (Γ : tctx) (d : ihexp) (τ : htyp) → Set where
TAConst : ∀{Δ Γ} → Δ , Γ ⊢ c :: b
TAVar : ∀{Δ Γ x τ} → (x , τ) ∈ Γ → Δ , Γ ⊢ X x :: τ
TALam : ∀{ Δ Γ x τ1 d τ2} →
x # Γ →
Δ , (Γ ,, (x , τ1)) ⊢ d :: τ2 →
Δ , Γ ⊢ ·λ x [ τ1 ] d :: (τ1 ==> τ2)
TAAp : ∀{ Δ Γ d1 d2 τ1 τ} →
Δ , Γ ⊢ d1 :: τ1 ==> τ →
Δ , Γ ⊢ d2 :: τ1 →
Δ , Γ ⊢ d1 ∘ d2 :: τ
TAEHole : ∀{ Δ Γ σ u Γ' τ} →
(u , (Γ' , τ)) ∈ Δ →
Δ , Γ ⊢ σ :s: Γ' →
Δ , Γ ⊢ ⦇⦈⟨ u , σ ⟩ :: τ
TANEHole : ∀ { Δ Γ d τ' Γ' u σ τ } →
(u , (Γ' , τ)) ∈ Δ →
Δ , Γ ⊢ d :: τ' →
Δ , Γ ⊢ σ :s: Γ' →
Δ , Γ ⊢ ⦇⌜ d ⌟⦈⟨ u , σ ⟩ :: τ
TACast : ∀{ Δ Γ d τ1 τ2} →
Δ , Γ ⊢ d :: τ1 →
τ1 ~ τ2 →
Δ , Γ ⊢ d ⟨ τ1 ⇒ τ2 ⟩ :: τ2
TAFailedCast : ∀{Δ Γ d τ1 τ2} →
Δ , Γ ⊢ d :: τ1 →
τ1 ground →
τ2 ground →
τ1 ≠ τ2 →
Δ , Γ ⊢ d ⟨ τ1 ⇒⦇⦈⇏ τ2 ⟩ :: τ2
TAFst : ∀{Δ Γ d τ1 τ2} →
Δ , Γ ⊢ d :: τ1 ⊗ τ2 →
Δ , Γ ⊢ fst d :: τ1
TASnd : ∀{Δ Γ d τ1 τ2} →
Δ , Γ ⊢ d :: τ1 ⊗ τ2 →
Δ , Γ ⊢ snd d :: τ2
TAPair : ∀{Δ Γ d1 d2 τ1 τ2} →
Δ , Γ ⊢ d1 :: τ1 →
Δ , Γ ⊢ d2 :: τ2 →
Δ , Γ ⊢ ⟨ d1 , d2 ⟩ :: τ1 ⊗ τ2
-- substitution
[_/_]_ : ihexp → Nat → ihexp → ihexp
[ d / y ] c = c
[ d / y ] X x
with natEQ x y
[ d / y ] X .y | Inl refl = d
[ d / y ] X x | Inr neq = X x
[ d / y ] (·λ x [ x₁ ] d')
with natEQ x y
[ d / y ] (·λ .y [ τ ] d') | Inl refl = ·λ y [ τ ] d'
[ d / y ] (·λ x [ τ ] d') | Inr x₁ = ·λ x [ τ ] ( [ d / y ] d')
[ d / y ] ⦇⦈⟨ u , σ ⟩ = ⦇⦈⟨ u , Subst d y σ ⟩
[ d / y ] ⦇⌜ d' ⌟⦈⟨ u , σ ⟩ = ⦇⌜ [ d / y ] d' ⌟⦈⟨ u , Subst d y σ ⟩
[ d / y ] (d1 ∘ d2) = ([ d / y ] d1) ∘ ([ d / y ] d2)
[ d / y ] (d' ⟨ τ1 ⇒ τ2 ⟩ ) = ([ d / y ] d') ⟨ τ1 ⇒ τ2 ⟩
[ d / y ] (d' ⟨ τ1 ⇒⦇⦈⇏ τ2 ⟩ ) = ([ d / y ] d') ⟨ τ1 ⇒⦇⦈⇏ τ2 ⟩
[ d / y ] ⟨ d1 , d2 ⟩ = ⟨ [ d / y ] d1 , [ d / y ] d2 ⟩
[ d / y ] (fst d') = fst ([ d / y ] d')
[ d / y ] (snd d') = snd ([ d / y ] d')
-- applying an environment to an expression
apply-env : env → ihexp → ihexp
apply-env (Id Γ) d = d
apply-env (Subst d y σ) d' = [ d / y ] ( apply-env σ d')
-- freshness
mutual
-- ... with respect to a hole context
data envfresh : Nat → env → Set where
EFId : ∀{x Γ} → x # Γ → envfresh x (Id Γ)
EFSubst : ∀{x d σ y} → fresh x d
→ envfresh x σ
→ x ≠ y
→ envfresh x (Subst d y σ)
  -- ... for internal expressions
data fresh : Nat → ihexp → Set where
FConst : ∀{x} → fresh x c
FVar : ∀{x y} → x ≠ y → fresh x (X y)
FLam : ∀{x y τ d} → x ≠ y → fresh x d → fresh x (·λ y [ τ ] d)
FHole : ∀{x u σ} → envfresh x σ → fresh x (⦇⦈⟨ u , σ ⟩)
FNEHole : ∀{x d u σ} → envfresh x σ → fresh x d → fresh x (⦇⌜ d ⌟⦈⟨ u , σ ⟩)
FAp : ∀{x d1 d2} → fresh x d1 → fresh x d2 → fresh x (d1 ∘ d2)
FCast : ∀{x d τ1 τ2} → fresh x d → fresh x (d ⟨ τ1 ⇒ τ2 ⟩)
FFailedCast : ∀{x d τ1 τ2} → fresh x d → fresh x (d ⟨ τ1 ⇒⦇⦈⇏ τ2 ⟩)
FFst : ∀{x d} → fresh x d → fresh x (fst d)
FSnd : ∀{x d} → fresh x d → fresh x (snd d)
FPair : ∀{x d1 d2} → fresh x d1 → fresh x d2 → fresh x ⟨ d1 , d2 ⟩
-- ... for external expressions
data freshh : Nat → hexp → Set where
FRHConst : ∀{x} → freshh x c
FRHAsc : ∀{x e τ} → freshh x e → freshh x (e ·: τ)
FRHVar : ∀{x y} → x ≠ y → freshh x (X y)
FRHLam1 : ∀{x y e} → x ≠ y → freshh x e → freshh x (·λ y e)
FRHLam2 : ∀{x τ e y} → x ≠ y → freshh x e → freshh x (·λ y [ τ ] e)
FRHEHole : ∀{x u} → freshh x (⦇⦈[ u ])
FRHNEHole : ∀{x u e} → freshh x e → freshh x (⦇⌜ e ⌟⦈[ u ])
FRHAp : ∀{x e1 e2} → freshh x e1 → freshh x e2 → freshh x (e1 ∘ e2)
FRHFst : ∀{x e} → freshh x e → freshh x (fst e)
FRHSnd : ∀{x e} → freshh x e → freshh x (snd e)
FRHPair : ∀{x e1 e2} → freshh x e1 → freshh x e2 → freshh x ⟨ e1 , e2 ⟩
-- with respect to all bindings in a context
freshΓ : {A : Set} → (Γ : A ctx) → (e : hexp) → Set
freshΓ {A} Γ e = (x : Nat) → dom Γ x → freshh x e
-- x is not used in a binding site in d
mutual
data unbound-in-σ : Nat → env → Set where
UBσId : ∀{x Γ} → unbound-in-σ x (Id Γ)
UBσSubst : ∀{x d y σ} → unbound-in x d
→ unbound-in-σ x σ
→ x ≠ y
→ unbound-in-σ x (Subst d y σ)
data unbound-in : (x : Nat) (d : ihexp) → Set where
UBConst : ∀{x} → unbound-in x c
UBVar : ∀{x y} → unbound-in x (X y)
UBLam2 : ∀{x d y τ} → x ≠ y
→ unbound-in x d
→ unbound-in x (·λ_[_]_ y τ d)
UBHole : ∀{x u σ} → unbound-in-σ x σ
→ unbound-in x (⦇⦈⟨ u , σ ⟩)
UBNEHole : ∀{x u σ d }
→ unbound-in-σ x σ
→ unbound-in x d
→ unbound-in x (⦇⌜ d ⌟⦈⟨ u , σ ⟩)
UBAp : ∀{ x d1 d2 } →
unbound-in x d1 →
unbound-in x d2 →
unbound-in x (d1 ∘ d2)
UBCast : ∀{x d τ1 τ2} → unbound-in x d → unbound-in x (d ⟨ τ1 ⇒ τ2 ⟩)
UBFailedCast : ∀{x d τ1 τ2} → unbound-in x d → unbound-in x (d ⟨ τ1 ⇒⦇⦈⇏ τ2 ⟩)
UBFst : ∀{x d} → unbound-in x d → unbound-in x (fst d)
UBSnd : ∀{x d} → unbound-in x d → unbound-in x (snd d)
UBPair : ∀{x d1 d2} → unbound-in x d1 → unbound-in x d2 → unbound-in x ⟨ d1 , d2 ⟩
mutual
data binders-disjoint-σ : env → ihexp → Set where
BDσId : ∀{Γ d} → binders-disjoint-σ (Id Γ) d
BDσSubst : ∀{d1 d2 y σ} → binders-disjoint d1 d2
→ binders-disjoint-σ σ d2
→ binders-disjoint-σ (Subst d1 y σ) d2
-- two terms that do not share any binders
data binders-disjoint : (d1 : ihexp) → (d2 : ihexp) → Set where
BDConst : ∀{d} → binders-disjoint c d
BDVar : ∀{x d} → binders-disjoint (X x) d
BDLam : ∀{x τ d1 d2} → binders-disjoint d1 d2
→ unbound-in x d2
→ binders-disjoint (·λ_[_]_ x τ d1) d2
BDHole : ∀{u σ d2} → binders-disjoint-σ σ d2
→ binders-disjoint (⦇⦈⟨ u , σ ⟩) d2
BDNEHole : ∀{u σ d1 d2} → binders-disjoint-σ σ d2
→ binders-disjoint d1 d2
→ binders-disjoint (⦇⌜ d1 ⌟⦈⟨ u , σ ⟩) d2
BDAp : ∀{d1 d2 d3} → binders-disjoint d1 d3
→ binders-disjoint d2 d3
→ binders-disjoint (d1 ∘ d2) d3
BDCast : ∀{d1 d2 τ1 τ2} → binders-disjoint d1 d2 → binders-disjoint (d1 ⟨ τ1 ⇒ τ2 ⟩) d2
BDFailedCast : ∀{d1 d2 τ1 τ2} → binders-disjoint d1 d2 → binders-disjoint (d1 ⟨ τ1 ⇒⦇⦈⇏ τ2 ⟩) d2
BDFst : ∀{d1 d2} → binders-disjoint d1 d2 → binders-disjoint (fst d1) d2
BDSnd : ∀{d1 d2} → binders-disjoint d1 d2 → binders-disjoint (snd d1) d2
BDPair : ∀{d1 d2 d3} →
binders-disjoint d1 d3 →
binders-disjoint d2 d3 →
binders-disjoint ⟨ d1 , d2 ⟩ d3
mutual
    -- each term has to be binders-unique, and they have to be pairwise
    -- disjoint from the collection of bound vars
data binders-unique-σ : env → Set where
BUσId : ∀{Γ} → binders-unique-σ (Id Γ)
BUσSubst : ∀{d y σ} → binders-unique d
→ binders-unique-σ σ
→ binders-disjoint-σ σ d
→ binders-unique-σ (Subst d y σ)
-- all the variable names in the term are unique
data binders-unique : ihexp → Set where
BUHole : binders-unique c
BUVar : ∀{x} → binders-unique (X x)
BULam : {x : Nat} {τ : htyp} {d : ihexp} → binders-unique d
→ unbound-in x d
→ binders-unique (·λ_[_]_ x τ d)
BUEHole : ∀{u σ} → binders-unique-σ σ
→ binders-unique (⦇⦈⟨ u , σ ⟩)
BUNEHole : ∀{u σ d} → binders-unique d
→ binders-unique-σ σ
→ binders-unique (⦇⌜ d ⌟⦈⟨ u , σ ⟩)
BUAp : ∀{d1 d2} → binders-unique d1
→ binders-unique d2
→ binders-disjoint d1 d2
→ binders-unique (d1 ∘ d2)
BUCast : ∀{d τ1 τ2} → binders-unique d
→ binders-unique (d ⟨ τ1 ⇒ τ2 ⟩)
BUFailedCast : ∀{d τ1 τ2} → binders-unique d
→ binders-unique (d ⟨ τ1 ⇒⦇⦈⇏ τ2 ⟩)
BUFst : ∀{d} →
binders-unique d →
binders-unique (fst d)
BUSnd : ∀{d} →
binders-unique d →
binders-unique (snd d)
BUPair : ∀{d1 d2} →
binders-unique d1 →
binders-unique d2 →
binders-disjoint d1 d2 →
binders-unique ⟨ d1 , d2 ⟩
-}
|
function handle = imageModifyVargplvm(handle, imageValues, imageSize, transpose, negative, ...
scale,thresh)
% IMAGEMODIFYVARGPLVM Helper code for visualisation of image data.
% FORMAT
% DESC is a helper function for visualising image data using latent
% variable models.
% ARG handle : the handle of the image data.
% ARG imageValues : the values to set the image data to.
% ARG imageSize : the size of the image.
% ARG transpose : whether the resized image needs to be transposed
% (default 1, which is yes).
% ARG negative : whether to display the negative of the image
% (default 0, which is no).
% ARG scale : dummy input, to maintain compatibility with
% IMAGEVISUALISE.
% ARG thresh : An array, thresh = [minVal, threshDown, threshUp, maxVal]
% which gives value minVal to all image elements smaller than threshDown
% and value maxVal to all image elements larger than threshUp.
% RETURN handle : the handle to the image data.
%
% COPYRIGHT : Neil D. Lawrence, 2003, 2004, 2006
%
% SEEALSO : imageVisualise, fgplvmResultsDynamic
% SHEFFIELDML
if nargin < 4 || isempty(transpose)
transpose = 1;
end
if nargin < 5 || isempty(negative)
negative = 0;
end
if nargin < 7
thresh = [];
end
if negative
imageValues = -imageValues;
end
if ~isempty(thresh)
imageValues(imageValues < thresh(2)) = thresh(1);
imageValues(imageValues >= thresh(3)) = thresh(4);
end
if transpose
set(handle, 'CData', reshape(imageValues(1:imageSize(1)*imageSize(2)), imageSize(1), imageSize(2))');
else
set(handle, 'CData', reshape(imageValues(1:imageSize(1)*imageSize(2)), imageSize(1), imageSize(2)));
end
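The thresholding step above clamps values below `thresh(2)` to `thresh(1)` and values at or above `thresh(3)` to `thresh(4)`, with everything in between passing through unchanged. A minimal Python sketch of the same mapping (illustrative only, on plain lists rather than image arrays):

```python
def apply_thresh(values, thresh):
    # thresh = [min_val, thresh_down, thresh_up, max_val], as in the docstring
    min_val, lo, hi, max_val = thresh
    out = []
    for v in values:
        if v < lo:
            out.append(min_val)
        elif v >= hi:
            out.append(max_val)
        else:
            out.append(v)
    return out
```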
|
$\cos(\pi/2) + i\sin(\pi/2) = i$.
|
module SizedPolyIO.Object where
open import Data.Product
open import Level using (_⊔_) renaming (suc to lsuc)
record Interface μ ρ : Set (lsuc (μ ⊔ ρ)) where
field
Method : Set μ
Result : (m : Method) → Set ρ
open Interface public
-- A simple object just returns for a method the response
-- and the object itself
record Object {μ ρ} (i : Interface μ ρ) : Set (μ ⊔ ρ) where
coinductive
field
objectMethod : (m : Method i) → Result i m × Object i
open Object public
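The comment above captures the whole interface: an object maps each method to a result paired with the next object state. The same coinductive pattern can be mimicked in Python with a closure returning `(result, next_object)`; the counter interface below is purely illustrative and not part of the Agda development:

```python
def counter(n=0):
    # An object is a function from a method to (result, successor object).
    def obj(method):
        if method == "get":
            return n, counter(n)
        if method == "inc":
            return None, counter(n + 1)
        raise ValueError(f"unknown method: {method}")
    return obj
```

Each call consumes the old object and yields a fresh one, mirroring the `Result i m × Object i` pair in the record.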
|
/-
Copyright (c) 2018 Kenny Lau. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Kenny Lau, Chris Hughes, Tim Baanen
-/
import Mathlib.PrePort
import Mathlib.Lean3Lib.init.default
import Mathlib.data.matrix.pequiv
import Mathlib.data.fintype.card
import Mathlib.group_theory.perm.sign
import Mathlib.algebra.algebra.basic
import Mathlib.tactic.ring
import Mathlib.linear_algebra.alternating
import Mathlib.PostPort
universes u v w z u_1
namespace Mathlib
namespace matrix
/-- The determinant of a matrix given by the Leibniz formula. -/
def det {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] (M : matrix n n R) : R :=
finset.sum finset.univ
fun (σ : equiv.perm n) => ↑↑(coe_fn equiv.perm.sign σ) * finset.prod finset.univ fun (i : n) => M (coe_fn σ i) i
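The sum above ranges over all permutations σ, weighting each product ∏ᵢ M (σ i) i by the sign of σ. A direct Python transcription of this Leibniz sum (a cross-check sketch, not part of the Lean development):

```python
from itertools import permutations
from math import prod

def perm_sign(p):
    # Sign of a permutation given as a tuple: flip the sign once per
    # transposition needed to sort it in place.
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def leibniz_det(M):
    n = len(M)
    # sum over sigma of sign(sigma) * prod_i M[sigma(i)][i]
    return sum(perm_sign(p) * prod(M[p[i]][i] for i in range(n))
               for p in permutations(range(n)))
```

This is O(n · n!) and only useful for tiny matrices, but it matches the definition term for term.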
@[simp] theorem det_diagonal {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] {d : n → R} : det (diagonal d) = finset.prod finset.univ fun (i : n) => d i := sorry
@[simp] theorem det_zero {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] (h : Nonempty n) : det 0 = 0 := sorry
@[simp] theorem det_one {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] : det 1 = 1 := sorry
theorem det_eq_one_of_card_eq_zero {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] {A : matrix n n R} (h : fintype.card n = 0) : det A = 1 := sorry
theorem det_mul_aux {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] {M : matrix n n R} {N : matrix n n R} {p : n → n} (H : ¬function.bijective p) : (finset.sum finset.univ
fun (σ : equiv.perm n) =>
↑↑(coe_fn equiv.perm.sign σ) * finset.prod finset.univ fun (x : n) => M (coe_fn σ x) (p x) * N (p x) x) =
0 := sorry
@[simp] theorem det_mul {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] (M : matrix n n R) (N : matrix n n R) : det (matrix.mul M N) = det M * det N := sorry
protected instance det.is_monoid_hom {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] : is_monoid_hom det :=
is_monoid_hom.mk det_one
/-- Transposing a matrix preserves the determinant. -/
@[simp] theorem det_transpose {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] (M : matrix n n R) : det (transpose M) = det M := sorry
/-- The determinant of a permutation matrix equals its sign. -/
@[simp] theorem det_permutation {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] (σ : equiv.perm n) : det (pequiv.to_matrix (equiv.to_pequiv σ)) = ↑(coe_fn equiv.perm.sign σ) := sorry
/-- Permuting the columns changes the sign of the determinant. -/
theorem det_permute {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] (σ : equiv.perm n) (M : matrix n n R) : (det fun (i : n) => M (coe_fn σ i)) = ↑(coe_fn equiv.perm.sign σ) * det M := sorry
@[simp] theorem det_smul {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] {A : matrix n n R} {c : R} : det (c • A) = c ^ fintype.card n * det A := sorry
theorem ring_hom.map_det {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] {S : Type w} [comm_ring S] {M : matrix n n R} {f : R →+* S} : coe_fn f (det M) = det (coe_fn (ring_hom.map_matrix f) M) := sorry
theorem alg_hom.map_det {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] {S : Type w} [comm_ring S] [algebra R S] {T : Type z} [comm_ring T] [algebra R T] {M : matrix n n S} {f : alg_hom R S T} : coe_fn f (det M) = det (coe_fn (ring_hom.map_matrix ↑f) M) := sorry
/-!
### `det_zero` section
Prove that a matrix with a repeated column has determinant equal to zero.
-/
theorem det_eq_zero_of_row_eq_zero {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] {A : matrix n n R} (i : n) (h : ∀ (j : n), A i j = 0) : det A = 0 := sorry
theorem det_eq_zero_of_column_eq_zero {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] {A : matrix n n R} (j : n) (h : ∀ (i : n), A i j = 0) : det A = 0 :=
eq.mpr (id (Eq._oldrec (Eq.refl (det A = 0)) (Eq.symm (det_transpose A)))) (det_eq_zero_of_row_eq_zero j h)
/-- If a matrix has a repeated row, the determinant will be zero. -/
theorem det_zero_of_row_eq {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] {M : matrix n n R} {i : n} {j : n} (i_ne_j : i ≠ j) (hij : M i = M j) : det M = 0 := sorry
theorem det_update_column_add {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] (M : matrix n n R) (j : n) (u : n → R) (v : n → R) : det (update_column M j (u + v)) = det (update_column M j u) + det (update_column M j v) := sorry
theorem det_update_row_add {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] (M : matrix n n R) (j : n) (u : n → R) (v : n → R) : det (update_row M j (u + v)) = det (update_row M j u) + det (update_row M j v) := sorry
theorem det_update_column_smul {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] (M : matrix n n R) (j : n) (s : R) (u : n → R) : det (update_column M j (s • u)) = s * det (update_column M j u) := sorry
theorem det_update_row_smul {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] (M : matrix n n R) (j : n) (s : R) (u : n → R) : det (update_row M j (s • u)) = s * det (update_row M j u) := sorry
/-- `det` is an alternating multilinear map over the rows of the matrix.
See also `is_basis.det`. -/
def det_row_multilinear {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] : alternating_map R (n → R) R n :=
alternating_map.mk det sorry sorry sorry
@[simp] theorem det_block_diagonal {n : Type u} [DecidableEq n] [fintype n] {R : Type v} [comm_ring R] {o : Type u_1} [fintype o] [DecidableEq o] (M : o → matrix n n R) : det (block_diagonal M) = finset.prod finset.univ fun (k : o) => det (M k) := sorry
|
!! ALL ONE MODULE
!! Initialization subroutines for walks and hops, which index the nonzero matrix
!! elements between pairs of configurations (matrix elements of the Hamiltonian
!! matrix in the Slater determinant representation) -- initialization of
!! derived-type walktype variables.
#include "Definitions.INC"
module walksubmod
contains
subroutine walkalloc(www)
use fileptrmod
use mpimod
use walkmod
use configsubmod
implicit none
type(walktype) :: www
!! training wheels
if (www%topconfig-www%botconfig.gt.0) then
if (.not.highspinorder(www%configlist(:,www%topconfig),www%numpart)) then
OFLWR "NOT HIGHSPIN",www%topconfig
call printconfig(www%configlist(:,www%topconfig),www)
CFLST
endif
if (.not.lowspinorder(www%configlist(:,www%botconfig),www%numpart)) then
OFLWR "NOT LOWSPIN",www%botconfig
call printconfig(www%configlist(:,www%botconfig),www)
CFLST
endif
endif
!! 06-2015 configpserproc also in newconfig.f90
allocate( www%numsinglewalks(www%configstart:www%configend+1) , &
www%numdoublewalks(www%configstart:www%configend+1) )
www%numsinglewalks(:)=(-1); www%numdoublewalks(:)=(-1)
allocate( www%numsinglediagwalks(www%configstart:www%configend+1) , &
www%numdoublediagwalks(www%configstart:www%configend+1) )
www%numsinglediagwalks(:)=(-1); www%numdoublediagwalks(:)=(-1)
call getnumwalks(www)
OFLWR "Allocating singlewalks"; CFL
allocate( www%singlewalk(www%maxtotsinglewalks+1) )
www%singlewalk=-1
allocate(www%singlediag(www%numpart,www%configstart:www%configend+1) )
www%singlediag=-1
allocate( www%singlewalkdirphase(www%maxtotsinglewalks+1) )
www%singlewalkdirphase=0
allocate( www%singlewalkopspf(1:2,www%maxtotsinglewalks+1) )
www%singlewalkopspf=-1
OFLWR "Allocating doublewalks"; CFL
allocate( www%doublewalkdirspf(1:4,www%maxtotdoublewalks+1) )
www%doublewalkdirspf=-1
allocate( www%doublewalkdirphase(www%maxtotdoublewalks+1) )
www%doublewalkdirphase=0
allocate( www%doublewalk(www%maxtotdoublewalks+1) )
www%doublewalk=-1
allocate(www%doublediag(www%numpart*(www%numpart-1),www%configstart:www%configend+1))
www%doublediag=-1
OFLWR " ..done walkalloc."; CFL
contains
function highspinorder(thisconfig,numpart)
implicit none
integer,intent(in) :: numpart,thisconfig(2*numpart)
logical :: highspinorder
integer :: ii,unpaired(numpart),flag,jj
highspinorder=.true.
unpaired(1:numpart)=1
do ii=1,numpart
do jj=1,numpart !! WORKS
if (jj.ne.ii) then !!WORKS
! -xAVX error on lawrencium! doesn't work this way; compiler/instruction-set bug.
! do jj=ii+1,numpart !!FAILS
if (thisconfig(jj*2-1).eq.thisconfig(ii*2-1)) then
unpaired(ii)=0
unpaired(jj)=0
endif
endif !!WORKS
enddo
enddo
flag=0
do ii=1,numpart
if (unpaired(ii).eq.1) then
if (thisconfig(ii*2).eq.1) then
flag=1
else
if (flag==1) then
highspinorder=.false.
return
endif
endif
endif
enddo
end function highspinorder
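!! A minimal Python sketch (not part of this module; function and variable
!! names are mine) of what the highspinorder check above verifies: among
!! singly-occupied (unpaired) spatial orbitals, no spin-2 electron may
!! appear after a spin-1 electron.

```python
from collections import Counter

def high_spin_order(config):
    """config: list of (spatial_orbital, spin) pairs with spin in {1, 2},
    mirroring the (orbital, spin) pairs stored in thisconfig.
    Returns True when no unpaired spin-2 entry follows an unpaired
    spin-1 entry, as highspinorder checks."""
    counts = Counter(orb for orb, _ in config)  # pairing test
    unpaired = [spin for orb, spin in config if counts[orb] == 1]
    seen_spin1 = False
    for spin in unpaired:
        if spin == 1:
            seen_spin1 = True
        elif seen_spin1:  # an unpaired spin-2 after a spin-1
            return False
    return True
```

!! Note the Fortran deliberately scans all pairs jj /= ii rather than
!! jj = ii+1,numpart to work around the compiler bug noted in the comments;
!! the resulting pairing test is the same.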
function lowspinorder(thisconfig,numpart)
implicit none
integer,intent(in) :: numpart,thisconfig(2*numpart)
logical :: lowspinorder
integer :: ii,unpaired(numpart),flag,jj
lowspinorder=.true.
unpaired(:)=1
do ii=1,numpart
! do jj=ii+1,numpart !!FAILS
do jj=1,numpart !!WORKS
if (jj.ne.ii) then !!WORKS
if (thisconfig(jj*2-1).eq.thisconfig(ii*2-1)) then
unpaired(ii)=0
unpaired(jj)=0
endif
endif !!WORKS
enddo
enddo
flag=0
do ii=1,numpart
if (unpaired(ii).eq.1) then
if (thisconfig(ii*2).eq.2) then
flag=1
else
if (flag==1) then
lowspinorder=.false.
endif
endif
endif
enddo
end function lowspinorder
end subroutine walkalloc
subroutine walks(www)
use fileptrmod
use walkmod
use mpimod !! nprocs
use aarrmod
use configsubmod
use mpisubmod
implicit none
type(walktype) :: www
integer :: iindex, iiindex, jindex, jjindex, ispin, jspin, iispin, jjspin, ispf, jspf, &
iispf, jjspf, config2, config1, dirphase, flag, idof, iidof, jdof, iwalk, idiag
integer :: thisconfig(www%num2part), thatconfig(www%num2part), temporb(2), temporb2(2),&
qsize(nprocs) !! AUTOMATIC
integer, allocatable :: listorder(:)
qsize=0
!! *********** SINGLES **********
OFLWR "Calculating walks. Singles..."; CFL
do config1=www%botconfig,www%topconfig
if (mod(config1,1000).eq.0) then
OFLWR config1, " out of ", www%topconfig; CFL
endif
iwalk=0
if (www%singlewalkflag.ne.0) then
thisconfig=www%configlist(:,config1)
do idof=1,www%numpart !! position in thisconfig that we're walking
temporb=thisconfig((idof-1)*2+1 : idof*2)
ispf=temporb(1)
ispin=temporb(2)
iindex=iind(temporb)
do jindex=1,2*www%nspf !! the walk
temporb=aarr(jindex)
jspf=temporb(1)
jspin=temporb(2)
if (ispin.ne.jspin) then
cycle
endif
flag=0
do jdof=1,www%numpart
if (jdof.ne.idof) then !! INCLUDING DIAGONAL WALKS
if (iind(thisconfig((jdof-1)*2+1:jdof*2)) == jindex) then
flag=1
endif
endif
enddo
if (flag.ne.0) then ! Pauli-disallowed configuration.
cycle
endif
thatconfig=thisconfig
thatconfig((idof-1)*2+1 : idof*2)=temporb
dirphase=reorder(thatconfig,www%numpart)
if (.not.allowedconfig0(www,thatconfig,www%dfwalklevel)) then
cycle
endif
config2=getconfiguration(thatconfig,www)
if (www%configtypes(config1).ne.www%configtypes(config2)) then
cycle
endif
iwalk=iwalk+1
!! ket, bra bra is walk
if (www%holeflag.eq.0) then
www%singlewalkopspf(1:2,iwalk+www%scol(config1))=[ ispf,jspf ]
else
www%singlewalkopspf(1:2,iwalk+www%scol(config1))=[ jspf,ispf ]
endif
www%singlewalkdirphase(iwalk+www%scol(config1))=dirphase
www%singlewalk(iwalk+www%scol(config1))=config2
enddo ! the walk
enddo ! position we're walking
endif ! singlewalkflag
if ( www%numsinglewalks(config1) /= iwalk ) then
OFLWR "WALK ERROR SINGLES."; CFLST
endif
enddo ! config1
OFLWR "Calculating walks. Doubles..."; call closefile()
!! *********** DOUBLES ************
do config1=www%botconfig,www%topconfig
if (mod(config1,1000).eq.0) then
OFLWR config1, " out of ", www%topconfig; CFL
endif
iwalk=0
if (www%doublewalkflag.ne.0) then
thisconfig=www%configlist(:,config1)
do idof=1,www%numpart !! positions in thisconfig that we're walking
do iidof=idof+1,www%numpart !!
temporb=thisconfig((idof-1)*2+1 : idof*2)
ispf=temporb(1)
ispin=temporb(2)
iindex=iind(temporb)
temporb=thisconfig((iidof-1)*2+1 : iidof*2)
iispf=temporb(1)
iispin=temporb(2)
iiindex=iind(temporb)
do jindex=1,2*www%nspf !! the walk
temporb=aarr(jindex)
jspf=temporb(1)
jspin=temporb(2)
if (.not.ispin.eq.jspin) then
cycle
endif
!! no more exchange separately
do jjindex=1,2*www%nspf
if (jjindex.eq.jindex) then
cycle
endif
temporb2=aarr(jjindex)
jjspf=temporb2(1)
jjspin=temporb2(2)
if (.not.iispin.eq.jjspin) then
cycle
endif
!! INCLUDING DIAGONAL AND SINGLE WALKS
flag=0
do jdof=1,www%numpart
if (jdof.ne.idof.and.jdof.ne.iidof) then
if ((iind(thisconfig((jdof-1)*2+1:jdof*2)) == jindex).or. &
(iind(thisconfig((jdof-1)*2+1:jdof*2)) == jjindex)) then
flag=1
exit
endif
endif
enddo
if (flag.ne.0) then ! Pauli-disallowed configuration.
cycle
endif
thatconfig=thisconfig
thatconfig((idof-1)*2+1 : idof*2)=temporb
thatconfig((iidof-1)*2+1 : iidof*2)=temporb2
dirphase=reorder(thatconfig,www%numpart)
if (.not.allowedconfig0(www,thatconfig,www%dfwalklevel)) then
cycle
endif
config2=getconfiguration(thatconfig,www)
if (www%configtypes(config1).ne.www%configtypes(config2)) then
cycle
endif
iwalk = iwalk+1
!! switched 2-2016 was ket2 bra2 ket1 bra1
!! www%doublewalkdirspf(1:4,iwalk,config1)=[ iispf, jjspf, ispf, jspf ]
if (www%holeflag.eq.0) then
!! now bra2 ket2 bra1 ket1
www%doublewalkdirspf(1:4,iwalk+www%dcol(config1))=[ jjspf, iispf, jspf, ispf ]
else
www%doublewalkdirspf(1:4,iwalk+www%dcol(config1))=[ iispf, jjspf, ispf, jspf ]
endif
www%doublewalkdirphase(iwalk+www%dcol(config1))=dirphase
www%doublewalk(iwalk+www%dcol(config1))=config2
enddo ! the walk
enddo
enddo ! position we're walking
enddo
endif ! doublewalkflag
if ( www%numdoublewalks(config1) /= iwalk ) then
OFLWR "WALK ERROR DOUBLES.",config1,www%numdoublewalks(config1),iwalk; CFLST
endif
enddo ! config1
call mpibarrier()
OFLWR "Sorting walks..."; CFL
!$OMP PARALLEL DEFAULT(PRIVATE) SHARED(www,nprocs)
allocate(listorder(www%singlemaxwalks+www%doublemaxwalks+1))
listorder=0
!$OMP DO SCHEDULE(DYNAMIC)
do config1=www%botconfig,www%topconfig
if (www%numsinglewalks(config1).gt.1) then
call getlistorder(www%singlewalk(www%scol(config1)+1:),listorder(:),www%numsinglewalks(config1))
call listreorder(www%singlewalkdirphase(www%scol(config1)+1:),listorder(:),www%numsinglewalks(config1),1)
call listreorder(www%singlewalkopspf(:,www%scol(config1)+1:),listorder(:),www%numsinglewalks(config1),2)
call listreorder(www%singlewalk(www%scol(config1)+1:),listorder(:),www%numsinglewalks(config1),1)
endif
if (www%numdoublewalks(config1).gt.1) then
call getlistorder(www%doublewalk(www%dcol(config1)+1:),listorder(:),www%numdoublewalks(config1))
call listreorder(www%doublewalkdirphase(www%dcol(config1)+1:),listorder(:),www%numdoublewalks(config1),1)
call listreorder(www%doublewalkdirspf(:,www%dcol(config1)+1:),listorder(:),www%numdoublewalks(config1),4)
call listreorder(www%doublewalk(www%dcol(config1)+1:),listorder(:),www%numdoublewalks(config1),1)
endif
enddo
!$OMP END DO
deallocate(listorder)
!$OMP END PARALLEL
OFLWR " .... done sorting walks."; CFL
#ifdef MPIFLAG
call mpibarrier()
if (www%sparseconfigflag.eq.0.and.www%maxtotsinglewalks.ne.0) then
qsize(:) = www%scol(www%alltopconfigs(:)+1) - www%scol(www%allbotconfigs(:))
call mpiallgather_i(www%singlewalkopspf, 2*www%maxtotsinglewalks,&
2*qsize(:),-00420042)
call mpiallgather_i(www%singlewalkdirphase,www%maxtotsinglewalks,&
qsize(:),-79800798)
call mpiallgather_i(www%singlewalk, www%maxtotsinglewalks,&
qsize(:),-798042)
endif
if (www%sparseconfigflag.eq.0.and.www%maxtotdoublewalks.ne.0) then
qsize(:) = www%dcol(www%alltopconfigs(:)+1) - www%dcol(www%allbotconfigs(:))
call mpiallgather_i(www%doublewalkdirspf, 4*www%maxtotdoublewalks,&
4*qsize(:),-9994291)
call mpiallgather_i(www%doublewalkdirphase,www%maxtotdoublewalks,&
qsize(:),-9994291)
call mpiallgather_i(www%doublewalk, www%maxtotdoublewalks,&
qsize(:),001234)
endif
call mpibarrier()
#endif
do config1=www%configstart,www%configend
idiag=0
do iwalk=1,www%numsinglewalks(config1)
if (www%singlewalk(iwalk+www%scol(config1)).eq.config1) then
idiag=idiag+1
www%singlediag(idiag,config1)=iwalk
endif
enddo
www%numsinglediagwalks(config1)=idiag
idiag=0
do iwalk=1,www%numdoublewalks(config1)
if (www%doublewalk(iwalk+www%dcol(config1)).eq.config1) then
idiag=idiag+1
www%doublediag(idiag,config1)=iwalk
endif
enddo
www%numdoublediagwalks(config1)=idiag
enddo
contains
recursive subroutine getlistorder(values, order,num)
implicit none
integer,intent(in) :: num,values(num)
integer,intent(out) :: order(num)
integer :: i,j,whichlowest, flag, lowval
integer, allocatable :: taken(:)
allocate(taken(num))
taken=0; order=-1
do j=1,num
whichlowest=-1; flag=0; lowval=10000000 !! is not used (see flag)
do i=1,num
if ( taken(i) .eq. 0 ) then
if ((flag.eq.0) .or.(values(i) .le. lowval)) then
flag=1; lowval=values(i); whichlowest=i
endif
endif
enddo
if ((whichlowest.gt.num).or.(whichlowest.lt.1)) then
OFLWR taken,"lowest ERROR, J=",j," WHICHLOWEST=", whichlowest; CFLST
endif
if (taken(whichlowest).ne.0) then
OFLWR "TAKENmm ERROR."; CFLST
endif
taken(whichlowest)=1; order(j)=whichlowest
enddo
deallocate(taken)
end subroutine getlistorder
recursive subroutine listreorder(list, order,num,numper)
implicit none
integer,intent(in) :: num, numper, order(num)
integer,intent(inout) :: list(numper,num)
integer,allocatable :: newvals(:,:)
integer :: j
allocate(newvals(numper,num))
newvals=0
do j=1,num
newvals(:,j)=list(:,order(j))
enddo
list(:,:)=newvals(:,:)
deallocate(newvals)
end subroutine listreorder
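!! The contained getlistorder/listreorder pair above is a plain selection
!! sort that returns a permutation, which is then applied to the parallel
!! walk arrays. A Python sketch of the same idea (hypothetical names,
!! 0-based indices instead of the Fortran 1-based ones):

```python
def get_list_order(values):
    """Permutation that sorts `values` ascending by repeatedly taking the
    lowest untaken element (ties resolved to the later index, matching
    the Fortran `.le.` comparison in getlistorder)."""
    taken = [False] * len(values)
    order = []
    for _ in values:
        lowest = None
        for i, v in enumerate(values):
            if not taken[i] and (lowest is None or v <= values[lowest]):
                lowest = i
        taken[lowest] = True
        order.append(lowest)
    return order

def list_reorder(rows, order):
    """Apply the permutation to a parallel list, like listreorder."""
    return [rows[i] for i in order]
```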
end subroutine walks
subroutine getnumwalks(www)
use fileptrmod
use walkmod
use mpimod
use aarrmod
use configsubmod
use mpisubmod
implicit none
type(walktype) :: www
integer :: iindex, iiindex, jindex, jjindex, ispin, jspin, iispin, jjspin, ispf, iispf, config1, &
dirphase, flag, idof, iidof, jdof,iwalk ,config2, maxwalks
integer :: thisconfig(www%num2part), thatconfig(www%num2part), temporb(2), temporb2(2)
integer :: totwalks, totdoublewalks, totsinglewalks
integer*8 :: allwalks
character(len=3) :: iilab
character(len=4) :: iilab0
write(iilab0,'(I4)') myrank+1000
iilab(:)=iilab0(2:4)
!! *********** SINGLES **********
call mpibarrier()
OFLWR "Counting walks. Singles"; CFL
do config1=www%botconfig,www%topconfig
if (mod(config1,1000).eq.0) then
OFLWR config1, " out of ", www%topconfig; CFL
endif
iwalk=0
if (www%singlewalkflag.ne.0) then
thisconfig=www%configlist(:,config1)
do idof=1,www%numpart !! position in thisconfig that we're walking
temporb=thisconfig((idof-1)*2+1 : idof*2)
ispf=temporb(1)
ispin=temporb(2)
iindex=iind(temporb)
do jindex=1,www%nspf * 2 !! the walk
temporb=aarr(jindex)
jspin=temporb(2)
if (ispin.ne.jspin) then
cycle
endif
flag=0
do jdof=1,www%numpart
if (jdof.ne.idof) then !! INCLUDING DIAGONAL WALKS
if (iind(thisconfig((jdof-1)*2+1:jdof*2)) == jindex) then
flag=1
endif
endif
enddo
if (flag.ne.0) then ! Pauli-disallowed configuration.
cycle
endif
thatconfig=thisconfig
thatconfig((idof-1)*2+1 : idof*2)=temporb
dirphase=reorder(thatconfig,www%numpart)
if (.not.allowedconfig0(www,thatconfig,www%dfwalklevel)) then
cycle
endif
config2=getconfiguration(thatconfig,www)
if (www%configtypes(config1).ne.www%configtypes(config2)) then
cycle
endif
iwalk=iwalk+1
enddo ! the walk
enddo ! position we're walking
endif ! singlewalkflag
www%numsinglewalks(config1) = iwalk
enddo ! config1
#ifdef MPIFLAG
if (www%sparseconfigflag.eq.0) then
call mpiallgather_i(www%numsinglewalks(:),www%numconfig,&
www%configsperproc(:),www%maxconfigsperproc)
endif
#endif
OFLWR "Counting walks. Doubles"; CFL
!! *********** DOUBLES ************
do config1=www%botconfig,www%topconfig
if (mod(config1,1000).eq.0) then
OFLWR config1, " out of ", www%topconfig; CFL
endif
iwalk=0
if (www%doublewalkflag.ne.0) then
thisconfig=www%configlist(:,config1)
do idof=1,www%numpart !! positions in thisconfig that we're walking
do iidof=idof+1,www%numpart !!
temporb=thisconfig((idof-1)*2+1 : idof*2)
ispf=temporb(1)
ispin=temporb(2)
iindex=iind(temporb)
temporb=thisconfig((iidof-1)*2+1 : iidof*2)
iispf=temporb(1)
iispin=temporb(2)
iiindex=iind(temporb)
do jindex=1,2*www%nspf !! the walk
temporb=aarr(jindex)
jspin=temporb(2)
if (.not.ispin.eq.jspin) then
cycle
endif
!! no more exchange separately
do jjindex=1,2*www%nspf !! the walk
if (jjindex.eq.jindex) then
cycle
endif
temporb2=aarr(jjindex)
jjspin=temporb2(2)
if (.not.iispin.eq.jjspin) then
cycle
endif
!! INCLUDING DIAGONAL AND SINGLE WALKS
flag=0
do jdof=1,www%numpart
if (jdof.ne.idof.and.jdof.ne.iidof) then
if ((iind(thisconfig((jdof-1)*2+1:jdof*2)) == jindex).or. &
(iind(thisconfig((jdof-1)*2+1:jdof*2)) == jjindex)) then
flag=1
endif
endif
enddo
if (flag.ne.0) then ! Pauli-disallowed configuration.
cycle
endif
thatconfig=thisconfig
thatconfig((idof-1)*2+1 : idof*2)=temporb
thatconfig((iidof-1)*2+1 : iidof*2)=temporb2
dirphase=reorder(thatconfig,www%numpart)
if (.not.allowedconfig0(www,thatconfig,www%dfwalklevel)) then
cycle
endif
config2=getconfiguration(thatconfig,www)
if (www%configtypes(config1).ne.www%configtypes(config2)) then
cycle
endif
iwalk = iwalk+1
enddo ! the walk
enddo
enddo ! position we're walking
enddo
endif ! doublewalkflag
www%numdoublewalks(config1)=iwalk
enddo ! config1
#ifdef MPIFLAG
if (www%sparseconfigflag.eq.0) then
call mpiallgather_i(www%numdoublewalks(:),www%numconfig,www%configsperproc(:),&
www%maxconfigsperproc)
endif
#endif
totwalks=0; totsinglewalks=0; totdoublewalks=0
www%singlemaxwalks=0; www%doublemaxwalks=0
allocate(www%scol(www%configstart:www%configend+1), www%dcol(www%configstart:www%configend+1))
do config1=www%configstart,www%configend
www%scol(config1) = totsinglewalks
www%dcol(config1) = totdoublewalks
totwalks=totwalks+www%numsinglewalks(config1)+www%numdoublewalks(config1)
totsinglewalks=totsinglewalks + www%numsinglewalks(config1)
totdoublewalks=totdoublewalks + www%numdoublewalks(config1)
if (www%singlemaxwalks.lt.www%numsinglewalks(config1)) then
www%singlemaxwalks=www%numsinglewalks(config1)
endif
if (www%doublemaxwalks.lt.www%numdoublewalks(config1)) then
www%doublemaxwalks=www%numdoublewalks(config1)
endif
enddo
www%scol(www%configend+1) = totsinglewalks
www%dcol(www%configend+1) = totdoublewalks
www%maxtotsinglewalks = totsinglewalks
www%maxtotdoublewalks = totdoublewalks
maxwalks = totwalks
allwalks=totwalks
if (www%sparseconfigflag.ne.0) then
call mympii8reduceone(allwalks)
call mympiimax(www%maxtotsinglewalks); call mympiimax(www%maxtotdoublewalks);
call mympiimax(maxwalks)
endif
OFLWR;
WRFL "Maximum number of"
WRFL " single walks= ", www%maxtotsinglewalks
WRFL " double walks= ", www%maxtotdoublewalks;
WRFL " total walks= ", maxwalks;
WRFL "TOTAL walks: ", allwalks
if (www%sparseconfigflag.ne.0) then
WRFL "maxwalks*nprocs:",int(maxwalks,8)*nprocs
endif
WRFL; CFL
end subroutine getnumwalks
subroutine hops(www)
use fileptrmod
use walkmod
use mpimod
use aarrmod
use configsubmod !! allowedconfig0
use mpisubmod
implicit none
type(walktype) :: www
integer :: ii,iwalk,iconfig,ihop,flag,iproc,isize, &
totsinglehops,totdoublehops, totsinglewalks,totdoublewalks
integer*8 :: allsinglehops,alldoublehops, allsinglewalks,alldoublewalks
!!$ integer :: numsinglehopsbyproc(nprocs), numdoublehopsbyproc(nprocs)
allocate(www%numsinglehops(www%configstart:www%configend+1),&
www%numdoublehops(www%configstart:www%configend+1))
allocate( www%singlediaghop(www%configstart:www%configend+1),&
www%doublediaghop(www%configstart:www%configend+1))
allocate( www%singlehopdiagflag(www%configstart:www%configend+1),&
www%doublehopdiagflag(www%configstart:www%configend+1))
www%numsinglehops(:)=(-99); www%numdoublehops(:)=(-99)
www%singlediaghop(:)=(-99); www%doublediaghop(:)=(-99)
www%singlehopdiagflag(:)=(-99); www%doublehopdiagflag(:)=(-99)
allocate( www%firstsinglehopbyproc(nprocs,www%configstart:www%configend+1), &
www%lastsinglehopbyproc(nprocs,www%configstart:www%configend+1) )
allocate( www%firstdoublehopbyproc(nprocs,www%configstart:www%configend+1), &
www%lastdoublehopbyproc(nprocs,www%configstart:www%configend+1) )
www%firstsinglehopbyproc(:,:)=(-99); www%lastsinglehopbyproc(:,:)=(-99)
www%firstdoublehopbyproc(:,:)=(-99); www%lastdoublehopbyproc(:,:)=(-99)
do ii=0,1
if (ii.eq.0) then
!! avoid warn bounds
allocate(www%singlehop(1,1),www%singlehopwalkstart(1,1),www%singlehopwalkend(1,1),&
www%doublehop(1,1),www%doublehopwalkstart(1,1),www%doublehopwalkend(1,1))
else
deallocate(www%singlehop,www%singlehopwalkstart,www%singlehopwalkend,&
www%doublehop,www%doublehopwalkstart,www%doublehopwalkend)
allocate(www%singlehop(www%maxnumsinglehops,www%configstart:www%configend+1),&
www%singlehopwalkstart(www%maxnumsinglehops,www%configstart:www%configend+1),&
www%singlehopwalkend(www%maxnumsinglehops,www%configstart:www%configend+1),&
www%doublehop(www%maxnumdoublehops,www%configstart:www%configend+1),&
www%doublehopwalkstart(www%maxnumdoublehops,www%configstart:www%configend+1),&
www%doublehopwalkend(www%maxnumdoublehops,www%configstart:www%configend+1))
www%singlehop(:,:)=(-99)
www%singlehopwalkstart(:,:)=1
www%singlehopwalkend(:,:)=0
www%doublehop(:,:)=(-99)
www%doublehopwalkstart(:,:)=1
www%doublehopwalkend(:,:)=0
endif
if (ii.eq.0) then
OFLWR "Counting single hops..."; CFL
else
OFLWR "Getting single hops..."; CFL
endif
do iconfig=www%configstart,www%configend
ihop=0
if (www%numsinglewalks(iconfig).gt.0) then
ihop=1
if (ii.eq.1) then
www%singlehop(1,iconfig)=www%singlewalk(1+www%scol(iconfig))
www%singlehopwalkstart(1,iconfig)=1
endif
do iwalk=2,www%numsinglewalks(iconfig)
if (www%singlewalk(iwalk+www%scol(iconfig)).ne.www%singlewalk(iwalk-1+www%scol(iconfig))) then
if (ii.eq.1) then
www%singlehopwalkend(ihop,iconfig)=iwalk-1
endif
ihop=ihop+1
if (ii.eq.1) then
www%singlehop(ihop,iconfig)=www%singlewalk(iwalk+www%scol(iconfig))
www%singlehopwalkstart(ihop,iconfig)=iwalk
endif
endif
enddo
endif ! if numsinglewalks.gt.0
if (ii.eq.0) then
www%numsinglehops(iconfig)=ihop
else
if (www%numsinglewalks(iconfig).gt.0) then
www%singlehopwalkend(ihop,iconfig)=www%numsinglewalks(iconfig)
endif
if (www%numsinglehops(iconfig).ne.ihop) then
OFLWR "CHECKME SINGLEHOPW",www%numsinglehops(iconfig),ihop,iconfig; CFLST
endif
endif
enddo
if (ii.eq.0) then
OFLWR "Counting double hops..."; CFL
else
OFLWR "Getting double hops..."; CFL
endif
do iconfig=www%configstart,www%configend
ihop=0
if (www%numdoublewalks(iconfig).gt.0) then
ihop=1
if (ii.eq.1) then
www%doublehop(1,iconfig)=www%doublewalk(1+www%dcol(iconfig))
www%doublehopwalkstart(1,iconfig)=1
endif
do iwalk=2,www%numdoublewalks(iconfig)
if (www%doublewalk(iwalk+www%dcol(iconfig)).ne.www%doublewalk(iwalk-1+www%dcol(iconfig))) then
if (ii.eq.1) then
www%doublehopwalkend(ihop,iconfig)=iwalk-1
endif
ihop=ihop+1
if (ii.eq.1) then
www%doublehop(ihop,iconfig)=www%doublewalk(iwalk+www%dcol(iconfig))
www%doublehopwalkstart(ihop,iconfig)=iwalk
endif
endif
enddo
endif !! if numdoublewalks.gt.0
if (ii.eq.0) then
www%numdoublehops(iconfig)=ihop
else
if (www%numdoublewalks(iconfig).gt.0) then
www%doublehopwalkend(ihop,iconfig)=www%numdoublewalks(iconfig)
endif
if (www%numdoublehops(iconfig).ne.ihop) then
OFLWR "CHECKME DOUBLEHOPW",www%numdoublehops(iconfig),ihop,iconfig; CFLST
endif
endif
enddo
if (ii.eq.0) then
!!$ www%maxnumsinglehops=0
!!$ www%maxnumdoublehops=0
www%maxnumsinglehops=1 !always allocate
www%maxnumdoublehops=1
totsinglehops=0; totsinglewalks=0
totdoublehops=0; totdoublewalks=0
do iconfig=www%configstart,www%configend
totsinglehops=totsinglehops+www%numsinglehops(iconfig)
totsinglewalks=totsinglewalks+www%numsinglewalks(iconfig)
if (www%numsinglehops(iconfig).gt.www%maxnumsinglehops) then
www%maxnumsinglehops=www%numsinglehops(iconfig)
endif
totdoublehops=totdoublehops+www%numdoublehops(iconfig)
totdoublewalks=totdoublewalks+www%numdoublewalks(iconfig)
if (www%numdoublehops(iconfig).gt.www%maxnumdoublehops) then
www%maxnumdoublehops=www%numdoublehops(iconfig)
endif
enddo
endif
enddo !! do ii=0,1
do iconfig=www%configstart,www%configend
if (allowedconfig0(www,www%configlist(:,iconfig),www%dfwalklevel) .and. www%holeflag.ne.0) then
if (www%numsinglehops(iconfig).eq.0.and.www%singlewalkflag.ne.0) then
www%numsinglehops(iconfig) = 1
www%singlehop(1,iconfig)=iconfig
endif
if (www%numdoublehops(iconfig).eq.0.and.www%doublewalkflag.ne.0) then
www%numdoublehops(iconfig)=1
www%doublehop(1,iconfig)=iconfig
endif
endif
enddo
do iconfig=www%configstart,www%configend
flag=0
if (www%numsinglehops(iconfig).gt.0) then
do ihop=1,www%numsinglehops(iconfig)
if (www%singlehop(ihop,iconfig).eq.iconfig) then
if (flag.eq.1) then
OFLWR "EERRR HOPSSING"; CFLST
else
flag=1
www%singlediaghop(iconfig)=ihop
endif
endif
enddo
endif
www%singlehopdiagflag(iconfig)=flag
flag=0
if (www%numdoublehops(iconfig).gt.0) then
do ihop=1,www%numdoublehops(iconfig)
if (www%doublehop(ihop,iconfig).eq.iconfig) then
if (flag.eq.1) then
OFLWR "EERRR HOPSDOUB"; CFLST
else
flag=1
www%doublediaghop(iconfig)=ihop
endif
endif
enddo
endif
www%doublehopdiagflag(iconfig)=flag
enddo
call mpibarrier()
do iconfig=www%botconfig,www%topconfig
www%firstsinglehopbyproc(1,iconfig)=1
iproc=1
do ihop=1,www%numsinglehops(iconfig)
do while (www%singlehop(ihop,iconfig).gt.www%alltopconfigs(iproc))
www%lastsinglehopbyproc(iproc,iconfig)=ihop-1
iproc=iproc+1
www%firstsinglehopbyproc(iproc,iconfig)=ihop
enddo
enddo
www%lastsinglehopbyproc(iproc,iconfig)=www%numsinglehops(iconfig)
www%firstsinglehopbyproc(iproc+1:nprocs,iconfig)=www%numsinglehops(iconfig)+1
www%lastsinglehopbyproc(iproc+1:nprocs,iconfig)=www%numsinglehops(iconfig)
www%firstdoublehopbyproc(1,iconfig)=1
iproc=1
do ihop=1,www%numdoublehops(iconfig)
do while (www%doublehop(ihop,iconfig).gt.www%alltopconfigs(iproc))
www%lastdoublehopbyproc(iproc,iconfig)=ihop-1
iproc=iproc+1
www%firstdoublehopbyproc(iproc,iconfig)=ihop
enddo
enddo
www%lastdoublehopbyproc(iproc,iconfig)=www%numdoublehops(iconfig)
www%firstdoublehopbyproc(iproc+1:nprocs,iconfig)=www%numdoublehops(iconfig)+1
www%lastdoublehopbyproc(iproc+1:nprocs,iconfig)=www%numdoublehops(iconfig)
end do
#ifdef MPIFLAG
call mpibarrier()
if (www%sparseconfigflag.eq.0) then
isize=nprocs
call mpiallgather_i(www%firstsinglehopbyproc(:,:), www%numconfig*isize,&
www%configsperproc(:)*isize,www%maxconfigsperproc*isize)
call mpiallgather_i(www%lastsinglehopbyproc(:,:), www%numconfig*isize,&
www%configsperproc(:)*isize,www%maxconfigsperproc*isize)
call mpiallgather_i(www%firstdoublehopbyproc(:,:), www%numconfig*isize,&
www%configsperproc(:)*isize,www%maxconfigsperproc*isize)
call mpiallgather_i(www%lastdoublehopbyproc(:,:), www%numconfig*isize,&
www%configsperproc(:)*isize,www%maxconfigsperproc*isize)
endif
if (www%sparseconfigflag.eq.0) then
ii=1
call mpiallgather_i(www%numdoublehops(:),www%numconfig*ii,www%configsperproc(:)*ii,&
www%maxconfigsperproc*ii)
call mpiallgather_i(www%numsinglehops(:),www%numconfig*ii,www%configsperproc(:)*ii,&
www%maxconfigsperproc*ii)
call mpiallgather_i(www%singlediaghop(:),www%numconfig*ii,www%configsperproc(:)*ii,&
www%maxconfigsperproc*ii)
call mpiallgather_i(www%doublediaghop(:),www%numconfig*ii,www%configsperproc(:)*ii,&
www%maxconfigsperproc*ii)
call mpiallgather_i(www%singlehopdiagflag(:),www%numconfig*ii,www%configsperproc(:)*ii,&
www%maxconfigsperproc*ii)
call mpiallgather_i(www%doublehopdiagflag(:),www%numconfig*ii,www%configsperproc(:)*ii,&
www%maxconfigsperproc*ii)
ii=www%maxnumsinglehops
call mpiallgather_i(www%singlehop(:,:),www%numconfig*ii,www%configsperproc(:)*ii,&
www%maxconfigsperproc*ii)
call mpiallgather_i(www%singlehopwalkstart(:,:),www%numconfig*ii,www%configsperproc(:)*ii,&
www%maxconfigsperproc*ii)
call mpiallgather_i(www%singlehopwalkend(:,:),www%numconfig*ii,www%configsperproc(:)*ii,&
www%maxconfigsperproc*ii)
ii=www%maxnumdoublehops
call mpiallgather_i(www%doublehop(:,:),www%numconfig*ii,www%configsperproc(:)*ii,&
www%maxconfigsperproc*ii)
call mpiallgather_i(www%doublehopwalkstart(:,:),www%numconfig*ii,www%configsperproc(:)*ii,&
www%maxconfigsperproc*ii)
call mpiallgather_i(www%doublehopwalkend(:,:),www%numconfig*ii,www%configsperproc(:)*ii,&
www%maxconfigsperproc*ii)
endif
!!$ numsinglehopsbyproc(:)=0; numdoublehopsbyproc(:)=0
!!$
!!$ do iconfig=www%botconfig,www%topconfig
!!$ numsinglehopsbyproc(:)=numsinglehopsbyproc(:) + &
!!$ (www%lastsinglehopbyproc(:,iconfig)-www%firstsinglehopbyproc(:,iconfig)+1)
!!$ numdoublehopsbyproc(:)=numdoublehopsbyproc(:) + &
!!$ (www%lastdoublehopbyproc(:,iconfig)-www%firstdoublehopbyproc(:,iconfig)+1)
!!$ enddo
!!$
!!$ call mpibarrier()
!!$ if (myrank.eq.1) then
!!$ print *, "HOPS BY PROC ON PROCESSOR 1 :::::::::::::::::::::::"
!!$ print *, " singles:"
!!$ write(*,'(I5,A2,1000I7)') myrank,": ",numsinglehopsbyproc(:)/1000
!!$ print *, " doubles:"
!!$ write(*,'(I5,A2,1000I7)') myrank,": ",numdoublehopsbyproc(:)/1000
!!$ print *
!!$ endif
call mpibarrier()
#endif
OFLWR "GOT HOPS: "
WRFL " Single hops this processor ",totsinglehops, " of ", totsinglewalks
WRFL " Double hops this processor ",totdoublehops, " of ", totdoublewalks; CFL
allsinglehops=totsinglehops
allsinglewalks=totsinglewalks
alldoublehops=totdoublehops
alldoublewalks=totdoublewalks
#ifdef MPIFLAG
if (www%sparseconfigflag.ne.0) then
call mympii8reduceone(allsinglehops); call mympii8reduceone(alldoublehops)
call mympii8reduceone(allsinglewalks); call mympii8reduceone(alldoublewalks)
call mympiimax(www%maxnumsinglehops); call mympiimax(www%maxnumdoublehops)
endif
#endif
OFLWR " Single hops total ",allsinglehops, " of ", allsinglewalks
WRFL " Double hops total ",alldoublehops, " of ", alldoublewalks
WRFL " Max single hops ", www%maxnumsinglehops
WRFL " Max double hops ", www%maxnumdoublehops
WRFL; CFL
end subroutine hops
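!! The two passes over the sorted walk lists in hops() amount to run-length
!! grouping: consecutive walks sharing a target configuration become one
!! "hop" with first/last walk indices. A Python sketch of that grouping
!! (my names; 1-based inclusive walk ranges, as in the Fortran):

```python
def hops_from_walks(walks):
    """Group a sorted walk list into hops: (target_config, first_walk,
    last_walk) triples with 1-based inclusive ranges, mirroring the
    counting (ii=0) and filling (ii=1) passes of hops()."""
    hops = []
    for i, w in enumerate(walks, start=1):
        if hops and hops[-1][0] == w:
            # same target as the previous walk: extend the current hop
            hops[-1] = (w, hops[-1][1], i)
        else:
            # new target configuration: open a new hop
            hops.append((w, i, i))
    return hops
```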
subroutine set_matsize(www)
use walkmod
implicit none
type(walktype),intent(inout) :: www
if (www%sparseconfigflag.eq.0) then
www%singlematsize=www%numconfig
www%doublematsize=www%numconfig
else
www%singlematsize=www%maxnumsinglehops
www%doublematsize=www%maxnumdoublehops
endif
end subroutine set_matsize
! deallocating doublewalks and singlewalks ahead of time
!subroutine walkdealloc(www)
! use walkmod
! implicit none
! type(walktype) :: www
! deallocate( www%numsinglewalks,www%numsinglediagwalks )
! deallocate( www%numdoublewalks,www%numdoublediagwalks )
! deallocate( www%singlewalk )
! deallocate( www%singlewalkdirphase )
! deallocate( www%singlewalkopspf )
! deallocate( www%doublewalkdirspf )
! deallocate( www%doublewalkdirphase )
! deallocate( www%doublewalk)
!end subroutine walkdealloc
end module walksubmod
State Before: R : Type u_1
inst✝ : CommSemigroup R
a b : R
⊢ IsRegular (a * b) ↔ IsRegular a ∧ IsRegular b

State After: R : Type u_1
inst✝ : CommSemigroup R
a b : R
⊢ IsRegular (a * b) ↔ IsRegular (a * b) ∧ IsRegular (b * a)

Tactic: refine' Iff.trans _ isRegular_mul_and_mul_iff

State Before: R : Type u_1
inst✝ : CommSemigroup R
a b : R
⊢ IsRegular (a * b) ↔ IsRegular (a * b) ∧ IsRegular (b * a)

State After: no goals

Tactic: refine' ⟨fun ab => ⟨ab, by rwa [mul_comm]⟩, fun rab => rab.1⟩

State Before: R : Type u_1
inst✝ : CommSemigroup R
a b : R
ab : IsRegular (a * b)
⊢ IsRegular (b * a)

State After: no goals

Tactic: rwa [mul_comm]
/*
mallocMC: Memory Allocator for Many Core Architectures.
http://www.icg.tugraz.at/project/mvp
Copyright (C) 2012 Institute for Computer Graphics and Vision,
Graz University of Technology
Copyright (C) 2014-2016 Institute of Radiation Physics,
Helmholtz-Zentrum Dresden - Rossendorf
Author(s): Markus Steinberger - steinberger ( at ) icg.tugraz.at
Rene Widera - r.widera ( at ) hzdr.de
Axel Huebl - a.huebl ( at ) hzdr.de
Carlchristian Eckert - c.eckert ( at ) hzdr.de
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
*/
#pragma once
#include <cstdio>
#include <boost/cstdint.hpp> /* uint32_t */
#include <iostream>
#include <string>
#include <cassert>
#include <stdexcept>
#include <boost/mpl/bool.hpp>
#include "../mallocMC_utils.hpp"
#include "Scatter.hpp"
namespace mallocMC{
namespace CreationPolicies{
namespace ScatterKernelDetail{
template <typename T_Allocator>
__global__ void initKernel(T_Allocator* heap, void* heapmem, size_t memsize){
heap->pool = heapmem;
heap->initDeviceFunction(heapmem, memsize);
}
template < typename T_Allocator >
__global__ void getAvailableSlotsKernel(T_Allocator* heap, size_t slotSize, unsigned* slots){
int gid = threadIdx.x + blockIdx.x*blockDim.x;
int nWorker = gridDim.x * blockDim.x;
unsigned temp = heap->getAvailaibleSlotsDeviceFunction(slotSize, gid, nWorker);
if(temp) atomicAdd(slots, temp);
}
template <typename T_Allocator>
__global__ void finalizeKernel(T_Allocator* heap){
heap->finalizeDeviceFunction();
}
} //namespace ScatterKernelDetail
template<class T_Config, class T_Hashing>
class Scatter
{
public:
typedef T_Config HeapProperties;
typedef T_Hashing HashingProperties;
struct Properties : HeapProperties, HashingProperties{};
typedef boost::mpl::bool_<true> providesAvailableSlots;
private:
typedef boost::uint32_t uint32;
/** Allow for a hierarchical validation of parameters:
*
* shipped default-parameters (in the inherited struct) have lowest precedence.
* They will be overridden by a given configuration struct. However, even the
* given configuration struct can be overridden by compile-time command line
* parameters (e.g. -DMALLOCMC_CP_SCATTER_PAGESIZE=1024)
*
* default-struct < template-struct < command-line parameter
*/
#ifndef MALLOCMC_CP_SCATTER_PAGESIZE
#define MALLOCMC_CP_SCATTER_PAGESIZE static_cast<uint32>(HeapProperties::pagesize::value)
#endif
BOOST_STATIC_CONSTEXPR uint32 pagesize = MALLOCMC_CP_SCATTER_PAGESIZE;
#ifndef MALLOCMC_CP_SCATTER_ACCESSBLOCKS
#define MALLOCMC_CP_SCATTER_ACCESSBLOCKS static_cast<uint32>(HeapProperties::accessblocks::value)
#endif
BOOST_STATIC_CONSTEXPR uint32 accessblocks = MALLOCMC_CP_SCATTER_ACCESSBLOCKS;
#ifndef MALLOCMC_CP_SCATTER_REGIONSIZE
#define MALLOCMC_CP_SCATTER_REGIONSIZE static_cast<uint32>(HeapProperties::regionsize::value)
#endif
BOOST_STATIC_CONSTEXPR uint32 regionsize = MALLOCMC_CP_SCATTER_REGIONSIZE;
#ifndef MALLOCMC_CP_SCATTER_WASTEFACTOR
#define MALLOCMC_CP_SCATTER_WASTEFACTOR static_cast<uint32>(HeapProperties::wastefactor::value)
#endif
BOOST_STATIC_CONSTEXPR uint32 wastefactor = MALLOCMC_CP_SCATTER_WASTEFACTOR;
#ifndef MALLOCMC_CP_SCATTER_RESETFREEDPAGES
#define MALLOCMC_CP_SCATTER_RESETFREEDPAGES static_cast<bool>(HeapProperties::resetfreedpages::value)
#endif
BOOST_STATIC_CONSTEXPR bool resetfreedpages = MALLOCMC_CP_SCATTER_RESETFREEDPAGES;
public:
BOOST_STATIC_CONSTEXPR uint32 _pagesize = pagesize;
BOOST_STATIC_CONSTEXPR uint32 _accessblocks = accessblocks;
BOOST_STATIC_CONSTEXPR uint32 _regionsize = regionsize;
BOOST_STATIC_CONSTEXPR uint32 _wastefactor = wastefactor;
BOOST_STATIC_CONSTEXPR bool _resetfreedpages = resetfreedpages;
private:
#if _DEBUG || ANALYSEHEAP
public:
#endif
//BOOST_STATIC_CONSTEXPR uint32 minChunkSize0 = pagesize/(32*32);
BOOST_STATIC_CONSTEXPR uint32 minChunkSize1 = 0x10;
BOOST_STATIC_CONSTEXPR uint32 HierarchyThreshold = (pagesize - 2*sizeof(uint32))/33;
BOOST_STATIC_CONSTEXPR uint32 minSegmentSize = 32*minChunkSize1 + sizeof(uint32);
BOOST_STATIC_CONSTEXPR uint32 tmp_maxOPM = minChunkSize1 > HierarchyThreshold ? 0 : (pagesize + (minSegmentSize-1)) / minSegmentSize;
BOOST_STATIC_CONSTEXPR uint32 maxOnPageMasks = 32 > tmp_maxOPM ? tmp_maxOPM : 32;
#ifndef MALLOCMC_CP_SCATTER_HASHINGK
#define MALLOCMC_CP_SCATTER_HASHINGK static_cast<uint32>(HashingProperties::hashingK::value)
#endif
BOOST_STATIC_CONSTEXPR uint32 hashingK = MALLOCMC_CP_SCATTER_HASHINGK;
#ifndef MALLOCMC_CP_SCATTER_HASHINGDISTMP
#define MALLOCMC_CP_SCATTER_HASHINGDISTMP static_cast<uint32>(HashingProperties::hashingDistMP::value)
#endif
BOOST_STATIC_CONSTEXPR uint32 hashingDistMP = MALLOCMC_CP_SCATTER_HASHINGDISTMP;
#ifndef MALLOCMC_CP_SCATTER_HASHINGDISTWP
#define MALLOCMC_CP_SCATTER_HASHINGDISTWP static_cast<uint32>(HashingProperties::hashingDistWP::value)
#endif
BOOST_STATIC_CONSTEXPR uint32 hashingDistWP = MALLOCMC_CP_SCATTER_HASHINGDISTWP;
#ifndef MALLOCMC_CP_SCATTER_HASHINGDISTWPREL
#define MALLOCMC_CP_SCATTER_HASHINGDISTWPREL static_cast<uint32>(HashingProperties::hashingDistWPRel::value)
#endif
BOOST_STATIC_CONSTEXPR uint32 hashingDistWPRel = MALLOCMC_CP_SCATTER_HASHINGDISTWPREL;
/**
* Page Table Entry struct
* The PTE holds basic information about each page
*/
struct PTE
{
uint32 chunksize;
uint32 count;
uint32 bitmask;
__device__ void init()
{
chunksize = 0;
count = 0;
bitmask = 0;
}
};
/**
* Page struct
* The page struct is used to access the data on the page more efficiently
 * and to clear the area on the page which might hold bit fields later on
*/
struct PAGE
{
char data[pagesize];
/**
 * The page's init method
 * This method initializes the region on the page which might hold
 * bit fields when the page is used for a small chunk size
*/
__device__ void init()
{
//clear the entire region which can hold bit fields
uint32* write = (uint32*)(data + pagesize - (int)(sizeof(uint32)*maxOnPageMasks));
while(write < (uint32*)(data + pagesize))
*write++ = 0;
}
};
// the data used by the allocator
volatile PTE* _ptes;
volatile uint32* _regions;
PAGE* _page;
uint32 _numpages;
size_t _memsize;
uint32 _pagebasedMutex;
volatile uint32 _firstFreePageBased;
volatile uint32 _firstfreeblock;
/**
 * randInit should create a random offset which can be used
* as the initial position in a bitfield
*/
__device__ inline uint32 randInit()
{
//start with the laneid offset
return laneid();
}
/**
 * nextspot delivers the next free spot in a bitfield.
 * It searches for the next unset bit to the left of spot and
 * returns its offset; if there are no unset bits to the left,
 * it wraps around
* @param bitfield the bitfield to be searched for
* @param spot the spot from which to search to the left
* @param spots number of bits that can be used
* @return next free spot in the bitfield
*/
__device__ inline uint32 nextspot(uint32 bitfield, uint32 spot, uint32 spots)
{
//wrap around the bitfields from the current spot to the left
bitfield = ((bitfield >> (spot + 1)) | (bitfield << (spots - (spot + 1))))&((1<<spots)-1);
//compute the step from the current spot in the bitfield
uint32 step = __ffs(~bitfield);
//and return the new spot
return (spot + step) % spots;
}
/**
* onPageMasksPosition returns a pointer to the beginning of the onpagemasks inside a page.
* @param page the page that holds the masks
 * @param nMasks the number of hierarchical page tables (bitfields) that are used inside this mask.
* @return pointer to the first address inside the page that holds metadata bitfields.
*/
__device__ inline uint32* onPageMasksPosition(uint32 page, uint32 nMasks){
return (uint32*)(_page[page].data + pagesize - (int)sizeof(uint32)*nMasks);
}
/**
 * usespot finds one free spot in the bitfield, marks it and returns its offset
 * @param bitfield pointer to the bitfield to use
 * @param spots overall number of spots the bitfield is responsible for
 * @return if there is a free spot it returns the spot's offset, otherwise -1
*/
__device__ inline int usespot(uint32 *bitfield, uint32 spots)
{
//get first spot
uint32 spot = randInit() % spots;
for(;;)
{
uint32 mask = 1 << spot;
uint32 old = atomicOr(bitfield, mask);
if( (old & mask) == 0)
return spot;
// note: __popc(old) == spots should be sufficient,
//but if someone corrupts the memory we end up in an endless loop in here...
if(__popc(old) >= spots)
return -1;
spot = nextspot(old, spot, spots);
}
}
/**
* calcAdditionalChunks determines the number of chunks that are contained in the last segment of a hierarchical page
*
* The additional checks are necessary to ensure correct results for very large pages and small chunksizes
*
* @param fullsegments the number of segments that can be completely filled in a page. This may NEVER be bigger than 32!
* @param segmentsize the number of bytes that are contained in a completely filled segment (32 chunks)
* @param chunksize the chosen allocation size within the page
* @return the number of additional chunks that will not fit in one of the fullsegments. For any correct input, this number is smaller than 32
*/
__device__ inline uint32 calcAdditionalChunks(uint32 fullsegments, uint32 segmentsize, uint32 chunksize){
if(fullsegments != 32){
return max(0,(int)pagesize - (int)fullsegments*segmentsize - (int)sizeof(uint32))/chunksize;
}else
return 0;
}
/**
* addChunkHierarchy finds a free chunk on a page which uses bit fields on the page
* @param chunksize the chunksize of the page
 * @param fullsegments the number of full segments on the page (at most 32 bits on the page)
 * @param additional_chunks the number of additional chunks in the last segment (less than 32 bits on the page)
* @param page the page to use
* @return pointer to a free chunk on the page, 0 if we were unable to obtain a free chunk
*/
__device__ inline void* addChunkHierarchy(uint32 chunksize, uint32 fullsegments, uint32 additional_chunks, uint32 page)
{
uint32 segments = fullsegments + (additional_chunks > 0 ? 1 : 0);
uint32 spot = randInit() % segments;
uint32 mask = _ptes[page].bitmask;
if((mask & (1 << spot)) != 0)
spot = nextspot(mask, spot, segments);
uint32 tries = segments - __popc(mask);
uint32* onpagemasks = onPageMasksPosition(page,segments);
for(uint32 i = 0; i < tries; ++i)
{
int hspot = usespot(onpagemasks + spot, spot < fullsegments ? 32 : additional_chunks);
if(hspot != -1)
return _page[page].data + (32*spot + hspot)*chunksize;
else
atomicOr((uint32*)&_ptes[page].bitmask, 1 << spot);
spot = nextspot(mask, spot, segments);
}
return 0;
}
/**
* addChunkNoHierarchy finds a free chunk on a page which uses the bit fields of the pte only
* @param chunksize the chunksize of the page
* @param page the page to use
* @param spots the number of chunks which fit on the page
* @return pointer to a free chunk on the page, 0 if we were unable to obtain a free chunk
*/
__device__ inline void* addChunkNoHierarchy(uint32 chunksize, uint32 page, uint32 spots)
{
int spot = usespot((uint32*)&_ptes[page].bitmask, spots);
if(spot == -1)
return 0; //that should be impossible :)
return _page[page].data + spot*chunksize;
}
/**
* tryUsePage tries to use the page for the allocation request
* @param page the page to use
* @param chunksize the chunksize of the page
* @return pointer to a free chunk on the page, 0 if we were unable to obtain a free chunk
*/
__device__ inline void* tryUsePage(uint32 page, uint32 chunksize)
{
void* chunk_ptr = NULL;
//increase the fill level
uint32 filllevel = atomicAdd((uint32*)&(_ptes[page].count), 1);
//recheck chunk size (the page could have been freed in the meantime...)
if(!resetfreedpages || _ptes[page].chunksize == chunksize)
{
if(chunksize <= HierarchyThreshold)
{
//the page holds more chunks than the pte's single bitfield can cover, so use on-page bit fields
uint32 segmentsize = chunksize*32 + sizeof(uint32);
uint32 fullsegments = min(32,pagesize / segmentsize);
uint32 additional_chunks = calcAdditionalChunks(fullsegments, segmentsize, chunksize);
if(filllevel < fullsegments * 32 + additional_chunks)
chunk_ptr = addChunkHierarchy(chunksize, fullsegments, additional_chunks, page);
}
else
{
uint32 chunksinpage = min(pagesize / chunksize, 32);
if(filllevel < chunksinpage)
chunk_ptr = addChunkNoHierarchy(chunksize, page, chunksinpage);
}
}
//this one is full/not usable
if(chunk_ptr == NULL)
atomicSub((uint32*)&(_ptes[page].count), 1);
return chunk_ptr;
}
/**
* allocChunked tries to allocate the demanded number of bytes on one of the pages
* @param bytes the number of bytes to allocate
* @return pointer to a free chunk on a page, 0 if we were unable to obtain a free chunk
*/
__device__ void* allocChunked(uint32 bytes)
{
uint32 pagesperblock = _numpages/accessblocks;
uint32 reloff = warpSize*bytes / pagesize;
uint32 startpage = (bytes*hashingK + hashingDistMP*smid() + (hashingDistWP+hashingDistWPRel*reloff)*warpid() ) % pagesperblock;
uint32 maxchunksize = min(pagesize,wastefactor*bytes);
uint32 startblock = _firstfreeblock;
uint32 ptetry = startpage + startblock*pagesperblock;
uint32 checklevel = regionsize*3/4;
for(uint32 finder = 0; finder < 2; ++finder)
{
for(uint32 b = startblock; b < accessblocks; ++b)
{
while(ptetry < (b+1)*pagesperblock)
{
uint32 region = ptetry/regionsize;
uint32 regionfilllevel = _regions[region];
if(regionfilllevel < checklevel )
{
for( ; ptetry < (region+1)*regionsize; ++ptetry)
{
uint32 chunksize = _ptes[ptetry].chunksize;
if(chunksize >= bytes && chunksize <= maxchunksize)
{
void * res = tryUsePage(ptetry, chunksize);
if(res != 0) return res;
}
else if(chunksize == 0)
{
//let's open up a new page
//it is already padded
uint32 new_chunksize = max(bytes,minChunkSize1);
uint32 beforechunksize = atomicCAS((uint32*)&_ptes[ptetry].chunksize, 0, new_chunksize);
if(beforechunksize == 0)
{
void * res = tryUsePage(ptetry, new_chunksize);
if(res != 0) return res;
}
else if(beforechunksize >= bytes && beforechunksize <= maxchunksize)
{
//someone else acquired the page, but we can also use it
void * res = tryUsePage(ptetry, beforechunksize);
if(res != 0) return res;
}
}
}
//could not allocate in this region; record that it is filling up
if(regionfilllevel + 1 <= regionsize)
atomicMax((uint32*)(_regions + region), regionfilllevel+1);
}
else
ptetry += regionsize;
//ptetry = (region+1)*regionsize;
}
//randomize the thread writing the info
//if(warpid() + laneid() == 0)
if(b > startblock)
_firstfreeblock = b;
}
//we are really full :/ so let's search every page for a spot!
startblock = 0;
checklevel = regionsize + 1;
ptetry = 0;
}
return 0;
}
/**
* deallocChunked frees the chunk on the page and updates all data accordingly
* @param mem pointer to the chunk
* @param page the page the chunk is on
* @param chunksize the chunksize used for the page
*/
__device__ void deallocChunked(void* mem, uint32 page, uint32 chunksize)
{
uint32 inpage_offset = ((char*)mem - _page[page].data);
if(chunksize <= HierarchyThreshold)
{
//one more level in hierarchy
uint32 segmentsize = chunksize*32 + sizeof(uint32);
uint32 fullsegments = min(32,pagesize / segmentsize);
uint32 additional_chunks = calcAdditionalChunks(fullsegments,segmentsize,chunksize);
uint32 segment = inpage_offset / (chunksize*32);
uint32 withinsegment = (inpage_offset - segment*(chunksize*32))/chunksize;
//mark it as free
uint32 nMasks = fullsegments + (additional_chunks > 0 ? 1 : 0);
uint32* onpagemasks = onPageMasksPosition(page,nMasks);
uint32 old = atomicAnd(onpagemasks + segment, ~(1 << withinsegment));
// always do this, since it might fail due to a race-condition with addChunkHierarchy
atomicAnd((uint32*)&_ptes[page].bitmask, ~(1 << segment));
}
else
{
uint32 segment = inpage_offset / chunksize;
atomicAnd((uint32*)&_ptes[page].bitmask, ~(1 << segment));
}
//reduce fill level to mark the chunk as free
uint32 oldfilllevel = atomicSub((uint32*)&_ptes[page].count, 1);
if(resetfreedpages)
{
if(oldfilllevel == 1)
{
//this page now got free!
// -> try lock it
uint32 old = atomicCAS((uint32*)&_ptes[page].count, 0, pagesize);
if(old == 0)
{
//clean the bits for the hierarchy
_page[page].init();
//remove chunk information
_ptes[page].chunksize = 0;
__threadfence();
//unlock it
atomicSub((uint32*)&_ptes[page].count, pagesize);
}
}
}
//meta information counters ... should not be changed by too many threads, so..
if(oldfilllevel == pagesize / 2 / chunksize)
{
uint32 region = page / regionsize;
_regions[region] = 0;
uint32 block = region * regionsize * accessblocks / _numpages ;
if(warpid() + laneid() == 0)
atomicMin((uint32*)&_firstfreeblock, block);
}
}
/**
 * markpages marks a fixed number of pages as used
* @param startpage first page to mark
* @param pages number of pages to mark
* @param bytes number of overall bytes to mark pages for
* @return true on success, false if one of the pages is not free
*/
__device__ bool markpages(uint32 startpage, uint32 pages, uint32 bytes)
{
int abortpage = -1;
for(uint32 trypage = startpage; trypage < startpage + pages; ++trypage)
{
uint32 old = atomicCAS((uint32*)&_ptes[trypage].chunksize, 0, bytes);
if(old != 0)
{
abortpage = trypage;
break;
}
}
if(abortpage == -1)
return true;
//roll back the pages we already marked
for(uint32 trypage = startpage; trypage < (uint32)abortpage; ++trypage)
atomicCAS((uint32*)&_ptes[trypage].chunksize, bytes, 0);
return false;
}
/**
 * allocPageBasedSingleRegion tries to allocate the demanded number of bytes on a contiguous sequence of pages
* @param startpage first page to be used
* @param endpage last page to be used
* @param bytes number of overall bytes to mark pages for
* @return pointer to the first page to use, 0 if we were unable to use all the requested pages
*/
__device__ void* allocPageBasedSingleRegion(uint32 startpage, uint32 endpage, uint32 bytes)
{
uint32 pagestoalloc = divup(bytes, pagesize);
uint32 freecount = 0;
bool left_free = false;
for(uint32 search_page = startpage+1; search_page > endpage; )
{
--search_page;
if(_ptes[search_page].chunksize == 0)
{
if(++freecount == pagestoalloc)
{
//try filling it up
if(markpages(search_page, pagestoalloc, bytes))
{
//mark that we filled up everything up to here
if(!left_free)
atomicCAS((uint32*)&_firstFreePageBased, startpage, search_page - 1);
return _page[search_page].data;
}
}
}
else
{
left_free = true;
freecount = 0;
}
}
return 0;
}
/**
 * allocPageBasedSingle tries to allocate the demanded number of bytes on a contiguous sequence of pages
* @param bytes number of overall bytes to mark pages for
* @return pointer to the first page to use, 0 if we were unable to use all the requested pages
* @pre only a single thread of a warp is allowed to call the function concurrently
*/
__device__ void* allocPageBasedSingle(uint32 bytes)
{
//acquire mutex
while(atomicExch(&_pagebasedMutex,1) != 0);
//search for free spot from the back
uint32 spage = _firstFreePageBased;
void* res = allocPageBasedSingleRegion(spage, 0, bytes);
if(res == 0)
//also check the rest of the pages
res = allocPageBasedSingleRegion(_numpages, spage, bytes);
//free mutex
atomicExch(&_pagebasedMutex,0);
return res;
}
/**
 * allocPageBased tries to allocate the demanded number of bytes on a contiguous sequence of pages
* @param bytes number of overall bytes to mark pages for
* @return pointer to the first page to use, 0 if we were unable to use all the requested pages
*/
__device__ void* allocPageBased(uint32 bytes)
{
//this is rather slow, but we don't expect this to happen often anyway
//only one thread per warp can acquire the mutex
void* res = 0;
for(
#if(__CUDACC_VER_MAJOR__ >= 9)
unsigned int __mask = __ballot_sync(0xFFFFFFFF, 1),
#else
unsigned int __mask = __ballot(1),
#endif
__num = __popc(__mask),
__lanemask = mallocMC::lanemask_lt(),
__local_id = __popc(__lanemask & __mask),
__active = 0;
__active < __num;
++__active
)
if (__active == __local_id)
res = allocPageBasedSingle(bytes);
return res;
}
/**
* deallocPageBased frees the memory placed on a sequence of pages
* @param mem pointer to the first page
* @param page the first page
* @param bytes the number of bytes to be freed
*/
__device__ void deallocPageBased(void* mem, uint32 page, uint32 bytes)
{
uint32 pages = divup(bytes,pagesize);
for(uint32 p = page; p < page+pages; ++p)
_page[p].init();
__threadfence();
for(uint32 p = page; p < page+pages; ++p)
atomicCAS((uint32*)&_ptes[p].chunksize, bytes, 0);
atomicMax((uint32*)&_firstFreePageBased, page+pages-1);
}
public:
/**
* create allocates the requested number of bytes via the heap. Coalescing has to be done before by another policy.
* @param bytes number of bytes to allocate
* @return pointer to the allocated memory
*/
__device__ void* create(uint32 bytes)
{
if(bytes == 0)
return 0;
//take care of padding
//bytes = (bytes + dataAlignment - 1) & ~(dataAlignment-1); // in alignment-policy
if(bytes < pagesize)
//chunk based
return allocChunked(bytes);
else
//allocate a range of pages
return allocPageBased(bytes);
}
/**
 * destroy frees the memory regions previously allocated via create
 * @param mem pointer to the memory region to free
*/
__device__ void destroy(void* mem)
{
if(mem == 0)
return;
//let's see which page we are on
uint32 page = ((char*)mem - (char*)_page)/pagesize;
uint32 chunksize = _ptes[page].chunksize;
//is the pointer the beginning of a chunk?
uint32 inpage_offset = ((char*)mem - _page[page].data);
uint32 block = inpage_offset/chunksize;
uint32 inblockoffset = inpage_offset - block*chunksize;
if(inblockoffset != 0)
{
uint32* counter = (uint32*)(_page[page].data + block*chunksize);
//coalesced mem free
uint32 old = atomicSub(counter, 1);
if(old != 1)
return;
mem = (void*) counter;
}
if(chunksize < pagesize)
deallocChunked(mem, page, chunksize);
else
deallocPageBased(mem, page, chunksize);
}
/**
 * init initializes the heap data structures.
 * The init method must be called before the heap can be used. The method can be called
 * with an arbitrary number of threads, which will increase the init's efficiency
* @param memory pointer to the memory used for the heap
* @param memsize size of the memory in bytes
*/
__device__ void initDeviceFunction(void* memory, size_t memsize)
{
uint32 linid = threadIdx.x + blockDim.x*(threadIdx.y + threadIdx.z*blockDim.y);
uint32 threads = blockDim.x*blockDim.y*blockDim.z;
uint32 linblockid = blockIdx.x + gridDim.x*(blockIdx.y + blockIdx.z*gridDim.y);
uint32 blocks = gridDim.x*gridDim.y*gridDim.z;
linid = linid + linblockid*threads;
uint32 numregions = ((unsigned long long)memsize)/( ((unsigned long long)regionsize)*(sizeof(PTE)+pagesize)+sizeof(uint32));
uint32 numpages = numregions*regionsize;
//pointer is copied (copy is called page)
PAGE* page = (PAGE*)(memory);
//sec check for alignment
//copy is checked
//PointerEquivalent alignmentstatus = ((PointerEquivalent)page) & (16 -1);
//if(alignmentstatus != 0)
//{
// if(linid == 0){
// printf("c Before:\n");
// printf("c dataAlignment: %d\n",16);
// printf("c Alignmentstatus: %d\n",alignmentstatus);
// printf("c size_t memsize %llu byte\n", memsize);
// printf("c void *memory %p\n", page);
// }
// //copy is adjusted, potentially pointer to higher address now.
// page =(PAGE*)(((PointerEquivalent)page) + 16 - alignmentstatus);
// if(linid == 0) printf("c Heap Warning: memory to use not 16 byte aligned...\n");
//}
PTE* ptes = (PTE*)(page + numpages);
uint32* regions = (uint32*)(ptes + numpages);
//sec check for mem size
//this check refers to the original memory-pointer, which was not adjusted!
if( (void*)(regions + numregions) > (((char*)memory) + memsize) )
{
--numregions;
numpages = min(numregions*regionsize,numpages);
if(linid == 0) printf("c Heap Warning: needed to reduce number of regions to stay within memory limit\n");
}
//if(linid == 0) printf("Heap info: wasting %d bytes\n",(((POINTEREQUIVALENT)memory) + memsize) - (POINTEREQUIVALENT)(regions + numregions));
//if(linid == 0 && alignmentstatus != 0){
// printf("c Was shrinked automatically to:\n");
// printf("c size_t memsize %llu byte\n", memsize);
// printf("c void *memory %p\n", page);
//}
threads = threads*blocks;
for(uint32 i = linid; i < numpages; i+= threads)
{
ptes[i].init();
page[i].init();
}
for(uint32 i = linid; i < numregions; i+= threads)
regions[i] = 0;
if(linid == 0)
{
_memsize = memsize;
_numpages = numpages;
_ptes = (volatile PTE*)ptes;
_page = page;
_regions = regions;
_firstfreeblock = 0;
_pagebasedMutex = 0;
_firstFreePageBased = numpages-1;
if( (char*) (_page+numpages) > (char*)(memory) + memsize)
printf("error in heap alloc: numpages too high\n");
}
}
__device__ bool isOOM(void* p, size_t s){
// one thread that requested memory returned null
return s && (p == NULL);
}
template < typename T_DeviceAllocator >
static void* initHeap( T_DeviceAllocator* heap, void* pool, size_t memsize){
if( pool == NULL && memsize != 0 )
{
throw std::invalid_argument(
"Scatter policy cannot use NULL for non-empty memory pools. "
"Maybe you are using an incompatible ReservePoolPolicy or AlignmentPolicy."
);
}
ScatterKernelDetail::initKernel<<<1,256>>>(heap, pool, memsize);
return heap;
}
/** counts how many elements of a size fit inside a given page
*
* Examines a (potentially already used) page to find how many elements
* of size chunksize still fit on the page. This includes hierarchically
* organized pages and empty pages. The algorithm determines the number
* of chunks in the page in a manner similar to the allocation algorithm
* of CreationPolicies::Scatter.
*
* @param page the number of the page to examine. The page needs to be
* formatted with a chunksize and potentially a hierarchy.
* @param chunksize the size of element that should be placed inside the
* page. This size must be appropriate to the formatting of the
* page.
*/
__device__ unsigned countFreeChunksInPage(uint32 page, uint32 chunksize){
uint32 filledChunks = _ptes[page].count;
if(chunksize <= HierarchyThreshold)
{
uint32 segmentsize = chunksize*32 + sizeof(uint32); //each segment can hold 32 2nd-level chunks
uint32 fullsegments = min(32,pagesize / segmentsize); //there might be space for more than 32 segments with 32 2nd-level chunks
uint32 additional_chunks = calcAdditionalChunks(fullsegments, segmentsize, chunksize);
uint32 level2Chunks = fullsegments * 32 + additional_chunks;
return level2Chunks - filledChunks;
}else{
uint32 chunksinpage = min(pagesize / chunksize, 32); //without hierarchy, there can not be more than 32 chunks
return chunksinpage - filledChunks;
}
}
/** counts the number of available slots inside the heap
*
* Searches the heap for all possible locations of an element with size
* slotSize. The used traversal algorithms are similar to the allocation
* strategy of CreationPolicies::Scatter, to ensure comparable results.
* There are 3 different algorithms, based on the size of the requested
* slot: 1 slot spans over multiple pages, 1 slot fits in one chunk
* within a page, 1 slot fits in a fraction of a chunk.
*
* @param slotSize the amount of bytes that a single slot accounts for
 * @param gid the id of the thread. This id does not have to correspond
 *        with threadIdx.x, but there must be a continuous range of ids
 *        beginning from 0.
 * @param stride the stride should be equal to the number of different
 *        gids (and therefore of value max(gid)+1)
*/
__device__ unsigned getAvailaibleSlotsDeviceFunction(size_t slotSize, int gid, int stride)
{
unsigned slotcount = 0;
if(slotSize < pagesize){ // multiple slots per page
for(uint32 currentpage = gid; currentpage < _numpages; currentpage += stride){
uint32 maxchunksize = min(pagesize, wastefactor*(uint32)slotSize);
uint32 region = currentpage/regionsize;
uint32 regionfilllevel = _regions[region];
uint32 chunksize = _ptes[currentpage].chunksize;
if(chunksize >= slotSize && chunksize <= maxchunksize){ //how many chunks left? (each chunk is big enough)
slotcount += countFreeChunksInPage(currentpage, chunksize);
}else if(chunksize == 0){
chunksize = max((uint32)slotSize, minChunkSize1); //ensure minimum chunk size
slotcount += countFreeChunksInPage(currentpage, chunksize); //how many chunks fit in one page?
}else{
continue; //the chunks on this page are too small for the request :(
}
}
}else{ // 1 slot needs multiple pages
if(gid > 0) return 0; //do this serially
uint32 pagestoalloc = divup((uint32)slotSize, pagesize);
uint32 freecount = 0;
for(uint32 currentpage = _numpages; currentpage > 0;){ //this already includes all superblocks
--currentpage;
if(_ptes[currentpage].chunksize == 0){
if(++freecount == pagestoalloc){
freecount = 0;
++slotcount;
}
}else{ // the sequence of free pages was interrupted
freecount = 0;
}
}
}
return slotcount;
}
/** Count, how many elements can be allocated at maximum
*
* Takes an input size and determines, how many elements of this size can
* be allocated with the CreationPolicy Scatter. This will return the
* maximum number of free slots of the indicated size. It is not
* guaranteed where these slots are (regarding fragmentation). Therefore,
* the practically usable number of slots might be smaller. This function
 * is executed in parallel. Speedup can possibly be increased by a higher
 * number of parallel workers.
*
* @param slotSize the size of allocatable elements to count
* @param obj a reference to the allocator instance (host-side)
*/
public:
template<typename T_DeviceAllocator>
static unsigned getAvailableSlotsHost(size_t const slotSize, T_DeviceAllocator* heap){
unsigned h_slots = 0;
unsigned* d_slots;
cudaMalloc((void**) &d_slots, sizeof(unsigned));
cudaMemcpy(d_slots, &h_slots, sizeof(unsigned), cudaMemcpyHostToDevice);
ScatterKernelDetail::getAvailableSlotsKernel<<<64,256>>>(heap, slotSize, d_slots);
cudaMemcpy(&h_slots, d_slots, sizeof(unsigned), cudaMemcpyDeviceToHost);
cudaFree(d_slots);
return h_slots;
}
/** Count, how many elements can be allocated at maximum
*
* Takes an input size and determines, how many elements of this size can
* be allocated with the CreationPolicy Scatter. This will return the
* maximum number of free slots of the indicated size. It is not
* guaranteed where these slots are (regarding fragmentation). Therefore,
* the practically usable number of slots might be smaller. This function
* is executed separately for each warp and does not cooperate with other
* warps. Maximum speed is expected if every thread in the warp executes
* the function.
* Uses 256 byte of shared memory.
*
* @param slotSize the size of allocatable elements to count
*/
__device__ unsigned getAvailableSlotsAccelerator(size_t slotSize){
int linearId;
int wId = threadIdx.x >> 5; //do not use warpid-function, since this value is not guaranteed to be stable across warp lifetime
#if(__CUDACC_VER_MAJOR__ >= 9)
uint32 activeThreads = __popc(__ballot_sync(0xFFFFFFFF, true));
#else
uint32 activeThreads = __popc(__ballot(true));
#endif
__shared__ uint32 activePerWarp[32]; //32 is the maximum number of warps in a block
__shared__ unsigned warpResults[32];
warpResults[wId] = 0;
activePerWarp[wId] = 0;
// the active threads obtain an id from 0 to activeThreads-1
if(slotSize>0) linearId = atomicAdd(&activePerWarp[wId], 1);
else return 0;
//printf("Block %d, id %d: activeThreads=%d linearId=%d\n",blockIdx.x,threadIdx.x,activeThreads,linearId);
unsigned temp = getAvailaibleSlotsDeviceFunction(slotSize, linearId, activeThreads);
if(temp) atomicAdd(&warpResults[wId], temp);
__threadfence_block();
return warpResults[wId];
}
static std::string classname(){
std::stringstream ss;
ss << "Scatter[";
ss << pagesize << ",";
ss << accessblocks << ",";
ss << regionsize << ",";
ss << wastefactor << ",";
ss << resetfreedpages << ",";
ss << hashingK << ",";
ss << hashingDistMP << ",";
ss << hashingDistWP << ",";
ss << hashingDistWPRel<< "]";
return ss.str();
}
};
} //namespace CreationPolicies
} //namespace mallocMC
(*<*)
theory CJDDLplus
imports Main
begin
nitpick_params[user_axioms=true, show_all, expect=genuine, format = 3]
(*>*)
section \<open>Introduction\<close>
text\<open>\noindent{We present an encoding of an ambitious ethical theory ---Alan Gewirth's "Principle of Generic Consistency (PGC)"---
in Isabelle/HOL. The PGC has stirred much attention in philosophy and ethics \<^cite>\<open>"Beyleveld"\<close> and has been proposed as a
potential means to bound the impact of artificial general intelligence (AGI) \<^cite>\<open>"Kornai"\<close>.
With our contribution we make a first, important step towards formally assessing the PGC and its potential applications in AI.
Our formalisation utilises the shallow semantical embedding approach \<^cite>\<open>"J23"\<close>
and adapts a recent embedding of dyadic deontic logic in HOL \<^cite>\<open>"C71"\<close> \<^cite>\<open>"BenzmuellerDDL"\<close>.
}\<close>
section \<open>Semantic Embedding of Carmo and Jones' Dyadic Deontic Logic (DDL) augmented with Kaplanian contexts\<close>
text\<open>\noindent{We introduce a modification of the semantic embedding developed by Benzm\"uller et al. \<^cite>\<open>"C71"\<close> \<^cite>\<open>"BenzmuellerDDL"\<close>
for the Dyadic Deontic Logic originally presented by Carmo and Jones \<^cite>\<open>"CJDDL"\<close>. We extend this embedding
to a two-dimensional semantics as originally presented by David Kaplan \<^cite>\<open>"Kaplan1979"\<close> \<^cite>\<open>"Kaplan1989"\<close>.}\<close>
subsection \<open>Definition of Types\<close>
typedecl w \<comment> \<open> Type for possible worlds (Kaplan's "circumstances of evaluation" or "counterfactual situations") \<close>
typedecl e \<comment> \<open> Type for individuals (entities eligible to become agents) \<close>
typedecl c \<comment> \<open> Type for Kaplanian "contexts of use" \<close>
type_synonym wo = "w\<Rightarrow>bool" \<comment> \<open> contents/propositions are identified with their truth-sets \<close>
type_synonym cwo = "c\<Rightarrow>wo" \<comment> \<open> sentence meaning (Kaplan's "character") is a function from contexts to contents \<close>
type_synonym m = "cwo" \<comment> \<open> we use the letter 'm' for characters (reminiscent of "meaning") \<close>
subsection \<open>Semantic Characterisation of DDL\<close> (*cf. original Carmo and Jones Paper @{cite "CJDDL"} p.290ff*)
subsubsection \<open>Basic Set Operations\<close>
abbreviation subset::"wo\<Rightarrow>wo\<Rightarrow>bool" (infix "\<sqsubseteq>" 46) where "\<alpha> \<sqsubseteq> \<beta> \<equiv> \<forall>w. \<alpha> w \<longrightarrow> \<beta> w"
abbreviation intersection::"wo\<Rightarrow>wo\<Rightarrow>wo" (infixr "\<sqinter>" 48) where "\<alpha> \<sqinter> \<beta> \<equiv> \<lambda>x. \<alpha> x \<and> \<beta> x"
abbreviation union::"wo\<Rightarrow>wo\<Rightarrow>wo" (infixr "\<squnion>" 48) where "\<alpha> \<squnion> \<beta> \<equiv> \<lambda>x. \<alpha> x \<or> \<beta> x"
abbreviation complement::"wo\<Rightarrow>wo" ("\<sim>_"[45]46) where "\<sim>\<alpha> \<equiv> \<lambda>x. \<not>\<alpha> x"
abbreviation instantiated::"wo\<Rightarrow>bool" ("\<I>_"[45]46) where "\<I> \<phi> \<equiv> \<exists>x. \<phi> x"
abbreviation setEq::"wo\<Rightarrow>wo\<Rightarrow>bool" (infix "=\<^sub>s" 46) where "\<alpha> =\<^sub>s \<beta> \<equiv> \<forall>x. \<alpha> x \<longleftrightarrow> \<beta> x"
abbreviation univSet :: "wo" ("\<top>") where "\<top> \<equiv> \<lambda>w. True"
abbreviation emptySet :: "wo" ("\<bottom>") where "\<bottom> \<equiv> \<lambda>w. False"
subsubsection \<open>Set-Theoretic Conditions for DDL\<close>
consts
av::"w\<Rightarrow>wo" \<comment> \<open> set of worlds that are open alternatives (aka. actual versions) of w \<close>
pv::"w\<Rightarrow>wo" \<comment> \<open> set of worlds that are possible alternatives (aka. potential versions) of w \<close>
ob::"wo\<Rightarrow>wo\<Rightarrow>bool" \<comment> \<open> set of propositions which are obligatory in a given context (of type wo) \<close>
axiomatization where
sem_3a: "\<forall>w. \<I>(av w)" and \<comment> \<open> av is serial: in every situation there is always an open alternative \<close>
sem_4a: "\<forall>w. av w \<sqsubseteq> pv w" and \<comment> \<open> open alternatives are possible alternatives \<close>
sem_4b: "\<forall>w. pv w w" and \<comment> \<open> pv is reflexive: every situation is a possible alternative to itself \<close>
sem_5a: "\<forall>X. \<not>(ob X \<bottom>)" and \<comment> \<open> contradictions cannot be obligatory \<close>
sem_5b: "\<forall>X Y Z. (X \<sqinter> Y) =\<^sub>s (X \<sqinter> Z) \<longrightarrow> (ob X Y \<longleftrightarrow> ob X Z)" and
sem_5c: "\<forall>X Y Z. \<I>(X \<sqinter> Y \<sqinter> Z) \<and> ob X Y \<and> ob X Z \<longrightarrow> ob X (Y \<sqinter> Z)" and
sem_5d: "\<forall>X Y Z. (Y \<sqsubseteq> X \<and> ob X Y \<and> X \<sqsubseteq> Z) \<longrightarrow> ob Z ((Z \<sqinter> (\<sim>X)) \<squnion> Y)" and
sem_5e: "\<forall>X Y Z. Y \<sqsubseteq> X \<and> ob X Z \<and> \<I>(Y \<sqinter> Z) \<longrightarrow> ob Y Z"
lemma True nitpick[satisfy] oops \<comment> \<open> model found: axioms are consistent \<close>
subsubsection \<open>Verifying Semantic Conditions\<close>
lemma sem_5b1: "ob X Y \<longrightarrow> ob X (Y \<sqinter> X)" by (metis (no_types, lifting) sem_5b)
lemma sem_5b2: "(ob X (Y \<sqinter> X) \<longrightarrow> ob X Y)" by (metis (no_types, lifting) sem_5b)
lemma sem_5ab: "ob X Y \<longrightarrow> \<I>(X \<sqinter> Y)" by (metis (full_types) sem_5a sem_5b)
lemma sem_5bd1: "Y \<sqsubseteq> X \<and> ob X Y \<and> X \<sqsubseteq> Z \<longrightarrow> ob Z ((\<sim>X) \<squnion> Y)" using sem_5b sem_5d by smt
lemma sem_5bd2: "ob X Y \<and> X \<sqsubseteq> Z \<longrightarrow> ob Z ((Z \<sqinter> (\<sim>X)) \<squnion> Y)" using sem_5b sem_5d by (smt sem_5b1)
lemma sem_5bd3: "ob X Y \<and> X \<sqsubseteq> Z \<longrightarrow> ob Z ((\<sim>X) \<squnion> Y)" by (smt sem_5bd2 sem_5b)
lemma sem_5bd4: "ob X Y \<and> X \<sqsubseteq> Z \<longrightarrow> ob Z ((\<sim>X) \<squnion> (X \<sqinter> Y))" using sem_5bd3 by auto
lemma sem_5bcd: "(ob X Z \<and> ob Y Z) \<longrightarrow> ob (X \<squnion> Y) Z" using sem_5b sem_5c sem_5d oops
(* 5e and 5ab justify the redefinition of O\<langle>\<phi>|\<sigma>\<rangle> as (ob A B) *)
lemma "ob A B \<longleftrightarrow> (\<I>(A \<sqinter> B) \<and> (\<forall>X. X \<sqsubseteq> A \<and> \<I>(X \<sqinter> B) \<longrightarrow> ob X B))" using sem_5e sem_5ab by blast
subsection \<open>(Shallow) Semantic Embedding of DDL\<close>
subsubsection \<open>Basic Propositional Logic\<close>
abbreviation pand::"m\<Rightarrow>m\<Rightarrow>m" (infixr"\<^bold>\<and>" 51) where "\<phi>\<^bold>\<and>\<psi> \<equiv> \<lambda>c w. (\<phi> c w)\<and>(\<psi> c w)"
abbreviation por::"m\<Rightarrow>m\<Rightarrow>m" (infixr"\<^bold>\<or>" 50) where "\<phi>\<^bold>\<or>\<psi> \<equiv> \<lambda>c w. (\<phi> c w)\<or>(\<psi> c w)"
abbreviation pimp::"m\<Rightarrow>m\<Rightarrow>m" (infix"\<^bold>\<rightarrow>" 49) where "\<phi>\<^bold>\<rightarrow>\<psi> \<equiv> \<lambda>c w. (\<phi> c w)\<longrightarrow>(\<psi> c w)"
abbreviation pequ::"m\<Rightarrow>m\<Rightarrow>m" (infix"\<^bold>\<leftrightarrow>" 48) where "\<phi>\<^bold>\<leftrightarrow>\<psi> \<equiv> \<lambda>c w. (\<phi> c w)\<longleftrightarrow>(\<psi> c w)"
abbreviation pnot::"m\<Rightarrow>m" ("\<^bold>\<not>_" [52]53) where "\<^bold>\<not>\<phi> \<equiv> \<lambda>c w. \<not>(\<phi> c w)"
subsubsection \<open>Modal Operators\<close>
abbreviation cjboxa :: "m\<Rightarrow>m" ("\<^bold>\<box>\<^sub>a_" [52]53) where "\<^bold>\<box>\<^sub>a\<phi> \<equiv> \<lambda>c w. \<forall>v. (av w) v \<longrightarrow> (\<phi> c v)"
abbreviation cjdiaa :: "m\<Rightarrow>m" ("\<^bold>\<diamond>\<^sub>a_" [52]53) where "\<^bold>\<diamond>\<^sub>a\<phi> \<equiv> \<lambda>c w. \<exists>v. (av w) v \<and> (\<phi> c v)"
abbreviation cjboxp :: "m\<Rightarrow>m" ("\<^bold>\<box>\<^sub>p_" [52]53) where "\<^bold>\<box>\<^sub>p\<phi> \<equiv> \<lambda>c w. \<forall>v. (pv w) v \<longrightarrow> (\<phi> c v)"
abbreviation cjdiap :: "m\<Rightarrow>m" ("\<^bold>\<diamond>\<^sub>p_" [52]53) where "\<^bold>\<diamond>\<^sub>p\<phi> \<equiv> \<lambda>c w. \<exists>v. (pv w) v \<and> (\<phi> c v)"
abbreviation cjtaut :: "m" ("\<^bold>\<top>") where "\<^bold>\<top> \<equiv> \<lambda>c w. True"
abbreviation cjcontr :: "m" ("\<^bold>\<bottom>") where "\<^bold>\<bottom> \<equiv> \<lambda>c w. False"
subsubsection \<open>Deontic Operators\<close>
abbreviation cjod :: "m\<Rightarrow>m\<Rightarrow>m" ("\<^bold>O\<langle>_|_\<rangle>"54) where "\<^bold>O\<langle>\<phi>|\<sigma>\<rangle> \<equiv> \<lambda>c w. ob (\<sigma> c) (\<phi> c)"
abbreviation cjoa :: "m\<Rightarrow>m" ("\<^bold>O\<^sub>a_" [53]54) where "\<^bold>O\<^sub>a\<phi> \<equiv> \<lambda>c w. (ob (av w)) (\<phi> c) \<and> (\<exists>x. (av w) x \<and> \<not>(\<phi> c x))"
abbreviation cjop :: "m\<Rightarrow>m" ("\<^bold>O\<^sub>i_" [53]54) where "\<^bold>O\<^sub>i\<phi> \<equiv> \<lambda>c w. (ob (pv w)) (\<phi> c) \<and> (\<exists>x. (pv w) x \<and> \<not>(\<phi> c x))"
subsubsection \<open>Logical Validity (Classical)\<close>
abbreviation modvalidctx :: "m\<Rightarrow>c\<Rightarrow>bool" ("\<lfloor>_\<rfloor>\<^sup>M") where "\<lfloor>\<phi>\<rfloor>\<^sup>M \<equiv> \<lambda>c. \<forall>w. \<phi> c w" \<comment> \<open> context-dependent modal validity \<close>
abbreviation modvalid :: "m\<Rightarrow>bool" ("\<lfloor>_\<rfloor>") where "\<lfloor>\<phi>\<rfloor> \<equiv> \<forall>c. \<lfloor>\<phi>\<rfloor>\<^sup>M c" \<comment> \<open> general modal validity (modally valid in each context) \<close>
(*
If we introduce the alternative definition of logical validity below (from Kaplan's LD) instead of the previous one,
we can still prove most of the following theorems valid; the only exceptions are CJ_7, CJ_8 and the necessitation rule.
consts World::"c\<Rightarrow>w" \<comment> \<open> function retrieving the world corresponding to context c (Kaplanian contexts are world-centered) \<close>
abbreviation ldtruectx::"m\<Rightarrow>c\<Rightarrow>bool" ("\<lfloor>_\<rfloor>\<^sub>_") where "\<lfloor>\<phi>\<rfloor>\<^sub>c \<equiv> \<phi> c (World c)" \<comment> \<open> truth in the given context \<close>
abbreviation ldvalid::"m\<Rightarrow>bool" ("\<lfloor>_\<rfloor>") where "\<lfloor>\<phi>\<rfloor> \<equiv> \<forall>c. \<lfloor>\<phi>\<rfloor>\<^sub>c" \<comment> \<open> LD validity (true in every context) \<close>
*)
subsection \<open>Verifying the Embedding\<close>
subsubsection \<open>Avoiding Modal Collapse\<close>
lemma "\<lfloor>P \<^bold>\<rightarrow> \<^bold>O\<^sub>aP\<rfloor>" nitpick oops \<comment> \<open> (actual) deontic modal collapse is countersatisfiable \<close>
lemma "\<lfloor>P \<^bold>\<rightarrow> \<^bold>O\<^sub>iP\<rfloor>" nitpick oops \<comment> \<open> (ideal) deontic modal collapse is countersatisfiable \<close>
lemma "\<lfloor>P \<^bold>\<rightarrow> \<^bold>\<box>\<^sub>aP\<rfloor>" nitpick oops \<comment> \<open> alethic modal collapse is countersatisfiable (implies all other necessity operators) \<close>
subsubsection \<open>Necessitation Rule\<close>
lemma NecDDLa: "\<lfloor>A\<rfloor> \<Longrightarrow> \<lfloor>\<^bold>\<box>\<^sub>aA\<rfloor>" by simp (* Valid only using classical (not LD) validity*)
lemma NecDDLp: "\<lfloor>A\<rfloor> \<Longrightarrow> \<lfloor>\<^bold>\<box>\<^sub>pA\<rfloor>" by simp (* Valid only using classical (not LD) validity*)
subsubsection \<open>Lemmas for Semantic Conditions\<close> (* extracted from Benzmüller et al. paper @{cite "BenzmuellerDDL"}*)
abbreviation mboxS5 :: "m\<Rightarrow>m" ("\<^bold>\<box>\<^sup>S\<^sup>5_" [52]53) where "\<^bold>\<box>\<^sup>S\<^sup>5\<phi> \<equiv> \<lambda>c w. \<forall>v. \<phi> c v"
abbreviation mdiaS5 :: "m\<Rightarrow>m" ("\<^bold>\<diamond>\<^sup>S\<^sup>5_" [52]53) where "\<^bold>\<diamond>\<^sup>S\<^sup>5\<phi> \<equiv> \<lambda>c w. \<exists>v. \<phi> c v"
lemma C_2: "\<lfloor>\<^bold>O\<langle>A | B\<rangle> \<^bold>\<rightarrow> \<^bold>\<diamond>\<^sup>S\<^sup>5(B \<^bold>\<and> A)\<rfloor>" by (simp add: sem_5ab)
lemma C_3: "\<lfloor>((\<^bold>\<diamond>\<^sup>S\<^sup>5(A \<^bold>\<and> B \<^bold>\<and> C)) \<^bold>\<and> \<^bold>O\<langle>B|A\<rangle> \<^bold>\<and> \<^bold>O\<langle>C|A\<rangle>) \<^bold>\<rightarrow> \<^bold>O\<langle>(B \<^bold>\<and> C)| A\<rangle>\<rfloor>" by (simp add: sem_5c)
lemma C_4: "\<lfloor>(\<^bold>\<box>\<^sup>S\<^sup>5(A \<^bold>\<rightarrow> B) \<^bold>\<and> \<^bold>\<diamond>\<^sup>S\<^sup>5(A \<^bold>\<and> C) \<^bold>\<and> \<^bold>O\<langle>C|B\<rangle>) \<^bold>\<rightarrow> \<^bold>O\<langle>C|A\<rangle>\<rfloor>" using sem_5e by blast
lemma C_5: "\<lfloor>\<^bold>\<box>\<^sup>S\<^sup>5(A \<^bold>\<leftrightarrow> B) \<^bold>\<rightarrow> (\<^bold>O\<langle>C|A\<rangle> \<^bold>\<rightarrow> \<^bold>O\<langle>C|B\<rangle>)\<rfloor>" using C_2 sem_5e by blast
lemma C_6: "\<lfloor>\<^bold>\<box>\<^sup>S\<^sup>5(C \<^bold>\<rightarrow> (A \<^bold>\<leftrightarrow> B)) \<^bold>\<rightarrow> (\<^bold>O\<langle>A|C\<rangle> \<^bold>\<leftrightarrow> \<^bold>O\<langle>B|C\<rangle>)\<rfloor>" by (metis sem_5b)
lemma C_7: "\<lfloor>\<^bold>O\<langle>B|A\<rangle> \<^bold>\<rightarrow> \<^bold>\<box>\<^sup>S\<^sup>5\<^bold>O\<langle>B|A\<rangle>\<rfloor>" by blast
lemma C_8: "\<lfloor>\<^bold>O\<langle>B|A\<rangle> \<^bold>\<rightarrow> \<^bold>O\<langle>A \<^bold>\<rightarrow> B| \<^bold>\<top>\<rangle>\<rfloor>" using sem_5bd4 by presburger
subsubsection \<open>Verifying Axiomatic Characterisation\<close>
text\<open>\noindent{The following theorems have been taken from the original Carmo and Jones' paper (\<^cite>\<open>"CJDDL"\<close> p.293ff).}\<close>
lemma CJ_3: "\<lfloor>\<^bold>\<box>\<^sub>pA \<^bold>\<rightarrow> \<^bold>\<box>\<^sub>aA\<rfloor>" by (simp add: sem_4a)
lemma CJ_4: "\<lfloor>\<^bold>\<not>\<^bold>O\<langle>\<^bold>\<bottom>|A\<rangle>\<rfloor>" by (simp add: sem_5a)
lemma CJ_5: "\<lfloor>(\<^bold>O\<langle>B|A\<rangle> \<^bold>\<and> \<^bold>O\<langle>C|A\<rangle>) \<^bold>\<rightarrow> \<^bold>O\<langle>B\<^bold>\<and>C|A\<rangle>\<rfloor>" nitpick oops \<comment> \<open> countermodel found \<close>
lemma CJ_5_minus: "\<lfloor>\<^bold>\<diamond>\<^sup>S\<^sup>5(A \<^bold>\<and> B \<^bold>\<and> C) \<^bold>\<and> (\<^bold>O\<langle>B|A\<rangle> \<^bold>\<and> \<^bold>O\<langle>C|A\<rangle>) \<^bold>\<rightarrow> \<^bold>O\<langle>B\<^bold>\<and>C|A\<rangle>\<rfloor>" by (simp add: sem_5c)
lemma CJ_6: "\<lfloor>\<^bold>O\<langle>B|A\<rangle> \<^bold>\<rightarrow> \<^bold>O\<langle>B|A\<^bold>\<and>B\<rangle>\<rfloor>" by (smt C_2 C_4)
lemma CJ_7: "\<lfloor>A \<^bold>\<leftrightarrow> B\<rfloor> \<longrightarrow> \<lfloor>\<^bold>O\<langle>C|A\<rangle> \<^bold>\<leftrightarrow> \<^bold>O\<langle>C|B\<rangle>\<rfloor>" using sem_5ab sem_5e by blast (* Valid only using classical (not Kaplan's indexical) validity*)
lemma CJ_8: "\<lfloor>C \<^bold>\<rightarrow> (A \<^bold>\<leftrightarrow> B)\<rfloor> \<longrightarrow> \<lfloor>\<^bold>O\<langle>A|C\<rangle> \<^bold>\<leftrightarrow> \<^bold>O\<langle>B|C\<rangle>\<rfloor>" using C_6 by simp (* Valid only using classical (not Kaplan's indexical) validity*)
lemma CJ_9a: "\<lfloor>\<^bold>\<diamond>\<^sub>a\<^bold>O\<langle>B|A\<rangle> \<^bold>\<rightarrow> \<^bold>\<box>\<^sub>a\<^bold>O\<langle>B|A\<rangle>\<rfloor>" by simp
lemma CJ_9p: "\<lfloor>\<^bold>\<diamond>\<^sub>p\<^bold>O\<langle>B|A\<rangle> \<^bold>\<rightarrow> \<^bold>\<box>\<^sub>p\<^bold>O\<langle>B|A\<rangle>\<rfloor>" by simp
lemma CJ_9_var_a: "\<lfloor>\<^bold>O\<langle>B|A\<rangle> \<^bold>\<rightarrow> \<^bold>\<box>\<^sub>a\<^bold>O\<langle>B|A\<rangle>\<rfloor>" by simp
lemma CJ_9_var_b: "\<lfloor>\<^bold>O\<langle>B|A\<rangle> \<^bold>\<rightarrow> \<^bold>\<box>\<^sub>p\<^bold>O\<langle>B|A\<rangle>\<rfloor>" by simp
lemma CJ_11a: "\<lfloor>(\<^bold>O\<^sub>aA \<^bold>\<and> \<^bold>O\<^sub>aB) \<^bold>\<rightarrow> \<^bold>O\<^sub>a(A \<^bold>\<and> B)\<rfloor>" nitpick oops \<comment> \<open> countermodel found \<close>
lemma CJ_11a_var: "\<lfloor>\<^bold>\<diamond>\<^sub>a(A \<^bold>\<and> B) \<^bold>\<and> (\<^bold>O\<^sub>aA \<^bold>\<and> \<^bold>O\<^sub>aB) \<^bold>\<rightarrow> \<^bold>O\<^sub>a(A \<^bold>\<and> B)\<rfloor>" using sem_5c by auto
lemma CJ_11p: "\<lfloor>(\<^bold>O\<^sub>iA \<^bold>\<and> \<^bold>O\<^sub>iB) \<^bold>\<rightarrow> \<^bold>O\<^sub>i(A \<^bold>\<and> B)\<rfloor>" nitpick oops \<comment> \<open> countermodel found \<close>
lemma CJ_11p_var: "\<lfloor>\<^bold>\<diamond>\<^sub>p(A \<^bold>\<and> B) \<^bold>\<and> (\<^bold>O\<^sub>iA \<^bold>\<and> \<^bold>O\<^sub>iB) \<^bold>\<rightarrow> \<^bold>O\<^sub>i(A \<^bold>\<and> B)\<rfloor>" using sem_5c by auto
lemma CJ_12a: "\<lfloor>\<^bold>\<box>\<^sub>aA \<^bold>\<rightarrow> (\<^bold>\<not>\<^bold>O\<^sub>aA \<^bold>\<and> \<^bold>\<not>\<^bold>O\<^sub>a(\<^bold>\<not>A))\<rfloor>" using sem_5ab by blast (*using C_2 by blast *)
lemma CJ_12p: "\<lfloor>\<^bold>\<box>\<^sub>pA \<^bold>\<rightarrow> (\<^bold>\<not>\<^bold>O\<^sub>iA \<^bold>\<and> \<^bold>\<not>\<^bold>O\<^sub>i(\<^bold>\<not>A))\<rfloor>" using sem_5ab by blast (*using C_2 by blast*)
lemma CJ_13a: "\<lfloor>\<^bold>\<box>\<^sub>a(A \<^bold>\<leftrightarrow> B) \<^bold>\<rightarrow> (\<^bold>O\<^sub>aA \<^bold>\<leftrightarrow> \<^bold>O\<^sub>aB)\<rfloor>" using sem_5b by metis (*using C_6 by blast *)
lemma CJ_13p: "\<lfloor>\<^bold>\<box>\<^sub>p(A \<^bold>\<leftrightarrow> B) \<^bold>\<rightarrow> (\<^bold>O\<^sub>iA \<^bold>\<leftrightarrow> \<^bold>O\<^sub>iB)\<rfloor>" using sem_5b by metis (*using C_6 by blast *)
lemma CJ_O_O: "\<lfloor>\<^bold>O\<langle>B|A\<rangle> \<^bold>\<rightarrow> \<^bold>O\<langle>A \<^bold>\<rightarrow> B|\<^bold>\<top>\<rangle>\<rfloor>" using sem_5bd4 by presburger
text\<open>\noindent{An ideal obligation which is actually possible both to fulfill and to violate entails an actual obligation (\<^cite>\<open>"CJDDL"\<close> p.319).}\<close>
lemma CJ_Oi_Oa: "\<lfloor>(\<^bold>O\<^sub>iA \<^bold>\<and> \<^bold>\<diamond>\<^sub>aA \<^bold>\<and> \<^bold>\<diamond>\<^sub>a(\<^bold>\<not>A)) \<^bold>\<rightarrow> \<^bold>O\<^sub>aA\<rfloor>" using sem_5e sem_4a by blast
text\<open>\noindent{Bridge relations between conditional obligations and actual/ideal obligations:}\<close>
lemma CJ_14a: "\<lfloor>\<^bold>O\<langle>B|A\<rangle> \<^bold>\<and> \<^bold>\<box>\<^sub>aA \<^bold>\<and> \<^bold>\<diamond>\<^sub>aB \<^bold>\<and> \<^bold>\<diamond>\<^sub>a\<^bold>\<not>B \<^bold>\<rightarrow> \<^bold>O\<^sub>aB\<rfloor>" using sem_5e by blast
lemma CJ_14p: "\<lfloor>\<^bold>O\<langle>B|A\<rangle> \<^bold>\<and> \<^bold>\<box>\<^sub>pA \<^bold>\<and> \<^bold>\<diamond>\<^sub>pB \<^bold>\<and> \<^bold>\<diamond>\<^sub>p\<^bold>\<not>B \<^bold>\<rightarrow> \<^bold>O\<^sub>iB\<rfloor>" using sem_5e by blast
lemma CJ_15a: "\<lfloor>(\<^bold>O\<langle>B|A\<rangle> \<^bold>\<and> \<^bold>\<diamond>\<^sub>a(A \<^bold>\<and> B) \<^bold>\<and> \<^bold>\<diamond>\<^sub>a(A \<^bold>\<and> \<^bold>\<not>B)) \<^bold>\<rightarrow> \<^bold>O\<^sub>a(A \<^bold>\<rightarrow> B)\<rfloor>" using CJ_O_O sem_5e by fastforce (*using CJ_O_O CJ_14a by blast*)
lemma CJ_15p: "\<lfloor>(\<^bold>O\<langle>B|A\<rangle> \<^bold>\<and> \<^bold>\<diamond>\<^sub>p(A \<^bold>\<and> B) \<^bold>\<and> \<^bold>\<diamond>\<^sub>p(A \<^bold>\<and> \<^bold>\<not>B)) \<^bold>\<rightarrow> \<^bold>O\<^sub>i(A \<^bold>\<rightarrow> B)\<rfloor>" using CJ_O_O sem_5e by fastforce (*using CJ_O_O CJ_14p by blast*)
(*<*)
end
(*>*)
\chapter{Android Native Build Tutorial}
\label{Appendix:AndroidNativeBuildTutorial}
During the implementation of the Android port, a lot of notes were written down. Although there is some information about how to do native Android development available, it is often out-of-date, confusing or just not working. A lot of trivial-looking information also had to be figured out in many frustrating hours of work. So, because there were already a lot of notes written down and the possibility is high that someone using PixelLight for native Android development requires the same or quite similar information, those notes were put into this appendix in a revised form.
The good news is, when you build PixelLight for Android, you don't really need to know everything written down in here, because the process is heavily automated by CMake scripts. In case you find this appendix unnecessary, or you hate spoilers and want to figure everything out by yourself, ignore this appendix. For everyone else, the information in here will hopefully be useful for your first steps in native Android development without the automated CMake build system PixelLight is using.
Please note that this tutorial is using Linux because using \ac{MS} Windows for native Android development is really painful.
\paragraph{Some Hints on how to Read this Tutorial}
The tutorial is written in brief sentences to keep it compact. This doesn't mean there's no additional information: the most important information is marked using \textbf{bold text}, while additional, nice-to-know information is marked using \textrightarrow. If you want to take the fast path, just focus on the \textbf{bold texts} like the terminal commands.
\section{Prerequisites}
\begin{itemize}
\item{Used \ac{OS}: "Ubuntu 11.10 - Oneiric Ocelot" (the version is just mentioned to be on the safe side; other, newer versions will probably work as well)}
\item{Install \ac{JDK} ("\textbf{sudo apt-get install default-jdk}") and \ac{JRE} ("\textbf{sudo apt-get install default-jre}")}
\item{Install "ant" ("\textbf{sudo apt-get install ant}"), required to create the \ac{APK} files}
\end{itemize}
To install all packages at once, just use:
\begin{lstlisting}[language=sh]
sudo apt-get install default-jdk default-jre ant
\end{lstlisting}
\paragraph{Install Android \ac{NDK}}
\begin{itemize}
\item{\textbf{Download} from \url{http://developer.android.com/sdk/ndk/index.html} \textrightarrow{} "android-ndk-r6b-linux-x86.tar.bz2"}
\item{\textbf{Extract} to for example "\textasciitilde /android-ndk-r6b" ("\textasciitilde /" is your home directory)}
\end{itemize}
\paragraph{Android \ac{NDK} (\emph{ndk r6b}) - \ac{MS} Windows}
\begin{itemize}
\item{Extract it and set the \ac{MS} Windows PATH environment variable \emph{ANDROID\_NDK} to the \ac{NDK} root directory}
\item{Set the \ac{MS} Windows \emph{PATH} environment variable \emph{ANDROID\_NDK\_TOOLCHAIN\_ROOT} to the \ac{NDK} toolchain root directory (e.g. "C:/android-ndk-r6b/toolchains/arm-linux-androideabi-4.4.3/prebuilt/windows/arm-linux-androideabi")}
\item{(Those variables can also be added/set within the CMake-\ac{GUI})}
\end{itemize}
\paragraph{Install Android \ac{SDK}}
\begin{itemize}
\item{\textbf{Download} from \url{http://developer.android.com/sdk/index.html} \textrightarrow{} "android-sdk\_r12-linux\_x86.tgz"}
\item{\textbf{Extract} to for example "\textasciitilde /android-sdk-linux\_x86" ("\textasciitilde /" is your home directory)}
\item{Tested with Android \ac{SDK} Tools, revision 12}
\item{Tested with Android \ac{SDK} Platform-tools, revision 6}
\item{Tested with \ac{SDK} Platform Android 2.3, \ac{API} 9\footnote{See \url{http://developer.android.com/guide/appendix/api-levels.html}}}
\end{itemize}
\paragraph{Optional but Highly Recommended for a Decent Workflow}
This example assumes that the data has been extracted directly within the home (\emph{\textasciitilde}) directory. Open the hidden "\textasciitilde /.bashrc"-file and add:
\begin{lstlisting}[language=sh]
# Important Android SDK and NDK paths
export ANDROID_SDK=~/android-sdk-linux_x86
export ANDROID_NDK=~/android-ndk-r6b
export PATH=${PATH}:${ANDROID_SDK}/tools:${ANDROID_SDK}/platform-tools:${ANDROID_NDK}
\end{lstlisting}
\begin{itemize}
\item{Open a new terminal so the changes from the step above have an effect}
\end{itemize}
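The exports above can also be sanity-checked directly; the following self-contained snippet mirrors them (the directories are the example locations from above and don't have to exist for this check):
\begin{lstlisting}[language=sh]
# Mirror the ~/.bashrc additions and inspect the resulting PATH
ANDROID_SDK=$HOME/android-sdk-linux_x86
ANDROID_NDK=$HOME/android-ndk-r6b
PATH=${PATH}:${ANDROID_SDK}/tools:${ANDROID_SDK}/platform-tools:${ANDROID_NDK}

# Show the three directories that were appended
echo "$PATH" | tr ':' '\n' | tail -n 3
\end{lstlisting}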
\paragraph{Android \ac{SDK} and AVD Manager}
Type "\textbf{android}" to open the "Android SDK and AVD Manager"-\ac{GUI}:
\begin{itemize}
\item{"Available packages" \textrightarrow{} disable the "Display updates only"-checkbox (you may need to enlarge the window to see this checkbox) \textrightarrow{} install at least the following:}
\item{"Android SDK Tools, revision 12"}
\item{"Android SDK Platform-tools, revision 6"}
\item{"SDK Platform Android 2.3.1, \ac{API} 9, revision 2" (it's marked "Obsolete", but within the \ac{NDK} r6b there's only up to \ac{API} level 9 available and we don't want to mix)}
\end{itemize}
\section{Android Emulator and Device}
\begin{itemize}
\item{Android emulator\footnote{General information: \url{http://developer.android.com/guide/developing/devices/emulator.html}} \textrightarrow{} Type "\textbf{android}" to open the "Android SDK and AVD Manager"-\ac{GUI} and do the emulator configuration in here}
\item{Android device\footnote{General information: \url{http://developer.android.com/guide/developing/device.html}} \textrightarrow{} Configuration: Just connect your device to your computer and it should work at once... if you have enabled "USB-Debugging" on your device (launch "Settings", tap "Applications", tap "Development" and enable the checkbox for USB debugging)}
\end{itemize}
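If you prefer the terminal over the \ac{GUI}, the \ac{SDK} tools of this generation can also create and boot an emulator image from the command line; a sketch, assuming the "android" tool is on your PATH and the "android-9" target is installed (the AVD name "test-avd" is just an example):
\begin{lstlisting}[language=sh]
# Create an AVD targeting API level 9 and boot it (name is an example)
android create avd -n test-avd -t android-9
emulator -avd test-avd &
\end{lstlisting}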
\paragraph{Not Required but Nice to Know}
Check available devices: Type "\textbf{adb devices}" \textrightarrow{} You should see at least one entry when your device is connected. The result may look like the following:
\begin{lstlisting}[language=sh]
List of devices attached
028842074300d157 device
emulator-5554 device
\end{lstlisting}
\begin{itemize}
\item{The output for each instance is formatted like this: \textrightarrow{} "[serialNumber] [state]"}
\item{In this case "028842074300d157 device" is my connected smartphone}
\item{In this case "emulator-5554 device" is the emulator I created and started}
\end{itemize}
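In scripts, the "[serialNumber] [state]" lines can be processed directly; a small self-contained sketch using the sample output from above (no device required):
\begin{lstlisting}[language=sh]
# Parse a captured "adb devices" listing and extract the serial numbers
# of all entries in the "device" state (sample output from above)
output='List of devices attached
028842074300d157 device
emulator-5554 device'

serials=$(printf '%s\n' "$output" | awk 'NR>1 && $2=="device" {print $1}')
echo "$serials"
\end{lstlisting}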
In case you don't see your device, try:
\begin{lstlisting}[language=sh]
adb kill-server
adb start-server
adb devices
\end{lstlisting}
\section{\ac{NDK} Build System}
The \ac{NDK} build system is not used by PixelLight, but for the first steps it's nice to know how to use it.
Time for a first experiment. Sadly, as of September 2011, it appears that some of the information at \url{http://developer.android.com/sdk/ndk/overview.html} is out-of-date, and something like "I'll just try out native-activity to get the idea" doesn't work as easily as thought. So, here's an updated version with some additional handy information for \ac{MS} Windows users like myself: (yes, there are people out there who don't know that "\textasciitilde /" is the home directory, so such things are mentioned explicitly)
\paragraph{build.xml}
Run the following command to generate a \emph{build.xml} file:
\begin{itemize}
\item{\textbf{Change} to the "\textasciitilde /android-ndk-r6b/samples/native-activity" \textbf{directory}}
\item{\textrightarrow{} Don't follow the official sample instructions: "android update project -p . -s" \textrightarrow{} "Error: The project either has no target set or the target is invalid. Please provide a \verb+--target+ to the 'android update' command."}
\item{\textrightarrow{} Type \verb+"android --help"+ to see available options}
\item{\textrightarrow{} Type "android list targets" to see the available targets (= \ac{API} levels)}
\item{Type "\textbf{android update project -t android-9 -p .}" ("." means "the current directory")}
\item{\textrightarrow{} You now have a "build.xml"-file in the same directory as the "AndroidManifest.xml"-file, this is required for the \ac{APK} creation step}
\end{itemize}
\paragraph{Compile}
Compile the native code using the \emph{ndk-build} command.
\begin{itemize}
\item{"\textbf{ndk-build}"}
\item{\textrightarrow{} You now have "\textasciitilde /android-ndk-r6b/samples/native-activity/libs/armeabi/libnative-activity.so"}
\end{itemize}
\paragraph{\ac{APK}}
Create \ac{APK} file
\begin{itemize}
\item{Type "\textbf{ant debug}" ("debug" is only for developing and testing using easy automatic signing, see \url{http://developer.android.com/guide/publishing/app-signing.html})}
\item{\textrightarrow{} You now have "\textasciitilde /android-ndk-r6b/samples/native-activity/bin" with some files in it}
\item{Start the emulator, or connect your device (ensure that it has Android 2.3 or higher on it, else the app will just crash)}
\item{Type "\textbf{adb install -r bin/NativeActivity-debug.apk}" ("-r"-option to avoid "Failure [INSTALL\_FAILED\_ALREADY\_EXISTS]"-error when the \ac{APK} is already installed, default destination is "/data/local/tmp/NativeActivity-debug.apk")}
\item{\textrightarrow{} The app is now available and ready to be started within your emulator or on your device}
\item{\textrightarrow{} To uninstall the app, type "\textbf{adb uninstall com.example.native\_activity}"}
\end{itemize}
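For reference, the whole debug cycle from above in one sequence (assuming the \ac{NDK} was extracted to "\textasciitilde /android-ndk-r6b" and a device or emulator is attached):
\begin{lstlisting}[language=sh]
cd ~/android-ndk-r6b/samples/native-activity
android update project -t android-9 -p .
ndk-build
ant debug
adb install -r bin/NativeActivity-debug.apk
\end{lstlisting}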
\paragraph{Release \ac{APK}}
Create a release \ac{APK} file\footnote{See \url{http://developer.android.com/guide/publishing/app-signing.html} for detailed information}
\begin{itemize}
\item{Type "\textbf{ant release}" (you now have "bin/NativeActivity-unsigned.apk")}
\item{\textrightarrow{} Don't try "adb install -r bin/NativeActivity-unsigned.apk", this will just result in "Failure [INSTALL\_PARSE\_FAILED\_NO\_CERTIFICATES]"}
\item{\textrightarrow{} The \ac{JDK} tools Jarsigner and Keytool will be used (ensure they're available, if you installed \ac{JRE} they are usually available)}
\item{\textrightarrow{} You need a private key to sign your \ac{APK} file with, if you don't have any: Type e.g. "keytool -genkey -v -keystore my-release-key.keystore -alias myalias -keyalg RSA -keysize 2048 -validity 10000" (you now have a "my-release-key.keystore"-file)}
\item{For signing the \ac{APK} file, type: "\textbf{jarsigner -verbose -keystore my-release-key.keystore bin/NativeActivity-unsigned.apk myalias}" (no new file, it's still "bin/NativeActivity-unsigned.apk" and you may rename it later, but for now we don't touch the name)}
\item{\textrightarrow{} Type "jarsigner -verify bin/NativeActivity-unsigned.apk" to verify that everything went fine and "jarsigner -verify -verbose -certs bin/NativeActivity-unsigned.apk" to get additional information}
\item{Finally, align your \ac{APK} file by typing "\textbf{zipalign -v 4 bin/NativeActivity-unsigned.apk bin/NativeActivity.apk}" (you now have the ready to be released file "bin/NativeActivity.apk")}
\item{To install this file right now, type "\textbf{adb install -r bin/NativeActivity.apk}"}
\item{\textrightarrow{} If you receive a "Failure [INSTALL\_PARSE\_FAILED\_INCONSISTENT\_CERTIFICATES]" you need to remove the previously installed application by typing "adb uninstall com.example.native\_activity"}
\item{\textrightarrow{} To uninstall the app, type "\textbf{adb uninstall com.example.native\_activity}"}
\end{itemize}
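The release steps above as one sequence; the keystore creation is only needed once, and "myalias"/"my-release-key.keystore" are the example names used above:
\begin{lstlisting}[language=sh]
# One-time: generate a private key to sign APK files with
keytool -genkey -v -keystore my-release-key.keystore -alias myalias -keyalg RSA -keysize 2048 -validity 10000
ant release
jarsigner -verbose -keystore my-release-key.keystore bin/NativeActivity-unsigned.apk myalias
zipalign -v 4 bin/NativeActivity-unsigned.apk bin/NativeActivity.apk
adb install -r bin/NativeActivity.apk
\end{lstlisting}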
\section{CMake Build System}
We want to use the universal CMake build system, not the special \ac{NDK} build system.
\begin{itemize}
\item{\textbf{Download} the CMake toolchain "android-cmake" from \url{http://code.google.com/p/android-cmake/} (we're using its "android.toolchain.cmake"-file) and \textbf{extract} it to e.g. "\textasciitilde /android-cmake"}
\end{itemize}
\paragraph{Optional but Highly Recommended for a Decent Workflow}
Open the hidden "\textasciitilde /.bashrc"-file and add:
\begin{lstlisting}[language=sh]
# CMake toolchain "android-cmake" from http://code.google.com/p/android-cmake/
export ANDROID_CMAKE=~/android-cmake
export ANDTOOLCHAIN=$ANDROID_CMAKE/toolchain/android.toolchain.cmake
alias android-cmake='cmake -DCMAKE_TOOLCHAIN_FILE=$ANDTOOLCHAIN '
\end{lstlisting}
\begin{itemize}
\item{Open a new terminal so the changes from the step above have an effect}
\end{itemize}
\subsection{First Experiment}
Time for a first experiment by using "hello-gl2" of \emph{android-cmake}.
\begin{itemize}
\item{\textbf{Change} into the "hello-gl2"-\textbf{directory} of \emph{android-cmake}}
\end{itemize}
\paragraph{Build}
Type:
\begin{itemize}
\item{"\textbf{mkdir build}"}
\item{"\textbf{cd build}"}
\item{"\textbf{android-cmake -DARM\_TARGET=armeabi ..}"}
\item{\textrightarrow{} The Android \ac{SDK} emulator supports only "armeabi", but "android-cmake" has "armeabi-v7a" as default. If you don't change this, the application will just crash when you try to start it within the emulator. For more complex 3D applications, "armeabi-v7a" is highly recommended due to hardware floating point support. So, for more advanced stuff you really need a real device instead of the emulator.}
\item{"\textbf{make}"}
\item{\textrightarrow{} You now have "\textasciitilde /android-cmake/hello-gl2/libs/armeabi/libgl2jni.so"}
\end{itemize}
\paragraph{Keystore}
\begin{itemize}
\item{\textbf{Copy "my-release-key.keystore"} from your \ac{NDK} build system experiment into the "hello-gl2"-directory, or keep it within e.g. your home directory and update the entries below}
\end{itemize}
\paragraph{\ac{APK}}
\textbf{Back} to the "hello-gl2"-directory and type:
\begin{itemize}
\item{"\textbf{sh project\_create.sh}" (you may need to open this file first and replace \verb+"android update project --name HelloGL2 --path ."+ through \verb+"android update project -t android-8 --name HelloGL2 --path ."+)}
\item{"\textbf{ant release}" (create the \ac{APK} file)}
\item{"\textbf{jarsigner -verbose -keystore my-release-key.keystore bin/HelloGL2-unsigned.apk myalias}" (sign the \ac{APK} file)}
\item{"\textbf{zipalign -v 4 bin/HelloGL2-unsigned.apk bin/HelloGL2.apk}" (align the \ac{APK} file)}
\item{"\textbf{adb uninstall com.android.gl2jni}" (in order to ensure that we don't get problems when doing the following install step)}
\item{"\textbf{adb install -r bin/HelloGL2.apk}" (install the \ac{APK} file on the device)}
\item{"\textbf{adb shell am start -n com.android.gl2jni/.GL2JNIActivity}" (start the installed \ac{APK} file automatically)}
\end{itemize}
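Analogous to the native-activity one-liner shown later in this appendix, the whole "hello-gl2" cycle can be chained (commands exactly as above):
\begin{lstlisting}[language=sh]
mkdir build; cd build; android-cmake -DARM_TARGET=armeabi ..; make; cd ..
sh project_create.sh
ant release
jarsigner -verbose -keystore my-release-key.keystore bin/HelloGL2-unsigned.apk myalias
zipalign -v 4 bin/HelloGL2-unsigned.apk bin/HelloGL2.apk
adb uninstall com.android.gl2jni
adb install -r bin/HelloGL2.apk
adb shell am start -n com.android.gl2jni/.GL2JNIActivity
\end{lstlisting}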
\subsection{Native Activity Experiment}
Another experiment, "native-activity" for a native activity of the \ac{NDK}.
\begin{itemize}
\item{\textbf{Change} into the "native-activity"-\textbf{directory} of the \ac{NDK}}
\end{itemize}
\paragraph{CMakeLists.txt within the "native-activity"-directory}
Within the "native-activity"-directory, create a text file named \emph{CMakeLists.txt} with the following content:
\begin{lstlisting}[language=sh]
cmake_minimum_required(VERSION 2.8)
project(native-activity)
add_subdirectory(jni)
\end{lstlisting}
\paragraph{CMakeLists.txt within the "native-activity/jni"-directory}
Within the "native-activity/jni"-directory, create a text file named \emph{CMakeLists.txt} with the following content:
\begin{lstlisting}[language=sh]
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fPIC")
include_directories("${ANDROID_NDK}/sources/android/native_app_glue")
include_directories(${CMAKE_CURRENT_SOURCE_DIR})
set(LIBRARY_DEPS log android EGL GLESv1_CM)
set(MY_SRCS
${ANDROID_NDK}/sources/android/native_app_glue/android_native_app_glue.c
main.c
)
add_library(native-activity SHARED ${MY_SRCS})
target_link_libraries(native-activity ${LIBRARY_DEPS})
\end{lstlisting}
\paragraph{Build}
We're going to use the native activity which was introduced in Android 2.3 (Gingerbread, "android-9" \ac{NDK} \ac{API} level). Within the "native-activity"-directory, type:
\begin{itemize}
\item{"\textbf{mkdir build}"}
\item{"\textbf{cd build}"}
\item{"\textbf{android-cmake -DARM\_TARGET=armeabi -DANDROID\_API\_LEVEL=9 ..}"}
\item{"\textbf{make}"}
\end{itemize}
\paragraph{Nice to Know}
In case you want to write "\textbf{android-cmake -DARM\_TARGET=armeabi ..}" instead of "\textbf{android-cmake -DARM\_TARGET=armeabi -DANDROID\_API\_LEVEL=9 ..}" \textrightarrow{} Open hidden "\textasciitilde /.bashrc"-file and add:
\begin{lstlisting}[language=sh]
export ANDROID_API_LEVEL=9
\end{lstlisting}
\paragraph{Keystore}
\begin{itemize}
\item{\textbf{Copy} "my-release-key.keystore" from your \ac{NDK} build system experiment into the "native-activity"-directory, or keep it within e.g. your home directory and update the entries below}
\end{itemize}
\paragraph{\ac{APK}}
Back to the "native-activity"-directory and type:
\begin{itemize}
\item{\verb+"android update project -t android-9 --name native-activity --path ."+}
\item{"\textbf{ant release}"}
\item{"\textbf{jarsigner -verbose -keystore my-release-key.keystore bin/native-activity-unsigned.apk myalias}"}
\item{"\textbf{zipalign -v 4 bin/native-activity-unsigned.apk bin/native-activity.apk}"}
\item{"\textbf{adb uninstall com.example.native\_activity}"}
\item{"\textbf{adb install -r bin/native-activity.apk}"}
\end{itemize}
All in one single command line row:
\begin{lstlisting}[language=sh]
mkdir build;cd build;android-cmake -DARM_TARGET=armeabi ..;make;cd ..;android update project -t android-9 --name native-activity --path .;ant release;jarsigner -verbose -keystore my-release-key.keystore bin/native-activity-unsigned.apk myalias;rm bin/native-activity.apk;zipalign -v 4 bin/native-activity-unsigned.apk bin/native-activity.apk;adb uninstall com.example.native_activity;adb install -r bin/native-activity.apk
\end{lstlisting}
\section{Glossary}
Glossary (only terms I wasn't really familiar with)
\begin{center}
\centering
\begin{tabular}{ | l | l | p{8cm} |}
\hline
Short & Long & Information\\ \hline
ndk & Native Development Kit & \url{http://developer.android.com/sdk/ndk/index.html}\\ \hline
adb & Android Debug Bridge & \url{http://developer.android.com/guide/developing/tools/adb.html} !important!\\ \hline
avd & Android Virtual Device & \url{http://developer.android.com/guide/developing/devices/emulator.html}\\ \hline
aapt & Android Asset Packaging Tool & \\ \hline
JNI & Java Native Interface & \\ \hline
logcat & & Android system central log buffer\\ \hline
\end{tabular}
\end{center}
\section{Command Glossary}
Command glossary (only terms I wasn't really familiar with)
\begin{center}
\centering
\begin{tabular}{ | l | p{10cm} |}
\hline
Command & Result\\ \hline
android & Start "Android SDK and AVD Manager" (e.g. to start the emulator)\\ \hline
adb devices & See all available emulators/devices\\ \hline
adb logcat & Show Android log (called "logcat"), this is realtime, so the command prompt will block and show new upcoming log entries at once, more information: \url{http://developer.android.com/guide/developing/tools/adb.html#logcat}\\ \hline
adb logcat -s <tag> & Show only messages with the given tag within the Android log, example: "adb logcat -s PixelLight"\\ \hline
adb push <local> <remote> & Copy <local> (file or directory recursively) to the emulator/device to destination <remote>, example: "adb push foo.txt /sdcard/foo.txt"\\ \hline
adb pull <remote> <local> & Copy <remote> (file or directory recursively) from the emulator/device to destination <local>, example: "adb pull /sdcard/foo.txt foo.txt"\\ \hline
adb install <\ac{APK} file> & Install \ac{APK} file on emulator/device (but doesn't start it automatically)\\ \hline
adb -d install <\ac{APK} file> & Install \ac{APK} file on device (but doesn't start it automatically)\\ \hline
adb -e install <\ac{APK} file> & Install \ac{APK} file on emulator (but doesn't start it automatically)\\ \hline
adb uninstall <package> & Uninstall an \ac{APK}, example: "adb uninstall com.example.native\_activity"\\ \hline
adb shell am start <app> & Start an app, example: "adb shell am start -n com.android.gl2jni/.GL2JNIActivity"\\ \hline
\end{tabular}
\end{center}
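A typical session combining these commands might look as follows (just a sketch; "PixelLight" is only an example tag and "foo.txt" an example file):
\begin{lstlisting}[language=sh]
adb devices
adb push foo.txt /sdcard/foo.txt
adb pull /sdcard/foo.txt foo_copy.txt
adb logcat -s PixelLight
\end{lstlisting}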
\ac{NDK} build system related:
\begin{center}
\centering
\begin{tabular}{ | l | p{8cm} |}
\hline
Command & Result\\ \hline
android update project -t android-9 -p . & Create/update "build.xml" (required for Ant)\\ \hline
ndk-build & Compile native code\\ \hline
ant debug & Create debug \ac{APK}\\ \hline
ant release & Create release \ac{APK}\\ \hline
\end{tabular}
\end{center}
\section{Possible issues}
\paragraph{"Android can't load in my shared library"}
\begin{itemize}
\item{Is the build target correct?}
\item{The Android Emulator is only able to deal with "armeabi", not e.g. "armeabi-v7a"}
\item{\textrightarrow{} When using CMake-\ac{GUI}, add the string entry "ARM\_TARGET"="armeabi"}
\end{itemize}
\paragraph{How to Start a Native Activity Automatically?}
I wasn't able to figure this one out. When having a Java file as the main program entry point, one can e.g. just type "adb shell am start -n com.android.gl2jni/.GL2JNIActivity" and the application starts automatically. But for a pure native activity without a single Java source file... I have no clue...
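One untested idea: a pure native activity is still hosted by the framework class \verb+android.app.NativeActivity+ (as declared within "AndroidManifest.xml"), so it might be possible to start it explicitly via:
\begin{lstlisting}[language=sh]
adb shell am start -n com.example.native_activity/android.app.NativeActivity
\end{lstlisting}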
|
There was plenty of grumbling about the Advance Report on Durable Goods for January 2010 that was released today. The headline number showed new orders up 3% but ex-transportation, new orders were actually down 0.6%.
If you're overweight tech, like I am, though, you would be smiling happily.
I'll run through a few charts and you'll see why. Let's start by looking at shipments for the tech sector as a whole and then for the sub-sectors of computers, semiconductors and communications equipment.
With the great improvement in shipments, it's disappointing to see Tech lagging other sectors like Materials and Consumer Discretionary. Some would argue, however, that shipments are in the past, and stocks move based on future performance.
If that's truly the case, Tech might still be the place to be. Take a look at these charts showing new orders.
You can see that new orders are quite strong, again reaching levels last seen in 2008. Though communication equipment is lagging the other sub-sectors, it too shows increasing orders.
I have written several posts (here, here and here) on how the tech earnings scorecard this earnings season has been pretty darn strong. I have contended that tech stocks should be getting more investor interest. Today's Durable Goods report may, I think, begin to convert the unbelievers. I noticed that the NASDAQ finished Thursday in better shape than the S&P 500 and the Dow. Are we seeing the beginning of an upsurge in tech?
|
Require Import Crypto.Arithmetic.PrimeFieldTheorems.
Require Import Crypto.Specific.solinas32_2e416m2e208m1_16limbs.Synthesis.
(* TODO : change this to field once field isomorphism happens *)
Definition mul :
{ mul : feBW_loose -> feBW_loose -> feBW_tight
| forall a b, phiBW_tight (mul a b) = F.mul (phiBW_loose a) (phiBW_loose b) }.
Proof.
Set Ltac Profiling.
Time synthesize_mul ().
Show Ltac Profile.
Time Defined.
Print Assumptions mul.
|
Fast-fading violets cover'd up in leaves;
|
(* Title: CoreC++
Author: Daniel Wasserrab
Maintainer: Daniel Wasserrab <wasserra at fmi.uni-passau.de>
Based on the Jinja theory J/Progress.thy by Tobias Nipkow
*)
section \<open>Progress of Small Step Semantics\<close>
theory Progress imports Equivalence DefAss Conform begin
subsection \<open>Some pre-definitions\<close>
lemma final_refE:
"\<lbrakk> P,E,h \<turnstile> e : Class C; final e;
\<And>r. e = ref r \<Longrightarrow> Q;
\<And>r. e = Throw r \<Longrightarrow> Q \<rbrakk> \<Longrightarrow> Q"
by (simp add:final_def,auto,case_tac v,auto)
lemma finalRefE:
"\<lbrakk> P,E,h \<turnstile> e : T; is_refT T; final e;
e = null \<Longrightarrow> Q;
\<And>r. e = ref r \<Longrightarrow> Q;
\<And>r. e = Throw r \<Longrightarrow> Q\<rbrakk> \<Longrightarrow> Q"
apply (cases T)
apply (simp add:is_refT_def)+
apply (simp add:final_def)
apply (erule disjE)
apply clarsimp
apply (erule exE)+
apply fastforce
apply (auto simp:final_def is_refT_def)
apply (case_tac v)
apply auto
done
lemma subE:
"\<lbrakk> P \<turnstile> T \<le> T'; is_type P T'; wf_prog wf_md P;
\<lbrakk> T = T'; \<forall>C. T \<noteq> Class C \<rbrakk> \<Longrightarrow> Q;
\<And>C D. \<lbrakk> T = Class C; T' = Class D; P \<turnstile> Path C to D unique \<rbrakk> \<Longrightarrow> Q;
\<And>C. \<lbrakk> T = NT; T' = Class C \<rbrakk> \<Longrightarrow> Q \<rbrakk> \<Longrightarrow> Q"
apply(cases T')
apply auto
apply(drule_tac T = "T" in widen_Class)
apply auto
done
lemma assumes wf:"wf_prog wf_md P"
and typeof:" P \<turnstile> typeof\<^bsub>h\<^esub> v = Some T'"
and type:"is_type P T"
shows sub_casts:"P \<turnstile> T' \<le> T \<Longrightarrow> \<exists>v'. P \<turnstile> T casts v to v'"
proof(erule subE)
from type show "is_type P T" .
next
from wf show "wf_prog wf_md P" .
next
assume "T' = T" and "\<forall>C. T' \<noteq> Class C"
thus "\<exists>v'. P \<turnstile> T casts v to v'" by(fastforce intro:casts_prim)
next
fix C D
assume T':"T' = Class C" and T:"T = Class D"
and path_unique:"P \<turnstile> Path C to D unique"
from T' typeof obtain a Cs where v:"v = Ref(a,Cs)" and last:"last Cs = C"
by(auto dest!:typeof_Class_Subo)
from last path_unique obtain Cs' where "P \<turnstile> Path last Cs to D via Cs'"
by(auto simp:path_unique_def path_via_def)
hence "P \<turnstile> Class D casts Ref(a,Cs) to Ref(a,Cs@\<^sub>pCs')"
by -(rule casts_ref,simp_all)
with T v show "\<exists>v'. P \<turnstile> T casts v to v'" by auto
next
fix C
assume "T' = NT" and T:"T = Class C"
with typeof have "v = Null" by simp
with T show "\<exists>v'. P \<turnstile> T casts v to v'" by(fastforce intro:casts_null)
qed
text\<open>Derivation of new induction scheme for well typing:\<close>
inductive
WTrt' :: "[prog,env,heap,expr, ty ] \<Rightarrow> bool"
("_,_,_ \<turnstile> _ :'' _" [51,51,51]50)
and WTrts':: "[prog,env,heap,expr list,ty list] \<Rightarrow> bool"
("_,_,_ \<turnstile> _ [:''] _" [51,51,51]50)
for P :: prog
where
"is_class P C \<Longrightarrow> P,E,h \<turnstile> new C :' Class C"
| "\<lbrakk>is_class P C; P,E,h \<turnstile> e :' T; is_refT T\<rbrakk>
\<Longrightarrow> P,E,h \<turnstile> Cast C e :' Class C"
| "\<lbrakk>is_class P C; P,E,h \<turnstile> e :' T; is_refT T\<rbrakk>
\<Longrightarrow> P,E,h \<turnstile> \<lparr>C\<rparr>e :' Class C"
| "P \<turnstile> typeof\<^bsub>h\<^esub> v = Some T \<Longrightarrow> P,E,h \<turnstile> Val v :' T"
| "E V = Some T \<Longrightarrow> P,E,h \<turnstile> Var V :' T"
| "\<lbrakk> P,E,h \<turnstile> e\<^sub>1 :' T\<^sub>1; P,E,h \<turnstile> e\<^sub>2 :' T\<^sub>2;
case bop of Eq \<Rightarrow> T = Boolean
| Add \<Rightarrow> T\<^sub>1 = Integer \<and> T\<^sub>2 = Integer \<and> T = Integer \<rbrakk>
\<Longrightarrow> P,E,h \<turnstile> e\<^sub>1 \<guillemotleft>bop\<guillemotright> e\<^sub>2 :' T"
| "\<lbrakk> P,E,h \<turnstile> Var V :' T; P,E,h \<turnstile> e :' T' \<^cancel>\<open>V \<noteq> This\<close>; P \<turnstile> T' \<le> T \<rbrakk>
\<Longrightarrow> P,E,h \<turnstile> V:=e :' T"
| "\<lbrakk>P,E,h \<turnstile> e :' Class C; Cs \<noteq> []; P \<turnstile> C has least F:T via Cs\<rbrakk>
\<Longrightarrow> P,E,h \<turnstile> e\<bullet>F{Cs} :' T"
| "P,E,h \<turnstile> e :' NT \<Longrightarrow> P,E,h \<turnstile> e\<bullet>F{Cs} :' T"
| "\<lbrakk>P,E,h \<turnstile> e\<^sub>1 :' Class C; Cs \<noteq> []; P \<turnstile> C has least F:T via Cs;
P,E,h \<turnstile> e\<^sub>2 :' T'; P \<turnstile> T' \<le> T \<rbrakk>
\<Longrightarrow> P,E,h \<turnstile> e\<^sub>1\<bullet>F{Cs}:=e\<^sub>2 :' T"
| "\<lbrakk> P,E,h \<turnstile> e\<^sub>1:'NT; P,E,h \<turnstile> e\<^sub>2 :' T'; P \<turnstile> T' \<le> T \<rbrakk>
\<Longrightarrow> P,E,h \<turnstile> e\<^sub>1\<bullet>F{Cs}:=e\<^sub>2 :' T"
| "\<lbrakk> P,E,h \<turnstile> e :' Class C; P \<turnstile> C has least M = (Ts,T,m) via Cs;
P,E,h \<turnstile> es [:'] Ts'; P \<turnstile> Ts' [\<le>] Ts \<rbrakk>
\<Longrightarrow> P,E,h \<turnstile> e\<bullet>M(es) :' T"
| "\<lbrakk> P,E,h \<turnstile> e :' Class C'; P \<turnstile> Path C' to C unique;
P \<turnstile> C has least M = (Ts,T,m) via Cs;
P,E,h \<turnstile> es [:'] Ts'; P \<turnstile> Ts' [\<le>] Ts \<rbrakk>
\<Longrightarrow> P,E,h \<turnstile> e\<bullet>(C::)M(es) :' T"
| "\<lbrakk>P,E,h \<turnstile> e :' NT; P,E,h \<turnstile> es [:'] Ts\<rbrakk> \<Longrightarrow> P,E,h \<turnstile> Call e Copt M es :' T"
| "\<lbrakk> P \<turnstile> typeof\<^bsub>h\<^esub> v = Some T'; P,E(V\<mapsto>T),h \<turnstile> e\<^sub>2 :' T\<^sub>2; P \<turnstile> T' \<le> T; is_type P T \<rbrakk>
\<Longrightarrow> P,E,h \<turnstile> {V:T := Val v; e\<^sub>2} :' T\<^sub>2"
| "\<lbrakk> P,E(V\<mapsto>T),h \<turnstile> e :' T'; \<not> assigned V e; is_type P T \<rbrakk>
\<Longrightarrow> P,E,h \<turnstile> {V:T; e} :' T'"
| "\<lbrakk> P,E,h \<turnstile> e\<^sub>1 :' T\<^sub>1; P,E,h \<turnstile> e\<^sub>2 :' T\<^sub>2 \<rbrakk> \<Longrightarrow> P,E,h \<turnstile> e\<^sub>1;;e\<^sub>2 :' T\<^sub>2"
| "\<lbrakk> P,E,h \<turnstile> e :' Boolean; P,E,h \<turnstile> e\<^sub>1:' T; P,E,h \<turnstile> e\<^sub>2:' T \<rbrakk>
\<Longrightarrow> P,E,h \<turnstile> if (e) e\<^sub>1 else e\<^sub>2 :' T"
| "\<lbrakk> P,E,h \<turnstile> e :' Boolean; P,E,h \<turnstile> c:' T \<rbrakk>
\<Longrightarrow> P,E,h \<turnstile> while(e) c :' Void"
| "\<lbrakk> P,E,h \<turnstile> e :' T'; is_refT T'\<rbrakk> \<Longrightarrow> P,E,h \<turnstile> throw e :' T"
| "P,E,h \<turnstile> [] [:'] []"
| "\<lbrakk> P,E,h \<turnstile> e :' T; P,E,h \<turnstile> es [:'] Ts \<rbrakk> \<Longrightarrow> P,E,h \<turnstile> e#es [:'] T#Ts"
lemmas WTrt'_induct = WTrt'_WTrts'.induct [split_format (complete)]
and WTrt'_inducts = WTrt'_WTrts'.inducts [split_format (complete)]
inductive_cases WTrt'_elim_cases[elim!]:
"P,E,h \<turnstile> V :=e :' T"
text\<open>... and some easy consequences:\<close>
lemma [iff]: "P,E,h \<turnstile> e\<^sub>1;;e\<^sub>2 :' T\<^sub>2 = (\<exists>T\<^sub>1. P,E,h \<turnstile> e\<^sub>1 :' T\<^sub>1 \<and> P,E,h \<turnstile> e\<^sub>2 :' T\<^sub>2)"
apply(rule iffI)
apply (auto elim: WTrt'.cases intro!:WTrt'_WTrts'.intros)
done
lemma [iff]: "P,E,h \<turnstile> Val v :' T = (P \<turnstile> typeof\<^bsub>h\<^esub> v = Some T)"
apply(rule iffI)
apply (auto elim: WTrt'.cases intro!:WTrt'_WTrts'.intros)
done
lemma [iff]: "P,E,h \<turnstile> Var V :' T = (E V = Some T)"
apply(rule iffI)
apply (auto elim: WTrt'.cases intro!:WTrt'_WTrts'.intros)
done
lemma wt_wt': "P,E,h \<turnstile> e : T \<Longrightarrow> P,E,h \<turnstile> e :' T"
and wts_wts': "P,E,h \<turnstile> es [:] Ts \<Longrightarrow> P,E,h \<turnstile> es [:'] Ts"
proof (induct rule:WTrt_inducts)
case (WTrtBlock E V T h e T')
thus ?case
apply(case_tac "assigned V e")
apply(auto intro:WTrt'_WTrts'.intros
simp add:fun_upd_same assigned_def simp del:fun_upd_apply)
done
qed(auto intro:WTrt'_WTrts'.intros simp del:fun_upd_apply)
lemma wt'_wt: "P,E,h \<turnstile> e :' T \<Longrightarrow> P,E,h \<turnstile> e : T"
and wts'_wts: "P,E,h \<turnstile> es [:'] Ts \<Longrightarrow> P,E,h \<turnstile> es [:] Ts"
apply (induct rule:WTrt'_inducts)
apply (fastforce intro: WTrt_WTrts.intros)+
done
corollary wt'_iff_wt: "(P,E,h \<turnstile> e :' T) = (P,E,h \<turnstile> e : T)"
by(blast intro:wt_wt' wt'_wt)
corollary wts'_iff_wts: "(P,E,h \<turnstile> es [:'] Ts) = (P,E,h \<turnstile> es [:] Ts)"
by(blast intro:wts_wts' wts'_wts)
lemmas WTrt_inducts2 = WTrt'_inducts [unfolded wt'_iff_wt wts'_iff_wts,
case_names WTrtNew WTrtDynCast WTrtStaticCast WTrtVal WTrtVar WTrtBinOp
WTrtLAss WTrtFAcc WTrtFAccNT WTrtFAss WTrtFAssNT WTrtCall WTrtStaticCall WTrtCallNT
WTrtInitBlock WTrtBlock WTrtSeq WTrtCond WTrtWhile WTrtThrow
WTrtNil WTrtCons, consumes 1]
subsection\<open>The theorem \<open>progress\<close>\<close>
lemma mdc_leq_dyn_type:
"P,E,h \<turnstile> e : T \<Longrightarrow>
\<forall>C a Cs D S. T = Class C \<and> e = ref(a,Cs) \<and> h a = Some(D,S) \<longrightarrow> P \<turnstile> D \<preceq>\<^sup>* C"
and "P,E,h \<turnstile> es [:] Ts \<Longrightarrow>
\<forall>T Ts' e es' C a Cs D S. Ts = T#Ts' \<and> es = e#es' \<and>
T = Class C \<and> e = ref(a,Cs) \<and> h a = Some(D,S)
\<longrightarrow> P \<turnstile> D \<preceq>\<^sup>* C"
proof (induct rule:WTrt_inducts2)
case (WTrtVal h v T E)
have type:"P \<turnstile> typeof\<^bsub>h\<^esub> v = Some T" by fact
{ fix C a Cs D S
assume "T = Class C" and "Val v = ref(a,Cs)" and "h a = Some(D,S)"
with type have "Subobjs P D Cs" and "C = last Cs" by (auto split:if_split_asm)
hence "P \<turnstile> D \<preceq>\<^sup>* C" by simp (rule Subobjs_subclass) }
thus ?case by blast
qed auto
lemma appendPath_append_last:
assumes notempty:"Ds \<noteq> []"
shows"(Cs @\<^sub>p Ds) @\<^sub>p [last Ds] = (Cs @\<^sub>p Ds)"
proof -
have "last Cs = hd Ds \<Longrightarrow> last (Cs @ tl Ds) = last Ds"
proof(cases "tl Ds = []")
case True
assume last:"last Cs = hd Ds"
with True notempty have "Ds = [last Cs]" by (fastforce dest:hd_Cons_tl)
hence "last Ds = last Cs" by simp
with True show ?thesis by simp
next
case False
assume last:"last Cs = hd Ds"
from notempty False have "last (tl Ds) = last Ds"
by -(drule hd_Cons_tl,drule_tac x="hd Ds" in last_ConsR,simp)
with False show ?thesis by simp
qed
thus ?thesis by(simp add:appendPath_def)
qed
end
|
Wood closet doors – In most medium-sized houses the rooms come with a closet included; it may not be the largest, but it does the job. In big houses the scene is completely different: the closet occupies a whole room, where you can feel like you are in a private clothing store. Whichever case you identify with, today we will show you some great ideas for building a beautiful wooden wardrobe that fits your needs. We will take wood as the main material, since it is the most widely used for building closets: it is resistant, durable, keeps moisture away and is the best way to preserve your clothes.
Dedicate a whole room to a made-to-measure wood closet: this is the best solution for a shared closet, and dividing the drawers and shelves equally makes for a fair distribution. One of the great benefits of wood is the many types you can choose from; here we show a custom-made closet option in dark wood. The choice will also be defined by the decorative style of the room. One of the most current wooden wardrobe options is large doors that run from floor to ceiling in polished mahogany, which goes perfectly with any decoration inside the room. The interior is up to you.
The main thing in a wood closet is that it should be practical, with plenty of storage space. The idea of multiple shelves and drawers is excellent, because it lets you keep all your belongings in order: the more levels, the more space to organize. And a sliding door is always welcome; it takes up no space and is easy to hide.
|
[STATEMENT]
lemma decisive5:
assumes has3A: "hasw [x,y,z] A"
and iia: "iia swf A Is"
and swf: "SWF swf A Is universal_domain"
and wp: "weak_pareto swf A Is universal_domain"
and sd: "semidecisive swf A Is {j} x y"
shows "decisive swf A Is {j} y x"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. decisive swf A Is {j} y x
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. decisive swf A Is {j} y x
[PROOF STEP]
from sd
[PROOF STATE]
proof (chain)
picking this:
semidecisive swf A Is {j} x y
[PROOF STEP]
have "decisive swf A Is {j} x z"
[PROOF STATE]
proof (prove)
using this:
semidecisive swf A Is {j} x y
goal (1 subgoal):
1. decisive swf A Is {j} x z
[PROOF STEP]
by (rule decisive1[OF has3A iia swf wp])
[PROOF STATE]
proof (state)
this:
decisive swf A Is {j} x z
goal (1 subgoal):
1. decisive swf A Is {j} y x
[PROOF STEP]
hence "semidecisive swf A Is {j} x z"
[PROOF STATE]
proof (prove)
using this:
decisive swf A Is {j} x z
goal (1 subgoal):
1. semidecisive swf A Is {j} x z
[PROOF STEP]
by (rule d_imp_sd)
[PROOF STATE]
proof (state)
this:
semidecisive swf A Is {j} x z
goal (1 subgoal):
1. decisive swf A Is {j} y x
[PROOF STEP]
hence "decisive swf A Is {j} y z"
[PROOF STATE]
proof (prove)
using this:
semidecisive swf A Is {j} x z
goal (1 subgoal):
1. decisive swf A Is {j} y z
[PROOF STEP]
by (rule decisive3[OF has3A iia swf wp])
[PROOF STATE]
proof (state)
this:
decisive swf A Is {j} y z
goal (1 subgoal):
1. decisive swf A Is {j} y x
[PROOF STEP]
hence "semidecisive swf A Is {j} y z"
[PROOF STATE]
proof (prove)
using this:
decisive swf A Is {j} y z
goal (1 subgoal):
1. semidecisive swf A Is {j} y z
[PROOF STEP]
by (rule d_imp_sd)
[PROOF STATE]
proof (state)
this:
semidecisive swf A Is {j} y z
goal (1 subgoal):
1. decisive swf A Is {j} y x
[PROOF STEP]
thus "decisive swf A Is {j} y x"
[PROOF STATE]
proof (prove)
using this:
semidecisive swf A Is {j} y z
goal (1 subgoal):
1. decisive swf A Is {j} y x
[PROOF STEP]
by (rule decisive4[OF has3A iia swf wp])
[PROOF STATE]
proof (state)
this:
decisive swf A Is {j} y x
goal:
No subgoals!
[PROOF STEP]
qed
|
clear; clc;
load mariodata;
global H;
global A;
global mario;
global dxnkey;
global dxnpressed;
global shiftpressed;
global spacepressed;
global spacetime;
dxnpressed=0;
shiftpressed=0;
spacepressed=0;
dxnkey='';
timeout = 30; % duration of sim (sec)
% rfrsh = 1/60; % NTSC refresh rate (sec)
rfrsh = 1/10; % refresh rate that works for now (want better rate)
scale = 1; %
%SETUP FIGURE:
figure(1); clf;
hold on;
gnd=patch([0 0 10000 10000 0],[-20 0 0 -20 -20],7);
set(gca,'YTick',[],'XTick',[],'Color',[0.3 0.6 0.94]);
plot3([0 1000],[0 0],[-1 -1],'k','LineWidth',2);
axis([0 400 -20 200]);
set(gcf,'keypressfcn',@keypress,'keyreleasefcn',@keyrelease,'WindowButtonUpFcn','');
axis equal;
%INITIALIZE SPRITE PATCHES:
maxdims = [32 17]; %sprite dimensions
colormap(clrs);
for i=1:maxdims(1) %rows
for j=1:maxdims(2) %cols
H(i,j)=patch([-1 -1 0 0 -1]+j,[-1 0 0 -1 -1]+i,0,'FaceAlpha',0);
end;
end;
shading flat;
ttl = title(sprintf('TIME\n%03.0f',max(timeout,0)),'FontWeight','bold','Color',[1 1 1],'FontSize',14,'FontName','Monotxt','HorizontalAlignment','right','Position', [390,150,1]);
%INITIAL STATE:
%states: 1:small, 2:big, 3:firepower
%actions: 1:stand, 2:run, 3:jump, 4:duck, 5:skid, 6:climb, 7:swim, 8:shoot, 9:die
%dxns: 1:right, 2:left
state = 3; %current character state
dxn = 1; %current character direction
action = 1; %current character action
iter = 1; %frame within current action
loc = [0 0]; %initial position
spd = [0 0]; %current x and y speed
spdlim = [8 18]; %magnitude of max speed in x and y
acc = [0 0]; %current x and y acceleration
updatesprite(state,dxn,action,iter,[0 0])
%REMINDER:
%states: 1:small 2:big 3:firepower
%dxns: 1:right 2:left
%actions: 1:stand 2:walk 3:jump 4:duck 5:skid 6:climb 7:swim 8:shoot 9:die
%MAIN LOOP:
tic;
tnext=0;
t=toc;
while toc<timeout,
%DIRECTION COMMANDED:
if dxnpressed, %accelerate
if dxnkey(1)=='d' %user pressed down (duck)
action=4;
if spd(1)==0 %stay at rest
acc(1)=0;
else
acc(1) = -sign(spd(1))*ceil(abs(spd(1)/2)); %decelerate
end;
elseif dxnkey(1)=='r' %accelerate right
acc(1) = 2*(spd(1)<spdlim(1));
if spd(1)<0 %skidding turn left to right
action = 5;
dxn = 2;
else %running right
action = 2;
dxn = 1;
end;
elseif dxnkey(1)=='l' %accelerate left
acc(1) = -2*(spd(1)>-spdlim(1));
if spd(1)>0 %skidding turn right to left
action = 5;
dxn = 1;
else %running left
action = 2;
dxn = 2;
end;
end;
%NO ARROWS PRESSED: slow to rest
else
if spd(1)==0 %stay at rest
acc(1)=0;
action=1;
else
acc(1) = -sign(spd(1))*ceil(abs(spd(1)/2)); %decelerate
end;
end;
%SHOOTING:
if state==3 && shiftpressed,
action=8;
end;
%Jumping:
if spacepressed && loc(2)==0,
spacepressed = 0;
acc(2)=-3;
spd(2)=18;
end;
%UPDATE POSITION:
spd = round(spd+acc);
loc = loc+spd;
%end jump?
if loc(2)<=0 && spd(2)<0 %end jump
loc(2)=0;
spd(2)=0;
acc(2)=0;
if dxnkey(1)=='l' || dxnkey(1)=='r' %end running
action=2;
elseif dxnkey(1)=='d' %end ducking
action=4;
end;
elseif loc(2)>0
if action==4 || action==8 %jumping stance unless ducking or shooting
else
action=3; %airborne stance
end;
end;
%INCREMENT FRAME OF CURRENT ACTION:
if action==2,
if iter==1, iter=3;
else iter=iter-1;
end;
elseif sum(action==[1 3 4 5 7 9])
iter=1;
end;
%UPDATE FIGURE:
updatesprite(state,dxn,action,iter,loc)
axis([0 400 -20 200])
set(ttl,'String',sprintf('TIME\n%03.0f',max(timeout-toc,0)));
drawnow;
%CONTROL REFRESH RATE:
% tnext = tnext+rfrsh;
% t=toc;
% txtra = tnext-t;
% if txtra>0,
% pause(txtra);
% disp('paused');
% else
% disp([t,tnext,txtra])
% end;
end;
%DEATH
ylims = get(gca,'YLim');
spd=[0 12];
acc=[0 -3];
updatesprite(1,1,9,1,loc);
pause(1);
while loc(2)>ylims(1)-30
spd(2)=max(spd(2)+acc(2),-spdlim(2)); %terminal velocity
loc=loc+spd;
updatesprite(1,1,9,1,loc);
drawnow;
end;
clf;
plot(0,0);
set(gca,'Color','k','XTick',[],'YTick',[],'ButtonDownFcn','test1');
text(140,100,'click to continue','Color','w','FontSize',18);
axis([0 400 -20 200],'equal');
|
[STATEMENT]
lemma right_zero_game[simp]: "right_options (zero_game) = zempty"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. right_options zero_game = zempty
[PROOF STEP]
by (simp add: zero_game_def)
|
#!/usr/bin/r
library(inline)
library(rbenchmark)
serialCode <- '
// assign to C++ vector
std::vector<double> x = Rcpp::as<std::vector< double > >(xs);
size_t n = x.size();
for (size_t i=0; i<n; i++) {
x[i] = ::log(x[i]);
}
return Rcpp::wrap(x);
'
funSerial <- cxxfunction(signature(xs="numeric"), body=serialCode, plugin="Rcpp")
serialStdAlgCode <- '
std::vector<double> x = Rcpp::as<std::vector< double > >(xs);
std::transform(x.begin(), x.end(), x.begin(), ::log);
return Rcpp::wrap(x);
'
funSerialStdAlg <- cxxfunction(signature(xs="numeric"), body=serialStdAlgCode, plugin="Rcpp")
## same, but with Rcpp vector just to see if there is measurable difference
serialRcppCode <- '
// assign to C++ vector
Rcpp::NumericVector x = Rcpp::NumericVector(xs);
size_t n = x.size();
for (size_t i=0; i<n; i++) {
x[i] = ::log(x[i]);
}
return x;
'
funSerialRcpp <- cxxfunction(signature(xs="numeric"), body=serialRcppCode, plugin="Rcpp")
serialStdAlgRcppCode <- '
Rcpp::NumericVector x = Rcpp::NumericVector(xs);
std::transform(x.begin(), x.end(), x.begin(), ::log);
return x;
'
funSerialStdAlgRcpp <- cxxfunction(signature(xs="numeric"), body=serialStdAlgRcppCode, plugin="Rcpp")
serialImportTransRcppCode <- '
Rcpp::NumericVector x(xs);
return Rcpp::NumericVector::import_transform(x.begin(), x.end(), ::log);
'
funSerialImportTransRcpp <- cxxfunction(signature(xs="numeric"), body=serialImportTransRcppCode, plugin="Rcpp")
## now with a sugar expression which internalizes the loop
sugarRcppCode <- '
// assign to C++ vector
Rcpp::NumericVector x = log ( Rcpp::NumericVector(xs) );
return x;
'
funSugarRcpp <- cxxfunction(signature(xs="numeric"), body=sugarRcppCode, plugin="Rcpp")
## lastly via OpenMP for parallel use
openMPCode <- '
// assign to C++ vector
std::vector<double> x = Rcpp::as<std::vector< double > >(xs);
size_t n = x.size();
#pragma omp parallel for shared(x, n)
for (size_t i=0; i<n; i++) {
x[i] = ::log(x[i]);
}
return Rcpp::wrap(x);
'
## modify the plugin for Rcpp to support OpenMP
settings <- getPlugin("Rcpp")
settings$env$PKG_CXXFLAGS <- paste('-fopenmp', settings$env$PKG_CXXFLAGS)
settings$env$PKG_LIBS <- paste('-fopenmp -lgomp', settings$env$PKG_LIBS)
funOpenMP <- cxxfunction(signature(xs="numeric"), body=openMPCode, plugin="Rcpp", settings=settings)
z <- seq(1, 2e6)
res <- benchmark(funSerial(z), funSerialStdAlg(z),
funSerialRcpp(z), funSerialStdAlgRcpp(z),
funSerialImportTransRcpp(z),
funOpenMP(z), funSugarRcpp(z),
columns=c("test", "replications", "elapsed",
"relative", "user.self", "sys.self"),
order="relative",
replications=100)
print(res)
|
//
// syrk.h
// Linear Algebra Template Library
//
// Created by Rodney James on 1/4/12.
// Copyright (c) 2012 University of Colorado Denver. All rights reserved.
//
#ifndef _syrk_h
#define _syrk_h
/// @file syrk.h Performs matrix-matrix multiplication operations resulting in symmetric matrices.
#include <cctype>
#include "latl.h"
namespace LATL
{
/// @brief Performs multiplication of a real matrix with its transpose.
///
/// For real matrix A, real symmetric matrix C, and real scalars alpha and beta
///
/// C := alpha*A*A'+beta*C or C := alpha*A'*A+beta*C
/// is computed.
/// @return 0 if success.
/// @return -i if the ith argument is invalid.
/// @tparam real_t Floating point type.
/// @param uplo Specifies whether the upper or lower triangular part of the symmetric matrix C
/// is to be referenced:
///
/// if uplo = 'U' or 'u' then C is upper triangular,
/// if uplo = 'L' or 'l' then C is lower triangular.
/// @param trans Specifies the operation to be performed as follows:
///
/// if trans = 'N' or 'n' then C := alpha*A*A'+beta*C
/// if trans = 'T' or 't' then C := alpha*A'*A+beta*C
/// if trans = 'C' or 'c' then C := alpha*A'*A+beta*C
/// @param n Specifies the order of the symmetric matrix C. n>=0
/// @param k Specifies the other dimension of the real matrix A (see below). k>=0
/// @param alpha Real scalar.
/// @param A Pointer to real matrix.
///
/// if trans = 'N' or 'n' then A is n-by-k
/// if trans = 'T' or 't' then A is k-by-n
/// if trans = 'C' or 'c' then A is k-by-n
/// @param ldA Column length of the matrix A. If trans = 'N' or 'n' ldA>=n, otherwise ldA>=k.
/// @param beta Real scalar.
/// @param C Pointer to real symmetric n-by-n matrix C.
/// Only the upper or lower triangular part of C is referenced, depending on the value of uplo above.
/// @param ldC Column length of the matrix C. ldC>=n
/// @ingroup BLAS
template <typename real_t>
int SYRK(char uplo, char trans, int_t n, int_t k, real_t alpha, real_t *A, int_t ldA, real_t beta, real_t *C, int_t ldC)
{
using std::toupper;
const real_t zero(0.0);
const real_t one(1.0);
int_t i,j,l;
real_t *a,*c,*at;
real_t t;
uplo=toupper(uplo);
trans=toupper(trans);
if((uplo!='U')&&(uplo!='L'))
return -1;
else if((trans!='N')&&(trans!='T')&&(trans!='C'))
return -2;
else if(n<0)
return -3;
else if(k<0)
return -4;
else if(ldA<((trans=='N')?n:k))
return -7;
else if(ldC<n)
return -10;
else if((n==0)||(((alpha==zero)||(k==0))&&(beta==one)))
return 0;
if(alpha==zero)
{
if(uplo=='U')
{
c=C;
for(j=0;j<n;j++)
{
for(i=0;i<=j;i++)
c[i]*=beta;
c+=ldC;
}
}
else
{
c=C;
for(j=0;j<n;j++)
{
for(i=j;i<n;i++)
c[i]*=beta;
c+=ldC;
}
}
}
else if(trans=='N')
{
if(uplo=='U')
{
c=C;
for(j=0;j<n;j++)
{
for(i=0;i<=j;i++)
c[i]*=beta;
a=A;
for(l=0;l<k;l++)
{
t=alpha*a[j];
for(i=0;i<=j;i++)
c[i]+=t*a[i];
a+=ldA;
}
c+=ldC;
}
}
else
{
c=C;
for(j=0;j<n;j++)
{
for(i=j;i<n;i++)
c[i]*=beta;
a=A;
for(l=0;l<k;l++)
{
t=alpha*a[j];
for(i=j;i<n;i++)
c[i]+=t*a[i];
a+=ldA;
}
c+=ldC;
}
}
}
else
{
if(uplo=='U')
{
c=C;
at=A;
for(j=0;j<n;j++)
{
a=A;
for(i=0;i<=j;i++)
{
t=zero;
for(l=0;l<k;l++)
t+=a[l]*at[l];
c[i]=alpha*t+beta*c[i];
a+=ldA;
}
at+=ldA;
c+=ldC;
}
}
else
{
at=A;
c=C;
for(j=0;j<n;j++)
{
a=A+j*ldA;
for(i=j;i<n;i++)
{
t=zero;
for(l=0;l<k;l++)
t+=a[l]*at[l];
c[i]=alpha*t+beta*c[i];
a+=ldA;
}
at+=ldA;
c+=ldC;
}
}
}
return 0;
}
/// @brief Performs multiplication of a complex matrix with its transpose.
///
/// For complex matrix A, complex symmetric matrix C, and complex scalars alpha and beta
///
/// C := alpha*A*A.'+beta*C or C := alpha*A.'*A+beta*C
/// is computed.
/// @return 0 if success.
/// @return -i if the ith argument is invalid.
/// @tparam real_t Floating point type.
/// @param uplo Specifies whether the upper or lower triangular part of the symmetric matrix C
/// is to be referenced:
///
/// if uplo = 'U' or 'u' then C is upper triangular,
/// if uplo = 'L' or 'l' then C is lower triangular.
/// @param trans Specifies the operation to be performed as follows:
///
/// if trans = 'N' or 'n' then C := alpha*A*A.'+beta*C
/// if trans = 'T' or 't' then C := alpha*A.'*A+beta*C
/// @param n Specifies the order of the complex symmetric matrix C. n>=0
/// @param k Specifies the other dimension of the complex matrix A (see below). k>=0
/// @param alpha Complex scalar.
/// @param A Pointer to complex matrix.
///
/// if trans = 'N' or 'n' then A is n-by-k
/// if trans = 'T' or 't' then A is k-by-n
/// @param ldA Column length of the matrix A. If trans = 'N' or 'n', ldA>=n; otherwise ldA>=k.
/// @param beta Complex scalar.
/// @param C Pointer to complex symmetric n-by-n matrix C.
/// Only the upper or lower triangular part of C is referenced, depending on the value of uplo above.
/// @param ldC Column length of the matrix C. ldC>=n
/// @ingroup BLAS
template <typename real_t>
int SYRK(char uplo, char trans, int_t n, int_t k, complex<real_t> alpha, complex<real_t> *A, int_t ldA, complex<real_t> beta, complex<real_t> *C, int_t ldC)
{
using std::toupper;
const complex<real_t> zero(0.0,0.0);
const complex<real_t> one(1.0,0.0);
int_t i,j,l;
complex<real_t> *a,*c,*at;
complex<real_t> t;
uplo=toupper(uplo);
trans=toupper(trans);
if((uplo!='U')&&(uplo!='L'))
return -1;
else if((trans!='N')&&(trans!='T'))
return -2;
else if(n<0)
return -3;
else if(k<0)
return -4;
else if(ldA<((trans=='N')?n:k))
return -7;
else if(ldC<n)
return -10;
else if((n==0)||(((alpha==zero)||(k==0))&&(beta==one)))
return 0;
if(alpha==zero)
{
if(uplo=='U')
{
c=C;
for(j=0;j<n;j++)
{
for(i=0;i<=j;i++)
c[i]*=beta;
c+=ldC;
}
}
else
{
c=C;
for(j=0;j<n;j++)
{
for(i=j;i<n;i++)
c[i]*=beta;
c+=ldC;
}
}
}
else if(trans=='N')
{
if(uplo=='U')
{
c=C;
for(j=0;j<n;j++)
{
for(i=0;i<=j;i++)
c[i]*=beta;
a=A;
for(l=0;l<k;l++)
{
t=alpha*a[j];
for(i=0;i<=j;i++)
c[i]+=t*a[i];
a+=ldA;
}
c+=ldC;
}
}
else
{
c=C;
for(j=0;j<n;j++)
{
for(i=j;i<n;i++)
c[i]*=beta;
a=A;
for(l=0;l<k;l++)
{
t=alpha*a[j];
for(i=j;i<n;i++)
c[i]+=t*a[i];
a+=ldA;
}
c+=ldC;
}
}
}
else
{
if(uplo=='U')
{
c=C;
at=A;
for(j=0;j<n;j++)
{
a=A;
for(i=0;i<=j;i++)
{
t=zero;
for(l=0;l<k;l++)
t+=a[l]*at[l];
c[i]=alpha*t+beta*c[i];
a+=ldA;
}
at+=ldA;
c+=ldC;
}
}
else
{
at=A;
c=C;
for(j=0;j<n;j++)
{
a=A+j*ldA;
for(i=j;i<n;i++)
{
t=zero;
for(l=0;l<k;l++)
t+=a[l]*at[l];
c[i]=alpha*t+beta*c[i];
a+=ldA;
}
at+=ldA;
c+=ldC;
}
}
}
return 0;
}
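The complex overload rejects trans='C' because SYRK uses the unconjugated transpose A.' rather than the conjugate transpose A^H. A one-element example already shows the difference; `syrk_1x1` is an illustrative helper, not part of this header.

```cpp
#include <cassert>
#include <complex>

// For the 1x1 matrix A = [a], SYRK with alpha=1, beta=0 computes a*a (the
// unconjugated product), not a*conj(a).  With a = i this gives -1, whereas
// a Hermitian rank-k update (HERK) would give +1.
std::complex<double> syrk_1x1(std::complex<double> a)
{
    return a * a;
}
```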
#ifdef __latl_cblas
#include <cblas.h>
template <> int SYRK<float>(char uplo, char trans, int_t n, int_t k, float alpha, float *A, int_t ldA, float beta, float *C, int_t ldC)
{
using std::toupper;
uplo=toupper(uplo);
trans=toupper(trans);
if((uplo!='U')&&(uplo!='L'))
return -1;
else if((trans!='N')&&(trans!='T')&&(trans!='C'))
return -2;
else if(n<0)
return -3;
else if(k<0)
return -4;
else if(ldA<((trans=='N')?n:k))
return -7;
else if(ldC<n)
return -10;
const CBLAS_UPLO Uplo=(uplo=='U')?CblasUpper:CblasLower;
const CBLAS_TRANSPOSE Trans=(trans=='N')?CblasNoTrans:((trans=='T')?CblasTrans:CblasConjTrans);
cblas_ssyrk(CblasColMajor,Uplo,Trans,n,k,alpha,A,ldA,beta,C,ldC);
return 0;
}
template <> int SYRK<double>(char uplo, char trans, int_t n, int_t k, double alpha, double *A, int_t ldA, double beta, double *C, int_t ldC)
{
using std::toupper;
uplo=toupper(uplo);
trans=toupper(trans);
if((uplo!='U')&&(uplo!='L'))
return -1;
else if((trans!='N')&&(trans!='T')&&(trans!='C'))
return -2;
else if(n<0)
return -3;
else if(k<0)
return -4;
else if(ldA<((trans=='N')?n:k))
return -7;
else if(ldC<n)
return -10;
const CBLAS_UPLO Uplo=(uplo=='U')?CblasUpper:CblasLower;
const CBLAS_TRANSPOSE Trans=(trans=='N')?CblasNoTrans:((trans=='T')?CblasTrans:CblasConjTrans);
cblas_dsyrk(CblasColMajor,Uplo,Trans,n,k,alpha,A,ldA,beta,C,ldC);
return 0;
}
template <> int SYRK<float>(char uplo, char trans, int_t n, int_t k, complex<float> alpha, complex<float> *A, int_t ldA, complex<float> beta, complex<float> *C, int_t ldC)
{
using std::toupper;
uplo=toupper(uplo);
trans=toupper(trans);
if((uplo!='U')&&(uplo!='L'))
return -1;
else if((trans!='N')&&(trans!='T'))
return -2;
else if(n<0)
return -3;
else if(k<0)
return -4;
else if(ldA<((trans=='N')?n:k))
return -7;
else if(ldC<n)
return -10;
const CBLAS_UPLO Uplo=(uplo=='U')?CblasUpper:CblasLower;
const CBLAS_TRANSPOSE Trans=(trans=='N')?CblasNoTrans:((trans=='T')?CblasTrans:CblasConjTrans);
cblas_csyrk(CblasColMajor,Uplo,Trans,n,k,&alpha,A,ldA,&beta,C,ldC);
return 0;
}
template <> int SYRK<double>(char uplo, char trans, int_t n, int_t k, complex<double> alpha, complex<double> *A, int_t ldA, complex<double> beta, complex<double> *C, int_t ldC)
{
using std::toupper;
uplo=toupper(uplo);
trans=toupper(trans);
if((uplo!='U')&&(uplo!='L'))
return -1;
else if((trans!='N')&&(trans!='T'))
return -2;
else if(n<0)
return -3;
else if(k<0)
return -4;
else if(ldA<((trans=='N')?n:k))
return -7;
else if(ldC<n)
return -10;
const CBLAS_UPLO Uplo=(uplo=='U')?CblasUpper:CblasLower;
const CBLAS_TRANSPOSE Trans=(trans=='N')?CblasNoTrans:((trans=='T')?CblasTrans:CblasConjTrans);
cblas_zsyrk(CblasColMajor,Uplo,Trans,n,k,&alpha,A,ldA,&beta,C,ldC);
return 0;
}
#endif
}
#endif
If $f$ converges to $a$, then $\|f\|$ converges to $\|a\|$.
(*
Title: Order_Predicates.thy
Author: Manuel Eberl, TU München
Locales for order relations modelled as predicates (as opposed to sets of pairs).
*)
section \<open>Order Relations as Binary Predicates\<close>
theory Order_Predicates
imports
Main
"HOL-Library.Disjoint_Sets"
"HOL-Combinatorics.Permutations"
"List-Index.List_Index"
begin
subsection \<open>Basic Operations on Relations\<close>
text \<open>The type of binary relations\<close>
type_synonym 'a relation = "'a \<Rightarrow> 'a \<Rightarrow> bool"
definition map_relation :: "('a \<Rightarrow> 'b) \<Rightarrow> 'b relation \<Rightarrow> 'a relation" where
"map_relation f R = (\<lambda>x y. R (f x) (f y))"
definition restrict_relation :: "'a set \<Rightarrow> 'a relation \<Rightarrow> 'a relation" where
"restrict_relation A R = (\<lambda>x y. x \<in> A \<and> y \<in> A \<and> R x y)"
lemma restrict_relation_restrict_relation [simp]:
"restrict_relation A (restrict_relation B R) = restrict_relation (A \<inter> B) R"
by (intro ext) (auto simp add: restrict_relation_def)
lemma restrict_relation_empty [simp]: "restrict_relation {} R = (\<lambda>_ _. False)"
by (simp add: restrict_relation_def)
lemma restrict_relation_UNIV [simp]: "restrict_relation UNIV R = R"
by (simp add: restrict_relation_def)
subsection \<open>Preorders\<close>
text \<open>Preorders are reflexive and transitive binary relations.\<close>
locale preorder_on =
fixes carrier :: "'a set"
fixes le :: "'a relation"
assumes not_outside: "le x y \<Longrightarrow> x \<in> carrier" "le x y \<Longrightarrow> y \<in> carrier"
assumes refl: "x \<in> carrier \<Longrightarrow> le x x"
assumes trans: "le x y \<Longrightarrow> le y z \<Longrightarrow> le x z"
begin
lemma carrier_eq: "carrier = {x. le x x}"
using not_outside refl by auto
lemma preorder_on_map:
"preorder_on (f -` carrier) (map_relation f le)"
by unfold_locales (auto dest: not_outside simp: map_relation_def refl elim: trans)
lemma preorder_on_restrict:
"preorder_on (carrier \<inter> A) (restrict_relation A le)"
by unfold_locales (auto simp: restrict_relation_def refl intro: trans not_outside)
lemma preorder_on_restrict_subset:
"A \<subseteq> carrier \<Longrightarrow> preorder_on A (restrict_relation A le)"
using preorder_on_restrict[of A] by (simp add: Int_absorb1)
lemma restrict_relation_carrier [simp]:
"restrict_relation carrier le = le"
using not_outside by (intro ext) (auto simp add: restrict_relation_def)
end
subsection \<open>Total preorders\<close>
text \<open>Total preorders are preorders where any two elements are comparable.\<close>
locale total_preorder_on = preorder_on +
assumes total: "x \<in> carrier \<Longrightarrow> y \<in> carrier \<Longrightarrow> le x y \<or> le y x"
begin
lemma total': "\<not>le x y \<Longrightarrow> x \<in> carrier \<Longrightarrow> y \<in> carrier \<Longrightarrow> le y x"
using total[of x y] by blast
lemma total_preorder_on_map:
"total_preorder_on (f -` carrier) (map_relation f le)"
proof -
interpret R': preorder_on "f -` carrier" "map_relation f le"
using preorder_on_map[of f] .
show ?thesis by unfold_locales (simp add: map_relation_def total)
qed
lemma total_preorder_on_restrict:
"total_preorder_on (carrier \<inter> A) (restrict_relation A le)"
proof -
interpret R': preorder_on "carrier \<inter> A" "restrict_relation A le"
by (rule preorder_on_restrict)
from total show ?thesis
by unfold_locales (auto simp: restrict_relation_def)
qed
lemma total_preorder_on_restrict_subset:
"A \<subseteq> carrier \<Longrightarrow> total_preorder_on A (restrict_relation A le)"
using total_preorder_on_restrict[of A] by (simp add: Int_absorb1)
end
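As a small sanity check in the style of the `(* Test *)` lemma further down in this theory: the usual order on the naturals should instantiate `total_preorder_on` with the universal carrier. This example is an untested sketch; the one-line proof is plausible but may need minor adjustment.

```isabelle
(* Example: the standard order on nat is a total preorder on UNIV. *)
lemma "total_preorder_on UNIV ((\<le>) :: nat relation)"
  by unfold_locales (auto intro: order_trans linear)
```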
text \<open>Some fancy notation for order relations\<close>
abbreviation (input) weakly_preferred :: "'a \<Rightarrow> 'a relation \<Rightarrow> 'a \<Rightarrow> bool"
("_ \<preceq>[_] _" [51,10,51] 60) where
"a \<preceq>[R] b \<equiv> R a b"
definition strongly_preferred ("_ \<prec>[_] _" [51,10,51] 60) where
"a \<prec>[R] b \<equiv> (a \<preceq>[R] b) \<and> \<not>(b \<preceq>[R] a)"
definition indifferent ("_ \<sim>[_] _" [51,10,51] 60) where
"a \<sim>[R] b \<equiv> (a \<preceq>[R] b) \<and> (b \<preceq>[R] a)"
abbreviation (input) weakly_not_preferred ("_ \<succeq>[_] _" [51,10,51] 60) where
"a \<succeq>[R] b \<equiv> b \<preceq>[R] a"
abbreviation (input) strongly_not_preferred ("_ \<succ>[_] _" [51,10,51] 60) where
"a \<succ>[R] b \<equiv> b \<prec>[R] a"
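To illustrate the notation just introduced, two untested example lemmas; the `simp` calls should discharge them directly from the definitions.

```isabelle
(* 1 is strictly below 2 w.r.t. \<le> on nat, and \<le> makes every
   element indifferent to itself. *)
lemma "(1 :: nat) \<prec>[(\<le>)] 2"
  by (simp add: strongly_preferred_def)

lemma "(x :: nat) \<sim>[(\<le>)] x"
  by (simp add: indifferent_def)
```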
context preorder_on
begin
lemma strict_trans: "a \<prec>[le] b \<Longrightarrow> b \<prec>[le] c \<Longrightarrow> a \<prec>[le] c"
unfolding strongly_preferred_def by (blast intro: trans)
lemma weak_strict_trans: "a \<preceq>[le] b \<Longrightarrow> b \<prec>[le] c \<Longrightarrow> a \<prec>[le] c"
unfolding strongly_preferred_def by (blast intro: trans)
lemma strict_weak_trans: "a \<prec>[le] b \<Longrightarrow> b \<preceq>[le] c \<Longrightarrow> a \<prec>[le] c"
unfolding strongly_preferred_def by (blast intro: trans)
end
lemma (in total_preorder_on) not_weakly_preferred_iff:
"a \<in> carrier \<Longrightarrow> b \<in> carrier \<Longrightarrow> \<not>a \<preceq>[le] b \<longleftrightarrow> b \<prec>[le] a"
using total[of a b] by (auto simp: strongly_preferred_def)
lemma (in total_preorder_on) not_strongly_preferred_iff:
"a \<in> carrier \<Longrightarrow> b \<in> carrier \<Longrightarrow> \<not>a \<prec>[le] b \<longleftrightarrow> b \<preceq>[le] a"
using total[of a b] by (auto simp: strongly_preferred_def)
subsection \<open>Orders\<close>
locale order_on = preorder_on +
assumes antisymmetric: "le x y \<Longrightarrow> le y x \<Longrightarrow> x = y"
locale linorder_on = order_on carrier le + total_preorder_on carrier le for carrier le
subsection \<open>Maximal elements\<close>
text \<open>
Maximal elements are elements in a preorder for which there exists no strictly greater element.
\<close>
definition Max_wrt_among :: "'a relation \<Rightarrow> 'a set \<Rightarrow> 'a set" where
"Max_wrt_among R A = {x\<in>A. R x x \<and> (\<forall>y\<in>A. R x y \<longrightarrow> R y x)}"
lemma Max_wrt_among_cong:
assumes "restrict_relation A R = restrict_relation A R'"
shows "Max_wrt_among R A = Max_wrt_among R' A"
proof -
from assms have "R x y \<longleftrightarrow> R' x y" if "x \<in> A" "y \<in> A" for x y
using that by (auto simp: restrict_relation_def fun_eq_iff)
thus ?thesis unfolding Max_wrt_among_def by blast
qed
definition Max_wrt :: "'a relation \<Rightarrow> 'a set" where
"Max_wrt R = Max_wrt_among R UNIV"
lemma Max_wrt_altdef: "Max_wrt R = {x. R x x \<and> (\<forall>y. R x y \<longrightarrow> R y x)}"
unfolding Max_wrt_def Max_wrt_among_def by simp
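A concrete example of the definition: restricted to the carrier {1, 2, 3}, the usual order on nat has exactly the top element as its maximum. This is an untested sketch and the suggested proof method is a guess that may need tweaking.

```isabelle
(* Example: under \<le> restricted to {1,2,3}, only 3 is maximal. *)
lemma "Max_wrt (restrict_relation {1, 2, 3} ((\<le>) :: nat relation)) = {3}"
  by (fastforce simp: Max_wrt_altdef restrict_relation_def)
```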
context preorder_on
begin
lemma Max_wrt_among_preorder:
"Max_wrt_among le A = {x\<in>carrier \<inter> A. \<forall>y\<in>carrier \<inter> A. le x y \<longrightarrow> le y x}"
unfolding Max_wrt_among_def using not_outside refl by blast
lemma Max_wrt_preorder:
"Max_wrt le = {x\<in>carrier. \<forall>y\<in>carrier. le x y \<longrightarrow> le y x}"
unfolding Max_wrt_altdef using not_outside refl by blast
lemma Max_wrt_among_subset:
"Max_wrt_among le A \<subseteq> carrier" "Max_wrt_among le A \<subseteq> A"
unfolding Max_wrt_among_preorder by auto
lemma Max_wrt_subset:
"Max_wrt le \<subseteq> carrier"
unfolding Max_wrt_preorder by auto
lemma Max_wrt_among_nonempty:
assumes "B \<inter> carrier \<noteq> {}" "finite (B \<inter> carrier)"
shows "Max_wrt_among le B \<noteq> {}"
proof -
define A where "A = B \<inter> carrier"
have "A \<subseteq> carrier" by (simp add: A_def)
from assms(2,1)[folded A_def] this have "{x\<in>A. (\<forall>y\<in>A. le x y \<longrightarrow> le y x)} \<noteq> {}"
proof (induction A rule: finite_ne_induct)
case (singleton x)
thus ?case by (auto simp: refl)
next
case (insert x A)
then obtain y where y: "y \<in> A" "\<And>z. z \<in> A \<Longrightarrow> le y z \<Longrightarrow> le z y" by blast
thus ?case using insert.prems
by (cases "le y x") (blast intro: trans)+
qed
thus ?thesis by (simp add: A_def Max_wrt_among_preorder Int_commute)
qed
lemma Max_wrt_nonempty:
"carrier \<noteq> {} \<Longrightarrow> finite carrier \<Longrightarrow> Max_wrt le \<noteq> {}"
using Max_wrt_among_nonempty[of UNIV] by (simp add: Max_wrt_def)
lemma Max_wrt_among_map_relation_vimage:
"f -` Max_wrt_among le A \<subseteq> Max_wrt_among (map_relation f le) (f -` A)"
by (auto simp: Max_wrt_among_def map_relation_def)
lemma image_subset_vimage_the_inv_into:
assumes "inj_on f A" "B \<subseteq> A"
shows "f ` B \<subseteq> the_inv_into A f -` B"
using assms by (auto simp: the_inv_into_f_f)
lemma Max_wrt_among_map_relation_bij_subset:
assumes "bij (f :: 'a \<Rightarrow> 'b)"
shows "f ` Max_wrt_among le A \<subseteq>
Max_wrt_among (map_relation (inv f) le) (f ` A)"
using assms Max_wrt_among_map_relation_vimage[of "inv f" A]
by (simp add: bij_imp_bij_inv inv_inv_eq bij_vimage_eq_inv_image)
lemma Max_wrt_among_map_relation_bij:
assumes "bij f"
shows "f ` Max_wrt_among le A = Max_wrt_among (map_relation (inv f) le) (f ` A)"
proof (intro equalityI Max_wrt_among_map_relation_bij_subset assms)
interpret R: preorder_on "f ` carrier" "map_relation (inv f) le"
using preorder_on_map[of "inv f"] assms
by (simp add: bij_imp_bij_inv bij_vimage_eq_inv_image inv_inv_eq)
show "Max_wrt_among (map_relation (inv f) le) (f ` A) \<subseteq> f ` Max_wrt_among le A"
unfolding Max_wrt_among_preorder R.Max_wrt_among_preorder
using assms bij_is_inj[OF assms]
by (auto simp: map_relation_def inv_f_f image_Int [symmetric])
qed
lemma Max_wrt_map_relation_bij:
"bij f \<Longrightarrow> f ` Max_wrt le = Max_wrt (map_relation (inv f) le)"
proof -
assume bij: "bij f"
interpret R: preorder_on "f ` carrier" "map_relation (inv f) le"
using preorder_on_map[of "inv f"] bij
by (simp add: bij_imp_bij_inv bij_vimage_eq_inv_image inv_inv_eq)
from bij show ?thesis
unfolding R.Max_wrt_preorder Max_wrt_preorder
by (auto simp: map_relation_def inv_f_f bij_is_inj)
qed
lemma Max_wrt_among_mono:
"le x y \<Longrightarrow> x \<in> Max_wrt_among le A \<Longrightarrow> y \<in> A \<Longrightarrow> y \<in> Max_wrt_among le A"
using not_outside by (auto simp: Max_wrt_among_preorder intro: trans)
lemma Max_wrt_mono:
"le x y \<Longrightarrow> x \<in> Max_wrt le \<Longrightarrow> y \<in> Max_wrt le"
unfolding Max_wrt_def using Max_wrt_among_mono[of x y UNIV] by blast
end
context total_preorder_on
begin
lemma Max_wrt_among_total_preorder:
"Max_wrt_among le A = {x\<in>carrier \<inter> A. \<forall>y\<in>carrier \<inter> A. le y x}"
unfolding Max_wrt_among_preorder using total by blast
lemma Max_wrt_total_preorder:
"Max_wrt le = {x\<in>carrier. \<forall>y\<in>carrier. le y x}"
unfolding Max_wrt_preorder using total by blast
lemma decompose_Max:
assumes A: "A \<subseteq> carrier"
defines "M \<equiv> Max_wrt_among le A"
shows "restrict_relation A le = (\<lambda>x y. x \<in> A \<and> y \<in> M \<or> (y \<notin> M \<and> restrict_relation (A - M) le x y))"
using A by (intro ext) (auto simp: M_def Max_wrt_among_total_preorder
restrict_relation_def Int_absorb1 intro: trans)
end
subsection \<open>Weak rankings\<close>
inductive of_weak_ranking :: "'alt set list \<Rightarrow> 'alt relation" where
"i \<le> j \<Longrightarrow> i < length xs \<Longrightarrow> j < length xs \<Longrightarrow> x \<in> xs ! i \<Longrightarrow> y \<in> xs ! j \<Longrightarrow>
x \<succeq>[of_weak_ranking xs] y"
lemma of_weak_ranking_Nil [simp]: "of_weak_ranking [] = (\<lambda>_ _. False)"
by (intro ext) (simp add: of_weak_ranking.simps)
lemma of_weak_ranking_Nil' [code]: "of_weak_ranking [] x y = False"
by simp
lemma of_weak_ranking_Cons [code]:
"x \<succeq>[of_weak_ranking (z#zs)] y \<longleftrightarrow> x \<in> z \<and> y \<in> \<Union>(set (z#zs)) \<or> x \<succeq>[of_weak_ranking zs] y"
(is "?lhs \<longleftrightarrow> ?rhs")
proof
assume ?lhs
then obtain i j
where ij: "i < length (z#zs)" "j < length (z#zs)" "i \<le> j" "x \<in> (z#zs) ! i" "y \<in> (z#zs) ! j"
by (blast elim: of_weak_ranking.cases)
thus ?rhs by (cases i; cases j) (force intro: of_weak_ranking.intros)+
next
assume ?rhs
thus ?lhs
proof (elim disjE conjE)
assume "x \<in> z" "y \<in> \<Union>(set (z # zs))"
then obtain j where "j < length (z # zs)" "y \<in> (z # zs) ! j"
by (subst (asm) set_conv_nth) auto
with \<open>x \<in> z\<close> show "of_weak_ranking (z # zs) y x"
by (intro of_weak_ranking.intros[of 0 j]) auto
next
assume "of_weak_ranking zs y x"
then obtain i j where "i < length zs" "j < length zs" "i \<le> j" "x \<in> zs ! i" "y \<in> zs ! j"
by (blast elim: of_weak_ranking.cases)
thus "of_weak_ranking (z # zs) y x"
by (intro of_weak_ranking.intros[of "Suc i" "Suc j"]) auto
qed
qed
lemma of_weak_ranking_indifference:
assumes "A \<in> set xs" "x \<in> A" "y \<in> A"
shows "x \<preceq>[of_weak_ranking xs] y"
using assms by (induction xs) (auto simp: of_weak_ranking_Cons)
lemma of_weak_ranking_map:
"map_relation f (of_weak_ranking xs) = of_weak_ranking (map ((-`) f) xs)"
by (intro ext, induction xs)
(simp_all add: map_relation_def of_weak_ranking_Cons)
lemma of_weak_ranking_permute':
assumes "f permutes (\<Union>(set xs))"
shows "map_relation f (of_weak_ranking xs) = of_weak_ranking (map ((`) (inv f)) xs)"
proof -
have "map_relation f (of_weak_ranking xs) = of_weak_ranking (map ((-`) f) xs)"
by (rule of_weak_ranking_map)
also from assms have "map ((-`) f) xs = map ((`) (inv f)) xs"
by (intro map_cong refl) (simp_all add: bij_vimage_eq_inv_image permutes_bij)
finally show ?thesis .
qed
lemma of_weak_ranking_permute:
assumes "f permutes (\<Union>(set xs))"
shows "of_weak_ranking (map ((`) f) xs) = map_relation (inv f) (of_weak_ranking xs)"
using of_weak_ranking_permute'[OF permutes_inv[OF assms]] assms
by (simp add: inv_inv_eq permutes_bij)
definition is_weak_ranking where
"is_weak_ranking xs \<longleftrightarrow> ({} \<notin> set xs) \<and>
(\<forall>i j. i < length xs \<and> j < length xs \<and> i \<noteq> j \<longrightarrow> xs ! i \<inter> xs ! j = {})"
definition is_finite_weak_ranking where
"is_finite_weak_ranking xs \<longleftrightarrow> is_weak_ranking xs \<and> (\<forall>x\<in>set xs. finite x)"
definition weak_ranking :: "'alt relation \<Rightarrow> 'alt set list" where
"weak_ranking R = (SOME xs. is_weak_ranking xs \<and> R = of_weak_ranking xs)"
lemma is_weak_ranking_nonempty: "is_weak_ranking xs \<Longrightarrow> {} \<notin> set xs"
by (simp add: is_weak_ranking_def)
lemma is_weak_ranking_rev [simp]: "is_weak_ranking (rev xs) \<longleftrightarrow> is_weak_ranking xs"
by (simp add: is_weak_ranking_iff)
lemma is_weak_ranking_map_inj:
assumes "is_weak_ranking xs" "inj_on f (\<Union>(set xs))"
shows "is_weak_ranking (map ((`) f) xs)"
using assms by (auto simp: is_weak_ranking_iff distinct_map inj_on_image disjoint_image)
lemma of_weak_ranking_rev [simp]:
"of_weak_ranking (rev xs) (x::'a) y \<longleftrightarrow> of_weak_ranking xs y x"
proof -
have "of_weak_ranking (rev xs) y x" if "of_weak_ranking xs x y" for xs and x y :: 'a
proof -
from that obtain i j where "i < length xs" "j < length xs" "x \<in> xs ! i" "y \<in> xs ! j" "i \<ge> j"
by (elim of_weak_ranking.cases) simp_all
thus ?thesis
by (intro of_weak_ranking.intros[of "length xs - i - 1" "length xs - j - 1"] diff_le_mono2)
(auto simp: diff_le_mono2 rev_nth)
qed
from this[of xs y x] this[of "rev xs" x y] show ?thesis by (intro iffI) simp_all
qed
lemma is_weak_ranking_Nil [simp, code]: "is_weak_ranking []"
by (auto simp: is_weak_ranking_def)
lemma is_finite_weak_ranking_Nil [simp, code]: "is_finite_weak_ranking []"
by (auto simp: is_finite_weak_ranking_def)
lemma is_weak_ranking_Cons_empty [simp]:
"\<not>is_weak_ranking ({} # xs)" by (simp add: is_weak_ranking_def)
lemma is_finite_weak_ranking_Cons_empty [simp]:
"\<not>is_finite_weak_ranking ({} # xs)" by (simp add: is_finite_weak_ranking_def)
lemma is_weak_ranking_singleton [simp]:
"is_weak_ranking [x] \<longleftrightarrow> x \<noteq> {}"
by (auto simp add: is_weak_ranking_def)
lemma is_finite_weak_ranking_singleton [simp]:
"is_finite_weak_ranking [x] \<longleftrightarrow> x \<noteq> {} \<and> finite x"
by (auto simp add: is_finite_weak_ranking_def)
lemma is_weak_ranking_append:
"is_weak_ranking (xs @ ys) \<longleftrightarrow>
is_weak_ranking xs \<and> is_weak_ranking ys \<and>
(set xs \<inter> set ys = {} \<and> \<Union>(set xs) \<inter> \<Union>(set ys) = {})"
by (simp only: is_weak_ranking_iff)
(auto dest: disjointD disjoint_unionD1 disjoint_unionD2 intro: disjoint_union)
lemma is_weak_ranking_Cons [code]:
"is_weak_ranking (x # xs) \<longleftrightarrow>
x \<noteq> {} \<and> is_weak_ranking xs \<and> x \<inter> \<Union>(set xs) = {}"
using is_weak_ranking_append[of "[x]" xs] by auto
lemma is_finite_weak_ranking_Cons [code]:
"is_finite_weak_ranking (x # xs) \<longleftrightarrow>
x \<noteq> {} \<and> finite x \<and> is_finite_weak_ranking xs \<and> x \<inter> \<Union>(set xs) = {}"
by (auto simp add: is_finite_weak_ranking_def is_weak_ranking_Cons)
primrec is_weak_ranking_aux where
"is_weak_ranking_aux A [] \<longleftrightarrow> True"
| "is_weak_ranking_aux A (x#xs) \<longleftrightarrow> x \<noteq> {} \<and>
A \<inter> x = {} \<and> is_weak_ranking_aux (A \<union> x) xs"
lemma is_weak_ranking_aux:
"is_weak_ranking_aux A xs \<longleftrightarrow> A \<inter> \<Union>(set xs) = {} \<and> is_weak_ranking xs"
by (induction xs arbitrary: A) (auto simp: is_weak_ranking_Cons)
lemma is_weak_ranking_code [code]:
"is_weak_ranking xs \<longleftrightarrow> is_weak_ranking_aux {} xs"
by (subst is_weak_ranking_aux) auto
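The executable characterisation can be exercised in the same style as the `(* Test *)` lemma below; as with the other added examples, this is an untested sketch.

```isabelle
(* The code equation evaluates non-emptiness and disjointness directly. *)
lemma "is_weak_ranking [{1 :: nat, 3}, {2}, {4}]"
  by (simp add: is_weak_ranking_code insert_commute)
```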
lemma of_weak_ranking_altdef:
assumes "is_weak_ranking xs" "x \<in> \<Union>(set xs)" "y \<in> \<Union>(set xs)"
shows "of_weak_ranking xs x y \<longleftrightarrow>
find_index ((\<in>) x) xs \<ge> find_index ((\<in>) y) xs"
proof -
from assms
have A: "find_index ((\<in>) x) xs < length xs" "find_index ((\<in>) y) xs < length xs"
by (simp_all add: find_index_less_size_conv)
from this[THEN nth_find_index]
have B: "x \<in> xs ! find_index ((\<in>) x) xs" "y \<in> xs ! find_index ((\<in>) y) xs" .
show ?thesis
proof
assume "of_weak_ranking xs x y"
then obtain i j where ij: "j \<le> i" "i < length xs" "j < length xs" "x \<in> xs ! i" "y \<in> xs !j"
by (cases rule: of_weak_ranking.cases) simp_all
with A B have "i = find_index ((\<in>) x) xs" "j = find_index ((\<in>) y) xs"
using assms(1) unfolding is_weak_ranking_def by blast+
with ij show "find_index ((\<in>) x) xs \<ge> find_index ((\<in>) y) xs" by simp
next
assume "find_index ((\<in>) x) xs \<ge> find_index ((\<in>) y) xs"
from this A(2,1) B(2,1) show "of_weak_ranking xs x y"
by (rule of_weak_ranking.intros)
qed
qed
lemma restrict_relation_of_weak_ranking_Cons:
assumes "is_weak_ranking (A # As)"
shows "restrict_relation (\<Union>(set As)) (of_weak_ranking (A # As)) = of_weak_ranking As"
proof -
from assms interpret R: total_preorder_on "\<Union>(set As)" "of_weak_ranking As"
by (intro total_preorder_of_weak_ranking)
(simp_all add: is_weak_ranking_Cons)
from assms show ?thesis using R.not_outside
by (intro ext) (auto simp: restrict_relation_def of_weak_ranking_Cons
is_weak_ranking_Cons)
qed
lemmas of_weak_ranking_wf =
total_preorder_of_weak_ranking is_weak_ranking_code insert_commute
(* Test *)
lemma "total_preorder_on {1,2,3,4::nat} (of_weak_ranking [{1,3},{2},{4}])"
by (simp add: of_weak_ranking_wf)
context
fixes x :: "'alt set" and xs :: "'alt set list"
assumes wf: "is_weak_ranking (x#xs)"
begin
interpretation R: total_preorder_on "\<Union>(set (x#xs))" "of_weak_ranking (x#xs)"
by (intro total_preorder_of_weak_ranking) (simp_all add: wf)
lemma of_weak_ranking_imp_in_set:
assumes "of_weak_ranking xs a b"
shows "a \<in> \<Union>(set xs)" "b \<in> \<Union>(set xs)"
using assms by (fastforce elim!: of_weak_ranking.cases)+
lemma of_weak_ranking_Cons':
assumes "a \<in> \<Union>(set (x#xs))" "b \<in> \<Union>(set (x#xs))"
shows "of_weak_ranking (x#xs) a b \<longleftrightarrow> b \<in> x \<or> (a \<notin> x \<and> of_weak_ranking xs a b)"
proof
assume "of_weak_ranking (x # xs) a b"
with wf of_weak_ranking_imp_in_set[of a b]
show "(b \<in> x \<or> a \<notin> x \<and> of_weak_ranking xs a b)"
by (auto simp: is_weak_ranking_Cons of_weak_ranking_Cons)
next
assume "b \<in> x \<or> a \<notin> x \<and> of_weak_ranking xs a b"
with assms show "of_weak_ranking (x#xs) a b"
by (fastforce simp: of_weak_ranking_Cons)
qed
lemma Max_wrt_among_of_weak_ranking_Cons1:
assumes "x \<inter> A = {}"
shows "Max_wrt_among (of_weak_ranking (x#xs)) A = Max_wrt_among (of_weak_ranking xs) A"
proof -
from wf interpret R': total_preorder_on "\<Union>(set xs)" "of_weak_ranking xs"
by (intro total_preorder_of_weak_ranking) (simp_all add: is_weak_ranking_Cons)
from assms show ?thesis
by (auto simp: R.Max_wrt_among_total_preorder
R'.Max_wrt_among_total_preorder of_weak_ranking_Cons)
qed
lemma Max_wrt_among_of_weak_ranking_Cons2:
assumes "x \<inter> A \<noteq> {}"
shows "Max_wrt_among (of_weak_ranking (x#xs)) A = x \<inter> A"
proof -
from wf interpret R': total_preorder_on "\<Union>(set xs)" "of_weak_ranking xs"
by (intro total_preorder_of_weak_ranking) (simp_all add: is_weak_ranking_Cons)
from assms obtain a where "a \<in> x \<inter> A" by blast
with wf R'.not_outside(1)[of a] show ?thesis
by (auto simp: R.Max_wrt_among_total_preorder is_weak_ranking_Cons
R'.Max_wrt_among_total_preorder of_weak_ranking_Cons)
qed
lemma Max_wrt_among_of_weak_ranking_Cons:
"Max_wrt_among (of_weak_ranking (x#xs)) A =
(if x \<inter> A = {} then Max_wrt_among (of_weak_ranking xs) A else x \<inter> A)"
using Max_wrt_among_of_weak_ranking_Cons1 Max_wrt_among_of_weak_ranking_Cons2 by simp
lemma Max_wrt_of_weak_ranking_Cons:
"Max_wrt (of_weak_ranking (x#xs)) = x"
using wf by (simp add: is_weak_ranking_Cons Max_wrt_def Max_wrt_among_of_weak_ranking_Cons)
end
lemma Max_wrt_of_weak_ranking:
assumes "is_weak_ranking xs"
shows "Max_wrt (of_weak_ranking xs) = (if xs = [] then {} else hd xs)"
proof (cases xs)
case Nil
hence "of_weak_ranking xs = (\<lambda>_ _. False)" by (intro ext) simp_all
with Nil show ?thesis by (simp add: Max_wrt_def Max_wrt_among_def)
next
case (Cons x xs')
with assms show ?thesis by (simp add: Max_wrt_of_weak_ranking_Cons)
qed
locale finite_total_preorder_on = total_preorder_on +
assumes finite_carrier [intro]: "finite carrier"
begin
lemma finite_total_preorder_on_map:
assumes "finite (f -` carrier)"
shows "finite_total_preorder_on (f -` carrier) (map_relation f le)"
proof -
interpret R': total_preorder_on "f -` carrier" "map_relation f le"
using total_preorder_on_map[of f] .
from assms show ?thesis by unfold_locales simp
qed
function weak_ranking_aux :: "'a set \<Rightarrow> 'a set list" where
"weak_ranking_aux {} = []"
| "A \<noteq> {} \<Longrightarrow> A \<subseteq> carrier \<Longrightarrow> weak_ranking_aux A =
Max_wrt_among le A # weak_ranking_aux (A - Max_wrt_among le A)"
| "\<not>(A \<subseteq> carrier) \<Longrightarrow> weak_ranking_aux A = undefined"
by blast simp_all
termination proof (relation "Wellfounded.measure card")
fix A
let ?B = "Max_wrt_among le A"
assume A: "A \<noteq> {}" "A \<subseteq> carrier"
moreover from A(2) have "finite A" by (rule finite_subset) blast
moreover from A have "?B \<noteq> {}" "?B \<subseteq> A"
by (intro Max_wrt_among_nonempty Max_wrt_among_subset; force)+
ultimately have "card (A - ?B) < card A"
by (intro psubset_card_mono) auto
thus "(A - ?B, A) \<in> measure card" by simp
qed simp_all
lemma weak_ranking_aux_Union:
"A \<subseteq> carrier \<Longrightarrow> \<Union>(set (weak_ranking_aux A)) = A"
proof (induction A rule: weak_ranking_aux.induct [case_names empty nonempty])
case (nonempty A)
with Max_wrt_among_subset[of A] show ?case by auto
qed simp_all
lemma weak_ranking_aux_wf:
"A \<subseteq> carrier \<Longrightarrow> is_weak_ranking (weak_ranking_aux A)"
proof (induction A rule: weak_ranking_aux.induct [case_names empty nonempty])
case (nonempty A)
have "is_weak_ranking (Max_wrt_among le A # weak_ranking_aux (A - Max_wrt_among le A))"
unfolding is_weak_ranking_Cons
proof (intro conjI)
from nonempty.prems nonempty.hyps show "Max_wrt_among le A \<noteq> {}"
by (intro Max_wrt_among_nonempty) auto
next
from nonempty.prems show "is_weak_ranking (weak_ranking_aux (A - Max_wrt_among le A))"
by (intro nonempty.IH) blast
next
from nonempty.prems nonempty.hyps have "Max_wrt_among le A \<noteq> {}"
by (intro Max_wrt_among_nonempty) auto
moreover from nonempty.prems
have "\<Union>(set (weak_ranking_aux (A - Max_wrt_among le A))) = A - Max_wrt_among le A"
by (intro weak_ranking_aux_Union) auto
ultimately show "Max_wrt_among le A \<inter> \<Union>(set (weak_ranking_aux (A - Max_wrt_among le A))) = {}"
by blast+
qed
with nonempty.prems nonempty.hyps show ?case by simp
qed simp_all
lemma of_weak_ranking_weak_ranking_aux':
assumes "A \<subseteq> carrier" "x \<in> A" "y \<in> A"
shows "of_weak_ranking (weak_ranking_aux A) x y \<longleftrightarrow> restrict_relation A le x y"
using assms
proof (induction A rule: weak_ranking_aux.induct [case_names empty nonempty])
case (nonempty A)
define M where "M = Max_wrt_among le A"
from nonempty.prems nonempty.hyps have M: "M \<subseteq> A" unfolding M_def
by (intro Max_wrt_among_subset)
from nonempty.prems have in_MD: "le x y" if "x \<in> A" "y \<in> M" for x y
using that unfolding M_def Max_wrt_among_total_preorder
by (auto simp: Int_absorb1)
from nonempty.prems have in_MI: "x \<in> M" if "y \<in> M" "x \<in> A" "le y x" for x y
using that unfolding M_def Max_wrt_among_total_preorder
by (auto simp: Int_absorb1 intro: trans)
from nonempty.prems nonempty.hyps
have IH: "of_weak_ranking (weak_ranking_aux (A - M)) x y =
restrict_relation (A - M) le x y" if "x \<notin> M" "y \<notin> M"
using that unfolding M_def by (intro nonempty.IH) auto
from nonempty.prems
interpret R': total_preorder_on "A - M" "of_weak_ranking (weak_ranking_aux (A - M))"
by (intro total_preorder_of_weak_ranking weak_ranking_aux_wf weak_ranking_aux_Union) auto
from nonempty.prems nonempty.hyps M weak_ranking_aux_Union[of A] R'.not_outside[of x y]
show ?case
by (cases "x \<in> M"; cases "y \<in> M")
(auto simp: restrict_relation_def of_weak_ranking_Cons IH M_def [symmetric]
intro: in_MD dest: in_MI)
qed simp_all
lemma of_weak_ranking_weak_ranking_aux:
"of_weak_ranking (weak_ranking_aux carrier) = le"
proof (intro ext)
fix x y
have "is_weak_ranking (weak_ranking_aux carrier)" by (rule weak_ranking_aux_wf) simp
then interpret R: total_preorder_on carrier "of_weak_ranking (weak_ranking_aux carrier)"
by (intro total_preorder_of_weak_ranking weak_ranking_aux_wf weak_ranking_aux_Union)
(simp_all add: weak_ranking_aux_Union)
show "of_weak_ranking (weak_ranking_aux carrier) x y = le x y"
proof (cases "x \<in> carrier \<and> y \<in> carrier")
case True
thus ?thesis
using of_weak_ranking_weak_ranking_aux'[of carrier x y] by simp
next
case False
with R.not_outside have "of_weak_ranking (weak_ranking_aux carrier) x y = False"
by auto
also from not_outside False have "\<dots> = le x y" by auto
finally show ?thesis .
qed
qed
lemma weak_ranking_aux_unique':
assumes "\<Union>(set As) \<subseteq> carrier" "is_weak_ranking As"
"of_weak_ranking As = restrict_relation (\<Union>(set As)) le"
shows "As = weak_ranking_aux (\<Union>(set As))"
using assms
proof (induction As)
case (Cons A As)
have "restrict_relation (\<Union>(set As)) (of_weak_ranking (A # As)) = of_weak_ranking As"
by (intro restrict_relation_of_weak_ranking_Cons Cons.prems)
also have eq1: "of_weak_ranking (A # As) = restrict_relation (\<Union>(set (A # As))) le" by fact
finally have eq: "of_weak_ranking As = restrict_relation (\<Union>(set As)) le"
by (simp add: Int_absorb2)
with Cons.prems have eq2: "weak_ranking_aux (\<Union>(set As)) = As"
by (intro sym [OF Cons.IH]) (auto simp: is_weak_ranking_Cons)
from eq1 have
"Max_wrt_among le (\<Union>(set (A # As))) =
Max_wrt_among (of_weak_ranking (A#As)) (\<Union>(set (A#As)))"
by (intro Max_wrt_among_cong) simp_all
also from Cons.prems have "\<dots> = A"
by (subst Max_wrt_among_of_weak_ranking_Cons2)
(simp_all add: is_weak_ranking_Cons)
finally have Max: "Max_wrt_among le (\<Union>(set (A # As))) = A" .
moreover from Cons.prems have "A \<noteq> {}" by (simp add: is_weak_ranking_Cons)
ultimately have "weak_ranking_aux (\<Union>(set (A # As))) = A # weak_ranking_aux (A \<union> \<Union>(set As) - A)"
using Cons.prems by simp
also from Cons.prems have "A \<union> \<Union>(set As) - A = \<Union>(set As)"
by (auto simp: is_weak_ranking_Cons)
also from eq2 have "weak_ranking_aux \<dots> = As" .
finally show ?case ..
qed simp_all
lemma weak_ranking_aux_unique:
assumes "is_weak_ranking As" "of_weak_ranking As = le"
shows "As = weak_ranking_aux carrier"
proof -
interpret R: total_preorder_on "\<Union>(set As)" "of_weak_ranking As"
by (intro total_preorder_of_weak_ranking assms) simp_all
from assms have "x \<in> \<Union>(set As) \<longleftrightarrow> x \<in> carrier" for x
using R.not_outside not_outside R.refl[of x] refl[of x]
by blast
hence eq: "\<Union>(set As) = carrier" by blast
from assms eq have "As = weak_ranking_aux (\<Union>(set As))"
by (intro weak_ranking_aux_unique') simp_all
with eq show ?thesis by simp
qed
lemma weak_ranking_total_preorder:
"is_weak_ranking (weak_ranking le)" "of_weak_ranking (weak_ranking le) = le"
proof -
from weak_ranking_aux_wf[of carrier] of_weak_ranking_weak_ranking_aux
have "\<exists>x. is_weak_ranking x \<and> le = of_weak_ranking x" by auto
hence "is_weak_ranking (weak_ranking le) \<and> le = of_weak_ranking (weak_ranking le)"
unfolding weak_ranking_def by (rule someI_ex)
thus "is_weak_ranking (weak_ranking le)" "of_weak_ranking (weak_ranking le) = le"
by simp_all
qed
lemma weak_ranking_altdef:
"weak_ranking le = weak_ranking_aux carrier"
by (intro weak_ranking_aux_unique weak_ranking_total_preorder)
lemma weak_ranking_Union: "\<Union>(set (weak_ranking le)) = carrier"
by (simp add: weak_ranking_altdef weak_ranking_aux_Union)
lemma weak_ranking_unique:
assumes "is_weak_ranking As" "of_weak_ranking As = le"
shows "As = weak_ranking le"
using assms unfolding weak_ranking_altdef by (rule weak_ranking_aux_unique)
lemma weak_ranking_permute:
assumes "f permutes carrier"
shows "weak_ranking (map_relation (inv f) le) = map ((`) f) (weak_ranking le)"
proof -
from assms have "inv f -` carrier = carrier"
by (simp add: permutes_vimage permutes_inv)
then interpret R: finite_total_preorder_on "inv f -` carrier" "map_relation (inv f) le"
by (intro finite_total_preorder_on_map) (simp_all add: finite_carrier)
from assms have "is_weak_ranking (map ((`) f) (weak_ranking le))"
by (intro is_weak_ranking_map_inj)
(simp_all add: weak_ranking_total_preorder permutes_inj_on)
with assms show ?thesis
by (intro sym[OF R.weak_ranking_unique])
(simp_all add: of_weak_ranking_permute weak_ranking_Union weak_ranking_total_preorder)
qed
lemma weak_ranking_index_unique:
assumes "is_weak_ranking xs" "i < length xs" "j < length xs" "x \<in> xs ! i" "x \<in> xs ! j"
shows "i = j"
using assms unfolding is_weak_ranking_def by auto
lemma weak_ranking_index_unique':
assumes "is_weak_ranking xs" "i < length xs" "x \<in> xs ! i"
shows "i = find_index ((\<in>) x) xs"
using assms find_index_less_size_conv nth_mem
by (intro weak_ranking_index_unique[OF assms(1,2) _ assms(3)]
nth_find_index[of "(\<in>) x"]) blast+
lemma weak_ranking_eqclass1:
assumes "A \<in> set (weak_ranking le)" "x \<in> A" "y \<in> A"
shows "le x y"
proof -
from assms obtain i where "weak_ranking le ! i = A" "i < length (weak_ranking le)"
by (auto simp: set_conv_nth)
with assms have "of_weak_ranking (weak_ranking le) x y"
by (intro of_weak_ranking.intros[of i i]) auto
thus ?thesis by (simp add: weak_ranking_total_preorder)
qed
lemma weak_ranking_eqclass2:
assumes A: "A \<in> set (weak_ranking le)" "x \<in> A" and le: "le x y" "le y x"
shows "y \<in> A"
proof -
define xs where "xs = weak_ranking le"
have wf: "is_weak_ranking xs" by (simp add: xs_def weak_ranking_total_preorder)
let ?le' = "of_weak_ranking xs"
from le have le': "?le' x y" "?le' y x" by (simp_all add: weak_ranking_total_preorder xs_def)
from le'(1) obtain i j
where ij: "j \<le> i" "i < length xs" "j < length xs" "x \<in> xs ! i" "y \<in> xs ! j"
by (cases rule: of_weak_ranking.cases)
from le'(2) obtain i' j'
where i'j': "j' \<le> i'" "i' < length xs" "j' < length xs" "x \<in> xs ! j'" "y \<in> xs ! i'"
by (cases rule: of_weak_ranking.cases)
from ij i'j' have eq: "i = j'" "j = i'"
by (intro weak_ranking_index_unique[OF wf]; simp)+
moreover from A obtain k where k: "k < length xs" "A = xs ! k"
by (auto simp: xs_def set_conv_nth)
ultimately have "k = i" using ij i'j' A
by (intro weak_ranking_index_unique[OF wf, of _ _ x]) auto
with ij i'j' k eq show ?thesis by (auto simp: xs_def)
qed
lemma hd_weak_ranking:
assumes "x \<in> hd (weak_ranking le)" "y \<in> carrier"
shows "le y x"
proof -
from weak_ranking_Union assms obtain i
where "i < length (weak_ranking le)" "y \<in> weak_ranking le ! i"
by (auto simp: set_conv_nth)
moreover from assms(2) weak_ranking_Union have "weak_ranking le \<noteq> []" by auto
ultimately have "of_weak_ranking (weak_ranking le) y x" using assms(1)
by (intro of_weak_ranking.intros[of 0 i]) (auto simp: hd_conv_nth)
thus ?thesis by (simp add: weak_ranking_total_preorder)
qed
lemma last_weak_ranking:
assumes "x \<in> last (weak_ranking le)" "y \<in> carrier"
shows "le x y"
proof -
from weak_ranking_Union assms obtain i
where "i < length (weak_ranking le)" "y \<in> weak_ranking le ! i"
by (auto simp: set_conv_nth)
moreover from assms(2) weak_ranking_Union have "weak_ranking le \<noteq> []" by auto
ultimately have "of_weak_ranking (weak_ranking le) x y" using assms(1)
by (intro of_weak_ranking.intros[of i "length (weak_ranking le) - 1"])
(auto simp: last_conv_nth)
thus ?thesis by (simp add: weak_ranking_total_preorder)
qed
text \<open>
The index in weak ranking of a given alternative. An element with index 0 is
first-ranked; larger indices correspond to less-preferred alternatives.
\<close>
definition weak_ranking_index :: "'a \<Rightarrow> nat" where
"weak_ranking_index x = find_index (\<lambda>A. x \<in> A) (weak_ranking le)"
lemma nth_weak_ranking_index:
assumes "x \<in> carrier"
shows "weak_ranking_index x < length (weak_ranking le)"
"x \<in> weak_ranking le ! weak_ranking_index x"
proof -
from assms weak_ranking_Union show "weak_ranking_index x < length (weak_ranking le)"
unfolding weak_ranking_index_def by (auto simp add: find_index_less_size_conv)
thus "x \<in> weak_ranking le ! weak_ranking_index x" unfolding weak_ranking_index_def
by (rule nth_find_index)
qed
lemma ranking_index_eqI:
"i < length (weak_ranking le) \<Longrightarrow> x \<in> weak_ranking le ! i \<Longrightarrow> weak_ranking_index x = i"
using weak_ranking_index_unique'[of "weak_ranking le" i x]
by (simp add: weak_ranking_index_def weak_ranking_total_preorder)
lemma ranking_index_le_iff [simp]:
assumes "x \<in> carrier" "y \<in> carrier"
shows "weak_ranking_index x \<ge> weak_ranking_index y \<longleftrightarrow> le x y"
proof -
have "le x y \<longleftrightarrow> of_weak_ranking (weak_ranking le) x y"
by (simp add: weak_ranking_total_preorder)
also have "\<dots> \<longleftrightarrow> weak_ranking_index x \<ge> weak_ranking_index y"
proof
assume "weak_ranking_index x \<ge> weak_ranking_index y"
thus "of_weak_ranking (weak_ranking le) x y"
by (rule of_weak_ranking.intros) (simp_all add: nth_weak_ranking_index assms)
next
assume "of_weak_ranking (weak_ranking le) x y"
then obtain i j where
"i \<le> j" "i < length (weak_ranking le)" "j < length (weak_ranking le)"
"x \<in> weak_ranking le ! j" "y \<in> weak_ranking le ! i"
by (elim of_weak_ranking.cases) blast
with ranking_index_eqI[of i] ranking_index_eqI[of j]
show "weak_ranking_index x \<ge> weak_ranking_index y" by simp
qed
finally show ?thesis ..
qed
end
lemma weak_ranking_False [simp]: "weak_ranking (\<lambda>_ _. False) = []"
proof -
interpret finite_total_preorder_on "{}" "\<lambda>_ _. False"
by unfold_locales simp_all
have "[] = weak_ranking (\<lambda>_ _. False)" by (rule weak_ranking_unique) simp_all
thus ?thesis ..
qed
lemmas of_weak_ranking_weak_ranking =
finite_total_preorder_on.weak_ranking_total_preorder(2)
lemma finite_total_preorder_on_iff:
"finite_total_preorder_on A R \<longleftrightarrow> total_preorder_on A R \<and> finite A"
by (simp add: finite_total_preorder_on_def finite_total_preorder_on_axioms_def)
lemma finite_total_preorder_of_weak_ranking:
assumes "\<Union>(set xs) = A" "is_finite_weak_ranking xs"
shows "finite_total_preorder_on A (of_weak_ranking xs)"
proof -
from assms(2) have "is_weak_ranking xs" by (simp add: is_finite_weak_ranking_def)
from assms(1) and this interpret total_preorder_on A "of_weak_ranking xs"
by (rule total_preorder_of_weak_ranking)
from assms(2) show ?thesis
by unfold_locales (simp add: assms(1)[symmetric] is_finite_weak_ranking_def)
qed
lemma weak_ranking_of_weak_ranking:
assumes "is_finite_weak_ranking xs"
shows "weak_ranking (of_weak_ranking xs) = xs"
proof -
from assms interpret finite_total_preorder_on "\<Union>(set xs)" "of_weak_ranking xs"
by (intro finite_total_preorder_of_weak_ranking) simp_all
from assms show ?thesis
by (intro sym[OF weak_ranking_unique]) (simp_all add: is_finite_weak_ranking_def)
qed
lemma weak_ranking_eqD:
assumes "finite_total_preorder_on alts R1"
assumes "finite_total_preorder_on alts R2"
assumes "weak_ranking R1 = weak_ranking R2"
shows "R1 = R2"
proof -
from assms have "of_weak_ranking (weak_ranking R1) = of_weak_ranking (weak_ranking R2)" by simp
with assms(1,2) show ?thesis by (simp add: of_weak_ranking_weak_ranking)
qed
lemma weak_ranking_eq_iff:
assumes "finite_total_preorder_on alts R1"
assumes "finite_total_preorder_on alts R2"
shows "weak_ranking R1 = weak_ranking R2 \<longleftrightarrow> R1 = R2"
using assms weak_ranking_eqD by auto
definition preferred_alts :: "'alt relation \<Rightarrow> 'alt \<Rightarrow> 'alt set" where
"preferred_alts R x = {y. y \<succeq>[R] x}"
lemma (in preorder_on) preferred_alts_refl [simp]: "x \<in> carrier \<Longrightarrow> x \<in> preferred_alts le x"
by (simp add: preferred_alts_def refl)
lemma (in preorder_on) preferred_alts_altdef:
"preferred_alts le x = {y\<in>carrier. y \<succeq>[le] x}"
by (auto simp: preferred_alts_def intro: not_outside)
lemma (in preorder_on) preferred_alts_subset: "preferred_alts le x \<subseteq> carrier"
unfolding preferred_alts_def using not_outside by blast
subsection \<open>Rankings\<close>
(* TODO: Extend theory on rankings. Can probably mostly be based on
existing theory on weak rankings. *)
definition ranking :: "'a relation \<Rightarrow> 'a list" where
"ranking R = map the_elem (weak_ranking R)"
locale finite_linorder_on = linorder_on +
assumes finite_carrier [intro]: "finite carrier"
begin
sublocale finite_total_preorder_on carrier le
by unfold_locales (fact finite_carrier)
lemma singleton_weak_ranking:
assumes "A \<in> set (weak_ranking le)"
shows "is_singleton A"
proof (rule is_singletonI')
from assms show "A \<noteq> {}"
using weak_ranking_total_preorder(1) is_weak_ranking_iff by auto
next
fix x y assume "x \<in> A" "y \<in> A"
with assms
have "x \<preceq>[of_weak_ranking (weak_ranking le)] y" "y \<preceq>[of_weak_ranking (weak_ranking le)] x"
by (auto intro!: of_weak_ranking_indifference)
with weak_ranking_total_preorder(2)
show "x = y" by (intro antisymmetric) simp_all
qed
lemma weak_ranking_ranking: "weak_ranking le = map (\<lambda>x. {x}) (ranking le)"
unfolding ranking_def map_map o_def
proof (rule sym, rule map_idI)
fix A assume "A \<in> set (weak_ranking le)"
hence "is_singleton A" by (rule singleton_weak_ranking)
thus "{the_elem A} = A" by (auto elim: is_singletonE)
qed
end
end
\chapter{Mathematical tools of quantum mechanics}
\section{The linear vector space and Hilbert space}
\subsection{The linear vector space}
A linear vector space consists of two sets of elements and two algebraic rules:
\begin{itemize}
\item A set of vectors $\psi,\phi,\chi,\cdots$ and a set of scalars $a,b,c,\ldots$;
\item a rule of vector addition and a rule of scalar multiplication.
\end{itemize}
\textbf{(a) Addition rule}\\
The addition rule has the properties and the structure of an abelian group:
\begin{itemize}
\item If $\psi$ and $\phi$ are vectors of a space, their sum $\psi+\phi$ is also a vector of the same space.
\item Commutativity: $\psi+\phi=\phi+\psi$.
\item Associativity: $(\psi+\phi)+\chi=\psi+(\phi+\chi)$.
\item Existence of a zero or neutral vector: for each vector $\psi$ there must be a zero vector $0$ such that $0+\psi=\psi+0=\psi$.
\item Existence of a symmetric or inverse vector: each vector $\psi$ must have a symmetric vector $(-\psi)$ such that $(\psi)+(-\psi)=(-\psi)+(\psi)=0$.
\end{itemize}
\textbf{(b) Multiplication rule}\\
The multiplication of a vector by scalars (scalars can be real or complex numbers) has these properties:
\begin{itemize}
\item The product of a scalar with a vector gives another vector. In general, if $\psi$ and $\phi$ are two vectors of the space, any linear combination $a\psi+b\phi$ is also a vector of the space, $a$ and $b$ being scalars.
\item Distributivity with respect to addition\\
$a(\psi+\phi)=a\psi+a\phi$\\
$(a+b)\psi=a\psi+b\psi$
\item Associativity with respect to multiplication of scalars:\\
$a(b\psi)=(ab)\psi$
\item For each element $\psi$ there must exist a unit scalar $I$ and a zero scalar $0$ such that\\
$I\psi=\psi I =\psi$ and $0\psi=\psi 0=0$
\end{itemize}
\subsection{The Hilbert space}
A Hilbert space $\mathcal{H}$ consists of a set of vectors $\psi,\phi,\chi,\cdots$ and a set of scalars $a,b,c,\ldots$ which satisfy the following four properties.\\
\textbf{(a)} $\mathcal{H}$ \textbf{is a linear space.}\\
\textbf{(b)} $\mathcal{H}$ \textbf{has a defined scalar product that is strictly positive.}\\
\textbf{Scalar product}
\par The scalar product of an element $\psi$ with another element $\phi$ is in general a complex number, denoted by $(\psi,\phi)$:\\
$(\psi,\phi)$ = complex number
\begin{note}
Since the scalar product is a complex number, the quantity $(\psi,\phi)$ is generally not equal to $(\phi,\psi)$; rather,\\
$(\psi,\phi)=(\phi,\psi)^{*}$
\end{note}
\textbf{Properties of scalar product}
\begin{itemize}
\item The scalar product of $\psi$ with $\phi$ is equal to the complex conjugate of the scalar product of $\phi$ with $\psi$:\\
$(\psi,\phi)=(\phi,\psi)^{*}$
\item The scalar product of $\phi$ with $\psi$ is linear with respect to the second factor if $\psi=a \psi_{1}+b \psi_{2}$:
$$
\left(\phi, a \psi_{1}+b \psi_{2}\right)=a\left(\phi, \psi_{1}\right)+b\left(\phi, \psi_{2}\right),
$$
and antilinear with respect to the first factor if $\phi=a \phi_{1}+b \phi_{2}:$
$$
\left(a \phi_{1}+b \phi_{2}, \psi\right)=a^{*}\left(\phi_{1}, \psi\right)+b^{*}\left(\phi_{2}, \psi\right) .
$$
\item The scalar product of a vector $\psi$ with itself is a positive real number:
$$
(\psi, \psi)=\|\psi\|^{2} \geq 0
$$
where the equality holds only for $\psi=O$.
\end{itemize}
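These axioms can be checked numerically for finite-dimensional vectors. The sketch below is an illustrative assumption (random sample vectors, not from the text); it uses numpy's \texttt{vdot}, which conjugates its first argument and therefore computes exactly the scalar product $(\phi,\psi)$ defined above.

```python
import numpy as np

# Sample complex vectors standing in for psi, phi (assumed example data).
rng = np.random.default_rng(0)
phi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi1 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi2 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
a, b = 2 - 1j, 0.5 + 3j

# (psi, phi) = (phi, psi)*  (conjugate symmetry)
assert np.isclose(np.vdot(psi1, phi), np.conj(np.vdot(phi, psi1)))

# linearity in the second factor
assert np.isclose(np.vdot(phi, a * psi1 + b * psi2),
                  a * np.vdot(phi, psi1) + b * np.vdot(phi, psi2))

# antilinearity in the first factor
assert np.isclose(np.vdot(a * psi1 + b * psi2, phi),
                  np.conj(a) * np.vdot(psi1, phi) + np.conj(b) * np.vdot(psi2, phi))

# the norm (psi, psi) is real and non-negative
n = np.vdot(psi1, psi1)
assert n.real >= 0 and abs(n.imag) < 1e-12
```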
\textbf{(c)}$\mathcal{H}$ \textbf{is separable}\\
There exists a Cauchy sequence $\psi_{n} \in \mathcal{H}(n=1,2, \ldots)$ such that for every $\psi$ of $\mathcal{H}$ and $\varepsilon>0$, there exists at least one $\psi_{n}$ of the sequence for which
$$
\left\|\psi-\psi_{n}\right\|<\varepsilon .
$$
\textbf{(d)} $\mathcal{H}$ \textbf{is complete}\\
Every Cauchy sequence $\psi_{n} \in \mathcal{H}$ converges to an element of $\mathcal{H}$. That is, for any $\psi_{n}$, the relation
$$
\lim _{n, m \rightarrow \infty}\left\|\psi_{n}-\psi_{m}\right\|=0,
$$
defines a unique limit $\psi$ of $\mathcal{H}$ such that
$$
\lim _{n \rightarrow \infty}\left\|\psi-\psi_{n}\right\|=0 .
$$
\subsection{Linear vector spaces that are Hilbert spaces}
\begin{itemize}
\item The first example is the three-dimensional Euclidean vector space, which has a finite (discrete) set of base vectors.
\item The second example is the space of complex functions $\psi(x)$, which has an infinite (continuous) basis.
\end{itemize}
\section{Dimension and Basis of a vector space}
A set of $N$ nonzero vectors $\phi_{1}, \phi_{2}, \ldots, \phi_{N}$ is said to be linearly independent if and only if the solution of the equation
$$
\sum_{i=1}^{N} a_{i} \phi_{i}=0
$$
is $a_{1}=a_{2}=\cdots=a_{N}=0$. But if there exists a set of scalars, not all zero, such that one of the vectors (say $\phi_{n}$) can be expressed as a linear combination of the others,
$$
\phi_{n}=\sum_{i=1}^{n-1} a_{i} \phi_{i}+\sum_{i=n+1}^{N} a_{i} \phi_{i},
$$
$\text { the set }\left\{\phi_{i}\right\} \text { is said to be linearly dependent. }$\\
\textbf{Dimension:} The dimension of a vector space is given by the maximum number of linearly independent vectors the space can have. For instance, if the maximum number of linearly independent vectors a space has is $N$ (i.e., $\phi_{1}, \phi_{2}, \ldots, \phi_{N}$), this space is said to be $N$-dimensional. In this $N$-dimensional vector space, any vector $\psi$ can be expanded as a linear combination:
$$
\psi=\sum_{i=1}^{N} a_{i} \phi_{i} .
$$
\textbf{Basis:} The basis of a vector space consists of a set of the maximum possible number of linearly independent vectors belonging to that space. This set of vectors, $\phi_{1}, \phi_{2}, \ldots, \phi_{N}$, to be denoted in short by $\left\{\phi_{i}\right\}$, is called the basis of the vector space, while the vectors $\phi_{1}, \phi_{2}, \ldots, \phi_{N}$ are called the base vectors.
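In a finite-dimensional space these definitions can be tested directly: a set of vectors is linearly independent iff the matrix with those vectors as columns has rank equal to the number of vectors, and expansion in a basis amounts to solving a linear system. The vectors below are an illustrative assumption, not taken from the text.

```python
import numpy as np

# Three candidate basis vectors of R^3 (assumed example).
phi1 = np.array([1.0, 0.0, 0.0])
phi2 = np.array([1.0, 1.0, 0.0])
phi3 = np.array([1.0, 1.0, 1.0])
M = np.column_stack([phi1, phi2, phi3])

# rank == number of vectors  ->  linearly independent, so they form a basis
assert np.linalg.matrix_rank(M) == 3

# Appending phi1 + phi2 leaves the rank at 3: four vectors, rank 3 -> dependent
dep = np.column_stack([phi1, phi2, phi3, phi1 + phi2])
assert np.linalg.matrix_rank(dep) == 3

# Any vector psi expands uniquely in the basis: solve M a = psi for the a_i
psi = np.array([2.0, -1.0, 3.0])
a = np.linalg.solve(M, psi)
assert np.allclose(M @ a, psi)
```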
\section{Dirac notation}
The physical state of a system is represented in quantum mechanics by elements of a Hilbert space; these elements are called state vectors. We can represent the state vectors in different bases by means of function expansions.\\
\textbf{Kets: elements of a vector space}\\
Dirac denoted the state vector $\psi $ by the symbol $|\psi\rangle$, which he called a ket vector, or simply a ket. Kets belong to the Hilbert (vector) space $\mathcal{H}$, or, in short, to the ket-space.\\
\textbf{Bras: elements of a dual space}\\
We know from linear algebra that a dual space can be associated with every vector space. Dirac denoted the elements of a dual space by the symbol $\langle\,|$, which he called a bra vector, or simply a bra; for instance, the element $\langle\psi|$ represents a bra. Note: For every ket $|\psi\rangle$ there exists a unique bra $\langle\psi|$ and vice versa. Again, while kets belong to the Hilbert space $\mathcal{H}$, the corresponding bras belong to its dual (Hilbert) space $\mathcal{H}_{d}$.\\
\textbf{Bra-ket: Dirac notation for the scalar product}\\
Dirac denoted the scalar (inner) product by the symbol $\langle\,\mid\,\rangle$, which he called a bra-ket. For instance, the scalar product $(\phi, \psi)$ is denoted by the bra-ket $\langle\phi \mid \psi\rangle$ :
$$
(\phi, \psi) \quad \longrightarrow \quad\langle\phi \mid \psi\rangle \text {. }
$$
\begin{note}
\begin{itemize}
\item When a ket (or bra) is multiplied by a complex number, we also get a ket (or bra).
\item In the coordinate representation,the scalar product $\langle\phi \mid \psi\rangle$ is given by\\
$$\langle\phi \mid \psi\rangle=\int \phi^*(r,t)\psi(r,t)d^3r$$
\end{itemize}
\end{note}
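The coordinate-space integral above can be evaluated on a grid. As a one-dimensional sketch (the two sample wave functions, harmonic-oscillator ground and first excited states, are an illustrative assumption, not from the text):

```python
import numpy as np

# Grid approximation of <phi|psi> = \int phi*(x) psi(x) dx in one dimension.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
phi = np.pi ** -0.25 * np.exp(-x ** 2 / 2)                     # ground state
psi = np.pi ** -0.25 * np.sqrt(2.0) * x * np.exp(-x ** 2 / 2)  # first excited state

norm_phi = np.sum(np.conj(phi) * phi) * dx   # <phi|phi>
overlap = np.sum(np.conj(phi) * psi) * dx    # <phi|psi>

assert np.isclose(norm_phi, 1.0, atol=1e-6)  # phi is normalized
assert np.isclose(overlap, 0.0, atol=1e-6)   # even/odd functions are orthogonal
```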
\newpage
\textbf{ Properties of kets, bras, and bra-kets} \\
\begin{itemize}
\item \textbf{ Every ket has a corresponding bra}\\
To every ket $|\psi\rangle$, there corresponds a unique bra $\langle\psi|$ and vice versa:
$|\psi\rangle \quad \longleftrightarrow \quad\langle\psi|$.
There is a one-to-one correspondence between bras and kets:
$$
a|\psi\rangle+b|\phi\rangle \longleftrightarrow a^{*}\langle\psi|+b^{*}\langle\phi| \text {. }
$$
where $a$ and $b$ are complex numbers. The following is a common notation:
$$
|a \psi\rangle=a|\psi\rangle, \quad\langle a \psi|=a^{*}\langle\psi| .
$$
\item \textbf{Properties of the scalar product}\\
(1)$\langle\phi \mid \psi\rangle^{*}=\langle\psi \mid \phi\rangle $ .\\
(2)$\left\langle\psi \mid a_{1} \psi_{1}+a_{2} \psi_{2}\right\rangle= a_{1}\left\langle\psi \mid \psi_{1}\right\rangle+a_{2}\left\langle\psi \mid \psi_{2}\right\rangle $\\
(3)$\left\langle a_{1} \phi_{1}+a_{2} \phi_{2} \mid \psi\right\rangle= a_{1}^{*}\left\langle\phi_{1} \mid \psi\right\rangle+a_{2}^{*}\left\langle\phi_{2} \mid \psi\right\rangle $\\
(4)$\left\langle a_{1} \phi_{1}+a_{2} \phi_{2} \mid b_{1} \psi_{1}+b_{2} \psi_{2}\right\rangle= a_{1}^{*} b_{1}\left\langle\phi_{1} \mid \psi_{1}\right\rangle+a_{1}^{*} b_{2}\left\langle\phi_{1} \mid \psi_{2}\right\rangle
+a_{2}^{*} b_{1}\left\langle\phi_{2} \mid \psi_{1}\right\rangle+a_{2}^{*} b_{2}\left\langle\phi_{2} \mid \psi_{2}\right\rangle$ .
\item \textbf{ The norm is real and positive}\\
For any state vector $|\psi\rangle$ of the Hilbert space $\mathcal{H}$, the norm $\langle\psi \mid \psi\rangle$ is real and positive; $\langle\psi \mid \psi\rangle$ is equal to zero only for the case where $|\psi\rangle=O$, where $O$ is the zero vector. If the state $|\psi\rangle$ is normalized then $\langle\psi \mid \psi\rangle=1$.
\item \textbf{Schwarz inequality}
For any two states $|\psi\rangle$ and $|\phi\rangle$ of the Hilbert space, we can show that
$$
|\langle\psi \mid \phi\rangle|^{2} \leq\langle\psi \mid \psi\rangle\langle\phi \mid \phi\rangle .
$$
If $|\psi\rangle$ and $|\phi\rangle$ are linearly dependent (i.e., proportional: $|\psi\rangle=\alpha|\phi\rangle$, where $\alpha$ is a scalar), this relation becomes an equality. The Schwarz inequality is analogous to the following relation of the real Euclidean space
$$
|\vec{A} \cdot \vec{B}|^{2} \leq|\vec{A}|^{2}|\vec{B}|^{2}
$$
\item \textbf{Triangle inequality}\\
$$
\sqrt{\langle\psi+\phi \mid \psi+\phi\rangle} \leq \sqrt{\langle\psi \mid \psi\rangle}+\sqrt{\langle\phi \mid \phi\rangle} .
$$
If $|\psi\rangle$ and $|\phi\rangle$ are linearly dependent, $|\psi\rangle=\alpha|\phi\rangle$, and if the proportionality scalar $\alpha$ is real and positive, the triangle inequality becomes an equality. The counterpart of this inequality in Euclidean space is given by $|\vec{A}+\vec{B}| \leq|\vec{A}|+|\vec{B}|$.
\item \textbf{Orthogonal states}\\
Two kets, $|\psi\rangle$ and $|\phi\rangle$, are said to be orthogonal if they have a vanishing scalar product:
$$
\langle\psi \mid \phi\rangle=0 .
$$
\item \textbf{Orthonormal states}\\
Two kets, $|\psi\rangle $ and $|\phi \rangle $, are said to be orthonormal if they are orthogonal and each one of them has a unit norm:\\
$$ \langle \psi \mid \phi \rangle =0$$
$$ \langle \psi \mid \psi \rangle =1$$
$$ \langle \phi \mid \phi \rangle =1$$
\end{itemize}
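The Schwarz and triangle inequalities from the list above can be spot-checked numerically; the random sample vectors below are an illustrative assumption, not from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
psi = rng.standard_normal(5) + 1j * rng.standard_normal(5)
phi = rng.standard_normal(5) + 1j * rng.standard_normal(5)

def norm2(v):
    # <v|v> is real and non-negative, so .real discards only round-off
    return np.vdot(v, v).real

# Schwarz inequality: |<psi|phi>|^2 <= <psi|psi><phi|phi>
assert abs(np.vdot(psi, phi)) ** 2 <= norm2(psi) * norm2(phi)

# Triangle inequality: ||psi + phi|| <= ||psi|| + ||phi||
assert np.sqrt(norm2(psi + phi)) <= np.sqrt(norm2(psi)) + np.sqrt(norm2(phi))

# Schwarz becomes an equality for proportional vectors psi = alpha*phi
alpha = 2 - 3j
assert np.isclose(abs(np.vdot(alpha * phi, phi)) ** 2,
                  norm2(alpha * phi) * norm2(phi))
```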
\begin{exercise}
Consider the following two kets\\
$|\psi\rangle=\left(\begin{array}{c}
-3 i \\
2+i \\
4
\end{array}\right), \quad|\phi\rangle=\left(\begin{array}{c}
2 \\
-i \\
2-3 i
\end{array}\right)$\\
(a) Find the bra $\langle \phi |$ \\
(b) Evaluate the scalar product $\langle \phi \mid \psi \rangle $
\end{exercise}
\begin{answer}
(a) the bra $\langle\phi|$ can be obtained by simply taking the complex conjugate of the transpose of the ket $|\phi\rangle$ :
$$
\langle\phi|=\left(\begin{array}{lll}
2 & i & 2+3 i
\end{array}\right) .
$$
(b) The scalar product $\langle\phi \mid \psi\rangle$ can be calculated as follows:
$$
\begin{aligned}
\langle\phi \mid \psi\rangle &=\left(\begin{array}{lll}
2 & i & 2+3 i
\end{array}\right)\left(\begin{array}{c}
-3 i \\
2+i \\
4
\end{array}\right) \\
&=2(-3 i)+i(2+i)+4(2+3 i) \\
&=7+8 i .
\end{aligned}
$$
\end{answer}
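The result can be verified numerically: numpy's \texttt{vdot} conjugates its first argument, so \texttt{vdot(phi, psi)} is exactly the bra-ket $\langle\phi\mid\psi\rangle$.

```python
import numpy as np

psi = np.array([-3j, 2 + 1j, 4])
phi = np.array([2, -1j, 2 - 3j])

# (a) the bra <phi| is the conjugate transpose of the ket |phi>
bra_phi = np.conj(phi)
assert np.allclose(bra_phi, [2, 1j, 2 + 3j])

# (b) the scalar product <phi|psi> from the worked answer
assert np.isclose(np.vdot(phi, psi), 7 + 8j)
```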
\begin{exercise}
Consider the states $|\psi\rangle=3 i\left|\phi_{1}\right\rangle-7 i\left|\phi_{2}\right\rangle$ and $|\chi\rangle=-\left|\phi_{1}\right\rangle+2 i\left|\phi_{2}\right\rangle$, where $\left|\phi_{1}\right\rangle$ and $\left|\phi_{2}\right\rangle$ are orthonormal.\\
(a) Calculate $|\psi+\chi\rangle$ and $\langle\psi+\chi|$.\\
(b) Calculate the scalar products $\langle\psi \mid \chi\rangle$ and $\langle\chi \mid \psi\rangle$. Are they equal?
\end{exercise}
\begin{answer}
(a) The calculation of $|\psi+\chi\rangle$ is straightforward:
$$
\begin{aligned}
|\psi+\chi\rangle &=|\psi\rangle+|\chi\rangle=\left(3 i\left|\phi_{1}\right\rangle-7 i\left|\phi_{2}\right\rangle\right)+\left(-\left|\phi_{1}\right\rangle+2 i\left|\phi_{2}\right\rangle\right) \\
&=(-1+3 i)\left|\phi_{1}\right\rangle-5 i\left|\phi_{2}\right\rangle .
\end{aligned}
$$
This leads at once to the expression of $\langle\psi+\chi|$ :
$$
\langle\psi+\chi|=(-1+3 i)^{*}\left\langle\phi_{1}\right|+(-5 i)^{*}\left\langle\phi_{2}\right|=(-1-3 i)\left\langle\phi_{1}\right|+5 i\left\langle\phi_{2}\right| \text {. }
$$
(b) Since $\left\langle\phi_{1} \mid \phi_{1}\right\rangle=\left\langle\phi_{2} \mid \phi_{2}\right\rangle=1,\left\langle\phi_{1} \mid \phi_{2}\right\rangle=\left\langle\phi_{2} \mid \phi_{1}\right\rangle=0$, and since the bras corresponding to the kets $|\psi\rangle=3 i\left|\phi_{1}\right\rangle-7 i\left|\phi_{2}\right\rangle$ and $|\chi\rangle=-\left|\phi_{1}\right\rangle+2 i\left|\phi_{2}\right\rangle$ are given by $\langle\psi|=-3 i\left\langle\phi_{1}\right|+7 i\left\langle\phi_{2}\right|$ and $\langle\chi|=-\left\langle\phi_{1}\right|-2 i\left\langle\phi_{2}\right|$, the scalar products are\\
$$\begin{aligned}
\langle\psi \mid \chi\rangle &=\left(-3 i\left\langle\phi_{1}\right|+7 i\left\langle\phi_{2}\right|\right)\left(-\left|\phi_{1}\right\rangle+2 i\left|\phi_{2}\right\rangle\right) \\
&=(-3 i)(-1)\left\langle\phi_{1} \mid \phi_{1}\right\rangle+(7 i)(2 i)\left\langle\phi_{2} \mid \phi_{2}\right\rangle \\
&=-14+3 i , \\
\langle\chi \mid \psi\rangle &=\left(-\left\langle\phi_{1}\right|-2 i\left\langle\phi_{2}\right|\right)\left(3 i\left|\phi_{1}\right\rangle-7 i\left|\phi_{2}\right\rangle\right) \\
&=(-1)(3 i)\left\langle\phi_{1} \mid \phi_{1}\right\rangle+(-2 i)(-7 i)\left\langle\phi_{2} \mid \phi_{2}\right\rangle \\
&=-14-3 i .
\end{aligned}$$
$\text { We see that }\langle\psi \mid \chi\rangle \text { is equal to the complex conjugate of }\langle\chi \mid \psi\rangle \text {. }$
\end{answer}
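Since $|\phi_1\rangle$ and $|\phi_2\rangle$ are orthonormal, we may represent them by the standard basis of $\mathbb{C}^2$ and check the answer with component vectors:

```python
import numpy as np

# Components in the orthonormal basis {|phi1>, |phi2>}
psi = np.array([3j, -7j])   # |psi> = 3i|phi1> - 7i|phi2>
chi = np.array([-1, 2j])    # |chi> = -|phi1> + 2i|phi2>

# (a) |psi + chi> = (-1+3i)|phi1> - 5i|phi2>
assert np.allclose(psi + chi, [-1 + 3j, -5j])

# (b) the two scalar products are complex conjugates of each other
assert np.isclose(np.vdot(psi, chi), -14 + 3j)   # <psi|chi>
assert np.isclose(np.vdot(chi, psi), -14 - 3j)   # <chi|psi> = <psi|chi>*
```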
\section{Operators}
\subsection{General definitions}
Definition of an operator: An operator $ \hat{A}$ is a mathematical rule that when applied to a ket $|\psi\rangle$ transforms it into another ket $\left|\psi^{\prime}\right\rangle$ of the same space and when it acts on a bra $\langle\phi|$ transforms it into another bra $\left\langle\phi^{\prime}\right|$ :
$$
\hat{A}|\psi\rangle=\left|\psi^{\prime}\right\rangle, \quad\langle\phi| \hat{A}=\left\langle\phi^{\prime}\right| .
$$
A similar definition applies to wave functions:
$$
\hat{A} \psi(\vec{r})=\psi^{\prime}(\vec{r}), \quad \phi(\vec{r}) \hat{A}=\phi^{\prime}(\vec{r})
$$
\textbf{Examples of operators}
\begin{itemize}
\item $\text { Unity operator: it leaves any ket unchanged, } \hat{I}|\psi\rangle=|\psi\rangle \text {. }$
\item $\text { The gradient operator: } \vec{\nabla} \psi(\vec{r})=(\partial \psi(\vec{r}) / \partial x) \vec{i}+(\partial \psi(\vec{r}) / \partial y) \vec{j}+(\partial \psi(\vec{r}) / \partial z) \vec{k} \text {. }$
\item The linear momentum operator:$\vec{P}\psi(r)=-i\hbar \nabla \psi(r)$
\item The parity operator :$\mathcal{P}\psi(r)=\psi(-r)$
\end{itemize}
\textbf{Product of operators}\\
The product of two operators is generally not commutative:\\
$$\hat{A}\hat{B}\neq \hat{B}\hat{A}$$
The product is, however, associative:\\
$$\hat{A}\hat{B}\hat{C}=\hat{A}(\hat{B}\hat{C})=(\hat{A}\hat{B})\hat{C}$$
\textbf{Linear operators}\\
An operator $\hat{A}$ is said to be linear if it obeys the distributive law and, like all operators, commutes with constants. That is, an operator $\hat{A}$ is linear if, for any vectors $|\psi_{1}\rangle $ and $|\psi_{2}\rangle $ and any complex numbers $a_1$ and $a_2$, we have\\
$$\hat{A}(a_1\mid \psi_{1}\rangle +a_2\mid \psi_{2}\rangle)=a_1 \hat{A} \mid \psi_{1}\rangle +a_2\hat{A}\mid \psi_{2}\rangle $$
and
$$\left(\left\langle\psi_{1}\right| a_{1}+\left\langle\psi_{2}\right| a_{2}\right) \hat{A}=a_{1}\left\langle\psi_{1}\right| \hat{A}+a_{2}\left\langle\psi_{2}\right| \hat{A}$$
\textbf{Expectation value of an operator}\\
The expectation or mean value $\langle\hat{A}\rangle$ of an operator $\hat{A}$ with respect to a state $|\psi\rangle$ is defined by
$$
\langle\hat{A}\rangle=\frac{\langle\psi|\hat{A}| \psi\rangle}{\langle\psi \mid \psi\rangle} .
$$
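For a finite-dimensional state the expectation value is a matrix element divided by a norm. The Hermitian matrix and state below are an illustrative assumption, not from the text; the check confirms that a Hermitian operator has a real expectation value.

```python
import numpy as np

# A sample 2x2 Hermitian operator (assumed example)
A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(A, A.conj().T)      # A = A^dagger

psi = np.array([1 + 1j, 2.0])          # deliberately not normalized

# <A> = <psi|A|psi> / <psi|psi>
expval = np.vdot(psi, A @ psi) / np.vdot(psi, psi)

assert abs(expval.imag) < 1e-12        # Hermitian -> real expectation value
assert np.isclose(expval.real, 16 / 6)
```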
\subsection{Hermitian operator}
\textbf{Hermitian adjoint}\\
The Hermitian adjoint or conjugate $\alpha^{\dagger}$ of a complex number $\alpha$ is the complex conjugate of this number: $\alpha^{\dagger}=\alpha^{*}$. The Hermitian adjoint, or simply the adjoint, $\hat{A}^{\dagger}$, of an operator $\hat{A}$ is defined by this relation:
$$
\left\langle\psi\left|\hat{A}^{\dagger}\right| \phi\right\rangle=\langle\phi|\hat{A}| \psi\rangle^{*} .
$$
\textbf{Properties}\\
To obtain the Hermitian adjoint of any expression, we must cyclically reverse the order of the factors and make three replacements:
\begin{itemize}
\item $\text { Replace constants by their complex conjugates: } \alpha^{\dagger}=\alpha^{*} \text {. }$
\item $\text { Replace kets (bras) by the corresponding bras (kets): }(|\psi\rangle)^{\dagger}=\langle\psi| \text { and }(\langle\psi|)^{\dagger}=|\psi\rangle \text {. }$
\item $\text { Replace operators by their adjoints. }$
\end{itemize}
Following these rules, we can write
$$
\begin{aligned}
\left(\hat{A}^{\dagger}\right)^{\dagger} &=\hat{A}, \\
(a \hat{A})^{\dagger} &=a^{*} \hat{A}^{\dagger}, \\
\left(\hat{A}^{n}\right)^{\dagger} &=\left(\hat{A}^{\dagger}\right)^{n}, \\
(\hat{A}+\hat{B}+\hat{C}+\hat{D})^{\dagger} &=\hat{A}^{\dagger}+\hat{B}^{\dagger}+\hat{C}^{\dagger}+\hat{D}^{\dagger}, \\
(\hat{A} \hat{B} \hat{C} \hat{D})^{\dagger} &=\hat{D}^{\dagger} \hat{C}^{\dagger} \hat{B}^{\dagger} \hat{A}^{\dagger}, \\
(\hat{A} \hat{B} \hat{C}|\psi\rangle)^{\dagger} &=\langle\psi| \hat{C}^{\dagger} \hat{B}^{\dagger} \hat{A}^{\dagger} .
\end{aligned}
$$
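For matrices the adjoint is the conjugate transpose, so these rules can be verified on random samples (an illustrative assumption, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
dag = lambda M: M.conj().T             # matrix adjoint

assert np.allclose(dag(dag(A)), A)                        # (A+)+ = A
assert np.allclose(dag((2 - 1j) * A), (2 + 1j) * dag(A))  # (aA)+ = a* A+
assert np.allclose(dag(A + B), dag(A) + dag(B))           # adjoint distributes over +
assert np.allclose(dag(A @ B), dag(B) @ dag(A))           # product order reverses
```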
\textbf{Hermitian and skew Hermitian operators}\\
An operator $\hat{A}$ is said to be Hermitian if it is equal to its adjoint $\hat{A}^{\dagger}$ :
$$
\hat{A}=\hat{A}^{\dagger} \quad \text { or } \quad\langle\psi|\hat{A}| \phi\rangle=\langle\phi|\hat{A}| \psi\rangle^{*} .
$$
On the other hand, an operator $\hat{B}$ is said to be skew-Hermitian or anti-Hermitian if\\
$$ \hat{B}^{\dagger}=-\hat{B} $$
or $$ \langle \psi |\hat{B}| \phi \rangle =-\langle \phi |\hat{B}| \psi\rangle ^{*}$$
\textbf{Examples}\\
Hermitian: $\hat{A}+\hat{A}^{\dagger}$ and $i(\hat{A}-\hat{A}^{\dagger})$, etc.\\
Anti-Hermitian: $i(\hat{A}+\hat{A}^{\dagger})$
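These combinations can be checked numerically for any matrix $A$; the random sample below is an illustrative assumption, not from the text.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
dag = lambda M: M.conj().T

H1 = A + dag(A)          # A + A^dagger
H2 = 1j * (A - dag(A))   # i(A - A^dagger)
S = 1j * (A + dag(A))    # i(A + A^dagger)

assert np.allclose(dag(H1), H1)    # Hermitian
assert np.allclose(dag(H2), H2)    # Hermitian
assert np.allclose(dag(S), -S)     # anti-Hermitian
```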
\begin{exercise}
$\text { Prove that the operators } i(d / d x) \text { and } d^{2} / d x^{2} \text { are Hermitian. }$
\end{exercise}
\begin{answer}
$$
\int_{-\infty}^{\infty} \psi_{m}^{*}\left(i \frac{d}{d x}\right) \psi_{n} d x=i\left[\psi_{m}^{*} \psi_{n}\right]_{-\infty}^{\infty}-i \int_{-\infty}^{\infty} \psi_{n} \frac{d}{d x} \psi_{m}^{*} d x=\int_{-\infty}^{\infty}\left(i \frac{d}{d x} \psi_{m}\right)^{*} \psi_{n} d x
$$
Therefore, $i d / d x$ is Hermitian.\\
$$\int_{-\infty}^{\infty} \psi_{m}^{*} \frac{d^{2} \psi_{n}}{d x^{2}} d x=\left[\psi_{m}^{*} \frac{d \psi_{n}}{d x}\right]_{-\infty}^{\infty}-\int_{-\infty}^{\infty} \frac{d \psi_{n}}{d x} \frac{d \psi_{m}^{*}}{d x} d x$$
$$
=-\left[\frac{d \psi_{m}^{*}}{d x} \psi_{n}\right]_{-\infty}^{\infty}+\int_{-\infty}^{\infty} \psi_{n} \frac{d^{2} \psi_{m}^{*}}{d x^{2}} d x=\int_{-\infty}^{\infty} \frac{d^{2} \psi_{m}^{*}}{d x^{2}} \psi_{n} d x
$$
Thus, $d^{2} / d x^{2}$ is Hermitian.
\end{answer}
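These integration-by-parts arguments can also be checked numerically. Below is an illustrative sketch (not part of the standard derivation; it assumes NumPy, a uniform grid, and arbitrarily chosen Gaussian test functions) comparing $\int \psi_m^*(\hat{A}\psi_n)\,dx$ with $\int (\hat{A}\psi_m)^*\psi_n\,dx$ for $\hat{A}=i\,d/dx$ and $\hat{A}=d^2/dx^2$:

```python
import numpy as np

# Uniform grid and two square-integrable test functions (arbitrary choices)
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
psi_m = (1 + x) * np.exp(-x**2)
psi_n = x * np.exp(-x**2 / 2)

def d(f):
    return np.gradient(f, dx)            # central-difference derivative

def inner(f, g):
    return (np.conj(f) * g).sum() * dx   # <f|g> approximated on the grid

# <psi_m | i d/dx psi_n>  vs  <i d/dx psi_m | psi_n>
lhs = inner(psi_m, 1j * d(psi_n))
rhs = inner(1j * d(psi_m), psi_n)
print(abs(lhs - rhs) < 1e-6)    # True: i d/dx acts Hermitian on this grid

# <psi_m | d^2/dx^2 psi_n>  vs  <d^2/dx^2 psi_m | psi_n>
lhs2 = inner(psi_m, d(d(psi_n)))
rhs2 = inner(d(d(psi_m)), psi_n)
print(abs(lhs2 - rhs2) < 1e-6)  # True: d^2/dx^2 acts Hermitian too
```

The boundary terms of the analytic proof correspond here to the grid endpoints, where the Gaussians are numerically negligible.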
\begin{exercise}
$\text { Prove that operator } P_{x}=-i \hbar \frac{d}{d x} \text { is Hermitian but } D_{x}=\frac{d}{d x} \text { is not Hermitian. }$
\end{exercise}
\begin{answer}
The given operator $-i \hbar \frac{d}{d x}$ is the same as $P_{x}$. Let $\psi_{1}(x)$ and $\psi_{2}(x)$ be two arbitrary functions.
$$
\begin{aligned}
\left\langle\psi_{1} \mid P_{x} \psi_{2}\right\rangle &=\int_{-\infty}^{\infty} \psi_{1}^{*}(x)\left[-i \hbar \frac{d}{d x} \psi_{2}(x)\right] d x \\
&=-i \hbar \int_{-\infty}^{\infty} \psi_{1}^{*}(x) \frac{d}{d x} \psi_{2}(x) d x
\end{aligned}
$$
Integrating by parts, this is equal to
$$
-i \hbar\left[\left.\psi_{1}^{*}(x) \psi_{2}(x)\right|_{-\infty} ^{\infty}-\int_{-\infty}^{\infty} \frac{d \psi_{1}^{*}(x)}{d x} \psi_{2}(x) d x\right]
$$
For the wave function to be square integrable, it must go to zero as $x$ goes to $-\infty$ or $+\infty$.
Thus, the first term in the square bracket is zero. So,
$$
\left\langle\psi_{1}\left|P_{x}\right| \psi_{2}\right\rangle=i \hbar \int_{-\infty}^{\infty} \frac{d \psi_{1}^{*}(x)}{d x} \psi_{2}(x) d x
$$
$$
\text { Also, } \begin{aligned}
\left\langle P_{x} \psi_{1} \mid \psi_{2}\right\rangle &=\int_{-\infty}^{\infty}\left[P_{x} \psi_{1}(x)\right]^{*} \psi_{2}(x) d x \\
&=\int_{-\infty}^{\infty}\left[-i \hbar \frac{d \psi_{1}(x)}{d x}\right]^{*} \psi_{2}(x) d x=i \hbar \int_{-\infty}^{\infty} \frac{d \psi_{1}^{*}(x)}{d x} \psi_{2}(x) d x
\end{aligned}
$$
Comparing the two results, $\left\langle\psi_{1} \mid P_{x} \psi_{2}\right\rangle=\left\langle P_{x} \psi_{1} \mid \psi_{2}\right\rangle$\\
Hence $P_{x}=-i \hbar \frac{d}{d x}$ is Hermitian. Similar calculation with $A=\frac{d}{d x}$ will give,
$$
\left\langle\psi_{1} \mid A \psi_{2}\right\rangle=-\int_{-\infty}^{\infty} \frac{d \psi_{1}^{*}(x)}{d x} \psi_{2}(x) d x
$$
And, $\quad\left\langle A \psi_{1} \mid \psi_{2}\right\rangle=\int_{-\infty}^{\infty} \frac{d \psi_{1}^{*}(x)}{d x} \psi_{2}(x) d x$\\
Hence, $\left\langle\psi_{1} \mid A \psi_{2}\right\rangle \neq\left\langle A \psi_{1} \mid \psi_{2}\right\rangle$ and so, $A=\frac{d}{d x}$ is not Hermitian.
\end{answer}
\begin{exercise}
$\text { Show that the operator } O=(1+i) A B+(1-i) B A \text { is Hermitian if } A \text { and } B \text { are Hermitian. }$
\end{exercise}
\begin{answer}
$$[(1+i) A B+(1-i) B A]^{\dagger}=[(1+i) A B]^{\dagger}+[(1-i) B A]^{\dagger}$$
$$
\begin{aligned}
&=(1+i)^{*}(A B)^{\dagger}+(1-i)^{*}(B A)^{\dagger} \\
&=(1-i) B^{\dagger} A^{\dagger}+(1+i) A^{\dagger} B^{\dagger} \\
&=(1-i) B A+(1+i) A B \quad \text { since } A \text { and } B \text { are Hermitian } \\
&=(1+i) A B+(1-i) B A
\end{aligned}
$$
Thus the given operator is Hermitian.
\end{answer}
\subsection{Projection operators}
An operator $\hat{P}$ is said to be a projection operator if it is Hermitian and equal to its own square:\\
$$\hat{P}^{\dagger}=\hat{P}, \quad \quad \hat{P}^2=\hat{P}$$
The unit operator $\hat{I}$ is a simple example of a projection operator, since $$\hat{I}^{\dagger}=\hat{I}, \quad \hat{I}^2=\hat{I}$$
\textbf{Properties}\\
\begin{itemize}
\item The product of two commuting projection operators, $\hat{P}_{1}$ and $\hat{P}_{2}$, is also a projection operator, since
$$
\left(\hat{P}_{1} \hat{P}_{2}\right)^{\dagger}=\hat{P}_{2}^{\dagger} \hat{P}_{1}^{\dagger}=\hat{P}_{2} \hat{P}_{1}=\hat{P}_{1} \hat{P}_{2} \text { and }\left(\hat{P}_{1} \hat{P}_{2}\right)^{2}=\hat{P}_{1} \hat{P}_{2} \hat{P}_{1} \hat{P}_{2}=\hat{P}_{1}^{2} \hat{P}_{2}^{2}=\hat{P}_{1} \hat{P}_{2} \text {. }
$$
\item The sum of two projection operators is generally not a projection operator.
\item Two projection operators are said to be orthogonal if their product is zero.
\item For a sum of projection operators $\hat{P}_{1}+\hat{P}_{2}+\hat{P}_{3}+\cdots$ to be a projection operator, it is necessary and sufficient that these projection operators be mutually orthogonal (i.e., the cross-product terms must vanish).
\end{itemize}
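These properties are easy to verify on concrete matrices. The sketch below (a minimal NumPy illustration; the vectors are arbitrary choices) builds projectors $\hat{P}=|v\rangle\langle v|$ and checks idempotence, the product rule for commuting projectors, and the failure of the sum rule for non-orthogonal ones:

```python
import numpy as np

# Projection operators onto orthonormal basis vectors: P = |v><v|
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
P1 = np.outer(v1, v1)          # projects onto span{v1}
P2 = np.outer(v2, v2)          # projects onto span{v2}

# Each is Hermitian and idempotent: P† = P and P² = P
assert np.allclose(P1, P1.conj().T) and np.allclose(P1 @ P1, P1)

# P1 and P2 commute (here P1 P2 = 0: they are orthogonal projectors) ...
assert np.allclose(P1 @ P2, P2 @ P1)
# ... hence their product and their sum are again projection operators.
S = P1 + P2
assert np.allclose(S @ S, S)

# A sum of non-orthogonal projectors generally fails to be a projector:
w = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
Q = np.outer(w, w)             # overlaps with P1, so P1 Q != 0
T = P1 + Q
print(np.allclose(T @ T, T))   # False
```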
\subsection{Commutator Algebra}
The commutator of two operators $\hat{A}$ and $\hat{B}$, denoted by $[\hat{A}, \hat{B}]$, is defined by
$$
[\hat{A}, \hat{B}]=\hat{A} \hat{B}-\hat{B} \hat{A}
$$
and the anticommutator $\{\hat{A}, \hat{B}\}$ is defined by
$$
\{\hat{A}, \hat{B}\}=\hat{A} \hat{B}+\hat{B} \hat{A}
$$
Two operators are said to commute if their commutator is equal to zero, and hence $\hat{A} \hat{B}=\hat{B} \hat{A}$. Any operator commutes with itself:
$$
[\hat{A}, \hat{A}]=0
$$
Note that if two operators are Hermitian and their product is also Hermitian, these operators commute:
$$
(\hat{A} \hat{B})^{\dagger}=\hat{B}^{\dagger} \hat{A}^{\dagger}=\hat{B} \hat{A}
$$
and since $(\hat{A} \hat{B})^{\dagger}=\hat{A} \hat{B}$ we have $\hat{A} \hat{B}=\hat{B} \hat{A}$.\\
\textbf{Example}\\
$$\left[\hat{X}, \hat{P}_{x}\right]=i \hbar \hat{I}, \quad\left[\hat{Y}, \hat{P}_{y}\right]=i \hbar \hat{I}, \quad\left[\hat{Z}, \hat{P}_{z}\right]=i \hbar \hat{I}$$
Where $\hat{P_x}=-i\hbar \frac{\partial}{\partial x}$, $\hat{P_y}=-i\hbar \frac{\partial}{\partial y}$, $\hat{P_z}=-i\hbar \frac{\partial}{\partial z}$ and $\hat{I}$ is the unit operator.\\
\begin{note}
\begin{enumerate}
\item $\left[\hat{X}, \hat{P}_{x}\right]=i \hbar, \quad\left[\hat{Y}, \hat{P}_{y}\right]=i \hbar, \quad\left[\hat{Z}, \hat{P}_{z}\right]=i \hbar$
\item $\left[\hat{X}, \hat{P}_{y}\right]=\left[\hat{X}, \hat{P}_{z}\right]=\left[\hat{Y}, \hat{P}_{x}\right]=\left[\hat{Y}, \hat{P}_{z}\right]=\left[\hat{Z}, \hat{P}_{x}\right]=\left[\hat{Z}, \hat{P}_{y}\right]=0$
\item $\left[\hat{R}_{j}, \hat{P}_{k}\right]=i \hbar \delta_{j k}, \quad\left[\hat{R}_{j}, \hat{R}_{k}\right]=0, \quad\left[\hat{P}_{j}, \hat{P}_{k}\right]=0 \quad(j, k=x, y, z)$
\item $\left[\hat{X}^{n}, \hat{P}_{x}\right]=i \hbar n \hat{X}^{n-1}, \quad\left[\hat{X}, \hat{P}_{x}^{n}\right]=i \hbar n \hat{P}_{x}^{n-1}$
\item $\left[ f(\hat{X}), \hat{P}_{x}\right]=i \hbar \frac{d f(\hat{X})}{d \hat{X}} \Longrightarrow [\hat{\vec{P}}, F(\hat{\vec{R}})]=-i \hbar \vec{\nabla} F(\hat{\vec{R}})$\\
where $F$ is a function of the operator $\hat{\vec{R}}$
\end{enumerate}
\end{note}
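The canonical relations above can be verified symbolically by letting the operators act on an arbitrary test function. The sketch below (an illustrative check assuming SymPy; the choices $n=3$ and $f(\hat{X})=\sin\hat{X}$ are arbitrary instances of the general identities) confirms $[\hat{X},\hat{P}_x]=i\hbar$, $[\hat{X}^n,\hat{P}_x]=i\hbar n\hat{X}^{n-1}$, and $[f(\hat{X}),\hat{P}_x]=i\hbar f'(\hat{X})$:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
f = sp.Function('f')(x)                     # arbitrary test function

# Momentum operator in the position basis, acting on a function
Px = lambda g: -sp.I * hbar * sp.diff(g, x)

# [X, P_x] f = i hbar f
comm = x * Px(f) - Px(x * f)
assert sp.simplify(comm - sp.I * hbar * f) == 0

# [X^n, P_x] = i hbar n X^{n-1}, checked here for n = 3
comm3 = x**3 * Px(f) - Px(x**3 * f)
assert sp.simplify(comm3 - 3 * sp.I * hbar * x**2 * f) == 0

# [f(X), P_x] = i hbar f'(X), checked for f(X) = sin(X)
comms = sp.sin(x) * Px(f) - Px(sp.sin(x) * f)
assert sp.simplify(comms - sp.I * hbar * sp.cos(x) * f) == 0
```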
\textbf{Properties}\\
\begin{itemize}
\item Antisymmetry:\\
$$[\hat{A}, \hat{B}]=-[\hat{B}, \hat{A}]$$
\item Linearity:\\
$$[\hat{A}, \hat{B}+\hat{C}+\hat{D}+\cdots]=[\hat{A}, \hat{B}]+[\hat{A}, \hat{C}]+[\hat{A}, \hat{D}]+\cdots$$
\item Hermitian conjugate of a commutator: \\
$$[\hat{A}, \hat{B}]^{\dagger}=\left[\hat{B}^{\dagger}, \hat{A}^{\dagger}\right]$$
\item Distributivity: \\
$$\begin{aligned}
&{[\hat{A}, \hat{B} \hat{C}]=[\hat{A}, \hat{B}] \hat{C}+\hat{B}[\hat{A}, \hat{C}]} \\
&{[\hat{A} \hat{B}, \hat{C}]=\hat{A}[\hat{B}, \hat{C}]+[\hat{A}, \hat{C}] \hat{B}}
\end{aligned}$$
\item Jacobi identity:\\
$$[\hat{A},[\hat{B}, \hat{C}]]+[\hat{B},[\hat{C}, \hat{A}]]+[\hat{C},[\hat{A}, \hat{B}]]=0$$
\item Operators commute with scalars: an operator $\hat{A}$ commutes with any scalar $b$ :
$$
[\hat{A}, b]=0
$$
\end{itemize}
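Since these are algebraic identities, they hold for any matrices. A quick sanity check (a sketch assuming NumPy; the random $4\times4$ complex matrices and the seed are arbitrary) on the Jacobi identity, distributivity, and the Hermitian-conjugation rule:

```python
import numpy as np

rng = np.random.default_rng(0)
def rand_op(n=4):
    # Random complex matrix standing in for a generic operator
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def comm(A, B):
    return A @ B - B @ A

A, B, C = rand_op(), rand_op(), rand_op()

# Jacobi identity: [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0
J = comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B))
assert np.allclose(J, 0)

# Distributivity: [A, BC] = [A,B]C + B[A,C]
assert np.allclose(comm(A, B @ C), comm(A, B) @ C + B @ comm(A, C))

# Hermitian conjugate of a commutator: [A,B]† = [B†, A†]
assert np.allclose(comm(A, B).conj().T, comm(B.conj().T, A.conj().T))
```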
\begin{note}
The uncertainty relation between two operators $\hat{A}$ and $\hat{B}$ is\\
$$\Delta A \Delta B \geq \frac{1}{2}\left| \langle \left[ \hat{A},\hat{B}\right] \rangle \right| $$
\end{note}
\begin{exercise}
(a) Show that the commutator of two Hermitian operators is anti-Hermitian.\\
(b) Evaluate the commutator $\left[ \hat{A},\left[ \hat{B},\hat{C}\right] \hat{D}\right] $.
\end{exercise}
\begin{answer}
(a) If $\hat{A}$ and $\hat{B}$ are Hermitian, we can write
$$
[\hat{A}, \hat{B}]^{\dagger}=(\hat{A} \hat{B}-\hat{B} \hat{A})^{\dagger}=\hat{B}^{\dagger} \hat{A}^{\dagger}-\hat{A}^{\dagger} \hat{B}^{\dagger}=\hat{B} \hat{A}-\hat{A} \hat{B}=-[\hat{A}, \hat{B}] ;
$$
that is, the commutator of $\hat{A}$ and $\hat{B}$ is anti-Hermitian: $[\hat{A}, \hat{B}]^{\dagger}=-[\hat{A}, \hat{B}]$.\\
(b) Using the distributivity relation we have
$$
\begin{aligned}
[\hat{A},[\hat{B}, \hat{C}] \hat{D}] &=[\hat{B}, \hat{C}][\hat{A}, \hat{D}]+[\hat{A},[\hat{B}, \hat{C}]] \hat{D} \\
&=(\hat{B} \hat{C}-\hat{C} \hat{B})(\hat{A} \hat{D}-\hat{D} \hat{A})+\hat{A}(\hat{B} \hat{C}-\hat{C} \hat{B}) \hat{D}-(\hat{B} \hat{C}-\hat{C} \hat{B}) \hat{A} \hat{D} \\
&=\hat{C} \hat{B} \hat{D} \hat{A}-\hat{B} \hat{C} \hat{D} \hat{A}+\hat{A} \hat{B} \hat{C} \hat{D}-\hat{A} \hat{C} \hat{B} \hat{D} .
\end{aligned}
$$
\end{answer}
\begin{exercise}
Find the following commutation relations:
(i) $\left[\frac{\partial}{\partial x}, \frac{\partial^{2}}{\partial x^{2}}\right]$
(ii) $\left[\frac{\partial}{\partial x}, F(x)\right]$
\end{exercise}
\begin{answer}
(i) $\left[\frac{\partial}{\partial x}, \frac{\partial^{2}}{\partial x^{2}}\right] \psi=\left(\frac{\partial}{\partial x} \frac{\partial^{2}}{\partial x^{2}}-\frac{\partial^{2}}{\partial x^{2}} \frac{\partial}{\partial x}\right) \psi=\left(\frac{\partial^{3}}{\partial x^{3}}-\frac{\partial^{3}}{\partial x^{3}}\right) \psi=0$\\
(ii) $\left[\frac{\partial}{\partial x}, F(x)\right] \psi=\frac{\partial}{\partial x}(F \psi)-F \frac{\partial}{\partial x} \psi=\frac{\partial F}{\partial x} \psi+F \frac{\partial \psi}{\partial x}-F \frac{\partial \psi}{\partial x}=\frac{\partial F}{\partial x} \psi$\\
Thus, $\left[\frac{\partial}{\partial x}, F(x)\right]=\frac{\partial F}{\partial x}$
\end{answer}
\begin{exercise}
If the Hamiltonian of a system is $H=\frac{p_{x}^{2}}{2 m}+V(x)$, find the commutators $[H, x]$ and $[[H, x], x]$.
\end{exercise}
\begin{answer}
As, $H=p^{2} / 2 m+V(x)$\\
We have, $[H, x]=\frac{1}{2 m}\left[p^{2}, x\right]=-i \hbar p / m$ \\
and so, $[[H, x], x]=-\frac{i \hbar}{m}[p, x]=-\hbar^{2} / m$\\
Hence, $\langle m|[[H, x], x]| m\rangle=-\frac{\hbar^{2}}{m}$
\end{answer}
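This calculation can be reproduced symbolically by letting $H$ act on an arbitrary test function $f(x)$ with an arbitrary potential $V(x)$. The sketch below (an illustrative SymPy check, not part of the original solution) verifies $[H,x]f=-(\hbar^2/m)f'$ and $[[H,x],x]f=-(\hbar^2/m)f$:

```python
import sympy as sp

x, hbar, m = sp.symbols('x hbar m', positive=True)
V = sp.Function('V')(x)   # arbitrary potential
f = sp.Function('f')(x)   # arbitrary test function

# Hamiltonian acting on a function: H g = -(hbar^2/2m) g'' + V g
H = lambda g: -hbar**2 / (2 * m) * sp.diff(g, x, 2) + V * g

# [H, x] f = -i hbar (p/m) f = -(hbar^2/m) f'
c1 = H(x * f) - x * H(f)
assert sp.simplify(c1 + hbar**2 / m * sp.diff(f, x)) == 0

# [[H, x], x] f = (H x^2 - 2 x H x + x^2 H) f = -(hbar^2/m) f
c2 = H(x**2 * f) - 2 * x * H(x * f) + x**2 * H(f)
assert sp.simplify(c2 + hbar**2 / m * f) == 0
```

Note that $V(x)$ drops out of both commutators, exactly as in the hand calculation, since $[V(x),x]=0$.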
\subsection{Functions of operators}
\textbf{Commutators involving function operators}\\
If $\hat{A}$ commutes with another operator $\hat{B}$, then $\hat{B}$ commutes with any operator function that depends on $\hat{A}$ :
$$
[\hat{A}, \hat{B}]=0 \quad \Longrightarrow \quad[\hat{B}, F(\hat{A})]=0 ;
$$
in particular, $F(\hat{A})$ commutes with $\hat{A}$ and with any other function, $G(\hat{A})$, of $\hat{A}$ :
$$
[\hat{A}, F(\hat{A})]=0, \quad\left[\hat{A}^{n}, F(\hat{A})\right]=0, \quad[F(\hat{A}), G(\hat{A})]=0 .
$$
\subsection{Inverse and unitary operators}
\textbf{Inverse of an operator}\\
Assuming it exists, the inverse $\hat{A}^{-1}$ of a linear operator $\hat{A}$ is defined by the relation
$$
\hat{A}^{-1} \hat{A}=\hat{A} \hat{A}^{-1}=\hat{I},
$$
where $\hat{I}$ is the unit operator, the operator that leaves any state $|\psi\rangle$ unchanged.\\
\textbf{Unitary operator}\\
A linear operator $\hat{U}$ is said to be unitary if its inverse $\hat{U}^{-1}$ is equal to its adjoint $\hat{U}^{\dagger}$ :
$$
\hat{U}^{\dagger}=\hat{U}^{-1} \quad \text { or } \quad \hat{U} \hat{U}^{\dagger}=\hat{U}^{\dagger} \hat{U}=\hat{I} .
$$
The product of two unitary operators is also unitary, since
$$
(\hat{U} \hat{V})(\hat{U} \hat{V})^{\dagger}=(\hat{U} \hat{V})\left(\hat{V}^{\dagger} \hat{U}^{\dagger}\right)=\hat{U}\left(\hat{V} \hat{V}^{\dagger}\right) \hat{U}^{\dagger}=\hat{U} \hat{U}^{\dagger}=\hat{I},
$$
or $(\hat{U} \hat{V})^{\dagger}=(\hat{U} \hat{V})^{-1}$. This result can be generalized to any number of operators; the product of a number of unitary operators is also unitary, since
$$
\begin{aligned}
(\hat{A} \hat{B} \hat{C} \hat{D} \cdots)(\hat{A} \hat{B} \hat{C} \hat{D} \cdots)^{\dagger} &=\hat{A} \hat{B} \hat{C} \hat{D}(\cdots) \hat{D}^{\dagger} \hat{C}^{\dagger} \hat{B}^{\dagger} \hat{A}^{\dagger}=\hat{A} \hat{B} \hat{C}\left(\hat{D} \hat{D}^{\dagger}\right) \hat{C}^{\dagger} \hat{B}^{\dagger} \hat{A}^{\dagger} \\
&=\hat{A} \hat{B}\left(\hat{C} \hat{C}^{\dagger}\right) \hat{B}^{\dagger} \hat{A}^{\dagger}=\hat{A}\left(\hat{B} \hat{B}^{\dagger}\right) \hat{A}^{\dagger} \\
&=\hat{A} \hat{A}^{\dagger}=\hat{I}
\end{aligned}
$$
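A standard way to produce a unitary operator is to exponentiate a Hermitian one, $U=e^{-iH}$. The sketch below (a NumPy-only illustration; the random matrices and seed are arbitrary, and the exponential is built from the spectral decomposition rather than a library `expm`) checks $U^{\dagger}U=UU^{\dagger}=I$ and that a product of unitaries is unitary:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2                   # Hermitian matrix

# U = exp(-iH) via the spectral decomposition H = Q diag(w) Q†
w, Q = np.linalg.eigh(H)
U = Q @ np.diag(np.exp(-1j * w)) @ Q.conj().T

assert np.allclose(U.conj().T @ U, np.eye(4))   # U†U = I
assert np.allclose(U @ U.conj().T, np.eye(4))   # UU† = I

# The product of two unitary operators is again unitary
H2 = 1j * (M - M.conj().T) / 2             # another Hermitian matrix
w2, Q2 = np.linalg.eigh(H2)
V = Q2 @ np.diag(np.exp(-1j * w2)) @ Q2.conj().T
W = U @ V
assert np.allclose(W.conj().T @ W, np.eye(4))
```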
\subsection{Eigenvalues and eigenvectors of an operator}
A state vector $|\psi\rangle$ is said to be an eigenvector (also called an eigenket or eigenstate) of an operator $\hat{A}$ if the application of $\hat{A}$ to $|\psi\rangle$ gives
$$
\hat{A}|\psi\rangle=a|\psi\rangle,
$$
where $a$ is a complex number, called an eigenvalue of $\hat{A}$. This equation is known as the eigenvalue equation, or eigenvalue problem, of the operator $\hat{A}$. Its solutions yield the eigenvalues and eigenvectors of $\hat{A}$.
\begin{exercise}
$\text { Show that if } \hat{A}^{-1} \text { exists, the eigenvalues of } \hat{A}^{-1} \text { are just the inverses of those of } \hat{A} \text {. }$
\end{exercise}
\begin{answer}
Since $\hat{A}^{-1} \hat{A}=\hat{I}$ we have on the one hand
$$
\hat{A}^{-1} \hat{A}|\psi\rangle=|\psi\rangle,
$$
and on the other hand
$$
\hat{A}^{-1} \hat{A}|\psi\rangle=\hat{A}^{-1}(\hat{A}|\psi\rangle)=a \hat{A}^{-1}|\psi\rangle .
$$
Combining the previous two equations, we obtain
$$
a \hat{A}^{-1}|\psi\rangle=|\psi\rangle,
$$
hence
$$
\hat{A}^{-1}|\psi\rangle=\frac{1}{a}|\psi\rangle
$$
This means that $|\psi\rangle$ is also an eigenvector of $\hat{A}^{-1}$ with eigenvalue $1 / a$. That is, if $\hat{A}^{-1}$ exists, then
$$
\hat{A}|\psi\rangle=a|\psi\rangle \quad \Longrightarrow \quad \hat{A}^{-1}|\psi\rangle=\frac{1}{a}|\psi\rangle .
$$
\end{answer}
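The same conclusion can be observed numerically: if $A|\psi\rangle=a|\psi\rangle$ then $A^{-1}|\psi\rangle=\frac{1}{a}|\psi\rangle$. A minimal sketch (assuming NumPy; the random matrix is shifted by $5I$ only to keep it safely invertible):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)) + 5 * np.eye(4)   # random, safely invertible matrix
Ainv = np.linalg.inv(A)

evals, evecs = np.linalg.eig(A)
ok = True
for lam, v in zip(evals, evecs.T):
    # A v = lam v  implies  A^{-1} v = (1/lam) v
    ok = ok and np.allclose(Ainv @ v, v / lam)
print(ok)   # True: eigenvalues of A^{-1} are the inverses of those of A
```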
\section{Theorems}
\begin{theorem}
For a Hermitian operator, all of its eigenvalues are real and the eigenvectors corresponding to different eigenvalues are orthogonal. \\
If $\hat{A}^{\dagger}=\hat{A}$ and $\hat{A}|\phi_{n}\rangle =a_n |\phi_{n}\rangle$, then $a_n$ is a real number,\\
and\\
$\langle \phi_{m}\mid \phi_{n}\rangle=\delta_{mn}$
\end{theorem}
\begin{theorem}
If two Hermitian operators, $\hat{A}$ and $\hat{B}$, commute and if $\hat{A}$ has no degenerate eigenvalue, then each eigenvector of $\hat{A}$ is also an eigenvector of $\hat{B}$. In addition, we can construct a common orthonormal basis that is made of the joint eigenvectors of $\hat{A}$ and $\hat{B}$.
\end{theorem}
\begin{theorem}
The eigenvalues of an anti-Hermitian operator are either purely imaginary or equal to zero.
\end{theorem}
\begin{theorem}
The eigenvalues of a unitary operator are complex numbers of moduli equal to one; the eigenvectors of a unitary operator that has no degenerate eigenvalues are mutually orthogonal.
\end{theorem}
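The first and third theorems are easy to see in action on finite matrices. The sketch below (an illustrative NumPy check on a random $5\times5$ matrix; not a proof) confirms that a Hermitian matrix has real eigenvalues and orthonormal eigenvectors, while an anti-Hermitian one has purely imaginary eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))

# Hermitian part: real eigenvalues, orthonormal eigenvectors
A = (M + M.conj().T) / 2
w, V = np.linalg.eigh(A)
assert np.allclose(np.imag(w), 0)                  # eigenvalues are real
assert np.allclose(V.conj().T @ V, np.eye(5))      # eigenvectors orthonormal

# Anti-Hermitian part: purely imaginary (or zero) eigenvalues
B = (M - M.conj().T) / 2
wb = np.linalg.eigvals(B)
print(np.allclose(wb.real, 0))                     # True
```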
\section{Wavefunction in coordinate and momentum representations}
We have $$xp-px=i\hbar$$
So $$\langle x\mid xp-px\mid x^{\prime}\rangle=i\hbar\langle x\mid x^{\prime}\rangle$$
$$x\langle x |p|x^{\prime}\rangle -\langle x |p|x^{\prime}\rangle x^{\prime}=i\hbar \delta(x-x^{\prime})$$
$$\langle x|p|x^{\prime}\rangle =i\hbar \frac{\delta(x-x^{\prime})}{x-x^{\prime}}$$
Using the identity $x\,\delta^{\prime}(x)=-\delta(x)$, i.e. $\frac{\delta(x-x^{\prime})}{x-x^{\prime}}=-\delta^{\prime}(x-x^{\prime})$, we get\\
$$\langle x|p|x^{\prime}\rangle=-i\hbar \frac{\partial}{\partial x}\delta(x-x^{\prime})$$
Similarly, $$\langle x|p|\psi(t)\rangle=-i\hbar\frac{\partial}{\partial x} \langle x|\psi(t)\rangle$$
$$\langle x|p|\psi(t)\rangle=-i\hbar \frac{\partial }{\partial x}\psi(x,t)$$
So the momentum operator in the position basis is\\
$$\hat{P}=-i\hbar \frac{\partial}{\partial x}$$
Now let us find the expression for the position operator in the momentum basis:\\
$$\langle p|xp-px|p^{\prime}\rangle =i\hbar \delta (p-p^{\prime})$$
$$\langle p|px-xp|p^{\prime}\rangle =-i\hbar \delta(p-p^{\prime})$$\\
$$p\langle p|x|p^{\prime}\rangle -\langle p|x|p^{\prime}\rangle p^{\prime}=-i\hbar\delta(p-p^{\prime})$$
$$\langle p|x|p^{\prime}\rangle=+i\hbar\frac{\partial}{\partial p}\delta(p-p^{\prime})$$
So $$\langle p|x|\psi(t)\rangle =i\hbar\frac{\partial}{\partial p} \langle p|\psi(t)\rangle=i\hbar\frac{\partial}{\partial p}\psi (p,t)$$\\
i.e. $\hat{x}=i\hbar\frac{\partial}{\partial p}$ in momentum space.\\
So in three dimensions:\\
Position space\\
$\hat{p}=-i\hbar \nabla_{r}$\\
$\hat{x}=x$
Momentum space\\
$\hat{p}=p$\\
$\hat{x}=i\hbar \nabla_{p}$\\
Now let us check whether the momentum operator is self-adjoint.\\
$\int_{-\infty}^{+\infty} f^*(x)\left( -i\hbar \frac{\partial}{\partial x}g(x)\right)dx=-i\hbar\left\lbrace \left[ f^{*}(x)g(x) \right]_{-\infty}^{+\infty}-\int_{-\infty}^{+\infty} g(x)\frac{\partial}{\partial x}f^*(x)dx\right\rbrace $\\
$=i\hbar \int \frac{\partial}{\partial x} f^*(x) g(x)dx$
$=\int\left( -i\hbar \frac{\partial}{\partial x}f(x)\right) ^{*}g(x)dx$
$=\int(pf)^{*}g \,dx$
So the momentum operator is Hermitian.\\
Under Hermitian conjugation $-i\hbar \to +i\hbar$, so $\frac{\partial}{\partial x}$ must go to $-\frac{\partial}{\partial x}$ for the momentum operator to remain Hermitian; hence $\frac{\partial}{\partial x}$ is anti-Hermitian.\\
So $\left( \frac{d^n}{dx^n}\right) ^{\dagger}=(-1)^n \left( \frac{d^n}{dx^n}\right)$\\
This is a good point to discuss the time evolution of a state vector. The time evolution of the state vector is prescribed by the Schr\"{o}dinger equation,\\
$$i\hbar \frac{d}{dt}|\psi(t)\rangle =\hat{H}|\psi(t)\rangle $$
This is not an eigenvalue equation, since $\hat{H}$ is an operator.\\
Since we are dealing only with autonomous systems, $\hat{H}$ is explicitly time independent, so we can write the formal solution\\
$$|\psi(t) \rangle =e^{-\frac{i}{\hbar}\hat{H}t} |\psi(0)\rangle $$
or $$|\psi(t) \rangle =e^{\frac{-i}{\hbar}\hat{H}(t-t_0)} |\psi(t_0)\rangle$$
Let us denote $e^{\frac{-i}{\hbar}\hat{H}(t-t_0)}$ by $U$.
Now $$U^{\dagger}=e^{\frac{i}{\hbar}\hat{H}(t-t_0)}=U^{-1}$$
i.e. $$U^{\dagger}U=UU^{\dagger}=I\implies U \text{ is a unitary operator}$$
Interestingly, even for a non-autonomous system the time evolution operator is unitary, although the formal solution is then no longer this simple exponential.\\
If the system evolves from $t_0$ to $t_2$,\\
then $$U(t_2,t_0)=U(t_2,t_1)U(t_1,t_0)$$
and the product of unitary operators is unitary.\\
If we normalize $|\psi(0)\rangle$, is the normalization preserved under time evolution?\\
$$|\psi(t) \rangle =e^{-\frac{i}{\hbar}\hat{H}t} |\psi(0)\rangle $$
So $$\langle \psi(t)|=\langle \psi(0)|e^{\frac{i}{\hbar}\hat{H}^{\dagger}t}$$
So $$\langle \psi(t)\mid \psi(t)\rangle=\langle \psi(0)|e^{\frac{i}{\hbar}\hat{H}^{\dagger}t}e^{\frac{-i}{\hbar}\hat{H}t}\mid \psi(0)\rangle$$
Remember $$e^{\hat{A}}e^{\hat{B}}=e^{\hat{A}+\hat{B}}$$
if $$\left[ \hat{A},\hat{B}\right] =0$$
Here $\left[ \hat{H}^{\dagger},\hat{H}\right] =0$, since $\hat{H}$ is Hermitian.\\
So $$\langle \psi(t)\mid \psi(t)\rangle=\langle \psi(0)\mid \psi(0)\rangle$$
i.e. probability is conserved under time evolution, as in classical mechanics (Liouville's theorem). The preservation of probability follows from the unitarity of the time evolution operator.\\
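Norm conservation under $U(t)=e^{-i\hat{H}t/\hbar}$ can be demonstrated on a small matrix model. The sketch below (an illustrative NumPy example with $\hbar=1$; the dimension, seed, and sample times are arbitrary, and the exponential is built from the spectral decomposition of $H$) also checks the composition property $U(t_2,t_0)=U(t_2,t_1)U(t_1,t_0)$ for time-independent $H$:

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H = (M + M.conj().T) / 2                  # Hermitian Hamiltonian (hbar = 1)

# U(t) = exp(-iHt) from the spectral decomposition H = Q diag(w) Q†
w, Q = np.linalg.eigh(H)
def U(t):
    return Q @ np.diag(np.exp(-1j * w * t)) @ Q.conj().T

psi0 = rng.normal(size=6) + 1j * rng.normal(size=6)
psi0 /= np.linalg.norm(psi0)              # normalized initial state

for t in (0.5, 1.0, 7.3):
    psi_t = U(t) @ psi0
    print(round(np.linalg.norm(psi_t), 10))   # 1.0 each time: norm conserved

# Composition property for a time-independent Hamiltonian
assert np.allclose(U(2.0), U(1.2) @ U(0.8))
```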
Now that we have studied the Schr\"{o}dinger equation in the abstract basis, how does it look in the position basis?\\
we have \\
\begin{align*}
i\hbar \frac{d}{dt}|\psi(t)\rangle &=\hat{H}|\psi(t)\rangle \\
\intertext{with} \hat{H}&=\frac{p^2}{2m}+V(r)\\
\intertext{in the position basis (3-D),}
\langle r|i\hbar \frac{d}{dt}|\psi(t)\rangle&=\langle r|\hat{H}|\psi(t)\rangle\\
i\hbar \frac{d}{dt}\langle r|\psi(t)\rangle&=\langle r|\frac{p^2}{2m}+V(r)|\psi(t)\rangle\\
i\hbar\frac{\partial}{\partial t}\psi(r,t)&=-\frac{\hbar^2}{2m} \nabla^2\langle r|\psi(t)\rangle+\langle r|V(r)|\psi(t)\rangle\\
&=-\frac{\hbar^2}{2m}\nabla^2\psi(r,t)+V(r)\psi(r,t)\\
i\hbar \frac{\partial}{\partial t}\psi(r,t)&=-\frac{\hbar^2}{2m} \nabla^2 \psi(r,t)+V(r)\psi(r,t)\\
\intertext{This is a partial differential equation, first order in time.}
\end{align*}
Let's calculate
\begin{align*}
\langle x|\hat{P}|x^{\prime}\rangle&=\langle x|-i\hbar \frac{\partial}{\partial x}|x^{\prime}\rangle\\
&=-i\hbar \frac{\partial}{\partial x} \langle x|x^{\prime} \rangle\\
\intertext{similarly}
\langle x|\hat{P}|p\rangle&=-i\hbar \frac{\partial}{\partial x}\langle x|p\rangle\\
p\langle x|p\rangle&=-i\hbar\frac{\partial}{\partial x} \langle x|p\rangle\\
\frac{\partial \langle x|p\rangle}{\langle x|p\rangle}&=\frac{ip}{\hbar}\,\partial x\\
\langle x|p\rangle &\propto e^{\frac{ipx}{\hbar}}\\
\langle p|x\rangle &\propto e^{\frac{-ipx}{\hbar}}\\
\intertext{so we get}
\psi(x,t)&=\int_{-\infty}^{+\infty} dp\,\langle x|p\rangle \tilde{\psi}(p,t)\\
\psi(x,t)&=\frac{1}{\sqrt{2\pi \hbar}}\int_{-\infty}^{+\infty} dp\, e^{\frac{ipx}{\hbar}} \tilde{\psi}(p,t)\\
\tilde{\psi}(p,t)&=\frac{1}{\sqrt{2\pi \hbar}} \int_{-\infty}^{+\infty}dx\, e^{\frac{-ipx}{\hbar}} \psi(x,t)
\end{align*}
In 3-D\\
$$\psi(r,t)=\frac{1}{(2\pi \hbar)^{3/2}}\int d^3p\, e^{\frac{i\vec{p}\cdot \vec{r}}{\hbar}} \tilde{\psi}(p,t)$$
$$\tilde{\psi}(p,t)=\frac{1}{(2\pi \hbar)^{3/2}}\int d^3r\, e^{\frac{-i\vec{p}\cdot \vec{r}}{\hbar}} \psi(r,t)$$
It turns out that the momentum-space wavefunction is the Fourier transform of the position-space wavefunction.\\
Parseval's theorem guarantees that\\
$$\int_{-\infty}^{+\infty} d^3p |\psi(p,t)|^2=\int_{-\infty}^{+\infty} d^3r |\psi(r,t)|^2$$
i.e., if the wavefunction is normalized in position space, it is also normalized in momentum space.
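This Fourier-transform pair and Parseval's theorem can be illustrated with a discrete Fourier transform. The sketch below (a NumPy example with $\hbar=1$; the grid size, box length, and the Gaussian packet with momentum $p_0=2$ are arbitrary choices, and the FFT is only a discrete stand-in for the continuum transform) checks that the norm is the same in both representations:

```python
import numpy as np

# Gaussian wave packet on a grid (hbar = 1)
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
psi = np.exp(-x**2 / 2 + 2j * x)              # Gaussian with momentum p0 = 2
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)   # normalize in position space

# Momentum-space wavefunction via FFT
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)
dp = p[1] - p[0]

norm_x = (np.abs(psi)**2).sum() * dx
norm_p = (np.abs(phi)**2).sum() * dp
print(round(norm_x, 8), round(norm_p, 8))     # 1.0 1.0 (Parseval)
```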
\section{Parity operator}
The space reflection about the origin of the coordinate system is called inversion or a parity operation. This transformation is discrete. The parity operator $\hat{P}$ is defined by its action on the kets $|\vec{r}\rangle$ of the position space:
$$\hat{P}|\vec{r}\rangle =|-\vec{r}\rangle$$
such that
$$\hat{P}\psi(\vec{r})=\psi(-\vec{r})$$
\begin{note}
\begin{itemize}
\item Parity operator is Hermitian: $\hat{P}^{\dagger}=\hat{P}$
\item From the definition we have\\
$$\hat{P}^2 \psi(\vec{r})=\hat{P}\psi(-\vec{r})=\psi(\vec{r})$$
Hence $\hat{P}^2$ is equal to the unit operator:
$$\hat{P}^2=\hat{I} \quad \text{or} \quad \hat{P}=\hat{P}^{-1}$$
\item The parity operator is therefore unitary, since its Hermitian adjoint is equal to its inverse.\\
$$\hat{P}^{\dagger}=\hat{P}^{-1}$$
\item Now, since $\hat{P}^{2}=\hat{I}$, the eigenvalues of $\hat{P}$ are $+1$ or $-1$ with the corresponding eigenstates
$$
\hat{P} \psi_{+}(\vec{r})=\psi_{+}(-\vec{r})=\psi_{+}(\vec{r}), \quad \hat{P} \psi_{-}(\vec{r})=\psi_{-}(-\vec{r})=-\psi_{-}(\vec{r}) .
$$
The eigenstate $\left|\psi_{+}\right\rangle$is said to be even and $\left|\psi_{-}\right\rangle$is odd. Therefore, the eigenfunctions of the parity operator have definite parity: they are either even or odd.
\item \textbf{Even and odd operators}\\
An operator $\hat{A}$ is said to be even if it obeys the condition
$$
\hat{P} \hat{A} \hat{P}=\hat{A}
$$
and an operator $\hat{B}$ is odd if
$$
\hat{P} \hat{B} \hat{P}=-\hat{B}
$$
We can easily verify that even operators commute with the parity operator $\hat{P}$ and that odd operators anticommute with $\hat{P}$ :
$$
\begin{aligned}
\hat{A} \hat{P} &=(\hat{P} \hat{A} \hat{P}) \hat{P}=\hat{P} \hat{A} \hat{P}^{2}=\hat{P}\hat{A} \\
\hat{B} \hat{P} &=-(\hat{P} \hat{B} \hat{P}) \hat{P}=-\hat{P} \hat{B} \hat{P}^{2}=-\hat{P} \hat{B}
\end{aligned}
$$
\end{itemize}
\end{note}
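On a symmetric grid the parity operation is just index reversal, which makes these properties easy to demonstrate. The sketch below (an illustrative NumPy example in one dimension; the grid and Gaussian test functions are arbitrary) checks the $\pm 1$ eigenvalues, $\hat{P}^2=\hat{I}$, and the splitting of an arbitrary state into even and odd parts by the projectors $(\hat{I}\pm\hat{P})/2$:

```python
import numpy as np

# Parity on a symmetric grid: (P psi)(x) = psi(-x), i.e. index reversal
N = 1001
x = np.linspace(-5, 5, N)

def parity(psi):
    return psi[::-1]

psi_even = np.exp(-x**2)           # even function: eigenvalue +1
psi_odd = x * np.exp(-x**2)        # odd function: eigenvalue -1
psi_mixed = psi_even + psi_odd     # state of no definite parity

assert np.allclose(parity(psi_even), psi_even)
assert np.allclose(parity(psi_odd), -psi_odd)
assert np.allclose(parity(parity(psi_mixed)), psi_mixed)   # P^2 = I

# Any state splits into even and odd parts via (I ± P)/2
even_part = (psi_mixed + parity(psi_mixed)) / 2
odd_part = (psi_mixed - parity(psi_mixed)) / 2
assert np.allclose(even_part, psi_even) and np.allclose(odd_part, psi_odd)
```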
\newpage
\begin{abox}
Practice set 1
\end{abox}
\begin{enumerate}
\begin{minipage}{\textwidth}
\item Consider a particle in a one dimensional potential that satisfies $V(x)=V(-x)$. Let $\left|\psi_{0}\right\rangle$ and $\left|\psi_{1}\right\rangle$ denote the ground and the first excited states, respectively, and let $|\psi\rangle=\alpha_{0}\left|\psi_{0}\right\rangle+\alpha_{1}\left|\psi_{1}\right\rangle$ be a normalized state with $\alpha_{0}$ and $\alpha_{1}$ being real constants. The expectation value $\langle x\rangle$ of the position operator $x$ in the state $|\psi\rangle$ is given by
\exyear{NET DEC 2011}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\alpha_{0}^{2}\left\langle\psi_{0}|x| \psi_{0}\right\rangle+\alpha_{1}^{2}\left\langle\psi_{1}|x| \psi_{1}\right\rangle$
\task[\textbf{B.}]$\alpha_{0} \alpha_{1}\left[\left\langle\psi_{0}|x| \psi_{1}\right\rangle+\left\langle\psi_{1}|x| \psi_{0}\right\rangle\right]$
\task[\textbf{C.}]$\alpha_{0}^{2}+\alpha_{1}^{2}$
\task[\textbf{D.}]$2 \alpha_{0} \alpha_{1}$
\end{tasks}
\begin{minipage}{\textwidth}
\item The wave function of a particle at time $t=0$ is given by $|\psi(0)\rangle=\frac{1}{\sqrt{2}}\left(\left|u_{1}\right\rangle+\left|u_{2}\right\rangle\right)$, where
$\left|u_{1}\right\rangle$ and $\left|u_{2}\right\rangle$ are the normalized eigenstates with eigenvalues $E_{1}$ and $E_{2}$ respectively, $\left(E_{2}>E_{1}\right)$. The shortest time after which $|\psi(t)\rangle$ will become orthogonal to $|\psi(0)\rangle$ is
\exyear{NET DEC 2011}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\frac{-\hbar \pi}{2\left(E_{2}-E_{1}\right)}$
\task[\textbf{B.}]$\frac{\hbar \pi}{E_{2}-E_{1}}$
\task[\textbf{C.}]$\frac{\sqrt{2} \hbar \pi}{E_{2}-E_{1}}$
\task[\textbf{D.}]$\frac{2 \hbar \pi}{E_{2}-E_{1}}$
\end{tasks}
\begin{minipage}{\textwidth}
\item $\text { The commutator }\left[x^{2}, p^{2}\right] \text { is }$
\exyear{NET JUNE 2012}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $2 i \hbar x p$
\task[\textbf{B.}]$2 i \hbar(x p+p x)$
\task[\textbf{C.}]$2 i \hbar p x$
\task[\textbf{D.}]$2 i \hbar(x p-p x)$
\end{tasks}
\begin{minipage}{\textwidth}
\item Which of the following is a self-adjoint operator in the spherical polar coordinate system $(r, \theta, \phi)$ ?
\exyear{NET JUNE 2012}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $-\frac{i \hbar}{\sin ^{2} \theta} \frac{\partial}{\partial \theta}$
\task[\textbf{B.}]$-i \hbar \frac{\partial}{\partial \theta}$
\task[\textbf{C.}] $-\frac{i \hbar}{\sin \theta} \frac{\partial}{\partial \theta}$
\task[\textbf{D.}] $-i \hbar \sin \theta \frac{\partial}{\partial \theta}$
\end{tasks}
\begin{minipage}{\textwidth}
\item Given the usual canonical commutation relations, the commutator $[A, B]$ of $A=i\left(x p_{y}-y p_{x}\right)$ and $B=\left(y p_{z}+z p_{y}\right)$ is
\exyear{NET DEC 2012}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\hbar\left(x p_{z}-p_{x} z\right)$
\task[\textbf{B.}]$-\hbar\left(x p_{z}-p_{x} z\right)$
\task[\textbf{C.}]$\hbar\left(x p_{z}+p_{x} z\right)$
\task[\textbf{D.}]$-\hbar\left(x p_{z}+p_{x} z\right)$
\end{tasks}
\begin{minipage}{\textwidth}
\item If the operators $A$ and $B$ satisfy the commutation relation $[A, B]=I$, where $I$ is the identity operator, then
\exyear{NET JUNE 2013}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\left[e^{A}, B\right]=e^{A}$
\task[\textbf{B.}]$\left[e^{A}, B\right]=\left[e^{B}, A\right]$
\task[\textbf{C.}]$\left[e^{A}, B\right]=\left[e^{-B}, A\right]$
\task[\textbf{D.}]$\left[e^{A}, B\right]=I$
\end{tasks}
\begin{minipage}{\textwidth}
\item Suppose Hamiltonian of a conservative system in classical mechanics is $H=\omega x p$, where $\omega$ is a constant and $x$ and $p$ are the position and momentum respectively. The corresponding Hamiltonian in quantum mechanics, in the coordinate representation, is
\exyear{NET DEC 2014}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $-i \hbar \omega\left(x \frac{\partial}{\partial x}-\frac{1}{2}\right)$
\task[\textbf{B.}]$-i \hbar \omega\left(x \frac{\partial}{\partial x}+\frac{1}{2}\right)$
\task[\textbf{C.}] $-i \hbar \omega x \frac{\partial}{\partial x}$
\task[\textbf{D.}]$-\frac{i \hbar \omega}{2} x \frac{\partial}{\partial x}$
\end{tasks}
\begin{minipage}{\textwidth}
\item Let $x$ and $p$ denote, respectively, the coordinate and momentum operators satisfying the canonical commutation relation $[x, p]=i$ in natural units $(\hbar=1)$. Then the commutator $\left[x, p e^{-p}\right]$ is
\exyear{NET DEC 2014}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $i(1-p) e^{-p}$
\task[\textbf{B.}]$i\left(1-p^{2}\right) e^{-p}$
\task[\textbf{C.}]$i\left(1-e^{-p}\right)$
\task[\textbf{D.}]$i p e^{-p}$
\end{tasks}
\begin{minipage}{\textwidth}
\item The wavefunction of a particle in one-dimension is denoted by $\psi(x)$ in the coordinate representation and by $\phi(p)=\int \psi(x) e^{\frac{-i p x}{\hbar}} d x$ in the momentum representation. If the action of an operator $\hat{T}$ on $\psi(x)$ is given by $\hat{T} \psi(x)=\psi(x+a)$, where $a$ is a constant then $\hat{T} \phi(p)$ is given by
\exyear{NET JUNE 2015}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $-\frac{i}{\hbar} \operatorname{ap} \phi(p)$
\task[\textbf{B.}]$e^{\frac{-i a p}{\hbar}} \phi(p)$
\task[\textbf{C.}]$e^{\frac{+i a p}{\hbar}} \phi(p)$
\task[\textbf{D.}]$\left(1+\frac{i}{\hbar} a p\right) \phi(p)$
\end{tasks}
\begin{minipage}{\textwidth}
\item Two different sets of orthogonal basis vectors $\left\{\left(\begin{array}{l}1 \\ 0\end{array}\right),\left(\begin{array}{l}0 \\ 1\end{array}\right)\right\}$ and $\left\{\frac{1}{\sqrt{2}}\left(\begin{array}{l}1 \\ 1\end{array}\right), \frac{1}{\sqrt{2}}\left(\begin{array}{c}1 \\ -1\end{array}\right)\right\}$ are given for a two dimensional real vector space. The matrix representation of a linear operator $\hat{A}$ in these basis are related by a unitary transformation. The unitary matrix may be chosen to be
\exyear{NET JUNE 2015}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\left(\begin{array}{cc}0 & -1 \\ 1 & 0\end{array}\right)$
\task[\textbf{B.}]$\left(\begin{array}{ll}0 & 1 \\ 1 & 0\end{array}\right)$
\task[\textbf{C.}]$\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1 & 1 \\ 1 & -1\end{array}\right)$
\task[\textbf{D.}] $\frac{1}{\sqrt{2}}\left(\begin{array}{ll}1 & 0 \\ 1 & 1\end{array}\right)$
\end{tasks}
\begin{minipage}{\textwidth}
\item A Hermitian operator $\hat{O}$ has two normalized eigenstates $|1\rangle$ and $|2\rangle$ with eigenvalues 1 and 2 , respectively. The two states $|u\rangle=\cos \theta|1\rangle+\sin \theta|2\rangle$ and $|v\rangle=\cos \phi|1\rangle+\sin \phi|2\rangle$ are such that $\langle v|\hat{O}| v\rangle=7 / 4$ and $\langle u \mid v\rangle=0$. Which of the following are possible values of $\theta$ and $\phi$ ?
\exyear{NET DEC 2015}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\theta=-\frac{\pi}{6}$ and $\phi=\frac{\pi}{3}$
\task[\textbf{B.}]$\theta=\frac{\pi}{6}$ and $\phi=\frac{\pi}{3}$
\task[\textbf{C.}]$\theta=-\frac{\pi}{4}$ and $\phi=\frac{\pi}{4}$
\task[\textbf{D.}]$\theta=\frac{\pi}{3}$ and $\phi=-\frac{\pi}{6}$
\end{tasks}
\begin{minipage}{\textwidth}
\item If $\hat{L}_{x}, \hat{L}_{y}, \hat{L}_{z}$ are the components of the angular momentum operator in three dimensions the commutator $\left[\hat{L}_{x}, \hat{L}_{x} \hat{L}_{y} \hat{L}_{z}\right]$ may be simplified to
\exyear{NET JUNE 2016}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $i \hbar L_{x}\left(\hat{L}_{z}^{2}-\hat{L}_{y}^{2}\right)$
\task[\textbf{B.}]$i \hbar \hat{L}_{z} \hat{L}_{y} \hat{L}_{x}$
\task[\textbf{C.}]$i \hbar L_{x}\left(2 \hat{L}_{z}^{2}-\hat{L}_{y}^{2}\right)$
\task[\textbf{D.}]0
\end{tasks}
\begin{minipage}{\textwidth}
\item Consider the operator $a=x+\frac{d}{d x}$ acting on smooth functions of $x$. Then the commutator $[a, \cos x]$ is
\exyear{NET DEC 2016}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $-\sin x$
\task[\textbf{B.}]$\cos x$
\task[\textbf{C.}]$-\cos x$
\task[\textbf{D.}]0
\end{tasks}
\begin{minipage}{\textwidth}
\item Consider the operator $\vec{\pi}=\vec{p}-q \vec{A}$, where $\vec{p}$ is the momentum operator, $\vec{A}=\left(A_{x}, A_{y}, A_{z}\right)$ is the vector potential and $q$ denotes the electric charge. If $\vec{B}=\left(B_{x}, B_{y}, B_{z}\right)$ denotes the magnetic field, the $z$-component of the vector operator $\vec{\pi} \times \vec{\pi}$ is
\exyear{NET DEC 2016}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $i q \hbar B_{z}+q\left(A_{x} p_{y}-A_{y} p_{x}\right)$
\task[\textbf{B.}]$-i q \hbar B_{z}-q\left(A_{x} p_{y}-A_{y} p_{x}\right)$
\task[\textbf{C.}]$-i q \hbar B_{z}$
\task[\textbf{D.}] $i q \hbar B_{z}$
\end{tasks}
\begin{minipage}{\textwidth}
\item $\text { The two vectors }\left(\begin{array}{l}
a \\
0
\end{array}\right) \text { and }\left(\begin{array}{l}
b \\
c
\end{array}\right) \text { are orthonormal if }$
\exyear{NET JUNE 2017}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $a=\pm 1, b=\pm 1 / \sqrt{2}, c=\pm 1 / \sqrt{2}$
\task[\textbf{B.}] $a=\pm 1, b=\pm 1, c=0$
\task[\textbf{C.}]$a=\pm 1, b=0, c=\pm 1$
\task[\textbf{D.}] $a=\pm 1, b=\pm 1 / 2, c=1 / 2$
\end{tasks}
\begin{minipage}{\textwidth}
\item Let $x$ denote the position operator and $p$ the canonically conjugate momentum operator of a particle. The commutator
$$
\left[\frac{1}{2 m} p^{2}+\beta x^{2}, \frac{1}{m} p^{2}+\gamma x^{2}\right]
$$
where $\beta$ and $\gamma$ are constants, is zero if
\exyear{NET DEC 2017}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\gamma=\beta$
\task[\textbf{B.}]$\gamma=2 \beta$
\task[\textbf{C.}]$\gamma=\sqrt{2} \beta$
\task[\textbf{D.}]$2 \gamma=\beta$
\end{tasks}
\begin{minipage}{\textwidth}
\item Consider the operator $A_{x}=L_{y} p_{z}-L_{z} p_{y}$, where $L_{i}$ and $p_{i}$ denote, respectively, the components of the angular momentum and momentum operators. The commutator $\left[A_{x}, x\right]$ where $x$ is the $x$ - component of the position operator, is
\exyear{NET DEC 2018}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $-i \hbar\left(z p_{z}+y p_{y}\right)$
\task[\textbf{B.}]$-i \hbar\left(z p_{z}-y p_{y}\right)$
\task[\textbf{C.}]$i \hbar\left(z p_{z}+y p_{y}\right)$
\task[\textbf{D.}]$i \hbar\left(z p_{z}-y p_{y}\right)$
\end{tasks}
\end{enumerate}
\colorlet{ocre1}{ocre!70!}
\colorlet{ocrel}{ocre!30!}
\setlength\arrayrulewidth{1pt}
\begin{table}[H]
\centering
\arrayrulecolor{ocre}
\begin{tabular}{|p{1.5cm}|p{1.5cm}||p{1.5cm}|p{1.5cm}|}
\hline
\multicolumn{4}{|c|}{\textbf{Answer key}}\\\hline\hline
\rowcolor{ocrel}Q.No.&Answer&Q.No.&Answer\\\hline
1&\textbf{b}&2&\textbf{b}\\\hline
3&\textbf{b}&4&\textbf{c}\\\hline
5&\textbf{c}&6&\textbf{a}\\\hline
7&\textbf{b}&8&\textbf{a}\\\hline
9&\textbf{c}&10&\textbf{c}\\\hline
11&\textbf{a}&12&\textbf{a}\\\hline
13&\textbf{a}&14&\textbf{d}\\\hline
15&\textbf{c}&16&\textbf{b}\\\hline
17&\textbf{a}&&\\\hline
\end{tabular}
\end{table}
\newpage
\begin{abox}
Practice set 2
\end{abox}
\begin{enumerate}
\begin{minipage}{\textwidth}
\item The quantum mechanical operator for the momentum of a particle moving in one dimension is given by
\exyear{GATE 2011}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $i \hbar \frac{d}{d x}$
\task[\textbf{B.}]$-i \hbar \frac{d}{d x}$
\task[\textbf{C.}]$i \hbar \frac{\partial}{\partial t}$
\task[\textbf{D.}]$-\frac{\hbar^{2}}{2 m} \frac{d^{2}}{d x^{2}}$
\end{tasks}
\begin{minipage}{\textwidth}
\item If $L_{x}, L_{y}$ and $L_{z}$ are respectively the $x, y$ and $z$ components of angular momentum operator $L$. The commutator $\left[L_{x} L_{y}, L_{z}\right]$ is equal to
\exyear{GATE 2011}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $i \hbar\left(L_{x}^{2}+L_{y}^{2}\right)$
\task[\textbf{B.}]$2 i \hbar L_{z}$
\task[\textbf{C.}]$i \hbar\left(L_{x}^{2}-L_{y}^{2}\right)$
\task[\textbf{D.}] 0
\end{tasks}
\textbf{Common data for questions 3 and 4}\\
In a one-dimensional harmonic oscillator, $\varphi_{0}, \varphi_{1}$ and $\varphi_{2}$ are respectively the ground, first and second excited states. These three states are normalized and orthogonal to one another. $\psi_{1}$ and $\psi_{2}$ are two states defined by
$$
\psi_{1}=\varphi_{0}-2 \varphi_{1}+3 \varphi_{2}, \quad \psi_{2}=\varphi_{0}-\varphi_{1}+\alpha \varphi_{2}
$$
where $\alpha$ is a constant\\
\begin{minipage}{\textwidth}
\item The value of $\alpha$ for which $\psi_{2}$ is orthogonal to $\psi_{1}$ is
\exyear{GATE 2011}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}]2
\task[\textbf{B.}]1
\task[\textbf{C.}]-1
\task[\textbf{D.}]-2
\end{tasks}
\begin{minipage}{\textwidth}
\item For the value of $\alpha$ determined in $\mathrm{Q} 3$, the expectation value of energy of the oscillator in the state $\psi_{2}$ is
\end{minipage}
\begin{tasks}(1)
\task[\textbf{A.}] $\hbar \omega$
\task[\textbf{B.}]$3 \hbar \omega / 2$
\task[\textbf{C.}]$3 \hbar \omega$
\task[\textbf{D.}]$9 \hbar \omega / 2$
\end{tasks}
\begin{minipage}{\textwidth}
\item Which one of the following commutation relations is NOT CORRECT? Here, symbols have their usual meanings.
\exyear{GATE 2013}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\left[L^{2}, L_{z}\right]=0$
\task[\textbf{B.}]$\left[L_{x}, L_{y}\right]=i \hbar L_{z}$
\task[\textbf{C.}]$\left[L_{z}, L_{+}\right]=\hbar L_{+}$
\task[\textbf{D.}] $\left[L_{z}, L_{-}\right]=\hbar L_{-}$
\end{tasks}
\begin{minipage}{\textwidth}
\item Let $\vec{L}$ and $\vec{p}$ be the angular and linear momentum operators, respectively, for a particle. The commutator $\left[L_{x}, p_{y}\right]$ gives
\exyear{GATE 2015}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $-i \hbar p_{z}$
\task[\textbf{B.}]0
\task[\textbf{C.}]$i \hbar p_{x}$
\task[\textbf{D.}]$i \hbar p_{z}$
\end{tasks}
\begin{minipage}{\textwidth}
\item Which of the following operators is Hermitian?
\exyear{GATE 2016}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $\frac{d}{d x}$
\task[\textbf{B.}]$\frac{d^{2}}{d x^{2}}$
\task[\textbf{C.}]$i \frac{d^{2}}{d x^{2}}$
\task[\textbf{D.}]$\frac{d^{3}}{d x^{3}}$
\end{tasks}
\begin{minipage}{\textwidth}
\item If $x$ and $p$ are the $x$ components of the position and the momentum operators of a particle respectively, the commutator $\left[x^{2}, p^{2}\right]$ is
\exyear{GATE 2016}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $i \hbar(x p-p x)$
\task[\textbf{B.}]$2 i \hbar(x p-p x)$
\task[\textbf{C.}]$i \hbar(x p+p x)$
\task[\textbf{D.}]$2 i \hbar(x p+p x)$
\end{tasks}
\begin{minipage}{\textwidth}
\item For the parity operator $P$, which of the following statements is NOT true?
\exyear{GATE 2016}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $P^{\dagger}=P$
\task[\textbf{B.}] $P^{2}=-P$
\task[\textbf{C.}] $P^{2}=I$
\task[\textbf{D.}]$P^{\dagger}=P^{-1}$
\end{tasks}
\begin{minipage}{\textwidth}
\item Which one of the following operators is Hermitian?
\exyear{GATE 2017}
\end{minipage}
\begin{tasks}(2)
\task[\textbf{A.}] $i \frac{\left(p_{x} x^{2}-x^{2} p_{x}\right)}{2}$
\task[\textbf{B.}]$i \frac{\left(p_{x} x^{2}+x^{2} p_{x}\right)}{2}$
\task[\textbf{C.}]$e^{i p_{x} a}$
\task[\textbf{D.}]$e^{-i p_{x} a}$
\end{tasks}
\end{enumerate}
\colorlet{ocre1}{ocre!70!}
\colorlet{ocrel}{ocre!30!}
\setlength\arrayrulewidth{1pt}
\begin{table}[H]
\centering
\arrayrulecolor{ocre}
\begin{tabular}{|p{1.5cm}|p{1.5cm}||p{1.5cm}|p{1.5cm}|}
\hline
\multicolumn{4}{|c|}{\textbf{Answer key}}\\\hline\hline
\rowcolor{ocrel}Q.No.&Answer&Q.No.&Answer\\\hline
1&\textbf{b}&2&\textbf{c}\\\hline
3&\textbf{c}&4&\textbf{b}\\\hline
5&\textbf{d}&6&\textbf{d}\\\hline
7&\textbf{b}&8&\textbf{d}\\\hline
9&\textbf{b}&10&\textbf{a}\\\hline
\end{tabular}
\end{table}
\newpage
\begin{abox}
Practice set 3
\end{abox}
\begin{enumerate}
\begin{minipage}{\textwidth}
\item If $\left|\phi_{1}\right\rangle$ and $\left|\phi_{2}\right\rangle$ are two orthonormal state vectors such that $A=\left|\phi_{1}\right\rangle\left\langle\phi_{2}|+| \phi_{2}\right\rangle\left\langle\phi_{1}\right|$, then\\
(a) Prove that $A$ is Hermitian\\
(b) Find the value of $A^{2}$.
\end{minipage}
\begin{answer}
(a) For $A$ to be a projection operator, $A$ should be Hermitian and $A^{2}$ should be equal to $A$. The Hermitian adjoint of $\left|\phi_{1}\right\rangle\left\langle\phi_{2}\right|$ is $\left|\phi_{2}\right\rangle\left\langle\phi_{1}\right|$ and that of $\left|\phi_{2}\right\rangle\left\langle\phi_{1}\right|$ is $\left|\phi_{1}\right\rangle\left\langle\phi_{2}\right|$. So
\begin{align*}
&A^{\dagger}=\left[\left|\phi_{1}\right\rangle\left\langle\phi_{2}|+| \phi_{2}\right\rangle\left\langle\phi_{1}\right|\right]^{\dagger}=\left[\left|\phi_{1}\right\rangle\left\langle\phi_{2}\right|\right]^{\dagger}+\left[\left|\phi_{2}\right\rangle\left\langle\phi_{1}\right|\right]^{\dagger} \\
&=\left|\phi_{2}\right\rangle\left\langle\phi_{1}|+| \phi_{1}\right\rangle\left\langle\phi_{2}\right|=A
\end{align*}
Hence $A$ is Hermitian.\\
\begin{align*}
&\text { Now, } \quad A^{2}=\left[\left|\phi_{1}\right\rangle\left\langle\phi_{2}|+| \phi_{2}\right\rangle\left\langle\phi_{1}\right|\right]\left[\left|\phi_{1}\right\rangle\left\langle\phi_{2}|+| \phi_{2}\right\rangle\left\langle\phi_{1}\right|\right] \\
&=\left|\phi_{1}\right\rangle\left\langle\phi_{2}\left|\left[\left|\phi_{1}\right\rangle\left\langle\phi_{2}|+| \phi_{2}\right\rangle\left\langle\phi_{1}\right|\right]+\right| \phi_{2}\right\rangle\left\langle\phi_{1}\right|\left[\left|\phi_{1}\right\rangle\left\langle\phi_{2}|+| \phi_{2}\right\rangle\left\langle\phi_{1}\right|\right] \\
&=\left[\left|\phi_{1}\right\rangle\left\langle\phi_{2} \mid \phi_{1}\right\rangle\left\langle\phi_{2}|+| \phi_{1}\right\rangle\left\langle\phi_{2} \mid \phi_{2}\right\rangle\left\langle\phi_{1}\right|\right]+\left[\left|\phi_{2}\right\rangle\left\langle\phi_{1} \mid \phi_{1}\right\rangle\left\langle\phi_{2}|+| \phi_{2}\right\rangle\left\langle\phi_{1} \mid \phi_{2}\right\rangle\left\langle\phi_{1}\right|\right] \\
&\text { Since }\left|\phi_{1}\right\rangle \text { and }\left|\phi_{2}\right\rangle \text { are orthonormal, } \\
&\qquad A^{2}=\left|\phi_{1}\right\rangle\left\langle\phi_{1}|+| \phi_{2}\right\rangle\left\langle\phi_{2}\right|
\end{align*}
\end{answer}
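As an aside (not part of the original solution), this operator algebra can be spot-checked with an explicit two-dimensional representation, taking $\left|\phi_{1}\right\rangle=(1,0)^{T}$ and $\left|\phi_{2}\right\rangle=(0,1)^{T}$, so that $A$ becomes the matrix with zeros on the diagonal and ones off it:

```python
# Represent |phi1> = (1,0), |phi2> = (0,1); then A = |phi1><phi2| + |phi2><phi1|
# is the 2x2 matrix [[0,1],[1,0]].  Check that A is Hermitian and A^2 = I.

def outer(u, v):
    """Outer product |u><v| for 2-component vectors (bra is conjugated)."""
    return [[u[i] * v[j].conjugate() for j in range(2)] for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    """Hermitian adjoint: transpose and conjugate."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

phi1, phi2 = [1, 0], [0, 1]
A = matadd(outer(phi1, phi2), outer(phi2, phi1))

assert dagger(A) == A                      # Hermitian
assert matmul(A, A) == [[1, 0], [0, 1]]    # A^2 = |phi1><phi1| + |phi2><phi2| = I
print("A is Hermitian and A^2 = I")
```

In this basis $A^{2}$ is the identity on the span of the two kets, matching the result $A^{2}=\left|\phi_{1}\right\rangle\left\langle\phi_{1}\right|+\left|\phi_{2}\right\rangle\left\langle\phi_{2}\right|$.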
\begin{minipage}{\textwidth}
\item (a) Find the eigenstates of the momentum operator $P_{x}=-i \hbar \frac{d}{d x}$, with eigenvalue $\lambda$ defined by the relation $P_{x} \phi=\lambda \phi$, where $\frac{\lambda}{\hbar}=k$.\\
(b) Expand the wave function $\psi(x)=A \sin k x \sin 2 k x$ in basis of Eigen functions of momentum operator $P_{x}$
\end{minipage}
\begin{answer}
$P_{x} \phi=\lambda \phi \text { where } \frac{\lambda}{\hbar}=k$\\\\
Case 1: If $\lambda$ is positive, $P_{x} \phi=\hbar k \phi \Rightarrow-i \hbar \frac{d \phi}{d x}=\hbar k \phi \Rightarrow \frac{d \phi}{\phi}=i k \, d x \Rightarrow \ln \phi=i k x+C \Rightarrow \phi=e^{i k x}$\\
Case 2: If $\lambda$ is negative,\\
$P_{x} \phi=-\hbar k \phi \Rightarrow-i \hbar \frac{d \phi}{d x}=-\hbar k \phi \Rightarrow \frac{d \phi}{\phi}=-i k \, d x \Rightarrow \ln \phi=-i k x+C \Rightarrow \phi=e^{-i k x}$\\\\
(b) Expand the function $\psi(x)=A \sin k x \sin 2 k x$ as a linear combination of eigenfunctions of the momentum operator $P_{x}$.
\begin{align*}
&\psi(x)=A \sin k x \sin 2 k x=A\left(\frac{e^{i k x}-e^{-i k x}}{2 i}\right)\left(\frac{e^{2 i k x}-e^{-2 i k x}}{2 i}\right) \\
&=\frac{A}{4}\left(-e^{-3 i k x}+e^{-i k x}+e^{i k x}-e^{3 i k x}\right)
\end{align*}
\end{answer}
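As a hedged numerical aside (with arbitrarily chosen sample values for $A$ and $k$), the expansion can be spot-checked by comparing both sides at a few points:

```python
import math
import cmath

# Spot-check: A*sin(kx)*sin(2kx) == (A/4)*(-e^{-3ikx} + e^{-ikx} + e^{ikx} - e^{3ikx})
# for arbitrarily chosen values of A and k.
A, k = 1.7, 0.9

def lhs(x):
    return A * math.sin(k * x) * math.sin(2 * k * x)

def rhs(x):
    e = cmath.exp
    return (A / 4) * (-e(-3j * k * x) + e(-1j * k * x)
                      + e(1j * k * x) - e(3j * k * x))

for x in [-2.0, -0.3, 0.0, 0.5, 1.8]:
    assert abs(lhs(x) - rhs(x)) < 1e-12
print("expansion matches at all sample points")
```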
\begin{minipage}{\textwidth}
\item If $\left|\phi_{1}\right\rangle=A\left(\begin{array}{l}1 \\ 0 \\ 0\end{array}\right)\left|\phi_{2}\right\rangle=B\left(\begin{array}{l}0 \\ i \\ i\end{array}\right)\left|\phi_{3}\right\rangle=C\left(\begin{array}{c}0 \\ i \\ -i\end{array}\right)$\\
(a) Find normalization constant $A, B, C$ for ket $\left|\phi_{1}\right\rangle\left|\phi_{2}\right\rangle\left|\phi_{3}\right\rangle$\\
(b) Prove that $\left|\phi_{1}\right\rangle,\left|\phi_{2}\right\rangle$ and $\left|\phi_{3}\right\rangle$ are orthogonal\\
(c) Check whether $\left|\phi_{1}\right\rangle,\left|\phi_{2}\right\rangle$ and $\left|\phi_{3}\right\rangle$ are linearly independent or not.
\end{minipage}
\begin{answer}
(a) $\left\langle\phi_{1} \mid \phi_{1}\right\rangle=1 \Rightarrow A^{*}\left(\begin{array}{lll}1 & 0 & 0\end{array}\right) A\left(\begin{array}{l}1 \\ 0 \\ 0\end{array}\right)=1 \Rightarrow A=1$
$$
\left\langle\phi_{2} \mid \phi_{2}\right\rangle=1 \Rightarrow B^{*}\left(\begin{array}{lll}
0 & -i & -i
\end{array}\right) B\left(\begin{array}{l}
0 \\
i \\
i
\end{array}\right)=1 \Rightarrow B=\frac{1}{\sqrt{2}}
$$
$$
\left\langle\phi_{3} \mid \phi_{3}\right\rangle=1 \Rightarrow C^{*}\left(\begin{array}{lll}
0 & -i & i
\end{array}\right) C\left(\begin{array}{c}
0 \\
i \\
-i
\end{array}\right)=1 \Rightarrow C=\frac{1}{\sqrt{2}}
$$
(b) $\left\langle\phi_{1} \mid \phi_{2}\right\rangle=A^{*}\left(\begin{array}{lll}1 & 0 & 0\end{array}\right) B\left(\begin{array}{l}0 \\ i \\ i\end{array}\right)=0$
$$
\left\langle\phi_{1} \mid \phi_{3}\right\rangle=A^{*}\left(\begin{array}{lll}
1 & 0 & 0
\end{array}\right) C\left(\begin{array}{c}
0 \\
i \\
-i
\end{array}\right)=0
$$
$$
\left\langle\phi_{2} \mid \phi_{3}\right\rangle=B^{*}\left(\begin{array}{lll}
0 & -i & -i
\end{array}\right) C\left(\begin{array}{c}
0 \\
i \\
-i
\end{array}\right)=B^{*} C(0+1-1)=0
$$
(c) $c_{1}\left|\phi_{1}\right\rangle+c_{2}\left|\phi_{2}\right\rangle+c_{3}\left|\phi_{3}\right\rangle=0 \Rightarrow c_{1}\left(\begin{array}{c}
1 \\
0 \\
0
\end{array}\right)+c_{2}\left(\begin{array}{c}
0 \\
i \\
i
\end{array}\right)+c_{3}\left(\begin{array}{c}
0 \\
i \\
-i
\end{array}\right)=0$\\
$$
c_{1}=0 \quad c_{2}+c_{3}=0 \text { and } c_{2}-c_{3}=0 \Rightarrow c_{1}=0, c_{2}=0, c_{3}=0
$$
So $\left|\phi_{1}\right\rangle,\left|\phi_{2}\right\rangle$ and $\left|\phi_{3}\right\rangle$ are linearly independent
\end{answer}
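A quick numerical cross-check of parts (a) and (b) (an illustrative aside, using the normalization constants found above):

```python
import math

# With A = 1 and B = C = 1/sqrt(2) (the constants found in part (a)),
# the three kets should form an orthonormal set.
s = 1 / math.sqrt(2)
phi1 = [1, 0, 0]
phi2 = [0, s * 1j, s * 1j]
phi3 = [0, s * 1j, -s * 1j]

def braket(u, v):
    """Inner product <u|v>, conjugating the bra components."""
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

kets = [phi1, phi2, phi3]
for i in range(3):
    for j in range(3):
        expected = 1 if i == j else 0
        assert abs(braket(kets[i], kets[j]) - expected) < 1e-12
print("orthonormal (hence linearly independent) set confirmed")
```

Orthonormality immediately implies linear independence, consistent with part (c).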
\begin{minipage}{\textwidth}
\item If the Hamiltonian of a system is $H=\frac{p_{x}^{2}}{2 m}+V(x)$, find the commutators $[H, x]$ and $[[H, x], x]$.
\end{minipage}
\begin{answer}
As $H=p^{2} / 2 m+V(x)$:\\\\
We have, $[H, x]=\frac{1}{2 m}\left[p^{2}, x\right]=-i \hbar p / m$\\\\
and so, $[[H, x], x]=-\frac{i \hbar}{m}[p, x]=-\hbar^{2} / m$\\\\
Hence, $\langle m|[[H, x], x]| m\rangle=-\frac{\hbar^{2}}{m}$.
\end{answer}
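As an illustrative aside (not part of the original solution), both identities can be spot-checked numerically with central finite differences on a smooth test function, in units with $\hbar=m=1$. Note that $V(x)$ commutes with $x$, so only the kinetic term contributes:

```python
import math

# Spot-check [H,x]f = -(hbar^2/m) f' and [[H,x],x]f = -(hbar^2/m) f
# for H = p^2/2m + V(x), with hbar = m = 1.  The potential V(x) commutes
# with x, so only the kinetic term contributes to either commutator.
hbar = m = 1.0
h = 1e-4                               # finite-difference step

def f(x):                              # smooth test function
    return math.exp(-x * x)

def d2(g, x):                          # central second difference
    return (g(x + h) - 2 * g(x) + g(x - h)) / h**2

def H(g):                              # kinetic part of the Hamiltonian
    return lambda x: -(hbar**2) / (2 * m) * d2(g, x)

def comm_H_x(g):                       # [H, x] g = H(x g) - x H(g)
    xg = lambda t: t * g(t)
    return lambda x: H(xg)(x) - x * H(g)(x)

for x in [-1.2, -0.4, 0.3, 0.9]:
    fprime = -2 * x * f(x)             # exact derivative of exp(-x^2)
    assert abs(comm_H_x(f)(x) + (hbar**2 / m) * fprime) < 1e-5
    xf = lambda t: t * f(t)
    nested = comm_H_x(xf)(x) - x * comm_H_x(f)(x)   # [[H,x],x] f
    assert abs(nested + (hbar**2 / m) * f(x)) < 1e-5
print("both commutator identities verified numerically")
```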
\end{enumerate}
Require Import Bedrock.Platform.tests.Thread0 Bedrock.Platform.tests.Connect Bedrock.Platform.Bootstrap.
Module Type S.
Parameter heapSize : nat.
End S.
Module Make(M : S).
Import M.
Module M'.
Definition globalSched : W := ((heapSize + 50) * 4)%nat.
Definition inbuf_size := 40.
Theorem inbuf_size_lower : (inbuf_size >= 2)%nat.
unfold inbuf_size; auto.
Qed.
Theorem inbuf_size_upper : (N_of_nat (inbuf_size * 4) < Npow2 32)%N.
reflexivity.
Qed.
End M'.
Import M'.
Module E := Connect.Make(M').
Import E.
Section boot.
Hypothesis heapSizeLowerBound : (3 <= heapSize)%nat.
Definition size := heapSize + 50 + 1.
Hypothesis mem_size : goodSize (size * 4)%nat.
Let heapSizeUpperBound : goodSize (heapSize * 4).
goodSize.
Qed.
Definition bootS := bootS heapSize 1.
Definition boot := bimport [[ "malloc"!"init" @ [Malloc.initS], "connect"!"main" @ [E.mainS] ]]
bmodule "main" {{
bfunctionNoRet "main"() [bootS]
Sp <- (heapSize * 4)%nat;;
Assert [PREmain[_] globalSched =?> 1 * 0 =?> heapSize];;
Call "malloc"!"init"(0, heapSize)
[PREmain[_] globalSched =?> 1 * mallocHeap 0];;
Goto "connect"!"main"
end
}}.
Ltac t := unfold globalSched, localsInvariantMain, M'.globalSched; genesis.
Theorem ok0 : moduleOk boot.
vcgen; abstract t.
Qed.
Definition m1 := link Buffers.m boot.
Definition m2 := link E.m m1.
Definition m := link m2 E.T.T.m.
Lemma ok1 : moduleOk m1.
link Buffers.ok ok0.
Qed.
Lemma ok2 : moduleOk m2.
link E.ok ok1.
Qed.
Theorem ok : moduleOk m.
link ok2 E.T.T.ok.
Qed.
Variable stn : settings.
Variable prog : program.
Hypothesis inj : forall l1 l2 w, Labels stn l1 = Some w
-> Labels stn l2 = Some w
-> l1 = l2.
Hypothesis agree : forall l pre bl,
LabelMap.MapsTo l (pre, bl) (XCAP.Blocks m)
-> exists w, Labels stn l = Some w
/\ prog w = Some bl.
Hypothesis agreeImp : forall l pre, LabelMap.MapsTo l pre (XCAP.Imports m)
-> exists w, Labels stn l = Some w
/\ prog w = None.
Hypothesis omitImp : forall l w,
Labels stn ("sys", l) = Some w
-> prog w = None.
Variable w : W.
Hypothesis at_start : Labels stn ("main", Global "main") = Some w.
Variable st : state.
Hypothesis mem_low : forall n, (n < size * 4)%nat -> st.(Mem) n <> None.
Hypothesis mem_high : forall w, $ (size * 4) <= w -> st.(Mem) w = None.
Theorem safe : sys_safe stn prog (w, st).
safety ok.
Qed.
End boot.
End Make.
% !TeX spellcheck = en_GB
% !TeX encoding = UTF-8
\chapter{Functional Specifications}
\label{ch:functional_specifications}
\epigraph{“Details matter. It’s worth waiting to get it right.”}{- Steve Jobs}
This chapter is divided into five sections.
\begin{enumerate}
\item The first section is a brief overview of the data set, which will be used within the prototype implementation. In this section, the decision for a particular dataset is clearly explained. Furthermore, the flow of the data will be introduced.
\item The purpose of the second section is to describe different approaches and alternatives when it comes to splitting a dataset. The prototype's dataset is divided into separate parts for training, validation and testing.
\item Within the third section, the so-called VGG16 architecture will be explained in greater detail.
\item How a model arrives at its decisions is related to the topic of loss functions and the gradient. Because this was already addressed in the associated theory section, a detailed explanation is left out. Instead, this section elaborates on details of pre-trained model implementations. The prototype uses pre-trained networks to make its predictions, and this section explains how exactly this can be done.
\item To make decisions transparent, the prototype uses the SHAP library, which is introduced in the theory section as well. This section summarizes the theory part. It also introduces other methods and explains why SHAP is used to meet the defined requirement of a transparent decision.
\end{enumerate}
The five sections are related to the five steps, which are introduced at the end of the requirements chapter. Moreover, it is strongly related to the theoretical part of this thesis. The overall intention is to apply the given knowledge practically, by using the prototype implementation.
\section{Input Data}
\label{sec:input_data}
The dataset "cats vs dogs" is used as the input dataset because it is one of the quickest ways to enter the field of image recognition. Furthermore, the decision is based on a quantitative approach: the CIFAR-10, CIFAR-100 and "cats vs dogs" datasets are compared by their size:
\begin{itemize}
\item The CIFAR-10 dataset consists of 60000 32\(\times\)32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
\item The CIFAR-100 dataset is just like CIFAR-10, except it has 100 classes instead of 10. Each class contains 600 images: 500 training images and 100 testing images per class. The 100 classes in CIFAR-100 are grouped into 20 superclasses. Each image comes with a "fine" label (its class) and a "coarse" label (the superclass to which it belongs).
\item The "cats vs dogs" training archive contains 25000 images of dogs and cats. The resolution of the images varies considerably.
\end{itemize}
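The per-class counts implied by the numbers above can be compared directly (a rough back-of-the-envelope calculation; storage per class additionally depends on image resolution):

```python
# Per-class sample counts implied by the dataset descriptions above.
samples_per_class = {
    "CIFAR-10":     60_000 // 10,   # 6,000 images per class
    "CIFAR-100":    60_000 // 100,  # 600 images per class
    "cats vs dogs": 25_000 // 2,    # 12,500 images per class
}
for name, n in sorted(samples_per_class.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {n} images per class")
```

"cats vs dogs" offers by far the most examples per class, which is the point the comparison below makes graphically.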
Because the total number of images and their resolutions differ, the total download size and the total number of possible results are compared. The following illustration presents this information precisely and allows a quantitative approach to be applied to the given datasets:
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.9\linewidth]{photo/47_data-set-quantative.png}}
\caption{The "cats vs dogs" dataset is much bigger than the CIFAR-10 and CIFAR-100 datasets (left). The possible results for CIFAR-10 and CIFAR-100 are far more numerous than for "cats vs dogs" (mid). If the size and the potential outcomes are put in relation to each other, it can be seen that the "cats vs dogs" dataset offers the largest amount of sample data per class. According to the curse of dimensionality, the "cats vs dogs" dataset is therefore the right choice (right).}
\label{fig:47_data-set-quantative}
\end{figure}
As mentioned in the theory chapter and the requirements chapter, the total size and the number of classes matter when it comes to choosing a handy dataset. Usually, the number of classes is far more important than the total number of examples, as long as that number is not too big. This is why the "cats vs dogs" dataset is the best fit for the prototype. Another reason to choose the "cats vs dogs" dataset is the curse of dimensionality: in a nutshell, if the feature space of an image recognition task is quite big, more input data leads to better results. As can be seen in the right plot of Figure \ref{fig:47_data-set-quantative}, the "cats vs dogs" dataset has more data per class (measured in consumed storage space per class). A few samples of the prototype's input dataset can be seen in Figure \ref{fig:38_to_predict_images.png}. The images are not in high definition. This is good because otherwise the total number of features (pixels) would simply be too big and would have to be scaled anyway in order to ensure a suitable training time on a personal computer.
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=1\linewidth]{photo/38_to_predict_images.png}}
\caption{Example records from the "cats vs dogs" dataset. The pictures have already been scaled (compare IT concept).}
\label{fig:38_to_predict_images.png}
\end{figure}
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.9\linewidth]{photo/46_data_model}}
\caption{
If the data is loaded (1), it is divided into subsets afterwards (2). With the help of the training and validation subsets, a deep neural network is adjusted until it fits the training data (3). Afterwards, the model is tested on its ability to make predictions on the test subset (4). Finally, the model and the results are used to make transparent decisions (5).}
\label{fig:46_data_model}
\end{figure}
Last but not least, Figure \ref{fig:46_data_model} visualizes the data flow within the prototype implementation from a non-technical point of view. First, the data is loaded from a web host (1). Second, the dataset is divided into different subsets (2), one each for training, validation and testing of the prototype's deep learning model. The input data comes in as "images" represented by matrices, as already explained in the section about \hyperref[subsec:cnn]{convolutional neural networks (Section \ref{subsec:cnn})}. Validation is common practice in the implementation of deep learning models: it turns out it is better to test a network twice. Once on unseen data after the model is trained (4), and once with the validation subset, against which the model is tested after each iteration (3). This allows the iterative process of adjusting the weights to be stopped before it comes to an overfit. The validation dataset influences the model, so it is essential to know that validation cannot be seen as a real test.
\section{Scale the Data}
\label{sec:Scaling_the_data}
As mentioned in the theory chapter, the data is usually split into two subsets, a training set and a test set, and sometimes into three: a training set, a validation set and a test set. The model's weights are adjusted on the training data. The test set is used to make predictions on unseen data. When this is done, one of two things might happen: the model is overfitted or underfitted. Within the prototype implementation, it is important to avoid over- and underfitting. In order to achieve that, the dataset is divided into three subsets. As can be seen in Figure \ref{fig:48_pie_chart}, the training set is by far the largest one. The validation and test sets are relatively small.
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.9\linewidth]{photo/48_pie_chart}}
\caption{
The figure visualizes how the data for the prototype implementation is divided into different parts. The training set is by far the largest one. The validation and test sets are relatively small.
}
\label{fig:48_pie_chart}
\end{figure}
Alternative approaches or different sizes for the individual parts can work as well. With the help of an iterative process, the ideal size of each part can be determined. But as long as the results are good (as defined in the requirements section), this split works fine for the prototype implementation.
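The split described in this section can be sketched in plain Python. This is a minimal illustration, not the prototype's actual code, and the 80/10/10 ratio is only an assumed example:

```python
import random

def train_val_test_split(items, val_frac=0.1, test_frac=0.1, seed=42):
    """Shuffle a dataset deterministically and cut it into three parts."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_test, n_val = int(n * test_frac), int(n * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test

train, val, test = train_val_test_split(range(25_000))
print(len(train), len(val), len(test))   # 20000 2500 2500
```

In practice a library helper (e.g. from a machine learning framework) would typically be used, but the idea is the same: shuffle once, then slice into disjoint parts.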
\section{Fit a Model}
\label{sec:fit_a_model}
The prototype uses a convolutional deep learning approach with a so-called VGG16 architecture.
This architecture can be seen in Figure \ref{fig:36_architecture_from_vgg16_net}. The name derives from its 16 weight layers; the network consists of convolutional layers, max-pooling layers, activation layers and fully connected layers. VGG16 is a convolutional neural network (CNN) architecture which was used to win the ILSVRC (ImageNet) competition in 2014. It is considered to be one of the most relevant architectures when it comes to explaining these kinds of models. A unique thing about VGG16 is that instead of having a large number of hyper-parameters, the founders of this architecture focused on having convolution layers with a 3\(\times\)3 filter (stride = 1) and the same padding.
Furthermore, it has max-pooling layers with a 2\(\times\)2 filter (stride = 2), and it follows this arrangement of convolution and max-pooling layers consistently throughout the whole architecture. The repetition of the same building blocks over and over again makes the network easy to explain. This is why the architecture is a perfect fit if its decisions shall be made transparent.
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.9\linewidth]{photo/36_architecture_from_vgg16_net}}
\caption{
There are 13 convolutional layers, five max-pooling layers and three dense layers, which sum up to 21 layers but only 16 weight layers. Convolutional layer 1 has 64 filters, convolutional layer 2 has 128 filters, convolutional layer 3 has 256 filters, while convolutional layers 4 and 5 have 512 filters each. The VGG-16 network is trained on the ImageNet dataset, which has over 14 million images and 1000 classes, and achieves 92.7\% top-5 accuracy. It exceeds AlexNet (another convolutional neural network architecture) by replacing large filters of size 11 and 5 in the first and second convolution layers with small 3\(\times\)3 filters.
}
\label{fig:36_architecture_from_vgg16_net}
\end{figure}
In the following, I will explain the key factors around the VGG16 convolutional neural network architecture. The building blocks are described in \hyperref[subsec:cnn]{Section \ref{subsec:cnn}}.
\begin{itemize}
\item There are 13 convolutional layers, five max-pooling layers and three dense layers, which sum up to 21 layers but only 16 weight layers
\item The first convolutional layer has 64 filters
\item The second convolutional layer has 128 filters
\item The third convolutional layer has 256 filters
\item The convolutional layers four and five have 512 filters each
The VGG-16 network is pre-trained on the ImageNet dataset, which has over 14 million images and 1000 classes, and achieves 92.7\% top-five accuracy. It exceeds AlexNet (another convolutional neural network architecture) by replacing large filters of size 11 and 5 in the first and second convolution layers with small 3\(\times\)3 filters. VGG16 achieved great results, and it is still used even though more advanced architectures exist today, because it is easy to explain.
Even though the VGG16 is the right fit for the prototype, the architecture has some disadvantages. They are mentioned in the following because they affect the implementation process:
\begin{itemize}
\item It has so many weight parameters that the model is very "heavy". Normally, the model is bigger than 500 MB.
\item Furthermore, the enormous number of parameters leads to long computation times.
\end{itemize}
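The first point can be made concrete with back-of-the-envelope arithmetic, assuming the commonly cited figure of roughly 138 million VGG16 weights stored as 32-bit floats (an estimate, not a measured value):

```python
# Rough size estimate: ~138 million parameters (the commonly cited VGG16
# count) stored as 4-byte float32 values.
params = 138_000_000
bytes_per_param = 4
size_mb = params * bytes_per_param / 1024**2
print(f"approx. {size_mb:.0f} MB")   # roughly 526 MB, i.e. "bigger than 500 MB"
```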
\section{Make Predictions}
\label{make_predictions}
A highly effective approach in the deep learning field is to use someone else's work. While this approach might be a no-go in other fields, it is most welcome in the deep learning area. This works because a pre-trained network is usually trained on a big dataset for large-scale image-classification tasks. Consequently, the hierarchy of features learnt by such networks can very effectively act as a generic model for a lot of computer vision problems, even if the new problems involve completely different classes from those of the original task for which the model was trained. This transfer of learnt features is important for the success of pre-trained networks. There are two ways to use a pre-trained network:
\begin{enumerate}
\item Feature Extraction
\item Fine Tuning
\end{enumerate}
In general, convolutional neural networks involve two parts. First, they start with a series of convolutional and pooling layers, as seen before in the section where the VGG16 model is described in detail. Usually, this part ends with a flatten layer and is called the convolutional base. The base is responsible for learning all the features of the images. On top of the convolutional base sit the prediction layers, which consist of several dense layers, ending in an output layer with a softmax activation to predict the output classes (e.g. the pre-trained VGG16 ends with a softmax layer to predict 1000 classes). Feature extraction involves reusing the convolutional base, which has already learnt feature representations from a large image set (like ImageNet). A custom prediction layer is then placed on top of the pre-trained convolutional base. During the training process, the convolutional base is frozen so that its weights are not adjusted; only the weights of the custom prediction layer get updated (Figure \ref{fig:49_feature_extraction}).
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.9\linewidth]{photo/49_feature_extraction}}
\caption{
During the training process, the convolutional base is frozen (so that its weights are not updated). Only the weights of the custom prediction layer get updated.
}
\label{fig:49_feature_extraction}
\end{figure}
The second approach is called fine-tuning. It is not used within the prototype implementation, which is why it is not explained in every detail. The idea is to unfreeze a few of the top layers of the convolutional base and jointly train them with the custom prediction layer which is put on top of the convolutional base (Figure \ref{fig:50_feature_extraction}).
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.9\linewidth]{photo/50_feature_extraction}}
\caption{
Fine-tuning means unfreezing a few of the top layers of the otherwise frozen convolutional base and jointly training them with the custom prediction layer placed on top of the convolutional base.
}
\label{fig:50_feature_extraction}
\end{figure}
\section{Make Transparent Decisions}
\label{make_transparent_decisions}
The SHAP Gradient Explainer was fully covered in the corresponding theory section. It is briefly summarized here, a few alternative approaches are introduced in the following, and a short statement is given as to why SHAP is the right fit for the implementation of transparent decisions.
Integrated gradients values are a bit different from SHAP values and require a single reference value to integrate from. In the SHAP Gradient Explainer, however, expected gradients reformulate the integral as an expectation and combine that expectation with sampling reference values from the background dataset. The technique uses an entire dataset as the background distribution instead of just a single reference value.
There is a wide variety of techniques and tools for interpreting decisions made by deep learning models. Besides the SHAP Gradient Explainer, the most significant methods are the following:
\begin{itemize}
\item Visualizing activation layers visualize how a given input comes out of specific activation layers. The approach explores which feature maps are getting activated in the model.
\item Occlusion sensitivity visualizes how parts of the image affect a deep neural network's confidence by hiding parts of the image iteratively.
\item Grad-CAM visualizes how parts of the image affect the network's output by looking into the gradients backpropagated to the class activation maps.
\item SmoothGrad averages the so-called sensitivity maps for an input image to identify pixels of interest.
\end{itemize}
This gives an overview of what neural network models do. The list of techniques here is not exhaustive; it just covers some of the most popular and widely used methods to interpret convolutional neural network models.
The decision for SHAP is based on the following two reasons:
\begin{enumerate}
\item It is no longer the state of the art when it comes to making transparent decisions, but it is easy to understand.
\item It can be used in other machine learning fields as well. When it comes to generalization and drawing a conclusion, it makes sense to have an implementation which fits a broad range of issues.
\end{enumerate}
This chapter was very extensive because it covers many aspects necessary for the implementation of the prototype. As mentioned before, detailed explanations for almost all points mentioned in this chapter can be found in the \hyperref[ch:theory]{Theory Chapter (\ref{ch:theory})}. Unfortunately, not every detail could be considered, because otherwise the work would become too large. The next chapter is about the technical specifications of the prototype; more precisely, which libraries could possibly be used to implement the prototype and which are finally used.
|
This week we got some exciting news, which I can’t wait to share when it’s a little more “official official”! But in the meantime I’ve been thinking a lot about our future home and settling down. Zee German has been traveling for business the last couple days, and it’s always funny for me to realize how intertwined our lives have become.
Life just feels more complete when we’re together. And I think it’s really true that the most important thing about a home is who you share it with (thank you, Pinterest!).
So in honor of our handsome other halves, I hope you enjoy today’s tune!
|
module Uri
import Uri.Parser
import Uri.Types
import Debug.Time
public export
decodeRelativeURI : String -> Maybe RelativeURI
decodeRelativeURI str =
case run parseRelativeURI (0, str) of
Right (uri, pos) => Just uri
Left err => Nothing
public export
decodeURI : String -> Maybe URI
decodeURI str =
case run parseURI (0, str) of
Right (uri, pos) => Just uri
Left err => Nothing
|
theory Tree imports Main begin
subsection "Tree"
inductive_set
tree :: "['a => 'a set,'a] => (nat * 'a) set"
for subs :: "'a => 'a set" and gamma :: 'a
(******
* This set represents the nodes in a tree which may represent a proof of gamma.
* Only storing the annotation and its level.
* NOTE: could parameterize on subs
******)
where
tree0: "(0,gamma) : tree subs gamma"
| tree1: "[| (n,delta) : tree subs gamma; sigma : subs delta |]
==> (Suc n,sigma) : tree subs gamma"
declare tree.cases [elim]
declare tree.intros [intro]
lemma tree0Eq: "(0,y) : tree subs gamma = (y = gamma)"
apply(rule iffI)
apply (erule tree.cases, auto)
done
lemma tree1Eq [rule_format]:
"\<forall>Y. (Suc n,Y) \<in> tree subs gamma = (\<exists>sigma \<in> subs gamma . (n,Y) \<in> tree subs sigma)"
by (induct n) (blast, force)
--"moving down a tree"
definition
incLevel :: "nat * 'a => nat * 'a" where
"incLevel = (% (n,a). (Suc n,a))"
lemma injIncLevel: "inj incLevel"
apply(simp add: incLevel_def)
apply(rule inj_onI)
apply auto
done
lemma treeEquation: "tree subs gamma = insert (0,gamma) (UN delta:subs gamma . incLevel ` tree subs delta)"
apply(rule set_eqI)
apply(simp add: split_paired_all)
apply(case_tac a)
apply(force simp add: tree0Eq incLevel_def)
apply(force simp add: tree1Eq incLevel_def)
done
definition
fans :: "['a => 'a set] => bool" where
"fans subs \<longleftrightarrow> (!x. finite (subs x))"
lemma fansD: "fans subs ==> finite (subs A)"
by(simp add: fans_def)
lemma fansI: "(!A. finite (subs A)) ==> fans subs"
by(simp add: fans_def)
subsection "Terminal"
definition
terminal :: "['a => 'a set,'a] => bool" where
"terminal subs delta \<longleftrightarrow> subs(delta) = {}"
lemma terminalD: "terminal subs Gamma ==> x ~: subs Gamma"
by(simp add: terminal_def)
-- "not a good dest rule"
lemma terminalI: "x \<in> subs Gamma ==> ~ terminal subs Gamma"
by(auto simp add: terminal_def)
-- "not a good intro rule"
subsection "Inherited"
definition
inherited :: "['a => 'a set,(nat * 'a) set => bool] => bool" where
"inherited subs P \<longleftrightarrow> (!A B. (P A & P B) = P (A Un B))
& (!A. P A = P (incLevel ` A))
& (!n Gamma A. ~(terminal subs Gamma) --> P A = P (insert (n,Gamma) A))
& (P {})"
(******
inherited properties:
- preserved under: dividing into 2, join 2 parts
moving up/down levels
inserting non terminal nodes
- hold on empty node set
******)
-- "FIXME tjr why does it have to be invariant under inserting nonterminal nodes?"
lemma inheritedUn[rule_format]:"inherited subs P --> P A --> P B --> P (A Un B)"
by (auto simp add: inherited_def)
lemma inheritedIncLevel[rule_format]: "inherited subs P --> P A --> P (incLevel ` A)"
by (auto simp add: inherited_def)
lemma inheritedEmpty[rule_format]: "inherited subs P --> P {}"
by (auto simp add: inherited_def)
lemma inheritedInsert[rule_format]:
"inherited subs P --> ~(terminal subs Gamma) --> P A --> P (insert (n,Gamma) A)"
by (auto simp add: inherited_def)
lemma inheritedI[rule_format]: "[| \<forall>A B . (P A & P B) = P (A Un B);
\<forall>A . P A = P (incLevel ` A);
\<forall>n Gamma A . ~(terminal subs Gamma) --> P A = P (insert (n,Gamma) A);
P {} |] ==> inherited subs P"
by (simp add: inherited_def)
(* These only for inherited join, and a few more places... *)
lemma inheritedUnEq[rule_format, symmetric]: "inherited subs P --> (P A & P B) = P (A Un B)"
by (auto simp add: inherited_def)
lemma inheritedIncLevelEq[rule_format, symmetric]: "inherited subs P --> P A = P (incLevel ` A)"
by (auto simp add: inherited_def)
lemma inheritedInsertEq[rule_format, symmetric]: "inherited subs P --> ~(terminal subs Gamma) --> P A = P (insert (n,Gamma) A)"
by (auto simp add: inherited_def)
lemmas inheritedUnD = iffD1[OF inheritedUnEq];
lemmas inheritedInsertD = inheritedInsertEq[THEN iffD1];
lemmas inheritedIncLevelD = inheritedIncLevelEq[THEN iffD1]
lemma inheritedUNEq[rule_format]:
"finite A --> inherited subs P --> (!x:A. P (B x)) = P (UN a:A. B a)"
apply(intro impI)
apply(erule finite_induct)
apply simp
apply(simp add: inheritedEmpty)
apply(force dest: inheritedUnEq)
done
lemmas inheritedUN = inheritedUNEq[THEN iffD1]
lemmas inheritedUND[rule_format] = inheritedUNEq[THEN iffD2]
lemma inheritedPropagateEq[rule_format]: assumes a: "inherited subs P"
and b: "fans subs"
and c: "~(terminal subs delta)"
shows "P(tree subs delta) = (!sigma:subs delta. P(tree subs sigma))"
apply(insert fansD[OF b])
apply(subst treeEquation [of _ delta])
using assms
apply(simp add: inheritedInsertEq inheritedUNEq[symmetric] inheritedIncLevelEq)
done
lemma inheritedPropagate:
"[| ~P(tree subs delta); inherited subs P; fans subs; ~(terminal subs delta)|]
==> \<exists>sigma \<in> subs delta . ~P(tree subs sigma)"
by(simp add: inheritedPropagateEq)
lemma inheritedViaSub: "[| inherited subs P; fans subs; P(tree subs delta); sigma \<in> subs delta |]
==> P(tree subs sigma)"
apply(frule_tac terminalI)
apply(simp add: inheritedPropagateEq)
done
lemma inheritedJoin[rule_format]:
"(inherited subs P & inherited subs Q) --> inherited subs (%x. P x & Q x)"
by(blast intro!: inheritedI
dest: inheritedUnEq inheritedIncLevelEq inheritedInsertEq inheritedEmpty)
lemma inheritedJoinI[rule_format]: "[| inherited subs P; inherited subs Q; R = ( % x . P x & Q x) |] ==> inherited subs R"
by(blast intro!:inheritedI dest: inheritedUnEq inheritedIncLevelEq inheritedInsertEq inheritedEmpty)
subsection "bounded, boundedBy"
definition
boundedBy :: "nat => (nat * 'a) set => bool" where
"boundedBy N A \<longleftrightarrow> (\<forall>(n,delta) \<in> A. n < N)"
definition
bounded :: "(nat * 'a) set => bool" where
"bounded A \<longleftrightarrow> (\<exists>N . boundedBy N A)"
lemma boundedByEmpty[simp]: "boundedBy N {}"
by(simp add: boundedBy_def)
lemma boundedByInsert: "boundedBy N (insert (n,delta) B) = (n < N & boundedBy N B)"
by(simp add: boundedBy_def)
lemma boundedByUn: "boundedBy N (A Un B) = (boundedBy N A & boundedBy N B)"
by(auto simp add: boundedBy_def)
lemma boundedByIncLevel': "boundedBy (Suc N) (incLevel ` A) = boundedBy N A";
by(auto simp add: incLevel_def boundedBy_def)
lemma boundedByAdd1: "boundedBy N B \<Longrightarrow> boundedBy (N+M) B"
by(auto simp add: boundedBy_def)
lemma boundedByAdd2: "boundedBy M B \<Longrightarrow> boundedBy (N+M) B"
by(auto simp add: boundedBy_def)
lemma boundedByMono: "boundedBy m B \<Longrightarrow> m < M \<Longrightarrow> boundedBy M B"
by(auto simp: boundedBy_def)
lemmas boundedByMonoD = boundedByMono
lemma boundedBy0: "boundedBy 0 A = (A = {})"
apply(simp add: boundedBy_def)
apply(auto simp add: boundedBy_def)
done
lemma boundedBySuc': "boundedBy N A \<Longrightarrow> boundedBy (Suc N) A"
by (auto simp add: boundedBy_def)
lemma boundedByIncLevel: "boundedBy n (incLevel ` (tree subs gamma)) = ( \<exists>m . n = Suc m & boundedBy m (tree subs gamma))";
apply(cases n)
apply(force simp add: boundedBy0 tree0)
apply(force simp add: treeEquation [of _ gamma] incLevel_def boundedBy_def)
done
lemma boundedByUN: "boundedBy N (UN x:A. B x) = (!x:A. boundedBy N (B x))"
by(simp add: boundedBy_def)
lemma boundedBySuc[rule_format]: "sigma \<in> subs Gamma \<Longrightarrow> boundedBy (Suc n) (tree subs Gamma) \<longrightarrow> boundedBy n (tree subs sigma)"
apply(subst treeEquation [of _ Gamma])
apply rule
apply(simp add: boundedByInsert)
apply(simp add: boundedByUN)
apply(drule_tac x=sigma in bspec) apply assumption
apply(simp add: boundedByIncLevel)
done
subsection "Inherited Properties- bounded"
lemma boundedEmpty: "bounded {}"
by(simp add: bounded_def)
lemma boundedUn: "bounded (A Un B) = (bounded A & bounded B)"
apply(auto simp add: bounded_def boundedByUn)
apply(rule_tac x="N+Na" in exI)
apply(blast intro: boundedByAdd1 boundedByAdd2)
done
lemma boundedIncLevel: "bounded (incLevel` A) = (bounded A)"
apply (simp add: bounded_def, rule)
apply(erule exE)
apply(rule_tac x=N in exI)
apply (simp add: boundedBy_def incLevel_def, force)
apply(erule exE)
apply(rule_tac x="Suc N" in exI)
apply (simp add: boundedBy_def incLevel_def, force)
done
lemma boundedInsert: "bounded (insert a B) = (bounded B)"
apply(case_tac a)
apply (simp add: bounded_def boundedByInsert, rule) apply blast
apply(erule exE)
apply(rule_tac x="Suc(aa+N)" in exI)
apply(force intro:boundedByMono)
done
lemma inheritedBounded: "inherited subs bounded"
by(blast intro!: inheritedI boundedUn[symmetric] boundedIncLevel[symmetric]
boundedInsert[symmetric] boundedEmpty)
subsection "founded"
definition
founded :: "['a => 'a set,'a => bool,(nat * 'a) set] => bool" where
"founded subs Pred = (%A. !(n,delta):A. terminal subs delta --> Pred delta)"
lemma foundedD: "founded subs P (tree subs delta) ==> terminal subs delta ==> P delta"
by(simp add: treeEquation [of _ delta] founded_def)
lemma foundedMono: "[| founded subs P A; \<forall>x. P x --> Q x |] ==> founded subs Q A";
by (auto simp: founded_def)
lemma foundedSubs: "founded subs P (tree subs Gamma) \<Longrightarrow> sigma \<in> subs Gamma \<Longrightarrow> founded subs P (tree subs sigma)"
apply(simp add: founded_def)
apply(intro ballI impI)
apply (case_tac x, simp, rule)
apply(drule_tac x="(Suc a, b)" in bspec)
apply(subst treeEquation)
apply (force simp: incLevel_def, simp)
done
subsection "Inherited Properties- founded"
lemma foundedInsert[rule_format]: "~ terminal subs delta --> founded subs P (insert (n,delta) B) = (founded subs P B)"
apply(simp add: terminal_def founded_def) done
lemma foundedUn: "(founded subs P (A Un B)) = (founded subs P A & founded subs P B)";
apply(simp add: founded_def) by force
lemma foundedIncLevel: "founded subs P (incLevel ` A) = (founded subs P A)";
apply (simp add: founded_def incLevel_def, auto) done
lemma foundedEmpty: "founded subs P {}"
by(auto simp add: founded_def)
lemma inheritedFounded: "inherited subs (founded subs P)"
by(blast intro!: inheritedI foundedUn[symmetric] foundedIncLevel[symmetric]
foundedInsert[symmetric] foundedEmpty)
subsection "Inherited Properties- finite"
lemmas finiteInsert = finite_insert
lemma finiteUn: "finite (A Un B) = (finite A & finite B)";
apply simp done
lemma finiteIncLevel: "finite (incLevel ` A) = finite A";
apply (insert injIncLevel, rule)
apply(frule finite_imageD)
apply (blast intro: subset_inj_on, assumption)
apply(rule finite_imageI)
by assumption
-- "FIXME often have injOn f A, finite f ` A, to show A finite"
lemma finiteEmpty: "finite {}" by auto
lemma inheritedFinite: "inherited subs (%A. finite A)"
apply(blast intro!: inheritedI finiteUn[symmetric] finiteIncLevel[symmetric] finiteInsert[symmetric] finiteEmpty)
done
subsection "path: follows a failing inherited property through tree"
definition
failingSub :: "['a => 'a set,(nat * 'a) set => bool,'a] => 'a" where
"failingSub subs P gamma = (SOME sigma. (sigma:subs gamma & ~P(tree subs sigma)))"
lemma failingSubProps: "[| inherited subs P; ~P (tree subs gamma); ~(terminal subs gamma); fans subs |]
==> failingSub subs P gamma \<in> subs gamma & ~(P (tree subs (failingSub subs P gamma)))"
apply(simp add: failingSub_def)
apply(drule inheritedPropagate) apply(assumption+)
apply(erule bexE)
apply (rule someI2, auto)
done
lemma failingSubFailsI: "[| inherited subs P; ~P (tree subs gamma); ~(terminal subs gamma); fans subs |]
==> ~(P (tree subs (failingSub subs P gamma)))"
apply(rule conjunct2[OF failingSubProps]) .
lemmas failingSubFailsE = failingSubFailsI[THEN notE]
lemma failingSubSubs: "[| inherited subs P; ~P (tree subs gamma); ~(terminal subs gamma); fans subs |]
==> failingSub subs P gamma \<in> subs gamma"
apply(rule conjunct1[OF failingSubProps]) .
primrec path :: "['a => 'a set,'a,(nat * 'a) set => bool,nat] => 'a"
where
path0: "path subs gamma P 0 = gamma"
| pathSuc: "path subs gamma P (Suc n) = (if terminal subs (path subs gamma P n)
then path subs gamma P n
else failingSub subs P (path subs gamma P n))"
lemma pathFailsP: "[| inherited subs P; fans subs; ~P(tree subs gamma) |]
==> ~(P (tree subs (path subs gamma P n)))"
apply (induct_tac n, simp, simp)
apply rule
apply(rule failingSubFailsI) apply(assumption+)
done
lemmas PpathE = pathFailsP[THEN notE]
lemma pathTerminal[rule_format]: "[| inherited subs P; fans subs; terminal subs gamma |]
==> terminal subs (path subs gamma P n)"
apply (induct_tac n, simp_all) done
lemma pathStarts: "path subs gamma P 0 = gamma"
by simp
lemma pathSubs: "[| inherited subs P; fans subs; ~P(tree subs gamma); ~ (terminal subs (path subs gamma P n)) |]
==> path subs gamma P (Suc n) \<in> subs (path subs gamma P n)"
apply simp
apply (rule failingSubSubs, assumption)
apply(rule pathFailsP)
apply(assumption+)
done
lemma pathStops: "terminal subs (path subs gamma P n) ==> path subs gamma P (Suc n) = path subs gamma P n"
by simp
subsection "Branch"
definition
branch :: "['a => 'a set,'a,nat => 'a] => bool" where
"branch subs Gamma f \<longleftrightarrow> f 0 = Gamma
& (!n . terminal subs (f n) --> f (Suc n) = f n)
& (!n . ~ terminal subs (f n) --> f (Suc n) \<in> subs (f n))"
lemma branch0: "branch subs Gamma f ==> f 0 = Gamma"
by (simp add: branch_def)
lemma branchStops: "branch subs Gamma f ==> terminal subs (f n) ==> f (Suc n) = f n"
by (simp add: branch_def)
lemma branchSubs: "branch subs Gamma f ==> ~ terminal subs (f n) ==> f (Suc n) \<in> subs (f n)"
by (simp add: branch_def)
lemma branchI: "[| (f 0 = Gamma);
!n . terminal subs (f n) --> f (Suc n) = f n;
!n . ~ terminal subs (f n) --> f (Suc n) \<in> subs (f n) |] ==> branch subs Gamma f"
by (simp add: branch_def)
lemma branchTerminalPropagates: "branch subs Gamma f ==> terminal subs (f m) ==> terminal subs (f (m + n))"
apply (induct_tac n, simp)
by(simp add: branchStops)
lemma branchTerminalMono: "branch subs Gamma f ==> m < n ==> terminal subs (f m) ==> terminal subs (f n)"
apply(subgoal_tac "terminal subs (f (m+(n-m)))") apply force
apply(rule branchTerminalPropagates)
.
lemma branchPath:
"[| inherited subs P; fans subs; ~P(tree subs gamma) |]
==> branch subs gamma (path subs gamma P)"
by(auto intro!: branchI pathStarts pathSubs pathStops)
subsection "failing branch property: abstracts path defn"
lemma failingBranchExistence: "!!subs.
[| inherited subs P; fans subs; ~P(tree subs gamma) |]
==> \<exists>f . branch subs gamma f & (\<forall>n . ~P(tree subs (f n)))"
apply(rule_tac x="path subs gamma P" in exI)
apply(rule conjI)
apply(force intro!: branchPath)
apply(intro allI)
apply(rule pathFailsP)
by auto
definition
infBranch :: "['a => 'a set,'a,nat => 'a] => bool" where
"infBranch subs Gamma f \<longleftrightarrow> f 0 = Gamma & (\<forall>n. f (Suc n) \<in> subs (f n))"
lemma infBranchI: "[| (f 0 = Gamma); !n . f (Suc n) \<in> subs (f n) |] ==> infBranch subs Gamma f"
by (simp add: infBranch_def)
subsection "Tree induction principles"
-- "we work hard to use nothing fancier than induction over naturals"
lemma boundedTreeInduction':
"\<lbrakk> fans subs;
\<forall>delta. ~ terminal subs delta --> (\<forall>sigma \<in> subs delta. P sigma) --> P delta \<rbrakk>
\<Longrightarrow> \<forall>Gamma. boundedBy m (tree subs Gamma) \<longrightarrow> founded subs P (tree subs Gamma) \<longrightarrow> P Gamma"
apply(induct_tac m)
apply(intro impI allI)
apply(simp add: boundedBy0)
apply(subgoal_tac "(0,Gamma) \<in> tree subs Gamma") apply blast apply(rule tree0)
apply(intro impI allI)
apply(drule_tac x=Gamma in spec)
apply (case_tac "terminal subs Gamma", simp)
apply(drule_tac foundedD) apply assumption apply assumption
apply (erule impE, assumption)
apply (erule impE, rule)
apply(drule_tac x=sigma in spec)
apply(erule impE)
apply(rule boundedBySuc) apply assumption apply assumption
apply(erule impE)
apply(rule foundedSubs) apply assumption apply assumption
apply assumption
apply assumption
done
-- "tjr tidied and introduced new lemmas"
lemma boundedTreeInduction:
"\<lbrakk>fans subs;
bounded (tree subs Gamma); founded subs P (tree subs Gamma);
\<forall>delta. ~ terminal subs delta --> (\<forall>sigma \<in> subs delta. P sigma) --> P delta
\<rbrakk> \<Longrightarrow> P Gamma"
apply(unfold bounded_def)
apply(erule exE)
apply(frule_tac boundedTreeInduction') apply assumption
apply force
done
lemma boundedTreeInduction2':
"[| fans subs;
\<forall>delta. (\<forall>sigma \<in> subs delta. P sigma) --> P delta |]
==> \<forall>Gamma. boundedBy m (tree subs Gamma) \<longrightarrow> P Gamma"
apply(induct_tac m)
apply(intro impI allI)
apply(simp (no_asm_use) add: boundedBy0)
apply(subgoal_tac "(0,Gamma) \<in> tree subs Gamma") apply blast apply(rule tree0)
apply(intro impI allI)
apply(drule_tac x=Gamma in spec)
apply (erule impE, rule)
apply(drule_tac x=sigma in spec)
apply(erule impE)
apply(rule boundedBySuc) apply assumption apply assumption
apply assumption
apply assumption
done
lemma boundedTreeInduction2:
"[| fans subs; boundedBy m (tree subs Gamma);
\<forall>delta. (\<forall>sigma \<in> subs delta. P sigma) --> P delta |]
==> P Gamma"
by (frule_tac boundedTreeInduction2', assumption, blast)
end
|
lemma nonzero_Reals_inverse: "a \<in> \<real> \<Longrightarrow> a \<noteq> 0 \<Longrightarrow> inverse a \<in> \<real>" for a :: "'a::real_div_algebra"
|
\section{Results}
\subsection{Convincing tuning curves}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{figs/Mouse12-120806_awakedata.mat/T3C10.eps}
\caption{Cell with convincing tuning curve. There appears to be a strong reaction to stimuli.}
\label{fig:1a_high_mut_info}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{figs/Mouse12-120806_awakedata.mat/T2C2.eps}
\caption{Cell with unconvincing tuning curve, and the stimuli appears to be uncorrelated with the response.}
\label{fig:1a_low_mut_info}
\end{subfigure}
\caption{Two cells from mouse 12 with different tuning curves}
\label{fig:1a_two_tuning_curves}
\end{figure}
To understand how different cells react to different stimuli, we created tuning curves from \cref{eq:firing_rate}. Some of the cells showed very convincing tuning properties, with strong peaks for specific stimuli and little background activity, as in \cref{fig:1a_high_mut_info}. Others appear to be mostly noise and do not seem to contain much information, as shown in \cref{fig:1a_low_mut_info}.
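The tuning-curve construction above can be sketched in NumPy. This is an illustrative sketch under the usual assumption that the rate in each head-direction bin is the spike count divided by the occupancy time; the bin count and sampling interval below are made-up example values, not the report's actual settings.

```python
import numpy as np

def tuning_curve(spike_angles, occupancy_angles, dt, n_bins=36):
    """Firing rate per head-direction bin: spike count divided by
    time spent in that bin (occupancy samples * sample duration dt)."""
    bins = np.linspace(0.0, 2.0 * np.pi, n_bins + 1)
    spike_counts, _ = np.histogram(spike_angles, bins=bins)
    occ_counts, _ = np.histogram(occupancy_angles, bins=bins)
    occ_time = occ_counts * dt
    # Bins the animal never visited get rate 0 instead of dividing by zero.
    rate = np.where(occ_time > 0, spike_counts / np.maximum(occ_time, 1e-12), 0.0)
    return bins, rate

# Synthetic check: uniform occupancy, spikes concentrated near pi.
occupancy = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)  # 0.1 s per sample
spikes = np.full(100, np.pi + 0.05)
bins, rate = tuning_curve(spikes, occupancy, dt=0.1)
```

For the synthetic cell, all 100 spikes land in the bin containing $\pi$, which was occupied for 10 s, giving a 10 Hz peak and zero elsewhere.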
\begin{figure}[H]
\centering
\includegraphics[scale=0.8]{figs/1b.eps}
\caption{Promising tuning curves from two different mice and different brain areas. }
\label{fig:1b_convincing_tuning}
\end{figure}
In \cref{fig:1b_convincing_tuning} there are four neurons from different mice and brain areas, each with convincing tuning properties. Neuron T4C14 also shows how the tuning curve wraps around when the stimulus goes beyond $360^\circ$. Most of the neurons appear to have unique tuning properties and tend to prefer different stimuli. By combining multiple neurons it appears to be possible to at least make an educated guess about the current head direction from the activity in different HD cells. The tuning curve peaks appear to be well distributed around the circle, i.e. the cells seem to cover most head directions well.
\subsection{Peculiar tuning properties}
\begin{figure}[H]
\centering
\includegraphics[scale=0.8]{figs/peculiar_tuning.eps}
\caption{Cell with peculiar tuning properties. Notice how the cells have multiple firing patterns.}
\label{fig:peculiar_tuning}
\end{figure}
Some cells also had more peculiar tuning properties with multiple firing patterns. \Cref{fig:peculiar_tuning} shows three cells with multiple firing patterns which still appear to contain useful information about head direction. These cells may not indicate one specific head angle, but by knowing when they are active we can still use the information to rule out other possibilities and shape the distribution of possible head directions.
\subsection{Mutual information}
To quantify the amount of information about head direction in each cell, the mutual information score was calculated using \cref{eq:mutinfo_disc}. Each plot includes the mutual information score in its title, and from \cref{fig:1a_two_tuning_curves} we can see how the mutual information tends to be higher for the more convincing tuning curves. The mutual information score is in bits (shannons) per unit time, and can be thought of as how many yes/no answers (bits) we gain on average by observing a cell during a sufficiently short time interval \cite{mutualinfo}.
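One common discretized form of this score is the Skaggs-style information rate, sketched below in NumPy. Whether this matches \cref{eq:mutinfo_disc} term for term depends on the report's exact definition, so treat the formula here as an assumption; it does reproduce the qualitative behaviour described in the text (flat tuning curves score zero, sharply peaked ones score high).

```python
import numpy as np

def information_rate(occupancy_prob, rates):
    """Information rate in bits per unit time:
    I = sum_j p_j * lam_j * log2(lam_j / mean_rate),
    where p_j is the occupancy probability of bin j and lam_j its firing rate."""
    p = np.asarray(occupancy_prob, dtype=float)
    lam = np.asarray(rates, dtype=float)
    mean_rate = float(np.sum(p * lam))
    mask = (p > 0) & (lam > 0)  # 0 * log 0 terms contribute nothing
    return float(np.sum(p[mask] * lam[mask] * np.log2(lam[mask] / mean_rate)))

p_uniform = np.full(36, 1.0 / 36.0)
flat = information_rate(p_uniform, np.full(36, 5.0))  # uninformative cell
peaked_rates = np.zeros(36)
peaked_rates[10] = 36.0                               # fires in one bin only
peaked = information_rate(p_uniform, peaked_rates)
```

A cell firing at the same rate everywhere yields 0 bits, while a cell firing in a single bin under uniform occupancy yields $\log_2 36 \approx 5.17$ bits per unit time.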
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{figs/weird_mutual.eps}
\caption{Convincing tuning curves with low mutual information}
\label{fig:weird_mutual}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{figs/probs.eps}
\caption{Prior probability distribution}
\label{fig:probs}
\end{subfigure}
\end{figure}
A low mutual information score does not, however, imply that the corresponding tuning curve is of no interest. In \cref{fig:weird_mutual} there are three tuning curves which look convincing but result in a low mutual information score. By also looking at the prior probability density function in \cref{fig:probs}, we can see that these tuning curves have their peaks at head directions which were visited less often by the mice. The prior indicates what we believed the head angle distribution was before observing any cell activity. When we observe cell activity which contradicts our prior belief (i.e. high activity in cells which trigger at an unlikely head direction), we simply need more information to ``accept it''. Observing high activity in these neurons contradicts our prior belief and makes us more uncertain about what the true head direction really is (i.e. all possibilities become more equally likely).
\subsection{Principal component analysis}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{figs/Mouse12-120806_awakedata.mat/pca.eps}
\caption{PCA for mouse 12}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{figs/Mouse28-140313_awakedata.mat/pca.eps}
\caption{PCA for mouse 28}
\end{subfigure}
\caption{Principal component analysis for the two datasets. The principal components are visualized both in Cartesian and in polar coordinates. The plots clearly show similar stimuli clustering together. The polar angle $\theta$ appears to explain the differences in head direction well.}
\label{fig:pca}
\end{figure}
To better visualize and utilize the data, PCA was used to reduce the number of variables while keeping as much information as possible. The rather beautiful plot in \cref{fig:pca} shows the first two principal components, where the scores are color-coded according to the head direction. The plots clearly show how the different stimuli are grouped by color, and there appears to be a strong relationship between the principal components and the head direction. The two components explain about $15\%$ of the variance, which, considering the number of cells (variables in the original dataset), makes this a very useful visualization.
There also appears to be a slight trend in the samples, where the color gets brighter with clockwise motion. By applying a non-linear transformation to polar coordinates, we see that the head direction is well explained by the angle $\theta$ (not to be confused with the head direction itself) and only barely depends on the radius $\rho$. Notice how the color intensity increases almost linearly as the principal component scores rotate around the circle. This indicates a circular relationship between the first two principal components and the head direction.
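The PCA projection and the polar transformation described above can be sketched with an SVD in NumPy. This is an illustrative sketch on toy ring-shaped data, not the analysis code used for the figures; the function and variable names are chosen for this example only.

```python
import numpy as np

def first_two_pcs(X):
    """Scores on the first two principal components via SVD of the
    centered data, plus the fraction of variance they explain."""
    Xc = X - X.mean(axis=0)
    _, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:2].T
    explained = (S[:2] ** 2) / np.sum(S ** 2)
    return scores, explained

def to_polar(scores):
    """Polar coordinates of the 2-D scores; theta is the angle that
    tracks head direction in the figure, rho the radius."""
    theta = np.arctan2(scores[:, 1], scores[:, 0])
    rho = np.hypot(scores[:, 0], scores[:, 1])
    return theta, rho

# Toy data lying on a circle embedded in a 3-D feature space.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
X = np.column_stack([3.0 * np.cos(t), 3.0 * np.sin(t), np.zeros_like(t)])
scores, explained = first_two_pcs(X)
theta, rho = to_polar(scores)
```

For data that is genuinely circular, the first two components capture essentially all the variance and $\rho$ is constant, so $\theta$ alone carries the circular structure, mirroring what the real dataset shows.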
|
function get_edges(obj::EdgesWithNodeInfoBuilder)
return jcall(obj, "getEdges", List, ())
end
function visit_internal_node(obj::EdgesWithNodeInfoBuilder, arg0::BSPTree)
return jcall(obj, "visitInternalNode", void, (BSPTree,), arg0)
end
function visit_leaf_node(obj::EdgesWithNodeInfoBuilder, arg0::BSPTree)
return jcall(obj, "visitLeafNode", void, (BSPTree,), arg0)
end
function visit_order(obj::EdgesWithNodeInfoBuilder, arg0::BSPTree)
return jcall(obj, "visitOrder", BSPTreeVisitor_Order, (BSPTree,), arg0)
end
|
Require Export compcert.backend.EraseArgs.
Require Import MemoryX.
Import Coqlib.
Import Integers.
Import AST.
Import Values.
Import Memory.
Import Locations.
Section WITHMEMORYMODELX.
Context `{memory_model_x: Mem.MemoryModelX}.
Lemma free_extcall_arg_inject_neutral sp m e m' :
free_extcall_arg sp m e = Some m' ->
Mem.inject_neutral (Mem.nextblock m) m ->
Mem.inject_neutral (Mem.nextblock m') m' .
Proof.
unfold free_extcall_arg.
destruct e; try congruence.
destruct sl; try congruence.
destruct sp; try congruence.
intros H H0.
erewrite Mem.nextblock_free by eauto.
eapply Mem.free_inject_neutral; eauto.
generalize (size_chunk_pos (chunk_of_type ty)).
intros H1.
apply Mem.free_range_perm in H.
eapply Mem.perm_valid_block.
eapply H.
split.
{
reflexivity.
}
omega.
Qed.
Corollary free_extcall_args_inject_neutral sp l:
forall m m',
free_extcall_args sp m l = Some m' ->
Mem.inject_neutral (Mem.nextblock m) m ->
Mem.inject_neutral (Mem.nextblock m') m' .
Proof.
induction l; simpl; try congruence.
intros m m' H H0.
destruct (free_extcall_arg sp m a) eqn:EQ; try discriminate.
eapply IHl.
{
eassumption.
}
eapply free_extcall_arg_inject_neutral; eauto.
Qed.
Lemma free_extcall_args_no_perm l:
forall init_sp m m',
free_extcall_args init_sp m l = Some m' ->
forall of ty,
In (S Outgoing of ty) l ->
(forall (b : block) (so : ptrofs),
init_sp = Vptr b so ->
let ofs :=
Ptrofs.unsigned (Ptrofs.add so (Ptrofs.repr (Stacklayout.fe_ofs_arg + 4 * of))) in
forall o : Z,
ofs <= o < ofs + size_chunk (chunk_of_type ty) ->
~ Mem.perm m' b o Max Nonempty).
Proof.
induction l; try contradiction.
unfold free_extcall_args. fold free_extcall_args.
unfold free_extcall_arg.
simpl In.
destruct a.
{
intros; eapply IHl; eauto;
intuition congruence.
}
destruct sl; try (intros; eapply IHl; eauto; intuition congruence).
intros init_sp m m' H of ty0 H0 b so H1 o H2.
rewrite H1 in H.
match type of H with
match ?z with Some _ => _ | _ => _ end = _ =>
destruct z eqn:FREE; try discriminate
end.
inversion H0; eauto.
inversion H3; subst.
apply free_extcall_args_extends in H.
intro ABSURD.
eapply Mem.perm_extends in ABSURD; eauto.
revert ABSURD.
eapply Mem.perm_free_2; eauto.
Qed.
End WITHMEMORYMODELX.
|
module Idris.Doc.String
import Core.Context
import Core.Context.Log
import Core.Core
import Core.Env
import Core.TT
import Idris.Pretty
import Idris.Pretty.Render
import Idris.REPL.Opts
import Idris.Resugar
import Idris.Syntax
import TTImp.TTImp
import TTImp.Elab.Prim
import Data.List
import Data.List1
import Data.Maybe
import Data.Strings
import Libraries.Data.ANameMap
import Libraries.Data.NameMap
import Libraries.Data.StringMap as S
import Libraries.Data.String.Extra
import Libraries.Control.ANSI.SGR
import public Libraries.Text.PrettyPrint.Prettyprinter
import public Libraries.Text.PrettyPrint.Prettyprinter.Util
import Parser.Lexer.Source
%default covering
public export
data IdrisDocAnn
= TCon Name
| DCon
| Fun Name
| Header
| Declarations
| Decl Name
| DocStringBody
| Syntax IdrisSyntax
export
styleAnn : IdrisDocAnn -> AnsiStyle
styleAnn (TCon _) = color BrightBlue
styleAnn DCon = color BrightRed
styleAnn (Fun _) = color BrightGreen
styleAnn Header = underline
styleAnn _ = []
export
tCon : Name -> Doc IdrisDocAnn -> Doc IdrisDocAnn
tCon n = annotate (TCon n)
export
dCon : Doc IdrisDocAnn -> Doc IdrisDocAnn
dCon = annotate DCon
export
fun : Name -> Doc IdrisDocAnn -> Doc IdrisDocAnn
fun n = annotate (Fun n)
export
header : Doc IdrisDocAnn -> Doc IdrisDocAnn
header d = annotate Header d <+> colon
-- Add a doc string for a name in the current namespace
export
addDocString : {auto c : Ref Ctxt Defs} ->
{auto s : Ref Syn SyntaxInfo} ->
Name -> String ->
Core ()
addDocString n_in doc
= do n <- inCurrentNS n_in
log "doc.record" 50 $
"Adding doc for " ++ show n_in ++ " (aka " ++ show n ++ " in current NS)"
syn <- get Syn
put Syn (record { docstrings $= addName n doc,
saveDocstrings $= insert n () } syn)
-- Add a doc string for a name, in an extended namespace (e.g. for
-- record getters)
export
addDocStringNS : {auto c : Ref Ctxt Defs} ->
{auto s : Ref Syn SyntaxInfo} ->
Namespace -> Name -> String ->
Core ()
addDocStringNS ns n_in doc
= do n <- inCurrentNS n_in
let n' = case n of
NS old root => NS (old <.> ns) root
root => NS ns root
syn <- get Syn
put Syn (record { docstrings $= addName n' doc,
saveDocstrings $= insert n' () } syn)
export
getDocsForPrimitive : {auto c : Ref Ctxt Defs} ->
{auto s : Ref Syn SyntaxInfo} ->
Constant -> Core (List String)
getDocsForPrimitive constant = do
let (_, type) = checkPrim EmptyFC constant
let typeString = show constant ++ " : " ++ show !(resugar [] type)
pure [typeString ++ "\n\tPrimitive"]
prettyTerm : PTerm -> Doc IdrisDocAnn
prettyTerm = reAnnotate Syntax . Idris.Pretty.prettyTerm
export
getDocsForName : {auto o : Ref ROpts REPLOpts} ->
{auto c : Ref Ctxt Defs} ->
{auto s : Ref Syn SyntaxInfo} ->
FC -> Name -> Core (Doc IdrisDocAnn)
getDocsForName fc n
= do syn <- get Syn
defs <- get Ctxt
let extra = case nameRoot n of
"-" => [NS numNS (UN "negate")]
_ => []
resolved <- lookupCtxtName n (gamma defs)
let all@(_ :: _) = extra ++ map fst resolved
| _ => undefinedName fc n
let ns@(_ :: _) = concatMap (\n => lookupName n (docstrings syn)) all
| [] => pure $ pretty ("No documentation for " ++ show n)
docs <- traverse showDoc ns
pure $ vcat (punctuate Line docs)
where
-- Avoid generating too much whitespace by not returning a single empty line
reflowDoc : String -> List (Doc IdrisDocAnn)
reflowDoc "" = []
reflowDoc str = map (indent 2 . reflow) (forget $ Extra.lines str)
showTotal : Name -> Totality -> Doc IdrisDocAnn
showTotal n tot
= case isTerminating tot of
Unchecked => ""
_ => header "Totality" <++> pretty tot
prettyName : Name -> Doc IdrisDocAnn
prettyName n =
let root = nameRoot n in
if isOpName n then parens (pretty root) else pretty root
getDConDoc : Name -> Core (Doc IdrisDocAnn)
getDConDoc con
= do defs <- get Ctxt
Just def <- lookupCtxtExact con (gamma defs)
-- should never happen, since we know that the DCon exists:
| Nothing => pure Empty
syn <- get Syn
ty <- resugar [] =<< normaliseHoles defs [] (type def)
let conWithTypeDoc = annotate (Decl con) (hsep [dCon (prettyName con), colon, prettyTerm ty])
let [(n, str)] = lookupName con (docstrings syn)
| _ => pure conWithTypeDoc
pure $ vcat
[ conWithTypeDoc
, annotate DocStringBody $ vcat $ reflowDoc str
]
getImplDoc : Name -> Core (List (Doc IdrisDocAnn))
getImplDoc n
= do defs <- get Ctxt
Just def <- lookupCtxtExact n (gamma defs)
| Nothing => pure []
ty <- resugar [] =<< normaliseHoles defs [] (type def)
pure [annotate (Decl n) $ prettyTerm ty]
getMethDoc : Method -> Core (List (Doc IdrisDocAnn))
getMethDoc meth
= do syn <- get Syn
let [(n, str)] = lookupName meth.name (docstrings syn)
| _ => pure []
ty <- pterm meth.type
let nm = prettyName meth.name
pure $ pure $ vcat [
annotate (Decl meth.name) (hsep [fun (meth.name) nm, colon, prettyTerm ty])
, annotate DocStringBody $ vcat (
toList (indent 2 . pretty . show <$> meth.totalReq)
++ reflowDoc str)
]
getInfixDoc : Name -> Core (List (Doc IdrisDocAnn))
getInfixDoc n
= do let Just (fixity, assoc) = S.lookupName n (infixes !(get Syn))
| Nothing => pure []
pure $ pure $ hsep
[ pretty (show fixity)
, "operator,"
, "level"
, pretty (show assoc)
]
getPrefixDoc : Name -> Core (List (Doc IdrisDocAnn))
getPrefixDoc n
= do let Just assoc = S.lookupName n (prefixes !(get Syn))
| Nothing => pure []
pure $ ["prefix operator, level" <++> pretty (show assoc)]
getFixityDoc : Name -> Core (List (Doc IdrisDocAnn))
getFixityDoc n =
pure $ case toList !(getInfixDoc n) ++ toList !(getPrefixDoc n) of
[] => []
[f] => [header "Fixity Declaration" <++> f]
fs => [header "Fixity Declarations" <+> Line <+>
indent 2 (vcat fs)]
getIFaceDoc : (Name, IFaceInfo) -> Core (Doc IdrisDocAnn)
getIFaceDoc (n, iface)
= do let params =
case params iface of
[] => []
ps => [hsep (header "Parameters" :: punctuate comma (map (pretty . show) ps))]
let constraints =
case !(traverse pterm (parents iface)) of
[] => []
ps => [hsep (header "Constraints" :: punctuate comma (map (pretty . show) ps))]
mdocs <- traverse getMethDoc (methods iface)
let meths = case concat mdocs of
[] => []
docs => [vcat [header "Methods", annotate Declarations $ vcat $ map (indent 2) docs]]
sd <- getSearchData fc False n
idocs <- case hintGroups sd of
[] => pure (the (List (List (Doc IdrisDocAnn))) [])
((_, tophs) :: _) => traverse getImplDoc tophs
let insts = case concat idocs of
[] => []
[doc] => [header "Implementation" <++> annotate Declarations doc]
docs => [vcat [header "Implementations"
, annotate Declarations $ vcat $ map (indent 2) docs]]
pure (vcat (params ++ constraints ++ meths ++ insts))
getFieldDoc : Name -> Core (Doc IdrisDocAnn)
getFieldDoc nm
= do syn <- get Syn
defs <- get Ctxt
Just def <- lookupCtxtExact nm (gamma defs)
-- should never happen, since we know that the DCon exists:
| Nothing => pure Empty
ty <- resugar [] =<< normaliseHoles defs [] (type def)
let prettyName = pretty (nameRoot nm)
let projDecl = hsep [ fun nm prettyName, colon, prettyTerm ty ]
let [(_, str)] = lookupName nm (docstrings syn)
| _ => pure projDecl
pure $ annotate (Decl nm)
$ vcat [ projDecl
, annotate DocStringBody $ vcat (reflowDoc str)
]
getFieldsDoc : Name -> Core (List (Doc IdrisDocAnn))
getFieldsDoc recName
= do let (Just ns, n) = displayName recName
| _ => pure []
let recNS = ns <.> mkNamespace n
defs <- get Ctxt
let fields = getFieldNames (gamma defs) recNS
syn <- get Syn
case fields of
[] => pure []
[proj] => pure [header "Projection" <++> annotate Declarations !(getFieldDoc proj)]
projs => pure [vcat [header "Projections"
, annotate Declarations $
vcat $ map (indent 2) $ !(traverse getFieldDoc projs)]]
getExtra : Name -> GlobalDef -> Core (List (Doc IdrisDocAnn))
getExtra n d = do
do syn <- get Syn
let [] = lookupName n (ifaces syn)
| [ifacedata] => pure <$> getIFaceDoc ifacedata
| _ => pure [] -- shouldn't happen, we've resolved ambiguity by now
case definition d of
PMDef _ _ _ _ _ => pure [showTotal n (totality d)]
TCon _ _ _ _ _ _ cons _ =>
do let tot = [showTotal n (totality d)]
cdocs <- traverse (getDConDoc <=< toFullNames) cons
cdoc <- case cdocs of
[] => pure []
[doc] => pure
$ (header "Constructor" <++> annotate Declarations doc)
:: !(getFieldsDoc n)
docs => pure [vcat [header "Constructors"
, annotate Declarations $
vcat $ map (indent 2) docs]]
pure (tot ++ cdoc)
_ => pure []
showCategory : GlobalDef -> Doc IdrisDocAnn -> Doc IdrisDocAnn
showCategory d = case definition d of
TCon _ _ _ _ _ _ _ _ => tCon (fullname d)
DCon _ _ _ => dCon
PMDef _ _ _ _ _ => fun (fullname d)
ForeignDef _ _ => fun (fullname d)
Builtin _ => fun (fullname d)
_ => id
showDoc : (Name, String) -> Core (Doc IdrisDocAnn)
showDoc (n, str)
= do defs <- get Ctxt
Just def <- lookupCtxtExact n (gamma defs)
| Nothing => undefinedName fc n
ty <- resugar [] =<< normaliseHoles defs [] (type def)
let cat = showCategory def
nm <- aliasName n
let docDecl = annotate (Decl n) (hsep [cat (pretty (show nm)), colon, prettyTerm ty])
let docText = reflowDoc str
extra <- getExtra n def
fixes <- getFixityDoc n
let docBody = annotate DocStringBody $ vcat $ docText ++ (map (indent 2) (extra ++ fixes))
pure (vcat [docDecl, docBody])
export
getDocsForPTerm : {auto o : Ref ROpts REPLOpts} ->
{auto c : Ref Ctxt Defs} ->
{auto s : Ref Syn SyntaxInfo} ->
PTerm -> Core (List String)
getDocsForPTerm (PRef fc name) = pure $ [!(render styleAnn !(getDocsForName fc name))]
getDocsForPTerm (PPrimVal _ constant) = getDocsForPrimitive constant
getDocsForPTerm (PType _) = pure ["Type : Type\n\tThe type of all types is Type. The type of Type is Type."]
getDocsForPTerm (PString _ _) = pure ["String Literal\n\tDesugars to a fromString call"]
getDocsForPTerm (PList _ _ _) = pure ["List Literal\n\tDesugars to (::) and Nil"]
getDocsForPTerm (PSnocList _ _ _) = pure ["SnocList Literal\n\tDesugars to (:<) and Empty"]
getDocsForPTerm (PPair _ _ _) = pure ["Pair Literal\n\tDesugars to MkPair or Pair"]
getDocsForPTerm (PDPair _ _ _ _ _) = pure ["Dependent Pair Literal\n\tDesugars to MkDPair or DPair"]
getDocsForPTerm (PUnit _) = pure ["Unit Literal\n\tDesugars to MkUnit or Unit"]
getDocsForPTerm pterm = pure ["Docs not implemented for " ++ show pterm ++ " yet"]
summarise : {auto c : Ref Ctxt Defs} ->
{auto s : Ref Syn SyntaxInfo} ->
Name -> Core String
summarise n -- n is fully qualified
= do syn <- get Syn
defs <- get Ctxt
Just def <- lookupCtxtExact n (gamma defs)
| _ => pure ""
let doc = case lookupName n (docstrings syn) of
[(_, doc)] => case Extra.lines doc of
("" ::: _) => Nothing
(d ::: _) => Just d
_ => Nothing
ty <- normaliseHoles defs [] (type def)
pure (nameRoot n ++ " : " ++ show !(resugar [] ty) ++
maybe "" ((++) "\n\t") doc)
-- Display all the exported names in the given namespace
export
getContents : {auto c : Ref Ctxt Defs} ->
{auto s : Ref Syn SyntaxInfo} ->
Namespace -> Core (List String)
getContents ns
= -- Get all the names, filter by any that match the given namespace
-- and are visible, then display with their type
do defs <- get Ctxt
ns <- allNames (gamma defs)
let allNs = filter inNS ns
allNs <- filterM (visible defs) allNs
traverse summarise (sort allNs)
where
visible : Defs -> Name -> Core Bool
visible defs n
= do Just def <- lookupCtxtExact n (gamma defs)
| Nothing => pure False
pure (visibility def /= Private)
inNS : Name -> Bool
inNS (NS xns (UN _)) = ns `isParentOf` xns
inNS _ = False
|
// boost heap: heap node helper classes
//
// Copyright (C) 2010 Tim Blechmann
//
// Distributed under the Boost Software License, Version 1.0. (See
// accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
#ifndef BOOST_HEAP_DETAIL_HEAP_NODE_HPP
#define BOOST_HEAP_DETAIL_HEAP_NODE_HPP
#include <boost/assert.hpp>
#include <boost/static_assert.hpp>
#include <boost/core/allocator_access.hpp>
#include <boost/intrusive/list.hpp>
#include <boost/type_traits/conditional.hpp>
#ifdef BOOST_HEAP_SANITYCHECKS
#define BOOST_HEAP_ASSERT BOOST_ASSERT
#else
#define BOOST_HEAP_ASSERT(expression)
#endif
namespace boost {
namespace heap {
namespace detail {
namespace bi = boost::intrusive;
template <bool auto_unlink = false>
struct heap_node_base:
bi::list_base_hook<typename boost::conditional<auto_unlink,
bi::link_mode<bi::auto_unlink>,
bi::link_mode<bi::safe_link>
>::type
>
{};
typedef bi::list<heap_node_base<false> > heap_node_list;
struct nop_disposer
{
template <typename T>
void operator()(T * n)
{
BOOST_HEAP_ASSERT(false);
}
};
template <typename Node, typename HeapBase>
bool is_heap(const Node * n, typename HeapBase::value_compare const & cmp)
{
for (typename Node::const_child_iterator it = n->children.begin(); it != n->children.end(); ++it) {
Node const & this_node = static_cast<Node const &>(*it);
const Node * child = static_cast<const Node*>(&this_node);
if (cmp(HeapBase::get_value(n->value), HeapBase::get_value(child->value)) ||
!is_heap<Node, HeapBase>(child, cmp))
return false;
}
return true;
}
template <typename Node>
std::size_t count_nodes(const Node * n);
template <typename Node, typename List>
std::size_t count_list_nodes(List const & node_list)
{
std::size_t ret = 0;
for (typename List::const_iterator it = node_list.begin(); it != node_list.end(); ++it) {
const Node * child = static_cast<const Node*>(&*it);
ret += count_nodes<Node>(child);
}
return ret;
}
template <typename Node>
std::size_t count_nodes(const Node * n)
{
return 1 + count_list_nodes<Node, typename Node::child_list>(n->children);
}
/* node cloner
*
* Requires `Clone Constructor':
* template <typename Alloc>
* Node::Node(Node const &, Alloc &)
*
* template <typename Alloc>
* Node::Node(Node const &, Alloc &, Node * parent)
*
* */
template <typename Node,
typename NodeBase,
typename Alloc>
struct node_cloner
{
node_cloner(Alloc & allocator):
allocator(allocator)
{}
Node * operator() (NodeBase const & node)
{
Node * ret = allocator.allocate(1);
new (ret) Node(static_cast<Node const &>(node), allocator);
return ret;
}
Node * operator() (NodeBase const & node, Node * parent)
{
Node * ret = allocator.allocate(1);
new (ret) Node(static_cast<Node const &>(node), allocator, parent);
return ret;
}
private:
Alloc & allocator;
};
/* node disposer
*
* Requirements:
* Node::clear_subtree(Alloc &) clears the subtree via allocator
*
* */
template <typename Node,
typename NodeBase,
typename Alloc>
struct node_disposer
{
typedef typename boost::allocator_pointer<Alloc>::type node_pointer;
node_disposer(Alloc & alloc):
alloc_(alloc)
{}
void operator()(NodeBase * base)
{
node_pointer n = static_cast<node_pointer>(base);
n->clear_subtree(alloc_);
n->~Node();
alloc_.deallocate(n, 1);
}
Alloc & alloc_;
};
template <typename ValueType,
bool constant_time_child_size = true
>
struct heap_node:
heap_node_base<!constant_time_child_size>
{
typedef heap_node_base<!constant_time_child_size> node_base;
public:
typedef ValueType value_type;
typedef bi::list<node_base,
bi::constant_time_size<constant_time_child_size> > child_list;
typedef typename child_list::iterator child_iterator;
typedef typename child_list::const_iterator const_child_iterator;
typedef typename child_list::size_type size_type;
heap_node(ValueType const & v):
value(v)
{}
#if !defined(BOOST_NO_CXX11_RVALUE_REFERENCES) && !defined(BOOST_NO_CXX11_VARIADIC_TEMPLATES)
template <class... Args>
heap_node(Args&&... args):
value(std::forward<Args>(args)...)
{}
#endif
/* protected: */
heap_node(heap_node const & rhs):
value(rhs.value)
{
/* we don't copy the child list, but clone it later */
}
public:
template <typename Alloc>
heap_node (heap_node const & rhs, Alloc & allocator):
value(rhs.value)
{
children.clone_from(rhs.children, node_cloner<heap_node, node_base, Alloc>(allocator), nop_disposer());
}
size_type child_count(void) const
{
BOOST_STATIC_ASSERT(constant_time_child_size);
return children.size();
}
void add_child(heap_node * n)
{
children.push_back(*n);
}
template <typename Alloc>
void clear_subtree(Alloc & alloc)
{
children.clear_and_dispose(node_disposer<heap_node, node_base, Alloc>(alloc));
}
void swap_children(heap_node * rhs)
{
children.swap(rhs->children);
}
ValueType value;
child_list children;
};
template <typename value_type>
struct parent_pointing_heap_node:
heap_node<value_type>
{
typedef heap_node<value_type> super_t;
parent_pointing_heap_node(value_type const & v):
super_t(v), parent(NULL)
{}
#if !defined(BOOST_NO_CXX11_RVALUE_REFERENCES) && !defined(BOOST_NO_CXX11_VARIADIC_TEMPLATES)
template <class... Args>
parent_pointing_heap_node(Args&&... args):
super_t(std::forward<Args>(args)...), parent(NULL)
{}
#endif
template <typename Alloc>
struct node_cloner
{
node_cloner(Alloc & allocator, parent_pointing_heap_node * parent):
allocator(allocator), parent_(parent)
{}
parent_pointing_heap_node * operator() (typename super_t::node_base const & node)
{
parent_pointing_heap_node * ret = allocator.allocate(1);
new (ret) parent_pointing_heap_node(static_cast<parent_pointing_heap_node const &>(node), allocator, parent_);
return ret;
}
private:
Alloc & allocator;
parent_pointing_heap_node * parent_;
};
template <typename Alloc>
parent_pointing_heap_node (parent_pointing_heap_node const & rhs, Alloc & allocator, parent_pointing_heap_node * parent):
super_t(static_cast<super_t const &>(rhs)), parent(parent)
{
super_t::children.clone_from(rhs.children, node_cloner<Alloc>(allocator, this), nop_disposer());
}
void update_children(void)
{
typedef heap_node_list::iterator node_list_iterator;
for (node_list_iterator it = super_t::children.begin(); it != super_t::children.end(); ++it) {
parent_pointing_heap_node * child = static_cast<parent_pointing_heap_node*>(&*it);
child->parent = this;
}
}
void remove_from_parent(void)
{
BOOST_HEAP_ASSERT(parent);
parent->children.erase(heap_node_list::s_iterator_to(*this));
parent = NULL;
}
void add_child(parent_pointing_heap_node * n)
{
BOOST_HEAP_ASSERT(n->parent == NULL);
n->parent = this;
super_t::add_child(n);
}
parent_pointing_heap_node * get_parent(void)
{
return parent;
}
const parent_pointing_heap_node * get_parent(void) const
{
return parent;
}
parent_pointing_heap_node * parent;
};
template <typename value_type>
struct marked_heap_node:
parent_pointing_heap_node<value_type>
{
typedef parent_pointing_heap_node<value_type> super_t;
marked_heap_node(value_type const & v):
super_t(v), mark(false)
{}
#if !defined(BOOST_NO_CXX11_RVALUE_REFERENCES) && !defined(BOOST_NO_CXX11_VARIADIC_TEMPLATES)
template <class... Args>
marked_heap_node(Args&&... args):
super_t(std::forward<Args>(args)...), mark(false)
{}
#endif
marked_heap_node * get_parent(void)
{
return static_cast<marked_heap_node*>(super_t::parent);
}
const marked_heap_node * get_parent(void) const
{
return static_cast<marked_heap_node*>(super_t::parent);
}
bool mark;
};
template <typename Node>
struct cmp_by_degree
{
template <typename NodeBase>
bool operator()(NodeBase const & left,
NodeBase const & right)
{
return static_cast<const Node*>(&left)->child_count() < static_cast<const Node*>(&right)->child_count();
}
};
template <typename List, typename Node, typename Cmp>
Node * find_max_child(List const & list, Cmp const & cmp)
{
BOOST_HEAP_ASSERT(!list.empty());
const Node * ret = static_cast<const Node *> (&list.front());
for (typename List::const_iterator it = list.begin(); it != list.end(); ++it) {
const Node * current = static_cast<const Node *> (&*it);
if (cmp(ret->value, current->value))
ret = current;
}
return const_cast<Node*>(ret);
}
} /* namespace detail */
} /* namespace heap */
} /* namespace boost */
#undef BOOST_HEAP_ASSERT
#endif /* BOOST_HEAP_DETAIL_HEAP_NODE_HPP */
|
corollary\<^marker>\<open>tag unimportant\<close> Cauchy_theorem_convex: "\<lbrakk>continuous_on S f; convex S; finite K; \<And>x. x \<in> interior S - K \<Longrightarrow> f field_differentiable at x; valid_path g; path_image g \<subseteq> S; pathfinish g = pathstart g\<rbrakk> \<Longrightarrow> (f has_contour_integral 0) g"
|
(** **********************************************************
Contents:
- "Kleisli" definition of monad [Kleisli]
- equivalence of this definition and the "monoidal" definition [weq_Kleisli_Monad]
Written by: Joseph Helfer, Matthew Weaver, 2017
************************************************************)
Require Import UniMath.Foundations.PartD.
Require Import UniMath.MoreFoundations.Tactics.
Require Import UniMath.CategoryTheory.Core.Categories.
Require Import UniMath.CategoryTheory.Core.Functors.
Require Import UniMath.CategoryTheory.Core.NaturalTransformations.
Require Import UniMath.CategoryTheory.Monads.KTriples.
Require Import UniMath.CategoryTheory.Monads.Monads.
Require Import UniMath.CategoryTheory.Monads.RelativeMonads.
Local Open Scope cat.
(** Remark that a monad on C is the same as a relative monad for the identity functor on C *)
Goal ∏ (C : category), KleisliMonad C = RelMonad (functor_identity C).
Proof.
intros.
apply idpath.
Qed.
Coercion RelMonad_from_Kleisli {C : category} (T : KleisliMonad C) := (T : RelMonad (functor_identity C)).
(** * Equivalence of the types of KleisliMonad and "monoidal" monads *)
Section monad_types_equiv.
Definition Monad_to_Kleisli {C : category} : Monad C → KleisliMonad C :=
λ T, (functor_on_objects T ,, (pr1 (η T) ,, @bind C T))
,, @Monad_law2 C T ,, (@η_bind C T ,, @bind_bind C T).
Definition Kleisli_to_functor {C : category} (T: KleisliMonad C) : C ⟶ C.
Proof.
use make_functor.
- use make_functor_data.
+ exact (RelMonad_from_Kleisli T).
+ apply r_lift.
- apply is_functor_r_lift.
Defined.
Definition Kleisli_to_μ {C : category} (T: KleisliMonad C) :
Kleisli_to_functor T ∙ Kleisli_to_functor T ⟹ Kleisli_to_functor T.
Proof.
use tpair.
- exact (λ (x : C), r_bind T (identity (T x))).
- intros x x' f; simpl.
unfold r_lift.
now rewrite (r_bind_r_bind T), <- assoc, (r_eta_r_bind T (T x')), id_right, (r_bind_r_bind T), id_left.
Defined.
Definition Kleisli_to_η {C : category} (T: KleisliMonad C) :
functor_identity C ⟹ Kleisli_to_functor T.
Proof.
use tpair.
- exact (r_eta T).
- intros x x' f; simpl.
unfold r_lift.
now rewrite (r_eta_r_bind T x).
Defined.
Definition Kleisli_to_Monad {C : category} (T : KleisliMonad C) : Monad C.
Proof.
use (((Kleisli_to_functor T,, Kleisli_to_μ T) ,, Kleisli_to_η T) ,, _).
do 2 try apply tpair; intros; simpl.
- apply (r_eta_r_bind T).
- unfold r_lift. now rewrite (r_bind_r_bind T), <- assoc, (r_eta_r_bind T (T c)), id_right, (r_bind_r_eta T).
- unfold r_lift. now rewrite !(r_bind_r_bind T), id_left, <- assoc, (r_eta_r_bind T (T c)), id_right.
Defined.
Proposition Kleisli_to_Monad_to_Kleisli {C : category} (T : KleisliMonad C) :
Monad_to_Kleisli (Kleisli_to_Monad T) = T.
Proof.
apply subtypePath.
- intro. do 2 try apply isapropdirprod;
do 5 try (apply impred; intro);
apply homset_property.
- apply (maponpaths (λ p, tpair _ _ p )); simpl.
apply dirprod_paths.
* apply idpath.
* repeat (apply funextsec; unfold homot; intro).
simpl; unfold Monads.bind; simpl; unfold r_lift.
now rewrite (r_bind_r_bind T), <- assoc, (r_eta_r_bind T (T x0)), id_right.
Defined.
Lemma Monad_to_Kleisli_to_Monad_raw_data {C : category} (T : Monad C) :
Monad_to_raw_data (Kleisli_to_Monad (Monad_to_Kleisli T)) = Monad_to_raw_data T.
Proof.
apply (maponpaths (λ p, tpair _ _ p )); simpl.
apply dirprod_paths.
+ apply dirprod_paths;
repeat (apply funextsec; unfold homot; intro);
simpl.
* unfold r_lift, r_bind, r_eta; simpl. unfold Monads.bind.
rewrite (functor_comp T), <- assoc.
change (# T x1 · (# T (η T x0) · μ T x0) = #T x1).
now rewrite (@Monad_law2 C T x0), id_right.
* unfold Monads.bind, r_bind; simpl.
now rewrite (functor_id T), id_left.
+ apply idpath.
Defined.
Definition Monad_to_Kleisli_to_Monad {C : category} (T : Monad C) :
Kleisli_to_Monad (Monad_to_Kleisli T) = T.
Proof.
  apply Monad_eq_raw_data.
apply Monad_to_Kleisli_to_Monad_raw_data.
Defined.
Definition isweq_Monad_to_Kleisli {C : category} :
isweq Monad_to_Kleisli :=
isweq_iso _ _ (Monad_to_Kleisli_to_Monad(C:=C)) Kleisli_to_Monad_to_Kleisli.
Definition weq_Kleisli_Monad {C : category} :
Monad C ≃ KleisliMonad C := _,, isweq_Monad_to_Kleisli.
End monad_types_equiv.
|
theory Fold imports Sem_Equiv Vars begin
subsection "Simple folding of arithmetic expressions"
type_synonym
tab = "vname \<Rightarrow> val option"
fun afold :: "aexp \<Rightarrow> tab \<Rightarrow> aexp" where
"afold (N n) _ = N n" |
"afold (V x) t = (case t x of None \<Rightarrow> V x | Some k \<Rightarrow> N k)" |
"afold (Plus e1 e2) t = (case (afold e1 t, afold e2 t) of
(N n1, N n2) \<Rightarrow> N(n1+n2) | (e1',e2') \<Rightarrow> Plus e1' e2')"
definition "approx t s \<longleftrightarrow> (\<forall>x k. t x = Some k \<longrightarrow> s x = k)"
theorem aval_afold[simp]:
assumes "approx t s"
shows "aval (afold a t) s = aval a s"
using assms
by (induct a) (auto simp: approx_def split: aexp.split option.split)
theorem aval_afold_N:
assumes "approx t s"
shows "afold a t = N n \<Longrightarrow> aval a s = n"
by (metis assms aval.simps(1) aval_afold)
definition
"merge t1 t2 = (\<lambda>m. if t1 m = t2 m then t1 m else None)"
primrec "defs" :: "com \<Rightarrow> tab \<Rightarrow> tab" where
"defs SKIP t = t" |
"defs (x ::= a) t =
(case afold a t of N k \<Rightarrow> t(x \<mapsto> k) | _ \<Rightarrow> t(x:=None))" |
"defs (c1;;c2) t = (defs c2 o defs c1) t" |
"defs (IF b THEN c1 ELSE c2) t = merge (defs c1 t) (defs c2 t)" |
"defs (WHILE b DO c) t = t |` (-lvars c)"
primrec fold where
"fold SKIP _ = SKIP" |
"fold (x ::= a) t = (x ::= (afold a t))" |
"fold (c1;;c2) t = (fold c1 t;; fold c2 (defs c1 t))" |
"fold (IF b THEN c1 ELSE c2) t = IF b THEN fold c1 t ELSE fold c2 t" |
"fold (WHILE b DO c) t = WHILE b DO fold c (t |` (-lvars c))"
lemma approx_merge:
"approx t1 s \<or> approx t2 s \<Longrightarrow> approx (merge t1 t2) s"
by (fastforce simp: merge_def approx_def)
lemma approx_map_le:
"approx t2 s \<Longrightarrow> t1 \<subseteq>\<^sub>m t2 \<Longrightarrow> approx t1 s"
by (clarsimp simp: approx_def map_le_def dom_def)
lemma restrict_map_le [intro!, simp]: "t |` S \<subseteq>\<^sub>m t"
by (clarsimp simp: restrict_map_def map_le_def)
lemma merge_restrict:
assumes "t1 |` S = t |` S"
assumes "t2 |` S = t |` S"
shows "merge t1 t2 |` S = t |` S"
proof -
from assms
have "\<forall>x. (t1 |` S) x = (t |` S) x"
and "\<forall>x. (t2 |` S) x = (t |` S) x" by auto
thus ?thesis
by (auto simp: merge_def restrict_map_def
split: if_splits)
qed
lemma defs_restrict:
"defs c t |` (- lvars c) = t |` (- lvars c)"
proof (induction c arbitrary: t)
case (Seq c1 c2)
hence "defs c1 t |` (- lvars c1) = t |` (- lvars c1)"
by simp
hence "defs c1 t |` (- lvars c1) |` (-lvars c2) =
t |` (- lvars c1) |` (-lvars c2)" by simp
moreover
from Seq
have "defs c2 (defs c1 t) |` (- lvars c2) =
defs c1 t |` (- lvars c2)"
by simp
hence "defs c2 (defs c1 t) |` (- lvars c2) |` (- lvars c1) =
defs c1 t |` (- lvars c2) |` (- lvars c1)"
by simp
ultimately
show ?case by (clarsimp simp: Int_commute)
next
case (If b c1 c2)
hence "defs c1 t |` (- lvars c1) = t |` (- lvars c1)" by simp
hence "defs c1 t |` (- lvars c1) |` (-lvars c2) =
t |` (- lvars c1) |` (-lvars c2)" by simp
moreover
from If
have "defs c2 t |` (- lvars c2) = t |` (- lvars c2)" by simp
hence "defs c2 t |` (- lvars c2) |` (-lvars c1) =
t |` (- lvars c2) |` (-lvars c1)" by simp
ultimately
show ?case by (auto simp: Int_commute intro: merge_restrict)
qed (auto split: aexp.split)
lemma big_step_pres_approx:
"(c,s) \<Rightarrow> s' \<Longrightarrow> approx t s \<Longrightarrow> approx (defs c t) s'"
proof (induction arbitrary: t rule: big_step_induct)
case Skip thus ?case by simp
next
case Assign
thus ?case
by (clarsimp simp: aval_afold_N approx_def split: aexp.split)
next
case (Seq c1 s1 s2 c2 s3)
have "approx (defs c1 t) s2" by (rule Seq.IH(1)[OF Seq.prems])
hence "approx (defs c2 (defs c1 t)) s3" by (rule Seq.IH(2))
thus ?case by simp
next
case (IfTrue b s c1 s')
hence "approx (defs c1 t) s'" by simp
thus ?case by (simp add: approx_merge)
next
case (IfFalse b s c2 s')
hence "approx (defs c2 t) s'" by simp
thus ?case by (simp add: approx_merge)
next
case WhileFalse
thus ?case by (simp add: approx_def restrict_map_def)
next
case (WhileTrue b s1 c s2 s3)
hence "approx (defs c t) s2" by simp
with WhileTrue
have "approx (defs c t |` (-lvars c)) s3" by simp
thus ?case by (simp add: defs_restrict)
qed
lemma big_step_pres_approx_restrict:
"(c,s) \<Rightarrow> s' \<Longrightarrow> approx (t |` (-lvars c)) s \<Longrightarrow> approx (t |` (-lvars c)) s'"
proof (induction arbitrary: t rule: big_step_induct)
case Assign
thus ?case by (clarsimp simp: approx_def)
next
case (Seq c1 s1 s2 c2 s3)
hence "approx (t |` (-lvars c2) |` (-lvars c1)) s1"
by (simp add: Int_commute)
hence "approx (t |` (-lvars c2) |` (-lvars c1)) s2"
by (rule Seq)
hence "approx (t |` (-lvars c1) |` (-lvars c2)) s2"
by (simp add: Int_commute)
hence "approx (t |` (-lvars c1) |` (-lvars c2)) s3"
by (rule Seq)
thus ?case by simp
next
case (IfTrue b s c1 s' c2)
hence "approx (t |` (-lvars c2) |` (-lvars c1)) s"
by (simp add: Int_commute)
hence "approx (t |` (-lvars c2) |` (-lvars c1)) s'"
by (rule IfTrue)
thus ?case by (simp add: Int_commute)
next
case (IfFalse b s c2 s' c1)
hence "approx (t |` (-lvars c1) |` (-lvars c2)) s"
by simp
hence "approx (t |` (-lvars c1) |` (-lvars c2)) s'"
by (rule IfFalse)
thus ?case by simp
qed auto
declare assign_simp [simp]
lemma approx_eq:
"approx t \<Turnstile> c \<sim> fold c t"
proof (induction c arbitrary: t)
case SKIP show ?case by simp
next
case Assign
show ?case by (simp add: equiv_up_to_def)
next
case Seq
thus ?case by (auto intro!: equiv_up_to_seq big_step_pres_approx)
next
case If
thus ?case by (auto intro!: equiv_up_to_if_weak)
next
case (While b c)
hence "approx (t |` (- lvars c)) \<Turnstile>
WHILE b DO c \<sim> WHILE b DO fold c (t |` (- lvars c))"
by (auto intro: equiv_up_to_while_weak big_step_pres_approx_restrict)
thus ?case
by (auto intro: equiv_up_to_weaken approx_map_le)
qed
lemma approx_empty [simp]:
"approx Map.empty = (\<lambda>_. True)"
by (auto simp: approx_def)
theorem constant_folding_equiv:
"fold c Map.empty \<sim> c"
using approx_eq [of Map.empty c]
by (simp add: equiv_up_to_True sim_sym)
end
|
<!-- HTML file automatically generated from DocOnce source (https://github.com/doconce/doconce/)
doconce format html week10.do.txt --no_mako -->
<!-- dom:TITLE: PHY321: Harmonic oscillations: Time-dependent Forces and Fourier Series -->
# PHY321: Harmonic oscillations: Time-dependent Forces and Fourier Series
**[Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/)**, Department of Physics and Astronomy and Facility for Rare Ion Beams (FRIB), Michigan State University, USA and Department of Physics, University of Oslo, Norway
Date: **Mar 21, 2022**
Copyright 1999-2022, [Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/). Released under CC Attribution-NonCommercial 4.0 license
## Aims and Overarching Motivation
Driven oscillations and resonances with physical examples.
### Monday, March 14
Summary of analytical expressions from February 28-March 4 (see slides at <https://mhjensen.github.io/Physics321/doc/pub/week9/html/week9-reveal.html>). Discussion of resonances and other examples, such as the mathematical pendulum.
* [Video of lecture](https://youtu.be/xV3sIH6AXiE)
* [Handwritten notes](https://github.com/mhjensen/Physics321/blob/master/doc/HandWrittenNotes/Spring2022/NotesMarch14.pdf)
**Reading suggestion**: Taylor sections 5.6-5.8.
### Wednesday
Examples of oscillations and Fourier analysis applied to harmonic oscillations.
* [Video of lecture](https://youtu.be/RDzLMV5ymxc)
* [Handwritten notes](https://github.com/mhjensen/Physics321/blob/master/doc/HandWrittenNotes/Spring2022/NotesMarch16.pdf)
* [Video on solving differential equations numerically](https://youtu.be/7nYIfV0z1VM)
* [Video on Fourier analysis](https://youtu.be/neXZ4fb-4Rs)
* [Handwritten notes for Fourier analysis](https://github.com/mhjensen/Physics321/blob/master/doc/HandWrittenNotes/Spring2022/NotesFourierAnalysisMarch21.pdf)
**Reading suggestion**: See lecture notes at <https://mhjensen.github.io/Physics321/doc/pub/week10/html/week10-reveal.html> and Taylor sections 5.6-5.8.
### Friday
Work on homework 6 and discussions of harmonic oscillation examples and systems.
## Numerical Studies of Driven Oscillations
Solving the problem of driven oscillations numerically gives us much
more flexibility to study different types of driving forces. We can
reuse our earlier code by simply adding a driving force. If we stay in
the $x$-direction only this can be easily done by adding a term
$F_{\mathrm{ext}}(x,t)$. Note that we have kept it rather general
here, allowing for both a spatial and a temporal dependence.
Before we dive into the code, we need to briefly remind ourselves
about the equations we started with for the case with damping, namely
$$
m\frac{d^2x}{dt^2} + b\frac{dx}{dt}+kx(t) =0,
$$
with no external force applied to the system.
Let us now for simplicity assume that our external force is given by
$$
F_{\mathrm{ext}}(t) = F_0\cos{(\omega t)},
$$
where $F_0$ is a constant (what is its dimension?) and $\omega$ is the frequency of the applied external driving force.
**Small question:** would you expect energy to be conserved now?
Introducing the external force into our lovely differential equation
and dividing by $m$ and introducing $\omega_0^2=k/m$ we have
$$
\frac{d^2x}{dt^2} + \frac{b}{m}\frac{dx}{dt}+\omega_0^2x(t) =\frac{F_0}{m}\cos{(\omega t)}.
$$
Thereafter we introduce a dimensionless time $\tau = t\omega_0$
and a dimensionless frequency $\tilde{\omega}=\omega/\omega_0$. We have then
$$
\frac{d^2x}{d\tau^2} + \frac{b}{m\omega_0}\frac{dx}{d\tau}+x(\tau) =\frac{F_0}{m\omega_0^2}\cos{(\tilde{\omega}\tau)}.
$$
Introducing a new amplitude $\tilde{F} =F_0/(m\omega_0^2)$ (check dimensionality again) we have
$$
\frac{d^2x}{d\tau^2} + \frac{b}{m\omega_0}\frac{dx}{d\tau}+x(\tau) =\tilde{F}\cos{(\tilde{\omega}\tau)}.
$$
Our final step, as we did in the case of various types of damping, is
to define $\gamma = b/(2m\omega_0)$ and rewrite our equations as
$$
\frac{d^2x}{d\tau^2} + 2\gamma\frac{dx}{d\tau}+x(\tau) =\tilde{F}\cos{(\tilde{\omega}\tau)}.
$$
This is the equation we will code below using the Euler-Cromer method.
```
%matplotlib inline
# Common imports
import numpy as np
import pandas as pd
from math import *
import matplotlib.pyplot as plt
import os
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
from pylab import plt, mpl
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
DeltaT = 0.001
#set up arrays
tfinal = 20 # in dimensionless time
n = ceil(tfinal/DeltaT)
# set up arrays for t, v, and x
t = np.zeros(n)
v = np.zeros(n)
x = np.zeros(n)
# Initial conditions as one-dimensional arrays of time
x0 = 1.0
v0 = 0.0
x[0] = x0
v[0] = v0
gamma = 0.2
Omegatilde = 0.5
Ftilde = 1.0
# Start integrating using Euler-Cromer's method
for i in range(n-1):
# Set up the acceleration
# Here you could have defined your own function for this
a = -2*gamma*v[i]-x[i]+Ftilde*cos(t[i]*Omegatilde)
# update velocity, time and position
v[i+1] = v[i] + DeltaT*a
x[i+1] = x[i] + DeltaT*v[i+1]
t[i+1] = t[i] + DeltaT
# Plot position as function of time
fig, ax = plt.subplots()
ax.set_ylabel('x[m]')
ax.set_xlabel('t[s]')
ax.plot(t, x)
fig.tight_layout()
save_fig("ForcedBlockEulerCromer")
plt.show()
```
In the above example we have focused on the Euler-Cromer method. This
method has a local truncation error which is proportional to $\Delta t^2$
and thereby a global error which is proportional to $\Delta t$.
We can improve on this by using the Runge-Kutta family of
methods. The widely popular Runge-Kutta method to fourth order, or just **RK4**,
has indeed a much better truncation error. The RK4 method has a global
error which is proportional to $\Delta t^4$.
Let us revisit this method and see how we can implement it for the above example.
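Before working through the theory below, here is a minimal sketch (not the course's official implementation) of how RK4 could be applied to the dimensionless driven, damped oscillator above, reusing the same parameter values as in the Euler-Cromer code; the function and variable names are my own choices for illustration.

```python
# RK4 sketch for the dimensionless driven, damped oscillator
#   x'' + 2*gamma*x' + x = Ftilde*cos(Omegatilde*t),
# rewritten as the first-order system  x' = v,  v' = a(t, x, v).
import numpy as np

gamma, Omegatilde, Ftilde = 0.2, 0.5, 1.0

def deriv(t, y):
    """Right-hand side f(t, y) with y = [x, v]."""
    x, v = y
    return np.array([v, -2.0*gamma*v - x + Ftilde*np.cos(Omegatilde*t)])

def rk4_step(t, y, h):
    """One fourth-order Runge-Kutta step of size h."""
    k1 = h*deriv(t, y)
    k2 = h*deriv(t + h/2, y + k1/2)
    k3 = h*deriv(t + h/2, y + k2/2)
    k4 = h*deriv(t + h, y + k3)
    return y + (k1 + 2*k2 + 2*k3 + k4)/6

h = 0.001
n = int(20/h)                 # same final (dimensionless) time as above
t = np.zeros(n)
ys = np.zeros((n, 2))
ys[0] = [1.0, 0.0]            # x0 = 1, v0 = 0, as in the Euler-Cromer example
for i in range(n - 1):
    ys[i+1] = rk4_step(t[i], ys[i], h)
    t[i+1] = t[i] + h
```

After the transient dies out, the position should oscillate with the steady-state amplitude $\tilde{F}/\sqrt{(1-\tilde{\omega}^2)^2+(2\gamma\tilde{\omega})^2}$, here roughly $1.29$.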
## Differential Equations, Runge-Kutta methods
Runge-Kutta (RK) methods are based on Taylor expansion formulae, but in general yield
better algorithms for solving an ordinary differential equation.
The basic philosophy is that they provide an intermediate step in the computation of $y_{i+1}$.
To see this, consider first the following definitions
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
\frac{dy}{dt}=f(t,y),
\label{_auto1} \tag{1}
\end{equation}
$$
and
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation}
y(t)=\int f(t,y) dt,
\label{_auto2} \tag{2}
\end{equation}
$$
and
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation}
y_{i+1}=y_i+ \int_{t_i}^{t_{i+1}} f(t,y) dt.
\label{_auto3} \tag{3}
\end{equation}
$$
To demonstrate the philosophy behind RK methods, let us consider
the second-order RK method, RK2.
The first approximation consists in Taylor expanding $f(t,y)$
around the center of the integration interval $t_i$ to $t_{i+1}$,
that is, at $t_i+h/2$, $h$ being the step.
Using the midpoint formula for an integral,
defining $y(t_i+h/2) = y_{i+1/2}$ and
$t_i+h/2 = t_{i+1/2}$, we obtain
<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>
$$
\begin{equation}
\int_{t_i}^{t_{i+1}} f(t,y) dt \approx hf(t_{i+1/2},y_{i+1/2}) +O(h^3).
\label{_auto4} \tag{4}
\end{equation}
$$
This means in turn that we have
<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>
$$
\begin{equation}
y_{i+1}=y_i + hf(t_{i+1/2},y_{i+1/2}) +O(h^3).
\label{_auto5} \tag{5}
\end{equation}
$$
However, we do not know the value of $y_{i+1/2}$. This calls for the next approximation: we use Euler's
method to approximate $y_{i+1/2}$. We then have
<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>
$$
\begin{equation}
y_{i+1/2}=y_i + \frac{h}{2}\frac{dy}{dt}=y_i + \frac{h}{2}f(t_i,y_i).
\label{_auto6} \tag{6}
\end{equation}
$$
This means that we can define the following algorithm for
the second-order Runge-Kutta method, RK2.
<!-- Equation labels as ordinary links -->
<div id="_auto7"></div>
$$
\begin{equation}
k_1=hf(t_i,y_i),
\label{_auto7} \tag{7}
\end{equation}
$$
and
<!-- Equation labels as ordinary links -->
<div id="_auto8"></div>
$$
\begin{equation}
k_2=hf(t_{i+1/2},y_i+k_1/2),
\label{_auto8} \tag{8}
\end{equation}
$$
with the final value
<!-- Equation labels as ordinary links -->
<div id="_auto9"></div>
$$
\begin{equation}
y_{i+1}\approx y_i + k_2 +O(h^3).
\label{_auto9} \tag{9}
\end{equation}
$$
The difference from the previous one-step methods
is that we now need an intermediate step in our evaluation,
namely at $t_i+h/2 = t_{i+1/2}$, where we evaluate the derivative $f$.
This involves more operations, but the gain is better stability
of the solution.
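A minimal scalar sketch of the RK2 step defined by Eqs. (7)-(9) (the function name is ours, not from the lecture code):

```
def rk2_step(f, t, y, h):
    # one RK2 (midpoint) step for dy/dt = f(t, y)
    k1 = h*f(t, y)                    # Eq. (7)
    k2 = h*f(t + 0.5*h, y + 0.5*k1)   # Eq. (8)
    return y + k2                     # Eq. (9)

# one step of dy/dt = y from y(0) = 1 with h = 0.1
print(rk2_step(lambda t, y: y, 0.0, 1.0, 0.1))  # 1.105, close to exp(0.1) = 1.10517...
```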
The fourth-order Runge-Kutta, RK4, has the following algorithm
$$
k_1=hf(t_i,y_i) \hspace{0.5cm} k_2=hf(t_i+h/2,y_i+k_1/2)
$$
and
$$
k_3=hf(t_i+h/2,y_i+k_2/2)\hspace{0.5cm} k_4=hf(t_i+h,y_i+k_3)
$$
with the final result
$$
y_{i+1}=y_i +\frac{1}{6}\left( k_1 +2k_2+2k_3+k_4\right).
$$
Thus, the algorithm consists in first calculating $k_1$
with $t_i$, $y_i$ and $f$ as inputs. Thereafter, we advance the time
by $h/2$ and calculate $k_2$, then $k_3$ and finally $k_4$. The global error goes as $O(h^4)$.
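To make the $O(h^4)$ scaling concrete, here is a small self-contained check (our own sketch, not part of the lecture code): halving the step size should reduce the global error by roughly a factor $2^4=16$.

```
import numpy as np

def rk4_step(f, t, y, h):
    # one RK4 step for dy/dt = f(t, y)
    k1 = h*f(t, y)
    k2 = h*f(t + 0.5*h, y + 0.5*k1)
    k3 = h*f(t + 0.5*h, y + 0.5*k2)
    k4 = h*f(t + h, y + k3)
    return y + (k1 + 2*k2 + 2*k3 + k4)/6

def solve(f, y0, tfinal, n):
    # integrate from t = 0 to tfinal in n RK4 steps
    h = tfinal/n
    y, t = y0, 0.0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

f = lambda t, y: y   # exact solution is exp(t)
err_h  = abs(solve(f, 1.0, 1.0, 50)  - np.e)
err_h2 = abs(solve(f, 1.0, 1.0, 100) - np.e)
print(err_h2/err_h)  # close to 1/16 = 0.0625
```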
However, at this stage, if we keep adding different methods in our
main program, the code will quickly become messy and ugly. Before we
proceed, we will therefore introduce functions that embody the various
methods for solving differential equations. This means that we can
separate out these methods into their own functions and files (and later as classes and more
generic functions) and simply call them when needed. Similarly, we
could easily encapsulate various forces or other quantities of
interest in terms of functions. To see this, let us bring up the code
we developed above for the simple sliding block, but now only with the simple forward Euler method. We introduce
two functions, one for the simple Euler method and one for the
force.
Note that the forward Euler method does not know the specific force function to be called;
it simply receives the function name as input. We can easily change the force by passing another function.
```
def ForwardEuler(v,x,t,n,Force):
    for i in range(n-1):
        v[i+1] = v[i] + DeltaT*Force(v[i],x[i],t[i])
        x[i+1] = x[i] + DeltaT*v[i]
        t[i+1] = t[i] + DeltaT
```
```
def SpringForce(v,x,t):
    # note here that we have divided by mass and we return the acceleration
    return -2*gamma*v-x+Ftilde*cos(t*Omegatilde)
```
It is easy to add a new method like the Euler-Cromer
```
def ForwardEulerCromer(v,x,t,n,Force):
    for i in range(n-1):
        a = Force(v[i],x[i],t[i])
        v[i+1] = v[i] + DeltaT*a
        x[i+1] = x[i] + DeltaT*v[i+1]
        t[i+1] = t[i] + DeltaT
```
and the Velocity Verlet method (be careful with the time dependence here, it is not an ideal method for non-conservative forces)
```
def VelocityVerlet(v,x,t,n,Force):
    for i in range(n-1):
        a = Force(v[i],x[i],t[i])
        x[i+1] = x[i] + DeltaT*v[i]+0.5*a*DeltaT*DeltaT
        # t[i+1] is not yet set at this point, so we evaluate the force at t[i]+DeltaT
        anew = Force(v[i],x[i+1],t[i]+DeltaT)
        v[i+1] = v[i] + 0.5*DeltaT*(a+anew)
        t[i+1] = t[i] + DeltaT
```
Finally, we can now add the Runge-Kutta2 method via a new function
```
def RK2(v,x,t,n,Force):
    for i in range(n-1):
        # Setting up k1
        k1x = DeltaT*v[i]
        k1v = DeltaT*Force(v[i],x[i],t[i])
        # Setting up k2
        vv = v[i]+k1v*0.5
        xx = x[i]+k1x*0.5
        k2x = DeltaT*vv
        k2v = DeltaT*Force(vv,xx,t[i]+DeltaT*0.5)
        # Final result
        x[i+1] = x[i]+k2x
        v[i+1] = v[i]+k2v
        t[i+1] = t[i]+DeltaT
```
Finally, we can add the fourth-order Runge-Kutta method, RK4, via a new function
```
def RK4(v,x,t,n,Force):
    for i in range(n-1):
        # Setting up k1
        k1x = DeltaT*v[i]
        k1v = DeltaT*Force(v[i],x[i],t[i])
        # Setting up k2
        vv = v[i]+k1v*0.5
        xx = x[i]+k1x*0.5
        k2x = DeltaT*vv
        k2v = DeltaT*Force(vv,xx,t[i]+DeltaT*0.5)
        # Setting up k3
        vv = v[i]+k2v*0.5
        xx = x[i]+k2x*0.5
        k3x = DeltaT*vv
        k3v = DeltaT*Force(vv,xx,t[i]+DeltaT*0.5)
        # Setting up k4
        vv = v[i]+k3v
        xx = x[i]+k3x
        k4x = DeltaT*vv
        k4v = DeltaT*Force(vv,xx,t[i]+DeltaT)
        # Final result
        x[i+1] = x[i]+(k1x+2*k2x+2*k3x+k4x)/6.
        v[i+1] = v[i]+(k1v+2*k2v+2*k3v+k4v)/6.
        t[i+1] = t[i] + DeltaT
```
The Runge-Kutta family of methods is particularly useful when we have a time-dependent acceleration.
If the forces depend only on the spatial degrees of freedom (no velocity and/or time dependence), then energy-conserving methods like the Velocity Verlet or the Euler-Cromer method are preferred. As soon as we introduce an explicit time dependence and/or add dissipative forces like friction or air resistance, the Runge-Kutta family of methods is well suited.
The code below uses the RK4 method.
```
DeltaT = 0.001
#set up arrays
tfinal = 20 # in dimensionless time
n = ceil(tfinal/DeltaT)
# set up arrays for t, v, and x
t = np.zeros(n)
v = np.zeros(n)
x = np.zeros(n)
# Initial conditions (can change to more than one dim)
x0 = 1.0
v0 = 0.0
x[0] = x0
v[0] = v0
gamma = 0.2
Omegatilde = 0.5
Ftilde = 1.0
# Start integrating using the RK4 method
# Note that we define the force function as a SpringForce
RK4(v,x,t,n,SpringForce)
# Plot position as function of time
fig, ax = plt.subplots()
ax.set_ylabel('x[m]')
ax.set_xlabel('t[s]')
ax.plot(t, x)
fig.tight_layout()
save_fig("ForcedBlockRK4")
plt.show()
```
## Example: The classical pendulum and scaling the equations
Let us end our discussion of oscillations with another classical case, the pendulum. You will find the material here useful for homework 7.
The angular equation of motion of the pendulum is given by
Newton's equation and with no external force it reads
<!-- Equation labels as ordinary links -->
<div id="_auto10"></div>
$$
\begin{equation}
ml\frac{d^2\theta}{dt^2}+mg\sin(\theta)=0,
\label{_auto10} \tag{10}
\end{equation}
$$
with an angular velocity and acceleration given by
<!-- Equation labels as ordinary links -->
<div id="_auto11"></div>
$$
\begin{equation}
v=l\frac{d\theta}{dt},
\label{_auto11} \tag{11}
\end{equation}
$$
and
<!-- Equation labels as ordinary links -->
<div id="_auto12"></div>
$$
\begin{equation}
a=l\frac{d^2\theta}{dt^2}.
\label{_auto12} \tag{12}
\end{equation}
$$
We do however expect that the motion will gradually come to an end due
to a viscous drag torque acting on the pendulum. In the presence of the
drag, the above equation becomes
<!-- Equation labels as ordinary links -->
<div id="eq:pend1"></div>
$$
\begin{equation}
ml\frac{d^2\theta}{dt^2}+\nu\frac{d\theta}{dt} +mg\sin(\theta)=0, \label{eq:pend1} \tag{13}
\end{equation}
$$
where $\nu$ is now a positive constant parameterizing the viscosity
of the medium in question. In order to maintain the motion against
viscosity, it is necessary to add some external driving force.
We choose here a periodic driving force. The last equation becomes then
<!-- Equation labels as ordinary links -->
<div id="eq:pend2"></div>
$$
\begin{equation}
ml\frac{d^2\theta}{dt^2}+\nu\frac{d\theta}{dt} +mg\sin(\theta)=A\sin(\omega t), \label{eq:pend2} \tag{14}
\end{equation}
$$
with $A$ and $\omega$ two constants representing the amplitude and
the angular frequency respectively. The latter is called the driving frequency.
We define
$$
\omega_0=\sqrt{g/l},
$$
the so-called natural frequency and the new dimensionless quantities
$$
\hat{t}=\omega_0t,
$$
with the dimensionless driving frequency
$$
\hat{\omega}=\frac{\omega}{\omega_0},
$$
and introducing the quantity $Q$, called the *quality factor*,
$$
Q=\frac{mg}{\omega_0\nu},
$$
and the dimensionless amplitude
$$
\hat{A}=\frac{A}{mg}.
$$
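As a quick numerical illustration of these definitions (the parameter values below are our own, not taken from the text):

```
import numpy as np

# illustrative values (not from the text): a 1 m, 1 kg pendulum
g, l, m = 9.81, 1.0, 1.0
nu = 0.5              # viscous drag coefficient
A, omega = 2.0, 2.0   # drive amplitude and angular frequency

omega0 = np.sqrt(g/l)       # natural frequency
Q = m*g/(omega0*nu)         # quality factor
Ahat = A/(m*g)              # dimensionless amplitude
omegahat = omega/omega0     # dimensionless driving frequency
print(omega0, Q, Ahat, omegahat)
```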
## More on the Pendulum
In terms of the dimensionless quantities, the equation of motion becomes
$$
\frac{d^2\theta}{d\hat{t}^2}+\frac{1}{Q}\frac{d\theta}{d\hat{t}}
+\sin(\theta)=\hat{A}\cos(\hat{\omega}\hat{t}).
$$
This equation can in turn be recast in terms of two coupled first-order differential equations as follows
$$
\frac{d\theta}{d\hat{t}}=\hat{v},
$$
and
$$
\frac{d\hat{v}}{d\hat{t}}=-\frac{\hat{v}}{Q}-\sin(\theta)+\hat{A}\cos(\hat{\omega}\hat{t}).
$$
These are the equations to be solved. The factor $Q$ represents the
number of oscillations of the undriven system that must occur before
its energy is significantly reduced due to the viscous drag. The
amplitude $\hat{A}$ is measured in units of the maximum possible
gravitational torque while $\hat{\omega}$ is the angular frequency of
the external torque measured in units of the pendulum's natural
frequency.
## The Pendulum code
We need to define a new force, which we simply call the pendulum
force. The only thing which changes from our previous spring-force
problem is the non-linearity introduced by angle $\theta$ due to the
$\sin{\theta}$ term. Here we have kept a generic variable $x$
instead. This makes our codes very similar. Feel free to use these examples for homework seven.
```
def PendulumForce(v,x,t):
    # note here that we have divided by mass and we return the acceleration
    return -gamma*v-sin(x)+Ftilde*cos(t*Omegatilde)
```
## Setting up the various variables and running the code
```
DeltaT = 0.001
#set up arrays
tfinal = 20 # in dimensionless time
n = ceil(tfinal/DeltaT)
# set up arrays for t, v, and x
t = np.zeros(n)
v = np.zeros(n)
theta = np.zeros(n)
# Initial conditions (can change to more than one dim)
theta0 = 1.0
v0 = 0.0
theta[0] = theta0
v[0] = v0
gamma = 0.2
Omegatilde = 0.5
Ftilde = 1.0
# Start integrating using the RK4 method
# Note that we define the force function as a PendulumForce
RK4(v,theta,t,n,PendulumForce)
# Plot position as function of time
fig, ax = plt.subplots()
ax.set_ylabel('theta[radians]')
ax.set_xlabel('t[s]')
ax.plot(t, theta)
fig.tight_layout()
save_fig("PendulumRK4")
plt.show()
```
## Principle of Superposition and Periodic Forces (Fourier Transforms)
If one has several driving forces, $F(t)=\sum_n F_n(t)$, one can find
the particular solution to each $F_n$, $x_{pn}(t)$, and the particular
solution for the entire driving force is
<!-- Equation labels as ordinary links -->
<div id="_auto13"></div>
$$
\begin{equation}
x_p(t)=\sum_nx_{pn}(t).
\label{_auto13} \tag{15}
\end{equation}
$$
This is known as the principle of superposition. It only applies when
the homogeneous equation is linear. If there were an anharmonic term
such as $x^3$ in the homogeneous equation, then when one summed various
solutions, the term $(\sum_n x_n)^3$ would generate cross
terms. Superposition is especially useful when $F(t)$ can be written
as a sum of sinusoidal terms, because the solution for each
sinusoidal (sine or cosine) term is analytic, as we saw above.
Driving forces are often periodic, even when they are not
sinusoidal. Periodicity implies that for some time $\tau$
$$
\begin{eqnarray}
F(t+\tau)=F(t).
\end{eqnarray}
$$
One example of a non-sinusoidal periodic force is a square wave. Many
components in electric circuits are non-linear, e.g. diodes, which
makes many wave forms non-sinusoidal even when the circuits are being
driven by purely sinusoidal sources.
The code here shows a typical example of such a square wave generated using the functionality included in the **scipy** Python package. We have used a period of $\tau=0.2$.
```
import numpy as np
import math
from scipy import signal
import matplotlib.pyplot as plt
# number of points
n = 500
# start and final times
t0 = 0.0
tn = 1.0
# Period
t = np.linspace(t0, tn, n, endpoint=False)
SqrSignal = np.zeros(n)
SqrSignal = 1.0+signal.square(2*np.pi*5*t)
plt.plot(t, SqrSignal)
plt.ylim(-0.5, 2.5)
plt.show()
```
For the sinusoidal example studied in the previous week the
period is $\tau=2\pi/\omega$. However, higher harmonics can also
satisfy the periodicity requirement. In general, any force that
satisfies the periodicity requirement can be expressed as a sum over
harmonics,
<!-- Equation labels as ordinary links -->
<div id="_auto14"></div>
$$
\begin{equation}
F(t)=\frac{f_0}{2}+\sum_{n>0} f_n\cos(2n\pi t/\tau)+g_n\sin(2n\pi t/\tau).
\label{_auto14} \tag{16}
\end{equation}
$$
We can write down the answer for
$x_{pn}(t)$ by substituting $f_n/m$ or $g_n/m$ for $F_0/m$. By
writing each factor $2n\pi t/\tau$ as $n\omega t$, with $\omega\equiv
2\pi/\tau$,
<!-- Equation labels as ordinary links -->
<div id="eq:fourierdef1"></div>
$$
\begin{equation}
\label{eq:fourierdef1} \tag{17}
F(t)=\frac{f_0}{2}+\sum_{n>0}f_n\cos(n\omega t)+g_n\sin(n\omega t).
\end{equation}
$$
The solutions for $x(t)$ then come from replacing $\omega$ with
$n\omega$ for each term in the particular solution,
$$
\begin{eqnarray}
x_p(t)&=&\frac{f_0}{2k}+\sum_{n>0} \alpha_n\cos(n\omega t-\delta_n)+\beta_n\sin(n\omega t-\delta_n),\\
\nonumber
\alpha_n&=&\frac{f_n/m}{\sqrt{((n\omega)^2-\omega_0^2)^2+4\beta^2n^2\omega^2}},\\
\nonumber
\beta_n&=&\frac{g_n/m}{\sqrt{((n\omega)^2-\omega_0^2)^2+4\beta^2n^2\omega^2}},\\
\nonumber
\delta_n&=&\tan^{-1}\left(\frac{2\beta n\omega}{\omega_0^2-n^2\omega^2}\right).
\end{eqnarray}
$$
Because the forces have been applied for a long time, any non-zero
damping eliminates the homogeneous parts of the solution, so one need
only consider the particular solution for each $n$.
The problem will be considered solved if one can find expressions for the
coefficients $f_n$ and $g_n$, even though the solutions are expressed
as an infinite sum. The coefficients can be extracted from the
function $F(t)$ by
<!-- Equation labels as ordinary links -->
<div id="eq:fourierdef2"></div>
$$
\begin{eqnarray}
\label{eq:fourierdef2} \tag{18}
f_n&=&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~F(t)\cos(2n\pi t/\tau),\\
\nonumber
g_n&=&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~F(t)\sin(2n\pi t/\tau).
\end{eqnarray}
$$
To check the consistency of these expressions and to verify
Eq. ([18](#eq:fourierdef2)), one can insert the expansion of $F(t)$ in
Eq. ([17](#eq:fourierdef1)) into the expression for the coefficients in
Eq. ([18](#eq:fourierdef2)) and see whether
$$
\begin{eqnarray}
f_n&=?&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~\left\{
\frac{f_0}{2}+\sum_{m>0}f_m\cos(m\omega t)+g_m\sin(m\omega t)
\right\}\cos(n\omega t).
\end{eqnarray}
$$
Immediately, one can throw away all the terms with $g_m$ because they
convolute an even and an odd function. The term with $f_0/2$
disappears because $\cos(n\omega t)$ is equally positive and negative
over the interval and will integrate to zero. For all the terms
$f_m\cos(m\omega t)$ appearing in the sum, one can use angle addition
formulas to see that $\cos(m\omega t)\cos(n\omega
t)=(1/2)(\cos[(m+n)\omega t]+\cos[(m-n)\omega t])$. This will integrate
to zero unless $m=n$. In that case the $m=n$ term gives
<!-- Equation labels as ordinary links -->
<div id="_auto15"></div>
$$
\begin{equation}
\int_{-\tau/2}^{\tau/2}dt~\cos^2(m\omega t)=\frac{\tau}{2},
\label{_auto15} \tag{19}
\end{equation}
$$
and
$$
\begin{eqnarray}
f_n&=?&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2} dt~f_n/2\\
\nonumber
&=&f_n~\checkmark.
\end{eqnarray}
$$
The same method can be used to check for the consistency of $g_n$.
Consider the driving force:
<!-- Equation labels as ordinary links -->
<div id="_auto16"></div>
$$
\begin{equation}
F(t)=At/\tau,~~-\tau/2<t<\tau/2,~~~F(t+\tau)=F(t).
\label{_auto16} \tag{20}
\end{equation}
$$
Find the Fourier coefficients $f_n$ and $g_n$ for all $n$ using Eq. ([18](#eq:fourierdef2)).
By symmetry, only the sine coefficients enter, i.e. $f_n=0$, because $F(t)$ is an odd function. One can find $g_n$ by integrating by parts,
<!-- Equation labels as ordinary links -->
<div id="eq:fouriersolution"></div>
$$
\begin{eqnarray}
\label{eq:fouriersolution} \tag{21}
g_n&=&\frac{2}{\tau}\int_{-\tau/2}^{\tau/2}dt~\sin(n\omega t) \frac{At}{\tau}\\
\nonumber
u&=&t,~dv=\sin(n\omega t)dt,~v=-\cos(n\omega t)/(n\omega),\\
\nonumber
g_n&=&\frac{-2A}{n\omega \tau^2}\int_{-\tau/2}^{\tau/2}dt~\cos(n\omega t)
+\left.2A\frac{-t\cos(n\omega t)}{n\omega\tau^2}\right|_{-\tau/2}^{\tau/2}.
\end{eqnarray}
$$
The first term is zero because $\cos(n\omega t)$ will be equally
positive and negative over the interval. Using the fact that
$\omega\tau=2\pi$,
$$
\begin{eqnarray}
g_n&=&-\frac{2A}{2n\pi}\cos(n\omega\tau/2)\\
\nonumber
&=&-\frac{A}{n\pi}\cos(n\pi)\\
\nonumber
&=&\frac{A}{n\pi}(-1)^{n+1}.
\end{eqnarray}
$$
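This closed form can be verified numerically against the defining integral in Eq. (18); the sketch below uses `scipy.integrate.quad` with illustrative values of our own:

```
import numpy as np
from scipy.integrate import quad

A, tau = 1.0, 2.0
omega = 2*np.pi/tau

def g_numeric(n):
    # g_n from Eq. (18) for the sawtooth F(t) = A t / tau
    val, _ = quad(lambda t: (A*t/tau)*np.sin(n*omega*t), -tau/2, tau/2)
    return 2.0/tau*val

for n in range(1, 5):
    print(n, g_numeric(n), A/(n*np.pi)*(-1)**(n+1))  # the two columns agree
```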
## Fourier Series
More text will come here; chapters 5.7-5.8 of Taylor are discussed
during the lectures. The code here uses the Fourier series discussed
in chapter 5.7 for a square-wave signal. The equations for the
coefficients are discussed in Taylor section 5.7, see Example
5.4. The code visualizes the successive approximations given by the
Fourier series compared with a square wave with period $T=0.2$, width
$0.1$ and maximum value $F=2$. We see that as we increase the number of
components in the Fourier series, the approximation gets closer and closer to the square-wave signal.
```
import numpy as np
import math
from scipy import signal
import matplotlib.pyplot as plt
# number of points
n = 500
# start and final times
t0 = 0.0
tn = 1.0
# Period
T =0.2
# Max value of square signal
Fmax= 2.0
# Width of signal
Width = 0.1
t = np.linspace(t0, tn, n, endpoint=False)
SqrSignal = np.zeros(n)
FourierSeriesSignal = np.zeros(n)
SqrSignal = 1.0+signal.square(2*np.pi*5*t+np.pi*Width/T)
a0 = Fmax*Width/T
FourierSeriesSignal = a0
Factor = 2.0*Fmax/np.pi
for i in range(1,500):
    FourierSeriesSignal += Factor/(i)*np.sin(np.pi*i*Width/T)*np.cos(i*t*2*np.pi/T)
plt.plot(t, SqrSignal)
plt.plot(t, FourierSeriesSignal)
plt.ylim(-0.5, 2.5)
plt.show()
```
## Response to Transient Force
Consider a particle at rest in the bottom of an underdamped harmonic
oscillator, that then feels a sudden impulse, or change in momentum,
$I=F\Delta t$ at $t=0$. This increases the velocity immediately by an
amount $v_0=I/m$ while not changing the position. One can then solve
the trajectory by solving the equations with initial
conditions $v_0=I/m$ and $x_0=0$. This gives
<!-- Equation labels as ordinary links -->
<div id="_auto17"></div>
$$
\begin{equation}
x(t)=\frac{I}{m\omega'}e^{-\beta t}\sin\omega't, ~~t>0.
\label{_auto17} \tag{22}
\end{equation}
$$
Here, $\omega'=\sqrt{\omega_0^2-\beta^2}$. For an impulse $I_i$ that
occurs at time $t_i$ the trajectory would be
<!-- Equation labels as ordinary links -->
<div id="_auto18"></div>
$$
\begin{equation}
x(t)=\frac{I_i}{m\omega'}e^{-\beta (t-t_i)}\sin[\omega'(t-t_i)] \Theta(t-t_i),
\label{_auto18} \tag{23}
\end{equation}
$$
where $\Theta(t-t_i)$ is a step function, i.e. $\Theta(x)$ is zero for
$x<0$ and unity for $x>0$. If there were several impulses linear
superposition tells us that we can sum over each contribution,
<!-- Equation labels as ordinary links -->
<div id="_auto19"></div>
$$
\begin{equation}
x(t)=\sum_i\frac{I_i}{m\omega'}e^{-\beta(t-t_i)}\sin[\omega'(t-t_i)]\Theta(t-t_i)
\label{_auto19} \tag{24}
\end{equation}
$$
Now one can consider a series of impulses at times separated by
$\Delta t$, where each impulse is given by $F_i\Delta t$. The sum
above now becomes an integral,
<!-- Equation labels as ordinary links -->
<div id="eq:Greeny"></div>
$$
\begin{eqnarray}\label{eq:Greeny} \tag{25}
x(t)&=&\int_{-\infty}^\infty dt'~F(t')\frac{e^{-\beta(t-t')}\sin[\omega'(t-t')]}{m\omega'}\Theta(t-t')\\
\nonumber
&=&\int_{-\infty}^\infty dt'~F(t')G(t-t'),\\
\nonumber
G(\Delta t)&=&\frac{e^{-\beta\Delta t}\sin[\omega' \Delta t]}{m\omega'}\Theta(\Delta t)
\end{eqnarray}
$$
The quantity
$e^{-\beta(t-t')}\sin[\omega'(t-t')]\Theta(t-t')/(m\omega')$ is called a
Green's function, $G(t-t')$. It describes the response at $t$ due to a
force applied at a time $t'$, and is a function of $t-t'$. The step
function ensures that the response does not occur before the force is
applied. One should remember that the form for $G$ would change if the
oscillator were either critically- or over-damped.
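In practice Eq. (25) can be evaluated as a discrete sum over time slices. The sketch below (our own construction, not from the text) checks that a sufficiently short force pulse reproduces the single-impulse response of Eq. (23):

```
import numpy as np

m, beta, omega0 = 1.0, 0.1, 2.0
omegap = np.sqrt(omega0**2 - beta**2)   # omega' for the underdamped case

def G(Dt):
    # Green's function; the step function makes it vanish for Dt < 0
    return np.where(Dt > 0, np.exp(-beta*Dt)*np.sin(omegap*Dt)/(m*omegap), 0.0)

dt = 0.001
tprime = np.arange(0.0, 0.01, dt)    # short force pulse near t' = 0
F = np.full_like(tprime, 50.0)       # total impulse I = 50 * 0.01 = 0.5
t = 3.0
x_conv = np.sum(F*G(t - tprime))*dt  # discretized version of Eq. (25)
I = np.sum(F)*dt
x_impulse = I*G(t)                   # single-impulse response, Eq. (23)
print(x_conv, x_impulse)             # nearly equal for a short pulse
```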
When performing the integral in Eq. ([25](#eq:Greeny)) one can use
angle addition formulas to factor out the part with the $t'$
dependence in the integrand,
<!-- Equation labels as ordinary links -->
<div id="eq:Greeny2"></div>
$$
\begin{eqnarray}
\label{eq:Greeny2} \tag{26}
x(t)&=&\frac{1}{m\omega'}e^{-\beta t}\left[I_c(t)\sin(\omega't)-I_s(t)\cos(\omega't)\right],\\
\nonumber
I_c(t)&\equiv&\int_{-\infty}^t dt'~F(t')e^{\beta t'}\cos(\omega't'),\\
\nonumber
I_s(t)&\equiv&\int_{-\infty}^t dt'~F(t')e^{\beta t'}\sin(\omega't').
\end{eqnarray}
$$
If the time $t$ is beyond any time at which the force acts,
$F(t'>t)=0$, the coefficients $I_c$ and $I_s$ become independent of
$t$.
Consider an undamped oscillator ($\beta\rightarrow 0$), with
characteristic frequency $\omega_0$ and mass $m$, that is at rest
until it feels a force described by a Gaussian form,
$$
\begin{eqnarray*}
F(t)&=&F_0 \exp\left\{\frac{-t^2}{2\tau^2}\right\}.
\end{eqnarray*}
$$
For large times ($t\gg\tau$), where the force has died off, find
$x(t)$. Solve for the coefficients $I_c$ and $I_s$ in
Eq. ([26](#eq:Greeny2)). Because the Gaussian is an even function,
$I_s=0$, and one need only solve for $I_c$,
$$
\begin{eqnarray*}
I_c&=&F_0\int_{-\infty}^\infty dt'~e^{-t^{\prime 2}/(2\tau^2)}\cos(\omega_0 t')\\
&=&\Re F_0 \int_{-\infty}^\infty dt'~e^{-t^{\prime 2}/(2\tau^2)}e^{i\omega_0 t'}\\
&=&\Re F_0 \int_{-\infty}^\infty dt'~e^{-(t'-i\omega_0\tau^2)^2/(2\tau^2)}e^{-\omega_0^2\tau^2/2}\\
&=&F_0\tau \sqrt{2\pi} e^{-\omega_0^2\tau^2/2}.
\end{eqnarray*}
$$
The third step involved completing the square, and the final step used the fact that the integral
$$
\begin{eqnarray*}
\int_{-\infty}^\infty dx~e^{-x^2/2}&=&\sqrt{2\pi}.
\end{eqnarray*}
$$
To see that this integral is true, consider the square of the integral, which you can change to polar coordinates,
$$
\begin{eqnarray*}
I&=&\int_{-\infty}^\infty dx~e^{-x^2/2}\\
I^2&=&\int_{-\infty}^\infty dxdy~e^{-(x^2+y^2)/2}\\
&=&2\pi\int_0^\infty rdr~e^{-r^2/2}\\
&=&2\pi.
\end{eqnarray*}
$$
Finally, the expression for $x$ from Eq. ([26](#eq:Greeny2)) is
$$
\begin{eqnarray*}
x(t\gg\tau)&=&\frac{F_0\tau}{m\omega_0} \sqrt{2\pi} e^{-\omega_0^2\tau^2/2}\sin(\omega_0t).
\end{eqnarray*}
$$
function [ n_data, x, fx ] = struve_h0_values ( n_data )
%*****************************************************************************80
%
%% STRUVE_H0_VALUES returns some values of the Struve H0 function.
%
% Discussion:
%
% The function is defined by:
%
% HO(x) = 2/pi * Integral ( 0 <= t <= pi/2 ) sin ( x * cos ( t ) ) dt
%
% In Mathematica, the function can be evaluated by:
%
% StruveH[0,x]
%
% The data was reported by McLeod.
%
% Licensing:
%
% This code is distributed under the GNU LGPL license.
%
% Modified:
%
% 19 September 2004
%
% Author:
%
% John Burkardt
%
% Reference:
%
% Milton Abramowitz and Irene Stegun,
% Handbook of Mathematical Functions,
% US Department of Commerce, 1964.
%
% Allan McLeod,
% Algorithm 757, MISCFUN: A software package to compute uncommon
% special functions,
% ACM Transactions on Mathematical Software,
% Volume 22, Number 3, September 1996, pages 288-301.
%
% Stephen Wolfram,
% The Mathematica Book,
% Fourth Edition,
% Wolfram Media / Cambridge University Press, 1999.
%
% Parameters:
%
% Input/output, integer N_DATA. The user sets N_DATA to 0 before the
% first call. On each call, the routine increments N_DATA by 1, and
% returns the corresponding data; when there is no more data, the
% output value of N_DATA will be 0 again.
%
% Output, real X, the argument of the function.
%
% Output, real FX, the value of the function.
%
n_max = 20;
fx_vec = [ ...
0.12433974658847434366E-02, ...
-0.49735582423748415045E-02, ...
0.39771469054536941564E-01, ...
-0.15805246001653314198E+00, ...
0.56865662704828795099E+00, ...
0.66598399314899916605E+00, ...
0.79085884950809589255E+00, ...
-0.13501457342248639716E+00, ...
0.20086479668164503137E+00, ...
-0.11142097800261991552E+00, ...
-0.17026804865989885869E+00, ...
-0.13544931808186467594E+00, ...
0.94393698081323450897E-01, ...
-0.10182482016001510271E+00, ...
0.96098421554162110012E-01, ...
-0.85337674826118998952E-01, ...
-0.76882290637052720045E-01, ...
0.47663833591418256339E-01, ...
-0.70878751689647343204E-01, ...
0.65752908073352785368E-01 ];
x_vec = [ ...
0.0019531250E+00, ...
-0.0078125000E+00, ...
0.0625000000E+00, ...
-0.2500000000E+00, ...
1.0000000000E+00, ...
1.2500000000E+00, ...
2.0000000000E+00, ...
-4.0000000000E+00, ...
7.5000000000E+00, ...
11.0000000000E+00, ...
11.5000000000E+00, ...
-16.0000000000E+00, ...
20.0000000000E+00, ...
25.0000000000E+00, ...
-30.0000000000E+00, ...
50.0000000000E+00, ...
75.0000000000E+00, ...
-80.0000000000E+00, ...
100.0000000000E+00, ...
-125.0000000000E+00 ];
  if ( n_data < 0 )
    n_data = 0;
  end

  n_data = n_data + 1;

  if ( n_max < n_data )
    n_data = 0;
    x = 0.0;
    fx = 0.0;
  else
    x = x_vec(n_data);
    fx = fx_vec(n_data);
  end
return
end
If $s$ is a convex set and $x, y \in s$, then $u x + v y \in s$ for all $u, v \geq 0$ such that $u + v = 1$.
[STATEMENT]
lemma Ri_spec: "x \<le> Ri x y y"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. x \<le> Ri x y y
[PROOF STEP]
unfolding Ri_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. x \<le> \<Sqinter> {f y |f. x \<le> f y \<and> mono f}
[PROOF STEP]
by (rule Inf_greatest, safe)
Formal statement is: lemma bounded_cball[simp,intro]: "bounded (cball x e)" Informal statement is: The closed ball of radius $e$ centered at $x$ is bounded.
module sum-downFrom where
import Relation.Binary.PropositionalEquality as Eq
open Eq using (_≡_; refl; sym; cong)
open Eq.≡-Reasoning
open import Data.Nat using (ℕ; zero; suc; _+_; _*_; _∸_; _≤_; s≤s; z≤n)
open import Data.Nat.Properties using
(*-suc; *-identityʳ; *-distribʳ-+; *-distribˡ-∸; +-∸-assoc; +-∸-comm; m+n∸m≡n; m≤m*n)
open import lists using (List; []; _∷_; [_,_,_]; sum)
-- returns (n - 1), ⋯ , 0
downFrom : ℕ → List ℕ
downFrom zero = []
downFrom (suc n) = n ∷ downFrom n
_ : downFrom 3 ≡ [ 2 , 1 , 0 ]
_ = refl
-- proof of n ≤ n * n
n≤n*n : ∀ (n : ℕ) → n ≤ n * n
n≤n*n zero = z≤n
n≤n*n (suc n) = m≤m*n (suc n) (s≤s z≤n)
-- proof of n ≤ n * 2
n≤n*2 : ∀ (n : ℕ) → n ≤ n * 2
n≤n*2 n = m≤m*n n (s≤s z≤n)
-- proof of n * 2 ∸ n ≡ n
n*2∸n≡n : ∀ (n : ℕ) → n * 2 ∸ n ≡ n
n*2∸n≡n n =
  begin
    n * 2 ∸ n
  ≡⟨ cong (_∸ n) (*-suc n 1) ⟩   -- expand the product
    n + n * 1 ∸ n
  ≡⟨ m+n∸m≡n n (n * 1) ⟩         -- cancel n ∸ n
    n * 1
  ≡⟨ *-identityʳ n ⟩             -- remove * 1
    n
  ∎
-- proof of m * (n ∸ 1) ≡ m * n ∸ m
m*[n∸1]≡m*n∸m : ∀ (m n : ℕ) → m * (n ∸ 1) ≡ m * n ∸ m
m*[n∸1]≡m*n∸m m n =
  begin
    m * (n ∸ 1)
  ≡⟨ *-distribˡ-∸ m n 1 ⟩                -- distribute m * over ∸
    m * n ∸ m * 1
  ≡⟨ cong (m * n ∸_) (*-identityʳ m) ⟩   -- remove * 1
    m * n ∸ m
  ∎
-- proof that (n - 1) + ⋯ + 0 equals n * (n ∸ 1) / 2
sum-downFrom : ∀ (n : ℕ) → sum (downFrom n) * 2 ≡ n * (n ∸ 1)
sum-downFrom zero =
  begin
    sum (downFrom zero) * 2
  ≡⟨⟩
    sum [] * 2
  ≡⟨⟩
    zero
  -- = zero * (zero ∸ 1)
  ∎
sum-downFrom (suc n) =
  begin
    sum (downFrom (suc n)) * 2
  ≡⟨⟩
    sum (n ∷ downFrom n) * 2
  ≡⟨⟩
    (n + sum (downFrom n)) * 2
  ≡⟨ *-distribʳ-+ 2 n (sum (downFrom n)) ⟩   -- distribute * 2
    (n * 2) + (sum (downFrom n)) * 2
  ≡⟨ cong (n * 2 +_) (sum-downFrom n) ⟩      -- induction hypothesis
    (n * 2) + (n * (n ∸ 1))
  ≡⟨ cong (n * 2 +_) (m*[n∸1]≡m*n∸m n n) ⟩   -- distribute n *
    (n * 2) + (n * n ∸ n)
  ≡⟨ sym (+-∸-assoc (n * 2) (n≤n*n n)) ⟩     -- associativity
    (n * 2) + n * n ∸ n
  ≡⟨ +-∸-comm (n * n) (n≤n*2 n) ⟩            -- commutativity
    (n * 2) ∸ n + n * n
  ≡⟨ cong (_+ n * n) (n*2∸n≡n n) ⟩           -- cancel n ∸ n
    n + n * n
  -- = n * (suc n)
  -- = (suc n) * n
  -- = (suc n) * ((suc n) ∸ 1)
  ∎
(*<*)
(*
* The worker/wrapper transformation, following Gill and Hutton.
* (C)opyright 2009-2011, Peter Gammie, peteg42 at gmail.com.
* License: BSD
*)
theory Accumulator
imports
HOLCF
LList
WorkerWrapperNew
begin
(*>*)
section{* Naive reverse becomes accumulator-reverse. *}
text{* \label{sec:accum} *}
subsection{* Hughes lists, naive reverse, worker-wrapper optimisation. *}
text{* The ``Hughes'' list type. *}
type_synonym 'a H = "'a llist \<rightarrow> 'a llist"
definition
list2H :: "'a llist \<rightarrow> 'a H" where
"list2H \<equiv> lappend"
lemma acc_c2a_strict[simp]: "list2H\<cdot>\<bottom> = \<bottom>"
by (rule cfun_eqI, simp add: list2H_def)
definition
H2list :: "'a H \<rightarrow> 'a llist" where
"H2list \<equiv> \<Lambda> f . f\<cdot>lnil"
text{* The paper only claims the homomorphism holds for finite lists,
but in fact it holds for all lazy lists in HOLCF. They are trying to
dodge an explicit appeal to the equation @{thm "inst_cfun_pcpo"},
which does not hold in Haskell. *}
lemma H_llist_hom_append: "list2H\<cdot>(xs :++ ys) = list2H\<cdot>xs oo list2H\<cdot>ys" (is "?lhs = ?rhs")
proof(rule cfun_eqI)
fix zs
have "?lhs\<cdot>zs = (xs :++ ys) :++ zs" by (simp add: list2H_def)
also have "\<dots> = xs :++ (ys :++ zs)" by (rule lappend_assoc)
also have "\<dots> = list2H\<cdot>xs\<cdot>(ys :++ zs)" by (simp add: list2H_def)
also have "\<dots> = list2H\<cdot>xs\<cdot>(list2H\<cdot>ys\<cdot>zs)" by (simp add: list2H_def)
also have "\<dots> = (list2H\<cdot>xs oo list2H\<cdot>ys)\<cdot>zs" by simp
finally show "?lhs\<cdot>zs = (list2H\<cdot>xs oo list2H\<cdot>ys)\<cdot>zs" .
qed
lemma H_llist_hom_id: "list2H\<cdot>lnil = ID" by (simp add: list2H_def)
lemma H2list_list2H_inv: "H2list oo list2H = ID"
by (rule cfun_eqI, simp add: H2list_def list2H_def)
text{* \citet[\S4.2]{GillHutton:2009} define the naive reverse
function as follows. *}
fixrec lrev :: "'a llist \<rightarrow> 'a llist"
where
"lrev\<cdot>lnil = lnil"
| "lrev\<cdot>(x :@ xs) = lrev\<cdot>xs :++ (x :@ lnil)"
text{* Note ``body'' is the generator of @{term "lrev_def"}. *}
lemma lrev_strict[simp]: "lrev\<cdot>\<bottom> = \<bottom>"
by fixrec_simp
fixrec lrev_body :: "('a llist \<rightarrow> 'a llist) \<rightarrow> 'a llist \<rightarrow> 'a llist"
where
"lrev_body\<cdot>r\<cdot>lnil = lnil"
| "lrev_body\<cdot>r\<cdot>(x :@ xs) = r\<cdot>xs :++ (x :@ lnil)"
lemma lrev_body_strict[simp]: "lrev_body\<cdot>r\<cdot>\<bottom> = \<bottom>"
by fixrec_simp
text{* This is trivial but syntactically a bit touchy. Would be nicer
to define @{term "lrev_body"} as the generator of the fixpoint
definition of @{term "lrev"} directly. *}
lemma lrev_lrev_body_eq: "lrev = fix\<cdot>lrev_body"
by (rule cfun_eqI, subst lrev_def, subst lrev_body.unfold, simp)
text{* Wrap / unwrap functions. *}
definition
unwrapH :: "('a llist \<rightarrow> 'a llist) \<rightarrow> 'a llist \<rightarrow> 'a H" where
"unwrapH \<equiv> \<Lambda> f xs . list2H\<cdot>(f\<cdot>xs)"
lemma unwrapH_strict[simp]: "unwrapH\<cdot>\<bottom> = \<bottom>"
unfolding unwrapH_def by (rule cfun_eqI, simp)
definition
wrapH :: "('a llist \<rightarrow> 'a H) \<rightarrow> 'a llist \<rightarrow> 'a llist" where
"wrapH \<equiv> \<Lambda> f xs . H2list\<cdot>(f\<cdot>xs)"
lemma wrapH_unwrapH_id: "wrapH oo unwrapH = ID" (is "?lhs = ?rhs")
proof(rule cfun_eqI)+
fix f xs
have "?lhs\<cdot>f\<cdot>xs = H2list\<cdot>(list2H\<cdot>(f\<cdot>xs))" by (simp add: wrapH_def unwrapH_def)
also have "\<dots> = (H2list oo list2H)\<cdot>(f\<cdot>xs)" by simp
also have "\<dots> = ID\<cdot>(f\<cdot>xs)" by (simp only: H2list_list2H_inv)
also have "\<dots> = ?rhs\<cdot>f\<cdot>xs" by simp
finally show "?lhs\<cdot>f\<cdot>xs = ?rhs\<cdot>f\<cdot>xs" .
qed
subsection{* Gill/Hutton-style worker/wrapper. *}
definition
lrev_work :: "'a llist \<rightarrow> 'a H" where
"lrev_work \<equiv> fix\<cdot>(unwrapH oo lrev_body oo wrapH)"
definition
lrev_wrap :: "'a llist \<rightarrow> 'a llist" where
"lrev_wrap \<equiv> wrapH\<cdot>lrev_work"
lemma lrev_lrev_ww_eq: "lrev = lrev_wrap"
using worker_wrapper_id[OF wrapH_unwrapH_id lrev_lrev_body_eq]
by (simp add: lrev_wrap_def lrev_work_def)
subsection{* Optimise worker/wrapper. *}
text{* Intermediate worker. *}
fixrec lrev_body1 :: "('a llist \<rightarrow> 'a H) \<rightarrow> 'a llist \<rightarrow> 'a H"
where
"lrev_body1\<cdot>r\<cdot>lnil = list2H\<cdot>lnil"
| "lrev_body1\<cdot>r\<cdot>(x :@ xs) = list2H\<cdot>(wrapH\<cdot>r\<cdot>xs :++ (x :@ lnil))"
definition
lrev_work1 :: "'a llist \<rightarrow> 'a H" where
"lrev_work1 \<equiv> fix\<cdot>lrev_body1"
lemma lrev_body_lrev_body1_eq: "lrev_body1 = unwrapH oo lrev_body oo wrapH"
apply (rule cfun_eqI)+
apply (subst lrev_body.unfold)
apply (subst lrev_body1.unfold)
apply (case_tac xa)
apply (simp_all add: list2H_def wrapH_def unwrapH_def)
done
lemma lrev_work1_lrev_work_eq: "lrev_work1 = lrev_work"
by (unfold lrev_work_def lrev_work1_def,
rule cfun_arg_cong[OF lrev_body_lrev_body1_eq])
text{* Now use the homomorphism. *}
fixrec lrev_body2 :: "('a llist \<rightarrow> 'a H) \<rightarrow> 'a llist \<rightarrow> 'a H"
where
"lrev_body2\<cdot>r\<cdot>lnil = ID"
| "lrev_body2\<cdot>r\<cdot>(x :@ xs) = list2H\<cdot>(wrapH\<cdot>r\<cdot>xs) oo list2H\<cdot>(x :@ lnil)"
lemma lrev_body2_strict[simp]: "lrev_body2\<cdot>r\<cdot>\<bottom> = \<bottom>"
by fixrec_simp
definition
lrev_work2 :: "'a llist \<rightarrow> 'a H" where
"lrev_work2 \<equiv> fix\<cdot>lrev_body2"
lemma lrev_work2_strict[simp]: "lrev_work2\<cdot>\<bottom> = \<bottom>"
unfolding lrev_work2_def
by (subst fix_eq) simp
lemma lrev_body2_lrev_body1_eq: "lrev_body2 = lrev_body1"
by ((rule cfun_eqI)+
, (subst lrev_body1.unfold, subst lrev_body2.unfold)
, (simp add: H_llist_hom_append[symmetric] H_llist_hom_id))
lemma lrev_work2_lrev_work1_eq: "lrev_work2 = lrev_work1"
by (unfold lrev_work2_def lrev_work1_def
, rule cfun_arg_cong[OF lrev_body2_lrev_body1_eq])
text {* Simplify. *}
fixrec lrev_body3 :: "('a llist \<rightarrow> 'a H) \<rightarrow> 'a llist \<rightarrow> 'a H"
where
"lrev_body3\<cdot>r\<cdot>lnil = ID"
| "lrev_body3\<cdot>r\<cdot>(x :@ xs) = r\<cdot>xs oo list2H\<cdot>(x :@ lnil)"
lemma lrev_body3_strict[simp]: "lrev_body3\<cdot>r\<cdot>\<bottom> = \<bottom>"
by fixrec_simp
definition
lrev_work3 :: "'a llist \<rightarrow> 'a H" where
"lrev_work3 \<equiv> fix\<cdot>lrev_body3"
lemma lrev_wwfusion: "list2H\<cdot>((wrapH\<cdot>lrev_work2)\<cdot>xs) = lrev_work2\<cdot>xs"
proof -
{
have "list2H oo wrapH\<cdot>lrev_work2 = unwrapH\<cdot>(wrapH\<cdot>lrev_work2)"
by (rule cfun_eqI, simp add: unwrapH_def)
also have "\<dots> = (unwrapH oo wrapH)\<cdot>lrev_work2" by simp
also have "\<dots> = lrev_work2"
apply -
apply (rule worker_wrapper_fusion[OF wrapH_unwrapH_id, where body="lrev_body"])
apply (auto iff: lrev_body2_lrev_body1_eq lrev_body_lrev_body1_eq lrev_work2_def lrev_work1_def)
done
finally have "list2H oo wrapH\<cdot>lrev_work2 = lrev_work2" .
}
thus ?thesis using cfun_eq_iff[where f="list2H oo wrapH\<cdot>lrev_work2" and g="lrev_work2"] by auto
qed
text{* If we use this result directly, we only get a partially-correct
program transformation; see \citet{Tullsen:PhDThesis} for details. *}
lemma "lrev_work3 \<sqsubseteq> lrev_work2"
unfolding lrev_work3_def
proof(rule fix_least)
{
fix xs have "lrev_body3\<cdot>lrev_work2\<cdot>xs = lrev_work2\<cdot>xs"
proof(cases xs)
case bottom thus ?thesis by simp
next
case lnil thus ?thesis
unfolding lrev_work2_def
by (subst fix_eq[where F="lrev_body2"], simp)
next
case (lcons y ys)
hence "lrev_body3\<cdot>lrev_work2\<cdot>xs = lrev_work2\<cdot>ys oo list2H\<cdot>(y :@ lnil)" by simp
also have "\<dots> = list2H\<cdot>((wrapH\<cdot>lrev_work2)\<cdot>ys) oo list2H\<cdot>(y :@ lnil)"
using lrev_wwfusion[where xs=ys] by simp
also from lcons have "\<dots> = lrev_body2\<cdot>lrev_work2\<cdot>xs" by simp
also have "\<dots> = lrev_work2\<cdot>xs"
unfolding lrev_work2_def by (simp only: fix_eq[symmetric])
finally show ?thesis by simp
qed
}
thus "lrev_body3\<cdot>lrev_work2 = lrev_work2" by (rule cfun_eqI)
qed
text{* We cannot show the reverse inclusion in the same way, as the
fusion law does not hold for the optimised definition. (Intuitively,
we have not yet established that it is equal to the original
@{term "lrev"} definition.) We could instead show termination of the
optimised definition, as it operates on finite lists. Alternatively,
we can use induction (over the list argument) to show total
equivalence.
The following lemma shows that the fusion Gill/Hutton want to perform
is completely sound in this context, by appealing to the lazy list
induction principle. *}
lemma lrev_work3_lrev_work2_eq: "lrev_work3 = lrev_work2" (is "?lhs = ?rhs")
proof(rule cfun_eqI)
fix x
show "?lhs\<cdot>x = ?rhs\<cdot>x"
proof(induct x)
show "lrev_work3\<cdot>\<bottom> = lrev_work2\<cdot>\<bottom>"
apply (unfold lrev_work3_def lrev_work2_def)
apply (subst fix_eq[where F="lrev_body2"])
apply (subst fix_eq[where F="lrev_body3"])
by (simp add: lrev_body3.unfold lrev_body2.unfold)
next
show "lrev_work3\<cdot>lnil = lrev_work2\<cdot>lnil"
apply (unfold lrev_work3_def lrev_work2_def)
apply (subst fix_eq[where F="lrev_body2"])
apply (subst fix_eq[where F="lrev_body3"])
by (simp add: lrev_body3.unfold lrev_body2.unfold)
next
fix a l assume "lrev_work3\<cdot>l = lrev_work2\<cdot>l"
thus "lrev_work3\<cdot>(a :@ l) = lrev_work2\<cdot>(a :@ l)"
apply (unfold lrev_work3_def lrev_work2_def)
apply (subst fix_eq[where F="lrev_body2"])
apply (subst fix_eq[where F="lrev_body3"])
apply (fold lrev_work3_def lrev_work2_def)
apply (simp add: lrev_body3.unfold lrev_body2.unfold lrev_wwfusion)
done
qed simp_all
qed
text{* Use the combined worker/wrapper-fusion rule. Note we get a weaker lemma. *}
lemma lrev3_2_syntactic: "lrev_body3 oo (unwrapH oo wrapH) = lrev_body2"
apply (subst lrev_body2.unfold, subst lrev_body3.unfold)
apply (rule cfun_eqI)+
apply (case_tac xa)
apply (simp_all add: unwrapH_def)
done
lemma lrev_work3_lrev_work2_eq': "lrev = wrapH\<cdot>lrev_work3"
proof -
from lrev_lrev_body_eq
have "lrev = fix\<cdot>lrev_body" .
also from wrapH_unwrapH_id unwrapH_strict
have "\<dots> = wrapH\<cdot>(fix\<cdot>lrev_body3)"
by (rule worker_wrapper_fusion_new
, simp add: lrev3_2_syntactic lrev_body2_lrev_body1_eq lrev_body_lrev_body1_eq)
finally show ?thesis unfolding lrev_work3_def by simp
qed
text{* Final syntactic tidy-up. *}
fixrec lrev_body_final :: "('a llist \<rightarrow> 'a H) \<rightarrow> 'a llist \<rightarrow> 'a H"
where
"lrev_body_final\<cdot>r\<cdot>lnil\<cdot>ys = ys"
| "lrev_body_final\<cdot>r\<cdot>(x :@ xs)\<cdot>ys = r\<cdot>xs\<cdot>(x :@ ys)"
definition
lrev_work_final :: "'a llist \<rightarrow> 'a H" where
"lrev_work_final \<equiv> fix\<cdot>lrev_body_final"
definition
lrev_final :: "'a llist \<rightarrow> 'a llist" where
"lrev_final \<equiv> \<Lambda> xs. lrev_work_final\<cdot>xs\<cdot>lnil"
lemma lrev_body_final_lrev_body3_eq': "lrev_body_final\<cdot>r\<cdot>xs = lrev_body3\<cdot>r\<cdot>xs"
apply (subst lrev_body_final.unfold)
apply (subst lrev_body3.unfold)
apply (cases xs)
apply (simp_all add: list2H_def ID_def cfun_eqI)
done
lemma lrev_body_final_lrev_body3_eq: "lrev_body_final = lrev_body3"
by (simp only: lrev_body_final_lrev_body3_eq' cfun_eqI)
lemma lrev_final_lrev_eq: "lrev = lrev_final" (is "?lhs = ?rhs")
proof -
have "?lhs = lrev_wrap" by (rule lrev_lrev_ww_eq)
also have "\<dots> = wrapH\<cdot>lrev_work" by (simp only: lrev_wrap_def)
also have "\<dots> = wrapH\<cdot>lrev_work1" by (simp only: lrev_work1_lrev_work_eq)
also have "\<dots> = wrapH\<cdot>lrev_work2" by (simp only: lrev_work2_lrev_work1_eq)
also have "\<dots> = wrapH\<cdot>lrev_work3" by (simp only: lrev_work3_lrev_work2_eq)
also have "\<dots> = wrapH\<cdot>lrev_work_final" by (simp only: lrev_work3_def lrev_work_final_def lrev_body_final_lrev_body3_eq)
also have "\<dots> = lrev_final" by (simp add: lrev_final_def cfun_eqI H2list_def wrapH_def)
finally show ?thesis .
qed
(*<*)
end
(*>*)
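The net effect of the derivation above is to turn the quadratic, append-based reverse into a linear accumulator-passing worker, where the type `'a H` is the Hughes (difference) list `'a llist -> 'a llist`. As a rough sketch outside the theory, the start and end points of the transformation look like this in Python (names mirror the Isabelle definitions but are otherwise ours):

```python
def lrev(xs):
    # naive reverse: lrev [] = [], lrev (x:xs) = lrev xs ++ [x]  -- O(n^2)
    if not xs:
        return []
    return lrev(xs[1:]) + [xs[0]]

def lrev_work_final(xs, ys):
    # derived worker: the Hughes-list argument ys acts as an accumulator
    if not xs:
        return ys
    return lrev_work_final(xs[1:], [xs[0]] + ys)

def lrev_final(xs):
    # derived wrapper: run the worker on the empty accumulator  -- O(n)
    return lrev_work_final(xs, [])
```

The point of the Isabelle development is that `lrev_final` is provably equal to `lrev`, not merely observationally similar.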
|
[STATEMENT]
lemma ide_prod:
assumes "ide a" and "ide b"
shows "ide (prod a b)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. ide (local.prod a b)
[PROOF STEP]
using assms prod_def hf_to_ide_mapsto ide_to_hf_mapsto
[PROOF STATE]
proof (prove)
using this:
ide a
ide b
local.prod ?a ?b = hf_to_ide (ide_to_hf ?a * ide_to_hf ?b)
hf_to_ide \<in> UNIV \<rightarrow> Collect ide
ide_to_hf \<in> Collect ide \<rightarrow> UNIV
goal (1 subgoal):
1. ide (local.prod a b)
[PROOF STEP]
by auto
|
module Keys
%default total
%access public
data Digit = Zero | One | Two | Three | Four
| Five | Six | Seven | Eight | Nine
%name Digit digit
instance Eq Digit where
Zero == Zero = True
One == One = True
Two == Two = True
Three == Three = True
Four == Four = True
Five == Five = True
Six == Six = True
Seven == Seven = True
Eight == Eight = True
Nine == Nine = True
_ == _ = False
instance Cast Digit Nat where
cast Zero = 0
cast One = 1
cast Two = 2
cast Three = 3
cast Four = 4
cast Five = 5
cast Six = 6
cast Seven = 7
cast Eight = 8
cast Nine = 9
data Key = DigitKey Digit
| A | B | C | D | E | F | G | H | I | J | K | L | M
| N | O | P | Q | R | S | T | U | V | W | X | Y | Z
| Backtick
| Dash
| EqualSign
| OpenSquareBracket | CloseSquareBracket
| Slash | Backslash
| Tab
| Semicolon
| SingleQuote
| Comma | Period
| F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 | F11 | F12
| Escape
| Insert
| Delete
| Backspace
| Home | End
| PageUp | PageDown
| Enter
%name Key key
instance Eq Key where
(DigitKey x) == (DigitKey y) = x == y
A == A = True
B == B = True
C == C = True
D == D = True
E == E = True
F == F = True
G == G = True
H == H = True
I == I = True
J == J = True
K == K = True
L == L = True
M == M = True
N == N = True
O == O = True
P == P = True
Q == Q = True
R == R = True
S == S = True
T == T = True
U == U = True
V == V = True
W == W = True
X == X = True
Y == Y = True
Z == Z = True
Backtick == Backtick = True
Dash == Dash = True
EqualSign == EqualSign = True
OpenSquareBracket == OpenSquareBracket = True
CloseSquareBracket == CloseSquareBracket = True
Slash == Slash = True
Backslash == Backslash = True
Tab == Tab = True
Semicolon == Semicolon = True
SingleQuote == SingleQuote = True
Comma == Comma = True
Period == Period = True
F1 == F1 = True
F2 == F2 = True
F3 == F3 = True
F4 == F4 = True
F5 == F5 = True
F6 == F6 = True
F7 == F7 = True
F8 == F8 = True
F9 == F9 = True
F10 == F10 = True
F11 == F11 = True
F12 == F12 = True
Escape == Escape = True
Insert == Insert = True
Delete == Delete = True
Backspace == Backspace = True
Home == Home = True
End == End = True
PageUp == PageUp = True
PageDown == PageDown = True
Enter == Enter = True
_ == _ = False
isDigitKey : Key -> Bool
isDigitKey (DigitKey _) = True
isDigitKey _ = False
-- Aliases for readability
Shift : Bool
Shift = True
NoShift : Bool
NoShift = False
record KeyChord where
constructor KC
key : Key
shift : Bool
alt : Bool
ctrl : Bool
fn : Bool
super : Bool
%name KeyChord keyc
unmod : Key -> KeyChord
unmod key = KC key NoShift False False False False
shifted : Key -> KeyChord
shifted key = record { shift = Shift } $ unmod key
isUnmod : KeyChord -> Bool
isUnmod (KC _ False False False False False) = True
isUnmod _ = False
isDigit : KeyChord -> Bool
isDigit keyc = (isDigitKey . key) keyc && isUnmod keyc
toDigit : KeyChord -> Maybe Digit
toDigit (KC (DigitKey digit) NoShift _ _ _ _) = Just digit
toDigit _ = Nothing
toKey : Char -> Maybe KeyChord
toKey 'a' = Just $ unmod A
toKey 'b' = Just $ unmod B
toKey 'c' = Just $ unmod C
toKey 'd' = Just $ unmod D
toKey 'e' = Just $ unmod E
toKey 'f' = Just $ unmod F
toKey 'g' = Just $ unmod G
toKey 'h' = Just $ unmod H
toKey 'i' = Just $ unmod I
toKey 'j' = Just $ unmod J
toKey 'k' = Just $ unmod K
toKey 'l' = Just $ unmod L
toKey 'm' = Just $ unmod M
toKey 'n' = Just $ unmod N
toKey 'o' = Just $ unmod O
toKey 'p' = Just $ unmod P
toKey 'q' = Just $ unmod Q
toKey 'r' = Just $ unmod R
toKey 's' = Just $ unmod S
toKey 't' = Just $ unmod T
toKey 'u' = Just $ unmod U
toKey 'v' = Just $ unmod V
toKey 'w' = Just $ unmod W
toKey 'x' = Just $ unmod X
toKey 'y' = Just $ unmod Y
toKey 'z' = Just $ unmod Z
toKey 'A' = Just $ shifted A
toKey 'B' = Just $ shifted B
toKey 'C' = Just $ shifted C
toKey 'D' = Just $ shifted D
toKey 'E' = Just $ shifted E
toKey 'F' = Just $ shifted F
toKey 'G' = Just $ shifted G
toKey 'H' = Just $ shifted H
toKey 'I' = Just $ shifted I
toKey 'J' = Just $ shifted J
toKey 'K' = Just $ shifted K
toKey 'L' = Just $ shifted L
toKey 'M' = Just $ shifted M
toKey 'N' = Just $ shifted N
toKey 'O' = Just $ shifted O
toKey 'P' = Just $ shifted P
toKey 'Q' = Just $ shifted Q
toKey 'R' = Just $ shifted R
toKey 'S' = Just $ shifted S
toKey 'T' = Just $ shifted T
toKey 'U' = Just $ shifted U
toKey 'V' = Just $ shifted V
toKey 'W' = Just $ shifted W
toKey 'X' = Just $ shifted X
toKey 'Y' = Just $ shifted Y
toKey 'Z' = Just $ shifted Z
toKey _ = Nothing
toKeys : String -> List KeyChord
toKeys = catMaybes . map toKey . unpack
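Because the letter clauses of `toKey` differ only in case, the 52-clause table can be collapsed by computing the shift flag from the character's case. A hypothetical Python analogue (the names here are ours, not from this module):

```python
def to_key(ch):
    # Letters map to (uppercase key name, shifted?); any other character
    # is unmapped, mirroring the catch-all `toKey _ = Nothing` clause.
    if ch.isascii() and ch.isalpha():
        return (ch.upper(), ch.isupper())
    return None

def to_keys(s):
    # Analogue of `toKeys = catMaybes . map toKey . unpack`.
    return [k for k in map(to_key, s) if k is not None]
```

The Idris version keeps the explicit table so that keys without character equivalents (function keys, arrows, etc.) live in the same datatype.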
|
%% FTAlignStack
function S=AlignStack(stack)
%% 081010 Tobias Henn
S=stack;
stackcontainer=stack.spectr;
S.spectr=[]; %('clear S.spectr' is invalid: clear takes variable names, not struct fields)
[ymax,xmax,emax]=size(stackcontainer);
xresolution=S.Xvalue/xmax;
yresolution=S.Yvalue/ymax;
center=ceil(emax/4*3);
spectr=zeros(ymax,xmax,emax);
shifts=zeros(emax,4);
%calculate image shifts for each energy, perform shift with FT method
for k=1:emax
shifts(k,:)=dftregistration(fft2(stackcontainer(:,:,center)),fft2(stackcontainer(:,:,k)),10);
spectr(:,:,k)=FTMatrixShift(stackcontainer(:,:,k),-shifts(k,3),-shifts(k,4));
end
%Reduce image size
shiftymax=ceil(max(shifts(:,3)));
shiftxmax=ceil(max(shifts(:,4)));
shiftymin=ceil(abs(min(shifts(:,3))));
shiftxmin=ceil(abs(min(shifts(:,4))));
shiftmatrix=zeros(ymax-shiftymin-shiftymax,xmax-shiftxmax-shiftxmin,emax);
shiftmatrix(:,:,:)=spectr((1+shiftymax):(ymax-shiftymin),(1+shiftxmax):(xmax-shiftxmin),:);
S.spectr=abs(shiftmatrix);
S.Xvalue=size(S.spectr,2)*xresolution;
S.Yvalue=size(S.spectr,1)*yresolution;
return
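AlignStack registers every energy slice against a reference slice with `dftregistration`, then crops the stack to the common overlap. The registration step amounts to locating the peak of the cross-correlation evaluated in Fourier space; the following is a minimal integer-pixel Python sketch of that idea (`dftregistration` itself adds subpixel refinement, which this omits):

```python
import numpy as np

def estimate_shift(ref, img):
    # Phase correlation: the normalised cross-power spectrum of two
    # circularly shifted images is a pure phase ramp, whose inverse
    # FFT peaks at the shift.
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    R /= np.abs(R) + 1e-12
    corr = np.abs(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap indices in the upper half back to negative shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

Normalising the cross-power spectrum (rather than using plain cross-correlation) makes the peak insensitive to slice-to-slice intensity changes, which matters across an energy series.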
|
The madness of the 2012 NBA Draft, from Dion Waiters' rise to Kentucky's historic six-man draft class.
As always, the NBA Draft started at breakneck speed. Commissioner David Stern came out from behind his almighty Oz curtain and was greeted as he has been for years — with aggressive booing from the Prudential Center crowd.
“Thank you for the warm reception,” Stern said, in a tone normally reserved for Jim Rome.
From there, the parade of bad suits, embarrassing family and awkward TV could not be stopped.
|
model_soilevaporation <- function (diffusionLimitedEvaporation = 6605.505,
energyLimitedEvaporation = 448.24){
#'- Name: SoilEvaporation -Version: 1.0, -Time step: 1
#'- Description:
#' * Title: SoilEvaporation Model
#' * Author: Pierre Martre
#' * Reference: Modelling energy balance in the wheat crop model SiriusQuality2:
#' Evapotranspiration and canopy and soil temperature calculations
#' * Institution: INRA Montpellier
#' * ExtendedDescription: Starting from a soil at field capacity, soil evaporation is assumed to
#' be energy limited during the first phase of evaporation and diffusion limited thereafter.
#' Hence, the soil evaporation model considers these two processes taking the minimum between
#' the energy limited evaporation (PtSoil) and the diffused limited
#' evaporation
#' * ShortDescription: Starting from a soil at field capacity, soil evaporation is assumed to
#' be energy limited during the first phase of evaporation and diffusion limited thereafter.
#' Hence, the soil evaporation model considers these two processes taking the minimum between
#' the energy limited evaporation (PtSoil) and the diffused limited
#' evaporation
#'- inputs:
#' * name: diffusionLimitedEvaporation
#' ** description : diffusion Limited Evaporation
#' ** variablecategory : state
#' ** datatype : DOUBLE
#' ** default : 6605.505
#' ** min : 0
#' ** max : 10000
#' ** unit : g m-2 d-1
#' ** uri : http://www1.clermont.inra.fr/siriusquality/?page_id=547
#' ** inputtype : variable
#' * name: energyLimitedEvaporation
#' ** description : energy Limited Evaporation
#' ** variablecategory : state
#' ** datatype : DOUBLE
#' ** default : 448.240
#' ** min : 0
#' ** max : 1000
#' ** unit : g m-2 d-1
#' ** uri : http://www1.clermont.inra.fr/siriusquality/?page_id=547
#' ** inputtype : variable
#'- outputs:
#' * name: soilEvaporation
#' ** description : soil Evaporation
#' ** variablecategory : auxiliary
#' ** datatype : DOUBLE
#' ** min : 0
#' ** max : 5000
#' ** unit : g m-2 d-1
#' ** uri : http://www1.clermont.inra.fr/siriusquality/?page_id=547
soilEvaporation <- min(diffusionLimitedEvaporation, energyLimitedEvaporation)
return (list('soilEvaporation' = soilEvaporation))
}
|
MODULE PHYS
IMPLICIT NONE
REAL*8, PARAMETER :: pai = 3.1415926535897932D0 ! pi (double-precision literal)
REAL*8, PARAMETER :: boltz = 1.3806488D-16 ! Boltzmann constant, erg / K
REAL*8, PARAMETER :: elec_mass = 9.1093829D-28 ! Electron mass, g
REAL*8, PARAMETER :: prot_mass = 1.6726217D-24 ! Proton mass, g
REAL*8, PARAMETER :: bohr_rad = 5.29D-9 ! Bohr radius, cm
REAL*8, PARAMETER :: fine_str = 7.2973525D-3 ! Fine structure constant
REAL*8, PARAMETER :: light_speed = 2.9979245D+10 ! Speed of light, cm / s
REAL*8, PARAMETER :: planck = 6.6260688D-27 ! Planck constant, erg * s
PUBLIC
CONTAINS
REAL*8 FUNCTION HYD_LEV_ENERGY(n) ! Energy of a hydrogen level with principal quantum number n (in CGS units)
USE MATH
INTEGER, INTENT(IN) :: n
HYD_LEV_ENERGY = -elec_mass * DSQ(light_speed * fine_str) / ISQ(n) / 2.0D0
RETURN
END FUNCTION HYD_LEV_ENERGY
REAL*8 FUNCTION PLANCK_FUNC(nu, T) ! Planck intensity per unit frequency (in CGS units)
USE MATH
REAL*8, INTENT(IN) :: nu, T
REAL*8 :: PLANCK_EXP
REAL*8 :: PLANCK_FREQ_FACT
PLANCK_FREQ_FACT = 2.0D0 * planck * DCUBE(nu) / DSQ(light_speed)
PLANCK_EXP = DEXP(planck * nu / boltz / T)
PLANCK_FUNC = PLANCK_FREQ_FACT / (PLANCK_EXP - 1.0D0)
RETURN
END FUNCTION PLANCK_FUNC
END MODULE PHYS
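The two routines above evaluate E_n = -m_e c^2 alpha^2 / (2 n^2) and B_nu = (2 h nu^3 / c^2) / (exp(h nu / k T) - 1) in CGS units. A Python sketch with the same constants, useful as a sanity check (the expected behaviour is standard physics: E_1 is about -13.6 eV, and B_nu reduces to the Rayleigh-Jeans form 2 nu^2 k T / c^2 when h nu is much smaller than k T):

```python
import math

boltz = 1.3806488e-16       # Boltzmann constant, erg / K
elec_mass = 9.1093829e-28   # electron mass, g
fine_str = 7.2973525e-3     # fine structure constant
light_speed = 2.9979245e10  # speed of light, cm / s
planck = 6.6260688e-27      # Planck constant, erg * s

def hyd_lev_energy(n):
    # E_n = -m_e c^2 alpha^2 / (2 n^2), in erg
    return -elec_mass * (light_speed * fine_str) ** 2 / (2.0 * n * n)

def planck_func(nu, T):
    # B_nu = (2 h nu^3 / c^2) / (exp(h nu / k T) - 1), CGS per unit frequency
    fact = 2.0 * planck * nu ** 3 / light_speed ** 2
    return fact / math.expm1(planck * nu / (boltz * T))
```

Using `expm1` keeps the low-frequency (Rayleigh-Jeans) limit numerically stable, where exp(h nu / k T) - 1 would otherwise suffer cancellation.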
|
#include <vector>
#include <regex>
#include <cstdio>
#include <cstdlib>
#include <chrono>
#include <fstream>
#include <typeinfo>
#include <boost/core/demangle.hpp>
#include "accurate.hh"
#include "cfgfile.hh"
#include "method.hh"
#include "output.hh"
#include "results.hh"
#include "binc.hh"
using namespace dds;
using std::vector;
using std::initializer_list;
using Json::Value;
using agms::projection;
using agms::depth_type;
using binc::print;
using binc::elements_of;
//----------------------------------------------
//
// Processing the "dataset" section
//
//----------------------------------------------
void dds::prepare_dataset(Value &cfg, dataset &D) {
Json::Value jdset = cfg["dataset"];
if (jdset.isNull())
return;
set<string> kwords{
"data_source", // this is a url
"loops", "max_length", "max_timestamp", "hash_sources", "hash_streams",
"time_window", "fixed_window", "flush_window",
"warmup_time", "warmup_size"
};
for (auto member: jdset.getMemberNames()) {
if (kwords.count(member) == 0)
throw std::runtime_error("Unknown keyword `" + member + "' in dataset section of config");
}
if (!jdset.isMember("data_source"))
throw std::runtime_error("The dataset does not specify some data_source");
parsed_url purl;
parse_url(jdset["data_source"].asString(), purl);
datasrc ds = open_data_source(purl.type, purl.path, purl.vars);
D.load(ds);
{
Json::Value js = jdset["loops"];
if (!js.isNull())
D.set_loops(js.asUInt());
}
{
Json::Value js = jdset["max_length"];
if (!js.isNull())
D.set_max_length(js.asUInt());
}
{
Json::Value js = jdset["max_timestamp"];
if (!js.isNull())
D.set_max_timestamp(js.asInt());
}
{
Json::Value js = jdset["hash_sources"];
if (!js.isNull())
D.hash_sources(js.asInt());
}
{
Json::Value js = jdset["hash_streams"];
if (!js.isNull())
D.hash_streams(js.asInt());
}
{
Json::Value js = jdset["time_window"];
bool flush = jdset.get("flush_window", false).asBool();
if (!js.isNull())
D.set_time_window(js.asInt(), flush);
}
{
Json::Value js = jdset["fixed_window"];
bool flush = jdset.get("flush_window", false).asBool();
if (!js.isNull())
D.set_fixed_window(js.asInt(), flush);
}
{
Json::Value js = jdset["warmup_time"];
if (!js.isNull()) {
D.warmup_time(js.asInt());
}
}
{
Json::Value js = jdset["warmup_size"];
if (!js.isNull()) {
D.warmup_size(js.asInt());
}
}
D.create();
}
//----------------------------------------------
//
// Processing the "components" section
//
//----------------------------------------------
projection dds::get_projection(const Value &js) {
const Value &jp = js["projection"];
depth_type d = jp["depth"].asUInt();
size_t w = jp["width"].asUInt();
assert(d > 0 && w > 0);
projection proj(d, w);
if (!jp["epsilon"].isNull())
proj.set_epsilon(jp["epsilon"].asDouble());
return proj;
}
std::vector<stream_id> dds::get_streams(const Value &js) {
std::vector<stream_id> ret;
if (js.isMember("stream")) {
ret.push_back(js["stream"].asInt());
} else if (js.isMember("streams")) {
const Value &jp = js["streams"];
if (jp.isArray()) {
for (auto &&val : jp) {
ret.push_back(val.asInt());
}
} else {
ret.push_back(jp.asInt());
}
}
return ret;
}
basic_stream_query dds::get_query(const Value &js) {
basic_stream_query Q;
if (!js.isMember("query"))
return Q;
qtype qt = qtype_repr[js["query"].asString()];
Q.set_type(qt);
double beta = js.get("beta", 0.0).asDouble();
Q.set_approximation(beta);
auto streams = get_streams(js);
Q.set_operands(streams);
return Q;
}
void dds::prepare_components(Value &js, vector<reactive *> &components) {
Value jcomp = js["components"];
if (jcomp.isNull()) return;
if (!jcomp.isArray())
throw std::runtime_error("Error in cfg file: 'components' is not an array");
for (size_t index = 0; index < jcomp.size(); index++) {
Value jc = jcomp[(int) index];
if (jc.get("ignore", false).asBool())
continue;
string type = jc["type"].asString();
// map to a handler
basic_component_type *ctype = basic_component_type::get_component_type(type);
try {
auto c = ctype->create(jc);
cout << "Component " << c->name() << " of component type " << type << "(instance of "
<< boost::core::demangle(typeid(*c).name()) << ") created" << endl;
components.push_back(c);
} catch (std::exception &e) {
std::cerr << "Failed to create component " << index << std::endl;
throw;
}
}
}
//------------------------------------
//
// Processing the "files" section
//
//------------------------------------
template<typename T>
struct enum_processor {
string varname;
T default_value;
enum_repr<T> &repr;
enum_processor(const string &_vname, T _defval, enum_repr<T> &_repr)
: varname(_vname), default_value(_defval), repr(_repr) {}
T operator()(const map<string, string> &vars) {
T retval = default_value;
if (vars.count(varname)) {
string om = vars.at(varname);
if (!repr.is_member(om))
throw std::runtime_error("Illegal value in URL: " + varname + "=" + om);
retval = repr[om];
}
return retval;
}
};
using namespace std::string_literals;
enum_processor<text_format> proc_text_format("format"s, default_text_format, text_format_repr);
enum_processor<open_mode> proc_open_mode("open_mode"s, default_open_mode, open_mode_repr);
#define RE_FNAME "[a-zA-Z0-9 _.-]+"
#define RE_PATH "(/?(?:" RE_FNAME "/)*(?:" RE_FNAME "))"
#define RE_ID "[a-zA-Z_][a-zA-Z0-9_]*"
#define RE_TYPE "(" RE_ID "):"
#define RE_VAR RE_ID "=" RE_PATH
#define RE_VARS RE_VAR "(?:," RE_VAR ")*"
#define RE_URL RE_TYPE RE_PATH "?(?:\\?(" RE_VARS "))?"
void dds::parse_url(const string &url, parsed_url &purl) {
using std::regex;
using std::smatch;
using std::regex_match;
using std::sregex_token_iterator;
regex re_url(RE_URL);
smatch match;
if (!regex_match(url, match, re_url))
throw std::runtime_error("Malformed url `" + url + "'");
purl.type = match[1];
purl.path = match[2];
string allvars = match[3];
// split variables
regex re_var("(" RE_ID ")=(" RE_PATH ")");
regex re_comma(",");
auto s = sregex_token_iterator(allvars.begin(), allvars.end(), re_comma, -1);
auto s2 = sregex_token_iterator();
for (; s != s2; ++s) {
string var = *s;
smatch vmatch;
regex_match(var, vmatch, re_var);
string vname = vmatch[1];
string vvalue = vmatch[2];
if (!vname.empty()) purl.vars[vname] = vvalue;
}
purl.mode = proc_open_mode(purl.vars);
purl.format = proc_text_format(purl.vars);
}
static output_file *process_output_file(const string &url) {
parsed_url purl;
parse_url(url, purl);
if (purl.type == "file")
return CTX.open(purl.path, purl.mode, purl.format);
else if (purl.type == "hdf5")
return CTX.open_hdf5(purl.path, purl.mode);
else if (purl.type == "stdout")
return &output_stdout;
else if (purl.type == "stderr")
return &output_stderr;
throw std::runtime_error("Unknown output_file type: `" + purl.type + "'");
}
static void proc_bind(output_table *tab, const string &fname, const output_file_map &fmap) {
if (fmap.count(fname) == 0)
throw std::runtime_error("Could not find file `" + fname + "' to bind table `" + tab->name() + "' to.");
tab->bind(fmap.at(fname));
}
output_file_map dds::prepare_output(Json::Value &jsctx, reporter &R) {
output_file_map fmap;
// check the relevant sections, "files" and "bind"
Value files = jsctx["files"];
for (auto member : files.getMemberNames()) {
string url = files[member].asString();
fmap[member] = process_output_file(url);
}
// do the bindings
Value bind = jsctx["bind"];
for (auto table_name : bind.getMemberNames()) {
output_table *table = output_table::get(table_name);
Value binds = bind[table_name];
if (binds.isNull())
continue;
else if (binds.isString())
proc_bind(table, binds.asString(), fmap);
else if (binds.isArray()) {
for (size_t i = 0; i < binds.size(); i++)
proc_bind(table, binds[(int) i].asString(), fmap);
} else
throw std::runtime_error("Binding for `" + table->name() + "' is not a string or array");
if (table->flavor() == table_flavor::RESULTS)
R.watch(*table);
}
// set up sampling
Value sample = jsctx["sample"];
for (auto ts_name : sample.getMemberNames()) {
time_series *ts = dynamic_cast<time_series *>(output_table::get(ts_name));
if (ts == nullptr || ts->flavor() != table_flavor::TIMESERIES)
throw std::runtime_error("Could not find time series table `" + ts_name + "'");
R.sample(*ts, sample[ts_name].asUInt());
}
return fmap;
}
//------------------------------------
//
// Execution
//
//------------------------------------
void dds::execute(Value &cfg) {
/* Reset the context */
CTX.initialize();
/* Create dataset */
dataset D;
prepare_dataset(cfg, D);
/* Create components */
std::vector<reactive *> components;
prepare_components(cfg, components);
/* Create output files */
reporter R;
prepare_output(cfg, R);
progress_reporter pbar(stdout, 40, "Progress: ");
/* Run */
using namespace std::chrono;
steady_clock::time_point startt = steady_clock::now();
CTX.run();
steady_clock::time_point endt = steady_clock::now();
cout << "Execution time="
<< duration_cast<milliseconds>(endt - startt).count() / 1000.0
<< "sec" << endl;
/* Clean up */
for (auto p : components)
delete p;
CTX.close_result_files();
agms_sketch_updater_factory.clear();
CTX.clear();
}
void dds::generate_schema(output_table *table) {
using std::ofstream;
using std::ios_base;
/// create the output file name (json)
string filename = table->name() + ".schema";
Json::Value root, columns;
root["name"] = table->name();
for (size_t i = 0; i < table->size(); i++) {
Value column;
column["name"] = (*table)[i]->name();
column["type"] = boost::core::demangle((*table)[i]->type().name());
column["arithmetic"] = (*table)[i]->is_arithmetic();
root["columns"][Json::ArrayIndex(i)] = column;
}
ofstream scfile(filename, ios_base::out | ios_base::trunc);
scfile << root << endl;
scfile.close();
// close by returning
}
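The `parse_url` routine accepts URLs of the form `type:path?var1=value1,var2=value2`. Below is a loose Python transcription of the same grammar, handy for experimenting with it; it is a sketch and not guaranteed to be byte-for-byte equivalent to the `std::regex` version:

```python
import re

_FNAME = r"[a-zA-Z0-9 _.\-]+"
_PATH = rf"/?(?:{_FNAME}/)*{_FNAME}"
_URL = re.compile(rf"([A-Za-z_][A-Za-z0-9_]*):({_PATH})?(?:\?(.*))?$")

def parse_url(url):
    # Returns (type, path, vars) or raises on a malformed URL.
    m = _URL.match(url)
    if not m:
        raise ValueError(f"Malformed url `{url}'")
    vars_ = {}
    for item in (m.group(3) or "").split(","):
        if "=" in item:
            k, v = item.split("=", 1)
            vars_[k] = v
    return m.group(1), m.group(2) or "", vars_
```

As in the C++ code, the path character class excludes `?`, so the query part is unambiguous even though the path regex is greedy.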
|
proposition\<^marker>\<open>tag unimportant\<close> power_series_holomorphic: assumes "\<And>w. w \<in> ball z r \<Longrightarrow> ((\<lambda>n. a n*(w - z)^n) sums f w)" shows "f holomorphic_on ball z r"
|
State Before: R : Type u_1
inst✝⁵ : StrictOrderedCommRing R
M : Type u_2
N : Type ?u.156166
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
x y : M
inst✝ : NoZeroSMulDivisors R M
h : SameRay R x (-x)
⊢ x = 0 State After: R : Type u_1
inst✝⁵ : StrictOrderedCommRing R
M : Type u_2
N : Type ?u.156166
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
x y : M
inst✝ : NoZeroSMulDivisors R M
h : SameRay R x (-x)
✝ : Nontrivial M
⊢ x = 0 Tactic: nontriviality M State Before: R : Type u_1
inst✝⁵ : StrictOrderedCommRing R
M : Type u_2
N : Type ?u.156166
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
x y : M
inst✝ : NoZeroSMulDivisors R M
h : SameRay R x (-x)
✝ : Nontrivial M
⊢ x = 0 State After: R : Type u_1
inst✝⁵ : StrictOrderedCommRing R
M : Type u_2
N : Type ?u.156166
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
x y : M
inst✝ : NoZeroSMulDivisors R M
h : SameRay R x (-x)
✝ : Nontrivial M
this : Nontrivial R
⊢ x = 0 Tactic: haveI : Nontrivial R := Module.nontrivial R M State Before: R : Type u_1
inst✝⁵ : StrictOrderedCommRing R
M : Type u_2
N : Type ?u.156166
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
x y : M
inst✝ : NoZeroSMulDivisors R M
h : SameRay R x (-x)
✝ : Nontrivial M
this : Nontrivial R
⊢ x = 0 State After: R : Type u_1
inst✝⁵ : StrictOrderedCommRing R
M : Type u_2
N : Type ?u.156166
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
x y : M
inst✝ : NoZeroSMulDivisors R M
h : SameRay R x (-x)
✝ : Nontrivial M
this : Nontrivial R
⊢ SameRay R x (-1 • x) Tactic: refine' eq_zero_of_sameRay_neg_smul_right (neg_lt_zero.2 (zero_lt_one' R)) _ State Before: R : Type u_1
inst✝⁵ : StrictOrderedCommRing R
M : Type u_2
N : Type ?u.156166
inst✝⁴ : AddCommGroup M
inst✝³ : AddCommGroup N
inst✝² : Module R M
inst✝¹ : Module R N
x y : M
inst✝ : NoZeroSMulDivisors R M
h : SameRay R x (-x)
✝ : Nontrivial M
this : Nontrivial R
⊢ SameRay R x (-1 • x) State After: no goals Tactic: rwa [neg_one_smul]
|
# A striped preorder is one for which comparability is an equivalence relation
######################################################################
`is_element/striped_preord` := (A::set) -> proc(R)
local E;
global reason;
if not `is_element/preord`(A)(R) then
reason := [convert(procname,string),"R is not a preorder on A",R,A,reason];
return false;
fi;
E := R union `op/autorel`(A)(R);
if not(`is_transitive/autorel`(A)(E)) then
reason := [convert(procname,string),"R u R^op is not transitive",R];
return false;
fi;
return true;
end;
######################################################################
`is_equal/striped_preord` := eval(`is_equal/autorel`);
`is_leq/striped_preord` := eval(`is_leq/preord`);
`is_separated/striped_preord` := eval(`is_separated/autorel`);
`op/striped_preord` := eval(`op/autorel`);
######################################################################
# A075729
`count_elements/striped_preord` := proc(A::set)
local n,x,egf;
n := nops(A);
egf := convert(series(exp(1/(2-exp(x))-1),x=0,n+1),polynom,x):
return n! * coeff(egf,x,n);
end:
######################################################################
`list_elements/striped_preord` := proc(A::set)
local RR,PI,pi,LL,QQ,B,Q,L;
RR := [];
PI := `list_elements/partitions`(A);
for pi in PI do
LL := [{}];
for B in pi do
QQ := `list_elements/total_preord`(B);
LL := [seq(seq(L union Q,Q in QQ),L in LL)];
od;
RR := [op(RR),op(LL)];
od:
return RR;
end:
######################################################################
# Note that the function below is not actually a rank function, but
# it is an ingredient in certain rank functions to be defined
# elsewhere.
`rank/preord` := (A::set) -> proc(R)
nops(`block_partition/preord`(A)(R));
end:
######################################################################
`random_element/striped_preord` := (A::set) -> proc()
local pi,Q,B;
pi := `random_element/partitions`(A)();
Q := {};
for B in pi do
Q := Q union `random_element/total_preord`(B)();
od;
return Q;
end:
######################################################################
# Returns a striped preorder R0 with R0 > R
# (or FAIL if R is already maximal).
`bump/striped_preord` := (A::set) -> proc(R)
local pi,B,a,b,B0,R0;
pi := `block_partition/preord`(A)(R);
for B in pi do
if nops(B) > 1 then
a := B[1];
B0 := B minus {a};
R0 := R minus {seq([b,a],b in B0)};
return R0;
fi;
od;
return FAIL;
end:
######################################################################
# Returns a striped preorder R0 with R < R0 <= S
# (or FAIL if this is impossible)
`relative_bump/striped_preord` := (A::set) -> proc(R,S)
local pi,B,BB,SB,C,a,b,c,B0,R0;
pi := `block_partition/preord`(A)(R);
for B in pi do
BB := `top/autorel`(B);
SB := S intersect BB;
if BB minus S <> {} then
for a in B do
C := select(b -> member([b,a],SB) and not(member([a,b],SB)),B);
if C = {} then
B0 := B minus {a};
R0 := R minus {seq([b,a],b in B0)};
return R0;
fi;
od;
return FAIL; # should not happen
fi;
od;
return FAIL;
end:
######################################################################
`describe/striped_preord` := (A::set) -> proc(R)
local s0,s1,s2,pi,B,C,r,m,j,k,c;
s0 := "";
pi := `block_partition/preord`(A)(R union `op/autorel`(A)(R));
for B in pi do
r := `rank_table/preord`(B)(R intersect `top/autorel`(B));
m := max(map(b -> r[b],B));
s1 := "";
for j from 0 to m do
C := select(b -> r[b]=j,B);
s2 := "";
for k from 1 to nops(C) do
s2 := cat(s2,`if`(k>1,"=",""),sprintf("%A",C[k]));
od:
s1 := cat(s1,`if`(j>0,"<",""),s2);
od;
s0 := cat(s0,`if`(s0="","",","),s1);
od:
return s0;
end:
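For readers who want to sanity-check the definition and the A075729 count above outside of Maple, here is a small Python sketch (not part of this library; the helper names are ad hoc). It brute-forces the striped preorders on a small set — reflexive transitive relations whose comparability relation R union R^op is itself transitive — and compares the count against the exponential-formula recurrence underlying the EGF exp(1/(2-exp(x)) - 1): partition the set, then give each block a total preorder.

```python
from itertools import product
from math import comb

def is_transitive(rel):
    # rel is a set of ordered pairs
    return all((a, d) in rel for (a, b) in rel for (c, d) in rel if b == c)

def striped_preorders(elems):
    """Brute-force all reflexive, transitive relations R on elems whose
    comparability relation R | R^op is itself transitive."""
    offdiag = [(a, b) for a in elems for b in elems if a != b]
    diag = {(a, a) for a in elems}
    found = []
    for bits in product((0, 1), repeat=len(offdiag)):
        rel = diag | {p for p, bit in zip(offdiag, bits) if bit}
        if not is_transitive(rel):
            continue
        comp = rel | {(b, a) for (a, b) in rel}
        if is_transitive(comp):
            found.append(rel)
    return found

def fubini(n):
    """Number of total preorders (= ordered set partitions) of an n-set."""
    return 1 if n == 0 else sum(comb(n, k) * fubini(n - k) for k in range(1, n + 1))

def striped_count(n):
    """Exponential formula: partition the set, totally preorder each block."""
    return 1 if n == 0 else sum(comb(n - 1, k - 1) * fubini(k) * striped_count(n - k)
                                for k in range(1, n + 1))
```

On 3 elements both routes give 23: of the 29 preorders on a 3-set, only the six V- and Lambda-shaped posets fail the stripe condition.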
|
"""Use this script to play manually against a Blooms agent.
"""
import numpy as np
import Arena
from MCTS import MCTS
from blooms.BloomsGame import BloomsGame
from blooms.BloomsPlayers import *
from blooms.pytorch.NNet import NNetWrapper as NNet
from utils import *
# WARNING: The game size and score target should match the chosen agent
game = BloomsGame(size=5, score_target=20)
human = HumanBloomsPlayer(game).play
# WARNING: The chosen agent should match the game size and score target
model = NNet(game)
model.load_checkpoint('./notebooks/results/chkpts_board5_24hrs', 'best.pth.tar')
args = dotdict({'numMCTSSims': 100, 'cpuct':1.0})
mcts = MCTS(game, model, args)
agent = lambda x: np.argmax(mcts.getActionProb(x, temp=0))
arena = Arena.Arena(agent, human, game)
print(arena.playGames(2, verbose=True, display=False))
|
PROGRAM xfit
C driver for routine fit
INTEGER NPT
REAL SPREAD
PARAMETER(NPT=100,SPREAD=0.5)
INTEGER i,idum,mwt
REAL a,b,chi2,gasdev,q,siga,sigb,sig(NPT),x(NPT),y(NPT)
idum=-117
do 11 i=1,NPT
x(i)=0.1*i
y(i)=-2.0*x(i)+1.0+SPREAD*gasdev(idum)
sig(i)=SPREAD
11 continue
do 12 mwt=0,1
call fit(x,y,NPT,sig,mwt,a,b,siga,sigb,chi2,q)
if (mwt.eq.0) then
write(*,'(//1x,a)') 'Ignoring standard deviation'
else
write(*,'(//1x,a)') 'Including standard deviation'
endif
write(*,'(1x,t5,a,f9.6,t24,a,f9.6)') 'A = ',a,'Uncertainty: ',
* siga
write(*,'(1x,t5,a,f9.6,t24,a,f9.6)') 'B = ',b,'Uncertainty: ',
* sigb
write(*,'(1x,t5,a,4x,f10.6)') 'Chi-squared: ',chi2
write(*,'(1x,t5,a,f10.6)') 'Goodness-of-fit: ',q
12 continue
END
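The driver above exercises the Numerical Recipes `fit` routine on noisy samples of y = -2x + 1. As a language-neutral reference point, a minimal Python sketch of the same unweighted straight-line fit via the normal equations (an illustrative `fit_line`, not the NR routine, and without the chi-square and uncertainty outputs) is:

```python
def fit_line(xs, ys):
    """Unweighted least-squares fit of y = a + b*x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return a, b
```

On noiseless samples of y = -2x + 1 this recovers a = 1 and b = -2 up to rounding; with Gaussian noise of standard deviation SPREAD it reproduces the mwt = 0 branch of the Fortran driver.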
|
module Logic.Operations where
import Logic.Relations as Rel
import Logic.Equivalence as Eq
open Eq using (Equivalence; module Equivalence)
BinOp : Set -> Set
BinOp A = A -> A -> A
module MonoEq {A : Set}(Eq : Equivalence A) where
module EqEq = Equivalence Eq
open EqEq
Commutative : BinOp A -> Set
Commutative _+_ = (x y : A) -> (x + y) == (y + x)
Associative : BinOp A -> Set
Associative _+_ = (x y z : A) -> (x + (y + z)) == ((x + y) + z)
LeftIdentity : A -> BinOp A -> Set
LeftIdentity z _+_ = (x : A) -> (z + x) == x
RightIdentity : A -> BinOp A -> Set
RightIdentity z _+_ = (x : A) -> (x + z) == x
module Param where
Commutative : {A : Set}(Eq : Equivalence A) -> BinOp A -> Set
Commutative Eq = Op.Commutative
where module Op = MonoEq Eq
Associative : {A : Set}(Eq : Equivalence A) -> BinOp A -> Set
Associative Eq = Op.Associative
where module Op = MonoEq Eq
LeftIdentity : {A : Set}(Eq : Equivalence A) -> A -> BinOp A -> Set
LeftIdentity Eq = Op.LeftIdentity
where module Op = MonoEq Eq
RightIdentity : {A : Set}(Eq : Equivalence A) -> A -> BinOp A -> Set
RightIdentity Eq = Op.RightIdentity
where module Op = MonoEq Eq
|
import cv2
import math
import numpy as np
import random
import torch
import torch.nn.functional as F
import torchvision.transforms as T
from torchvision.transforms.functional import to_tensor
from torch import Tensor
from typing import Any, Callable, List, Optional, Tuple
Mean = Optional[Tuple[float, float, float]]
Std = Mean
def no_transforms(image: np.ndarray, size=256) -> Tensor:
image = cv2.resize(image, (size, size), interpolation=cv2.INTER_AREA)
return to_tensor(image)
class Resize(object):
def __init__(self, size: int, mode=cv2.INTER_NEAREST):
if not size:
raise AttributeError("Size should be positive number")
self.size = size
self.mode = mode
def __call__(self, image: np.ndarray):
size = (self.size, self.size)
return cv2.resize(image, dsize=size, interpolation=self.mode)
def __repr__(self):
return "{}(size={})".format(Resize.__name__, self.size)
def resize(t, size=None, scale=None, mode='nearest', align_corners=False, normalize=False):
# type: (Tensor, Optional[int], Optional[float], str, Optional[bool], Optional[bool]) -> Tensor
t = t.float().unsqueeze_(0)
t = F.interpolate(t, size=size, scale_factor=scale, mode=mode,
# not sure if this is needed
align_corners=mode in ['bilinear', 'bicubic'] or None)
t = t.clamp_(min=0, max=255).squeeze_(0)
if normalize:
return t.div_(255)
return t
class UpscaleIfBelow(object):
def __init__(self, min_size: int):
if not min_size:
raise AttributeError("min_size should be positive number")
self.min_size = min_size
def __call__(self, t: Tensor):
C, H, W = t.shape
S = self.min_size
if H < S or W < S:
scale = math.ceil(S / min(H, W))
t = resize(t, scale=scale, mode='nearest', normalize=False)
return t
def __repr__(self):
return "{}(target_size={})".format(UpscaleIfBelow.__name__, self.min_size)
class ResizeTensor(object):
def __init__(self, size: int, mode='nearest', normalize=True):
if not size:
raise AttributeError("Size should be positive number")
self.size = size
self.mode = mode
self.normalize = normalize
def __call__(self, t: Tensor):
t = resize(t, size=self.size, mode=self.mode, normalize=self.normalize)
return t
def __repr__(self):
        return "{}(size={}, mode={}, normalize={})".format(
            ResizeTensor.__name__, self.size, self.mode, self.normalize)
class PadIfNeeded(object):
def __init__(self, size: int, mode='constant', value=0, normalize=False):
if not size:
raise AttributeError("Size should be positive number")
self.size = size
self.mode = mode
self.value = value
self.normalize = normalize
def __call__(self, t: Tensor):
S = self.size
H, W = t.size(-2), t.size(-1)
if H >= S and W >= S:
return t
        # Clamp at zero so a dimension already larger than S is not
        # cropped by a negative pad.
        pad_H = max(S - H, 0) // 2
        pad_W = max(S - W, 0) // 2
        pad = [pad_W, max(S - (W + pad_W), 0),
               pad_H, max(S - (H + pad_H), 0)]
if self.mode != 'constant':
t = t.float().unsqueeze_(0)
t = F.pad(t, pad, mode=self.mode, value=self.value)
if t.ndim > 3:
t = t.squeeze_(0)
if self.normalize:
return t.float().div_(255)
return t
def __repr__(self):
return "{}(size={}, mode={}, value={})".format(
PadIfNeeded.__name__, self.size, self.mode, self.value)
class CropCenter(object):
def __init__(self, size: int):
if not size:
raise AttributeError("Size should be positive number")
self.size = size
def __call__(self, t: Tensor):
S = int(self.size)
H, W = t.size(-2), t.size(-1)
y0 = int(round((H - S) / 2.))
x0 = int(round((W - S) / 2.))
t = t[:, y0:y0+S, x0:x0+S]
return t
def __repr__(self):
return "{}(size={})".format(CropCenter.__name__, self.size)
def diff(x: Tensor, dim: int) -> Tensor:
mask = list(map(slice, x.shape[:dim]))
mask0 = mask + [slice(1, x.size(dim))]
mask1 = mask + [slice(0, -1)]
return x[mask0] - x[mask1]
def image_grad(x: Tensor, n=1, keep_size=False) -> Tensor:
for _ in range(n):
x = diff(x, -1)
x = diff(x, -2)
if keep_size:
pad = [(n + i) // 2 for i in [0, 1, 0, 1]]
x = F.pad(x, pad)
return x
class SpatialGradFilter(object):
def __init__(self, order: int):
if not order:
raise AttributeError("Order should be positive number")
self.order = order
def __call__(self, t: Tensor):
return image_grad(t, n=self.order, keep_size=True)
def __repr__(self):
return "{}(order={})".format(SpatialGradFilter.__name__, self.order)
class RandomHorizontalFlipSequence(object):
def __init__(self, p=0.5):
if p < 0 or p > 1:
            raise ValueError("flip probability should be between 0 and 1")
self.p = p
def __call__(self, t: Tensor):
if random.uniform(0, 1) < self.p:
return t.flip(-1)
return t
def __repr__(self):
return "{}(p={})".format(RandomHorizontalFlipSequence.__name__, self.p)
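The even-split padding arithmetic used by `PadIfNeeded` can be isolated into a tiny torch-free helper (a hypothetical `symmetric_pad`, not part of the module) for sanity-checking: it grows an H x W image to at least `size` on each side, splitting each deficit as evenly as possible and clamping at zero so an already-large dimension is never cropped.

```python
def symmetric_pad(size, h, w):
    """Return (left, right, top, bottom) padding that grows an h x w image
    to at least size x size, splitting each deficit as evenly as possible."""
    pad_h = max(size - h, 0) // 2
    pad_w = max(size - w, 0) // 2
    return (pad_w, max(size - (w + pad_w), 0),
            pad_h, max(size - (h + pad_h), 0))
```

The tuple follows the `F.pad` convention for the last two dimensions: (left, right, top, bottom), with the odd extra pixel going to the right/bottom.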
|
safeDivide : Double -> Double -> Maybe Double
safeDivide x y = if y == 0 then Nothing
else Just (x/y)
|
(* Copyright 2021 (C) Mihails Milehins *)
section\<open>Smallness for semicategories\<close>
theory CZH_SMC_Small_Semicategory
imports
CZH_DG_Small_Digraph
CZH_SMC_Semicategory
begin
subsection\<open>Background\<close>
text\<open>
An explanation of the methodology chosen for the exposition of all
matters related to the size of the semicategories and associated entities
is given in the previous chapter.
\<close>
named_theorems smc_small_cs_simps
named_theorems smc_small_cs_intros
subsection\<open>Tiny semicategory\<close>
subsubsection\<open>Definition and elementary properties\<close>
locale tiny_semicategory = \<Z> \<alpha> + vfsequence \<CC> + Comp: vsv \<open>\<CC>\<lparr>Comp\<rparr>\<close> for \<alpha> \<CC> +
assumes tiny_smc_length[smc_cs_simps]: "vcard \<CC> = 5\<^sub>\<nat>"
and tiny_smc_tiny_digraph[slicing_intros]: "tiny_digraph \<alpha> (smc_dg \<CC>)"
and tiny_smc_Comp_vdomain: "gf \<in>\<^sub>\<circ> \<D>\<^sub>\<circ> (\<CC>\<lparr>Comp\<rparr>) \<longleftrightarrow>
(\<exists>g f b c a. gf = [g, f]\<^sub>\<circ> \<and> g : b \<mapsto>\<^bsub>\<CC>\<^esub> c \<and> f : a \<mapsto>\<^bsub>\<CC>\<^esub> b)"
and tiny_smc_Comp_is_arr[smc_cs_intros]:
"\<lbrakk> g : b \<mapsto>\<^bsub>\<CC>\<^esub> c; f : a \<mapsto>\<^bsub>\<CC>\<^esub> b \<rbrakk> \<Longrightarrow> g \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> f : a \<mapsto>\<^bsub>\<CC>\<^esub> c"
and tiny_smc_assoc[smc_cs_simps]:
"\<lbrakk> h : c \<mapsto>\<^bsub>\<CC>\<^esub> d; g : b \<mapsto>\<^bsub>\<CC>\<^esub> c; f : a \<mapsto>\<^bsub>\<CC>\<^esub> b \<rbrakk> \<Longrightarrow>
(h \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> g) \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> f = h \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> (g \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> f)"
lemmas [smc_cs_simps] =
tiny_semicategory.tiny_smc_length
tiny_semicategory.tiny_smc_assoc
lemmas [slicing_intros] =
tiny_semicategory.tiny_smc_Comp_is_arr
text\<open>Rules.\<close>
lemma (in tiny_semicategory) tiny_semicategory_axioms'[smc_small_cs_intros]:
assumes "\<alpha>' = \<alpha>"
shows "tiny_semicategory \<alpha>' \<CC>"
unfolding assms by (rule tiny_semicategory_axioms)
mk_ide rf tiny_semicategory_def[unfolded tiny_semicategory_axioms_def]
|intro tiny_semicategoryI|
|dest tiny_semicategoryD[dest]|
|elim tiny_semicategoryE[elim]|
lemma tiny_semicategoryI':
assumes "semicategory \<alpha> \<CC>" and "\<CC>\<lparr>Obj\<rparr> \<in>\<^sub>\<circ> Vset \<alpha>" and "\<CC>\<lparr>Arr\<rparr> \<in>\<^sub>\<circ> Vset \<alpha>"
shows "tiny_semicategory \<alpha> \<CC>"
proof-
interpret semicategory \<alpha> \<CC> by (rule assms(1))
show ?thesis
proof(intro tiny_semicategoryI)
show "vfsequence \<CC>" by (simp add: vfsequence_axioms)
from assms show "tiny_digraph \<alpha> (smc_dg \<CC>)"
by (intro tiny_digraphI') (auto simp: slicing_simps)
qed (auto simp: smc_cs_simps intro: smc_cs_intros)
qed
lemma tiny_semicategoryI'':
assumes "\<Z> \<alpha>"
and "vfsequence \<CC>"
and "vsv (\<CC>\<lparr>Comp\<rparr>)"
and "vcard \<CC> = 5\<^sub>\<nat>"
and "vsv (\<CC>\<lparr>Dom\<rparr>)"
and "vsv (\<CC>\<lparr>Cod\<rparr>)"
and "\<D>\<^sub>\<circ> (\<CC>\<lparr>Dom\<rparr>) = \<CC>\<lparr>Arr\<rparr>"
and "\<R>\<^sub>\<circ> (\<CC>\<lparr>Dom\<rparr>) \<subseteq>\<^sub>\<circ> \<CC>\<lparr>Obj\<rparr>"
and "\<D>\<^sub>\<circ> (\<CC>\<lparr>Cod\<rparr>) = \<CC>\<lparr>Arr\<rparr>"
and "\<R>\<^sub>\<circ> (\<CC>\<lparr>Cod\<rparr>) \<subseteq>\<^sub>\<circ> \<CC>\<lparr>Obj\<rparr>"
and "\<And>gf. gf \<in>\<^sub>\<circ> \<D>\<^sub>\<circ> (\<CC>\<lparr>Comp\<rparr>) \<longleftrightarrow>
(\<exists>g f b c a. gf = [g, f]\<^sub>\<circ> \<and> g : b \<mapsto>\<^bsub>\<CC>\<^esub> c \<and> f : a \<mapsto>\<^bsub>\<CC>\<^esub> b)"
and "\<And>b c g a f. \<lbrakk> g : b \<mapsto>\<^bsub>\<CC>\<^esub> c; f : a \<mapsto>\<^bsub>\<CC>\<^esub> b \<rbrakk> \<Longrightarrow> g \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> f : a \<mapsto>\<^bsub>\<CC>\<^esub> c"
and "\<And>c d h b g a f. \<lbrakk> h : c \<mapsto>\<^bsub>\<CC>\<^esub> d; g : b \<mapsto>\<^bsub>\<CC>\<^esub> c; f : a \<mapsto>\<^bsub>\<CC>\<^esub> b \<rbrakk> \<Longrightarrow>
(h \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> g) \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> f = h \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> (g \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> f)"
and "\<CC>\<lparr>Obj\<rparr> \<in>\<^sub>\<circ> Vset \<alpha>"
and "\<CC>\<lparr>Arr\<rparr> \<in>\<^sub>\<circ> Vset \<alpha>"
shows "tiny_semicategory \<alpha> \<CC>"
by (intro tiny_semicategoryI tiny_digraphI, unfold slicing_simps)
(simp_all add: smc_dg_def nat_omega_simps assms)
text\<open>Slicing.\<close>
context tiny_semicategory
begin
interpretation dg: tiny_digraph \<alpha> \<open>smc_dg \<CC>\<close> by (rule tiny_smc_tiny_digraph)
lemmas_with [unfolded slicing_simps]:
tiny_smc_Obj_in_Vset[smc_small_cs_intros] = dg.tiny_dg_Obj_in_Vset
and tiny_smc_Arr_in_Vset[smc_small_cs_intros] = dg.tiny_dg_Arr_in_Vset
and tiny_smc_Dom_in_Vset[smc_small_cs_intros] = dg.tiny_dg_Dom_in_Vset
and tiny_smc_Cod_in_Vset[smc_small_cs_intros] = dg.tiny_dg_Cod_in_Vset
end
text\<open>Elementary properties.\<close>
sublocale tiny_semicategory \<subseteq> semicategory
by (rule semicategoryI)
(
auto
simp:
vfsequence_axioms
tiny_digraph.tiny_dg_digraph
tiny_smc_tiny_digraph
tiny_smc_Comp_vdomain
intro: smc_cs_intros smc_cs_simps
)
lemmas (in tiny_semicategory) tiny_smc_semicategory = semicategory_axioms
lemmas [smc_small_cs_intros] = tiny_semicategory.tiny_smc_semicategory
text\<open>Size.\<close>
lemma (in tiny_semicategory) tiny_smc_Comp_in_Vset: "\<CC>\<lparr>Comp\<rparr> \<in>\<^sub>\<circ> Vset \<alpha>"
proof-
have "\<CC>\<lparr>Arr\<rparr> \<in>\<^sub>\<circ> Vset \<alpha>" by (simp add: tiny_smc_Arr_in_Vset)
with Axiom_of_Infinity have "\<CC>\<lparr>Arr\<rparr> ^\<^sub>\<times> 2\<^sub>\<nat> \<in>\<^sub>\<circ> Vset \<alpha>"
by (intro Limit_vcpower_in_VsetI) auto
with Comp.pnop_vdomain have D: "\<D>\<^sub>\<circ> (\<CC>\<lparr>Comp\<rparr>) \<in>\<^sub>\<circ> Vset \<alpha>" by auto
moreover from tiny_smc_Arr_in_Vset smc_Comp_vrange have
"\<R>\<^sub>\<circ> (\<CC>\<lparr>Comp\<rparr>) \<in>\<^sub>\<circ> Vset \<alpha>"
by auto
ultimately show ?thesis by (simp add: Comp.vbrelation_Limit_in_VsetI)
qed
lemma (in tiny_semicategory) tiny_smc_in_Vset: "\<CC> \<in>\<^sub>\<circ> Vset \<alpha>"
proof-
note [smc_cs_intros] =
tiny_smc_Obj_in_Vset
tiny_smc_Arr_in_Vset
tiny_smc_Dom_in_Vset
tiny_smc_Cod_in_Vset
tiny_smc_Comp_in_Vset
show ?thesis
by (subst smc_def) (cs_concl cs_shallow cs_intro: smc_cs_intros V_cs_intros)
qed
lemma small_tiny_semicategories[simp]: "small {\<CC>. tiny_semicategory \<alpha> \<CC>}"
proof(rule down)
show "{\<CC>. tiny_semicategory \<alpha> \<CC>} \<subseteq> elts (set {\<CC>. semicategory \<alpha> \<CC>})"
by (auto intro: smc_small_cs_intros)
qed
lemma tiny_semicategories_vsubset_Vset:
"set {\<CC>. tiny_semicategory \<alpha> \<CC>} \<subseteq>\<^sub>\<circ> Vset \<alpha>"
by (rule vsubsetI) (simp add: tiny_semicategory.tiny_smc_in_Vset)
lemma (in semicategory) smc_tiny_semicategory_if_ge_Limit:
assumes "\<Z> \<beta>" and "\<alpha> \<in>\<^sub>\<circ> \<beta>"
shows "tiny_semicategory \<beta> \<CC>"
proof(intro tiny_semicategoryI)
show "tiny_digraph \<beta> (smc_dg \<CC>)"
by (rule digraph.dg_tiny_digraph_if_ge_Limit, rule smc_digraph; intro assms)
qed
(
auto simp:
assms(1)
smc_cs_simps
smc_cs_intros
smc_digraph digraph.dg_tiny_digraph_if_ge_Limit
smc_Comp_vdomain vfsequence_axioms
)
subsubsection\<open>Opposite tiny semicategory\<close>
lemma (in tiny_semicategory) tiny_semicategory_op:
"tiny_semicategory \<alpha> (op_smc \<CC>)"
by (intro tiny_semicategoryI', unfold smc_op_simps)
(auto simp: smc_op_intros smc_small_cs_intros)
lemmas tiny_semicategory_op[smc_op_intros] =
tiny_semicategory.tiny_semicategory_op
subsection\<open>Finite semicategory\<close>
subsubsection\<open>Definition and elementary properties\<close>
text\<open>
A finite semicategory is a generalization of the concept of a finite category,
as presented in nLab
\<^cite>\<open>"noauthor_nlab_nodate"\<close>
\footnote{\url{https://ncatlab.org/nlab/show/finite+category}}.
\<close>
locale finite_semicategory = \<Z> \<alpha> + vfsequence \<CC> + Comp: vsv \<open>\<CC>\<lparr>Comp\<rparr>\<close> for \<alpha> \<CC> +
assumes fin_smc_length[smc_cs_simps]: "vcard \<CC> = 5\<^sub>\<nat>"
and fin_smc_finite_digraph[slicing_intros]: "finite_digraph \<alpha> (smc_dg \<CC>)"
and fin_smc_Comp_vdomain: "gf \<in>\<^sub>\<circ> \<D>\<^sub>\<circ> (\<CC>\<lparr>Comp\<rparr>) \<longleftrightarrow>
(\<exists>g f b c a. gf = [g, f]\<^sub>\<circ> \<and> g : b \<mapsto>\<^bsub>\<CC>\<^esub> c \<and> f : a \<mapsto>\<^bsub>\<CC>\<^esub> b)"
and fin_smc_Comp_is_arr[smc_cs_intros]:
"\<lbrakk> g : b \<mapsto>\<^bsub>\<CC>\<^esub> c; f : a \<mapsto>\<^bsub>\<CC>\<^esub> b \<rbrakk> \<Longrightarrow> g \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> f : a \<mapsto>\<^bsub>\<CC>\<^esub> c"
and fin_smc_assoc[smc_cs_simps]:
"\<lbrakk> h : c \<mapsto>\<^bsub>\<CC>\<^esub> d; g : b \<mapsto>\<^bsub>\<CC>\<^esub> c; f : a \<mapsto>\<^bsub>\<CC>\<^esub> b \<rbrakk> \<Longrightarrow>
(h \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> g) \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> f = h \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> (g \<circ>\<^sub>A\<^bsub>\<CC>\<^esub> f)"
lemmas [smc_cs_simps] =
finite_semicategory.fin_smc_length
finite_semicategory.fin_smc_assoc
lemmas [slicing_intros] =
finite_semicategory.fin_smc_Comp_is_arr
text\<open>Rules.\<close>
lemma (in finite_semicategory) finite_semicategory_axioms'[smc_small_cs_intros]:
assumes "\<alpha>' = \<alpha>"
shows "finite_semicategory \<alpha>' \<CC>"
unfolding assms by (rule finite_semicategory_axioms)
mk_ide rf finite_semicategory_def[unfolded finite_semicategory_axioms_def]
|intro finite_semicategoryI|
|dest finite_semicategoryD[dest]|
|elim finite_semicategoryE[elim]|
lemma finite_semicategoryI':
assumes "semicategory \<alpha> \<CC>" and "vfinite (\<CC>\<lparr>Obj\<rparr>)" and "vfinite (\<CC>\<lparr>Arr\<rparr>)"
shows "finite_semicategory \<alpha> \<CC>"
proof-
interpret semicategory \<alpha> \<CC> by (rule assms(1))
show ?thesis
proof(intro finite_semicategoryI)
show "vfsequence \<CC>" by (simp add: vfsequence_axioms)
from assms show "finite_digraph \<alpha> (smc_dg \<CC>)"
by (intro finite_digraphI) (auto simp: slicing_simps)
qed (auto simp: smc_cs_simps intro: smc_cs_intros)
qed
lemma finite_semicategoryI'':
assumes "tiny_semicategory \<alpha> \<CC>" and "vfinite (\<CC>\<lparr>Obj\<rparr>)" and "vfinite (\<CC>\<lparr>Arr\<rparr>)"
shows "finite_semicategory \<alpha> \<CC>"
using assms by (intro finite_semicategoryI')
(auto intro: smc_cs_intros smc_small_cs_intros)
text\<open>Slicing.\<close>
context finite_semicategory
begin
interpretation dg: finite_digraph \<alpha> \<open>smc_dg \<CC>\<close> by (rule fin_smc_finite_digraph)
lemmas_with [unfolded slicing_simps]:
fin_smc_Obj_vfinite[smc_small_cs_intros] = dg.fin_dg_Obj_vfinite
and fin_smc_Arr_vfinite[smc_small_cs_intros] = dg.fin_dg_Arr_vfinite
end
text\<open>Elementary properties.\<close>
sublocale finite_semicategory \<subseteq> tiny_semicategory
by (rule tiny_semicategoryI)
(
auto simp:
vfsequence_axioms
fin_smc_Comp_vdomain
fin_smc_finite_digraph
finite_digraph.fin_dg_tiny_digraph
intro: smc_cs_intros smc_cs_simps
)
lemmas (in finite_semicategory) fin_smc_tiny_semicategory =
tiny_semicategory_axioms
lemmas [smc_small_cs_intros] = finite_semicategory.fin_smc_tiny_semicategory
lemma (in finite_semicategory) fin_smc_in_Vset: "\<CC> \<in>\<^sub>\<circ> Vset \<alpha>"
by (rule tiny_smc_in_Vset)
text\<open>Size.\<close>
lemma small_finite_semicategories[simp]: "small {\<CC>. finite_semicategory \<alpha> \<CC>}"
proof(rule down)
show "{\<CC>. finite_semicategory \<alpha> \<CC>} \<subseteq> elts (set {\<CC>. semicategory \<alpha> \<CC>})"
by (auto intro: smc_small_cs_intros)
qed
lemma finite_semicategories_vsubset_Vset:
"set {\<CC>. finite_semicategory \<alpha> \<CC>} \<subseteq>\<^sub>\<circ> Vset \<alpha>"
by (rule vsubsetI) (simp add: finite_semicategory.fin_smc_in_Vset)
subsubsection\<open>Opposite finite semicategory\<close>
lemma (in finite_semicategory) finite_semicategory_op:
"finite_semicategory \<alpha> (op_smc \<CC>)"
by (intro finite_semicategoryI', unfold smc_op_simps)
(auto simp: smc_op_intros smc_small_cs_intros)
lemmas finite_semicategory_op[smc_op_intros] =
finite_semicategory.finite_semicategory_op
text\<open>\newpage\<close>
end
|
module Models
using ..FastAI
using BSON
using Flux
using Zygote
using DataDeps
using Metalhead
include("./datadeps.jl")
function __init__()
initdatadeps()
end
include("layers.jl")
include("blocks.jl")
include("./metalhead.jl")
include("./xresnet.jl")
export xresnet18, xresnet50
end
|
[STATEMENT]
lemma new_thread_bisim0_extNTA2J_extNTA2J0:
assumes wf: "wwf_J_prog P"
and red: "P,t \<turnstile> \<langle>a'\<bullet>M'(vs), h\<rangle> -ta\<rightarrow>ext \<langle>va, h'\<rangle>"
and nt: "NewThread t' CMa m \<in> set \<lbrace>ta\<rbrace>\<^bsub>t\<^esub>"
shows "bisim_red_red0 (extNTA2J P CMa, m) (extNTA2J0 P CMa, m)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. bisim_red_red0 (extNTA2J P CMa, m) (extNTA2J0 P CMa, m)
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. bisim_red_red0 (extNTA2J P CMa, m) (extNTA2J0 P CMa, m)
[PROOF STEP]
obtain C M a where CMa [simp]: "CMa = (C, M, a)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<And>C M a. CMa = (C, M, a) \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by(cases CMa)
[PROOF STATE]
proof (state)
this:
CMa = (C, M, a)
goal (1 subgoal):
1. bisim_red_red0 (extNTA2J P CMa, m) (extNTA2J0 P CMa, m)
[PROOF STEP]
from red nt
[PROOF STATE]
proof (chain)
picking this:
P,t \<turnstile> \<langle>a'\<bullet>M'(vs),h\<rangle> -ta\<rightarrow>ext \<langle>va,h'\<rangle>
NewThread t' CMa m \<in> set \<lbrace>ta\<rbrace>\<^bsub>t\<^esub>
[PROOF STEP]
have [simp]: "m = h'"
[PROOF STATE]
proof (prove)
using this:
P,t \<turnstile> \<langle>a'\<bullet>M'(vs),h\<rangle> -ta\<rightarrow>ext \<langle>va,h'\<rangle>
NewThread t' CMa m \<in> set \<lbrace>ta\<rbrace>\<^bsub>t\<^esub>
goal (1 subgoal):
1. m = h'
[PROOF STEP]
by(rule red_ext_new_thread_heap)
[PROOF STATE]
proof (state)
this:
m = h'
goal (1 subgoal):
1. bisim_red_red0 (extNTA2J P CMa, m) (extNTA2J0 P CMa, m)
[PROOF STEP]
from red_external_new_thread_sees[OF wf red nt[unfolded CMa]]
[PROOF STATE]
proof (chain)
picking this:
typeof_addr h' a = \<lfloor>Class_type C\<rfloor> \<and> (\<exists>T meth D. P \<turnstile> C sees M: []\<rightarrow>T = \<lfloor>meth\<rfloor> in D)
[PROOF STEP]
obtain T pns body D where h'a: "typeof_addr h' a = \<lfloor>Class_type C\<rfloor>"
and sees: "P \<turnstile> C sees M: []\<rightarrow>T = \<lfloor>(pns, body)\<rfloor> in D"
[PROOF STATE]
proof (prove)
using this:
typeof_addr h' a = \<lfloor>Class_type C\<rfloor> \<and> (\<exists>T meth D. P \<turnstile> C sees M: []\<rightarrow>T = \<lfloor>meth\<rfloor> in D)
goal (1 subgoal):
1. (\<And>T pns body D. \<lbrakk>typeof_addr h' a = \<lfloor>Class_type C\<rfloor>; P \<turnstile> C sees M: []\<rightarrow>T = \<lfloor>(pns, body)\<rfloor> in D\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
typeof_addr h' a = \<lfloor>Class_type C\<rfloor>
P \<turnstile> C sees M: []\<rightarrow>T = \<lfloor>(pns, body)\<rfloor> in D
goal (1 subgoal):
1. bisim_red_red0 (extNTA2J P CMa, m) (extNTA2J0 P CMa, m)
[PROOF STEP]
from sees_wf_mdecl[OF wf sees]
[PROOF STATE]
proof (chain)
picking this:
wf_mdecl wwf_J_mdecl P D (M, [], T, \<lfloor>(pns, body)\<rfloor>)
[PROOF STEP]
have "fv body \<subseteq> {this}"
[PROOF STATE]
proof (prove)
using this:
wf_mdecl wwf_J_mdecl P D (M, [], T, \<lfloor>(pns, body)\<rfloor>)
goal (1 subgoal):
1. fv body \<subseteq> {this}
[PROOF STEP]
by(auto simp add: wf_mdecl_def)
[PROOF STATE]
proof (state)
this:
fv body \<subseteq> {this}
goal (1 subgoal):
1. bisim_red_red0 (extNTA2J P CMa, m) (extNTA2J0 P CMa, m)
[PROOF STEP]
with red nt h'a sees
[PROOF STATE]
proof (chain)
picking this:
P,t \<turnstile> \<langle>a'\<bullet>M'(vs),h\<rangle> -ta\<rightarrow>ext \<langle>va,h'\<rangle>
NewThread t' CMa m \<in> set \<lbrace>ta\<rbrace>\<^bsub>t\<^esub>
typeof_addr h' a = \<lfloor>Class_type C\<rfloor>
P \<turnstile> C sees M: []\<rightarrow>T = \<lfloor>(pns, body)\<rfloor> in D
fv body \<subseteq> {this}
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
P,t \<turnstile> \<langle>a'\<bullet>M'(vs),h\<rangle> -ta\<rightarrow>ext \<langle>va,h'\<rangle>
NewThread t' CMa m \<in> set \<lbrace>ta\<rbrace>\<^bsub>t\<^esub>
typeof_addr h' a = \<lfloor>Class_type C\<rfloor>
P \<turnstile> C sees M: []\<rightarrow>T = \<lfloor>(pns, body)\<rfloor> in D
fv body \<subseteq> {this}
goal (1 subgoal):
1. bisim_red_red0 (extNTA2J P CMa, m) (extNTA2J0 P CMa, m)
[PROOF STEP]
by(fastforce simp add: is_call_def intro: bisim_red_red0.intros)
[PROOF STATE]
proof (state)
this:
bisim_red_red0 (extNTA2J P CMa, m) (extNTA2J0 P CMa, m)
goal:
No subgoals!
[PROOF STEP]
qed
|
State Before: R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
⊢ ↑(coeff R n) (inv.aux a φ) =
if n = 0 then a
else
-a *
∑ x in Finset.Nat.antidiagonal n, if x.snd < n then ↑(coeff R x.fst) φ * ↑(coeff R x.snd) (inv.aux a φ) else 0 State After: R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
⊢ (if single () n = 0 then a
else
-a *
∑ x in Finsupp.antidiagonal (single () n),
if x.snd < single () n then
↑(MvPowerSeries.coeff R x.fst) φ * ↑(MvPowerSeries.coeff R x.snd) (MvPowerSeries.inv.aux a φ)
else 0) =
if n = 0 then a
else
-a *
∑ x in Finset.Nat.antidiagonal n,
if x.snd < n then ↑(coeff R x.fst) φ * ↑(coeff R x.snd) (MvPowerSeries.inv.aux a φ) else 0 Tactic: rw [coeff, inv.aux, MvPowerSeries.coeff_inv_aux] State Before: R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
⊢ (if single () n = 0 then a
else
-a *
∑ x in Finsupp.antidiagonal (single () n),
if x.snd < single () n then
↑(MvPowerSeries.coeff R x.fst) φ * ↑(MvPowerSeries.coeff R x.snd) (MvPowerSeries.inv.aux a φ)
else 0) =
if n = 0 then a
else
-a *
∑ x in Finset.Nat.antidiagonal n,
if x.snd < n then ↑(coeff R x.fst) φ * ↑(coeff R x.snd) (MvPowerSeries.inv.aux a φ) else 0 State After: R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
⊢ (if n = 0 then a
else
-a *
∑ x in Finsupp.antidiagonal (single () n),
if x.snd < single () n then
↑(MvPowerSeries.coeff R x.fst) φ * ↑(MvPowerSeries.coeff R x.snd) (MvPowerSeries.inv.aux a φ)
else 0) =
if n = 0 then a
else
-a *
∑ x in Finset.Nat.antidiagonal n,
if x.snd < n then ↑(coeff R x.fst) φ * ↑(coeff R x.snd) (MvPowerSeries.inv.aux a φ) else 0 Tactic: simp only [Finsupp.single_eq_zero] State Before: R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
⊢ (if n = 0 then a
else
-a *
∑ x in Finsupp.antidiagonal (single () n),
if x.snd < single () n then
↑(MvPowerSeries.coeff R x.fst) φ * ↑(MvPowerSeries.coeff R x.snd) (MvPowerSeries.inv.aux a φ)
else 0) =
if n = 0 then a
else
-a *
∑ x in Finset.Nat.antidiagonal n,
if x.snd < n then ↑(coeff R x.fst) φ * ↑(coeff R x.snd) (MvPowerSeries.inv.aux a φ) else 0 State After: case inl
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : n = 0
⊢ a = a
case inr
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
⊢ (-a *
∑ x in Finsupp.antidiagonal (single () n),
if x.snd < single () n then
↑(MvPowerSeries.coeff R x.fst) φ * ↑(MvPowerSeries.coeff R x.snd) (MvPowerSeries.inv.aux a φ)
else 0) =
-a *
∑ x in Finset.Nat.antidiagonal n,
if x.snd < n then ↑(coeff R x.fst) φ * ↑(coeff R x.snd) (MvPowerSeries.inv.aux a φ) else 0 Tactic: split_ifs State Before: case inr
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
⊢ (-a *
∑ x in Finsupp.antidiagonal (single () n),
if x.snd < single () n then
↑(MvPowerSeries.coeff R x.fst) φ * ↑(MvPowerSeries.coeff R x.snd) (MvPowerSeries.inv.aux a φ)
else 0) =
-a *
∑ x in Finset.Nat.antidiagonal n,
if x.snd < n then ↑(coeff R x.fst) φ * ↑(coeff R x.snd) (MvPowerSeries.inv.aux a φ) else 0 State After: case inr.e_a
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
⊢ (∑ x in Finsupp.antidiagonal (single () n),
if x.snd < single () n then
↑(MvPowerSeries.coeff R x.fst) φ * ↑(MvPowerSeries.coeff R x.snd) (MvPowerSeries.inv.aux a φ)
else 0) =
∑ x in Finset.Nat.antidiagonal n,
if x.snd < n then ↑(coeff R x.fst) φ * ↑(coeff R x.snd) (MvPowerSeries.inv.aux a φ) else 0 Tactic: congr 1 State Before: case inr.e_a
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
⊢ (∑ x in Finsupp.antidiagonal (single () n),
if x.snd < single () n then
↑(MvPowerSeries.coeff R x.fst) φ * ↑(MvPowerSeries.coeff R x.snd) (MvPowerSeries.inv.aux a φ)
else 0) =
∑ x in Finset.Nat.antidiagonal n,
if x.snd < n then ↑(coeff R x.fst) φ * ↑(coeff R x.snd) (MvPowerSeries.inv.aux a φ) else 0 State After: case inr.e_a
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
⊢ (∑ x in Finset.Nat.antidiagonal n,
if x.snd < n then ↑(coeff R x.fst) φ * ↑(coeff R x.snd) (MvPowerSeries.inv.aux a φ) else 0) =
∑ x in Finsupp.antidiagonal (single () n),
if x.snd < single () n then
↑(MvPowerSeries.coeff R x.fst) φ * ↑(MvPowerSeries.coeff R x.snd) (MvPowerSeries.inv.aux a φ)
else 0 Tactic: symm State Before: case inr.e_a
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
⊢ (∑ x in Finset.Nat.antidiagonal n,
if x.snd < n then ↑(coeff R x.fst) φ * ↑(coeff R x.snd) (MvPowerSeries.inv.aux a φ) else 0) =
∑ x in Finsupp.antidiagonal (single () n),
if x.snd < single () n then
↑(MvPowerSeries.coeff R x.fst) φ * ↑(MvPowerSeries.coeff R x.snd) (MvPowerSeries.inv.aux a φ)
else 0 State After: case inr.e_a.hi
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
⊢ ∀ (a : ℕ × ℕ), a ∈ Finset.Nat.antidiagonal n → (single () a.fst, single () a.snd) ∈ Finsupp.antidiagonal (single () n)
case inr.e_a.h
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
⊢ ∀ (a_1 : ℕ × ℕ),
a_1 ∈ Finset.Nat.antidiagonal n →
(if a_1.snd < n then ↑(coeff R a_1.fst) φ * ↑(coeff R a_1.snd) (MvPowerSeries.inv.aux a φ) else 0) =
if (single () a_1.fst, single () a_1.snd).snd < single () n then
↑(MvPowerSeries.coeff R (single () a_1.fst, single () a_1.snd).fst) φ *
↑(MvPowerSeries.coeff R (single () a_1.fst, single () a_1.snd).snd) (MvPowerSeries.inv.aux a φ)
else 0
case inr.e_a.i_inj
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
⊢ ∀ (a₁ a₂ : ℕ × ℕ),
a₁ ∈ Finset.Nat.antidiagonal n →
a₂ ∈ Finset.Nat.antidiagonal n →
(single () a₁.fst, single () a₁.snd) = (single () a₂.fst, single () a₂.snd) → a₁ = a₂
case inr.e_a.i_surj
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
⊢ ∀ (b : (Unit →₀ ℕ) × (Unit →₀ ℕ)),
b ∈ Finsupp.antidiagonal (single () n) → ∃ a ha, b = (single () a.fst, single () a.snd) Tactic: apply Finset.sum_bij fun (p : ℕ × ℕ) _h => (single () p.1, single () p.2) State Before: case inl
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : n = 0
⊢ a = a State After: no goals Tactic: rfl State Before: case inr.e_a.hi
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
⊢ ∀ (a : ℕ × ℕ), a ∈ Finset.Nat.antidiagonal n → (single () a.fst, single () a.snd) ∈ Finsupp.antidiagonal (single () n) State After: case inr.e_a.hi.mk
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
hij : (i, j) ∈ Finset.Nat.antidiagonal n
⊢ (single () (i, j).fst, single () (i, j).snd) ∈ Finsupp.antidiagonal (single () n) Tactic: rintro ⟨i, j⟩ hij State Before: case inr.e_a.hi.mk
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
hij : (i, j) ∈ Finset.Nat.antidiagonal n
⊢ (single () (i, j).fst, single () (i, j).snd) ∈ Finsupp.antidiagonal (single () n) State After: case inr.e_a.hi.mk
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
hij : (i, j).fst + (i, j).snd = n
⊢ (single () (i, j).fst, single () (i, j).snd) ∈ Finsupp.antidiagonal (single () n) Tactic: rw [Finset.Nat.mem_antidiagonal] at hij State Before: case inr.e_a.hi.mk
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
hij : (i, j).fst + (i, j).snd = n
⊢ (single () (i, j).fst, single () (i, j).snd) ∈ Finsupp.antidiagonal (single () n) State After: no goals Tactic: rw [Finsupp.mem_antidiagonal, ← Finsupp.single_add, hij] State Before: case inr.e_a.h
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
⊢ ∀ (a_1 : ℕ × ℕ),
a_1 ∈ Finset.Nat.antidiagonal n →
(if a_1.snd < n then ↑(coeff R a_1.fst) φ * ↑(coeff R a_1.snd) (MvPowerSeries.inv.aux a φ) else 0) =
if (single () a_1.fst, single () a_1.snd).snd < single () n then
↑(MvPowerSeries.coeff R (single () a_1.fst, single () a_1.snd).fst) φ *
↑(MvPowerSeries.coeff R (single () a_1.fst, single () a_1.snd).snd) (MvPowerSeries.inv.aux a φ)
else 0 State After: case inr.e_a.h.mk
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
⊢ (if (i, j).snd < n then ↑(coeff R (i, j).fst) φ * ↑(coeff R (i, j).snd) (MvPowerSeries.inv.aux a φ) else 0) =
if (single () (i, j).fst, single () (i, j).snd).snd < single () n then
↑(MvPowerSeries.coeff R (single () (i, j).fst, single () (i, j).snd).fst) φ *
↑(MvPowerSeries.coeff R (single () (i, j).fst, single () (i, j).snd).snd) (MvPowerSeries.inv.aux a φ)
else 0 Tactic: rintro ⟨i, j⟩ _hij State Before: case inr.e_a.h.mk
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
⊢ (if (i, j).snd < n then ↑(coeff R (i, j).fst) φ * ↑(coeff R (i, j).snd) (MvPowerSeries.inv.aux a φ) else 0) =
if (single () (i, j).fst, single () (i, j).snd).snd < single () n then
↑(MvPowerSeries.coeff R (single () (i, j).fst, single () (i, j).snd).fst) φ *
↑(MvPowerSeries.coeff R (single () (i, j).fst, single () (i, j).snd).snd) (MvPowerSeries.inv.aux a φ)
else 0 State After: case pos
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : j < n
⊢ (if (i, j).snd < n then ↑(coeff R (i, j).fst) φ * ↑(coeff R (i, j).snd) (MvPowerSeries.inv.aux a φ) else 0) =
if (single () (i, j).fst, single () (i, j).snd).snd < single () n then
↑(MvPowerSeries.coeff R (single () (i, j).fst, single () (i, j).snd).fst) φ *
↑(MvPowerSeries.coeff R (single () (i, j).fst, single () (i, j).snd).snd) (MvPowerSeries.inv.aux a φ)
else 0
case neg
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : ¬j < n
⊢ (if (i, j).snd < n then ↑(coeff R (i, j).fst) φ * ↑(coeff R (i, j).snd) (MvPowerSeries.inv.aux a φ) else 0) =
if (single () (i, j).fst, single () (i, j).snd).snd < single () n then
↑(MvPowerSeries.coeff R (single () (i, j).fst, single () (i, j).snd).fst) φ *
↑(MvPowerSeries.coeff R (single () (i, j).fst, single () (i, j).snd).snd) (MvPowerSeries.inv.aux a φ)
else 0 Tactic: by_cases H : j < n State Before: case pos
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : j < n
⊢ (if (i, j).snd < n then ↑(coeff R (i, j).fst) φ * ↑(coeff R (i, j).snd) (MvPowerSeries.inv.aux a φ) else 0) =
if (single () (i, j).fst, single () (i, j).snd).snd < single () n then
↑(MvPowerSeries.coeff R (single () (i, j).fst, single () (i, j).snd).fst) φ *
↑(MvPowerSeries.coeff R (single () (i, j).fst, single () (i, j).snd).snd) (MvPowerSeries.inv.aux a φ)
else 0 State After: case pos
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : j < n
⊢ ↑(coeff R (i, j).fst) φ * ↑(coeff R (i, j).snd) (MvPowerSeries.inv.aux a φ) =
↑(MvPowerSeries.coeff R (single () (i, j).fst, single () (i, j).snd).fst) φ *
↑(MvPowerSeries.coeff R (single () (i, j).fst, single () (i, j).snd).snd) (MvPowerSeries.inv.aux a φ)
case pos.hc
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : j < n
⊢ (single () (i, j).fst, single () (i, j).snd).snd < single () n Tactic: rw [if_pos H, if_pos] State Before: case pos.hc
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : j < n
⊢ (single () (i, j).fst, single () (i, j).snd).snd < single () n State After: case pos.hc.left
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : j < n
⊢ ∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () (i, j).fst, single () (i, j).snd).snd i_1) (↑(single () n) i_1)
case pos.hc.right
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : j < n
⊢ ¬∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () n) i_1) (↑(single () (i, j).fst, single () (i, j).snd).snd i_1) Tactic: constructor State Before: case pos
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : j < n
⊢ ↑(coeff R (i, j).fst) φ * ↑(coeff R (i, j).snd) (MvPowerSeries.inv.aux a φ) =
↑(MvPowerSeries.coeff R (single () (i, j).fst, single () (i, j).snd).fst) φ *
↑(MvPowerSeries.coeff R (single () (i, j).fst, single () (i, j).snd).snd) (MvPowerSeries.inv.aux a φ) State After: no goals Tactic: rfl State Before: case pos.hc.left
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : j < n
⊢ ∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () (i, j).fst, single () (i, j).snd).snd i_1) (↑(single () n) i_1) State After: case pos.hc.left.unit
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : j < n
⊢ PartialOrder.toPreorder.1.1 (↑(single () (i, j).fst, single () (i, j).snd).snd PUnit.unit) (↑(single () n) PUnit.unit) Tactic: rintro ⟨⟩ State Before: case pos.hc.left.unit
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : j < n
⊢ PartialOrder.toPreorder.1.1 (↑(single () (i, j).fst, single () (i, j).snd).snd PUnit.unit) (↑(single () n) PUnit.unit) State After: no goals Tactic: simpa [Finsupp.single_eq_same] using le_of_lt H State Before: case pos.hc.right
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : j < n
⊢ ¬∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () n) i_1) (↑(single () (i, j).fst, single () (i, j).snd).snd i_1) State After: case pos.hc.right
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : j < n
hh :
∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () n) i_1) (↑(single () (i, j).fst, single () (i, j).snd).snd i_1)
⊢ False Tactic: intro hh State Before: case pos.hc.right
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : j < n
hh :
∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () n) i_1) (↑(single () (i, j).fst, single () (i, j).snd).snd i_1)
⊢ False State After: case pos.hc.right
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : ¬j ≥ n
hh :
∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () n) i_1) (↑(single () (i, j).fst, single () (i, j).snd).snd i_1)
⊢ False Tactic: rw [lt_iff_not_ge] at H State Before: case pos.hc.right
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : ¬j ≥ n
hh :
∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () n) i_1) (↑(single () (i, j).fst, single () (i, j).snd).snd i_1)
⊢ False State After: case pos.hc.right
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : ¬j ≥ n
hh :
∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () n) i_1) (↑(single () (i, j).fst, single () (i, j).snd).snd i_1)
⊢ j ≥ n Tactic: apply H State Before: case pos.hc.right
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : ¬j ≥ n
hh :
∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () n) i_1) (↑(single () (i, j).fst, single () (i, j).snd).snd i_1)
⊢ j ≥ n State After: no goals Tactic: simpa [Finsupp.single_eq_same] using hh () State Before: case neg
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : ¬j < n
⊢ (if (i, j).snd < n then ↑(coeff R (i, j).fst) φ * ↑(coeff R (i, j).snd) (MvPowerSeries.inv.aux a φ) else 0) =
if (single () (i, j).fst, single () (i, j).snd).snd < single () n then
↑(MvPowerSeries.coeff R (single () (i, j).fst, single () (i, j).snd).fst) φ *
↑(MvPowerSeries.coeff R (single () (i, j).fst, single () (i, j).snd).snd) (MvPowerSeries.inv.aux a φ)
else 0 State After: case neg.hnc
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : ¬j < n
⊢ ¬(single () (i, j).fst, single () (i, j).snd).snd < single () n Tactic: rw [if_neg H, if_neg] State Before: case neg.hnc
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : ¬j < n
⊢ ¬(single () (i, j).fst, single () (i, j).snd).snd < single () n State After: case neg.hnc.intro
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : ¬j < n
_h₁ :
∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () (i, j).fst, single () (i, j).snd).snd i_1) (↑(single () n) i_1)
h₂ :
¬∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () n) i_1) (↑(single () (i, j).fst, single () (i, j).snd).snd i_1)
⊢ False Tactic: rintro ⟨_h₁, h₂⟩ State Before: case neg.hnc.intro
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : ¬j < n
_h₁ :
∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () (i, j).fst, single () (i, j).snd).snd i_1) (↑(single () n) i_1)
h₂ :
¬∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () n) i_1) (↑(single () (i, j).fst, single () (i, j).snd).snd i_1)
⊢ False State After: case neg.hnc.intro
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : ¬j < n
_h₁ :
∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () (i, j).fst, single () (i, j).snd).snd i_1) (↑(single () n) i_1)
h₂ :
¬∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () n) i_1) (↑(single () (i, j).fst, single () (i, j).snd).snd i_1)
⊢ ∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () n) i_1) (↑(single () (i, j).fst, single () (i, j).snd).snd i_1) Tactic: apply h₂ State Before: case neg.hnc.intro
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : ¬j < n
_h₁ :
∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () (i, j).fst, single () (i, j).snd).snd i_1) (↑(single () n) i_1)
h₂ :
¬∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () n) i_1) (↑(single () (i, j).fst, single () (i, j).snd).snd i_1)
⊢ ∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () n) i_1) (↑(single () (i, j).fst, single () (i, j).snd).snd i_1) State After: case neg.hnc.intro.unit
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : ¬j < n
_h₁ :
∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () (i, j).fst, single () (i, j).snd).snd i_1) (↑(single () n) i_1)
h₂ :
¬∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () n) i_1) (↑(single () (i, j).fst, single () (i, j).snd).snd i_1)
⊢ PartialOrder.toPreorder.1.1 (↑(single () n) PUnit.unit) (↑(single () (i, j).fst, single () (i, j).snd).snd PUnit.unit) Tactic: rintro ⟨⟩ State Before: case neg.hnc.intro.unit
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
H : ¬j < n
_h₁ :
∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () (i, j).fst, single () (i, j).snd).snd i_1) (↑(single () n) i_1)
h₂ :
¬∀ (i_1 : Unit),
PartialOrder.toPreorder.1.1 (↑(single () n) i_1) (↑(single () (i, j).fst, single () (i, j).snd).snd i_1)
⊢ PartialOrder.toPreorder.1.1 (↑(single () n) PUnit.unit) (↑(single () (i, j).fst, single () (i, j).snd).snd PUnit.unit) State After: no goals Tactic: simpa [Finsupp.single_eq_same] using not_lt.1 H State Before: case inr.e_a.i_inj
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
⊢ ∀ (a₁ a₂ : ℕ × ℕ),
a₁ ∈ Finset.Nat.antidiagonal n →
a₂ ∈ Finset.Nat.antidiagonal n →
(single () a₁.fst, single () a₁.snd) = (single () a₂.fst, single () a₂.snd) → a₁ = a₂ State After: case inr.e_a.i_inj.mk.mk
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j k l : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
_hkl : (k, l) ∈ Finset.Nat.antidiagonal n
⊢ (single () (i, j).fst, single () (i, j).snd) = (single () (k, l).fst, single () (k, l).snd) → (i, j) = (k, l) Tactic: rintro ⟨i, j⟩ ⟨k, l⟩ _hij _hkl State Before: case inr.e_a.i_inj.mk.mk
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
i j k l : ℕ
_hij : (i, j) ∈ Finset.Nat.antidiagonal n
_hkl : (k, l) ∈ Finset.Nat.antidiagonal n
⊢ (single () (i, j).fst, single () (i, j).snd) = (single () (k, l).fst, single () (k, l).snd) → (i, j) = (k, l) State After: no goals Tactic: simpa only [Prod.mk.inj_iff, Finsupp.unique_single_eq_iff] using id State Before: case inr.e_a.i_surj
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
⊢ ∀ (b : (Unit →₀ ℕ) × (Unit →₀ ℕ)),
b ∈ Finsupp.antidiagonal (single () n) → ∃ a ha, b = (single () a.fst, single () a.snd) State After: case inr.e_a.i_surj.mk
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
f g : Unit →₀ ℕ
hfg : (f, g) ∈ Finsupp.antidiagonal (single () n)
⊢ ∃ a ha, (f, g) = (single () a.fst, single () a.snd) Tactic: rintro ⟨f, g⟩ hfg State Before: case inr.e_a.i_surj.mk
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
f g : Unit →₀ ℕ
hfg : (f, g) ∈ Finsupp.antidiagonal (single () n)
⊢ ∃ a ha, (f, g) = (single () a.fst, single () a.snd) State After: case inr.e_a.i_surj.mk.refine'_1
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
f g : Unit →₀ ℕ
hfg : (f, g) ∈ Finsupp.antidiagonal (single () n)
⊢ (↑f (), ↑g ()) ∈ Finset.Nat.antidiagonal n
case inr.e_a.i_surj.mk.refine'_2
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
f g : Unit →₀ ℕ
hfg : (f, g) ∈ Finsupp.antidiagonal (single () n)
⊢ (f, g) = (single () (↑f (), ↑g ()).fst, single () (↑f (), ↑g ()).snd) Tactic: refine' ⟨(f (), g ()), _, _⟩ State Before: case inr.e_a.i_surj.mk.refine'_1
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
f g : Unit →₀ ℕ
hfg : (f, g) ∈ Finsupp.antidiagonal (single () n)
⊢ (↑f (), ↑g ()) ∈ Finset.Nat.antidiagonal n State After: case inr.e_a.i_surj.mk.refine'_1
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
f g : Unit →₀ ℕ
hfg : (f, g).fst + (f, g).snd = single () n
⊢ (↑f (), ↑g ()) ∈ Finset.Nat.antidiagonal n Tactic: rw [Finsupp.mem_antidiagonal] at hfg State Before: case inr.e_a.i_surj.mk.refine'_1
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
f g : Unit →₀ ℕ
hfg : (f, g).fst + (f, g).snd = single () n
⊢ (↑f (), ↑g ()) ∈ Finset.Nat.antidiagonal n State After: no goals Tactic: rw [Finset.Nat.mem_antidiagonal, ← Finsupp.add_apply, hfg, Finsupp.single_eq_same] State Before: case inr.e_a.i_surj.mk.refine'_2
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
f g : Unit →₀ ℕ
hfg : (f, g) ∈ Finsupp.antidiagonal (single () n)
⊢ (f, g) = (single () (↑f (), ↑g ()).fst, single () (↑f (), ↑g ()).snd) State After: case inr.e_a.i_surj.mk.refine'_2
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
f g : Unit →₀ ℕ
hfg : (f, g) ∈ Finsupp.antidiagonal (single () n)
⊢ f = single () (↑f (), ↑g ()).fst ∧ g = single () (↑f (), ↑g ()).snd Tactic: rw [Prod.mk.inj_iff] State Before: case inr.e_a.i_surj.mk.refine'_2
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
f g : Unit →₀ ℕ
hfg : (f, g) ∈ Finsupp.antidiagonal (single () n)
⊢ f = single () (↑f (), ↑g ()).fst ∧ g = single () (↑f (), ↑g ()).snd State After: case inr.e_a.i_surj.mk.refine'_2
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
f g : Unit →₀ ℕ
hfg : (f, g) ∈ Finsupp.antidiagonal (single () n)
⊢ f = single () (↑f ()) ∧ g = single () (↑g ()) Tactic: dsimp State Before: case inr.e_a.i_surj.mk.refine'_2
R : Type u_1
inst✝ : Ring R
n : ℕ
a : R
φ : PowerSeries R
h✝ : ¬n = 0
f g : Unit →₀ ℕ
hfg : (f, g) ∈ Finsupp.antidiagonal (single () n)
⊢ f = single () (↑f ()) ∧ g = single () (↑g ()) State After: no goals Tactic: exact ⟨Finsupp.unique_single f, Finsupp.unique_single g⟩
|
\section{Games with imperfect information}
|
# Copyright (c) 2018-2021, Carnegie Mellon University
# See LICENSE for details
########################################################################
# rules for A x I, I x A, (A x I)L, (I x A)L
NewRulesFor(TTensorI, rec(
# loop splitting for A x I
AxI_LS_L := rec(
info := "(A_rxs x I_n)(L|I) -> Sum(Sum(SAG))",
forTransposition := false,
applicable := nt -> let(P := nt.params,
P[3] = AVec
and nt.isTag(1, AVecMemL)
and IsInt(P[2]/nt.firstTag().v)
and P[2]/nt.firstTag().v>1
),
children := nt -> [[
When(nt.numTags() = 1,
nt.params[1],
nt.params[1].setTags(nt.params[1].withoutFirstTag())
)
]],
apply := (nt, C, cnt) -> let(
v := nt.firstTag().v,
i := Ind(v),
m_by_v := nt.params[2]/v,
j := Ind(m_by_v),
d := C[1].dims(),
fr := When(nt.params[4] = AVec,
fTensor(fId(d[2]), fBase(m_by_v, j), fBase(v, i)),
fTensor(fBase(m_by_v, j), fBase(v, i), fId(d[2]))
),
ISumLS(j, m_by_v,
Buf(Scat(fTensor(
fId(d[1]), fBase(m_by_v, j), fId(v)
)))
* ISum(i, v, Scat(fTensor(
fId(d[1]), fBase(v, i))) * C[1] * Gath(fr)
)
)
)
#D isApplicable := P -> P[3].isVec and Length(P[5]) > 0 and P[5][1].isMemL and IsInt(P[2]/P[5][1].v) and P[2]/P[5][1].v>1,
#D allChildren := P -> let(pv:=P[5], v:=pv[1].v, d:=P[1].dims(), [[When(Length(pv)=1, P[1], P[1].setpv(Drop(pv, 1)))]]),
#D rule := (P, C) -> let(
#D v:=P[5][1].v, i:=Ind(v), m_by_v := P[2]/v, j:=Ind(m_by_v), d:=C[1].dims(),
#D fr:=When(P[4].isVec, fTensor(fId(d[2]), fBase(m_by_v, j), fBase(v, i)), fTensor(fBase(m_by_v, j), fBase(v, i), fId(d[2]))),
#D ISumLS(j, m_by_v, Buf(Scat(fTensor(fId(d[1]), fBase(m_by_v, j), fId(v)))) *
#D ISum(i, v, Scat(fTensor(fId(d[1]), fBase(v, i))) * C[1] * Gath(fr)))
#D )
),
AxI_LS_R := rec(
info := "(I|L)(A_rxs x I_n) -> Sum(Sum(SAG))",
forTransposition := false,
applicable := nt -> let(P := nt.params,
P[4] = AVec
and nt.isTag(1, AVecMemR)
and IsInt(P[2]/nt.firstTag().v)
and P[2]/nt.firstTag().v > 1
),
children := nt -> [[
When(nt.numTags() = 1,
nt.params[1],
nt.params[1].setTags(nt.params[1].withoutFirstTag())
)
]],
apply := (nt, C, cnt) -> let(
v := nt.firstTag().v,
i := Ind(v),
m_by_v := nt.params[2]/v,
j := Ind(m_by_v),
d := C[1].dims(),
fw := When(nt.params[3] = AVec,
fTensor(fId(d[1]), fBase(m_by_v, j), fBase(v, i)),
fTensor(fBase(m_by_v, j), fBase(v, i), fId(d[1]))
),
ISumLS(j, m_by_v,
ISum(i, v, Scat(fw) * C[1] * Gath(fTensor(fId(d[2]), fBase(v, i))))
* Buf(Gath(fTensor(
fId(d[2]), fBase(m_by_v, j), fId(v)
)))
)
)
#D isApplicable := P -> P[4].isVec and Length(P[5]) > 0 and P[5][1].isMemR and IsInt(P[2]/P[5][1].v) and P[2]/P[5][1].v>1,
#D allChildren := P -> let(pv:=P[5], v:=pv[1].v, d:=P[1].dims(), [[When(Length(pv)=1, P[1], P[1].setpv(Drop(pv, 1)))]]),
#D rule := (P, C) -> let(
#D v:=P[5][1].v, i:=Ind(v), m_by_v := P[2]/v, j:=Ind(m_by_v), d:=C[1].dims(),
#D fw:=When(P[3].isVec, fTensor(fId(d[1]), fBase(m_by_v, j), fBase(v, i)), fTensor(fBase(m_by_v, j), fBase(v, i), fId(d[1]))),
#D ISumLS(j, m_by_v,
#D ISum(i, v, Scat(fw) * C[1] * Gath(fTensor(fId(d[2]), fBase(v, i)))) *
#D Buf(Gath(fTensor(fId(d[2]), fBase(m_by_v, j), fId(v)))))
#D )
),
LS_drop := rec(
info := "drop tag",
forTransposition := false,
applicable := nt -> let(P := nt.params,
(nt.isTag(1, AVecMemL) or nt.isTag(1, AVecMemR))
and (
(not IsInt(P[2]/nt.firstTag().v))
or P[2]/nt.firstTag().v = 1
)
),
children := nt -> let(P := nt.params,
[[ TTensorI(P[1], P[2], P[3], P[4]) ]]
),
apply := (nt, C, cnt) -> C[1],
#D isApplicable := P -> Length(P[5]) > 0 and (P[5][1].isMemL or P[5][1].isMemR) and
#D ((not IsInt(P[2]/P[5][1].v)) or P[2]/P[5][1].v=1),
#D allChildren := P -> [[TTensorI(P[1], P[2], P[3], P[4], [])]],
#D rule := (P, C) -> C[1],
)
));
########################################################################
# TCompose rules
NewRulesFor(TCompose, rec(
TCompose_LS_L := rec(
info := "TCompose loop splitting left",
forTransposition := false,
applicable := nt -> nt.isTag(1, AVecMemL),
children := nt -> [ Concat(
[ nt.params[1][1].setTags(nt.getTags()) ],
Drop(nt.params[1], 1)
)],
apply := (nt, C, cnt) -> Compose(C)
#D isApplicable := P -> Length(P[2]) > 0 and P[2][1].isMemL,
#D allChildren := P -> [Concat([P[1][1].setpv(P[2])], Drop(P[1], 1))],
#D rule := (P, C) -> Compose(C)
),
TCompose_LS_R := rec(
info := "TCompose loop splitting right",
forTransposition := false,
applicable := nt -> nt.isTag(1, AVecMemR),
children := nt -> [ Concat(
DropLast(nt.params[1], 1),
[
nt.params[1][Length(nt.params[1])].setTags(nt.getTags())
]
)],
apply := (nt, C, cnt) -> Compose(C)
#D isApplicable := P -> Length(P[2]) > 0 and P[2][1].isMemR,
#D allChildren := P -> [Concat(DropLast(P[1], 1), [P[1][Length(P[1])].setpv(P[2])])],
#D rule := (P, C) -> Compose(C)
)
));
|
/- Author: E.W.Ayers © 2019 -/
import ..equate
namespace rats
open robot
/- Example within the context of defining the rationals as ordered pairs of integers
quotiented by the relation (⟨a,b⟩ ~ ⟨c,d⟩) ↔ (a * d = c * b).
-/
meta def blast : tactic unit :=
tactic.timetac "blast" $ (using_smt_with {cc_cfg := {ac:=ff}} $ tactic.intros >> smt_tactic.iterate (smt_tactic.ematch >> smt_tactic.try smt_tactic.close))
attribute [ematch] mul_comm mul_assoc
universes u
structure q (α : Type u) [integral_domain α] := (n : α) (d : α) (nz : d ≠ 0)
lemma q.ext {α : Type u} [integral_domain α] : Π (q1 q2 : q α), q1.n = q2.n → q1.d = q2.d → q1 = q2
|⟨n,d,nz⟩ ⟨_,_,_⟩ rfl rfl := rfl
instance (α : Type u) [integral_domain α] : setoid (q α) :=
{ r := (λ a b, a.1 * b.2 = b.1 * a.2)
, iseqv :=
⟨ λ a, rfl
, λ a b, eq.symm
, λ ⟨a,b,_⟩ ⟨c,d,h⟩ ⟨e,f,_⟩
(p : a * d = c * b)
(q : c * f = e * d),
suffices d * (a * f) = d * (e * b), from eq_of_mul_eq_mul_left h this,
-- by blast -- takes about 2 seconds
by equate -- also about 2 seconds, but much slower because implemented in Lean VM
⟩
}
def free (α : Type u) [integral_domain α] : Type* := @quotient (q α) (by apply_instance)
variables {α : Type u} [integral_domain α]
-- [TODO]
-- namespace free
-- def add : free α → free α → free α
-- := λ x y, quotient.lift_on₂ x y
-- (λ x y, ⟦(⟨x.1 * y.2 + y.1 * x.2, x.2 * y.2, mul_ne_zero x.nz y.nz⟩ : q α)⟧)
-- (λ a1 a2 b1 b2,
-- assume p : a1.n * b1.d = b1.1 * a1.2,
-- assume q : a2.1 * b2.2 = b2.1 * a2.2,
-- suffices (a1.1 * a2.2 + a2.1 * a1.2) * (b1.2 * b2.2)
-- = (b1.1 * b2.2 + b2.1 * b1.2) * (a1.2 * a2.2),
-- from quotient.sound this,
-- calc ((a1.1 * a2.2) + (a2.1 * a1.2)) * (b1.2 * b2.2)
-- = ((b1.1 * a1.2) * (a2.2 * b2.2) + (b1.2 * a1.2) * (b2.1 * a2.2))
-- : by equate
-- ... = (b1.1 * b2.2 + b2.1 * b1.2) * (a1.2 * a2.2)
-- : by symmetry; clear p q; equate
-- )
-- end free
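-- A hypothetical sanity check for the relation above (a sketch, not part of
-- the original file; it assumes `norm_num` is available here and that `≈`
-- unfolds definitionally to `a.1 * b.2 = b.1 * a.2`):
-- over ℤ, ⟨1, 2⟩ ≈ ⟨2, 4⟩ because 1 * 4 = 2 * 2.
example : (⟨1, 2, by norm_num⟩ : q ℤ) ≈ (⟨2, 4, by norm_num⟩ : q ℤ) :=
show (1 : ℤ) * 4 = 2 * 2, by norm_num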
end rats
|
/-
Copyright (c) 2020 Yury Kudryashov. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Yury Kudryashov
-/
import order.conditionally_complete_lattice
import algebra.big_operators.basic
import algebra.group.prod
import algebra.group.pi
import algebra.module.pi
/-!
# Support of a function
In this file we define `function.support f = {x | f x ≠ 0}` and prove its basic properties.
We also define `function.mul_support f = {x | f x ≠ 1}`.
-/
open set
open_locale big_operators
namespace function
variables {α β A B M N P R S G M₀ G₀ : Type*} {ι : Sort*}
section has_one
variables [has_one M] [has_one N] [has_one P]
/-- `support` of a function is the set of points `x` such that `f x ≠ 0`. -/
def support [has_zero A] (f : α → A) : set α := {x | f x ≠ 0}
/-- `mul_support` of a function is the set of points `x` such that `f x ≠ 1`. -/
@[to_additive] def mul_support (f : α → M) : set α := {x | f x ≠ 1}
@[to_additive] lemma mul_support_eq_preimage (f : α → M) : mul_support f = f ⁻¹' {1}ᶜ := rfl
@[to_additive] lemma nmem_mul_support {f : α → M} {x : α} :
x ∉ mul_support f ↔ f x = 1 :=
not_not
@[to_additive] lemma compl_mul_support {f : α → M} :
(mul_support f)ᶜ = {x | f x = 1} :=
ext $ λ x, nmem_mul_support
@[simp, to_additive] lemma mem_mul_support {f : α → M} {x : α} :
x ∈ mul_support f ↔ f x ≠ 1 :=
iff.rfl
@[simp, to_additive] lemma mul_support_subset_iff {f : α → M} {s : set α} :
mul_support f ⊆ s ↔ ∀ x, f x ≠ 1 → x ∈ s :=
iff.rfl
@[to_additive] lemma mul_support_subset_iff' {f : α → M} {s : set α} :
mul_support f ⊆ s ↔ ∀ x ∉ s, f x = 1 :=
forall_congr $ λ x, not_imp_comm
@[simp, to_additive] lemma mul_support_eq_empty_iff {f : α → M} :
mul_support f = ∅ ↔ f = 1 :=
by { simp_rw [← subset_empty_iff, mul_support_subset_iff', funext_iff], simp }
@[simp, to_additive] lemma mul_support_nonempty_iff {f : α → M} :
(mul_support f).nonempty ↔ f ≠ 1 :=
by rw [← ne_empty_iff_nonempty, ne.def, mul_support_eq_empty_iff]
@[simp, to_additive] lemma mul_support_one' : mul_support (1 : α → M) = ∅ :=
mul_support_eq_empty_iff.2 rfl
@[simp, to_additive] lemma mul_support_one : mul_support (λ x : α, (1 : M)) = ∅ :=
mul_support_one'
@[to_additive] lemma mul_support_binop_subset (op : M → N → P) (op1 : op 1 1 = 1)
(f : α → M) (g : α → N) :
mul_support (λ x, op (f x) (g x)) ⊆ mul_support f ∪ mul_support g :=
λ x hx, classical.by_cases
(λ hf : f x = 1, or.inr $ λ hg, hx $ by simp only [hf, hg, op1])
or.inl
@[to_additive] lemma mul_support_sup [semilattice_sup M] (f g : α → M) :
mul_support (λ x, f x ⊔ g x) ⊆ mul_support f ∪ mul_support g :=
mul_support_binop_subset (⊔) sup_idem f g
@[to_additive] lemma mul_support_inf [semilattice_inf M] (f g : α → M) :
mul_support (λ x, f x ⊓ g x) ⊆ mul_support f ∪ mul_support g :=
mul_support_binop_subset (⊓) inf_idem f g
@[to_additive] lemma mul_support_max [linear_order M] (f g : α → M) :
mul_support (λ x, max (f x) (g x)) ⊆ mul_support f ∪ mul_support g :=
mul_support_sup f g
@[to_additive] lemma mul_support_min [linear_order M] (f g : α → M) :
mul_support (λ x, min (f x) (g x)) ⊆ mul_support f ∪ mul_support g :=
mul_support_inf f g
@[to_additive] lemma mul_support_supr [conditionally_complete_lattice M] [nonempty ι]
(f : ι → α → M) :
mul_support (λ x, ⨆ i, f i x) ⊆ ⋃ i, mul_support (f i) :=
begin
rw mul_support_subset_iff',
simp only [mem_Union, not_exists, nmem_mul_support],
intros x hx,
simp only [hx, csupr_const]
end
@[to_additive] lemma mul_support_infi [conditionally_complete_lattice M] [nonempty ι]
(f : ι → α → M) :
mul_support (λ x, ⨅ i, f i x) ⊆ ⋃ i, mul_support (f i) :=
@mul_support_supr _ (order_dual M) ι ⟨(1:M)⟩ _ _ f
@[to_additive] lemma mul_support_comp_subset {g : M → N} (hg : g 1 = 1) (f : α → M) :
mul_support (g ∘ f) ⊆ mul_support f :=
λ x, mt $ λ h, by simp only [(∘), *]
@[to_additive] lemma mul_support_subset_comp {g : M → N} (hg : ∀ {x}, g x = 1 → x = 1)
(f : α → M) :
mul_support f ⊆ mul_support (g ∘ f) :=
λ x, mt hg
@[to_additive] lemma mul_support_comp_eq (g : M → N) (hg : ∀ {x}, g x = 1 ↔ x = 1)
(f : α → M) :
mul_support (g ∘ f) = mul_support f :=
set.ext $ λ x, not_congr hg
@[to_additive] lemma mul_support_comp_eq_preimage (g : β → M) (f : α → β) :
mul_support (g ∘ f) = f ⁻¹' mul_support g :=
rfl
@[to_additive support_prod_mk] lemma mul_support_prod_mk (f : α → M) (g : α → N) :
mul_support (λ x, (f x, g x)) = mul_support f ∪ mul_support g :=
set.ext $ λ x, by simp only [mul_support, not_and_distrib, mem_union_eq, mem_set_of_eq,
prod.mk_eq_one, ne.def]
@[to_additive support_prod_mk'] lemma mul_support_prod_mk' (f : α → M × N) :
mul_support f = mul_support (λ x, (f x).1) ∪ mul_support (λ x, (f x).2) :=
by simp only [← mul_support_prod_mk, prod.mk.eta]
@[to_additive] lemma mul_support_along_fiber_subset (f : α × β → M) (a : α) :
mul_support (λ b, f (a, b)) ⊆ (mul_support f).image prod.snd :=
by tidy
@[simp, to_additive] lemma mul_support_along_fiber_finite_of_finite
(f : α × β → M) (a : α) (h : (mul_support f).finite) :
(mul_support (λ b, f (a, b))).finite :=
(h.image prod.snd).subset (mul_support_along_fiber_subset f a)
end has_one
@[to_additive] lemma mul_support_mul [monoid M] (f g : α → M) :
mul_support (λ x, f x * g x) ⊆ mul_support f ∪ mul_support g :=
mul_support_binop_subset (*) (one_mul _) f g
@[simp, to_additive] lemma mul_support_inv [group G] (f : α → G) :
mul_support (λ x, (f x)⁻¹) = mul_support f :=
set.ext $ λ x, not_congr inv_eq_one
@[simp, to_additive] lemma mul_support_inv' [group G] (f : α → G) :
mul_support (f⁻¹) = mul_support f :=
mul_support_inv f
@[simp] lemma mul_support_inv₀ [group_with_zero G₀] (f : α → G₀) :
mul_support (λ x, (f x)⁻¹) = mul_support f :=
set.ext $ λ x, not_congr inv_eq_one₀
@[to_additive] lemma mul_support_mul_inv [group G] (f g : α → G) :
mul_support (λ x, f x * (g x)⁻¹) ⊆ mul_support f ∪ mul_support g :=
mul_support_binop_subset (λ a b, a * b⁻¹) (by simp) f g
@[to_additive support_sub] lemma mul_support_group_div [group G] (f g : α → G) :
mul_support (λ x, f x / g x) ⊆ mul_support f ∪ mul_support g :=
mul_support_binop_subset (/) (by simp only [one_div, one_inv]) f g
lemma mul_support_div [group_with_zero G₀] (f g : α → G₀) :
mul_support (λ x, f x / g x) ⊆ mul_support f ∪ mul_support g :=
mul_support_binop_subset (/) (by simp only [div_one]) f g
@[simp] lemma support_mul [mul_zero_class R] [no_zero_divisors R] (f g : α → R) :
support (λ x, f x * g x) = support f ∩ support g :=
set.ext $ λ x, by simp only [mem_support, mul_ne_zero_iff, mem_inter_eq, not_or_distrib]
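-- Hypothetical illustration of `support_mul` (a sketch, not part of the
-- original file): over ℤ, which has no zero divisors, the support of a
-- pointwise product is the intersection of the supports.
example (f g : α → ℤ) : support (λ x, f x * g x) = support f ∩ support g :=
support_mul f g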
lemma support_smul_subset_right [add_monoid A] [monoid B] [distrib_mul_action B A]
(b : B) (f : α → A) :
support (b • f) ⊆ support f :=
λ x hbf hf, hbf $ by rw [pi.smul_apply, hf, smul_zero]
lemma support_smul_subset_left [semiring R] [add_comm_monoid M] [module R M]
(f : α → R) (g : α → M) :
support (f • g) ⊆ support f :=
λ x hfg hf, hfg $ by rw [pi.smul_apply', hf, zero_smul]
lemma support_smul [semiring R] [add_comm_monoid M] [module R M]
[no_zero_smul_divisors R M] (f : α → R) (g : α → M) :
support (f • g) = support f ∩ support g :=
ext $ λ x, smul_ne_zero
@[simp] lemma support_inv [group_with_zero G₀] (f : α → G₀) :
support (λ x, (f x)⁻¹) = support f :=
set.ext $ λ x, not_congr inv_eq_zero
@[simp] lemma support_div [group_with_zero G₀] (f g : α → G₀) :
support (λ x, f x / g x) = support f ∩ support g :=
by simp [div_eq_mul_inv]
@[to_additive] lemma mul_support_prod [comm_monoid M] (s : finset α) (f : α → β → M) :
mul_support (λ x, ∏ i in s, f i x) ⊆ ⋃ i ∈ s, mul_support (f i) :=
begin
rw mul_support_subset_iff',
simp only [mem_Union, not_exists, nmem_mul_support],
exact λ x, finset.prod_eq_one
end
lemma support_prod_subset [comm_monoid_with_zero A] (s : finset α) (f : α → β → A) :
support (λ x, ∏ i in s, f i x) ⊆ ⋂ i ∈ s, support (f i) :=
λ x hx, mem_Inter₂.2 $ λ i hi H, hx $ finset.prod_eq_zero hi H
lemma support_prod [comm_monoid_with_zero A] [no_zero_divisors A] [nontrivial A]
(s : finset α) (f : α → β → A) :
support (λ x, ∏ i in s, f i x) = ⋂ i ∈ s, support (f i) :=
set.ext $ λ x, by
simp only [support, ne.def, finset.prod_eq_zero_iff, mem_set_of_eq, set.mem_Inter, not_exists]
lemma mul_support_one_add [has_one R] [add_left_cancel_monoid R] (f : α → R) :
mul_support (λ x, 1 + f x) = support f :=
set.ext $ λ x, not_congr add_right_eq_self
lemma mul_support_one_add' [has_one R] [add_left_cancel_monoid R] (f : α → R) :
mul_support (1 + f) = support f :=
mul_support_one_add f
lemma mul_support_add_one [has_one R] [add_right_cancel_monoid R] (f : α → R) :
mul_support (λ x, f x + 1) = support f :=
set.ext $ λ x, not_congr add_left_eq_self
lemma mul_support_add_one' [has_one R] [add_right_cancel_monoid R] (f : α → R) :
mul_support (f + 1) = support f :=
mul_support_add_one f
lemma mul_support_one_sub' [has_one R] [add_group R] (f : α → R) :
mul_support (1 - f) = support f :=
by rw [sub_eq_add_neg, mul_support_one_add', support_neg']
lemma mul_support_one_sub [has_one R] [add_group R] (f : α → R) :
mul_support (λ x, 1 - f x) = support f :=
mul_support_one_sub' f
end function
namespace set
open function
variables {α β M : Type*} [has_one M] {f : α → M}
@[to_additive] lemma image_inter_mul_support_eq {s : set β} {g : β → α} :
(g '' s ∩ mul_support f) = g '' (s ∩ mul_support (f ∘ g)) :=
by rw [mul_support_comp_eq_preimage f g, image_inter_preimage]
end set
namespace pi
variables {A : Type*} {B : Type*} [decidable_eq A] [has_zero B] {a : A} {b : B}
lemma support_single_zero : function.support (pi.single a (0 : B)) = ∅ := by simp
@[simp] lemma support_single_of_ne (h : b ≠ 0) :
function.support (pi.single a b) = {a} :=
begin
ext,
simp only [mem_singleton_iff, ne.def, function.mem_support],
split,
{ contrapose!,
exact λ h', single_eq_of_ne h' b },
{ rintro rfl,
rw single_eq_same,
exact h }
end
lemma support_single [decidable_eq B] :
function.support (pi.single a b) = if b = 0 then ∅ else {a} := by { split_ifs with h; simp [h] }
lemma support_single_subset : function.support (pi.single a b) ⊆ {a} :=
begin
classical,
rw support_single,
split_ifs; simp
end
lemma support_single_disjoint {b' : B} (hb : b ≠ 0) (hb' : b' ≠ 0) {i j : A} :
disjoint (function.support (single i b)) (function.support (single j b')) ↔ i ≠ j :=
by rw [support_single_of_ne hb, support_single_of_ne hb', disjoint_singleton]
end pi
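The `support_mul` lemma above states that, in a ring with no zero divisors, the support of a pointwise product is exactly the intersection of the supports. A quick numeric sanity check of that identity (an illustrative sketch, not part of the library):

```python
import numpy as np

f = np.array([0, 1, 2, 0, 3])
g = np.array([1, 0, 4, 0, 5])

def support(h):
    # indices where the function is nonzero
    return set(np.nonzero(h)[0])

# over a field there are no zero divisors, so
# support(f * g) == support(f) ∩ support(g)
assert support(f * g) == support(f) & support(g)
```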
|
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE LambdaCase #-}
module LinearAlgebra (linearAlgebraTest) where
import Test.QuickCheck
import Test.Tasty.QuickCheck
import Test.Tasty
import System.Exit
import Numeric.LinearAlgebra (rank)
import qualified Numeric.LinearAlgebra.Data as Matrix
import Math.Tensor.Internal.LinearAlgebra (independentColumns, independentColumnsMat)
data SmallInt = S0 | S1 deriving (Show, Ord, Eq, Enum, Bounded)
toSmall :: Int -> SmallInt
toSmall 0 = S0
toSmall 1 = S1
toSmall i = error $ "cannot convert " ++ show i ++ " to SmallInt"
fromSmall :: Num a => SmallInt -> a
fromSmall S0 = 0
fromSmall S1 = 1
instance Arbitrary SmallInt where
  arbitrary = arbitraryBoundedEnum

data MatrixData a = MatrixData (Positive Int) (Positive Int) [a] deriving Show

instance Arbitrary a => Arbitrary (MatrixData a) where
  arbitrary = do
    m@(Positive m') <- arbitrary
    n@(Positive n') <- arbitrary
    xs <- vector (m'*n')
    return $ MatrixData m n xs

prop_smallValues :: MatrixData SmallInt -> Bool
prop_smallValues (MatrixData (Positive rows) (Positive cols) xs) =
    rank mat' == rank mat
  where
    mat  = (rows Matrix.>< cols) $ map fromSmall xs
    mat' = independentColumnsMat mat

prop_ints :: MatrixData Int -> Bool
prop_ints (MatrixData (Positive rows) (Positive cols) xs) =
    rank mat' == rank mat
  where
    mat  = (rows Matrix.>< cols) $ map fromIntegral xs
    mat' = independentColumnsMat mat

prop_doubles :: MatrixData Double -> Bool
prop_doubles (MatrixData (Positive rows) (Positive cols) xs) =
    rank mat' == rank mat
  where
    mat  = (rows Matrix.>< cols) xs
    mat' = independentColumnsMat mat

prop_consec :: Positive Int -> Int -> Bool
prop_consec (Positive dim') start =
    independentColumns mat == [0,1]
  where
    dim = dim' + 100
    mat = (dim Matrix.>< dim) $ map fromIntegral [start..]
testCase1 = testProperty "prop_smallValues" prop_smallValues
testCase2 = testProperty "prop_ints" prop_ints
testCase3 = testProperty "prop_doubles" prop_doubles
testCase4 = testProperty "prop_consec" prop_consec
linearAlgebraTest = testGroup "LinearAlgebraTest" [testCase1, testCase2, testCase3, testCase4]
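`prop_consec` relies on the fact that a square matrix filled row-wise with consecutive integers has rank 2 (each row is an arithmetic progression, and consecutive row differences are constant), so only the first two columns are independent. A quick numeric check of that rank fact:

```python
import numpy as np

# 5x5 matrix of consecutive integers, filled row-wise
mat = np.arange(25, dtype=float).reshape(5, 5)

# every row differs from the previous one by a constant vector,
# so the row space is spanned by two vectors
assert np.linalg.matrix_rank(mat) == 2
```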
|
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef MATHFUNCTIONS_H_
#define MATHFUNCTIONS_H_
#ifdef PADDLE_USE_MKL
#include <mkl.h>
#include <mkl_lapacke.h>
#else
extern "C" {
#include <cblas.h>
}
#ifdef PADDLE_USE_ATLAS
extern "C" {
#include <clapack.h>
}
#else
#include <lapacke.h>
#endif
#endif
#include <cmath>
namespace paddle {
template <class T>
void gemm(const CBLAS_TRANSPOSE transA,
const CBLAS_TRANSPOSE transB,
const int M,
const int N,
const int K,
const T alpha,
const T* A,
const int lda,
const T* B,
const int ldb,
const T beta,
T* C,
const int ldc);
template <class T>
int getrf(const CBLAS_ORDER Order,
const int M,
const int N,
T* A,
const int lda,
int* ipiv);
template <class T>
int getri(
const CBLAS_ORDER Order, const int N, T* A, const int lda, const int* ipiv);
template <class T>
void axpy(const int n, const T alpha, const T* x, T* y);
template <class T>
T dotProduct(const int n, const T* x, const T* y);
template <class T>
void vExp(const int n, const T* a, T* r);
template <class T>
void vPow(const int n, const T* a, const T b, T* r);
template <class T>
void vLog(const int n, const T* a, T* r);
template <class T>
void vAdd(const int n, const T* a, const T* b, T* r);
template <class T>
void vInvSqrt(const int n, const T* a, T* r);
template <class T>
void vLog1p(const int n, const T* a, T* r);
template <class T>
void vTanh(const int n, const T* a, T* r);
} // namespace paddle
#endif // MATHFUNCTIONS_H_
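The `paddle::gemm` template above follows BLAS conventions. As a reference for what a call computes, here is a sketch of the semantics in Python (leading dimensions `lda`/`ldb`/`ldc` are omitted; this is an illustration, not the Paddle implementation):

```python
import numpy as np

def gemm(trans_a, trans_b, alpha, A, B, beta, C):
    # BLAS-style general matrix multiply:
    # C := alpha * op(A) @ op(B) + beta * C
    op_a = A.T if trans_a else A
    op_b = B.T if trans_b else B
    return alpha * (op_a @ op_b) + beta * C
```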
|
module Avionics.SafetyEnvelopes.ExtInterface where
open import Data.Bool using (Bool)
open import Data.Float using (Float)
open import Data.List using (List; map)
open import Data.Maybe using (Maybe; just; nothing)
open import Data.Product using (_×_; _,_)
open import Avionics.Probability using (Dist; NormalDist; ND)
open import Avionics.Real renaming (fromFloat to ff; toFloat to tf)
open import Avionics.SafetyEnvelopes using (z-predictable'; sample-z-predictable)
open import ExtInterface.Data.Maybe using (just; nothing) renaming (Maybe to ExtMaybe)
open import ExtInterface.Data.Product as Ext using (⟨_,_⟩)
fromFloats-z-predictable : List (Float Ext.× Float) → Float → Float → ExtMaybe (Float Ext.× Bool)
fromFloats-z-predictable means×stds z x =
  let
    ndists = map (λ{⟨ mean , std ⟩ → ND (ff mean) (ff std)}) means×stds
    (m , b) = z-predictable' ndists (ff z) (ff x)
  in
    just ⟨ tf m , b ⟩
{-# COMPILE GHC fromFloats-z-predictable as zPredictable #-}

fromFloats-sample-z-predictable :
    List (Float Ext.× Float)
    → Float → Float → List Float → ExtMaybe (Float Ext.× Float Ext.× Bool)
fromFloats-sample-z-predictable means×stds zμ zσ xs =
  let
    ndists = map (λ{⟨ mean , std ⟩ → ND (ff mean) (ff std)}) means×stds
  in
    return (sample-z-predictable ndists (ff zμ) (ff zσ) (map ff xs))
  where
    return : Maybe (ℝ × ℝ × Bool) → ExtMaybe (Float Ext.× Float Ext.× Bool)
    return nothing = nothing
    return (just (m' , v' , b)) = just ⟨ tf m' , ⟨ tf v' , b ⟩ ⟩
{-# COMPILE GHC fromFloats-sample-z-predictable as sampleZPredictable #-}
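The exported `zPredictable` check classifies a sample by its z-score against a list of normal distributions. A rough Python analogue of the idea (the membership rule below is an assumption for illustration; the authoritative definition is `z-predictable'` in `Avionics.SafetyEnvelopes`):

```python
def z_predictable(dists, z, x):
    # dists: list of (mean, std) pairs describing normal distributions.
    # Treat x as "predictable" if it lies within z standard deviations
    # of at least one of the distributions (assumed semantics).
    return any(abs(x - m) <= z * s for m, s in dists)
```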
|
from __future__ import print_function
import argparse
import imp
import random
import numpy as np
import pickle
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix
def main(config_module, N_SEED):
    n_permutation = config_module.n_permutations
    precomputed_kernel_files = config_module.precomputed_kernel_files
    cv_n_folds = config_module.cv_n_folds
    experiment_name = config_module.experiment_name

    random.seed(N_SEED)
    np.random.seed(N_SEED)

    file_npz = np.load("./kernels/" + precomputed_kernel_files[0])
    y = file_npz['labels']
    permutation_test_bac = np.zeros((n_permutation,))

    f = open("./results/" + experiment_name + "/AV/best_clf.pkl", 'rb')
    permutation_classifiers = pickle.load(f)
    f.close()
    f = open("./results/" + experiment_name + "/AV/best_AV_combination.pkl", 'rb')
    permutation_AV_combination = pickle.load(f)
    f.close()
    f = open("./results/" + experiment_name + "/AV/final_BAC.pkl", 'rb')
    best_BAC = pickle.load(f)
    f.close()

    kernels = []
    print("Loading kernels...")
    for precomputed_kernel_file in precomputed_kernel_files:
        file_npz = np.load("./kernels/" + precomputed_kernel_file)
        kernels.append(file_npz['kernel'])

    print("Starting permutation with ", n_permutation, " iterations")
    for p in range(n_permutation):
        np.random.seed(N_SEED + p)
        permuted_labels = np.random.permutation(y)
        skf = StratifiedKFold(n_splits=cv_n_folds, shuffle=True, random_state=N_SEED)
        cv_test_bac = np.zeros((cv_n_folds,))
        for i, (train_index, test_index) in enumerate(skf.split(permuted_labels, permuted_labels)):
            best_classifiers = permutation_classifiers[i]
            best_combination = permutation_AV_combination[i]
            y_train, y_test = permuted_labels[train_index], y[test_index]
            predictions_proba = []
            for j in range(len(kernels)):
                precomputed_kernel = kernels[j]
                x_train = precomputed_kernel[train_index, :][:, train_index]
                x_test = precomputed_kernel[test_index, :][:, train_index]
                (best_classifiers[j]).fit(x_train, y_train)
                predictions_proba.append((best_classifiers[j]).predict_proba(x_test))
            proba_sum = 0
            for k in range(len(best_combination)):
                proba_sum += best_combination[k] * np.array(predictions_proba[k])
            cm = confusion_matrix(y_test, np.argmax(proba_sum, axis=-1))
            test_bac = np.sum(np.true_divide(np.diagonal(cm), np.sum(cm, axis=1))) / cm.shape[1]
            cv_test_bac[i] = test_bac
        permutation_test_bac[p] = cv_test_bac.mean()
        print("Permutation: ", p, " BAC: ", cv_test_bac.mean())
    print("")
    print("P-VALUE", (np.sum((permutation_test_bac > best_BAC).astype('int')) + 1.) / (n_permutation + 1.))


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Script to train model.')
    parser.add_argument("config_name", type=str, help="The name of file .py with configurations, e.g., Combined")
    args = parser.parse_args()
    config_name = args.config_name
    try:
        config_module = imp.load_source('config', config_name)
    except IOError:
        print('Cannot open ', config_name, '. Please specify the correct path of the configuration file. Example: python general_AV_SVM.py ./config/config_test.py')

    if np.isscalar(config_module.N_SEED):
        main(config_module, config_module.N_SEED)
    else:
        print("Please report the N_seed as a number.")
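The final print computes a permutation p-value with the standard "+1" correction, which guarantees p > 0 and counts the observed statistic as one of the permutations. Isolated as a helper (a sketch mirroring the formula in the script):

```python
import numpy as np

def permutation_p_value(perm_scores, observed):
    # p = (#{permuted BAC > observed BAC} + 1) / (n_permutations + 1),
    # exactly as in the script's final print statement
    perm_scores = np.asarray(perm_scores)
    return (np.sum(perm_scores > observed) + 1.0) / (perm_scores.size + 1.0)
```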
|
/-
Copyright (c) 2020 Scott Morrison. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Scott Morrison, Bhavik Mehta
-/
import Mathlib.PrePort
import Mathlib.Lean3Lib.init.default
import Mathlib.category_theory.limits.preserves.basic
import Mathlib.category_theory.limits.shapes.equalizers
import Mathlib.category_theory.limits.shapes.strong_epi
import Mathlib.category_theory.limits.shapes.pullbacks
import Mathlib.PostPort
universes v₁ u₁ l
namespace Mathlib
/-!
# Definitions and basic properties of regular monomorphisms and epimorphisms.
A regular monomorphism is a morphism that is the equalizer of some parallel pair.
We give the constructions
* `split_mono → regular_mono` and
* `regular_mono → mono`
as well as the dual constructions for regular epimorphisms. Additionally, we give the
construction
* `regular_epi ⟶ strong_epi`.
-/
namespace category_theory
/-- A regular monomorphism is a morphism which is the equalizer of some parallel pair. -/
class regular_mono {C : Type u₁} [category C] {X : C} {Y : C} (f : X ⟶ Y) where
Z : C
left : Y ⟶ Z
right : Y ⟶ Z
w : f ≫ left = f ≫ right
is_limit : limits.is_limit (limits.fork.of_ι f w)
theorem regular_mono.w_assoc {C : Type u₁} [category C] {X : C} {Y : C} {f : X ⟶ Y}
[c : regular_mono f] {X' : C} (f' : regular_mono.Z f ⟶ X') :
f ≫ regular_mono.left ≫ f' = f ≫ regular_mono.right ≫ f' :=
sorry
/-- Every regular monomorphism is a monomorphism. -/
protected instance regular_mono.mono {C : Type u₁} [category C] {X : C} {Y : C} (f : X ⟶ Y)
[regular_mono f] : mono f :=
limits.mono_of_is_limit_parallel_pair regular_mono.is_limit
protected instance equalizer_regular {C : Type u₁} [category C] {X : C} {Y : C} (g : X ⟶ Y)
(h : X ⟶ Y) [limits.has_limit (limits.parallel_pair g h)] :
regular_mono (limits.equalizer.ι g h) :=
regular_mono.mk Y g h (limits.equalizer.condition g h)
(limits.fork.is_limit.mk
(limits.fork.of_ι (limits.equalizer.ι g h) (limits.equalizer.condition g h))
(fun (s : limits.fork g h) => limits.limit.lift (limits.parallel_pair g h) s) sorry sorry)
/-- Every split monomorphism is a regular monomorphism. -/
protected instance regular_mono.of_split_mono {C : Type u₁} [category C] {X : C} {Y : C} (f : X ⟶ Y)
[split_mono f] : regular_mono f :=
regular_mono.mk Y 𝟙 (retraction f ≫ f) (limits.cone_of_split_mono._proof_1 f)
(limits.split_mono_equalizes f)
/-- If `f` is a regular mono, then any map `k : W ⟶ Y` equalizing `regular_mono.left` and
`regular_mono.right` induces a morphism `l : W ⟶ X` such that `l ≫ f = k`. -/
def regular_mono.lift' {C : Type u₁} [category C] {X : C} {Y : C} {W : C} (f : X ⟶ Y)
[regular_mono f] (k : W ⟶ Y) (h : k ≫ regular_mono.left = k ≫ regular_mono.right) :
Subtype fun (l : W ⟶ X) => l ≫ f = k :=
limits.fork.is_limit.lift' regular_mono.is_limit k h
/--
The second leg of a pullback cone is a regular monomorphism if the right component is too.
See also `pullback.snd_of_mono` for the basic monomorphism version, and
`regular_of_is_pullback_fst_of_regular` for the flipped version.
-/
def regular_of_is_pullback_snd_of_regular {C : Type u₁} [category C] {P : C} {Q : C} {R : C} {S : C}
{f : P ⟶ Q} {g : P ⟶ R} {h : Q ⟶ S} {k : R ⟶ S} [hr : regular_mono h] (comm : f ≫ h = g ≫ k)
(t : limits.is_limit (limits.pullback_cone.mk f g comm)) : regular_mono g :=
sorry
/--
The first leg of a pullback cone is a regular monomorphism if the left component is too.
See also `pullback.fst_of_mono` for the basic monomorphism version, and
`regular_of_is_pullback_snd_of_regular` for the flipped version.
-/
def regular_of_is_pullback_fst_of_regular {C : Type u₁} [category C] {P : C} {Q : C} {R : C} {S : C}
{f : P ⟶ Q} {g : P ⟶ R} {h : Q ⟶ S} {k : R ⟶ S} [hr : regular_mono k] (comm : f ≫ h = g ≫ k)
(t : limits.is_limit (limits.pullback_cone.mk f g comm)) : regular_mono f :=
regular_of_is_pullback_snd_of_regular sorry (limits.pullback_cone.flip_is_limit t)
/-- A regular monomorphism is an isomorphism if it is an epimorphism. -/
def is_iso_of_regular_mono_of_epi {C : Type u₁} [category C] {X : C} {Y : C} (f : X ⟶ Y)
[regular_mono f] [e : epi f] : is_iso f :=
limits.is_iso_limit_cone_parallel_pair_of_epi regular_mono.is_limit
/-- A regular epimorphism is a morphism which is the coequalizer of some parallel pair. -/
class regular_epi {C : Type u₁} [category C] {X : C} {Y : C} (f : X ⟶ Y) where
W : C
left : W ⟶ X
right : W ⟶ X
w : left ≫ f = right ≫ f
is_colimit : limits.is_colimit (limits.cofork.of_π f w)
theorem regular_epi.w_assoc {C : Type u₁} [category C] {X : C} {Y : C} {f : X ⟶ Y}
[c : regular_epi f] {X' : C} (f' : Y ⟶ X') :
regular_epi.left ≫ f ≫ f' = regular_epi.right ≫ f ≫ f' :=
sorry
/-- Every regular epimorphism is an epimorphism. -/
protected instance regular_epi.epi {C : Type u₁} [category C] {X : C} {Y : C} (f : X ⟶ Y)
[regular_epi f] : epi f :=
limits.epi_of_is_colimit_parallel_pair regular_epi.is_colimit
protected instance coequalizer_regular {C : Type u₁} [category C] {X : C} {Y : C} (g : X ⟶ Y)
(h : X ⟶ Y) [limits.has_colimit (limits.parallel_pair g h)] :
regular_epi (limits.coequalizer.π g h) :=
regular_epi.mk X g h (limits.coequalizer.condition g h)
(limits.cofork.is_colimit.mk
(limits.cofork.of_π (limits.coequalizer.π g h) (limits.coequalizer.condition g h))
(fun (s : limits.cofork g h) => limits.colimit.desc (limits.parallel_pair g h) s) sorry sorry)
/-- Every split epimorphism is a regular epimorphism. -/
protected instance regular_epi.of_split_epi {C : Type u₁} [category C] {X : C} {Y : C} (f : X ⟶ Y)
[split_epi f] : regular_epi f :=
regular_epi.mk X 𝟙 (f ≫ section_ f) (limits.cocone_of_split_epi._proof_1 f)
(limits.split_epi_coequalizes f)
/-- If `f` is a regular epi, then every morphism `k : X ⟶ W` coequalizing `regular_epi.left` and
`regular_epi.right` induces `l : Y ⟶ W` such that `f ≫ l = k`. -/
def regular_epi.desc' {C : Type u₁} [category C] {X : C} {Y : C} {W : C} (f : X ⟶ Y) [regular_epi f]
(k : X ⟶ W) (h : regular_epi.left ≫ k = regular_epi.right ≫ k) :
Subtype fun (l : Y ⟶ W) => f ≫ l = k :=
limits.cofork.is_colimit.desc' regular_epi.is_colimit k h
/--
The second leg of a pushout cocone is a regular epimorphism if the right component is too.
See also `pushout.snd_of_epi` for the basic epimorphism version, and
`regular_of_is_pushout_fst_of_regular` for the flipped version.
-/
def regular_of_is_pushout_snd_of_regular {C : Type u₁} [category C] {P : C} {Q : C} {R : C} {S : C}
{f : P ⟶ Q} {g : P ⟶ R} {h : Q ⟶ S} {k : R ⟶ S} [gr : regular_epi g] (comm : f ≫ h = g ≫ k)
(t : limits.is_colimit (limits.pushout_cocone.mk h k comm)) : regular_epi h :=
sorry
/--
The first leg of a pushout cocone is a regular epimorphism if the left component is too.
See also `pushout.fst_of_epi` for the basic epimorphism version, and
`regular_of_is_pushout_snd_of_regular` for the flipped version.
-/
def regular_of_is_pushout_fst_of_regular {C : Type u₁} [category C] {P : C} {Q : C} {R : C} {S : C}
{f : P ⟶ Q} {g : P ⟶ R} {h : Q ⟶ S} {k : R ⟶ S} [fr : regular_epi f] (comm : f ≫ h = g ≫ k)
(t : limits.is_colimit (limits.pushout_cocone.mk h k comm)) : regular_epi k :=
regular_of_is_pushout_snd_of_regular sorry (limits.pushout_cocone.flip_is_colimit t)
/-- A regular epimorphism is an isomorphism if it is a monomorphism. -/
def is_iso_of_regular_epi_of_mono {C : Type u₁} [category C] {X : C} {Y : C} (f : X ⟶ Y)
[regular_epi f] [m : mono f] : is_iso f :=
limits.is_iso_limit_cocone_parallel_pair_of_epi regular_epi.is_colimit
protected instance strong_epi_of_regular_epi {C : Type u₁} [category C] {X : C} {Y : C} (f : X ⟶ Y)
[regular_epi f] : strong_epi f :=
sorry
end Mathlib
|
/* rng/ranlxs.c
*
* Copyright (C) 1996, 1997, 1998, 1999, 2000 James Theiler, Brian Gough
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or (at
* your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include <config.h>
#include <stdlib.h>
#include <gsl/gsl_rng.h>
/* This is an implementation of M. Luescher's second generation
version of the RANLUX generator.
Thanks to Martin Luescher for providing information on this
generator.
*/
static unsigned long int ranlxs_get (void *vstate);
static inline double ranlxs_get_double (void *vstate);
static void ranlxs_set_lux (void *state, unsigned long int s, unsigned int luxury);
static void ranlxs0_set (void *state, unsigned long int s);
static void ranlxs1_set (void *state, unsigned long int s);
static void ranlxs2_set (void *state, unsigned long int s);
static const int next[12] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 0};
static const int snext[24] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 0};
static const double sbase = 16777216.0; /* 2^24 */
static const double sone_bit = 1.0 / 16777216.0; /* 1/2^24 */
static const double one_bit = 1.0 / 281474976710656.0; /* 1/2^48 */
static const double shift = 268435456.0; /* 2^28 */
#define RANLUX_STEP(x1,x2,i1,i2,i3) \
x1=xdbl[i1] - xdbl[i2]; \
if (x2 < 0) \
{ \
x1-=one_bit; \
x2+=1; \
} \
xdbl[i3]=x2
typedef struct
{
double xdbl[12], ydbl[12]; /* doubles first so they are 8-byte aligned */
double carry;
float xflt[24];
unsigned int ir;
unsigned int jr;
unsigned int is;
unsigned int is_old;
unsigned int pr;
}
ranlxs_state_t;
static void increment_state (ranlxs_state_t * state);
static void
increment_state (ranlxs_state_t * state)
{
int k, kmax, m;
double x, y1, y2, y3;
float *xflt = state->xflt;
double *xdbl = state->xdbl;
double *ydbl = state->ydbl;
double carry = state->carry;
unsigned int ir = state->ir;
unsigned int jr = state->jr;
for (k = 0; ir > 0; ++k)
{
y1 = xdbl[jr] - xdbl[ir];
y2 = y1 - carry;
if (y2 < 0)
{
carry = one_bit;
y2 += 1;
}
else
{
carry = 0;
}
xdbl[ir] = y2;
ir = next[ir];
jr = next[jr];
}
kmax = state->pr - 12;
for (; k <= kmax; k += 12)
{
y1 = xdbl[7] - xdbl[0];
y1 -= carry;
RANLUX_STEP (y2, y1, 8, 1, 0);
RANLUX_STEP (y3, y2, 9, 2, 1);
RANLUX_STEP (y1, y3, 10, 3, 2);
RANLUX_STEP (y2, y1, 11, 4, 3);
RANLUX_STEP (y3, y2, 0, 5, 4);
RANLUX_STEP (y1, y3, 1, 6, 5);
RANLUX_STEP (y2, y1, 2, 7, 6);
RANLUX_STEP (y3, y2, 3, 8, 7);
RANLUX_STEP (y1, y3, 4, 9, 8);
RANLUX_STEP (y2, y1, 5, 10, 9);
RANLUX_STEP (y3, y2, 6, 11, 10);
if (y3 < 0)
{
carry = one_bit;
y3 += 1;
}
else
{
carry = 0;
}
xdbl[11] = y3;
}
kmax = state->pr;
for (; k < kmax; ++k)
{
y1 = xdbl[jr] - xdbl[ir];
y2 = y1 - carry;
if (y2 < 0)
{
carry = one_bit;
y2 += 1;
}
else
{
carry = 0;
}
xdbl[ir] = y2;
ydbl[ir] = y2 + shift;
ir = next[ir];
jr = next[jr];
}
ydbl[ir] = xdbl[ir] + shift;
for (k = next[ir]; k > 0;)
{
ydbl[k] = xdbl[k] + shift;
k = next[k];
}
for (k = 0, m = 0; k < 12; ++k)
{
x = xdbl[k];
y2 = ydbl[k] - shift;
if (y2 > x)
y2 -= sone_bit;
y1 = (x - y2) * sbase;
xflt[m++] = (float) y1;
xflt[m++] = (float) y2;
}
state->ir = ir;
state->is = 2 * ir;
state->is_old = 2 * ir;
state->jr = jr;
state->carry = carry;
}
static inline double
ranlxs_get_double (void *vstate)
{
ranlxs_state_t *state = (ranlxs_state_t *) vstate;
const unsigned int is = snext[state->is];
state->is = is;
if (is == state->is_old)
increment_state (state);
return state->xflt[state->is];
}
static unsigned long int
ranlxs_get (void *vstate)
{
return ranlxs_get_double (vstate) * 16777216.0; /* 2^24 */
}
static void
ranlxs_set_lux (void *vstate, unsigned long int s, unsigned int luxury)
{
ranlxs_state_t *state = (ranlxs_state_t *) vstate;
int ibit, jbit, i, k, m, xbit[31];
double x, y;
long int seed;
if (s == 0)
s = 1; /* default seed is 1 */
seed = s;
i = seed & 0xFFFFFFFFUL;
for (k = 0; k < 31; ++k)
{
xbit[k] = i % 2;
i /= 2;
}
ibit = 0;
jbit = 18;
for (k = 0; k < 12; ++k)
{
x = 0;
for (m = 1; m <= 48; ++m)
{
y = (double) xbit[ibit];
x += x + y;
xbit[ibit] = (xbit[ibit] + xbit[jbit]) % 2;
ibit = (ibit + 1) % 31;
jbit = (jbit + 1) % 31;
}
state->xdbl[k] = one_bit * x;
}
state->carry = 0;
state->ir = 0;
state->jr = 7;
state->is = 23;
state->is_old = 0;
state->pr = luxury;
}
static void
ranlxs0_set (void *vstate, unsigned long int s)
{
ranlxs_set_lux (vstate, s, 109);
}
static void
ranlxs1_set (void *vstate, unsigned long int s)
{
ranlxs_set_lux (vstate, s, 202);
}
static void
ranlxs2_set (void *vstate, unsigned long int s)
{
ranlxs_set_lux (vstate, s, 397);
}
static const gsl_rng_type ranlxs0_type =
{"ranlxs0", /* name */
0x00ffffffUL, /* RAND_MAX */
0, /* RAND_MIN */
sizeof (ranlxs_state_t),
&ranlxs0_set,
&ranlxs_get,
&ranlxs_get_double};
static const gsl_rng_type ranlxs1_type =
{"ranlxs1", /* name */
0x00ffffffUL, /* RAND_MAX */
0, /* RAND_MIN */
sizeof (ranlxs_state_t),
&ranlxs1_set,
&ranlxs_get,
&ranlxs_get_double};
static const gsl_rng_type ranlxs2_type =
{"ranlxs2", /* name */
0x00ffffffUL, /* RAND_MAX */
0, /* RAND_MIN */
sizeof (ranlxs_state_t),
&ranlxs2_set,
&ranlxs_get,
&ranlxs_get_double};
const gsl_rng_type *gsl_rng_ranlxs0 = &ranlxs0_type;
const gsl_rng_type *gsl_rng_ranlxs1 = &ranlxs1_type;
const gsl_rng_type *gsl_rng_ranlxs2 = &ranlxs2_type;
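The inner loops of `increment_state` all perform the same subtract-with-borrow update: subtract one state element and the carry from another, and on underflow wrap by 1 and set the carry to one unit in the last place. A minimal sketch of that single step (illustrative only; the real code also handles decimation and the float conversion):

```python
def swb_step(x_j, x_i, carry, one_bit=2.0 ** -48):
    # one subtract-with-borrow step of the RANLUX recurrence:
    # y = x_j - x_i - carry; on underflow, wrap and emit the carry bit
    y = x_j - x_i - carry
    if y < 0.0:
        return y + 1.0, one_bit
    return y, 0.0
```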
|
function [ferns,hsPr] = fernsClfChangeNFerns( data, hs, ferns, Mnew, varargin )
% Change the number of ferns in a trained random fern classifier.
%
% See "Fast Keypoint Recognition in Ten Lines of Code" by Mustafa Ozuysal,
% Pascal Fua and Vincent Lepetit, CVPR07.
%
% Dimensions:
% M - number ferns
% S - fern depth
% F - number features
% N - number input vectors
% H - number classes
%
% USAGE
%  [ferns,hsPr] = fernsClfChangeNFerns( data, hs, ferns, Mnew, [varargin] )
%
% INPUTS
%  data     - [NxF] N length F feature vectors
%  hs       - [Nx1] target output labels in [1,H]
%  ferns    - previously trained fern model (see fernsClfTrain)
%  Mnew     - new number of ferns
%  varargin - additional params (struct or name/value pairs)
% .S - [10] fern depth (ferns are exponential in S)
% .M - [50] number of ferns to train
% .thrr - [0 1] range for randomly generated thresholds
% .bayes - [1] if true combine probs using bayes assumption
% .ferns - [] if given reuse previous ferns (recompute pFern)
%
% OUTPUTS
% ferns - learned fern model w the following fields
% .fids - [MxS] feature ids for each fern for each depth
% .thrs - [MxS] threshold corresponding to each fid
% .pFern - [2^SxHxM] learned log probs at fern leaves
% .bayes - if true combine probs using bayes assumption
% .inds - [NxM] cached indices for original training data
% .H - number classes
% hsPr - [Nx1] predicted output labels
%
% EXAMPLE
% N=5000; H=5; d=2; [xs0,hs0,xs1,hs1]=demoGenData(N,N,H,d,1,1);
% fernPrm=struct('S',4,'M',50,'thrr',[-1 1],'bayes',1);
% tic, [ferns,hsPr0]=fernsClfTrain(xs0,hs0,fernPrm); toc
% tic, hsPr1 = fernsClfApply( xs1, ferns ); toc
% e0=mean(hsPr0~=hs0); e1=mean(hsPr1~=hs1);
% fprintf('errors trn=%f tst=%f\n',e0,e1); figure(1);
% subplot(2,2,1); visualizeData(xs0,2,hs0);
% subplot(2,2,2); visualizeData(xs0,2,hsPr0);
% subplot(2,2,3); visualizeData(xs1,2,hs1);
% subplot(2,2,4); visualizeData(xs1,2,hsPr1);
%
% See also fernsClfApply, fernsInds
%
% Piotr's Image&Video Toolbox Version 2.50
% Copyright 2010 Piotr Dollar. [pdollar-at-caltech.edu]
% Please email me if you find bugs, or have suggestions or questions!
% Licensed under the Lesser GPL [see external/lgpl.txt]
dfs={'thrr',[0 1],'tmp',''};
[thrr,~]=getPrmDflt(varargin,dfs,1);
[Mold,S] = size(ferns.fids);
[N,F]=size(data); assert(length(hs)==N);
H=max(hs); assert(all(hs>0)); assert(S<=20);
if Mnew == Mold,
elseif Mnew < Mold,
Mremove = Mold - Mnew;
% randomly choose some ferns to remove
idx_remove = randsample(Mold,Mremove);
ferns.fids(idx_remove,:) = [];
ferns.thrs(idx_remove,:) = [];
ferns.counts(:,:,idx_remove) = [];
ferns.pFern(:,:,idx_remove) = [];
ferns.inds(:,idx_remove) = [];
else
Madd = Mnew - Mold;
% create some new ferns
thrs_add=rand(Madd,S)*(thrr(2)-thrr(1))+thrr(1);
fids_add=uint32(floor(rand(Madd,S)*F+1));
inds_add=fernsInds(data,fids_add,thrs_add);
% store new ferns
ferns.fids(Mold+1:Mnew,:) = fids_add;
ferns.thrs(Mold+1:Mnew,:) = thrs_add;
ferns.inds(:,Mold+1:Mnew) = inds_add;
% get counts for each leaf for each class for each new fern
pFern_add = nan(2^S,H,Madd);
edges = 1:2^S;
for m = 1:Madd,
for h = 1:H,
pFern_add(:,h,m) = histc(inds_add(hs==h,m),edges);
end
end
pFern_add = pFern_add + ferns.bayes;
ferns.counts(:,:,Mold+1:Mnew) = pFern_add;
% convert fern leaf class counts into probabilities
if( ferns.bayes<=0 )
norm = 1./sum(pFern_add,2);
pFern_add = bsxfun(@times,pFern_add,norm);
else
norm = 1./sum(pFern_add,1);
pFern_add = bsxfun(@times,pFern_add,norm);
pFern_add=log(pFern_add);
end
ferns.pFern(:,:,Mold+1:Mnew) = pFern_add;
clear pFern_add;
end
if(nargout==2),
hsPr=fernsClfApply([],ferns,ferns.inds);
end
end
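The count-to-probability conversion at the end of the function (the `bayes>0` branch) is add-`bayes` smoothing followed by per-class normalization over the leaves, stored as log probabilities. A compact sketch of that step for one fern (illustration, not the toolbox code):

```python
import numpy as np

def fern_leaf_log_probs(counts, bayes=1.0):
    # counts: (n_leaves, n_classes) leaf-occupancy counts for one fern.
    # Add-`bayes` smoothing, normalize each class column over the
    # leaves, then take logs (mirrors the bayes>0 branch above).
    c = np.asarray(counts, dtype=float) + bayes
    c = c / c.sum(axis=0, keepdims=True)
    return np.log(c)
```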
|
      subroutine head(mth,ndy,nyr,gs,rt,io)
c********************************************
c     Writes headers and sets date, time, version.   Version 9-27-99
c     Input:  mth, ndy, nyr - date from call, gs - start time from cpu_time
c     Output: rt - run time in seconds
c============================================
      integer mth,ndy,nyr,io
      real gs,ge,rt,rm,rh
      character*24 dat
      intent(in) mth,ndy,nyr,gs,io
      intent(out) rt
c********************************************
      write(io,'('' Version '',i2,''-'',i2,''-'',i4)') mth,ndy,nyr
c     Output time and date
      call fdate(dat)
      write(io,'('' Date & Time  '',a24)') dat
c     End time
      call cpu_time(ge)
      rt=ge-gs
c     Correct for time fold
      if(rt<0.) then
        rt=rt+86400.
      endif
c-------
      if(rt<60.) then
        write(io,'('' Elapsed time = '',f9.2,'' secs'')') rt
        write(6,'('' Elapsed time = '',f9.2,'' secs'')') rt
        return
      elseif(rt<3600.) then
        rm=rt/60.
        write(io,'('' Elapsed time = '',f9.2,'' minutes'')') rm
        write(6,'(/'' Elapsed time = '',f9.2,'' minutes'')') rm
        return
      else
        rh=rt/3600.
        write(io,'('' Elapsed time = '',f9.2,'' hours'')') rh
        write(6,'(/'' Elapsed time = '',f9.2,'' hours'')') rh
      endif
c----------
      return
      end
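The elapsed-time branching in `head` (seconds under a minute, minutes under an hour, hours otherwise, printed with `f9.2`) is easy to mirror in a few lines; a sketch for reference:

```python
def format_elapsed(rt):
    # mirrors the branching in subroutine `head`:
    # seconds / minutes / hours, formatted like Fortran f9.2
    if rt < 60.0:
        return "%9.2f secs" % rt
    elif rt < 3600.0:
        return "%9.2f minutes" % (rt / 60.0)
    return "%9.2f hours" % (rt / 3600.0)
```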
|
/*
* L2RightHandSide.cc
*
* Created on: 29.06.2017
* Author: thies
*/
#include <deal.II/base/exceptions.h>
#include <deal.II/base/work_stream.h>
#include <deal.II/fe/fe_update_flags.h>
#include <deal.II/numerics/vector_tools.h>
#include <base/DiscretizedFunction.h>
#include <forward/L2RightHandSide.h>
#include <functional>
namespace wavepi {
namespace forward {
using namespace dealii;
using namespace wavepi::base;
template <int dim>
L2RightHandSide<dim>::L2RightHandSide(std::shared_ptr<Function<dim>> f) : base_rhs(f) {}
template <int dim>
L2RightHandSide<dim>::AssemblyScratchData::AssemblyScratchData(const FiniteElement<dim> &fe,
const Quadrature<dim> &quad)
: fe_values(fe, quad, update_values | update_quadrature_points | update_JxW_values) {}
template <int dim>
L2RightHandSide<dim>::AssemblyScratchData::AssemblyScratchData(const AssemblyScratchData &scratch_data)
: fe_values(scratch_data.fe_values.get_fe(), scratch_data.fe_values.get_quadrature(),
update_values | update_quadrature_points | update_JxW_values) {}
template <int dim>
void L2RightHandSide<dim>::copy_local_to_global(Vector<double> &result, const AssemblyCopyData ©_data) {
for (unsigned int i = 0; i < copy_data.local_dof_indices.size(); ++i)
result(copy_data.local_dof_indices[i]) += copy_data.cell_rhs(i);
}
template <int dim>
std::shared_ptr<Function<dim>> L2RightHandSide<dim>::get_base_rhs() const {
return base_rhs;
}
template <int dim>
void L2RightHandSide<dim>::set_base_rhs(std::shared_ptr<Function<dim>> base_rhs) {
this->base_rhs = base_rhs;
}
template <int dim>
void L2RightHandSide<dim>::local_assemble(const Vector<double> &f,
const typename DoFHandler<dim>::active_cell_iterator &cell,
AssemblyScratchData &scratch_data, AssemblyCopyData ©_data) {
const unsigned int dofs_per_cell = scratch_data.fe_values.get_fe().dofs_per_cell;
const unsigned int n_q_points = scratch_data.fe_values.get_quadrature().size();
copy_data.cell_rhs.reinit(dofs_per_cell);
copy_data.local_dof_indices.resize(dofs_per_cell);
scratch_data.fe_values.reinit(cell);
cell->get_dof_indices(copy_data.local_dof_indices);
for (unsigned int q_point = 0; q_point < n_q_points; ++q_point)
for (unsigned int i = 0; i < dofs_per_cell; ++i)
for (unsigned int k = 0; k < dofs_per_cell; ++k)
copy_data.cell_rhs(i) += f[copy_data.local_dof_indices[k]] * scratch_data.fe_values.shape_value(k, q_point) *
scratch_data.fe_values.shape_value(i, q_point) * scratch_data.fe_values.JxW(q_point);
}
template <int dim>
void L2RightHandSide<dim>::create_right_hand_side(const DoFHandler<dim> &dof, const Quadrature<dim> &quad,
Vector<double> &rhs) const {
AssertThrow(base_rhs, ExcInternalError());
base_rhs->set_time(this->get_time());
auto base_rhs_d = std::dynamic_pointer_cast<DiscretizedFunction<dim>>(base_rhs);
if (base_rhs_d) {
Vector<double> coeffs = base_rhs_d->get_function_coefficients(base_rhs_d->get_time_index());
Assert(coeffs.size() == dof.n_dofs(), ExcDimensionMismatch(coeffs.size(), dof.n_dofs()));
WorkStream::run(dof.begin_active(), dof.end(),
std::bind(&L2RightHandSide<dim>::local_assemble, *this, std::ref(coeffs), std::placeholders::_1,
std::placeholders::_2, std::placeholders::_3),
std::bind(&L2RightHandSide<dim>::copy_local_to_global, *this, std::ref(rhs), std::placeholders::_1),
AssemblyScratchData(dof.get_fe(), quad), AssemblyCopyData());
} else
VectorTools::create_right_hand_side(dof, quad, *base_rhs.get(), rhs);
}
template class L2RightHandSide<1>;
template class L2RightHandSide<2>;
template class L2RightHandSide<3>;
} /* namespace forward */
} /* namespace wavepi */
|
C Copyright(C) 2009-2017 National Technology & Engineering Solutions of
C Sandia, LLC (NTESS). Under the terms of Contract DE-NA0003525 with
C NTESS, the U.S. Government retains certain rights in this software.
C
C Redistribution and use in source and binary forms, with or without
C modification, are permitted provided that the following conditions are
C met:
C
C * Redistributions of source code must retain the above copyright
C notice, this list of conditions and the following disclaimer.
C
C * Redistributions in binary form must reproduce the above
C copyright notice, this list of conditions and the following
C disclaimer in the documentation and/or other materials provided
C with the distribution.
C * Neither the name of NTESS nor the names of its
C contributors may be used to endorse or promote products derived
C from this software without specific prior written permission.
C
C THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
C "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
C LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
C A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
C OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
C SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
C LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
C DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
C THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
C (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
C OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
C=======================================================================
SUBROUTINE TPPLOT (NEUTRL, MAXPTS, NPTS, TIMLIM, PLTVAL,
& NAMES, BLKCOL, MAPEL, MAPND)
C=======================================================================
C --*** TPPLOT *** (TPLOT) Plot the curves
C -- Written by Amy Gilkey - revised 01/22/88
C --
C --TPPLOT does all the plotting for a set, including labeling.
C --It also calculates the scaling information for each curve.
C --
C --Parameters:
C -- NEUTRL - IN - the type of neutral file to write.
C -- MAXPTS - IN - the maximum number of points on a curve
C -- NPTS - IN - the number of points on each curve
C -- TIMLIM - IN - the starting and ending time
C -- PLTVAL - IN - the plot data;
C -- PLTVAL(x,NTPVAR+1) holds the times if TIMPLT
C -- PLTVAL(x,NTPVAR+2) holds the compressed times if TIMPLT and needed
C -- NAMES - IN - the variable names
C -- BLKCOL - IN/OUT - the user selected colors of the element blocks.
C -- BLKCOL(0) = 1 if the user defined material
C -- colors should be used in mesh plots.
C -- = -1 if program selected colors should
C -- be used.
C -- BLKCOL(i) = the user selected color of element
C -- block i:
C -- -2 - no color selected by user.
C -- -1 - black
C -- 0 - white
C -- 1 - red
C -- 2 - green
C -- 3 - yellow
C -- 4 - blue
C -- 5 - cyan
C -- 6 - magenta
C --
C --Common Variables:
C -- Uses NTPCRV, NTPVAR, TIMPLT of /TPVARS/
C -- Uses DOGRID, LINTYP, ISYTYP, OVERLY of /XYOPT/
C -- Sets XMIN, XMAX, YMIN, YMAX of /XYLIM/
PARAMETER (NUMSYM = 6, NUMLIN = 6)
include 'params.blk'
include 'neutral.blk'
include 'dbnums.blk'
include 'tpvars.blk'
include 'xyopt.blk'
include 'xylim.blk'
INTEGER NPTS(NTPVAR)
REAL TIMLIM(2)
REAL PLTVAL(MAXPTS,NTPVAR+2)
CHARACTER*(*) NAMES(*)
INTEGER BLKCOL(0:NELBLK)
INTEGER MAPEL(*), MAPND(*)
LOGICAL GRABRT, gobck
LOGICAL SCACRV
LOGICAL NUMCRV
LOGICAL DOLEG
CHARACTER*1024 PLTITL
CHARACTER*1024 TXLAB, TYLAB
LOGICAL SVOVER
CHARACTER*8 SVLSID
C --Save user-set parameters and set program parameters (to eliminate
C --checks for senseless conditions)
SVOVER = OVERLY
SVLSID = LABSID
IF ((NEUTRL .NE. 0) .OR. (NTPCRV .LE. 1)) OVERLY = .FALSE.
IF (IXSCAL .NE. 'SET') THEN
IXSCAL = IAXSCA
IF (TIMPLT) THEN
IXSCAL = 'ALL'
ELSE IF (NEUTRL .NE. 0) THEN
IXSCAL = 'CURVE'
ELSE IF (NTPCRV .EQ. 1) THEN
IXSCAL = 'ALL'
ELSE IF (OVERLY) THEN
IF (IXSCAL .EQ. 'PLOT') IXSCAL = 'ALL'
ELSE
IF (IXSCAL .EQ. 'CURVE') IXSCAL = 'PLOT'
END IF
END IF
IF (IYSCAL .NE. 'SET') THEN
IYSCAL = IAXSCA
IF (NEUTRL .NE. 0) THEN
IYSCAL = 'CURVE'
ELSE IF (NTPCRV .EQ. 1) THEN
IYSCAL = 'ALL'
ELSE IF (OVERLY) THEN
IF (IYSCAL .EQ. 'PLOT') IYSCAL = 'ALL'
ELSE
IF (IYSCAL .EQ. 'CURVE') IYSCAL = 'PLOT'
END IF
END IF
IF ((.NOT. OVERLY)
& .OR. ((ISYTYP .LT. 0) .AND. (NUMSYM .GE. NTPCRV))
& .OR. ((LINTYP .LT. 0) .AND. (NUMLIN .GE. NTPCRV)))
& LABSID = 'NONE'
NUMCRV = (LABSID .NE. 'NONE')
C --Calculate axis limits if same scale
IF (IXSCAL .EQ. 'ALL') THEN
IF (TIMPLT) THEN
CALL MINMAX (MAXPTS, PLTVAL(1,NTPVAR+1), XMIN, XMAX)
ELSE
CALL CRVLIM ('X', TIMPLT, MAXPTS, NPTS, 1, NTPVAR, PLTVAL)
END IF
IF (NEUTRL .EQ. 0)
* CALL EXPMAX (LABSID, XMIN, XMAX)
END IF
IF (IYSCAL .EQ. 'ALL') THEN
CALL CRVLIM ('Y', TIMPLT, MAXPTS, NPTS, 1, NTPVAR, PLTVAL)
IF (NEUTRL .EQ. 0)
* CALL EXPMAX (' ', YMIN, YMAX)
END IF
SCACRV = (IXSCAL .EQ. 'CURVE') .OR. (IYSCAL .EQ. 'CURVE')
C --Label plot if overlaid
100 CONTINUE
IF (OVERLY) THEN
CALL TPLAB (1, NTPCRV, NUMCRV, TIMLIM, NAMES,
& TXLAB, TYLAB, BLKCOL, MAPEL, MAPND, *130)
IF (.NOT. SCACRV)
& CALL XYAXIS (0, DOGRID, TXLAB, TYLAB, BLKCOL, *130)
END IF
gobck = .false.
if (neutrl .ne. csv .and. neutrl .ne. raw) then
N = 1
np = 1
120 continue
IF (TIMPLT) THEN
IF (NPTS(N) .EQ. MAXPTS) THEN
NX = NTPVAR+1
ELSE
NX = NTPVAR+2
END IF
NY = N
ELSE
NX = N
NY = N+1
END IF
C --Calculate min/max if needed
IF ((IXSCAL .EQ. 'PLOT') .OR. (IXSCAL .EQ. 'CURVE')) THEN
CALL CRVLIM ('X', TIMPLT, MAXPTS, NPTS, N, NY, PLTVAL)
IF (NEUTRL .EQ. 0)
* CALL EXPMAX (LABSID, XMIN, XMAX)
END IF
IF ((IYSCAL .EQ. 'PLOT') .OR. (IYSCAL .EQ. 'CURVE')) THEN
CALL CRVLIM ('Y', TIMPLT, MAXPTS, NPTS, N, NY, PLTVAL)
IF (NEUTRL .EQ. 0)
* CALL EXPMAX (' ', YMIN, YMAX)
END IF
IF (OVERLY) THEN
IF (SCACRV)
& CALL XYAXIS (NP, DOGRID, TXLAB, TYLAB, BLKCOL, *130)
END IF
IF (NEUTRL .EQ. 0) THEN
C --Label plot if needed
110 CONTINUE
IF (.NOT. OVERLY) THEN
CALL TPLAB (N, 1, NUMCRV, TIMLIM, NAMES,
& TXLAB, TYLAB, BLKCOL, MAPEL, MAPND, *130)
CALL XYAXIS (0, DOGRID, TXLAB, TYLAB, BLKCOL, *130)
END IF
IF (GRABRT()) GOTO 130
IF (OVERLY) THEN
CALL GRCOLR (NP)
CALL GRSYMB (LINTYP, ISYTYP, NP)
ELSE
CALL GRCOLR (1)
CALL GRSYMB (LINTYP, ISYTYP, 1)
END IF
C --Plot variable against time or variable against variable
IF (GRABRT()) GOTO 130
CALL PLTCUR (PLTVAL(1,NX), PLTVAL(1,NY), NPTS(N))
IF (NUMCRV) THEN
IF (GRABRT()) GOTO 130
CALL GRNCRV (LABSID, NP, NPTS(N),
& PLTVAL(1,NX), PLTVAL(1,NY), (LINTYP .EQ. 0))
END IF
C --Finish plot
IF (OVERLY) THEN
CALL PLTFLU
gobck = .false.
END IF
IF (.NOT. OVERLY) THEN
C --Set color in case text is requested
CALL UGRCOL (0, BLKCOL)
gobck = .true.
CALL GRPEND (.TRUE., .TRUE., NP, NTPCRV, GOBCK,
$ *110, *130)
END IF
ELSE
gobck = .false.
C --Get plot labels
CALL TPLABN (N, TIMLIM, NAMES, PLTITL, TXLAB, TYLAB,
* MAPEL, MAPND)
C --Plot variable against time or variable against variable
IF (NEUTRL .EQ. XMGR) THEN
CALL WRTNEU (NPTS(N), PLTVAL(1,NX), PLTVAL(1,NY),
& PLTITL, TXLAB, TYLAB)
ELSE IF (NEUTRL .EQ. GRAF) THEN
CALL GRFNEU (NPTS(N), PLTVAL(1,NX), PLTVAL(1,NY),
& PLTITL, TXLAB, TYLAB)
END IF
END IF
if (gobck) then
n = ny -1
np = np - 1
if (n .lt. 1) n = 1
if (np .lt. 1) np = 1
else
N = NY+1
np = np + 1
end if
if (np .le. ntpcrv) go to 120
else
if (neutrl .eq. csv) then
doleg = .true.
else
doleg = .false.
end if
C ... CSV format neutral file selected...
CALL TPLABN (1, TIMLIM, NAMES, PLTITL, TXLAB, TYLAB,
* MAPEL, MAPND)
call wrtcsv(ntpcrv, maxpts, npts, pltval, txlab, names, doleg,
* MAPEL, MAPND)
end if
C --Finish overlaid plot
IF (OVERLY) THEN
C --Set color in case text is requested
CALL UGRCOL (0, BLKCOL)
CALL GRPEND (.TRUE., .TRUE., 0, 0, .FALSE., *100, *130)
END IF
130 CONTINUE
C --Restore user-set parameters
OVERLY = SVOVER
LABSID = SVLSID
IF (IXSCAL .NE. 'SET') IXSCAL = IAXSCA
IF (IYSCAL .NE. 'SET') IYSCAL = IAXSCA
RETURN
END
|
/-
Copyright (c) 2018 Simon Hudon. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Simon Hudon
-/
import control.traversable.lemmas
import logic.equiv.basic
/-!
# Transferring `traversable` instances along isomorphisms
This file allows to transfer `traversable` instances along isomorphisms.
## Main declarations
* `equiv.map`: Turns functorially a function `α → β` into a function `t' α → t' β` using the functor
`t` and the equivalence `Π α, t α ≃ t' α`.
* `equiv.functor`: `equiv.map` as a functor.
* `equiv.traverse`: Turns traversably a function `α → m β` into a function `t' α → m (t' β)` using
the traversable functor `t` and the equivalence `Π α, t α ≃ t' α`.
* `equiv.traversable`: `equiv.traverse` as a traversable functor.
* `equiv.is_lawful_traversable`: `equiv.traverse` as a lawful traversable functor.
-/
universes u
namespace equiv
section functor
parameters {t t' : Type u → Type u}
parameters (eqv : Π α, t α ≃ t' α)
variables [functor t]
open functor
/-- Given a functor `t`, a function `t' : Type u → Type u`, and
equivalences `t α ≃ t' α` for all `α`, then every function `α → β` can
be mapped to a function `t' α → t' β` functorially (see
`equiv.functor`). -/
protected def map {α β : Type u} (f : α → β) (x : t' α) : t' β :=
eqv β $ map f ((eqv α).symm x)
/-- The function `equiv.map` transfers the functoriality of `t` to
`t'` using the equivalences `eqv`. -/
protected def functor : functor t' :=
{ map := @equiv.map _ }
variables [is_lawful_functor t]
protected lemma id_map {α : Type u} (x : t' α) : equiv.map id x = x :=
by simp [equiv.map, id_map]
protected lemma comp_map {α β γ : Type u} (g : α → β) (h : β → γ) (x : t' α) :
equiv.map (h ∘ g) x = equiv.map h (equiv.map g x) :=
by simp [equiv.map]; apply comp_map
protected lemma is_lawful_functor : @is_lawful_functor _ equiv.functor :=
{ id_map := @equiv.id_map _ _,
comp_map := @equiv.comp_map _ _ }
protected lemma is_lawful_functor' [F : _root_.functor t']
(h₀ : ∀ {α β} (f : α → β), _root_.functor.map f = equiv.map f)
(h₁ : ∀ {α β} (f : β), _root_.functor.map_const f = (equiv.map ∘ function.const α) f) :
_root_.is_lawful_functor t' :=
begin
have : F = equiv.functor,
{ casesI F, dsimp [equiv.functor],
congr; ext; [rw ← h₀, rw ← h₁] },
substI this,
exact equiv.is_lawful_functor
end
end functor
section traversable
parameters {t t' : Type u → Type u}
parameters (eqv : Π α, t α ≃ t' α)
variables [traversable t]
variables {m : Type u → Type u} [applicative m]
variables {α β : Type u}
/-- Like `equiv.map`, a function `t' : Type u → Type u` can be given
the structure of a traversable functor using a traversable functor
`t'` and equivalences `t α ≃ t' α` for all α. See `equiv.traversable`. -/
protected def traverse (f : α → m β) (x : t' α) : m (t' β) :=
eqv β <$> traverse f ((eqv α).symm x)
/-- The function `equiv.traverse` transfers a traversable functor
instance across the equivalences `eqv`. -/
protected def traversable : traversable t' :=
{ to_functor := equiv.functor eqv,
traverse := @equiv.traverse _ }
end traversable
section equiv
parameters {t t' : Type u → Type u}
parameters (eqv : Π α, t α ≃ t' α)
variables [traversable t] [is_lawful_traversable t]
variables {F G : Type u → Type u} [applicative F] [applicative G]
variables [is_lawful_applicative F] [is_lawful_applicative G]
variables (η : applicative_transformation F G)
variables {α β γ : Type u}
open is_lawful_traversable functor
protected lemma id_traverse (x : t' α) :
equiv.traverse eqv id.mk x = x :=
by simp! [equiv.traverse,id_bind,id_traverse,functor.map] with functor_norm
protected lemma traverse_eq_map_id (f : α → β) (x : t' α) :
equiv.traverse eqv (id.mk ∘ f) x = id.mk (equiv.map eqv f x) :=
by simp [equiv.traverse, traverse_eq_map_id] with functor_norm; refl
protected lemma comp_traverse (f : β → F γ) (g : α → G β) (x : t' α) :
equiv.traverse eqv (comp.mk ∘ functor.map f ∘ g) x =
comp.mk (equiv.traverse eqv f <$> equiv.traverse eqv g x) :=
by simp [equiv.traverse,comp_traverse] with functor_norm; congr; ext; simp
protected lemma naturality (f : α → F β) (x : t' α) :
η (equiv.traverse eqv f x) = equiv.traverse eqv (@η _ ∘ f) x :=
by simp only [equiv.traverse] with functor_norm
/-- The fact that `t` is a lawful traversable functor carries over the
equivalences to `t'`, with the traversable functor structure given by
`equiv.traversable`. -/
protected def is_lawful_traversable : @is_lawful_traversable t' (equiv.traversable eqv) :=
{ to_is_lawful_functor := @equiv.is_lawful_functor _ _ eqv _ _,
id_traverse := @equiv.id_traverse _ _,
comp_traverse := @equiv.comp_traverse _ _,
traverse_eq_map_id := @equiv.traverse_eq_map_id _ _,
naturality := @equiv.naturality _ _ }
/-- If the `traversable t'` instance has the properties that `map`,
`map_const`, and `traverse` are equal to the ones that come from
carrying the traversable functor structure from `t` over the
equivalences, then the fact that `t` is a lawful traversable functor
carries over as well. -/
protected def is_lawful_traversable' [_i : traversable t']
(h₀ : ∀ {α β} (f : α → β),
map f = equiv.map eqv f)
(h₁ : ∀ {α β} (f : β),
map_const f = (equiv.map eqv ∘ function.const α) f)
(h₂ : ∀ {F : Type u → Type u} [applicative F],
by exactI ∀ [is_lawful_applicative F]
{α β} (f : α → F β),
traverse f = equiv.traverse eqv f) :
_root_.is_lawful_traversable t' :=
begin
-- we can't use the same approach as for `is_lawful_functor'` because
-- h₂ needs a `is_lawful_applicative` assumption
refine {to_is_lawful_functor :=
equiv.is_lawful_functor' eqv @h₀ @h₁, ..}; introsI,
{ rw [h₂, equiv.id_traverse], apply_instance },
{ rw [h₂, equiv.comp_traverse f g x, h₂], congr,
rw [h₂], all_goals { apply_instance } },
{ rw [h₂, equiv.traverse_eq_map_id, h₀]; apply_instance },
{ rw [h₂, equiv.naturality, h₂]; apply_instance }
end
end equiv
end equiv
|
//==================================================================================================
/*!
@file
@copyright 2016 NumScale SAS
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE.md or copy at http://boost.org/LICENSE_1_0.txt)
*/
//==================================================================================================
#ifndef BOOST_SIMD_ARCH_COMMON_SIMD_FUNCTION_FREXP_HPP_INCLUDED
#define BOOST_SIMD_ARCH_COMMON_SIMD_FUNCTION_FREXP_HPP_INCLUDED
#include <boost/simd/detail/overload.hpp>
#include <boost/simd/meta/hierarchy/simd.hpp>
#include <boost/simd/detail/constant/limitexponent.hpp>
#include <boost/simd/detail/constant/mask1frexp.hpp>
#include <boost/simd/detail/constant/mask2frexp.hpp>
#include <boost/simd/detail/constant/maxexponentm1.hpp>
#include <boost/simd/constant/nbmantissabits.hpp>
#include <boost/simd/function/bitwise_and.hpp>
#include <boost/simd/function/bitwise_andnot.hpp>
#include <boost/simd/function/bitwise_cast.hpp>
#include <boost/simd/function/bitwise_or.hpp>
#include <boost/simd/function/if_else_zero.hpp>
#include <boost/simd/function/is_greater.hpp>
#include <boost/simd/function/is_nez.hpp>
#include <boost/simd/function/logical_notand.hpp>
#include <boost/simd/function/minus.hpp>
#include <boost/simd/function/multiplies.hpp>
#include <boost/simd/function/if_plus.hpp>
#include <boost/simd/function/shr.hpp>
#include <boost/simd/detail/dispatch/meta/as_integer.hpp>
#include <utility>
#ifndef BOOST_SIMD_NO_DENORMALS
#include <boost/simd/function/is_less.hpp>
#include <boost/simd/function/is_nez.hpp>
#include <boost/simd/function/if_else.hpp>
#include <boost/simd/function/abs.hpp>
#include <boost/simd/function/logical_and.hpp>
#include <boost/simd/constant/twotonmb.hpp>
#include <boost/simd/constant/smallestposval.hpp>
#endif
namespace boost { namespace simd { namespace ext
{
namespace bd = boost::dispatch;
namespace bs = boost::simd;
BOOST_DISPATCH_OVERLOAD(frexp_
, (typename A0, typename X)
, bd::cpu_
, bs::pack_<bd::floating_<A0>, X>
)
{
using i_t = bd::as_integer_t<A0, signed>;
BOOST_FORCEINLINE std::pair<A0,i_t> operator()(A0 const& a0) const
{
A0 r0;
i_t r1;
using s_type = bd::scalar_of_t<A0>;
#ifndef BOOST_SIMD_NO_DENORMALS
auto test = logical_and(is_less(bs::abs(a0), Smallestposval<A0>()), is_nez(a0));
A0 aa0 = if_else(test, Twotonmb<A0>()*a0, a0);
i_t t = if_else_zero(test,Nbmantissabits<A0>());
#else
A0 aa0 = a0;
#endif
r1 = simd::bitwise_cast<i_t>(bitwise_and(aa0, Mask1frexp<A0>())); //extract exp.
A0 x = bitwise_andnot(aa0, Mask1frexp<A0>());
r1 = shr(r1,Nbmantissabits<s_type>()) - Maxexponentm1<A0>();
r0 = bitwise_or(x,Mask2frexp<A0>());
auto test0 = is_nez(aa0);
auto test1 = is_greater(r1,Limitexponent<A0>());
r1 = if_else_zero(logical_notand(test1, test0), r1);
#ifndef BOOST_SIMD_NO_DENORMALS
r1 -= t ;
#endif
r0 = if_else_zero(test0, if_plus(test1,r0,aa0));
return {r0, r1};
}
};
BOOST_DISPATCH_OVERLOAD( frexp_
, (typename A0, typename X)
, bd::cpu_
, boost::simd::fast_tag
, bs::pack_< bd::floating_<A0>, X>
)
{
using i_t = bd::as_integer_t<A0, signed>;
using sA0 = bd::scalar_of_t<A0>;
BOOST_FORCEINLINE std::pair<A0,i_t> operator() (const fast_tag &
, A0 const& a0 ) const BOOST_NOEXCEPT
{
i_t r1 = bitwise_cast<i_t>(bitwise_and(Mask1frexp<A0>(), a0));
A0 x = bitwise_andnot(a0, Mask1frexp<A0>());
return {bitwise_or(x,Mask2frexp<A0>()), shr(r1,Nbmantissabits<sA0>()) - Maxexponentm1<A0>()};
}
};
} } }
#endif
|
/-
Copyright (c) 2020 Yury Kudryashov. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Simon Hudon, Patrick Massot, Yury Kudryashov
-/
import algebra.group.opposite
/-!
# Monoid, group etc structures on `M × N`
In this file we define one-binop (`monoid`, `group` etc) structures on `M × N`. We also prove
trivial `simp` lemmas, and define the following operations on `monoid_hom`s:
* `fst M N : M × N →* M`, `snd M N : M × N →* N`: projections `prod.fst` and `prod.snd`
as `monoid_hom`s;
* `inl M N : M →* M × N`, `inr M N : N →* M × N`: inclusions of first/second monoid
into the product;
* `f.prod g : M →* N × P`: sends `x` to `(f x, g x)`;
* `f.coprod g : M × N →* P`: sends `(x, y)` to `f x * g y`;
* `f.prod_map g : M × N → M' × N'`: `prod.map f g` as a `monoid_hom`,
sends `(x, y)` to `(f x, g y)`.
-/
variables {A : Type*} {B : Type*} {G : Type*} {H : Type*} {M : Type*} {N : Type*} {P : Type*}
namespace prod
@[to_additive]
instance [has_mul M] [has_mul N] : has_mul (M × N) := ⟨λ p q, ⟨p.1 * q.1, p.2 * q.2⟩⟩
@[simp, to_additive]
lemma fst_mul [has_mul M] [has_mul N] (p q : M × N) : (p * q).1 = p.1 * q.1 := rfl
@[simp, to_additive]
lemma snd_mul [has_mul M] [has_mul N] (p q : M × N) : (p * q).2 = p.2 * q.2 := rfl
@[simp, to_additive]
lemma mk_mul_mk [has_mul M] [has_mul N] (a₁ a₂ : M) (b₁ b₂ : N) :
(a₁, b₁) * (a₂, b₂) = (a₁ * a₂, b₁ * b₂) := rfl
@[to_additive]
lemma mul_def [has_mul M] [has_mul N] (p q : M × N) : p * q = (p.1 * q.1, p.2 * q.2) := rfl
@[to_additive]
instance [has_one M] [has_one N] : has_one (M × N) := ⟨(1, 1)⟩
@[simp, to_additive]
lemma fst_one [has_one M] [has_one N] : (1 : M × N).1 = 1 := rfl
@[simp, to_additive]
lemma snd_one [has_one M] [has_one N] : (1 : M × N).2 = 1 := rfl
@[to_additive]
lemma one_eq_mk [has_one M] [has_one N] : (1 : M × N) = (1, 1) := rfl
@[simp, to_additive]
lemma mk_eq_one [has_one M] [has_one N] {x : M} {y : N} : (x, y) = 1 ↔ x = 1 ∧ y = 1 :=
mk.inj_iff
@[to_additive]
lemma fst_mul_snd [mul_one_class M] [mul_one_class N] (p : M × N) :
(p.fst, 1) * (1, p.snd) = p :=
ext (mul_one p.1) (one_mul p.2)
@[to_additive]
instance [has_inv M] [has_inv N] : has_inv (M × N) := ⟨λp, (p.1⁻¹, p.2⁻¹)⟩
@[simp, to_additive]
lemma fst_inv [has_inv G] [has_inv H] (p : G × H) : (p⁻¹).1 = (p.1)⁻¹ := rfl
@[simp, to_additive]
lemma snd_inv [has_inv G] [has_inv H] (p : G × H) : (p⁻¹).2 = (p.2)⁻¹ := rfl
@[simp, to_additive]
lemma inv_mk [has_inv G] [has_inv H] (a : G) (b : H) : (a, b)⁻¹ = (a⁻¹, b⁻¹) := rfl
@[to_additive]
instance [has_div M] [has_div N] : has_div (M × N) := ⟨λ p q, ⟨p.1 / q.1, p.2 / q.2⟩⟩
@[simp] lemma fst_sub [add_group A] [add_group B] (a b : A × B) : (a - b).1 = a.1 - b.1 := rfl
@[simp] lemma snd_sub [add_group A] [add_group B] (a b : A × B) : (a - b).2 = a.2 - b.2 := rfl
@[simp] lemma mk_sub_mk [add_group A] [add_group B] (x₁ x₂ : A) (y₁ y₂ : B) :
(x₁, y₁) - (x₂, y₂) = (x₁ - x₂, y₁ - y₂) := rfl
instance [mul_zero_class M] [mul_zero_class N] : mul_zero_class (M × N) :=
{ zero_mul := assume a, prod.rec_on a $ λa b, mk.inj_iff.mpr ⟨zero_mul _, zero_mul _⟩,
mul_zero := assume a, prod.rec_on a $ λa b, mk.inj_iff.mpr ⟨mul_zero _, mul_zero _⟩,
.. prod.has_zero, .. prod.has_mul }
@[to_additive]
instance [semigroup M] [semigroup N] : semigroup (M × N) :=
{ mul_assoc := assume a b c, mk.inj_iff.mpr ⟨mul_assoc _ _ _, mul_assoc _ _ _⟩,
.. prod.has_mul }
instance [semigroup_with_zero M] [semigroup_with_zero N] : semigroup_with_zero (M × N) :=
{ .. prod.mul_zero_class, .. prod.semigroup }
@[to_additive]
instance [mul_one_class M] [mul_one_class N] : mul_one_class (M × N) :=
{ one_mul := assume a, prod.rec_on a $ λa b, mk.inj_iff.mpr ⟨one_mul _, one_mul _⟩,
mul_one := assume a, prod.rec_on a $ λa b, mk.inj_iff.mpr ⟨mul_one _, mul_one _⟩,
.. prod.has_mul, .. prod.has_one }
@[to_additive]
instance [monoid M] [monoid N] : monoid (M × N) :=
{ npow := λ z a, ⟨monoid.npow z a.1, monoid.npow z a.2⟩,
npow_zero' := λ z, ext (monoid.npow_zero' _) (monoid.npow_zero' _),
npow_succ' := λ z a, ext (monoid.npow_succ' _ _) (monoid.npow_succ' _ _),
.. prod.semigroup, .. prod.mul_one_class }
@[to_additive]
instance [div_inv_monoid G] [div_inv_monoid H] : div_inv_monoid (G × H) :=
{ div_eq_mul_inv := λ a b, mk.inj_iff.mpr ⟨div_eq_mul_inv _ _, div_eq_mul_inv _ _⟩,
zpow := λ z a, ⟨div_inv_monoid.zpow z a.1, div_inv_monoid.zpow z a.2⟩,
zpow_zero' := λ z, ext (div_inv_monoid.zpow_zero' _) (div_inv_monoid.zpow_zero' _),
zpow_succ' := λ z a, ext (div_inv_monoid.zpow_succ' _ _) (div_inv_monoid.zpow_succ' _ _),
zpow_neg' := λ z a, ext (div_inv_monoid.zpow_neg' _ _) (div_inv_monoid.zpow_neg' _ _),
.. prod.monoid, .. prod.has_inv, .. prod.has_div }
@[to_additive]
instance [group G] [group H] : group (G × H) :=
{ mul_left_inv := assume a, mk.inj_iff.mpr ⟨mul_left_inv _, mul_left_inv _⟩,
.. prod.div_inv_monoid }
@[to_additive]
instance [comm_semigroup G] [comm_semigroup H] : comm_semigroup (G × H) :=
{ mul_comm := assume a b, mk.inj_iff.mpr ⟨mul_comm _ _, mul_comm _ _⟩,
.. prod.semigroup }
@[to_additive]
instance [left_cancel_semigroup G] [left_cancel_semigroup H] :
left_cancel_semigroup (G × H) :=
{ mul_left_cancel := λ a b c h, prod.ext (mul_left_cancel (prod.ext_iff.1 h).1)
(mul_left_cancel (prod.ext_iff.1 h).2),
.. prod.semigroup }
@[to_additive]
instance [right_cancel_semigroup G] [right_cancel_semigroup H] :
right_cancel_semigroup (G × H) :=
{ mul_right_cancel := λ a b c h, prod.ext (mul_right_cancel (prod.ext_iff.1 h).1)
(mul_right_cancel (prod.ext_iff.1 h).2),
.. prod.semigroup }
@[to_additive]
instance [left_cancel_monoid M] [left_cancel_monoid N] : left_cancel_monoid (M × N) :=
{ .. prod.left_cancel_semigroup, .. prod.monoid }
@[to_additive]
instance [right_cancel_monoid M] [right_cancel_monoid N] : right_cancel_monoid (M × N) :=
{ .. prod.right_cancel_semigroup, .. prod.monoid }
@[to_additive]
instance [cancel_monoid M] [cancel_monoid N] : cancel_monoid (M × N) :=
{ .. prod.right_cancel_monoid, .. prod.left_cancel_monoid }
@[to_additive]
instance [comm_monoid M] [comm_monoid N] : comm_monoid (M × N) :=
{ .. prod.comm_semigroup, .. prod.monoid }
@[to_additive]
instance [cancel_comm_monoid M] [cancel_comm_monoid N] : cancel_comm_monoid (M × N) :=
{ .. prod.left_cancel_monoid, .. prod.comm_monoid }
instance [mul_zero_one_class M] [mul_zero_one_class N] : mul_zero_one_class (M × N) :=
{ .. prod.mul_zero_class, .. prod.mul_one_class }
instance [monoid_with_zero M] [monoid_with_zero N] : monoid_with_zero (M × N) :=
{ .. prod.monoid, .. prod.mul_zero_one_class }
instance [comm_monoid_with_zero M] [comm_monoid_with_zero N] : comm_monoid_with_zero (M × N) :=
{ .. prod.comm_monoid, .. prod.monoid_with_zero }
@[to_additive]
instance [comm_group G] [comm_group H] : comm_group (G × H) :=
{ .. prod.comm_semigroup, .. prod.group }
end prod
namespace monoid_hom
variables (M N) [mul_one_class M] [mul_one_class N]
/-- Given monoids `M`, `N`, the natural projection homomorphism from `M × N` to `M`.-/
@[to_additive "Given additive monoids `A`, `B`, the natural projection homomorphism
from `A × B` to `A`"]
def fst : M × N →* M := ⟨prod.fst, rfl, λ _ _, rfl⟩
/-- Given monoids `M`, `N`, the natural projection homomorphism from `M × N` to `N`.-/
@[to_additive "Given additive monoids `A`, `B`, the natural projection homomorphism
from `A × B` to `B`"]
def snd : M × N →* N := ⟨prod.snd, rfl, λ _ _, rfl⟩
/-- Given monoids `M`, `N`, the natural inclusion homomorphism from `M` to `M × N`. -/
@[to_additive "Given additive monoids `A`, `B`, the natural inclusion homomorphism
from `A` to `A × B`."]
def inl : M →* M × N :=
⟨λ x, (x, 1), rfl, λ _ _, prod.ext rfl (one_mul 1).symm⟩
/-- Given monoids `M`, `N`, the natural inclusion homomorphism from `N` to `M × N`. -/
@[to_additive "Given additive monoids `A`, `B`, the natural inclusion homomorphism
from `B` to `A × B`."]
def inr : N →* M × N :=
⟨λ y, (1, y), rfl, λ _ _, prod.ext (one_mul 1).symm rfl⟩
variables {M N}
@[simp, to_additive] lemma coe_fst : ⇑(fst M N) = prod.fst := rfl
@[simp, to_additive] lemma coe_snd : ⇑(snd M N) = prod.snd := rfl
@[simp, to_additive] lemma inl_apply (x) : inl M N x = (x, 1) := rfl
@[simp, to_additive] lemma inr_apply (y) : inr M N y = (1, y) := rfl
@[simp, to_additive] lemma fst_comp_inl : (fst M N).comp (inl M N) = id M := rfl
@[simp, to_additive] lemma snd_comp_inl : (snd M N).comp (inl M N) = 1 := rfl
@[simp, to_additive] lemma fst_comp_inr : (fst M N).comp (inr M N) = 1 := rfl
@[simp, to_additive] lemma snd_comp_inr : (snd M N).comp (inr M N) = id N := rfl
section prod
variable [mul_one_class P]
/-- Combine two `monoid_hom`s `f : M →* N`, `g : M →* P` into `f.prod g : M →* N × P`
given by `(f.prod g) x = (f x, g x)` -/
@[to_additive prod "Combine two `add_monoid_hom`s `f : M →+ N`, `g : M →+ P` into
`f.prod g : M →+ N × P` given by `(f.prod g) x = (f x, g x)`"]
protected def prod (f : M →* N) (g : M →* P) : M →* N × P :=
{ to_fun := λ x, (f x, g x),
map_one' := prod.ext f.map_one g.map_one,
map_mul' := λ x y, prod.ext (f.map_mul x y) (g.map_mul x y) }
@[simp, to_additive prod_apply]
lemma prod_apply (f : M →* N) (g : M →* P) (x) : f.prod g x = (f x, g x) := rfl
@[simp, to_additive fst_comp_prod]
lemma fst_comp_prod (f : M →* N) (g : M →* P) : (fst N P).comp (f.prod g) = f :=
ext $ λ x, rfl
@[simp, to_additive snd_comp_prod]
lemma snd_comp_prod (f : M →* N) (g : M →* P) : (snd N P).comp (f.prod g) = g :=
ext $ λ x, rfl
@[simp, to_additive prod_unique]
lemma prod_unique (f : M →* N × P) :
((fst N P).comp f).prod ((snd N P).comp f) = f :=
ext $ λ x, by simp only [prod_apply, coe_fst, coe_snd, comp_apply, prod.mk.eta]
end prod
section prod_map
variables {M' : Type*} {N' : Type*} [mul_one_class M'] [mul_one_class N'] [mul_one_class P]
(f : M →* M') (g : N →* N')
/-- `prod.map` as a `monoid_hom`. -/
@[to_additive prod_map "`prod.map` as an `add_monoid_hom`"]
def prod_map : M × N →* M' × N' := (f.comp (fst M N)).prod (g.comp (snd M N))
@[to_additive prod_map_def]
lemma prod_map_def : prod_map f g = (f.comp (fst M N)).prod (g.comp (snd M N)) := rfl
@[simp, to_additive coe_prod_map]
lemma coe_prod_map : ⇑(prod_map f g) = prod.map f g := rfl
@[to_additive prod_comp_prod_map]
lemma prod_comp_prod_map (f : P →* M) (g : P →* N) (f' : M →* M') (g' : N →* N') :
(f'.prod_map g').comp (f.prod g) = (f'.comp f).prod (g'.comp g) :=
rfl
end prod_map
section coprod
variables [comm_monoid P] (f : M →* P) (g : N →* P)
/-- Coproduct of two `monoid_hom`s with the same codomain:
`f.coprod g (p : M × N) = f p.1 * g p.2`. -/
@[to_additive "Coproduct of two `add_monoid_hom`s with the same codomain:
`f.coprod g (p : M × N) = f p.1 + g p.2`."]
def coprod : M × N →* P := f.comp (fst M N) * g.comp (snd M N)
@[simp, to_additive]
lemma coprod_apply (p : M × N) : f.coprod g p = f p.1 * g p.2 := rfl
@[simp, to_additive]
lemma coprod_comp_inl : (f.coprod g).comp (inl M N) = f :=
ext $ λ x, by simp [coprod_apply]
@[simp, to_additive]
lemma coprod_comp_inr : (f.coprod g).comp (inr M N) = g :=
ext $ λ x, by simp [coprod_apply]
@[simp, to_additive] lemma coprod_unique (f : M × N →* P) :
(f.comp (inl M N)).coprod (f.comp (inr M N)) = f :=
ext $ λ x, by simp [coprod_apply, inl_apply, inr_apply, ← map_mul]
@[simp, to_additive] lemma coprod_inl_inr {M N : Type*} [comm_monoid M] [comm_monoid N] :
(inl M N).coprod (inr M N) = id (M × N) :=
coprod_unique (id $ M × N)
lemma comp_coprod {Q : Type*} [comm_monoid Q] (h : P →* Q) (f : M →* P) (g : N →* P) :
h.comp (f.coprod g) = (h.comp f).coprod (h.comp g) :=
ext $ λ x, by simp
end coprod
end monoid_hom
namespace mul_equiv
section
variables {M N} [mul_one_class M] [mul_one_class N]
/-- The equivalence between `M × N` and `N × M` given by swapping the components
is multiplicative. -/
@[to_additive prod_comm "The equivalence between `M × N` and `N × M` given by swapping the
components is additive."]
def prod_comm : M × N ≃* N × M :=
{ map_mul' := λ ⟨x₁, y₁⟩ ⟨x₂, y₂⟩, rfl, ..equiv.prod_comm M N }
@[simp, to_additive coe_prod_comm] lemma coe_prod_comm :
⇑(prod_comm : M × N ≃* N × M) = prod.swap := rfl
@[simp, to_additive coe_prod_comm_symm] lemma coe_prod_comm_symm :
⇑((prod_comm : M × N ≃* N × M).symm) = prod.swap := rfl
end
section
variables {M N} [monoid M] [monoid N]
/-- The monoid equivalence between units of a product of two monoids, and the product of the
units of each monoid. -/
@[to_additive prod_add_units "The additive monoid equivalence between additive units of a product
of two additive monoids, and the product of the additive units of each additive monoid."]
def prod_units : units (M × N) ≃* units M × units N :=
{ to_fun := (units.map (monoid_hom.fst M N)).prod (units.map (monoid_hom.snd M N)),
inv_fun := λ u, ⟨(u.1, u.2), (↑u.1⁻¹, ↑u.2⁻¹), by simp, by simp⟩,
left_inv := λ u, by simp,
right_inv := λ ⟨u₁, u₂⟩, by simp [units.map],
map_mul' := monoid_hom.map_mul _ }
end
end mul_equiv
section units
open mul_opposite
/-- Canonical homomorphism of monoids from `units α` into `α × αᵐᵒᵖ`.
Used mainly to define the natural topology of `units α`. -/
def embed_product (α : Type*) [monoid α] : units α →* α × αᵐᵒᵖ :=
{ to_fun := λ x, ⟨x, op ↑x⁻¹⟩,
map_one' := by simp only [one_inv, eq_self_iff_true, units.coe_one, op_one, prod.mk_eq_one,
and_self],
map_mul' := λ x y, by simp only [mul_inv_rev, op_mul, units.coe_mul, prod.mk_mul_mk] }
end units
|
\chapter{Introduction}
\label{chap:introduction}
% Establish a precedence
Personal computers have been developed to a point where those unfamiliar with computer science theory might conclude there is nothing computers cannot do.
While this is an understandable conclusion, it has been proven that there is a limit to the types of computation our ``classical computers'', what we today consider general purpose computers, can perform \cite{linz}.
In the last few decades, however, the fields of quantum mechanics and quantum computing have advanced to the point where primitive operations are now possible in the quantum sphere.
Just this year Google claimed to achieve ``quantum supremacy'' in an experiment in which they performed a computation, using a quantum computer, in under five minutes.
Google estimated that the same computation would take a state-of-the-art supercomputer 10,000 years to complete \cite{quantum_supremacy}.
While quantum computers are not general purpose, they can solve certain problems, such as integer factorization, exponentially faster than the best known classical algorithms \cite{MikeAndIke}.
This breakthrough in computability will change the way information is stored, secured, and created.
% State the problem
Currently, encryption protocols ensure the integrity of data and identities.
One such protocol is the widely adopted Diffie-Hellman protocol, an asymmetric encryption protocol that relies for its security on the historic difficulty of factoring the product of two large prime numbers \cite{qc:agi}.
The protocol works by using a public and private key pair for each participating party.
Data encrypted using one of the keys (usually a public key) can then only be decrypted using the private key.
A person's public and private keys are mathematically related, but deriving the private key from the public key requires factoring a product of large primes, which is computationally infeasible on a classical computer.
Quantum computers, however, can perform this factorization in polynomial time \cite{doi:10.1137/S0036144598347011}.
This development, combined with the growing power of quantum computers, raises concerns about the future security of the Diffie-Hellman protocol.
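As a concrete illustration of the key-agreement idea at stake (a sketch only, not the thesis's code; the tiny parameters below are insecure and chosen purely for readability), a Diffie-Hellman exchange fits in a few lines of Python:

```python
# Toy Diffie-Hellman key agreement. The values of p, g, a, b are
# illustrative; real deployments use primes thousands of bits long.
p, g = 23, 5                 # public prime modulus and generator
a, b = 6, 15                 # each party's private key
A = pow(g, a, p)             # public values exchanged in the clear
B = pow(g, b, p)
shared_a = pow(B, a, p)      # each party combines its own private key
shared_b = pow(A, b, p)      # with the other's public value
assert shared_a == shared_b  # both arrive at the same shared secret
print(shared_a)              # prints 2 for these toy parameters
```

An eavesdropper who sees only $p$, $g$, $A$ and $B$ must solve a discrete-logarithm-type problem to recover the shared secret, which is what a quantum computer running Shor's algorithm undermines.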
% Mention the solution
BB84 is a quantum key distribution protocol that uses properties of quantum bits to address the growing security concerns surrounding current key exchange protocols (KEPs).
The BB84 protocol allows two parties to co-generate a disposable, randomly generated encryption key, which can be used to encrypt and decrypt data and then be discarded after use.
This kind of single-use key is known as a one-time pad, and it derives its security from being disposable.
Common patterns for breaking encryption keys cannot be applied because of the disposable nature of the one-time pad.
The technique is only susceptible to a brute-force attack, which is always possible in principle but often infeasible \cite{cryptography}.
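To make the one-time-pad idea concrete, here is a minimal Python sketch (an illustration assuming byte-string messages, not the simulator's actual code); note that XOR-ing with the key both encrypts and decrypts:

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte.

    Applying the same key twice recovers the original message,
    so this one function both encrypts and decrypts.
    """
    assert len(key) >= len(data), "the pad must be at least message-length"
    return bytes(d ^ k for d, k in zip(data, key))

key = secrets.token_bytes(16)              # fresh random key, used once
ciphertext = otp(b"attack at dawn", key)
assert otp(ciphertext, key) == b"attack at dawn"
```

The key must be truly random, as long as the message, and never reused; violating any of these conditions breaks the information-theoretic security guarantee.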
The BB84 protocol also allows the two parties to detect an eavesdropper during the key generation process, so they can abort key generation before any encrypted messages are transmitted \cite{qcftgu}.
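The classical sifting step at the core of BB84 can be sketched as follows (a simplified illustrative model in Python; a real run transmits qubits rather than sampling bits locally, and the helper name is hypothetical):

```python
import random

def bb84_sift(n: int) -> list:
    # Alice encodes random bits in randomly chosen bases; Bob measures
    # in his own random bases. When the bases agree, Bob's measurement
    # reproduces Alice's bit; mismatched rounds are discarded ("sifting").
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice("ZX") for _ in range(n)]
    bob_bases   = [random.choice("ZX") for _ in range(n)]
    return [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)
            if ab == bb]

key = bb84_sift(1024)
# On average about half the rounds survive, leaving a ~512-bit shared key.
```

Eavesdropper detection, which this classical sketch omits, comes from comparing a random subset of the sifted key: an interceptor measuring in the wrong basis introduces errors in roughly a quarter of the compared positions.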
% Describe the thesis' contribution to the problem
This thesis presents a peer-to-peer simulation of the BB84 quantum KEP and serves as an introduction to programming in the quantum computing paradigm using SimulaQron and CQC, a quantum network simulator and its messaging interface \cite{simulaqron}.
We show that BB84's security guarantees hold in the simulated environment.
The simulation allows two parties to co-generate an encryption key and begin exchanging encrypted information using that key.
Both parties can simultaneously detect if a third party tried to eavesdrop on the key generation, allowing the users to abort the communication.
% Describe the thesis content
The rest of the thesis is organized as follows.
Chapter~\ref{chap:background} provides background on encryption protocols, the quantum computation paradigm, and other topics related to the BB84 protocol.
Chapter~\ref{chap:bb84} details the BB84 protocol algorithm, as well as discusses its theoretical guarantees.
Chapter~\ref{chap:implementation} describes the SimulaQron and CQC libraries, as well as the BB84 simulator.
This chapter also serves as introductory material for readers new to quantum simulation or to programming in the quantum paradigm.
Chapter~\ref{chap:conclusion} summarizes this simulator and its potential applications, as well as discusses possible future work.
|
{-# OPTIONS --cubical --no-import-sorts #-}
module Number.Definitions where
open import Agda.Primitive renaming (_⊔_ to ℓ-max; lsuc to ℓ-suc; lzero to ℓ-zero)
open import Cubical.Foundations.Everything renaming (_⁻¹ to _⁻¹ᵖ; assoc to ∙-assoc)
open import Cubical.Foundations.Logic
open import Utils
open import MoreLogic.Definitions
open import MoreLogic.Properties
open import MorePropAlgebra.Definitions
open import MorePropAlgebra.Consequences
open import MorePropAlgebra.Structures
|
We offer:
1. Free estimates
2. Affordable Home Improvement & Repair paint jobs
3. Flexible cost
Local, well-trained college students will manage your project and its painters.
One of the largest residential contractors in the country.
We use well-trained painters on all jobs so that our clients get the quality that is expected of any contractor.
Over 10,000 homes painted last year (help us make you one of those 10,000 satisfied customers).
$3,000,000 in liability coverage.
Call today and we will tell you why it takes a college education to paint your home.
NOTE: Licensing Information located via http://www.ca.gov/
20090920 13:42:56 Seeing as you have a phone number here, where is the lic #? Users/StevenDaubert
20100201 23:57:32 As I understand it, College Works hires students all around the country, trains them, and gives them the tools to run their own mini-businesses as a sort of twist on a franchise system. It's typically a summer job. The concept always struck me as kind of shady, but my parents had their house in Livermore painted by a CW group (managed by a UCD student!) and had great results. I'm sure it's been at least 10 years now, and as far as I know there have been no complaints with the paint. That said, there's probably a lot of variance from one manager to the next. Users/TomGarberson
20100403 11:42:11 I just called both numbers. One is disconnected and the other does not belong to College Works Painting. Users/PatLenzi1
20100403 16:26:25 There's another phone number on the website linked above. It may be a national thing, though, rather than the local branch(es). You could check with them to see what's available in the area. Users/TomGarberson
20101101 20:26:29 I've heard not-good things about College Works, specifically about working for College Works. I Googled it and found this: http://forums.redflagdeals.com/studentworkspaintinghiringpainters541187/2/ so be careful if you're considering working for them. Users/BrianSparks
20110125 14:47:18 These guys are total crooks. I saw notices all over the UC Davis campus today. They violate numerous California labor laws and keep interns on board with promises of pay without ever actually paying them. They've been banned from marketing on campus, and students are supposed to notify the campus police if approached by them. Users/Doogle
20110224 16:54:03 The numbers that I have edited were for an intern and his helper who were part of the program during one summer. I tried contacting them and have been told that they no longer work with the program. I would suggest looking into the information updated above if interested in their services, or searching online if interested in the credibility of the program. Users/aviDavis
20110224 18:50:00 If you provide a phone number, once again YOU NEED TO INCLUDE RELEVANT LICENSES OF LIC CONTRACTORS OR CSLB WILL COME INTO PLAY!
The Aggie did an interesting article on College Works as of late... Users/StevenDaubert
20110225 13:44:54 Information was taken from their website, since erasing the info entirely was not acceptable. Now edited for licensing info. Please note that I do not work for CWP, nor am I advocating or denouncing them. The edits were made to help others, as the information was incorrect. Also, while I did update the license number, please note that there were only 2 DavisWIKI entries found via http://daviswiki.org/Home_Improvement_%26_Repair for Painting that included a license number, and that a simple search would have allowed one to locate the CSLB information. Users/aviDavis
|
[STATEMENT]
lemma perfect_imp_closed_map:
"perfect_map X Y f \<Longrightarrow> closed_map X Y f"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. perfect_map X Y f \<Longrightarrow> closed_map X Y f
[PROOF STEP]
by (simp add: perfect_map_def proper_map_def)
|
\section*{Names}
Names\footnote{
In
\href{http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf}{
\color{DarkBlue}ECMAScript 2018 ($9^{\textrm{th}}$ Edition)},
these names are called \emph{identifiers}.
} start with \verb@_@, \verb@$@ or a
letter\footnote{
By \emph{letter}
we mean \href{http://unicode.org/reports/tr44/}{\color{DarkBlue}Unicode} letters (L) or letter numbers (Nl).
} and contain only \verb@_@, \verb@$@,
letters or digits\footnote{
By \emph{digit} we mean characters in the
\href{http://unicode.org/reports/tr44/}{Unicode} categories
Nd (including the decimal digits 0, 1, 2, 3, 4, 5, 6, 7, 8, 9), Mn, Mc and Pc.
}. Reserved words\footnote{
By \emph{Reserved word} we mean any of:
$\textbf{\texttt{break}}$, $\textbf{\texttt{case}}$, $\textbf{\texttt{catch}}$, $\textbf{\texttt{continue}}$, $\textbf{\texttt{debugger}}$, $\textbf{\texttt{default}}$, $\textbf{\texttt{delete}}$, $\textbf{\texttt{do}}$, $\textbf{\texttt{else}}$, $\textbf{\texttt{finally}}$, $\textbf{\texttt{for}}$, $\textbf{\texttt{function}}$, $\textbf{\texttt{if}}$, $\textbf{\texttt{in}}$, $\textbf{\texttt{instanceof}}$, $\textbf{\texttt{new}}$, $\textbf{\texttt{return}}$, $\textbf{\texttt{switch}}$, $\textbf{\texttt{this}}$, $\textbf{\texttt{throw}}$, $\textbf{\texttt{try}}$, $\textbf{\texttt{typeof}}$, $\textbf{\texttt{var}}$, $\textbf{\texttt{void}}$, $\textbf{\texttt{while}}$, $\textbf{\texttt{with}}$, $\textbf{\texttt{class}}$, $\textbf{\texttt{const}}$, $\textbf{\texttt{enum}}$, $\textbf{\texttt{export}}$, $\textbf{\texttt{extends}}$, $\textbf{\texttt{import}}$, $\textbf{\texttt{super}}$, $\textbf{\texttt{implements}}$, $\textbf{\texttt{interface}}$, $\textbf{\texttt{let}}$, $\textbf{\texttt{package}}$, $\textbf{\texttt{private}}$, $\textbf{\texttt{protected}}$, $\textbf{\texttt{public}}$, $\textbf{\texttt{static}}$, $\textbf{\texttt{yield}}$, $\textbf{\texttt{null}}$, $\textbf{\texttt{true}}$, $\textbf{\texttt{false}}$.
} such as keywords are not allowed as names.
Valid names are \verb@x@, \verb@_45@, \verb@$$@ and $\mathtt{\pi}$,
but always keep in mind that programming is communicating and that the familiarity of the
audience with the characters used in names is an important aspect of program readability.
|
Formal statement is: lemma at_within_ball: "e > 0 \<Longrightarrow> dist x y < e \<Longrightarrow> at y within ball x e = at y" Informal statement is: If $e > 0$ and $|x - y| < e$, then the filter at $y$ within the ball of radius $e$ centered at $x$ is the same as the filter at $y$.
|
{-# OPTIONS --cubical --no-import-sorts #-}
module Cubical.Codata.Everything where
open import Cubical.Codata.EverythingSafe public
--- Modules making assumptions that might be incompatible with other
-- flags or make use of potentially unsafe features.
-- Assumes --guardedness
open import Cubical.Codata.Stream public
open import Cubical.Codata.Conat public
open import Cubical.Codata.M public
-- Also uses {-# TERMINATING #-}.
open import Cubical.Codata.M.Bisimilarity public
{-
-- Alternative M type implementation, based on
-- https://arxiv.org/pdf/1504.02949.pdf
-- "Non-wellfounded trees in Homotopy Type Theory"
-- Benedikt Ahrens, Paolo Capriotti, Régis Spadotti
-}
open import Cubical.Codata.M.AsLimit.M
open import Cubical.Codata.M.AsLimit.Coalg
open import Cubical.Codata.M.AsLimit.helper
open import Cubical.Codata.M.AsLimit.Container
open import Cubical.Codata.M.AsLimit.itree
open import Cubical.Codata.M.AsLimit.stream
|
! hit_and_miss.f90
! Estimates volume of polyhedron by simple MC
PROGRAM hit_and_miss
!------------------------------------------------------------------------------------------------!
! This software was written in 2016/17 !
! by Michael P. Allen <[email protected]>/<[email protected]> !
! and Dominic J. Tildesley <[email protected]> ("the authors"), !
! to accompany the book "Computer Simulation of Liquids", second edition, 2017 ("the text"), !
! published by Oxford University Press ("the publishers"). !
! !
! LICENCE !
! Creative Commons CC0 Public Domain Dedication. !
! To the extent possible under law, the authors have dedicated all copyright and related !
! and neighboring rights to this software to the PUBLIC domain worldwide. !
! This software is distributed without any warranty. !
! You should have received a copy of the CC0 Public Domain Dedication along with this software. !
! If not, see <http://creativecommons.org/publicdomain/zero/1.0/>. !
! !
! DISCLAIMER !
! The authors and publishers make no warranties about the software, and disclaim liability !
! for all uses of the software, to the fullest extent permitted by applicable law. !
! The authors and publishers do not recommend use of this software for any purpose. !
! It is made freely available, solely to clarify points made in the text. When using or citing !
! the software, you should not imply endorsement by the authors or publishers. !
!------------------------------------------------------------------------------------------------!
USE, INTRINSIC :: iso_fortran_env, ONLY : output_unit
IMPLICIT NONE
REAL :: v
REAL, DIMENSION(3) :: r, zeta
REAL, DIMENSION(3), PARAMETER :: r_0 = [1.0, 2.0, 3.0]
REAL, PARAMETER :: v_0 = PRODUCT(r_0)
INTEGER :: tau, tau_shot, tau_hit
CALL RANDOM_SEED()
tau_hit = 0
tau_shot = 1000000
DO tau = 1, tau_shot
CALL RANDOM_NUMBER ( zeta(:) ) ! uniform in range (0,1)
r = zeta * r_0 ! uniform in v_0
IF ( r(2) < ( 2.0 - 2.0*r(1) ) .AND. &
& r(3) < ( 1.0 + r(2) ) ) THEN ! in polyhedron
tau_hit = tau_hit + 1
END IF
END DO
v = v_0 * REAL ( tau_hit ) / REAL ( tau_shot )
WRITE ( unit=output_unit, fmt='(a,f10.5)' ) 'Estimate = ', v
END PROGRAM hit_and_miss
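The same hit-and-miss estimator can be cross-checked in a few lines of Python (a hedged sketch, not part of the original code distribution; the exact volume of this polyhedron is 5/3):

```python
import random

# Monte Carlo hit-and-miss estimate of the polyhedron volume from
# hit_and_miss.f90: sample points uniformly in the box
# [0,1] x [0,2] x [0,3] and count those with y < 2 - 2x and z < 1 + y.
r0 = (1.0, 2.0, 3.0)
v0 = r0[0] * r0[1] * r0[2]          # volume of the bounding box
shots, hits = 100_000, 0
random.seed(42)
for _ in range(shots):
    x, y, z = (random.random() * s for s in r0)
    if y < 2.0 - 2.0 * x and z < 1.0 + y:
        hits += 1
print(v0 * hits / shots)            # estimate, close to 5/3
```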
|