Dataset: AI4M
[STATEMENT] lemma neg_diff_pos_meas_set: assumes "(neg_meas_set M2)" and "(pos_meas_set N1)" and "(space M = N1 \<union> N2)" and "N2 \<in> sets M" shows "pos_meas_set ((M2 - N2) \<inter> space M)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. pos_meas_set ((M2 - N2) \<inter> space M) [PROOF STEP] proof - [PROOF STATE] proof (state) goal (1 subgoal): 1. pos_meas_set ((M2 - N2) \<inter> space M) [PROOF STEP] have "(M2 - N2) \<inter> space M \<subseteq> N1" [PROOF STATE] proof (prove) goal (1 subgoal): 1. (M2 - N2) \<inter> space M \<subseteq> N1 [PROOF STEP] using assms [PROOF STATE] proof (prove) using this: neg_meas_set M2 pos_meas_set N1 space M = N1 \<union> N2 N2 \<in> sets M goal (1 subgoal): 1. (M2 - N2) \<inter> space M \<subseteq> N1 [PROOF STEP] by auto [PROOF STATE] proof (state) this: (M2 - N2) \<inter> space M \<subseteq> N1 goal (1 subgoal): 1. pos_meas_set ((M2 - N2) \<inter> space M) [PROOF STEP] thus ?thesis [PROOF STATE] proof (prove) using this: (M2 - N2) \<inter> space M \<subseteq> N1 goal (1 subgoal): 1. pos_meas_set ((M2 - N2) \<inter> space M) [PROOF STEP] using assms pos_meas_subset neg_meas_setD1 [PROOF STATE] proof (prove) using this: (M2 - N2) \<inter> space M \<subseteq> N1 neg_meas_set M2 pos_meas_set N1 space M = N1 \<union> N2 N2 \<in> sets M \<lbrakk>pos_meas_set ?A; ?B \<subseteq> ?A; ?B \<in> sets M\<rbrakk> \<Longrightarrow> pos_meas_set ?B neg_meas_set ?E \<Longrightarrow> ?E \<in> sets M goal (1 subgoal): 1. pos_meas_set ((M2 - N2) \<inter> space M) [PROOF STEP] by blast [PROOF STATE] proof (state) this: pos_meas_set ((M2 - N2) \<inter> space M) goal: No subgoals! [PROOF STEP] qed
Formal statement is: lemma seq_offset_neg: "(f \<longlongrightarrow> l) sequentially \<Longrightarrow> ((\<lambda>i. f(i - k)) \<longlongrightarrow> l) sequentially" Informal statement is: If $f$ converges to $l$ as $i \to \infty$, then $f(i - k)$ converges to $l$ as $i \to \infty$.
\section{IMPLEMENTATION}\label{sec:impl} The project has been implemented on the Java Virtual Machine (JVM) with a modular design that allows for a separation of concerns. In total, there are five modules: the load balancer, the job server, the main client, the Kotlin client (Kotlin DSL support), and the communications module. Each module is responsible for one specific job, and they collectively work together. Since the project is implemented on the JVM, it can theoretically support any language that executes on top of the JVM, not just Java. The implementation officially supports both the Java and Kotlin programming languages. \subsection{Communications}\label{subsec:communcations} Network communication is an important part of the distributed system. The communications module has the job of guaranteeing that messages are passed from client to server and are handled by the correct handler. Networking is done on top of the TCP protocol; however, it would also be possible to implement it on top of UDP. UDP would add extra challenges to the communications because it does not guarantee the order of messages upon arrival. This could be problematic if a job request arrived before the associated code migration operation. The communication protocol is a very simple request-reply protocol. Each message contains the following information: message length, request ID, and serialized message content. The messages are serialized and deserialized using a framework called Fast-Serialization, which offers better performance than the native JVM serialization mechanism. The communications layer only supports synchronous requests; however, this is acceptable for asynchronous jobs because the requests are dispatched through the executor service (a lightweight thread pool). To avoid busy waiting, the wait-notify pattern is used. A thread that is waiting for a response from the server invokes wait(), whilst notify() is called when the response is received from the server. This allows waiting threads to sleep during the interim period, which helps save computing resources. \begin{minipage}{\linewidth} \setlength{\belowcaptionskip}{15pt plus 3pt minus 2pt} \captionof{table}{Message Format} \begin{tabular}{ c l } \hline \multicolumn{1}{c}{Number of Bytes} & \multicolumn{1}{c}{Content} \\ \hline 4 & Message Length (N) \\ 4 & Request ID \\ N & Message Content \\ \end{tabular} \end{minipage} \subsection{Client}\label{subsec:client} The Client module is responsible for determining the method call sites at runtime from their method references, serializing classes and their dependencies, and performing the migration/job requests to either a server or a load balancer. This module depends on both the communications module and the load balancing module. The communications module is used for all networking. The load balancing module is also used on the client side to allow for the use of multiple job servers without requiring an external load balancer to act as the middleman. To allow the user to maintain static type safety, method references and lambda expressions are used to specify the call site of the job that is to be executed remotely. The call site information is determined at runtime to avoid any sort of compiler bootstrapping. The call site information specifies the signature of a method, the method name, and its location. This information is required to perform the code migration process. A function may also make use of any dependency, even one that is not defined as part of the Java runtime environment. 
These dependencies are determined by visiting all classes, methods, fields, and instructions defined within the class file, using the ASM bytecode manipulation and generation framework. After the dependency graph is formed, the client is ready to perform a migration request. At this point, information pertaining to the job and its dependencies is cached, and does not need to be looked up again. When an asynchronous job request is sent to a server, its handler is processed by the executor service (a lightweight thread pool). This allows for asynchronous requests even though the communications module only supports a synchronous request-reply protocol. \subsubsection{Synchronous Java Example} In this example, the function 'factorial' is executed with an input of 6 in a blocking manner. A method reference is used to tell the system what code is to be executed. \begin{lstlisting} var a = delegate(this::factorial, 6); System.out.println("6! " + a); \end{lstlisting} \subsubsection{Asynchronous Java Example} In this code snippet, the main calculation is done asynchronously by the server and the client does not block while waiting for the reply. The client is then free to do other tasks while the server completes the job. Both the job and callback functions are defined via lambda expressions, with the function parameters in the middle. \begin{lstlisting} delegate((a, b) -> a * b, 5, 9, n -> { // callback System.out.println("5 * 9 = " + n); }); \end{lstlisting} \subsection{Kotlin Client}\label{subsec:kotlinClient} The Kotlin client module provides extended support for the Kotlin programming language. This module defines a Kotlin DSL (domain-specific language) that makes use of the expressive grammar of the Kotlin programming language. This module depends upon the main Client module to handle any code migration and job requests. The DSL is designed to be extremely simple for users to understand. \subsubsection{Synchronous Kotlin Example} Since Kotlin has extremely expressive syntax, a function that accepts a lambda expression can be written as a block with a body. For example, code in the delegate block would be executed remotely in a blocking manner. \begin{lstlisting} val a = 100 val b = 200 val result = delegate { // blocking remote execution a * b } println("$a * $b = $result") \end{lstlisting} \subsubsection{Asynchronous Kotlin Example} To execute an asynchronous job, the programmer can define code within an 'async' block and optionally provide a 'callback' block to acquire the result. \begin{lstlisting} val a = 10 val b = 20 async { a * b // executes remotely } callback { // executes locally println("$a * $b = $it") } \end{lstlisting} \subsection{Job Server}\label{subsec:jobServer} It is the responsibility of the job server to handle incoming migration requests and to perform the execution of any job requests. Upon receiving a migration request, the job server attempts to load all classes specified in the request. This is easily accomplished by providing a custom class loader that searches the class map (key=class name, value=bytecode) instead of the local file system; a minimal sketch of such a loader is given below. In the unlikely event that a class fails to load, either because it is corrupted or because its dependencies are missing, the job server will reply with a failure message. After the migration action has been completed, clients can send a number of job requests in parallel to the server. 
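As an illustration of the class-loading mechanism just described, the following is a minimal sketch that resolves classes from a map of migrated bytecode instead of the file system. The identifiers used here (\verb|MapClassLoader|, \verb|classMap|) are illustrative placeholders and are not taken from the actual implementation. \begin{lstlisting} import java.util.Map; // Illustrative sketch only; identifiers are placeholders. public class MapClassLoader extends ClassLoader { // key = class name, value = migrated bytecode private final Map<String, byte[]> classMap; public MapClassLoader(Map<String, byte[]> classMap, ClassLoader parent) { super(parent); this.classMap = classMap; } @Override protected Class<?> findClass(String name) throws ClassNotFoundException { byte[] bytes = classMap.get(name); if (bytes == null) { // missing dependency: the caller reports a failure message to the client throw new ClassNotFoundException(name); } return defineClass(name, bytes, 0, bytes.length); } } \end{lstlisting} Because the parent class loader is consulted first, JDK and server classes resolve as usual, while migrated classes are defined from the map. 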
Each job request contains the following information: the method name, the method signature, the name of its declaring class, and the function parameters. This information is required to find the declaring class and to invoke the method using reflection. In the event that the class or method is not found, the server will send a failure response. If a job throws an exception during its execution, the server will respond with the exception information, including the exception message and stack trace. Each job executes on an expandable thread pool that serves to limit the cost of instantiating threads. Each thread is cached and jobs are continuously submitted as the requests come in. If all the threads in the thread pool are busy processing a job, the thread pool will expand by instantiating and adding a new thread. Since the Job Server is responsible for loading and executing untrusted code, a special security manager was implemented to inhibit malicious actors. This is the most important feature for convincing people to use the platform: users would not donate their resources without some sort of security guarantee. The Job Server's security policy denies all permissions to untrusted code by default, and this policy does not interfere with any of the server's own permission requirements. This is achieved by providing an implementation of the SecurityManager that has a thread-local security enable/disable flag which can only be modified by the class that executes the jobs (a minimal sketch of this idea is given at the end of this section). This gives the server access to the internet so it can communicate with the clients while simultaneously blocking the same permissions for jobs. \subsection{Load Balancer}\label{subsec:modules} The load balancer has the important job of managing the use of resources on the computing network. This module depends upon the communications module to handle all network I/O between clients and job servers. The load balancer accepts connections from clients and forwards their job requests based on resource availability. It also allows clients to describe themselves as job servers in order to allow for resource donation. To balance the computing load across several job servers, a round-robin scheduling algorithm is provided as the default. Advanced users can write and use their own scheduling algorithms via dependency injection. After a client initializes a connection, it must first send a migration request to the load balancer before sending any job requests. This protocol acts the same as direct communication between a client and a server; therefore, the client does not need to distinguish between a job server and a load balancer. The load balancer must verify that a job server has the required code before forwarding a job request to the chosen server. This is non-trivial because of the one-to-many relationship between the client and the job servers. To solve this problem, the load balancer stores a map between a JVM class name and its bytecode. It also records a list of the class files that each job server already has, so it can avoid migrating bytecode that already exists on the job server. The migration process is lazy, and only occurs when a job request is forwarded to a server that has not seen the specific code before.
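To make the job server's thread-local enable/disable flag concrete, the following is a minimal sketch of the idea rather than the actual implementation; the class and method names are illustrative, and installation details (calling \verb|System.setSecurityManager| once at startup and toggling the flag around each job in a \verb|try/finally| block) are assumed. \begin{lstlisting} import java.security.Permission; // Illustrative sketch only; identifiers are placeholders. public class JobSecurityManager extends SecurityManager { // true while untrusted job code is running on the current thread private static final ThreadLocal<Boolean> restricted = ThreadLocal.withInitial(() -> Boolean.FALSE); // only the job-executing class should call these static void enterJob() { restricted.set(Boolean.TRUE); } static void exitJob() { restricted.set(Boolean.FALSE); } @Override public void checkPermission(Permission perm) { if (restricted.get()) { // deny every permission to job code throw new SecurityException("Denied to job code: " + perm); } // server threads (flag unset) are unrestricted, so networking keeps working } } \end{lstlisting} With this arrangement the executor wraps each submitted job between \verb|enterJob()| and \verb|exitJob()| so that a job can never leave its worker thread in an unrestricted state.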
State Before: α : Type u_1 inst✝² : CancelCommMonoidWithZero α inst✝¹ : DecidableEq α inst✝ : UniqueFactorizationMonoid α p a : α h : p ∈ factors a ⊢ a ≠ 0 State After: α : Type u_1 inst✝² : CancelCommMonoidWithZero α inst✝¹ : DecidableEq α inst✝ : UniqueFactorizationMonoid α p a : α h : p ∈ factors a ha : a = 0 ⊢ False Tactic: intro ha State Before: α : Type u_1 inst✝² : CancelCommMonoidWithZero α inst✝¹ : DecidableEq α inst✝ : UniqueFactorizationMonoid α p a : α h : p ∈ factors a ha : a = 0 ⊢ False State After: α : Type u_1 inst✝² : CancelCommMonoidWithZero α inst✝¹ : DecidableEq α inst✝ : UniqueFactorizationMonoid α p a : α h : p ∈ 0 ha : a = 0 ⊢ False Tactic: rw [factors, dif_pos ha] at h State Before: α : Type u_1 inst✝² : CancelCommMonoidWithZero α inst✝¹ : DecidableEq α inst✝ : UniqueFactorizationMonoid α p a : α h : p ∈ 0 ha : a = 0 ⊢ False State After: no goals Tactic: exact Multiset.not_mem_zero _ h
##SNOPSIS #Qtl analysis based on rqtl. ##AUTHOR ## Isaak Y Tecle ([email protected]) options(echo = FALSE) library(qtl) allargs <- commandArgs() infile <- grep("infile_list", allargs, value=TRUE) outfile <- grep("outfile_list", allargs, value=TRUE) infile <- scan(infile, what="character") statfile <- grep("stat", infile, value=TRUE) ##### stat files statfiles <- scan(statfile, what="character") ###### QTL mapping method ############ qtlmethodfile <- grep("stat_qtl_method", statfiles, value=TRUE) qtlmethod <- scan(qtlmethodfile, what="character", sep="\n") if (qtlmethod == "Maximum Likelihood") { qtlmethod <- c("em") } else if (qtlmethod == "Haley-Knott Regression") { qtlmethod <- c("hk") } else if (qtlmethod == "Multiple Imputation") { qtlmethod <- c("imp") } else if (qtlmethod == "Marker Regression") { qtlmethod <- c("mr") } ###### QTL model ############ qtlmodelfile <- grep("stat_qtl_model", statfiles, value=TRUE) qtlmodel <- scan(qtlmodelfile, what="character", sep="\n") if (qtlmodel == "Single-QTL Scan") { qtlmodel <- c("scanone") } else if (qtlmodel == "Two-QTL Scan") { qtlmodel<-c("scantwo") } ###### permutation############ userpermufile <- grep("stat_permu_test", statfiles, value=TRUE) userpermuvalue <- scan(userpermufile, what="numeric", dec= ".", sep="\n") if (userpermuvalue == "None") { userpermuvalue<-c(0) } userpermuvalue <- as.numeric(userpermuvalue) #####for test only #userpermuvalue<-c(100) ##### ######genome step size############ stepsizefile <- grep("stat_step_size", statfiles, value=TRUE) stepsize <- scan(stepsizefile, what="numeric", dec = ".", sep="\n") if (qtlmethod == 'mr') { stepsize <- c(0) } else if (qtlmethod != 'mr' & stepsize == "zero") { stepsize <- c(0) } stepsize <- as.numeric(stepsize) ######genotype calculation method############ genoprobmethodfile <- grep("stat_prob_method", statfiles, value=TRUE) genoprobmethod <- scan(genoprobmethodfile, what="character", dec=".", sep="\n") ########No. 
of draws for sim.geno method########### drawsnofile <- c() drawsno <- c() if (qtlmethod == 'imp') { if (is.null(grep("stat_no_draws", statfiles))==FALSE) { drawsnofile <- grep("stat_no_draws", statfiles, value=TRUE) } if (is.null(drawsnofile)==FALSE) { drawsno <- scan(drawsnofile, what="numeric", dec = ".", sep="\n") drawsno <- as.numeric(drawsno) } } ########significance level for genotype #######probablity calculation genoproblevelfile <- grep("stat_prob_level", statfiles, value=TRUE) genoproblevel <- scan(genoproblevelfile, what="numeric", dec = ".", sep="\n") if (qtlmethod == 'mr') { if (is.logical(genoproblevel) == FALSE) { genoproblevel <- c(0) } if (is.logical(genoprobmethod) ==FALSE) { genoprobmethod <- c('Calculate') } } genoproblevel <- as.numeric(genoproblevel) ########significance level for permutation test permuproblevelfile <- grep("stat_permu_level", statfiles, value=TRUE) permuproblevel <- scan(permuproblevelfile, what="numeric", dec = ".", sep="\n") permuproblevel <- as.numeric(permuproblevel) ######### cvtermfile <- grep("cvterm", infile, value=TRUE) popidfile <- grep("popid", infile, value=TRUE) genodata <- grep("genodata", infile, value=TRUE) phenodata <- grep("phenodata", infile, value=TRUE) permufile <- grep("permu", infile, value=TRUE) crossfile <- grep("cross", infile, value=TRUE) popid <- scan(popidfile, what="integer", sep="\n") cross <- scan(crossfile,what="character", sep="\n") popdata<-c() if (cross == "f2") { popdata <- read.cross("csvs", genfile=genodata, phefile=phenodata, na.strings=c("NA", "-"), genotypes=c("1", "2", "3", "4", "5"), ) popdata <-jittermap(popdata) } else if (cross == "bc" | cross == "rilsib" | cross == "rilself") { popdata <- read.cross("csvs", genfile=genodata, phefile=phenodata, na.strings=c("NA", "-"), genotypes=c("1", "2"), ) popdata<-jittermap(popdata) } if (cross == "rilself") { popdata<-convert2riself(popdata) } else if (cross == "rilsib") { popdata<-convert2risib(popdata) } #calculates the qtl genotype probablity at #the specififed step size and probability level genotypetype <- c() if (genoprobmethod == "Calculate") { popdata <- calc.genoprob(popdata, step=stepsize, error.prob=genoproblevel ) genotypetype<-c('prob') } else if (genoprobmethod == "Simulate") { popdata <- sim.geno(popdata, n.draws=drawsno, step=stepsize, error.prob=genoproblevel, stepwidth="fixed" ) genotypetype <- c('draws') } cvterm <- scan(cvtermfile, what="character") #reads the cvterm cv <- find.pheno(popdata, cvterm)#returns the col no. 
of the cvterm permuvalues <- scan(permufile, what="character") permuvalue1 <- permuvalues[1] permuvalue2 <- permuvalues[2] permu <- c() if (is.logical(permuvalue1) == FALSE) { if (qtlmodel == "scanone") { if (userpermuvalue == 0 ) { popdataperm <- scanone(popdata, pheno.col=cv, model="normal", method=qtlmethod ) } else if (userpermuvalue != 0) { popdataperm <- scanone(popdata, pheno.col=cv, model="normal", n.perm = userpermuvalue, method=qtlmethod ) permu <- summary(popdataperm, alpha=permuproblevel) } } else if (qtlmethod != "mr") { if (qtlmodel == "scantwo") { if (userpermuvalue == 0 ) { popdataperm <- scantwo(popdata, pheno.col=cv, model="normal", method=qtlmethod ) } else if (userpermuvalue != 0) { popdataperm <- scantwo(popdata, pheno.col=cv, model="normal", n.perm=userpermuvalue, method=qtlmethod ) permu <- summary(popdataperm, alpha=permuproblevel) } } } } ##########set the LOD cut-off for singificant qtls ############## LodThreshold <- c() if(is.null(permu) == FALSE) { LodThreshold <- permu[1,1] } ##########QTL EFFECTS ############## chrlist <- c("chr1") for (no in 2:12) { chr <- paste("chr", no, sep="") chrlist <- append(chrlist, chr) } chrdata <- paste(cvterm, popid, "chr1", sep="_") chrtest <- c("chr1") for (ch in chrlist) { if (ch=="chr1") { chrdata <- paste(cvterm, popid, ch, sep="_") } else { n <- paste(cvterm, popid, ch, sep="_") chrdata <- append(chrdata, n) } } chrno <- 1 datasummary <- c() confidenceints <- c() lodconfidenceints <- c() QtlChrs <- c() QtlPositions <- c() QtlLods <- c() for (i in chrdata) { filedata <- paste(cvterm, popid, chrno, sep="_") filedata <- paste(filedata,"txt",sep=".") i <- scanone(popdata, chr=chrno, pheno.col=cv, model = "normal", method= qtlmethod ) position <- max(i,chr=chrno) p <- position[["pos"]] LodScore <- position[["lod"]] QtlChr <- levels(position[["chr"]]) if ( is.null(LodThreshold)==FALSE ) { if (LodScore >=LodThreshold ) { QtlChrs <- append(QtlChrs, QtlChr) QtlLods <- append(QtlLods, LodScore) QtlPositions <- append(QtlPositions, round(p, 0)) } } peakmarker <- find.marker(popdata, chr=chrno, pos=p) lodpeakmarker <- i[peakmarker, ] lodconfidenceint <- bayesint(i, chr=chrno, prob=0.95, expandtomarkers=TRUE) if (is.na(lodconfidenceint[peakmarker, ])){ lodconfidenceint <- rbind(lodconfidenceint, lodpeakmarker) } peakmarker <- c(chrno, peakmarker) if (chrno==1) { datasummary <- i peakmarkers <- peakmarker lodconfidenceints <- lodconfidenceint } if (chrno > 1 ) { datasummary <- rbind(datasummary, i) peakmarkers <- rbind(peakmarkers, peakmarker) lodconfidenceints <- rbind(lodconfidenceints, lodconfidenceint) } chrno <- chrno + 1; } ##########QTL EFFECTS ############## ResultDrop <- c() ResultFull <- c() Effects <- c() if (is.null(LodThreshold) == FALSE) { if ( max(QtlLods) >= LodThreshold ) { QtlObj <- makeqtl(popdata, QtlChrs, QtlPositions, what=genotypetype ) QtlsNo <- length(QtlPositions) Eq <- c("y~") for (i in 1:QtlsNo) { q <- paste("Q", i, sep="") if (i==1) { Eq <- paste(Eq, q, sep="") } else if (i>1) { Eq <- paste(Eq, q, sep="*") } } QtlEffects <- try(fitqtl(popdata, pheno.col=cv, QtlObj, formula=Eq, method="hk", get.ests=TRUE ) ) if(class(QtlEffects) != 'try-error') { ResultModel <- attr(QtlEffects, "formula") Effects <- QtlEffects$ests$ests QtlLodAnova <- QtlEffects$lod ResultFull <- QtlEffects$result.full ResultDrop <- QtlEffects$result.drop if (is.numeric(Effects)) { Effects <- round(Effects, 2) } if (is.numeric(ResultFull)) { ResultFull <- round(ResultFull,2) } if (is.numeric(ResultDrop)) { ResultDrop<-round(ResultDrop, 2) } } 
} } ##########creating vectors for the outfiles############## outfiles <- scan(file=outfile, what="character") qtlfile <- grep("qtl_summary", outfiles, value=TRUE) peakmarkersfile <- grep("peak_marker", outfiles, value=TRUE) confidencelodfile <- grep("confidence", outfiles, value=TRUE) QtlEffectsFile <- grep("qtl_effects", outfiles, value=TRUE) VariationFile <- grep("explained_variation", outfiles, value=TRUE) ##### writing outputs to their respective files write.table(datasummary, file=qtlfile, sep="\t", col.names=NA, quote=FALSE, append=FALSE ) write.table(peakmarkers, file=peakmarkersfile, sep="\t", col.names=NA, quote=FALSE, append=FALSE ) write.table(lodconfidenceints, file=confidencelodfile, sep="\t", col.names=NA, quote=FALSE, append=FALSE ) if (is.null(ResultDrop)==FALSE) { write.table(ResultDrop, file=VariationFile, sep="\t", col.names=NA, quote=FALSE, append=FALSE ) } else { if (is.null(ResultFull)==FALSE) { write.table(ResultFull, file=VariationFile, sep="\t", col.names=NA, quote=FALSE, append=FALSE ) } } write.table(Effects, file=QtlEffectsFile, sep="\t", col.names=NA, quote=FALSE, append=FALSE ) write.table(permu, file=permufile, sep="\t", col.names=NA, quote=FALSE, append=FALSE ) q(runLast = FALSE)
In the writing of his novel, Qian Cai used a different character when spelling Zhou's given name. Instead of the original character meaning "similar", it was changed to <unk>, meaning "rude or rustic". So, "<unk>" represents Zhou's distinct fictional persona. This spelling has even been carried over into modern-day martial arts manuals.
\section{Environmental control functions} This section discusses the general functions required to establish the ADCL environment and to shut it down. All ADCL functions return error codes. ADCL leaves it up to the application to take the appropriate actions in case an error occurs. The only exception to that rule is if an error occurs within an MPI function called by ADCL, since MPI's default error behavior is to abort in case of an error. However, the user can change the default behavior of the MPI library by setting the default error handler of {\tt MPI\_COMM\_WORLD} to {\tt MPI\_ERRORS\_RETURN} (see also section 7.2 in the MPI-1~\cite{mpi1} specification). ADCL provides C and F90 interfaces for most functions. The Fortran interface of a routine contains an additional argument compared to its C counterpart, namely the error code. Furthermore, all Fortran ADCL objects are defined as integers, following the approach chosen by MPI. A C application has to include the ADCL header file called {\tt ADCL.h}; a Fortran application has to include the file {\tt ADCL.inc} in any routine utilizing ADCL functions. \subsection{Initializing ADCL} \begin{verbatim} int ADCL_Init ( void ); subroutine ADCL_Init ( ierror ) integer ierror \end{verbatim} {\tt ADCL\_Init} initializes the ADCL execution environment. The function allocates internal data structures required for ADCL, and therefore has to be called before any other ADCL function. Upon success, ADCL returns {\tt ADCL\_SUCCESS}. It is recommended to call {\tt ADCL\_Init} right after {\tt MPI\_Init}. It is erroneous to call {\tt ADCL\_Init} multiple times. \subsection{Shutting down ADCL} \begin{verbatim} int ADCL_Finalize ( void ); subroutine ADCL_Finalize ( ierror ) integer ierror \end{verbatim} {\tt ADCL\_Finalize} finalizes the ADCL environment. Since the function deallocates internal data structures, it should be called at the very end of the application, but before {\tt MPI\_Finalize}. It is erroneous to call {\tt ADCL\_Finalize} multiple times. \subsection{ADCL program skeletons} Using the two functions described above, the following presents the skeleton of any ADCL application. \begin{verbatim} #include <stdio.h> #include "mpi.h" #include "ADCL.h" int main ( int argc, char **argv ) { MPI_Init ( &argc, &argv ); ADCL_Init (); ... ADCL_Finalize (); MPI_Finalize (); return 0; } \end{verbatim} Accordingly, the Fortran skeleton looks as follows: \begin{verbatim} program ADCLskeleton include 'mpif.h' include 'adcl.inc' integer ierror call MPI_Init ( ierror ) call ADCL_Init (ierror ) ... call ADCL_Finalize ( ierror ) call MPI_Finalize ( ierror ) end program ADCLskeleton \end{verbatim} \subsection{ADCL error codes} The following is a list of error codes as defined by ADCL. \begin{itemize} \item {\tt ADCL\_SUCCESS} : no error \item {\tt ADCL\_NO\_MEMORY}: internal memory allocation failed \item {\tt ADCL\_ERROR\_INTERNAL} : internal ADCL error \item {\tt ADCL\_USER\_ERROR}: generic user error \item {\tt ADCL\_UNDEFINED}: undefined behavior \item {\tt ADCL\_NOT\_FOUND} : object not found \item {\tt ADCL\_INVALID\_ARG} : invalid argument passed by user to an ADCL function. Generic error code, only used if none of the more specific codes below match. \item {\tt ADCL\_INVALID\_NDIMS} : invalid number of dimensions passed by user to an ADCL function. \item {\tt ADCL\_INVALID\_DIMS} : invalid dimension passed by user to an ADCL function. \item {\tt ADCL\_INVALID\_HWIDTH} : invalid number of halo-cells passed by user to an ADCL function. 
\item {\tt ADCL\_INVALID\_DAT}: invalid MPI datatype passed by user to an ADCL function. \item {\tt ADCL\_INVALID\_DATA}: invalid buffer pointer passed by user to an ADCL function. \item {\tt ADCL\_INVALID\_COMTYPE}: invalid communication type passed by user to an ADCL function. \item {\tt ADCL\_INVALID\_COMM}: invalid MPI communicator passed by user to an ADCL function. \item {\tt ADCL\_INVALID\_REQUEST}: invalid ADCL request passed by user to an ADCL function. \item {\tt ADCL\_INVALID\_NC} : invalid NC argument passed by user to an ADCL function. \item {\tt ADCL\_INVALID\_TYPE}: ? \item {\tt ADCL\_INVALID\_TOPOLOGY}: invalid ADCL topology passed by user to an ADCL function. \item {\tt ADCL\_INVALID\_ATTRIBUTE}: invalid ADCL attribute passed by user to an ADCL function. \item {\tt ADCL\_INVALID\_ATTRSET}: invalid ADCL attribute-set passed by user to an ADCL function. \item {\tt ADCL\_INVALID\_FUNCTION}: invalid ADCL function passed by user to an ADCL function. \item {\tt ADCL\_INVALID\_WORK\_FUNCTION\_PTR}: invalid ADCL function pointer passed by user to an ADCL function. \item {\tt ADCL\_INVALID\_FNCTSET}: invalid ADCL function-set passed by user to an ADCL function. \item {\tt ADCL\_INVALID\_VECTOR}: invalid ADCL vector passed by user to an ADCL function. \item {\tt ADCL\_INVALID\_VECTORSET}: invalid ADCL vector set passed by user to an ADCL function. \item {\tt ADCL\_INVALID\_DIRECTION}: invalid direction argument passed by user to an ADCL function. \end{itemize}
! ! Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. ! ! Licensed under the Apache License, Version 2.0 (the "License"); ! you may not use this file except in compliance with the License. ! You may obtain a copy of the License at ! ! http://www.apache.org/licenses/LICENSE-2.0 ! ! Unless required by applicable law or agreed to in writing, software ! distributed under the License is distributed on an "AS IS" BASIS, ! WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. ! See the License for the specific language governing permissions and ! limitations under the License. ! MODULE extendedtype TYPE :: TA real :: reala = 7.7 END TYPE TA TYPE, EXTENDS(TA) :: TB REAL :: realb END TYPE TB TYPE, EXTENDS(TB) :: TC REAL :: realc END TYPE TC TYPE, EXTENDS(TC) :: TD REAL :: reald=3.3 END TYPE TD TYPE(TD) :: one=TD(1.0,2.0,3.0,4.0) TYPE(TD) :: two=TD(reald=4.0,realb=2.0,reala=1.0,realc=3.0) TYPE(TD) :: one1=TD(1.0,2.0,3.0) ! this one is wrong TYPE(TD) :: two2=TD(4.0,reald=2.0,realb=1.0,realc=3.0) TYPE(TD) :: three=TD(realc=2.0,realb=3.0) TYPE(TD) :: four=TD(TA(1.0),2.0,3.0) TYPE(TD) :: five=TD(TB(TA(1.0),3.0),3.5) TYPE(TA),parameter :: mea=TA(5.0) TYPE(TD) :: six=TD(mea,realc=2.0,realb=3.0) TYPE(TD) :: seven=TD(TB(realb=2.0,reala=1.0),3.0) contains subroutine printme print *, "one:",one print *, "two:",two print *, "one1:",one1 print *, "two2:",two2 print *, "three:",three print *, "four:",four print *, "five:",five print *, "six:",six print *, "seven:",seven end subroutine printme END MODULE extendedtype PROGRAM test_fortran2003 USE extendedtype parameter(N=36) real expect(N) data expect /1.0,2.0,3.0,4.0, &!one &1.0,2.0,3.0,4.0, &!two &1.0,2.0,3.0,3.3, &!one1 &4.0,1.0,3.0,2.0, &!two2 &7.7,3.0,2.0,3.3, &!three &1.0,2.0,3.0,3.3, &!four &1.0,3.0,3.5,3.3, &!five &5.0,3.0,2.0,3.3, &!six 1.0,2.0,3.0,3.3 /!seven real result(N) ! call printme() result(1)=one%reala result(2)=one%realb result(3)=one%realc result(4)=one%reald result(5)=two%reala result(6)=two%realb result(7)=two%realc result(8)=two%reald result(9)=one1%reala result(10)=one1%realb result(11)=one1%realc result(12)=one1%reald result(13)=two2%reala result(14)=two2%realb result(15)=two2%realc result(16)=two2%reald result(17)=three%reala result(18)=three%realb result(19)=three%realc result(20)=three%reald result(21)=four%reala result(22)=four%realb result(23)=four%realc result(24)=four%reald result(25)=five%reala result(26)=five%realb result(27)=five%realc result(28)=five%reald result(29)=six%reala result(30)=six%realb result(31)=six%realc result(32)=six%reald result(33)=seven%reala result(34)=seven%realb result(35)=seven%realc result(36)=seven%reald call check(expect,result,N) END PROGRAM test_fortran2003
function jdate = julian (month, day, year) % Julian date % Input % month = calendar month [1 - 12] % day = calendar day [1 - 31] % year = calendar year [yyyy] % Output % jdate = Julian date % special notes % (1) calendar year must include all digits % (2) will report October 5, 1582 to October 14, 1582 % as invalid calendar dates and stop % Orbital Mechanics with Matlab %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% y = year; m = month; b = 0; c = 0; if (m <= 2) y = y - 1; m = m + 12; end if (y < 0) c = -.75; end % check for valid calendar date if (year < 1582) % null elseif (year > 1582) a = fix(y / 100); b = 2 - a + floor(a / 4); elseif (month < 10) % null elseif (month > 10) a = fix(y / 100); b = 2 - a + floor(a / 4); elseif (day <= 4) % null elseif (day > 14) a = fix(y / 100); b = 2 - a + floor(a / 4); else clc; home; fprintf('\n\n this is an invalid calendar date!!\n'); keycheck; return; end jd = fix(365.25 * y + c) + fix(30.6001 * (m + 1)); jdate = jd + day + b + 1720994.5;
{-# OPTIONS --without-K --safe #-} module Math.NumberTheory.Product.Nat.Properties where open import Data.Nat.Properties open import Math.NumberTheory.Product.Generic.Properties -- TODO rename _≈_ to _≡_ open CommutativeMonoidProductProperties *-1-commutativeMonoid public
Formal statement is: lemma cis_cnj: "cnj (cis t) = cis (-t)" Informal statement is: The complex conjugate of $e^{it}$ is $e^{-it}$.
MODULE coilsnamin USE modular_coils USE saddle_coils USE saddle_surface USE vf_coils USE tf_coils USE bcoils_mod USE bnorm_mod USE control_mod USE Vcoilpts IMPLICIT NONE NAMELIST /coilsin/ nmod_coils_per_period, nf_phi, nf_rho, epsfcn, 1 lvf, lmodcur, lsurfv, lbnorm, rhoc, rhos, phic, phis, curmod, 2 dcc_wgt, dcc_exp, dcc_tgt, dcp_wgt, dcp_exp, dcp_tgt, rc_wgt, 3 rc_exp, rc_tgt, lmod_wgt, lmod_tgt, niter_opt, nstep, i_pol, 4 numsurf, m_num, n_num, rmn_sf, zmn_sf, num_vf, rc_vf, zc_vf, 5 cc_vf, lbcoil, lncsx, lsymm, lsaddle, nsad_coils_per_period, 6 ltfc, ltfcv, i_tfc, lsadsfv, nsad_u, nsad_v, nfils, sad_v_c, 7 sad_v_s, sad_u_c, sad_u_s, sad_v0, sad_u0, lsadcur, lpolcur, 8 cursad, numsurf_sad, m_sad, n_sad, rmn_sad, zmn_sad, dsc_wgt, 9 dsc_exp, dsc_tgt, r_ext, ymin_wgt, ymin_tgt, lmodular, lqos, A rs_wgt, rs_exp, rs_tgt, lsmod, laccess, n_access, x0_access, B y0_access, z0_access, x1_access, y1_access, z1_access, C dac_wgt, dac_exp, dac_tgt, cvf_wgt, cvf_tgt, cs_wgt, cs_tgt, D cu_wgt, cu_tgt, dpc_wgt, mc_bg, lp_bg, bcoil_cur, dscxp_wgt, E dscxp_exp, dscxp_tgt, deln, delt, lspline, lsplbkp, nvar_vc, F nvar_uc, bkp_wgt, bkp_tgt, mxb_wgt, lvfvar, nrvf_c, rcfc_vf, G rcfs_vf, lvfr, lvfz, nopt_alg, lrestart, nopt_wsurf, rvf_wgt, H rvf_tgt, nsad_group, ls_cur, csad_scl, lsad_wgt, lsad_tgt, I rmax_wgt, rmax_tgt, bcoil_file, lbcoil_cur, lsadshape, J lvfc, lcc_vf, csc_wgt, csc_tgt, scd_wgt, scd_tgt, lctrlpt, K nwdim, nvf_fix, laxis, vacfld_wgt, sc_dmin_tgt, sc_dmin_wgt ! ! VARIABLE DESCRIPTIONS ! ! SCALARS ! ! nmod_coils_per_period - no. of coils per fp, modular representation ! nsad_coils_per_period - no. of coils per fp, saddle representation ! nf_phi - no. of phi Fourier modes, modular rep. ! nf_rho - no. of rho Fourier modes, modular rep. ! epsfcn - step size for derivative approximation ! niter_opt - max. number of function evaluations ! nstep ! i_pol - total poloidal current per fp ! numsurf - number of winding surface modes, modular rep. ! m_num - vector of poloidal mode numbers, modular rep. ! n_num - vector of toroidal mode numbers, modular rep. ! num_vf - no. of vf coil pairs ! nsad_u - no. of u-coefficients (Fourier or spline) in saddle rep. ! nsad_v - no. of v-coefficients (Fourier or spline) in saddle rep. ! nfils - no. of filaments per modular/saddle coil (1, 3, or 5) ! numsurf_sad - no. of winding surface modes, saddle rep. ! m_sad - vector of poloidal mode numbers, saddle rep. ! n_sad - vector of toroidal mode numbers, saddle rep. ! n_access - number of access zone constraints ! deln - dist. from coil centerline to filament, normal to winding surface ! delt - dist. from coil centerline to filament, tangent to winding surface ! nvar_vc - nvar_vc(i,j) = 0/1 to fix/vary spline coefficient i, coil type j ! nvar_uc ! nrvf_c ! nopt_alg ! nopt_wsurf ! nsad_group = array of integers defining current group for 'saddle' coils ! nwdim ! nvf_fix ! ! LOGICAL CONTROL VARIABLES ! ! lvf ! lmodcur = T to vary currents in modular representation ! lsurfv = T to vary winding surface coefficients in modular representation ! lbnorm = T to read bnorm coeffs. and match B-normal at plasma boundary ! lbcoil = T to read file containing background coils ! lncsx = T to extend coils in v=0 plane for NBI access ! lsymm = T for coil in v=0 plane with modular representation ! lsaddle = T to use saddle OR modular coil representation ! ltfc ! ltfcv ! lsadsfv = T to vary winding surface in saddle representation ! lsadcur = T to vary currents in saddle representation ! 
lpolcur = T to implement constraint on total poloidal currents ! lmodular = T to use modular coil representation (defunct, superceded by lsaddle) ! lqos = T for code to generate qos TF winding ! lsmod = T to implement modular mode in saddle representation ! laccess = T to implement access zone constraints ! lp_bg ! lspline = T for spline option in saddle representation ! lsplbkp = T to vary spline breakpoints (not used yet) ! lvfvar = T to vary VF coil geometry ! lvfr = T to vary r-coefficients in VF coil representation ! lvfz = T to vary z-coordinates in VF coil representation ! lrestart ! ls_cur ! lbcoil_cur ! lsadshape = T to vary coil geometry ! lvfc = T to vary VF coil currents ! lcc_vf ! lctrlpt = T for control point spline representation (also need lspline=T) ! laxis = T to include magnetic axis in coils.ext file ! ! COIL FOURIER ARRAYS FOR MODULAR REPRESENTATION ! ! rhoc ! rhos ! phic = coeffs. of cosine terms in modular representation ! phis = coeffs. of sine terms in modular representation ! curmod = coil currents for modular representation ! ! COIL FOURIER/SPLINE ARRAYS FOR SADDLE REPRESENTATION ! ! sad_v_c = coeffs. of cosine terms for v-coordinate, saddle representation ! sad_v_s = coeffs. of sine terms for v-coordinate, saddle representation ! sad_u_c = coeffs. of cosine terms for u-coordinate, saddle representation ! sad_u_s = coeffs. of sine terms for u-coordinate, saddle representation ! sad_v0 = const. term used in original b-spline (non-control-pt.) repr. ! sad_u0 = const. term used in original b-spline (non-control-pt.) repr. ! ! SURFACE ARRAYS ! ! rmn_sf = R-coefficients, winding surface for modular representation ! zmn_sf = Z-coefficients, winding surface for modular representation ! rmn_sad = R-coefficients, winding surface for saddle representation ! zmn_sad = R-coefficients, winding surface for saddle representation ! ! TARGET/WEIGHTS ! ! dcc_wgt = weights for coil-coil spacing penalties, modular rep. ! dcc_exp = exponents for coil-coil spacing, = -1 to use linear constraints ! dcc_tgt = targets for coil-coil spacing penalties ! dcp_wgt = weight for coil-plasma spacing penalties ! dcp_exp = exponent for coil-plasma spacing, = -1 to use linear constraint ! dcp_tgt = target tor coil-plasma spacing penalty ! rc_wgt = weights for coil radius of curvature penalties ! rc_exp = exp. for coil radius of curvature penalty, = -1 for linear constraints ! rc_tgt = targets for coil radius of curvature penalties ! dsc_wgt = weights for coil-coil spacing penalties, 'saddle' rep. ! dsc_exp = exponent for 'saddle' coil-coil spacing, = -1 for linear con. ! dsc_tgt = targets for 'saddle' coil-coil spacing penalties ! sc_dmin_tgt = 2D array of saddle coil-coil spacing penalties for each ! coil pair (diagonal is excluded) ! sc_dmin_wgt = 2D array of saddle coil-coil weights ! r_ext ! ymin_wgt = weights for y-min constraints ! ymin_tgt = targets for y-min constraints ! rs_wgt = weights for 'saddle' coil radius of curvature penalties ! rs_exp = exp. For 'saddle' coil radius of curvature penalty, = -1 for linear ! rs_tgt = targets for 'saddle' coil radius of curvature penalties ! lsad_wgt = weights for 'saddle' coil length constraints ! lsad_tgt = target values for 'saddle' coil length constraints ! lmod_wgt = weights for modular coil length constraints ! lmod_tgt = target values for modular coil length constraints ! dac_wgt = weights for access penalties ! dac_exp = exponent for access penalties ! dac_tgt = target distance for access penalties ! cvf_wgt ! cvf_tgt ! cs_wgt ! cs_tgt ! 
cu_wgt ! cu_tgt ! dpc_wgt ! dscxp_wgt = weights for 'saddle' coil linear current density penalties ! dscxp_exp = exponents for 'saddle' coil linear current density penalties ! dscxp_tgt = targets for 'saddle' coil linear current density penalties ! bkp_wgt ! bkp_tgt ! mxb_wgt ! rmax_wgt ! rmax_tgt ! rvf_wgt ! rvf_tgt ! csc_wgt ! csc_tgt ! scd_wgt ! scd_tgt ! rc_vf ! zc_vf ! cc_vf ! i_tfc ! cursad ! x0_access = coordinates of endpoints for lines defining access zones ! y0_access = ! z0_access = ! x1_access = ! y1_access = ! z1_access= ! mc_bg ! bcoil_cur = 'background' coil currents ! rcfc_vf ! rcfs_vf ! csad_scl ! bcoil_file = name of 'background' coil file CONTAINS SUBROUTINE read_coils_namelist (iunit, istat) INTEGER :: iunit, istat READ (iunit, nml=coilsin, iostat=istat) END SUBROUTINE read_coils_namelist END MODULE coilsnamin
```python import numpy as np import sympy as sym import numba import pydae.build as db ``` ```python ``` ### Electromechanical differential equations \begin{eqnarray} f_1 &=& \dot \delta = \Omega_b \left( \omega - \omega_s \right) \\ f_2 &=& \dot \omega = \frac{1}{2H} \left( p_m - p_e - D \left( \omega - \omega_s \right) \right) \end{eqnarray} ### Electric rotor differential equations \begin{eqnarray} f_3 &=& \dot e_q' = \frac{1}{T'_{d0}} \left( -e'_q - \left(X_d - X'_d \right) i_d + v_f^\star \right) \\ f_4 &=& \dot e'_d = \frac{1}{T'_{q0}} \left( -e'_d - \left(X_q - X'_q \right) i_q \right) \end{eqnarray} ### Park transform \begin{eqnarray} v_d &=& v_t \sin\left(\delta - \theta_t\right) \\ v_q &=& v_t \cos\left(\delta - \theta_t\right) \\ p_e &=& \left( v_q + R_a i_q \right) i_q + \left( v_d + R_a i_d \right) i_d \end{eqnarray} ### Stator equations \begin{eqnarray} g_1 &=& v_q + R_a i_q + X'_d i_d - e'_q\\ g_2 &=& v_d + R_a i_d - X'_q i_q - e'_d\\ \end{eqnarray} ### Powers \begin{eqnarray} g_3 &=& i_d v_d + i_q v_q - p_t \\ g_4 &=& i_d v_q - i_q v_d - q_t \end{eqnarray} ### Network equations \begin{eqnarray} g_5 &=& p_t - \left(v_t v_0 \sin\left(\theta_t - \theta_0\right)\right)/X_l\\ g_6 &=& q_t + \left(v_t v_0 \cos\left(\theta_t - \theta_0\right)\right)/X_l - v_t^2/X_l \\ g_7 &=& p_l - \left(v_0 v_t \sin\left(\theta_0 - \theta_t\right)\right)/X_l\\ g_8 &=& q_l + \left(v_0 v_t \cos\left(\theta_0 - \theta_t\right)\right)/X_l - v_0^2/X_l \end{eqnarray} ## System definition ```python params_dict = {'X_d':1.81,'X1d':0.3, 'T1d0':8.0, # synnchronous machine d-axis parameters 'X_q':1.76,'X1q':0.65,'T1q0':1.0, # synnchronous machine q-axis parameters 'R_a':0.003, 'X_l': 0.02, 'H':3.5,'D':0.01, 'Omega_b':2*np.pi*50,'omega_s':1.0, 'K_delta':0.1,'K_v':1e-3 } u_ini_dict = {'v_t':0.8,'theta_t':1.0,'p_l':0.0,'q_l':0.0} # for the initialization problem u_run_dict = {'p_m':0.8,'v_f':1.0,'p_l':0.0,'q_l':0.0} # for the running problem (here initialization and running problem are the same) x_list = ['delta','omega','e1q','e1d'] # dynamic states y_ini_list = ['i_d','i_q','p_t','q_t','p_m','v_f','v_0','theta_0'] y_run_list = ['i_d','i_q','p_t','q_t','v_t','theta_t','v_0','theta_0'] sys_vars = {'params':params_dict, 'u_list':u_run_dict, 'x_list':x_list, 'y_list':y_run_list} exec(db.sym_gen_str()) # exec to generate the required symbolic varables and constants ``` ```python # auxiliar equations v_d = v_t*sin(delta - theta_t) # park v_q = v_t*cos(delta - theta_t) # park p_e = i_d*(v_d + R_a*i_d) + i_q*(v_q + R_a*i_q) # electromagnetic power # dynamic equations ddelta = Omega_b*(omega - omega_s) - K_delta*delta # load angle domega = 1/(2*H)*(p_m - p_e - D*(omega - omega_s)) # speed de1q = 1/T1d0*(-e1q - (X_d - X1d)*i_d + v_f + K_v*(1-v_t)) de1d = 1/T1q0*(-e1d + (X_q - X1q)*i_q) # algrbraic equations g_1 = v_q + R_a*i_q + X1d*i_d - e1q # stator g_2 = v_d + R_a*i_d - X1q*i_q - e1d # stator g_3 = i_d*v_d + i_q*v_q - p_t # active power g_4 = i_d*v_q - i_q*v_d - q_t # reactive power g_5 = p_t - (v_t*v_0*sin(theta_t - theta_0))/X_l # network equation (p) g_6 = q_t + (v_t*v_0*cos(theta_t - theta_0))/X_l - v_t**2/X_l # network equation (q) g_7 = -p_l - (v_t*v_0*sin(theta_0 - theta_t))/X_l # network equation (p) g_8 = -q_l + (v_t*v_0*cos(theta_0 - theta_t))/X_l - v_0**2/X_l # network equation (q) ``` ```python sys = {'name':'iso_milano_ex8p1_4ord_uctrl', 'params_dict':params, 'f_list':[ddelta,domega,de1q,de1d], 'g_list':[g_1,g_2,g_3,g_4,g_5,g_6,g_7,g_8], 'x_list':x_list, 'y_ini_list':y_ini_list, 'y_run_list':y_run_list, 
'u_ini_dict':u_ini_dict, 'u_run_dict':u_run_dict, 'h_dict':{'p_m':p_m,'p_e':p_e, 'v_f':v_f}} sys = db.system(sys) db.sys2num(sys) ``` ```python u_ini_dict ``` {'v_t': 0.8, 'theta_t': 1.0, 'p_l': 0.0, 'q_l': 0.0}
||| Note: The difference to a 'strict' Writer implementation is ||| that accumulation of values does not happen in the ||| Applicative and Monad instances but when invoking `Writer`-specific ||| functions like `writer` or `listen`. module Control.Monad.Writer.CPS import Control.Monad.Identity import Control.Monad.Trans %default total ||| A writer monad parameterized by: ||| ||| @w the output to accumulate. ||| ||| @m The inner monad. ||| ||| The `pure` function produces the output `neutral`, while `>>=` ||| combines the outputs of the subcomputations using `<+>`. public export record WriterT (w : Type) (m : Type -> Type) (a : Type) where constructor MkWriterT unWriterT : w -> m (a, w) ||| Construct an writer computation from a (result,output) computation. ||| (The inverse of `runWriterT`.) public export %inline writerT : (Functor m, Semigroup w) => m (a, w) -> WriterT w m a writerT f = MkWriterT $ \w => (\(a,w') => (a,w <+> w')) <$> f ||| Unwrap a writer computation. ||| (The inverse of 'writerT'.) public export %inline runWriterT : Monoid w => WriterT w m a -> m (a,w) runWriterT m = unWriterT m neutral ||| Extract the output from a writer computation. public export %inline execWriterT : (Functor m, Monoid w) => WriterT w m a -> m w execWriterT = map snd . runWriterT ||| Map both the return value and output of a computation using ||| the given function. public export %inline mapWriterT : (Functor n, Monoid w, Semigroup w') => (m (a, w) -> n (b, w')) -> WriterT w m a -> WriterT w' n b mapWriterT f m = MkWriterT $ \w => (\(a,w') => (a,w <+> w')) <$> f (runWriterT m) -------------------------------------------------------------------------------- -- Writer Functions -------------------------------------------------------------------------------- ||| The `return` function produces the output `neutral`, while `>>=` ||| combines the outputs of the subcomputations using `<+>`. public export Writer : Type -> Type -> Type Writer w = WriterT w Identity ||| Unwrap a writer computation as a (result, output) pair. public export %inline runWriter : Monoid w => Writer w a -> (a, w) runWriter = runIdentity . runWriterT ||| Extract the output from a writer computation. public export %inline execWriter : Monoid w => Writer w a -> w execWriter = runIdentity . execWriterT ||| Map both the return value and output of a computation using ||| the given function. public export %inline mapWriter : (Monoid w, Semigroup w') => ((a, w) -> (b, w')) -> Writer w a -> Writer w' b mapWriter f = mapWriterT $ \(Id p) => Id (f p) -------------------------------------------------------------------------------- -- Implementations -------------------------------------------------------------------------------- public export %inline Functor m => Functor (WriterT w m) where map f m = MkWriterT $ \w => (\(a,w') => (f a,w')) <$> unWriterT m w public export %inline Monad m => Applicative (WriterT w m) where pure a = MkWriterT $ \w => pure (a,w) MkWriterT mf <*> MkWriterT mx = MkWriterT $ \w => do (f,w1) <- mf w (a,w2) <- mx w1 pure (f a,w2) public export %inline (Monad m, Alternative m) => Alternative (WriterT w m) where empty = MkWriterT $ \_ => empty MkWriterT m <|> mn = MkWriterT $ \w => m w <|> unWriterT mn w public export %inline Monad m => Monad (WriterT w m) where m >>= k = MkWriterT $ \w => do (a,w1) <- unWriterT m w unWriterT (k a) w1 public export %inline MonadTrans (WriterT w) where lift m = MkWriterT $ \w => map (\a => (a,w)) m public export %inline HasIO m => HasIO (WriterT w m) where liftIO = lift . liftIO
% book : Signals and Systems Laboratory with MATLAB % authors : Alex Palamides & Anastasia Veloni % % % % problem 5 - convolution of x(t) and h(t) t1=0:.1:.9; t2=1:.1:3; t3=3.1:.1:5; x1=zeros(size(t1)); x2=t2; x3=ones(size(t3)); x=[x1 x2 x3]; h1=zeros(size(t1)); h2=exp(-t2); h3=zeros(size(t3)); h=[h1 h2 h3]; y=conv(x,h)*0.1; plot(0:.1:10,y); legend('y(t)') figure th=1:.01:3; h=exp(-th); tx1=1:.01:3; x1=tx1; tx2=3.01:.01:5; x2=ones(size(tx2)); x=[x1 x2]; y=conv(x,h)*0.01; plot(2:.01:8,y); legend('y(t)')
{-# OPTIONS --without-K --safe #-} open import Categories.Category -- this module characterizes a category of all equalizer indexed by I. -- this notion formalizes a category with all equalizer up to certain cardinal. module Categories.Diagram.Equalizer.Indexed {o ℓ e} (C : Category o ℓ e) where open import Level open Category C record IndexedEqualizerOf {i} {I : Set i} {A B : Obj} (M : I → A ⇒ B) : Set (i ⊔ o ⊔ e ⊔ ℓ) where field E : Obj arr : E ⇒ A -- a reference morphism ref : E ⇒ B equality : ∀ i → M i ∘ arr ≈ ref equalize : ∀ {X} (h : X ⇒ A) (r : X ⇒ B) → (∀ i → M i ∘ h ≈ r) → X ⇒ E universal : ∀ {X} (h : X ⇒ A) (r : X ⇒ B) (eq : ∀ i → M i ∘ h ≈ r) → h ≈ arr ∘ equalize h r eq unique : ∀ {X} {l : X ⇒ E} (h : X ⇒ A) (r : X ⇒ B) (eq : ∀ i → M i ∘ h ≈ r) → h ≈ arr ∘ l → l ≈ equalize h r eq record IndexedEqualizer {i} (I : Set i) : Set (i ⊔ o ⊔ e ⊔ ℓ) where field A B : Obj M : I → A ⇒ B equalizerOf : IndexedEqualizerOf M open IndexedEqualizerOf equalizerOf public AllEqualizers : ∀ i → Set (o ⊔ ℓ ⊔ e ⊔ suc i) AllEqualizers i = (I : Set i) → IndexedEqualizer I AllEqualizersOf : ∀ i → Set (o ⊔ ℓ ⊔ e ⊔ suc i) AllEqualizersOf i = ∀ {I : Set i} {A B : Obj} (M : I → A ⇒ B) → IndexedEqualizerOf M
/* @copyright Louis Dionne 2014 Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE.md or copy at http://boost.org/LICENSE_1_0.txt) */ #include <boost/hana/detail/assert.hpp> #include <boost/hana/functional.hpp> #include <boost/hana/integral.hpp> #include <boost/hana/list/instance.hpp> using namespace boost::hana; using namespace literals; //! [main] BOOST_HANA_CONSTANT_ASSERT( sort_by(_>_, list(1_c, -2_c, 3_c, 0_c)) == list(3_c, 1_c, 0_c, -2_c) ); //! [main] int main() { }
/- Copyright (c) 2017 Johannes Hölzl. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Johannes Hölzl -/ import order.bounds import data.set.bool import data.nat.basic /-! # Theory of complete lattices ## Main definitions * `Sup` and `Inf` are the supremum and the infimum of a set; * `supr (f : ι → α)` and `infi (f : ι → α)` are indexed supremum and infimum of a function, defined as `Sup` and `Inf` of the range of this function; * `class complete_lattice`: a bounded lattice such that `Sup s` is always the least upper boundary of `s` and `Inf s` is always the greatest lower boundary of `s`; * `class complete_linear_order`: a linear ordered complete lattice. ## Naming conventions We use `Sup`/`Inf`/`supr`/`infi` for the corresponding functions in the statement. Sometimes we also use `bsupr`/`binfi` for "bounded" supremum or infimum, i.e. one of `⨆ i ∈ s, f i`, `⨆ i (hi : p i), f i`, or more generally `⨆ i (hi : p i), f i hi`. ## Notation * `⨆ i, f i` : `supr f`, the supremum of the range of `f`; * `⨅ i, f i` : `infi f`, the infimum of the range of `f`. -/ set_option old_structure_cmd true open set variables {α β β₂ : Type*} {ι ι₂ : Sort*} /-- class for the `Sup` operator -/ class has_Sup (α : Type*) := (Sup : set α → α) /-- class for the `Inf` operator -/ class has_Inf (α : Type*) := (Inf : set α → α) export has_Sup (Sup) has_Inf (Inf) /-- Supremum of a set -/ add_decl_doc has_Sup.Sup /-- Infimum of a set -/ add_decl_doc has_Inf.Inf /-- Indexed supremum -/ def supr [has_Sup α] {ι} (s : ι → α) : α := Sup (range s) /-- Indexed infimum -/ def infi [has_Inf α] {ι} (s : ι → α) : α := Inf (range s) @[priority 50] instance has_Inf_to_nonempty (α) [has_Inf α] : nonempty α := ⟨Inf ∅⟩ @[priority 50] instance has_Sup_to_nonempty (α) [has_Sup α] : nonempty α := ⟨Sup ∅⟩ notation `⨆` binders `, ` r:(scoped f, supr f) := r notation `⨅` binders `, ` r:(scoped f, infi f) := r instance (α) [has_Inf α] : has_Sup (order_dual α) := ⟨(Inf : set α → α)⟩ instance (α) [has_Sup α] : has_Inf (order_dual α) := ⟨(Sup : set α → α)⟩ /-- Note that we rarely use `complete_semilattice_Sup` (in fact, any such object is always a `complete_lattice`, so it's usually best to start there). Nevertheless it is sometimes a useful intermediate step in constructions. -/ @[ancestor partial_order has_Sup] class complete_semilattice_Sup (α : Type*) extends partial_order α, has_Sup α := (le_Sup : ∀s, ∀a∈s, a ≤ Sup s) (Sup_le : ∀s a, (∀b∈s, b ≤ a) → Sup s ≤ a) section variables [complete_semilattice_Sup α] {s t : set α} {a b : α} @[ematch] theorem le_Sup : a ∈ s → a ≤ Sup s := complete_semilattice_Sup.le_Sup s a theorem Sup_le : (∀b∈s, b ≤ a) → Sup s ≤ a := complete_semilattice_Sup.Sup_le s a lemma is_lub_Sup (s : set α) : is_lub s (Sup s) := ⟨assume x, le_Sup, assume x, Sup_le⟩ lemma is_lub.Sup_eq (h : is_lub s a) : Sup s = a := (is_lub_Sup s).unique h theorem le_Sup_of_le (hb : b ∈ s) (h : a ≤ b) : a ≤ Sup s := le_trans h (le_Sup hb) theorem Sup_le_Sup (h : s ⊆ t) : Sup s ≤ Sup t := (is_lub_Sup s).mono (is_lub_Sup t) h @[simp] theorem Sup_le_iff : Sup s ≤ a ↔ (∀b ∈ s, b ≤ a) := is_lub_le_iff (is_lub_Sup s) lemma le_Sup_iff : a ≤ Sup s ↔ (∀ b ∈ upper_bounds s, a ≤ b) := ⟨λ h b hb, le_trans h (Sup_le hb), λ hb, hb _ (λ x, le_Sup)⟩ theorem Sup_le_Sup_of_forall_exists_le (h : ∀ x ∈ s, ∃ y ∈ t, x ≤ y) : Sup s ≤ Sup t := le_Sup_iff.2 $ λ b hb, Sup_le $ λ a ha, let ⟨c, hct, hac⟩ := h a ha in hac.trans (hb hct) -- We will generalize this to conditionally complete lattices in `cSup_singleton`. 
theorem Sup_singleton {a : α} : Sup {a} = a := is_lub_singleton.Sup_eq end /-- Note that we rarely use `complete_semilattice_Inf` (in fact, any such object is always a `complete_lattice`, so it's usually best to start there). Nevertheless it is sometimes a useful intermediate step in constructions. -/ @[ancestor partial_order has_Inf] class complete_semilattice_Inf (α : Type*) extends partial_order α, has_Inf α := (Inf_le : ∀s, ∀a∈s, Inf s ≤ a) (le_Inf : ∀s a, (∀b∈s, a ≤ b) → a ≤ Inf s) section variables [complete_semilattice_Inf α] {s t : set α} {a b : α} @[ematch] theorem Inf_le : a ∈ s → Inf s ≤ a := complete_semilattice_Inf.Inf_le s a theorem le_Inf : (∀b∈s, a ≤ b) → a ≤ Inf s := complete_semilattice_Inf.le_Inf s a lemma is_glb_Inf (s : set α) : is_glb s (Inf s) := ⟨assume a, Inf_le, assume a, le_Inf⟩ lemma is_glb.Inf_eq (h : is_glb s a) : Inf s = a := (is_glb_Inf s).unique h theorem Inf_le_of_le (hb : b ∈ s) (h : b ≤ a) : Inf s ≤ a := le_trans (Inf_le hb) h theorem Inf_le_Inf (h : s ⊆ t) : Inf t ≤ Inf s := (is_glb_Inf s).mono (is_glb_Inf t) h @[simp] theorem le_Inf_iff : a ≤ Inf s ↔ (∀b ∈ s, a ≤ b) := le_is_glb_iff (is_glb_Inf s) lemma Inf_le_iff : Inf s ≤ a ↔ (∀ b, (∀ x ∈ s, b ≤ x) → b ≤ a) := ⟨λ h b hb, le_trans (le_Inf hb) h, λ hb, hb _ (λ x, Inf_le)⟩ theorem Inf_le_Inf_of_forall_exists_le (h : ∀ x ∈ s, ∃ y ∈ t, y ≤ x) : Inf t ≤ Inf s := le_of_forall_le begin simp only [le_Inf_iff], introv h₀ h₁, rcases h _ h₁ with ⟨y,hy,hy'⟩, solve_by_elim [le_trans _ hy'] end -- We will generalize this to conditionally complete lattices in `cInf_singleton`. theorem Inf_singleton {a : α} : Inf {a} = a := is_glb_singleton.Inf_eq end /-- A complete lattice is a bounded lattice which has suprema and infima for every subset. -/ @[protect_proj, ancestor lattice complete_semilattice_Sup complete_semilattice_Inf has_top has_bot] class complete_lattice (α : Type*) extends lattice α, complete_semilattice_Sup α, complete_semilattice_Inf α, has_top α, has_bot α := (le_top : ∀ x : α, x ≤ ⊤) (bot_le : ∀ x : α, ⊥ ≤ x) @[priority 100] -- see Note [lower instance priority] instance complete_lattice.to_bounded_order [h : complete_lattice α] : bounded_order α := { ..h } /-- Create a `complete_lattice` from a `partial_order` and `Inf` function that returns the greatest lower bound of a set. Usually this constructor provides poor definitional equalities. If other fields are known explicitly, they should be provided; for example, if `inf` is known explicitly, construct the `complete_lattice` instance as ``` instance : complete_lattice my_T := { inf := better_inf, le_inf := ..., inf_le_right := ..., inf_le_left := ... 
-- don't care to fix sup, Sup, bot, top ..complete_lattice_of_Inf my_T _ } ``` -/ def complete_lattice_of_Inf (α : Type*) [H1 : partial_order α] [H2 : has_Inf α] (is_glb_Inf : ∀ s : set α, is_glb s (Inf s)) : complete_lattice α := { bot := Inf univ, bot_le := λ x, (is_glb_Inf univ).1 trivial, top := Inf ∅, le_top := λ a, (is_glb_Inf ∅).2 $ by simp, sup := λ a b, Inf {x | a ≤ x ∧ b ≤ x}, inf := λ a b, Inf {a, b}, le_inf := λ a b c hab hac, by { apply (is_glb_Inf _).2, simp [*] }, inf_le_right := λ a b, (is_glb_Inf _).1 $ mem_insert_of_mem _ $ mem_singleton _, inf_le_left := λ a b, (is_glb_Inf _).1 $ mem_insert _ _, sup_le := λ a b c hac hbc, (is_glb_Inf _).1 $ by simp [*], le_sup_left := λ a b, (is_glb_Inf _).2 $ λ x, and.left, le_sup_right := λ a b, (is_glb_Inf _).2 $ λ x, and.right, le_Inf := λ s a ha, (is_glb_Inf s).2 ha, Inf_le := λ s a ha, (is_glb_Inf s).1 ha, Sup := λ s, Inf (upper_bounds s), le_Sup := λ s a ha, (is_glb_Inf (upper_bounds s)).2 $ λ b hb, hb ha, Sup_le := λ s a ha, (is_glb_Inf (upper_bounds s)).1 ha, .. H1, .. H2 } /-- Any `complete_semilattice_Inf` is in fact a `complete_lattice`. Note that this construction has bad definitional properties: see the doc-string on `complete_lattice_of_Inf`. -/ def complete_lattice_of_complete_semilattice_Inf (α : Type*) [complete_semilattice_Inf α] : complete_lattice α := complete_lattice_of_Inf α (λ s, is_glb_Inf s) /-- Create a `complete_lattice` from a `partial_order` and `Sup` function that returns the least upper bound of a set. Usually this constructor provides poor definitional equalities. If other fields are known explicitly, they should be provided; for example, if `inf` is known explicitly, construct the `complete_lattice` instance as ``` instance : complete_lattice my_T := { inf := better_inf, le_inf := ..., inf_le_right := ..., inf_le_left := ... -- don't care to fix sup, Inf, bot, top ..complete_lattice_of_Sup my_T _ } ``` -/ def complete_lattice_of_Sup (α : Type*) [H1 : partial_order α] [H2 : has_Sup α] (is_lub_Sup : ∀ s : set α, is_lub s (Sup s)) : complete_lattice α := { top := Sup univ, le_top := λ x, (is_lub_Sup univ).1 trivial, bot := Sup ∅, bot_le := λ x, (is_lub_Sup ∅).2 $ by simp, sup := λ a b, Sup {a, b}, sup_le := λ a b c hac hbc, (is_lub_Sup _).2 (by simp [*]), le_sup_left := λ a b, (is_lub_Sup _).1 $ mem_insert _ _, le_sup_right := λ a b, (is_lub_Sup _).1 $ mem_insert_of_mem _ $ mem_singleton _, inf := λ a b, Sup {x | x ≤ a ∧ x ≤ b}, le_inf := λ a b c hab hac, (is_lub_Sup _).1 $ by simp [*], inf_le_left := λ a b, (is_lub_Sup _).2 (λ x, and.left), inf_le_right := λ a b, (is_lub_Sup _).2 (λ x, and.right), Inf := λ s, Sup (lower_bounds s), Sup_le := λ s a ha, (is_lub_Sup s).2 ha, le_Sup := λ s a ha, (is_lub_Sup s).1 ha, Inf_le := λ s a ha, (is_lub_Sup (lower_bounds s)).2 (λ b hb, hb ha), le_Inf := λ s a ha, (is_lub_Sup (lower_bounds s)).1 ha, .. H1, .. H2 } /-- Any `complete_semilattice_Sup` is in fact a `complete_lattice`. Note that this construction has bad definitional properties: see the doc-string on `complete_lattice_of_Sup`. -/ def complete_lattice_of_complete_semilattice_Sup (α : Type*) [complete_semilattice_Sup α] : complete_lattice α := complete_lattice_of_Sup α (λ s, is_lub_Sup s) /-- A complete linear order is a linear order whose lattice structure is complete. 
-/ class complete_linear_order (α : Type*) extends complete_lattice α, linear_order α namespace order_dual variable (α) instance [complete_lattice α] : complete_lattice (order_dual α) := { le_Sup := @complete_lattice.Inf_le α _, Sup_le := @complete_lattice.le_Inf α _, Inf_le := @complete_lattice.le_Sup α _, le_Inf := @complete_lattice.Sup_le α _, .. order_dual.lattice α, ..order_dual.has_Sup α, ..order_dual.has_Inf α, .. order_dual.bounded_order α } instance [complete_linear_order α] : complete_linear_order (order_dual α) := { .. order_dual.complete_lattice α, .. order_dual.linear_order α } end order_dual section variables [complete_lattice α] {s t : set α} {a b : α} theorem Inf_le_Sup (hs : s.nonempty) : Inf s ≤ Sup s := is_glb_le_is_lub (is_glb_Inf s) (is_lub_Sup s) hs theorem Sup_union {s t : set α} : Sup (s ∪ t) = Sup s ⊔ Sup t := ((is_lub_Sup s).union (is_lub_Sup t)).Sup_eq theorem Sup_inter_le {s t : set α} : Sup (s ∩ t) ≤ Sup s ⊓ Sup t := by finish /- Sup_le (assume a ⟨a_s, a_t⟩, le_inf (le_Sup a_s) (le_Sup a_t)) -/ theorem Inf_union {s t : set α} : Inf (s ∪ t) = Inf s ⊓ Inf t := ((is_glb_Inf s).union (is_glb_Inf t)).Inf_eq theorem le_Inf_inter {s t : set α} : Inf s ⊔ Inf t ≤ Inf (s ∩ t) := @Sup_inter_le (order_dual α) _ _ _ @[simp] theorem Sup_empty : Sup ∅ = (⊥ : α) := (@is_lub_empty α _ _).Sup_eq @[simp] theorem Inf_empty : Inf ∅ = (⊤ : α) := (@is_glb_empty α _ _).Inf_eq @[simp] theorem Sup_univ : Sup univ = (⊤ : α) := (@is_lub_univ α _ _).Sup_eq @[simp] theorem Inf_univ : Inf univ = (⊥ : α) := (@is_glb_univ α _ _).Inf_eq -- TODO(Jeremy): get this automatically @[simp] theorem Sup_insert {a : α} {s : set α} : Sup (insert a s) = a ⊔ Sup s := ((is_lub_Sup s).insert a).Sup_eq @[simp] theorem Inf_insert {a : α} {s : set α} : Inf (insert a s) = a ⊓ Inf s := ((is_glb_Inf s).insert a).Inf_eq theorem Sup_le_Sup_of_subset_insert_bot (h : s ⊆ insert ⊥ t) : Sup s ≤ Sup t := le_trans (Sup_le_Sup h) (le_of_eq (trans Sup_insert bot_sup_eq)) theorem Inf_le_Inf_of_subset_insert_top (h : s ⊆ insert ⊤ t) : Inf t ≤ Inf s := le_trans (le_of_eq (trans top_inf_eq.symm Inf_insert.symm)) (Inf_le_Inf h) theorem Sup_pair {a b : α} : Sup {a, b} = a ⊔ b := (@is_lub_pair α _ a b).Sup_eq theorem Inf_pair {a b : α} : Inf {a, b} = a ⊓ b := (@is_glb_pair α _ a b).Inf_eq @[simp] theorem Inf_eq_top : Inf s = ⊤ ↔ (∀a∈s, a = ⊤) := iff.intro (assume h a ha, top_unique $ h ▸ Inf_le ha) (assume h, top_unique $ le_Inf $ assume a ha, top_le_iff.2 $ h a ha) lemma eq_singleton_top_of_Inf_eq_top_of_nonempty {s : set α} (h_inf : Inf s = ⊤) (hne : s.nonempty) : s = {⊤} := by { rw set.eq_singleton_iff_nonempty_unique_mem, rw Inf_eq_top at h_inf, exact ⟨hne, h_inf⟩, } @[simp] theorem Sup_eq_bot : Sup s = ⊥ ↔ (∀a∈s, a = ⊥) := @Inf_eq_top (order_dual α) _ _ lemma eq_singleton_bot_of_Sup_eq_bot_of_nonempty {s : set α} (h_sup : Sup s = ⊥) (hne : s.nonempty) : s = {⊥} := by { rw set.eq_singleton_iff_nonempty_unique_mem, rw Sup_eq_bot at h_sup, exact ⟨hne, h_sup⟩, } /--Introduction rule to prove that `b` is the supremum of `s`: it suffices to check that `b` is larger than all elements of `s`, and that this is not the case of any `w<b`. See `cSup_eq_of_forall_le_of_forall_lt_exists_gt` for a version in conditionally complete lattices. 
-/ theorem Sup_eq_of_forall_le_of_forall_lt_exists_gt (_ : ∀a∈s, a ≤ b) (H : ∀w, w < b → (∃a∈s, w < a)) : Sup s = b := have (Sup s < b) ∨ (Sup s = b) := lt_or_eq_of_le (Sup_le ‹∀a∈s, a ≤ b›), have ¬(Sup s < b) := assume: Sup s < b, let ⟨a, _, _⟩ := (H (Sup s) ‹Sup s < b›) in /- a ∈ s, Sup s < a-/ have Sup s < Sup s := lt_of_lt_of_le ‹Sup s < a› (le_Sup ‹a ∈ s›), show false, by finish [lt_irrefl (Sup s)], show Sup s = b, by finish /--Introduction rule to prove that `b` is the infimum of `s`: it suffices to check that `b` is smaller than all elements of `s`, and that this is not the case of any `w>b`. See `cInf_eq_of_forall_ge_of_forall_gt_exists_lt` for a version in conditionally complete lattices. -/ theorem Inf_eq_of_forall_ge_of_forall_gt_exists_lt (_ : ∀a∈s, b ≤ a) (H : ∀w, b < w → (∃a∈s, a < w)) : Inf s = b := @Sup_eq_of_forall_le_of_forall_lt_exists_gt (order_dual α) _ _ ‹_› ‹_› ‹_› end section complete_linear_order variables [complete_linear_order α] {s t : set α} {a b : α} lemma Inf_lt_iff : Inf s < b ↔ (∃a∈s, a < b) := is_glb_lt_iff (is_glb_Inf s) lemma lt_Sup_iff : b < Sup s ↔ (∃a∈s, b < a) := lt_is_lub_iff (is_lub_Sup s) lemma Sup_eq_top : Sup s = ⊤ ↔ (∀b<⊤, ∃a∈s, b < a) := iff.intro (assume (h : Sup s = ⊤) b hb, by rwa [←h, lt_Sup_iff] at hb) (assume h, top_unique $ le_of_not_gt $ assume h', let ⟨a, ha, h⟩ := h _ h' in lt_irrefl a $ lt_of_le_of_lt (le_Sup ha) h) lemma Inf_eq_bot : Inf s = ⊥ ↔ (∀b>⊥, ∃a∈s, a < b) := @Sup_eq_top (order_dual α) _ _ lemma lt_supr_iff {f : ι → α} : a < supr f ↔ (∃i, a < f i) := lt_Sup_iff.trans exists_range_iff lemma infi_lt_iff {f : ι → α} : infi f < a ↔ (∃i, f i < a) := Inf_lt_iff.trans exists_range_iff end complete_linear_order /- ### supr & infi -/ section variables [complete_lattice α] {s t : ι → α} {a b : α} -- TODO: this declaration gives error when starting smt state --@[ematch] theorem le_supr (s : ι → α) (i : ι) : s i ≤ supr s := le_Sup ⟨i, rfl⟩ @[ematch] theorem le_supr' (s : ι → α) (i : ι) : (: s i ≤ supr s :) := le_Sup ⟨i, rfl⟩ /- TODO: this version would be more powerful, but, alas, the pattern matcher doesn't accept it. 
@[ematch] theorem le_supr' (s : ι → α) (i : ι) : (: s i :) ≤ (: supr s :) := le_Sup ⟨i, rfl⟩ -/ lemma is_lub_supr : is_lub (range s) (⨆j, s j) := is_lub_Sup _ lemma is_lub.supr_eq (h : is_lub (range s) a) : (⨆j, s j) = a := h.Sup_eq lemma is_glb_infi : is_glb (range s) (⨅j, s j) := is_glb_Inf _ lemma is_glb.infi_eq (h : is_glb (range s) a) : (⨅j, s j) = a := h.Inf_eq theorem le_supr_of_le (i : ι) (h : a ≤ s i) : a ≤ supr s := le_trans h (le_supr _ i) theorem le_bsupr {p : ι → Prop} {f : Π i (h : p i), α} (i : ι) (hi : p i) : f i hi ≤ ⨆ i hi, f i hi := le_supr_of_le i $ le_supr (f i) hi theorem le_bsupr_of_le {p : ι → Prop} {f : Π i (h : p i), α} (i : ι) (hi : p i) (h : a ≤ f i hi) : a ≤ ⨆ i hi, f i hi := le_trans h (le_bsupr i hi) theorem supr_le (h : ∀i, s i ≤ a) : supr s ≤ a := Sup_le $ assume b ⟨i, eq⟩, eq ▸ h i theorem bsupr_le {p : ι → Prop} {f : Π i (h : p i), α} (h : ∀ i hi, f i hi ≤ a) : (⨆ i (hi : p i), f i hi) ≤ a := supr_le $ λ i, supr_le $ h i theorem bsupr_le_supr (p : ι → Prop) (f : ι → α) : (⨆ i (H : p i), f i) ≤ ⨆ i, f i := bsupr_le (λ i hi, le_supr f i) theorem supr_le_supr (h : ∀i, s i ≤ t i) : supr s ≤ supr t := supr_le $ assume i, le_supr_of_le i (h i) theorem supr_le_supr2 {t : ι₂ → α} (h : ∀i, ∃j, s i ≤ t j) : supr s ≤ supr t := supr_le $ assume j, exists.elim (h j) le_supr_of_le theorem bsupr_le_bsupr {p : ι → Prop} {f g : Π i (hi : p i), α} (h : ∀ i hi, f i hi ≤ g i hi) : (⨆ i hi, f i hi) ≤ ⨆ i hi, g i hi := bsupr_le $ λ i hi, le_trans (h i hi) (le_bsupr i hi) theorem supr_le_supr_const (h : ι → ι₂) : (⨆ i:ι, a) ≤ (⨆ j:ι₂, a) := supr_le $ le_supr _ ∘ h theorem bsupr_le_bsupr' {p q : ι → Prop} (hpq : ∀ i, p i → q i) {f : ι → α} : (⨆ i (hpi : p i), f i) ≤ ⨆ i (hqi : q i), f i := supr_le_supr $ λ i, supr_le_supr_const (hpq i) @[simp] theorem supr_le_iff : supr s ≤ a ↔ (∀i, s i ≤ a) := (is_lub_le_iff is_lub_supr).trans forall_range_iff theorem supr_lt_iff : supr s < a ↔ ∃ b < a, ∀ i, s i ≤ b := ⟨λ h, ⟨supr s, h, λ i, le_supr s i⟩, λ ⟨b, hba, hsb⟩, (supr_le hsb).trans_lt hba⟩ theorem Sup_eq_supr {s : set α} : Sup s = (⨆a ∈ s, a) := le_antisymm (Sup_le $ assume b h, le_supr_of_le b $ le_supr _ h) (supr_le $ assume b, supr_le $ assume h, le_Sup h) lemma Sup_eq_supr' {α} [has_Sup α] (s : set α) : Sup s = ⨆ x : s, (x : α) := by rw [supr, subtype.range_coe] lemma Sup_sUnion {s : set (set α)} : Sup (⋃₀ s) = ⨆ (t ∈ s), Sup t := begin apply le_antisymm, { apply Sup_le (λ b hb, _), rcases hb with ⟨t, ts, bt⟩, apply le_trans _ (le_supr _ t), exact le_trans (le_Sup bt) (le_supr _ ts), }, { apply supr_le (λ t, _), exact supr_le (λ ts, Sup_le_Sup (λ x xt, ⟨t, ts, xt⟩)) } end lemma le_supr_iff : (a ≤ supr s) ↔ (∀ b, (∀ i, s i ≤ b) → a ≤ b) := ⟨λ h b hb, le_trans h (supr_le hb), λ h, h _ $ λ i, le_supr s i⟩ lemma monotone.le_map_supr [complete_lattice β] {f : α → β} (hf : monotone f) : (⨆ i, f (s i)) ≤ f (supr s) := supr_le $ λ i, hf $ le_supr _ _ lemma monotone.le_map_supr2 [complete_lattice β] {f : α → β} (hf : monotone f) {ι' : ι → Sort*} (s : Π i, ι' i → α) : (⨆ i (h : ι' i), f (s i h)) ≤ f (⨆ i (h : ι' i), s i h) := calc (⨆ i h, f (s i h)) ≤ (⨆ i, f (⨆ h, s i h)) : supr_le_supr $ λ i, hf.le_map_supr ... 
≤ f (⨆ i (h : ι' i), s i h) : hf.le_map_supr lemma monotone.le_map_Sup [complete_lattice β] {s : set α} {f : α → β} (hf : monotone f) : (⨆a∈s, f a) ≤ f (Sup s) := by rw [Sup_eq_supr]; exact hf.le_map_supr2 _ lemma order_iso.map_supr [complete_lattice β] (f : α ≃o β) (x : ι → α) : f (⨆ i, x i) = ⨆ i, f (x i) := eq_of_forall_ge_iff $ f.surjective.forall.2 $ λ x, by simp only [f.le_iff_le, supr_le_iff] lemma order_iso.map_Sup [complete_lattice β] (f : α ≃o β) (s : set α) : f (Sup s) = ⨆ a ∈ s, f a := by simp only [Sup_eq_supr, order_iso.map_supr] lemma supr_comp_le {ι' : Sort*} (f : ι' → α) (g : ι → ι') : (⨆ x, f (g x)) ≤ ⨆ y, f y := supr_le_supr2 $ λ x, ⟨_, le_refl _⟩ lemma monotone.supr_comp_eq [preorder β] {f : β → α} (hf : monotone f) {s : ι → β} (hs : ∀ x, ∃ i, x ≤ s i) : (⨆ x, f (s x)) = ⨆ y, f y := le_antisymm (supr_comp_le _ _) (supr_le_supr2 $ λ x, (hs x).imp $ λ i hi, hf hi) lemma function.surjective.supr_comp {α : Type*} [has_Sup α] {f : ι → ι₂} (hf : function.surjective f) (g : ι₂ → α) : (⨆ x, g (f x)) = ⨆ y, g y := by simp only [supr, hf.range_comp] lemma supr_congr {α : Type*} [has_Sup α] {f : ι → α} {g : ι₂ → α} (h : ι → ι₂) (h1 : function.surjective h) (h2 : ∀ x, g (h x) = f x) : (⨆ x, f x) = ⨆ y, g y := by { convert h1.supr_comp g, exact (funext h2).symm } -- TODO: finish doesn't do well here. @[congr] theorem supr_congr_Prop {α : Type*} [has_Sup α] {p q : Prop} {f₁ : p → α} {f₂ : q → α} (pq : p ↔ q) (f : ∀x, f₁ (pq.mpr x) = f₂ x) : supr f₁ = supr f₂ := begin have := propext pq, subst this, congr' with x, apply f end theorem infi_le (s : ι → α) (i : ι) : infi s ≤ s i := Inf_le ⟨i, rfl⟩ @[ematch] theorem infi_le' (s : ι → α) (i : ι) : (: infi s ≤ s i :) := Inf_le ⟨i, rfl⟩ theorem infi_le_of_le (i : ι) (h : s i ≤ a) : infi s ≤ a := le_trans (infi_le _ i) h theorem binfi_le {p : ι → Prop} {f : Π i (hi : p i), α} (i : ι) (hi : p i) : (⨅ i hi, f i hi) ≤ f i hi := infi_le_of_le i $ infi_le (f i) hi theorem binfi_le_of_le {p : ι → Prop} {f : Π i (hi : p i), α} (i : ι) (hi : p i) (h : f i hi ≤ a) : (⨅ i hi, f i hi) ≤ a := le_trans (binfi_le i hi) h theorem le_infi (h : ∀i, a ≤ s i) : a ≤ infi s := le_Inf $ assume b ⟨i, eq⟩, eq ▸ h i theorem le_binfi {p : ι → Prop} {f : Π i (h : p i), α} (h : ∀ i hi, a ≤ f i hi) : a ≤ ⨅ i hi, f i hi := le_infi $ λ i, le_infi $ h i theorem infi_le_binfi (p : ι → Prop) (f : ι → α) : (⨅ i, f i) ≤ ⨅ i (H : p i), f i := le_binfi (λ i hi, infi_le f i) theorem infi_le_infi (h : ∀i, s i ≤ t i) : infi s ≤ infi t := le_infi $ assume i, infi_le_of_le i (h i) theorem infi_le_infi2 {t : ι₂ → α} (h : ∀j, ∃i, s i ≤ t j) : infi s ≤ infi t := le_infi $ assume j, exists.elim (h j) infi_le_of_le theorem binfi_le_binfi {p : ι → Prop} {f g : Π i (h : p i), α} (h : ∀ i hi, f i hi ≤ g i hi) : (⨅ i hi, f i hi) ≤ ⨅ i hi, g i hi := le_binfi $ λ i hi, le_trans (binfi_le i hi) (h i hi) theorem infi_le_infi_const (h : ι₂ → ι) : (⨅ i:ι, a) ≤ (⨅ j:ι₂, a) := le_infi $ infi_le _ ∘ h @[simp] theorem le_infi_iff : a ≤ infi s ↔ (∀i, a ≤ s i) := ⟨assume : a ≤ infi s, assume i, le_trans this (infi_le _ _), le_infi⟩ theorem Inf_eq_infi {s : set α} : Inf s = (⨅a ∈ s, a) := @Sup_eq_supr (order_dual α) _ _ theorem Inf_eq_infi' {α} [has_Inf α] (s : set α) : Inf s = ⨅ a : s, a := @Sup_eq_supr' (order_dual α) _ _ lemma monotone.map_infi_le [complete_lattice β] {f : α → β} (hf : monotone f) : f (infi s) ≤ (⨅ i, f (s i)) := le_infi $ λ i, hf $ infi_le _ _ lemma monotone.map_infi2_le [complete_lattice β] {f : α → β} (hf : monotone f) {ι' : ι → Sort*} (s : Π i, ι' i → α) : f (⨅ i (h : ι' i), s i h) 
≤ (⨅ i (h : ι' i), f (s i h)) := @monotone.le_map_supr2 (order_dual α) (order_dual β) _ _ _ f hf.dual _ _ lemma monotone.map_Inf_le [complete_lattice β] {s : set α} {f : α → β} (hf : monotone f) : f (Inf s) ≤ ⨅ a∈s, f a := by rw [Inf_eq_infi]; exact hf.map_infi2_le _ lemma order_iso.map_infi [complete_lattice β] (f : α ≃o β) (x : ι → α) : f (⨅ i, x i) = ⨅ i, f (x i) := order_iso.map_supr f.dual _ lemma order_iso.map_Inf [complete_lattice β] (f : α ≃o β) (s : set α) : f (Inf s) = ⨅ a ∈ s, f a := order_iso.map_Sup f.dual _ lemma le_infi_comp {ι' : Sort*} (f : ι' → α) (g : ι → ι') : (⨅ y, f y) ≤ ⨅ x, f (g x) := infi_le_infi2 $ λ x, ⟨_, le_refl _⟩ lemma monotone.infi_comp_eq [preorder β] {f : β → α} (hf : monotone f) {s : ι → β} (hs : ∀ x, ∃ i, s i ≤ x) : (⨅ x, f (s x)) = ⨅ y, f y := le_antisymm (infi_le_infi2 $ λ x, (hs x).imp $ λ i hi, hf hi) (le_infi_comp _ _) lemma function.surjective.infi_comp {α : Type*} [has_Inf α] {f : ι → ι₂} (hf : function.surjective f) (g : ι₂ → α) : (⨅ x, g (f x)) = ⨅ y, g y := @function.surjective.supr_comp _ _ (order_dual α) _ f hf g lemma infi_congr {α : Type*} [has_Inf α] {f : ι → α} {g : ι₂ → α} (h : ι → ι₂) (h1 : function.surjective h) (h2 : ∀ x, g (h x) = f x) : (⨅ x, f x) = ⨅ y, g y := @supr_congr _ _ (order_dual α) _ _ _ h h1 h2 @[congr] theorem infi_congr_Prop {α : Type*} [has_Inf α] {p q : Prop} {f₁ : p → α} {f₂ : q → α} (pq : p ↔ q) (f : ∀x, f₁ (pq.mpr x) = f₂ x) : infi f₁ = infi f₂ := @supr_congr_Prop (order_dual α) _ p q f₁ f₂ pq f lemma supr_const_le {x : α} : (⨆ (h : ι), x) ≤ x := supr_le (λ _, le_rfl) lemma le_infi_const {x : α} : x ≤ (⨅ (h : ι), x) := le_infi (λ _, le_rfl) -- We will generalize this to conditionally complete lattices in `cinfi_const`. theorem infi_const [nonempty ι] {a : α} : (⨅ b:ι, a) = a := by rw [infi, range_const, Inf_singleton] -- We will generalize this to conditionally complete lattices in `csupr_const`. theorem supr_const [nonempty ι] {a : α} : (⨆ b:ι, a) = a := @infi_const (order_dual α) _ _ _ _ @[simp] lemma infi_top : (⨅i:ι, ⊤ : α) = ⊤ := top_unique $ le_infi $ assume i, le_refl _ @[simp] lemma supr_bot : (⨆i:ι, ⊥ : α) = ⊥ := @infi_top (order_dual α) _ _ @[simp] lemma infi_eq_top : infi s = ⊤ ↔ (∀i, s i = ⊤) := Inf_eq_top.trans forall_range_iff @[simp] lemma supr_eq_bot : supr s = ⊥ ↔ (∀i, s i = ⊥) := Sup_eq_bot.trans forall_range_iff @[simp] lemma infi_pos {p : Prop} {f : p → α} (hp : p) : (⨅ h : p, f h) = f hp := le_antisymm (infi_le _ _) (le_infi $ assume h, le_refl _) @[simp] lemma infi_neg {p : Prop} {f : p → α} (hp : ¬ p) : (⨅ h : p, f h) = ⊤ := le_antisymm le_top $ le_infi $ assume h, (hp h).elim @[simp] lemma supr_pos {p : Prop} {f : p → α} (hp : p) : (⨆ h : p, f h) = f hp := le_antisymm (supr_le $ assume h, le_refl _) (le_supr _ _) @[simp] lemma supr_neg {p : Prop} {f : p → α} (hp : ¬ p) : (⨆ h : p, f h) = ⊥ := le_antisymm (supr_le $ assume h, (hp h).elim) bot_le /--Introduction rule to prove that `b` is the supremum of `f`: it suffices to check that `b` is larger than `f i` for all `i`, and that this is not the case of any `w<b`. See `csupr_eq_of_forall_le_of_forall_lt_exists_gt` for a version in conditionally complete lattices. 
-/ theorem supr_eq_of_forall_le_of_forall_lt_exists_gt {f : ι → α} (h₁ : ∀ i, f i ≤ b) (h₂ : ∀ w, w < b → (∃ i, w < f i)) : (⨆ (i : ι), f i) = b := Sup_eq_of_forall_le_of_forall_lt_exists_gt (forall_range_iff.mpr h₁) (λ w hw, exists_range_iff.mpr $ h₂ w hw) /--Introduction rule to prove that `b` is the infimum of `f`: it suffices to check that `b` is smaller than `f i` for all `i`, and that this is not the case of any `w>b`. See `cinfi_eq_of_forall_ge_of_forall_gt_exists_lt` for a version in conditionally complete lattices. -/ theorem infi_eq_of_forall_ge_of_forall_gt_exists_lt {f : ι → α} (h₁ : ∀ i, b ≤ f i) (h₂ : ∀ w, b < w → (∃ i, f i < w)) : (⨅ (i : ι), f i) = b := @supr_eq_of_forall_le_of_forall_lt_exists_gt (order_dual α) _ _ _ ‹_› ‹_› ‹_› lemma supr_eq_dif {p : Prop} [decidable p] (a : p → α) : (⨆h:p, a h) = (if h : p then a h else ⊥) := by by_cases p; simp [h] lemma supr_eq_if {p : Prop} [decidable p] (a : α) : (⨆h:p, a) = (if p then a else ⊥) := supr_eq_dif (λ _, a) lemma infi_eq_dif {p : Prop} [decidable p] (a : p → α) : (⨅h:p, a h) = (if h : p then a h else ⊤) := @supr_eq_dif (order_dual α) _ _ _ _ lemma infi_eq_if {p : Prop} [decidable p] (a : α) : (⨅h:p, a) = (if p then a else ⊤) := infi_eq_dif (λ _, a) -- TODO: should this be @[simp]? theorem infi_comm {f : ι → ι₂ → α} : (⨅i, ⨅j, f i j) = (⨅j, ⨅i, f i j) := le_antisymm (le_infi $ assume i, le_infi $ assume j, infi_le_of_le j $ infi_le _ i) (le_infi $ assume j, le_infi $ assume i, infi_le_of_le i $ infi_le _ j) /- TODO: this is strange. In the proof below, we get exactly the desired among the equalities, but close does not get it. begin apply @le_antisymm, simp, intros, begin [smt] ematch, ematch, ematch, trace_state, have := le_refl (f i_1 i), trace_state, close end end -/ -- TODO: should this be @[simp]? 
theorem supr_comm {f : ι → ι₂ → α} : (⨆i, ⨆j, f i j) = (⨆j, ⨆i, f i j) := @infi_comm (order_dual α) _ _ _ _ @[simp] theorem infi_infi_eq_left {b : β} {f : Πx:β, x = b → α} : (⨅x, ⨅h:x = b, f x h) = f b rfl := le_antisymm (infi_le_of_le b $ infi_le _ rfl) (le_infi $ assume b', le_infi $ assume eq, match b', eq with ._, rfl := le_refl _ end) @[simp] theorem infi_infi_eq_right {b : β} {f : Πx:β, b = x → α} : (⨅x, ⨅h:b = x, f x h) = f b rfl := le_antisymm (infi_le_of_le b $ infi_le _ rfl) (le_infi $ assume b', le_infi $ assume eq, match b', eq with ._, rfl := le_refl _ end) @[simp] theorem supr_supr_eq_left {b : β} {f : Πx:β, x = b → α} : (⨆x, ⨆h : x = b, f x h) = f b rfl := @infi_infi_eq_left (order_dual α) _ _ _ _ @[simp] theorem supr_supr_eq_right {b : β} {f : Πx:β, b = x → α} : (⨆x, ⨆h : b = x, f x h) = f b rfl := @infi_infi_eq_right (order_dual α) _ _ _ _ attribute [ematch] le_refl theorem infi_subtype {p : ι → Prop} {f : subtype p → α} : (⨅ x, f x) = (⨅ i (h:p i), f ⟨i, h⟩) := le_antisymm (le_infi $ assume i, le_infi $ assume : p i, infi_le _ _) (le_infi $ assume ⟨i, h⟩, infi_le_of_le i $ infi_le _ _) lemma infi_subtype' {p : ι → Prop} {f : ∀ i, p i → α} : (⨅ i (h : p i), f i h) = (⨅ x : subtype p, f x x.property) := (@infi_subtype _ _ _ p (λ x, f x.val x.property)).symm lemma infi_subtype'' {ι} (s : set ι) (f : ι → α) : (⨅ i : s, f i) = ⨅ (t : ι) (H : t ∈ s), f t := infi_subtype theorem infi_inf_eq {f g : ι → α} : (⨅ x, f x ⊓ g x) = (⨅ x, f x) ⊓ (⨅ x, g x) := le_antisymm (le_inf (le_infi $ assume i, infi_le_of_le i inf_le_left) (le_infi $ assume i, infi_le_of_le i inf_le_right)) (le_infi $ assume i, le_inf (inf_le_of_left_le $ infi_le _ _) (inf_le_of_right_le $ infi_le _ _)) /- TODO: here is another example where more flexible pattern matching might help. begin apply @le_antisymm, safe, pose h := f a ⊓ g a, begin [smt] ematch, ematch end end -/ lemma infi_inf [h : nonempty ι] {f : ι → α} {a : α} : (⨅x, f x) ⊓ a = (⨅ x, f x ⊓ a) := by rw [infi_inf_eq, infi_const] lemma inf_infi [nonempty ι] {f : ι → α} {a : α} : a ⊓ (⨅x, f x) = (⨅ x, a ⊓ f x) := by rw [inf_comm, infi_inf]; simp [inf_comm] lemma binfi_inf {p : ι → Prop} {f : Π i (hi : p i), α} {a : α} (h : ∃ i, p i) : (⨅i (h : p i), f i h) ⊓ a = (⨅ i (h : p i), f i h ⊓ a) := by haveI : nonempty {i // p i} := (let ⟨i, hi⟩ := h in ⟨⟨i, hi⟩⟩); rw [infi_subtype', infi_subtype', infi_inf] lemma inf_binfi {p : ι → Prop} {f : Π i (hi : p i), α} {a : α} (h : ∃ i, p i) : a ⊓ (⨅i (h : p i), f i h) = (⨅ i (h : p i), a ⊓ f i h) := by simpa only [inf_comm] using binfi_inf h theorem supr_sup_eq {f g : ι → α} : (⨆ x, f x ⊔ g x) = (⨆ x, f x) ⊔ (⨆ x, g x) := @infi_inf_eq (order_dual α) ι _ _ _ lemma supr_sup [h : nonempty ι] {f : ι → α} {a : α} : (⨆ x, f x) ⊔ a = (⨆ x, f x ⊔ a) := @infi_inf (order_dual α) _ _ _ _ _ lemma sup_supr [nonempty ι] {f : ι → α} {a : α} : a ⊔ (⨆ x, f x) = (⨆ x, a ⊔ f x) := @inf_infi (order_dual α) _ _ _ _ _ /-! 
### `supr` and `infi` under `Prop` -/ @[simp] theorem infi_false {s : false → α} : infi s = ⊤ := le_antisymm le_top (le_infi $ assume i, false.elim i) @[simp] theorem supr_false {s : false → α} : supr s = ⊥ := le_antisymm (supr_le $ assume i, false.elim i) bot_le theorem infi_true {s : true → α} : infi s = s trivial := infi_pos trivial theorem supr_true {s : true → α} : supr s = s trivial := supr_pos trivial @[simp] theorem infi_exists {p : ι → Prop} {f : Exists p → α} : (⨅ x, f x) = (⨅ i, ⨅ h:p i, f ⟨i, h⟩) := le_antisymm (le_infi $ assume i, le_infi $ assume : p i, infi_le _ _) (le_infi $ assume ⟨i, h⟩, infi_le_of_le i $ infi_le _ _) @[simp] theorem supr_exists {p : ι → Prop} {f : Exists p → α} : (⨆ x, f x) = (⨆ i, ⨆ h:p i, f ⟨i, h⟩) := @infi_exists (order_dual α) _ _ _ _ theorem infi_and {p q : Prop} {s : p ∧ q → α} : infi s = (⨅ h₁ h₂, s ⟨h₁, h₂⟩) := le_antisymm (le_infi $ assume i, le_infi $ assume j, infi_le _ _) (le_infi $ assume ⟨i, h⟩, infi_le_of_le i $ infi_le _ _) /-- The symmetric case of `infi_and`, useful for rewriting into a infimum over a conjunction -/ lemma infi_and' {p q : Prop} {s : p → q → α} : (⨅ (h₁ : p) (h₂ : q), s h₁ h₂) = ⨅ (h : p ∧ q), s h.1 h.2 := by { symmetry, exact infi_and } theorem supr_and {p q : Prop} {s : p ∧ q → α} : supr s = (⨆ h₁ h₂, s ⟨h₁, h₂⟩) := @infi_and (order_dual α) _ _ _ _ /-- The symmetric case of `supr_and`, useful for rewriting into a supremum over a conjunction -/ lemma supr_and' {p q : Prop} {s : p → q → α} : (⨆ (h₁ : p) (h₂ : q), s h₁ h₂) = ⨆ (h : p ∧ q), s h.1 h.2 := by { symmetry, exact supr_and } theorem infi_or {p q : Prop} {s : p ∨ q → α} : infi s = (⨅ h : p, s (or.inl h)) ⊓ (⨅ h : q, s (or.inr h)) := le_antisymm (le_inf (infi_le_infi2 $ assume j, ⟨_, le_refl _⟩) (infi_le_infi2 $ assume j, ⟨_, le_refl _⟩)) (le_infi $ assume i, match i with | or.inl i := inf_le_of_left_le $ infi_le _ _ | or.inr j := inf_le_of_right_le $ infi_le _ _ end) theorem supr_or {p q : Prop} {s : p ∨ q → α} : (⨆ x, s x) = (⨆ i, s (or.inl i)) ⊔ (⨆ j, s (or.inr j)) := @infi_or (order_dual α) _ _ _ _ section variables (p : ι → Prop) [decidable_pred p] lemma supr_dite (f : Π i, p i → α) (g : Π i, ¬p i → α) : (⨆ i, if h : p i then f i h else g i h) = (⨆ i (h : p i), f i h) ⊔ (⨆ i (h : ¬ p i), g i h) := begin rw ←supr_sup_eq, congr' 1 with i, split_ifs with h; simp [h], end lemma supr_ite (f g : ι → α) : (⨆ i, if p i then f i else g i) = (⨆ i (h : p i), f i) ⊔ (⨆ i (h : ¬ p i), g i) := supr_dite _ _ _ lemma infi_dite (f : Π i, p i → α) (g : Π i, ¬p i → α) : (⨅ i, if h : p i then f i h else g i h) = (⨅ i (h : p i), f i h) ⊓ (⨅ i (h : ¬ p i), g i h) := supr_dite p (show Π i, p i → order_dual α, from f) g lemma infi_ite (f g : ι → α) : (⨅ i, if p i then f i else g i) = (⨅ i (h : p i), f i) ⊓ (⨅ i (h : ¬ p i), g i) := infi_dite _ _ _ end lemma Sup_range {α : Type*} [has_Sup α] {f : ι → α} : Sup (range f) = supr f := rfl lemma Inf_range {α : Type*} [has_Inf α] {f : ι → α} : Inf (range f) = infi f := rfl lemma supr_range' {α} [has_Sup α] (g : β → α) (f : ι → β) : (⨆ b : range f, g b) = ⨆ i, g (f i) := by rw [supr, supr, ← image_eq_range, ← range_comp] lemma infi_range' {α} [has_Inf α] (g : β → α) (f : ι → β) : (⨅ b : range f, g b) = ⨅ i, g (f i) := @supr_range' _ _ (order_dual α) _ _ _ lemma infi_range {g : β → α} {f : ι → β} : (⨅b∈range f, g b) = (⨅i, g (f i)) := by rw [← infi_subtype'', infi_range'] lemma supr_range {g : β → α} {f : ι → β} : (⨆b∈range f, g b) = (⨆i, g (f i)) := @infi_range (order_dual α) _ _ _ _ _ theorem Inf_image' {α} [has_Inf α] {s : set β} {f : β → 
α} : Inf (f '' s) = (⨅ a : s, f a) := by rw [infi, image_eq_range] theorem Sup_image' {α} [has_Sup α] {s : set β} {f : β → α} : Sup (f '' s) = (⨆ a : s, f a) := @Inf_image' _ (order_dual α) _ _ _ theorem Inf_image {s : set β} {f : β → α} : Inf (f '' s) = (⨅ a ∈ s, f a) := by rw [← infi_subtype'', Inf_image'] theorem Sup_image {s : set β} {f : β → α} : Sup (f '' s) = (⨆ a ∈ s, f a) := @Inf_image (order_dual α) _ _ _ _ /- ### supr and infi under set constructions -/ theorem infi_emptyset {f : β → α} : (⨅ x ∈ (∅ : set β), f x) = ⊤ := by simp theorem supr_emptyset {f : β → α} : (⨆ x ∈ (∅ : set β), f x) = ⊥ := by simp theorem infi_univ {f : β → α} : (⨅ x ∈ (univ : set β), f x) = (⨅ x, f x) := by simp theorem supr_univ {f : β → α} : (⨆ x ∈ (univ : set β), f x) = (⨆ x, f x) := by simp theorem infi_union {f : β → α} {s t : set β} : (⨅ x ∈ s ∪ t, f x) = (⨅x∈s, f x) ⊓ (⨅x∈t, f x) := by simp only [← infi_inf_eq, infi_or] lemma infi_split (f : β → α) (p : β → Prop) : (⨅ i, f i) = (⨅ i (h : p i), f i) ⊓ (⨅ i (h : ¬ p i), f i) := by simpa [classical.em] using @infi_union _ _ _ f {i | p i} {i | ¬ p i} lemma infi_split_single (f : β → α) (i₀ : β) : (⨅ i, f i) = f i₀ ⊓ (⨅ i (h : i ≠ i₀), f i) := by convert infi_split _ _; simp theorem infi_le_infi_of_subset {f : β → α} {s t : set β} (h : s ⊆ t) : (⨅ x ∈ t, f x) ≤ (⨅ x ∈ s, f x) := by rw [(union_eq_self_of_subset_left h).symm, infi_union]; exact inf_le_left theorem supr_union {f : β → α} {s t : set β} : (⨆ x ∈ s ∪ t, f x) = (⨆x∈s, f x) ⊔ (⨆x∈t, f x) := @infi_union (order_dual α) _ _ _ _ _ lemma supr_split (f : β → α) (p : β → Prop) : (⨆ i, f i) = (⨆ i (h : p i), f i) ⊔ (⨆ i (h : ¬ p i), f i) := @infi_split (order_dual α) _ _ _ _ lemma supr_split_single (f : β → α) (i₀ : β) : (⨆ i, f i) = f i₀ ⊔ (⨆ i (h : i ≠ i₀), f i) := @infi_split_single (order_dual α) _ _ _ _ theorem supr_le_supr_of_subset {f : β → α} {s t : set β} (h : s ⊆ t) : (⨆ x ∈ s, f x) ≤ (⨆ x ∈ t, f x) := @infi_le_infi_of_subset (order_dual α) _ _ _ _ _ h theorem infi_insert {f : β → α} {s : set β} {b : β} : (⨅ x ∈ insert b s, f x) = f b ⊓ (⨅x∈s, f x) := eq.trans infi_union $ congr_arg (λx:α, x ⊓ (⨅x∈s, f x)) infi_infi_eq_left theorem supr_insert {f : β → α} {s : set β} {b : β} : (⨆ x ∈ insert b s, f x) = f b ⊔ (⨆x∈s, f x) := eq.trans supr_union $ congr_arg (λx:α, x ⊔ (⨆x∈s, f x)) supr_supr_eq_left theorem infi_singleton {f : β → α} {b : β} : (⨅ x ∈ (singleton b : set β), f x) = f b := by simp theorem infi_pair {f : β → α} {a b : β} : (⨅ x ∈ ({a, b} : set β), f x) = f a ⊓ f b := by rw [infi_insert, infi_singleton] theorem supr_singleton {f : β → α} {b : β} : (⨆ x ∈ (singleton b : set β), f x) = f b := @infi_singleton (order_dual α) _ _ _ _ theorem supr_pair {f : β → α} {a b : β} : (⨆ x ∈ ({a, b} : set β), f x) = f a ⊔ f b := by rw [supr_insert, supr_singleton] lemma infi_image {γ} {f : β → γ} {g : γ → α} {t : set β} : (⨅ c ∈ f '' t, g c) = (⨅ b ∈ t, g (f b)) := by rw [← Inf_image, ← Inf_image, ← image_comp] lemma supr_image {γ} {f : β → γ} {g : γ → α} {t : set β} : (⨆ c ∈ f '' t, g c) = (⨆ b ∈ t, g (f b)) := @infi_image (order_dual α) _ _ _ _ _ _ /-! 
### `supr` and `infi` under `Type` -/ theorem supr_of_empty' {α ι} [has_Sup α] [is_empty ι] (f : ι → α) : supr f = Sup (∅ : set α) := congr_arg Sup (range_eq_empty f) theorem supr_of_empty [is_empty ι] (f : ι → α) : supr f = ⊥ := (supr_of_empty' f).trans Sup_empty theorem infi_of_empty' {α ι} [has_Inf α] [is_empty ι] (f : ι → α) : infi f = Inf (∅ : set α) := congr_arg Inf (range_eq_empty f) theorem infi_of_empty [is_empty ι] (f : ι → α) : infi f = ⊤ := @supr_of_empty (order_dual α) _ _ _ f lemma supr_bool_eq {f : bool → α} : (⨆b:bool, f b) = f tt ⊔ f ff := by rw [supr, bool.range_eq, Sup_pair, sup_comm] lemma infi_bool_eq {f : bool → α} : (⨅b:bool, f b) = f tt ⊓ f ff := @supr_bool_eq (order_dual α) _ _ lemma sup_eq_supr (x y : α) : x ⊔ y = ⨆ b : bool, cond b x y := by rw [supr_bool_eq, bool.cond_tt, bool.cond_ff] lemma inf_eq_infi (x y : α) : x ⊓ y = ⨅ b : bool, cond b x y := @sup_eq_supr (order_dual α) _ _ _ lemma is_glb_binfi {s : set β} {f : β → α} : is_glb (f '' s) (⨅ x ∈ s, f x) := by simpa only [range_comp, subtype.range_coe, infi_subtype'] using @is_glb_infi α s _ (f ∘ coe) theorem supr_subtype {p : ι → Prop} {f : subtype p → α} : (⨆ x, f x) = (⨆ i (h:p i), f ⟨i, h⟩) := @infi_subtype (order_dual α) _ _ _ _ lemma supr_subtype' {p : ι → Prop} {f : ∀ i, p i → α} : (⨆ i (h : p i), f i h) = (⨆ x : subtype p, f x x.property) := (@supr_subtype _ _ _ p (λ x, f x.val x.property)).symm lemma supr_subtype'' {ι} (s : set ι) (f : ι → α) : (⨆ i : s, f i) = ⨆ (t : ι) (H : t ∈ s), f t := supr_subtype lemma is_lub_bsupr {s : set β} {f : β → α} : is_lub (f '' s) (⨆ x ∈ s, f x) := by simpa only [range_comp, subtype.range_coe, supr_subtype'] using @is_lub_supr α s _ (f ∘ coe) theorem infi_sigma {p : β → Type*} {f : sigma p → α} : (⨅ x, f x) = (⨅ i (h:p i), f ⟨i, h⟩) := eq_of_forall_le_iff $ λ c, by simp only [le_infi_iff, sigma.forall] theorem supr_sigma {p : β → Type*} {f : sigma p → α} : (⨆ x, f x) = (⨆ i (h:p i), f ⟨i, h⟩) := @infi_sigma (order_dual α) _ _ _ _ theorem infi_prod {γ : Type*} {f : β × γ → α} : (⨅ x, f x) = (⨅ i j, f (i, j)) := eq_of_forall_le_iff $ λ c, by simp only [le_infi_iff, prod.forall] theorem supr_prod {γ : Type*} {f : β × γ → α} : (⨆ x, f x) = (⨆ i j, f (i, j)) := @infi_prod (order_dual α) _ _ _ _ theorem infi_sum {γ : Type*} {f : β ⊕ γ → α} : (⨅ x, f x) = (⨅ i, f (sum.inl i)) ⊓ (⨅ j, f (sum.inr j)) := eq_of_forall_le_iff $ λ c, by simp only [le_inf_iff, le_infi_iff, sum.forall] theorem supr_sum {γ : Type*} {f : β ⊕ γ → α} : (⨆ x, f x) = (⨆ i, f (sum.inl i)) ⊔ (⨆ j, f (sum.inr j)) := @infi_sum (order_dual α) _ _ _ _ theorem supr_option (f : option β → α) : (⨆ o, f o) = f none ⊔ ⨆ b, f (option.some b) := eq_of_forall_ge_iff $ λ c, by simp only [supr_le_iff, sup_le_iff, option.forall] theorem infi_option (f : option β → α) : (⨅ o, f o) = f none ⊓ ⨅ b, f (option.some b) := @supr_option (order_dual α) _ _ _ /-- A version of `supr_option` useful for rewriting right-to-left. -/ lemma supr_option_elim (a : α) (f : β → α) : (⨆ o : option β, o.elim a f) = a ⊔ ⨆ b, f b := by simp [supr_option] /-- A version of `infi_option` useful for rewriting right-to-left. -/ lemma infi_option_elim (a : α) (f : β → α) : (⨅ o : option β, o.elim a f) = a ⊓ ⨅ b, f b := @supr_option_elim (order_dual α) _ _ _ _ /-- When taking the supremum of `f : ι → α`, the elements of `ι` on which `f` gives `⊥` can be dropped, without changing the result. 
-/ lemma supr_ne_bot_subtype (f : ι → α) : (⨆ i : {i // f i ≠ ⊥}, f i) = ⨆ i, f i := begin by_cases htriv : ∀ i, f i = ⊥, { simp only [htriv, supr_bot] }, refine le_antisymm (supr_comp_le f _) (supr_le_supr2 _), intros i, by_cases hi : f i = ⊥, { rw hi, obtain ⟨i₀, hi₀⟩ := not_forall.mp htriv, exact ⟨⟨i₀, hi₀⟩, bot_le⟩ }, { exact ⟨⟨i, hi⟩, rfl.le⟩ }, end /-- When taking the infimum of `f : ι → α`, the elements of `ι` on which `f` gives `⊤` can be dropped, without changing the result. -/ lemma infi_ne_top_subtype (f : ι → α) : (⨅ i : {i // f i ≠ ⊤}, f i) = ⨅ i, f i := @supr_ne_bot_subtype (order_dual α) ι _ f /-! ### `supr` and `infi` under `ℕ` -/ lemma supr_ge_eq_supr_nat_add {u : ℕ → α} (n : ℕ) : (⨆ i ≥ n, u i) = ⨆ i, u (i + n) := begin apply le_antisymm; simp only [supr_le_iff], { exact λ i hi, le_Sup ⟨i - n, by { dsimp only, rw tsub_add_cancel_of_le hi }⟩ }, { exact λ i, le_Sup ⟨i + n, supr_pos (nat.le_add_left _ _)⟩ } end lemma infi_ge_eq_infi_nat_add {u : ℕ → α} (n : ℕ) : (⨅ i ≥ n, u i) = ⨅ i, u (i + n) := @supr_ge_eq_supr_nat_add (order_dual α) _ _ _ lemma monotone.supr_nat_add {f : ℕ → α} (hf : monotone f) (k : ℕ) : (⨆ n, f (n + k)) = ⨆ n, f n := le_antisymm (supr_le (λ i, (le_refl _).trans (le_supr _ (i + k)))) (supr_le_supr (λ i, hf (nat.le_add_right i k))) @[simp] lemma supr_infi_ge_nat_add (f : ℕ → α) (k : ℕ) : (⨆ n, ⨅ i ≥ n, f (i + k)) = ⨆ n, ⨅ i ≥ n, f i := begin have hf : monotone (λ n, ⨅ i ≥ n, f i), from λ n m hnm, le_infi (λ i, (infi_le _ i).trans (le_infi (λ h, infi_le _ (hnm.trans h)))), rw ←monotone.supr_nat_add hf k, { simp_rw [infi_ge_eq_infi_nat_add, ←nat.add_assoc], }, end lemma sup_supr_nat_succ (u : ℕ → α) : u 0 ⊔ (⨆ i, u (i + 1)) = ⨆ i, u i := begin refine eq_of_forall_ge_iff (λ c, _), simp only [sup_le_iff, supr_le_iff], refine ⟨λ h, _, λ h, ⟨h _, λ i, h _⟩⟩, rintro (_|i), exacts [h.1, h.2 i] end lemma inf_infi_nat_succ (u : ℕ → α) : u 0 ⊓ (⨅ i, u (i + 1)) = ⨅ i, u i := @sup_supr_nat_succ (order_dual α) _ u end section complete_linear_order variables [complete_linear_order α] lemma supr_eq_top (f : ι → α) : supr f = ⊤ ↔ (∀b<⊤, ∃i, b < f i) := by simp only [← Sup_range, Sup_eq_top, set.exists_range_iff] lemma infi_eq_bot (f : ι → α) : infi f = ⊥ ↔ (∀b>⊥, ∃i, f i < b) := by simp only [← Inf_range, Inf_eq_bot, set.exists_range_iff] end complete_linear_order /-! ### Instances -/ instance Prop.complete_lattice : complete_lattice Prop := { Sup := λs, ∃a∈s, a, le_Sup := assume s a h p, ⟨a, h, p⟩, Sup_le := assume s a h ⟨b, h', p⟩, h b h' p, Inf := λs, ∀a:Prop, a∈s → a, Inf_le := assume s a h p, p a h, le_Inf := assume s a h p b hb, h b hb p, .. Prop.bounded_order, .. 
Prop.distrib_lattice } @[simp] lemma Inf_Prop_eq {s : set Prop} : Inf s = (∀p ∈ s, p) := rfl @[simp] lemma Sup_Prop_eq {s : set Prop} : Sup s = (∃p ∈ s, p) := rfl @[simp] lemma infi_Prop_eq {ι : Sort*} {p : ι → Prop} : (⨅i, p i) = (∀i, p i) := le_antisymm (assume h i, h _ ⟨i, rfl⟩ ) (assume h p ⟨i, eq⟩, eq ▸ h i) @[simp] lemma supr_Prop_eq {ι : Sort*} {p : ι → Prop} : (⨆i, p i) = (∃i, p i) := le_antisymm (λ ⟨q, ⟨i, (eq : p i = q)⟩, hq⟩, ⟨i, eq.symm ▸ hq⟩) (λ ⟨i, hi⟩, ⟨p i, ⟨i, rfl⟩, hi⟩) instance pi.has_Sup {α : Type*} {β : α → Type*} [Π i, has_Sup (β i)] : has_Sup (Π i, β i) := ⟨λ s i, ⨆ f : s, (f : Π i, β i) i⟩ instance pi.has_Inf {α : Type*} {β : α → Type*} [Π i, has_Inf (β i)] : has_Inf (Π i, β i) := ⟨λ s i, ⨅ f : s, (f : Π i, β i) i⟩ instance pi.complete_lattice {α : Type*} {β : α → Type*} [∀ i, complete_lattice (β i)] : complete_lattice (Π i, β i) := { Sup := Sup, Inf := Inf, le_Sup := λ s f hf i, le_supr (λ f : s, (f : Π i, β i) i) ⟨f, hf⟩, Inf_le := λ s f hf i, infi_le (λ f : s, (f : Π i, β i) i) ⟨f, hf⟩, Sup_le := λ s f hf i, supr_le $ λ g, hf g g.2 i, le_Inf := λ s f hf i, le_infi $ λ g, hf g g.2 i, .. pi.bounded_order, .. pi.lattice } lemma Inf_apply {α : Type*} {β : α → Type*} [Π i, has_Inf (β i)] {s : set (Πa, β a)} {a : α} : (Inf s) a = (⨅ f : s, (f : Πa, β a) a) := rfl @[simp] lemma infi_apply {α : Type*} {β : α → Type*} {ι : Sort*} [Π i, has_Inf (β i)] {f : ι → Πa, β a} {a : α} : (⨅i, f i) a = (⨅i, f i a) := by rw [infi, Inf_apply, infi, infi, ← image_eq_range (λ f : Π i, β i, f a) (range f), ← range_comp] lemma Sup_apply {α : Type*} {β : α → Type*} [Π i, has_Sup (β i)] {s : set (Πa, β a)} {a : α} : (Sup s) a = (⨆f:s, (f : Πa, β a) a) := rfl lemma unary_relation_Sup_iff {α : Type*} (s : set (α → Prop)) {a : α} : Sup s a ↔ ∃ (r : α → Prop), r ∈ s ∧ r a := by { change (∃ _, _) ↔ _, simp [-eq_iff_iff] } lemma binary_relation_Sup_iff {α β : Type*} (s : set (α → β → Prop)) {a : α} {b : β} : Sup s a b ↔ ∃ (r : α → β → Prop), r ∈ s ∧ r a b := by { change (∃ _, _) ↔ _, simp [-eq_iff_iff] } @[simp] lemma supr_apply {α : Type*} {β : α → Type*} {ι : Sort*} [Π i, has_Sup (β i)] {f : ι → Πa, β a} {a : α} : (⨆i, f i) a = (⨆i, f i a) := @infi_apply α (λ i, order_dual (β i)) _ _ f a section complete_lattice variables [preorder α] [complete_lattice β] theorem monotone_Sup_of_monotone {s : set (α → β)} (m_s : ∀f∈s, monotone f) : monotone (Sup s) := assume x y h, supr_le $ λ f, le_supr_of_le f $ m_s f f.2 h theorem monotone_Inf_of_monotone {s : set (α → β)} (m_s : ∀f∈s, monotone f) : monotone (Inf s) := assume x y h, le_infi $ λ f, infi_le_of_le f $ m_s f f.2 h end complete_lattice namespace prod variables (α β) instance [has_Inf α] [has_Inf β] : has_Inf (α × β) := ⟨λs, (Inf (prod.fst '' s), Inf (prod.snd '' s))⟩ instance [has_Sup α] [has_Sup β] : has_Sup (α × β) := ⟨λs, (Sup (prod.fst '' s), Sup (prod.snd '' s))⟩ instance [complete_lattice α] [complete_lattice β] : complete_lattice (α × β) := { le_Sup := assume s p hab, ⟨le_Sup $ mem_image_of_mem _ hab, le_Sup $ mem_image_of_mem _ hab⟩, Sup_le := assume s p h, ⟨ Sup_le $ ball_image_of_ball $ assume p hp, (h p hp).1, Sup_le $ ball_image_of_ball $ assume p hp, (h p hp).2⟩, Inf_le := assume s p hab, ⟨Inf_le $ mem_image_of_mem _ hab, Inf_le $ mem_image_of_mem _ hab⟩, le_Inf := assume s p h, ⟨ le_Inf $ ball_image_of_ball $ assume p hp, (h p hp).1, le_Inf $ ball_image_of_ball $ assume p hp, (h p hp).2⟩, .. prod.lattice α β, .. prod.bounded_order α β, .. prod.has_Sup α β, .. 
prod.has_Inf α β } end prod section complete_lattice variables [complete_lattice α] {a : α} {s : set α} /-- This is a weaker version of `sup_Inf_eq` -/ lemma sup_Inf_le_infi_sup : a ⊔ Inf s ≤ (⨅ b ∈ s, a ⊔ b) := le_infi $ assume i, le_infi $ assume h, sup_le_sup_left (Inf_le h) _ /-- This is a weaker version of `Inf_sup_eq` -/ lemma Inf_sup_le_infi_sup : Inf s ⊔ a ≤ (⨅ b ∈ s, b ⊔ a) := le_infi $ assume i, le_infi $ assume h, sup_le_sup_right (Inf_le h) _ /-- This is a weaker version of `inf_Sup_eq` -/ lemma supr_inf_le_inf_Sup : (⨆ b ∈ s, a ⊓ b) ≤ a ⊓ Sup s := supr_le $ assume i, supr_le $ assume h, inf_le_inf_left _ (le_Sup h) /-- This is a weaker version of `Sup_inf_eq` -/ lemma supr_inf_le_Sup_inf : (⨆ b ∈ s, b ⊓ a) ≤ Sup s ⊓ a := supr_le $ assume i, supr_le $ assume h, inf_le_inf_right _ (le_Sup h) lemma disjoint_Sup_left {a : set α} {b : α} (d : disjoint (Sup a) b) {i} (hi : i ∈ a) : disjoint i b := (supr_le_iff.mp (supr_le_iff.mp (supr_inf_le_Sup_inf.trans (d : _)) i : _) hi : _) lemma disjoint_Sup_right {a : set α} {b : α} (d : disjoint b (Sup a)) {i} (hi : i ∈ a) : disjoint b i := (supr_le_iff.mp (supr_le_iff.mp (supr_inf_le_inf_Sup.trans (d : _)) i : _) hi : _) end complete_lattice namespace complete_lattice variables [complete_lattice α] /-- An independent set of elements in a complete lattice is one in which every element is disjoint from the `Sup` of the rest. -/ def set_independent (s : set α) : Prop := ∀ ⦃a⦄, a ∈ s → disjoint a (Sup (s \ {a})) variables {s : set α} (hs : set_independent s) @[simp] lemma set_independent_empty : set_independent (∅ : set α) := λ x hx, (set.not_mem_empty x hx).elim theorem set_independent.mono {t : set α} (hst : t ⊆ s) : set_independent t := λ a ha, (hs (hst ha)).mono_right (Sup_le_Sup (diff_subset_diff_left hst)) /-- If the elements of a set are independent, then any pair within that set is disjoint. -/ lemma set_independent.disjoint {x y : α} (hx : x ∈ s) (hy : y ∈ s) (h : x ≠ y) : disjoint x y := disjoint_Sup_right (hs hx) ((mem_diff y).mpr ⟨hy, by simp [h.symm]⟩) include hs /-- If the elements of a set are independent, then any element is disjoint from the `Sup` of some subset of the rest. -/ lemma set_independent.disjoint_Sup {x : α} {y : set α} (hx : x ∈ s) (hy : y ⊆ s) (hxy : x ∉ y) : disjoint x (Sup y) := begin have := (hs.mono $ insert_subset.mpr ⟨hx, hy⟩) (mem_insert x _), rw [insert_diff_of_mem _ (mem_singleton _), diff_singleton_eq_self hxy] at this, exact this, end omit hs /-- An independent indexed family of elements in a complete lattice is one in which every element is disjoint from the `supr` of the rest. Example: an indexed family of non-zero elements in a vector space is linearly independent iff the indexed family of subspaces they generate is independent in this sense. Example: an indexed family of submodules of a module is independent in this sense if and only the natural map from the direct sum of the submodules to the module is injective. 
-/ def independent {ι : Sort*} {α : Type*} [complete_lattice α] (t : ι → α) : Prop := ∀ i : ι, disjoint (t i) (⨆ (j ≠ i), t j) lemma set_independent_iff {α : Type*} [complete_lattice α] (s : set α) : set_independent s ↔ independent (coe : s → α) := begin simp_rw [independent, set_independent, set_coe.forall, Sup_eq_supr], apply forall_congr, intro a, apply forall_congr, intro ha, congr' 2, convert supr_subtype.symm, simp [supr_and], end variables {t : ι → α} (ht : independent t) theorem independent_def : independent t ↔ ∀ i : ι, disjoint (t i) (⨆ (j ≠ i), t j) := iff.rfl theorem independent_def' {ι : Type*} {t : ι → α} : independent t ↔ ∀ i, disjoint (t i) (Sup (t '' {j | j ≠ i})) := by {simp_rw Sup_image, refl} theorem independent_def'' {ι : Type*} {t : ι → α} : independent t ↔ ∀ i, disjoint (t i) (Sup {a | ∃ j ≠ i, t j = a}) := by {rw independent_def', tidy} @[simp] lemma independent_empty (t : empty → α) : independent t. @[simp] lemma independent_pempty (t : pempty → α) : independent t. /-- If the elements of a set are independent, then any pair within that set is disjoint. -/ lemma independent.disjoint {x y : ι} (h : x ≠ y) : disjoint (t x) (t y) := disjoint_Sup_right (ht x) ⟨y, by simp [h.symm]⟩ lemma independent.mono {ι : Type*} {α : Type*} [complete_lattice α] {s t : ι → α} (hs : independent s) (hst : t ≤ s) : independent t := λ i, (hs i).mono (hst i) (supr_le_supr $ λ j, supr_le_supr $ λ _, hst j) /-- Composing an independent indexed family with an injective function on the index results in another indepedendent indexed family. -/ lemma independent.comp {ι ι' : Sort*} {α : Type*} [complete_lattice α] {s : ι → α} (hs : independent s) (f : ι' → ι) (hf : function.injective f) : independent (s ∘ f) := λ i, (hs (f i)).mono_right begin refine (supr_le_supr $ λ i, _).trans (supr_comp_le _ f), exact supr_le_supr_const hf.ne, end /-- Composing an indepedent indexed family with an order isomorphism on the elements results in another indepedendent indexed family. -/ lemma independent.map_order_iso {ι : Sort*} {α β : Type*} [complete_lattice α] [complete_lattice β] (f : α ≃o β) {a : ι → α} (ha : independent a) : independent (f ∘ a) := λ i, ((ha i).map_order_iso f).mono_right (f.monotone.le_map_supr2 _) @[simp] lemma independent_map_order_iso_iff {ι : Sort*} {α β : Type*} [complete_lattice α] [complete_lattice β] (f : α ≃o β) {a : ι → α} : independent (f ∘ a) ↔ independent a := ⟨ λ h, have hf : f.symm ∘ f ∘ a = a := congr_arg (∘ a) f.left_inv.comp_eq_id, hf ▸ h.map_order_iso f.symm, λ h, h.map_order_iso f⟩ /-- If the elements of a set are independent, then any element is disjoint from the `supr` of some subset of the rest. -/ lemma independent.disjoint_bsupr {ι : Type*} {α : Type*} [complete_lattice α] {t : ι → α} (ht : independent t) {x : ι} {y : set ι} (hx : x ∉ y) : disjoint (t x) (⨆ i ∈ y, t i) := disjoint.mono_right (bsupr_le_bsupr' $ λ i hi, (ne_of_mem_of_not_mem hi hx : _)) (ht x) end complete_lattice
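-- A minimal usage sketch, added here for illustration and not part of the original
-- mathlib file: the lemmas proved above rewrite suprema and infima of small
-- concrete sets into binary joins and meets.
example {γ : Type*} [complete_lattice γ] (x y : γ) : Sup ({x, y} : set γ) = x ⊔ y := Sup_pair
example {γ : Type*} [complete_lattice γ] (x y : γ) : Inf ({x, y} : set γ) = x ⊓ y := Inf_pair
example {γ : Type*} [complete_lattice γ] (s t : set γ) : Sup (s ∪ t) = Sup s ⊔ Sup t := Sup_union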
#include <boost/array.hpp>

int main()
{
  boost::array<int, 3> a{};
  return a[0];
}
# Part 1 - Scalars and Vectors For the questions below it is not sufficient to simply provide answer to the questions, but you must solve the problems and show your work using python (the NumPy library will help a lot!) Translate the vectors and matrices into their appropriate python representations and use numpy or functions that you write yourself to demonstrate the result or property. ``` import numpy as np np.random.seed(0) ``` ``` import matplotlib.pyplot as plt ``` ## 1.1 Create a two-dimensional vector and plot it on a graph ``` x2 = np.array([[5],[5]]) origin = [0], [0] # origin point plt.quiver(*origin, x2[0], x2[1], color='r', scale=20) plt.xlim(-10, 10) plt.ylim(-10, 10) plt.hlines(0, -10, 10, colors='k') plt.vlines(0, -10, 10, colors='k') plt.grid() plt.title("2 dimensional vector") plt.show() ``` ## 1.2 Create a three-dimensional vecor and plot it on a graph ``` from mpl_toolkits import mplot3d x3 = [3, 6, 9] vector = np.array([0, 0, 0, 3, 6, 9]) X, Y, Z, U, V, W = zip(vector) fig = plt.figure(figsize=(8,8)) ax = fig.add_subplot(111, projection='3d') ax.quiver(X, Y, Z, U, V, W, length=5) ax.set_xlim([0, 10]) ax.set_ylim([0, 10]) ax.set_zlim([-0, 10]) plt.show() ``` ## 1.3 Scale the vectors you created in 1.1 by $5$, $\pi$, and $-e$ and plot all four vectors (original + 3 scaled vectors) on a graph. What do you notice about these vectors? ``` from math import e, pi print(e) print(pi) x2_11 = x2*1.1 x2_pi = x2*pi x2_e = x2*e print(x2_11) print(x2_pi) print(x2_e) ``` 2.718281828459045 3.141592653589793 [[5.5] [5.5]] [[15.70796327] [15.70796327]] [[13.59140914] [13.59140914]] ``` plt.quiver(*origin, x2[0], x2[1], color='r', scale=20) plt.quiver(*origin, x2_11[0], x2_11[1], color='g', scale=20) plt.quiver(*origin, x2_pi[0], x2_pi[1], color='b', scale=20) plt.quiver(*origin, x2_e[0], x2_e[1], color='c', scale=20) plt.xlim(-10, 10) plt.ylim(-10, 10) plt.hlines(0, -10, 10, colors='k') plt.vlines(0, -10, 10, colors='k') plt.grid() plt.title("2 dimensional vector") plt.show() ``` ``` # They all follow the same path in space ``` ## 1.4 Graph vectors $\vec{a}$ and $\vec{b}$ and plot them on a graph \begin{align} \vec{a} = \begin{bmatrix} 5 \\ 7 \end{bmatrix} \qquad \vec{b} = \begin{bmatrix} 3 \\4 \end{bmatrix} \end{align} ``` a = np.array([[5], [7]]) b = np.array([[3], [4]]) ``` ``` plt.quiver(*origin, a[0], a[1], color='b', scale=30) plt.quiver(*origin, b[0], b[1], color='c', scale=30) plt.xlim(-10, 10) plt.ylim(-10, 10) plt.hlines(0, -10, 10, colors='k') plt.vlines(0, -10, 10, colors='k') plt.grid() plt.title("2 dimensional vector") plt.show() ``` ## 1.5 find $\vec{a} - \vec{b}$ and plot the result on the same graph as $\vec{a}$ and $\vec{b}$. 
Is there a relationship between vectors $\vec{a} \thinspace, \vec{b} \thinspace \text{and} \thinspace \vec{a-b}$ ``` a_less_b = a - b ``` ``` plt.quiver(*origin, a_less_b[0], a_less_b[1], color='c', scale=30) plt.quiver(*origin, a[0], a[1], color='b', scale=30) plt.quiver(*origin, b[0], b[1], color='c', scale=30) plt.xlim(-10, 10) plt.ylim(-10, 10) plt.hlines(0, -10, 10, colors='k') plt.vlines(0, -10, 10, colors='k') plt.grid() plt.title("2 dimensional vector") plt.show() ``` ``` # a less b occupies the path in space where the paths coords are the result of the reduction of a and b ``` ## 1.6 Find $c \cdot d$ \begin{align} \vec{c} = \begin{bmatrix}7 & 22 & 4 & 16\end{bmatrix} \qquad \vec{d} = \begin{bmatrix}12 & 6 & 2 & 9\end{bmatrix} \end{align} ``` c = np.array([[7], [22], [4], [16]]) d = np.array([[12], [6], [2], [9]]) answer = c*d answer ``` array([[ 84], [132], [ 8], [144]]) ## 1.7 Find $e \times f$ \begin{align} \vec{e} = \begin{bmatrix} 5 \\ 7 \\ 2 \end{bmatrix} \qquad \vec{f} = \begin{bmatrix} 3 \\4 \\ 6 \end{bmatrix} \end{align} ``` e = np.array([[5], [7], [2]]) f = np.array([[3], [4], [6]]) answer = e*f answer ``` array([[15], [28], [12]]) ## 1.8 Find $||g||$ and then find $||h||$. Which is longer? \begin{align} \vec{g} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 8 \end{bmatrix} \qquad \vec{h} = \begin{bmatrix} 3 \\3 \\ 3 \\ 3 \end{bmatrix} \end{align} ``` g = np.array([[1], [1], [1], [8]]) h = np.array([[3], [3], [3], [3]]) ng = np.linalg.norm(g) nh = np.linalg.norm(h) print(ng) print(nh) ``` 8.18535277187245 6.0 ``` # g is longer ``` # Part 2 - Matrices ## 2.1 What are the dimensions of the following matrices? Which of the following can be multiplied together? See if you can find all of the different legal combinations. \begin{align} A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix} \qquad B = \begin{bmatrix} 2 & 4 & 6 \\ \end{bmatrix} \qquad C = \begin{bmatrix} 9 & 6 & 3 \\ 4 & 7 & 11 \end{bmatrix} \qquad D = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad E = \begin{bmatrix} 1 & 3 \\ 5 & 7 \end{bmatrix} \end{align} ``` # A = 3,2 # B = 1,3 # C = 2,3 # D = 3,3 # E = 2,2 # Legal combinations = AC AE BD BA CA CB CD DA DB ``` ## 2.2 Find the following products: CD, AE, and BA. What are the dimensions of the resulting matrices? How does that relate to the dimensions of their factor matrices? ``` A = np.array([[1, 2], [3, 4], [5, 6]]) B = np.array([[2, 4, 6]]) C = np.array([[9, 6, 3], [4, 7, 11]]) D = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) E = np.array([[1, 3], [5, 7]]) ``` ``` C.dot(D) ``` array([[ 9, 6, 3], [ 4, 7, 11]]) ``` A.dot(E) ``` array([[11, 17], [23, 37], [35, 57]]) ``` B.dot(A) ``` array([[44, 56]]) ``` # the resulting matrix will have the same number of rows as the first matrix and the same number of columns as the second matrix ``` ## 2.3 Find $F^{T}$. How are the numbers along the main diagonal (top left to Bbottom right) of the original matrix and its transpose related? What are the dimensions of $F$? What are the dimensions of $F^{T}$? 
\begin{align} F = \begin{bmatrix} 20 & 19 & 18 & 17 \\ 16 & 15 & 14 & 13 \\ 12 & 11 & 10 & 9 \\ 8 & 7 & 6 & 5 \\ 4 & 3 & 2 & 1 \end{bmatrix} \end{align} ``` F = np.array([[20, 19, 18, 17], [16, 15, 14, 13], [12, 11, 10, 9], [8, 7, 6, 5], [4, 3, 2, 1]]) F.T # They are the same along the main diagonal ``` array([[20, 16, 12, 8, 4], [19, 15, 11, 7, 3], [18, 14, 10, 6, 2], [17, 13, 9, 5, 1]]) ``` F3 = F.dot(3) F3 ``` array([[60, 57, 54, 51], [48, 45, 42, 39], [36, 33, 30, 27], [24, 21, 18, 15], [12, 9, 6, 3]]) ``` F3.T # The same holds true as before ``` array([[60, 48, 36, 24, 12], [57, 45, 33, 21, 9], [54, 42, 30, 18, 6], [51, 39, 27, 15, 3]]) # Part 3 - Square Matrices ## 3.1 Find $IG$ (be sure to show your work) 😃 You don't have to do anything crazy complicated here to show your work, just create the G matrix as specified below, and a corresponding 2x2 Identity matrix and then multiply them together to show the result. You don't need to write LaTeX or anything like that (unless you want to). \begin{align} G= \begin{bmatrix} 12 & 11 \\ 7 & 10 \end{bmatrix} \end{align} ``` G = np.array([[12, 11], [7, 10]]) GI = np.array([[12, 0], [0, 10]]) G.dot(GI) ``` array([[144, 110], [ 84, 100]]) ## 3.2 Find $|H|$ and then find $|J|$. \begin{align} H= \begin{bmatrix} 12 & 11 \\ 7 & 10 \end{bmatrix} \qquad J= \begin{bmatrix} 0 & 1 & 2 \\ 7 & 10 & 4 \\ 3 & 2 & 0 \end{bmatrix} \end{align} ``` H = np.array([[12, 11], [7, 10]]) J = np.array([[0, 1, 2], [7, 10, 4], [3, 2, 0]]) ``` ``` np.linalg.det(H) ``` 43.000000000000014 ``` np.linalg.det(J) ``` -19.999999999999996 ## 3.3 Find $H^{-1}$ and then find $J^{-1}$ ``` np.linalg.matrix_power(H, -1) ``` array([[ 0.23255814, -0.25581395], [-0.1627907 , 0.27906977]]) ``` np.linalg.matrix_power(J, -1) ``` array([[ 0.4 , -0.2 , 0.8 ], [-0.6 , 0.3 , -0.7 ], [ 0.8 , -0.15, 0.35]]) ## 3.4 Find $HH^{-1}$ and then find $J^{-1}J$. Is $HH^{-1} == J^{-1}J$? Why or Why not? Please ignore Python rounding errors. If necessary, format your output so that it rounds to 5 significant digits (the fifth decimal place). ``` (H.dot(np.linalg.matrix_power(H, -1))) ``` array([[1.00000000e+00, 5.55111512e-16], [2.22044605e-16, 1.00000000e+00]]) ``` (np.linalg.matrix_power(J, -1)).dot(J) ``` array([[ 1.00000000e+00, 2.22044605e-16, 0.00000000e+00], [-1.11022302e-16, 1.00000000e+00, 0.00000000e+00], [-1.66533454e-16, -1.11022302e-16, 1.00000000e+00]]) ``` ``` # Stretch Goals: A reminder that these challenges are optional. If you finish your work quickly we welcome you to work on them. If there are other activities that you feel like will help your understanding of the above topics more, feel free to work on that. Topics from the Stretch Goals sections will never end up on Sprint Challenges. You don't have to do these in order, you don't have to do all of them. - Write a function that can calculate the dot product of any two vectors of equal length that are passed to it. - Write a function that can calculate the norm of any vector - Prove to yourself again that the vectors in 1.9 are orthogonal by graphing them. - Research how to plot a 3d graph with animations so that you can make the graph rotate (this will be easier in a local notebook than in google colab) - Create and plot a matrix on a 2d graph. - Create and plot a matrix on a 3d graph. - Plot two vectors that are not collinear on a 2d graph. Calculate the determinant of the 2x2 matrix that these vectors form. How does this determinant relate to the graphical interpretation of the vectors?
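As a starting point for the stretch goals above, here is a minimal sketch of the first two items (a hand-written dot product and vector norm). The helper names `dot_product` and `norm` are arbitrary choices, not part of the assignment, and the printed norm can be checked against $||g||$ from section 1.8.

```
# Dot product of two equal-length vectors, computed element by element
def dot_product(u, v):
    assert len(u) == len(v), "vectors must have the same length"
    return sum(ui * vi for ui, vi in zip(u, v))

# Euclidean norm: the square root of a vector dotted with itself
def norm(v):
    return dot_product(v, v) ** 0.5

print(dot_product([7, 22, 4, 16], [12, 6, 2, 9]))  # 368
print(norm([1, 1, 1, 8]))                          # ~8.18535, matches ||g|| from 1.8
```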
-- Raw terms, weakening (renaming) and substitution. {-# OPTIONS --without-K --safe #-} module Definition.Untyped where open import Tools.Fin open import Tools.Nat open import Tools.Product open import Tools.List import Tools.PropositionalEquality as PE infixl 30 _∙_ infix 30 Π_▹_ infixr 22 _▹▹_ infix 30 Σ_▹_ infixr 22 _××_ infix 30 ⟦_⟧_▹_ infixl 30 _ₛ•ₛ_ _•ₛ_ _ₛ•_ infix 25 _[_] infix 25 _[_]↑ -- Typing contexts (length indexed snoc-lists, isomorphic to lists). -- Terms added to the context are well scoped in the sense that it cannot -- contain more unbound variables than can be looked up in the context. data Con (A : Nat → Set) : Nat → Set where ε : Con A 0 -- Empty context. _∙_ : {n : Nat} → Con A n → A n → Con A (1+ n) -- Context extension. private variable n m ℓ : Nat -- Representation of sub terms using a list of binding levels data GenTs (A : Nat → Set) : Nat → List Nat → Set where [] : {n : Nat} → GenTs A n [] _∷_ : {n b : Nat} {bs : List Nat} (t : A (b + n)) (ts : GenTs A n bs) → GenTs A n (b ∷ bs) -- Kinds are indexed on the number of expected sub terms -- and the number of new variables bound by each sub term data Kind : (ns : List Nat) → Set where Ukind : Kind [] Pikind : Kind (0 ∷ 1 ∷ []) Lamkind : Kind (1 ∷ []) Appkind : Kind (0 ∷ 0 ∷ []) Sigmakind : Kind (0 ∷ 1 ∷ []) Prodkind : Kind (0 ∷ 0 ∷ []) Fstkind : Kind (0 ∷ []) Sndkind : Kind (0 ∷ []) Natkind : Kind [] Zerokind : Kind [] Suckind : Kind (0 ∷ []) Natreckind : Kind (1 ∷ 0 ∷ 0 ∷ 0 ∷ []) Unitkind : Kind [] Starkind : Kind [] Emptykind : Kind [] Emptyreckind : Kind (0 ∷ 0 ∷ []) -- Terms are indexed by its number of unbound variables and are either: -- de Bruijn style variables or -- generic terms, formed by their kind and sub terms data Term (n : Nat) : Set where var : (x : Fin n) → Term n gen : {bs : List Nat} (k : Kind bs) (c : GenTs Term n bs) → Term n private variable A F H t u v : Term n B E G : Term (1+ n) -- The Grammar of our language. -- We represent the expressions of our language as de Bruijn terms. -- Variables are natural numbers interpreted as de Bruijn indices. -- Π, lam, and natrec are binders. -- Type constructors. U : Term n -- Universe. U = gen Ukind [] Π_▹_ : (A : Term n) (B : Term (1+ n)) → Term n -- Dependent function type (B is a binder). Π A ▹ B = gen Pikind (A ∷ B ∷ []) Σ_▹_ : (A : Term n) (B : Term (1+ n)) → Term n -- Dependent sum type (B is a binder). Σ A ▹ B = gen Sigmakind (A ∷ B ∷ []) ℕ : Term n -- Type of natural numbers. ℕ = gen Natkind [] Empty : Term n -- Empty type Empty = gen Emptykind [] Unit : Term n -- Unit type Unit = gen Unitkind [] lam : (t : Term (1+ n)) → Term n -- Function abstraction (binder). lam t = gen Lamkind (t ∷ []) _∘_ : (t u : Term n) → Term n -- Application. t ∘ u = gen Appkind (t ∷ u ∷ []) prod : (t u : Term n) → Term n -- Dependent products prod t u = gen Prodkind (t ∷ u ∷ []) fst : (t : Term n) → Term n -- First projection fst t = gen Fstkind (t ∷ []) snd : (t : Term n) → Term n -- Second projection snd t = gen Sndkind (t ∷ []) -- Introduction and elimination of natural numbers. zero : Term n -- Natural number zero. zero = gen Zerokind [] suc : (t : Term n) → Term n -- Successor. suc t = gen Suckind (t ∷ []) natrec : (A : Term (1+ n)) (t u v : Term n) → Term n -- Natural number recursor (A is a binder). 
natrec A t u v = gen Natreckind (A ∷ t ∷ u ∷ v ∷ []) star : Term n -- Unit element star = gen Starkind [] Emptyrec : (A e : Term n) → Term n -- Empty type recursor Emptyrec A e = gen Emptyreckind (A ∷ e ∷ []) -- Binding types data BindingType : Set where BΠ : BindingType BΣ : BindingType ⟦_⟧_▹_ : BindingType → Term n → Term (1+ n) → Term n ⟦ BΠ ⟧ F ▹ G = Π F ▹ G ⟦ BΣ ⟧ F ▹ G = Σ F ▹ G -- Injectivity of term constructors w.r.t. propositional equality. -- If W F G = W H E then F = H and G = E. B-PE-injectivity : ∀ W → ⟦ W ⟧ F ▹ G PE.≡ ⟦ W ⟧ H ▹ E → F PE.≡ H × G PE.≡ E B-PE-injectivity BΠ PE.refl = PE.refl , PE.refl B-PE-injectivity BΣ PE.refl = PE.refl , PE.refl -- If suc n = suc m then n = m. suc-PE-injectivity : suc t PE.≡ suc u → t PE.≡ u suc-PE-injectivity PE.refl = PE.refl -- Neutral terms. -- A term is neutral if it has a variable in head position. -- The variable blocks reduction of such terms. data Neutral : Term n → Set where var : (x : Fin n) → Neutral (var x) ∘ₙ : Neutral t → Neutral (t ∘ u) fstₙ : Neutral t → Neutral (fst t) sndₙ : Neutral t → Neutral (snd t) natrecₙ : Neutral v → Neutral (natrec G t u v) Emptyrecₙ : Neutral t → Neutral (Emptyrec A t) -- Weak head normal forms (whnfs). -- These are the (lazy) values of our language. data Whnf {n : Nat} : Term n → Set where -- Type constructors are whnfs. Uₙ : Whnf U Πₙ : Whnf (Π A ▹ B) Σₙ : Whnf (Σ A ▹ B) ℕₙ : Whnf ℕ Unitₙ : Whnf Unit Emptyₙ : Whnf Empty -- Introductions are whnfs. lamₙ : Whnf (lam t) zeroₙ : Whnf zero sucₙ : Whnf (suc t) starₙ : Whnf star prodₙ : Whnf (prod t u) -- Neutrals are whnfs. ne : Neutral t → Whnf t -- Whnf inequalities. -- Different whnfs are trivially distinguished by propositional equality. -- (The following statements are sometimes called "no-confusion theorems".) U≢ne : Neutral A → U PE.≢ A U≢ne () PE.refl ℕ≢ne : Neutral A → ℕ PE.≢ A ℕ≢ne () PE.refl Empty≢ne : Neutral A → Empty PE.≢ A Empty≢ne () PE.refl Unit≢ne : Neutral A → Unit PE.≢ A Unit≢ne () PE.refl B≢ne : ∀ W → Neutral A → ⟦ W ⟧ F ▹ G PE.≢ A B≢ne BΠ () PE.refl B≢ne BΣ () PE.refl U≢B : ∀ W → U PE.≢ ⟦ W ⟧ F ▹ G U≢B BΠ () U≢B BΣ () ℕ≢B : ∀ W → ℕ PE.≢ ⟦ W ⟧ F ▹ G ℕ≢B BΠ () ℕ≢B BΣ () Empty≢B : ∀ W → Empty PE.≢ ⟦ W ⟧ F ▹ G Empty≢B BΠ () Empty≢B BΣ () Unit≢B : ∀ W → Unit PE.≢ ⟦ W ⟧ F ▹ G Unit≢B BΠ () Unit≢B BΣ () zero≢ne : Neutral t → zero PE.≢ t zero≢ne () PE.refl suc≢ne : Neutral t → suc u PE.≢ t suc≢ne () PE.refl -- Several views on whnfs (note: not recursive). -- A whnf of type ℕ is either zero, suc t, or neutral. data Natural {n : Nat} : Term n → Set where zeroₙ : Natural zero sucₙ : Natural (suc t) ne : Neutral t → Natural t -- A (small) type in whnf is either Π A B, Σ A B, ℕ, Empty, Unit or neutral. -- Large types could also be U. data Type {n : Nat} : Term n → Set where Πₙ : Type (Π A ▹ B) Σₙ : Type (Σ A ▹ B) ℕₙ : Type ℕ Emptyₙ : Type Empty Unitₙ : Type Unit ne : Neutral t → Type t ⟦_⟧-type : ∀ (W : BindingType) → Type (⟦ W ⟧ F ▹ G) ⟦ BΠ ⟧-type = Πₙ ⟦ BΣ ⟧-type = Σₙ -- A whnf of type Π A ▹ B is either lam t or neutral. data Function {n : Nat} : Term n → Set where lamₙ : Function (lam t) ne : Neutral t → Function t -- A whnf of type Σ A ▹ B is either prod t u or neutral. data Product {n : Nat} : Term n → Set where prodₙ : Product (prod t u) ne : Neutral t → Product t -- These views classify only whnfs. -- Natural, Type, Function and Product are a subsets of Whnf. 
naturalWhnf : Natural t → Whnf t naturalWhnf sucₙ = sucₙ naturalWhnf zeroₙ = zeroₙ naturalWhnf (ne x) = ne x typeWhnf : Type A → Whnf A typeWhnf Πₙ = Πₙ typeWhnf Σₙ = Σₙ typeWhnf ℕₙ = ℕₙ typeWhnf Emptyₙ = Emptyₙ typeWhnf Unitₙ = Unitₙ typeWhnf (ne x) = ne x functionWhnf : Function t → Whnf t functionWhnf lamₙ = lamₙ functionWhnf (ne x) = ne x productWhnf : Product t → Whnf t productWhnf prodₙ = prodₙ productWhnf (ne x) = ne x ⟦_⟧ₙ : (W : BindingType) → Whnf (⟦ W ⟧ F ▹ G) ⟦_⟧ₙ BΠ = Πₙ ⟦_⟧ₙ BΣ = Σₙ ------------------------------------------------------------------------ -- Weakening -- In the following we define untyped weakenings η : Wk. -- The typed form could be written η : Γ ≤ Δ with the intention -- that η transport a term t living in context Δ to a context Γ -- that can bind additional variables (which cannot appear in t). -- Thus, if Δ ⊢ t : A and η : Γ ≤ Δ then Γ ⊢ wk η t : wk η A. -- -- Even though Γ is "larger" than Δ we write Γ ≤ Δ to be conformant -- with subtyping A ≤ B. With subtyping, relation Γ ≤ Δ could be defined as -- ``for all x ∈ dom(Δ) have Γ(x) ≤ Δ(x)'' (in the sense of subtyping) -- and this would be the natural extension of weakenings. data Wk : Nat → Nat → Set where id : {n : Nat} → Wk n n -- η : Γ ≤ Γ. step : {n m : Nat} → Wk m n → Wk (1+ m) n -- If η : Γ ≤ Δ then step η : Γ∙A ≤ Δ. lift : {n m : Nat} → Wk m n → Wk (1+ m) (1+ n) -- If η : Γ ≤ Δ then lift η : Γ∙A ≤ Δ∙A. -- Composition of weakening. -- If η : Γ ≤ Δ and η′ : Δ ≤ Φ then η • η′ : Γ ≤ Φ. infixl 30 _•_ _•_ : {l m n : Nat} → Wk l m → Wk m n → Wk l n id • η′ = η′ step η • η′ = step (η • η′) lift η • id = lift η lift η • step η′ = step (η • η′) lift η • lift η′ = lift (η • η′) liftn : {k m : Nat} → Wk k m → (n : Nat) → Wk (n + k) (n + m) liftn ρ Nat.zero = ρ liftn ρ (1+ n) = lift (liftn ρ n) -- Weakening of variables. -- If η : Γ ≤ Δ and x ∈ dom(Δ) then wkVar η x ∈ dom(Γ). wkVar : {m n : Nat} (ρ : Wk m n) (x : Fin n) → Fin m wkVar id x = x wkVar (step ρ) x = (wkVar ρ x) +1 wkVar (lift ρ) x0 = x0 wkVar (lift ρ) (x +1) = (wkVar ρ x) +1 -- Weakening of terms. -- If η : Γ ≤ Δ and Δ ⊢ t : A then Γ ⊢ wk η t : wk η A. mutual wkGen : {m n : Nat} {bs : List Nat} (ρ : Wk m n) (c : GenTs Term n bs) → GenTs Term m bs wkGen ρ [] = [] wkGen ρ (_∷_ {b = b} t c) = (wk (liftn ρ b) t) ∷ (wkGen ρ c) wk : {m n : Nat} (ρ : Wk m n) (t : Term n) → Term m wk ρ (var x) = var (wkVar ρ x) wk ρ (gen k c) = gen k (wkGen ρ c) -- Adding one variable to the context requires wk1. -- If Γ ⊢ t : B then Γ∙A ⊢ wk1 t : wk1 B. wk1 : Term n → Term (1+ n) wk1 = wk (step id) -- Weakening of a neutral term. wkNeutral : ∀ ρ → Neutral t → Neutral {n} (wk ρ t) wkNeutral ρ (var n) = var (wkVar ρ n) wkNeutral ρ (∘ₙ n) = ∘ₙ (wkNeutral ρ n) wkNeutral ρ (fstₙ n) = fstₙ (wkNeutral ρ n) wkNeutral ρ (sndₙ n) = sndₙ (wkNeutral ρ n) wkNeutral ρ (natrecₙ n) = natrecₙ (wkNeutral ρ n) wkNeutral ρ (Emptyrecₙ e) = Emptyrecₙ (wkNeutral ρ e) -- Weakening can be applied to our whnf views. 
wkNatural : ∀ ρ → Natural t → Natural {n} (wk ρ t) wkNatural ρ sucₙ = sucₙ wkNatural ρ zeroₙ = zeroₙ wkNatural ρ (ne x) = ne (wkNeutral ρ x) wkType : ∀ ρ → Type t → Type {n} (wk ρ t) wkType ρ Πₙ = Πₙ wkType ρ Σₙ = Σₙ wkType ρ ℕₙ = ℕₙ wkType ρ Emptyₙ = Emptyₙ wkType ρ Unitₙ = Unitₙ wkType ρ (ne x) = ne (wkNeutral ρ x) wkFunction : ∀ ρ → Function t → Function {n} (wk ρ t) wkFunction ρ lamₙ = lamₙ wkFunction ρ (ne x) = ne (wkNeutral ρ x) wkProduct : ∀ ρ → Product t → Product {n} (wk ρ t) wkProduct ρ prodₙ = prodₙ wkProduct ρ (ne x) = ne (wkNeutral ρ x) wkWhnf : ∀ ρ → Whnf t → Whnf {n} (wk ρ t) wkWhnf ρ Uₙ = Uₙ wkWhnf ρ Πₙ = Πₙ wkWhnf ρ Σₙ = Σₙ wkWhnf ρ ℕₙ = ℕₙ wkWhnf ρ Emptyₙ = Emptyₙ wkWhnf ρ Unitₙ = Unitₙ wkWhnf ρ lamₙ = lamₙ wkWhnf ρ prodₙ = prodₙ wkWhnf ρ zeroₙ = zeroₙ wkWhnf ρ sucₙ = sucₙ wkWhnf ρ starₙ = starₙ wkWhnf ρ (ne x) = ne (wkNeutral ρ x) -- Non-dependent version of Π. _▹▹_ : Term n → Term n → Term n A ▹▹ B = Π A ▹ wk1 B -- Non-dependent products. _××_ : Term n → Term n → Term n A ×× B = Σ A ▹ wk1 B ------------------------------------------------------------------------ -- Substitution -- The substitution operation subst σ t replaces the free de Bruijn indices -- of term t by chosen terms as specified by σ. -- The substitution σ itself is a map from natural numbers to terms. Subst : Nat → Nat → Set Subst m n = Fin n → Term m -- Given closed contexts ⊢ Γ and ⊢ Δ, -- substitutions may be typed via Γ ⊢ σ : Δ meaning that -- Γ ⊢ σ(x) : (subst σ Δ)(x) for all x ∈ dom(Δ). -- -- The substitution operation is then typed as follows: -- If Γ ⊢ σ : Δ and Δ ⊢ t : A, then Γ ⊢ subst σ t : subst σ A. -- -- Although substitutions are untyped, typing helps us -- to understand the operation on substitutions. -- We may view σ as the infinite stream σ 0, σ 1, ... -- Extract the substitution of the first variable. -- -- If Γ ⊢ σ : Δ∙A then Γ ⊢ head σ : subst σ A. head : Subst m (1+ n) → Term m head σ = σ x0 -- Remove the first variable instance of a substitution -- and shift the rest to accommodate. -- -- If Γ ⊢ σ : Δ∙A then Γ ⊢ tail σ : Δ. tail : Subst m (1+ n) → Subst m n tail σ x = σ (x +1) -- Substitution of a variable. -- -- If Γ ⊢ σ : Δ then Γ ⊢ substVar σ x : (subst σ Δ)(x). substVar : (σ : Subst m n) (x : Fin n) → Term m substVar σ x = σ x -- Identity substitution. -- Replaces each variable by itself. -- -- Γ ⊢ idSubst : Γ. idSubst : Subst n n idSubst = var -- Weaken a substitution by one. -- -- If Γ ⊢ σ : Δ then Γ∙A ⊢ wk1Subst σ : Δ. wk1Subst : Subst m n → Subst (1+ m) n wk1Subst σ x = wk1 (σ x) -- Lift a substitution. -- -- If Γ ⊢ σ : Δ then Γ∙A ⊢ liftSubst σ : Δ∙A. liftSubst : (σ : Subst m n) → Subst (1+ m) (1+ n) liftSubst σ x0 = var x0 liftSubst σ (x +1) = wk1Subst σ x liftSubstn : {k m : Nat} → Subst k m → (n : Nat) → Subst (n + k) (n + m) liftSubstn σ Nat.zero = σ liftSubstn σ (1+ n) = liftSubst (liftSubstn σ n) -- Transform a weakening into a substitution. -- -- If ρ : Γ ≤ Δ then Γ ⊢ toSubst ρ : Δ. toSubst : Wk m n → Subst m n toSubst pr x = var (wkVar pr x) -- Apply a substitution to a term. -- -- If Γ ⊢ σ : Δ and Δ ⊢ t : A then Γ ⊢ subst σ t : subst σ A. mutual substGen : {bs : List Nat} (σ : Subst m n) (g : GenTs Term n bs) → GenTs Term m bs substGen σ [] = [] substGen σ (_∷_ {b = b} t ts) = subst (liftSubstn σ b) t ∷ (substGen σ ts) subst : (σ : Subst m n) (t : Term n) → Term m subst σ (var x) = substVar σ x subst σ (gen x c) = gen x (substGen σ c) -- Extend a substitution by adding a term as -- the first variable substitution and shift the rest. 
-- -- If Γ ⊢ σ : Δ and Γ ⊢ t : subst σ A then Γ ⊢ consSubst σ t : Δ∙A. consSubst : Subst m n → Term m → Subst m (1+ n) consSubst σ t x0 = t consSubst σ t (x +1) = σ x -- Singleton substitution. -- -- If Γ ⊢ t : A then Γ ⊢ sgSubst t : Γ∙A. sgSubst : Term n → Subst n (1+ n) sgSubst = consSubst idSubst -- Compose two substitutions. -- -- If Γ ⊢ σ : Δ and Δ ⊢ σ′ : Φ then Γ ⊢ σ ₛ•ₛ σ′ : Φ. _ₛ•ₛ_ : Subst ℓ m → Subst m n → Subst ℓ n _ₛ•ₛ_ σ σ′ x = subst σ (σ′ x) -- Composition of weakening and substitution. -- -- If ρ : Γ ≤ Δ and Δ ⊢ σ : Φ then Γ ⊢ ρ •ₛ σ : Φ. _•ₛ_ : Wk ℓ m → Subst m n → Subst ℓ n _•ₛ_ ρ σ x = wk ρ (σ x) -- If Γ ⊢ σ : Δ and ρ : Δ ≤ Φ then Γ ⊢ σ ₛ• ρ : Φ. _ₛ•_ : Subst ℓ m → Wk m n → Subst ℓ n _ₛ•_ σ ρ x = σ (wkVar ρ x) -- Substitute the first variable of a term with an other term. -- -- If Γ∙A ⊢ t : B and Γ ⊢ s : A then Γ ⊢ t[s] : B[s]. _[_] : (t : Term (1+ n)) (s : Term n) → Term n t [ s ] = subst (sgSubst s) t -- Substitute the first variable of a term with an other term, -- but let the two terms share the same context. -- -- If Γ∙A ⊢ t : B and Γ∙A ⊢ s : A then Γ∙A ⊢ t[s]↑ : B[s]↑. _[_]↑ : (t : Term (1+ n)) (s : Term (1+ n)) → Term (1+ n) t [ s ]↑ = subst (consSubst (wk1Subst idSubst) s) t B-subst : (σ : Subst m n) (W : BindingType) (F : Term n) (G : Term (1+ n)) → subst σ (⟦ W ⟧ F ▹ G) PE.≡ ⟦ W ⟧ (subst σ F) ▹ (subst (liftSubst σ) G) B-subst σ BΠ F G = PE.refl B-subst σ BΣ F G = PE.refl
PROGRAM COCKTAIL IMPLICIT NONE INTEGER :: intArray(10) = (/ 4, 9, 3, -2, 0, 7, -5, 1, 6, 8 /) WRITE(*,"(A,10I5)") "Unsorted array:", intArray CALL Cocktail_sort(intArray) WRITE(*,"(A,10I5)") "Sorted array :", intArray CONTAINS SUBROUTINE Cocktail_sort(a) INTEGER, INTENT(IN OUT) :: a(:) INTEGER :: i, bottom, top, temp LOGICAL :: swapped bottom = 1 top = SIZE(a) - 1 DO WHILE (bottom < top ) swapped = .FALSE. DO i = bottom, top IF (a(i) > a(i+1)) THEN temp = a(i) a(i) = a(i+1) a(i+1) = temp swapped = .TRUE. END IF END DO IF (.NOT. swapped) EXIT DO i = top, bottom + 1, -1 IF (a(i) < a(i-1)) THEN temp = a(i) a(i) = a(i-1) a(i-1) = temp swapped = .TRUE. END IF END DO IF (.NOT. swapped) EXIT bottom = bottom + 1 top = top - 1 END DO END SUBROUTINE Cocktail_sort END PROGRAM COCKTAIL
module Web.Internal.WebidlPrim import JS import Web.Internal.Types -------------------------------------------------------------------------------- -- Interfaces -------------------------------------------------------------------------------- namespace DOMException export %foreign "browser:lambda:(a,b)=> new DOMException(a,b)" prim__new : UndefOr String -> UndefOr String -> PrimIO DOMException export %foreign "browser:lambda:x=>x.code" prim__code : DOMException -> PrimIO Bits16 export %foreign "browser:lambda:x=>x.message" prim__message : DOMException -> PrimIO String export %foreign "browser:lambda:x=>x.name" prim__name : DOMException -> PrimIO String -------------------------------------------------------------------------------- -- Callbacks -------------------------------------------------------------------------------- namespace Function export %foreign "browser:lambda:x=>(a)=>x(a)()" prim__toFunction : ( IO (Array AnyPtr) -> IO AnyPtr ) -> PrimIO Function namespace VoidFunction export %foreign "browser:lambda:x=>()=>x()()" prim__toVoidFunction : (() -> IO ()) -> PrimIO VoidFunction
open import Common.Prelude open import Common.Reflection data D (A : Set) : Nat → Set where d : ∀ {n} → A → D A n term : Term term = con (quote d) (hArg (def (quote Nat) []) ∷ vArg (con (quote zero) []) ∷ []) -- There was a bug where extra implicit arguments were inserted for the parameters, resulting in -- the unquoted value 'd {_} {Nat} zero' instead of 'd {Nat} zero'. value : D Nat zero value = unquote (give term)
From Test Require Import tactic. Section FOFProblem. Variable Universe : Set. Variable UniverseElement : Universe. Variable wd_ : Universe -> Universe -> Prop. Variable col_ : Universe -> Universe -> Universe -> Prop. Variable col_swap1_1 : (forall A B C : Universe, (col_ A B C -> col_ B A C)). Variable col_swap2_2 : (forall A B C : Universe, (col_ A B C -> col_ B C A)). Variable col_triv_3 : (forall A B : Universe, col_ A B B). Variable wd_swap_4 : (forall A B : Universe, (wd_ A B -> wd_ B A)). Variable col_trans_5 : (forall P Q A B C : Universe, ((wd_ P Q /\ (col_ P Q A /\ (col_ P Q B /\ col_ P Q C))) -> col_ A B C)). Theorem pipo_6 : (forall O B C Bprime Cprime X Y : Universe, ((wd_ Cprime O /\ (wd_ Bprime O /\ (wd_ B O /\ (wd_ B C /\ (wd_ C O /\ (wd_ X Y /\ (wd_ Y C /\ (wd_ X C /\ (wd_ Y B /\ (wd_ X B /\ (wd_ B Bprime /\ (wd_ C C /\ (wd_ C Bprime /\ (wd_ B Cprime /\ (wd_ Cprime C /\ (wd_ Cprime Bprime /\ (col_ O Bprime Cprime /\ (col_ B O C /\ (col_ O X Y /\ (col_ X Y Cprime /\ (col_ X Y C /\ col_ Cprime O C))))))))))))))))))))) -> col_ O B Bprime)). Proof. time tac. Qed. End FOFProblem.
module Inductive.Examples.Sum where open import Inductive open import Tuple open import Data.Fin open import Data.Product open import Data.List open import Data.Vec _⊎_ : Set → Set → Set A ⊎ B = Inductive (((A ∷ []) , []) ∷ (((B ∷ []) , []) ∷ [])) inl : {A B : Set} → A → A ⊎ B inl a = construct zero (a ∷ []) [] inr : {A B : Set} → B → A ⊎ B inr b = construct (suc zero) (b ∷ []) [] case : {A B C : Set} → A ⊎ B → (A → C) → (B → C) → C case x f g = rec (f ∷ (g ∷ [])) x
function [ x, seed ] = anglit_sample ( seed ) %*****************************************************************************80 % %% ANGLIT_SAMPLE samples the Anglit PDF. % % Licensing: % % This code is distributed under the GNU LGPL license. % % Modified: % % 01 September 2004 % % Author: % % John Burkardt % % Parameters: % % Input, integer SEED, a seed for the random number generator. % % Output, real X, a sample of the PDF. % % Output, integer SEED, an updated seed for the random number generator. % [ cdf, seed ] = r8_uniform_01 ( seed ); x = anglit_cdf_inv ( cdf ); return end
// Copyright David Abrahams 2002. Permission to copy, use, // modify, sell and distribute this software is granted provided this // copyright notice appears in all copies. This software is provided // "as is" without express or implied warranty, and with no claim as // to its suitability for any purpose. #include <boost/python/converter/object_manager.hpp> #include <boost/python/borrowed.hpp> #include <boost/static_assert.hpp> #include <boost/python/handle.hpp> using namespace boost::python; using namespace boost::python::converter; struct X {}; int main() { BOOST_STATIC_ASSERT(is_object_manager<handle<> >::value); BOOST_STATIC_ASSERT(!is_object_manager<int>::value); BOOST_STATIC_ASSERT(!is_object_manager<X>::value); BOOST_STATIC_ASSERT(is_reference_to_object_manager<handle<>&>::value); BOOST_STATIC_ASSERT(is_reference_to_object_manager<handle<> const&>::value); BOOST_STATIC_ASSERT(is_reference_to_object_manager<handle<> volatile&>::value); BOOST_STATIC_ASSERT(is_reference_to_object_manager<handle<> const volatile&>::value); BOOST_STATIC_ASSERT(!is_reference_to_object_manager<handle<> >::value); BOOST_STATIC_ASSERT(!is_reference_to_object_manager<X>::value); BOOST_STATIC_ASSERT(!is_reference_to_object_manager<X&>::value); BOOST_STATIC_ASSERT(!is_reference_to_object_manager<X const&>::value); return 0; }
#!/usr/bin/env python # coding: utf-8 # In[1]: #Import tools import matplotlib.pyplot as plt import pandas as pd import numpy as np import requests import time from config import api_key from config import gkey import gmaps import os import json # In[2]: #read csv weather_df = pd.read_csv("../Weather.py/output/cities_final.csv") weather_df # In[3]: gmaps.configure(api_key=gkey) # Store latitude and longitude in locations abd humidity in a humidity variable locations = weather_df[["Lat", "Lng"]] humidity = weather_df["Humidity"] # In[4]: # Plot Heatmap fig = gmaps.figure(center=(46.0, -5.0), zoom_level=1) max_intensity = np.max(humidity) # Create heat layer heat_layer = gmaps.heatmap_layer(locations, weights = humidity, dissipating=False, max_intensity=100, point_radius=2) # Add heat layer fig.add_layer(heat_layer) #show fig fig # In[5]: #Find cities using given parameters city_df = weather_df.loc[(weather_df["Max Temp"] < 80) & (weather_df["Max Temp"] > 70) & (weather_df["Wind Speed"] < 10) & (weather_df["Cloudiness"] == 0)].dropna() # In[6]: #Add Hotel Name column city_df['Hotel Name'] = "" city_df # In[7]: #Show only the values we need hotel_df = city_df[['Hotel Name','City','Country','Lat','Lng']] #ser base url and parameters base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json" params = {"type" : "lodging", "keyword" : "hotel", "radius" : 5000, "key" : gkey} # In[ ]: # In[8]: #Create for loop for index, row in hotel_df.iterrows(): #grab the longitude, latitude, city and country from city_df lat = row["Lat"] lng = row["Lng"] city = row["City"] Country = row["Country"] #Add location to the parameter dictionary params["location"] = f"{lat},{lng}" base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json" response = requests.get(base_url,params=params).json() #create a variable with the results results = response['results'] try: #Add Hotel Name to hotel_df hotel_df.loc[index,"Hotel Name"] = results[0]['name'] print(f"{results[0]['name']} is the closest hotel to {city}") except (KeyError, IndexError): print("Missing information") # In[11]: #drop the rows that are missing information hotel_df = (hotel_df.drop(hotel_df.index[[1, 6]])) hotel_df # In[12]: # NOTE: Do not change any of the code in this cell # Using the template add the hotel marks to the heatmap info_box_template = """ <dl> <dt>Name</dt><dd>{Hotel Name}</dd> <dt>City</dt><dd>{City}</dd> <dt>Country</dt><dd>{Country}</dd> </dl> """ # Store the DataFrame Row # NOTE: be sure to update with your DataFrame name hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()] locations = hotel_df[["Lat", "Lng"]] # In[13]: # Add marker layer and info box content ontop of heat map markers = gmaps.marker_layer(locations, info_box_content = hotel_info) fig.add_layer(markers) # Display Map fig
```python from decodes.core import * from decodes.io.jupyter_out import JupyterOut out = JupyterOut.unit_square( ) ``` # Alternate Coordinate Geometry todo \begin{align} x = r \ cos\theta \\ y = r \ sin\theta \end{align} ### Cylindrical Coordinates \begin{eqnarray} x &=& r \ cos\theta \\ y &=& r \ sin\theta \\ z &=& z \end{eqnarray} ```python """ Cylindrical Evaluation of an Orthonormal CS Returns a Point relative to this CS given three cylindrical coordinates. """ def eval_cyl(self, radius, radians, z): pt = Point( radius * cos(radians), radius * sin(radians), z) return self.eval(pt) ``` ### Spherical Coordinates \begin{eqnarray} x &=& \varrho \ sin\varphi \ cos\theta \\ y &=& \varrho \ sin\varphi \ sin\theta \\ z &=& \varrho \ cos\varphi \end{eqnarray} ```python """ Spherical Evaluation of an Orthonormal CS Returns a Point relative to this CS given three spherical coordinates. """ def eval_sph(self, rho, phi, theta): x = rho * sin(phi) * cos(theta) y = rho * sin(phi) * sin(theta) z = rho * cos(phi) return self.eval(Point(x,y,z)) ``` ```python ```
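The conversion formulas above can be checked independently of the decodes library. The following is a minimal stand-alone sketch in plain Python; the function names `sph_to_xyz` / `xyz_to_sph` and the sample values are illustrative assumptions, not part of decodes, and it simply implements the spherical equations and verifies a numeric round trip.

```python
# Stand-alone check of the spherical <-> Cartesian formulas (illustrative only).
from math import sin, cos, sqrt, acos, atan2, isclose

def sph_to_xyz(rho, phi, theta):
    """Spherical (rho, phi, theta) -> Cartesian (x, y, z), phi measured from +z."""
    return (rho * sin(phi) * cos(theta),
            rho * sin(phi) * sin(theta),
            rho * cos(phi))

def xyz_to_sph(x, y, z):
    """Cartesian (x, y, z) -> spherical (rho, phi, theta)."""
    rho = sqrt(x * x + y * y + z * z)
    phi = acos(z / rho) if rho else 0.0
    theta = atan2(y, x)
    return rho, phi, theta

if __name__ == "__main__":
    rho, phi, theta = 2.0, 0.75, 1.2          # arbitrary sample coordinates
    x, y, z = sph_to_xyz(rho, phi, theta)
    back = xyz_to_sph(x, y, z)
    # The round trip should reproduce the original coordinates.
    assert all(isclose(a, b) for a, b in zip((rho, phi, theta), back))
    print((x, y, z), back)
```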
State Before: α : Type u_1 β : Type u_2 γ : Type ?u.7921 ι : Sort ?u.7924 π : α → Type ?u.7929 s s₁ s₂ : Set α t✝ t₁ t₂ : Set β p : Set γ f f₁ f₂ f₃ : α → β g g₁ g₂ : β → γ f' f₁' f₂' : β → α g' : γ → β a : α b : β heq : EqOn f₁ f₂ s t : Set β x : α hx : x ∈ s ⊢ x ∈ f₁ ⁻¹' t ↔ x ∈ f₂ ⁻¹' t State After: no goals Tactic: rw [mem_preimage, mem_preimage, heq hx]
<unk> Resignation by Louis <unk>
Over the years we’ve helped thousands of senior men and women expand their horizons and enhance their career prospects. For some people this means improving their own professional career strategies. For others, it’s a case of moving out of their historic career or profession into something related or even totally new. Each option takes very careful consideration and planning; and we are happy to support and guide you as you consider your plans. After all, our experienced consultants have helped over 10,000 people over the course of 25 years. If you know you have that niggle inside that tells you to do something better; be more ambitious or act on those dreams you’ve had for years, what should you do next? Get in touch with us by phone or email (use the contact form) and we can book a session with one of our experienced consultants to discuss your options. You might feel you would benefit from more than a single session, in which case simply select from the options available or discuss them with your consultant. Alternatively you can take advantage of our structured programmes, particularly our 12-hour Career Development Programme tailored from our extensive experience of helping senior executives and professionals. This is particularly useful to change roles in an accelerated timescale. We understand it can be difficult mentally and financially if you’re facing change. So we offer longer career support which can be tailored to your personal needs. These programmes are particularly useful for those starting their own business or making extensive and profound changes in their lives.
lemma INT_decseq_offset: assumes "decseq F" shows "(\<Inter>i. F i) = (\<Inter>i\<in>{n..}. F i)"
Module tmp. Inductive day : Type := | monday : day | tuesday : day | wednesday : day | thursday : day | friday : day | saturday : day | sunday : day. Definition next_weekday (d:day) : day := match d with | monday => tuesday | tuesday => wednesday | wednesday => thursday | thursday => friday | friday => monday | saturday => monday | sunday => monday end. Eval simpl in (next_weekday friday). Eval simpl in (next_weekday (next_weekday saturday)). Example text_next_weekday: (next_weekday (next_weekday saturday)) = tuesday. Proof. simpl. reflexivity. Qed. Inductive bool: Type := | true : bool | false : bool. Definition negb (b:bool) : bool := match b with | true => false | false => true end. Definition andb (b1: bool) (b2: bool) : bool := match b1 with | true => b2 | false => false end. Definition orb (b1: bool) (b2: bool) : bool := match b1 with | true => true | false => b2 end. Example test_orb1: (orb true false) = true. Proof. simpl. reflexivity. Qed. Example test_orb2: (orb false false) = false. Proof. simpl. reflexivity. Qed. Example test_orb3: (orb false true) = true. Proof. simpl. reflexivity. Qed. Example test_orb4: (orb true true) = true. Proof. simpl. reflexivity. Qed. Definition admit {T: Type} : T. Admitted. Definition nandb (b1: bool) (b2: bool) : bool := match b1 with | true => negb b2 | false => true end. Example test_nandb1: (nandb true false) = true. Proof. simpl. reflexivity. Qed. Example test_nandb2: (nandb false false) = true. Proof. simpl. reflexivity. Qed. Example test_nandb3: (nandb false true) = true. Proof. simpl. reflexivity. Qed. Example test_nandb4: (nandb true true) = false. Proof. simpl. reflexivity. Qed. Definition andb3 (b1:bool) (b2:bool) (b3:bool) : bool := match b1 with | true => andb b2 b3 | false => false end. Example test_andb31: (andb3 true true true) = true. Proof. simpl. reflexivity. Qed. Example test_andb32: (andb3 false true true) = false. Proof. simpl. reflexivity. Qed. Example test_andb33: (andb3 true false true) = false. Proof. simpl. reflexivity. Qed. Example test_andb34: (andb3 true true false) = false. Proof. simpl. reflexivity. Qed. Check (negb true). Check negb. Module Playground1. Inductive nat : Type := | O : nat | S : nat -> nat. Definition pred (n: nat) : nat := match n with | O => O | S n' => n' end. End Playground1. Definition minustwo (n: nat) : nat := match n with | O => O | S O => O | S (S n') => n' end. Check (S (S (S (S 0)))). Eval simpl in (minustwo 4). Check S. Check pred. Check minustwo. Fixpoint evenb (n:nat) : bool := match n with | O => true | S O => false | S (S n') => evenb n' end. Definition oddb (n:nat) : bool := negb (evenb n). Example test_oddb1: (oddb (S O)) = true. Proof. simpl. reflexivity. Qed. Example test_oddb2: (oddb (S (S (S (S O))))) = false. Proof. simpl. reflexivity. Qed. Module Playground2. Fixpoint plus (n : nat) (m : nat) : nat := match n with | O => m | S n' => S (plus n' m) end. Eval simpl in (plus (S (S (S 0)))) (S (S 0)). Fixpoint mult (n m : nat) : nat := match n with | O => O | S n' => plus m (mult n' m) end. Fixpoint minus (n m: nat) : nat := match n, m with | O, _ => O | S _, O => n | S n', S m' => minus n' m' end. End Playground2. Fixpoint exp (base power : nat) : nat := match power with | O => S O | S p => mult base (exp base p) end. Example test_mult1: (mult 3 3) = 9. Proof. simpl. reflexivity. Qed. Fixpoint factorial (n:nat) : nat := match n with | O => 1 | S n' => mult n (factorial n') end. Example test_factorial1: (factorial 3) = 6. Proof. simpl. reflexivity. Qed. 
Example test_factorial2: (factorial 5) = (mult 10 12). Proof. simpl. reflexivity. Qed. Notation "x + y" := (plus x y) (at level 50, left associativity) : nat_scope. Notation "x - y" := (minus x y) (at level 50, left associativity) : nat_scope. Notation "x * y" := (mult x y) (at level 40, left associativity) : nat_scope. Check ((0 + 1) + 1). Fixpoint beq_nat (n m : nat) : bool := match n with | O => match m with | O => true | S m' => false end | S n' => match m with | O => false | S m' => beq_nat n' m' end end. Fixpoint ble_nat (n m : nat) : bool := match n with | O => true | S n' => match m with | O => false | S m' => ble_nat n' m' end end. Example test_ble_nat1: (ble_nat 2 2) = true. Proof. simpl. reflexivity. Qed. Example test_ble_nat2: (ble_nat 2 4) = true. Proof. simpl. reflexivity. Qed. Example test_ble_nat3: (ble_nat 4 2) = false. Proof. simpl. reflexivity. Qed. Fixpoint blt_nat (n m : nat) : bool := match n, m with | O, O => false | O, S _ => true | S _, O => false | S n', S m' => blt_nat n' m' end. Example test_blt_nat1: (blt_nat 2 2) = false. Proof. simpl. reflexivity. Qed. Example test_blt_nat2: (blt_nat 2 4) = true. Proof. simpl. reflexivity. Qed. Example test_blt_nat3: (blt_nat 4 2) = false. Proof. simpl. reflexivity. Qed. Theorem plus_0_n : forall n: nat, 0 + n = n. Proof. simpl. reflexivity. Qed. Theorem plus_0_n' : forall n: nat, 0 + n = n. Proof. reflexivity. Qed. Eval simpl in (forall n: nat, n + 0 = n). Eval simpl in (forall n: nat, 0 + n = n). Theorem plus_0_n'' : forall n: nat, 0 + n = n. Proof. intros n. reflexivity. Qed. Theorem plus_1_l : forall n: nat, 1 + n = S n. Proof. intros. reflexivity. Qed. Theorem mult_0_l : forall n:nat, 0 * n = 0. Proof. intros. reflexivity. Qed. Theorem plus_id_example : forall n m : nat, n = m -> n + n = m + m. Proof. intros n m. intros H. rewrite -> H. reflexivity. Qed. Theorem plus_id_exercise : forall n m o : nat, n = m -> m = o -> n + m = m + o. Proof. intros n m o. intros H. intros I. rewrite -> H. rewrite -> I. reflexivity. Qed. Theorem mult_0_plus : forall n m : nat, (0 + n) * m = n * m. Proof. intros n m. rewrite -> plus_0_n. reflexivity. Qed. Theorem mult_1_plus : forall n m : nat, (1 + n) * m = m + (n * m). Proof. intros n m. rewrite -> plus_1_l. reflexivity. Qed. Theorem plus_1_neq_0 : forall n : nat, beq_nat(n + 1) 0 = false. Proof. intros n. destruct n as [| n']. reflexivity. reflexivity. Qed. Theorem negb_involtive : forall b : bool, negb (negb b) = b. Proof. intros b. destruct b. reflexivity. reflexivity. Qed. Theorem zero_nbeq_plus_1 : forall n : nat, beq_nat 0 (n + 1) = false. Proof. intros n. destruct n. reflexivity. reflexivity. Qed. Require String. Open Scope string_scope. Ltac move_to_top x := match reverse goal with | H : _ |- _ => try move x after H end. Tactic Notation "assert_eq" ident(x) constr(v) := let H := fresh in assert (x = v) as H by reflexivity; clear H. Tactic Notation "Case_aux" ident(x) constr(name) := first [ set (x := name); move_to_top x | assert_eq x name; move_to_top x | fail 1 "because we are working on a different case" ]. Tactic Notation "Case" constr(name) := Case_aux Case name. Tactic Notation "SCase" constr(name) := Case_aux SCase name. Tactic Notation "SSCase" constr(name) := Case_aux SSCase name. Tactic Notation "SSSCase" constr(name) := Case_aux SSSCase name. Tactic Notation "SSSSCase" constr(name) := Case_aux SSSSCase name. Tactic Notation "SSSSSCase" constr(name) := Case_aux SSSSSCase name. Tactic Notation "SSSSSSCase" constr(name) := Case_aux SSSSSSCase name. 
Tactic Notation "SSSSSSSCase" constr(name) := Case_aux SSSSSSSCase name. Theorem andb_true_elim1 : forall b c : bool, andb b c = true -> b = true. Proof. intros b c H. destruct b. Case "b = true". reflexivity. Case "b = false". rewrite <- H. reflexivity. Qed. Theorem andb_true_elim2 : forall b c : bool, andb b c = true -> c = true. Proof. intros b c H. destruct c. rewrite <- H. reflexivity. rewrite <- H. destruct b. reflexivity. reflexivity. Qed. Theorem plus_0_r : forall n:nat, n + 0 = n. Proof. intros n. induction n as [| n']. Case "n = 0". reflexivity. Case "n = S n'". simpl. rewrite -> IHn'. reflexivity. Qed. Theorem minus_diag : forall n, minus n n = 0. Proof. intros n. induction n as [| n']. Case "n = 0". simpl. reflexivity. Case "n = S n'". simpl. rewrite -> IHn'. reflexivity. Qed. Theorem mult_0_r : forall n:nat, n * 0 = 0. Proof. intros n. induction n as [| n']. + simpl. reflexivity. + simpl. rewrite -> IHn'. reflexivity. Qed. Theorem plus_n_Sm : forall n m : nat, S (n + m) = n + (S m). Proof. intros n m. induction n as [| n']. + simpl. reflexivity. + simpl. rewrite -> IHn'. reflexivity. Qed. Lemma succ : forall n m : nat, S(n + m) = n + S(m). Proof. intros n m. induction n. + reflexivity. + simpl. rewrite IHn. reflexivity. Qed. Theorem plus_comm : forall n m : nat, n + m = m + n. Proof. intros n m. induction n as [| n']. + simpl. rewrite plus_0_r. reflexivity. + simpl. rewrite IHn'. destruct m. reflexivity. induction m as [| m']. * reflexivity. * simpl. rewrite succ. reflexivity. Qed. Fixpoint double (n: nat) := match n with | O => O | S n' => S (S (double n')) end. Lemma double_plus : forall n, double n = n + n. Proof. intros n. induction n as [| n']. + reflexivity. + simpl. rewrite <- succ. rewrite <- IHn'. reflexivity. Qed. Theorem beq_nat_refl : forall n : nat, true = beq_nat n n. Proof. intros n. induction n as [| n']. + reflexivity. + simpl. rewrite IHn'. reflexivity. Qed. Theorem mult_0_plus' : forall n m : nat, (0 + n) * m = n * m. Proof. intros n m. assert (H: 0 + n = n). + reflexivity. + rewrite -> H. reflexivity. Qed. Theorem plus_assoc : forall n m p : nat, n + (m + p) = (n + m) + p. Proof. intros n m p. induction n as [|n']. + simpl. reflexivity. + simpl. rewrite IHn'. reflexivity. Qed. Theorem plus_swap : forall n m p : nat, n + (m + p) = m + (n + p). Proof. intros n m p. assert(H: (n + m) + p = n + (m + p)). + rewrite <-plus_assoc. reflexivity. + assert(I: (m + n) + p = m + (n + p)). rewrite <- plus_assoc. reflexivity. rewrite <- H. rewrite <- I. assert(J: n + m = m + n). rewrite <- plus_comm. reflexivity. rewrite J. reflexivity. Qed. Theorem ble_nat_refl : forall n: nat, true = ble_nat n n. Proof. intros n. induction n as [|n']. + simpl. reflexivity. + simpl. rewrite IHn'. reflexivity. Qed. Theorem zero_nbeq_S : forall n:nat, beq_nat 0 (S n) = false. Proof. intros n. simpl. reflexivity. Qed. Theorem andb_false_r : forall b : bool, andb b false = false. Proof. intros b. destruct b. + reflexivity. + reflexivity. Qed. Theorem plus_ble_compat_l : forall n m p : nat, ble_nat n m = true -> ble_nat (p + n) (p + m) = true. Proof. intros n m p. intros H. induction p as [| p']. + simpl. rewrite H. reflexivity. + simpl. rewrite IHp'. reflexivity. Qed. Theorem S_nbeq_0 : forall n: nat, beq_nat (S n) 0 = false. Proof. intros n. simpl. reflexivity. Qed. Theorem mult_1_l : forall n:nat, 1*n = n. Proof. intros n. simpl. rewrite plus_0_r. reflexivity. Qed. Theorem all3_spec : forall b c : bool, orb (andb b c) (orb (negb b) (negb c)) = true. Proof. intros b c. destruct b. + simpl. 
destruct c. simpl. reflexivity. simpl. reflexivity. + simpl. reflexivity. Qed. Inductive bin: Type := | O : bin | B : bin -> bin | Bp : bin -> bin. Fixpoint binc (b : bin) :bin := match b with | O => Bp O | B b' => Bp b' | Bp b' => B (binc b') end. Fixpoint bin_to_nat (b: bin) :nat := match b with | O => 0 | B O => 0 | B b' => 2 * (bin_to_nat b') | Bp b' => 2 * (bin_to_nat b') + 1 end. Fixpoint nat_to_bin (n : nat) : bin := match n with | 0 => O | S (n') => binc (nat_to_bin n') end. Fixpoint normalize (b: bin) : bin := match bin_to_nat b with | 0 => O | S n => binc (nat_to_bin n) end. End tmp.
Formal statement is: lemma in_convex_hull_linear_image: assumes "linear f" and "x \<in> convex hull s" shows "f x \<in> convex hull (f ` s)" Informal statement is: If $f$ is a linear map and $x$ is in the convex hull of $s$, then $f(x)$ is in the convex hull of $f(s)$.
Other divine groups were composed of deities with interrelated roles, or who together represented a region of the Egyptian mythological cosmos. There were sets of gods for the hours of the day and night and for each nome (province) of Egypt. Some of these groups contain a specific, symbolically important number of deities. Paired gods can stand for opposite but interrelated concepts that are part of a greater unity. Ra, who is dynamic and light-producing, and Osiris, who is static and shrouded in darkness, merge into a single god each night. Groups of three are linked with plurality in ancient Egyptian thought, and groups of four connote completeness. Rulers in the late New Kingdom promoted a particularly important group of three gods above all others: Amun, Ra, and Ptah. These deities stood for the plurality of all gods, as well as for their own cult centers (the major cities of Thebes, Heliopolis, and Memphis) and for many threefold sets of concepts in Egyptian religious thought. Sometimes Set, the patron god of the Nineteenth Dynasty kings and the embodiment of disorder within the world, was added to this group, which emphasized a single coherent vision of the pantheon.
import torch import torch.nn as nn import numpy as np # Class inspired by Abhishek Thakur's book: # "Approaching (Almost) Any Machine Learning # Problem" class Engine: @staticmethod def train(data_loader, model, optimizer, device, scheduler=None): """Function for training the model for one epoch :param data_loader: torch data_loader :param model: model :param optimizer: torch optimizer :param device: device :param scheduler: learning scheduler """ # setting the model to the training mode model.train() model.to(device) # list to track training loss training_loss = list() for data in data_loader: inputs = data["image"].to(device) labels = data["target"].to(device) # Clearing the gradients # This approach is faster than optimizer.zero_grad() # https://pytorch.org/tutorials/recipes/recipes/tuning_guide.html for param in model.parameters(): param.grad = None outputs = model(inputs) # Calculating the loss loss = nn.CrossEntropyLoss()(outputs, labels) print(loss.item()) training_loss.append(loss.item()) # Compute the grad loss.backward() if scheduler is not None: scheduler.step(loss) optimizer.step() return training_loss @staticmethod def evaluate(data_loader, model, device): # initialize empty lists to store predictions # and targets final_predictions = [] final_targets = [] # putting the model to eval mode model.eval() model.to(device) with torch.no_grad(): for data in data_loader: inputs = data["image"].to(device) labels = data["target"].to(device) # making predictions predictions = model(inputs) predictions = predictions.cpu().numpy().tolist() targets = data["target"].cpu().numpy().tolist() final_predictions.extend(predictions) final_targets.extend(targets) return final_predictions, final_targets
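A minimal usage sketch for the Engine class above, assuming it is already in scope. The DummyImageDataset, tensor sizes, model, and hyperparameters are stand-ins chosen purely for illustration; any torch Dataset whose items are dicts with "image" and "target" keys should plug into Engine.train / Engine.evaluate the same way.

```python
# Illustrative only: exercise Engine.train / Engine.evaluate with a dummy dataset.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class DummyImageDataset(Dataset):
    """Yields dicts with the "image" and "target" keys that Engine expects."""
    def __init__(self, n=64, num_classes=3):
        self.x = torch.randn(n, 3 * 8 * 8)
        self.y = torch.randint(0, num_classes, (n,))

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return {"image": self.x[idx], "target": self.y[idx]}

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    loader = DataLoader(DummyImageDataset(), batch_size=16, shuffle=True)
    model = nn.Linear(3 * 8 * 8, 3)                      # tiny stand-in model
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    losses = Engine.train(loader, model, optimizer, device)
    preds, targets = Engine.evaluate(loader, model, device)
    print(f"mean loss: {sum(losses) / len(losses):.4f}, n preds: {len(preds)}")
```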
PIUserInfo <- function(identityType = NULL, name = NULL, isAuthenticated = NULL, sID = NULL, impersonationLevel = NULL, webException = NULL) { if (is.null(identityType) == FALSE) { if (is.character(identityType) == FALSE) { return (print(paste0("Error: identityType must be a string."))) } } if (is.null(name) == FALSE) { if (is.character(name) == FALSE) { return (print(paste0("Error: name must be a string."))) } } if (is.null(isAuthenticated) == FALSE) { if (is.logical(isAuthenticated) == FALSE) { return (print(paste0("Error: isAuthenticated must be a boolean."))) } } if (is.null(sID) == FALSE) { if (is.character(sID) == FALSE) { return (print(paste0("Error: sID must be a string."))) } } if (is.null(impersonationLevel) == FALSE) { if (is.character(impersonationLevel) == FALSE) { return (print(paste0("Error: impersonationLevel must be a string."))) } } if (is.null(webException) == FALSE) { className <- attr(webException, "className") if ((is.null(className)) || (className != "PIWebException")) { return (print(paste0("Error: the class from the parameter webException should be PIWebException."))) } } value <- list( IdentityType = identityType, Name = name, IsAuthenticated = isAuthenticated, SID = sID, ImpersonationLevel = impersonationLevel, WebException = webException) valueCleaned <- rmNullObs(value) attr(valueCleaned, "className") <- "PIUserInfo" return(valueCleaned) }
Require Export Iron.Language.SystemF2Cap.Type. Require Export Iron.Language.SystemF2Cap.Value.Exp. Require Export Iron.Language.SystemF2Cap.Value.Operator.LiftX. Require Export Iron.Language.SystemF2Cap.Value.Relation.TyJudge. (********************************************************************) (* Substitution of Types in Exps *) Fixpoint substTV (d: nat) (u: ty) (vv: val) : val := match vv with | VVar _ => vv | VLoc _ => vv | VBox x => VBox (substTX d u x) | VLam t x => VLam (substTT d u t) (substTX d u x) | VLAM k x => VLAM k (substTX (S d) (liftTT 1 0 u) x) | VConst c => vv end with substTX (d: nat) (u: ty) (xx: exp) : exp := match xx with | XVal v => XVal (substTV d u v) | XLet t x1 x2 => XLet (substTT d u t) (substTX d u x1) (substTX d u x2) | XApp v1 v2 => XApp (substTV d u v1) (substTV d u v2) | XAPP v1 t2 => XAPP (substTV d u v1) (substTT d u t2) | XOp1 op1 v => XOp1 op1 (substTV d u v) | XPrivate ts x => XPrivate ts (substTX (S d) (liftTT 1 0 u) x) | XExtend t x => XExtend (substTT d u t) (substTX (S d) (liftTT 1 0 u) x) | XRun v => XRun (substTV d u v) | XAlloc t v => XAlloc (substTT d u t) (substTV d u v) | XRead t v => XRead (substTT d u t) (substTV d u v) | XWrite t v1 v2 => XWrite (substTT d u t) (substTV d u v1) (substTV d u v2) end. (********************************************************************) (* Substitution of types in exps. *) Lemma subst_type_exp_ix : forall ix ke te se sp x1 t1 e1 t2 o2 k2 , get ix ke = Some (o2, k2) -> TypeX ke te se sp x1 t1 e1 -> KindT (delete ix ke) sp t2 k2 -> TypeX (delete ix ke) (substTE ix t2 te) (substTE ix t2 se) sp (substTX ix t2 x1) (substTT ix t2 t1) (substTT ix t2 e1). Proof. intros. gen ix ke te se sp t1 t2 e1. gen o2 k2. induction x1 using exp_mutind with (PV := fun v => forall ix ke te se sp t1 t2 o3 k3 , get ix ke = Some (o3, k3) -> TypeV ke te se sp v t1 -> KindT (delete ix ke) sp t2 k3 -> TypeV (delete ix ke) (substTE ix t2 te) (substTE ix t2 se) sp (substTV ix t2 v)(substTT ix t2 t1)); intros; simpl; inverts_type; eauto. - Case "VVar". apply TvVar; auto. unfold substTE. auto. eauto using subst_type_type_ix. - Case "VLoc". eapply TvLoc; fold substTT; rrwrite ( TRef (substTT ix t2 r) (substTT ix t2 t) = substTT ix t2 (TRef r t)). unfold substTE; eauto. eauto using subst_type_type_ix. - Case "VBox". eapply TvBox; fold substTT. eauto using subst_type_type_ix. - Case "VLam". simpl. apply TvLam. eapply subst_type_type_ix; eauto. unfold substTE at 1. rewrite map_rewind. rrwrite ( map (substTT ix t2) (te :> t) = substTE ix t2 (te :> t)). eauto. - Case "VLAM". simpl. apply TvLAM. rewrite delete_rewind. rewrite (liftTE_substTE 0 ix). rewrite (liftTE_substTE 0 ix). rrwrite ( TBot KEffect = substTT (S ix) (liftTT 1 0 t2) (TBot KEffect)). eauto using kind_kienv_weaken. - Case "VConst". destruct c; burn. - Case "XLet". simpl. apply TxLet. eapply subst_type_type_ix; eauto. eauto. unfold substTE at 1. rewrite map_rewind. rrwrite ( map (substTT ix t2) (te :> t) = substTE ix t2 (te :> t)). eauto. - Case "XApp". eapply TxApp. eapply IHx1 in H8; eauto. simpl in H8. burn. eapply IHx0 in H11; eauto. - Case "XAPP". rrwrite ( TBot KEffect = substTT 0 t (TBot KEffect)). rewrite (substTT_substTT 0 ix). rewrite (substTT_substTT 0 ix). eapply TvAPP. simpl. eapply (IHx1 ix) in H8; eauto. simpl. eauto using subst_type_type_ix. - Case "XOp1". eapply TxOpPrim. destruct o; simpl in *. inverts H8. rrwrite (TNat = substTT ix t2 TNat); eauto. inverts H8. rrwrite (TNat = substTT ix t2 TNat); eauto. destruct o; simpl in *. inverts H8. spec IHx1 H11; eauto. 
spec IHx1 H11; eauto. inverts H8. spec IHx1 H1; eauto. - Case "XPrivate". apply TxPrivate with (t := substTT (S ix) (liftTT 1 0 t2) t) (e := substTT (S ix) (liftTT 1 0 t2) e). + rrwrite (ix = 0 + ix). eapply lowerTT_substTT_liftTT. auto. + rrwrite (S ix = 1 + ix + 0). erewrite maskOnVarT_substTT. * have (~FreeT 0 (liftTT 1 0 t2)). rrwrite (maskOnVarT 0 (liftTT 1 0 t2) = liftTT 1 0 t2) by (apply maskOnVarT_freeT_id; eauto). rrwrite (1 + ix + 0 = 1 + 0 + ix). erewrite lowerTT_substTT_liftTT; eauto. * have (~FreeT 0 (liftTT 1 0 t2)). auto. + auto. + rewrite delete_rewind. rewrite (liftTE_substTE 0 ix). rewrite (liftTE_substTE 0 ix). rrwrite (ts = substTE (1 + 0 + ix) (liftTT 1 0 t2) (liftTE 0 ts)) by admit. (* fine. ts has only 1 free var, so subst above this is identity. *) unfold substTE at 1. unfold substTE at 1. rewrite <- map_app. rrwrite ( map (substTT (1 + 0 + ix) (liftTT 1 0 t2)) (liftTE 0 te >< liftTE 0 ts) = substTE (1 + 0 + ix) (liftTT 1 0 t2) (liftTE 0 te >< liftTE 0 ts)). eapply IHx1. eauto. eauto using kind_kienv_weaken. admit. (* fine. ts has only 1 free var, so subst above this is identity. *) - Case "XExtend". have H0: (ix = 0 + ix). rewrite H0 at 4. rewrite (substTT_substTT 0 ix). simpl. clear H0. apply TxExtend with (e := substTT (S ix) (liftTT 1 0 t2) e). + rrwrite (ix = 0 + ix). eapply lowerTT_substTT_liftTT with (d' := ix) (t2 := t2) in H4; eauto. simpl in H4. rrwrite (S (0 + ix) = 1 + ix + 0). erewrite maskOnVarT_substTT; eauto. simpl. erewrite maskOnVarT_freeT_id; eauto. rrwrite (ix + 0 = ix); eauto. + eauto using subst_type_type_ix. + rewrite delete_rewind. rewrite (liftTE_substTE 0 ix). rewrite (liftTE_substTE 0 ix). eapply IHx1. * eauto. * simpl. rrwrite ( delete ix ke :> (OCon, KRegion) = insert 0 (OCon, KRegion) (delete ix ke)). eapply kind_kienv_insert. auto. * eauto. - Case "XRun". eapply TxRun; fold substTT. rrwrite ( TSusp (substTT ix t2 e1) (substTT ix t2 t1) = substTT ix t2 (TSusp e1 t1)). eauto using subst_type_type_ix. - Case "XAlloc". eapply TxOpAlloc; fold substTT. eauto using subst_type_type_ix. eauto. - Case "XRead". eapply TxOpRead; fold substTT. eauto using subst_type_type_ix. rrwrite ( TRef (substTT ix t2 r) (substTT ix t2 t1) = substTT ix t2 (TRef r t1)). eauto. - Case "XWrite". eapply TxOpWrite; fold substTT. eauto using subst_type_type_ix. eapply IHx1 in H12; eauto. snorm. eauto. Qed. Lemma subst_type_exp : forall ke te se sp x1 t1 e1 t2 o2 k2 , TypeX (ke :> (o2, k2)) te se sp x1 t1 e1 -> KindT ke sp t2 k2 -> TypeX ke (substTE 0 t2 te) (substTE 0 t2 se) sp (substTX 0 t2 x1) (substTT 0 t2 t1) (substTT 0 t2 e1). Proof. intros. rrwrite (ke = delete 0 (ke :> (o2, k2))). eapply subst_type_exp_ix; burn. Qed.
Describe Users/kjhewett here. 20110604 16:10:49 Hi KJ and Welcome to the Wiki! No worries about the delete, it's a pretty common mistake. I'm glad your cat came home! Users/JonathanLawton
theory regShiftFifo imports paraGste1 begin abbreviation rst::"expType" where "rst \<equiv> IVar (Ident ''rst'')" abbreviation push::"expType" where "push \<equiv> IVar (Ident ''push'')" abbreviation pop::"expType" where "pop \<equiv> IVar (Ident ''pop'')" abbreviation dataIn::"expType" where "dataIn \<equiv> IVar (Ident ''dataIn'' )" abbreviation LOW::"expType" where "LOW \<equiv> Const (boolV False)" abbreviation HIGH::"expType" where " HIGH \<equiv> Const (boolV True)" abbreviation emptyFifo::"expType" where " emptyFifo \<equiv> IVar (Ident ''empty'' ) " abbreviation tail::"expType" where " tail \<equiv> IVar (Ident ''tail'' ) " abbreviation head::"expType" where " head \<equiv> IVar (Ident ''head'' ) " abbreviation full::"expType" where " full \<equiv> IVar (Ident ''full'' ) " definition fullForm::"nat\<Rightarrow>formula" where [simp]: " fullForm DEPTH\<equiv> eqn tail (Const (index DEPTH)) " abbreviation mem::"nat \<Rightarrow> expType" where "mem i \<equiv> IVar (Para (Ident ''mem'') i)" type_synonym paraExpType="nat \<Rightarrow>expType" abbreviation dataOut::"nat\<Rightarrow>expType" where "dataOut DEPTH \<equiv> read (Ident ''mem'') DEPTH (IVar (Ident ''tail'' ))" abbreviation rstForm::"formula" where "rstForm \<equiv> (eqn rst HIGH)" abbreviation emptyForm::"formula" where "emptyForm \<equiv> (eqn emptyFifo HIGH)" abbreviation pushForm::"formula" where "pushForm \<equiv> andForm (andForm (eqn rst LOW) (eqn push HIGH)) (eqn pop LOW)" abbreviation popForm::"formula" where "popForm \<equiv> andForm (andForm (eqn rst LOW) (eqn push LOW)) (eqn pop HIGH)" abbreviation nPushPopForm::"formula" where "nPushPopForm \<equiv> andForm (andForm (eqn rst LOW) (eqn push LOW)) (eqn pop LOW)" abbreviation pushDataForm::"nat \<Rightarrow>formula" where " pushDataForm D \<equiv>andForm pushForm (eqn dataIn (Const (index D)))" abbreviation popDataForm::"nat\<Rightarrow>nat \<Rightarrow>formula" where " popDataForm DEPTH D \<equiv> (eqn (dataOut DEPTH) (Const (index D)))" abbreviation nFullForm::"nat \<Rightarrow>formula" where "nFullForm DEPTH\<equiv> neg (fullForm DEPTH)" abbreviation nEmptyForm::"formula" where "nEmptyForm \<equiv> neg emptyForm " definition vertexI::"node" where [simp]: "vertexI \<equiv>Vertex 0" (*DEPTH=LAST + 1*) definition vertexL::"nat \<Rightarrow> node list" where [simp]: "vertexL LAST \<equiv> vertexI # (map (%i. Vertex i) (down LAST))" definition edgeL::"nat \<Rightarrow> edge list" where [simp]: "edgeL LAST \<equiv> [Edge vertexI ( Vertex 1)] @ [Edge ( Vertex 1) ( Vertex 3)] @ [Edge ( Vertex 1) ( Vertex 4)] @(map (%i. ( Edge (Vertex ( 2*i+1 )) (Vertex ( 2*i+1 ))) ) (upt 0 (LAST+2) )) (* self-loop*) @(map (%i. ( Edge (Vertex ( 2*i+2 )) (Vertex ( 2*i+2 ))) ) (upt 1 (LAST+2) )) (* self-loop*) @(map (%i. ( Edge (Vertex (2 * i + 1)) (Vertex (2 * i + 3))) ) ( upt 1 (LAST+1))) @(map (%i. ( Edge (Vertex (2 * i + 1)) (Vertex (2 * i + 4))) ) ( upt 1 (LAST+1))) @(map (%i. ( Edge (Vertex (2 * i + 3)) (Vertex (2 * i + 1))) ) ( upt 0 (LAST+1) )) @(map (%i. 
( Edge (Vertex (2 * i + 4)) (Vertex (2 * i + 2))) ) ( upt 1 (LAST+1) )) @[Edge ( Vertex 4) ( Vertex 1)] " primrec node2Nat::"node \<Rightarrow> nat" where "node2Nat (Vertex n) = n" definition antOfRbFifo::"nat\<Rightarrow>edge\<Rightarrow>formula" where [simp]: "antOfRbFifo D edge\<equiv> (let from=node2Nat (source edge) in let to=node2Nat (sink edge) in if (from = 0) then rstForm else if (from=to) then nPushPopForm else (if ((from mod 2) =1) then ( if ((from + 2)=to) then ( pushForm ) else if (from=(to + 2)) then popForm else pushDataForm D ) else popForm))" definition consOfRbFifo::"nat\<Rightarrow>nat\<Rightarrow>edge \<Rightarrow>formula" where [simp]: "consOfRbFifo D LAST edge \<equiv> (let from=node2Nat (source edge) in let to=node2Nat (sink edge) in if (((from mod 2) = 1) \<and> ((to mod 2) = 1)) then (if from =1 then (andForm emptyForm (nFullForm LAST)) else if (from = (2*LAST+3)) then (andForm nEmptyForm (fullForm LAST)) else andForm nEmptyForm (nFullForm LAST)) else if (from=4 \<and> to = 1) then popDataForm LAST D else if (from = (2*LAST+4)) then (andForm nEmptyForm (fullForm LAST)) else if (from =1 ) then (andForm emptyForm (nFullForm LAST)) else if (from \<noteq>0) then (andForm nEmptyForm (nFullForm LAST)) else chaos)" definition rbFifoGsteSpec::" nat\<Rightarrow>nat\<Rightarrow>gsteSpec" where [simp]: "rbFifoGsteSpec LAST data\<equiv>Graph vertexI (edgeL LAST ) (antOfRbFifo data ) (consOfRbFifo data LAST)" primrec applyPlusN::"expType\<Rightarrow>nat \<Rightarrow>expType" where "applyPlusN e 0=e" | "applyPlusN e (Suc N) = uif ''+'' [applyPlusN e N, Const (index 1)]" definition tagFunOfRegShiftFifo:: " nat\<Rightarrow>nodeTagFuncList" where [simp]: "tagFunOfRegShiftFifo DATA n \<equiv> (let x=node2Nat n in let DataE=(Const (index DATA)) in if (x = 0) then [] else (if ((x mod 2) = 1) then (if (x =1) then [eqn tail (Const (index 0)), eqn emptyFifo (Const (boolV True))] else [eqn tail (Const (index (x div 2 - 1 ))), eqn emptyFifo (Const (boolV False))] ) else (if (x = 2) then [] else [eqn tail (Const (index (x div 2 - 2 ))), eqn emptyFifo (Const (boolV False)), eqn (IVar (Para (Ident ''mem'') 0)) DataE ]) ) ) " definition branch1::"generalizeStatement" where (*[simp]:*) "branch1 \<equiv> (let S1=assign (Ident ''tail'',(Const (index 0))) in let S2=assign (Ident ''empty'',HIGH) in Parallel [S1,S2])" (*map (\<lambda>i. assign ((Para (Ident ''mem'') i), iteForm (eqn (Const (index i)) (Const (index 0))) dataIn (read (Ident ''mem'') LAST (uif ''-'' [(Const (index i)), (Const (index 1))])))) (upt 1 (LAST +1) )*) definition branch2::"nat\<Rightarrow>generalizeStatement" where "branch2 LAST \<equiv> (let S1= map (\<lambda>i. 
assign ((Para (Ident ''mem'') i), (IVar (Para (Ident ''mem'') i)))) (upt 1 (LAST +1)) in let S4=assign ((Para (Ident ''mem'') 0), dataIn) in let tailPlus=uif ''+'' [tail, (Const (index 1))] in let S2=assign (Ident ''tail'',iteForm (neg (eqn emptyFifo HIGH)) tailPlus tail) in let S3=assign (Ident ''empty'',LOW) in Parallel ([S4,S2,S3]@S1))" definition branch3::"generalizeStatement" where "branch3 \<equiv> (let S1=Parallel [assign (Ident ''empty'',HIGH)] in let S2=Parallel [ assign (Ident ''tail'', uif ''-'' [tail, (Const (index 1))])] in If (eqn tail (Const (index 0))) S1 S2)" definition tagFunOfRbfifio:: "nat \<Rightarrow> nat\<Rightarrow>nodeTagFuncList" where [simp]: " tagFunOfRbfifio depth DATA n \<equiv> (let x=node2Nat n in let DataE=(Const (index DATA)) in if (x = 0) then [] else (if ((x mod 2) = 1) then (if (x=1) then [eqn tail (Const (index 0)), eqn emptyFifo (Const (boolV True))] else [eqn tail (applyPlusN head (x div 2 )), eqn emptyFifo (Const (boolV False))] ) else (if (x = (2)) then [] else [eqn tail (applyPlusN head ((x div 2) - 1)), eqn ( read (Ident ''mem'') depth tail) DataE ]) )) " abbreviation shiftRegfifo::" nat\<Rightarrow>generalizeStatement" where "shiftRegfifo LAST\<equiv> caseStatement [(eqn rst HIGH, branch1), (andForm (eqn push HIGH) (neg (eqn tail (Const (index LAST)))), branch2 LAST), (andForm (eqn pop HIGH) (eqn emptyFifo LOW), branch3)] " consts J::" interpretFunType" axiomatization where axiomOnIAdd [simp,intro]: " J ''+'' [index m, index (Suc 0)] = index (m + 1)" axiomatization where axiomOnISub [simp,intro ]: " J ''-'' [index m, index 1] = index (m - 1)" lemma consistencyOfRbfifo: assumes a:"0 < LAST " shows "consistent' (shiftRegfifo LAST ) (J ) (rbFifoGsteSpec LAST data) (tagFunOfRegShiftFifo data)" proof(unfold consistent'_def,rule allI,rule impI) fix e let ?G=" (rbFifoGsteSpec LAST data)" let ?M="( shiftRegfifo LAST )" let ?tag="(tagFunOfRegShiftFifo data)" let ?P ="\<lambda>e. (let f=andListForm (?tag (sink e)) in let f'=andListForm (?tag (source e)) in tautlogy (implyForm (andForm f' (antOf ?G e)) (preCond1 f ( ?M))) (J ))" assume a1:"e \<in> edgesOf (rbFifoGsteSpec LAST data)" have "e=Edge vertexI ( Vertex 1) | e=Edge ( Vertex 1) ( Vertex 3) | e=Edge ( Vertex 1) ( Vertex 4)| (\<exists>i. 0\<le>i \<and> i\<le> LAST+1 \<and> e=( Edge (Vertex ( 2*i+1 )) (Vertex ( 2*i+1 ))) ) | (\<exists>i. 1\<le>i \<and> i\<le> LAST +1\<and> e=( Edge (Vertex ( 2*i+2 )) (Vertex ( 2*i+2 ))) ) | (\<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=( Edge (Vertex (2 * i + 1)) (Vertex (2 * i + 3))) ) | (\<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=( Edge (Vertex (2 * i + 1)) (Vertex (2 * i + 4))) ) | (\<exists>i. 0\<le>i \<and> i\<le> LAST \<and> e= ( Edge (Vertex (2 * i + 3)) (Vertex (2 * i + 1))) ) | (\<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=( Edge (Vertex(2 * i + 4)) (Vertex(2 * i+2)))) | e=( Edge (Vertex 4) (Vertex 1))" apply(cut_tac a1,auto) done moreover {assume b1:"e=Edge vertexI ( Vertex 1)" have "?P e" apply(cut_tac b1, simp add:antOfRbFifo_def branch1_def) done } moreover {assume b1:" (\<exists>i. 0\<le>i \<and> i\<le> LAST+1 \<and> e=( Edge (Vertex (2* i + 1)) (Vertex ( 2*i + 1))) ) " (is "\<exists>i. ?asm i") from b1 obtain i where b2:"?asm i" by auto have "?P e" apply(cut_tac b2, simp add:antOfRbFifo_def substNIl) done } moreover {assume b1:" (\<exists>i. 1\<le>i \<and> i\<le> LAST+1 \<and> e=( Edge (Vertex (2* i + 2)) (Vertex ( 2*i + 2))) ) " (is "\<exists>i. 
?asm i") from b1 obtain i where b2:"?asm i" by auto have "?P e" apply(cut_tac b2, simp add:antOfRbFifo_def substNIl) done } moreover {assume b1:" e=Edge ( Vertex 1) ( Vertex 3) " have "?P e" apply(cut_tac a b1,auto simp add:branch2_def ) done } moreover {assume b1:" e=Edge ( Vertex 1) ( Vertex 4) " let ?f="andForm (neg (eqn rst HIGH)) (andForm (eqn push HIGH) (neg (eqn tail (Const (index LAST )))) ) " have "?P e " apply(cut_tac a b1 ,auto simp add:simp add:branch2_def) done } moreover {assume b1:" \<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=Edge ( Vertex (2*i + 1)) ( Vertex (2*i + 3))" (is "\<exists>i. ?Q i") from b1 obtain i where b1:"?Q i" by blast have b2:"i - 1 < LAST" by(cut_tac a b1,auto) have "?P e " apply(cut_tac a b1 b2 ,auto simp add: antOfRbFifo_def branch2_def assms ) done } moreover {assume b1:" \<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=Edge ( Vertex (2*i + 1)) ( Vertex (2*i+4)) "(is "\<exists>i. ?Q i") from b1 obtain i where b1:"?Q i" by blast have b2:"i - 1 < LAST" by(cut_tac a b1,auto) have "?P e " by(cut_tac a b1 b2 ,auto simp add: antOfRbFifo_def branch2_def assms) } moreover {assume b1:"\<exists>i. 0\<le>i \<and> i\<le> LAST \<and> e=Edge ( Vertex (2*i + 3)) ( Vertex (2*i + 1)) " (is "\<exists>i. ?Q i") from b1 obtain i where b1:"?Q i" by blast have "?P e " using axiomOnISub by(cut_tac a b1 ,auto simp add: antOfRbFifo_def branch3_def assms ) } moreover {assume b1:"\<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=Edge ( Vertex (2*i + 4)) ( Vertex (2*i +2)) " (is "\<exists>i. ?Q i") from b1 obtain i where b1:"?Q i" by blast have "?P e " using axiomOnISub apply(cut_tac a b1 ,auto simp add: antOfRbFifo_def branch3_def assms ) done } moreover {assume b1:"e=Edge (Vertex 4) ( Vertex 1)" have "?P e" apply(cut_tac a b1,auto simp add:antOfRbFifo_def Let_def branch3_def) done } ultimately show "?P e" by satx qed lemma testAux[simp]: shows "(expEval I e s =index i \<longrightarrow> i \<le> LAST\<longrightarrow>expEval I (caseExp ((map (\<lambda>i. (eqn e (Const (index i)), mem i)) (down LAST)@S))) s =expEval I (mem i) s) \<and> (expEval I e s =index i \<longrightarrow> LAST < i\<longrightarrow>expEval I (caseExp ((map (\<lambda>i. (eqn e (Const (index i)), mem i)) (down LAST)@S))) s =expEval I (caseExp S) s)" (is "?P LAST") proof(induct_tac LAST,auto )qed lemma testAux1[simp]: shows "(expEval I e s =index i \<longrightarrow> i \<le> LAST\<longrightarrow>expEval I (caseExp ((map (\<lambda>i. (eqn e (Const (index i)), mem i)) (down LAST)))) s =expEval I (mem i) s)" proof - have a:"(expEval I e s =index i \<longrightarrow> i \<le> LAST\<longrightarrow>expEval I (caseExp ((map (\<lambda>i. (eqn e (Const (index i)), mem i)) (down LAST)@[]))) s =expEval I (mem i) s)" apply(cut_tac testAux [where S="[]"],blast)done then show ?thesis by auto qed lemma test[simp]:assumes a1: "expEval I e s = (index i)" and a2:"i \<le> LAST" shows "expEval I (caseExp ((map (\<lambda>i. (eqn e (Const (index i)), mem i)) (down LAST)))) s = expEval I (mem i) s" proof - have a:"(expEval I e s =index i \<longrightarrow> i \<le> LAST\<longrightarrow>expEval I (caseExp ((map (\<lambda>i. (eqn e (Const (index i)), mem i)) (down LAST)))) s =expEval I (mem i) s)" by simp with a1 a2 show ?thesis by blast qed (*lemma test'[simp]:assumes a1: "eqn e (Const (index i))" and a2:"i \<le> LAST" shows "eqn (caseExp ((map (\<lambda>i. (eqn e (Const (index i)), mem i)) (down LAST)))) (mem i) " proof - have a:"(expEval I e s =index i \<longrightarrow> i \<le> LAST\<longrightarrow>expEval I (caseExp ((map (\<lambda>i. 
(eqn e (Const (index i)), mem i)) (down LAST)))) s =expEval I (mem i) s)" by simp with a1 a2 show ?thesis by blast qed *) lemma instImply: assumes a:"G=(rbFifoGsteSpec LAST data)" and b:"0 < LAST " and c:"tag=tagFunOfRegShiftFifo data" shows "\<forall> e. e \<in>edgesOf G\<longrightarrow> tautlogy (implyForm (andForm (antOf G e) (andListForm (tag (source e)))) (consOf G e)) I" proof(rule allI,rule impI,simp,rule allI,rule impI) fix e s assume a1:"e \<in> edgesOf G " and a2:" formEval I (antOf G e) s \<and> formEval I (andListForm (tag (source e))) s" let ?P ="\<lambda>e. formEval I (consOf G e) s" have "e=Edge vertexI ( Vertex 1) | e=Edge ( Vertex 1) ( Vertex 3) | e=Edge ( Vertex 1) ( Vertex 4)| (\<exists>i. 0\<le>i \<and> i\<le> LAST+1 \<and> e=( Edge (Vertex ( 2*i+1 )) (Vertex ( 2*i+1 ))) ) | (\<exists>i. 1\<le>i \<and> i\<le> LAST+1 \<and> e=( Edge (Vertex ( 2*i+2 )) (Vertex ( 2*i+2 ))) ) | (\<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=( Edge (Vertex (2 * i + 1)) (Vertex (2 * i + 3))) ) | (\<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=( Edge (Vertex (2 * i + 1)) (Vertex (2 * i + 4))) ) | (\<exists>i. 0\<le>i \<and> i\<le> LAST \<and> e= ( Edge (Vertex (2 * i + 3)) (Vertex (2 * i + 1))) ) | (\<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=( Edge (Vertex(2 * i + 4)) (Vertex(2 * i+2)))) | e=( Edge (Vertex 4) (Vertex 1))" apply(cut_tac a a1,auto) done moreover {assume b1:"e=Edge vertexI ( Vertex 1)" have "?P e" apply(cut_tac a b1, auto simp add:antOfRbFifo_def) done } moreover {assume b1:" (\<exists>i. 0\<le>i \<and> i\<le> LAST+1 \<and> e=( Edge (Vertex (2* i + 1)) (Vertex ( 2*i + 1))) ) " (is "\<exists>i. ?asm i") from b1 obtain i where b2:"?asm i" by auto have "?P e" apply(cut_tac a b c a2 b2, auto) done } moreover {assume b1:" (\<exists>i. 1\<le>i \<and> i\<le> LAST+1 \<and> e=( Edge (Vertex (2* i + 2)) (Vertex ( 2*i + 2))) ) " (is "\<exists>i. ?asm i") from b1 obtain i where b2:"?asm i" by auto have "?P e" apply(cut_tac a b c a2 b2,auto) done } moreover {assume b1:" e=Edge ( Vertex 1) ( Vertex 3) " have "?P e" apply(cut_tac a b c a2 b1,auto ) done } moreover {assume b1:" e=Edge ( Vertex 1) ( Vertex 4) " have "?P e " apply(cut_tac a b b1 c a2,auto simp add: antOfRbFifo_def ) done } moreover {assume b1:" \<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=Edge ( Vertex (2*i + 1)) ( Vertex (2*i + 3))" (is "\<exists>i. ?Q i") from b1 obtain i where b1:"?Q i" by blast have "?P e " by(cut_tac a b c a2 b1 ,auto simp add: antOfRbFifo_def assms ) } moreover {assume b1:" \<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=Edge ( Vertex (2*i + 1)) ( Vertex (2*i+4)) "(is "\<exists>i. ?Q i") from b1 obtain i where b1:"?Q i" by blast have "?P e " by(cut_tac a b c a2 b1 ,auto simp add: antOfRbFifo_def assms) } moreover {assume b1:"\<exists>i. 0\<le>i \<and> i\<le> LAST \<and> e=Edge ( Vertex (2*i + 3)) ( Vertex (2*i + 1)) " (is "\<exists>i. ?Q i") from b1 obtain i where b1:"?Q i" by blast have "?P e " by(cut_tac a b c a2 b1 ,auto simp add: antOfRbFifo_def assms ) } moreover {assume b1:"\<exists>i. 1\<le>i \<and> i\<le> LAST \<and> e=Edge ( Vertex (2*i + 4)) ( Vertex (2*i+2 )) " (is "\<exists>i. 
?Q i") from b1 obtain i where b1:"?Q i" by blast have "?P e " by(cut_tac a b c a2 b1 ,auto simp add: antOfRbFifo_def Let_def assms) } moreover {assume b1:"e=Edge (Vertex 4) ( Vertex 1)" have "?P e" apply(cut_tac a b c a2 b1 ,auto) done } ultimately show "?P e" by satx qed lemma main: assumes a:"G=(rbFifoGsteSpec LAST data)" and b:"0 < LAST " and c:"tag=tagFunOfRegShiftFifo data" and d:"M=(shiftRegfifo LAST )" shows " circuitSatGsteSpec M G J " proof(rule mainLemma) have a1:"consistent' (shiftRegfifo LAST ) (J ) (rbFifoGsteSpec LAST data) (tagFunOfRegShiftFifo data)" using b by (rule consistencyOfRbfifo) from a c d this show "consistent' M (J ) G tag" by simp next from a b c show "\<forall>e. e \<in> edgesOf G \<longrightarrow> tautlogy (implyForm (andForm (antOf G e) (andListForm (tag (source e)))) (consOf G e)) (J )" apply(rule instImply) done next from a c show "tag (initOf G) = []" apply auto done qed end
function f = flops_det(n) % FLOPS_DET Flops for matrix determinant. % FLOPS_DET(n) returns the number of flops for det(eye(n)). if n == 1 f = 1; else % this is from logdet f = flops_chol(n) + n; end
x<-read.table("infile.txt", header=TRUE) a<-x$plus b<-floor(9*(a-min(a))/(max(a)-min(a))) write.table(b, "BDI.txt", quote=FALSE, row.names=FALSE,col.names=FALSE)
[STATEMENT] lemma take_length[simp]: "take (length al) al = al" [PROOF STATE] proof (prove) goal (1 subgoal): 1. take (length al) al = al [PROOF STEP] using take_all [PROOF STATE] proof (prove) using this: length ?xs \<le> ?n \<Longrightarrow> take ?n ?xs = ?xs goal (1 subgoal): 1. take (length al) al = al [PROOF STEP] by auto
Due to Hurricane Irene, parts of New Jersey were declared a federal disaster area this week. Federal funding is available to people in Bergen, Essex, Morris, Passaic, and Somerset Counties. More than 150,000 homes and businesses in the state remained without electricity Wednesday afternoon, with utilities predicting restoration by the weekend or early next week. The old Reading Viaduct, becoming a city park? Talks have been going on for eight years to get city officials on board with the idea. Now, the city is in talks with Reading International Co. to take control of the larger section of the viaduct to transform it into an elevated public park. Meanwhile, the Center City District is working with SEPTA on a legal agreement to create a park on the shorter section of the viaduct owned by the transit agency.
\section{Case study \#4: Recomputability of HCI Studies} \label{s:group4} Research in computer science often involves humans, especially if the subject of study is how people and machines interact with each other (e.g., in HCI). This kind of research focuses on phenomena that involve human agents and are therefore not addressable strictly through computation. However, there are also strictly computational issues that are critical for the replicability of HCI research. In this case study we looked at current practice in HCI research through an example published experiment, and aimed at identifying the road blocks and difficulties that present themselves when reproducing statistical analysis of data obtained from human behavior. \groupsubsec{7.1 Background} The field of HCI heavily relies on the execution and analysis of empirical studies that involve humans. These results are used to, for example, build new interaction techniques and devices~\cite{Nacenta:2008}, and propose new models of human behavior with computers~\cite{Shoemaker:2012}. The reliability of the analysis and conclusions of these studies have been recently questioned. Problems identified include the fact that barely any results are replicated~\cite{hornbaek:replications}, that the research is often difficult or impossible to replicate~\cite{wilson:2011}, and that replicated results are difficult to publish and share to the larger community~\cite{wilson:2012}. The HCI community is currently trying to address some of these problems through new venues for publication (e.g., RepliCHI~\cite{wilson:2013,wilson:2014}), the creation of new tools~\cite{Mackay:2007}, and efforts to change the research culture and incentives (ACM CHI, arguably the most important conference in the field, introduced in 2013 a replication award or distinction for papers that address replicability). Although the problem of replicability of experiments with humans is difficult and will likely require significant efforts from the community, the recomputability of these results and the associated statistical analysis has received relatively little attention, even though it is probably one of the most significant sources of inaccuracy and incorrect data in the field. Recomputability in HCI experiments refers mostly to the ability of others (not authors) to replicate the statistical analysis and statistical conclusions of a paper utilizing the same recorded data from the experiment. Quantitative experiments in HCI analyse data that is obtained from humans to draw conclusions that are relevant for the understanding or development of interfaces. Although the experiments are necessarily affected by the inherent variability introduced by humans, the analyses should not. Ideally, every researcher in the area, and more specifically, every reviewer of a paper containing statistical analysis of quantitative human data should be able to reproduce the analysis. The ability of reviewers to determine if a statistical analysis and interpretation of the data in a paper are correct is currently limited to checking that the reported degrees of freedom in an ANOVA (or a similar inferential statistic procedure) are consistent with the design of the experiment, and that the intermediate statistic figures (e.g., F values, DOF, p-values) are consistent with the statistic analysis. This is obviously not sufficient to detect even relatively simple errors during analysis that could mean the difference between radically opposite interpretations of the data. 
Examples that have been encountered by some of the authors of this paper include: reading statistics and degrees of freedom from an incorrect column in the software, reading statistics and degrees of freedom from an incorrect table, and performing within-subject analysis on between-subjects data. Some of these errors are virtually impossible to detect if the only provided information are the statistical figures typically found in papers. The problem is further magnified if the analysis is not standard. For example, if a new computational measure is created from the data, it might be impossible to reproduce without having the exact code, and if the analysis applies a machine learning approach there might be large numbers of parameters to adjust and many differently implemented variants of the same analysis (different analysis frameworks might have implementations of the same analysis that might lead to different results). In order to prevent those errors and the significant loss of credibility of the data that they cause, authors should enable the recomputation of statistical and machine learning analysis on the data of any experiment, to the extent allowed by other ethics and privacy issues (Section~\ref{s:group1}). This requires that: a) authors make the data available, b) authors provide suitable meta-data that describes the semantics and structure of the data, c) authors provide instruction on how to reproduce the analysis. Sharing the data and the procedures of the analysis has advantages that go beyond the pure verifiability of the correctness of the result: the data can also be reanalysed (individually or in combination with other sources) to discover new insights, the analysis can serve as educational material for students in the area, and scientific fraud becomes, at least in theory, much harder to perpetrate. In this spirit of openness and scientific integrity, one of the authors (M.A.N.) has been striving to provide the data and the analyses for his own empirical research in HCI. Specifically, a recent project on the memorability of gestures~\cite{Nacenta:memorability} was developed from scratch as a pilot experience that would enable anyone to reproduce the analysis. For this purpose, the data and the basic analysis scripts necessary to perform the inferential statistics contained in the paper were prepared and included as an attachment to the original paper, which is currently accessible through the institutional research repository at the University of St Andrews~\cite{Nacenta:memorability_data}. This data and the required auxiliary files took approximately 6 hours to compile and prepare by the main author (excluding the time spent compiling and designing the statistical analyses). If this paper is representative of other work in the area, this amount of effort on the side of the authors does seem reasonable in exchange for the expected quality improvements for the field that recomputability could deliver. However, we have little knowledge about the challenges and difficulties encountered by the replicators (rather than the authors) in order to verify and check that the analysis is correct. For this purpose, and in the context of the summer school that this article reflects on, we decided to set up an experiment in which the participants of the summer school (and authors of this article) with the exception of the author of the data, would try to replicate the results of the paper. 
The main objective of this research is to learn about the challenges and difficulties of a simple recomputation exercise of standard statistical analysis, to provide real examples of experience in recomputation of analysis in HCI, and to enable improvement of the provided data in the future. \groupsubsec{7.2 Experience Report: Recomputing a Memorability Experiment} The authors of this article (henceforth the \emph{reanalysts}), with the exception of M.A.N., divided themselves into four teams (4, 3, 4, and 5 people per team), each of which would try to reproduce the same selection of results of the gesture memorability study reported in reference~\cite{Nacenta:memorability}. The target results for reanalysis were the averages and ANOVA analyses of the first paragraph of the \emph{Results} section of \emph{Experiment 3}. This paragraph contained three types of analysis: simple calculations of averages (recall rates), omnibus parametric ANOVA analyses, and pairwise post-hoc parametric t-tests (an illustrative sketch of these analyses, in R, is shown below). Approximately half of the reanalysts had a good understanding of HCI or had performed research in the HCI field. To provide sufficient background, the author of the reanalysed paper gave a 20-minute presentation on the content of the paper, aimed at a moderately knowledgeable audience. Reanalysts were allowed to ask any number of questions at the end. The reanalysts also received a physical and a digital version of the original paper and a URL from which to download the data (as distributed originally to the public in~\cite{Nacenta:memorability_data}). The data is provided in a comma-separated file (with column heading names in the first row). The data package also includes IBM SPSS Syntax files (SPSS's scripting language), and a README.txt file containing descriptions of the different files, including explanations of the columns. SPSS Syntax files were provided because SPSS was the platform in which the analyses were performed, and it is commonly used as statistical software for the analysis of experiments in the HCI and Psychology communities. Teams were given approximately 90 minutes to replicate the results contained in the paragraph of the paper indicated above. Two groups opted to try to replicate the results by using SPSS (installed on the machines available to the reanalysts), one opted to replicate the results using R, and one opted to replicate the results using R while simultaneously recording the recomputation in a VM. The leader of the session provided help to the SPSS groups strictly on issues related to the SPSS interface. Each group was asked to assign one person to take notes on a paper notepad of the development of the session (specifically, steps taken, difficulties found, misunderstandings, and breakthroughs). \groupsubsec{7.3 Results} All the reanalysts spent the allocated time working on the recomputation of the results while recording on their notepads the actions and obstacles encountered. After the session was over, the reanalysts publicly shared their results, conclusions, and main obstacles for the benefit of the rest of the groups. The notepads were later analysed by M.A.N. by identifying problems, creating a physical affinity diagram of problems~\cite{hartson:2012}, and identifying the most relevant groups of related problems. The following two subsections report the degree of success achieved in the recomputation and the main categories of challenges and obstacles found.
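For concreteness, the following sketch illustrates in R the kind of analyses the teams were asked to reproduce: per-condition recall-rate averages, an omnibus repeated-measures ANOVA, and pairwise post-hoc t-tests. The sketch relies on assumptions that are not part of the original analysis kit: the file name \texttt{experiment3.csv} and the column names \texttt{participant}, \texttt{technique}, and \texttt{recall} are hypothetical (the actual layout is documented in the shared README.txt), and the original analyses were performed in SPSS rather than R.
\begin{verbatim}
# Hypothetical sketch of the target analyses; the file and column
# names are illustrative and not taken from the shared data package.
d <- read.csv("experiment3.csv")       # comma-separated, headers in row 1
d$participant <- factor(d$participant)
d$technique   <- factor(d$technique)

# 1. Recall rates (averages) per condition
aggregate(recall ~ technique, data = d, FUN = mean)

# 2. Omnibus repeated-measures (within-subjects) ANOVA
summary(aov(recall ~ technique + Error(participant/technique), data = d))

# 3. Pairwise post-hoc t-tests with p-value correction
pairwise.t.test(d$recall, d$technique, paired = TRUE,
                p.adjust.method = "bonferroni")
\end{verbatim}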
\groupsubsubsec{7.3.1 Measures of Success} Groups 2, 3 and 4 were able to achieve some verification of data present in the paper in the allotted time. \emph{Group 1 (SPSS)} were able to open the data, read the README file, and run the script that loads and performs the analysis. They were not, however, able to find the appropriate correspondence between results on the paper and the output of SPSS. \emph{Group 2 (SPSS)} were able to open the data, read the README file, load the data with SPSS independently of the SPSS script, verify the integrity and structure of the data, run the scripts, find one of the ANOVA analyses in the output, and verify its correctness. \emph{Group 3 (R)} were able to find and open the data, read the data with R, and run some basic descriptive statistics (averages). \emph{Group 4 (R + VM)} were able to create a VM to store the analysis of the data, unpack the data, and read the README file; they failed at converting the provided SPSS Syntax scripts into R scripts, but were finally able to reproduce some basic descriptive statistics (averages). \groupsubsubsec{7.3.2 Identified Problems and Challenges} We identified four main groups of problems and challenges: tool problems, cross-tool problems, data and script problems, and required knowledge breadth. \emph{Tool problems.} Reanalysts found it difficult to load data in SPSS and to run scripts, and found the syntax scripts themselves difficult for a human to read. The SPSS model of running scripts and presenting the results in a very long report in a separate window/file was also found confusing. The SPSS Syntax Scripting facility is also difficult to get to work, and can be misleading (the system is not designed with the main goal of running full scripts), even for previous users of the tool. Additionally, the scripts cannot use relative file references, which forces the reanalysts to change the script itself instead of just running it (the folder structure of the reanalysis machine is not necessarily the same as the original machine where the data was first analysed). \emph{Cross-tool problems.} Reanalysts were unsure whether the difference in versions between the software used for the original analysis (SPSS 19) and the tool available for reanalysis (SPSS 21) would cause problems. One group that felt comfortable with R but wanted to take advantage of the provided SPSS Syntax scripts tried to convert one to the other using an existing free R package~\cite{spsstor}; however, the tool was found to be inadequate for this purpose: conversion from one language in one tool to another is a very complex problem, not likely to be solved soon. Additionally, the necessary installation of packages, dependencies, and the VM caused significant overhead. \emph{Data and Script problems.} The data and scripts provided were also not ideal, and generated a number of problems and difficulties. Reanalysts detected inconsistencies in the naming of conditions and columns between the data and the paper, which are due to the authors of the original paper renaming conditions and columns to make the paper more readable. Some groups also tried to identify data based on the SPSS-generated graphics, but these do not correspond to the graphics used in the final version of the paper (SPSS graphics are not of the quality and format required in most scientific publications, and therefore had to be redone). This caused confusion for three groups. Finally, the analyses provided in the SPSS Syntax are exhaustive, containing much more information than the paper.
This caused confusion in reanalysts, who had serious difficulties relating the output generated by the scripts to the data reported in the paper. This was sometimes made worse by the fact that the order of the output was different in the two systems. \emph{Required Knowledge Breadth.} All groups highlighted the depth and breadth of knowledge required to achieve recomputation of data. At the lowest levels of abstraction, reanalysts had to be knowledgeable in SPSS operation. Knowledge of data formats is also a requirement. Those groups that used R for analysis not only had to show a significant mastery of R, but also of the relationship between R and SPSS Syntax and, more importantly, of the specific statistical procedures and how they are performed in both platforms. Finally, the reanalysts had to achieve a grounded understanding of the experimental design and purpose of the experiment, something that requires detailed and thoughtful study. \groupsubsec{7.4 Discussion and Recommendations} Although recomputation work in HCI has focused mainly on the replication of empirical data collection (replicated experiments), there is still much to do (and much benefit to gain) from improving the recomputability of the analyses of the data gathered. In this section we discuss the main issues and lessons learned from our experience, as well as limitations of our methods and a set of recommendations on how to improve the impact and feasibility of recomputation in HCI. \groupsubsubsec{7.4.1 Reasonable Success} Our experience shows that a group of motivated individuals achieved a modest amount of success in reanalyzing a set of simple statistical analyses of HCI empirical data. The results suggest that recomputability is within reach of the experimental HCI community, and that data and analysis sharing practices will allow researchers with a stake in the correctness of other researchers' results to verify their analysis. This is possible even in the current state of affairs (many different tools being used, lack of explicit support for recomputation), but requires a significant amount of time, effort, and expertise from multiple sources. This time and effort is often not available for recomputation scenarios that require agile and fast reanalysis, such as paper reviewing. For this, tool support and a culture change will be required. \groupsubsubsec{7.4.2 Tools are Key (and not ready)} SPSS might be an adequate tool for performing statistical analysis; it is successfully used by many in HCI and many other areas. Reanalysis, however, imposes a different set of constraints and requirements, and our reanalysts had many problems with the tool. Some problems relate to the general usability of the tool (which makes reanalysis difficult if you are not an SPSS expert), some to the implicit design assumption in SPSS that the data is collected, analysed and interpreted by the same person. One of the key problems of using SPSS to enable recomputation is that there is no easy way to establish a clear correspondence between the results of the analysis in SPSS and the specific statistics extracted for the paper text, tables and graphics. R seems better suited for these tasks. It is possible to write R code that integrates with the text through Sweave~\cite{lmucs-papers:Leisch:2002} so that the specific analyses are compiled together with the PDF document. This makes the origin and procedure used to obtain a particular numerical result traceable to the data, and therefore easier to check and recompute.
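As an illustration of this kind of traceability, a minimal Sweave-style fragment might look like the sketch below. The file name, column name, and chunk label are hypothetical and are not taken from the original analysis kit; the point is only that the reported number is recomputed from the raw data every time the document is compiled, and that the \verb|\Sexpr| call places the freshly computed value directly into the sentence that reports it.
\begin{verbatim}
<<recall-summary, echo=FALSE>>=
# Hypothetical file and column names, used only for illustration.
d <- read.csv("experiment3.csv")
recall.mean <- round(mean(d$recall) * 100, 1)
@
Participants recalled on average \Sexpr{recall.mean}\% of the
gestures ...
\end{verbatim}
In such a setup, a reanalyst can trace every reported figure back to the chunk, and ultimately to the data file, that produced it.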
Although this is highly desirable, it might still be unreasonable to demand that everyone writing or reviewing HCI and psychology papers master a programming language and tools that are generally not renowned for their usability, and that everyone is able to deal with the installation hassles of R, Sweave, ggplot2, \LaTeX, etc. in their operating system of choice. There is room for improvement in these tools, and distributing VMs may further help, but commercial tools still have an opportunity to retain their business if they provide features that adapt to the demands of easy recomputation and better support for scientific reporting. Although our experience only involved R and SPSS, the example extrapolates to other commercial tools (e.g., SAS) and open source projects (SciPy) in the statistical arena. \groupsubsubsec{7.4.3 A Culture Change} Recomputation is therefore feasible and likely to become easier in the near future through better support and tools. However, it is unclear whether the HCI research community will embrace it. Recomputability requires more work for researchers writing papers, new habits in the analysis and reporting of experiments and, for most researchers, learning and mastering new tools. A change of culture will, however, not only mean better science through more recomputable results, but also enhanced opportunities for new analysis on old data, enabling learning from others, and making scientific fraud and bad practices easier to detect. For this all to happen, we need to start demanding from authors that they share data and analyses, and that they consider the needs of the reanalyst while planning, performing, and reporting their quantitative empirical research. A small example of this is the current effort by one of the authors to make data available to the research community through purpose-made interfaces that enable analysis and reanalysis of previous results~\cite{Grijincu:2014}. \groupsubsubsec{7.4.4 Limitations} To our knowledge, this section reports the first study of recomputation of the statistical analysis of HCI empirical data. We have been able to learn valuable lessons from this experience, including ways to improve the actual data and analysis kit for the original study. However, this only represents a semi-informal study with semi-controlled observation for one specific case analysed using a specific tool (SPSS). Further research is required to validate these results and generalize the lessons learned to other tools and other types of reanalysts; specifically, it would be useful to investigate how experts in a particular field go about reanalysing existing results, and what specific barriers are present when the data and analyses are prepared with a more sophisticated system such as R with Sweave and \LaTeX. \groupsubsubsec{7.4.5 Recommendations} For the recomputability of quantitative analyses in HCI research, we make the following recommendations: \begin{itemize} \item When possible, share the raw data and analysis for experiments to enable recomputability. \item Aim to reduce the knowledge required to reanalyse data. Reanalyst teams already require knowledge of the topic area, the reanalysis tool, and the computational procedures. \item Make results explicitly traceable from computation to report. \item After the paper is written, revise and adapt data for consistency of nomenclature of factors and condition names.
\item Due to cost, fitness, and availability, favor open-source tool platforms for analysis and reanalysis preparation (at least for the moment). \item To reduce the overhead of reanalysts having to learn open tools, also provide clear instructions with the data and links to resources for learning and using the reanalysis tools. \item To establish a replicability research culture, demand that research authors provide data and analysis at publication time. \end{itemize}
State Before: R✝ : Type u inst✝¹ : CommRing R✝ R : Type u_1 inst✝ : CommRing R x y : PrimeSpectrum R h : x ⤳ y ⊢ stalkSpecializes (Sheaf.presheaf (structureSheaf R)) h ≫ stalkToFiberRingHom R x = stalkToFiberRingHom R y ≫ let_fun this := PrimeSpectrum.localizationMapOfSpecializes h; this State After: R✝ : Type u inst✝¹ : CommRing R✝ R : Type u_1 inst✝ : CommRing R x y : PrimeSpectrum R h : x ⤳ y ⊢ stalkSpecializes (Sheaf.presheaf (structureSheaf R)) h ≫ (stalkIso R x).hom = (stalkIso R y).hom ≫ let_fun this := PrimeSpectrum.localizationMapOfSpecializes h; this Tactic: change _ ≫ (StructureSheaf.stalkIso R x).hom = (StructureSheaf.stalkIso R y).hom ≫ _ State Before: R✝ : Type u inst✝¹ : CommRing R✝ R : Type u_1 inst✝ : CommRing R x y : PrimeSpectrum R h : x ⤳ y ⊢ stalkSpecializes (Sheaf.presheaf (structureSheaf R)) h ≫ (stalkIso R x).hom = (stalkIso R y).hom ≫ let_fun this := PrimeSpectrum.localizationMapOfSpecializes h; this State After: R✝ : Type u inst✝¹ : CommRing R✝ R : Type u_1 inst✝ : CommRing R x y : PrimeSpectrum R h : x ⤳ y ⊢ (stalkIso R y).inv ≫ stalkSpecializes (Sheaf.presheaf (structureSheaf R)) h = (let_fun this := PrimeSpectrum.localizationMapOfSpecializes h; this) ≫ (stalkIso R x).inv Tactic: rw [← Iso.eq_comp_inv, Category.assoc, ← Iso.inv_comp_eq] State Before: R✝ : Type u inst✝¹ : CommRing R✝ R : Type u_1 inst✝ : CommRing R x y : PrimeSpectrum R h : x ⤳ y ⊢ (stalkIso R y).inv ≫ stalkSpecializes (Sheaf.presheaf (structureSheaf R)) h = (let_fun this := PrimeSpectrum.localizationMapOfSpecializes h; this) ≫ (stalkIso R x).inv State After: no goals Tactic: exact localizationToStalk_stalkSpecializes h
Require Import Crypto.Arithmetic.PrimeFieldTheorems. Require Import Crypto.Specific.montgomery32_2e137m13_5limbs.Synthesis. (* TODO : change this to field once field isomorphism happens *) Definition opp : { opp : feBW_small -> feBW_small | forall a, phiM_small (opp a) = F.opp (phiM_small a) }. Proof. Set Ltac Profiling. Time synthesize_opp (). Show Ltac Profile. Time Defined. Print Assumptions opp.
lemma fixes f g :: "complex fps" and r :: ereal defines "R \<equiv> Min {r, fps_conv_radius f, fps_conv_radius g}" assumes "subdegree g \<le> subdegree f" assumes "fps_conv_radius f > 0" "fps_conv_radius g > 0" "r > 0" assumes "\<And>z. z \<in> eball 0 r \<Longrightarrow> z \<noteq> 0 \<Longrightarrow> eval_fps g z \<noteq> 0" shows fps_conv_radius_divide: "fps_conv_radius (f / g) \<ge> R" and eval_fps_divide: "ereal (norm z) < R \<Longrightarrow> c = fps_nth f (subdegree g) / fps_nth g (subdegree g) \<Longrightarrow> eval_fps (f / g) z = (if z = 0 then c else eval_fps f z / eval_fps g z)"
[GOAL] 𝕜 : Type u A : Type v inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A a b : 𝓜(𝕜, A) h : a.toProd = b.toProd ⊢ a = b [PROOFSTEP] cases a [GOAL] case mk 𝕜 : Type u A : Type v inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A b : 𝓜(𝕜, A) toProd✝ : (A →L[𝕜] A) × (A →L[𝕜] A) central✝ : ∀ (x y : A), ↑toProd✝.snd x * y = x * ↑toProd✝.fst y h : { toProd := toProd✝, central := central✝ }.toProd = b.toProd ⊢ { toProd := toProd✝, central := central✝ } = b [PROOFSTEP] cases b [GOAL] case mk.mk 𝕜 : Type u A : Type v inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A toProd✝¹ : (A →L[𝕜] A) × (A →L[𝕜] A) central✝¹ : ∀ (x y : A), ↑toProd✝¹.snd x * y = x * ↑toProd✝¹.fst y toProd✝ : (A →L[𝕜] A) × (A →L[𝕜] A) central✝ : ∀ (x y : A), ↑toProd✝.snd x * y = x * ↑toProd✝.fst y h : { toProd := toProd✝¹, central := central✝¹ }.toProd = { toProd := toProd✝, central := central✝ }.toProd ⊢ { toProd := toProd✝¹, central := central✝¹ } = { toProd := toProd✝, central := central✝ } [PROOFSTEP] simpa using h [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A x : (A →L[𝕜] A) × (A →L[𝕜] A) ⊢ x ∈ Set.range toProd → x ∈ {lr | ∀ (x y : A), ↑lr.snd x * y = x * ↑lr.fst y} [PROOFSTEP] rintro ⟨a, rfl⟩ [GOAL] case intro 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A a : 𝓜(𝕜, A) ⊢ a.toProd ∈ {lr | ∀ (x y : A), ↑lr.snd x * y = x * ↑lr.fst y} [PROOFSTEP] exact a.central [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A a b : 𝓜(𝕜, A) x y : A ⊢ ↑(a.snd + b.snd) x * y = x * ↑(a.fst + b.fst) y [PROOFSTEP] simp only [ContinuousLinearMap.add_apply, mul_add, add_mul, central] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A a : 𝓜(𝕜, A) x y : A ⊢ -↑a.snd x * y = x * -↑a.fst y [PROOFSTEP] simp only [ContinuousLinearMap.neg_apply, neg_mul, mul_neg, central] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A a b : 𝓜(𝕜, A) x y : A ⊢ ↑(a.snd - b.snd) x * y = x * ↑(a.fst - b.fst) y [PROOFSTEP] simp only [ContinuousLinearMap.sub_apply, _root_.sub_mul, _root_.mul_sub, central] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝¹⁰ : NontriviallyNormedField 𝕜 inst✝⁹ : NonUnitalNormedRing A inst✝⁸ : NormedSpace 𝕜 A inst✝⁷ : SMulCommClass 𝕜 A A inst✝⁶ : IsScalarTower 𝕜 A A S : Type u_3 inst✝⁵ : Monoid S inst✝⁴ : DistribMulAction S A inst✝³ : SMulCommClass 𝕜 S A inst✝² : ContinuousConstSMul S A inst✝¹ : IsScalarTower S A A inst✝ : SMulCommClass S A A s : S a : 𝓜(𝕜, A) x y : A ⊢ ↑(s • a.snd) x * y = x * ↑(s • a.fst) y [PROOFSTEP] simp only [ContinuousLinearMap.smul_apply, mul_smul_comm, smul_mul_assoc, central] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A 
inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A a b : 𝓜(𝕜, A) x y : A ⊢ ↑b.snd (↑a.snd x) * y = x * ↑a.fst (↑b.fst y) [PROOFSTEP] simp only [central] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A n : ℕ x y : A ⊢ ↑(↑n).snd x * y = x * ↑(↑n).fst y [PROOFSTEP] rw [Prod.snd_natCast, Prod.fst_natCast] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A n : ℕ x y : A ⊢ ↑↑n x * y = x * ↑↑n y [PROOFSTEP] simp only [← Nat.smul_one_eq_coe, smul_apply, one_apply, mul_smul_comm, smul_mul_assoc] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A n : ℤ x y : A ⊢ ↑(↑n).snd x * y = x * ↑(↑n).fst y [PROOFSTEP] rw [Prod.snd_intCast, Prod.fst_intCast] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A n : ℤ x y : A ⊢ ↑↑n x * y = x * ↑↑n y [PROOFSTEP] simp only [← Int.smul_one_eq_coe, smul_apply, one_apply, mul_smul_comm, smul_mul_assoc] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A a : 𝓜(𝕜, A) n : ℕ x y : A ⊢ ↑(a.toProd ^ n).snd x * y = x * ↑(a.toProd ^ n).fst y [PROOFSTEP] induction' n with k hk generalizing x y [GOAL] case zero 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A a : 𝓜(𝕜, A) x✝ y✝ x y : A ⊢ ↑(a.toProd ^ Nat.zero).snd x * y = x * ↑(a.toProd ^ Nat.zero).fst y [PROOFSTEP] rfl [GOAL] case succ 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A a : 𝓜(𝕜, A) x✝ y✝ : A k : ℕ hk : ∀ (x y : A), ↑(a.toProd ^ k).snd x * y = x * ↑(a.toProd ^ k).fst y x y : A ⊢ ↑(a.toProd ^ Nat.succ k).snd x * y = x * ↑(a.toProd ^ Nat.succ k).fst y [PROOFSTEP] rw [Prod.pow_snd, Prod.pow_fst] at hk ⊢ [GOAL] case succ 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A a : 𝓜(𝕜, A) x✝ y✝ : A k : ℕ hk : ∀ (x y : A), ↑(a.snd ^ k) x * y = x * ↑(a.fst ^ k) y x y : A ⊢ ↑(a.snd ^ Nat.succ k) x * y = x * ↑(a.fst ^ Nat.succ k) y [PROOFSTEP] rw [pow_succ a.snd, mul_apply, a.central, hk, pow_succ' a.fst, mul_apply] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A x : (A →L[𝕜] A) × (A →L[𝕜] A)ᵐᵒᵖ ⊢ x ∈ Set.range toProdMulOpposite → x ∈ {lr | ∀ (x y : A), ↑(unop lr.snd) x * y = x * ↑lr.fst y} [PROOFSTEP] rintro ⟨a, rfl⟩ [GOAL] case intro 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A a : 𝓜(𝕜, A) ⊢ toProdMulOpposite a ∈ {lr | ∀ (x y : A), ↑(unop lr.snd) x * y = x * ↑lr.fst y} [PROOFSTEP] exact a.central [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 
𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A k : 𝕜 x y : A ⊢ ↑(↑(algebraMap 𝕜 ((A →L[𝕜] A) × (A →L[𝕜] A))) k).snd x * y = x * ↑(↑(algebraMap 𝕜 ((A →L[𝕜] A) × (A →L[𝕜] A))) k).fst y [PROOFSTEP] simp_rw [Prod.algebraMap_apply, Algebra.algebraMap_eq_smul_one, smul_apply, one_apply, mul_smul_comm, smul_mul_assoc] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : NontriviallyNormedField 𝕜 inst✝⁷ : NonUnitalNormedRing A inst✝⁶ : NormedSpace 𝕜 A inst✝⁵ : SMulCommClass 𝕜 A A inst✝⁴ : IsScalarTower 𝕜 A A inst✝³ : StarRing 𝕜 inst✝² : StarRing A inst✝¹ : StarModule 𝕜 A inst✝ : NormedStarGroup A a : 𝓜(𝕜, A) x y : A ⊢ ↑(comp (comp (↑(ContinuousLinearEquiv.mk (starₗᵢ 𝕜).toLinearEquiv)) a.snd) ↑(ContinuousLinearEquiv.mk (starₗᵢ 𝕜).toLinearEquiv), comp (comp (↑(ContinuousLinearEquiv.mk (starₗᵢ 𝕜).toLinearEquiv)) a.fst) ↑(ContinuousLinearEquiv.mk (starₗᵢ 𝕜).toLinearEquiv)).snd x * y = x * ↑(comp (comp (↑(ContinuousLinearEquiv.mk (starₗᵢ 𝕜).toLinearEquiv)) a.snd) ↑(ContinuousLinearEquiv.mk (starₗᵢ 𝕜).toLinearEquiv), comp (comp (↑(ContinuousLinearEquiv.mk (starₗᵢ 𝕜).toLinearEquiv)) a.fst) ↑(ContinuousLinearEquiv.mk (starₗᵢ 𝕜).toLinearEquiv)).fst y [PROOFSTEP] simpa only [star_mul, star_star] using (congr_arg star (a.central (star y) (star x))).symm [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : NontriviallyNormedField 𝕜 inst✝⁷ : NonUnitalNormedRing A inst✝⁶ : NormedSpace 𝕜 A inst✝⁵ : SMulCommClass 𝕜 A A inst✝⁴ : IsScalarTower 𝕜 A A inst✝³ : StarRing 𝕜 inst✝² : StarRing A inst✝¹ : StarModule 𝕜 A inst✝ : NormedStarGroup A src✝ : Star 𝓜(𝕜, A) := instStar x : 𝓜(𝕜, A) ⊢ star (star x) = x [PROOFSTEP] ext [GOAL] case h.h₁.h 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : NontriviallyNormedField 𝕜 inst✝⁷ : NonUnitalNormedRing A inst✝⁶ : NormedSpace 𝕜 A inst✝⁵ : SMulCommClass 𝕜 A A inst✝⁴ : IsScalarTower 𝕜 A A inst✝³ : StarRing 𝕜 inst✝² : StarRing A inst✝¹ : StarModule 𝕜 A inst✝ : NormedStarGroup A src✝ : Star 𝓜(𝕜, A) := instStar x : 𝓜(𝕜, A) x✝ : A ⊢ ↑(star (star x)).fst x✝ = ↑x.fst x✝ [PROOFSTEP] simp only [star_fst, star_snd, star_star] [GOAL] case h.h₂.h 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : NontriviallyNormedField 𝕜 inst✝⁷ : NonUnitalNormedRing A inst✝⁶ : NormedSpace 𝕜 A inst✝⁵ : SMulCommClass 𝕜 A A inst✝⁴ : IsScalarTower 𝕜 A A inst✝³ : StarRing 𝕜 inst✝² : StarRing A inst✝¹ : StarModule 𝕜 A inst✝ : NormedStarGroup A src✝ : Star 𝓜(𝕜, A) := instStar x : 𝓜(𝕜, A) x✝ : A ⊢ ↑(star (star x)).snd x✝ = ↑x.snd x✝ [PROOFSTEP] simp only [star_fst, star_snd, star_star] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : NontriviallyNormedField 𝕜 inst✝⁷ : NonUnitalNormedRing A inst✝⁶ : NormedSpace 𝕜 A inst✝⁵ : SMulCommClass 𝕜 A A inst✝⁴ : IsScalarTower 𝕜 A A inst✝³ : StarRing 𝕜 inst✝² : StarRing A inst✝¹ : StarModule 𝕜 A inst✝ : NormedStarGroup A src✝ : Star 𝓜(𝕜, A) := instStar x y : 𝓜(𝕜, A) ⊢ star (x + y) = star x + star y [PROOFSTEP] ext [GOAL] case h.h₁.h 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : NontriviallyNormedField 𝕜 inst✝⁷ : NonUnitalNormedRing A inst✝⁶ : NormedSpace 𝕜 A inst✝⁵ : SMulCommClass 𝕜 A A inst✝⁴ : IsScalarTower 𝕜 A A inst✝³ : StarRing 𝕜 inst✝² : StarRing A inst✝¹ : StarModule 𝕜 A inst✝ : NormedStarGroup A src✝ : Star 𝓜(𝕜, A) := instStar x y : 𝓜(𝕜, A) x✝ : A ⊢ ↑(star (x + y)).fst x✝ = ↑(star x + star y).fst x✝ [PROOFSTEP] simp only [star_fst, star_snd, add_fst, add_snd, ContinuousLinearMap.add_apply, star_add] [GOAL] case h.h₂.h 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : NontriviallyNormedField 𝕜 inst✝⁷ : NonUnitalNormedRing A inst✝⁶ : NormedSpace 𝕜 A inst✝⁵ : SMulCommClass 𝕜 A A inst✝⁴ : 
IsScalarTower 𝕜 A A inst✝³ : StarRing 𝕜 inst✝² : StarRing A inst✝¹ : StarModule 𝕜 A inst✝ : NormedStarGroup A src✝ : Star 𝓜(𝕜, A) := instStar x y : 𝓜(𝕜, A) x✝ : A ⊢ ↑(star (x + y)).snd x✝ = ↑(star x + star y).snd x✝ [PROOFSTEP] simp only [star_fst, star_snd, add_fst, add_snd, ContinuousLinearMap.add_apply, star_add] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : NontriviallyNormedField 𝕜 inst✝⁷ : NonUnitalNormedRing A inst✝⁶ : NormedSpace 𝕜 A inst✝⁵ : SMulCommClass 𝕜 A A inst✝⁴ : IsScalarTower 𝕜 A A inst✝³ : StarRing 𝕜 inst✝² : StarRing A inst✝¹ : StarModule 𝕜 A inst✝ : NormedStarGroup A src✝ : StarAddMonoid 𝓜(𝕜, A) := instStarAddMonoid a b : 𝓜(𝕜, A) ⊢ star (a * b) = star b * star a [PROOFSTEP] ext [GOAL] case h.h₁.h 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : NontriviallyNormedField 𝕜 inst✝⁷ : NonUnitalNormedRing A inst✝⁶ : NormedSpace 𝕜 A inst✝⁵ : SMulCommClass 𝕜 A A inst✝⁴ : IsScalarTower 𝕜 A A inst✝³ : StarRing 𝕜 inst✝² : StarRing A inst✝¹ : StarModule 𝕜 A inst✝ : NormedStarGroup A src✝ : StarAddMonoid 𝓜(𝕜, A) := instStarAddMonoid a b : 𝓜(𝕜, A) x✝ : A ⊢ ↑(star (a * b)).fst x✝ = ↑(star b * star a).fst x✝ [PROOFSTEP] simp only [star_fst, star_snd, mul_fst, mul_snd, star_star, ContinuousLinearMap.coe_mul, Function.comp_apply] [GOAL] case h.h₂.h 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : NontriviallyNormedField 𝕜 inst✝⁷ : NonUnitalNormedRing A inst✝⁶ : NormedSpace 𝕜 A inst✝⁵ : SMulCommClass 𝕜 A A inst✝⁴ : IsScalarTower 𝕜 A A inst✝³ : StarRing 𝕜 inst✝² : StarRing A inst✝¹ : StarModule 𝕜 A inst✝ : NormedStarGroup A src✝ : StarAddMonoid 𝓜(𝕜, A) := instStarAddMonoid a b : 𝓜(𝕜, A) x✝ : A ⊢ ↑(star (a * b)).snd x✝ = ↑(star b * star a).snd x✝ [PROOFSTEP] simp only [star_fst, star_snd, mul_fst, mul_snd, star_star, ContinuousLinearMap.coe_mul, Function.comp_apply] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : NontriviallyNormedField 𝕜 inst✝⁷ : NonUnitalNormedRing A inst✝⁶ : NormedSpace 𝕜 A inst✝⁵ : SMulCommClass 𝕜 A A inst✝⁴ : IsScalarTower 𝕜 A A inst✝³ : StarRing 𝕜 inst✝² : StarRing A inst✝¹ : StarModule 𝕜 A inst✝ : NormedStarGroup A src✝ : StarAddMonoid 𝓜(𝕜, A) := instStarAddMonoid k : 𝕜 a : 𝓜(𝕜, A) ⊢ star (k • a) = star k • star a [PROOFSTEP] ext [GOAL] case h.h₁.h 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : NontriviallyNormedField 𝕜 inst✝⁷ : NonUnitalNormedRing A inst✝⁶ : NormedSpace 𝕜 A inst✝⁵ : SMulCommClass 𝕜 A A inst✝⁴ : IsScalarTower 𝕜 A A inst✝³ : StarRing 𝕜 inst✝² : StarRing A inst✝¹ : StarModule 𝕜 A inst✝ : NormedStarGroup A src✝ : StarAddMonoid 𝓜(𝕜, A) := instStarAddMonoid k : 𝕜 a : 𝓜(𝕜, A) x✝ : A ⊢ ↑(star (k • a)).fst x✝ = ↑(star k • star a).fst x✝ [PROOFSTEP] exact star_smul _ _ [GOAL] case h.h₂.h 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : NontriviallyNormedField 𝕜 inst✝⁷ : NonUnitalNormedRing A inst✝⁶ : NormedSpace 𝕜 A inst✝⁵ : SMulCommClass 𝕜 A A inst✝⁴ : IsScalarTower 𝕜 A A inst✝³ : StarRing 𝕜 inst✝² : StarRing A inst✝¹ : StarModule 𝕜 A inst✝ : NormedStarGroup A src✝ : StarAddMonoid 𝓜(𝕜, A) := instStarAddMonoid k : 𝕜 a : 𝓜(𝕜, A) x✝ : A ⊢ ↑(star (k • a)).snd x✝ = ↑(star k • star a).snd x✝ [PROOFSTEP] exact star_smul _ _ [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A ⊢ ↑𝕜 = ↑(algebraMap 𝕜 𝓜(𝕜, 𝕜)) [PROOFSTEP] ext x : 3 [GOAL] case h.h.h₁ 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A x : 𝕜 ⊢ (↑𝕜 x).toProd.fst = (↑(algebraMap 𝕜 𝓜(𝕜, 𝕜)) x).fst [PROOFSTEP] rfl -- `fst` is 
defeq [GOAL] case h.h.h₂ 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A x : 𝕜 ⊢ (↑𝕜 x).toProd.snd = (↑(algebraMap 𝕜 𝓜(𝕜, 𝕜)) x).snd [PROOFSTEP] refine ContinuousLinearMap.ext fun y => ?_ [GOAL] case h.h.h₂ 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A x y : 𝕜 ⊢ ↑(↑𝕜 x).toProd.snd y = ↑(↑(algebraMap 𝕜 𝓜(𝕜, 𝕜)) x).snd y [PROOFSTEP] exact mul_comm y x [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : NonUnitalNormedRing A inst✝² : NormedSpace 𝕜 A inst✝¹ : SMulCommClass 𝕜 A A inst✝ : IsScalarTower 𝕜 A A ⊢ Function.Injective ↑toProdMulOppositeHom [PROOFSTEP] simpa using toProdMulOpposite_injective [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁵ : NontriviallyNormedField 𝕜 inst✝⁴ : NonUnitalNormedRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : CompleteSpace A ⊢ CompleteSpace 𝓜(𝕜, A) [PROOFSTEP] rw [completeSpace_iff_isComplete_range uniformEmbedding_toProdMulOpposite.toUniformInducing] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁵ : NontriviallyNormedField 𝕜 inst✝⁴ : NonUnitalNormedRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : CompleteSpace A ⊢ IsComplete (Set.range toProdMulOpposite) [PROOFSTEP] apply IsClosed.isComplete [GOAL] case h 𝕜 : Type u_1 A : Type u_2 inst✝⁵ : NontriviallyNormedField 𝕜 inst✝⁴ : NonUnitalNormedRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : CompleteSpace A ⊢ IsClosed (Set.range toProdMulOpposite) [PROOFSTEP] simp only [range_toProdMulOpposite, Set.setOf_forall] [GOAL] case h 𝕜 : Type u_1 A : Type u_2 inst✝⁵ : NontriviallyNormedField 𝕜 inst✝⁴ : NonUnitalNormedRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : CompleteSpace A ⊢ IsClosed (⋂ (i : A) (i_1 : A), {x | ↑(unop x.snd) i * i_1 = i * ↑x.fst i_1}) [PROOFSTEP] refine' isClosed_iInter fun x => isClosed_iInter fun y => isClosed_eq _ _ [GOAL] case h.refine'_1 𝕜 : Type u_1 A : Type u_2 inst✝⁵ : NontriviallyNormedField 𝕜 inst✝⁴ : NonUnitalNormedRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : CompleteSpace A x y : A ⊢ Continuous fun x_1 => ↑(unop x_1.snd) x * y case h.refine'_2 𝕜 : Type u_1 A : Type u_2 inst✝⁵ : NontriviallyNormedField 𝕜 inst✝⁴ : NonUnitalNormedRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : CompleteSpace A x y : A ⊢ Continuous fun x_1 => x * ↑x_1.fst y [PROOFSTEP] exact ((ContinuousLinearMap.apply 𝕜 A _).continuous.comp <| continuous_unop.comp continuous_snd).mul continuous_const [GOAL] case h.refine'_2 𝕜 : Type u_1 A : Type u_2 inst✝⁵ : NontriviallyNormedField 𝕜 inst✝⁴ : NonUnitalNormedRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : CompleteSpace A x y : A ⊢ Continuous fun x_1 => x * ↑x_1.fst y [PROOFSTEP] exact continuous_const.mul ((ContinuousLinearMap.apply 𝕜 A _).continuous.comp continuous_fst) [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) ⊢ ‖a.fst‖ = ‖a.snd‖ [PROOFSTEP] have h0 : ∀ f : A →L[𝕜] A, ∀ C : ℝ≥0, (∀ 
b : A, ‖f b‖₊ ^ 2 ≤ C * ‖f b‖₊ * ‖b‖₊) → ‖f‖₊ ≤ C := by intro f C h have h1 : ∀ b, C * ‖f b‖₊ * ‖b‖₊ ≤ C * ‖f‖₊ * ‖b‖₊ ^ 2 := by intro b convert mul_le_mul_right' (mul_le_mul_left' (f.le_op_nnnorm b) C) ‖b‖₊ using 1 ring have := NNReal.div_le_of_le_mul (f.op_nnnorm_le_bound _ (by simpa only [sqrt_sq, sqrt_mul] using fun b => sqrt_le_sqrt_iff.mpr ((h b).trans (h1 b)))) convert NNReal.rpow_le_rpow this two_pos.le · simp only [NNReal.rpow_two, div_pow, sq_sqrt] simp only [sq, mul_self_div_self] · simp only [NNReal.rpow_two, sq_sqrt] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) ⊢ ∀ (f : A →L[𝕜] A) (C : ℝ≥0), (∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊) → ‖f‖₊ ≤ C [PROOFSTEP] intro f C h [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) f : A →L[𝕜] A C : ℝ≥0 h : ∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊ ⊢ ‖f‖₊ ≤ C [PROOFSTEP] have h1 : ∀ b, C * ‖f b‖₊ * ‖b‖₊ ≤ C * ‖f‖₊ * ‖b‖₊ ^ 2 := by intro b convert mul_le_mul_right' (mul_le_mul_left' (f.le_op_nnnorm b) C) ‖b‖₊ using 1 ring [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) f : A →L[𝕜] A C : ℝ≥0 h : ∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊ ⊢ ∀ (b : A), C * ‖↑f b‖₊ * ‖b‖₊ ≤ C * ‖f‖₊ * ‖b‖₊ ^ 2 [PROOFSTEP] intro b [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) f : A →L[𝕜] A C : ℝ≥0 h : ∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊ b : A ⊢ C * ‖↑f b‖₊ * ‖b‖₊ ≤ C * ‖f‖₊ * ‖b‖₊ ^ 2 [PROOFSTEP] convert mul_le_mul_right' (mul_le_mul_left' (f.le_op_nnnorm b) C) ‖b‖₊ using 1 [GOAL] case h.e'_4 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) f : A →L[𝕜] A C : ℝ≥0 h : ∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊ b : A ⊢ C * ‖f‖₊ * ‖b‖₊ ^ 2 = C * (‖f‖₊ * ‖b‖₊) * ‖b‖₊ [PROOFSTEP] ring [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) f : A →L[𝕜] A C : ℝ≥0 h : ∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊ h1 : ∀ (b : A), C * ‖↑f b‖₊ * ‖b‖₊ ≤ C * ‖f‖₊ * ‖b‖₊ ^ 2 ⊢ ‖f‖₊ ≤ C [PROOFSTEP] have := NNReal.div_le_of_le_mul (f.op_nnnorm_le_bound _ (by simpa only [sqrt_sq, sqrt_mul] using fun b => sqrt_le_sqrt_iff.mpr ((h b).trans (h1 b)))) [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) f : A →L[𝕜] A C : ℝ≥0 h : ∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊ h1 : ∀ (b : A), C * ‖↑f b‖₊ * ‖b‖₊ ≤ C * ‖f‖₊ * ‖b‖₊ ^ 2 ⊢ ∀ (x : A), ‖↑f x‖₊ ≤ ?m.1273079 * ?m.1273080 * ‖x‖₊ [PROOFSTEP] simpa only [sqrt_sq, 
sqrt_mul] using fun b => sqrt_le_sqrt_iff.mpr ((h b).trans (h1 b)) [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) f : A →L[𝕜] A C : ℝ≥0 h : ∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊ h1 : ∀ (b : A), C * ‖↑f b‖₊ * ‖b‖₊ ≤ C * ‖f‖₊ * ‖b‖₊ ^ 2 this : ‖f‖₊ / ↑sqrt ‖f‖₊ ≤ ↑sqrt C ⊢ ‖f‖₊ ≤ C [PROOFSTEP] convert NNReal.rpow_le_rpow this two_pos.le [GOAL] case h.e'_3 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) f : A →L[𝕜] A C : ℝ≥0 h : ∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊ h1 : ∀ (b : A), C * ‖↑f b‖₊ * ‖b‖₊ ≤ C * ‖f‖₊ * ‖b‖₊ ^ 2 this : ‖f‖₊ / ↑sqrt ‖f‖₊ ≤ ↑sqrt C ⊢ ‖f‖₊ = (‖f‖₊ / ↑sqrt ‖f‖₊) ^ 2 [PROOFSTEP] simp only [NNReal.rpow_two, div_pow, sq_sqrt] [GOAL] case h.e'_3 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) f : A →L[𝕜] A C : ℝ≥0 h : ∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊ h1 : ∀ (b : A), C * ‖↑f b‖₊ * ‖b‖₊ ≤ C * ‖f‖₊ * ‖b‖₊ ^ 2 this : ‖f‖₊ / ↑sqrt ‖f‖₊ ≤ ↑sqrt C ⊢ ‖f‖₊ = ‖f‖₊ ^ 2 / ‖f‖₊ [PROOFSTEP] simp only [sq, mul_self_div_self] [GOAL] case h.e'_4 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) f : A →L[𝕜] A C : ℝ≥0 h : ∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊ h1 : ∀ (b : A), C * ‖↑f b‖₊ * ‖b‖₊ ≤ C * ‖f‖₊ * ‖b‖₊ ^ 2 this : ‖f‖₊ / ↑sqrt ‖f‖₊ ≤ ↑sqrt C ⊢ C = ↑sqrt C ^ 2 [PROOFSTEP] simp only [NNReal.rpow_two, sq_sqrt] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) h0 : ∀ (f : A →L[𝕜] A) (C : ℝ≥0), (∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊) → ‖f‖₊ ≤ C ⊢ ‖a.fst‖ = ‖a.snd‖ [PROOFSTEP] have h1 : ∀ b, ‖a.fst b‖₊ ^ 2 ≤ ‖a.snd‖₊ * ‖a.fst b‖₊ * ‖b‖₊ := by intro b calc ‖a.fst b‖₊ ^ 2 = ‖star (a.fst b) * a.fst b‖₊ := by simpa only [← sq] using CstarRing.nnnorm_star_mul_self.symm _ ≤ ‖a.snd (star (a.fst b))‖₊ * ‖b‖₊ := (a.central (star (a.fst b)) b ▸ nnnorm_mul_le _ _) _ ≤ ‖a.snd‖₊ * ‖a.fst b‖₊ * ‖b‖₊ := nnnorm_star (a.fst b) ▸ mul_le_mul_right' (a.snd.le_op_nnnorm _) _ [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) h0 : ∀ (f : A →L[𝕜] A) (C : ℝ≥0), (∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊) → ‖f‖₊ ≤ C ⊢ ∀ (b : A), ‖↑a.fst b‖₊ ^ 2 ≤ ‖a.snd‖₊ * ‖↑a.fst b‖₊ * ‖b‖₊ [PROOFSTEP] intro b [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) h0 : ∀ (f : A →L[𝕜] A) (C : ℝ≥0), (∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊) → ‖f‖₊ ≤ C b : A ⊢ ‖↑a.fst b‖₊ ^ 2 ≤ ‖a.snd‖₊ * ‖↑a.fst b‖₊ * ‖b‖₊ [PROOFSTEP] calc ‖a.fst b‖₊ ^ 2 = ‖star 
(a.fst b) * a.fst b‖₊ := by simpa only [← sq] using CstarRing.nnnorm_star_mul_self.symm _ ≤ ‖a.snd (star (a.fst b))‖₊ * ‖b‖₊ := (a.central (star (a.fst b)) b ▸ nnnorm_mul_le _ _) _ ≤ ‖a.snd‖₊ * ‖a.fst b‖₊ * ‖b‖₊ := nnnorm_star (a.fst b) ▸ mul_le_mul_right' (a.snd.le_op_nnnorm _) _ [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) h0 : ∀ (f : A →L[𝕜] A) (C : ℝ≥0), (∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊) → ‖f‖₊ ≤ C b : A ⊢ ‖↑a.fst b‖₊ ^ 2 = ‖star (↑a.fst b) * ↑a.fst b‖₊ [PROOFSTEP] simpa only [← sq] using CstarRing.nnnorm_star_mul_self.symm [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) h0 : ∀ (f : A →L[𝕜] A) (C : ℝ≥0), (∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊) → ‖f‖₊ ≤ C h1 : ∀ (b : A), ‖↑a.fst b‖₊ ^ 2 ≤ ‖a.snd‖₊ * ‖↑a.fst b‖₊ * ‖b‖₊ ⊢ ‖a.fst‖ = ‖a.snd‖ [PROOFSTEP] have h2 : ∀ b, ‖a.snd b‖₊ ^ 2 ≤ ‖a.fst‖₊ * ‖a.snd b‖₊ * ‖b‖₊ := by intro b calc ‖a.snd b‖₊ ^ 2 = ‖a.snd b * star (a.snd b)‖₊ := by simpa only [← sq] using CstarRing.nnnorm_self_mul_star.symm _ ≤ ‖b‖₊ * ‖a.fst (star (a.snd b))‖₊ := ((a.central b (star (a.snd b))).symm ▸ nnnorm_mul_le _ _) _ = ‖a.fst (star (a.snd b))‖₊ * ‖b‖₊ := (mul_comm _ _) _ ≤ ‖a.fst‖₊ * ‖a.snd b‖₊ * ‖b‖₊ := nnnorm_star (a.snd b) ▸ mul_le_mul_right' (a.fst.le_op_nnnorm _) _ [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) h0 : ∀ (f : A →L[𝕜] A) (C : ℝ≥0), (∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊) → ‖f‖₊ ≤ C h1 : ∀ (b : A), ‖↑a.fst b‖₊ ^ 2 ≤ ‖a.snd‖₊ * ‖↑a.fst b‖₊ * ‖b‖₊ ⊢ ∀ (b : A), ‖↑a.snd b‖₊ ^ 2 ≤ ‖a.fst‖₊ * ‖↑a.snd b‖₊ * ‖b‖₊ [PROOFSTEP] intro b [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) h0 : ∀ (f : A →L[𝕜] A) (C : ℝ≥0), (∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊) → ‖f‖₊ ≤ C h1 : ∀ (b : A), ‖↑a.fst b‖₊ ^ 2 ≤ ‖a.snd‖₊ * ‖↑a.fst b‖₊ * ‖b‖₊ b : A ⊢ ‖↑a.snd b‖₊ ^ 2 ≤ ‖a.fst‖₊ * ‖↑a.snd b‖₊ * ‖b‖₊ [PROOFSTEP] calc ‖a.snd b‖₊ ^ 2 = ‖a.snd b * star (a.snd b)‖₊ := by simpa only [← sq] using CstarRing.nnnorm_self_mul_star.symm _ ≤ ‖b‖₊ * ‖a.fst (star (a.snd b))‖₊ := ((a.central b (star (a.snd b))).symm ▸ nnnorm_mul_le _ _) _ = ‖a.fst (star (a.snd b))‖₊ * ‖b‖₊ := (mul_comm _ _) _ ≤ ‖a.fst‖₊ * ‖a.snd b‖₊ * ‖b‖₊ := nnnorm_star (a.snd b) ▸ mul_le_mul_right' (a.fst.le_op_nnnorm _) _ [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) h0 : ∀ (f : A →L[𝕜] A) (C : ℝ≥0), (∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊) → ‖f‖₊ ≤ C h1 : ∀ (b : A), ‖↑a.fst b‖₊ ^ 2 ≤ ‖a.snd‖₊ * ‖↑a.fst b‖₊ * ‖b‖₊ b : A ⊢ ‖↑a.snd b‖₊ ^ 2 = ‖↑a.snd b * star (↑a.snd b)‖₊ [PROOFSTEP] simpa only [← sq] using CstarRing.nnnorm_self_mul_star.symm [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : 
SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) h0 : ∀ (f : A →L[𝕜] A) (C : ℝ≥0), (∀ (b : A), ‖↑f b‖₊ ^ 2 ≤ C * ‖↑f b‖₊ * ‖b‖₊) → ‖f‖₊ ≤ C h1 : ∀ (b : A), ‖↑a.fst b‖₊ ^ 2 ≤ ‖a.snd‖₊ * ‖↑a.fst b‖₊ * ‖b‖₊ h2 : ∀ (b : A), ‖↑a.snd b‖₊ ^ 2 ≤ ‖a.fst‖₊ * ‖↑a.snd b‖₊ * ‖b‖₊ ⊢ ‖a.fst‖ = ‖a.snd‖ [PROOFSTEP] exact le_antisymm (h0 _ _ h1) (h0 _ _ h2) [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) ⊢ ‖a.fst‖ = ‖a‖ [PROOFSTEP] simp only [norm_def, toProdHom_apply, Prod.norm_def, norm_fst_eq_snd, max_eq_right le_rfl] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁶ : NontriviallyNormedField 𝕜 inst✝⁵ : NonUnitalNormedRing A inst✝⁴ : NormedSpace 𝕜 A inst✝³ : SMulCommClass 𝕜 A A inst✝² : IsScalarTower 𝕜 A A inst✝¹ : StarRing A inst✝ : CstarRing A a : 𝓜(𝕜, A) ⊢ ‖a.snd‖ = ‖a‖ [PROOFSTEP] rw [← norm_fst, norm_fst_eq_snd] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) ⊢ ‖star a * a‖₊ = ‖a‖₊ * ‖a‖₊ [PROOFSTEP] have hball : (Metric.closedBall (0 : A) 1).Nonempty := Metric.nonempty_closedBall.2 zero_le_one [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) ⊢ ‖star a * a‖₊ = ‖a‖₊ * ‖a‖₊ [PROOFSTEP] have key : ∀ x y, ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖a.snd (star (a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ := by intro x y hx hy rw [a.central] calc ‖star (a.fst (star x)) * a.fst y‖₊ ≤ ‖a.fst (star x)‖₊ * ‖a.fst y‖₊ := nnnorm_star (a.fst (star x)) ▸ nnnorm_mul_le _ _ _ ≤ ‖a.fst‖₊ * 1 * (‖a.fst‖₊ * 1) := (mul_le_mul' (a.fst.le_op_norm_of_le ((nnnorm_star x).trans_le hx)) (a.fst.le_op_norm_of_le hy)) _ ≤ ‖a‖₊ * ‖a‖₊ := by simp only [mul_one, nnnorm_fst, le_rfl] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) ⊢ ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ [PROOFSTEP] intro x y hx hy [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) x y : A hx : ‖x‖₊ ≤ 1 hy : ‖y‖₊ ≤ 1 ⊢ ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ [PROOFSTEP] rw [a.central] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) x y : A hx : ‖x‖₊ ≤ 1 hy : ‖y‖₊ ≤ 1 ⊢ ‖star (↑a.fst (star 
x)) * ↑a.fst y‖₊ ≤ ‖a‖₊ * ‖a‖₊ [PROOFSTEP] calc ‖star (a.fst (star x)) * a.fst y‖₊ ≤ ‖a.fst (star x)‖₊ * ‖a.fst y‖₊ := nnnorm_star (a.fst (star x)) ▸ nnnorm_mul_le _ _ _ ≤ ‖a.fst‖₊ * 1 * (‖a.fst‖₊ * 1) := (mul_le_mul' (a.fst.le_op_norm_of_le ((nnnorm_star x).trans_le hx)) (a.fst.le_op_norm_of_le hy)) _ ≤ ‖a‖₊ * ‖a‖₊ := by simp only [mul_one, nnnorm_fst, le_rfl] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) x y : A hx : ‖x‖₊ ≤ 1 hy : ‖y‖₊ ≤ 1 ⊢ ‖a.fst‖₊ * 1 * (‖a.fst‖₊ * 1) ≤ ‖a‖₊ * ‖a‖₊ [PROOFSTEP] simp only [mul_one, nnnorm_fst, le_rfl] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ ⊢ ‖star a * a‖₊ = ‖a‖₊ * ‖a‖₊ [PROOFSTEP] rw [← nnnorm_snd] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ ⊢ ‖(star a * a).snd‖₊ = ‖a‖₊ * ‖a‖₊ [PROOFSTEP] simp only [mul_snd, ← sSup_closed_unit_ball_eq_nnnorm, star_snd, mul_apply] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ ⊢ sSup ((fun a_1 => ‖↑a.snd (star (↑a.fst (star a_1)))‖₊) '' Metric.closedBall 0 1) = ‖a‖₊ * ‖a‖₊ [PROOFSTEP] simp only [← @_root_.op_nnnorm_mul 𝕜 A] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ ⊢ sSup ((fun a_1 => ‖↑(ContinuousLinearMap.mul 𝕜 A) (↑a.snd (star (↑a.fst (star a_1))))‖₊) '' Metric.closedBall 0 1) = ‖a‖₊ * ‖a‖₊ [PROOFSTEP] simp only [← sSup_closed_unit_ball_eq_nnnorm, mul_apply'] [GOAL] 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 
→ ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ ⊢ sSup ((fun a_1 => sSup ((fun a_2 => ‖↑a.snd (star (↑a.fst (star a_1))) * a_2‖₊) '' Metric.closedBall 0 1)) '' Metric.closedBall 0 1) = ‖a‖₊ * ‖a‖₊ [PROOFSTEP] refine' csSup_eq_of_forall_le_of_forall_lt_exists_gt (hball.image _) _ fun r hr => _ [GOAL] case refine'_1 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ ⊢ ∀ (a_1 : ℝ≥0), a_1 ∈ (fun a_2 => sSup ((fun a_3 => ‖↑a.snd (star (↑a.fst (star a_2))) * a_3‖₊) '' Metric.closedBall 0 1)) '' Metric.closedBall 0 1 → a_1 ≤ ‖a‖₊ * ‖a‖₊ [PROOFSTEP] rintro - ⟨x, hx, rfl⟩ [GOAL] case refine'_1.intro.intro 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ x : A hx : x ∈ Metric.closedBall 0 1 ⊢ (fun a_1 => sSup ((fun a_2 => ‖↑a.snd (star (↑a.fst (star a_1))) * a_2‖₊) '' Metric.closedBall 0 1)) x ≤ ‖a‖₊ * ‖a‖₊ [PROOFSTEP] refine' csSup_le (hball.image _) _ [GOAL] case refine'_1.intro.intro 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ x : A hx : x ∈ Metric.closedBall 0 1 ⊢ ∀ (b : ℝ≥0), b ∈ (fun a_1 => ‖↑a.snd (star (↑a.fst (star x))) * a_1‖₊) '' Metric.closedBall 0 1 → b ≤ ‖a‖₊ * ‖a‖₊ [PROOFSTEP] rintro - ⟨y, hy, rfl⟩ [GOAL] case refine'_1.intro.intro.intro.intro 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ x : A hx : x ∈ Metric.closedBall 0 1 y : A hy : y ∈ Metric.closedBall 0 1 ⊢ (fun a_1 => ‖↑a.snd (star (↑a.fst (star x))) * a_1‖₊) y ≤ ‖a‖₊ * ‖a‖₊ [PROOFSTEP] exact key x y (mem_closedBall_zero_iff.1 hx) (mem_closedBall_zero_iff.1 hy) [GOAL] case refine'_2 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ r : ℝ≥0 hr : r < ‖a‖₊ 
* ‖a‖₊ ⊢ ∃ a_1, a_1 ∈ (fun a_2 => sSup ((fun a_3 => ‖↑a.snd (star (↑a.fst (star a_2))) * a_3‖₊) '' Metric.closedBall 0 1)) '' Metric.closedBall 0 1 ∧ r < a_1 [PROOFSTEP] simp only [Set.mem_image, Set.mem_setOf_eq, exists_prop, exists_exists_and_eq_and] [GOAL] case refine'_2 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ r : ℝ≥0 hr : r < ‖a‖₊ * ‖a‖₊ ⊢ ∃ a_1, a_1 ∈ Metric.closedBall 0 1 ∧ r < sSup ((fun a_2 => ‖↑a.snd (star (↑a.fst (star a_1))) * a_2‖₊) '' Metric.closedBall 0 1) [PROOFSTEP] have hr' : NNReal.sqrt r < ‖a‖₊ := ‖a‖₊.sqrt_mul_self ▸ NNReal.sqrt_lt_sqrt_iff.2 hr [GOAL] case refine'_2 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ r : ℝ≥0 hr : r < ‖a‖₊ * ‖a‖₊ hr' : ↑sqrt r < ‖a‖₊ ⊢ ∃ a_1, a_1 ∈ Metric.closedBall 0 1 ∧ r < sSup ((fun a_2 => ‖↑a.snd (star (↑a.fst (star a_1))) * a_2‖₊) '' Metric.closedBall 0 1) [PROOFSTEP] simp_rw [← nnnorm_fst, ← sSup_closed_unit_ball_eq_nnnorm] at hr' [GOAL] case refine'_2 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ r : ℝ≥0 hr : r < ‖a‖₊ * ‖a‖₊ hr' : ↑sqrt r < sSup ((fun a_1 => ‖↑a.fst a_1‖₊) '' Metric.closedBall 0 1) ⊢ ∃ a_1, a_1 ∈ Metric.closedBall 0 1 ∧ r < sSup ((fun a_2 => ‖↑a.snd (star (↑a.fst (star a_1))) * a_2‖₊) '' Metric.closedBall 0 1) [PROOFSTEP] obtain ⟨_, ⟨x, hx, rfl⟩, hxr⟩ := exists_lt_of_lt_csSup (hball.image _) hr' [GOAL] case refine'_2.intro.intro.intro.intro 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ r : ℝ≥0 hr : r < ‖a‖₊ * ‖a‖₊ hr' : ↑sqrt r < sSup ((fun a_1 => ‖↑a.fst a_1‖₊) '' Metric.closedBall 0 1) x : A hx : x ∈ Metric.closedBall 0 1 hxr : ↑sqrt r < (fun a_1 => ‖↑a.fst a_1‖₊) x ⊢ ∃ a_1, a_1 ∈ Metric.closedBall 0 1 ∧ r < sSup ((fun a_2 => ‖↑a.snd (star (↑a.fst (star a_1))) * a_2‖₊) '' Metric.closedBall 0 1) [PROOFSTEP] have hx' : ‖x‖₊ ≤ 1 := mem_closedBall_zero_iff.1 hx [GOAL] case refine'_2.intro.intro.intro.intro 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : 
CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ r : ℝ≥0 hr : r < ‖a‖₊ * ‖a‖₊ hr' : ↑sqrt r < sSup ((fun a_1 => ‖↑a.fst a_1‖₊) '' Metric.closedBall 0 1) x : A hx : x ∈ Metric.closedBall 0 1 hxr : ↑sqrt r < (fun a_1 => ‖↑a.fst a_1‖₊) x hx' : ‖x‖₊ ≤ 1 ⊢ ∃ a_1, a_1 ∈ Metric.closedBall 0 1 ∧ r < sSup ((fun a_2 => ‖↑a.snd (star (↑a.fst (star a_1))) * a_2‖₊) '' Metric.closedBall 0 1) [PROOFSTEP] refine' ⟨star x, mem_closedBall_zero_iff.2 ((nnnorm_star x).trans_le hx'), _⟩ [GOAL] case refine'_2.intro.intro.intro.intro 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ r : ℝ≥0 hr : r < ‖a‖₊ * ‖a‖₊ hr' : ↑sqrt r < sSup ((fun a_1 => ‖↑a.fst a_1‖₊) '' Metric.closedBall 0 1) x : A hx : x ∈ Metric.closedBall 0 1 hxr : ↑sqrt r < (fun a_1 => ‖↑a.fst a_1‖₊) x hx' : ‖x‖₊ ≤ 1 ⊢ r < sSup ((fun a_1 => ‖↑a.snd (star (↑a.fst (star (star x)))) * a_1‖₊) '' Metric.closedBall 0 1) [PROOFSTEP] refine' lt_csSup_of_lt _ ⟨x, hx, rfl⟩ _ [GOAL] case refine'_2.intro.intro.intro.intro.refine'_1 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ r : ℝ≥0 hr : r < ‖a‖₊ * ‖a‖₊ hr' : ↑sqrt r < sSup ((fun a_1 => ‖↑a.fst a_1‖₊) '' Metric.closedBall 0 1) x : A hx : x ∈ Metric.closedBall 0 1 hxr : ↑sqrt r < (fun a_1 => ‖↑a.fst a_1‖₊) x hx' : ‖x‖₊ ≤ 1 ⊢ BddAbove ((fun a_1 => ‖↑a.snd (star (↑a.fst (star (star x)))) * a_1‖₊) '' Metric.closedBall 0 1) [PROOFSTEP] refine' ⟨‖a‖₊ * ‖a‖₊, _⟩ [GOAL] case refine'_2.intro.intro.intro.intro.refine'_1 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ r : ℝ≥0 hr : r < ‖a‖₊ * ‖a‖₊ hr' : ↑sqrt r < sSup ((fun a_1 => ‖↑a.fst a_1‖₊) '' Metric.closedBall 0 1) x : A hx : x ∈ Metric.closedBall 0 1 hxr : ↑sqrt r < (fun a_1 => ‖↑a.fst a_1‖₊) x hx' : ‖x‖₊ ≤ 1 ⊢ ‖a‖₊ * ‖a‖₊ ∈ upperBounds ((fun a_1 => ‖↑a.snd (star (↑a.fst (star (star x)))) * a_1‖₊) '' Metric.closedBall 0 1) [PROOFSTEP] rintro - ⟨y, hy, rfl⟩ [GOAL] case refine'_2.intro.intro.intro.intro.refine'_1.intro.intro 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : 
IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ r : ℝ≥0 hr : r < ‖a‖₊ * ‖a‖₊ hr' : ↑sqrt r < sSup ((fun a_1 => ‖↑a.fst a_1‖₊) '' Metric.closedBall 0 1) x : A hx : x ∈ Metric.closedBall 0 1 hxr : ↑sqrt r < (fun a_1 => ‖↑a.fst a_1‖₊) x hx' : ‖x‖₊ ≤ 1 y : A hy : y ∈ Metric.closedBall 0 1 ⊢ (fun a_1 => ‖↑a.snd (star (↑a.fst (star (star x)))) * a_1‖₊) y ≤ ‖a‖₊ * ‖a‖₊ [PROOFSTEP] exact key (star x) y ((nnnorm_star x).trans_le hx') (mem_closedBall_zero_iff.1 hy) [GOAL] case refine'_2.intro.intro.intro.intro.refine'_2 𝕜 : Type u_1 A : Type u_2 inst✝⁸ : DenselyNormedField 𝕜 inst✝⁷ : StarRing 𝕜 inst✝⁶ : NonUnitalNormedRing A inst✝⁵ : StarRing A inst✝⁴ : CstarRing A inst✝³ : NormedSpace 𝕜 A inst✝² : SMulCommClass 𝕜 A A inst✝¹ : IsScalarTower 𝕜 A A inst✝ : StarModule 𝕜 A a : 𝓜(𝕜, A) hball : Set.Nonempty (Metric.closedBall 0 1) key : ∀ (x : A) (y : (fun x => A) (star (↑a.fst (star x)))), ‖x‖₊ ≤ 1 → ‖y‖₊ ≤ 1 → ‖↑a.snd (star (↑a.fst (star x))) * y‖₊ ≤ ‖a‖₊ * ‖a‖₊ r : ℝ≥0 hr : r < ‖a‖₊ * ‖a‖₊ hr' : ↑sqrt r < sSup ((fun a_1 => ‖↑a.fst a_1‖₊) '' Metric.closedBall 0 1) x : A hx : x ∈ Metric.closedBall 0 1 hxr : ↑sqrt r < (fun a_1 => ‖↑a.fst a_1‖₊) x hx' : ‖x‖₊ ≤ 1 ⊢ r < (fun a_1 => ‖↑a.snd (star (↑a.fst (star (star x)))) * a_1‖₊) x [PROOFSTEP] simpa only [a.central, star_star, CstarRing.nnnorm_star_mul_self, NNReal.sq_sqrt, ← sq] using pow_lt_pow_of_lt_left hxr zero_le' two_pos
This has multiple meanings. You are probably looking for information on one of the following: Bicycling Under The Influence: riding one's bike while under the influence of drugs and/or alcohol; BUI (Band): a former Davis band. This is a disambiguation page, a navigational aid which lists other pages that might otherwise share the same title. If an article link referred you here, you might want to go back and fix it to point directly to the intended page.
/* * Implement Heap sort -- direct and indirect sorting * Based on descriptions in Sedgewick "Algorithms in C" * * Copyright (C) 1999 Thomas Walter * * 18 February 2000: Modified for GSL by Brian Gough * * This is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License as published by the * Free Software Foundation; either version 2, or (at your option) any * later version. * * This source is distributed in the hope that it will be useful, but WITHOUT * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License * for more details. */ #include <config.h> #include <stdlib.h> #include <gsl/gsl_heapsort.h> static inline void swap (void *base, size_t size, size_t i, size_t j); static inline void downheap (void *data, const size_t size, const size_t N, size_t k, gsl_comparison_fn_t compare); /* Inline swap function for moving objects around */ static inline void swap (void *base, size_t size, size_t i, size_t j) { register char *a = size * i + (char *) base; register char *b = size * j + (char *) base; register size_t s = size; if (i == j) return; do { char tmp = *a; *a++ = *b; *b++ = tmp; } while (--s > 0); } #define CMP(data,size,j,k) (compare((char *)(data) + (size) * (j), (char *)(data) + (size) * (k))) static inline void downheap (void *data, const size_t size, const size_t N, size_t k, gsl_comparison_fn_t compare) { while (k <= N / 2) { size_t j = 2 * k; if (j < N && CMP (data, size, j, j + 1) < 0) { j++; } if (CMP (data, size, k, j) < 0) { swap (data, size, j, k); } else { break; } k = j; } } void gsl_heapsort (void *data, size_t count, size_t size, gsl_comparison_fn_t compare) { /* Sort the array in ascending order. This is a true inplace algorithm with N log N operations. Worst case (an already sorted array) is something like 20% slower */ size_t N; size_t k; if (count == 0) { return; /* No data to sort */ } /* We have n_data elements, last element is at 'n_data-1', first at '0' Set N to the last element number. */ N = count - 1; k = N / 2; k++; /* Compensate the first use of 'k--' */ do { k--; downheap (data, size, N, k, compare); } while (k > 0); while (N > 0) { /* first swap the elements */ swap (data, size, 0, N); /* then process the heap */ N--; downheap (data, size, N, 0, compare); } }
%% IMPORTSWARMDB function obj = importswarmdb(obj, dbname, auth, snum, enum) % IMPORTSWARMDB % Load a swarm database metrics table into an EventRate object % eventrate = importswarmdb(erobj, dbname, auth, snum, enum); % % INPUT: % dbname the path of the database (must have a 'metrics' table) % auth name of the grid to load swarm tracking metrics for % snum,enum start and end datenumbers (Matlab time format, see 'help datenum') % % OUTPUT: % obj an eventrate object % % Example: % erobj = importswarmdb('/avort/devrun/dbswarm/swarm_metadata', 'RD_lo', datenum(2010, 7, 1), datenum(2010, 7, 14) ); % Glenn Thompson, 20100714 % initialize obj.dbroot = dbname; obj.snum = snum; obj.enum = enum; obj.auth = auth; % check that database exists dbtablename = sprintf('%s.metrics',dbname); if exist(dbtablename,'file') % load the data try db = dbopen(dbname, 'r'); catch me fprintf('Error: Could not open %s for reading',dbname); return; end db = dblookup_table(db, 'metrics'); if (dbquery(db, 'dbRECORD_COUNT')==0) fprintf('Error: Could not open %s for reading',dbtablename); return; end db = dbsubset(db, sprintf('auth ~= /.*%s.*/',auth)); numrows = dbquery(db,'dbRECORD_COUNT'); debug.print_debug(sprintf('Got %d rows after auth subset',numrows),2); sepoch = datenum2epoch(snum); eepoch = datenum2epoch(enum); db = dbsubset(db, sprintf('timewindow_starttime >= %f && timewindow_endtime <= %f',sepoch,eepoch)); numrows = dbquery(db,'dbRECORD_COUNT'); debug.print_debug(sprintf('Got %d rows after time subset',numrows),2); if numrows > 0 % Note that metrics are only saved when mean_rate >= 1. % Therefore there will be lots of mean_rate==0 timewindows not in % database. [tempsepoch, tempeepoch, mean_rate, median_rate, mean_mag, cum_mag] = dbgetv(db,'timewindow_starttime', 'timewindow_endtime', 'mean_rate', 'median_rate', 'mean_ml', 'cum_ml'); obj.binsize = (tempeepoch(1) - tempsepoch(1))/86400; obj.stepsize = min(tempsepoch(2:end) - tempsepoch(1:end-1))/86400; obj.time = snum+obj.stepsize:obj.stepsize:enum; obj.numbins = length(obj.time); obj.mean_rate = zeros(obj.numbins, 1); obj.counts = zeros(obj.numbins, 1); obj.median_rate = zeros(obj.numbins, 1); obj.mean_mag = zeros(obj.numbins, 1); obj.cum_mag = zeros(obj.numbins, 1); for c=1:length(tempeepoch) tempenum = epoch2datenum(tempeepoch(c)); i = find(obj.time == tempenum); obj.mean_rate(i) = mean_rate(c); obj.counts(i) = mean_rate(c) * (obj.binsize * 24); obj.median_rate(i) = median_rate(c); obj.mean_mag(i) = mean_mag(c); obj.cum_mag(i) = cum_mag(c); end end dbclose(db); else % error - table does not exist fprintf('Error: %s does not exist',dbtablename); return; end obj.total_counts = sum(obj.counts)*obj.stepsize/obj.binsize; end
import numpy as np import pandas as pd from matplotlib import pyplot as plt def create_datasets(): Ntrain = [10, 100, 1000, 10000] for N in Ntrain: data, labels = generateData(N) labels = np.squeeze(labels) plot(data, labels) dataset = pd.DataFrame(np.transpose(np.vstack((data, labels)))) filename = 'GMM_Dtrain_' + str(N) +'.csv' dataset.to_csv(filename, index = False) def generateData(N): gmmParameters = {} gmmParameters['priors'] = [.2,.3,.35,.15] # priors should be a row vector gmmParameters['meanVectors'] = np.zeros((4,2)) gmmParameters['meanVectors'][0, :] = [0, 0] gmmParameters['meanVectors'][1, :] = [0, 30] gmmParameters['meanVectors'][2, :] = [30, 0] gmmParameters['meanVectors'][3, :] = [30, 30] gmmParameters['covMatrices'] = np.zeros((4, 2, 2)) gmmParameters['covMatrices'][0,:,:] = np.array([[1, -3], [-3, 1]]) gmmParameters['covMatrices'][1,:,:] = np.array([[8, 4], [4, 8]]) gmmParameters['covMatrices'][2,:,:] = np.array([[6, 3], [3, 6]]) gmmParameters['covMatrices'][3,:,:] = np.array([[7, 1], [1, 7]]) x,labels = generateDataFromGMM(N,gmmParameters) return x, labels def generateDataFromGMM(N,gmmParameters): # Generates N vector samples from the specified mixture of Gaussians # Returns samples and their component labels # Data dimensionality is determined by the size of mu/Sigma parameters np.random.seed(0) priors = gmmParameters['priors'] # priors should be a row vector meanVectors = gmmParameters['meanVectors'] covMatrices = gmmParameters['covMatrices'] n = meanVectors.shape[1] # Data dimensionality C = len(priors) # Number of components x = np.zeros((n,N)) labels = np.zeros((1,N)) # Decide randomly which samples will come from each component u = np.random.random((1,N)) thresholds = np.zeros((1,C+1)) thresholds[:,0:C] = np.cumsum(priors) thresholds[:,C] = 1 for l in range(C): indl = np.where(u <= float(thresholds[:,l])) print(indl[0]) Nl = len(indl[1]) labels[indl] = (l)*1 u[indl] = 1.1 x[:,indl[1]] = np.transpose(np.random.multivariate_normal(meanVectors[l,:], covMatrices[l,:,:], Nl)) return x,labels def plot(data, labels, mark="o"): plt.scatter(data[0,labels == 0], data[1,labels == 0], marker=mark, color = "b") plt.scatter(data[0,labels == 1], data[1,labels == 1], marker=mark, color = "r") plt.scatter(data[0,labels == 2], data[1,labels == 2], marker=mark, color = "g") plt.scatter(data[0,labels == 3], data[1,labels == 3], marker=mark, color = "y") plt.xlabel("x1") plt.ylabel("x2") plt.title('Training Dataset') plt.show() create_datasets()
// Copyright Carl Philipp Reh 2009 - 2016. // Distributed under the Boost Software License, Version 1.0. // (See accompanying file LICENSE_1_0.txt or copy at // http://www.boost.org/LICENSE_1_0.txt) #include <fcppt/make_cref.hpp> #include <fcppt/reference_comparison.hpp> #include <fcppt/reference_output.hpp> #include <fcppt/strong_typedef.hpp> #include <fcppt/container/find_opt.hpp> #include <fcppt/optional/comparison.hpp> #include <fcppt/optional/output.hpp> #include <fcppt/optional/reference.hpp> #include <fcppt/preprocessor/disable_gcc_warning.hpp> #include <fcppt/preprocessor/pop_warning.hpp> #include <fcppt/preprocessor/push_warning.hpp> #include <fcppt/config/external_begin.hpp> #include <boost/test/unit_test.hpp> #include <set> #include <fcppt/config/external_end.hpp> FCPPT_PP_PUSH_WARNING FCPPT_PP_DISABLE_GCC_WARNING(-Weffc++) BOOST_AUTO_TEST_CASE( container_find_opt ) { FCPPT_PP_POP_WARNING FCPPT_MAKE_STRONG_TYPEDEF( int, strong_int ); struct comp { bool operator()( int const _left, int const _right ) const { return _left < _right; } bool operator()( int const _value, strong_int const _comp ) const { return _value < _comp.get(); } bool operator()( strong_int const _comp, int const _value ) const { return _comp.get() < _value; } FCPPT_PP_PUSH_WARNING FCPPT_PP_DISABLE_GCC_WARNING(-Wunused-local-typedefs) typedef void is_transparent; FCPPT_PP_POP_WARNING }; typedef std::set< int, comp > int_set; int_set const set{ 1,2,3 }; typedef fcppt::optional::reference< int const > optional_int_ref; BOOST_CHECK_EQUAL( fcppt::container::find_opt( set, strong_int( 3 ) ), optional_int_ref( fcppt::make_cref( *set.find( 3 ) ) ) ); BOOST_CHECK( !fcppt::container::find_opt( set, strong_int( 4 ) ).has_value() ); }
# Realization of Non-Recursive Filters *This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).* ## Fast Convolution The straightforward convolution of two finite-length signals $x[k]$ and $h[k]$ is a numerical complex task. This has led to the development of various techniques with considerably lower complexity. The basic concept of the *fast convolution* is to exploit the correspondence between the convolution and the scalar multiplication in the frequency domain. ### Convolution of Finite-Length Signals The convolution of a causal signal $x_L[k]$ of length $L$ with a causal impulse response $h_N[k]$ of length $N$ is given as \begin{equation} y[k] = x_L[k] * h_N[k] = \sum_{\kappa = 0}^{L-1} x_L[\kappa] \; h_N[k - \kappa] = \sum_{\kappa = 0}^{N-1} h_N[\kappa] \; x_L[k - \kappa] \end{equation} The resulting signal $y[k]$ is of finite length $M = N+L-1$. The computation of $y[k]$ for $k=0,1, \dots, M-1$ requires $M \cdot N$ multiplications and $M \cdot (N-1)$ additions. The computational complexity of the convolution is consequently [in the order of](https://en.wikipedia.org/wiki/Big_O_notation) $\mathcal{O}(M \cdot N)$. Discrete-time Fourier transformation (DTFT) of above relation yields \begin{equation} Y(e^{j \Omega}) = X_L(e^{j \Omega}) \cdot H_N(e^{j \Omega}) \end{equation} Discarding the effort of transformation, the computationally complex convolution is replaced by a scalar multiplication with respect to the frequency $\Omega$. However, $\Omega$ is a continuous frequency variable which limits the numerical evaluation of this scalar multiplication. In practice, the DTFT is replaced by the discrete Fourier transformation (DFT). Two aspects have to be considered before a straightforward application of the DFT 1. The DFTs $X_L[\mu]$ and $H_N[\mu]$ are of length $L$ and $N$ respectively and cannot be multiplied straightforward 2. For $N = L$, the multiplication of the two spectra $X_L[\mu]$ and $H_L[\mu]$ would result in the [periodic/circular convolution](https://en.wikipedia.org/wiki/Circular_convolution) $x_L[k] \circledast h_L[k]$ due to the periodicity of the DFT. Since we aim at realizing the linear convolution $x_L[k] * h_N[k]$ with the DFT, special care has to be taken to avoid cyclic effects. ### Linear Convolution by Periodic Convolution The periodic convolution of the two signals $x_L[k]$ and $h_N[k]$ is defined as \begin{equation} x_L[k] \circledast h_N[k] = \sum_{\kappa=0}^{M-1} \tilde{x}_M[k - \kappa] \; h_N[\kappa] \end{equation} where without loss of generality it is assumed that $L \geq N$ and $M \geq N$. The periodic continuation $\tilde{x}_M[k]$ of $x[k]$ with period $M$ is given as \begin{equation} \tilde{x}_M[k] = \sum_{m = -\infty}^{\infty} x_L[m \cdot M + k] \end{equation} The result of the circular convolution has a periodicity of $M$. To compute the linear convolution by the periodic convolution one has to take care that the result of the linear convolution fits into one period of the periodic convolution. Hence, the periodicity has to be chosen as $M \geq N+L-1$. 
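As a quick numerical illustration of this condition (lengths chosen arbitrarily for the example): for $L = 4$ and $N = 3$ the linear convolution $y[k] = x_L[k] * h_N[k]$ has $M = 4 + 3 - 1 = 6$ non-zero samples. A periodic convolution with period $M = 6$ (or larger) therefore contains the full result within one period, whereas a period of e.g. $M = 4$ would wrap the last two samples of $y[k]$ around and add them onto $y[0]$ and $y[1]$, an effect known as time-domain aliasing. Both signals hence have to be brought to a common length of $M \geq N+L-1$ samples before the periodic convolution (or DFT) is applied.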
This can be achieved by zero-padding of $x_L[k]$ and $h_N[k]$ to a total length of $M$ \begin{align} x_M[k] &= \begin{cases} x_L[k] & \mathrm{for} \; k=0, 1, \dots, L-1 \\ 0 & \mathrm{for} \; k=L, L+1, \dots, M-1 \end{cases} \\ h_M[k] &= \begin{cases} h_N[k] & \mathrm{for} \; k=0, 1, \dots, N-1 \\ 0 & \mathrm{for} \; k=N, N+1, \dots, M-1 \end{cases} \end{align} This results in the desired equality of linear and periodic convolution \begin{equation} x_L[k] * h_N[k] = x_M[k] \circledast h_M[k] \end{equation} for $k = 0,1,\dots, M-1$ with $M = N+L-1$. #### Example The following example computes the linear, periodic and linear by periodic convolution of a rectangular signal $x[k] = \text{rect}_L[k]$ of length $L$ with a triangular signal $h[k] = \Lambda_N[k]$ of length $N$. ```python %matplotlib inline import numpy as np import matplotlib.pyplot as plt import scipy.signal as sig L = 32 # length of signal x[k] N = 16 # length of signal h[k] M = 16 # periodicity of periodic convolution def cconv(x, y, P): # Periodic convolution with period P of two signals x and y x = _wrap(x, P) h = _wrap(y, P) return np.fromiter([np.dot(np.roll(x[::-1], k+1), h) for k in np.arange(P)], float) def _wrap(x, N): # Zero-padding to length N or periodic summation with period N M = len(x) rows = int(np.ceil(M/N)) if (M < int(N*rows)): x = np.pad(x, (0, int(N*rows-M)), 'constant') x = np.reshape(x, (rows, N)) return np.sum(x, axis=0) # generate signals x = np.ones(L) h = sig.triang(N) # linear convolution y1 = np.convolve(x, h, 'full') # periodic convolution y2 = cconv(x, h, M) # linear convolution via periodic convolution xp = np.append(x, np.zeros(N-1)) hp = np.append(h, np.zeros(L-1)) y3 = cconv(xp, hp, L+N-1) # plot results def plot_signal(x): plt.figure(figsize = (10, 3)) plt.stem(x) plt.xlabel(r'$k$') plt.ylabel(r'$y[k]$') plt.axis([0, N+L, 0, 1.1*x.max()]) plot_signal(x) plt.title('Signal $x[k]$') plot_signal(y1) plt.title('Linear convolution') plot_signal(y2) plt.title('Periodic convolution with period M = %d' %M) plot_signal(y3) plt.title('Linear convolution by periodic convolution'); ``` **Exercise** * Change the lengths `L`, `N` and `M` within the constraints given above and check how the results for the different convolutions change ### The Fast Convolution Using the above derived equality of the linear and periodic convolution one can express the linear convolution $y[k] = x_L[k] * h_N[k]$ by the DFT as \begin{equation} y[k] = \text{IDFT}_M \{ \; \text{DFT}_M\{ x_M[k] \} \cdot \text{DFT}_M\{ h_M[k] \} \; \} \end{equation} This operation requires three DFTs of length $M$ and $M$ complex multiplications. On first sight this does not seem to be an improvement, since one DFT/IDFT requires $M^2$ complex multiplications and $M \cdot (M-1)$ complex additions. The overall numerical complexity is hence in the order of $\mathcal{O}(M^2)$. The DFT can be realized efficiently by the [fast Fourier transformation](https://en.wikipedia.org/wiki/Fast_Fourier_transform) (FFT), which lowers the computational complexity to $\mathcal{O}(M \log_2 M)$. The resulting algorithm is known as *fast convolution* due to its computational efficiency. The fast convolution algorithm is composed of the following steps 1. Zero-padding of the two input signals $x_L[k]$ and $h_N[k]$ to at least a total length of $M \geq N+L-1$ 2. Computation of the DFTs $X[\mu]$ and $H[\mu]$ using a FFT of length $M$ 3. Multiplication of the spectra $Y[\mu] = X[\mu] \cdot H[\mu]$ 4. 
Inverse DFT of $Y[\mu]$ using an inverse FFT of length $M$ The overall complexity depends on the particular implementation of the FFT. Many FFTs are most efficient for lengths which are a power of two. It therefore can make sense, in terms of computational complexity, to choose $M$ as a power of two instead of the shortest possible length $N+L-1$. For real valued signals $x[k] \in \mathbb{R}$ and $h[k] \in \mathbb{R}$ the computational complexity can be reduced significantly by using a real valued FFT. #### Example The implementation of the fast convolution algorithm is straightforward. Most implementations of the FFT include the zero-padding to a given length $M$, e.g in `numpy` by `numpy.fft.fft(x, M)`. In the following example an implementation of the fast convolution in `Python` is shown. The output of the fast convolution is compared to a straightforward implementation by means of the absolute difference $|e[k]|$. The observed differences are due to numerical effects in the convolution and the FFT. The differences can be neglected in most applications. ```python L = 16 # length of signal x[k] N = 16 # length of signal h[k] M = N+L-1 # generate signals x = np.ones(L) h = sig.triang(N) # linear convolution y1 = np.convolve(x, h, 'full') # fast convolution y2 = np.fft.ifft(np.fft.fft(x, M)*np.fft.fft(h, M)) plt.figure(figsize=(10, 3)) plt.stem(np.abs(y1-y2)) plt.xlabel(r'k') plt.ylabel(r'|e[k]|'); ``` #### Numerical Complexity It was already argued that the numerical complexity of the fast convolution is considerably lower due to the usage of the FFT. The gain with respect to the convolution is evaluated in the following. In order to measure the execution times for both algorithms the `timeit` module is used. The algorithms are evaluated for the convolution of two signals $x_L[k]$ and $h_N[k]$ of length $L=N=2^n$ for $n=0, 1, \dots, 16$. ```python import timeit n = np.arange(17) # lengths = 2**n to evaluate reps = 20 # number of repetitions for timeit gain = np.zeros(len(n)) for N in n: length = 2**N # setup environment for timeit tsetup = 'import numpy as np; from numpy.fft import rfft, irfft; \ x=np.random.randn(%d); h=np.random.randn(%d)' % (length, length) # direct convolution tc = timeit.timeit('np.convolve(x, x, "full")', setup=tsetup, number=reps) # fast convolution tf = timeit.timeit('irfft(rfft(x, %d) * rfft(h, %d))' % (2*length, 2*length), setup=tsetup, number=reps) # speedup by using the fast convolution gain[N] = tc/tf # show the results plt.figure(figsize = (15, 10)) plt.barh(n-.5, gain, log=True) plt.plot([1, 1], [-1, n[-1]+1], 'r-') plt.yticks(n, 2**n) plt.xlabel('Gain of fast convolution') plt.ylabel('Length of signals') plt.title('Comparison between direct/fast convolution') plt.grid() ``` **Exercise** * When is the fast convolution more efficient/faster than a direct convolution? * Why is it slower below a given signal length? * Is the trend of the gain as expected by the numerical complexity of the FFT? **Copyright** This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples, 2016-2017*.
\chapter{UML Project Diagrams}\label{ap:uml} %\begin{table}[ht!] % \centering % \caption{Components Implementation} % \label{tab:components-implementation} % \begin{tabular}{l r} % \toprule % \textbf{Component} & \textbf{Implementation}\\ % \midrule % Sniffer & Python \\ % Sqlite Database & Python \\ % TraceAnalyzer & C++ \\ % FlowGenerator & C++ \\ % \bottomrule % \end{tabular} %\end{table} \begin{figure*}[pht!] \centering \includegraphics[width=1.0\textwidth]{figures/apD/sniffer} \caption{Sniffer UML Class Diagram} \label{fig:uml-sniffer} \end{figure*} \clearpage \begin{figure*}[pht!] \centering \begin{turn}{90} \includegraphics[width=1.5\textwidth]{figures/apD/trace-analyzer} \end{turn} \caption{TraceAnalyzer UML Class Diagram} \label{fig:uml-trace-analyzer} \end{figure*} \clearpage \begin{figure*}[pht!] \centering \includegraphics[width=0.9\textwidth]{figures/apD/flow-generator} \caption{FlowGenerator UML Class Diagram} \label{fig:uml-flow-generator} \end{figure*}
/- Copyright (c) 2018 Scott Morrison. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Scott Morrison, Bhavik Mehta -/ import category_theory.functor.const import category_theory.discrete_category /-! # The category `discrete punit` We define `star : C ⥤ discrete punit` sending everything to `punit.star`, show that any two functors to `discrete punit` are naturally isomorphic, and construct the equivalence `(discrete punit ⥤ C) ≌ C`. -/ universes v u -- morphism levels before object levels. See note [category_theory universes]. namespace category_theory variables (C : Type u) [category.{v} C] namespace functor /-- The constant functor sending everything to `punit.star`. -/ @[simps] def star : C ⥤ discrete punit := (functor.const _).obj ⟨⟨⟩⟩ variable {C} /-- Any two functors to `discrete punit` are isomorphic. -/ @[simps] def punit_ext (F G : C ⥤ discrete punit) : F ≅ G := nat_iso.of_components (λ _, eq_to_iso dec_trivial) (λ _ _ _, dec_trivial) /-- Any two functors to `discrete punit` are *equal*. You probably want to use `punit_ext` instead of this. -/ lemma punit_ext' (F G : C ⥤ discrete punit) : F = G := functor.ext (λ _, dec_trivial) (λ _ _ _, dec_trivial) /-- The functor from `discrete punit` sending everything to the given object. -/ abbreviation from_punit (X : C) : discrete punit.{v+1} ⥤ C := (functor.const _).obj X /-- Functors from `discrete punit` are equivalent to the category itself. -/ @[simps] def equiv : (discrete punit ⥤ C) ≌ C := { functor := { obj := λ F, F.obj ⟨⟨⟩⟩, map := λ F G θ, θ.app ⟨⟨⟩⟩ }, inverse := functor.const _, unit_iso := begin apply nat_iso.of_components _ _, intro X, apply discrete.nat_iso, rintro ⟨⟨⟩⟩, apply iso.refl _, intros, ext ⟨⟨⟩⟩, simp, end, counit_iso := begin refine nat_iso.of_components iso.refl _, intros X Y f, dsimp, simp, -- See note [dsimp, simp]. end } end functor /-- A category being equivalent to `punit` is equivalent to it having a unique morphism between any two objects. (In fact, such a category is also a groupoid; see `groupoid.of_hom_unique`) -/ theorem equiv_punit_iff_unique : nonempty (C ≌ discrete punit) ↔ (nonempty C) ∧ (∀ x y : C, nonempty $ unique (x ⟶ y)) := begin split, { rintro ⟨h⟩, refine ⟨⟨h.inverse.obj ⟨⟨⟩⟩⟩, λ x y, nonempty.intro _⟩, apply (unique_of_subsingleton _), swap, { have hx : x ⟶ h.inverse.obj ⟨⟨⟩⟩ := by convert h.unit.app x, have hy : h.inverse.obj ⟨⟨⟩⟩ ⟶ y := by convert h.unit_inv.app y, exact hx ≫ hy, }, have : ∀ z, z = h.unit.app x ≫ (h.functor ⋙ h.inverse).map z ≫ h.unit_inv.app y, { intro z, simpa using congr_arg (≫ (h.unit_inv.app y)) (h.unit.naturality z), }, apply subsingleton.intro, intros a b, rw [this a, this b], simp only [functor.comp_map], congr, }, { rintro ⟨⟨p⟩, h⟩, haveI := λ x y, (h x y).some, refine nonempty.intro (category_theory.equivalence.mk ((functor.const _).obj ⟨⟨⟩⟩) ((functor.const _).obj p) _ (by apply functor.punit_ext)), exact nat_iso.of_components (λ _, { hom := default, inv := default }) (λ _ _ _, by tidy), }, end end category_theory
% !TEX program = xelatex \documentclass{resume} %\usepackage{zh_CN-Adobefonts_external} % Simplified Chinese Support using external fonts (./fonts/zh_CN-Adobe/) %\usepackage{zh_CN-Adobefonts_internal} % Simplified Chinese Support using system fonts \begin{document} \pagenumbering{gobble} % suppress displaying page number \name{Mengqiu Tang} \basicInfo{ \email{[email protected]} \textperiodcentered\ \phone{(+86) 177-6707-3148} \textperiodcentered\ \linkedin[fixme]{https://www.linkedin.com/in/billryan8}} \section{\faGraduationCap\ Education} \datedsubsection{\textbf{Zhejiang University (ZJU)}, Hangzhou, China}{2018 -- Present} \textit{Master's student} in Computer Science, expected March 2021 \datedsubsection{\textbf{Sichuan University}, Chengdu, China}{2014 -- 2018} \textit{B.S.} in Machine Design \section{\faUsers\ Experience} \datedsubsection{\textbf{Qulian Technology} Hangzhou, China}{2018 -- 2019} \role{Intern}{} Brief introduction: golang backend developer. \begin{itemize} \item Developed a blockchain electronic wallet \item Participated in \href{https://filoop.com/}{filoop}; implemented email, node deployment, blockchain monitoring, and other features \item Project management \end{itemize} \datedsubsection{\textbf{Alibaba Inc.} Hangzhou, China}{Nov. 2019 -- Present} \role{Research Intern}{} Brief introduction: Label propagation algorithm (LPA) acceleration \begin{itemize} \item Implemented a general label propagation algorithm framework \item Optimized the memory usage during LPA, compressed the storage of nodes and edges, and optimized the decompression of the compressed graph during label propagation \item GPU thread collaboration during label propagation \end{itemize} \datedsubsection{\textbf{Lian Fang Technology} Hangzhou, China}{Dec. 2019 -- Present} \role{Partner\&Developer}{Startup company} \begin{itemize} \item A startup team consisting of our classmates \item We are developing a blockchain game \item Implemented user center, game history, leaderboard, and blockchain payment features via Nodejs \end{itemize} % Reference Test %\datedsubsection{\textbf{Paper Title\cite{zaharia2012resilient}}}{May. 2015} %An xxx optimized for xxx\cite{verma2015large} %\begin{itemize} % \item main contribution %\end{itemize} \section{\faCogs\ Skills} \begin{itemize}[parsep=0.5ex] \item Programming Languages: golang > C++ > Nodejs \item Platform: Linux \item Development: Web, System, Cloud \end{itemize} \section{\faHeartO\ Honors and Awards} \datedline{\textit{\nth{1} Prize}, Award on the Champion of the First China Graduate Artificial Intelligence Innovation Contest}{ Dec 2019} \datedline{\textit{\nth{2} Prize}, Award on the Ninth National College Student Mathematics Competition, Provincial Second Prize}{ 2017} \section{\faInfo\ Miscellaneous} \begin{itemize}[parsep=0.5ex] \item Blog: http://tangmengqiu.github.io \item GitHub: https://github.com/tangmengqiu \item Languages: English - Fluent (CET6), Mandarin - Native speaker \end{itemize} %% Reference %\newpage %\bibliographystyle{IEEETran} %\bibliography{mycite} \end{document}
(* Title: JinjaThreads/Execute/PCompilerRefine.thy Author: Andreas Lochbihler Tabulation for the compiler *) theory PCompilerRefine imports TypeRelRefine "../Compiler/PCompiler" begin subsection \<open>@{term "compP"}\<close> text \<open> Applying the compiler to a tabulated program either compiles every method twice (once for the program itself and once for method lookup) or recomputes the class and method lookup tabulation from scratch. We follow the second approach. \<close> fun compP_code' :: "(cname \<Rightarrow> mname \<Rightarrow> ty list \<Rightarrow> ty \<Rightarrow> 'a \<Rightarrow> 'b) \<Rightarrow> 'a prog_impl' \<Rightarrow> 'b prog_impl'" where "compP_code' f (P, Cs, s, F, m) = (let P' = map (compC f) P in (P', tabulate_class P', s, F, tabulate_Method P'))" definition compP_code :: "(cname \<Rightarrow> mname \<Rightarrow> ty list \<Rightarrow> ty \<Rightarrow> 'a \<Rightarrow> 'b) \<Rightarrow> 'a prog_impl \<Rightarrow> 'b prog_impl" where "compP_code f P = ProgRefine (compP_code' f (impl_of P))" declare compP.simps [simp del] compP.simps[symmetric, simp] lemma compP_code_code [code abstract]: "impl_of (compP_code f P) = compP_code' f (impl_of P)" apply(cases P) apply(simp add: compP_code_def) apply(subst ProgRefine_inverse) apply(auto simp add: tabulate_subcls_def tabulate_sees_field_def Mapping_inject intro!: ext) done declare compP.simps [simp] compP.simps[symmetric, simp del] lemma compP_program [code]: "compP f (program P) = program (compP_code f P)" by(cases P)(clarsimp simp add: program_def compP_code_code) text \<open>Merge module names to avoid cycles in module dependency\<close> code_identifier code_module PCompiler \<rightharpoonup> (SML) PCompiler and (OCaml) PCompiler and (Haskell) PCompiler | code_module PCompilerRefine \<rightharpoonup> (SML) PCompiler and (OCaml) PCompiler and (Haskell) PCompiler ML_val \<open>@{code compP}\<close> end
--- TODO : -- - retirer des simps et optimiser -- - retirer les derniers sorry (nommer les hypothèses dans ite) import topology.basic import topology.algebra.continuous_functions import topology.continuous_on import topology.algebra.ordered import topology.constructions import topology.algebra.ordered import data.set.function import topology.constructions import tactic.split_ifs import tactics import misc open set topological_space open classical set_option pp.beta true /- Typeclass définissant un type pointé -/ class pointed (α : Type) := (point : α) /- Prends un type pointé et lui renvoit son point base-/ def point (α : Type) [s : pointed α] : α := @pointed.point α s -- On considère par la suite un espace topologique X pointé variable X:Type variable [topological_space X] variable [pointed X] -- Dans la suite du fichier, on pointe ℝ en 0 instance : pointed ℝ := pointed.mk 0 --Définitions par soustype d'un chemin et d'une boucle /-- Chemin sur un type X -/ def path := {f: I → X // continuous f} /-- Boucle sur un type X -/ def loop := {f: I → X // continuous f ∧ f(0)=f(1) ∧ f(0) = point X } /-- Homotopie de lacets -/ def loop_homotopy (f : loop X) (g : loop X) : Prop := ∃ (H : I × I -> X), (∀ t, H(0,t) = f.val(t) ∧ H(1,t)=g.val(t)) ∧ (continuous H) /-- Compostition de lacets -/ noncomputable def loop_comp : (loop X ) -> (loop X) -> (loop X) := λ f g, ⟨λ t, ite (t.val≤0.5) (f.val(⟨2*t.val, sorry⟩) ) (g.val (⟨ 2*t.val-1, sorry⟩)), begin split, -- on doit montrer que la compositions de lacets est un lacet -- on commence par la continuité apply continuous_if, rotate, -- on relègue la preuve de la frontière à la fin apply continuous.comp, exact f.property.1, apply continuous_subtype_mk, apply continuous.mul, exact continuous_const, apply continuous_subtype_val, apply continuous.comp, exact g.property.1, apply continuous_subtype_mk, apply continuous.add, apply continuous.mul, exact continuous_const, apply continuous_subtype_val, exact continuous_const, -- le lacet vaut bien x₀ aux extrémités split_ifs, simp at h_1, exfalso, sorry, split, simp, rw f.property.2.2, conv {to_lhs, -- conv permet de travailler dans le membre de gauche rw <- g.property.2.2, rw g.property.2.1,}, congr, apply subtype.eq', -- on "relève" l'égalité dans le type ambiant simp, ring, -- découle des propriétés dans un anneau simp, rw f.property.2.2, exfalso, simp at h, linarith, exfalso, simp at h, linarith, intros a ha, have a_def : a.val=1/2, -- il faut montrer que la frontière = {1/2} rw frontieronI' at ha, exact ha, rw a_def, rw invtwo, simp, rw g.property.2.2, rw <- f.property.2.1, rw f.property.2.2, end⟩ /- Lacet inverse -/ def loop_inv : loop X -> loop X := λ f, ⟨λ x:I, f.val(⟨1-x.val, oneminus x⟩ ), begin split, apply continuous.comp, exact f.property.left, apply continuous_subtype_mk, apply continuous.sub, apply continuous_const, apply continuous_subtype_val,, split, simp, symmetry, exact f.property.2.1, simp, rw <- f.property.2.1, exact f.property.2.2, end ⟩ /-- L'homotopie est reflexive -/ theorem loop_homotopy_refl : reflexive (loop_homotopy X) := begin intro f, -- on utilise l'homotopie H(t,s) = f(s) let H : I × I -> X := λ x, f.val (x.2), use H, split, -- l'homotopie vaut f aux extremités intros, split, simp *, -- l'homotopie est continue apply continuous.comp, -- on déplit la composition exact f.property.left, -- f est continue exact continuous_snd, -- la projection sur le deuxième élément est continue end /-- L'homotopie est symmétrique -/ theorem loop_homotopy_symm : symmetric (loop_homotopy X) := begin intros 
f g, intro h1, cases h1 with H hH, -- on utilise l'homotopie H_2(t,s) = H(1-t,s) let H2 : I × I -> X := λ x, H(⟨1-x.1.val, oneminus x.1⟩, x.2), use H2, split, -- l'homotopie vaut g et f aux extremités intros, split, simp *, -- on déplit la définition de H_2 simp [ coe_of_0], rw (hH.1 t).2, -- on réécrit g en H(1,t) simp *, -- on déplit la définition de H_2 simp [one_minus_one_coe], rw (hH.1 t).1, -- on réécrit f en H(0,t) -- l'homotopie est continue apply continuous.comp, -- on déplit la composition exact hH.2, -- H est continue par hypothèse apply continuous.prod_mk, -- on déplit le produit apply continuous_subtype_mk, -- on déplit le passage au sous-type apply continuous.sub, -- on déplit la soustraction apply continuous_const, -- une fonction constante est continue apply continuous.comp, -- on déplit la composition exact continuous_subtype_val, -- le "relèvement" de sous-type est continu exact continuous_fst, -- la projection sur le premier élément est continue exact continuous_snd, -- la projection sur le deuxième élément est continue end /-- L'homotopie est transitive -/ theorem loop_homotopy_trans : transitive (loop_homotopy X) := begin intros f g h, intros h1 h2, cases h1 with h1func h1hyp, cases h2 with h2func h2hyp, -- on utilise l'homotopie H₃(t,s) = H₁(2t,s) si t ≤ 0.5 -- H₂(2t-1,s) sinon let H : I × I -> X := λ x, ite (x.1.val≤0.5) (h1func( ⟨ 2*x.1.val, sorry ⟩, x.2)) (h2func(⟨2*x.1.val-1, sorry⟩, x.2 )), use H, split, -- l'homotopie vaut f et h aux extremités intros, split, simp *, split_ifs, -- on déplit la définition d'une condition rw <- (h1hyp.1 t).1, -- soit 0≤1/2 congr, simp [coe_of_0], -- auquel cas les deux arguments sont égaux simp at h_1, -- soit 2<0 exfalso, exact not_2_lt_0 h_1, -- auquel cas on obtient une absurdité simp *, split_ifs, -- on déplit la définition d'une condition simp at h_1, -- soit 1<1/2 exfalso, exact not_1_lt_half h_1, -- auquel cas on obtient une absurdité rw <- (h2hyp.1 t).2, -- soit 1>0 congr, rw <- oneisone, simp, ring, -- continuité simp *, apply continuous_if, rotate, -- partie 1 apply continuous.comp, exact h1hyp.2, apply continuous.prod_mk, apply continuous_subtype_mk, apply continuous.mul, exact continuous_const, apply continuous.comp, exact continuous_subtype_val, -- le "relèvement" de sous-type est continu exact continuous_fst, exact continuous_snd, -- partie 2 apply continuous.comp, exact h2hyp.2, apply continuous.prod_mk, apply continuous_subtype_mk, apply continuous.sub, apply continuous.mul, exact continuous_const, apply continuous.comp, exact continuous_subtype_val, -- le "relèvement" de sous-type est continu exact continuous_fst, exact continuous_const, exact continuous_snd, --frontière intros a ha, have a_def : a.fst.val=1/2, -- il faut montrer que la frontière = {1/2, -} rw frontieronI at ha, exact ha, rw a_def, rw invtwo, simp, rw (h1hyp.1 a.snd).2, rw (h2hyp.1 a.snd).1, end /-- L'homotopie est une relation d'équivalence -/ theorem loop_homotopy_equiv : equivalence (loop_homotopy X) := ⟨ loop_homotopy_refl X, loop_homotopy_symm X, loop_homotopy_trans X⟩ /-- Sétoïde (X, homotopies de X) -/ definition homotopy.setoid : setoid (loop X) := { r := loop_homotopy X, iseqv := loop_homotopy_equiv X} /-- Ensemble des classes d'homotopie -/ definition homotopy_classes := quotient (homotopy.setoid X) /-- Réduction à classe d'équivalence près -/ definition reduce_homotopy: (loop X) → homotopy_classes X := quot.mk (loop_homotopy X) -- notation à améliorer (inférer le type automatiquement) notation `[` f `|` X `]` := reduce_homotopy X f
# This is supposed to be a toy 2D model for issues that arise in # computational geometry for computer aided design. # We work everywhere in the real projective plane, and consider # only straight lines and conic curves and segments thereof. # This is supposed to make everything nicely computable. dp3 := proc(x,y) local i; add(x[i]*y[i],i=1..3); end; nm3 := proc(x) local i; sqrt(add(x[i]^2,i=1..3)); end; xp3 := (u,v) -> [u[2]*v[3]-u[3]*v[2], u[3]*v[1]-u[1]*v[3], u[1]*v[2]-u[2]*v[1]]; tp3 := (u,v,w) -> Determinant(Matrix([u,v,w])); eq3 := (u,v) -> evalb(xp3(u,v) = [0$3]); sp3 := (u) -> [u[1]/u[3],u[2]/u[3]]; `plane_stereo/RP` := (x) -> [x[1]/x[3],x[2]/x[3]]; `plane_unstereo/RP` := (u) -> [u[1],u[2],1]; `disc_stereo/RP` := (x) -> [x[1],x[2]] *~ (signum(x[3])/nm3(x)); `disc_unstereo/RP` := (u) -> [u[1],u[2],sqrt(1-u[1]^2-u[2]^2)]; ###################################################################### `is_element/RP_points` := proc(x) type(x,[numeric,numeric,numeric]); end: `is_leq/RP_points` := NULL: `list_elements/RP_points` := NULL: `count_elements/RP_points` := NULL: `random_element/RP_points` := () -> `random_element/R`(3)(); `dist/RP_points` := proc(x,y) local i; 1 - add(x[i]*y[i],i=1..3)^2/add(x[i]^2,i=1..3) end: `is_equal/RP_points` := (x,y) -> evalb(xp3(x,y) = [0$3]); `in_general_position/RP_points` := (u,v,w) -> evalb(Determinant(Matrix([u,v,w])) = 0); `disc_plot/point` := (x) -> point(`disc_stereo/RP`(x),args[2..-1]); ###################################################################### `is_element/RP_lines` := proc(x) type(x,[numeric,numeric,numeric]); end: `is_leq/RP_lines` := NULL: `list_elements/RP_lines` := NULL: `count_elements/RP_lines` := NULL: `random_element/RP_lines` := () -> `random_element/R`(3)(); `dist/RP_lines` := proc(x,y) local i; 1 - add(x[i]*y[i],i=1..3)^2/add(x[i]^2,i=1..3) end: `is_equal/RP_lines` := (x,y) -> evalb(xp3(x,y) = [0$3]); `in_general_position/RP_lines` := (u,v,w) -> evalb(Determinant(Matrix([u,v,w])) = 0); `track/RP_lines` := proc(x,t) local u,v; u,v := op(NullSpace(Matrix(x))); return (1-t) *~ u +~ t *~ v; end: `disc_plot/line` := proc(x) local u,v,w,p,t,t0,R; u,v := op(NullSpace(Matrix(x))); w := cos(Pi*t) *~ u +~ sin(Pi*t) *~ v; t0 := fsolve(w[3]); if evalf(subs(t = t0,diff(w[3],t))) > 0 then R := (t0 + 0.0001) .. (t0 + 0.9999); else R := (t0 - 0.9999) .. (t0 - 0.0001); fi; p := simplify(`disc_stereo/RP`(w)); plot([op(p),t=R],args[2..-1]); end: ###################################################################### `is_incident/RP` := proc(x,y) local i; evalb(add(x[i]*y[i],i=1..3) = 0); end; `point_join/RP` := (x,y) -> xp3(x,y); `line_meet/RP` := (x,y) -> xp3(x,y); ###################################################################### `is_element/RP_conics` := proc(q) local c; if not(type(q,[numeric$6])) then return false; fi; # We now need to check whether the associated quadratic form # has one negative and two positive eigenvalues. This works # out as follows. 
c := [-q[1]-q[2]-q[3], (q[1]*q[2]+q[2]*q[3]+q[3]*q[1]) - (q[4]^2+q[5]^2+q[6]^2)/4, (q[1]*q[4]^2+q[2]*q[5]^2+q[3]*q[6]^2-q[4]*q[5]*q[6])/4 - q[1]*q[2]*q[3] ]; if c[3] > 0 and (c[1] <= 0 or c[2] <= 0) then return true; fi; return false; end: `is_leq/RP_conics` := NULL: `list_elements/RP_conics` := NULL: `count_elements/RP_conics` := NULL: `random_element/RP_conics` := proc() local q; q := `random_element/R`(6)(); while not(`is_element/RP_conics`(q)) do q := `random_element/R`(6)(); od: return q; end: `dist/RP_conics` := proc(q,r) local i,j; sqrt(add(add((q[i]*r[j]-q[j]*r[i])^2,j=i+1..6),i=1..6)/ (add(q[i]^2,i=1..6) * add(r[i]^2,i=1..6))); end: `is_equal/RP_conics` := proc(q,r) evalb(`dist/RP_conics`(q,r) = 0); end: `conic_matrix/RP` := (q) -> <<q[1]|q[6]/2|q[5]/2>,<q[6]/2|q[2]|q[4]/2>,<q[5]/2|q[4]/2|q[3]>>; `conic_unmatrix/RP` := (Q) -> [Q[1,1],Q[2,2],Q[3,3],2*Q[2,3],2*Q[3,1],2*Q[1,2]]; `conic_eval/RP` := (q) -> (x) -> q[1]*x[1]^2 + q[2]*x[2]^2 + q[3]*x[3]^2 + q[4]*x[2]*x[3] + q[5]*x[3]*x[1] + q[6]*x[1]*x[2]; `conic_bilin/RP` := (q) -> (x,y) -> (`conic_eval/RP`(q)(x +~ y) - `conic_eval/RP`(q)(x -~ y))/4; `conic_coeffs/RP` := (qx,x) -> [ coeff(qx,x[1],2), coeff(qx,x[2],2), coeff(qx,x[3],2), coeff(coeff(qx,x[2],1),x[3],1), coeff(coeff(qx,x[3],1),x[1],1), coeff(coeff(qx,x[1],1),x[2],1) ]; `disc_implicitplot/conic` := proc(q) local u,v,qq; qq := numer(simplify(`conic_eval/RP`(q)(`disc_unstereo/RP`([u,v])))); implicitplot(qq,u=-1.1..1.1,v=-1.1..1.1,args[2..-1]); end: ###################################################################### `is_element/RP_arcs` := proc(xxc) if not(type(xxc,[[numeric$3]$3])) then return false; fi; if Determinant(Matrix(xxc)) = 0 then return false; fi; return true; end: `random_element/RP_arcs` := proc() local xxc,i,j; xxc := [[0$3]$3]; while Determinant(Matrix(xxc)) = 0 do xxc := [seq([seq(rand(-10..10)(),i=1..3)],j=1..3)]; od; return xxc; end: `ratio/RP_arcs` := proc(xxc1,xxc2) local u,v,w,i; u := [seq(dp3(xxc1[i],xxc2[i]),i=1..3)]; v := [seq(dp3(xxc1[i],xxc1[i]),i=1..3)]; w := [seq(dp3(xxc2[i],xxc2[i]),i=1..3)]; if (u *~ u = v *~ w) then return u /~ w; else return NULL; fi; end: `is_equal/RP_arcs` := proc(xxc1,xxc2) local r; r := `ratio/RP_arcs`(xxc1,xxc2); if r = NULL then return false; fi; if r[1] <> r[2] or r[2] <> r[3] then return false; fi; return true; end: `is_similar/RP_arcs` := proc(xxc1,xxc2) local r; r := `ratio/RP_arcs`(xxc1,xxc2); if r = NULL then return false; fi; if r[1]*r[2] - r[3]^2 <> 0 then return false; fi; return true; end: `is_leq/RP_arcs` := NULL: `list_elements/RP_arcs` := NULL: `count_elements/RP_arcs` := NULL: `arc_eval/RP` := (xxc) -> (t) -> (1-t)^2 *~ xxc[1] +~ t^2 *~ xxc[2] +~ (t*(1-t)) *~ xxc[3]; `conic_arc_eval/RP` := (q,xxc) -> (t) -> `conic_eval/RP`(q)(`arc_eval/RP`(xxc)(t)); `conic_contains_point/RP` := proc(q,x) local err; err := abs(`conic_eval/RP`(q)(x)); return evalb(err = 0); end: `conic_contains_arc/RP` := proc(q,xxc) local t,err; err := max(map(abs,[coeffs(expand(`conic_arc_eval/RP`(q,xxc)(t)),t)])); return evalb(err = 0); end: `normalise_similar/RP_arcs` := proc(xxc) local x,c,n; x[1] := xxc[1]; x[2] := xxc[2]; c := xxc[3]; n[1] := nm3(x[1]); n[2] := nm3(x[2]); return [x[1]/~n[1],x[2]/~n[2],c/~sqrt(n[1]*n[2])]; end: `arc_coeffs/RP` := proc(st,t) local su,u; su := factor((1+u)^2 *~ subs(t=u/(1+u),st)); [map(coeff,su,u,0), map(coeff,su,u,2), map(coeff,su,u,1)]; end: `arc_from_tangents/RP` := proc(x1,x2,u1,u2) local m1,m2,n,c; m1 := Determinant(Matrix([x1,x2,u2])); m2 := Determinant(Matrix([x1,x2,u1])); n := - 
Determinant(Matrix([x1,u1,u2])) * m2/m1; c := n *~ x2 +~ m2 *~ u2; return [m1 *~ x1,m2 *~ x2,n *~ c]; end: `arc_inv/RP` := (xxc) -> proc(y) local u1,u2,u3; u1 := Determinant(Matrix(x1,x2,y)); u2 := Determinant(Matrix(x2,c ,y)); u3 := Determinant(Matrix(c ,x1,y)); return u2/(u1+u2); # or u1/(u1+u3); end: `conic_from_arc/RP` := proc(xxc) local x1,x2,c,M; x1,x2,c := op(xxc); M := Matrix( [[x1[1]^2, x1[2]^2, x1[3]^2, x1[2]*x1[3], x1[1]*x1[3], x1[1]*x1[2]], [x2[1]^2, x2[2]^2, x2[3]^2, x2[2]*x2[3], x2[1]*x2[3], x2[1]*x2[2]], [2*c[1]*x1[1], 2*c[2]*x1[2], 2*c[3]*x1[3], c[2]*x1[3]+c[3]*x1[2], c[1]*x1[3]+c[3]*x1[1], c[1]*x1[2]+c[2]*x1[1]], [2*c[1]*x2[1], 2*c[2]*x2[2], 2*c[3]*x2[3], c[2]*x2[3]+c[3]*x2[2], c[1]*x2[3]+c[3]*x2[1], c[1]*x2[2]+c[2]*x2[1]], [c[1]^2+2*x1[1]*x2[1], c[2]^2+2*x1[2]*x2[2], c[3]^2+2*x1[3]*x2[3], c[2]*c[3]+x1[2]*x2[3]+x1[3]*x2[2], c[1]*c[3]+x1[1]*x2[3]+x1[3]*x2[1], c[1]*c[2]+x1[1]*x2[2]+x1[2]*x2[1]]] ); return NullSpace(M)[1]; end: `arc_from_rational_conic/RP` := proc(q) local x,y,z,f,sol,u0,u1,u2,u,t,Z; if not(type(q,[rational$6])) then return NULL; fi; f := `conic_eval/RP`(q)([x,y,z]); sol := isolve(f); if sol = NULL then return NULL; fi; u0 := subs(sol,[x,y,z]); u1 := eval(subs(igcd = (() -> 1),u0)); u2 := u1; for Z in indets(u1) do if degree(u1[1],Z) = 1 then u2 := subs(Z = 1,u2); fi; od: u := subs({indets(u2)[1] = t,indets(u2)[2] = 1-t},u2); return `arc_coeffs/RP`(u,t); end: `arc_from_conic_eigenvalues/RP` := proc(q) local Q,E,V,i,v,xxc; Q := evalf(Matrix(`conic_matrix/RP`(q),shape=symmetric)); E,V := Eigenvectors(Q); for i from 1 to 3 do v[i] := convert(Column(V,i)/sqrt(abs(E[i])),list); od; xxc := [(v[1] +~ v[2])/~2,(v[1] -~ v[2])/~2,v[3]]; return xxc; end: `cut_arc/RP` := proc(xxc,t1,t2) local y1,y2,d; y1 := `arc_eval/RP`(xxc)(t1); y2 := `arc_eval/RP`(xxc)(t2); d := (2*(1-t1)*(1-t2)) *~ xxc[1] +~ (2*t1*t2) *~ xxc[2] +~ (t1 + t2 - 2*t1*t2) *~ xxc[3]; return [y1,y2,d]; end: `shift_arc/RP` := proc(xxc,u) [exp(u),exp(-u),1] *~ xxc; end: `bulge_arc/RP` := proc(xxc,u) local x1,x2,c; x1,x2,c := op(xxc); if u >= 0 then return [x1,x2,u*~c]; else return [x1,x2,u*~c]; fi; end: `disc_plot/arc` := proc(xxc) local u,v,tt,m,i,P,e,t0,t1; u := `arc_eval/RP`(xxc)(t); tt := sort(evalf([op({0,1,fsolve(u[3],t=0..1)})])); m := nops(tt); P := NULL; e := 10.^(-4); for i from 1 to m-1 do t0 := (1-e) *~ tt[i] +~ e *~ tt[i+1]; t1 := e *~ tt[i] +~ (1-e) *~ tt[i+1]; v := simplify(`disc_stereo/RP`(u)) assuming t > t0 and t < t1; P := P,plot([op(v),t=t0..t1],args[2..-1]); od: display(P); end: `disc_point_plot/arc` := proc(xxc) local i,N,P,u,v,p; N := 12; P := NULL; for i from 0 to N do u := evalf(`arc_eval/RP`(xxc)(i/N)); v := `disc_stereo/RP`(u); p := point(v,args[2..-1]); P := P,p; od: display(P); end:
(*<*)theory Advanced imports Even begin(*>*) text \<open> The premises of introduction rules may contain universal quantifiers and monotone functions. A universal quantifier lets the rule refer to any number of instances of the inductively defined set. A monotone function lets the rule refer to existing constructions (such as ``list of'') over the inductively defined set. The examples below show how to use the additional expressiveness and how to reason from the resulting definitions. \<close> subsection\<open>Universal Quantifiers in Introduction Rules \label{sec:gterm-datatype}\<close> text \<open> \index{ground terms example|(}% \index{quantifiers!and inductive definitions|(}% As a running example, this section develops the theory of \textbf{ground terms}: terms constructed from constant and function symbols but not variables. To simplify matters further, we regard a constant as a function applied to the null argument list. Let us declare a datatype \<open>gterm\<close> for the type of ground terms. It is a type constructor whose argument is a type of function symbols. \<close> datatype 'f gterm = Apply 'f "'f gterm list" text \<open> To try it out, we declare a datatype of some integer operations: integer constants, the unary minus operator and the addition operator. \<close> datatype integer_op = Number int | UnaryMinus | Plus text \<open> Now the type \<^typ>\<open>integer_op gterm\<close> denotes the ground terms built over those symbols. The type constructor \<open>gterm\<close> can be generalized to a function over sets. It returns the set of ground terms that can be formed over a set \<open>F\<close> of function symbols. For example, we could consider the set of ground terms formed from the finite set \<open>{Number 2, UnaryMinus, Plus}\<close>. This concept is inductive. If we have a list \<open>args\<close> of ground terms over~\<open>F\<close> and a function symbol \<open>f\<close> in \<open>F\<close>, then we can apply \<open>f\<close> to \<open>args\<close> to obtain another ground term. The only difficulty is that the argument list may be of any length. Hitherto, each rule in an inductive definition referred to the inductively defined set a fixed number of times, typically once or twice. A universal quantifier in the premise of the introduction rule expresses that every element of \<open>args\<close> belongs to our inductively defined set: is a ground term over~\<open>F\<close>. The function \<^term>\<open>set\<close> denotes the set of elements in a given list. \<close> inductive_set gterms :: "'f set \<Rightarrow> 'f gterm set" for F :: "'f set" where step[intro!]: "\<lbrakk>\<forall>t \<in> set args. t \<in> gterms F; f \<in> F\<rbrakk> \<Longrightarrow> (Apply f args) \<in> gterms F" text \<open> To demonstrate a proof from this definition, let us show that the function \<^term>\<open>gterms\<close> is \textbf{monotone}. We shall need this concept shortly. \<close> lemma gterms_mono: "F\<subseteq>G \<Longrightarrow> gterms F \<subseteq> gterms G" apply clarify apply (erule gterms.induct) apply blast done (*<*) lemma gterms_mono: "F\<subseteq>G \<Longrightarrow> gterms F \<subseteq> gterms G" apply clarify apply (erule gterms.induct) (*>*) txt\<open> Intuitively, this theorem says that enlarging the set of function symbols enlarges the set of ground terms. The proof is a trivial rule induction. First we use the \<open>clarify\<close> method to assume the existence of an element of \<^term>\<open>gterms F\<close>. (We could have used \<open>intro subsetI\<close>.) 
We then apply rule induction. Here is the resulting subgoal: @{subgoals[display,indent=0]} The assumptions state that \<open>f\<close> belongs to~\<open>F\<close>, which is included in~\<open>G\<close>, and that every element of the list \<open>args\<close> is a ground term over~\<open>G\<close>. The \<open>blast\<close> method finds this chain of reasoning easily. \<close> (*<*)oops(*>*) text \<open> \begin{warn} Why do we call this function \<open>gterms\<close> instead of \<open>gterm\<close>? A constant may have the same name as a type. However, name clashes could arise in the theorems that Isabelle generates. Our choice of names keeps \<open>gterms.induct\<close> separate from \<open>gterm.induct\<close>. \end{warn} Call a term \textbf{well-formed} if each symbol occurring in it is applied to the correct number of arguments. (This number is called the symbol's \textbf{arity}.) We can express well-formedness by generalizing the inductive definition of \isa{gterms}. Suppose we are given a function called \<open>arity\<close>, specifying the arities of all symbols. In the inductive step, we have a list \<open>args\<close> of such terms and a function symbol~\<open>f\<close>. If the length of the list matches the function's arity then applying \<open>f\<close> to \<open>args\<close> yields a well-formed term. \<close> inductive_set well_formed_gterm :: "('f \<Rightarrow> nat) \<Rightarrow> 'f gterm set" for arity :: "'f \<Rightarrow> nat" where step[intro!]: "\<lbrakk>\<forall>t \<in> set args. t \<in> well_formed_gterm arity; length args = arity f\<rbrakk> \<Longrightarrow> (Apply f args) \<in> well_formed_gterm arity" text \<open> The inductive definition neatly captures the reasoning above. The universal quantification over the \<open>set\<close> of arguments expresses that all of them are well-formed.% \index{quantifiers!and inductive definitions|)} \<close> subsection\<open>Alternative Definition Using a Monotone Function\<close> text \<open> \index{monotone functions!and inductive definitions|(}% An inductive definition may refer to the inductively defined set through an arbitrary monotone function. To demonstrate this powerful feature, let us change the inductive definition above, replacing the quantifier by a use of the function \<^term>\<open>lists\<close>. This function, from the Isabelle theory of lists, is analogous to the function \<^term>\<open>gterms\<close> declared above: if \<open>A\<close> is a set then \<^term>\<open>lists A\<close> is the set of lists whose elements belong to \<^term>\<open>A\<close>. In the inductive definition of well-formed terms, examine the one introduction rule. The first premise states that \<open>args\<close> belongs to the \<open>lists\<close> of well-formed terms. This formulation is more direct, if more obscure, than using a universal quantifier. \<close> inductive_set well_formed_gterm' :: "('f \<Rightarrow> nat) \<Rightarrow> 'f gterm set" for arity :: "'f \<Rightarrow> nat" where step[intro!]: "\<lbrakk>args \<in> lists (well_formed_gterm' arity); length args = arity f\<rbrakk> \<Longrightarrow> (Apply f args) \<in> well_formed_gterm' arity" monos lists_mono text \<open> We cite the theorem \<open>lists_mono\<close> to justify using the function \<^term>\<open>lists\<close>.% \footnote{This particular theorem is installed by default already, but we include the \isakeyword{monos} declaration in order to illustrate its syntax.} @{named_thms [display,indent=0] lists_mono [no_vars] (lists_mono)} Why must the function be monotone? 
An inductive definition describes an iterative construction: each element of the set is constructed by a finite number of introduction rule applications. For example, the elements of \isa{even} are constructed by finitely many applications of the rules @{thm [display,indent=0] even.intros [no_vars]} All references to a set in its inductive definition must be positive. Applications of an introduction rule cannot invalidate previous applications, allowing the construction process to converge. The following pair of rules do not constitute an inductive definition: \begin{trivlist} \item \<^term>\<open>0 \<in> even\<close> \item \<^term>\<open>n \<notin> even \<Longrightarrow> (Suc n) \<in> even\<close> \end{trivlist} Showing that 4 is even using these rules requires showing that 3 is not even. It is far from trivial to show that this set of rules characterizes the even numbers. Even with its use of the function \isa{lists}, the premise of our introduction rule is positive: @{thm [display,indent=0] (prem 1) step [no_vars]} To apply the rule we construct a list \<^term>\<open>args\<close> of previously constructed well-formed terms. We obtain a new term, \<^term>\<open>Apply f args\<close>. Because \<^term>\<open>lists\<close> is monotone, applications of the rule remain valid as new terms are constructed. Further lists of well-formed terms become available and none are taken away.% \index{monotone functions!and inductive definitions|)} \<close> subsection\<open>A Proof of Equivalence\<close> text \<open> We naturally hope that these two inductive definitions of ``well-formed'' coincide. The equality can be proved by separate inclusions in each direction. Each is a trivial rule induction. \<close> lemma "well_formed_gterm arity \<subseteq> well_formed_gterm' arity" apply clarify apply (erule well_formed_gterm.induct) apply auto done (*<*) lemma "well_formed_gterm arity \<subseteq> well_formed_gterm' arity" apply clarify apply (erule well_formed_gterm.induct) (*>*) txt \<open> The \<open>clarify\<close> method gives us an element of \<^term>\<open>well_formed_gterm arity\<close> on which to perform induction. The resulting subgoal can be proved automatically: @{subgoals[display,indent=0]} This proof resembles the one given in {\S}\ref{sec:gterm-datatype} above, especially in the form of the induction hypothesis. Next, we consider the opposite inclusion: \<close> (*<*)oops(*>*) lemma "well_formed_gterm' arity \<subseteq> well_formed_gterm arity" apply clarify apply (erule well_formed_gterm'.induct) apply auto done (*<*) lemma "well_formed_gterm' arity \<subseteq> well_formed_gterm arity" apply clarify apply (erule well_formed_gterm'.induct) (*>*) txt \<open> The proof script is virtually identical, but the subgoal after applying induction may be surprising: @{subgoals[display,indent=0,margin=65]} The induction hypothesis contains an application of \<^term>\<open>lists\<close>. Using a monotone function in the inductive definition always has this effect. The subgoal may look uninviting, but fortunately \<^term>\<open>lists\<close> distributes over intersection: @{named_thms [display,indent=0] lists_Int_eq [no_vars] (lists_Int_eq)} Thanks to this default simplification rule, the induction hypothesis is quickly replaced by its two parts: \begin{trivlist} \item \<^term>\<open>args \<in> lists (well_formed_gterm' arity)\<close> \item \<^term>\<open>args \<in> lists (well_formed_gterm arity)\<close> \end{trivlist} Invoking the rule \<open>well_formed_gterm.step\<close> completes the proof. 
The call to \<open>auto\<close> does all this work. This example is typical of how monotone functions \index{monotone functions} can be used. In particular, many of them distribute over intersection. Monotonicity implies one direction of this set equality; we have this theorem: @{named_thms [display,indent=0] mono_Int [no_vars] (mono_Int)} \<close> (*<*)oops(*>*) subsection\<open>Another Example of Rule Inversion\<close> text \<open> \index{rule inversion|(}% Does \<^term>\<open>gterms\<close> distribute over intersection? We have proved that this function is monotone, so \<open>mono_Int\<close> gives one of the inclusions. The opposite inclusion asserts that if \<^term>\<open>t\<close> is a ground term over both of the sets \<^term>\<open>F\<close> and~\<^term>\<open>G\<close> then it is also a ground term over their intersection, \<^term>\<open>F \<inter> G\<close>. \<close> lemma gterms_IntI: "t \<in> gterms F \<Longrightarrow> t \<in> gterms G \<longrightarrow> t \<in> gterms (F\<inter>G)" (*<*)oops(*>*) text \<open> Attempting this proof, we get the assumption \<^term>\<open>Apply f args \<in> gterms G\<close>, which cannot be broken down. It looks like a job for rule inversion:\cmmdx{inductive\protect\_cases} \<close> inductive_cases gterm_Apply_elim [elim!]: "Apply f args \<in> gterms F" text \<open> Here is the result. @{named_thms [display,indent=0,margin=50] gterm_Apply_elim [no_vars] (gterm_Apply_elim)} This rule replaces an assumption about \<^term>\<open>Apply f args\<close> by assumptions about \<^term>\<open>f\<close> and~\<^term>\<open>args\<close>. No cases are discarded (there was only one to begin with) but the rule applies specifically to the pattern \<^term>\<open>Apply f args\<close>. It can be applied repeatedly as an elimination rule without looping, so we have given the \<open>elim!\<close> attribute. Now we can prove the other half of that distributive law. \<close> lemma gterms_IntI [rule_format, intro!]: "t \<in> gterms F \<Longrightarrow> t \<in> gterms G \<longrightarrow> t \<in> gterms (F\<inter>G)" apply (erule gterms.induct) apply blast done (*<*) lemma "t \<in> gterms F \<Longrightarrow> t \<in> gterms G \<longrightarrow> t \<in> gterms (F\<inter>G)" apply (erule gterms.induct) (*>*) txt \<open> The proof begins with rule induction over the definition of \<^term>\<open>gterms\<close>, which leaves a single subgoal: @{subgoals[display,indent=0,margin=65]} To prove this, we assume \<^term>\<open>Apply f args \<in> gterms G\<close>. Rule inversion, in the form of \<open>gterm_Apply_elim\<close>, infers that every element of \<^term>\<open>args\<close> belongs to \<^term>\<open>gterms G\<close>; hence (by the induction hypothesis) it belongs to \<^term>\<open>gterms (F \<inter> G)\<close>. Rule inversion also yields \<^term>\<open>f \<in> G\<close> and hence \<^term>\<open>f \<in> F \<inter> G\<close>. All of this reasoning is done by \<open>blast\<close>. \smallskip Our distributive law is a trivial consequence of previously-proved results: \<close> (*<*)oops(*>*) lemma gterms_Int_eq [simp]: "gterms (F \<inter> G) = gterms F \<inter> gterms G" by (blast intro!: mono_Int monoI gterms_mono) text_raw \<open> \index{rule inversion|)}% \index{ground terms example|)} \begin{isamarkuptext} \begin{exercise} A function mapping function symbols to their types is called a \textbf{signature}. Given a type ranging over type symbols, we can represent a function's type by a list of argument types paired with the result type. 
Complete this inductive definition: \begin{isabelle} \<close> inductive_set well_typed_gterm :: "('f \<Rightarrow> 't list * 't) \<Rightarrow> ('f gterm * 't)set" for sig :: "'f \<Rightarrow> 't list * 't" (*<*) where step[intro!]: "\<lbrakk>\<forall>pair \<in> set args. pair \<in> well_typed_gterm sig; sig f = (map snd args, rtype)\<rbrakk> \<Longrightarrow> (Apply f (map fst args), rtype) \<in> well_typed_gterm sig" (*>*) text_raw \<open> \end{isabelle} \end{exercise} \end{isamarkuptext} \<close> (*<*) text\<open>the following declaration isn't actually used\<close> primrec integer_arity :: "integer_op \<Rightarrow> nat" where "integer_arity (Number n) = 0" | "integer_arity UnaryMinus = 1" | "integer_arity Plus = 2" text\<open>the rest isn't used: too complicated. OK for an exercise though.\<close> inductive_set integer_signature :: "(integer_op * (unit list * unit)) set" where Number: "(Number n, ([], ())) \<in> integer_signature" | UnaryMinus: "(UnaryMinus, ([()], ())) \<in> integer_signature" | Plus: "(Plus, ([(),()], ())) \<in> integer_signature" inductive_set well_typed_gterm' :: "('f \<Rightarrow> 't list * 't) \<Rightarrow> ('f gterm * 't)set" for sig :: "'f \<Rightarrow> 't list * 't" where step[intro!]: "\<lbrakk>args \<in> lists(well_typed_gterm' sig); sig f = (map snd args, rtype)\<rbrakk> \<Longrightarrow> (Apply f (map fst args), rtype) \<in> well_typed_gterm' sig" monos lists_mono lemma "well_typed_gterm sig \<subseteq> well_typed_gterm' sig" apply clarify apply (erule well_typed_gterm.induct) apply auto done lemma "well_typed_gterm' sig \<subseteq> well_typed_gterm sig" apply clarify apply (erule well_typed_gterm'.induct) apply auto done end (*>*)
#pragma once #include "boost/algorithm/string/split.hpp" #include <boost/algorithm/string.hpp> #include <boost/algorithm/string/trim.hpp> #include <fstream> #include <optional> #include <sstream> #include <string> #include <vector> #include <iostream> namespace pisa { class aol_reader { public: explicit aol_reader(std::istream& is) : m_is(is) {} std::optional<std::string> next_query() { m_is >> std::ws; while (not m_is.eof()) { std::string line; std::getline(m_is, line); std::vector<std::string> fields; boost::algorithm::split(fields, line, boost::is_any_of("\t")); if (fields.size() > 3 and not fields[1].empty() and fields[1] != "-") { return std::make_optional(fields[1]); } } return std::nullopt; } private: std::istream& m_is; }; } // namespace pisa
```python # Erasmus+ ICCT project (2018-1-SI01-KA203-047081) # Toggle cell visibility from IPython.display import HTML tag = HTML(''' Toggle cell visibility <a href="javascript:code_toggle()">here</a>.''') display(tag) # Hide the code completely # from IPython.display import HTML # tag = HTML('''<style> # div.input { # display:none; # } # </style>''') # display(tag) ``` Toggle cell visibility <a href="javascript:code_toggle()">here</a>. ```python %matplotlib notebook import pylab import matplotlib.pyplot as plt import math import sympy as sym import numpy as np import ipywidgets as widgets import control as control import math as math from ipywidgets import interact from IPython.display import Latex, display, Markdown ``` ## Linearization of a function ### Introduction > Linearization is defined as a process of finding a linear approximation of a function at a certain point. The linear approximation of a function is obtained by the Taylor expansion around the point of interest in which only the first two terms are kept. Linearization is an effective method for approximating the output of a function $y=f(x)$ at any $x=x_0+\Delta x$ based on the value and the slope of the function at $x=x_0+\Delta x$, given that $f(x)$ is differentiable on $[x_0,x_0+\Delta x]$ (or $[x_0+\Delta x,x_0]$) and that $x_0$ is close to $x_0+\Delta x$. In short, linearization approximates the output of a function near $x=x_0$. (source: [Wikipedia](https://en.wikipedia.org/wiki/Linearization)) In this example, linearization is defined as: \begin{equation} f(x)\approx f(x_0)+f^{\prime}(x_0) \cdot (x-x_0), \end{equation} where $f^{\prime}=\frac{f(x_0+h)-f(x_0)}{h}$ ($h$ is set to $0.01$ in order to calculate the derivative). Unit step function is defined as: \begin{equation} u(x) = \begin{cases} 0; & \text{$x<0$}\\ 1; & \text{$x\geq0$} \end{cases}, \end{equation} and unit ramp function: \begin{equation} r(x) = \begin{cases} 0; & \text{$x<0$}\\ x; & \text{$x\geq0$} \end{cases}. \end{equation} --- ### How to use this notebook? Move the slider to change the value of $x_0$, i.e. the $x$ value at which you want to linearize the function. 
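Before the interactive plot, the cell below is a minimal sketch of the linearization formula above: it estimates $f^{\prime}(x_0)$ with the same finite-difference step $h=0.01$ used in the introduction and evaluates the tangent line. It assumes only NumPy; `np.sin` and `x0 = 1.0` are purely illustrative choices.

```python
# Minimal sketch of f(x) ≈ f(x0) + f'(x0)*(x - x0) using a forward difference;
# np.sin and x0 = 1.0 are illustrative choices, h = 0.01 as in the introduction.
import numpy as np

def linearize(f, x0, h=0.01):
    """Return the tangent-line approximation of f at x0."""
    fprime = (f(x0 + h) - f(x0)) / h      # finite-difference slope
    return lambda x: f(x0) + fprime * (x - x0)

tangent = linearize(np.sin, 1.0)
for x in [0.9, 1.0, 1.1]:
    print(f"x={x:.1f}  f(x)={np.sin(x):.4f}  tangent={tangent(x):.4f}")
```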
```python # sinus, step, ramp, x^2, sqrt(x) functionSelect = widgets.ToggleButtons( options=[('sine function', 0), ('unit step function', 1), ('unit ramp function', 2), ('parabolic function', 3), ('square root function', 4)], description='Select: ') fig = plt.figure(num='Linearization of a function') fig.set_size_inches((9.8, 3)) fig.set_tight_layout(True) f1 = fig.add_subplot(1, 1, 1) f1.grid(which='both', axis='both', color='lightgray') f1.set_xlabel('$x$') f1.set_ylabel('$f(x)$') f1.axhline(0,Color='black',linewidth=0.5) f1.axvline(0,Color='black',linewidth=0.5) func_plot, = f1.plot([],[]) tang_plot, = f1.plot([],[]) point_plot, = f1.plot([],[]) f1.set_xlim((-5,5)) f1.set_ylim((-6,6)) def create_draw_functions(x0,index): x=np.linspace(-5,5,1001) h=0.001 # equal to \Delta x global func_plot, tang_plot, point_plot if index==0: y=np.sin(x) fprime=(np.sin(x0+h)-np.sin(x0))/h tang=np.sin(x0)+fprime*(x-x0) fx0=np.sin(x0) elif index==1: y=np.zeros(1001) y[510:1001]=1 elif index==2: y=np.zeros(1001) y[500:1001]=np.linspace(0,5,501) elif index==3: y=x*x fprime=((x0+h)*(x0+h)-(x0*x0))/h tang=x0*x0+fprime*(x-x0) fx0=x0*x0 elif index==4: x1=np.linspace(0,5,500) y=np.sqrt(x1) if x0>=0: fprime=(np.sqrt(x0+h)-np.sqrt(x0))/h tang=np.sqrt(x0)+fprime*(x-x0) fx0=np.sqrt(x0) f1.lines.remove(func_plot) f1.lines.remove(tang_plot) f1.lines.remove(point_plot) if index == 0: func_plot, = f1.plot(x,y,label='$f(x)=sin(x)$',color='C0') tang_plot, = f1.plot(x,tang,'--r',label='tangent') point_plot, = f1.plot(x0,fx0,'om',label='$x_0$') for txt in f1.texts: txt.set_visible(False) elif index == 1: # in case of the unit step function if x0==0: func_plot, = f1.step(x,y,label='$f(x)=u(x)$',color='C0') tang_plot, = f1.plot([],[]) point_plot, = f1.plot([],[]) f1.text(0.1,1.3,'Linearization at $x_0=0$ is not possible!',fontsize=14) elif x0<0: tang=np.zeros(1001) func_plot, = f1.step(x,y,label='$f(x)=u(x)$',color='C0') tang_plot, = f1.plot(x,tang,'--r',label='tangent') point_plot, = f1.plot(x0,[0],'om',label='$x_0$') for txt in f1.texts: txt.set_visible(False) elif x0>0: tang=np.ones(1001) func_plot, = f1.step(x,y,label='$f(x)=u(x)$',color='C0') tang_plot, = f1.plot(x,tang,'--r',label='tangent') point_plot, = f1.plot(x0,[1],'om',label='$x_0$') for txt in f1.texts: txt.set_visible(False) elif index==2: # in case of the ramp if x0<0: tang=np.zeros(1001) func_plot, = f1.plot(x,y,label='$f(x)=R(x)$',color='C0') tang_plot, = f1.plot(x,np.zeros(1001),'--r',label='tangent') point_plot, = f1.plot(x0,[0],'om',label='$x_0$') for txt in f1.texts: txt.set_visible(False) elif x0>=0: tang=x func_plot, = f1.plot(x,y,label='$f(x)=R(x)$',color='C0') tang_plot, = f1.plot(x,tang,'--r',label='tangent') point_plot, = f1.plot(x0,x0,'om',label='$x_0$') for txt in f1.texts: txt.set_visible(False) elif index==3: func_plot, = f1.plot(x,y,label='$f(x)=x^2$',color='C0') tang_plot, = f1.plot(x,tang,'--r',label='tangent') point_plot, = f1.plot(x0,fx0,'om',label='$x_0$') for txt in f1.texts: txt.set_visible(False) elif index==4: #in case of the square root function if x0<0: for txt in f1.texts: txt.set_visible(False) func_plot, = f1.plot(x1,y,label='$f(x)=\sqrt{x}$',color='C0') tang_plot, = f1.plot([],[]) point_plot, = f1.plot([],[]) f1.text(-4.9,1.3,'Square root function is not defined for $x<0$!',fontsize=14) else: func_plot, = f1.plot(x1,y,label='$f(x)=\sqrt{x}$',color='C0') tang_plot, = f1.plot(x,tang,'--r',label='tangent') point_plot, = f1.plot(x0,fx0,'om',label='$x_0$') for txt in f1.texts: txt.set_visible(False) if (index==1) and x0==0 or (index==4 
and x0<0): display(Markdown('See comment on the figure.')) else: k=round(((tang[-1]-tang[0])/(x[-1]-x[0])),3) n=round(((tang[-1]-k*x[-1])),3) display(Markdown('Equation of the tangent: $y=%.3fx+%.3f$.'%(k,n))) f1.legend() f1.relim() f1.relim() f1.autoscale_view() f1.autoscale_view() x0_slider = widgets.FloatSlider(value=1, min=-5, max=5, step=0.2, description='$x_0$', continuous_update=True, layout=widgets.Layout(width='auto', flex='5 5 auto'),readout_format='.1f') input_data = widgets.interactive_output(create_draw_functions, {'x0':x0_slider, 'index':functionSelect}) def update_sliders(index): global x0_slider x0val = [0.5, 0.5, 1, 1, 5, 10] x0_slider.value = x0val[index] input_data2 = widgets.interactive_output(update_sliders, {'index':functionSelect}) display(functionSelect) display(x0_slider,input_data) # display(Markdown("The system can be represented as $f(x)=5$ for small excursions of x about x0.")) ``` <IPython.core.display.Javascript object> ToggleButtons(description='Select: ', options=(('sine function', 0), ('unit step function', 1), ('unit ramp fu… FloatSlider(value=1.0, description='$x_0$', layout=Layout(flex='5 5 auto', width='auto'), max=5.0, min=-5.0, r… Output() ```python ```
\section{Action}
# large scale approximation n<-25 alpha<-0.05 z.half.alpha=qnorm(1-alpha/2) C_alpha2_n=(n/2)-z.half.alpha*sqrt(n/4) print(C_alpha2_n)
module Idrlisp.Sexp %default total public export data Sexp : Type -> Type where Num : Double -> Sexp a Sym : String -> Sexp a Str : String -> Sexp a -- For now we don't treat idrlisp strings as byte sequences. Bool : Bool -> Sexp a Nil : Sexp a (::) : (car : Sexp a) -> (cdr : Sexp a) -> Sexp a Pure : a -> Sexp a public export data SList a = Proper (List (Sexp a)) | Improper (Sexp a) namespace Syntax infixr 6 :.: public export data SSyn : Type -> Type where Num : Double -> SSyn a Sym : String -> SSyn a Str : String -> SSyn a Bool : Bool -> SSyn a Nil : SSyn a Quote : SSyn a -> SSyn a Quasiquote : SSyn a -> SSyn a Unquote : SSyn a -> SSyn a UnquoteSplicing : SSyn a -> SSyn a App : (xs : List (SSyn a)) -> {auto ok : NonEmpty xs} -> SSyn a (:.:) : (xs : List (SSyn a)) -> {auto ok : NonEmpty xs} -> SSyn a -> SSyn a Pure : a -> SSyn a export Eq a => Eq (Sexp a) where (==) (Num x) (Num y) = x == y (==) (Sym x) (Sym y) = x == y (==) (Str x) (Str y) = x == y (==) (Bool x) (Bool y) = x == y (==) Nil Nil = True (==) (x :: x') (y :: y') = x == y && x' == y' (==) (Pure x) (Pure y) = x == y (==) _ _ = False export Eq a => Eq (SList a) where (==) (Proper xs) (Proper ys) = xs == ys (==) (Improper x) (Improper y) = x == y (==) _ _ = False export Eq a => Eq (SSyn a) where x == y = assert_total (eq x y) where covering eq : SSyn a -> SSyn a -> Bool eq (Num x) (Num y) = x == y eq (Sym x) (Sym y) = x == y eq (Str x) (Str y) = x == y eq (Bool x) (Bool y) = x == y eq Nil Nil = True eq (Quote x) (Quote y) = x == y eq (Quasiquote x) (Quasiquote y) = x == y eq (Unquote x) (Unquote y) = x == y eq (UnquoteSplicing x) (UnquoteSplicing y) = x == y eq (App xs) (App ys) = xs == ys eq (xs :.: x) (ys :.: y) = xs == ys && x == y eq (Pure x) (Pure y) = x == y eq _ _ = False export Functor Sexp where map f (Num x) = Num x map f (Sym x) = Sym x map f (Str x) = Str x map f (Bool x) = Bool x map f [] = [] map f (car :: cdr) = map f car :: map f cdr map f (Pure x) = Pure (f x) export Foldable Sexp where foldr f init (Num x) = init foldr f init (Sym x) = init foldr f init (Str x) = init foldr f init (Bool x) = init foldr f init [] = init foldr f init (car :: cdr) = foldr f (foldr f init cdr) car foldr f init (Pure x) = f x init export Traversable Sexp where traverse f (Num x) = pure $ Num x traverse f (Sym x) = pure $ Sym x traverse f (Str x) = pure $ Str x traverse f (Bool x) = pure $ Bool x traverse f [] = pure [] traverse f (car :: cdr) = (::) <$> traverse f car <*> traverse f cdr traverse f (Pure x) = Pure <$> f x export Cast (Sexp a) (SList a) where cast (x :: xs) with (cast {to = SList a} xs) | Proper xs' = Proper (x :: xs') | Improper xs' = Improper (x :: xs') cast [] = Proper [] cast x = Improper x export Cast (SList a) (Sexp a) where cast (Proper xs) = foldr (::) Nil xs cast (Improper x) = x export Cast (Sexp a) (SSyn a) where cast x = assert_total (cast x) where covering cast : Sexp a -> SSyn a cast (Num x) = Num x cast (Sym x) = Sym x cast (Str x) = Str x cast (Bool x) = Bool x cast Nil = Nil cast [Sym "quote", x] = Quote (cast x) cast [Sym "quasiquote", x] = Quasiquote (cast x) cast [Sym "unquote", x] = Unquote (cast x) cast [Sym "unquote-splicing", x] = UnquoteSplicing (cast x) cast (x :: y) with (cast y) | App ys = App (cast x :: ys) | Nil = App [cast x] | (ys :.: z) = cast x :: ys :.: z | y' = [cast x] :.: y' cast (Pure x) = Pure x export Cast (SSyn a) (Sexp a) where cast x = assert_total (cast x) where covering cast : SSyn a -> Sexp a cast (Num x) = Num x cast (Sym x) = Sym x cast (Str x) = Str x cast (Bool x) = 
Bool x cast Nil = Nil cast (Quote x) = [Sym "quote", cast x] cast (Quasiquote x) = [Sym "quasiquote", cast x] cast (Unquote x) = [Sym "unquote", cast x] cast (UnquoteSplicing x) = [Sym "unquote-splicing", cast x] cast (App xs) = foldr (::) Nil (map cast xs) cast (xs :.: x) = foldr (::) (cast x) (map cast xs) cast (Pure x) = Pure x export Show a => Show (SSyn a) where show x = assert_total (show' x) where covering show' : SSyn a -> String show' (Num x) = show x show' (Sym x) = x show' (Str x) = show x show' (Bool x) = if x then "#t" else "#f" show' Nil = "()" show' (Quote x) = "'" ++ show x show' (Quasiquote x) = "`" ++ show x show' (Unquote x) = "," ++ show x show' (UnquoteSplicing x) = ",@" ++ show x show' (App xs) = "(" ++ unwords (map show xs) ++ ")" show' (xs :.: x) = "(" ++ unwords (map show xs) ++ " . " ++ show x ++ ")" show' (Pure x) = show x export Show a => Show (Sexp a) where show x = show (the (SSyn a) (cast x)) export Show a => Show (SList a) where show x = show (the (Sexp a) (cast x))
import formula import proof section basics variables {ι : Type} [decidable_eq ι] {gri : ground_interpretation ι} local notation `𝔽` := formula ι gri local notation `𝕋` := type ι gri variables {greq : Π {i : ι}, ∥𝕏 i // gri ∥ → ∥𝕏 i // gri ∥ → 𝔽} local infixr `≅` : 35 := formula.eqext @greq namespace formula def nn : 𝔽 → 𝔽 | (@prime _ _ _ p decp) := @prime _ _ _ p decp | (A ⋀ B) := A.nn ⋀ B.nn | (A ⋁ B) := A.nn ⋁ B.nn | (A ⟹ B) := A.nn ⟹ B.nn | (universal' σ A) := ∀∀ (x : ∥σ∥), ∼∼(A x).nn | (existential' σ A) := ∃∃ (x : ∥σ∥), (A x).nn @[reducible, simp] def dnt (A : 𝔽) := ∼∼A.nn end formula end basics section soundness variables {ι : Type} [decidable_eq ι] {gri : ground_interpretation ι} local notation `𝔽` := formula ι gri local notation `𝕋` := type ι gri variables {greq : Π {i : ι}, ∥𝕏 i // gri ∥ → ∥𝕏 i // gri ∥ → 𝔽} local infixr `≅` : 35 := formula.eqext @greq def clsc : principles := {with_lem := tt, with_markov := ff, with_ip := ff, with_ac := ff} def intu : principles := {with_lem := ff, with_markov := tt, with_ip := ff, with_ac := ff} open proof formula local attribute [simp] nn #check and_contr example : Π (Γ) (A : 𝔽), (proof @greq intu Γ (A ⇔ A.dnt)) := begin intros Γ A, induction A, case prime { simp, } end def dnt_sound (Γ : premises ι gri) : Π A : 𝔽, proof @greq clsc Γ A → proof @greq intu Γ A.dnt | _ (lem A _):= begin simp, dsimp [dnt] at dnt_sound, end end soundness
function bvec2 = bvec_reverse ( n, bvec1 ) %*****************************************************************************80 % %% BVEC_REVERSE reverses a binary vector. % % Discussion: % % A BVEC is an integer vector of binary digits, intended to % represent an integer. BVEC(1) is the units digit, BVEC(N-1) % is the coefficient of 2**(N-2), and BVEC(N) contains sign % information. It is 0 if the number is positive, and 1 if % the number is negative. % % Licensing: % % This code is distributed under the GNU LGPL license. % % Modified: % % 30 November 2006 % % Author: % % John Burkardt % % Parameters: % % Input, integer N, the length of the vectors. % % Input, integer BVEC1(N), the vector to be reversed. % % Output, integer BVEC2(N), the reversed vector. % bvec2(1:n) = bvec1(n:-1:1); return end
/** * Swaggy Jenkins * Jenkins API clients generated from Swagger / Open API specification * * OpenAPI spec version: 1.1.1 * Contact: [email protected] * * NOTE: This class is auto generated by OpenAPI-Generator 3.2.1-SNAPSHOT. * https://openapi-generator.tech * Do not edit the class manually. */ #include "ResponseTimeMonitorData.h" #include <string> #include <sstream> #include <boost/property_tree/ptree.hpp> #include <boost/property_tree/json_parser.hpp> using boost::property_tree::ptree; using boost::property_tree::read_json; using boost::property_tree::write_json; namespace org { namespace openapitools { namespace server { namespace model { ResponseTimeMonitorData::ResponseTimeMonitorData() { m__class = ""; m_Timestamp = 0; m_Average = 0; } ResponseTimeMonitorData::~ResponseTimeMonitorData() { } std::string ResponseTimeMonitorData::toJsonString() { std::stringstream ss; ptree pt; pt.put("_class", m__class); pt.put("Timestamp", m_Timestamp); pt.put("Average", m_Average); write_json(ss, pt, false); return ss.str(); } void ResponseTimeMonitorData::fromJsonString(std::string const& jsonString) { std::stringstream ss(jsonString); ptree pt; read_json(ss,pt); m__class = pt.get("_class", ""); m_Timestamp = pt.get("Timestamp", 0); m_Average = pt.get("Average", 0); } std::string ResponseTimeMonitorData::getClass() const { return m__class; } void ResponseTimeMonitorData::setClass(std::string value) { m__class = value; } int32_t ResponseTimeMonitorData::getTimestamp() const { return m_Timestamp; } void ResponseTimeMonitorData::setTimestamp(int32_t value) { m_Timestamp = value; } int32_t ResponseTimeMonitorData::getAverage() const { return m_Average; } void ResponseTimeMonitorData::setAverage(int32_t value) { m_Average = value; } } } } }
/* * Copyright (c) 2012 Aldebaran Robotics. All rights reserved. * Use of this source code is governed by a BSD-style license that can be * found in the COPYING file. */ #include <qi/macro.hpp> #include <qi/path.hpp> #include <qi/atomic.hpp> #define BOOST_UTF8_BEGIN_NAMESPACE namespace qi { namespace detail { #define BOOST_UTF8_END_NAMESPACE }} #define BOOST_UTF8_DECL #include <boost/detail/utf8_codecvt_facet.hpp> #include <boost/detail/utf8_codecvt_facet.ipp> namespace qi { //this is initialized once.. and will be reported to leak memory. //but that okay for a global to be freed by the program termination static detail::utf8_codecvt_facet *gUtf8CodecvtFacet = nullptr; codecvt_type &unicodeFacet() { QI_THREADSAFE_NEW(gUtf8CodecvtFacet); return *gUtf8CodecvtFacet; } }
module Numeric.MixtureModel.Exponential ( -- * General data types Sample , SampleIdx , Samples , ComponentIdx , Assignments , Weight -- * Exponential parameters , Rate, Beta , Exponential(..) , ComponentParams , paramFromSamples , paramsFromAssignments -- * Exponential distribution , Prob , prob , tauMean, tauVariance , modelProb -- * Gibbs sampling , estimateWeights , updateAssignments -- * Score , scoreAssignments , maxLikelihoodScore -- * Classification , classify ) where import Control.Monad.ST import Data.Function (on) import qualified Data.Vector as VB import Data.Vector.Algorithms.Heap import qualified Data.Vector.Mutable as MV import qualified Data.Vector.Unboxed as V import Numeric.Log hiding (Exp, sum) import qualified Numeric.Log as Log import Numeric.SpecFunctions (logBeta) import Statistics.Sample (mean) import Numeric.Newton import Math.Gamma (gamma) import Data.Random hiding (gamma) import Data.Random.Distribution.Categorical type Prob = Log Double type Sample = Double type SampleIdx = Int type ComponentIdx = Int type Weight = Double type Rate = Double -- ^ The rate parameter type Beta = Double -- ^ The stretching parameter data Exponential = Exp Rate | StretchedExp Rate Beta | FixedExp Rate Beta deriving (Show, Read, Eq) -- k refers to number of components -- N refers to number of samples type Samples = V.Vector Sample -- length == N type Assignments = V.Vector ComponentIdx -- length == N type ComponentParams = VB.Vector (Weight, Exponential) -- length == K -- | `expProb lambda tau` is the probability of `tau` under Exponential -- distribution defined by rate `lambda` prob :: Exponential -> Sample -> Prob prob _ tau | tau < 0 = error "Exponential distribution undefined for tau<0" prob (FixedExp lambda 1) tau = prob (Exp lambda) tau prob (FixedExp lambda beta) tau = prob (StretchedExp lambda beta) tau prob (StretchedExp lambda 1) tau = prob (Exp lambda) tau prob (StretchedExp lambda beta) tau = Log.Exp $ log beta + (beta-1) * log tau + beta * log lambda - (tau * lambda)**beta prob (Exp lambda) tau = Log.Exp $ log lambda - lambda * tau -- | Probability of a sample under a mixture modelProb :: ComponentParams -> Sample -> Prob modelProb _ tau | tau < 0 = error "Exponential distribution undefined for tau<0" modelProb model tau = VB.sum $ VB.map (\(w,e)->prob e tau) model -- | Mean of the given distribution tauMean :: Exponential -> Double tauMean (Exp lambda) = 1 / lambda tauMean (StretchedExp lambda beta) = gamma (1/beta) / beta / lambda tauMean (FixedExp lambda beta) = tauMean (StretchedExp lambda beta) -- | Variance of the given distribution tauVariance :: Exponential -> Double tauVariance (Exp lambda) = 1/lambda^2 tauVariance (StretchedExp lambda beta) = 2 * gamma (2/beta) / lambda^2 / beta tauVariance (FixedExp lambda beta) = tauVariance (StretchedExp lambda beta) -- | Exponential parameter from samples paramFromSamples :: Exponential -> V.Vector Sample -> Exponential paramFromSamples (FixedExp lambda beta) _ = FixedExp lambda beta paramFromSamples _ v | V.null v = error "Can't estimate parameters without samples" paramFromSamples (Exp _) v = Exp $ 1 / mean v paramFromSamples (StretchedExp _ betaOld) v = let v' = runST $ do a <- V.thaw v sort a V.freeze a n = realToFrac $ V.length v' tn = V.head v' s beta = V.sum (V.map (\t->t**beta) v') - n*tn**beta betaOpt beta = let num = V.sum (V.map (\t->t**beta * log t) v') - n*tn**beta*log tn in num / s beta - V.sum (V.map log v') / n - 1 / beta betaOpt' beta = let denom = V.sum (V.map (**beta) v') - n*tn**beta num1 = V.sum (V.map 
(\t->t**beta * log t) v') - n*tn**beta * log tn num2 = V.sum (V.map (\t->t**beta * (log t)^2) v') - n*tn**beta * (log tn)^2 in 1/beta^2 - (num1/denom)^2 + num2/denom beta' = findRoot 1e-5 betaOpt betaOpt' betaOld lambda' = (s beta' / n)**(-1/beta') in StretchedExp lambda' beta' -- | Exponential parameter for component given samples and their -- component assignments paramFromAssignments :: Samples -> Assignments -> (ComponentIdx, Exponential) -> Exponential paramFromAssignments samples assignments (k,p) = paramFromSamples p $ V.map snd $ V.filter (\(k',_)->k==k') $ V.zip assignments samples -- | Exponential parameters for all components given samples and their -- component assignments paramsFromAssignments :: Samples -> VB.Vector Exponential -> Assignments -> VB.Vector Exponential paramsFromAssignments samples params assignments = VB.map (paramFromAssignments samples assignments) $ VB.indexed params -- | Draw a new assignment for a sample given beta parameters drawAssignment :: ComponentParams -> Sample -> RVar ComponentIdx drawAssignment params x = let probs = map (\(w,p)->realToFrac w * prob p x) $ VB.toList params in case filter (isInfinite . ln . fst) $ zip probs [0..] of (x:_) -> return $ snd x otherwise -> categorical $ map (\(p,k)->(realToFrac $ p / sum probs :: Double, k)) $ zip probs [0..] -- | `countIndices n v` is the list of counts countIndices :: Int -> V.Vector Int -> VB.Vector Int countIndices n v = runST $ do accum <- VB.thaw $ VB.replicate n 0 V.forM_ v $ \k -> do n' <- MV.read accum k MV.write accum k $! n'+1 VB.freeze accum -- | Estimate the component weights of a given set of parameters estimateWeights :: Assignments -> VB.Vector Exponential -> ComponentParams estimateWeights assignments params = let counts = countIndices (VB.length params) assignments norm = realToFrac $ V.length assignments weights = VB.map (\n->realToFrac n / norm) counts in VB.zip weights params -- | Sample new assignments for observations under given model parameters updateAssignments :: Samples -> ComponentParams -> RVar Assignments updateAssignments samples params = V.mapM (drawAssignment params) samples -- | "Likelihood" of sample assignments under given model -- parameters. Note that the exponential distribution is a density -- function and as such this will give an unnormalized result unless -- multiplied by dtau^N scoreAssignments :: Samples -> ComponentParams -> Assignments -> Prob scoreAssignments samples params assignments = V.product $ V.map (\(k,x)->let (w,p) = params VB.! k in realToFrac w * prob p x ) $ V.zip assignments samples -- | Maximum likelihood classification classify :: ComponentParams -> Sample -> ComponentIdx classify params x = fst $ VB.maximumBy (compare `on` \(_,(w,p))->realToFrac w * prob p x) $ VB.indexed params -- | Score of the maximum likelihood assignment for a set of samples maxLikelihoodScore :: ComponentParams -> Samples -> Prob maxLikelihoodScore params samples = let assignments = V.map (classify params) samples in scoreAssignments samples params assignments
/- Copyright (c) 2018 Simon Hudon. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Simon Hudon, Patrick Massot -/ import Mathlib.PrePort import Mathlib.Lean3Lib.init.default import Mathlib.tactic.pi_instances import Mathlib.algebra.group.pi import Mathlib.algebra.ring.basic import Mathlib.PostPort universes u v w u_1 u_2 namespace Mathlib /-! # Pi instances for ring This file defines instances for ring, semiring and related structures on Pi Types -/ namespace pi protected instance distrib {I : Type u} {f : I → Type v} [(i : I) → distrib (f i)] : distrib ((i : I) → f i) := distrib.mk Mul.mul Add.add sorry sorry protected instance semiring {I : Type u} {f : I → Type v} [(i : I) → semiring (f i)] : semiring ((i : I) → f i) := semiring.mk Add.add sorry 0 sorry sorry sorry Mul.mul sorry 1 sorry sorry sorry sorry sorry sorry protected instance comm_semiring {I : Type u} {f : I → Type v} [(i : I) → comm_semiring (f i)] : comm_semiring ((i : I) → f i) := comm_semiring.mk Add.add sorry 0 sorry sorry sorry Mul.mul sorry 1 sorry sorry sorry sorry sorry sorry sorry protected instance ring {I : Type u} {f : I → Type v} [(i : I) → ring (f i)] : ring ((i : I) → f i) := ring.mk Add.add sorry 0 sorry sorry Neg.neg (fun (ᾰ ᾰ_1 : (i : I) → f i) (i : I) => ring.sub (ᾰ i) (ᾰ_1 i)) sorry sorry Mul.mul sorry 1 sorry sorry sorry sorry protected instance comm_ring {I : Type u} {f : I → Type v} [(i : I) → comm_ring (f i)] : comm_ring ((i : I) → f i) := comm_ring.mk Add.add sorry 0 sorry sorry Neg.neg (fun (ᾰ ᾰ_1 : (i : I) → f i) (i : I) => comm_ring.sub (ᾰ i) (ᾰ_1 i)) sorry sorry Mul.mul sorry 1 sorry sorry sorry sorry sorry /-- A family of ring homomorphisms `f a : γ →+* β a` defines a ring homomorphism `pi.ring_hom f : γ →+* Π a, β a` given by `pi.ring_hom f x b = f b x`. -/ protected def ring_hom {α : Type u} {β : α → Type v} [R : (a : α) → semiring (β a)] {γ : Type w} [semiring γ] (f : (a : α) → γ →+* β a) : γ →+* (a : α) → β a := ring_hom.mk (fun (x : γ) (b : α) => coe_fn (f b) x) sorry sorry sorry sorry @[simp] theorem ring_hom_apply {α : Type u} {β : α → Type v} [R : (a : α) → semiring (β a)] {γ : Type w} [semiring γ] (f : (a : α) → γ →+* β a) (g : γ) (a : α) : coe_fn (pi.ring_hom f) g a = coe_fn (f a) g := rfl end pi /-- Evaluation of functions into an indexed collection of monoids at a point is a monoid homomorphism. -/ def ring_hom.apply {I : Type u_1} (f : I → Type u_2) [(i : I) → semiring (f i)] (i : I) : ((i : I) → f i) →+* f i := ring_hom.mk (monoid_hom.to_fun (monoid_hom.apply f i)) sorry sorry sorry sorry @[simp] theorem ring_hom.apply_apply {I : Type u_1} (f : I → Type u_2) [(i : I) → semiring (f i)] (i : I) (g : (i : I) → f i) : coe_fn (ring_hom.apply f i) g = g i := rfl end Mathlib
module Inigo.Async.Git import Inigo.Async.Base import Inigo.Async.Promise import Inigo.Async.FS import System.Path ||| Download a git repository optionally specifying the commit, then remove the .git folder ||| Returns `True` iff the file already exists. export git_downloadTo : (url : String) -> (commit : Maybe String) -> (dest : String) -> Promise Bool git_downloadTo url commit dest = do case !(fs_exists dest) of False => do ignore $ system "git" ["clone", "-q", "--progress", "--recurse-submodules", url, dest] Nothing False False maybe (pure ()) (\com => ignore $ system "git" ["checkout", "-q", com] (Just dest) False False) commit fs_rmdir True (dest </> ".git") pure False True => pure True
Volunteers are vital to Thames Reach in our efforts to end street homelessness and support vulnerable and socially excluded people across London. As well as helping individuals to take steps forward in their lives, volunteering also offers opportunities to learn, meet new people and experience something different. Benefacto is a social enterprise helping to get people who work in a professional capacity to volunteer in their local communities. Since last June, they’ve been arranging for professional and corporate volunteers to dedicate some of their time to helping Thames Reach clients at iReach, a weekly digital skills workshop at our Employment Academy in Camberwell. iReach gives people with little or no computer skills the opportunity to learn at their own pace. Funded by the Worshipful Company of Information Technologists, the workshop teaches practical digital skills, so clients can access services and benefits online, and can also help to reduce social isolation. Benefacto volunteer Molly spent an afternoon volunteering at an iReach session earlier this month. “You take it for granted but lots of people can’t use a computer,” she said. “It’s so important for applying for jobs and staying in contact with friends and family. “You can see people’s progress throughout the session. As a volunteer you want to get to a stage where you’re almost in the way. There was a lot of people at the session, everyone was enjoying it, it was amazing to see,” she added. Stevie Back, volunteer coordinator at Benefacto, said: “Doing this gives volunteers a chance to touch down with perhaps an unfamiliar reality and realise the importance of digital inclusion. “We’ve spent a long time seeking out the right charities for our volunteers. People working in a professional or corporate environment will tend to have digital skills they can impart through sessions like iReach, which is a fantastic project. “We’ve really enjoyed working with Thames Reach and will continue to do so. We’ve had 23 volunteers helping out with iReach. They share their expertise and make contact with people in their communities they might otherwise never have met,” she said. Thames Reach digital support worker Chris Hamm, who runs the iReach sessions, said: “When our service users first arrive, some are apprehensive about using the equipment. Some have almost no previous experience of using computers. “After a while, though, you can see how self-confident our learners become. The transformation is lovely to watch and the knowhow that our volunteers share with them plays a big part in that,” he said.
/- Copyright (c) 2020 Aaron Anderson. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Aaron Anderson, Jalex Stark, Kyle Miller, Lu-Ming Zhang -/ import combinatorics.simple_graph.basic import combinatorics.simple_graph.connectivity import data.rel import linear_algebra.matrix.trace import linear_algebra.matrix.symmetric /-! # Adjacency Matrices This module defines the adjacency matrix of a graph, and provides theorems connecting graph properties to computational properties of the matrix. ## Main definitions * `matrix.is_adj_matrix`: `A : matrix V V α` is qualified as an "adjacency matrix" if (1) every entry of `A` is `0` or `1`, (2) `A` is symmetric, (3) every diagonal entry of `A` is `0`. * `matrix.is_adj_matrix.to_graph`: for `A : matrix V V α` and `h : A.is_adj_matrix`, `h.to_graph` is the simple graph induced by `A`. * `matrix.compl`: for `A : matrix V V α`, `A.compl` is supposed to be the adjacency matrix of the complement graph of the graph induced by `A`. * `simple_graph.adj_matrix`: the adjacency matrix of a `simple_graph`. * `simple_graph.adj_matrix_pow_apply_eq_card_walk`: each entry of the `n`th power of a graph's adjacency matrix counts the number of length-`n` walks between the corresponding pair of vertices. -/ open_locale big_operators matrix open finset matrix simple_graph variables {V α β : Type*} namespace matrix /-- `A : matrix V V α` is qualified as an "adjacency matrix" if (1) every entry of `A` is `0` or `1`, (2) `A` is symmetric, (3) every diagonal entry of `A` is `0`. -/ structure is_adj_matrix [has_zero α] [has_one α] (A : matrix V V α) : Prop := (zero_or_one : ∀ i j, (A i j) = 0 ∨ (A i j) = 1 . obviously) (symm : A.is_symm . obviously) (apply_diag : ∀ i, A i i = 0 . obviously) namespace is_adj_matrix variables {A : matrix V V α} @[simp] lemma apply_diag_ne [mul_zero_one_class α] [nontrivial α] (h : is_adj_matrix A) (i : V) : ¬ A i i = 1 := by simp [h.apply_diag i] @[simp] lemma apply_ne_one_iff [mul_zero_one_class α] [nontrivial α] (h : is_adj_matrix A) (i j : V) : ¬ A i j = 1 ↔ A i j = 0 := by { obtain (h|h) := h.zero_or_one i j; simp [h] } @[simp] lemma apply_ne_zero_iff [mul_zero_one_class α] [nontrivial α] (h : is_adj_matrix A) (i j : V) : ¬ A i j = 0 ↔ A i j = 1 := by rw [←apply_ne_one_iff h, not_not] /-- For `A : matrix V V α` and `h : is_adj_matrix A`, `h.to_graph` is the simple graph whose adjacency matrix is `A`. -/ @[simps] def to_graph [mul_zero_one_class α] [nontrivial α] (h : is_adj_matrix A) : simple_graph V := { adj := λ i j, A i j = 1, symm := λ i j hij, by rwa h.symm.apply i j, loopless := λ i, by simp [h] } instance [mul_zero_one_class α] [nontrivial α] [decidable_eq α] (h : is_adj_matrix A) : decidable_rel h.to_graph.adj := by { simp only [to_graph], apply_instance } end is_adj_matrix /-- For `A : matrix V V α`, `A.compl` is supposed to be the adjacency matrix of the complement graph of the graph induced by `A.adj_matrix`. 
-/ def compl [has_zero α] [has_one α] [decidable_eq α] [decidable_eq V] (A : matrix V V α) : matrix V V α := λ i j, ite (i = j) 0 (ite (A i j = 0) 1 0) section compl variables [decidable_eq α] [decidable_eq V] (A : matrix V V α) @[simp] lemma compl_apply_diag [has_zero α] [has_one α] (i : V) : A.compl i i = 0 := by simp [compl] @[simp] lemma compl_apply [has_zero α] [has_one α] (i j : V) : A.compl i j = 0 ∨ A.compl i j = 1 := by { unfold compl, split_ifs; simp, } @[simp] lemma is_symm_compl [has_zero α] [has_one α] (h : A.is_symm) : A.compl.is_symm := by { ext, simp [compl, h.apply, eq_comm], } @[simp] lemma is_adj_matrix_compl [has_zero α] [has_one α] (h : A.is_symm) : is_adj_matrix A.compl := { symm := by simp [h] } namespace is_adj_matrix variable {A} @[simp] lemma compl [has_zero α] [has_one α] (h : is_adj_matrix A) : is_adj_matrix A.compl := is_adj_matrix_compl A h.symm lemma to_graph_compl_eq [mul_zero_one_class α] [nontrivial α] (h : is_adj_matrix A) : h.compl.to_graph = (h.to_graph)ᶜ := begin ext v w, cases h.zero_or_one v w with h h; by_cases hvw : v = w; simp [matrix.compl, h, hvw] end end is_adj_matrix end compl end matrix open matrix namespace simple_graph variables (G : simple_graph V) [decidable_rel G.adj] variables (α) /-- `adj_matrix G α` is the matrix `A` such that `A i j = (1 : α)` if `i` and `j` are adjacent in the simple graph `G`, and otherwise `A i j = 0`. -/ def adj_matrix [has_zero α] [has_one α] : matrix V V α | i j := if (G.adj i j) then 1 else 0 variable {α} @[simp] lemma adj_matrix_apply (v w : V) [has_zero α] [has_one α] : G.adj_matrix α v w = if (G.adj v w) then 1 else 0 := rfl @[simp] theorem transpose_adj_matrix [has_zero α] [has_one α] : (G.adj_matrix α)ᵀ = G.adj_matrix α := by { ext, simp [adj_comm] } @[simp] lemma is_symm_adj_matrix [has_zero α] [has_one α] : (G.adj_matrix α).is_symm := transpose_adj_matrix G variable (α) /-- The adjacency matrix of `G` is an adjacency matrix. -/ @[simp] lemma is_adj_matrix_adj_matrix [has_zero α] [has_one α] : (G.adj_matrix α).is_adj_matrix := { zero_or_one := λ i j, by by_cases G.adj i j; simp [h] } /-- The graph induced by the adjacency matrix of `G` is `G` itself. 
-/ lemma to_graph_adj_matrix_eq [mul_zero_one_class α] [nontrivial α] : (G.is_adj_matrix_adj_matrix α).to_graph = G := begin ext, simp only [is_adj_matrix.to_graph_adj, adj_matrix_apply, ite_eq_left_iff, zero_ne_one], apply not_not, end variables {α} [fintype V] @[simp] lemma adj_matrix_dot_product [non_assoc_semiring α] (v : V) (vec : V → α) : dot_product (G.adj_matrix α v) vec = ∑ u in G.neighbor_finset v, vec u := by simp [neighbor_finset_eq_filter, dot_product, sum_filter] @[simp] lemma dot_product_adj_matrix [non_assoc_semiring α] (v : V) (vec : V → α) : dot_product vec (G.adj_matrix α v) = ∑ u in G.neighbor_finset v, vec u := by simp [neighbor_finset_eq_filter, dot_product, sum_filter, finset.sum_apply] @[simp] lemma adj_matrix_mul_vec_apply [non_assoc_semiring α] (v : V) (vec : V → α) : ((G.adj_matrix α).mul_vec vec) v = ∑ u in G.neighbor_finset v, vec u := by rw [mul_vec, adj_matrix_dot_product] @[simp] lemma adj_matrix_vec_mul_apply [non_assoc_semiring α] (v : V) (vec : V → α) : ((G.adj_matrix α).vec_mul vec) v = ∑ u in G.neighbor_finset v, vec u := begin rw [← dot_product_adj_matrix, vec_mul], refine congr rfl _, ext, rw [← transpose_apply (adj_matrix α G) x v, transpose_adj_matrix], end @[simp] lemma adj_matrix_mul_apply [non_assoc_semiring α] (M : matrix V V α) (v w : V) : (G.adj_matrix α ⬝ M) v w = ∑ u in G.neighbor_finset v, M u w := by simp [mul_apply, neighbor_finset_eq_filter, sum_filter] @[simp] variable (α) @[simp] theorem trace_adj_matrix [add_comm_monoid α] [has_one α] : matrix.trace (G.adj_matrix α) = 0 := by simp [matrix.trace] variable {α} theorem adj_matrix_mul_self_apply_self [non_assoc_semiring α] (i : V) : ((G.adj_matrix α) ⬝ (G.adj_matrix α)) i i = degree G i := by simp [degree] variable {G} @[simp] lemma adj_matrix_mul_vec_const_apply [semiring α] {a : α} {v : V} : (G.adj_matrix α).mul_vec (function.const _ a) v = G.degree v * a := by simp [degree] lemma adj_matrix_mul_vec_const_apply_of_regular [semiring α] {d : ℕ} {a : α} (hd : G.is_regular_of_degree d) {v : V} : (G.adj_matrix α).mul_vec (function.const _ a) v = (d * a) := by simp [hd v] theorem adj_matrix_pow_apply_eq_card_walk [decidable_eq V] [semiring α] (n : ℕ) (u v : V) : (G.adj_matrix α ^ n) u v = fintype.card {p : G.walk u v | p.length = n} := begin rw card_set_walk_length_eq, induction n with n ih generalizing u v, { obtain rfl | h := eq_or_ne u v; simp [finset_walk_length, *] }, { nth_rewrite 0 [nat.succ_eq_one_add], simp only [pow_add, pow_one, finset_walk_length, ih, mul_eq_mul, adj_matrix_mul_apply], rw finset.card_bUnion, { norm_cast, rw set.sum_indicator_subset _ (subset_univ (G.neighbor_finset u)), congr' 2, ext x, split_ifs with hux; simp [hux], }, /- Disjointness for card_bUnion -/ { intros x hx y hy hxy p hp, split_ifs at hp with hx hy; simp only [inf_eq_inter, empty_inter, inter_empty, not_mem_empty, mem_inter, mem_map, function.embedding.coe_fn_mk, exists_prop] at hp; try { simpa using hp }, obtain ⟨⟨qx, hql, hqp⟩, ⟨rx, hrl, hrp⟩⟩ := hp, unify_equations hqp hrp, exact absurd rfl hxy, } }, end end simple_graph namespace matrix.is_adj_matrix variables [mul_zero_one_class α] [nontrivial α] variables {A : matrix V V α} (h : is_adj_matrix A) /-- If `A` is qualified as an adjacency matrix, then the adjacency matrix of the graph induced by `A` is itself. -/ lemma adj_matrix_to_graph_eq [decidable_eq α] : h.to_graph.adj_matrix α = A := begin ext i j, obtain (h'|h') := h.zero_or_one i j; simp [h'], end end matrix.is_adj_matrix
/* specfunc/gsl_sf_airy.h * * Copyright (C) 1996, 1997, 1998, 1999, 2000 Gerard Jungman * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or (at * your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */ /* Author: G. Jungman */ #ifndef __GSL_SF_AIRY_H__ #define __GSL_SF_AIRY_H__ #include <gsl/gsl_mode.h> #include <gsl/gsl_sf_result.h> #undef __BEGIN_DECLS #undef __END_DECLS #ifdef __cplusplus # define __BEGIN_DECLS extern "C" { # define __END_DECLS } #else # define __BEGIN_DECLS /* empty */ # define __END_DECLS /* empty */ #endif __BEGIN_DECLS /* Airy function Ai(x) * * exceptions: GSL_EUNDRFLW */ int gsl_sf_airy_Ai_e(const double x, const gsl_mode_t mode, gsl_sf_result * result); double gsl_sf_airy_Ai(const double x, gsl_mode_t mode); /* Airy function Bi(x) * * exceptions: GSL_EOVRFLW */ int gsl_sf_airy_Bi_e(const double x, gsl_mode_t mode, gsl_sf_result * result); double gsl_sf_airy_Bi(const double x, gsl_mode_t mode); /* scaled Ai(x): * Ai(x) x < 0 * exp(+2/3 x^{3/2}) Ai(x) x > 0 * * exceptions: none */ int gsl_sf_airy_Ai_scaled_e(const double x, gsl_mode_t mode, gsl_sf_result * result); double gsl_sf_airy_Ai_scaled(const double x, gsl_mode_t mode); /* scaled Bi(x): * Bi(x) x < 0 * exp(-2/3 x^{3/2}) Bi(x) x > 0 * * exceptions: none */ int gsl_sf_airy_Bi_scaled_e(const double x, gsl_mode_t mode, gsl_sf_result * result); double gsl_sf_airy_Bi_scaled(const double x, gsl_mode_t mode); /* derivative Ai'(x) * * exceptions: GSL_EUNDRFLW */ int gsl_sf_airy_Ai_deriv_e(const double x, gsl_mode_t mode, gsl_sf_result * result); double gsl_sf_airy_Ai_deriv(const double x, gsl_mode_t mode); /* derivative Bi'(x) * * exceptions: GSL_EOVRFLW */ int gsl_sf_airy_Bi_deriv_e(const double x, gsl_mode_t mode, gsl_sf_result * result); double gsl_sf_airy_Bi_deriv(const double x, gsl_mode_t mode); /* scaled derivative Ai'(x): * Ai'(x) x < 0 * exp(+2/3 x^{3/2}) Ai'(x) x > 0 * * exceptions: none */ int gsl_sf_airy_Ai_deriv_scaled_e(const double x, gsl_mode_t mode, gsl_sf_result * result); double gsl_sf_airy_Ai_deriv_scaled(const double x, gsl_mode_t mode); /* scaled derivative: * Bi'(x) x < 0 * exp(-2/3 x^{3/2}) Bi'(x) x > 0 * * exceptions: none */ int gsl_sf_airy_Bi_deriv_scaled_e(const double x, gsl_mode_t mode, gsl_sf_result * result); double gsl_sf_airy_Bi_deriv_scaled(const double x, gsl_mode_t mode); /* Zeros of Ai(x) */ int gsl_sf_airy_zero_Ai_e(unsigned int s, gsl_sf_result * result); double gsl_sf_airy_zero_Ai(unsigned int s); /* Zeros of Bi(x) */ int gsl_sf_airy_zero_Bi_e(unsigned int s, gsl_sf_result * result); double gsl_sf_airy_zero_Bi(unsigned int s); /* Zeros of Ai'(x) */ int gsl_sf_airy_zero_Ai_deriv_e(unsigned int s, gsl_sf_result * result); double gsl_sf_airy_zero_Ai_deriv(unsigned int s); /* Zeros of Bi'(x) */ int gsl_sf_airy_zero_Bi_deriv_e(unsigned int s, gsl_sf_result * result); double gsl_sf_airy_zero_Bi_deriv(unsigned int s); __END_DECLS #endif /* __GSL_SF_AIRY_H__ */
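The header above only declares the GSL entry points. As an added numerical illustration (not part of GSL), SciPy exposes the same special functions, and `scipy.special.airye` uses the same scaling convention documented in the comments above for real $x > 0$; the sketch below checks that relationship.

```python
# Added illustration (not part of GSL): Airy functions via SciPy,
# checking the scaled/unscaled relationship described in the header for x > 0.
import numpy as np
from scipy.special import airy, airye

x = 2.5
ai, aip, bi, bip = airy(x)       # Ai(x), Ai'(x), Bi(x), Bi'(x)
eai, eaip, ebi, ebip = airye(x)  # exponentially scaled versions

scale = np.exp(2.0 / 3.0 * x ** 1.5)
print(np.isclose(eai, ai * scale))  # Ai_scaled = exp(+2/3 x^(3/2)) * Ai(x)
print(np.isclose(ebi, bi / scale))  # Bi_scaled = exp(-2/3 x^(3/2)) * Bi(x)
```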
{-# OPTIONS --without-K #-} open import HoTT.Base open import HoTT.Equivalence module HoTT.Equivalence.Coproduct where open variables private variable A' B' : 𝒰 i +-empty₁ : 𝟎 {i} + B ≃ B +-empty₁ = let open Iso in iso→eqv λ where .f (inl ()) .f (inr b) → b .g → inr .η (inl ()) .η (inr b) → refl .ε _ → refl +-empty₂ : A + 𝟎 {j} ≃ A +-empty₂ = let open Iso in iso→eqv λ where .f (inl a) → a .f (inr ()) .g → inl .η (inl b) → refl .η (inr ()) .ε _ → refl +-comm : A + B ≃ B + A +-comm = let open Iso in iso→eqv λ where .f (inl a) → inr a .f (inr b) → inl b .g (inl b) → inr b .g (inr a) → inl a .η (inl a) → refl .η (inr b) → refl .ε (inl b) → refl .ε (inr a) → refl +-equiv : A ≃ A' → B ≃ B' → A + B ≃ A' + B' +-equiv e₁ e₂ = iso→eqv λ where .f (inl a) → inl (f₁ a) .f (inr b) → inr (f₂ b) .g (inl a') → inl (g₁ a') .g (inr b') → inr (g₂ b') .η (inl a) → ap inl (η₁ a) .η (inr b) → ap inr (η₂ b) .ε (inl a') → ap inl (ε₁ a') .ε (inr b') → ap inr (ε₂ b') where open Iso open Iso (eqv→iso e₁) renaming (f to f₁ ; g to g₁ ; η to η₁ ; ε to ε₁) open Iso (eqv→iso e₂) renaming (f to f₂ ; g to g₂ ; η to η₂ ; ε to ε₂)
= = = 1999 – 2002 = = =
module GameServer.Protocol.Binary import Data.List import Network.Socket.Raw import System ||| Returns an unsigned `Integer` value as a list of bytes mkBytes : List Int -> Integer -> List Int mkBytes acc 0 = acc mkBytes acc n = let m = n `mod` 256 q = n `div` 256 in mkBytes (cast m :: acc) q export readBytes : (ptr: BufPtr) -> (offset:Integer) -> (len: Int) -> List Int -> IO (List Int) readBytes _ _ 0 acc = pure acc readBytes ptr offset n acc = do byte <- sock_peek ptr (cast offset + n-1) readBytes ptr offset (n-1) (byte :: acc) export putBytes : (ptr: BufPtr) -> (offset:Integer) -> (len:Int) -> List Int -> IO () putBytes ptr offset len bs = putBytes' len bs where putBytes' : (len:Int) -> List Int -> IO () putBytes' 0 bs = pure () putBytes' n (b::bs) = do sock_poke ptr (cast offset + len - n) b putBytes' (n-1) bs mkInteger : (acc:Integer) -> (x:Int) -> Integer mkInteger acc x = acc * 256 + cast x padding : Integer -> Nat -> List Int padding pad len = replicate (fromInteger $ pad - cast len) (the Int 0) ||| Writes an unsigned `Integer` value as a 64 bits in big endian form ||| into a buffer. ||| It assumes that 1/ the Integer is no larger than 2^63 -1 to fit ||| into 64 bits and 2/ the `ptr` points to a buffer with enough ||| room to fit 8 bytes export write64be : Integer -> (ptr: BufPtr) -> IO () write64be val ptr = do let bytes = mkBytes [] val let bytes8 = padding 8 (length bytes) ++ bytes putBytes ptr 0 8 bytes8 ||| Read a 64 bits big endian value as an unsigned `Integer` ||| Assumes there are at least 8 bytes in the buffer pointed at by ||| `ptr` export read64be : (ptr: BufPtr) -> IO Integer read64be ptr = do bytes <- readBytes ptr 0 8 [] pure $ foldl mkInteger 0 bytes export write32be : Integer -> (offset : Int) -> (ptr: BufPtr) -> IO () write32be val offset ptr = do let bytes = mkBytes [] val let bytes8 = padding 8 (length bytes) ++ bytes putBytes ptr (cast offset) 8 bytes8 ||| Read a 32 bits big endian value as an unsigned `Integer` ||| Assumes there are at least 8 bytes in the buffer pointed at by ||| `ptr` export read32be : (ptr: BufPtr) -> (offset : Integer) -> IO Integer read32be ptr offset = do bytes <- readBytes ptr offset 4 [] pure $ foldl mkInteger 0 bytes ||| Read a length-encoded string from given buffers ||| Strings are assumed to be encoded as a 32 big endian length and ||| followed by utf8 encoded number of bytes export readString : (ptr : BufPtr) -> (offset : Integer) -> IO (String, Integer) readString ptr offset = do len <- read32be ptr offset chars <- map chr <$> readBytes ptr (offset+4) (cast len) [] pure $ (pack chars, offset+4+len) export writeString : (ptr : BufPtr) -> (offset : Integer) -> String -> IO Integer writeString ptr offset str = do let chars = unpack str let len = length chars write32be (cast len) (cast offset) ptr putBytes ptr (offset + 4) (cast len) (map ord chars) pure $ offset + 4 + cast len namespace Test export test_parseLength : IO Integer test_parseLength = do len_buf <- sock_alloc 8 write64be 12345678901234567 len_buf read64be len_buf export test_parseLength2 : IO Integer test_parseLength2 = do len_buf <- sock_alloc 8 write64be (-12345678901234567) len_buf read64be len_buf
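For comparison, the wire format implemented by the Idris module above (64-bit and 32-bit big-endian unsigned integers, plus 32-bit length-prefixed UTF-8 strings) can be sketched in Python with the standard `struct` module. This is an added illustration and not part of the GameServer code; the helper names are made up for the example.

```python
# Added sketch: the big-endian encodings used above, via Python's struct module.
import struct

def write64be(n: int) -> bytes:
    return struct.pack(">Q", n)            # 8 bytes, big endian, unsigned

def read64be(buf: bytes) -> int:
    return struct.unpack(">Q", buf[:8])[0]

def write_string(s: str) -> bytes:
    data = s.encode("utf-8")
    return struct.pack(">I", len(data)) + data   # 32-bit length prefix + bytes

def read_string(buf: bytes):
    (length,) = struct.unpack(">I", buf[:4])
    return buf[4:4 + length].decode("utf-8"), 4 + length

assert read64be(write64be(12345678901234567)) == 12345678901234567
assert read_string(write_string("hello"))[0] == "hello"
```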
# GraphHopper Directions API # # You use the GraphHopper Directions API to add route planning, navigation and route optimization to your software. E.g. the Routing API has turn instructions and elevation data and the Route Optimization API solves your logistic problems and supports various constraints like time window and capacity restrictions. Also it is possible to get all distances between all locations with our fast Matrix API. # # OpenAPI spec version: 1.0.0 # # Generated by: https://github.com/swagger-api/swagger-codegen.git #' CostMatrixData Class #' #' @field times #' @field distances #' @field info #' #' @importFrom R6 R6Class #' @importFrom jsonlite fromJSON toJSON #' @export CostMatrixData <- R6::R6Class( 'CostMatrixData', public = list( `times` = NULL, `distances` = NULL, `info` = NULL, initialize = function(`times`, `distances`, `info`){ if (!missing(`times`)) { stopifnot(is.list(`times`), length(`times`) != 0) lapply(`times`, function(x) stopifnot(R6::is.R6(x))) self$`times` <- `times` } if (!missing(`distances`)) { stopifnot(is.list(`distances`), length(`distances`) != 0) lapply(`distances`, function(x) stopifnot(R6::is.R6(x))) self$`distances` <- `distances` } if (!missing(`info`)) { stopifnot(R6::is.R6(`info`)) self$`info` <- `info` } }, toJSON = function() { CostMatrixDataObject <- list() if (!is.null(self$`times`)) { CostMatrixDataObject[['times']] <- lapply(self$`times`, function(x) x$toJSON()) } if (!is.null(self$`distances`)) { CostMatrixDataObject[['distances']] <- lapply(self$`distances`, function(x) x$toJSON()) } if (!is.null(self$`info`)) { CostMatrixDataObject[['info']] <- self$`info`$toJSON() } CostMatrixDataObject }, fromJSON = function(CostMatrixDataJson) { CostMatrixDataObject <- jsonlite::fromJSON(CostMatrixDataJson) if (!is.null(CostMatrixDataObject$`times`)) { self$`times` <- lapply(CostMatrixDataObject$`times`, function(x) { timesObject <- Integer$new() timesObject$fromJSON(jsonlite::toJSON(x, auto_unbox = TRUE)) timesObject }) } if (!is.null(CostMatrixDataObject$`distances`)) { self$`distances` <- lapply(CostMatrixDataObject$`distances`, function(x) { distancesObject <- Numeric$new() distancesObject$fromJSON(jsonlite::toJSON(x, auto_unbox = TRUE)) distancesObject }) } if (!is.null(CostMatrixDataObject$`info`)) { infoObject <- CostMatrixDataInfo$new() infoObject$fromJSON(jsonlite::toJSON(CostMatrixDataObject$info, auto_unbox = TRUE)) self$`info` <- infoObject } }, toJSONString = function() { sprintf( '{ "times": [%s], "distances": [%s], "info": %s }', lapply(self$`times`, function(x) paste(x$toJSON(), sep=",")), lapply(self$`distances`, function(x) paste(x$toJSON(), sep=",")), self$`info`$toJSON() ) }, fromJSONString = function(CostMatrixDataJson) { CostMatrixDataObject <- jsonlite::fromJSON(CostMatrixDataJson) self$`times` <- lapply(CostMatrixDataObject$`times`, function(x) Integer$new()$fromJSON(jsonlite::toJSON(x, auto_unbox = TRUE))) self$`distances` <- lapply(CostMatrixDataObject$`distances`, function(x) Numeric$new()$fromJSON(jsonlite::toJSON(x, auto_unbox = TRUE))) CostMatrixDataInfoObject <- CostMatrixDataInfo$new() self$`info` <- CostMatrixDataInfoObject$fromJSON(jsonlite::toJSON(CostMatrixDataObject$info, auto_unbox = TRUE)) } ) )
########################### FUNCOES RELATIVAS AO USO E EXTRACAO DE GRADES ########################## #' Objetos \code{gradecolina} #' #' Tabelas regulares amostradas de um objeto \code{\link{interpolador}}. #' #' A chamada do metodo \code{predict} em objetos da classe \code{interpolador} e suas subclasses #' pode ser retornada de forma mais completa. Esta forma consiste na tabela regularmente espacada #' para representacao discreta da curva colina. #' #' Objetos do tipo \code{gradecolina} sao uma lista de dois elementos, o primeiro dos quais um #' data.table com as colunas #' #' \describe{ #' \item{\code{hl}}{queda liquida} #' \item{\code{pot}}{potencia gerada} #' \item{\code{rend}}{rendimento interpolado} #' \item{\code{inhull}}{booleano indicando se o ponto foi interpolado (\code{TRUE}) ou extrapolado (\code{FALSE})} #' } #' #' O segundo elemento e um objeto \code{curvacolina}, contendo a colina original. #' #' @name gradecolina #' #' @family gradecolina NULL #' Construtor Interno De \code{gradecolina} #' #' Funcao interna do pacote, nao deve ser chamada pelo usuario #' #' @param pontos data.frame-like contendo coordenadas interpoladas #' @param rends vetor numerico de rendimentos interpolados #' @param interpolador tipo de interpolador de onde foi extraida #' #' @return objeto da classe \code{gradecolina}; lista de dois elementos, o primeiro dos quais um #' data.table com as colunas #' #' \describe{ #' \item{\code{hl}}{queda liquida} #' \item{\code{pot}}{potencia gerada} #' \item{\code{rend}}{rendimento interpolado} #' \item{\code{inhull}}{booleano indicando se o ponto foi interpolado (\code{TRUE}) ou extrapolado (\code{FALSE})} #' } #' #' O segundo elemento e um objeto \code{curvacolina}, contendo a colina original #' #' @importFrom geometry inhulln convhulln new_gradecolina <- function(pontos, rends, interpolador) { hl <- pot <- vaz <- rend <- NULL colina <- getcolina(interpolador) nhl <- length(unique(pontos[, "hl"])) npot <- length(unique(pontos[, "pot"])) grade <- as.data.table(cbind(pontos, rend = rends)) inhull <- inhulln(convhulln(colina$CC[, list(hl, pot)]), data.matrix(pontos)) grade$inhull <- inhull out <- list(grade = grade, colina = colina) class(out) <- "gradecolina" attr(out, "interp") <- class(interpolador)[1] attr(out, "nhl") <- nhl attr(out, "npot") <- npot return(out) } # METODOS ------------------------------------------------------------------------------------------ #' Interpolacao Bilinear #' #' Interpolacao bilinear de \code{pontos} na grade bivariada \code{gradecolina} #' #' \code{predict.gradecolina} interpola pontos arbitrarios especificados atraves do argumento #' \code{pontos}. \code{fitted.gradecolina} interpola os pontos da propria curva colina original #' na grade. \code{residuals.gradecolina} retorna os erros de interpolacao dos pontos da colina #' original na grade. #' #' @param object objeto da classe \code{gradecolina} #' @param pontos data.frame-like contendo as coordenadas \code{hl} e \code{pot} dos pontos a #' interpolar. #' @param full.output booleano -- se \code{FALSE} (padrao) retorna apenas o vetor de rendimentos #' interpolados nas coordenadas \code{pontos}; se \code{TRUE} um data.table de \code{pontos} com #' a coluna \code{rend} adicionada #' @param ... existe somente para consistencia de metodos. 
Nao possui utilidade #' #' @examples #' #' # usando o interpolador de triangulacao #' tri <- interpolador(colinadummy, "triangulacao") #' #' # extrai uma grade dele #' coord <- coordgrade(colinadummy, 10, 10) #' gradecolina <- predict(tri, coord, as.gradecolina = TRUE) #' #' # interpolando pontos arbitrarios #' coord_interp <- coordgrade(colinadummy, 25, 25) #' pred <- predict(gradecolina, coord_interp) #' pred <- predict(gradecolina, coord_interp, full.output = TRUE) #' #' # interpolando a propria curva colina #' fitt <- fitted(gradecolina) #' #' # residuos #' resid <- residuals(gradecolina) #' #' @return set \code{full.output = FALSE} vetor de rendimentos interpolados, do contrario um #' \code{data.table} contendo \code{pontos} adicionado da coluna \code{rend} com resultado da #' interpolacao #' #' @name interpolacao_bilinear #' #' @family gradecolina #' #' @import data.table #' @importFrom geometry inhulln convhulln #' #' @export predict.gradecolina <- function(object, pontos, full.output = FALSE, ...) { hl <- pot <- ordem0 <- NULL gradecolina <- as.data.table(object[[1]]) pontos <- as.data.table(pontos) hlGrade <- gradecolina[, unique(hl)] potGrade <- gradecolina[, unique(pot)] rendGrade <- data.matrix(dcast(gradecolina, hl ~ pot, value.var = "rend")[, -1]) pontos[, ordem0 := seq_len(.N)] setorder(pontos, pot) hlPred <- pontos[, hl] potPred <- pontos[, pot] interp <- INTERPBILIN(hlGrade, potGrade, rendGrade, hlPred, potPred) if(full.output) { out <- cbind(pontos[, list(hl, pot)], rend = as.numeric(interp)) # o inhulln reclama se receber uma matriz de inteiros (inacreditavelmente), entao precisa # somar um 0.0 para converter a matriz em floats pts <- data.matrix(pontos[, list(hl, pot)]) + .0 inhull <- inhulln(convhulln(object$colina$CC[, list(hl, pot)]), pts) out[, inhull := inhull] } else { # a funcao em cpp retorna um vetor coluna (pro R, uma matriz N x 1) out <- as.numeric(interp) } out <- out[order(pontos$ordem0)] return(out) } #' @rdname interpolacao_bilinear #' #' @export fitted.gradecolina <- function(object, full.output = FALSE, ...) { hl <- pot <- NULL fitt <- predict(object, object$colina$CC[, list(hl, pot)], full.output) return(fitt) } #' @rdname interpolacao_bilinear #' #' @export residuals.gradecolina <- function(object, ...) { obs <- object$colina$CC$rend prev <- fitted(object) res <- obs - prev return(res) } #' Escrita De \code{gradecolina} #' #' Metodo para facilitacao de escrita de \code{gradecolina} lida pelas funcoes do pacote #' #' @param x objeto \code{gradecolina} a ser escrito #' @param file caminho para escrita com extensao de aquivo #' #' @return Escreve grade em \code{x} no caminho especificado #' #' @family gradecolina #' #' @import data.table #' #' @export write.gradecolina <- function(x, file) fwrite(x$grade, file, quote = FALSE, sep = ";")
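The compiled helper `INTERPBILIN` called by `predict.gradecolina` is not shown in the listing above. As a rough added illustration of the bilinear interpolation it presumably performs on the regular (hl, pot) grid (an assumption about its behaviour, not taken from the package), the following Python sketch reproduces the idea with SciPy's regular-grid interpolator; the grid values used are made up.

```python
# Added sketch (assumed behaviour of INTERPBILIN): bilinear interpolation of
# efficiency values on a regular (hl, pot) grid.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

hl_grid = np.linspace(40.0, 60.0, 10)      # queda liquida (illustrative values)
pot_grid = np.linspace(100.0, 300.0, 10)   # potencia (illustrative values)
# rend_grid[i, j] = efficiency at (hl_grid[i], pot_grid[j]); a made-up smooth surface
rend_grid = 0.9 - 1e-4 * (hl_grid[:, None] - 50) ** 2 - 1e-6 * (pot_grid[None, :] - 200) ** 2

interp = RegularGridInterpolator((hl_grid, pot_grid), rend_grid, method="linear")
points = np.array([[47.3, 180.0], [55.1, 240.0]])  # (hl, pot) pairs to interpolate
print(interp(points))                              # interpolated efficiencies
```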
Formal statement is: lemma diff_left: "prod (a - a') b = prod a b - prod a' b" Informal statement is: $(a - a')b = ab - a'b$.
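For completeness, the informal statement follows in two steps, assuming (as for a bilinear product) that the product is additive in its first argument and respects negation there:

$$\operatorname{prod}(a - a',\, b) = \operatorname{prod}(a, b) + \operatorname{prod}(-a', b) = \operatorname{prod}(a, b) - \operatorname{prod}(a', b).$$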
[STATEMENT] lemma adopt_node_document_in_heap: assumes "heap_is_wellformed h" and "known_ptrs h" and "type_wf h" assumes "h \<turnstile> ok (adopt_node owner_document node)" shows "owner_document |\<in>| document_ptr_kinds h" [PROOF STATE] proof (prove) goal (1 subgoal): 1. owner_document |\<in>| document_ptr_kinds h [PROOF STEP] proof - [PROOF STATE] proof (state) goal (1 subgoal): 1. owner_document |\<in>| document_ptr_kinds h [PROOF STEP] obtain old_document parent_opt h2 h' where old_document: "h \<turnstile> get_owner_document (cast node) \<rightarrow>\<^sub>r old_document" and parent_opt: "h \<turnstile> get_parent node \<rightarrow>\<^sub>r parent_opt" and h2: "h \<turnstile> (case parent_opt of Some parent \<Rightarrow> do { remove_child parent node } | None \<Rightarrow> do { return ()}) \<rightarrow>\<^sub>h h2" and h': "h2 \<turnstile> (if owner_document \<noteq> old_document then do { old_disc_nodes \<leftarrow> get_disconnected_nodes old_document; set_disconnected_nodes old_document (remove1 node old_disc_nodes); disc_nodes \<leftarrow> get_disconnected_nodes owner_document; set_disconnected_nodes owner_document (node # disc_nodes) } else do { return () }) \<rightarrow>\<^sub>h h'" [PROOF STATE] proof (prove) goal (1 subgoal): 1. (\<And>old_document parent_opt h2 h'. \<lbrakk>h \<turnstile> get_owner_document (cast\<^sub>n\<^sub>o\<^sub>d\<^sub>e\<^sub>_\<^sub>p\<^sub>t\<^sub>r\<^sub>2\<^sub>o\<^sub>b\<^sub>j\<^sub>e\<^sub>c\<^sub>t\<^sub>_\<^sub>p\<^sub>t\<^sub>r node) \<rightarrow>\<^sub>r old_document; h \<turnstile> get_parent node \<rightarrow>\<^sub>r parent_opt; h \<turnstile> (case parent_opt of None \<Rightarrow> return () | Some parent \<Rightarrow> remove_child parent node) \<rightarrow>\<^sub>h h2; h2 \<turnstile> (if owner_document \<noteq> old_document then Heap_Error_Monad.bind (get_disconnected_nodes old_document) (\<lambda>old_disc_nodes. Heap_Error_Monad.bind (set_disconnected_nodes old_document (remove1 node old_disc_nodes)) (\<lambda>_. Heap_Error_Monad.bind (get_disconnected_nodes owner_document) (\<lambda>disc_nodes. set_disconnected_nodes owner_document (node # disc_nodes)))) else return ()) \<rightarrow>\<^sub>h h'\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] using assms(4) [PROOF STATE] proof (prove) using this: h \<turnstile> ok adopt_node owner_document node goal (1 subgoal): 1. (\<And>old_document parent_opt h2 h'. \<lbrakk>h \<turnstile> get_owner_document (cast\<^sub>n\<^sub>o\<^sub>d\<^sub>e\<^sub>_\<^sub>p\<^sub>t\<^sub>r\<^sub>2\<^sub>o\<^sub>b\<^sub>j\<^sub>e\<^sub>c\<^sub>t\<^sub>_\<^sub>p\<^sub>t\<^sub>r node) \<rightarrow>\<^sub>r old_document; h \<turnstile> get_parent node \<rightarrow>\<^sub>r parent_opt; h \<turnstile> (case parent_opt of None \<Rightarrow> return () | Some parent \<Rightarrow> remove_child parent node) \<rightarrow>\<^sub>h h2; h2 \<turnstile> (if owner_document \<noteq> old_document then Heap_Error_Monad.bind (get_disconnected_nodes old_document) (\<lambda>old_disc_nodes. Heap_Error_Monad.bind (set_disconnected_nodes old_document (remove1 node old_disc_nodes)) (\<lambda>_. Heap_Error_Monad.bind (get_disconnected_nodes owner_document) (\<lambda>disc_nodes. 
set_disconnected_nodes owner_document (node # disc_nodes)))) else return ()) \<rightarrow>\<^sub>h h'\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] by(auto simp add: adopt_node_def elim!: bind_returns_heap_E dest!: pure_returns_heap_eq[rotated, OF get_owner_document_pure] pure_returns_heap_eq[rotated, OF get_parent_pure]) [PROOF STATE] proof (state) this: h \<turnstile> get_owner_document (cast\<^sub>n\<^sub>o\<^sub>d\<^sub>e\<^sub>_\<^sub>p\<^sub>t\<^sub>r\<^sub>2\<^sub>o\<^sub>b\<^sub>j\<^sub>e\<^sub>c\<^sub>t\<^sub>_\<^sub>p\<^sub>t\<^sub>r node) \<rightarrow>\<^sub>r old_document h \<turnstile> get_parent node \<rightarrow>\<^sub>r parent_opt h \<turnstile> (case parent_opt of None \<Rightarrow> return () | Some parent \<Rightarrow> remove_child parent node) \<rightarrow>\<^sub>h h2 h2 \<turnstile> (if owner_document \<noteq> old_document then Heap_Error_Monad.bind (get_disconnected_nodes old_document) (\<lambda>old_disc_nodes. Heap_Error_Monad.bind (set_disconnected_nodes old_document (remove1 node old_disc_nodes)) (\<lambda>_. Heap_Error_Monad.bind (get_disconnected_nodes owner_document) (\<lambda>disc_nodes. set_disconnected_nodes owner_document (node # disc_nodes)))) else return ()) \<rightarrow>\<^sub>h h' goal (1 subgoal): 1. owner_document |\<in>| document_ptr_kinds h [PROOF STEP] show ?thesis [PROOF STATE] proof (prove) goal (1 subgoal): 1. owner_document |\<in>| document_ptr_kinds h [PROOF STEP] proof (cases "owner_document = old_document") [PROOF STATE] proof (state) goal (2 subgoals): 1. owner_document = old_document \<Longrightarrow> owner_document |\<in>| document_ptr_kinds h 2. owner_document \<noteq> old_document \<Longrightarrow> owner_document |\<in>| document_ptr_kinds h [PROOF STEP] case True [PROOF STATE] proof (state) this: owner_document = old_document goal (2 subgoals): 1. owner_document = old_document \<Longrightarrow> owner_document |\<in>| document_ptr_kinds h 2. owner_document \<noteq> old_document \<Longrightarrow> owner_document |\<in>| document_ptr_kinds h [PROOF STEP] then [PROOF STATE] proof (chain) picking this: owner_document = old_document [PROOF STEP] show ?thesis [PROOF STATE] proof (prove) using this: owner_document = old_document goal (1 subgoal): 1. owner_document |\<in>| document_ptr_kinds h [PROOF STEP] using old_document get_owner_document_owner_document_in_heap assms(1) assms(2) assms(3) [PROOF STATE] proof (prove) using this: owner_document = old_document h \<turnstile> get_owner_document (cast\<^sub>n\<^sub>o\<^sub>d\<^sub>e\<^sub>_\<^sub>p\<^sub>t\<^sub>r\<^sub>2\<^sub>o\<^sub>b\<^sub>j\<^sub>e\<^sub>c\<^sub>t\<^sub>_\<^sub>p\<^sub>t\<^sub>r node) \<rightarrow>\<^sub>r old_document \<lbrakk>heap_is_wellformed ?h; type_wf ?h; known_ptrs ?h; ?h \<turnstile> get_owner_document ?ptr \<rightarrow>\<^sub>r ?owner_document\<rbrakk> \<Longrightarrow> ?owner_document |\<in>| document_ptr_kinds ?h heap_is_wellformed h known_ptrs h type_wf h goal (1 subgoal): 1. owner_document |\<in>| document_ptr_kinds h [PROOF STEP] by auto [PROOF STATE] proof (state) this: owner_document |\<in>| document_ptr_kinds h goal (1 subgoal): 1. owner_document \<noteq> old_document \<Longrightarrow> owner_document |\<in>| document_ptr_kinds h [PROOF STEP] next [PROOF STATE] proof (state) goal (1 subgoal): 1. owner_document \<noteq> old_document \<Longrightarrow> owner_document |\<in>| document_ptr_kinds h [PROOF STEP] case False [PROOF STATE] proof (state) this: owner_document \<noteq> old_document goal (1 subgoal): 1. 
owner_document \<noteq> old_document \<Longrightarrow> owner_document |\<in>| document_ptr_kinds h [PROOF STEP] then [PROOF STATE] proof (chain) picking this: owner_document \<noteq> old_document [PROOF STEP] obtain h3 old_disc_nodes disc_nodes where old_disc_nodes: "h2 \<turnstile> get_disconnected_nodes old_document \<rightarrow>\<^sub>r old_disc_nodes" and h3: "h2 \<turnstile> set_disconnected_nodes old_document (remove1 node old_disc_nodes) \<rightarrow>\<^sub>h h3" and old_disc_nodes: "h3 \<turnstile> get_disconnected_nodes owner_document \<rightarrow>\<^sub>r disc_nodes" and h': "h3 \<turnstile> set_disconnected_nodes owner_document (node # disc_nodes) \<rightarrow>\<^sub>h h'" [PROOF STATE] proof (prove) using this: owner_document \<noteq> old_document goal (1 subgoal): 1. (\<And>old_disc_nodes h3 disc_nodes. \<lbrakk>h2 \<turnstile> get_disconnected_nodes old_document \<rightarrow>\<^sub>r old_disc_nodes; h2 \<turnstile> set_disconnected_nodes old_document (remove1 node old_disc_nodes) \<rightarrow>\<^sub>h h3; h3 \<turnstile> get_disconnected_nodes owner_document \<rightarrow>\<^sub>r disc_nodes; h3 \<turnstile> set_disconnected_nodes owner_document (node # disc_nodes) \<rightarrow>\<^sub>h h'\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] using h' [PROOF STATE] proof (prove) using this: owner_document \<noteq> old_document h2 \<turnstile> (if owner_document \<noteq> old_document then Heap_Error_Monad.bind (get_disconnected_nodes old_document) (\<lambda>old_disc_nodes. Heap_Error_Monad.bind (set_disconnected_nodes old_document (remove1 node old_disc_nodes)) (\<lambda>_. Heap_Error_Monad.bind (get_disconnected_nodes owner_document) (\<lambda>disc_nodes. set_disconnected_nodes owner_document (node # disc_nodes)))) else return ()) \<rightarrow>\<^sub>h h' goal (1 subgoal): 1. (\<And>old_disc_nodes h3 disc_nodes. \<lbrakk>h2 \<turnstile> get_disconnected_nodes old_document \<rightarrow>\<^sub>r old_disc_nodes; h2 \<turnstile> set_disconnected_nodes old_document (remove1 node old_disc_nodes) \<rightarrow>\<^sub>h h3; h3 \<turnstile> get_disconnected_nodes owner_document \<rightarrow>\<^sub>r disc_nodes; h3 \<turnstile> set_disconnected_nodes owner_document (node # disc_nodes) \<rightarrow>\<^sub>h h'\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] by(auto elim!: bind_returns_heap_E bind_returns_heap_E2[rotated, OF get_disconnected_nodes_pure, rotated] ) [PROOF STATE] proof (state) this: h2 \<turnstile> get_disconnected_nodes old_document \<rightarrow>\<^sub>r old_disc_nodes h2 \<turnstile> set_disconnected_nodes old_document (remove1 node old_disc_nodes) \<rightarrow>\<^sub>h h3 h3 \<turnstile> get_disconnected_nodes owner_document \<rightarrow>\<^sub>r disc_nodes h3 \<turnstile> set_disconnected_nodes owner_document (node # disc_nodes) \<rightarrow>\<^sub>h h' goal (1 subgoal): 1. 
owner_document \<noteq> old_document \<Longrightarrow> owner_document |\<in>| document_ptr_kinds h [PROOF STEP] then [PROOF STATE] proof (chain) picking this: h2 \<turnstile> get_disconnected_nodes old_document \<rightarrow>\<^sub>r old_disc_nodes h2 \<turnstile> set_disconnected_nodes old_document (remove1 node old_disc_nodes) \<rightarrow>\<^sub>h h3 h3 \<turnstile> get_disconnected_nodes owner_document \<rightarrow>\<^sub>r disc_nodes h3 \<turnstile> set_disconnected_nodes owner_document (node # disc_nodes) \<rightarrow>\<^sub>h h' [PROOF STEP] have "owner_document |\<in>| document_ptr_kinds h3" [PROOF STATE] proof (prove) using this: h2 \<turnstile> get_disconnected_nodes old_document \<rightarrow>\<^sub>r old_disc_nodes h2 \<turnstile> set_disconnected_nodes old_document (remove1 node old_disc_nodes) \<rightarrow>\<^sub>h h3 h3 \<turnstile> get_disconnected_nodes owner_document \<rightarrow>\<^sub>r disc_nodes h3 \<turnstile> set_disconnected_nodes owner_document (node # disc_nodes) \<rightarrow>\<^sub>h h' goal (1 subgoal): 1. owner_document |\<in>| document_ptr_kinds h3 [PROOF STEP] by (meson is_OK_returns_result_I local.get_disconnected_nodes_ptr_in_heap) [PROOF STATE] proof (state) this: owner_document |\<in>| document_ptr_kinds h3 goal (1 subgoal): 1. owner_document \<noteq> old_document \<Longrightarrow> owner_document |\<in>| document_ptr_kinds h [PROOF STEP] moreover [PROOF STATE] proof (state) this: owner_document |\<in>| document_ptr_kinds h3 goal (1 subgoal): 1. owner_document \<noteq> old_document \<Longrightarrow> owner_document |\<in>| document_ptr_kinds h [PROOF STEP] have "object_ptr_kinds h = object_ptr_kinds h2" [PROOF STATE] proof (prove) goal (1 subgoal): 1. object_ptr_kinds h = object_ptr_kinds h2 [PROOF STEP] using h2 [PROOF STATE] proof (prove) using this: h \<turnstile> (case parent_opt of None \<Rightarrow> return () | Some parent \<Rightarrow> remove_child parent node) \<rightarrow>\<^sub>h h2 goal (1 subgoal): 1. object_ptr_kinds h = object_ptr_kinds h2 [PROOF STEP] apply(simp split: option.splits) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>x2. \<lbrakk>parent_opt = Some x2; h \<turnstile> remove_child x2 node \<rightarrow>\<^sub>h h2\<rbrakk> \<Longrightarrow> object_ptr_kinds h = object_ptr_kinds h2 [PROOF STEP] apply(rule writes_small_big[where P="\<lambda>h h'. object_ptr_kinds h = object_ptr_kinds h'", OF remove_child_writes]) [PROOF STATE] proof (prove) goal (4 subgoals): 1. \<And>x2. \<lbrakk>parent_opt = Some x2; h \<turnstile> remove_child x2 node \<rightarrow>\<^sub>h h2\<rbrakk> \<Longrightarrow> h \<turnstile> remove_child (?ptr15 x2) (?child15 x2) \<rightarrow>\<^sub>h h2 2. \<And>x2 ha h' w. \<lbrakk>parent_opt = Some x2; h \<turnstile> remove_child x2 node \<rightarrow>\<^sub>h h2; w \<in> remove_child_locs (?ptr15 x2) |h \<turnstile> get_owner_document (cast\<^sub>n\<^sub>o\<^sub>d\<^sub>e\<^sub>_\<^sub>p\<^sub>t\<^sub>r\<^sub>2\<^sub>o\<^sub>b\<^sub>j\<^sub>e\<^sub>c\<^sub>t\<^sub>_\<^sub>p\<^sub>t\<^sub>r (?child15 x2))|\<^sub>r; ha \<turnstile> w \<rightarrow>\<^sub>h h'\<rbrakk> \<Longrightarrow> object_ptr_kinds ha = object_ptr_kinds h' 3. \<And>x2. \<lbrakk>parent_opt = Some x2; h \<turnstile> remove_child x2 node \<rightarrow>\<^sub>h h2\<rbrakk> \<Longrightarrow> reflp (\<lambda>h h'. object_ptr_kinds h = object_ptr_kinds h') 4. \<And>x2. \<lbrakk>parent_opt = Some x2; h \<turnstile> remove_child x2 node \<rightarrow>\<^sub>h h2\<rbrakk> \<Longrightarrow> transp (\<lambda>h h'. 
object_ptr_kinds h = object_ptr_kinds h') [PROOF STEP] using remove_child_pointers_preserved [PROOF STATE] proof (prove) using this: \<lbrakk>?w \<in> remove_child_locs ?ptr ?owner_document; ?h \<turnstile> ?w \<rightarrow>\<^sub>h ?h'\<rbrakk> \<Longrightarrow> object_ptr_kinds ?h = object_ptr_kinds ?h' goal (4 subgoals): 1. \<And>x2. \<lbrakk>parent_opt = Some x2; h \<turnstile> remove_child x2 node \<rightarrow>\<^sub>h h2\<rbrakk> \<Longrightarrow> h \<turnstile> remove_child (?ptr15 x2) (?child15 x2) \<rightarrow>\<^sub>h h2 2. \<And>x2 ha h' w. \<lbrakk>parent_opt = Some x2; h \<turnstile> remove_child x2 node \<rightarrow>\<^sub>h h2; w \<in> remove_child_locs (?ptr15 x2) |h \<turnstile> get_owner_document (cast\<^sub>n\<^sub>o\<^sub>d\<^sub>e\<^sub>_\<^sub>p\<^sub>t\<^sub>r\<^sub>2\<^sub>o\<^sub>b\<^sub>j\<^sub>e\<^sub>c\<^sub>t\<^sub>_\<^sub>p\<^sub>t\<^sub>r (?child15 x2))|\<^sub>r; ha \<turnstile> w \<rightarrow>\<^sub>h h'\<rbrakk> \<Longrightarrow> object_ptr_kinds ha = object_ptr_kinds h' 3. \<And>x2. \<lbrakk>parent_opt = Some x2; h \<turnstile> remove_child x2 node \<rightarrow>\<^sub>h h2\<rbrakk> \<Longrightarrow> reflp (\<lambda>h h'. object_ptr_kinds h = object_ptr_kinds h') 4. \<And>x2. \<lbrakk>parent_opt = Some x2; h \<turnstile> remove_child x2 node \<rightarrow>\<^sub>h h2\<rbrakk> \<Longrightarrow> transp (\<lambda>h h'. object_ptr_kinds h = object_ptr_kinds h') [PROOF STEP] by (auto simp add: reflp_def transp_def) [PROOF STATE] proof (state) this: object_ptr_kinds h = object_ptr_kinds h2 goal (1 subgoal): 1. owner_document \<noteq> old_document \<Longrightarrow> owner_document |\<in>| document_ptr_kinds h [PROOF STEP] moreover [PROOF STATE] proof (state) this: object_ptr_kinds h = object_ptr_kinds h2 goal (1 subgoal): 1. owner_document \<noteq> old_document \<Longrightarrow> owner_document |\<in>| document_ptr_kinds h [PROOF STEP] have "object_ptr_kinds h2 = object_ptr_kinds h3" [PROOF STATE] proof (prove) goal (1 subgoal): 1. object_ptr_kinds h2 = object_ptr_kinds h3 [PROOF STEP] apply(rule writes_small_big[where P="\<lambda>h h'. object_ptr_kinds h = object_ptr_kinds h'", OF set_disconnected_nodes_writes h3]) [PROOF STATE] proof (prove) goal (3 subgoals): 1. \<And>h h' w. \<lbrakk>w \<in> set_disconnected_nodes_locs old_document; h \<turnstile> w \<rightarrow>\<^sub>h h'\<rbrakk> \<Longrightarrow> object_ptr_kinds h = object_ptr_kinds h' 2. reflp (\<lambda>h h'. object_ptr_kinds h = object_ptr_kinds h') 3. transp (\<lambda>h h'. object_ptr_kinds h = object_ptr_kinds h') [PROOF STEP] using set_disconnected_nodes_pointers_preserved set_child_nodes_pointers_preserved [PROOF STATE] proof (prove) using this: \<lbrakk>?w \<in> set_disconnected_nodes_locs ?document_ptr; ?h \<turnstile> ?w \<rightarrow>\<^sub>h ?h'\<rbrakk> \<Longrightarrow> object_ptr_kinds ?h = object_ptr_kinds ?h' \<lbrakk>?w \<in> set_child_nodes_locs ?object_ptr; ?h \<turnstile> ?w \<rightarrow>\<^sub>h ?h'\<rbrakk> \<Longrightarrow> object_ptr_kinds ?h = object_ptr_kinds ?h' goal (3 subgoals): 1. \<And>h h' w. \<lbrakk>w \<in> set_disconnected_nodes_locs old_document; h \<turnstile> w \<rightarrow>\<^sub>h h'\<rbrakk> \<Longrightarrow> object_ptr_kinds h = object_ptr_kinds h' 2. reflp (\<lambda>h h'. object_ptr_kinds h = object_ptr_kinds h') 3. transp (\<lambda>h h'. object_ptr_kinds h = object_ptr_kinds h') [PROOF STEP] by (auto simp add: reflp_def transp_def) [PROOF STATE] proof (state) this: object_ptr_kinds h2 = object_ptr_kinds h3 goal (1 subgoal): 1. 
owner_document \<noteq> old_document \<Longrightarrow> owner_document |\<in>| document_ptr_kinds h [PROOF STEP] ultimately [PROOF STATE] proof (chain) picking this: owner_document |\<in>| document_ptr_kinds h3 object_ptr_kinds h = object_ptr_kinds h2 object_ptr_kinds h2 = object_ptr_kinds h3 [PROOF STEP] show ?thesis [PROOF STATE] proof (prove) using this: owner_document |\<in>| document_ptr_kinds h3 object_ptr_kinds h = object_ptr_kinds h2 object_ptr_kinds h2 = object_ptr_kinds h3 goal (1 subgoal): 1. owner_document |\<in>| document_ptr_kinds h [PROOF STEP] by(auto simp add: document_ptr_kinds_def) [PROOF STATE] proof (state) this: owner_document |\<in>| document_ptr_kinds h goal: No subgoals! [PROOF STEP] qed [PROOF STATE] proof (state) this: owner_document |\<in>| document_ptr_kinds h goal: No subgoals! [PROOF STEP] qed
# "Monte Carlo Methods 2" > "In this blog post we continue to look at Monte Carlo methods and how they can be used. We move from sampling from univariate distributions to multi-variate distributions." - toc: true - author: Lewis Cole (2020) - branch: master - badges: false - comments: false - categories: [Monte-Carlo, Statistics, Probability, Computational-Statistics, Theory, Computation, Copula] - hide: false - search_exclude: false - image: https://github.com/lewiscoleblog/blog/raw/master/images/Monte-Carlo/copula.png ___ This is the second blog post in a series - you can find the previous blog post [here](https://lewiscoleblog.com/monte-carlo-methods) ___ In this blog post we shall continue our exploration of Monte Carlo methods. To briefly recap in the previous blog post we looked at the general principle underlying Monte-Carlo methods, we looked at methods used by pseudo-random number generators to feed our models and we looked at a variety of methods to convert these uniform variates to general univariate distributions. ## Multi-Variate Relations We start by looking at the problem of sampling from general multi-variate distributions. In many instances simply sampling univariate distributions will be insufficient. For example if we are creating a model that looks of mortgage repayment defaults we do not want to model the default rate by one univariate distribution and the interest rate by another univariate distribution using the techniques described so far. In an ideal world we would know the exact relationship we are trying to model, in physics and the hard sciences the exact mechanism underlying the joint behaviour may be modelled exactly. However in many cases in finance (and the social sciences) this is very hard to elicit (if not impossible) so we want to create "correlated" samples that capture (at least qualitatively) some of the joint behaviour. ### Measures of Correlation We start by looking at some measures of dependance between variables. This helps us evaluate whether a model is working as we would expect or not. A lot of times joint-dependence is difficult to elicit from the data so often with this sort of model there is a lot of expert judgements, sensitivity testing, back testing and so on in order to calibrate a model. This is out of the scope of what I'm looking to cover in this blog post series but is worth keeping in mind for this section on multi-variate methods. By far the most prevelant measure of dependance is the Pearson correlation coefficient. This is the first (and in many cases only) measure of depedence we find in textbooks. We can express it algebraically as: $$ \rho_{(X,Y)} = \frac{\mathbb{E}[(X - \mathbb{E}(X))(Y - \mathbb{E}(Y))]}{\sqrt{\mathbb{E}[(X - \mathbb{E}(X))^2]\mathbb{E}[(Y - \mathbb{E}(Y))^2]}} = \frac{Cov(X,Y)}{SD(X)SD(Y)}$$ That is the ratio of the covariance between two variables divide by the product of their standard deviations. This measure has a number of useful properties - it gives a metric between $[-1,1]$ (via the Cauchy-Schwarz inequality) which varies from "completely anti-dependant" to "completely-dependant". It is also mathematically very tractable, it is very easy to calculate and it often crops up when performing an analytic investigation. However it's use in "the real world" is fairly limited, it is perhaps the most abused of all statistics. One issue with this measure in practice is that it requires defined first and second moments for the definition to work. 
We can calculate this statistic on samples from any distribution, however if we are in the realms of fat-tailed distributions (such as a power-law) this sample estimate will be meaningless. In finance and real-world risk applications this is a big concern: it means that many of the early risk management models still being used that do not allow for the possibility of fat tails are at best not useful and at worst highly dangerous for breeding false confidence. Another issue with the measure is that it is highly unintuitive. Given the way it is defined, the difference between $\rho=0.05$ and $\rho = 0.1$ is negligible, yet the difference between $\rho = 0.94$ and $\rho = 0.99$ is hugely significant. In this author's experience even very "technical" practitioners fail to remember this and treat a "5% point increase" as having a consistent impact. This issue is compounded further when dealing with subject matter experts who do not necessarily have mathematical/probability training but have an "intuitive" grasp of "correlation"! The biggest and most limiting factor for the Pearson coefficient however is that it only considers linear relationships between variables - any relation that is not linear will effectively be approximated linearly. In probability distribution terms this is equivalent to a joint-normal distribution. That is to say: **Pearson correlation only exhibits good properties in relation to joint-normal behaviour!** Another option to use is the Spearman correlation coefficient. This is strongly related to the Pearson correlation coefficient but with one important difference: instead of using the values themselves we work with the percentiles. For example, suppose we have a standard normal distribution and we sample the point $x = 1.64...$; we know that this is the 95th percentile of the standard normal CDF so we would use the value $y=0.95$ in the definition of Pearson correlation above. This has the benefit that it doesn't matter what distributions we use, we will still end up with a reasonable metric of dependence. In the case of joint-normality the Pearson and Spearman coefficients essentially agree (one is a simple monotone function of the other) - this shows that Spearman is in some sense a generalization of the Pearson. However, before we get too excited, the issues around it being an unintuitive metric still stand. It also has the added complication that if we observe data "in the wild" we don't know what the percentile of our observation is! For example suppose we're looking at relationships between height and weight in a population: if we find a person of height $1.75m$ what percentile does this correspond to? We will likely have to estimate this given our observations, which adds a layer of approximation. In the case of heavy-tail distributions this is particularly problematic since everything may appear to be "normal" for many, many observations but suddenly one extreme value will totally change our perception of the underlying distribution. Another issue is that while the Spearman doesn't rely only on linear relations, it does require at least monotone relations. Anything more complicated than that and the relationship will not be captured.
Let's look at a few simple examples to highlight the differences between these metrics: ```python #hide import warnings warnings.filterwarnings('ignore') ``` ```python # Examples of Pearson and Spearman coefficients import numpy as np import matplotlib.pyplot as plt from scipy.stats import norm, rankdata, pearsonr import seaborn as sns %matplotlib inline U = np.random.random(1000) X = norm.ppf(U) Y_lin = 4*X + 5 Y_exp = np.exp(X) Y_con = X**2 # Create function to return estimated percentile def pctile(x): return rankdata(x) / x.shape[0] # Calculate Pearson coefficients pea_lin = pearsonr(X, Y_lin)[0] pea_exp = pearsonr(X, Y_exp)[0] pea_con = pearsonr(X, Y_con)[0] # Calculate Spearman coefficients X_pct = pctile(X) Y_lin_pct = pctile(Y_lin) Y_exp_pct = pctile(Y_exp) Y_con_pct = pctile(Y_con) spe_lin = pearsonr(X_pct, Y_lin_pct)[0] spe_exp = pearsonr(X_pct, Y_exp_pct)[0] spe_con = pearsonr(X_pct, Y_con_pct)[0] # Create Plots fig, ax = plt.subplots(1, 3, figsize=(15, 5)) sns.scatterplot(X, Y_lin, ax=ax[0]) ax[0].set_ylabel("Y=4X+5") ax[0].set_xlabel("X \n Pearson: %f \n Spearman: %f" %(pea_lin, spe_lin)) sns.scatterplot(X, Y_exp, ax=ax[1]) ax[1].set_ylabel("Y=exp(X)") ax[1].set_xlabel("X \n Pearson: %f \n Spearman: %f" %(pea_exp, spe_exp)) sns.scatterplot(X, Y_con, ax=ax[2]) ax[2].set_ylabel("Y=X^2") ax[2].set_xlabel("X \n Pearson: %f \n Spearman: %f" %(pea_con, spe_con)) fig.suptitle("Comparison of Pearson and Spearman Correlation", x=0.5, y=0.95, size='xx-large') plt.show() ``` We can see here that for a linear translation both metrics produce the same result. In the case of a non-linear monotone (e.g. exponential) relation the Spearman correctly identifies full dependancy. However neither metric is capable of capturing the dependency in the last non-monotone example - in fact both metrics suggest independence! From a basic visual inspection we would not describe these two variables as being independent. This shows we need to be careful using correlation metrics such as these. This is even before the introduction of noise! Another somewhat popular choice of dependancy metric is the Kendall-Tau metric. This is a little different to the preceeding options. Essentially to calculate the Kendall-Tau we look at the joint pairs of samples, if both are concordant we add $1$ to a counter otherwise add $-1$ and move onto the next pair. We then take the average value of this (i.e. divide by the total number of pairs of joint samples). We can equally denote this by the formula: $$ \tau ={\frac {2}{n(n-1)}}\sum _{i<j}\operatorname{sgn}(x_{i}-x_{j})\operatorname{sgn}(y_{i}-y_{j})$$ Where $\operatorname{sgn}(.)$ is the sign operator. $x_i$ and $y_i$ represent the rank or percentile of each sample within the set of all $x$ or $y$ samples. The benefit of the Kendall-tau is it is slightly more intuitive than the Spearman however it still suffers from some of the same issues (e.g. it will fail the $X^2$ example above). We can show that Pearson, Spearman and Kendall are both particular cases of the generalized correlation coefficient: $$\Gamma ={\frac {\sum _{{i,j=1}}^{n}a_{{ij}}b_{{ij}}}{{\sqrt {\sum _{{i,j=1}}^{n}a_{{ij}}^{2}\sum _{{i,j=1}}^{n}b_{{ij}}^{2}}}}}$$ Where $a_{ij}$ is the x-score and $b_{ij}$ the y-score for pairs of samples $(x_i, y_i)$ and $(x_j, y_j)$. For example for the Kendall Tau we set: $a_{ij} = \operatorname{sgn}(x_i - x_j)$ and $b_{ij} = \operatorname{sgn}(y_i - y_j)$. For Spearman we set: $a_{ij} = (x_i - x_j)$ and $b_{ij} =(y_i - y_j)$. 
Where again we are working with ranks (or percentiles) rather than the sampled values themselves. Can we improve on these metrics from a practical standpoint? The answer is: yes! Unfortunately it is difficult to do and there is as much art as there is science to implementing it. We instead consider the mutual information: $$I(X, Y) = \int \int p_{X,Y}(x,y) \operatorname{log}\left( \frac{p_{X,Y}(x,y)}{p_X(x)p_Y(y)} \right) dx dy $$ Or similar for discrete distributions. Here $p_{X,Y}(.)$ represents the joint pdf of $(X,Y)$ and $p_X$, $p_Y$ represent the marginal distributions of $X$ and $Y$ respectively. This has the interpretation that it represents the information gained in knowing about the joint distribution compared to assuming independence of the marginal distributions. This essentially matches our intuition of what dependence "is". However it is a bit of a pain to work with since we usually have to make distributional assumptions. We can also use generalizations of mutual information. For example total correlation: $$C(X_{1},X_{2},\ldots ,X_{n})=\left[\sum _{{i=1}}^{n}H(X_{i})\right]-H(X_{1},X_{2},\ldots ,X_{n}) $$ Which is the sum of information for each marginal distribution less the joint information. This compares to mutual information that can be expressed: $$ \operatorname{I} (X;Y)=\mathrm {H} (Y)-\mathrm {H} (Y|X) $$ Another possible generalization to use is that of dual total correlation which we can express as: $$D(X_{1},\ldots ,X_{n})=H\left(X_{1},\ldots ,X_{n}\right)-\sum _{i=1}^{n}H\left(X_{i}\mid X_{1},\ldots ,X_{i-1},X_{i+1},\ldots ,X_{n}\right) $$ In the above we use the functional $H(.)$ to represent information (entropy): $$H=-\sum _{i}p_{i}\log _{2}(p_{i}) $$ Or similar for continuous variables. Despite the complications in working with these definitions, these functions do pass the "complicated" examples such as $Y=X^2$ above and are a much better match for our intuition around dependence. I have long pushed for these metrics to be used more widely. ### Other Measurements of Dependence We now appreciate that there are many different ways in which random variables can relate to each other (linear, monotone, independent, and so on). So far the metrics and measures presented are "overall" measures of dependence: they give us one number for how the variables relate to each other. However in many cases we know that relations are not defined by just one number - typically we may find stronger relations in the tails (in extreme cases) than in "everyday" events. In the extreme we may have independence most of the time and yet near complete dependence when things "go bad". For an example of this suppose we are looking at property damage and one factor we consider is wind-speed. At everyday levels of wind there is unlikely to be much of a relationship between wind-speed and property damage - any damage that occurs is likely to be due to some other cause (vandalism, derelict/collapsing buildings, and so on). However as the wind-speed increases, at a certain point there will be some property damage caused by the wind. At more extreme levels (say at the hurricane level) there will be almost complete dependence between wind level and property damage level. It is not difficult to think of other examples. In fact you'll likely find that most relations display this character to some degree, often as a result of a structural shift in the system (e.g. in the wind example through bits of debris flying through the air!)
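As a quick aside, the claim above that information-theoretic measures cope with the awkward $Y = X^2$ case can be checked numerically. The snippet below uses a deliberately crude plug-in estimate of mutual information based on a 2d histogram - this is only a sketch (the bin count and sample size are illustrative, and much better estimators exist), but it is enough to show a clearly positive value for $Y=X^2$ and a value near zero for independent variables:

```python
# Crude plug-in estimate of mutual information via a 2d histogram
# (illustrative sketch only - better estimators exist)
import numpy as np
from scipy.stats import norm

def mutual_information(x, y, bins=30):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                   # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)     # marginal of x
    py = pxy.sum(axis=0, keepdims=True)     # marginal of y
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask]))

X = norm.rvs(0, 1, size=10000)
print(mutual_information(X, X**2))                        # clearly positive
print(mutual_information(X, norm.rvs(0, 1, size=10000)))  # close to zero
```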
Coming back to the tails: of course we would like to model these structural shifts exactly but this is often complicated if not impossible - through Monte-Carlo methods we are able to generate samples with behaviour that is at least qualitatively "correct". We call this quality "tail dependence". This can affect upside and downside tails or just one tail depending on the phenomenon. Naively we might think that an option would be to use one of the correlation metrics above on censored data (that is, filtering out all samples below/above a certain threshold). Unfortunately this does not work: the correlation metric is not additive, meaning you can't discretize the space into smaller units and combine them to give the overall correlation measure. This is another reason correlation metrics don't fit well with our intuition. The information-theoretic quantities can satisfy this condition but, as before, are a little more difficult to work with. We have some specialist metrics that are easier to use for this purpose. The simplest metric to look at to study tail dependence is the "joint exceedance probability". As the name suggests we look at the proportion of the time that the joint observations both lie beyond a certain level (or below a certain level). Suppose we have joint observations $(X_i, Y_i)_{i=1}^N$ and we have $\mathbb{P}(X > \tilde{x}) = p$ and $\mathbb{P}(Y > \tilde{y}) = p$, then the joint exceedance probability of $(X,Y)$ at percentile $p$ is: $$JEP_{(X,Y)}(p) = \frac{\sum_i\mathbb{1}_{(X_i > \tilde{x})}\mathbb{1}_{(Y_i > \tilde{y})}}{N} $$ Where $\mathbb{1}_{(.)}$ is the indicator variable/Heaviside function. By construction, if $X$ and $Y$ are fully rank-dependent then $JEP_{(X,Y)}(p) = p$, while if $X$ and $Y$ are fully rank-independent then $JEP_{(X,Y)}(p) = p^2$. It often helps to standardise the metric to vary between $[0,1]$ through the transform (we wouldn't use this metric in the case of negative dependence so we can limit ourselves to independence as the lower bound): $$JEP_{(X,Y)}(p) = \frac{\sum_i\mathbb{1}_{(X_i > \tilde{x})}\mathbb{1}_{(Y_i > \tilde{y})}}{pN} - p$$ As with other rank-methods this metric is only useful with monotone relations; it cannot cope with non-monotone relations such as the $Y=X^2$ example. We can also view tail dependence through the prism of probability theory. We can define the upper tail dependence ($\lambda _{u}$) between variables $X_1$ and $X_2$ with CDFs $F_1(.)$ and $F_2(.)$ respectively as: $$\lambda _{u}=\lim _{q\rightarrow 1}\mathbb{P} (X_{2}>F_{2}^{-1}(q)\mid X_{1}>F_{1}^{-1}(q))$$ The lower tail dependence ($\lambda _{l}$) can be defined similarly through: $$\lambda _{l}=\lim _{q\rightarrow 0}\mathbb{P} (X_{2}\leq F_{2}^{-1}(q)\mid X_{1} \leq F_{1}^{-1}(q))$$ The JEP metric above is in some sense a point estimate of these quantities: if we had infinite data and could make $p$ arbitrarily small in $JEP_{(X,Y)}(p)$ we would tend to $\lambda _{u}$. We will revisit the tail-dependence metric once we have built up some of the theoretical background. ## Multi-Variate Generation We now move onto the issue of generating multi-variate samples for use within our Monte-Carlo models. It is not always possible to encode exact mechanisms for how variables relate (e.g. if we randomly sample from a distribution to give a country's GDP how do we write down a function to turn this into samples of the country's imports/exports?
It is unlikely we'll be able to do this with any accuracy, and if we could create functions like this we would better spend our time speculating on the markets than developing Monte-Carlo models!) So we have to rely on broad-brush methods that are at least qualitatively correct. ### Multi-Variate Normal We start with perhaps the simplest and most common joint-distribution (rightly or wrongly) the multi-variate normal. As the name suggests this is a multi-variate distribution with normal distributions as marginals. The joint behaviour is specified by a covariance matrix ($\mathbf{\Sigma}$) representing pairwise covariances between the marginal distributions. We can then specify the pdf as: $$f_{\mathbf {X} }(x_{1},\ldots ,x_{k})={\frac {\exp \left(-{\frac {1}{2}}({\mathbf {x} }-{\boldsymbol {\mu }})^{\mathrm {T} }{\boldsymbol {\Sigma }}^{-1}({\mathbf {x} }-{\boldsymbol {\mu }})\right)}{\sqrt {(2\pi )^{k}|{\boldsymbol {\Sigma }}|}}}$$ Which is essentially just the pdf of a univariate normal distribution just with vectors replacing the single argument. To avoid degeneracy we require that $\mathbf{\Sigma}$ is a positive definite matrix. That is for any vector $\mathbf{x} \in \mathbb{R}^N_{/0}$ we have: $\mathbf{x}^T \mathbf{\Sigma} \mathbf{x} > 0$. How can we sample from this joint distribution? One of the most common was is through the use of a matrix $\mathbf{A}$ such that: $\mathbf{A} \mathbf{A}^T = \mathbf{\Sigma}$. If we have a vector of independent standard normal variates: $\mathbf{Z} = (Z_1, Z_2, ... , Z_N)$ then the vector: $\mathbf{X} = \mathbf{\mu} + \mathbf{A}\mathbf{Z}$ follows a multivariate normal distribution with means $\mathbf{\mu}$ and covariances $\mathbf{\Sigma}$. The Cholesky-decomposition is typically used to find a matrix $\mathbf{A}$ of the correct form, as a result I have heard it called the "Cholesky-method". We will not cover the Cholesky decomposition here since it is a little out of scope, you can read more about it [here](https://en.wikipedia.org/wiki/Cholesky_decomposition) if you would like. Let's look at an example of generating a 4d multivariate normal using this method: ```python # Simulating from a multivariate normal # Using Cholesky Decomposition import numpy as np from scipy.stats import norm import matplotlib.pyplot as plt import seaborn as sns import pandas as pd %matplotlib inline # Fix number of variables and number simulations N = 4 sims = 1000 # Fix means of normal variates mu = np.ones(N)*0.5 # Initialize covariance matrix cov = np.zeros((N, N)) # Create a covariance matrix # Randomly sampled for i in range(N): for j in range(N): if i==j: cov[i, j] = 1 else: cov[i, j] = 0.5 cov[j, i] = cov[i, j] # Calculate cholesky decomposition A = np.linalg.cholesky(cov) # Sample independent normal variates Z = norm.rvs(0,1, size=(sims, N)) # Convert to correlated normal variables X = Z @ A + mu # Convert X to dataframe for plotting dfx = pd.DataFrame(X, columns=["X1", "X2", "X3", "X4"]) # Create variable Plots def hide_current_axis(*args, **kwds): plt.gca().set_visible(False) pp = sns.pairplot(dfx, diag_kind="kde") pp.map_upper(hide_current_axis) plt.show() ``` Here we can see we have geerated correlated normal variates. We see here that this is a quick and efficient way of generating correlated normal variates. However it is not without its problems. Most notably is specifying a covariance/correlation matrix quickly becomes a pain as we increase the number of variates. 
For example if we had $100$ variables we would need to specify: $4950$ coefficients (all the lower off diagonals). If we have to worry about the matrix being positive definite this quickly becomes a pain and we can even begin to run into memory headaches when matrix multiplying with huge matrices. An alternative highly pragmatric approach is to use a "driver" approach. Here we keep a number of standard normal variables that "drive" our target variables - each target variable has an associated weight to each of the driver variables and a "residual" component for its idiosyncratic stochasticity. In large scale models this can drastically reduce the number of variables to calibrate. Of course we lose some of the explanatory power in the model, but in most cases we can still capture the behaviour we would like. This procedure in some sense "naturally" meets the positive definite criteria and we do not need to think about it. Let's look at this in action: ```python # Simulating from a multivariate normal # Using a driver method import numpy as np from scipy.stats import norm import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline # Fix number of variables and number simulations N = 4 sims = 1000 # Target means and standard deviations mu = np.ones(N)*0.5 sig = np.ones(N)*0.5 # Fix number of driver variables M = 2 # Specify driver weight matrix w_mat = np.zeros((M, N)) for i in range(M): for j in range(N): w_mat[i, j] = 0.25 # Residual weights w_res = 1 - w_mat.sum(axis=0) # Simulate driver variables drv = norm.rvs(0,1, size=(sims, M)) # Simulate residual variables res = norm.rvs(0,1, size=(sims, N)) # Calculate correlated variables X = drv @ w_mat + w_res * res # Standardise variables X = (X - X.mean(axis=0)) / (X.std(axis=0)) # Apply transforms X = X*sig + mu # Convert X to dataframe for plotting dfx = pd.DataFrame(X, columns=["X1", "X2", "X3", "X4"]) # Create variable Plots def hide_current_axis(*args, **kwds): plt.gca().set_visible(False) pp = sns.pairplot(dfx, diag_kind="kde") pp.map_upper(hide_current_axis) plt.show() ``` We can see that this "driver" method can significanlty reduce the number of parameters needed for our models. They can also be used to help "interpret" the model - for example we could interpret one of our "driver" variables as being a country's GDP we could then use this to study what happens to our model when GDP falls (e.g. is below the 25th percentile). The nature of the drivers is such that we do not need to know how to model them exactly, this is useful for modelling things such as "consumer sentiment" where a sophisticated model does not necessarily exist. This is part of the "art" side of Monte-Carlo modelling rather than a hard science. The tail dependence parameters from the multi-variate normal satisfy: $$\lambda_u = \lambda_l = 0$$ That is there is no tail-dependence. This can be a problem for our modelling. ### Other Multivariate Distributions With the multivariate normal under our belts we may now want to move onto other joint distributions. Unfortunately things are not that simple, using the methods above we are essentially limited to multivariate distributions that can be easily built off of the multivariate normal distribution. One such popular example is the multivariate student-t distribution. We can sample from this using the transform: $$ \mathbf{X} = \frac{\mathbf{Z}}{\sqrt{\frac{\chi_{\nu}}{\nu}}}$$ Where: $\mathbf{Z}$ is a multivariate normal and $\chi_{\nu}$ is a univariate chi-square with $\nu$ degrees of freedom. 
The resulting multivariate student-t has scale matrix $\mathbf{\Sigma}$ (for $\nu > 2$ its covariance is $\frac{\nu}{\nu-2}\mathbf{\Sigma}$) and $\nu$ degrees of freedom. Here the degrees of freedom is a parameter that controls the degree of tail-dependence, with lower values representing more tail dependence. In the limit $\nu \to \infty$ we have $\mathbf{X} \to \mathbf{Z}$. We can add a couple of extra lines to our Gaussian examples above to convert them to a multi-variate student-t distribution:

```python
# Simulating from a multivariate student t
# Using Cholesky Decomposition and a chi2 transform

import numpy as np
from scipy.stats import norm, chi2
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
%matplotlib inline

# Fix number of variables and number simulations
N = 4
sims = 1000

# Fix means of normal variates
mu = np.ones(N)*0.5

# Initialize covariance matrix
cov = np.zeros((N, N))

# Set degrees of freedom nu
nu = 5

# Create a covariance matrix
# Constant 0.5 correlation off the diagonal
for i in range(N):
    for j in range(N):
        if i==j:
            cov[i, j] = 1
        else:
            cov[i, j] = 0.5
            cov[j, i] = cov[i, j]

# Calculate cholesky decomposition
A = np.linalg.cholesky(cov)

# Sample independent normal variates
Z = norm.rvs(0,1, size=(sims, N))

# Convert to correlated normal variables
X = Z @ A.T + mu

# Standardize the normal variates
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Sample from Chi2 distribution with nu degrees of freedom
chis = chi2.rvs(nu, size=sims)

# Convert standard normals to student-t variables
T = (Z.T / np.sqrt(chis / nu)).T

# Convert T to dataframe for plotting
dft = pd.DataFrame(T, columns=["T1", "T2", "T3", "T4"])

# Create variable Plots
def hide_current_axis(*args, **kwds):
    plt.gca().set_visible(False)

pp = sns.pairplot(dft, diag_kind="kde")
pp.map_upper(hide_current_axis)
plt.show()
```

The tail dependence parameter for the student-t copula for 2 variates with correlation parameter $\rho$ and $\nu$ degrees of freedom is:

$$\lambda_u = \lambda_l = 2 t_{\nu+1} \left(- \frac{\sqrt{\nu+1}\sqrt{1-\rho}}{\sqrt{1+\rho}} \right) > 0$$

We can see that even if we specify $\rho=0$ there will be some level of tail-dependence between the variates. In fact it is impossible to enforce independence between variables using the student-t copula. This can be an issue in some instances but there are extensions we can make to overcome this - for example see my [insurance aggregation model](https://lewiscoleblog.com/insurance-aggregation-model).

Another interesting property of the student-t copula is that it also has "opposing" tail dependency. Namely:

$$\lim _{q\rightarrow 0}\mathbb{P} (X_{2}\leq F_{2}^{-1}(q)\mid X_{1} > F_{1}^{-1}(1-q)) = \lim _{q\rightarrow 0}\mathbb{P} (X_{2}>F_{2}^{-1}(1-q)\mid X_{1} \leq F_{1}^{-1}(q)) = 2 t_{\nu+1} \left(- \frac{\sqrt{\nu+1}\sqrt{1+\rho}}{\sqrt{1-\rho}} \right) > 0$$

This gives rise to the familiar "X" shape of a joint student-t scatter plot.

### The Need for More Sophistication

So far we are limited to modelling joint behaviour with either normal or student-t marginal distributions. This is very limiting; in practice we will want to sample from joint distributions with arbitrary marginal distributions. Typically we will find it easier to fit marginal distributions to data and we will want our models to produce reasonable marginal results. We therefore need a better method of modelling joint behaviour.

## Copula Methods

One way to do this is through a copula method. A copula is nothing more than a multi-variate distribution with uniform $[0,1]$ marginals.
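As a quick sanity check (a short sketch added here, not from the original post), we can confirm numerically that pushing correlated normal samples through the normal CDF produces uniform $[0,1]$ marginals while preserving the rank correlation between the columns:

```python
# Sketch: CDF-transforming correlated normals gives a sample from a copula
import numpy as np
from scipy.stats import norm, kstest, spearmanr

rng = np.random.default_rng(42)
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])

# Correlated standard normal samples
Z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

# Component-wise CDF transform: each column should now be uniform on [0, 1]
U = norm.cdf(Z)

# Kolmogorov-Smirnov test against the uniform distribution (large p-values expected)
print(kstest(U[:, 0], "uniform").pvalue, kstest(U[:, 1], "uniform").pvalue)

# The rank (Spearman) correlation is unchanged by the monotone transform
print(spearmanr(Z[:, 0], Z[:, 1])[0], spearmanr(U[:, 0], U[:, 1])[0])
```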
This is particularly useful for modelling purposes as it allows us to fully delineate the univariate distributions from the joint behaviour. For example we can select a copula with the joint behaviour we would like and then use (for example) a generalized inverse transform on the marginal distributions to get the results we desire.

We can do this simply by adapting the multi-variate Gaussian above: we can use the CDF function on the generated normal variates to create joint-uniform variates (i.e. a copula) and then inverse transform these to give Gamma marginals. This will result in Gamma marginals that display a rank-normal correlation structure:

```python
# Simulating from Gamma variates with rank-normal dependency
# Using a copula type method

import numpy as np
from scipy.stats import norm, gamma
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
%matplotlib inline

# Fix number of variables and number simulations
N = 4
sims = 1000

# Fix means of normal variates
mu = np.ones(N)*0.5

# Initialize covariance matrix
cov = np.zeros((N, N))

# Select Gamma parameters a and b
a = 5
b = 3

# Create a covariance matrix
# Constant 0.5 correlation off the diagonal
for i in range(N):
    for j in range(N):
        if i==j:
            cov[i, j] = 1
        else:
            cov[i, j] = 0.5
            cov[j, i] = cov[i, j]

# Calculate cholesky decomposition
A = np.linalg.cholesky(cov)

# Sample independent normal variates
Z = norm.rvs(0,1, size=(sims, N))

# Convert to correlated normal variables
X = Z @ A.T + mu

# Standardize the normal variates
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Apply the normal CDF to create copula C
C = norm.cdf(Z)

# Use inverse transform to create Gamma variates
G = gamma.ppf(C, a=a, scale=1/b)

# Convert G to dataframe for plotting
dfg = pd.DataFrame(G, columns=["G1", "G2", "G3", "G4"])

# Create variable Plots
def hide_current_axis(*args, **kwds):
    plt.gca().set_visible(False)

pp = sns.pairplot(dfg, diag_kind="kde")
pp.map_upper(hide_current_axis)
plt.show()
```

This is really quite neat! It makes our modelling that much easier as we do not need to consider the joint-distribution in its entirety; we can break it down into smaller pieces that are easier to handle. The example above is quite a simple one however, and we would like to have more freedom than this as we are still bound by sampling from a joint-distribution.

Let's look at some notation. Suppose we have variates: $(X_1, ... ,X_N)$ where each follows a cdf $F_i(x) = \mathbb{P}(X_i \leq x)$. Then $(U_1, ... , U_N) = (F_1(X_1), ..., F_N(X_N))$ is a vector of uniform random variates (by the probability integral transform). The copula defined by the variables $(X_1, ..., X_N)$ is:

$$C(u_1, ... , u_N) = \mathbb{P}(U_1 \leq u_1, ... , U_N \leq u_N) = \mathbb{P}(X_1 \leq F_1^{-1}(u_1), ... , X_N \leq F_N^{-1}(u_N))$$

From this definition $C: [0,1]^N \to [0,1]$ is an N-dimensional copula if it is a cumulative distribution function (CDF) on the unit cube with uniform marginals. Going the other way, any function $C: [0,1]^N \to [0,1]$ defines a copula if:

- $C(u_{1},\dots ,u_{i-1},0,u_{i+1},\dots ,u_{N})=0$, the copula is zero if any one of the arguments is zero,
- $C(1,\dots ,1,u,1,\dots ,1)=u$, the copula is equal to u if one argument is u and all others 1,
- C is d-non-decreasing, i.e., for each hyperrectangle $B=\prod _{i=1}^{N}[x_{i},y_{i}]\subseteq [0,1]^{N}$ the C-volume of B is non-negative:

$$\int _{B}\mathrm {d} C(u)=\sum _{\mathbf {z} \in \times _{i=1}^{N}\{x_{i},y_{i}\}}(-1)^{N(\mathbf {z} )}C(\mathbf {z} )\geq 0$$

where $N(\mathbf {z} )=\#\{k:z_{k}=x_{k}\}$.

We now move onto the fundamental theorem of copulae: Sklar's theorem.
The statement of the theorem exists in two parts: 1. Let $H(X_1, ..., X_N)$ represent a joint distribution function with marginal distributions $X_i \sim F_i$. Then there exists a copula $C(.)$ such that: $H(X_1, ..., X_N) = C(F_1(X_1), ... , F_N(X_N))$. Moreover if $F_i$ are continuous then the copula $C(.)$ is unique 2. Given a copula function $C(.)$ and univariate distribution functions $F_i$ then $C(F_1(X_1), ... , F_N(X_N))$ defines a joint distribution with marginal distributions $F_i$ Sklar's theorem shows there is a bijection relation between the space of joint distributions and the space of copulae (for continuous variables). This shows that this is indeed a powerful method to use in our modelling. There are a couple of things to note here: the copula is itself a rank-order method of applying dependency. It is invariant under monotonic transforms (due to the use of the generalized inverse method). We can also derive the Fréchet–Hoeffding bounds: $$ \operatorname{max}\left\{ 1 - N + \sum_{i=1}^{N} u_i, 0 \right\} \leq C(u_1, ..., u_N) \leq \operatorname{min} \{u_1, ... , u_N \}$$ We can see this quite easily by noting that for the lower-bound: \begin{align} C(u_1, ... , u_N) &= \mathbb{P} \left( \bigcap_{i=1}^N \{ U_i \leq u_i \} \right) \\ &= 1 - \mathbb{P} \left( \bigcup_{i=1}^N \{ U_i \geq u_i \} \right) \\ &\geq 1 - \sum_{i=1}^N \mathbb{P}( U_i \geq u_i ) \\ &= 1 - N + \sum_{i=1}^{N} u_i \end{align} For the upper bound we have: $$ \bigcap_{i=1}^N \{ U_i \leq u_i \} \subseteq \{ U_i \leq u_i \} \qquad \implies \qquad \mathbb{P} \left( \bigcap_{i=1}^N \{ U_i \leq u_i \} \right) \leq \mathbb{P} \left( U_i \leq u_i \right) = u_i $$ For all possible indices $i$. We have that the upper-bound function $\operatorname{min} \{u_1, ... , u_N \}$ itself defines a copula corresponding to complete (rank) dependence. It is often called the "co-monotonic copula". The lower bound exists as a copula only in the case $N=2$ whereby it represents complete negative (rank) dependence. We can also calculate the Spearman ($\rho$) and Kendall-Tau ($\tau$) dependency coefficients using the copula construction: $$ \rho = 12 \int_0^1 \int_0^1 C(u,v) du dv - 3 $$ And: $$ \tau = 4 \int_0^1 \int_0^1 C(u,v) dC(u,v) - 1$$ We can view the Gaussian example above in a copula frame via: $$C^{Gauss}(u_1, ... , u_N) = \Phi_{\mathbf{\mu,\Sigma}}(\Phi^{-1}(u_1), ..., \Phi^{-1}(u_N))$$ Where: $\Phi_{\mathbf{\mu,\Sigma}}$ is the CDF of a multivariate Gaussian with means $\mathbf{\mu}$ and correlation matrix $\mathbf{\Sigma}$ The function: $\Phi^{-1}(.)$ is the inverse CDF of the univariate standard normal distribution. We call this the Gaussian copula or the rank-normal copula. We can define the student-t copula in a similar way. I have developed a model based around a hierarchical driver structure with a generalization of a student-t copula for the purposes of modelling insurance aggregation. You can read my blog-post dedicated to this specifically [here!](https://lewiscoleblog.com/insurance-aggregation-model) But what are some other options? The copula methods are very flexible so lets look at some other examples. Starting off with the Archimedean family of copulae. A copula is Archimedian if it admits a representation: $$ C_{Arch}(u_{1},\dots ,u_{N};\theta )=\psi ^{[-1]}\left(\psi (u_{1};\theta )+\cdots +\psi (u_{N};\theta );\theta \right) $$ Here the function $\psi$ is called the generator function. 
We also have a parameter $\theta$ taking values in some parameter space - the role that $\theta$ plays depends on the form of the generator. $\psi ^{[-1]}$ is the pseudo-inverse of $\psi$:

\begin{equation}
\psi ^{[-1]}(t;\theta ) =
\begin{cases}
\psi ^{-1}(t;\theta )&{\mbox{if }}0\leq t\leq \psi (0;\theta ) \\
0&{\mbox{if }}\psi (0;\theta )\leq t\leq \infty
\end{cases}
\end{equation}

We note that this functional form has some very useful properties. For example, we can express the upper and lower tail dependence parameters (defined above) in terms of the generator. If we place the further assumption on the generator function of being the Laplace transform of a strictly positive random variable, we can write down neat forms of the upper and lower tail dependence metrics:

$$ \lambda _{u} = 2 - 2 \lim_{s \downarrow 0} \frac{\psi'^{[-1]} (2s)}{\psi'^{[-1]} (s)} $$

and:

$$ \lambda _{l} = 2 \lim_{s \to \infty} \frac{\psi'^{[-1]} (2s)}{\psi'^{[-1]} (s)} $$

Similarly we can conveniently calculate the Kendall-tau measure of dependence via:

$$ \tau = 1 + 4 \int_0^1 \frac{\psi(t)}{\psi'(t)} dt $$

This is very convenient for modelling purposes since we can easily control the joint properties in a predictable way. Unfortunately the calculation of Spearman correlation coefficients is not as simple and in many cases a closed form analytic solution does not exist.

We can summarise some of the common Archimedean copulae in the table below:

|Copula Name | $\psi(t)$ | $\psi^{[-1]}(t)$ | Range of $\theta$ | Lower tail $\lambda_l(\theta)$ | Upper tail $\lambda_u(\theta)$ |$\tau$ |
|:--------|:------:|:-------------:|:----------------:|:-----------:|:-----------:|:-------:|
| Frank | $-\log \left({\frac {\exp(-\theta t)-1}{\exp(-\theta )-1}}\right)$ | $-{\frac {1}{\theta }}\,\log(1+\exp(-t)(\exp(-\theta )-1))$ | $\mathbb {R} \backslash \{0\}$ | 0 | 0 | $1 + 4(D_1(\theta)-1)/\theta$ |
| Clayton | $\frac {1}{\theta }(t^{-\theta }-1)$ | $\left(1+\theta t\right)^{-1/\theta }$ | $[-1,\infty) \backslash \{0\}$ | $2^{-1/\theta}$ | 0 | $\frac{\theta}{\theta + 2}$ |
| Gumbel | $\left(-\log(t)\right)^{\theta }$ | $\exp\left(-t^{1/\theta }\right)$ | $[1, \infty)$ | 0 | $2-2^{1/\theta}$ | $1 - \theta^{-1}$ |

Where: $D_k(\alpha) = \frac{k}{\alpha^k} \int_0^{\alpha} \frac{t^k}{\exp(t)-1} dt$ is the Debye function.

We can see that the Frank copula applies dependence without any tail-dependence, whereas the Clayton and Gumbel are options for lower or upper-tail dependence modelling. This is in contrast to the student-t we observed before which applies dependence symmetrically to both lower and upper tails.

To finish we will look at the bi-variate Frank, Clayton and Gumbel copulae. We will use a variety of methods to do this. For the Frank copula we will look at the "conditional" copula (that is the distribution of one variate conditional on a specific value of the other): we will sample the conditional percentile ($z$) and one of the variates ($u_1$) and then back out the remaining variate ($u_2$). The pair $\{ u_1, u_2\}$ is then distributed as the copula:

$$ u_2 = -\frac{1}{\theta} \log \left[1 + z\frac{1-e^{-\theta}}{z(e^{-\theta u_1}-1) - e^{-\theta u_1}} \right]$$

We can do the same for the Clayton:

$$ u_2 = \left(1 + u_1^{-\theta} \left( z^{-\frac{\theta}{1+\theta}}-1 \right) \right)^{-\frac{1}{\theta}}$$

Unfortunately the conditional copula of the Gumbel cannot be inverted in closed form and so we are unable to follow the same approach.
Instead we follow the approach shown by Embrechts where we sample a uniform variate $v$ and then find $0<s<1$ such that: $s\ln(s) = \theta(s-v)$. We then sample another uniform variate $u$ and the pair: $\{ \exp(u^{1/\theta}\ln(s)), \exp((1-u)^{1/\theta}\ln(s))\}$ is a sample from the Gumbel copula with parameter $\theta$. We can implement this as:

```python
# Sampling from the Frank, Clayton and Gumbel Archimedean Copulae
# A very basic bi-variate implementation

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.optimize import fsolve
%matplotlib inline

# Set number of simulations and theta parameter
sims = 1000
theta = 2

# Simulate u1 - first component of copula
U1 = np.random.random(sims)

# Simulate z - the conditional probability
Z = np.random.random(sims)

# Define conversion functions for each copula
def frank(theta, z, u):
    return -1/theta*np.log(1+ z*(1-np.exp(-theta))/(z*(np.exp(-theta*u)-1) - (np.exp(-theta*u))))

def clayton(theta, z, u):
    return np.power(1 + np.power(u, -theta)*(np.power(z, -theta/(1+theta)) - 1), -1/theta)

# Define function to find S such that s*ln(s) = theta*(s - v), with v = U1
def gumfunc(s):
    return s*np.log(s) - theta*(s-U1)

# Use fsolve to find roots
S = fsolve(gumfunc, U1)

U1_gumbel = np.exp(np.log(S)*np.power(Z, 1/theta))
U2_gumbel = np.exp(np.log(S)*np.power(1-Z, 1/theta))

U1_frank = U1
U2_frank = frank(theta, Z, U1)

U1_clayton = U1
U2_clayton = clayton(theta, Z, U1)

# Create Plots
fig, ax = plt.subplots(1, 3, figsize=(15, 5))

sns.scatterplot(x=U1_frank, y=U2_frank, ax=ax[0])
ax[0].set_title("Frank")
ax[0].set_xlim([0,1])
ax[0].set_ylim([0,1])

sns.scatterplot(x=U1_clayton, y=U2_clayton, ax=ax[1])
ax[1].set_title("Clayton")
ax[1].set_xlim([0,1])
ax[1].set_ylim([0,1])

sns.scatterplot(x=U1_gumbel, y=U2_gumbel, ax=ax[2])
ax[2].set_title("Gumbel")
ax[2].set_xlim([0,1])
ax[2].set_ylim([0,1])

fig.suptitle(r"Comparison of Archimedean Copulae with $\theta =$%i" %theta, x=0.5, y=1, size='xx-large')

plt.show()
```

As expected we can see little dependence in the tails of the Frank copula, some dependence in the lower tail (small percentiles) of the Clayton and some dependence in the upper tail (larger percentiles) of the Gumbel copulae.

It was a bit of work to sample from these copulae. Thankfully packages exist to make our lives easier - for example: [copulas](https://pypi.org/project/copulas/). But it is important to understand at least the basics behind how some of these packages work before we jump in and use them.

## Conclusion

We have now extended our abilities from being able to sample from univariate distributions in our previous blog post to being able to sample from multi-variate distributions. We saw how we could use a Cholesky decomposition to sample from a multi-variate normal and further how we could implement a driver approach in order to reduce the number of model parameters. We then saw how we could extend this to sample from a multi-variate student-t distribution. We then looked at copula methods in order to separate the marginal and joint behaviour of a system. We ended by looking at the powerful class of Archimedean copulae and the Frank, Clayton and Gumbel copulae specifically.
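As a short numerical addendum (a sketch added after the fact, not part of the original post), we can check the generator-based Kendall-tau formula quoted earlier against the closed forms in the table using numerical quadrature:

```python
# Sketch: verify tau = 1 + 4 * int_0^1 psi(t)/psi'(t) dt for Clayton and Gumbel
import numpy as np
from scipy.integrate import quad

theta = 2.0

def tau_from_generator(psi_over_dpsi):
    return 1 + 4 * quad(psi_over_dpsi, 0, 1)[0]

# Clayton: psi(t) = (t^-theta - 1)/theta, psi'(t) = -t^(-theta-1)
clayton_ratio = lambda t: (t ** (theta + 1) - t) / theta
# Gumbel: psi(t) = (-log t)^theta, psi'(t) = -theta (-log t)^(theta-1) / t
gumbel_ratio = lambda t: t * np.log(t) / theta

print(tau_from_generator(clayton_ratio), theta / (theta + 2))  # should agree
print(tau_from_generator(gumbel_ratio), 1 - 1 / theta)         # should agree
```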
module Fourier where import Data.Complex import Data.Complex.Integrate coefficient :: (RealFloat a, Integral b) => (a -> Complex a) -> b -> Complex a coefficient = coefficientWithPrecision 1000 coefficientWithPrecision :: (RealFloat a, Integral b) => Integer -> (a -> Complex a) -> b -> Complex a coefficientWithPrecision prec f n = integrate g prec 0 1 where term t = f t * cis ((-2) * pi * fromIntegral n * t) -- "integrate" requires a function with a complex input, but "term" takes a real input g = term . realPart
import netket as nk import networkx as nx import numpy as np operators = {} # Ising 1D g = nk.graph.Hypercube(length=20, n_dim=1, pbc=True) hi = nk.hilbert.Spin(s=0.5, graph=g) operators["Ising 1D"] = nk.operator.Ising(h=1.321, hilbert=hi) # Heisenberg 1D g = nk.graph.Hypercube(length=20, n_dim=1, pbc=True) hi = nk.hilbert.Spin(s=0.5, total_sz=0, graph=g) operators["Heisenberg 1D"] = nk.operator.Heisenberg(hilbert=hi) # Bose Hubbard g = nk.graph.Hypercube(length=3, n_dim=2, pbc=True) hi = nk.hilbert.Boson(n_max=3, n_bosons=6, graph=g) operators["Bose Hubbard"] = nk.operator.BoseHubbard(U=4.0, hilbert=hi) # Graph Hamiltonian sigmax = [[0, 1], [1, 0]] mszsz = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]] edges = [ [0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7], [7, 8], [8, 9], [9, 10], [10, 11], [11, 12], [12, 13], [13, 14], [14, 15], [15, 16], [16, 17], [17, 18], [18, 19], [19, 0], ] g = nk.graph.CustomGraph(edges=edges) hi = nk.hilbert.CustomHilbert(local_states=[-1, 1], graph=g) ha = nk.operator.GraphOperator( hi, siteops=[sigmax], bondops=[mszsz], bondops_colors=[0] ) operators["Graph Hamiltonian"] = ha # Custom Hamiltonian sx = [[0, 1], [1, 0]] sy = [[0, 1.0j], [-1.0j, 0]] sz = [[1, 0], [0, -1]] g = nk.graph.CustomGraph(edges=[[i, i + 1] for i in range(20)]) hi = nk.hilbert.CustomHilbert(local_states=[1, -1], graph=g) sx_hat = nk.operator.LocalOperator(hi, [sx] * 3, [[0], [1], [5]]) sy_hat = nk.operator.LocalOperator(hi, [sy] * 4, [[2], [3], [4], [9]]) szsz_hat = nk.operator.LocalOperator(hi, sz, [0]) * nk.operator.LocalOperator( hi, sz, [1] ) szsz_hat += nk.operator.LocalOperator(hi, sz, [4]) * nk.operator.LocalOperator( hi, sz, [5] ) szsz_hat += nk.operator.LocalOperator(hi, sz, [6]) * nk.operator.LocalOperator( hi, sz, [8] ) szsz_hat += nk.operator.LocalOperator(hi, sz, [7]) * nk.operator.LocalOperator( hi, sz, [0] ) operators["Custom Hamiltonian"] = sx_hat + sy_hat + szsz_hat operators["Custom Hamiltonian Prod"] = sx_hat * 1.5 + (2.0 * sy_hat) rg = nk.utils.RandomEngine(seed=1234) def test_produce_elements_in_hilbert(): for name, ha in operators.items(): hi = ha.hilbert print(name, hi) assert len(hi.local_states) == hi.local_size assert hi.size > 0 rstate = np.zeros(hi.size) local_states = hi.local_states for i in range(1000): hi.random_vals(rstate, rg) conns = ha.get_conn(rstate) for connector, newconf in zip(conns[1], conns[2]): rstatet = np.array(rstate) hi.update_conf(rstatet, connector, newconf) for rs in rstatet: assert rs in local_states def test_operator_is_hermitean(): for name, ha in operators.items(): hi = ha.hilbert print(name, hi) assert len(hi.local_states) == hi.local_size rstate = np.zeros(hi.size) local_states = hi.local_states for i in range(100): hi.random_vals(rstate, rg) conns = ha.get_conn(rstate) for mel, connector, newconf in zip(conns[0], conns[1], conns[2]): rstatet = np.array(rstate) hi.update_conf(rstatet, connector, newconf) conns1 = ha.get_conn(rstatet) foundinv = False for meli, connectori, newconfi in zip(conns1[0], conns1[1], conns1[2]): rstatei = np.array(rstatet) hi.update_conf(rstatei, connectori, newconfi) if np.array_equal(rstatei, rstate): foundinv = True assert meli == np.conj(mel) assert foundinv def test_no_segfault(): g = nk.graph.Hypercube(8, 1) hi = nk.hilbert.Spin(g, 0.5) lo = nk.operator.LocalOperator(hi, [[1,0],[0,1]], [0]) lo = lo.transpose() hi = None lo = lo * lo assert True
# Projector

$Def$ If a square matrix $P \in \mathbb{R}^{m\times m}$ satisfies $P^2 = P$, then we call $P$ a **projector**.

**Range** of $A$, the set of images, $i.e.$, *the space spanned by the columns of matrix $A$*.

**Null** of $A$, all vectors whose images are zero, $i.e.$, all $\vec{x}$ satisfying $A\vec{x} = \vec{0}$.

$Theorem$ **1** For ***any*** matrix $A\in \mathbb{R}^{m \times m}$, $\DeclareMathOperator*{\null}{null} \DeclareMathOperator*{\range}{range} \boxed{ \null\left(A\right) \subseteq \range\left(I-A\right) }$.

$Proof$ If $\vec{x} \in \null \left(A\right)$, then $A\vec{x} = \vec{0}$. Then $\vec{x} = \vec{x} - A\vec{x} = \left(I - A \right) \vec{x} \in \range\left(I-A\right)$

$Theorem$ **2** Let $P \in \mathbb{R}^{m \times m}$. Then $\boxed{ P^2 = P}$ is *equivalent* to $\boxed{ \null(P) = \range(I - P)}$

$Proof$ $\Rightarrow)$ Since we already have $\null (P) \subseteq \range(I-P)$ from *Theorem 1*, now we prove that $\null(P) \supseteq \range(I-P)$. So for $\vec{x} \in \range(I-P)$, there exists a $\vec{y}$ such that $\vec{x} = (I-P)\vec{y}$, so that $P\vec{x} = P(I-P)\vec{y} = (P - P^2)\vec{y} = \vec{0}$, so $\vec{x} \in \null(P)$. Half Done!

$\Leftarrow)$ Since now $\forall \vec{x} \in \mathbb{R}^{m}$, $(I-P)\vec{x} \in \range(I - P) = \null(P)$, so that $P\big( (I-P)\vec{x} \big) = (P - P^2)\vec{x} = \vec{0}$, that is to say that, as maps, $P$ and $P^2$ agree, $i.e.$, $P = P^2$. ALL DONE! $\square$

In general, any projector $P \in \mathbb{R}^{ m \times m}$ maps $\mathbb{R}^{ m}$ onto its **range**. For any vector $\vec{v}$ in the range of $P$, $P\vec{v} = \vec{v}$. For any vector $\vec{v}$ *not* in the **range** of $P$, the difference $\vec{v} − P\vec{v}$ is in the **null** of $P$.

$Def$ **orthogonal projector** and **oblique projector**: Let $P \in \mathbb{R}^{ m \times m}$ be a projector. If $\range(I-P) \perp \range(P)$, $i.e.$, $\null(P) \perp \range(P)$, then it is an orthogonal projector, otherwise an oblique projector.

>**e.g.** $B = \left[ \begin{array}{ccc} 1 & 1\\ 0 & 0 \end{array}\right]$
>
>Now $B^2 = \left[ \begin{array}{ccc} 1 & 1\\ 0 & 0 \end{array}\right] = B$, so it is a projector. And its **range** is the space spanned by $\left[ \begin{array}{c} 1\\ 0 \end{array}\right]$, and its **null** is spanned by $\left[ \begin{array}{c} 1\\ -1 \end{array}\right]$.
>
>And obviously $\range(B) \not \perp \null(B)$

How do we determine whether a projector is oblique or orthogonal?

$Theorem$ **3** Let $P \in \mathbb{R}^{ m \times m}$ be a projector. $P$ is an *orthogonal projector* $iff$ $\boxed{ P = P^{\mathrm{T}} }$.

$Proof$ $\Rightarrow)$ Denote the dimension of $\range(P)$ by $r$, so that the dimension of $\null(P)$ is $m-r$. Denote an orthonormal basis of $\range(P)$ by $\vec{q}_{1}, \vec{q}_{2},\dots,\vec{q}_{r}$, and an orthonormal basis of $\null(P)$ by $\vec{q}_{r+1}, \vec{q}_{r+2},\dots,\vec{q}_{m}$. Since $P$ is an orthogonal projector, $\{ \vec{q}_{1}, \vec{q}_{2},\dots,\vec{q}_{r} \}$ are orthogonal to $\{ \vec{q}_{r+1}, \vec{q}_{r+2},\dots,\vec{q}_{m} \}$. Let $Q = \left[ \begin{array}{cccc} \vec{q}_1 & \vec{q}_2 & \cdots & \vec{q}_m \end{array}\right]$, so that $Q$ is an orthogonal matrix, $Q^{\mathrm{T}}Q = I$.
Then we have $$\begin{align} Q^{\mathrm{T}}PQ =& \left[ \begin{array}{cccc} \vec{q}_1 & \vec{q}_2 & \cdots & \vec{q}_m \end{array}\right]^{\mathrm{T}} P \left[ \begin{array}{cccc} \vec{q}_1 & \vec{q}_2 & \cdots & \vec{q}_m \end{array}\right] \\ =& \left[ \begin{array}{cccc} \vec{q}_1 & \vec{q}_2 & \cdots & \vec{q}_m \end{array}\right]^{\mathrm{T}} \left[ \begin{array}{cccc} P\vec{q}_1 & P\vec{q}_2 & \cdots & P\vec{q}_m \end{array}\right] \\ =& \left[ \begin{array}{ccccccc} \vec{q}_1 & \vec{q}_2 & \cdots & \vec{q}_r & \vec{q}_{r+1} & \cdots & \vec{q}_m \end{array}\right]^{\mathrm{T}} \left[ \begin{array}{ccccccc} \vec{q}_1 & \vec{q}_2 & \cdots & \vec{q}_r & 0 & \cdots & 0 \end{array}\right] \\ =& I_r \end{align}$$ Therefore $$\begin{align} P =& QI_rQ^{\mathrm{T}} \\ =& Q(I_rI_r)Q^{\mathrm{T}} = (QI_r)(I_rQ^{\mathrm{T}}) \\ =& \left[ \begin{array}{cccc} \vec{q}_1 & \vec{q}_2 & \cdots & \vec{q}_r \end{array}\right] \left[ \begin{array}{cccc} \vec{q}_1^{\mathrm{T}} & \vec{q}_2^{\mathrm{T}} & \cdots & \vec{q}_r^{\mathrm{T}} \end{array}\right]^{\mathrm{T}} \\ =& Q_r Q_r^{\mathrm{T}} \end{align}$$ OK, now $P$ is symmetrix. $Conclusion$ If $P$ is orthogonal projector, then $P$ can be written as $\sum \limits_{i = 1}^{r} \vec{q}_i \vec{q}_{i} ^{ \mathrm{T}}$, where $\vec{q}_1,\vec{q}_2,\dots,\vec{q}_r$ are set of orthonormal basis vectors of $\range(P)$. And on the other hand, if $Q_r = \left[ \begin{array}{cccc} \vec{q}_1 & \vec{q}_2 & \cdots & \vec{q}_r \end{array}\right]$ contains a set of orthonormal vectors in $\mathbb{R}^{m}$, then the orthogonal projector from $\mathbb{R}^{m}$ to the range of $Q_r$ is $P = Q_rQ_r^{\mathrm{T}} = \sum \limits_{i = 1}^{r} \vec{q}_i \vec{q}_{i} ^{ \mathrm{T}}$. In general for any given vector $\vec{v} \in \mathbb{R}^{m}$, not necessarily a normalized vector, the orthogonal projector to the direction $\vec{v}$ is $$P_{\vec{v}} = \bigg(\frac{\vec{v}} {\|\vec{v}\|}\bigg)\bigg( \frac{\vec{v}} {\|\vec{v}\|}\bigg)^{\mathrm{T}} = \boxed{ \frac{\vec{v}\vec{v}^{\mathrm{T}}} {\|\vec{v}\|^2}}$$ After that, given any vector $\vec{x} \in \mathbb{R}^{m}$, we have $P_{\vec{v}}\vec{x}$ is the orthogonal projection of $\vec{x}$ onto the direction $\vec{v}$ and the difference $\vec{x} − P_{\vec{v}}\vec{x}$ is perpendicular to $\vec{v}$. And more generally, given a matrix $W \in \mathbb{R}^{m \times n}$, assuming that $m \geq n$ and the columns of $W$ are linearly independent. The orthogonal projector from $\mathbb{R}^{m}$ to the column space (the range) of $W$ can be determined as follows, denoted as $P$. 1. $\vec{v}$ be any vector in $\mathbb{R}^{m}$ and $\vec{y} = P\vec{v} \in \range(W)$ is the image of $\vec{v}$ under this orthogonal projector. 2. Since the projector is orthogonal, the difference $\vec{v} − \vec{y}$ is orthogonal to $\range(W)$, $i.e.$, we have $W^{\mathrm{T}}(\vec{v} − \vec{y}) = \vec{0}$. 3. We also have $\vec{y} = W\vec{x}$, for some $\vec{x} \in \mathbb{R}^{n}$, so that $W^{\mathrm{T}}(\vec{v} − W\vec{x}) = \vec{0}$, $i.e.$, $\vec{x} = \big( W^{\mathrm{T}}W \big)^{-1}W^{\mathrm{T}}\vec{v}$. Since $W$ is of full column rank as assumed ahead, $\big( W^{\mathrm{T}}W \big)^{-1}$ exists. 4. $\vec{y} = P\vec{v} = W\vec{x} = W\big( W^{\mathrm{T}}W \big)^{-1}W^{\mathrm{T}}\vec{v} \Rightarrow \boxed{ P = W\big( W^{\mathrm{T}}W \big)^{-1}W^{\mathrm{T}}}$. $\dagger$ ***NO*** orthogonal matrix, except the identity matrix, is an orthogonal projector.$\ddagger$ # Householder reflector To do Gaussian elimination. 
For any given vector $\vec{x}$, we want to find an operator (matrix) $F$ such that,

$$\vec{x} = \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array}\right] \overset{F}{\longrightarrow } F\vec{x} = \left[ \begin{array}{c} \pm \left\| \vec{x}\right\|_2 \\ 0 \\ \vdots \\ 0 \end{array}\right] = \pm \left\| \vec{x}\right\|_2 \vec{e}_1$$

All entries below the first are zero. Then how do we determine the matrix $F$?

Assume that $\vec{x}$ is reflected to $F\vec{x} = \|\vec{x}\|_2 \vec{e}_1$, the positive direction of the $x_1$-axis.

1. Define $\vec{v} = \vec{x} - F\vec{x} = \vec{x} - \|\vec{x}\|_2 \vec{e}_1$
2. Following the above, the orthogonal projector onto the direction of $\vec{v}$ is $P_{\vec{v}} = \newcommand{\ffrac}{\displaystyle \frac} \ffrac{\vec{v}\vec{v}^{\mathrm{T}}} {\|\vec{v}\|_2^2}$, so that we can see that $\vec{v} = 2P_{\vec{v}}\vec{x}$
3. $F\vec{x} = \vec{x} - \vec{v} = (I - 2P_{\vec{v}})\vec{x}$, $i.e.$, $\boxed{ F = I - 2P_{\vec{v}} = I - 2 \displaystyle \frac{\vec{v}\vec{v}^{\mathrm{T}}} {\|\vec{v}\|_2^2}}$

$Proof$ Brief proof of the second point. To prove $2P_{\vec{v}}\vec{x} = 2\cdot\ffrac{\vec{v}\vec{v}^{\mathrm{T}}\vec{x}} {\|\vec{v}\|_2^2}= \vec{v}$, it is equivalent to prove that $2\vec{v}^{\mathrm{T}}\vec{x} = \|\vec{v}\|_2^2$.

$$2\vec{v}^{\mathrm{T}}\vec{x} = 2(\vec{x}^{\mathrm{T}} - \|\vec{x}\|_2 \vec{e}_1^{\mathrm{T}})\vec{x} = 2\|\vec{x}\|_2^2 - 2\|\vec{x}\|_2 \cdot x_1 \\ \|\vec{v}\|_2^2 = x_1^2 -2\|\vec{x}\|_{2}x_{1} + \|\vec{x}\|_2^2 + x_2^2 + \cdots + x_n^2 = 2\|\vec{x}\|_2^2 - 2\|\vec{x}\|_2 \cdot x_1$$

ALL DONE! $\square$

***

Now we have found what we want: multiplying by the matrix (operator) $F$ reflects the vector $\vec{x}$ onto the direction of $\vec{e}_1$. Besides, we have

$$F^{\mathrm{T}}F = (I - 2P_{\vec{v}})^{\mathrm{T}}(I - 2P_{\vec{v}}) = I^2 - 4P_{\vec{v}} + 4P_{\vec{v}}^2 = I - 4P_{\vec{v}} + 4P_{\vec{v}} = I$$

So this *Householder reflector* is actually an *orthogonal matrix*.

## Which reflector

For a given $\vec{x}$ we have two choices:

| $$F\vec{x} = +\|\vec{x}\|_{2} \, \vec{e}_{1}$$ | $$F\vec{x} = -\|\vec{x}\|_{2} \, \vec{e}_{1}$$ |
|:---------------------------------------------------------------------:|:---------------------------------------------------------------------:|

As mentioned before, subtracting two numbers which are *close* is an **ill-conditioned** problem. So we prefer the reflection for which the first component of $F\vec{x}$ has the opposite sign to $x_1$, $i.e.$

$$\DeclareMathOperator*{\sign}{sign} F\vec{x} = \left[\begin{array}{c} -\sign(x_1)\|\vec{x}\|_2 \\ 0 \\ 0 \\ \vdots \\ 0 \end{array} \right] = -\sign(x_1) \|\vec{x}\|_2\vec{e}_1 $$

And then $\vec{v} = \vec{x} - F\vec{x} = \vec{x} + \sign(x_1)\|\vec{x}\|_2\vec{e}_1$, while $F$ keeps the same expression.

# QR factorization by Householder reflectors

Now we can use the Householder reflectors to reduce a matrix $A$ to its upper triangular form. Let $A$ be an $m\times n$ matrix, assuming that $m \geq n$ and $\DeclareMathOperator*{\rank}{rank} \rank(A) = n$, a column full rank matrix. We get our first operator $F_1$ that reflects the first column of $A$ to the $\vec{e}_1$ direction.

1. Take $\vec{x} = \left[\begin{array}{c} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{array} \right] $, and the reflection $F_1\vec{x} = -\sign(x_1)\|\vec{x}\|_2 \vec{e}_1$, then $\vec{v}_1 = \vec{x} - F_1 \vec{x} = \vec{x} + \sign(x_1) \|\vec{x}\|_2 \vec{e}_1$.
2.
We can get the Householder reflector:

$$Q_1 = F_1 = I - 2 \frac{\vec{v}_1\vec{v}_1^{\mathrm{T}}} {\|\vec{v}_1\|_2^2}$$

3. Now we have

$$Q_1A = \left[\begin{array}{cccc} a_{11}^{(1)} & a_{12} ^{(1)} & \cdots & a_{1n} ^{(1)} \\ 0 & a_{22} ^{(1)} & \cdots & a_{2n} ^{(1)} \\ 0 & a_{32} ^{(1)} & \cdots & a_{3n} ^{(1)} \\ \vdots & \vdots & \vdots & \vdots \\ 0 & a_{m2} ^{(1)} & \cdots & a_{mn} ^{(1)} \\ \end{array} \right]$$

So now we take $\vec{x} = \left[\begin{array}{c} a_{22} ^{(1)} \\ a_{32} ^{(1)} \\ \vdots \\ a_{m2} ^{(1)} \end{array} \right] $, and the reflection $F_2\vec{x} = -\sign(x_1)\|\vec{x}\|_2 \vec{e}_1$, then $\vec{v}_2 = \vec{x} - F_2 \vec{x} = \vec{x} + \sign(x_1) \|\vec{x}\|_2 \vec{e}_1$.

4. We can get the second Householder reflector:

$$F_2 = I - 2 \frac{\vec{v}_2\vec{v}_2^{\mathrm{T}}} {\|\vec{v}_2\|_2^2}$$

Notice that this time $F_2$ is an $(m-1)\times (m-1)$ orthogonal matrix, so we define

$$Q_2 = \left[\begin{array}{cc} 1 & \vec{0}^{\mathrm{T}} \\ \vec{0} & F_2 \end{array} \right]_{m \times m}$$

And so on; after $n$ such steps we get the QR factorization as follows:

$$Q_{n}Q_{n-1}\cdots Q_{2}Q_{1}A = \left[\begin{array}{cccc} a_{11}^{(1)} & a_{12} ^{(1)} & \cdots & a_{1n} ^{(1)} \\ 0 & a_{22} ^{(2)} & \cdots & a_{2n} ^{(2)} \\ 0 & 0 & \ddots & \vdots \\ \vdots & \vdots & \ddots & a_{nn} ^{(n)} \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{array} \right]_{m \times n} = R$$

$R$ is upper triangular, and each $Q_i$ is an orthogonal matrix, so $Q_i^{\mathrm{T}} = Q_i^{-1}$. And from how we construct $Q_i$, we also have $Q_i = Q_i^{\mathrm{T}}$.

$$A = Q_1Q_2Q_3\cdots Q_nR := Q_{m \times m}R_{m \times n}$$

$Algorithm$ **Householder Triangularization**

```
for k = 1 to n
    x = A_{k:m,k}
    v_k = x + sign(x_1) \|x\|_2 e_1
    v_k = v_k / \|v_k\|_2
    A_{k:m,k:n} = A_{k:m,k:n} − 2v_k ( v_k^T A_{k:m,k:n} )
end
```

The number of floating point operations of the Householder triangularization is approximately:

$$\begin{align} \sum_{k=1}^n \sum_{j=k}^{n} 4(m-k+1) =& \sum_{k=1}^n 4(m-k+1)(n-k+1) \\ =& \sum_{k=1}^n 4\big(mn+m+n+1+k^2 - k(m+n+2)\big) \\ \approx & 2mn^2 - \frac{2} {3} n^3 \end{align}$$

***

After the above algorithm, the result $R$ is stored in the upper triangular part of $A$. As for $Q$, it is gone, but we can rebuild it using the saved vectors $\vec{v}_k$.

Why is $Q$ not needed? Because we use the QR factorization to solve the equation $A\vec{x} = \vec{b}$: multiplying $QR\vec{x} = \vec{b}$ by $Q^{\mathrm{T}}$ gives $R\vec{x} = Q^{\mathrm{T}}QR\vec{x} = Q^{\mathrm{T}}\vec{b}$. So actually what we need is the product of the matrix $Q$ (or its transpose) with a given vector $\vec{b}$, which can be computed using only the stored $\vec{v}_k$; a small numerical sketch and the explicit expressions and algorithms follow below.
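To make the procedure concrete, here is a small NumPy sketch (added to these notes, assuming the column full rank, $m \geq n$ setting above) of the triangularization loop and of applying $Q^{\mathrm{T}}$ from the stored reflector vectors:

```python
# Sketch: Householder triangularization storing the reflector vectors v_k
import numpy as np

def householder_qr(A):
    A = A.astype(float).copy()
    m, n = A.shape
    vs = []
    for k in range(n):
        x = A[k:, k].copy()
        sign = 1.0 if x[0] >= 0 else -1.0
        v = x.copy()
        v[0] += sign * np.linalg.norm(x)      # v_k = x + sign(x_1) ||x||_2 e_1
        v /= np.linalg.norm(v)                # normalise v_k
        A[k:, k:] -= 2.0 * np.outer(v, v @ A[k:, k:])  # apply the reflector to the trailing block
        vs.append(v)
    return A, vs                              # A now holds R in its upper triangular part

def apply_Qt(vs, b):
    # Compute Q^T b using only the stored reflector vectors
    x = b.astype(float).copy()
    for k, v in enumerate(vs):
        x[k:] -= 2.0 * v * (v @ x[k:])
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)
R, vs = householder_qr(A)
# Solve the least-squares problem via R x = (Q^T b) restricted to the first n rows
x = np.linalg.solve(np.triu(R[:4, :4]), apply_Qt(vs, b)[:4])
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # should print True
```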
$$\begin{align} Q =& Q_1Q_2Q_3\cdots Q_n \\ =& \big( I_{m} - 2\vec{v}_1 \vec{v}_1^{\mathrm{T}} \big) \left( \begin{array}{cc} 1 & \vec{0}^{\mathrm{T}} \\ \vec{0} & I_{m-1} - 2\vec{v}_2 \vec{v}_2^{\mathrm{T}} \end{array} \right) \cdots \left( \begin{array}{cc} I_{n-1} & \mathbf{0} \\ \mathbf{0} & I_{m-n+1} - 2\vec{v}_n \vec{v}_n^{\mathrm{T}} \end{array} \right) \\[1em] Q^{\mathrm{T}} =& Q_n^{\mathrm{T}}Q_{n-1}^{\mathrm{T}}Q_{n-2}^{\mathrm{T}}\cdots Q_1^{\mathrm{T}} = Q_nQ_{n-1}Q_{n-2}\cdots Q_1 \\ =& \left( \begin{array}{cc} I_{n-1} & \mathbf{0} \\ \mathbf{0} & I_{m-n+1} - 2\vec{v}_n \vec{v}_n^{\mathrm{T}} \end{array} \right) \left( \begin{array}{cc} I_{n-2} & \mathbf{0} \\ \mathbf{0} & I_{m-n+2} - 2\vec{v}_{n-1} \vec{v}_{n-1}^{\mathrm{T}} \end{array} \right) \cdots \left( \begin{array}{cc} 1 & \vec{0}^{\mathrm{T}} \\ \vec{0} & I_{m-1} - 2\vec{v}_2 \vec{v}_2^{\mathrm{T}} \end{array} \right) \big( I_m - 2\vec{v}_1 \vec{v}_1^{\mathrm{T}} \big) \end{align}$$ $Algorithm$ **Given $\vec{b}$, find $Q\vec{b}$** ``` x = b for k = n : −1 : 1 x_{k:m} = x_{k:m} − 2 v_k ( v_k^T x_{k:m} ) end ``` $Algorithm$ **Given $\vec{b}$, find $Q^{\mathrm{T}}\vec{b}$** ``` x = b for k = 1 : n x_{k:m} = x_{k:m} − 2 v_k ( v_k^T x_{k:m} ) end ``` And if you still want the explicit $Q$, just using $Q\vec{e}_1, Q\vec{e}_2, \dots , Q\vec{e}_n$ to get that. # QR factorization by Gram-Schmidt orthogonalization Still we assume that $A$ be an $m\times n$ size matrix, and $m \geq n$ with $\rank(A) = n$, a column full rank matrix. Let $Q \in \mathbb{R}^{m \times m}$ and $R \in \mathbb{R}^{m \times n}$ be the result of QR factorization. $$A = \left[\begin{array}{cccc} \vec{a}_1^c & \vec{a}_2^c & \cdots \vec{a}_n^c \end{array} \right] = \left[\begin{array}{cccc} \vec{q}_1^c & \vec{q}_2^c & \cdots \vec{q}_m^c \end{array} \right]\left[\begin{array}{c} \begin{array}{cccc} r_{11} & r_{12} & \cdots & r_{1n} \\ 0 & r_{22} & \cdots & r_{2n} \\ \vdots & 0 & \ddots & \vdots \\ 0 & 0 & \cdots & r_{nn} \end{array} \\ \mathbf{0} \end{array} \right]=QR$$ But actually we don't need a **full QR factorization**, here's the **reduced** one. $$A = \left[\begin{array}{cccc} \vec{a}_1^c & \vec{a}_2^c & \cdots \vec{a}_n^c \end{array} \right] = \left[\begin{array}{cccc} \vec{q}_1^c & \vec{q}_2^c & \cdots \vec{q}_n^c \end{array} \right]\left[\begin{array}{cccc} r_{11} & r_{12} & \cdots & r_{1n} \\ 0 & r_{22} & \cdots & r_{2n} \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & r_{nn} \end{array}\right]=\hat{Q}\hat{R}$$ And more easily we can find that $\begin{align} \vec{a}_1^c =& r_{11} \vec{q}_1^c \\ \vec{a}_2^c =& r_{12} \vec{q}_1^c + r_{22} \vec{q}_2^c \\ \vdots & \\ \vec{a}_j^c =& r_{1j} \vec{q}_1^c + r_{2j} \vec{q}_2^c + \cdots + r_{jj} \vec{q}_j^c\\ \vdots & \\ \vec{a}_n^c =& r_{1n} \vec{q}_1^c + r_{2n} \vec{q}_2^c + \cdots + r_{nn} \vec{q}_j^c\\ \end{align}$ $Conclusion$ 1. $\vec{a}_j^c \in \big< \vec{q}_1^c, \vec{q}_2^c, \dots, \vec{q}_j^c \big>$, $j = 1,2,\dots, n$ 2. $r_{jj}\neq 0$. Firstly that $r_{11}$ can't, otherwise $\vec{a}_1^c = \vec{0}$, however $A$ is a column full rank matrix. And for $j = 2,3, \dots, n$, still $r_{jj}\neq 0$, otherwise $\vec{a}_1^c, \vec{a}_2^c, \dots, \vec{a}_j^c$ can be expressed by $\big< \vec{q}_1^c, \vec{q}_2^c, \dots, \vec{q}_{j-1}^c \big>$, which implies that these $j$ columns are linear dependent, can't be! (And we can get to this by seeing how we get the $R$.) 3. 
$\left\{\begin{align} \vec{q}_1^c =& \ffrac{\vec{a}_1^c} {r_{11}} \\ \vec{q}_2^c =& \ffrac{\vec{a}_2^c - r_{12}\vec{q}_1^{c}} {r_{22}} \\ \vdots & \\ \vec{q}_j^c =& \ffrac{\vec{a}_j^c - r_{1j}\vec{q}_1^{c} - r_{2j}\vec{q}_2^{c} -\cdots -r_{j-1,j}\vec{q}_{j-1}^{c}} {r_{jj}}\\ \vdots & \\ \vec{q}_n^c =& \ffrac{\vec{a}_n^c - r_{1n}\vec{q}_1^{c} - r_{2n}\vec{q}_2^{c} -\cdots -r_{n-1,n}\vec{q}_{n-1}^{c}} {r_{nn}}\\ \end{align}\right.$ 4. $\vec{q}_j^c \in \big< \vec{a}_1^c, \vec{a}_2^c, \dots, \vec{a}_j^c \big>$, $j = 1,2,\dots, n$ 5. $\big< \vec{a}_1^c, \vec{a}_2^c, \dots, \vec{a}_j^c \big> = \big< \vec{q}_1^c, \vec{q}_2^c, \dots, \vec{q}_j^c \big>$ 6. (From the process of getting $R$, we can see that each time $r_{jj} = \|\vec{x}\|$.) So that actually $\|\vec{q}_j^c\| = 1$, and $\vec{q}_1^c, \vec{q}_2^c, \dots, \vec{q}_j^c$ is actually set of orthonormal vectors. 7. So $r_{jj} = \|\vec{a}_j - r_{1j}\vec{q}_1 - r_{2j}\vec{q}_2 - \cdots - r_{j-1,j}\vec{q}_{j-1} \|_2$, and $\vec{q}_j^c = \ffrac{\vec{a}_j - r_{1j}\vec{q}_1 - r_{2j}\vec{q}_2 - \cdots - r_{j-1,j}\vec{q}_{j-1}} {r_{jj}}$ For the algorithms below, calculate $r_{1,1}$ and $\vec{q}_1^{c}$ first. $Algorithm$ **QR factorization by classical Gram-Schmidt** ``` for j = 1 to n \vec{v} = \vec{a}_j for i = 1 to j−1 r_{ij} = \vec{q}_i^T \vec{a}_j \vec{v} = \vec{v} − r_{ij} \vec{q}_i end r_{jj} = \| \vec{v} \|_2 \vec{q}_j = \vec{v}/r_{jj} end ``` With flops $\sum\limits_{j=1}^{n}\sum\limits_{i=1}^{j-1} 4m \approx 2mn^2$ $Algorithm$ **QR factorization by modified Gram-Schmidt** ``` for j = 1 to n \vec{v} = \vec{a}_j for i = 1 to j−1 r_{ij} = \vec{q}_i^T \vec{v} \vec{v} = \vec{v} − r_{ij} \vec{q}_i end r_{jj} = \| \vec{v} \|_2 \vec{q}_j = \vec{v}/r_{jj} end ``` # Backward stability of QR factorization $Theorem$ Let $A = QR$ by Householder triangularization and for the computed factors, we have $\tilde{Q}\tilde{R} = A + \delta A$, with $\ffrac{\|\delta A\|} {\| A \|} = O(\epsilon_{machine})$. But it is not backward stable if using the classical or modified Gram-Schmidt algorithm. Take an example. $$A = \left[\begin{array}{cc} 0.7 & 0.7 + 10^{-15} \\ 0.7 + 10^{-15} & 0.7 + 10^{-15} \end{array} \right]$$ Condition number of $A$ is about $10^{15}$. Denote the result from Gram-Schmidt algorithm in MATLAB as $Q_G, R_G$, respectively; and similar for $Q_H, R_H$ using Householder triangularization. The result are $$Q_G Q_G^{\mathrm{T}} = \left[\begin{array}{cc} 0.890 & 0.012 \\ 0.012 & 1.109 \end{array} \right],Q_H Q_H^{\mathrm{T}} = \left[\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right]$$ # Solution of system of linear equations through QR factorization Given $A \in \mathbb{R}^{n \times n}$ and $\vec{b} \in \mathbb{R}^{n}$. As mentioned above, we have $R\vec{x} = Q^{\mathrm{T}}\vec{b}$ after the QR factorization, which can be solved by backward substitution or backslash function. Comparing using the LU factorization, it's more stable but more expensive. # Legendre polynomials 勒让德多项式 Consider the space of polynomials of degree less or equal than $n − 1$ on the interval $x \in [ − 1, 1]$. The monomials $1, x, x^2 , ... , x^{n − 1}$ form a basis of this space; $p(x) = a_1 + a_2 x + \cdots + a_n x^{n − 1}$. $Def$ **inner product** $$\big(p(x),q(x)\big) = \int _{-1} ^{1} p(x)q(x) \,\mathrm{d}x$$ Two polynomials are orthogonal if $\big(p(x),q(x)\big)=0$. Then after that the fact is $1, x, x^2 , ... , x^{n − 1}$ is not an orthogonal basis. So using QR factorization to find it. 
$$A = \left[\begin{array}{cccc} 1 & x & \cdots & x^{n-1} \end{array} \right] = \left[\begin{array}{cccc} P_0(x) & P_1(x) & \cdots & P_{n-1}(x) \end{array} \right] \left[\begin{array}{cccc} r_{11} & r_{12}& \cdots & r_{1n} \\ 0 & r_{22} & \cdots & r_{2n} \\ \vdots & \ddots & \ddots& \vdots \\ 0 & \cdots & 0 & r_{nn} \end{array} \right]$$ And the polynomials in $Q$ are the **Legendre polynomials**. $$\displaystyle P_{n}(x)={1 \over 2^{n}n!}{\mathrm{d}^{n} \over \mathrm{d}x^{n}}\left[(x^{2}-1)^{n}\right]$$ Here each polynomial $P_i(x)$ is of degree $i$, and since $P_i(1) = 1$, so they form an orthogonal basis
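As a small illustration (a sketch added here, assuming NumPy's `numpy.polynomial` module is available), we can carry out this orthogonalization of the monomial basis numerically via Gram-Schmidt with the $L^2[-1,1]$ inner product and recover the Legendre polynomials:

```python
# Sketch: Gram-Schmidt on 1, x, x^2, ... with (p, q) = int_{-1}^{1} p(x) q(x) dx
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial import legendre as leg

def inner(p, q):
    # Inner product computed exactly from the antiderivative of p*q
    r = (p * q).integ()
    return r(1.0) - r(-1.0)

n = 5
monomials = [Polynomial([0] * k + [1]) for k in range(n)]  # 1, x, x^2, ...

ortho = []
for a in monomials:
    v = a
    for q in ortho:
        v = v - (inner(q, a) / inner(q, q)) * q  # subtract components along previous polynomials
    ortho.append(v)

# Rescale so that P_k(1) = 1, the usual Legendre normalisation
ortho = [p / p(1.0) for p in ortho]

for k, p in enumerate(ortho):
    print("Gram-Schmidt P_%d:" % k, np.round(p.coef, 4))
    print("numpy        P_%d:" % k, np.round(leg.leg2poly([0] * k + [1]), 4))
```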
State Before: α : Type ?u.89099 β : Type ?u.89102 γ : Type ?u.89105 ι : Type ?u.89108 inst✝⁶ : NormedField α inst✝⁵ : SeminormedAddCommGroup β E : Type u_1 inst✝⁴ : SeminormedAddCommGroup E inst✝³ : NormedSpace α E F : Type ?u.89139 inst✝² : SeminormedAddCommGroup F inst✝¹ : NormedSpace α F inst✝ : NormedSpace ℝ E x : E r : ℝ hr : r ≠ 0 ⊢ frontier (closedBall x r) = sphere x r State After: no goals Tactic: rw [frontier, closure_closedBall, interior_closedBall x hr, closedBall_diff_ball]
(MPRIMALDUALBARRERALOG)= # 4.5 Método primal-dual de barrera logarítmica (BL) ```{admonition} Notas para contenedor de docker: Comando de docker para ejecución de la nota de forma local: nota: cambiar `<ruta a mi directorio>` por la ruta de directorio que se desea mapear a `/datos` dentro del contenedor de docker. `docker run --rm -v <ruta a mi directorio>:/datos --name jupyterlab_optimizacion_2 -p 8888:8888 -p 8787:8787 -d palmoreck/jupyterlab_optimizacion_2:3.0.0` password para jupyterlab: `qwerty` Detener el contenedor de docker: `docker stop jupyterlab_optimizacion_2` Documentación de la imagen de docker `palmoreck/jupyterlab_optimizacion_2:3.0.0` en [liga](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/optimizacion_2). ``` --- Nota generada a partir de [liga1](https://drive.google.com/file/d/16-_PvWNaO0Zc9x04-SRsxCRdn5fxebf2/view), [liga2](https://drive.google.com/file/d/1oulU1QAKyLyYrkpJBLSPlbWnKFCWpllX/view), [liga3](https://drive.google.com/file/d/1RMwUXEN_SOHKue-J9Cx3Ldvj9bejLjiM/view) ```{admonition} Al final de esta nota la comunidad lectora: :class: tip * Conocerá el método primal dual de barrera logarítmica para resolver programas lineales. * Aprenderá que tal método: * es un caso particular de métodos de penalización para resolver problemas de optimización con restricciones. * tiene la capacidad de resolver problemas convexos. * trabaja con bloques de matrices lo que ayuda a aprovechar operaciones vectoriales o matriciales. * puede implementarse con cómputo en paralelo. ``` El método primal-dual de barrera logarítmica (BL) es un método iterativo que realiza un manejo de las variables primales y duales del problema de optimización a resolver. Se le clasifica como un método por puntos interiores. ```{sidebar} Un poco de historia ... El [método símplex](https://en.wikipedia.org/wiki/Simplex_algorithm) desarrollado por Dantzig en los $40$'s hizo posible que se formularan y analizaran modelos grandes en una forma sistemática y eficiente. Hoy en día continúa siendo uno de los métodos más utilizados para resolver programas lineales. No obstante puede ser ineficiente en problemas lineales "patológicos" (ver [Klee-Minty cube](https://en.wikipedia.org/wiki/Klee%E2%80%93Minty_cube)) pues el tiempo para resolver tales problemas es exponencial respecto al tamaño del problema (medido como el número de variables y la cantidad de almacenamiento para los datos del problema). Para la mayoría de problemas prácticos el método símplex es mucho más eficiente que estos casos "patológicos" pero esto motivó la investigación y desarrollo de nuevos algoritmos con mejor desempeño. En 1984 Karmarkar publicó el [algoritmo](https://en.wikipedia.org/wiki/Karmarkar%27s_algorithm) que lleva su mismo nombre que tiene una complejidad polinomial y en la práctica resultó ser eficiente. Pertenece a la clase de métodos con el nombre de [puntos interiores](https://en.wikipedia.org/wiki/Interior-point_method). Hay diferentes tipos de métodos por puntos interiores siendo los de la clase primal-dual ampliamente usados en la práctica. ``` ## Métodos por puntos interiores (PI) Los métodos por puntos interiores (PI) son esquemas iterativos que en un inicio se utilizaron para resolver PL's, sin embargo, se ha extendido su uso al caso no lineal. Por ejemplo, distintos tipos de métodos por PI han sido usados para resolver problemas de optimización convexos, ver {ref}`problemas de optimización convexa en su forma estándar o canónica <PROBOPTCONVEST>`. 
```{margin} Recuérdese que nombramos problemas de optimización con restricciones *large scale* a aquellos problemas de optimización que tienen un número de variables y restricciones mayor o igual a $10^5$ (ambas). ``` En cada iteración de los métodos PI las restricciones de desigualdad del problema de optimización se satisfacen de forma estricta. Cada iteración es costosa de calcular y realiza avance significativo a la solución en contraste con el método símplex que requiere un gran número de iteraciones no costosas. Una característica que tienen los métodos PI es que los problemas *large scale* no requieren muchas más iteraciones que los problemas *small scale* a diferencia del método símplex. Sin embargo para problemas *small scale* en general realizan más iteraciones que el método símplex. En cada iteración el método símplex se mueve de la solución FEV actual a una solución FEV adyacente por una arista de la frontera de la región factible, ver {ref}`método símplex <METODOSIMPLEX>`. Los problemas del tipo *large scale* tienen una cantidad enorme de soluciones FEV. Para ver esto piénsese en un PL al que se le van añadiendo restricciones funcionales. Entonces se añadirán aristas y por tanto soluciones FEV. Los métodos PI evitan tal comportamiento pues avanzan por el interior de la región factible hacia los puntos óptimos y tiene muy poco efecto el ir añadiedo restricciones funcionales al PL para el desempeño de los métodos PI. ```{admonition} Observación :class: tip Los métodos PI han mostrado "buena" eficiencia (en términos del número de iteraciones realizadas) en resolver problemas de optimización *large scale*. Además son métodos que pueden implementarse para procesamiento con cómputo en paralelo. ``` Los métodos PI conforme avanzan en las interaciones aproximan a los puntos óptimos en el límite. 
Por ejemplo, para el {ref}`ejemplo prototipo <EJPROTOTIPO>` de un programa lineal (PL) a continuación se presenta una trayectoria obtenida por un método PI que se aproxima a la solución óptima $(2, 6)$: $$\displaystyle \max_{x \in \mathbb{R}^2} 3x_1 + 5x_2$$ $$\text{sujeto a: }$$ $$x_1 \leq 4$$ $$2x_2 \leq 12$$ $$3x_1 + 2x_2 \leq 18$$ $$x_1 \geq 0, x_2 \geq 0$$ ```python import numpy as np import matplotlib.pyplot as plt ``` ```python np.set_printoptions(precision=3, suppress=True) ``` ```python #x_1 ≤ 4 point1_x_1 = (4,0) point2_x_1 = (4, 10) point1_point2_x_1 = np.row_stack((point1_x_1, point2_x_1)) #x_1 ≥ 0 point3_x_1 = (0,0) point4_x_1 = (0, 10) point3_point4_x_1 = np.row_stack((point3_x_1, point4_x_1)) #2x_2 ≤ 12 or x_2 ≤ 6 point1_x_2 = (0, 6) point2_x_2 = (8, 6) point1_point2_x_2 = np.row_stack((point1_x_2, point2_x_2)) #x_2 ≥ 0 point3_x_2 = (0, 0) point4_x_2 = (8, 0) point3_point4_x_2 = np.row_stack((point3_x_2, point4_x_2)) #3x_1 + 2x_2 ≤ 18 x_1_region_1 = np.linspace(0,4, 100) x_2_region_1 = 1/2*(18 - 3*x_1_region_1) x_1 = np.linspace(0,6, 100) x_2 = 1/2*(18 - 3*x_1) plt.plot(point1_point2_x_1[:,0], point1_point2_x_1[:,1], point3_point4_x_1[:,0], point3_point4_x_1[:,1], point1_point2_x_2[:,0], point1_point2_x_2[:,1], point3_point4_x_2[:,0], point3_point4_x_2[:,1], x_1, x_2) optimal_point = (2, 6) plt.scatter(optimal_point[0], optimal_point[1], marker='o', s=150, facecolors="none", edgecolors='b') plt.legend(["$x_1 = 4$", "$x_1 = 0$", "$2x_2 = 12$", "$x_2 = 0$", "$3x_1+2x_2 = 18$", "(óptimo coordenada 1, óptimo coordenada 2)"], bbox_to_anchor=(1, 1)) point_1_interior_points = (1, 2) point_2_interior_points = (1.27, 4) point_3_interior_points = (1.38, 5) point_4_interior_points = (1.56, 5.5) points_interior_points = np.row_stack((point_1_interior_points, point_2_interior_points, point_3_interior_points, point_4_interior_points)) plt.plot(points_interior_points[:, 0], points_interior_points[:, 1], marker='o', color="blue" ) plt.fill_between(x_1_region_1, 0, x_2_region_1, where=x_2_region_1<=6, color="plum") x_1_region_2 = np.linspace(0,2, 100) plt.fill_between(x_1_region_2, 0, 6, color="plum") plt.title("Región factible del PL") plt.show() ``` ```{margin} Recuérdese que los parámetros de un PL son $b_i, c_i, a_{ij}$. ``` Aunque los métodos PI son una buena alternativa para resolver PL's perdemos ventajas que tiene el método símplex como es el análisis de sensibilidad y el análisis posterior que puede realizarse al modificar los parámetros del PL. Ver las referencias al final de la nota para tales análisis. ## Método primal-dual Se describirán dos ideas que se utilizan en los métodos primal dual y posteriormente una tercera idea que utiliza la función de barrera logarítmica (FBL). Para esto, considérese la forma estándar de un PL (PLE): $$ \displaystyle \min_{x \in \mathbb{R}^n} c^Tx\\ \text{sujeto a:} \\ Ax=b\\ x \geq 0 $$ donde: $A \in \mathbb{R}^{m \times n}, b \in \mathbb{R}^m$, $m < n$ con *rank* completo por renglones y las restricciones se interpretan de una forma *pointwise*. 
```{margin} Las restricciones $Ax = b$ se pueden escribir con funciones $h: \mathbb{R}^n \rightarrow \mathbb{R}$ , $h_i(x) = b_i-a_i ^Tx$, $a_i$ $i$-ésimo renglón de $A \in \mathbb{R}^{m \times n}$ y $b_i$ $i$-ésima entrada de $b$ para $i=1, \cdots, m$ ``` La función Lagrangiana del problema anterior es: $$\mathcal{L}(x, \lambda, \nu) = f_o(x) + \displaystyle \sum_{i=1}^n \lambda_i f_i(x) + \sum_{i=1}^m \nu_i h_i(x) = c^Tx + \lambda^T(-x) + \nu^T(b-Ax)$$ donde: $\mathcal{L}: \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^m \rightarrow \mathbb{R}$. El problema dual asociado es: ```{margin} El problema primal es: $\displaystyle \min_{x \in \mathbb{R}^n} c^Tx \\ \text{sujeto a:}\\ Ax = b \\ x \geq 0 $ $A \in \mathbb{R}^{m \times n}$ y *rank* de A igual a $m < n$. ``` $$\displaystyle \max_{\nu \in \mathbb{R}^m, \lambda \in \mathbb{R}^n} b^T \nu \\ \text{sujeto a :} \\ c - A^T \nu - \lambda = 0 \\ \lambda \geq 0 $$ Las condiciones KKT son: ```{margin} Ver {ref}`las condiciones KKT para un PL en su forma estándar<CONDKKTPLESTANDAR>`. ``` $$ \begin{eqnarray} \nabla_x \mathcal{L}(x, \lambda, \nu) &=& c - A^T\nu - \lambda = 0 \nonumber \\ \lambda^Tx &=& 0 \nonumber \\ Ax &=& b \nonumber \\ -x &\leq& 0 \nonumber \\ \lambda &\geq& 0 \end{eqnarray} $$ Los métodos de la clase primal-dual encuentran soluciones $(x^*, \lambda^*, \nu^*)$ para las igualdades anteriores y modifican las direcciones de búsqueda y tamaños de paso para que las desigualdades se satisfagan de forma **estricta** en cada iteración. En los métodos de la clase primal-dual reescribimos las condiciones KKT de optimalidad anteriores mediante una función $F: \mathbb{R}^{2n + m} \rightarrow \mathbb{R}^{2n+m}$ dada por: $$F(x, \lambda, \nu ) = \left [ \begin{array}{c} c - A^T \nu - \lambda \\ X \Lambda e \\ b - Ax \end{array} \right ]$$ y resolvemos la ecuación **no lineal** $F(x, \lambda, \nu )=0$ para $(x, \lambda) \geq 0$, donde: $X = \text{diag}(x_1, \dots, x_n)$, $\Lambda = \text{diag}(\lambda_1, \dots, \lambda_n)$ y $e$ es un vector de $1$'s en $\mathbb{R}^n$. Además en cada iteración se cumple $x^{(k)} > 0$ y $\lambda^{(k)} > 0$ para $(x^{(k)}, \lambda^{(k)}, \nu^{(k)})$, por esto tales métodos son considerados como puntos interiores. Como la mayoría de los métodos iterativos en optimización, los métodos primal-dual tienen un procedimiento para determinar la dirección de búsqueda y una cantidad que debe ser monitoreada cuyo valor alcance un valor objetivo. En el caso de los PLE's tal cantidad es la *duality gap* medida como: $\lambda^Tx$, ver {ref}`brecha dual <BRECHADUAL>`. (PRIMIDEAMETPRIMDUAL)= ### Primera idea: determinar la dirección de búsqueda ```{margin} Sistema de ecuaciones no lineales a resolver: $F(x, \lambda, \nu ) = \left [ \begin{array}{c} c - A^T \nu - \lambda \\ X \Lambda e \\ b - Ax \end{array} \right ] = 0$ ``` La dirección de búsqueda se determina aplicando el método de Newton al sistema de ecuaciones no lineales que se muestra en el margen del PLE. Por tanto, se resuelve el sistema de ecuaciones lineales: $$J_F(x, \lambda, \nu) \left [ \begin{array}{c} \Delta x \\ \Delta \lambda \\ \Delta \nu \end{array} \right ] = - F(x, \lambda, \nu)$$ donde: $J_F$ es la Jacobiana de $F$ cuya expresión es: $$J_F(x, \lambda, \nu) = \left [ \begin{array}{ccc} 0 & I & -A^T \\ \Lambda & X & 0 \\ -A & 0 & 0 \end{array} \right ].$$ para el vector de incógnitas $\left [ \begin{array}{c} \Delta x \\ \Delta \lambda \\ \Delta \nu \end{array} \right ]$. 
Una vez calculado tal vector de incógnitas se realiza la actualización: $$\left [ \begin{array}{c} x \\ \lambda \\ \nu \end{array} \right ]^{(k+1)} = \left [ \begin{array}{c} x \\ \lambda \\ \nu \end{array} \right ]^{(k)} + \left [ \begin{array}{c} \Delta x \\ \Delta \lambda \\ \Delta \nu \end{array} \right ]$$ donde: $k$ hace referencia a la $k$-ésima iteración. Ver {ref}`Sistema de ecuaciones no lineales<SISTECNOLINEALES>`. ```{admonition} Observación :class: tip Si bien podría elegirse otra dirección de búsqueda, la dirección de Newton (o variantes de ésta) se prefiere por sus propiedades de convergencia e invarianza ante transformaciones afín. ``` Si denotamos $r_d = c - A^T \nu - \lambda, r_p = b - Ax$ como el residual para factibilidad dual y residual para factibilidad primal respectivamente entonces el sistema de ecuaciones lineales a resolver es: $$\left [ \begin{array}{ccc} 0 & I & -A^T \\ \Lambda & X & 0 \\ -A & 0 & 0 \end{array} \right ] \left [ \begin{array}{c} \Delta x \\ \Delta \lambda \\ \Delta \nu \end{array} \right ] = - \left [ \begin{array}{c} r_d \\ X \Lambda e \\ r_p \end{array} \right ]$$ ```{admonition} Comentarios * El sistema de ecuaciones lineales anterior para problemas *large scale* no se construye pues es un sistema cuadrado de tamaño $2n + m \times 2n + m$ y se resuelve reduciéndolo a sistemas de ecuaciones equivalentes. Representa el paso más costoso del método primal-dual. * Se pueden eliminar los signos negativos que están en los bloques de la matriz del sistema de ecuaciones lineales anterior que contienen $A, A^T$ pero por consistencia con lo desarrollado en la nota de dualidad para un PL se mantienen los signos negativos (si se eliminan también debe de ajustarse el lado derecho del sistema). Esta modificación se relaciona con la definición de la función Lagrangiana. ``` (SEGUNIDEAPRIMDUAL)= ### Segunda idea: cortar el paso Si se toma un paso completo es muy posible que en la siguiente iteración se encuentre muy cerca de alguna de las fronteras de restricción o bien se salga de la región factible. Para esto se define un parámetro $t^{(k)} \in (0, 1]$ y por tanto la actualización es: $$\left [ \begin{array}{c} x \\ \lambda \\ \nu \end{array} \right ]^{(k+1)} = \left [ \begin{array}{c} x \\ \lambda \\ \nu \end{array} \right ]^{(k)} + t^{(k)} \left [ \begin{array}{c} \Delta x \\ \Delta \lambda \\ \Delta \nu \end{array} \right ]$$ donde: $k$ hace referencia a la $k$-ésima iteración. ```{admonition} Comentario El parámetro $t^{(k)}$ se calcula con metodologías como búsqueda de línea o regiones de confianza, ver [line search](https://en.wikipedia.org/wiki/Line_search), {ref}`método de búsqueda de línea por backtracking <MBUSLINBACK>`, [trust region](https://en.wikipedia.org/wiki/Trust_region). ``` ## Método primal-dual de barrera logarítmica (BL) ### Tercera idea: reducir la *duality gap* y centrar. Uso de la función de barrera logarítmica (FBL) En cada iteración los métodos primal-dual buscan reducir la *duality gap* o bien mantenerse "cerca" de la trayectoria nombrada trayectoria central. ```{margin} Sistema de ecuaciones lineales $\left [ \begin{array}{ccc} 0 & I & -A^T \\ \Lambda & X & 0 \\ -A & 0 & 0 \end{array} \right ] \left [ \begin{array}{c} \Delta x \\ \Delta \lambda \\ \Delta \nu \end{array} \right ] = - \left [ \begin{array}{c} r_d \\ X \Lambda e \\ r_p \end{array} \right ]$ ``` ```{admonition} Comentario Recuérdese que la *duality gap* en un PLE para el par $(x, \nu)$ primal-dual factible está dada por la diferencia: $c^Tx - b^T \nu$. 
La *duality gap* en un PLE es igual a $\lambda^Tx$. En el sistema de ecuaciones lineales que se muestra en el margen se representa cada sumando de $\lambda^Tx$ con el producto $X \Lambda e$ (recuérdese $X, \Lambda$ son matrices diagonales). ``` La trayectoria central se define a partir de la FBL, ver [Barrier function](https://en.wikipedia.org/wiki/Barrier_function). La definición siguiente se da para un POCE de forma general. ```{margin} Las restricciones $Ax = b$ se pueden escribir con funciones $h_i: \mathbb{R}^n \rightarrow \mathbb{R}$ , $h_i(x) = b_i-a_i ^Tx$, $a_i$ $i$-ésimo renglón de $A \in \mathbb{R}^{p \times n}$ y $b_i$ $i$-ésima entrada de $b$ para $i=1, \cdots, p$ ``` ```{admonition} Definición Considérese el problema de optimización convexa en la forma estándar (POCE): $$ \begin{eqnarray} \displaystyle \min_{x \in \mathbb{R}^n} &f_o(x)& \nonumber \\ &\text{sujeto a:}& \nonumber\\ f_i(x) &\leq& 0 \quad i=1,\dots,m \nonumber \\ Ax &=& b \end{eqnarray} $$ con $A \in \mathbb{R}^{p \times n}$ y *rank* de $A$ igual a $p < n$. Se define la función de barrera logarítmica (FBL) como: $$\phi(x) =-\displaystyle \sum_{i=1}^m \log(-f_i(x))$$ ``` ```{sidebar} Un poco de historia ... La metodología para resolver el problema de barrera logarítmica (PBL) está fundamentada en la *sequential unconstrained minimization technique (SUMT)*, [A. V. Fiacco, G. P. McCormick, 1965](https://www.jstor.org/stable/168637?seq=1). Es una técnica para resolver problemas no lineales sin restricciones que genera una secuencia de puntos interiores factibles que convergen a la solución del problema. Se eligen funciones de barrera con propiedades como la convexidad. Hay versiones de la SUMT para puntos exteriores que inician con puntos no factibles y vía la penalización se busca la convergencia hacia la región factible. En cada iteración de SUMT se define un valor del parámetro de barrera y se resuelve un problema de optimización más sencillo que el original con el método de Newton. La solución de tal problema se utiliza para definir puntos iniciales del siguiente problema a resolver con un valor del parámetro de barrera diferente. A medida que se avanza en las iteraciones la función objetivo del PBL se aproxima cada vez más a $f_o$, al valor óptimo y al conjunto óptimo. ``` El POCE se resuelve planteando el siguiente problema: $$ \displaystyle \min_{x \in \mathbb{R}^n} f_B(x|t_B) \\ \text{sujeto a:} \\ Ax = b $$ donde: $f_B(x|t_B) = f_o(x) + \frac{1}{t_B} \phi(x) = f_o(x) - \frac{1}{t_B} \displaystyle \sum_{i=1}^m \log(-f_i(x))$, $\phi: \mathbb{R}^n \rightarrow \mathbb{R}$ con $t_B$ un parámetro positivo que nombramos **parámetro de barrera**. Denotamos a este problema como **problema de barrera logarítmica (PBL)**. ```python x = np.linspace(-2, -.1, 100) log_barrier = -np.log(-x) t_B1 = 0.2 t_B2 = 0.5 t_B3 = 1 t_B4 = 2 t_B5 = 10 plt.plot(x, 1/t_B1*log_barrier, "r", x, 1/t_B2*log_barrier, "b", x, 1/t_B3*log_barrier, "g", x, 1/t_B4*log_barrier, "m", x, 1/t_B5*log_barrier, "c") plt.legend(["$t_{B1}=0.2$", "$t_{B2}=0.5$", "$t_{B3}=1$", "$t_{B4}=2$", "$t_{B5}=10$"], bbox_to_anchor=(1,1)) plt.axhline(color="black") plt.axvline(color="black") plt.title("Gráfica de la FBL variando el parámetro $t_B$") plt.show() ``` Valores más grandes de $t_B$ hacen que $f_B(x|t_B)$ tienda a $f_o(x)$. Como se observa en la gráfica anterior al elegir un valor de $t_B$ cada vez más grande se tiene: $f_B(x|t_B) = f_o(x) + \frac{1}{t_B} \phi(x) \approx f_o(x)$. 
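As a rough illustration of the SUMT strategy described in the sidebar, the sketch below solves a sequence of barrier problems with increasing $t_B$, warm-starting each one at the previous solution. It reuses the data of the prototype example treated later in this note (only inequality constraints, so no equality-constrained step appears here) and a generic derivative-free solver instead of the Newton-based centering developed below; it is an illustration of the sequential idea, not the method implemented later.

```python
import numpy as np
from scipy.optimize import minimize

# Prototype-example data (see below): min c^T x  s.t.  Ax <= b, x >= 0.
c = np.array([-3.0, -5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

def f_B(x, t_B):
    slack = np.concatenate([b - A @ x, x])    # -f_i(x); must remain strictly positive
    if np.any(slack <= 0.0):
        return np.inf                         # outside the domain of the log barrier
    return c @ x - (1.0 / t_B) * np.sum(np.log(slack))

x = np.array([1.0, 2.0])                      # strictly feasible starting point
t_B, mu = 1.0, 10.0
for _ in range(5):                            # outer (SUMT) iterations
    x = minimize(lambda z: f_B(z, t_B), x, method="Nelder-Mead").x
    print(f"t_B = {t_B:>8.1f}  x*(t_B) = {x}  c^T x = {c @ x:.4f}")
    t_B *= mu                                 # increase the barrier parameter
```

As $t_B$ grows, the successive minimizers move toward the solution of the original LP (here toward $x^* = (2, 6)$).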
```{margin} El POCE recuérdese es: $ \begin{eqnarray} \displaystyle \min_{x \in \mathbb{R}^n} &f_o(x)& \nonumber \\ &\text{sujeto a:}& \nonumber\\ f_i(x) &\leq& 0 \quad i=1,\dots,m \nonumber \\ Ax &=& b \end{eqnarray} $ y el PBL es: $ \displaystyle \min_{x \in \mathbb{R}^n} f_B(x|t_B) \\ \text{sujeto a:} \\ Ax = b $ con $A \in \mathbb{R}^{p \times n}$ y *rank* de $A$ igual a $p < n$, $\begin{eqnarray} f_B(x|t_B) &=& f_o(x) + \frac{1}{t_B} \phi(x) \nonumber \\ &=& f_o(x) - \frac{1}{t_B} \displaystyle \sum_{i=1}^m \log(-f_i(x)) \end{eqnarray} $. ``` ```{admonition} Comentarios * La función $\phi(x) = -\frac{1}{t_B} \log(-x)$ para $x < 0$ es convexa, diferenciable y aproxima a la función indicadora: $$I(x) = \begin{cases} \infty \text{ si } x > 0 \\ 0 \text{ si } x \leq 0 \end{cases}$$ la cual es una función discontinua: En el dibujo anterior sólo se visualiza para el eje horizontal el intervalo $(-\infty, 0)$. * La función indicadora del punto anterior ayuda a reescribir el POCE como: $$\displaystyle \min_{x \in \mathbb{R}^n} f_o(x) + \displaystyle \sum_{i=1}^m I(f_i(x))$$ $$\text{sujeto a:}$$ $$Ax = b$$ por esto resolver el PBL es equivalente a resolver el POCE para valores más grandes de $t_B$. * La FBL es un caso particular de funciones de barrera que penalizan al no satisfacer las restricciones de desigualdad, ver [Penalty method](https://en.wikipedia.org/wiki/Penalty_method). * En general las funciones de barrera deben tener las siguientes propiedades para $x$ primal factibles: 1. Tener valores "pequeños" si $x$ está "lejos" de la frontera de la región factible. 2. Tener valores "grandes" si $x$ está "cerca" de la frontera de la región factible. 3. Tener propiedades como convexidad o diferenciabilidad (ventajas al tener tales propiedades). Por lo anterior las funciones de barrera evitan que se cruce o llegue a la frontera de la región factible del problema primal. * Otra función de barrera para un PL es: $\phi(x) = -\displaystyle \sum_{i=1}^m \frac{1}{f_i(x)}$ para $x$ factibles. * Los problemas de optimización convexos con únicamente restricciones de igualdad pueden resolverse aplicando extensiones del método de Newton. ``` ### Trayectoria central determinada por los puntos centrales ```{margin} El PBL para un POCE recuérdese es: $ \displaystyle \min_{x \in \mathbb{R}^n} f_B(x|t_B) \\ \text{sujeto a:} \\ Ax = b $ con $A \in \mathbb{R}^{p \times n}$ y *rank* de $A$ igual a $p < n$, $\begin{eqnarray} f_B(x|t_B) &=& f_o(x) + \frac{1}{t_B} \phi(x) \nonumber \\ &=& f_o(x) - \frac{1}{t_B} \displaystyle \sum_{i=1}^m \log(-f_i(x)) \end{eqnarray} $. ``` ```{admonition} Definición Para cada valor del parámetro de barrera $t_B$, se definen los **puntos centrales** $x^*(t_B)$ como la solución del PBL y el conjunto de puntos centrales se le nombra trayectoria central, *central path*. ``` Revisemos las condiciones KKT de optimalidad que deben cumplir los puntos centrales para un PLE: $$ \displaystyle \min_{x \in \mathbb{R}^n} c^Tx\\ \text{sujeto a:} \\ Ax=b\\ x \geq 0 $$ con $A \in \mathbb{R}^{m \times n}$ y *rank* de $A$ igual a $m < n$. Se tiene: ```{margin} Recuérdese que en un PLE $f_i(x) = - x_i \forall i=1, \cdots, n$. 
``` $$ \begin{eqnarray} \phi(x) &=& -\displaystyle \sum_{i=1}^n \log(-f_i(x)) \nonumber \\ &=& - \sum_{i=1}^n \log(x_i) \end{eqnarray} $$ y por tanto el PBL para el PLE (PBL-PLE) es: $$\displaystyle \min_{x \in \mathbb{R}^n} c^Tx - \frac{1}{t_B} \displaystyle \sum_{i=1}^n \log(x_i)$$ $$\text{sujeto a:}$$ $$Ax=b$$ La función Lagrangiana del PBL-PLE es: $$ \begin{eqnarray} \mathcal{L}_B(x, \nu) &=& f_B(x|t_B) + \sum_{i=1}^m \nu_i h_i(x) \nonumber \\ &=& c^Tx - \frac{1}{t_B} \displaystyle \sum_{i=1}^n \log(x_i) + \sum_{i=1}^m \nu_i(b_i-a_i ^Tx) \nonumber \\ &=& c^Tx - \frac{1}{t_B} \displaystyle \sum_{i=1}^n \log(x_i) + \nu^T(b-Ax) \nonumber \end{eqnarray} $$ con $a_i$ $i$-ésimo renglón de $A$ y $b_i$ $i$-ésima entrada de $b$. Las condiciones necesarias y suficientes KKT de optimalidad del PBL-PLE son: ```{margin} Recuérdese que las condiciones de KKT para un PLE son: $ \begin{eqnarray} \nabla_x \mathcal{L}(x, \lambda, \nu) &=& c - A^T\nu - \lambda = 0 \nonumber \\ \lambda^Tx &=& 0 \nonumber \\ Ax &=& b \nonumber \\ -x &\leq& 0 \nonumber \\ \lambda &\geq& 0 \end{eqnarray} $ ``` $$ \begin{eqnarray} \nabla_x \mathcal{L}_B(x, \nu) &=& c - A^T\nu - \frac{1}{t_B}d = 0 \nonumber \\ Ax &=& b \nonumber \\ \end{eqnarray} $$ donde: $d = X^{-1}e = \left [ \begin{array}{c} \frac{1}{x_1} \\ \vdots \\ \frac{1}{x_n} \\ \end{array} \right ]$. Ver {ref}`condiciones KKT para un PL en su forma estándar <CONDKKTPLESTANDAR>` (se muestran en el margen). Los puntos centrales $x^*(t_B)$ resuelven el PBL-PLE y por tanto satisfacen: $$ \begin{eqnarray} \nabla_x \mathcal{L}_B(x^*(t_B), \nu) &=& c - A^T\nu - \frac{1}{t_B} d(t_B) = 0 \nonumber \\ Ax^*(t_B) &=& b \nonumber \\ \end{eqnarray} $$ donde: $d(t_B) = X^{*-1}(t_B)e = \left [ \begin{array}{c} \frac{1}{x_1^*(t_B)} \\ \vdots \\ \frac{1}{x_n^*(t_B)} \\ \end{array} \right ]$. ### Relación entre las condiciones KKT de optimalidad del PLE y las del PBL-PLE Para establecer la relación entre las condiciones KKT de optimalidad del PLE y las del PBL-PLE considérese **sólo** en esta sección que el PBL-PLE es: $$\displaystyle \min_{x \in \mathbb{R}^n} t_B c^Tx - \displaystyle \sum_{i=1}^n \log(x_i)$$ $$\text{sujeto a:}$$ $$Ax=b$$ ```{margin} La FBL en el PBL-PLE recuérdese es: $ \begin{eqnarray} \phi(x) &=& -\displaystyle \sum_{i=1}^n \log(-f_i(x)) \nonumber \\ &=& - \sum_{i=1}^n \log(x_i) \end{eqnarray} $ ``` ```{admonition} Observación :class: tip Esta forma del PBL-PLE es equivalente a la revisada anteriormente en la que la FBL se divide por el parámetro $t_B$. Es una cuestión sólo de escritura matemática lo que se realiza a continuación. ``` Las condiciones KKT son iguales a las revisadas en la sección anterior salvo la posición en la que se tiene el parámetro $t_B$: $$ \begin{eqnarray} \nabla_x \mathcal{L}_B(x^*(t_B), \hat{\nu}) &=& t_Bc - A^T\hat{\nu} - d(t_B) = 0 \nonumber \\ Ax^*(t_B) &=& b \nonumber \\ \end{eqnarray} $$ donde: $\hat{\nu} = t_B \nu$. 
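These perturbed conditions can be checked numerically on a toy problem. In the sketch below the data ($c$, $A$, $b$) are invented for illustration: a two-variable PLE whose central point $x^*(t_B)$ reduces to a scalar root-finding problem after eliminating $x_2 = 1 - x_1$; the code then verifies that $t_B c - A^T\hat{\nu} - d(t_B) \approx 0$ and that $x^*(t_B)$ approaches the solution of the LP as $t_B$ grows.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical tiny PLE: min c^T x  s.t.  x_1 + x_2 = 1,  x >= 0.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])

def central_point(t_B):
    # Stationarity of t_B*c^T x - log(x_1) - log(x_2) on {x_1 + x_2 = 1}, after
    # eliminating x_2 = 1 - x_1, reduces to a scalar equation in x_1.
    g = lambda x1: t_B * (c[0] - c[1]) - 1.0 / x1 + 1.0 / (1.0 - x1)
    x1 = brentq(g, 1e-12, 1.0 - 1e-12)
    return np.array([x1, 1.0 - x1])

for t_B in (1.0, 10.0, 100.0, 1000.0):
    x = central_point(t_B)
    d = 1.0 / x                                        # d(t_B) = X^{-1} e
    nu_hat = np.linalg.lstsq(A.T, t_B * c - d, rcond=None)[0]
    residual = t_B * c - A.T @ nu_hat - d              # ~0 on the central path
    print(t_B, x, np.linalg.norm(residual))
```

For this toy problem the LP solution is $x^\star = (1, 0)$, and $x^*(t_B) \to x^\star$ as $t_B \to \infty$.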
```{margin} Recuérdese que las condiciones de KKT para un PLE son: $ \begin{eqnarray} \nabla_x \mathcal{L}(x, \lambda, \nu) &=& c - A^T\nu - \lambda = 0 \nonumber \\ \lambda^Tx &=& 0 \nonumber \\ Ax &=& b \nonumber \\ -x &\leq& 0 \nonumber \\ \lambda &\geq& 0 \end{eqnarray} $ ``` Las condiciones KKT para un PLE se muestran en el margen y obsérvese que si $\lambda_i^*(t_B) = - \frac{1}{t_B f_i(x^*(t_B))} = - \frac{1}{t_B (-x_i^*(t_B))}$ con $x_i^*(t_B)$ $i$-ésima componente de $x^*(t_B)$ $\forall i = 1, \dots, n$ entonces se cumple: $$\lambda_i^*(t_B) > 0$$ pues por la definición de la FBL en el PBL-PLE debe cumplirse: $f_i(x) = -x_i < 0$ o bien para los puntos centrales $-x_i^*(t_B) < 0 \forall i=1, \dots, n$. Lo anterior resulta del dominio de la función $\log$ la cual está definida únicamente en $\mathbb{R}_{++}$ (reales positivos). Esto satisface las desigualdades de factibilidad primal y de factibilidad dual de las condiciones KKT de optimalidad. La holgura complementaria de las condiciones KKT de optimalidad para un PBL-PLE son: $$ \begin{eqnarray} \lambda^*(t_B)^Tx^*(t_B) &=& \displaystyle \sum_{i=1}^n \lambda_i^*(t_B) x_i^*(t_B) \nonumber \\ &=& \displaystyle \sum_{i=1}^n - \frac{x_i^*(t_B)}{t_B (-x_i^*(t_B))} \nonumber \\ &=& \displaystyle \sum_{i=1}^n \frac{1}{t_B} = \frac{n}{t_B} \end{eqnarray} $$ ```{margin} El POCE recuérdese es: $ \begin{eqnarray} \displaystyle \min_{x \in \mathbb{R}^n} &f_o(x)& \nonumber \\ &\text{sujeto a:}& \nonumber\\ f_i(x) &\leq& 0 \quad i=1,\dots,m \nonumber \\ Ax &=& b \end{eqnarray} $ y el PBL es: $ \displaystyle \min_{x \in \mathbb{R}^n} f_B(x|t_B) \\ \text{sujeto a:} \\ Ax = b $ con $A \in \mathbb{R}^{p \times n}$ y *rank* de $A$ igual a $p < n$, $\begin{eqnarray} f_B(x|t_B) &=& f_o(x) + \frac{1}{t_B} \phi(x) \nonumber \\ &=& f_o(x) - \frac{1}{t_B} \displaystyle \sum_{i=1}^m \log(-f_i(x)) \end{eqnarray} $. ``` Por tanto la *duality gap* asociada con $x^*(t_B), \lambda^*(t_B), \nu^*(t_B)$ es: $\frac{n}{t_B}$ donde: $\nu^*(t_B) = \frac{\hat{\nu}}{t_B}$. ```{margin} Recuérdese que las condiciones de KKT para un PLE son: $ \begin{eqnarray} \nabla_x \mathcal{L}(x, \lambda, \nu) &=& c - A^T\nu - \lambda = 0 \nonumber \\ \lambda^Tx &=& 0 \nonumber \\ Ax &=& b \nonumber \\ -x &\leq& 0 \nonumber \\ \lambda &\geq& 0 \end{eqnarray} $ ``` ```{admonition} Comentarios * Por la forma de la *duality gap* anterior para los puntos centrales si $t_B$ se incrementa entonces la *duality gap* tiende a cero en el método primal dual de BL. * Para un PBL que se obtiene de un POCE la *duality gap* anterior para los puntos centrales es $\frac{m}{t_B}$ pues se tienen $m$ funciones $f_i$ de desigualdad. * Las condiciones KKT de optimalidad del PBL-PLE son las condiciones KKT de optimalidad del PLE (que se muestran en el margen) pero perturbadas por el parámetro $t_B$: $$ \begin{eqnarray} \nabla_x \mathcal{L}_B(x(t_B), \hat{\nu}) &=& t_Bc - A^T\hat{\nu} - d(t_B) = 0 \nonumber \\ Ax(t_B) &=& b \nonumber \\ \lambda_i(t_B)x_i(t_B) &=& \frac{1}{t_B} \end{eqnarray} $$ donde: $\hat{\nu} = t_B \nu$, $d(t_B) = X^{-1}(t)e = \left [ \begin{array}{c} \frac{1}{x_1(t_B)} \\ \vdots \\ \frac{1}{x_n(t_B)} \\ \end{array} \right ]$ y la *duality gap* se estima como: $\lambda(t_B)^Tx(t_B) = \frac{n}{t_B}$. ``` ### ¿Cómo calcular los puntos centrales? 
Para calcular los puntos centrales del PBL-PLE se utiliza la {ref}`primera idea: determinar la dirección de búsqueda <PRIMIDEAMETPRIMDUAL>` en la que se resuelve el siguiente sistema de ecuaciones no lineales con el método de Newton: ```{margin} El PBL-PLE recuérdese es: $ \displaystyle \min_{x \in \mathbb{R}^n} c^Tx - \frac{1}{t_B} \displaystyle \sum_{i=1}^n \log(x_i) \nonumber \\ \text{sujeto a:} \nonumber \\ Ax=b $ con $A \in \mathbb{R}^{m \times n}$ y *rank* de $A$ igual a $m < n$. Las condiciones KKT son: $ \begin{eqnarray} \nabla_x \mathcal{L}_B(x, \nu) &=& c - A^T\nu - \frac{1}{t_B}d = 0 \nonumber \\ Ax &=& b \nonumber \\ \end{eqnarray} $ donde: $d = X^{-1}e = \left [ \begin{array}{c} \frac{1}{x_1} \\ \vdots \\ \frac{1}{x_n} \\ \end{array} \right ]$. ``` $$F(x, \nu) = \left [ \begin{array}{c} c - A^T\nu - \frac{1}{t_B}d(t_B) \\ b- Ax(t_B) \end{array} \right ] = 0$$ donde: $d(t_B) = X^{*-1}(t)e = \left [ \begin{array}{c} \frac{1}{x_1^*(t_B)} \\ \vdots \\ \frac{1}{x_n^*(t_B)} \\ \end{array} \right ]$. Este sistema de ecuaciones no lineales conduce a resolver el sistema de ecuaciones lineales: $$J_F(x, \nu) \left [ \begin{array}{c} \Delta x \\ \Delta \nu \end{array} \right ] = - F(x, \nu)$$ donde: $J_F(x, \nu) = \left [ \begin{array}{cc} \nabla_{xx} ^2 \mathcal{L}_B(x, \nu) & \nabla_{\nu x} \mathcal{L}_B(x,\nu) \\ -A & 0\end{array} \right ] = \left [ \begin{array}{cc} \frac{1}{t_B} D^2(t_B) & -A^T \\ -A & 0\end{array} \right ]$ y $D^2(t_B) = \text{diag}^2(d(t_B)) \in \mathbb{R}^{n \times n}$. La actualización en el método de Newton es: $$\left [ \begin{array}{c} x \\ \nu \end{array} \right ]^{(k+1)} = \left [ \begin{array}{c} x \\ \nu \end{array} \right ]^{(k)} + t^{(k)} \left [ \begin{array}{c} \Delta x \\ \Delta \nu \end{array} \right ]$$ donde se utilizó la {ref}`segunda idea: cortar el paso <SEGUNIDEAPRIMDUAL>`. ```{admonition} Comentarios * Se pueden eliminar los signos negativos que están en los bloques de la matriz del sistema de ecuaciones lineales anterior que contienen $A, A^T$ pero por consistencia con lo desarrollado en la nota de dualidad para un PL se mantienen los signos negativos (si se eliminan también debe de ajustarse el lado derecho del sistema). Esta modificación se relaciona con la definición de la función Lagrangiana. * También una modificación que se realiza para que en el primer bloque del sistema de ecuaciones lineales anterior no tengamos del lado izquierdo y del lado derecho $\frac{1}{t_B}$ se puede trabajar con el problema de optimización equivalente: $$ \displaystyle \min_{x \in \mathbb{R}^n} f_B(x|t_B) \\ \text{sujeto a:} \\ Ax = b $$ donde: $f_B(x|t_B) = t_Bf_o(x) + \phi(x) = t_Bf_o(x) - \displaystyle \sum_{i=1}^m \log(-f_i(x))$. **Estas dos modificaciones se utilizan para implementar el método primal-dual de BL**. ``` ## Método primal-dual de BL aplicado al ejemplo prototipo ```python !pip install --quiet "git+https://github.com/ITAM-DS/analisis-numerico-computo-cientifico.git#egg=opt&subdirectory=src" ``` WARNING: You are using pip version 20.3.3; however, version 21.0.1 is available. You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command. Se utiliza el paquete de *Python* [opt](https://analisis-numerico-computo-cientifico.readthedocs.io/) en los siguientes `import`'s. 
```python from opt.utils_logarithmic_barrier import log_barrier_aux_eval_constraints, \ constraint_inequalities_funcs_generator, constraint_inequalities_funcs_eval, \ phi, logarithmic_barrier, line_search_for_log_barrier_by_backtracking ``` Problema de optimización: $$ \displaystyle \max_{x \in \mathbb{R}^2} 3x_1 + 5x_2\\ \text{sujeto a: } \\ x_1 \leq 4 \nonumber \\ 2x_2 \leq 12 \\ 3x_1 + 2x_2 \leq 18 \\ x_1 \geq 0 \\ x_2 \geq 0 \\ $$ Definimos la función $\phi: \mathbb{R}^n \rightarrow \mathbb{R}$ como $\phi(x) = - \displaystyle \sum_{i=1}^m \log(-(a_i ^Tx-b_i)) - \sum_{i=1}^n \log(-{e}_i ^T(-x))$ con $a_i$ $i$-ésimo renglón de $A = \left [ \begin{array}{cc} 1 & 0 \\0 & 2 \\ 3 & 2 \end{array} \right ]$, $b_i$ $i$-ésima entrada del vector $b = \left [ \begin{array}{c} 4 \\ 12 \\ 18 \end{array} \right ]$ y $e_i$ $i$-ésimo vector canónico. Para este problema las curvas de nivel de la función $\phi$ se ven como sigue: ```python const_ineq_two_pars = {0: lambda x1,x2: x1 - 4, 1: lambda x1,x2: 2*x2 - 12, 2: lambda x1,x2: 3*x1 + 2*x2 - 18, 3: lambda x1,x2: -x1, 4: lambda x1,x2: -x2 } def const_ineq_funcs_eval_two_pars(x1,x2, const_ineq): """ Auxiliary function for the evaluation of constraint inequalities in logarithmic barrier function using two parameters as input. """ const_ineq_funcs_eval = np.array([const(x1,x2) for const in \ constraint_inequalities_funcs_generator(const_ineq)]) return const_ineq_funcs_eval def phi_two_pars(x1,x2, const_ineq): """ Implementation of phi function for logarithmic barrier using two parameters as input. """ const_ineq_funcs_eval = -const_ineq_funcs_eval_two_pars(x1, x2, const_ineq) log_barrier_const_eval = np.log(const_ineq_funcs_eval) return -np.sum(log_barrier_const_eval, axis=0) ``` ```python density=1e-1 x1l=0.1 x2d=0.1 x1r=4 x2u=8 x1_p=np.arange(x1l,x1r,density) x2_p=np.arange(x2d,x2u,density) x1_mesh,x2_mesh = np.meshgrid(x1_p,x2_p) z = phi_two_pars(x1_mesh,x2_mesh,const_ineq_two_pars) plt.plot(point1_point2_x_1[:,0], point1_point2_x_1[:,1], "--", color="black", label="_nolegend_") plt.plot(point3_point4_x_1[:,0], point3_point4_x_1[:,1], "--", color="black", label="_nolegend_") plt.plot(point1_point2_x_2[:,0], point1_point2_x_2[:,1], "--", color="black", label="_nolegend_") plt.plot(point3_point4_x_2[:,0], point3_point4_x_2[:,1], "--", color="black", label="_nolegend_") plt.plot(x_1, x_2, "--", color="black", label="_nolegend_") plt.contour(x1_p, x2_p, z) plt.title("Curvas de nivel de $\phi$") plt.show() ``` Reescribimos el problema anterior sin las restricciones $A x \leq b$ como: $ \displaystyle \min_{x \in \mathbb{R}^2} t_B(-3x_1 -5x_2) - [\log(4-x_1) + \log(12 - 2x_2) + \log(18 - (3 x_1 + 2 x_2)) + \log(x_1) + \log(x_2) ]\\ $ Realizamos la actualización: $$\left [ \begin{array}{c}x_1 \\ x_2 \end{array} \right ] = \left [ \begin{array}{c}x_1 \\ x_2 \end{array} \right ] + t \left [ \begin{array}{c}\Delta x_1 \\ \Delta x_2 \end{array} \right ]$$ donde: $t$ es parámetro de *backtracking* y $\left [ \begin{array}{c}\Delta x_1 \\ \Delta x_2 \end{array} \right ]$ es solución del sistema de ecuaciones lineales: $$ \nabla^2f_{B}(x) \left [ \begin{array}{c}\Delta x_1 \\ \Delta x_2 \end{array} \right ] = - \nabla f_{B}(x) \nonumber \\ $$ que para este problema es: $$ \begin{eqnarray} \tilde{A}^T \text{diag}^2(d(t_B))\tilde{A} \left [ \begin{array}{c}\Delta x_1 \\ \Delta x_2 \end{array} \right ] &=& -(t_Bc + \tilde{A}^Td(t_B)) \nonumber \end{eqnarray} $$ donde: $c = \left [ \begin{array}{c}-3 \\ -5 \end{array} \right ], \tilde{A} = \left [ \begin{array}{c} A \\ I 
\end{array} \right ] = \left [ \begin{array}{cc} 1 & 0 \\0 & 2 \\ 3 & 2 \\ 1 & 0 \\ 0 & 1 \end{array} \right ]$, $b_i$ $i$-ésima entrada del vector $b = \left [ \begin{array}{c} 4 \\ 12 \\ 18 \end{array} \right ]$. Y el vector $d(t_B) = \left [ \begin{array}{c} \frac{1}{b_1 - a_1^Tx(t_B)} \\ \frac{1}{b_2 - a_2^Tx(t_B)} \\ \frac{1}{b_3 - a_3^Tx(t_B)} \\ \frac{1}{-x_1(t_B)} \\ \frac{1}{-x_2(t_B)} \end{array}\right ] = \left [ \begin{array}{c} \frac{1}{4-x_1(t_B)} \\ \frac{1}{12-2x_2(t_B)} \\ \frac{1}{18-3x_1(t_B)-2x_2(t_B)} \\ \frac{1}{-x_1(t_B)} \\ \frac{1}{-x_2(t_B)} \end{array}\right ]$. ````{admonition} Observación :class: tip Aunque podríamos definir las siguientes líneas de acuerdo al desarrollo matemático anterior: ```python A = np.array([[1, 0], [0, 2], [3, 2]]) m = 3 n = 2 b = np.array([4, 12, 18]) A_tilde = np.row_stack((A, np.eye(n))) d = np.array([1/(b[0]-A[0,:].dot(x)), 1/(b[1]-A[1,:].dot(x)), 1/(b[2]-A[2,:].dot(x)), 1/(-x[0]), 1/(-x[1])]) system_matrix = (A_tilde.T*(d*d))@A_tilde rhs = -(t_B*c +A_tilde.T@d) ``` usamos *SymPy* para uso de diferenciación simbólica (no se recomienda el uso de *SymPy* para problemas *medium* o *large scale*). ```` ```{admonition} Comentario Valores más "grandes" de $t_B$ hacen que la Hessiana de la función objetivo del PBL varíe rápidamente cerca de la frontera del conjunto factible. En este ejemplo prototipo algunas de las entradas de $\text{diag}^2(d(t_B))$ serán muy grandes en tales valores de $t_B$. ``` ```python import sympy from sympy.tensor.array import derive_by_array ``` ```python x1, x2 = sympy.symbols("x1, x2") c = np.array([-3, -5]) fo_sympy = c[0]*x1 + c[1]*x2 phi_sympy = -(sympy.log(4-x1) + sympy.log(12-2*x2) + sympy.log(18-3*x1-2*x2) + sympy.log(x1) + sympy.log(x2)) gf_sympy = derive_by_array(fo_sympy, (x1, x2)) Hf_sympy = derive_by_array(gf_sympy, (x1, x2)) gphi_sympy = derive_by_array(phi_sympy, (x1, x2)) Hphi_sympy = derive_by_array(gphi_sympy, (x1, x2)) constraints_ineq = {0: lambda x: x[0] - b[0], 1: lambda x: 2*x[1] - b[1], 2: lambda x: 3*x[0] + 2*x[1] - b[2], 3: lambda x: -x[0], 4: lambda x: -x[1] } x_0 = np.array([1, 2], dtype=float) x = x_0 fo = lambda x: np.dot(c, x) t_B_0 = 10 b = np.array([4, 12, 18], dtype=float) n = x_0.size gf_B = lambda x, t_B: np.array([component.subs({"x1": x[0], "x2": x[1], "t_B": t_B}) for component in t_B*gf_sympy + gphi_sympy], dtype = float) Hf_B = lambda x, t_B: np.array([second_partial_derivative.subs({"x1": x[0], "x2": x[1], "t_B": t_B}) for second_partial_derivative in t_B*Hf_sympy + Hphi_sympy], dtype=float).reshape(n,n) ``` ### Primera iteración ```{margin} Aquí evaluamos la FBL del ejemplo prototipo: $\begin{eqnarray} f_B(x|t_B) &=& t_B(-3x_1 -5x_2) \nonumber \\ &-& \log(4-x_1) - \log(12 - 2x_2) \nonumber \\ &-& \log(18 - (3 x_1 + 2 x_2)) \nonumber \\ &-& \log(x_1) - \log(x_2) \end{eqnarray} $ ``` ```python log_barrier_eval = logarithmic_barrier(fo,x,t_B_0,constraints_ineq) ``` ```python print(log_barrier_eval) ``` -136.26909628370626 ```python const_ineq_funcs_eval = -constraint_inequalities_funcs_eval(x,constraints_ineq) ``` ```python print(const_ineq_funcs_eval) ``` [ 3. 8. 11. 1. 2.] ```{margin} Aquí revisamos que al evaluar las restricciones esté dentro del dominio de la función $\log$ (valores estrictamente positivos). Si no están los puntos en el dominio debemos devolver un mensaje y detener el método. 
``` ```python if(sum(const_ineq_funcs_eval < -np.nextafter(0,1)) >=1): print("Some constraint inequalities evaluated in x were nonpositive, check approximations") ``` ```python fo_eval = fo(x) ``` ```python print(fo_eval) ``` -13.0 ```{margin} Resolvemos el sistema de ecuaciones lineales para calcular la dirección de Newton. ``` ```python system_matrix = Hf_B(x, t_B_0) rhs = -gf_B(x, t_B_0) ``` ```python dir_Newton = np.linalg.solve(system_matrix, rhs) ``` ```python print(dir_Newton) ``` [ 19.696 142.065] ```{margin} Aquí calculamos el decremento de Newton cuya definición se da al finalizar las iteraciones. ``` ```python dec_Newton_squared = rhs.dot(dir_Newton) ``` ```python print(dec_Newton_squared) ``` 7711.552360817478 ```python stopping_criteria = dec_Newton_squared/2 ``` ```python print(stopping_criteria) ``` 3855.776180408739 ```python der_direct = -dec_Newton_squared ``` ```{margin} Aquí cortamos el paso con la metodología de búsqueda de línea por *backtracking*. ``` ```python t = line_search_for_log_barrier_by_backtracking(fo,dir_Newton,x_0,t_B_0, constraints_ineq, der_direct) ``` /home/miuser/.local/lib/python3.7/site-packages/opt/utils_logarithmic_barrier.py:17: RuntimeWarning: invalid value encountered in log eval_f_const_inequality = np.log(eval_f_const_inequality) ```python print(t) ``` 0.015625 ```python x = x + t*dir_Newton ``` ```python print(x) ``` [1.308 4.22 ] ### Segunda iteración ```python system_matrix = Hf_B(x, t_B_0) rhs = -gf_B(x, t_B_0) ``` ```python log_barrier_eval = logarithmic_barrier(fo,x,t_B_0,constraints_ineq) ``` ```python print(log_barrier_eval) ``` -255.91817648985682 ```python const_ineq_funcs_eval = -constraint_inequalities_funcs_eval(x,constraints_ineq) ``` ```python print(const_ineq_funcs_eval) ``` [2.692 3.56 5.637 1.308 4.22 ] ```python if(sum(const_ineq_funcs_eval < -np.nextafter(0,1)) >=1): print("Some constraint inequalities evaluated in x were nonpositive, check approximations") ``` ```python fo_eval = fo(x) ``` ```python print(fo_eval) ``` -25.022042371388302 ```python dir_Newton = np.linalg.solve(system_matrix, rhs) ``` ```python print(dir_Newton) ``` [11.93 94.597] ```python dec_Newton_squared = rhs.dot(dir_Newton) ``` ```python print(dec_Newton_squared) ``` 5021.819828077723 ```python stopping_criteria = dec_Newton_squared/2 ``` ```python print(stopping_criteria) ``` 2510.9099140388616 ```python der_direct = -dec_Newton_squared ``` ```python t = line_search_for_log_barrier_by_backtracking(fo,dir_Newton,x,t_B_0, constraints_ineq, der_direct) ``` ```python print(t) ``` 0.015625 ```python x = x + t*dir_Newton ``` ```python print(x) ``` [1.494 5.698] ### Tercera iteración ```python system_matrix = Hf_B(x, t_B_0) rhs = -gf_B(x, t_B_0) ``` ```python log_barrier_eval = logarithmic_barrier(fo,x,t_B_0,constraints_ineq) ``` ```python print(log_barrier_eval) ``` -333.0255657565843 ```python const_ineq_funcs_eval = -constraint_inequalities_funcs_eval(x,constraints_ineq) ``` ```python print(const_ineq_funcs_eval) ``` [2.506 0.604 2.122 1.494 5.698] ```python if(sum(const_ineq_funcs_eval < -np.nextafter(0,1)) >=1): print("Some constraint inequalities evaluated in x were nonpositive, check approximations") ``` ```python fo_eval = fo(x) ``` ```python print(fo_eval) ``` -32.97166508298113 ```python dir_Newton = np.linalg.solve(system_matrix, rhs) ``` ```python print(dir_Newton) ``` [9.648 2.785] ```python dec_Newton_squared = rhs.dot(dir_Newton) ``` ```python print(dec_Newton_squared) ``` 406.3137761754695 ```python stopping_criteria = dec_Newton_squared/2 
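# The quantity lambda(x)^2 / 2 (half the squared Newton decrement) estimates
# f_B(x) - p* and is the value compared against the tolerance in the inner
# (centering) iterations; its definition and properties are given further below.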
``` ```python print(stopping_criteria) ``` 203.15688808773476 ```python der_direct = -dec_Newton_squared ``` ```python t = line_search_for_log_barrier_by_backtracking(fo,dir_Newton,x,t_B_0, constraints_ineq, der_direct) ``` ```python print(t) ``` 0.03125 ```python x = x + t*dir_Newton ``` ```python print(x) ``` [1.796 5.785] ````{admonition} Ejercicio :class: tip Utiliza las definiciones: ```python A = np.array([[1, 0], [0, 2], [3, 2]]) m = 3 n = 2 b = np.array([4, 12, 18]) A_tilde = np.row_stack((A, np.eye(n))) d = np.array([1/(b[0]-A[0,:].dot(x)), 1/(b[1]-A[1,:].dot(x)), 1/(b[2]-A[2,:].dot(x)), 1/(-x[0]), 1/(-x[1])]) system_matrix = (A_tilde.T*(d*d))@A_tilde rhs = -(t_B*c +A_tilde.T@d) ``` y realiza cuatro iteraciones recalculando lo necesario para el sistema de ecuaciones lineales con `system_matrix` y `rhs` dadas por las últimas dos líneas del código que está en este ejercicio. Corrobora que obtienes los mismos resultados que con *SymPy*. ```` ```{margin} El POCE recuérdese es: $ \begin{eqnarray} \displaystyle \min_{x \in \mathbb{R}^n} &f_o(x)& \nonumber \\ &\text{sujeto a:}& \nonumber\\ f_i(x) &\leq& 0 \quad i=1,\dots,m \nonumber \\ Ax &=& b \end{eqnarray} $ y el PBL es: $ \displaystyle \min_{x \in \mathbb{R}^n} f_B(x|t_B) \\ \text{sujeto a:} \\ Ax = b $ con $A \in \mathbb{R}^{p \times n}$ y *rank* de $A$ igual a $p < n$, $\begin{eqnarray} f_B(x|t_B) &=& f_o(x) + \frac{1}{t_B} \phi(x) \nonumber \\ &=& f_o(x) - \frac{1}{t_B} \displaystyle \sum_{i=1}^m \log(-f_i(x)) \end{eqnarray} $. ``` ```{admonition} Comentarios * La forma general de la condición de KKT de optimalidad $\nabla_x \mathcal{L}_B(x^, \nu) = 0$ para un PBL que se obtuvo de un POCE es: $$ \begin{eqnarray} \nabla_x \mathcal{L}_B(x, \nu) &=& \nabla f_o(x) + \frac{1}{t_B}\nabla \phi(x) + A^T \nu = 0 \nonumber \\ &=& \nabla f_o(x) + \frac{1}{t_B} \displaystyle \sum_{i=1}^m \frac{\nabla f_i(x)}{-f_i(x)} + A^T \nu \nonumber \\ \end{eqnarray} $$ con: $\mathcal{L}: \mathbb{R}^n \times \mathbb{R}^p \rightarrow \mathbb{R}$, $$\begin{eqnarray} \mathcal{L}_B(x, \nu) &=& f_{B}(x|t_B) + \sum_{i=1}^p \nu_i h_i(x) \nonumber \\ &=& f_o(x|t_B) + \frac{1}{t_B} \phi(x) + \nu^T(b-Ax) \end{eqnarray} $$ y al aplicar el método de Newton al sistema de ecuaciones no lineales conduce a resolver el sistema de ecuaciones lineales siguiente: $$\left [ \begin{array}{cc} \nabla^2f_o(x) + \frac{1}{t_B} \nabla^2 \phi(x) & -A^T \\ -A & 0\end{array} \right ] \left [ \begin{array}{c} \Delta x \\ \Delta \nu \end{array} \right ] = -\left [ \begin{array}{c} \nabla f_o(x) + \frac{1}{t_B} \nabla \phi(x) \\ r_p \end{array} \right ]$$ donde: $r_p = b - Ax$ es el residual para factibilidad primal. Ver {ref}`la función Lagrangiana <FUNLAGRANGIANA>`. ``` ## Definición decremento de Newton Para problemas de optimización convexos sin restricciones: $$\min_{x \in \mathbb{R}^n} f_o(x)$$ en los que utilizamos el método de Newton para resolverlos, se utiliza una cantidad en criterios de paro y en resultados de convergencia nombrada el decremento de Newton. ```{admonition} Definición El decremento de Newton para $f_o: \mathbb{R}^n \rightarrow \mathbb{R}$ en $x$ es la cantidad: $$\lambda(x) = (\nabla f_o(x)^T \nabla^2f_o(x)^{-1} \nabla f_o(x))^{1/2}$$ en donde se asume que $f_o \in \mathcal{C}^2(\text{dom}f_o)$ y su Hessiana es definida positiva. ``` ```{admonition} Comentarios * Asumiendo que existe un punto óptimo $x^*$ y el valor óptimo se denota por $p^* = f_o(x^*)$ el decremento de Newton tiene propiedades como son: * $\frac{1}{2} \lambda ^2 (x)$ estima $f_o(x)-p^*$. 
* $|| \nabla^2 f_o(x)^{-1} \nabla f_o(x)||_{\nabla ^2f_o(x)} = \left ( \nabla f_o(x)^T \nabla^2 f_o(x)^{-1} \nabla ^2 f(x) \nabla ^2 f_o(x)^{-1} \nabla f_o(x) \right )^{1/2} = \lambda(x) $ que indica que $\lambda(x)$ es la norma del paso de Newton en la norma cuadrática definida por la Hessiana. * En el método de búsqueda de línea por *backtracking* $-\lambda (x) ^2$ es la derivada direccional de $f_o$ en $x$ en la dirección de $\Delta x_{\text{nt}}$: $$\frac{df(x+t \Delta x_{\text{nt}})}{dt} \Bigr|_{t=0} = \nabla f_o(x)^T \Delta x_{\text{nt}} = \nabla f_o(x)^T (-\nabla^2 f_o(x)^{-1} \nabla f_o(x)) = -\lambda(x)^2$$ donde: $t$ es el parámetro de búsqueda de línea por *backtracking*, $\Delta x_{\text{nt}} = -\nabla ^2 f_o(x)^{-1} \nabla f_o(x)$ para $x \in \text{dom}f_o$ es la dirección de Newton para $f_o$ en $x$. * En el método primal-dual para resolver un PBL el decremento de Newton se utiliza en las *inner iterations*. ``` ## Algoritmo primal-dual de BL para un PL con únicamente desigualdades Para un problema de la forma: $$\displaystyle \min_{x \in \mathbb{R}^n} c^Tx$$ $$\text{sujeto a:}$$ $$Ax \leq b$$ $$x \geq 0$$ >**Dados** $x$ un punto estrictamente factible, esto es: $x > 0$, $Ax < b$ (todas las entradas de $x$ son positivas y $a_i^Tx < b_i$), $t_B^{(0)}$ parámetro de barrera, $\mu > 1$, $tol > 0$. > >$t_B:= t_B^{(0)}$. > >**Repetir** el siguiente bloque para $k=1,2,\dots$ > >***Outer iterations***: >>**Paso de centrado o *inner iterations***: >> >>Calcular $x^*(t_B)$ que resuelva: $\displaystyle \min_{x \in \mathbb{R}^n} t_Bf_o(x) + \phi(x)$ iniciando con $x$. >> >>Utilizar criterio de paro para *inner iterations*. > >Actualizar $x:=x^*(t_B)$. > > Incrementar $t_B$ por $t_B=\mu t_B$. > > **hasta** convergencia: satisfacer criterio de paro en el que se utiliza $tol$ y $maxiter$. ```{admonition} Observación :class: tip Para un PL únicamente con desigualdades recuérdese: $$t_Bf_o(x) + \phi(x) = t_B c^Tx - \displaystyle \sum_{i=1}^m \log(-(a_i ^Tx-b_i)) - \sum_{i=1}^n \log(-{e}_i ^T(-x))$$ con $a_i$ $i$-ésimo renglón de $A$. ``` ````{admonition} Comentarios * $\mu$ es un parámetro que realiza un *trade-off* en el número de *inner* y *outer iterations*. Controla el seguimiento de la trayectoria central en las *inner iterations*. Valores grandes causan un mayor número de *inner iterations* y valores cercanos a $1$ causan un mayor número de *outer iterations*. * La elección de $t_B^{(0)}$ ayuda a dar una estimación del recíproco de la *duality gap* (recuérdese que la estimación en un PBL-PLE es $\frac{n}{t_B}$). Es similar el efecto que con el parámetro $\mu$. Valores grandes causan que se realicen mayor número de *inner iterations* y valores pequeños un mayor número de *outer iterations*. * El criterio de paro de las *outer iterations* en un PBL-PLE es de la forma: ``` while n/t_B > tol && iterations < max_iters ``` y el de las *inner iterations* es de la forma: ``` while dec_Newton/2 > tol && iterations < max_iters ``` con `dec_Newton/2` el decremento de Newton, `tol` una cantidad pequeña y positiva (comúnmente menor o igual a 10−8), `iterations` un contador de iteraciones. * El algoritmo también puede regresar estimaciones para $\lambda$ con $\lambda^*(t_B)$ y $\nu$ dada por $\nu^*(t_B)$. 
```` ## Método primal-dual de BL aplicado al ejemplo prototipo (completo) Problema de optimización: $$ \displaystyle \max_{x \in \mathbb{R}^2} 3x_1 + 5x_2\\ \text{sujeto a: } \\ x_1 \leq 4 \nonumber \\ 2x_2 \leq 12 \\ 3x_1 + 2x_2 \leq 18 \\ x_1 \geq 0 \\ x_2 \geq 0 \\ $$ ```python tol_outer_iter = 1e-6 tol=1e-8 tol_backtracking=1e-12 max_inner_iter=30 mu=10 x_ast = np.array([2, 6], dtype=float) p_ast = fo(x_ast) ``` Se utiliza la función [primal_dual_method](https://analisis-numerico-computo-cientifico.readthedocs.io/en/latest/_autosummary/opt.logarithmic_barrier.linear_program_inequalities.primal_dual_method.html#opt.logarithmic_barrier.linear_program_inequalities.primal_dual_method) del paquete [opt](https://analisis-numerico-computo-cientifico.readthedocs.io/en/latest/) en la siguiente celda: ```python from opt.logarithmic_barrier.linear_program_inequalities import primal_dual_method ``` ```python [x,total_iter,t,x_plot] = primal_dual_method(fo, constraints_ineq, x_0, tol, tol_backtracking, t_B_0, x_ast=x_ast, p_ast=p_ast, max_inner_iter=max_inner_iter, mu=mu, tol_outer_iter=tol_outer_iter, gf_B=gf_B, Hf_B=Hf_B, ) ``` ```{admonition} Observación :class: tip Obsérvese que se realizan más iteraciones con el método primal-dual para este ejemplo prototipo que con el método símplex. ``` ```python from opt.utils_logarithmic_barrier import plot_central_path ``` ```python plt.contour(x1_p, x2_p, z) plt.xlim(-0.1, 5) plt.ylim(-0.1,6.5) #level curves for fo x_1_line_1 = np.linspace(0, 6, 100) x_2_line_1 = 1/5*(-3*x_1_line_1 + 23) x_1_line_2 = np.linspace(0, 6, 100) x_2_line_2 = 1/5*(-3*x_1_line_2 + 29) x_1_line_3 = np.linspace(0, 6, 100) x_2_line_3 = 1/5*(-3*x_1_line_3 + 36) plt.plot(x_1_line_1, x_2_line_1, "green",label="_nolegend_") plt.plot(x_1_line_2, x_2_line_2, "indigo",label="_nolegend_") plt.plot(x_1_line_3, x_2_line_3, "darkturquoise", label="_nolegend_") #central path plot_central_path(x_plot) ``` ```{admonition} Comentario En **este ejemplo** las curvas de nivel de la función objetivo $f_o$ representadas con rectas en la gráfica anterior son tangentes a las curvas de nivel de $\phi$ en $x^*(t_B)$ pues: $t_B \nabla f_o(x^*(t_B)) + \nabla \phi(x^*(t_B)) = 0$ por lo que: $$\nabla \phi(x^*(t_B)) = -t_B \nabla f_o(x^*(t_B)) = -t_Bc$$ ``` ```{admonition} Ejercicio :class: tip Resolver el siguiente problema con el método primal-dual de BL y corrobora con algún software tu respuesta: $$ \displaystyle \min_{x \in \mathbb{R}^2} x_1 + x_2 - 4x_3\\ \text{sujeto a:} \\ x_1 + x_2 + 2x_3 \leq 9 \nonumber \\ x_1 + x_2 - x_3 \leq 2 \nonumber \\ -x_1 + x_2 + x_3 \leq 4 \nonumber \\ x_1 \geq 0, x_2 \geq 0, x_3 \geq 0 $$ ``` ```{admonition} Comentario El método primal-dual puede modificarse para el caso en el que se tengan puntos no primal-dual factibles. En este caso se le nombra *path following method*. ``` ```{admonition} Ejercicios :class: tip 1.Resuelve los ejercicios y preguntas de la nota. ``` **Preguntas de comprehensión** 1)¿Qué es un método por puntos interiores? 2)¿Qué se busca con el método primal-dual y cuáles son las ideas que se utilizan para su desarrollo? 3)¿Por qué al método primal-dual se le nombra así? 4)¿Qué efecto y ventajas tienen añadir funciones de barrera que penalizan al no satisfacer las restricciones de desigualdad de un problema de optimización? 5)¿Qué propiedades se buscan que satisfagan las funciones de barrera? 6)¿Por qué se elige el método de Newton para resolver el problema PBL? 7)¿Qué son los puntos centrales y la trayectoria central? 
8)¿Qué relación existe entre las condiciones KKT de optimalidad del PLE y las del PBL-PLE? 9)¿Cómo se define y qué propiedades tiene el decremento de Newton? 10)Explica la tarea que tienen los parámetros $\mu$ y $t_B$ en el problema PBL. Puedes apoyar tu respuesta considerando el efecto que resulta de elegir valores grandes, pequeños de tales parámetros. **Referencias:** 1. S. P. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, 2009. 2. J. Nocedal, S. J. Wright, Numerical Optimization, Springer, 2006. 3. F. Hillier, G. Lieberman, Introduction to Operations Research, Mc Graw Hill, 2014.
Cruise along the Karawari River to view crocodiles basking on the banks and locals paddling their slender dugout canoes with long, curved oars. Observe village life and customs, participate in typical daily activities, and see reenactments of traditional ceremonies. Take nature walks to search for unique flora and fauna, including blue, superb and king of Saxony birds of paradise; flightless cassowaries; and the delicate Sepik blue orchid. Travel to remote areas where you will stay in comfortable lodges with fantastic panoramic views of the surrounding wilderness.
/* Copyright (C) 2019-2020 JingWeiZhangHuai <[email protected]> Licensed under the Apache License, Version 2.0; you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ #include "morn_util.h" #include <cblas.h> #ifdef MORN_USE_CL #include <CL/cl.h> #include <clBLAS.h> cl_context mDeviceContext(int device); cl_command_queue mDeviceQueue(int device); #endif #define MORN_NO_TRANS 0 #define MORN_TRANS 1 struct HandleCLBlas { int clblas_setup; }; void endCLBlas(struct HandleCLBlas *handle) { NULL; #ifdef MORN_USE_CL if(handle->clblas_setup) clblasTeardown(); #endif } #define HASH_CLBlas 0x3b4b7c3a static struct HandleCLBlas *morn_clblas_handle=NULL; void SetupCLBlas() { MHandle *hdl=mHandle(mMornObject(NULL,DFLT),CLBlas); morn_clblas_handle=(struct HandleCLBlas *)(hdl->handle); if(hdl->valid==0) { #ifdef MORN_USE_CL mException(clblasSetup()!=CL_SUCCESS,EXIT,"CLBlas not work"); morn_clblas_handle->clblas_setup=1; #endif hdl->valid=1; } } void mSgemm(int device,int a_trans,int b_trans,int m,int n,int k,float alpha,MMemoryBlock *a,int sa,MMemoryBlock *b,int sb,float beta,MMemoryBlock *c,int sc) { #ifdef MORN_USE_CL if(device!=MORN_HOST) { a_trans=(a_trans==MORN_TRANS)?clblasTrans:clblasNoTrans; b_trans=(b_trans==MORN_TRANS)?clblasTrans:clblasNoTrans; if(morn_clblas_handle==NULL) SetupCLBlas(); device = c->device; mMemoryBlockCopy(a,device); mMemoryBlockCopy(b,device); cl_event a_event=(cl_event)(a->cl_evt); cl_event b_event=(cl_event)(b->cl_evt); cl_event c_event=(cl_event)(c->cl_evt); cl_event event_list[2] = {a_event,b_event}; cl_command_queue queue=mDeviceQueue(device); int ret=clblasSgemm(clblasRowMajor,a_trans,b_trans,m,n,k,alpha,a->cl_data,0,sa,b->cl_data,0,sb,beta,c->cl_data,0,sc,1,&queue,2,event_list,&c_event); mException(ret!=CL_SUCCESS,EXIT,"clblas error"); c->flag = MORN_DEVICE; return; } #endif a_trans=(a_trans==MORN_TRANS)?CblasTrans:CblasNoTrans; b_trans=(b_trans==MORN_TRANS)?CblasTrans:CblasNoTrans; cblas_sgemm(CblasRowMajor,a_trans,b_trans,m,n,k,alpha,a->data,sa,b->data,sb,beta,c->data,sc); } char *saxpby=mString( __kernel void saxpby(__global const float* a,__global float* b,const float alpha,const float beta,const int sa,const int sb) { const int idx = get_global_id(0); const int ia = idx*sa; const int ib = idx*sb; b[ib]=a[ia]*alpha+b[ib]*beta; }); void mSaxpby(int device,int n,float alpha,MMemoryBlock *a,int sa,float beta,MMemoryBlock *b,int sb) { #ifdef MORN_USE_CL if(device!=MORN_HOST) { mCLFunction(saxpby,CLSIZE(n),CLIN(a),CLINOUT(b),CLPARA(&alpha,sizeof(float)),CLPARA(&beta,sizeof(float)),CLPARA(&sa,sizeof(int)),CLPARA(&sb,sizeof(int))); return; } #endif cblas_saxpby(n,alpha,a->data,sa,beta,b->data,sb); }
From Categories Require Import Essentials.Notations. From Categories Require Import Essentials.Types. From Categories Require Import Essentials.Facts_Tactics. From Categories Require Import Category.Category. From Categories Require Import Functor.Main. From Categories Require Import Cat.Cat. From Categories Require Import Basic_Cons.Terminal. From Categories Require Import Archetypal.Discr.Discr. (** The unique functor from the initial category to any other. *) Program Definition Functor_From_Empty_Cat (C' : Category) : (0 –≻ C')%functor := {| FO := fun x => Empty_rect _ x; FA := fun a b f => match a as _ return _ with end |}. Local Hint Extern 1 => cbn in *. (** Empty Cat is the initial category. *) Program Instance Cat_Init : (𝟘_ Cat)%object := {| terminal := 0%category; t_morph := fun x => Functor_From_Empty_Cat x |}.
!> @file !! Include fortran file for scatterv operations !! @author !! Copyright (C) 2015-2015 BigDFT group !! This file is distributed under the terms of the !! GNU General Public License, see ~/COPYING file !! or http://www.gnu.org/copyleft/gpl.txt . !! For the list of contributors, see ~/AUTHORS mpi_comm=MPI_COMM_WORLD if (present(comm)) mpi_comm=comm root_=0 if (present(root)) root_=root call MPI_SCATTERV(sendbuf,sendcounts,displs,mpitype(sendbuf),& recvbuf,recvcount,mpitype(recvbuf),root_,mpi_comm,ierr) if (ierr/=0) then call f_err_throw('Error in MPI_SCATTERV',err_id=ERR_MPI_WRAPPERS) end if
# #install.packages('ggplot2') # install.packages('reshape') # install.packages('lme4') # install.packages('ez') # install.packages('emmeans') # install.packages('gridExtra') # install.packages('knitr') library(ggplot2) library(reshape) library(lme4) library(ez) library(emmeans) library(gridExtra) library(knitr) # filename <- '/Users/iris.mencke/Documents/Peak_MGAs_four_channels.csv' filename <- '/Volumes/Projects/2017-0121-CCMusic/analysis_MM/Peak_MGAs_four_channels.csv' d <- read.csv(filename, header = TRUE, sep = ",", dec = ".",quote = "'") d <- subset(d, select = -c(atonal_comps,tonal_comps) ) d_long <- melt(d, id.vars = 1:2, measure.vars = seq(3,34,2)) d_lat <- melt(d, id.vars = 1:2, measure.vars = seq(4,34,2)) d_long$lat <- d_lat$value xx <- t(data.frame(strsplit(as.character(d_long$variable), "_"))) d_long <- cbind(d_long, xx[,c(1,2,4)]) rownames(d_long) <- c() colnames(d_long)[c(4,6,7,8)] <- c('amp', 'hem', 'dev', 'cond') d_long$variable <- c() # delete column #d_long$amp[grepl('left', d_long$hem)] <- d_long$amp[grepl('left', d_long$hem)]*-1 d_long$cond <- factor(d_long$cond, levels = c('tonal', 'atonal')) d_long$dev <- factor(d_long$dev, levels = c('pitch', 'location', 'intensity', 'timbre')) ######################################################################## ## AMPLITUDES ####################################################################### # 1. fit mixed effect models uamp0 <- lmer(amp~1+(1|ID), data = d_long, REML = FALSE) # model without any effects; null-model uamp1 <- lmer(amp~cond+(1|ID), data = d_long, REML = FALSE) # model with effect of condition; + random effect; random intercept change the basline per subject to mimic variance in a population uamp2 <- lmer(amp~cond+dev + (1|ID), data = d_long, REML = FALSE) # random effects in brackets; across subjects (ID!); random intercept = 1; random slope = cond uamp3 <- lmer(amp~cond+dev+hem + (1|ID), data = d_long, REML = FALSE) # random effects in brackets; across subjects (ID!); random intercept = 1; random slope = cond uamp4 <- lmer(amp~cond+dev+hem + cond:dev + (1|ID), data = d_long, REML = FALSE) # random effects in brackets; across subjects (ID!); random intercept = 1; random slope = cond uamp5 <- lmer(amp~cond+dev+hem + cond:dev + cond:hem + (1|ID), data = d_long, REML = FALSE) # random effects in brackets; across subjects (ID!); random intercept = 1; random slope = cond uamp6 <- lmer(amp~cond+dev+hem + cond*dev + cond*hem + dev*hem + (1|ID), data = d_long, REML = FALSE) # random effects in brackets; across subjects (ID!); random intercept = 1; random slope = cond uamp7 <- lmer(amp~cond+dev+hem + cond:dev + cond:hem + dev:hem + cond:dev:hem + (1|ID), data = d_long, REML = FALSE) # random effects in brackets; across subjects (ID!); random intercept = 1; random slope = cond # Likelihood ratio test and store models: uamps.test <- anova(uamp0,uamp1,uamp2,uamp3,uamp4,uamp5,uamp6,uamp7); uamps.test uamps.models <- list(uamp0,uamp1,uamp2,uamp3,uamp4,uamp5,uamp6,uamp7); # 2. 
PAIRWISE CONTRASTS # TABLE 2: pairwise comparisons with three factors confint.uamps3 <- as.data.frame(confint(lsmeans(uamp7,pairwise~cond|dev|hem, adjust = "Bonferroni")$contrast)) pairwise.uamps3 <-cbind(as.data.frame(lsmeans(uamp7, pairwise~cond|dev|hem, adjust="bonferroni")$contrasts), confint.uamps3[c("lower.CL","upper.CL")]) pairwise.uamps3 <- pairwise.uamps3[c("contrast","dev","hem","estimate","lower.CL", "upper.CL","t.ratio","p.value")] pairwise.uamps3[,"Cohen's d"] <- pairwise.uamps3$estimate/sigma(uamp7) pairwise.uamps3[,4:8] <- round(pairwise.uamps3[,4:8],2) colnames(pairwise.uamps3) <- c("contrast","deviant","hemisphere","estimate", "CI 2.5%","CI 97.5%","t","p","d") kable(pairwise.uamps3) # make table write.table(pairwise.uamps3,file= "/Volumes/Projects/2017-0121-CCMusic/analysis_MM/pairwise_cond_uamp7.csv", sep=",",row.names = FALSE,quote=FALSE) ################################################################################### # TABLE 4a: Pairwise contrasts of mean amplitudes between features. confint.uamps.hem <- as.data.frame(confint(lsmeans(uamp7,pairwise~dev, adjust = "Bonferroni")$contrast)) pairwise.uamps.hem <-cbind(as.data.frame(lsmeans(uamp7, pairwise~dev, adjust="bonferroni")$contrasts), confint.uamps.hem[c("lower.CL","upper.CL")]) pairwise.uamps.hem <- pairwise.uamps.hem[c("contrast","estimate","lower.CL", "upper.CL","t.ratio","p.value")] pairwise.uamps.hem[,"d"] <- pairwise.uamps.hem$estimate/sigma(uamp7) pairwise.uamps.hem[,2:7] <- round(pairwise.uamps.hem[,2:7],2) colnames(pairwise.uamps.hem) <- c("contrast","estimate", "CI 2.5%","CI 97.5%","t","p","d") kable(pairwise.uamps.hem) # make table write.table(pairwise.uamps.hem,file="/Volumes/Projects/2017-0121-CCMusic/analysis_MM/feature_hem_interaction_amp.csv", sep=",",row.names = FALSE,quote=FALSE) # Export to a table # Table 5a: pairwise contrasts of mean amplitudes for hemisphere for features / right-left interaction confint.uamps.hem <- as.data.frame(confint(lsmeans(uamp7,pairwise~hem|dev, adjust = "Bonferroni")$contrast)) pairwise.uamps.hem <-cbind(as.data.frame(lsmeans(uamp7, pairwise~hem|dev, adjust="bonferroni")$contrasts), confint.uamps.hem[c("lower.CL","upper.CL")]) pairwise.uamps.hem <- pairwise.uamps.hem[c("contrast","dev","estimate","lower.CL", "upper.CL","t.ratio","p.value")] pairwise.uamps.hem[,"d"] <- pairwise.uamps.hem$estimate/sigma(uamp7) pairwise.uamps.hem[,3:8] <- round(pairwise.uamps.hem[,3:8],2) colnames(pairwise.uamps.hem) <- c("contrast","feature","estimate", "CI 2.5%","CI 97.5%","t","p","d") kable(pairwise.uamps.hem) # TABLE write.table(pairwise.uamps.hem,file="/Volumes/Projects/2017-0121-CCMusic/analysis_MM/feature_hem_interaction_amp.csv", sep=",",row.names = FALSE,quote=FALSE) # Export to a table ######################################################### # AMPLITUDE FIGURES for uncertainty ######################################################### uamps <- ggplot(d_long,aes(x=cond, y=amp)) + geom_hline(yintercept = 0, size = 0.1) + geom_point(alpha = 0.6,color = 'black',size = 0.4) + geom_line(aes(group = ID), alpha = 0.5, size = 0.1) + geom_boxplot(aes(fill = cond),alpha = 0.7,fatten = 0.9, lwd = 0.1, color = 'black', width = 0.15, outlier.size = 0.4) + geom_violin(aes(color = cond),color = 'black', alpha = 0.2,trim = FALSE, size = 0.15) + scale_fill_manual(values = c('red','blue')) + facet_grid(dev~hem) + xlab('uncertainty') + ylab('mean amplitude (fT)') + theme_bw() + theme(legend.position = "none"); uamps #ylab('mean AMPLITUDE (\U1D707V)') + 
############################################################################################# ## LATENCIES ############################################################################################# # 1. fit mixed effect models ulat0 <- lmer(lat~1+(1|ID), data = d_long, REML = FALSE) # control=lmerControl(optimizer="bobyqa")) # model without any effects; null-model ulat1 <- lmer(lat~cond+(1|ID), data = d_long, REML = FALSE) # model with effect of condition; + random effect; random intercept change the basline per subject to mimic variance in a population ulat2 <- lmer(lat~cond+dev + (1|ID), data = d_long, REML = FALSE) # random effects in brackets; across subjects (ID!); random intercept = 1; random slope = cond ulat3 <- lmer(lat~cond+dev+hem + (1|ID), data = d_long, REML = FALSE) # random effects in brackets; across subjects (ID!); random intercept = 1; random slope = cond ulat4 <- lmer(lat~cond+dev+hem + cond:dev + (1|ID), data = d_long, REML = FALSE) # random effects in brackets; across subjects (ID!); random intercept = 1; random slope = cond ulat5 <- lmer(lat~cond+dev+hem + cond:dev + cond:hem + (1|ID), data = d_long, REML = FALSE) # random effects in brackets; across subjects (ID!); random intercept = 1; random slope = cond ulat6 <- lmer(lat~cond+dev+hem + cond*dev + cond*hem + dev*hem + (1|ID), data = d_long, REML = FALSE) # random effects in brackets; across subjects (ID!); random intercept = 1; random slope = cond ulat7 <- lmer(lat~cond+dev+hem + cond:dev + cond:hem + dev:hem + cond:dev:hem + (1|ID), data = d_long, REML = FALSE) # random effects in brackets; across subjects (ID!); random intercept = 1; random slope = cond # Likelihood ratio test and store models: ulats.test <- anova(ulat0,ulat1,ulat2,ulat3,ulat4,ulat5,ulat6,ulat7); ulats.test ulats.models <- list(ulat0,ulat1,ulat2,ulat3,ulat4,ulat5,ulat6,ulat7); ################################################################################# # PAIRWISE CONTRASTS # TABLE 3: pairwise comparisons with three factors confint.ulats3 <- as.data.frame(confint(lsmeans(ulat7,pairwise~cond|dev|hem, adjust = "Bonferroni")$contrast)) pairwise.ulats3 <-cbind(as.data.frame(lsmeans(ulat7, pairwise~cond|dev|hem, adjust="bonferroni")$contrasts), confint.ulats3[c("lower.CL","upper.CL")]) pairwise.ulats3 <- pairwise.ulats3[c("contrast","dev","hem","estimate","lower.CL", "upper.CL","t.ratio","p.value")] pairwise.ulats3[,"Cohen's d"] <- pairwise.ulats3$estimate/sigma(ulat7) pairwise.ulats3[,4:8] <- round(pairwise.ulats3[,4:8],2) colnames(pairwise.ulats3) <- c("contrast","deviant","hemisphere","estimate", "CI 2.5%","CI 97.5%","t","p","Cohen's d") kable(pairwise.ulats3) # make table write.table(pairwise.ulats3,file= "/Volumes/Projects/2017-0121-CCMusic/analysis_MM/pairwise_cond_ulats7.csv", sep=",",row.names = FALSE,quote=FALSE) ################################################################################# # TABLE 4b: confint.ulats2 <- as.data.frame(confint(lsmeans(ulat7,pairwise~dev, adjust = "Bonferroni")$contrast)) pairwise.ulats2 <-cbind(as.data.frame(lsmeans(ulat7, pairwise~dev, adjust="bonferroni")$contrasts), confint.ulats2[c("lower.CL","upper.CL")]) pairwise.ulats2 <- pairwise.ulats2[c("contrast","estimate","lower.CL", "upper.CL","t.ratio","p.value")] pairwise.ulats2[,"d"] <- pairwise.ulats2$estimate/sigma(ulat7) pairwise.ulats2[,2:7] <- round(pairwise.ulats2[,2:7],2) colnames(pairwise.ulats2) <- c("contrast","deviant","estimate", "CI 2.5%","CI 97.5%","t","p","Cohen's d") kable(pairwise.ulats2) ########## # TABLE 5b confint.ulats.hem <- 
as.data.frame(confint(lsmeans(ulat7,pairwise~hem|dev, adjust = "Bonferroni")$contrast)) pairwise.ulats.hem <-cbind(as.data.frame(lsmeans(ulat7, pairwise~hem|dev, adjust="bonferroni")$contrasts), confint.ulats.hem[c("lower.CL","upper.CL")]) pairwise.ulats.hem <- pairwise.ulats.hem[c("contrast","dev","estimate","lower.CL", "upper.CL","t.ratio","p.value")] pairwise.ulats.hem[,"d"] <- pairwise.ulats.hem$estimate/sigma(uamp7) pairwise.ulats.hem[,3:8] <- round(pairwise.ulats.hem[,3:8],2) colnames(pairwise.ulats.hem) <- c("contrast","feature","estimate", "CI 2.5%","CI 97.5%","t","p","d") kable(pairwise.ulats.hem) ############ # FIGURE for uncertainty - latency analyses ulats <- ggplot(d_long,aes(x=cond, y=lat)) + geom_hline(yintercept = 0, size = 0.1) + geom_point(alpha = 0.6,color = 'black',size = 0.4) + geom_line(aes(group = ID), alpha = 0.5, size = 0.1) + geom_boxplot(aes(fill = cond),alpha = 0.7,fatten = 0.9, lwd = 0.1, color = 'black', width = 0.15, outlier.size = 0.4) + geom_violin(aes(color = cond),color = 'black', alpha = 0.2,trim = FALSE, size = 0.15) + scale_fill_manual(values = c('red','blue')) + facet_grid(dev~hem) + xlab('uncertainty') + ylab('peak latency (ms)') + theme_bw() + theme(legend.position = "none"); ulats ############################################### # Make joint uncertainty reports: ############################################### # mixed-effect models # amplitudes #uncertainty.report <- data.frame('model' = rownames(uamps.test)) #uncertainty.report[2:nrow(uncertainty.report),'null'] <- uncertainty.report[1:nrow(uncertainty.report)-1,'model'] #uncertainty.report <- cbind(uncertainty.report,round(uamps.test[,c('AIC','Chisq','Pr(>Chisq)')],2),round(ulats.test[,c('AIC','Chisq','Pr(>Chisq)')],2)) #colnames(uncertainty.report) <- c('model','null','AIC','X2','p','AIC','X2','p') #write.table(uncertainty.report,file="/Volumes/Projects/2017-0121-CCMusic/analysis_MM/uncertainty_report.csv", # sep= ",",row.names = FALSE,quote=FALSE) # pairwise contrasts: #uncertainty.pw <- cbind(pairwise.uamps.hem,pairwise.ulats.hem[,c(2:ncol(pairwise.ulats.hem))]) #write.table(uncertainty.pw,file="/Volumes/Projects/2017-0121-CCMusic/analysis_MM/feat_hem_interaction_all.csv", # sep=",",row.names = FALSE,quote=FALSE) #write.table(uncertainty.pw,file="/Users/iris.mencke/Documents/feat_hem_interaction_all.csv", # sep=",",row.names = FALSE,quote=FALSE) # JOINT FIGURE uplots <- arrangeGrob(uamps,ulats,ncol=2); plot(uplots) ggsave("/Volumes/Projects/2017-0121-CCMusic/analysis_MM/uncertainty.pdf", plot=uplots,width = 180, height = 190, units = 'mm', dpi = 300) ggsave("/Volumes/Projects/2017-0121-CCMusic/analysis_MM/uncertainty.png", plot=uplots,width = 180, height = 190, units = 'mm', dpi = 300)
module Skeletons.Pipeline import System import System.Concurrency import System.Concurrency.BufferedChannel %default total ||| Representation of the data in a pipeline. Data flows through the stage(s) ||| using NEXT, until we are DONE. public export data PipelineData : Type -> Type where DONE : PipelineData a NEXT : a -> PipelineData a ||| A stage in a pipeline, processing a type of @PipelineData@. data PipelineStage : Type -> Type -> Type where MkDStep : (PipelineData a -> PipelineData b) -> PipelineStage a b ||| A Pipeline from a -> b is a skeleton typically consisting of multiple ||| independent stages, with the final stage producing something of type `b`, ||| and the other stages doing intermediary work towards this. Since the stages ||| are independent, each stage can be run in parallel. export data Pipeline : Type -> Type -> Type where -- A Data Pipeline consists of either its Endpoint, where the final processing -- step happens; or a Stage where some processing happens, followed by a -- continuation Pipeline where the rest of the processing happens. DEndpoint : (lastly : PipelineStage a b) -> Pipeline a b DStage : (thisStage : PipelineStage a b) -> (continuation : Pipeline b c) -> Pipeline a c ||| Declare a new pipeline, with initStage as its only stage. ||| ||| @initStage A stage to use as the basis for a new pipeline. export initPipeline : (initStage : PipelineData a -> PipelineData b) -> Pipeline a b initPipeline initStage = DEndpoint $ MkDStep initStage ||| Add a stage to the end of an existing @Pipeline@, changing the output-type ||| of the Pipeline. ||| ||| @pl The Pipeline to add the stage to. ||| @newStage The stage to add. export addStage : (pl : Pipeline a b) -> (newStage : PipelineData b -> PipelineData c) -> Pipeline a c addStage (DEndpoint lastly) newStage = DStage lastly (DEndpoint $ MkDStep newStage) -- newStage becomes new `lastly` addStage (DStage thisStage continuation) newStage = DStage thisStage $ addStage continuation newStage infixl 8 |> ||| Shorthand for `addStage`. export (|>) : {b : _} -> Pipeline a b -> (PipelineData b -> PipelineData c) -> Pipeline a c (|>) = addStage ||| Given some input, process it and keep receiving input. If the input was ||| initially `DONE` / When the loop eventually gets a `DONE`, no processing is ||| computed and the `DONE` is simply passed along on the `outChan` ||| BufferedChannel. loop : (stage : (PipelineStage a b)) -> (plData : PipelineData a) -> (inRef : IORef (BufferedChannel (PipelineData a) )) -> (outRef : IORef (BufferedChannel (PipelineData b) )) -> IO () loop _ DONE _ outRef = do (MkDPair outChan send) <- becomeSender outRef send Signal outChan DONE loop stage@(MkDStep f) next inRef outRef = do (MkDPair outChan send) <- becomeSender outRef (MkDPair inChan recv) <- becomeReceiver Blocking inRef send Signal outChan (f next) next' <- recv inChan let inRef' = assert_smaller inRef inRef -- ^ Idris cannot know recv reduces the size of a shared channel loop stage next' inRef' outRef ||| Given a @Pipeline@ and a @BufferedChannel@ which supplies input for the ||| first stage, run each stage of the Pipeline in parallel, linking them up ||| using BufferedChannels. Returns the @ThreadID@ of the last stage in the ||| Pipeline, and an @IORef@ to a BufferedChannel over which to receive the ||| final results. ||| ||| @pl The Pipeline to run. ||| @inRef An IORef to a BufferedChannel containing the input for the initial ||| stage. 
export runPipeline : (pl : Pipeline x y) -> (inRef : IORef (BufferedChannel (PipelineData x))) -> IO (ThreadID, IORef (BufferedChannel (PipelineData y))) runPipeline (DEndpoint lastly) inRef = do outRef <- makeBufferedChannel (MkDPair inChan recv) <- becomeReceiver Blocking inRef input <- recv inChan threadID <- fork $ loop lastly input inRef outRef pure (threadID, outRef) runPipeline (DStage thisStage continuation) inRef = do linkRef <- makeBufferedChannel (MkDPair inChan recv) <- becomeReceiver Blocking inRef input <- recv inChan doWeCare <- fork $ loop thisStage input inRef linkRef -- ^ I don't think we do... runPipeline continuation linkRef
= Trinsey v. Pennsylvania =
The Gambia also has an under-19 team that was to play in the African Women's U-19 Championship in 2002. The Gambia's first match was against Morocco, but the team withdrew from the competition.