\externaldocument{chapter3}
\chapter{York Extensible Testing Infrastructure}
\label{chap:yeti_3}
%\section{Random Testing}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %
% %
% YETI Section Starts here %
% %
% %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%In this chapter we describe York Extensible Testing Infrastructure (YETI). YETI is used as a platform for implementation of all the strategies developed in the current research study.
\section{Overview}
The York Extensible Testing Infrastructure (YETI) is an automated random testing tool developed by Manuel Oriol~\cite{Oriol2011yeti}. It is capable of testing programs written in Java, JML and .NET languages~\cite{oriol2010testing}. YETI takes the program byte-code as input and executes it with randomly generated, syntactically correct inputs in order to find failures. It achieves a high level of performance, executing up to $10^6$ calls per minute on simple and efficient methods (e.g.\ methods of \verb+String+) in Java code. One of its prominent features is the Graphical User Interface (GUI), which makes YETI user-friendly and provides the option to change testing parameters in real time. It can also distribute large testing tasks in the cloud for parallel execution~\cite{oriol2010yeti}. The main motivation for developing YETI is to provide testers and developers with a workbench for testing research. By design, YETI is easily extensible to facilitate the inclusion of new languages and testing strategies. Several researchers \cite{oriol2010testing, oriol2010yeti, Dimitraiadis2009, Khawaja2010} have contributed various features and strategies to the YETI project. The current study extends YETI with three additional test strategies, namely DSSR \cite{ahmad2014dirt}, ADFD \cite{ahmad2013adfd} and ADFD$^+$ \cite{ahmad2014adfd2}, and with a graphical front-end that enables its execution from any GUI which supports Java. The latest version of YETI can be downloaded freely from \url{www.yetitest.org}. Figure \ref{fig:yetiOverview} briefly presents the working process of YETI.
\begin{figure}[h]
\centering
\includegraphics[width=14.5cm, height=3.5cm]{chapter3/workingProcess.png}
\caption{Working process of YETI}
\label{fig:yetiOverview}
\end{figure}
\section{Design}
YETI is a lightweight platform with around 10,000 lines of code. It has been designed for extensibility and future growth. YETI enforces strong decoupling between test strategies and the actual language constructs, so that a new language binding can be added without modifying the existing test strategies. On the basis of functionality, YETI can be divided into three main parts: the core infrastructure, the strategies, and the language-specific bindings. Each part is briefly described below.
\subsection{Core Infrastructure of YETI}
The core infrastructure of YETI provides extensibility through specialization. The abstract classes included in this part can be extended to create new strategies and language bindings. It is responsible for test data generation, test process management and test report production. The core infrastructure is split into four packages, namely~\verb+yeti+, \verb+yeti.environments+, \verb+yeti.monitoring+ and~\verb+yeti.strategies+. The package~\verb+yeti+ uses
classes from the \verb+yeti.monitoring+ and \verb+yeti.strategies+ packages and calls classes in the \verb+yeti.environments+ package, as shown in Figure \ref{fig:yetiCore}.
\bigskip
\begin{figure}[h]
\centering
\includegraphics[width=14cm, height=9cm]{chapter3/yetiStructure.png}
\smallskip
\caption{Main packages of YETI with dependencies }
\label{fig:yetiCore}
\end{figure}
\bigskip
\bigskip
\bigskip
The most essential classes included in the YETI core infrastructure are:
\begin{enumerate}
\item {\textbf{Yeti}} is the entry point to the tool YETI and contains the main method. It parses the arguments, sets up the environment, initializes the testing and delivers the reports of the test results.
\item {\textbf{YetiLog}} prints debugging and testing logs.
\item {\textbf{YetiLogProcessor}} is an interface for processing testing logs.
\item {\textbf{YetiEngine}} binds~\verb+YetiStrategy+ and~\verb+YetiTestManager+ together and carries out the actual testing process.
\item {\textbf{YetiTestManager}} makes the actual calls based on the~\verb+YetiEngine+ configuration, and activates the~\verb+YetiStrategy+ to generate test data and select the routines.
\item {\textbf{YetiProgrammingLanguageProperties}} is a place-holder for all language-related instances.
\item {\textbf{YetiInitializer}} is an abstract class for test initialization.
\end{enumerate}
\subsection{Strategy}
A strategy defines a specific way of generating test inputs. The strategy part consists of an abstract base class and seven concrete strategies, described below:
\begin{enumerate}
\item {\textbf{YetiStrategy}} is an abstract class which provides an interface for every strategy in YETI.
\item {\textbf{YetiRandomStrategy}} implements the random strategy and generates random values for testing. In this strategy the user can change the probability of null values and the percentage of newly created objects for the test session.
\item {\textbf{YetiRandomPlusStrategy}} extends the random strategy by adding a list of predefined interesting values to the randomly generated inputs. It allows the user to adjust the percentage of interesting values used in the test session.
\item {\textbf{YetiDSSRStrategy}} extends YetiRandomPlusStrategy by adding values surrounding the failure finding value. The strategy is described in detail in Chapter~\ref{chap:DSSR}.
\item {\textbf{YetiADFDStrategy}} extends YetiRandomPlusStrategy by adding the feature of graphical representation of failures and their domains within the specified lower and upper bounds. The strategy is described in detail in Chapter~\ref{chap:ADFD}.
\item {\textbf{YetiADFDPlusStrategy}} extends the ADFD strategy by adding the feature of graphical representation of failures and failure domains, in simplified form, within a given radius. The strategy is described in detail in Chapter~\ref{chap:ADFD+}.
\item {\textbf{YetiRandomDecreasingStrategy}} extends the YetiRandomPlusStrategy in which all three probability values (null values, new objects, interesting values) are 100\% at the beginning and decrease to 0 at the end of the test.
\item {\textbf{YetiRandomPeriodicStrategy}} extends the YetiRandomPlusStrategy in which all three probability values (null values, new objects, interesting values) decrease and increase at random within the given range.
\end{enumerate}
\subsection{Language-specific Binding}
The language-specific binding models the programming language under test. It can be extended to provide support for a new language in YETI. The language-specific binding includes the following classes:
\begin{enumerate}
\item {\textbf{YetiVariable}} is a sub-class of~\verb+YetiCard+ representing a variable in YETI.
\item {\textbf{YetiType}} represents the type of data in YETI, e.g. integer, float, double, long, boolean and char.
\item {\textbf{YetiRoutine}} represents a constructor, method or function in YETI.
\item {\textbf{YetiModule}} represents a module in YETI and stores one or more routines of the module.
\item {\textbf{YetiName}} represents a unique name assigned to each instance of~\verb+YetiRoutine+.
\item {\textbf{YetiCard}} represents a wild-card or a variable in YETI.
\item {\textbf{YetiIdentifier}} represents an identifier for an instance of a~\verb+YetiCard+.
\end{enumerate}
% if java binding example is required or instead of adding new steps if you want to show only java binding then for material check the msc thesis page 40 of test c code with Yeti.
\subsection{Construction of Test Cases} \label{sec:constructionOfTestCases}
YETI constructs test cases by creating objects of the classes under test and randomly calling their methods with inputs drawn from the parameter space. Input values are split into two types, i.e.\ primitive data types and user-defined classes. For primitive data types as method parameters, the random strategy calls the~\verb+Math.random()+ method to generate numeric values, which are converted to the required type using a Java cast operation. For user-defined classes as method parameters, YETI calls a constructor or method to generate an object of the class at run time. When such a constructor itself requires another object, YETI recursively calls the constructor or method of that object. This process continues until an object with an empty constructor, a constructor with only primitive types, or the set level of recursion is reached.
\subsection{Call sequence of YETI}
% check testing jml code with yeti for more details.
The sequence diagram given in Figure~\ref{fig:yetiSequenceDiagram} depicts the interacting processes and their order when a Java program in byte-code is tested by YETI for~\verb+n+ iterations using the default random strategy. The steps involved are as follows:
\bigskip
\begin{figure}[H]
\centering
\includegraphics[width=15.5cm, height=11cm]{chapter3/sequenceDiagram.png}
\bigskip
\caption{Call sequence of YETI with Java binding}
\label{fig:yetiSequenceDiagram}
\end{figure}
\bigskip
\begin{enumerate}
\item When the test starts, the test engine (\verb+YetiEngine+) instructs the test manager (\verb+YetiJavaTestManager+) to initiate the testing of given class (\verb+YetiJavaModule+) for~\verb+n+ number of times (\verb+testModuleForNumberofTimes+) using the test strategy (\verb+YetiRandomStrategy+).
\item The test manager creates (\verb+makeNextCall+) a thread (\verb+CallerThread+) which handles the testing of the given program. Threading is introduced for two reasons: (1) to enable the test manager to block or terminate a thread that is stuck in an infinite loop inside a method, or is taking too long because of recursive calls; and (2) to increase the speed of the testing process when multiple classes are under test.
\item The thread, on instantiation, requests the test strategy (\verb+YetiRandomStrategy+) to fetch a routine (\verb+constructor/method+) from the given SUT (\verb+getNextRoutine+).
\item The random test strategy selects (\verb+getRoutineAtRandom+) a routine from the class (\verb+YetiJavaModule+).
\item The thread requests the test strategy to generate the arguments (\verb+cards+) for the selected routine (\verb+getAllCards+).
\item The test strategy (\verb+YetiRandomStrategy+) generates the required arguments and sends them to the requesting thread.
\item The thread tests the selected routine (\verb+YetiJavaMethod+) of the module with the generated arguments (\verb+makeCall+).
\item The routine (\verb+YetiJavaRoutine+) is mapped to an instance of the class method (\verb+YetiJavaMethod+) with the help of class inheritance and dynamic binding. \verb+YetiJavaMethod+ then executes the method under test with the supplied arguments using the Java Reflection API (\verb+makeEffectiveCall+).
\item The output obtained from (\verb+makeEffectiveCall+) is evaluated against the Java oracle that resides in the (\verb+makeCall+) method of (\verb+YetiJavaMethod+).
\end{enumerate}
\bigskip
\bigskip
\subsection{Command-line Options}
YETI was originally developed as a command-line program, which can be initiated from the Command Line Interface (CLI) as shown in Figure~\ref{fig:yeticommand}. During this study a \verb+yeti.jar+ package was created which allows the main parameters to be set and YETI to be launched from the GUI by double-clicking its icon, as shown in Figure~\ref{fig:yetiLauncher}. YETI provides several command-line options which a tester can enable or disable according to the test requirements. These options are case-insensitive and can be given to YETI in any order from the command-line interface. As an example, a tester can use the command-line option \verb+-nologs+ to bypass real-time logging and save processing power by reducing overheads. Table~\ref{table:cliOptions} lists some of the common command-line options available in YETI.
\begin{center}
\begin{table}[H]
%\scriptsize
\caption{YETI command line options} % title of Table
\bigskip
%\centering
\hspace{-0.8cm}
\noindent\makebox[\textwidth]{
{\renewcommand{\arraystretch}{1.6}
\begin{tabular}{|l|l|} % centered columns (4 columns)
\hline
\textbf{Options} &\textbf{Purpose} \\ \hline
-java, -Java &To test Java programs \\ \hline
-jml , -JML &To test JML programs \\ \hline
-dotnet, -DOTNET &To test .NET programs \\ \hline
-ea &To check code assertions \\ \hline
-nTests &To specify number of tests \\ \hline
-time &To specify test time \\ \hline
-initClass &To use user-defined class for initialization \\ \hline
-msCalltimeout &To set a time out for a method call \\ \hline
-testModules &To specify one or more modules to test \\ \hline
-rawlogs &To print real-time test logs \\ \hline
-nologs &To omit real time logs \\ \hline
-yetiPath &To specify the path to the test modules \\ \hline
-gui &To show the test session in the GUI \\ \hline
-help, -h &To print the help about using YETI \\ \hline
-DSSR &To specify Dirt Spot Sweeping Random strategy \\ \hline
-ADFD &To specify ADFD strategy \\ \hline
-ADFDPlus &To specify ADFD$^+$ strategy \\ \hline
-noInstanceCap &To remove the cap on no. of specific type instances \\ \hline
-branchCoverage &To measure the branch coverage \\ \hline
-tracesOutputFile &To specify the file to store output traces \\ \hline
-tracesInputFile &To specify the file for reading input traces \\ \hline
-random &To specify the Random test strategy \\ \hline
-printNumberOfCallsPerMethod &To print the number of calls per method \\ \hline
-randomPlus &To specify the Random plus test strategy \\ \hline
-probabilityToUseNullValue &To specify the probability of inserting null values \\ \hline
-randomPlusPeriodic &To specify the Random plus periodic test strategy \\ \hline
-newInstanceInjectionProability &To specify the probability of inserting new objects \\ \hline
\hline %inserts single line
\end{tabular}
}}
\bigskip
\label{table:cliOptions} % is used to refer this table in the text
\end{table}
\end{center}
\subsection{Execution}
YETI, developed in Java, is highly portable and can easily run on any operating system with a Java Virtual Machine (JVM) installed. It can be executed from both the CLI and the GUI. To execute YETI, it is necessary to include the project and the relevant \verb+jar+ library files, particularly \verb+javassist.jar+, in the \verb+CLASSPATH+. The typical command to execute YETI from the CLI is given in Figure~\ref{fig:yeticommand}.
\smallskip
\begin{figure}[H]
\centering
\frame{\includegraphics[width= 15.3cm, height = 1.5cm]{chapter3/commandLineYeti.png}}
\smallskip
\caption{Command to launch YETI from CLI}
\label{fig:yeticommand}
\end{figure}
\smallskip
In this command YETI tests the \verb+java.lang.String+ and \verb+yeti.test.YetiTest+ modules for \verb+10+ minutes using the default random strategy. Other CLI options are listed in Table~\ref{table:cliOptions}. To execute YETI from the GUI, the \verb+YetiLauncher+ presented in Figure~\ref{fig:yetiLauncher} was created for use in the present study.
\bigskip
\begin{figure}[H]
\centering
\frame{\includegraphics[width= 8cm, height = 9cm]{chapter3/yetiCommandGUI.pdf}}
\smallskip
\caption{GUI launcher of YETI}
\label{fig:yetiLauncher}
\end{figure}
\subsection{Test Oracle}
Oracles in YETI are language-dependent. When program specifications are available, YETI checks for inconsistencies between the code and the specifications. In the absence of specifications, it checks for assertion violations, which are treated as failures.
If neither specifications nor assertions are present, YETI performs robustness testing, in which any undeclared runtime exception is considered a failure.
%If code contracts are available, YETI uses them as oracle, however, in their absence YETI uses undeclared runtime exceptions of the underlying language as oracle. The test cases revealing errors are reproduced at the end of each test session for unit and regression testing.
%YETI deals with the oracle problem in two ways. If available, it uses code-contracts as oracles, however in the absence of contracts it uses runtime exceptions as errors which is also known as robustness testing.
\subsection{Report}
YETI produces a complete test report at the end of each test session. The report contains all the successful calls with the names of the routines and the unique identifiers for the parameters in each execution. These identifiers are recorded together with their assigned values to help in debugging the identified faults, as shown in Figure~\ref{successReport}.
\bigskip
\begin{figure}[H]
\centering
\frame{\includegraphics[width= 15cm, height = 3.5cm]{chapter3/yetiReport1.png}}
\smallskip
\caption{Successful method calls of YETI}
\label{successReport}
\end{figure}
YETI separates the detected bugs from successful executions to simplify the test report and help developers track the origin of a problem. When a bug is identified during testing, YETI saves the details and presents them in the bug report, as shown in Figure \ref{bugReport}. The information includes the identifiers of all parameters passed in the method call, along with the time at which the exception occurred.
\bigskip
\begin{figure}[H]
\centering
\frame{\includegraphics[width= 15cm, height = 3.5cm]{chapter3/yetiReport2.png}}
\smallskip
\caption{Sample of YETI bug report}
\label{bugReport}
\end{figure}
\subsection{Graphical User Interface}
YETI provides a GUI that allows testers to monitor the test session and modify its characteristics in real time during test execution. It is useful to be able to modify the test parameters at run time and observe the test behaviour in response. Figure \ref{fig:yetiGUI_3} presents the YETI GUI, comprising thirteen labelled components.
%\begin{figure}[h]
% \centering
% %\frame{\includegraphics[width= 15cm, height = 9.8cm]{chapter3/yetiGUI.png}}
% \includegraphics[width= 15cm, height = 9.8cm]{chapter3/yetiGUI.png}
% \caption{GUI of YETI}
% \label{fig:yetiGUI1}
%\end{figure}
\begin{sidewaysfigure}[htp]
\centering
\centerline{\includegraphics[width=23cm,height=16cm]{chapter3/yetiGUI.png}}
%\includegraphics[width=14cm,height=20cm]{myfigures/plan/project_table.pdf}
\caption{GUI front-end of YETI}
\label{fig:yetiGUI_3}
\end{sidewaysfigure}
\begin{enumerate}
\item \textbf{Menu bar:} contains two menu items i.e. Yeti and File.
\begin{enumerate}
\item \textbf{Yeti menu:} provides details of YETI contributors and option to quit the GUI.
\item \textbf{File menu:} provides an option to rerun the previously executed scripts.
\end{enumerate}
\item \textbf{Slider of \% null values:} displays the set probability of choosing a null value expressed as percentage for each variable. The default value of the probability is 10\%.
\item \textbf{Slider of \% new variables:} displays the set probability of creating new instances at each call. The default value of the probability is 10\%.
\item \textbf{Text-box of Max variables per type:} displays the maximum number of variables created for a given type. The default value is 1000.
\item \textbf{Progress bar of testing session:} displays the test progress as a percentage.
\item \textbf{Slider of strategy:} displays the selected test strategy for the session. Each strategy has its own controls for changing its parameters.
\item \textbf{Module Name:} shows the list of modules under test.
\item \textbf{Graph window 1:} displays the total number of unique failures over time in the module under test.
\item \textbf{Graph window 2:} displays the total number of calls over time to the module under test.
\item \textbf{Routine's progress:} displays the test progress of each routine in the module using four colours. Green and red, which appear most often, indicate successful and unsuccessful calls respectively; black and yellow, which appear occasionally, indicate no calls and incomplete (undecidable) calls respectively.
\item \textbf{Graph window 3:} displays the total number of failures over time in the module under test.
\item \textbf{Graph window 4:} displays the total number of variables over time generated by YETI in the test session.
%\item displays all the routines in the module under test with a rectangle.
%Each rectangle presents the results of calls of the routine. The rectangle can have in 4 colors. Black indicates no any calls of this routine. Green indicates that has successful calls of this routine. Red indicates that this routine is called unsuccessfully which means that the call to this routine results in an exception. Yellow indicates undecidable calls, for example if a call cannot finish in predefined time and Yeti stops this call, in this case yeti cannot decide this call is successful or unsuccessful. The text next the routines name show how many calls of this routine and text displays percentage of passed, failed and undescided when the cursor over the rectangle.
%\item displays a table which contains the unique faults are detected by Yeti. It records the detail of exceptions.
%\item Window No 5 displays colored rectangles: one for each constructor and method under test. Each rectangle represents the calls to a constructor or a method.
%\item The colors in a rectangle have the following meaning:
%\item Green indicates successful calls (✓). A successful call is one that does not raise an exception or if it does, the method or the constructor declares to throw it.
%\item Red indicates failed calls (X). A failed call results from raised RuntimeException or one of its subclasses.
%\item Yellow indicates “undecidable” calls (?). A call is “undecidable” if for some reason it takes too long to complete and needs to be stopped, or if a YetiSecurityException (custom exception in YETI) is thrown.
\item \textbf{Report section:} displays the unique failures detected in the module under test, together with their date and time, location and type.
\end{enumerate}
\section{Summary}
This chapter described in detail the automated random testing tool YETI, which is used as the implementation platform in this study. The review covered an overview of YETI and aspects such as its design, core infrastructure, strategies, language-specific bindings, construction of test cases, command-line options, execution, test oracle, report generation and graphical user interface.
%The main features of all the tools are noted in the following table.
%\begin{figure}[h]
% \centering
% \includegraphics[scale=0.6]{chapter2/tools.jpg}
% \caption{Summary of automated testing tools}
%\end{figure}
%\section{Conclusion}
% ------------------------------------------------------------------------
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../thesis"
%%% End:
/*
ODE: a program to get optime Runge-Kutta and multi-steps methods.
Copyright 2011-2019, Javier Burguete Tolosa.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY Javier Burguete Tolosa ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
SHALL Javier Burguete Tolosa OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/**
* \file rk_5_3.c
* \brief Source file to optimize Runge-Kutta 5 steps 3rd order methods.
* \author Javier Burguete Tolosa.
* \copyright Copyright 2011-2019.
*/
#define _GNU_SOURCE
#include <string.h>
#include <math.h>
#include <libxml/parser.h>
#include <glib.h>
#include <libintl.h>
#include <gsl/gsl_rng.h>
#include "config.h"
#include "utils.h"
#include "optimize.h"
#include "rk.h"
#include "rk_5_3.h"
#define DEBUG_RK_5_3 0 ///< macro to debug.
/**
* Function to obtain the coefficients of a 5 steps 3rd order Runge-Kutta
* method.
*/
int
rk_tb_5_3 (Optimize * optimize) ///< Optimize struct.
{
long double A[3], B[3], C[3], D[3];
long double *tb, *r;
#if DEBUG_RK_5_3
fprintf (stderr, "rk_tb_5_3: start\n");
#endif
tb = optimize->coefficient;
r = optimize->random_data;
t5 (tb) = 1.L;
t1 (tb) = r[0];
t2 (tb) = r[1];
b21 (tb) = r[2];
t3 (tb) = r[3];
b31 (tb) = r[4];
b32 (tb) = r[5];
t4 (tb) = r[6];
b41 (tb) = r[7];
b42 (tb) = r[8];
b43 (tb) = r[9];
b54 (tb) = r[10];
A[0] = t1 (tb);
B[0] = t2 (tb);
C[0] = t3 (tb);
D[0] = 0.5L - b54 (tb) * t4 (tb);
A[1] = A[0] * t1 (tb);
B[1] = B[0] * t2 (tb);
C[1] = C[0] * t3 (tb);
D[1] = 1.L / 3.L - b54 (tb) * sqr (t4 (tb));
A[2] = 0.L;
B[2] = b21 (tb) * t1 (tb);
C[2] = b31 (tb) * t1 (tb) + b32 (tb) * t2 (tb);
D[2] = 1.L / 6.L - b54 (tb) * (b41 (tb) * t1 (tb) + b42 (tb) * t2 (tb)
+ b43 (tb) * t3 (tb));
solve_3 (A, B, C, D);
if (isnan (D[0]) || isnan (D[1]) || isnan (D[2]))
return 0;
b53 (tb) = D[2];
b52 (tb) = D[1];
b51 (tb) = D[0];
rk_b_5 (tb);
#if DEBUG_RK_5_3
fprintf (stderr, "rk_tb_5_3: end\n");
#endif
return 1;
}
/**
* Function to obtain the coefficients of a 5 steps 3rd order, 4th order in
* equations depending only in time, Runge-Kutta method.
*/
int
rk_tb_5_3t (Optimize * optimize) ///< Optimize struct.
{
long double A[4], B[4], C[4], D[4], E[4];
long double *tb, *r;
#if DEBUG_RK_5_3
fprintf (stderr, "rk_tb_5_3t: start\n");
#endif
tb = optimize->coefficient;
r = optimize->random_data;
t5 (tb) = 1.L;
t1 (tb) = r[0];
t2 (tb) = r[1];
b21 (tb) = r[2];
t3 (tb) = r[3];
b31 (tb) = r[4];
b32 (tb) = r[5];
t4 (tb) = r[6];
b41 (tb) = r[7];
b42 (tb) = r[8];
b43 (tb) = r[9];
A[0] = t1 (tb);
B[0] = t2 (tb);
C[0] = t3 (tb);
D[0] = t4 (tb);
E[0] = 0.5L;
A[1] = A[0] * t1 (tb);
B[1] = B[0] * t2 (tb);
C[1] = C[0] * t3 (tb);
D[1] = D[0] * t4 (tb);
E[1] = 1.L / 3.L;
A[2] = A[1] * t1 (tb);
B[2] = B[1] * t2 (tb);
C[2] = C[1] * t3 (tb);
D[2] = D[1] * t4 (tb);
E[2] = 0.25L;
A[3] = 0.L;
B[3] = b21 (tb) * t1 (tb);
C[3] = b31 (tb) * t1 (tb) + b32 (tb) * t2 (tb);
D[3] = b41 (tb) * t1 (tb) + b42 (tb) * t2 (tb) + b43 (tb) * t3 (tb);
E[3] = 1.L / 6.L;
solve_4 (A, B, C, D, E);
if (isnan (E[0]) || isnan (E[1]) || isnan (E[2]) || isnan (E[3]))
return 0;
b54 (tb) = E[3];
b53 (tb) = E[2];
b52 (tb) = E[1];
b51 (tb) = E[0];
rk_b_5 (tb);
#if DEBUG_RK_5_3
fprintf (stderr, "rk_tb_5_3t: end\n");
#endif
return 1;
}
/**
* Function to obtain the coefficients of a 5 steps 2nd-3rd order Runge-Kutta
* pair.
*/
int
rk_tb_5_3p (Optimize * optimize) ///< Optimize struct.
{
long double *tb;
#if DEBUG_RK_5_3
fprintf (stderr, "rk_tb_5_3p: start\n");
#endif
if (!rk_tb_5_3 (optimize))
return 0;
tb = optimize->coefficient;
e51 (tb) = 0.5L / t1 (tb);
e52 (tb) = e53 (tb) = 0.L;
rk_e_5 (tb);
#if DEBUG_RK_5_3
fprintf (stderr, "rk_tb_5_3p: end\n");
#endif
return 1;
}
/**
* Function to obtain the coefficients of a 5 steps 2nd-3rd order, 3rd-4th order
* in equations depending only in time, Runge-Kutta pair.
*/
int
rk_tb_5_3tp (Optimize * optimize) ///< Optimize struct.
{
long double *tb;
#if DEBUG_RK_5_3
fprintf (stderr, "rk_tb_5_3tp: start\n");
#endif
if (!rk_tb_5_3t (optimize))
return 0;
tb = optimize->coefficient;
e53 (tb) = 0.L;
e52 (tb) = (1.L / 3.L - 0.5L * t1 (tb)) / (t2 (tb) * (t2 (tb) - t1 (tb)));
if (isnan (e52 (tb)))
return 0;
e51 (tb) = (0.5L - e52 (tb) * t2 (tb)) / t1 (tb);
if (isnan (e51 (tb)))
return 0;
rk_e_5 (tb);
#if DEBUG_RK_5_3
fprintf (stderr, "rk_tb_5_3tp: end\n");
#endif
return 1;
}
/**
* Function to calculate the objective function of a 5 steps 3rd order
* Runge-Kutta method.
*
* \return objective function value.
*/
long double
rk_objective_tb_5_3 (RK * rk) ///< RK struct.
{
long double *tb;
long double o;
#if DEBUG_RK_5_3
fprintf (stderr, "rk_objective_tb_5_3: start\n");
#endif
tb = rk->tb->coefficient;
o = fminl (0.L, b20 (tb));
if (b30 (tb) < 0.L)
o += b30 (tb);
if (b40 (tb) < 0.L)
o += b40 (tb);
if (b50 (tb) < 0.L)
o += b50 (tb);
if (b51 (tb) < 0.L)
o += b51 (tb);
if (b52 (tb) < 0.L)
o += b52 (tb);
if (b53 (tb) < 0.L)
o += b53 (tb);
if (o < 0.L)
{
o = 40.L - o;
goto end;
}
o = 30.L
+ fmaxl (1.L, fmaxl (t1 (tb), fmaxl (t2 (tb), fmaxl (t3 (tb), t4 (tb)))));
if (rk->strong)
{
rk_bucle_ac (rk);
o = fminl (o, *rk->ac0->optimal);
}
end:
#if DEBUG_RK_5_3
fprintf (stderr, "rk_objective_tb_5_3: optimal=%Lg\n", o);
fprintf (stderr, "rk_objective_tb_5_3: end\n");
#endif
return o;
}
/**
* Function to calculate the objective function of a 5 steps 3rd order, 4th
* order in equations depending only in time, Runge-Kutta method.
*
* \return objective function value.
*/
long double
rk_objective_tb_5_3t (RK * rk) ///< RK struct.
{
long double *tb;
long double o;
#if DEBUG_RK_5_3
fprintf (stderr, "rk_objective_tb_5_3t: start\n");
#endif
tb = rk->tb->coefficient;
o = fminl (0.L, b20 (tb));
if (b30 (tb) < 0.L)
o += b30 (tb);
if (b40 (tb) < 0.L)
o += b40 (tb);
if (b50 (tb) < 0.L)
o += b50 (tb);
if (b51 (tb) < 0.L)
o += b51 (tb);
if (b52 (tb) < 0.L)
o += b52 (tb);
if (b53 (tb) < 0.L)
o += b53 (tb);
if (b54 (tb) < 0.L)
o += b54 (tb);
if (o < 0.L)
{
o = 40.L - o;
goto end;
}
o = 30.L
+ fmaxl (1.L, fmaxl (t1 (tb), fmaxl (t2 (tb), fmaxl (t3 (tb), t4 (tb)))));
if (rk->strong)
{
rk_bucle_ac (rk);
o = fminl (o, *rk->ac0->optimal);
}
end:
#if DEBUG_RK_5_3
fprintf (stderr, "rk_objective_tb_5_3t: optimal=%Lg\n", o);
fprintf (stderr, "rk_objective_tb_5_3t: end\n");
#endif
return o;
}
/**
* Function to calculate the objective function of a 5 steps 2nd-3rd order
* Runge-Kutta pair.
*
* \return objective function value.
*/
long double
rk_objective_tb_5_3p (RK * rk) ///< RK struct.
{
long double *tb;
long double o;
#if DEBUG_RK_5_3
fprintf (stderr, "rk_objective_tb_5_3p: start\n");
#endif
tb = rk->tb->coefficient;
o = fminl (0.L, b20 (tb));
if (b30 (tb) < 0.L)
o += b30 (tb);
if (b40 (tb) < 0.L)
o += b40 (tb);
if (b50 (tb) < 0.L)
o += b50 (tb);
if (b51 (tb) < 0.L)
o += b51 (tb);
if (b52 (tb) < 0.L)
o += b52 (tb);
if (b53 (tb) < 0.L)
o += b53 (tb);
if (e50 (tb) < 0.L)
o += e50 (tb);
if (e51 (tb) < 0.L)
o += e51 (tb);
if (o < 0.L)
{
o = 40.L - o;
goto end;
}
o = 30.L
+ fmaxl (1.L, fmaxl (t1 (tb), fmaxl (t2 (tb), fmaxl (t3 (tb), t4 (tb)))));
if (rk->strong)
{
rk_bucle_ac (rk);
o = fminl (o, *rk->ac0->optimal);
}
end:
#if DEBUG_RK_5_3
fprintf (stderr, "rk_objective_tb_5_3p: optimal=%Lg\n", o);
fprintf (stderr, "rk_objective_tb_5_3p: end\n");
#endif
return o;
}
/**
* Function to calculate the objective function of a 5 steps 2nd-3rd order,
* 3rd-4th order in equations depending only in time, Runge-Kutta pair.
*
* \return objective function value.
*/
long double
rk_objective_tb_5_3tp (RK * rk) ///< RK struct.
{
long double *tb;
long double o;
#if DEBUG_RK_5_3
fprintf (stderr, "rk_objective_tb_5_3tp: start\n");
#endif
tb = rk->tb->coefficient;
o = fminl (0.L, b20 (tb));
if (b30 (tb) < 0.L)
o += b30 (tb);
if (b40 (tb) < 0.L)
o += b40 (tb);
if (b50 (tb) < 0.L)
o += b50 (tb);
if (b51 (tb) < 0.L)
o += b51 (tb);
if (b52 (tb) < 0.L)
o += b52 (tb);
if (b53 (tb) < 0.L)
o += b53 (tb);
if (b54 (tb) < 0.L)
o += b54 (tb);
if (e50 (tb) < 0.L)
o += e50 (tb);
if (e51 (tb) < 0.L)
o += e51 (tb);
if (e52 (tb) < 0.L)
o += e52 (tb);
if (o < 0.L)
{
o = 40.L - o;
goto end;
}
o = 30.L
+ fmaxl (1.L, fmaxl (t1 (tb), fmaxl (t2 (tb), fmaxl (t3 (tb), t4 (tb)))));
if (rk->strong)
{
rk_bucle_ac (rk);
o = fminl (o, *rk->ac0->optimal);
}
end:
#if DEBUG_RK_5_3
fprintf (stderr, "rk_objective_tb_5_3tp: optimal=%Lg\n", o);
fprintf (stderr, "rk_objective_tb_5_3tp: end\n");
#endif
return o;
}
# Dimensional Reduction
G. Richards (2016, 2018), based on materials from Ivezic, Connolly, Leighly, and VanderPlas
**Before class starts, please try to do the following:**
> find . -name "sdss_corrected_spectra.py" -print
> ./anaconda/lib/python2.7/site-packages/astroML/datasets/sdss_corrected_spectra.py
> emacs -nw ./anaconda/lib/python2.7/site-packages/astroML/datasets/sdss_corrected_spectra.py
> #DATA_URL = 'http://www.astro.washington.edu/users/vanderplas/spec4000.npz'
> DATA_URL = 'http://staff.washington.edu/jakevdp/spec4000.npz'
Just in case that doesn't work, I've put "spec4000.npz" in PHYS_T480_F18/data. Copy this to your "astroML_data" directory.
## Curse of Dimensionality
You want to buy a car. Right now--you don't want to wait. But you are picky and have certain things that you would like it to have. Each of those things has a probability between 0 and 1 of being on the car dealer's lot. You want a red car, which has a probability of being on the lot of $p_{\rm red}$; you want good gas mileage, $p_{\rm gas}$; you want leather seats, $p_{\rm leather}$; and you want a sunroof, $p_{\rm sunroof}$. The probability that the dealer has a car on the lot that meets all of those requirements is
$$p_{\rm red} \, p_{\rm gas} \, p_{\rm leather} \, p_{\rm sunroof},$$
or $p^n$ where $n$ is the number of features (assuming equal probability for each).
If the probability of each of these is 50%, then the probability of you driving off with your car of choice is only $0.5*0.5*0.5*0.5 = 0.0625$. Not very good. Imagine if you also wanted other things. This is the [Curse of Dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality).
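To see how quickly this compounds, here is a quick sketch of the same product rule (the 50% per-feature probability is just the illustrative value from above):
```python
# Probability that a car with all n desired features is on the lot,
# assuming each feature is present independently with probability p.
p = 0.5
for n in range(1, 9):
    print(n, p**n)   # n = 4 gives 0.0625, as above
```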
Let's illustrate the curse of dimensionality with two figures from [here.](https://medium.freecodecamp.org/the-curse-of-dimensionality-how-we-can-save-big-data-from-itself-d9fa0f872335)
In the first example we are trying to find which box holds some treasure, which gets harder and harder with more dimensions, despite there being just 5 boxes in each dimension:
In the next example we inscribe a circle in a square. The area outside of the circle grows larger and larger as the number of dimensions increase:
Mathematically we can describe this as: the more dimensions that your data span, the more points needed to uniformly sample the space.
For $D$ dimensions with coordinates $[-1,1]$, the fraction of points in a unit hypersphere (with radius $r$, as illustrated above) is
$$f_D = \frac{V_D(r)}{(2r)^D} = \frac{\pi^{D/2}}{D2^{D-1}\Gamma(D/2)}$$
which goes to $0$ as $D$ goes to infinity! Actually, as you can see from the plot below, it is effectively 0 much earlier than that!
```python
# Execute this cell
# from Andy Connolly
%matplotlib inline
import numpy as np
import scipy.special as sp
from matplotlib import pyplot as plt
def unitVolume(dimension, radius=1.):
return 2*(radius**dimension *np.pi**(dimension/2.))/(dimension*sp.gamma(dimension/2.))
dim = np.linspace(1,100)
#------------------------------------------------------------
# Plot the results
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(dim,unitVolume(dim)/2.**dim)
ax.set_yscale('log')
ax.set_xlabel('$Dimension$')
ax.set_ylabel('$Volume$')
plt.show()
```
Note that this works in the opposite direction too: let's say you want to find "rare" objects in 10 dimensions, where we'll define rare as <1% of the population. Then you'll need to accept objects from 63% of the distribution in all 10 dimensions! So are those really "rare" or are they just a particular 1% of the population?
```python
import numpy as np
p = 10**(np.log10(0.01)/10.0)
print(p)
```
0.63095734448
N.B. Dimensionality isn't just measuring $D$ parameters for $N$ objects. It could be a spectrum with $D$ values or an image with $D$ pixels, etc. In the book the examples used just happen to be spectra of galaxies from the SDSS project. But we can insert the data of our choice instead.
For example: the SDSS comprises a sample of 357 million sources:
- each source has 448 measured attributes
- selecting just 30 of these (e.g., magnitude, size, ...) and normalizing the data to the range $-1$ to $1$
yields a probability of only about 1 in 1.4$\times 10^5$ that even one of the 357 million sources resides within the corresponding unit hypersphere.
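We can check that number directly from the $f_D$ expression above (a minimal sketch; the 30 attributes and 357 million sources are the values quoted in the text):
```python
import numpy as np
from scipy.special import gamma

# Fraction of the 30-D cube [-1, 1]^30 that lies inside the unit hypersphere,
# f_D = pi^(D/2) / (D * 2^(D-1) * Gamma(D/2)), then scale by the sample size.
D = 30
f_D = np.pi**(D / 2) / (D * 2**(D - 1) * gamma(D / 2))
n_sources = 357e6
print(f_D)              # ~2e-14
print(n_sources * f_D)  # ~7e-6, i.e. about 1 in 1.4e5
```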
## Principal Component Analysis (PCA)
In [Principal Component Analysis (PCA)](https://en.wikipedia.org/wiki/Principal_component_analysis) we seek to take a data set like the one shown below and apply a transform to the data such that the new axes are aligned with the maximal variance of the data. As can be seen in the Figure, this is basically just the same as doing regression by minimizing the square of the perpendicular distances to the new axes. Note that we haven't made any changes to the data, we have just defined new axes.
```python
# Execute this cell
# Ivezic, Figure 7.2
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.patches import Ellipse
#------------------------------------------------------------
# Set parameters and draw the random sample
np.random.seed(42)
r = 0.9
sigma1 = 0.25
sigma2 = 0.08
rotation = np.pi / 6
s = np.sin(rotation)
c = np.cos(rotation)
X = np.random.normal(0, [sigma1, sigma2], size=(100, 2)).T
R = np.array([[c, -s],[s, c]])
X = np.dot(R, X) #Same data, now rotated by R matrix.
#------------------------------------------------------------
# Plot the diagram
fig = plt.figure(figsize=(5, 5), facecolor='w')
ax = plt.axes((0, 0, 1, 1), xticks=[], yticks=[], frameon=False)
# draw axes
ax.annotate(r'$x$', (-r, 0), (r, 0),
ha='center', va='center',
arrowprops=dict(arrowstyle='<->', color='k', lw=1))
ax.annotate(r'$y$', (0, -r), (0, r),
ha='center', va='center',
arrowprops=dict(arrowstyle='<->', color='k', lw=1))
# draw rotated axes
ax.annotate(r'$x^\prime$', (-r * c, -r * s), (r * c, r * s),
ha='center', va='center',
arrowprops=dict(color='k', arrowstyle='<->', lw=1))
ax.annotate(r'$y^\prime$', (r * s, -r * c), (-r * s, r * c),
ha='center', va='center',
arrowprops=dict(color='k', arrowstyle='<->', lw=1))
# scatter points
ax.scatter(X[0], X[1], s=25, lw=0, c='k', zorder=2)
# draw lines
vnorm = np.array([s, -c])
for v in (X.T):
d = np.dot(v, vnorm)
v1 = v - d * vnorm
ax.plot([v[0], v1[0]], [v[1], v1[1]], '-k')
# draw ellipses
for sigma in (1, 2, 3):
ax.add_patch(Ellipse((0, 0), 2 * sigma * sigma1, 2 * sigma * sigma2,
rotation * 180. / np.pi,
ec='k', fc='gray', alpha=0.2, zorder=1))
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
plt.show()
```
Note that the points are correlated along a particular direction which doesn't align with the initial choice of axes. So, we should rotate our axes to align with this correlation.
We'll choose the rotation to maximize the ability to discriminate between the data points:
* the first axis, or **principal component**, is direction of maximal variance
* the second principal component is orthogonal to the first component and maximizes the residual variance
* ...
PCA is a dimensional reduction process because we can generally account for nearly "all" of the variance in the data set with fewer than the original $K$ dimensions. See more below.
We start with a data set $\{x_i\}$ that consists of $N$ objects, for each of which we measure $K$ features. We subtract the mean of each feature in $\{x_i\}$ and write $X$ as an $N\times K$ matrix.
The covariance of this matrix is
$$C_X=\frac{1}{N-1}X^TX.$$
There are off-diagonal terms if there are correlations between the measurements (e.g., maybe two of the features are temperature dependent and the measurements were taken at the same time).
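As a minimal sketch of that expression (with illustrative random data; `np.cov` is used as the cross-check):
```python
import numpy as np

# C_X = X^T X / (N - 1) for a mean-subtracted N x K data matrix,
# checked against numpy's covariance routine.
rng = np.random.default_rng(0)
N, K = 100, 3
X = rng.normal(size=(N, K))
X = X - X.mean(axis=0)                 # subtract the mean of each feature

C_X = X.T @ X / (N - 1)
print(np.allclose(C_X, np.cov(X, rowvar=False)))  # True
```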
If $R$ is a projection of the data that is aligned with the maximal variance, then we have $Y= X R$ with covariance
$$ C_{Y} = \frac{1}{N-1} R^T X^T X R = R^T C_X R.$$
$r_1$ is the first principal component of $R$, which can be derived using Lagrange multipliers with the following cost function:
$$ \phi(r_1,\lambda_1) = r_1^TC_X r_1 - \lambda_1(r_1^Tr_1-1). $$
If we take the derivative of $\phi(r_1,\lambda_1)$ with respect to $r_1$ and set it to 0, then we have
$$ C_Xr_1 - \lambda_1 r_1 = 0, $$
so $r_1$ is an eigenvector of $C_X$ with eigenvalue $\lambda_1$, a root of $\det(C_X - \lambda_1 {\bf I})=0$. Choosing the largest root maximizes the captured variance, since
$$ \lambda_1 = r_1^T C_X r_1.$$
The columns of the full matrix $R$ are the eigenvectors (known here as the principal components), and the diagonal values of $C_Y$ are the variances contained within each component.
We aren't going to go through the linear algebra more than that here. But it would be a good group project for someone. See the end of 7.3.1 starting at the bottom on page 294 or go through [Karen Leighly's PCA lecture notes](http://seminar.ouml.org/lectures/principal-components-analysis/) if you want to walk through the math in more detail.
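That said, here is a minimal numerical sketch of the result on illustrative 2-D data: diagonalizing $C_X$ by hand recovers the same variances and directions that Scikit-Learn's PCA reports.
```python
import numpy as np
from sklearn.decomposition import PCA

# PCA "by hand": the eigenvectors of C_X are the principal components and
# the eigenvalues are the variances along them (compare with sklearn).
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 2)) @ np.array([[0.9, 0.4], [0.1, 0.6]])  # correlated features
X = X - X.mean(axis=0)

C_X = X.T @ X / (X.shape[0] - 1)
evals, R = np.linalg.eigh(C_X)        # eigh returns eigenvalues in ascending order
evals, R = evals[::-1], R[:, ::-1]    # largest variance first

pca = PCA().fit(X)
print(evals, pca.explained_variance_)                      # same variances
print(np.allclose(np.abs(R.T), np.abs(pca.components_)))   # same directions (up to sign)
```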
### Preparing data for PCA
* Subtract the mean of each dimension (to "center" the data)
* Divide by the variance in each dimension (to "whiten" the data)
* (For spectra and images) normalize each row to yield an integral of unity.
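A minimal sketch of the centering and scaling steps for a generic $N\times K$ feature matrix (illustrative random data stands in for real measurements):
```python
import numpy as np

# Center each feature, then scale it to unit variance ("whitening" in the
# sense used above); spectra/images would additionally be normalized per row.
rng = np.random.default_rng(1)
X = rng.normal(loc=5.0, scale=[1.0, 10.0, 0.1], size=(200, 3))

X_centered = X - X.mean(axis=0)
X_whitened = X_centered / X_centered.std(axis=0)
print(X_whitened.mean(axis=0).round(6), X_whitened.std(axis=0).round(6))
```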
Below is a typical call to the PCA algorithm. Note that this is somewhat backwards. We are starting with `X` and then we are making it higher dimensional--to create a mock high-$D$ data set. Then we are applying PCA as a dimensionality reduction technique.
```python
#Example call from 7.3.2
import numpy as np
from sklearn.decomposition import PCA
X = np.random.normal(size=(100,3)) # 100 points in 3D
R = np.random.random((3,10)) # projection matrix
X = np.dot(X,R) # X is now 10-dim, with 3 intrinsic dims
pca = PCA(n_components=4) # n_components can be optionally set
pca.fit(X)
comp = pca.transform(X) # compute the subspace projection of X, 4 eigenvalues for each of the 100 samples
mean = pca.mean_ # length 10 mean of the data
components = pca.components_ # 4x10 matrix of components, multiply each by respective "comp" to reconstruct
#Reconstruction of object1
#Xreconstruct[0] = mean + [components][comp[0]]
```
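The commented-out reconstruction at the end of that cell can be written out explicitly. Here is a minimal sketch reusing the `mean`, `comp`, and `components` variables defined above; `pca.inverse_transform` performs the same operation:
```python
# Reconstruct the first object: mean vector plus the coefficient-weighted
# sum of the principal components.
X0_reconstruct = mean + np.dot(comp[0], components)

# scikit-learn's built-in inverse transform gives the identical result.
print(np.allclose(X0_reconstruct, pca.inverse_transform(comp)[0]))  # True
```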
To illustrate what is happening, here is a PCA reconstruction of handwritten "3s" from [Hastie et al.](https://web.stanford.edu/~hastie/ElemStatLearn/) :
[Scikit-Learn's decomposition module](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.decomposition) has a number of [PCA type implementations](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html#sklearn.decomposition.PCA).
Let's work through an example using spectra of galaxies taken during the Sloan Digital Sky Survey. In this sample there are 4000 spectra with flux measurements in 1000 bins. 15 example spectra are shown below, and our example will use half of the spectra, chosen at random.
```python
%matplotlib inline
# Example from Andy Connolly
# See Ivezic, Figure 7.4
import numpy as np
from matplotlib import pyplot as plt
from sklearn.decomposition import PCA
# from sklearn.decomposition import RandomizedPCA  # removed in recent scikit-learn; use PCA(svd_solver='randomized') instead
from astroML.datasets import sdss_corrected_spectra
from astroML.decorators import pickle_results
#------------------------------------------------------------
# Download data
data = sdss_corrected_spectra.fetch_sdss_corrected_spectra()
spectra = sdss_corrected_spectra.reconstruct_spectra(data)
wavelengths = sdss_corrected_spectra.compute_wavelengths(data)
print(len(spectra), len(wavelengths))
#----------------------------------------------------------------------
# Compute PCA
np.random.seed(500)
nrows = 2000 # We'll just look at 2000 random spectra
n_components = 5 # Do the fit with 5 components, which is the mean plus 4
ind = np.random.randint(spectra.shape[0], size=nrows)
spec_mean = spectra[ind].mean(0) # Compute the mean spectrum, which is the first component
# spec_mean = spectra[:50].mean(0)
# use Randomized PCA for speed
#pca = RandomizedPCA(n_components - 1)
pca = PCA(n_components - 1,svd_solver='randomized')
pca.fit(spectra[ind])
pca_comp = np.vstack([spec_mean,pca.components_]) #Add the mean to the components
evals = pca.explained_variance_ratio_
print(evals)
```
downloading PCA-processed SDSS spectra from http://staff.washington.edu/jakevdp/spec4000.npz to /home/dude/astroML_data
Downloading http://staff.washington.edu/jakevdp/spec4000.npz
[=========================================] 27.15Mb / 27.15Mb
4000 1000
[0.8893159 0.06058304 0.02481433 0.01012148]
```python
print(pca.explained_variance_ratio_)
```
[0.8893159 0.06058304 0.02481433 0.01012148]
Now let's plot the components. See also Ivezic, Figure 7.4. The left hand panels are just the first 5 spectra for comparison with the first 5 PCA components, which are shown on the right. They are ordered by the size of their eigenvalues.
```python
#Make plots
fig = plt.figure(figsize=(10, 8))
fig.subplots_adjust(left=0.05, right=0.95, wspace=0.05,
bottom=0.1, top=0.95, hspace=0.05)
titles = 'PCA components'
for j in range(n_components):
# plot the components
ax = fig.add_subplot(n_components, 2, 2*j+2)
ax.yaxis.set_major_formatter(plt.NullFormatter())
ax.xaxis.set_major_locator(plt.MultipleLocator(1000))
if j < n_components - 1:
ax.xaxis.set_major_formatter(plt.NullFormatter())
else:
ax.set_xlabel('wavelength (Angstroms)')
ax.plot(wavelengths, pca_comp[j], '-k', lw=1)
# plot zero line
xlim = [3000, 7999]
ax.plot(xlim, [0, 0], '-', c='gray', lw=1)
ax.set_xlim(xlim)
# adjust y limits
ylim = plt.ylim()
dy = 0.05 * (ylim[1] - ylim[0])
ax.set_ylim(ylim[0] - dy, ylim[1] + 4 * dy)
# plot the first j spectra
ax2 = fig.add_subplot(n_components, 2, 2*j+1)
ax2.yaxis.set_major_formatter(plt.NullFormatter())
ax2.xaxis.set_major_locator(plt.MultipleLocator(1000))
if j < n_components - 1:
ax2.xaxis.set_major_formatter(plt.NullFormatter())
else:
ax2.set_xlabel('wavelength (Angstroms)')
ax2.plot(wavelengths, spectra[j], '-k', lw=1)
# plot zero line
ax2.plot(xlim, [0, 0], '-', c='gray', lw=1)
ax2.set_xlim(xlim)
if j == 0:
ax.set_title(titles, fontsize='medium')
if j == 0:
label = 'mean'
else:
label = 'component %i' % j
# adjust y limits
ylim = plt.ylim()
dy = 0.05 * (ylim[1] - ylim[0])
ax2.set_ylim(ylim[0] - dy, ylim[1] + 4 * dy)
ax.text(0.02, 0.95, label, transform=ax.transAxes,
ha='left', va='top', bbox=dict(ec='w', fc='w'),
fontsize='small')
plt.show()
```
Now let's make "scree" plots. These plots tell us how much of the variance is explained as a function of the each eigenvector. Our plot won't look much like Ivezic, Figure 7.5, so I've shown it below to explain where "scree" comes from.
```python
# Execute this cell
import numpy as np
from matplotlib import pyplot as plt
#----------------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(121)
ax.plot(np.arange(n_components-1), evals)
ax.set_xlabel("eigenvalue number")
ax.set_ylabel("eigenvalue ")
ax = fig.add_subplot(122)
ax.plot(np.arange(n_components-1), evals.cumsum())
ax.set_xlabel("eigenvalue number")
ax.set_ylabel("cumulative eigenvalue")
plt.show()
```
How much of the variance is explained by the first two components? How about all of the components?
```python
print("The first component explains {:.3f} of the variance in the data.".format(pca.explained_variance_ratio_[0]))
print("The second component explains {:.3f} of the variance in the data.".format(pca.explained_variance_ratio_[1]))
print("All components explain {:.3f} of the variance in the data.".format(sum(pca.explained_variance_ratio_)))
```
The first component explains 0.889 of the variance in the data.
The second component explains 0.061 of the variance in the data.
All components explain 0.985 of the variance in the data.
This is why PCA enables dimensionality reduction.
How many components would we need to explain 99.5% of the variance?
```python
for num_feats in np.arange(1,20, dtype = int):
pca = PCA(n_components=num_feats)
pca.fit(spectra[ind])
if (sum(pca.explained_variance_ratio_)>0.995):
break
print("{:d} features are needed to explain 99.5% of the variance".format(num_feats))
```
8 features are needed to explain 99.5% of the variance
Note that we would need 1000 components to encode *all* of the variance.
## Interpreting the PCA
- The output eigenvectors are ordered by their associated eigenvalues
- The eigenvalues reflect the variance within each eigenvector
- The sum of the eigenvalues is total variance of the system
- Projection of each spectrum onto the first few eigenspectra is a compression of the data
Once we have the eigenvectors, we can try to reconstruct an observed spectrum, ${x}(k)$, in the eigenvector basis, ${e}_i(k)$, as
$$ \begin{equation}
{x}_i(k) = {\mu}(k) + \sum_j^R \theta_{ij} {e}_j(k).
\end{equation}
$$
That would give a full (perfect) reconstruction of the data since it uses all of the eigenvectors. But if we truncate (i.e., $r<R$), then we will have reduced the dimensionality while still reconstructing the data with relatively little loss of information.
For example, we started with 4000x1000 floating point numbers. If we can explain nearly all of the variance with 8 eigenvectors, then we have reduced the problem to 4000x8+8x1000 floating point numbers!
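Spelling out that arithmetic (a quick sketch using the numbers quoted above):
```python
# Storage for the full data set vs. the truncated PCA representation
# (8 coefficients per spectrum plus the 8 eigenspectra themselves).
n_spectra, n_bins, n_comp = 4000, 1000, 8
full = n_spectra * n_bins
compressed = n_spectra * n_comp + n_comp * n_bins
print(full, compressed, full / compressed)   # 4000000 40000 100.0
```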
Execute the next cell to see how the reconstruction improves by adding more components.
```python
# Execute this cell
import numpy as np
from matplotlib import pyplot as plt
from sklearn.decomposition import PCA
from astroML.datasets import sdss_corrected_spectra
from astroML.decorators import pickle_results
#------------------------------------------------------------
# Download data
data = sdss_corrected_spectra.fetch_sdss_corrected_spectra()
spectra = sdss_corrected_spectra.reconstruct_spectra(data)
wavelengths = sdss_corrected_spectra.compute_wavelengths(data)
#------------------------------------------------------------
# Compute PCA components
# Eigenvalues can be computed using PCA as in the commented code below:
#from sklearn.decomposition import PCA
#pca = PCA()
#pca.fit(spectra)
#evals = pca.explained_variance_ratio_
#evals_cs = evals.cumsum()
# because the spectra have been reconstructed from masked values, this
# is not exactly correct in this case: we'll use the values computed
# in the file compute_sdss_pca.py
evals = data['evals'] ** 2
evals_cs = evals.cumsum()
evals_cs /= evals_cs[-1]
evecs = data['evecs']
spec_mean = spectra.mean(0)
#------------------------------------------------------------
# Find the coefficients of a particular spectrum
spec = spectra[1]
coeff = np.dot(evecs, spec - spec_mean)
#------------------------------------------------------------
# Plot the sequence of reconstructions
fig = plt.figure(figsize=(8, 8))
fig.subplots_adjust(hspace=0)
for i, n in enumerate([0, 4, 8, 20]):
ax = fig.add_subplot(411 + i)
ax.plot(wavelengths, spec, '-', c='gray')
ax.plot(wavelengths, spec_mean + np.dot(coeff[:n], evecs[:n]), '-k')
if i < 3:
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.set_ylim(-2, 21)
ax.set_ylabel('flux')
if n == 0:
text = "mean"
elif n == 1:
text = "mean + 1 component\n"
text += r"$(\sigma^2_{tot} = %.2f)$" % evals_cs[n - 1]
else:
text = "mean + %i components\n" % n
text += r"$(\sigma^2_{tot} = %.2f)$" % evals_cs[n - 1]
ax.text(0.01, 0.95, text, ha='left', va='top', transform=ax.transAxes)
fig.axes[-1].set_xlabel(r'${\rm wavelength\ (\AA)}$')
plt.show()
```
### Caveats I
PCA is a linear process, whereas the variations in the data may not be. So it may not always be appropriate to use and/or may require a relatively large number of components to fully describe any non-linearity.
Note also that PCA can be very impractical for large data sets which exceed the memory per core, as the computational requirement goes as $\mathscr{O}(D^3)$ and the memory requirement goes as $\mathscr{O}(2D^2)$.
### Missing Data
We have assumed so far that there is no missing data (e.g., bad pixels in the spectrum, etc.). But often the data set is incomplete. Since PCA encodes the flux correlation with wavelength (or whatever parameters are in your data set), we can actually use it to determine missing values.
An example is shown below. Here, black are the observed spectra. Gray are the regions where we have no data. Blue is the PCA reconstruction, including the regions where there are no data. Awesome, isn't it?
```python
# Execute this cell
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import ticker
from astroML.datasets import fetch_sdss_corrected_spectra
from astroML.datasets import sdss_corrected_spectra
#------------------------------------------------------------
# Get spectra and eigenvectors used to reconstruct them
data = fetch_sdss_corrected_spectra()
spec = sdss_corrected_spectra.reconstruct_spectra(data)
lam = sdss_corrected_spectra.compute_wavelengths(data)
evecs = data['evecs']
mu = data['mu']
norms = data['norms']
mask = data['mask']
#------------------------------------------------------------
# plot the results
i_plot = ((lam > 5750) & (lam < 6350))
lam = lam[i_plot]
specnums = [20, 8, 9]
subplots = [311, 312, 313]
fig = plt.figure(figsize=(8, 10))
fig.subplots_adjust(hspace=0)
for subplot, i in zip(subplots, specnums):
ax = fig.add_subplot(subplot)
# compute eigen-coefficients
spec_i_centered = spec[i] / norms[i] - mu
coeffs = np.dot(spec_i_centered, evecs.T)
# blank out masked regions
spec_i = spec[i]
mask_i = mask[i]
spec_i[mask_i] = np.nan
# plot the raw masked spectrum
ax.plot(lam, spec_i[i_plot], '-', color='k', lw=2,
label='True spectrum')
# plot two levels of reconstruction
for nev in [10]:
if nev == 0:
label = 'mean'
else:
label = 'N EV=%i' % nev
spec_i_recons = norms[i] * (mu + np.dot(coeffs[:nev], evecs[:nev]))
ax.plot(lam, spec_i_recons[i_plot], label=label)
# plot shaded background in masked region
ylim = ax.get_ylim()
mask_shade = ylim[0] + mask[i][i_plot].astype(float) * ylim[1]
plt.fill(np.concatenate([lam[:1], lam, lam[-1:]]),
np.concatenate([[ylim[0]], mask_shade, [ylim[0]]]),
lw=0, fc='k', alpha=0.2)
ax.set_xlim(lam[0], lam[-1])
ax.set_ylim(ylim)
ax.yaxis.set_major_formatter(ticker.NullFormatter())
if subplot == 311:
ax.legend(loc=1, prop=dict(size=14))
ax.set_xlabel('$\lambda\ (\AA)$')
ax.set_ylabel('normalized flux')
plt.show()
```
The example that we have been using above is "spectral" PCA. Some examples from the literature include:
- [Francis et al. 1992](http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1992ApJ...398..476F&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf)
- [Connolly et al. 1995](http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1995AJ....110.1071C&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf)
- [Yip et al. 2004](http://iopscience.iop.org/article/10.1086/425626/meta;jsessionid=31BB5F11B85D2BF4180834DC71BA0B85.c3.iopscience.cld.iop.org)
One can also do PCA on features that aren't ordered (as they were for the spectra). E.g., if you have $D$ different parameters measured for your objects. The classic example in astronomy is
[Boroson & Green 1992](http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1992ApJS...80..109B&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf).
### Caveats II
One of the things that I don't like about PCA is that the eigenvectors are defined relative to the mean. So they can be positive or negative and they often don't look anything like the original data itself. Whereas it is often the case that you might expect that the components would look like, well, the physical components. For example, quasars are fundamentally galaxies. So, part of their flux comes from the galaxy that they live in. But PCA doesn't return any component that looks like a typical galaxy.
## Non-negative Matrix Factorization (NMF)
This is where [Non-negative Matrix Factorization (NMF)](https://en.wikipedia.org/wiki/Non-negative_matrix_factorization) comes in. Here we are treating the data as a linear sum of positive-definite components.
NMF assumes any data matrix can be factored into two matrices, $W$ and $Y$, with
$$\begin{equation}
X=W Y,
\end{equation}
$$
where both $W$ and $Y$ are nonnegative.
So, $WY$ is an approximation of $X$. Nonnegative bases can be derived through an iterative process that minimizes the reconstruction error $||X - W Y||^2$.
Note, however, that the iterative process is not guaranteed to find the global minimum (it can get stuck in local minima, much like $K$-means and EM), but random initialization and cross-validation can be used to search for it.
An example from the literature is [Allen et al. 2008](http://arxiv.org/abs/0810.4231)
In Scikit-Learn the [NMF implementation](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html) looks like:
```python
# Execute this cell
import numpy as np
from sklearn.decomposition import NMF
X = np.random.random((100,10)) # 100 points in 10-D
nmf = NMF(n_components=3)
nmf.fit(X)
proj = nmf.transform(X) # project to 3 dimensions
comp = nmf.components_ # 3x10 array of components
err = nmf.reconstruction_err_ # how well 3 components capture the data
```
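Because the NMF objective is non-convex, the answer can depend on the starting point. Here is a minimal sketch of the restart idea (toy data like the cell above; the number of restarts and `max_iter` are arbitrary choices):
```python
# Execute this cell -- NMF is non-convex, so compare a few random restarts
import numpy as np
from sklearn.decomposition import NMF

X = np.random.random((100, 10))  # toy data, as in the cell above

errs = {}
for seed in range(5):
    nmf_try = NMF(n_components=3, init='random', random_state=seed, max_iter=1000)
    nmf_try.fit(X)
    errs[seed] = nmf_try.reconstruction_err_

print(errs)  # keep the restart with the smallest reconstruction error
```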
An example (and comparison to PCA) is given below.
```python
# Execute the next 2 cells
# Example from Figure 7.4
# Author: Jake VanderPlas
# License: BSD
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from sklearn.decomposition import NMF
# from sklearn.decomposition import RandomizedPCA  # removed in newer scikit-learn; PCA(svd_solver='randomized') is used instead
from sklearn.decomposition import PCA
from astroML.datasets import sdss_corrected_spectra
# from astroML.decorators import pickle_results  # not needed for this cell
#------------------------------------------------------------
# Download data
data = sdss_corrected_spectra.fetch_sdss_corrected_spectra()
spectra = sdss_corrected_spectra.reconstruct_spectra(data)
wavelengths = sdss_corrected_spectra.compute_wavelengths(data)
```
```python
#----------------------------------------------------------------------
# Compute PCA, and NMF components
def compute_PCA_NMF(n_components=5):
    spec_mean = spectra.mean(0)

    # PCA: use randomized PCA for speed
    #pca = RandomizedPCA(n_components - 1)
    pca = PCA(n_components - 1, svd_solver='randomized')
    pca.fit(spectra)
    pca_comp = np.vstack([spec_mean, pca.components_])

    # NMF requires all elements of the input to be greater than zero
    spectra[spectra < 0] = 0
    nmf = NMF(n_components)
    nmf.fit(spectra)
    nmf_comp = nmf.components_

    return pca_comp, nmf_comp

n_components = 5
decompositions = compute_PCA_NMF(n_components)

#----------------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(10, 10))
fig.subplots_adjust(left=0.05, right=0.95, wspace=0.05,
                    bottom=0.1, top=0.95, hspace=0.05)

titles = ['PCA components', 'NMF components']

for i, comp in enumerate(decompositions):
    for j in range(n_components):
        ax = fig.add_subplot(n_components, 3, 3 * j + 1 + i)

        ax.yaxis.set_major_formatter(plt.NullFormatter())
        ax.xaxis.set_major_locator(plt.MultipleLocator(1000))
        if j < n_components - 1:
            ax.xaxis.set_major_formatter(plt.NullFormatter())
        else:
            ax.set_xlabel('wavelength (Angstroms)')

        ax.plot(wavelengths, comp[j], '-k', lw=1)

        # plot zero line
        xlim = [3000, 7999]
        ax.plot(xlim, [0, 0], '-', c='gray', lw=1)
        ax.set_xlim(xlim)

        if j == 0:
            ax.set_title(titles[i])

        if titles[i].startswith('PCA') or titles[i].startswith('ICA'):
            if j == 0:
                label = 'mean'
            else:
                label = 'component %i' % j
        else:
            label = 'component %i' % (j + 1)

        ax.text(0.03, 0.94, label, transform=ax.transAxes,
                ha='left', va='top')

        for l in ax.get_xticklines() + ax.get_yticklines():
            l.set_markersize(2)

        # adjust y limits
        ylim = plt.ylim()
        dy = 0.05 * (ylim[1] - ylim[0])
        ax.set_ylim(ylim[0] - dy, ylim[1] + 4 * dy)

plt.show()
```
## Independent Component Analysis (ICA)
For data where the components are statistically independent (or nearly so) [Independent Component Analysis (ICA)](https://en.wikipedia.org/wiki/Independent_component_analysis) has become a popular method for separating mixed components. The classical example is the so-called "cocktail party" problem. This is illustrated in the following figure from Hastie, Tibshirani, and Friedman (Figure 14.27 on page 497 in my copy, so they have clearly added some stuff!). Think of the "source signals" as two voices at a party. You are trying to concentrate on just one voice. What you hear is something like the "measured signals" pattern. You could run the data through PCA and that would do an excellent job of reconstructing the signal with reduced dimensionality, but it wouldn't actually isolate the different physical components (bottom-left panel). ICA on the other hand can (bottom-right panel).
![Source signals, measured (mixed) signals, and the components recovered by PCA and ICA, from Hastie, Tibshirani & Friedman.](images/HastieFigure14_37.png)
[Hastie et al.](https://web.stanford.edu/~hastie/ElemStatLearn/): "ICA applied to multivariate data looks for a sequence of orthogonal projections such that the projected data look as far from Gaussian as possible. With pre-whitened data, this amounts to looking for
components that are as independent as possible."
In short you want to find components that are maximally non-Gaussian, since the sum of two random variables will be more Gaussian than either of the components (remember the Central Limit Theorem). Hastie et al. illustrate this point in their book; a toy version of the idea is sketched below.
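Since their figure isn't reproduced here, the following toy "cocktail party" sketch (my own, with a made-up mixing matrix) shows the same effect numerically: the mixtures sit closer to Gaussian (kurtosis nearer zero) than the sources, and FastICA pulls out strongly non-Gaussian components again.
```python
# Execute this cell -- a toy "cocktail party" sketch (my own illustration,
# not the figure from Hastie et al.)
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.RandomState(42)
t = np.linspace(0, 8, 2000)
s1 = np.sign(np.sin(3 * t))            # a square wave ("voice" 1)
s2 = 2 * ((1.3 * t) % 1.0) - 1         # a sawtooth wave ("voice" 2)
S = np.c_[s1, s2] + 0.02 * rng.randn(2000, 2)
S = (S - S.mean(0)) / S.std(0)         # give both sources unit variance

A = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # made-up mixing matrix
X = S @ A.T                            # the two "microphone" recordings

# Fisher kurtosis is 0 for a Gaussian; the mixtures are closer to 0 than the sources
print("kurtosis of sources: ", kurtosis(S))
print("kurtosis of mixtures:", kurtosis(X))

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)           # recovered sources (up to order, sign, scale)
print("kurtosis of ICA estimates:", kurtosis(S_est))
```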
ICA is a good choice for a complex system with relatively independent components. For example, a galaxy is roughly a linear combination of cool stars and hot stars, and a quasar is essentially a galaxy with additional components from an accretion disk and emission-line regions. Ideally we want "eigenvectors" that are aligned with those physical traits/regions as opposed to mathematical constructs.
The basic call to the [FastICA algorithm](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.FastICA.html) in Scikit-Learn looks like:
```python
# Execute this cell
import numpy as np
from sklearn.decomposition import FastICA
X = np.random.normal(size=(100,2)) # 100 objects in 2D
R = np.random.random((2,5)) # mixing matrix
X = np.dot(X,R) # Simulation of a 5D data space
ica = FastICA(2) # Now reproject to 2-D
ica.fit(X)
proj = ica.transform(X) # 100x2 projection of the data
comp = ica.components_ # 2x5 matrix of independent components
## sources = ica.sources_ # 100x2 matrix of sources
```
Execute the next 2 cells to produce a plot showing the ICA components.
```python
%matplotlib inline
#Example from Andy Connolly
import numpy as np
from matplotlib import pyplot as plt
from sklearn.decomposition import FastICA
from astroML.datasets import sdss_corrected_spectra
# from astroML.decorators import pickle_results  # not needed for this cell
#------------------------------------------------------------
# Download data
data = sdss_corrected_spectra.fetch_sdss_corrected_spectra()
spectra = sdss_corrected_spectra.reconstruct_spectra(data)
wavelengths = sdss_corrected_spectra.compute_wavelengths(data)
#----------------------------------------------------------------------
# Compute ICA
np.random.seed(500)
nrows = 500
n_components = 5
ind = np.random.randint(spectra.shape[0], size=nrows)
spec_mean = spectra[ind].mean(0)
# spec_mean = spectra[:50].mean(0)
ica = FastICA(n_components - 1)
ica.fit(spectra[ind])
ica_comp = np.vstack([spec_mean,ica.components_]) #Add the mean to the components
```
```python
#Make plots
fig = plt.figure(figsize=(10, 8))
fig.subplots_adjust(left=0.05, right=0.95, wspace=0.05,
                    bottom=0.1, top=0.95, hspace=0.05)

titles = 'ICA components'

for j in range(n_components):
    # plot the components
    ax = fig.add_subplot(n_components, 2, 2*j+2)
    ax.yaxis.set_major_formatter(plt.NullFormatter())
    ax.xaxis.set_major_locator(plt.MultipleLocator(1000))
    if j < n_components - 1:
        ax.xaxis.set_major_formatter(plt.NullFormatter())
    else:
        ax.set_xlabel(r'wavelength ${\rm (\AA)}$')

    ax.plot(wavelengths, ica_comp[j], '-k', lw=1)

    # plot zero line
    xlim = [3000, 7999]
    ax.plot(xlim, [0, 0], '-', c='gray', lw=1)
    ax.set_xlim(xlim)

    # adjust y limits
    ylim = plt.ylim()
    dy = 0.05 * (ylim[1] - ylim[0])
    ax.set_ylim(ylim[0] - dy, ylim[1] + 4 * dy)

    # plot the first j spectra
    ax2 = fig.add_subplot(n_components, 2, 2*j+1)
    ax2.yaxis.set_major_formatter(plt.NullFormatter())
    ax2.xaxis.set_major_locator(plt.MultipleLocator(1000))
    if j < n_components - 1:
        ax2.xaxis.set_major_formatter(plt.NullFormatter())
    else:
        ax2.set_xlabel(r'wavelength ${\rm (\AA)}$')

    ax2.plot(wavelengths, spectra[j], '-k', lw=1)

    # plot zero line
    ax2.plot(xlim, [0, 0], '-', c='gray', lw=1)
    ax2.set_xlim(xlim)

    if j == 0:
        ax.set_title(titles, fontsize='medium')

    if j == 0:
        label = 'mean'
    else:
        label = 'component %i' % j

    # adjust y limits
    ylim = plt.ylim()
    dy = 0.05 * (ylim[1] - ylim[0])
    ax2.set_ylim(ylim[0] - dy, ylim[1] + 4 * dy)

    ax.text(0.02, 0.95, label, transform=ax.transAxes,
            ha='left', va='top', bbox=dict(ec='w', fc='w'),
            fontsize='small')

plt.show()
```
As with PCA and NMF, we can similarly do a reconstruction (here using the PCA eigenvectors stored with the `data` object):
```python
# Execute this cell
#------------------------------------------------------------
# Find the coefficients of a particular spectrum
spec = spectra[1]
evecs = data['evecs']
coeff = np.dot(evecs, spec - spec_mean)
#------------------------------------------------------------
# Plot the sequence of reconstructions
fig = plt.figure(figsize=(8, 8))
fig.subplots_adjust(hspace=0)
for i, n in enumerate([0, 2, 4, 8]):
    ax = fig.add_subplot(411 + i)
    ax.plot(wavelengths, spec, '-', c='gray')
    ax.plot(wavelengths, spec_mean + np.dot(coeff[:n], evecs[:n]), '-k')

    if i < 3:
        ax.xaxis.set_major_formatter(plt.NullFormatter())

    ax.set_ylim(-2, 21)
    ax.set_ylabel('flux')

    if n == 0:
        text = "mean"
    elif n == 1:
        text = "mean + 1 component\n"
        #text += r"$(\sigma^2_{tot} = %.2f)$" % evals_cs[n - 1]
    else:
        text = "mean + %i components\n" % n
        #text += r"$(\sigma^2_{tot} = %.2f)$" % evals_cs[n - 1]

    ax.text(0.01, 0.95, text, ha='left', va='top', transform=ax.transAxes)
fig.axes[-1].set_xlabel(r'${\rm wavelength\ (\AA)}$')
plt.show()
```
Ivezic, Figure 7.4 compares the components found by the PCA, ICA, and NMF algorithms. Their differences and similarities are quite interesting.
If you think that I was pulling your leg about the cocktail party problem, try it yourself!
Load the code (instead of just running it) and see what effect changing some of the parameters has.
```python
%run code/plot_ica_blind_source_separation.py
```
Let's revisit the digits sample and see what PCA, NMF, and ICA do for it.
```python
## Execute this cell to load the digits sample
%matplotlib inline
import numpy as np
from sklearn.datasets import load_digits
from matplotlib import pyplot as plt
digits = load_digits()
grid_data = np.reshape(digits.data[0], (8,8)) #reshape to 8x8
plt.imshow(grid_data, interpolation = "nearest", cmap = "bone_r")
print(grid_data)
X = digits.data
y = digits.target
```
Do the PCA transform, projecting to 2 dimensions and plot the results.
```python
# PCA
from sklearn.decomposition import PCA
pca = PCA(n_components = 2)
pca.fit(X)
X_reduced = pca.transform(___)
plt.scatter(X_reduced[:,___], X_reduced[:,___], c=y, cmap="nipy_spectral", edgecolor="None")
plt.colorbar()
```
Similarly for NMF and ICA
```python
# NMF
from sklearn.decomposition import ___
nmf = NMF(___)
nmf.___(___)
X_reduced = nmf.___(___)
plt.scatter(___, ___, c=y, cmap="nipy_spectral", edgecolor="None")
plt.colorbar()
```
```python
# ICA
from sklearn.decomposition import ___
ica = FastICA(___)
ica.___(___)
X_reduced = ica.___(___)
plt.scatter(___, ___, c=y, cmap="nipy_spectral", edgecolor="None")
plt.colorbar()
```
Take a second to think about what ICA is doing. What if you had digits from digital clocks instead of handwritten?
I wasn't going to introduce [Neural Networks](https://en.wikipedia.org/wiki/Artificial_neural_network) yet, but it is worth noting that Scikit-Learn's [`Bernoulli Restricted Boltzmann Machine (RBM)`](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.BernoulliRBM.html) is discussed in the [(unsupervised) neural network](http://scikit-learn.org/stable/modules/neural_networks_unsupervised.html) part of the User's Guide. It is relevant here because its inputs must be binary or values between 0 and 1, which we can arrange simply by rescaling the pixel values.
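For instance, a minimal sketch (my own; the hyperparameters are arbitrary) that scales the digits to [0, 1] and inspects the learned RBM components might look like this:
```python
# Execute this cell -- a quick look at Bernoulli RBM "components" on the digits
import numpy as np
from matplotlib import pyplot as plt
from sklearn.datasets import load_digits
from sklearn.neural_network import BernoulliRBM

digits = load_digits()
X01 = digits.data / 16.0   # pixel values rescaled from [0, 16] to [0, 1]

rbm = BernoulliRBM(n_components=10, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(X01)

# each row of components_ is a set of 64 pixel weights; view them as 8x8 images
fig, axes = plt.subplots(2, 5, figsize=(8, 4))
for comp, ax in zip(rbm.components_, axes.ravel()):
    ax.imshow(comp.reshape(8, 8), cmap="bone_r", interpolation="nearest")
    ax.set_xticks([])
    ax.set_yticks([])
plt.show()
```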
We could think about doing dimensional reduction of the digits data set in another way. There are 64 pixels in each of our images, and presumably they aren't all equally useful. Let's figure out exactly which pixels are the most relevant. We'll use Scikit-Learn's [`RandomForestRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html). We won't get to regression until next week, but you don't need to understand the algorithm to do this; just look at the inputs and outputs. Which pixels are the most important? As a bonus, see if you can plot digit images with those pixels highlighted (one possible approach is sketched after the next cell).
```python
from sklearn.ensemble import RandomForestRegressor
RFreg = RandomForestRegressor()# Complete or leave blank as you see fit
RFreg.fit(X,y)# Do Fitting
importances = RFreg.feature_importances_# Determine "importances"
pixelorder = np.argsort(importances)[::-1] #Rank importances (highest to lowest)
print(pixelorder)
plt.figure()
plt.imshow(np.reshape(importances,(8,8)),interpolation="nearest")
plt.show()
```
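One possible take on the bonus (just a sketch, reusing `digits` and `pixelorder` from the cells above): shade the most important pixels on top of an example digit.
```python
# Execute this cell -- shade the most "important" pixels on top of a digit image
top10 = pixelorder[:10]                     # the 10 most important pixels
highlight = np.zeros(64)
highlight[top10] = 1.0
overlay = np.ma.masked_where(highlight == 0, highlight).reshape(8, 8)

plt.figure(figsize=(4, 4))
plt.imshow(np.reshape(digits.data[0], (8, 8)), interpolation="nearest", cmap="bone_r")
plt.imshow(overlay, interpolation="nearest", cmap="autumn", alpha=0.6)
plt.title("the 10 most important pixels, shaded")
plt.show()
```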
```python
```
|
function printModelStats(model, printModelIssues, printDetails)
% printModelStats
% prints some statistics about a model to the screen
%
% model a model structure
% printModelIssues true if information about unconnected
% reactions/metabolites and elemental balancing
% should be printed (opt, default false)
% printDetails true if detailed information should be printed
% about model issues. Only used if printModelIssues
% is true (opt, default true)
%
% Usage: printModelStats(model,printModelIssues, printDetails)
if nargin<2
printModelIssues=false;
end
if nargin<3
printDetails=true;
end
fprintf(['Network statistics for ' model.id ': ' model.name '\n']);
%Get which reactions are present in each compartment
rxnComps=sparse(numel(model.rxns),numel(model.comps));
%For each compartment, find the metabolites that are present in that
%compartment and then the reactions they are involved in
for i=1:numel(model.comps)
[~, I]=find(model.S(model.metComps==i,:));
rxnComps(I,i)=1;
end
if isfield(model,'eccodes')
fprintf(['EC-numbers\t\t\t' num2str(numel(unique(model.eccodes))) '\n']);
end
%Print information about genes
if isfield(model,'genes')
fprintf(['Genes*\t\t\t\t' num2str(numel(model.genes)) '\n']);
%Find the genes in each compartment
for i=1:numel(model.comps)
[~, I]=find(model.rxnGeneMat(rxnComps(:,i)==1,:));
fprintf(['\t' model.compNames{i} '\t' num2str(numel(unique(I))) '\n']);
end
end
%Print information about reactions
fprintf(['\nReactions*\t\t\t' num2str(numel(model.rxns)) '\n']);
for i=1:numel(model.comps)
fprintf(['\t' model.compNames{i} '\t' num2str(sum(rxnComps(:,i))) '\n']);
end
%Removes the effect of compartments and removes duplicate reactions
temp=model;
temp.comps(:)={'s'}; %Set all compartments to be the same
equ=constructEquations(sortModel(temp,true,true),temp.rxns,false);
fprintf(['Unique reactions**\t' num2str(numel(unique(equ))) '\n']);
%Print information about metabolites
fprintf(['\nMetabolites\t\t\t' num2str(numel(model.mets)) '\n']);
for i=1:numel(model.comps)
fprintf(['\t' model.compNames{i} '\t' num2str(sum(model.metComps==i)) '\n']);
end
fprintf(['Unique metabolites\t' num2str(numel(unique(model.metNames))) '\n']);
fprintf('\n* Genes and reactions are counted for each compartment if any of the corresponding metabolites are in that compartment. The sum may therefore not add up to the total number.\n');
fprintf('** Unique reactions are defined as being biochemically unique (no compartmentalization)\n');
%Also print some potential problems if there are any
if printModelIssues==true
fprintf(['\nShort model quality summary for ' model.id ': ' model.name '\n']);
%Check that all the metabolites are being used
involvedMat=model.S;
involvedMat(involvedMat~=0)=1;
usedMets=sum(involvedMat,2);
notPresent=find(usedMets==0);
if ~isempty(notPresent)
errorText=['Non-used metabolites\t' num2str(numel(notPresent)) '\n'];
if printDetails==true
for i=1:numel(notPresent)
errorText=[errorText '\t(' model.mets{notPresent(i)} ') ' model.metNames{notPresent(i)} '\n'];
end
errorText=[errorText '\n'];
end
fprintf(errorText);
end
%Check if there are empty reactions
usedRxns=sum(involvedMat,1);
notUsed=find(usedRxns==0);
if ~isempty(notUsed)
errorText=['Empty reactions\t' num2str(numel(notUsed)) '\n'];
if printDetails==true
for i=1:numel(notUsed)
errorText=[errorText '\t' model.rxns{notUsed(i)} '\n'];
end
errorText=[errorText '\n'];
end
fprintf(errorText);
end
%Check if there are dead-end reactions/metabolites
[~, deletedReactions, deletedMetabolites]=simplifyModel(model,true,false,false,true);
if ~isempty(deletedReactions)
errorText=['Dead-end reactions\t' num2str(numel(deletedReactions)) '\n'];
if printDetails==true
for i=1:numel(deletedReactions)
errorText=[errorText '\t' deletedReactions{i} '\n'];
end
errorText=[errorText '\n'];
end
fprintf(errorText);
end
%Ignore non-used metabolites
deletedMetabolites=setdiff(deletedMetabolites,model.mets(notPresent));
%Must map to indexes in order to print names
deletedMetabolites=find(ismember(model.mets,deletedMetabolites));
if ~isempty(deletedMetabolites)
errorText=['Dead-end metabolites\t' num2str(numel(deletedMetabolites)) '\n'];
if printDetails==true
for i=1:numel(deletedMetabolites)
errorText=[errorText '\t(' model.mets{deletedMetabolites(i)} ') ' model.metNames{deletedMetabolites(i)} '\n'];
end
errorText=[errorText '\n'];
end
fprintf(errorText);
end
balanceStructure=getElementalBalance(model);
notParsed=find(balanceStructure.balanceStatus<0);
notBalanced=find(balanceStructure.balanceStatus==0);
if ~isempty(notParsed)
errorText=['Reactions which could not be elementally balanced\t' num2str(numel(notParsed)) '\n'];
if printDetails==true
for i=1:numel(notParsed)
errorText=[errorText '\t' model.rxns{notParsed(i)} '\n'];
end
errorText=[errorText '\n'];
end
fprintf(errorText);
end
if ~isempty(notBalanced)
errorText=['Reactions which are elementally unbalanced\t' num2str(numel(notBalanced)) '\n'];
if printDetails==true
names=strcat(balanceStructure.elements.names,{', '});
for i=1:numel(notBalanced)
badOnes=sprintf('%s', names{abs(balanceStructure.leftComp(notBalanced(i),:)-balanceStructure.rightComp(notBalanced(i),:))>10^-7});
errorText=[errorText '\t' model.rxns{notBalanced(i)} '\t' badOnes(1:end-2) '\n'];
end
errorText=[errorText '\n'];
end
fprintf(errorText);
end
end
end
|
!
!***********************************************************************
! *
MODULE jj2lsj_code
! *
! This module contains the procedures which are used to perform *
! the jj-LSJ transformation. *
! *
! Written by G. Gaigalas, *
! Vilnius last update: Jan 2017 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
implicit none
!-----------------------------------------------
! R o u t i n e s
!-----------------------------------------------
public :: asf2ls
! Expands an atomic state function, which is represented
! in a jj-coupling CSF basis into a basis of LS-coupling CSF.
public :: coefLSjj
! Returns the value of the LS-jj transformation matrix
! for a given set of quantum numbers.
private :: coefLSjj2
! Returns the value of the LS-jj transformation matrix
! (l^2 LSJ| j_1 j_2 J).
private :: coefLSjjs
! Returns the value of the LS-jj transformation matrix
! from an array LS_jj_*_*.
public :: dallocASFLS
! Deallocates the storage of asf_set_LS.
public :: getchLS
! The spectroscopic notation of a shell in LS coupling is returned.
public :: getxj
!
public :: gettermLS
! This procedure returns all allowed subshell terms
! (l, w, Q, L, S) for given l^N which must be 0, 1, 2 or 3.
public :: inscreen
! The input from the screen.
public :: inscreenlev
! Attempts to interpret the serial level numbers from a
! string.
public :: jj2lsj
! Controls the transformation of atomic states from a jj-
! into a LS-coupled CSF basis.
public :: packlsCSF
! Encoding all CSFs in LS-coupling.
public :: prCSFjj
! Prints all information about the single CSF in jj-coupling
public :: prCSFLS
! Prints all information about the single CSF in LS-coupling
public :: prCSFLSall
! Prints all information about the single CSF scheme csf_set
! in LS-coupling
private :: setLS
! This subroutine fills up the variable asf_set_LS%csf_set_LS
! with data generated using the one from asf_set%csf_set
! .....................................................
! This subroutine contains the following internal routines:
! * subroutine setLS_action
! The subroutine defines the "action" of subroutine
! setLS_job_count: whether it counts
! the number of csfs_LS (asf_set_LS%csf_set_LS%novcsf)
! or fills the arrays of wave functions in LS coupling
! with asf_set_LS%csf_set_LS%csf(...) with
! the corresponding quantum numbers.
! * subroutine setLS_add_quantum_numbers
! The subroutine adds quantum numbers stored
! in temporary arrays Li, Si, L_i, S_i, w, Q to
! the corresponding arrays of asf_set_LS%csf_set_LS%csf().
! private :: setLS_job_count
! * recursive subroutine setLS_job_count
! Recursive subroutine for the calculation of the
! number of csfs_LS and corresponding quantum numbers.
! * function setLS_equivalent_csfs
! This subroutine defines the "equivalency" of two csfs_jj
! in the sense of generation of csfs_LS
! number of csfs_LS and corresponding quantum numbers.
! .....................................................
public :: traLSjj
! Return the value of the transformation matrix
! from jj- to LS-coupling scheme in the case of any
! number of open shells.
public :: traLSjjmp
! Return the value of main part of the transformation
! matrix from jj- to LS-coupling scheme in the
! case of any number of open shells.
private :: uniquelsj
! Subroutine defines unique labels for energy levels
!-----------------------------------------------
! D e f i n i t i o n o f A S F
! i n L S - C o u p l i n g
!-----------------------------------------------
type, public :: nl
integer :: n, l
end type nl
!
type, public :: cs_function_LS
integer :: totalJ
character(len=1) :: parity
integer, dimension(:), pointer :: occupation
integer, dimension(:), pointer :: seniority
integer, dimension(:), pointer :: w
integer, dimension(:), pointer :: shellL
integer, dimension(:), pointer :: shellS
integer, dimension(:), pointer :: shellLX
integer, dimension(:), pointer :: shellSX
end type cs_function_LS
!
type, public :: csf_basis_LS
integer :: nocsf ! Number of CSF in the basis.
integer :: nwshells ! Number of (nonrelativistic) shells.
integer :: nwcore ! Number of (closed) core shells.
integer :: number_of_electrons
type(nl), dimension(:), pointer :: shell
type(cs_function_LS), dimension(:), pointer :: csf
type(parent_from_jj), dimension(:), pointer :: parent
end type csf_basis_LS
!
type, public :: as_function_LS
integer :: level_No
integer :: max_csf_No
integer :: totalL, totalS, totalJ
character(len=1) :: parity
real(DOUBLE) :: energy
real(DOUBLE), dimension(:), pointer :: eigenvector
end type as_function_LS
!
type, public :: asf_basis_LS
integer :: noasf ! Number of considered ASF.
real(DOUBLE) :: average_energy ! Averaged energy of this set of ASF.
type(as_function_LS), dimension(:), pointer :: asf
type(csf_basis_LS) :: csf_set_LS
end type asf_basis_LS
!-----------------------------------------------
! G l o b a l V a r i a b l e s
!-----------------------------------------------
type, public :: parent_from_jj
integer :: parent_minus
integer :: parent_plus
end type parent_from_jj
!
type, public :: lsj_list
integer::list_size !number of items in a list
integer, dimension(:),pointer:: items !serial numbers of lists items
end type lsj_list
!
type(asf_basis_LS), public :: asf_set_LS
!
integer, private :: IMINCOMPOFF
integer, dimension(:,:),pointer:: Jcoup ! xJ values
real(DOUBLE) :: EPSNEW, MINCOMP
character(len=1), dimension(0:20), private, parameter :: L_string =&
(/ "S","P","D","F","G","H","I","K","L","M","N","O","Q", &
"R","T","U","V","W","X","Y","Z" /)
!-----------------------------------------------
! L o c a l P a r a m e t e r s
!-----------------------------------------------
INTEGER, PARAMETER :: Blocks_number = 20
INTEGER, PARAMETER :: Vectors_number = 100000
!-----------------------------------------------
!
CONTAINS
!
!***********************************************************************
! *
SUBROUTINE asf2ls(iw1,ithresh,levmax,IBLKNUM,levels,NCFMIN,NCFMAX)
! *
! Expands atomic state functions from the same block, *
! which is represented in a jj-coupling CSF basis into a basis *
! of LS-coupling CSF. *
! *
! Calls: ispar, itjpo, tranLSjj. *
! *
! Written by G. Gaigalas, *
! NIST last update: May 2011 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
USE EIGV_C, ONLY: EVEC
USE PRNT_C, ONLY: IVEC
USE ORB_C, ONLY: NCF
USE CONS_C, ONLY: ZERO, ONE
! USE JJ2LSJ_C
!-----------------------------------------------
! I n t e r f a c e B l o c k s
!-----------------------------------------------
USE itjpo_I
USE ispar_I
! USE idigit_I
! USE ittk_I
! USE ixjtik_I
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
integer, intent(in) :: iw1, levmax,IBLKNUM
integer, intent(in) :: NCFMIN, NCFMAX
integer, dimension(:), intent(in) :: ithresh
integer, dimension(Blocks_number,Vectors_number), intent(in) :: levels
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
integer :: jj_number, lev, level, LS_number
integer :: LOC, IMINCOMP
real(DOUBLE) :: wa_transformation
real(DOUBLE), dimension(Vectors_number) :: wa
real(DOUBLE), dimension(Vectors_number) :: wb
!-----------------------------------------------
wb = zero
do LS_number = 1, asf_set_LS%csf_set_LS%nocsf
if ((asf_set_LS%csf_set_LS%csf(LS_number)%parity == "+" &
.and. ISPAR(iw1) == 1) .or. &
(asf_set_LS%csf_set_LS%csf(LS_number)%parity == "-" &
.and. ISPAR(iw1) == -1)) then
wa = zero
do jj_number = NCFMIN, NCFMAX
if(ithresh(jj_number) == 1 .and. &
(asf_set_LS%csf_set_LS%csf(LS_number)%totalJ == &
ITJPO(jj_number)-1)) then
wa_transformation = traLSjj(jj_number,LS_number)
do lev = 1,levmax
level = levels(IBLKNUM,lev)
LOC = (level-1)*NCF
wa(lev) = wa(lev)+EVEC(jj_number+LOC)*wa_transformation
end do
end if
end do
IMINCOMP = 1
do lev = 1,levmax
level = levels(IBLKNUM,lev)
wb(lev) = wa(lev)*wa(lev) + wb(lev)
asf_set_LS%asf(level)%eigenvector(LS_number) = wa(lev)
asf_set_LS%asf(level)%level_No = IVEC(level)
if((wb(lev)*100.+dabs(MINCOMP)) <= 100.) IMINCOMP = 0
end do
if(IMINCOMPOFF == 1 ) THEN
if(IMINCOMP == 1) GO TO 1
end if
end if
end do
1 continue
return
END SUBROUTINE asf2ls
!
!***********************************************************************
! *
FUNCTION coefLSjj(l_shell,N,w,Q,L,S,J,jm_shell,Nm,Qm,Jm,jp_shell,&
Qp,Jp) result(wa)
! *
! Returns the value of the LS-jj transformation matrix for a given *
! set of quantum numbers from an array LS_jj_*_*. *
! *
! Note that all (generalized) angular momentum quantum numbers *
! except for l must be given as twice the original numbers, i.e. *
! for the quantum numbers Q, L, S, J, jm_shell, Qm, Jm, jp_shell, *
! Qp, Jp. *
! *
! Calls: coefLSjj2, coefLSjjs. *
! *
! Written by G. Gaigalas, *
! NIST last update: May 2011 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
USE CONS_C, ONLY: ZERO, ONE
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
integer, intent(in) :: l_shell, N, w, Q, L, S, J, &
jm_shell, Nm, Qm, Jm, jp_shell, Qp, Jp
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
real(DOUBLE) :: wa
integer :: NN, N1, N2, factor
!-----------------------------------------------
wa = zero; factor = ONE
if (jm_shell < jp_shell .or. jm_shell == jp_shell) then
if (l_shell > 0 .and. N > 2*l_shell +1) then
NN = 4*l_shell + 2 - N; N1 = jm_shell + 1 - Nm
N2 = jp_shell + 1 - N + Nm
if (mod(2*l_shell+1-Q -((jm_shell+1)/2-Qm) &
-((jp_shell+1)/2-Qp),4) /= 0) factor = - factor
else
NN = N; N1 = Nm; N2 = N - Nm;
end if
if (NN == 1 .or. NN == 0) then
if (NN == 0 .and. N1 == 0 .and. N2 == 0) then
wa = one
else if (N1 == 1 .and. N2 == 0 .and. J == Jm) then
wa = one
else if (N1 == 0 .and. N2 == 1 .and. J == Jp) then
wa = one
else
wa = zero
end if
else if (NN == 2) then
if (J > Jm + Jp .or. J < abs(Jm - Jp)) then
wa = zero
else
if (N1 == 2 .and. N2 == 0) then
wa = coefLSjj2(l_shell,L,S,J,jm_shell,jm_shell)
else if (N1 == 1 .and. N2 == 1) then
wa = coefLSjj2(l_shell,L,S,J,jm_shell,jp_shell)
else if (N1 == 0 .and. N2 == 2) then
wa = coefLSjj2(l_shell,L,S,J,jp_shell,jp_shell)
end if;
end if
else if (l_shell==1 .or. l_shell==2 .or. l_shell==3) then
wa = coefLSjjs(l_shell,NN,w,Q,L,S,J,N1,Qm,Jm,Qp,Jp)
end if
else if (l == 0) then
if (S == J .and. jm_shell == 1 .and. NN == N1) then
wa = one
else if (S == J .and. jp_shell == 1 .and. NN == N2) then
wa = one
else
wa = zero
end if
else
stop "coefLSjj: program stop A."
end if
wa = factor * wa
END FUNCTION coefLSjj
!
!***********************************************************************
! *
FUNCTION coefLSjj2(l_shell,L,S,J,jm_shell,jp_shell) result(wa)
! *
! Returns the value of the LS-jj transformation matrix *
! (l^2 LSJ| j_1 j_2 J). *
! *
! Note that all (generalized) angular momentum quantum numbers *
! except for l must be given as twice the original numbers, i.e. *
! for the quantum numbers L, S, J, jm_shell, jp_shell. *
! *
! Calls: nine. *
! *
! Written by G. Gaigalas, *
! NIST last update: May 2011 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
USE CONS_C, ONLY: ZERO, ONE
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
integer, intent(in) :: l_shell, L, S, J, jm_shell, jp_shell
real(DOUBLE) :: wa
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
integer :: delta_J
real(DOUBLE) :: RAC9
!-----------------------------------------------
wa = zero
if (mod(L+S,4) /= 0) then
wa = zero
elseif (mod(L+S+J,2) /= 0) then
wa = zero
elseif (mod(jm_shell+jp_shell+J,2) /= 0) then
wa = zero
else
if (jm_shell == jp_shell) then
if (mod(J,4) /= 0) then
wa = zero
else
wa = (jm_shell+one) * sqrt((L+one) * (S+one))
end if
else
wa = sqrt(2*(L+one)*(S+one)*(jm_shell+one)*(jp_shell+one))
end if
call nine(2*l_shell,2*l_shell,L,1,1,S,jm_shell,jp_shell,J, &
1,delta_J,RAC9)
if (delta_J /= 0) then
call nine(2*l_shell,2*l_shell,L,1,1,S,jm_shell,jp_shell,J,&
0,delta_J,RAC9)
wa = wa * RAC9
end if
end if
END FUNCTION coefLSjj2
!
!***********************************************************************
! *
FUNCTION coefLSjjs(lshell,N,w,Q,L,S,J,Nm,Qm,Jm,Qp,Jp) result(wa)
! *
! Returns the value of the LS-jj transformation matrix for a given *
! set of quantum numbers. *
! *
! Note that all (generalized) angular momentum quantum numbers *
! except for l must be given as twice the original numbers, i.e. *
! for the quantum numbers Q, L, S, J, Qm, Jm, Qp, Jp. *
! *
! Calls: *
! *
! Written by G. Gaigalas, *
! NIST last update: May 2011 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
USE jj2lsj_C
USE jj2lsj_data_1_C
USE jj2lsj_data_2_C
USE jj2lsj_data_3_C
USE CONS_C, ONLY: ZERO, ONE
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
integer, intent(in) :: lshell, N, w, Q, L, S, J, Nm, Qm, Jm, Qp, Jp
real(DOUBLE) :: wa
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
integer :: i
!-----------------------------------------------
wa = zero
if (lshell == 0) then
else if (lshell == 1) then
select case(N)
case(3)
! Use data from the array LS_jj_p_3
do i = 1,LS_jj_number_p3
if(w ==LS_jj_p_3(i)%w .and. Q ==LS_jj_p_3(i)%Q .and. &
L ==LS_jj_p_3(i)%L .and. S ==LS_jj_p_3(i)%S .and. &
J ==LS_jj_p_3(i)%J .and. Nm ==LS_jj_p_3(i)%Nm .and. &
Qm ==LS_jj_p_3(i)%Qm .and. Jm ==LS_jj_p_3(i)%Jm .and. &
Qp ==LS_jj_p_3(i)%Qp .and. Jp ==LS_jj_p_3(i)%Jp) then
wa = LS_jj_p_3(i)%factor * &
sqrt( one*LS_jj_p_3(i)%nom/LS_jj_p_3(i)%denom )
return
end if
end do
case(4)
! Use data from the array LS_jj_p_4
do i = 1,LS_jj_number_p4
if(w ==LS_jj_p_4(i)%w .and. Q ==LS_jj_p_4(i)%Q .and. &
L ==LS_jj_p_4(i)%L .and. S ==LS_jj_p_4(i)%S .and. &
J ==LS_jj_p_4(i)%J .and. Nm ==LS_jj_p_4(i)%Nm .and. &
Qm ==LS_jj_p_4(i)%Qm .and. Jm ==LS_jj_p_4(i)%Jm .and. &
Qp ==LS_jj_p_4(i)%Qp .and. Jp ==LS_jj_p_4(i)%Jp) then
wa = LS_jj_p_4(i)%factor * &
sqrt( one*LS_jj_p_4(i)%nom/LS_jj_p_4(i)%denom )
return
end if
end do
case(5)
! Use data from the array LS_jj_p_5
do i = 1,LS_jj_number_p5
if(w ==LS_jj_p_5(i)%w .and. Q ==LS_jj_p_5(i)%Q .and. &
L ==LS_jj_p_5(i)%L .and. S ==LS_jj_p_5(i)%S .and. &
J ==LS_jj_p_5(i)%J .and. Nm ==LS_jj_p_5(i)%Nm .and. &
Qm ==LS_jj_p_5(i)%Qm .and. Jm ==LS_jj_p_5(i)%Jm .and. &
Qp ==LS_jj_p_5(i)%Qp .and. Jp ==LS_jj_p_5(i)%Jp) then
wa = LS_jj_p_5(i)%factor * &
sqrt( one*LS_jj_p_5(i)%nom/LS_jj_p_5(i)%denom )
return
end if
end do
case(6)
! Use data from the array LS_jj_p_6
do i = 1,LS_jj_number_p6
if(w ==LS_jj_p_6(i)%w .and. Q ==LS_jj_p_6(i)%Q .and. &
L ==LS_jj_p_6(i)%L .and. S ==LS_jj_p_6(i)%S .and. &
J ==LS_jj_p_6(i)%J .and. Nm ==LS_jj_p_6(i)%Nm .and. &
Qm ==LS_jj_p_6(i)%Qm .and. Jm ==LS_jj_p_6(i)%Jm .and. &
Qp ==LS_jj_p_6(i)%Qp .and. Jp ==LS_jj_p_6(i)%Jp) then
wa = LS_jj_p_6(i)%factor * &
sqrt( one*LS_jj_p_6(i)%nom/LS_jj_p_6(i)%denom )
return
end if
end do
case default
stop "coefLSjjs: program stop A."
end select
else if (lshell == 2) then
select case(N)
case(3)
! Use data from the array LS_jj_d_3
do i = 1,LS_jj_number_d3
if(w ==LS_jj_d_3(i)%w .and. Q ==LS_jj_d_3(i)%Q .and. &
L ==LS_jj_d_3(i)%L .and. S ==LS_jj_d_3(i)%S .and. &
J ==LS_jj_d_3(i)%J .and. Nm ==LS_jj_d_3(i)%Nm .and. &
Qm ==LS_jj_d_3(i)%Qm .and. Jm ==LS_jj_d_3(i)%Jm .and. &
Qp ==LS_jj_d_3(i)%Qp .and. Jp ==LS_jj_d_3(i)%Jp) then
wa = LS_jj_d_3(i)%factor * &
sqrt( one*LS_jj_d_3(i)%nom/LS_jj_d_3(i)%denom )
return
end if
end do
case(4)
! Use data from the array LS_jj_d_4
do i = 1,LS_jj_number_d4
if(w ==LS_jj_d_4(i)%w .and. Q ==LS_jj_d_4(i)%Q .and. &
L ==LS_jj_d_4(i)%L .and. S ==LS_jj_d_4(i)%S .and. &
J ==LS_jj_d_4(i)%J .and. Nm ==LS_jj_d_4(i)%Nm .and. &
Qm ==LS_jj_d_4(i)%Qm .and. Jm ==LS_jj_d_4(i)%Jm .and. &
Qp ==LS_jj_d_4(i)%Qp .and. Jp ==LS_jj_d_4(i)%Jp) then
wa = LS_jj_d_4(i)%factor * &
sqrt( one*LS_jj_d_4(i)%nom/LS_jj_d_4(i)%denom )
return
end if
end do
case(5)
! Use data from the array LS_jj_d_5
do i = 1,LS_jj_number_d5
if(w ==LS_jj_d_5(i)%w .and. Q ==LS_jj_d_5(i)%Q .and. &
L ==LS_jj_d_5(i)%L .and. S ==LS_jj_d_5(i)%S .and. &
J ==LS_jj_d_5(i)%J .and. Nm ==LS_jj_d_5(i)%Nm .and. &
Qm ==LS_jj_d_5(i)%Qm .and. Jm ==LS_jj_d_5(i)%Jm .and. &
Qp ==LS_jj_d_5(i)%Qp .and. Jp ==LS_jj_d_5(i)%Jp) then
wa = LS_jj_d_5(i)%factor * &
sqrt( one*LS_jj_d_5(i)%nom/LS_jj_d_5(i)%denom )
return
end if
end do
case(6)
! Use data from the array LS_jj_d_6
do i = 1,LS_jj_number_d6
if(w ==LS_jj_d_6(i)%w .and. Q ==LS_jj_d_6(i)%Q .and. &
L ==LS_jj_d_6(i)%L .and. S ==LS_jj_d_6(i)%S .and. &
J ==LS_jj_d_6(i)%J .and. Nm ==LS_jj_d_6(i)%Nm .and. &
Qm ==LS_jj_d_6(i)%Qm .and. Jm ==LS_jj_d_6(i)%Jm .and. &
Qp ==LS_jj_d_6(i)%Qp .and. Jp ==LS_jj_d_6(i)%Jp) then
wa = LS_jj_d_6(i)%factor * &
sqrt( one*LS_jj_d_6(i)%nom/LS_jj_d_6(i)%denom )
return
end if
end do
case(7)
! Use data from the array LS_jj_d_7
do i = 1,LS_jj_number_d7
if(w ==LS_jj_d_7(i)%w .and. Q ==LS_jj_d_7(i)%Q .and. &
L ==LS_jj_d_7(i)%L .and. S ==LS_jj_d_7(i)%S .and. &
J ==LS_jj_d_7(i)%J .and. Nm ==LS_jj_d_7(i)%Nm .and. &
Qm ==LS_jj_d_7(i)%Qm .and. Jm ==LS_jj_d_7(i)%Jm .and. &
Qp ==LS_jj_d_7(i)%Qp .and. Jp ==LS_jj_d_7(i)%Jp) then
wa = LS_jj_d_7(i)%factor * &
sqrt( one*LS_jj_d_7(i)%nom/LS_jj_d_7(i)%denom )
return
end if
end do
case(8)
! Use data from the array LS_jj_d_8
do i = 1,LS_jj_number_d8
if(w ==LS_jj_d_8(i)%w .and. Q ==LS_jj_d_8(i)%Q .and. &
L ==LS_jj_d_8(i)%L .and. S ==LS_jj_d_8(i)%S .and. &
J ==LS_jj_d_8(i)%J .and. Nm ==LS_jj_d_8(i)%Nm .and. &
Qm ==LS_jj_d_8(i)%Qm .and. Jm ==LS_jj_d_8(i)%Jm .and. &
Qp ==LS_jj_d_8(i)%Qp .and. Jp ==LS_jj_d_8(i)%Jp) then
wa = LS_jj_d_8(i)%factor * &
sqrt( one*LS_jj_d_8(i)%nom/LS_jj_d_8(i)%denom )
return
end if
end do
case(9)
! Use data from the array LS_jj_d_9
do i = 1,LS_jj_number_d9
if(w ==LS_jj_d_9(i)%w .and. Q ==LS_jj_d_9(i)%Q .and. &
L ==LS_jj_d_9(i)%L .and. S ==LS_jj_d_9(i)%S .and. &
J ==LS_jj_d_9(i)%J .and. Nm ==LS_jj_d_9(i)%Nm .and. &
Qm ==LS_jj_d_9(i)%Qm .and. Jm ==LS_jj_d_9(i)%Jm .and. &
Qp ==LS_jj_d_9(i)%Qp .and. Jp ==LS_jj_d_9(i)%Jp) then
wa = LS_jj_d_9(i)%factor * &
sqrt( one*LS_jj_d_9(i)%nom/LS_jj_d_9(i)%denom )
return
end if
end do
case(10)
! Use data from the array LS_jj_d_10
do i = 1,LS_jj_number_d10
if(w ==LS_jj_d_10(i)%w .and. Q ==LS_jj_d_10(i)%Q .and. &
L ==LS_jj_d_10(i)%L .and. S ==LS_jj_d_10(i)%S .and. &
J ==LS_jj_d_10(i)%J .and. Nm==LS_jj_d_10(i)%Nm .and. &
Qm==LS_jj_d_10(i)%Qm .and. Jm==LS_jj_d_10(i)%Jm .and. &
Qp==LS_jj_d_10(i)%Qp .and. Jp==LS_jj_d_10(i)%Jp) then
wa= LS_jj_d_10(i)%factor * &
sqrt( one*LS_jj_d_10(i)%nom/LS_jj_d_10(i)%denom )
return
end if
end do
case default
stop "coefLSjjs: program stop B."
end select
else if (lshell == 3) then
select case(N)
case(3)
! Use data from the array LS_jj_f_3
do i = 1,LS_jj_number_f3
if(w ==LS_jj_f_3(i)%w .and. Q ==LS_jj_f_3(i)%Q .and. &
L ==LS_jj_f_3(i)%L .and. S ==LS_jj_f_3(i)%S .and. &
J ==LS_jj_f_3(i)%J .and. Nm ==LS_jj_f_3(i)%Nm .and. &
Qm ==LS_jj_f_3(i)%Qm .and. Jm ==LS_jj_f_3(i)%Jm .and. &
Qp ==LS_jj_f_3(i)%Qp .and. Jp ==LS_jj_f_3(i)%Jp) then
wa = LS_jj_f_3(i)%factor * &
sqrt( one*LS_jj_f_3(i)%nom/LS_jj_f_3(i)%denom )
return
end if
end do
case(4)
! Use data from the array LS_jj_f_4
do i = 1,LS_jj_number_f4
if(w ==LS_jj_f_4(i)%w .and. Q ==LS_jj_f_4(i)%Q .and. &
L ==LS_jj_f_4(i)%L .and. S ==LS_jj_f_4(i)%S .and. &
J ==LS_jj_f_4(i)%J .and. Nm ==LS_jj_f_4(i)%Nm .and. &
Qm ==LS_jj_f_4(i)%Qm .and. Jm ==LS_jj_f_4(i)%Jm .and. &
Qp ==LS_jj_f_4(i)%Qp .and. Jp ==LS_jj_f_4(i)%Jp) then
wa = LS_jj_f_4(i)%factor * &
sqrt( one*LS_jj_f_4(i)%nom/LS_jj_f_4(i)%denom )
return
end if
end do
case(5)
! Use data from the array LS_jj_f_5
do i = 1,LS_jj_number_f5
if(w ==LS_jj_f_5(i)%w .and. Q ==LS_jj_f_5(i)%Q .and. &
L ==LS_jj_f_5(i)%L .and. S ==LS_jj_f_5(i)%S .and. &
J ==LS_jj_f_5(i)%J .and. Nm ==LS_jj_f_5(i)%Nm .and. &
Qm ==LS_jj_f_5(i)%Qm .and. Jm ==LS_jj_f_5(i)%Jm .and. &
Qp ==LS_jj_f_5(i)%Qp .and. Jp ==LS_jj_f_5(i)%Jp) then
wa = LS_jj_f_5(i)%factor * &
sqrt( one*LS_jj_f_5(i)%nom/LS_jj_f_5(i)%denom )
return
end if
end do
case(6)
! Use data from the array LS_jj_f_6
do i = 1,LS_jj_number_f6
if(w ==LS_jj_f_6(i)%w .and. Q ==LS_jj_f_6(i)%Q .and. &
L ==LS_jj_f_6(i)%L .and. S ==LS_jj_f_6(i)%S .and. &
J ==LS_jj_f_6(i)%J .and. Nm ==LS_jj_f_6(i)%Nm .and. &
Qm ==LS_jj_f_6(i)%Qm .and. Jm ==LS_jj_f_6(i)%Jm .and. &
Qp ==LS_jj_f_6(i)%Qp .and. Jp ==LS_jj_f_6(i)%Jp) then
wa = LS_jj_f_6(i)%factor * &
sqrt( one*LS_jj_f_6(i)%nom/LS_jj_f_6(i)%denom )
return
end if
end do
case(7)
! Use data from the array LS_jj_f_7
do i = 1, LS_jj_number_f7, 1
if (w == LS_jj_f_7(i)%w) then
if(Q == LS_jj_f_7(i)%Q) then
if(L == LS_jj_f_7(i)%L) then
if(S == LS_jj_f_7(i)%S) then
if(J == LS_jj_f_7(i)%J) then
if(Nm == LS_jj_f_7(i)%Nm) then
if(Qm == LS_jj_f_7(i)%Qm .and. &
Jm == LS_jj_f_7(i)%Jm .and. &
Qp == LS_jj_f_7(i)%Qp .and. &
Jp == LS_jj_f_7(i)%Jp) then
wa = one * LS_jj_f_7(i)%factor * &
dsqrt(one*LS_jj_f_7(i)%nom)/ &
dsqrt(one*LS_jj_f_7(i)%denom )
return
end if
end if
end if
end if
end if
end if
end if
end do
case default
stop "coefLSjjs: program stop C."
end select
else
stop "coefLSjjs: program stop D."
end if
END FUNCTION coefLSjjs
!
!***********************************************************************
! *
SUBROUTINE dallocASFLS(asf_set_LS)
! *
! Deallocates the storage of asf_set_LS. *
! *
! Calls: *
! *
! Written by G. Gaigalas, *
! NIST last update: May 2011 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
USE PRNT_C, ONLY: NVEC
!-----------------------------------------------
! I n t e r f a c e B l o c k s
!-----------------------------------------------
USE itjpo_I
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
type(asf_basis_LS), intent(inout) :: asf_set_LS
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
integer :: i
!-----------------------------------------------
deallocate(asf_set_LS%csf_set_LS%shell)
deallocate(asf_set_LS%csf_set_LS%parent)
do i = 1, asf_set_LS%csf_set_LS%nocsf, 1
deallocate(asf_set_LS%csf_set_LS%csf(i)%occupation)
deallocate(asf_set_LS%csf_set_LS%csf(i)%seniority)
deallocate(asf_set_LS%csf_set_LS%csf(i)%shellL)
deallocate(asf_set_LS%csf_set_LS%csf(i)%shellS)
deallocate(asf_set_LS%csf_set_LS%csf(i)%shellLX)
deallocate(asf_set_LS%csf_set_LS%csf(i)%shellSX)
end do
deallocate(asf_set_LS%csf_set_LS%csf)
do i = 1, NVEC
deallocate(asf_set_LS%asf(i)%eigenvector)
end do
deallocate(asf_set_LS%asf)
END SUBROUTINE dallocASFLS
!
!***********************************************************************
! *
SUBROUTINE getchLS(csf_number,shell_number,string_LS, string_XLS)
! *
! The spectroscopic notation of a shell in LS coupling is returned.*
! *
! Calls: convrt_double. *
! *
! Written by G. Gaigalas, *
! NIST last update: May 2011 *
! *
!***********************************************************************
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
integer, intent(in) :: csf_number,shell_number
character(len=4), intent(out) :: string_LS, string_XLS
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
integer :: string_lenth
character(len=1) :: string_S, string_v
character(len=4) :: string_CNUM
!-----------------------------------------------
call &
convrt_double(2*(1+asf_set_LS%csf_set_LS%csf(csf_number)%shellS(shell_number)),&
string_CNUM,string_lenth)
string_S = string_CNUM(1:string_lenth)
if (asf_set_LS%csf_set_LS%shell(shell_number)%l < 3) then
call &
convrt_double(2*asf_set_LS%csf_set_LS%csf(csf_number)%seniority(shell_number),&
string_CNUM,string_lenth)
string_v = string_CNUM(1:string_lenth)
else
call &
convrt_double(2*asf_set_LS%csf_set_LS%csf(csf_number)%w(shell_number),&
string_CNUM,string_lenth)
string_v = string_CNUM(1:string_lenth)
end if
string_LS = string_S // &
L_string(asf_set_LS%csf_set_LS%csf(csf_number)%shellL(shell_number)/2)&
//string_v
!
call &
convrt_double(2*(1+asf_set_LS%csf_set_LS%csf(csf_number)%shellSX(shell_number)),&
string_CNUM,string_lenth)
string_S = string_CNUM(1:string_lenth)
string_XLS = string_S // &
L_string(asf_set_LS%csf_set_LS%csf(csf_number)%shellLX(shell_number)/2)
END SUBROUTINE getchLS
!
!***********************************************************************
! *
SUBROUTINE getxj
! *
! *
! Calls: jcup, jqs, ichop. *
! *
! Written by G. Gaigalas, *
! NIST last update: Dec 2015 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
USE M_C, ONLY: NCORE
USE ORB_C, ONLY: NCF, NW
!-----------------------------------------------
! I n t e r f a c e B l o c k s
!-----------------------------------------------
USE jqs_I
USE ichop_I
USE jcup_I
IMPLICIT NONE
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
INTEGER :: JCNT, JCNTOP, JNCF, JNW
!-----------------------------------------------
Jcoup = 0
DO JNCF = 1, NCF
JCNT = 1
JCNTOP = 0
DO JNW = NCORE+1, NW
IF(JNW == 1) THEN
IF(ICHOP(JNW,JNCF) /= 0) THEN
IF(NW == 1) THEN
Jcoup(JNW,JNCF) = 0
ELSE
IF(ICHOP(JNW+1,JNCF) /= 0) THEN
Jcoup(JNW,JNCF) = 0
Jcoup(JNW+1,JNCF) = 0
ELSE
Jcoup(JNW,JNCF) = 0
Jcoup(JNW+1,JNCF) = JQS(3,JNW+1,JNCF) - 1
JCNTOP = 1
END IF
END IF
ELSE
JCNTOP = 1
IF(NW > 1) THEN
IF (ICHOP(JNW+1,JNCF) == 0) THEN
!GG 2015_06_21 Gediminas Gaigalas
!GG Jcoup(JNW,JNCF) = JCUP(JCNT,JNCF) - 1
Jcoup(JNW,JNCF) = JQS(3,JNW,JNCF) - 1
Jcoup(JNW+1,JNCF) = JCUP(JCNT,JNCF) - 1
JCNT = JCNT + 1
ELSE
Jcoup(JNW,JNCF) = JQS(3,JNW,JNCF) - 1
Jcoup(JNW+1,JNCF) = JQS(3,JNW,JNCF) - 1
ENDIF
ELSE
Jcoup(JNW,JNCF) = JCUP(JCNT,JNCF) - 1
ENDIF
ENDIF
ELSE IF(JNW == 2 .AND. NCORE+1 .EQ. 2) THEN
IF(ICHOP(JNW,JNCF) /= 0) THEN
Jcoup(JNW,JNCF) = 0
ELSE
JCNTOP = 1
Jcoup(JNW,JNCF) = JQS(3,JNW,JNCF) - 1
ENDIF
ELSE IF(JNW > 2) THEN
IF (ICHOP(JNW,JNCF) /= 0) THEN
IF(JNW == NCORE+1) THEN
Jcoup(JNW,JNCF) = JQS(3,JNW,JNCF) - 1
ELSE
Jcoup(JNW,JNCF) = Jcoup(JNW-1,JNCF)
END IF
ELSE
IF (JCNTOP /= 0) THEN
Jcoup(JNW,JNCF) = JCUP(JCNT,JNCF) - 1
JCNT = JCNT + 1
ELSE
Jcoup(JNW,JNCF) = JQS(3,JNW,JNCF) - 1
ENDIF
JCNTOP = JCNTOP + 1
END IF
END IF
END DO
END DO
RETURN
END SUBROUTINE getxj
!
!***********************************************************************
! *
SUBROUTINE gettermLS(l_shell,N,LS,number)
! *
! This procedure returns all allowed subshell terms (l,w,Q,L,S) *
! for given l^N, N = 0, 1, 2 or 3. *
! *
! Calls: *
! *
! Written by G. Gaigalas, *
! NIST last update: May 2011 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE jj2lsj_C
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
integer, intent(in) :: l_shell, N
type(subshell_term_LS), dimension(120), intent(out) :: LS
integer, intent(out) :: number
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
integer :: M_Q, i, j
!-----------------------------------------------
M_Q = N - 2* l_shell - 1; j = 0
select case (l_shell)
case (0)
do i = 1,2
if (mod(M_Q + term_LS_s(i)%Q,2) == 0) then
if (abs(M_Q) <= term_LS_s(i)%Q) then
j = j + 1
LS(j)%l_shell = term_LS_s(i)%l_shell
LS(j)%w = term_LS_s(i)%w
LS(j)%Q = term_LS_s(i)%Q
LS(j)%LL = term_LS_s(i)%LL
LS(j)%S = term_LS_s(i)%S
end if
end if
end do
case (1)
do i = 1,6
if (mod(M_Q + term_LS_p(i)%Q,2) == 0) then
if (abs(M_Q) <= term_LS_p(i)%Q) then
j = j + 1
LS(j)%l_shell = term_LS_p(i)%l_shell
LS(j)%w = term_LS_p(i)%w
LS(j)%Q = term_LS_p(i)%Q
LS(j)%LL = term_LS_p(i)%LL
LS(j)%S = term_LS_p(i)%S
end if
end if
end do
case (2)
do i = 1,32
if (mod(M_Q + term_LS_d(i)%Q,2) == 0) then
if (abs(M_Q) <= term_LS_d(i)%Q) then
j = j + 1
LS(j)%l_shell = term_LS_d(i)%l_shell
LS(j)%w = term_LS_d(i)%w
LS(j)%Q = term_LS_d(i)%Q
LS(j)%LL = term_LS_d(i)%LL
LS(j)%S = term_LS_d(i)%S
end if
end if
end do
case (3)
do i = 1,238
if (mod(M_Q + term_LS_f(i)%Q,2) == 0) then
if (abs(M_Q) <= term_LS_f(i)%Q) then
j = j + 1
LS(j)%l_shell = term_LS_f(i)%l_shell
LS(j)%w = term_LS_f(i)%w
LS(j)%Q = term_LS_f(i)%Q
LS(j)%LL = term_LS_f(i)%LL
LS(j)%S = term_LS_f(i)%S
end if
end if
end do
case (4)
select case (N)
case (1)
i = 1; j = 1
LS(j)%l_shell = term_LS_g1(i)%l_shell
LS(j)%w = term_LS_g1(i)%w
LS(j)%Q = term_LS_g1(i)%Q
LS(j)%LL = term_LS_g1(i)%LL
LS(j)%S = term_LS_g1(i)%S
case (2)
do i = 1,9
j = j + 1
LS(j)%l_shell = term_LS_g2(i)%l_shell
LS(j)%w = term_LS_g2(i)%w
LS(j)%Q = term_LS_g2(i)%Q
LS(j)%LL = term_LS_g2(i)%LL
LS(j)%S = term_LS_g2(i)%S
end do
case default
stop "gettermLS(): program stop A."
end select
case (5)
select case (N)
case (1)
i = 1; j = 1
LS(j)%l_shell = term_LS_h1(i)%l_shell
LS(j)%w = term_LS_h1(i)%w
LS(j)%Q = term_LS_h1(i)%Q
LS(j)%LL = term_LS_h1(i)%LL
LS(j)%S = term_LS_h1(i)%S
case (2)
do i = 1,11
j = j + 1
LS(j)%l_shell = term_LS_h2(i)%l_shell
LS(j)%w = term_LS_h2(i)%w
LS(j)%Q = term_LS_h2(i)%Q
LS(j)%LL = term_LS_h2(i)%LL
LS(j)%S = term_LS_h2(i)%S
end do
case default
stop "gettermLS(): program stop B."
end select
case (6)
select case (N)
case (1)
i = 1; j = 1
LS(j)%l_shell = term_LS_i1(i)%l_shell
LS(j)%w = term_LS_i1(i)%w
LS(j)%Q = term_LS_i1(i)%Q
LS(j)%LL = term_LS_i1(i)%LL
LS(j)%S = term_LS_i1(i)%S
case (2)
do i = 1,13
j = j + 1
LS(j)%l_shell = term_LS_i2(i)%l_shell
LS(j)%w = term_LS_i2(i)%w
LS(j)%Q = term_LS_i2(i)%Q
LS(j)%LL = term_LS_i2(i)%LL
LS(j)%S = term_LS_i2(i)%S
end do
case default
stop "gettermLS(): program stop C."
end select
case (7)
select case (N)
case (1)
i = 1; j = 1
LS(j)%l_shell = term_LS_k1(i)%l_shell
LS(j)%w = term_LS_k1(i)%w
LS(j)%Q = term_LS_k1(i)%Q
LS(j)%LL = term_LS_k1(i)%LL
LS(j)%S = term_LS_k1(i)%S
case (2)
do i = 1,15
j = j + 1
LS(j)%l_shell = term_LS_k2(i)%l_shell
LS(j)%w = term_LS_k2(i)%w
LS(j)%Q = term_LS_k2(i)%Q
LS(j)%LL = term_LS_k2(i)%LL
LS(j)%S = term_LS_k2(i)%S
end do
case default
stop "gettermLS(): program stop D."
end select
case (8)
select case (N)
case (1)
i = 1; j = 1
LS(j)%l_shell = term_LS_l1(i)%l_shell
LS(j)%w = term_LS_l1(i)%w
LS(j)%Q = term_LS_l1(i)%Q
LS(j)%LL = term_LS_l1(i)%LL
LS(j)%S = term_LS_l1(i)%S
case (2)
do i = 1,17
j = j + 1
LS(j)%l_shell = term_LS_l2(i)%l_shell
LS(j)%w = term_LS_l2(i)%w
LS(j)%Q = term_LS_l2(i)%Q
LS(j)%LL = term_LS_l2(i)%LL
LS(j)%S = term_LS_l2(i)%S
end do
case default
stop "gettermLS(): program stop E."
end select
case (9)
select case (N)
case (1)
i = 1; j = 1
LS(j)%l_shell = term_LS_m1(i)%l_shell
LS(j)%w = term_LS_m1(i)%w
LS(j)%Q = term_LS_m1(i)%Q
LS(j)%LL = term_LS_m1(i)%LL
LS(j)%S = term_LS_m1(i)%S
case (2)
do i = 1,19
j = j + 1
LS(j)%l_shell = term_LS_m2(i)%l_shell
LS(j)%w = term_LS_m2(i)%w
LS(j)%Q = term_LS_m2(i)%Q
LS(j)%LL = term_LS_m2(i)%LL
LS(j)%S = term_LS_m2(i)%S
end do
case default
stop "gettermLS(): program stop F."
end select
case default
stop "gettermLS(): program stop G."
end select
number = j
END SUBROUTINE gettermLS
!
!***********************************************************************
! *
SUBROUTINE inscreen(THRESH,levels,number_of_levels,ioutC,ioutj, &
UNIQUE)
! *
! The input from the screen. *
! *
! Calls: sercsla, getxj, getmixblock, inscreenlev, openfl. *
! inscreenlev. *
! *
! Written by G. Gaigalas, *
! NIST last update: Dec 2015 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
USE ORB_C, ONLY: NCF, NW
USE PRNT_C, ONLY: NVEC
USE IOUNIT_C, ONLY: ISTDI, ISTDE
USE CONS_C, ONLY: EPS, ZERO
USE BLK_C, ONLY: NEVINBLK, NBLOCK
USE m_C, ONLY: NCORE
USE def_C, ONLY: Z, NELEC
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
integer, intent(out) :: ioutC,ioutj,UNIQUE
real(DOUBLE), intent(out) :: THRESH
integer, dimension(Blocks_number), intent(out) :: number_of_levels
integer, dimension(Blocks_number,Vectors_number), intent(out) :: levels
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
integer :: I, II, ISUM, K, NCI, ierr
integer :: IBlock, number_of_levels_tmp
logical :: yes, fail, GETYN
character(len=24) :: NAME
character(len=256) :: record, util_csl_file
integer, dimension(Blocks_number) :: posi
integer, dimension(1:Vectors_number) :: levels_tmp
!-----------------------------------------------
NBLOCK = 0
1 WRITE (ISTDE,*) 'Name of state'
READ(ISTDI,'(A)') NAME
K=INDEX(NAME,' ')
IF (K.EQ.1) THEN
WRITE (istde,*) 'Names may not start with a blank'
GOTO 1
ENDIF
!
! Open, check, load data from, and close, the .csl file
CALL SETCSLA(NAME,NCORE)
allocate(Jcoup(1:NW,1:NCF))
CALL GETXJ
WRITE (ISTDE,*)
WRITE (ISTDE,*) 'Mixing coefficients from a CI calc.?'
YES = GETYN ()
IF (YES) THEN
NCI = 0
ELSE
NCI = 1
ENDIF
!GG-2017
WRITE (ISTDE,*) 'Do you need a unique labeling? (y/n)'
YES = GETYN ()
IF (YES) THEN
UNIQUE = 1
ELSE
UNIQUE = 0
ENDIF
!GG-2017 end
!
! Get the eigenvectors
CALL GETMIXBLOCK (NAME, NCI)
WRITE (ISTDE,*) 'Default settings? (y/n) '
YES = GETYN ()
IF (YES) THEN
EPSNEW = 0.005D00
ISUM = 0
IMINCOMPOFF = 1
MINCOMP = 1
THRESH = 0.001D00
ioutC = 0
ioutj = 0
DO I = 1, NBLOCK
number_of_levels(I) = NEVINBLK(I)
IF(NEVINBLK(I) /= 0) THEN
DO II = 1, NEVINBLK(I)
ISUM = ISUM + 1
levels(I,II) = ISUM
END DO
END IF
END DO
ELSE
WRITE (ISTDE,*) 'All levels (Y/N)'
YES = GETYN ()
WRITE (istde,*)
IF (YES) THEN
ISUM = 0
DO I = 1, NBLOCK
number_of_levels(I) = NEVINBLK(I)
IF(NEVINBLK(I) /= 0) THEN
DO II = 1, NEVINBLK(I)
ISUM = ISUM + 1
levels(I,II) = ISUM
END DO
END IF
END DO
ELSE
number_of_levels = 0
posi(1) = 0
DO I = 1, NBLOCK-1
posi(I+1) = NEVINBLK(I) + posi(I)
END DO
WRITE (ISTDE,*) "Maximum number of ASFs is:",NVEC
WRITE (ISTDE,*) "Enter the level numbers of the ASF which are to be transformed,"
2 WRITE (ISTDE,*) "Enter the block number"
read (*, "(I2)") IBLOCK
WRITE (ISTDE,*) "The block number is:",IBLOCK
WRITE (ISTDE,*) " e.g. 1 3 4 7 - 20 48 69 - 85 :"
read (*, "(a)") record
call inscreenlev(record,levels_tmp,number_of_levels_tmp,fail)
if (fail) then
WRITE (ISTDE,*) "Unable to interprete the serial level numbers; redo ..."
goto 2
end if
number_of_levels(IBLOCK) = number_of_levels_tmp
if (NVEC < number_of_levels(IBLOCK)) then
WRITE (ISTDE,*) "There are to much ASF:", number_of_levels(IBLOCK)
go to 2
end if
DO I = 1,number_of_levels_tmp
levels(IBLOCK,I) = posi(IBLOCK) + levels_tmp(I)
END DO
WRITE (ISTDE,*) ""
WRITE (ISTDE,*) "Do you need to include more levels? (y/n)"
YES = GETYN ()
IF (YES) go to 2
END IF
3 WRITE (ISTDE,*) 'Maximum % of omitted composition'
READ *, MINCOMP
IF(MINCOMP == ZERO) THEN
IMINCOMPOFF = 0
EPSNEW = EPS*EPS
ioutC = 1
ioutj = 1
ELSE IF( MINCOMP > ZERO) THEN
IMINCOMPOFF = 1
WRITE (ISTDE,*) 'What is the value below which an eigenvector component'
WRITE (ISTDE,*) 'is to be neglected in the determination of the LSJ expansion:'
WRITE (ISTDE,'(A,F8.5)') ' should be smaller than:',MINCOMP*0.01
READ *, EPSNEW
ELSE
WRITE (ISTDE,*) " THe maximum of omitted composition can be 100%"
GO TO 3
END IF
WRITE (ISTDE,*) 'What is the value below which an eigenvector composition'
WRITE (ISTDE,*) 'is to be neglected for printing?'
READ *, THRESH
IF( MINCOMP > ZERO) THEN
WRITE (ISTDE,*) "Do you need the output file *.lsj.c? (y/n)"
YES = GETYN ()
IF (YES) THEN
ioutC = 1
ELSE
ioutC = 0
END IF
WRITE (ISTDE,*) "Do you need the output file *.lsj.j? (y/n)"
YES = GETYN ()
IF (YES) THEN
ioutj = 1
ELSE
ioutj = 0
END IF
END IF
ENDIF
!
WRITE (ISTDE,*)
IF (IMINCOMPOFF == 1) &
WRITE (*,"(A,F8.3)") ' Maximum % of omitted composition is ',MINCOMP
WRITE (*,"(A,ES8.1,A)") &
' Below ',EPSNEW,' the eigenvector component is to be neglected for calculating'
WRITE (*,"(A,ES8.1,A)") &
' Below ',THRESH,' the eigenvector composition is to be neglected for printing'
print *, " "
!
! Opening the files *.lsj.c
!
IF(ioutC == 1) THEN
util_csl_file = NAME(1:K-1)//'.lsj'//'.c'
call OPENFL(56,util_csl_file,"formatted","UNKNOWN",ierr)
IF (IERR .EQ. 1) THEN
print *, 'Error when opening',util_csl_file
STOP
ENDIF
ENDIF
!
! opening the files *.lsj.lbl
!
util_csl_file = NAME(1:K-1)//'.lsj'//'.lbl'
call OPENFL(57,util_csl_file,"formatted","UNKNOWN",ierr)
IF (IERR .EQ. 1) THEN
print *, 'Error when opening',util_csl_file
STOP
ENDIF
!GG-2017
IF (UNIQUE .EQ. 1) THEN
util_csl_file = NAME(1:K-1)//'.uni.lsj'//'.lbl'
call OPENFL(73,util_csl_file,"formatted","UNKNOWN",ierr)
IF (IERR .EQ. 1) THEN
print *, 'Error when opening',util_csl_file
STOP
ENDIF
util_csl_file = NAME(1:K-1)//'.uni.lsj'//'.sum'
call OPENFL(72,util_csl_file,"formatted","UNKNOWN",ierr)
IF (IERR .EQ. 1) THEN
print *, 'Error when opening',util_csl_file
STOP
ENDIF
END IF
!GG-2017 end
write(57,'(A53)') " Pos J Parity Energy Total Comp. of ASF"
!
! Opening the files *.lsj.j and
!
IF(ioutj == 1) THEN
util_csl_file = NAME(1:K-1)//'.lsj'//'.j'
call OPENFL(58,util_csl_file,"formatted","UNKNOWN",ierr)
IF (IERR .EQ. 1) THEN
print *, 'Error when opening',util_csl_file
STOP
END IF
WRITE (58,'(2X,A6,A,F5.1,A,I3,A,I7)' ) &
NAME(1:K-1),' Z = ',Z ,' NEL = ',NELEC,' NCFG ='! ,asf_set_LS%csf_set_LS%nocsf
ENDIF
END SUBROUTINE inscreen
!
!***********************************************************************
! *
SUBROUTINE inscreenlev(record,levels,number_of_levels,fail)
! *
! Attempts to interpret the serial level numbers which are given *
! in record. These level numbers can be given in the format: *
! 1 3 4 7 - 20 48 69 - 85 *
! Any order and 'overlapping' intervals are also supported. *
! The procedure returns with fail = .true. if the level numbers *
! cannot be interpreted properly (fail = .false. otherwise). *
! *
! The level numbers are returned in the vector *
! levels(1:number_of_levels). *
! The procedure assumes that this vector has a sufficient *
! dimension to store all level numbers. *
! *
! Calls: idigit. *
! *
! Written by G. Gaigalas, *
! NIST last update: Dec 2015 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
!-----------------------------------------------
! I n t e r f a c e B l o c k s
!-----------------------------------------------
USE idigit_I
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
character(len=*), intent(in) :: record
logical, intent(out) :: fail
integer, intent(out) :: number_of_levels
integer, dimension(1:Vectors_number), intent(inout) :: levels
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
!GG logical, dimension(200) :: low_to
logical, dimension(2000) :: low_to
character(len=500) :: string
integer :: a, i, lower, n
integer, dimension(2000) :: low
!GG integer, dimension(200) :: low
integer, dimension(Vectors_number) :: run
!-----------------------------------------------
fail = .true.; levels(1:Vectors_number) = 0; number_of_levels = 0
run(1:Vectors_number) = 0; string = adjustl(record)
n = 0; lower = 0; low_to(:) = .false.
1 string = adjustl(string)
if (string(1:1) == " ") then
if (n == 0 .or. low_to(n)) then
return
else
goto 10
end if
else if (string(1:1) == "-") then
if (n == 0) return
low_to(n) = .true.; string(1:1) = " "
goto 1
else if (string(1:1) == "0" .or. string(1:1) == "1" .or. &
string(1:1) == "2" .or. string(1:1) == "3" .or. &
string(1:1) == "4" .or. string(1:1) == "5" .or. &
string(1:1) == "6" .or. string(1:1) == "7" .or. &
string(1:1) == "8" .or. string(1:1) == "9") then
a = idigit(string(1:1)); lower = 10*lower + a
if (string(2:2) == " " .or. string(2:2) == "-") then
n = n + 1; low(n) = lower; lower = 0
end if
string(1:1) = " "
goto 1
end if
!
! Determine no_eigenpairs and max_eigenpair
10 levels(:) = 0
do i = 1,n
if (low_to(i)) then
if (low(i) <= low(i+1)) then; run(low(i):low(i+1)) = 1
else; run(low(i+1):low(i)) = 1
end if
if (low_to(i+1)) then; return
else; cycle
end if
else
run(low(i)) = 1
end if
end do
number_of_levels = 0
do i = 1,Vectors_number
if (run(i) == 1) then
number_of_levels = number_of_levels + 1
levels(number_of_levels) = i
end if
end do
fail = .false.
END SUBROUTINE inscreenlev
!
!***********************************************************************
! *
SUBROUTINE jj2lsj
! *
! Controls the transformation of atomic states from a jj- *
! to a LS-coupled CSF basis. *
! *
! Calls: asf2LS, convrt_double, dallocASFLS, inscreen, ispar, *
! itjpo, packlsCSF, prCSFLSall, prCSFjj, prCSFLS, prCSFall, *
! setLS. *
! *
! Written by G. Gaigalas, *
! NIST last update: Dec 2015 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
USE EIGV_C, ONLY: EAV, EVAL, EVEC
USE ORB_C, ONLY: NCF
USE PRNT_C, ONLY: IVEC, NVEC
USE IOUNIT_C, ONLY: ISTDE
USE CONS_C, ONLY: ZERO, ONE
USE def_C, ONLY: Z, NELEC
USE blk_C, ONLY: NEVINBLK, NCFINBLK, NBLOCK, TWO_J
!-----------------------------------------------
! I n t e r f a c e B l o c k s
!-----------------------------------------------
USE itjpo_I
USE ispar_I
IMPLICIT NONE
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
!GG NIST
integer :: i, j, jj, ii, string_l, IBLKNUM,ioutC,ioutj,UNIQUE
integer :: level, nocsf_min, lev, string_length
integer :: nocsf_max, sum_nocsf_min, Before_J
!GG NIST
integer :: LOC, NCFMIN, NCFMAX, NCF_LS_jj_MAX
real(DOUBLE) :: THRESH, wa, wb,Ssms,g_j,g_JLS, sumthrsh
character(len=4) :: string_CNUM
!GG character(len=64) :: string_CSF_ONE
character(len=164) :: string_CSF_ONE
integer, dimension(Blocks_number) :: number_of_levels
integer, dimension(Blocks_number,Vectors_number) :: levels
integer, dimension(1:Vectors_number) :: max_comp
integer, dimension(1:Vectors_number) :: leading_LS
!GG integer, dimension(1:100) :: iw
integer, dimension(:), pointer :: iw
!GG real(DOUBLE), dimension(1:100) :: weights, weights2
real(DOUBLE), dimension(:), pointer :: weights, weights2
integer, dimension(:), pointer :: ithresh
!GG character(LEN=64), dimension(1:Vectors_number) :: string_CSF
character(LEN=164), dimension(1:Vectors_number) :: string_CSF
!-----------------------------------------------
Ssms = ZERO; g_j = ZERO; g_JLS = ZERO; Before_J = 0
call inscreen(THRESH,levels,number_of_levels,ioutC,ioutj,UNIQUE)
allocate(ithresh(NCF))
do IBLKNUM = 1, NBLOCK
if(IBLKNUM == 1) THEN
NCFMIN = 1
NCFMAX = NCFINBLK(IBLKNUM)
else
NCFMIN = NCFMAX + 1
NCFMAX = NCFMIN + NCFINBLK(IBLKNUM) - 1
end if
if(number_of_levels(IBLKNUM) == 0) GO TO 1
ithresh = 0
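! Flag the jj-coupled CSFs whose summed |eigenvector component| over the
! selected levels of this block is at least EPSNEW; only the flagged CSFs
! take part in the jj -> LSJ transformation below.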
do i = NCFMIN, NCFMAX
sumthrsh = ZERO
do lev = 1, number_of_levels(IBLKNUM)
level = levels(IBLKNUM,lev)
LOC = (level-1)*NCF
sumthrsh = sumthrsh + dabs(EVEC(i+LOC))
end do
if(dabs(sumthrsh) >= dabs(EPSNEW)) ithresh(i) = 1
end do
call setLS(ithresh,NCFMIN,NCFMAX)
!
! output to *.lsj.c
if(ioutC == 1) call prCSFLSall (56,asf_set_LS%csf_set_LS,Before_J)
allocate(asf_set_LS%asf(1:NVEC))
do i = 1, NVEC
allocate(asf_set_LS%asf(i)%eigenvector(1:asf_set_LS%csf_set_LS%nocsf))
end do
do lev = 1, number_of_levels(IBLKNUM)
level = levels(IBLKNUM,lev)
asf_set_LS%asf(level)%level_No = level
asf_set_LS%asf(level)%energy = EAV+EVAL(level)
asf_set_LS%asf(level)%eigenvector = ZERO
end do
asf_set_LS%noasf = number_of_levels(IBLKNUM)
nocsf_max = asf_set_LS%csf_set_LS%nocsf
!GG NIST
NCF_LS_jj_MAX = MAX(1000,NCFMAX-NCFMIN+1,asf_set_LS%csf_set_LS%nocsf+1)
allocate (weights(NCF_LS_jj_MAX))
allocate (weights2(NCF_LS_jj_MAX))
allocate (iw(NCF_LS_jj_MAX))
do lev = 1, number_of_levels(IBLKNUM)
level = levels(IBLKNUM,lev)
weights = ZERO; iw = 0; wb = ZERO
do i = NCFMIN, NCFMAX
if(ithresh(i) == 1) then
LOC = (level-1)*NCF
wa = EVEC(i+LOC) * EVEC(i+LOC)
wb = wb + wa
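! Insertion sort of the jj-weights: keep the largest contributions (at most
! 1000 of them) in weights, with iw holding the block-relative CSF index of
! each contribution.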
do j = 1,999
if (wa > weights(j)) then
weights(j+1:1000) = weights(j:999)
weights(j) = wa
iw(j+1:1000) = iw(j:999)
iw(j) = i - NCFMIN + 1
exit
end if
end do
end if
end do
if(lev == 1) then
print *, " "
WRITE(*,'(A,A)') " . . . . . . . . . . . . . . .",&
" . . . . . . . . . . . . . . . . . . . ."
WRITE(*, '(A,2X,I4,16X,A,I3)') &
" Under investigation is the block:",IBLKNUM, &
" The number of eigenvectors:", number_of_levels(IBLKNUM)
WRITE(*,'(A,I10,10X,A,I10)') &
" The number of CSF (in jj-coupling):",NCFINBLK(IBLKNUM), &
" The number of CSF (in LS-coupling):",asf_set_LS%csf_set_LS%nocsf
else
print *, " "
WRITE(*,'(A)') " . . . . . . . . . . . . . . . . . ."
WRITE(*,'(A)') " The new level is under investigation."
end if
!
! perform the transformation
!
if(lev == 1) call asf2ls &
(iw(1),ithresh,number_of_levels(IBLKNUM),IBLKNUM,levels,NCFMIN,NCFMAX)
!
! output to the screen jj- coupling
print *, "Weights of major contributors to ASF in jj-coupling:"
print *, " "
print *, " Level J Parity CSF contributions"
print *, " "
if (level > 1 .and. dabs(wb) > 1.0001) then
print *, "level, wb = ",level,wb
stop "JJ2LSJ(): program stop A."
end if
nocsf_min = 5
do j = 1,5
if(abs(weights(j)) < 0.00001) then
nocsf_min = j - 1
exit
end if
end do
IF(ISPAR(iw(1)) == -1) THEN
asf_set_LS%asf(level)%parity = "-"
ELSE IF(ISPAR(iw(1)) == 1) THEN
asf_set_LS%asf(level)%parity = "+"
ELSE
STOP "JJ2LSJ: program stop D."
END IF
asf_set_LS%asf(level)%totalJ = ITJPO(iw(1)+NCFMIN-1) - 1
call convrt_double(asf_set_LS%asf(level)%totalJ,string_CNUM,string_l)
print 16, IVEC(level),string_CNUM(1:string_l), &
asf_set_LS%asf(level)%parity,(weights(j),iw(j),j=1,nocsf_min)
print*, " Total sum over weight (in jj) is:",wb
print *, " "
print *, "Definition of leading CSF:"
print *, " "
!CGG call prCSFjj(-1,iw(1))
call prCSFjj(-1,iw(1)+NCFMIN-1,iw(1))
!
! output to the screen LS- coupling
print *, " "
print *, " "
print *, "Weights of major contributors to ASF in LS-coupling:"
print *, " "
print *, " Level J Parity CSF contributions"
print *, " "
sum_nocsf_min = 0
weights = ZERO; weights2 = ZERO; iw = 0; wb = ZERO
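! Collect the LS-expansion coefficients of this level in decreasing order of
! weight (wa*wa): weights holds the coefficients, weights2 their weights and
! iw the corresponding LS-CSF indices.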
do i = 1,asf_set_LS%csf_set_LS%nocsf
wa = asf_set_LS%asf(level)%eigenvector(i)
wb = wb + wa*wa
!GG NIST
if(i == 1) then
weights(1) = wa
weights2(1) = wa*wa
iw(1) = 1
else
do j = 1,i
if(j == i) then
weights(i) = wa
weights2(i) = wa*wa
iw(i) = i
else
if (wa*wa > weights2(j)) then
do ii = i,j,-1
weights2(ii+1) = weights2(ii)
weights(ii+1) = weights(ii)
iw(ii+1) = iw(ii)
end do
weights(j) = wa
weights2(j) = wa*wa
iw(j) = i
exit
end if
end if
end do
end if
end do
if(IMINCOMPOFF == 1) THEN
if (level > 1 .and. (wb*100.+dabs(MINCOMP)) <= 100.) THEN
WRITE (*,"(A,F5.0,A)") &
"Attention!!! The program not reach the accuracy",MINCOMP," %"
print *, "level, sum of weights in LSJ = ",level,wb
print *,""
end if
else
if (level > 1 .and. abs(wb) > 1.001) then
print*,"Attention!!! The sum of weights is bigger then 1.0001"
print *, "level, sum of weights in LSJ = ",level,wb
end if
if (level > 1 .and. abs(wb) < .998) then
print*,"Attention!!! The sum of weights is little then .998"
print *, "level, sum of weights in LSJ = ",level,wb
end if
end if
nocsf_min = nocsf_max; jj = 0
do j = 1,nocsf_max
if(dabs(weights2(j)) < dabs(THRESH)) then
nocsf_min = j - 1
exit
end if
jj = jj + 1
leading_LS(jj) = iw(j)
end do
max_comp(level) = IW(1)
sum_nocsf_min = sum_nocsf_min + nocsf_min
if (nocsf_min <= 0) then
stop "JJ2LSJ(): program stop C."
end if
call convrt_double(asf_set_LS%csf_set_LS%csf(iw(1))%totalJ,string_CNUM,string_l)
print 16, asf_set_LS%asf(level)%level_No,string_CNUM(1:string_l), &
asf_set_LS%csf_set_LS%csf(iw(1))%parity, &
(weights2(j),iw(j),J=1,nocsf_min)
print*, " Total sum over weight (in LSJ) is:",wb
print *, " "
print *, "Definition of leading CSF:"
print *, " "
do i = 1,NCF
do j = 1, sum_nocsf_min
if (i == leading_LS(j)) then
call prCSFLS (-1,asf_set_LS%csf_set_LS,leading_LS(j))
exit
end if
end do
end do
!
! output to *.lsj.lbl
IF(Before_J == 0) THEN
Before_J = 1
ELSE IF(lev == 1 .and. Before_J == 1) THEN
write(57,'(A1)') ' '
END IF
DO j = 1,nocsf_min
CALL packlsCSF(asf_set_LS%csf_set_LS,iw(j),string_CSF_ONE)
IF(J == 1) THEN
write(57,'(1X,I2,1X,A4,5X,A1,8X,F16.9,5X,F7.3,A)') &
asf_set_LS%asf(level)%level_No,string_CNUM(1:string_l),&
asf_set_LS%csf_set_LS%csf(iw(1))%parity, &
asf_set_LS%asf(level)%energy,wb*100,"%"
string_CSF(level) = string_CSF_ONE
END IF
string_length = Len_Trim(string_CSF_ONE)
write(57,'(7X,F12.8,3X,F11.8,3X,A)') weights(j),weights2(j),string_CSF_ONE(1:string_length)
END DO
! output *.lsj.j and
IF(ioutj == 1) THEN
IF(lev == 1) THEN
WRITE (58, '(//A8,I4,2X,A8,I4)' ) ' 2*J = ', &
asf_set_LS%asf(level)%totalJ,'NUMBER =',number_of_levels(IBLKNUM)
END IF
string_length = Len_Trim(string_CSF(level))
write(58,'(3(A8,F15.10))') 'Ssms=', Ssms, 'g_J=',g_J, 'g_JLS=', g_JLS
write(58,'(I6,F16.9,2X,A)') &
max_comp(level),asf_set_LS%asf(level)%energy,string_CSF(level)(1:string_length)
write(58,'(7F11.8)') &
(asf_set_LS%asf(level)%eigenvector(i), i=1,asf_set_LS%csf_set_LS%nocsf)
END IF
END DO
call dallocASFLS(asf_set_LS)
deallocate(weights)
deallocate(weights2)
deallocate(iw)
1 CONTINUE
END DO
deallocate(ithresh)
close(56)
!GG-2017
IF (UNIQUE .EQ. 1) THEN
CALL uniquelsj
close(72)
close(73)
end if
!GG-2017
close(57)
IF(ioutj == 1) write(58,'(A)') '**'
close(58)
deallocate(Jcoup)
16 format(1x,i4,1x,2a4,2x,100(3x,f8.5," of",i7,3x,f8.5," of",i7,3x,f8.5,&
" of",i7,3x,f8.5," of",i7/16X))
return
END SUBROUTINE jj2lsj
!
!***********************************************************************
! *
SUBROUTINE packlsCSF(csf_set_LS,csf_number,string_CSF)
! *
! Encoding all CSF in LS - coupling. *
! *
! Rules for encoding *
! 1. All blanks deleted *
! 2. If Qi=1, omit Qi *
! 3. If Qi=1 or Qi>=4l+1, omit ALFAi *
! 4. If i=1 or (Qi=4l+2 and i<>m), insert '.'; else _BETAi. *
! *
! Calls: getchLS, packls. *
! *
! Written by G. Gaigalas, *
! NIST last update: Dec 2015 *
! *
!***********************************************************************
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
type(csf_basis_LS), intent(in) :: csf_set_LS
integer, intent(in) :: csf_number
!GG CHARACTER(LEN=64), INTENT(OUT) :: string_CSF
CHARACTER(LEN=164), INTENT(OUT) :: string_CSF
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
integer :: I, j, counter
integer, dimension(:), pointer :: occupation
!GG integer, dimension(8) :: Q
integer, dimension(18) :: Q
character(len=4) :: LS, XLS
character(len=4), dimension(:),pointer :: string_LS, string_XLS
!GG character(len=3), dimension(15) :: COUPLE
character(len=3), dimension(35) :: COUPLE
!GG character(len=3), dimension(8) :: ELC
character(len=3), dimension(18) :: ELC
CHARACTER(LEN=2) :: String
!GG CHARACTER(LEN=64) :: string_CSF_PACK
CHARACTER(LEN=164) :: string_CSF_PACK
CHARACTER(LEN=3), DIMENSION(:), POINTER :: String_NL
CHARACTER(LEN=1), DIMENSION(0:20) :: L1
DATA L1 /'s','p','d','f','g','h','i','k','l','m','n', &
'o','q','r','t','u','v','w','x','y','z'/
!-----------------------------------------------
counter = 0
allocate(occupation(csf_set_LS%nwshells))
allocate(string_LS(csf_set_LS%nwshells))
allocate(string_XLS(csf_set_LS%nwshells))
do j = csf_set_LS%nwcore+1, csf_set_LS%nwshells
if (csf_set_LS%csf(csf_number)%occupation(j) > 0) then
counter = counter +1; occupation(counter) = j
call getchLS(csf_number,j,LS,XLS)
string_LS(counter) = LS; string_XLS(counter) = XLS
end if
end do
!
allocate(String_NL(counter))
DO I = 1,counter
write(String,'(I2)')csf_set_LS%shell(occupation(I))%n
String_NL(I)(1:2) = String(1:2)
String_NL(I)(3:3) = L1(csf_set_LS%shell(occupation(I))%l)
END DO
!GG IF(counter > 8) THEN
IF(counter > 18) THEN
print*, counter
STOP "prCSFLS: A"
END IF
DO I = 1,counter
COUPLE(I)(1:3) = string_LS(I)(1:3)
IF(I > 1) THEN
COUPLE(I+counter-1)(1:3) = string_XLS(I)(1:3)
END IF
Q(I) = csf_set_LS%csf(csf_number)%occupation(occupation(I))
ELC(I)(1:3) = String_NL(I)(1:3)
END DO
CALL PACKLS(counter,ELC,Q,COUPLE,string_CSF_pack)
string_CSF = string_CSF_pack
deallocate(String_NL)
deallocate(occupation)
deallocate(string_LS)
deallocate(string_XLS)
END SUBROUTINE packlsCSF
!
!***********************************************************************
! *
SUBROUTINE prCSFjj(stream,csf_number,csf_number_local)
! *
! Print all information about the single CSF scheme csf_set in *
! a neat format on stream. *
! *
! Calls: convrt_double, itjpo, iq, jqs. *
! *
! Written by G. Gaigalas, *
! NIST last update: Dec 2015 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
USE ORB_C, ONLY: NW, NP, NAK
USE M_C, ONLY: NCORE
!-----------------------------------------------
! I n t e r f a c e B l o c k s
!-----------------------------------------------
USE itjpo_I
USE IQ_I
USE JQS_I
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
integer, intent(in) :: stream, csf_number, csf_number_local
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
integer :: I, j, counter, string_lenth
integer, dimension(:), pointer :: occupation
character(len=4), dimension(:), pointer :: string_J, string_X
character(len=4) :: string_CNUM
CHARACTER(LEN=2) :: String
CHARACTER(LEN=4), DIMENSION(:), POINTER :: String_NL
CHARACTER(LEN=1), DIMENSION(0:20) :: L1
DATA L1 /'s','p','d','f','g','h','i','k','l','m','n', &
'o','q','r','t','u','v','w','x','y','z'/
!-----------------------------------------------
counter = 0
allocate(occupation(NW))
allocate(string_J(NW))
allocate(string_X(NW))
do j = NCORE+1, NW
if (IQ(j,csf_number) > 0) then
counter = counter +1
occupation(counter) = j
if (JQS(3,j,csf_number)-1 == 0) then
string_J(counter) = " "
else
call convrt_double(JQS(3,j,csf_number)-1,string_CNUM,string_lenth)
string_J(counter) = string_CNUM(1:string_lenth)
end if
if (Jcoup(j,csf_number) == 0) then
string_X(counter) = " "
else
call convrt_double(Jcoup(j,csf_number),string_CNUM,string_lenth)
string_X(counter) = string_CNUM(1:string_lenth)
end if
end if
end do
!
allocate(String_NL(counter))
DO I = 1,counter
write(String,'(I2)')NP(occupation(I))
String_NL(I)(1:2) = String(1:2)
J = ((IABS(NAK(occupation(I)))*2)-1+ &
NAK(occupation(I))/IABS(NAK(occupation(I))))/2
String_NL(I)(3:3) = L1(J)
IF(NAK(occupation(I)) <= 0) THEN
String_NL(I)(4:4) = " "
ELSE
String_NL(I)(4:4) = "-"
END IF
END DO
!
if (stream == -1) then
write(*,1) csf_number_local,(String_NL(j), &
IQ(occupation(j),csf_number),j=1,counter)
write(*,2)(string_J(j),j=1,counter)
call convrt_double(ITJPO(csf_number)-1,string_CNUM,string_lenth)
write(*,3)(string_X(j),j=2,counter-1),string_CNUM(1:string_lenth)
end if
deallocate(occupation)
deallocate(string_J)
deallocate(string_X)
1 format(4x,i9,')',100(a4,'(',i2,')',2x))
2 format(18x,a4,100(6X,a4))
3 format(31x,a4,100(6X,a5))
END SUBROUTINE prCSFjj
!
!***********************************************************************
! *
SUBROUTINE prCSFLS(stream,csf_set_LS,csf_number)
! *
! Print all information about the single CSF scheme csf_set in *
! a neat format on stream. *
! *
! Calls: convrt_double, getchLS. *
! *
! Written by G. Gaigalas, *
! NIST last update: May 2011 *
! *
!***********************************************************************
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
integer, intent(in) :: stream, csf_number
type(csf_basis_LS), intent(in) :: csf_set_LS
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
integer :: I, j, counter, string_lenth
integer, dimension(:), pointer :: occupation
character(len=4) :: LS, XLS
character(len=4), dimension(:),pointer :: string_LS, string_XLS
character(len=4) :: string_CNUM
CHARACTER(LEN=2) :: String
CHARACTER(LEN=3), DIMENSION(:), POINTER :: String_NL
CHARACTER(LEN=1), DIMENSION(0:20) :: L1
DATA L1 /'s','p','d','f','g','h','i','k','l','m','n', &
'o','q','r','t','u','v','w','x','y','z'/
!-----------------------------------------------
counter = 0
allocate(occupation(csf_set_LS%nwshells))
allocate(string_LS(csf_set_LS%nwshells))
allocate(string_XLS(csf_set_LS%nwshells))
do j = csf_set_LS%nwcore+1, csf_set_LS%nwshells
if (csf_set_LS%csf(csf_number)%occupation(j) > 0) then
counter = counter +1; occupation(counter) = j
call getchLS(csf_number,j,LS,XLS)
string_LS(counter) = LS; string_XLS(counter) = XLS
end if
end do
!
allocate(String_NL(counter))
DO I = 1,counter
write(String,'(I2)')csf_set_LS%shell(occupation(I))%n
String_NL(I)(1:2) = String(1:2)
String_NL(I)(3:3) = L1(csf_set_LS%shell(occupation(I))%l)
END DO
!
if (stream == -1) then
write(*,3)csf_number,(String_NL(j), &
csf_set_LS%csf(csf_number)%occupation(occupation(j)),j=1,counter)
call convrt_double(csf_set_LS%csf(csf_number)%totalJ,string_CNUM,string_lenth)
write(*,4)(string_LS(j),j=1,counter),(string_XLS(j),j=2,counter),&
string_CNUM(1:string_lenth)
else
write(stream,1)(String_NL(j), &
csf_set_LS%csf(csf_number)%occupation(occupation(j)),j=1,counter)
call convrt_double(csf_set_LS%csf(csf_number)%totalJ,string_CNUM,string_lenth)
write(stream,2)(string_LS(j),j=1,counter),(string_XLS(j),j=2,counter)
end if
deallocate(String_NL)
deallocate(occupation)
deallocate(string_LS)
deallocate(string_XLS)
1 format(8(1X,A3,'(',i2,')'))
2 format(1X,15(A4))
3 format(i10,')',4x,100(a3,'(',i2,')',2x))
4 format(19x,a4,100(5X,a4))
END SUBROUTINE prCSFLS
!
!***********************************************************************
! *
SUBROUTINE prCSFLSall(stream,csf_set_LS,Before_J)
! *
! Print all information about the CSF scheme csf_set in a neat *
! format on stream. *
! *
! Calls: prCSFLS. *
! *
! Written by G. Gaigalas, *
! NIST last update: May 2011 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
integer, intent(in) :: stream
type(csf_basis_LS), intent(in) :: csf_set_LS
integer, intent(in) :: Before_J
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
integer :: I, J
CHARACTER (LEN=2) :: String
CHARACTER (LEN=3), DIMENSION(:), POINTER :: String_NL
CHARACTER (LEN=1), DIMENSION(0:20) :: L1
DATA L1 /'s','p','d','f','g','h','i','k','l','m','n', &
'o','q','r','t','u','v','w','x','y','z'/
!-----------------------------------------------
if(Before_J == 0) then
write(stream,*) " "
allocate(String_NL(csf_set_LS%nwcore))
do I = 1,csf_set_LS%nwcore
write(String,'(I2)')csf_set_LS%shell(I)%n
String_NL(I)(1:2) = String(1:2)
String_NL(I)(3:3) = L1(csf_set_LS%shell(I)%l)
end do
write(stream,1)(String_NL(J),J=1,csf_set_LS%nwcore)
deallocate(String_NL)
end if
!
! output to *.lsj.c
do i = 1,csf_set_LS%nocsf
call prCSFLS(stream,csf_set_LS,i)
end do
write(stream,*) "*"
1 format(18(1X,A3))
END SUBROUTINE prCSFLSall
!
!***********************************************************************
! *
SUBROUTINE setLS(ithresh,NCFMIN,NCFMAX)
! *
! This subroutine fills up the variable asf_set_LS%csf_set_LS *
! with data generated using the one from asf_set%csf_set. *
! *
! This subroutine contains the following internal subroutines: *
! * subroutine setLS_action *
! * subroutine setLS_add_quantum_numbers *
! * recursive subroutine setLS_job_count *
! * function setLS_equivalent_csfs *
! *
! Calls: itjpo, iq, setLS_equivalent_csfs, setLS_job_count. *
! *
! Written by G. Gaigalas, *
! NIST last update: Dec 2015 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
USE DEF_C, ONLY: NELEC
USE ORB_C, ONLY: NW, NP, NAK, NCF
USE m_C, ONLY: NCORE
!-----------------------------------------------
! I n t e r f a c e B l o c k s
!-----------------------------------------------
USE itjpo_I
USE IQ_I
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
integer, dimension(:), intent(in) :: ithresh
integer, intent(in) :: NCFMIN, NCFMAX
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
type(nl), dimension(:), pointer :: shell_temp
integer, dimension(:), pointer :: nlLSval, nljjval
type(lsj_list) :: nonequiv_csfs_jj
!
integer :: isubc, isubc2, icsf_jj, icsf_jj2, icsf_jj_real, icsf_LS
integer :: N, action_type
logical :: new_one, found_parent_minus, found_parent_plus
!
integer, dimension(:), pointer :: all_occupation
integer, dimension(:), pointer :: Li, Si, L_i, S_i, w, Q
integer :: J
!-----------------------------------------------
asf_set_LS%csf_set_LS%number_of_electrons = NELEC
!
! 1. define nl, parent
allocate(shell_temp(NW)); allocate(nlLSval(NW)); allocate(nljjval(NW))
nljjval = 0; nlLSval = 0
!
asf_set_LS%csf_set_LS%nwshells = 0
do isubc = 1, NW, 1
nljjval(isubc) = NP(isubc)*100 + &
((IABS(NAK(isubc))*2)-1+NAK(isubc)/IABS(NAK(isubc)))/2
new_one = .true.
do isubc2 = 1, isubc-1, 1
if(nljjval(isubc) == nljjval(isubc2)) new_one = .false.
end do
if(new_one) then
asf_set_LS%csf_set_LS%nwshells = asf_set_LS%csf_set_LS%nwshells + 1
shell_temp(asf_set_LS%csf_set_LS%nwshells)%n = NP(isubc)
shell_temp(asf_set_LS%csf_set_LS%nwshells)%l= &
((IABS(NAK(isubc))*2)-1+NAK(isubc)/IABS(NAK(isubc)))/2
nlLSval(asf_set_LS%csf_set_LS%nwshells) = nljjval(isubc)
end if
end do
!
allocate(asf_set_LS%csf_set_LS%shell(asf_set_LS%csf_set_LS%nwshells))
allocate(asf_set_LS%csf_set_LS%parent(asf_set_LS%csf_set_LS%nwshells))
!
do isubc=1, asf_set_LS%csf_set_LS%nwshells, 1
asf_set_LS%csf_set_LS%shell(isubc) = shell_temp(isubc)
end do
!
deallocate(shell_temp)
!
! begin find parent
do isubc = 1, asf_set_LS%csf_set_LS%nwshells, 1
found_parent_minus = .false.; found_parent_plus = .false.
do isubc2 = 1, NW, 1
if(nlLSval(isubc) == nljjval(isubc2)) then
if(NAK(isubc2) > 0) then
found_parent_minus = .true.
asf_set_LS%csf_set_LS%parent(isubc)%parent_minus = isubc2
else
found_parent_plus = .true.
asf_set_LS%csf_set_LS%parent(isubc)%parent_plus = isubc2
end if
end if
end do
if(.not.found_parent_plus) &
asf_set_LS%csf_set_LS%parent(isubc)%parent_plus = 0
if(.not.found_parent_minus) &
asf_set_LS%csf_set_LS%parent(isubc)%parent_minus = 0
end do
!
! end find parent --------------------
!
! 2. define the number of "core" shells
! (the LS shell is supposed to be "core" if:
! 1. l=0 and corresponding jj subshell is "core"
! 2. l<>0 l+ and l - "core" subshells )
asf_set_LS%csf_set_LS%nwcore=0
do isubc=1, asf_set_LS%csf_set_LS%nwshells, 1
if(asf_set_LS%csf_set_LS%parent(isubc)%parent_minus .le. NCORE &
.and. &
asf_set_LS%csf_set_LS%parent(isubc)%parent_plus .le. NCORE) &
asf_set_LS%csf_set_LS%nwcore = asf_set_LS%csf_set_LS%nwcore+1
end do
!
! 3. form the list of "nonequivalent" csfs_jj
! (i.e. csfs_jj different in J,parity, or
! some l's occupation numbers Ni = N_(i+) + N_(i-))
allocate(nonequiv_csfs_jj%items(NCF))
nonequiv_csfs_jj%list_size = 0
do icsf_jj = NCFMIN, NCFMAX
new_one = .true.
if(ithresh(icsf_jj) == 1) then
do icsf_jj2 = NCFMIN , icsf_jj - 1
if(ithresh(icsf_jj2) == 1) then
if(setLS_equivalent_csfs(icsf_jj,icsf_jj2)) then
new_one = .false.
exit
end if
end if
end do
if(new_one) then
nonequiv_csfs_jj%list_size = nonequiv_csfs_jj%list_size + 1
nonequiv_csfs_jj%items(nonequiv_csfs_jj%list_size) = icsf_jj
end if
end if
end do
!
! 4. for each nonequivalent csf_jj find all the csfs_LS
!
! To avoid the dependency on the number of subshells
! the recursive subroutine is used
allocate(Li(asf_set_LS%csf_set_LS%nwshells))
allocate(L_i(asf_set_LS%csf_set_LS%nwshells))
allocate(Si(asf_set_LS%csf_set_LS%nwshells))
allocate(S_i(asf_set_LS%csf_set_LS%nwshells))
allocate(Q(asf_set_LS%csf_set_LS%nwshells))
allocate(w(asf_set_LS%csf_set_LS%nwshells))
allocate(all_occupation(asf_set_LS%csf_set_LS%nwshells))
!
! 4.1 - find the number of csfs_LS
asf_set_LS%csf_set_LS%nocsf = 0
!
do icsf_jj=1, nonequiv_csfs_jj%list_size, 1
!
! set the variable for convenience ...
icsf_jj_real = nonequiv_csfs_jj%items(icsf_jj)
J = ITJPO(icsf_jj_real)-1
!
! define the occupation numbers
!
do isubc = 1, asf_set_LS%csf_set_LS%nwshells, 1
all_occupation(isubc) = 0
do isubc2 = 1, NW, 1
if(nlLSval(isubc) == nljjval(isubc2)) &
all_occupation(isubc)=all_occupation(isubc)+IQ(isubc2,icsf_jj_real)
end do !isubc2
!
end do !isubc
N = 0; action_type = 1
call setLS_job_count(1, N)
asf_set_LS%csf_set_LS%nocsf=asf_set_LS%csf_set_LS%nocsf + N
end do
!
! 4.2 - fill up the csf_LS arrays with the corresponding quantum numbers
! 4.2.1 allocate the arrays
allocate(asf_set_LS%csf_set_LS%csf(asf_set_LS%csf_set_LS%nocsf))
do N = 1, asf_set_LS%csf_set_LS%nocsf, 1
allocate(asf_set_LS%csf_set_LS%csf(N)%occupation(asf_set_LS%csf_set_LS%nwshells))
allocate(asf_set_LS%csf_set_LS%csf(N)%seniority(asf_set_LS%csf_set_LS%nwshells))
allocate(asf_set_LS%csf_set_LS%csf(N)%w(asf_set_LS%csf_set_LS%nwshells))
allocate(asf_set_LS%csf_set_LS%csf(N)%shellL(asf_set_LS%csf_set_LS%nwshells))
allocate(asf_set_LS%csf_set_LS%csf(N)%shellS(asf_set_LS%csf_set_LS%nwshells))
allocate(asf_set_LS%csf_set_LS%csf(N)%shellLX(asf_set_LS%csf_set_LS%nwshells))
allocate(asf_set_LS%csf_set_LS%csf(N)%shellSX(asf_set_LS%csf_set_LS%nwshells))
end do
!
! 4.2.2 - fill them with quantum numbers
icsf_LS = 0
!
do icsf_jj=1, nonequiv_csfs_jj%list_size, 1
!
! set the variable for convenience ...
icsf_jj_real = nonequiv_csfs_jj%items(icsf_jj)
J = ITJPO(icsf_jj_real)-1
do isubc = 1, asf_set_LS%csf_set_LS%nwshells, 1
all_occupation(isubc)=0
do isubc2 = 1, NW, 1
if(nlLSval(isubc) == nljjval(isubc2)) &
all_occupation(isubc)=all_occupation(isubc)+IQ(isubc2,icsf_jj_real)
end do !isubc2
end do !isubc
action_type = 2
call setLS_job_count(1, N)
end do
!
deallocate(nonequiv_csfs_jj%items)
deallocate(nlLSval); deallocate(nljjval)
!
deallocate(Q); deallocate(w); deallocate(all_occupation)
deallocate(Li); deallocate(L_i); deallocate(Si);
deallocate(S_i)
!
CONTAINS
!
!***********************************************************************
! *
SUBROUTINE setLS_action(tip,irez)
! *
! The subroutine defines the "action" of subroutine *
! setLS_job_count: whether it counts the number *
! of csfs_LS (asf_set_LS%csf_set_LS%novcsf) or fills the *
! arrays of wave functions in LS coupling with *
! asf_set_LS%csf_set_LS%csf(...) with the corresponding *
! quantum numbers. *
! *
! Calls: setLS_add_quantum_numbers. *
! *
! Written by G. Gaigalas, *
! NIST last update: May 2011 *
! *
!***********************************************************************
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
integer :: tip, irez
!-----------------------------------------------
if (tip .eq. 1) then
irez = irez + 1
else
call setLS_add_quantum_numbers()
end if
END SUBROUTINE setLS_action
!
!***********************************************************************
! *
SUBROUTINE setLS_add_quantum_numbers()
! *
! The subroutine adds quantum numbers stored in temporary arrays *
! Li, Si, L_i, S_i, w, Q to the corresponding arrays *
! of asf_set_LS%csf_set_LS%csf() (i.e. to the arrays of *
! corresponding quantum numbers of the wave function in LS *
! coupling). *
! *
! Calls: gettermLS, ispar setLS_action, setLS_job_count. *
! *
! Written by G. Gaigalas, *
! NIST last update: May 2011 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
!-----------------------------------------------
! I n t e r f a c e B l o c k s
!-----------------------------------------------
USE ispar_I
IMPLICIT NONE
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
integer :: isubcx
!-----------------------------------------------
icsf_LS = icsf_LS + 1
!
if(icsf_LS .gt. asf_set_LS%csf_set_LS%nocsf) then
stop 'setLS_add_quantum_numbers(): program stop A.'
end if
!
asf_set_LS%csf_set_LS%csf(icsf_LS)%totalJ = J
!
IF (ISPAR(icsf_jj_real) == 1) THEN
asf_set_LS%csf_set_LS%csf(icsf_LS)%parity = "+"
ELSE IF (ISPAR(icsf_jj_real) == -1) THEN
asf_set_LS%csf_set_LS%csf(icsf_LS)%parity = "-"
END IF
!
do isubcx = 1, asf_set_LS%csf_set_LS%nwshells, 1
asf_set_LS%csf_set_LS%csf(icsf_LS)%occupation(isubcx) = &
all_occupation(isubcx)
asf_set_LS%csf_set_LS%csf(icsf_LS)%shellL(isubcx) = Li(isubcx)
asf_set_LS%csf_set_LS%csf(icsf_LS)%shellS(isubcx) = Si(isubcx)
asf_set_LS%csf_set_LS%csf(icsf_LS)%shellLX(isubcx) = L_i(isubcx)
asf_set_LS%csf_set_LS%csf(icsf_LS)%shellSX(isubcx) = S_i(isubcx)
asf_set_LS%csf_set_LS%csf(icsf_LS)%w(isubcx) = w(isubcx)
asf_set_LS%csf_set_LS%csf(icsf_LS)%seniority(isubcx) = &
2*asf_set_LS%csf_set_LS%shell(isubcx)%l + 1 - Q(isubcx)
end do
END SUBROUTINE setLS_add_quantum_numbers
!
!***********************************************************************
! *
RECURSIVE SUBROUTINE setLS_job_count(isubc, rez)
! *
! Recursive subroutine for the calculation of the *
! number of csfs_LS and corresponding quantum numbers. *
! *
! Calls: setLS_action, setLS_job_count, gettermLS, ittk. *
! *
! Written by G. Gaigalas, *
! NIST last update: Dec 2015 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
USE jj2lsj_C
!-----------------------------------------------
! I n t e r f a c e B l o c k s
!-----------------------------------------------
USE ittk_I
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
integer :: isubc, rez
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
type(subshell_term_LS), dimension(120) :: LS_terms
integer :: iterm, nr_terms, suma, delta_J
!GG 2015.12.02 NIST
!GG integer :: number
integer :: number, numberGG
integer :: tempLmax,tempLmin,tempSmax,tempSmin,tempS,tempL
integer :: l_shell, N
!-----------------------------------------------
if(isubc.gt.(asf_set_LS%csf_set_LS%nwshells) .or. isubc.lt.1) then
print *, 'isubc = ', isubc
stop "setLS_job_count(): program stop A."
end if
!
if(isubc.le.asf_set_LS%csf_set_LS%nwshells) then
if(all_occupation(isubc).eq.0) then
if(isubc.gt.1) then
Li(isubc) = 0; L_i(isubc) = L_i(isubc-1)
Si(isubc) = 0; S_i(isubc) = S_i(isubc-1)
if(isubc .lt. asf_set_LS%csf_set_LS%nwshells) then
call setLS_job_count(isubc + 1, rez)
else
if(ittk(S_i(isubc),L_i(isubc),J).eq.1) &
call setLS_action(action_type, rez) !rez=rez+1
end if
else
Li(isubc) = 0; L_i(isubc) = 0
Si(isubc) = 0; S_i(isubc) = 0
end if
else
N = all_occupation(isubc)
l_shell = asf_set_LS%csf_set_LS%shell(isubc)%l
call gettermLS(l_shell,N,LS_terms,number)
do iterm=1, number, 1
!GG 2015.12.02 NIST
N = all_occupation(isubc)
l_shell = asf_set_LS%csf_set_LS%shell(isubc)%l
call gettermLS(l_shell,N,LS_terms,numberGG)
!GG 2015.12.02 NIST
Li(isubc) = LS_terms(iterm)%LL
Si(isubc) = LS_terms(iterm)%S
w(isubc) = LS_terms(iterm)%w
Q(isubc) = LS_terms(iterm)%Q
if(isubc.eq.1) then
L_i(isubc) = LS_terms(iterm)%LL
S_i(isubc) = LS_terms(iterm)%S
if(asf_set_LS%csf_set_LS%nwshells.gt.1) then
call setLS_job_count(isubc + 1, rez)
else
if(ittk(S_i(isubc),L_i(isubc),J).eq.1) &
call setLS_action(action_type, rez) !rez=rez+1
end if
else
tempLmax=L_i(isubc-1)+Li(isubc)
tempLmin=abs(L_i(isubc-1)-Li(isubc))
tempSmax=S_i(isubc-1)+Si(isubc)
tempSmin=abs(S_i(isubc-1)-Si(isubc))
do tempL = tempLmin, tempLmax, 2
L_i(isubc) = tempL
do tempS = tempSmin, tempSmax, 2
S_i(isubc) = tempS
if(isubc.lt.asf_set_LS%csf_set_LS%nwshells) then
call setLS_job_count(isubc+1,rez)
else
if(ittk(S_i(isubc),L_i(isubc),J).eq.1) &
call setLS_action(action_type, rez) ! rez=rez+1
end if
end do ! tempS
end do ! tempL
end if
end do ! iterm
end if
end if
END SUBROUTINE setLS_job_count
!
!***********************************************************************
! *
FUNCTION setLS_equivalent_csfs(ncsf1,ncsf2) result(rez)
! *
! This subroutine defines the "equivalency" of two csfs_jj *
! in the sense of generation of csfs_LS *
! number of csfs_LS and corresponding quantum numbers. *
! *
! Calls: itjpo, ispar, iq. *
! *
! Written by G. Gaigalas, *
! NIST last update: May 2011 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
USE ORB_C, ONLY: NW
!-----------------------------------------------
! I n t e r f a c e B l o c k s
!-----------------------------------------------
USE itjpo_I
USE ispar_I
USE IQ_I
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
integer :: ncsf1, ncsf2
logical rez
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
integer :: isubc_LS, isubc_jj, NLS1, NLS2
!-----------------------------------------------
rez= .true.
IF(ITJPO(ncsf1) /= ITJPO(ncsf2)) THEN
rez = .false.
ELSE IF (ISPAR(ncsf1) /= ISPAR(ncsf2)) THEN
rez = .false.
ELSE
do isubc_LS = asf_set_LS%csf_set_LS%nwcore + 1, &
asf_set_LS%csf_set_LS%nwshells, 1
NLS1 = 0; NLS2 = 0
do isubc_jj = 1, NW, 1
if(nlLSval(isubc_LS) == nljjval(isubc_jj)) then
NLS1 = NLS1 + IQ(isubc_jj,ncsf1)
NLS2 = NLS2 + IQ(isubc_jj,ncsf2)
end if
end do
if(NLS1.ne.NLS2) then
rez=.false.
return
end if
end do
END IF
END FUNCTION setLS_equivalent_csfs
!
END SUBROUTINE setLS
!
!***********************************************************************
! *
FUNCTION traLSjj(jj_number,LS_number) result(wa)
! *
! This procedure return the value of the transformation matrix *
! from jj- to LS-coupling scheme in the case of any number of *
! open shells. *
! *
! Calls: coefLSjj, iq, jqs, traLSjjmp. *
! *
! Written by G. Gaigalas, *
! NIST last update: May 2011 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
USE ORB_C, ONLY: NAK
USE CONS_C, ONLY: ZERO
!-----------------------------------------------
! I n t e r f a c e B l o c k s
!-----------------------------------------------
USE IQ_I
USE JQS_I
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
integer, intent(in) :: jj_number,LS_number
real(DOUBLE) :: wa
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
integer :: total_number, shell_number, number_minus, number_plus, &
jj_minus, N_minus, Q_minus, J_1_minus, &
jj_plus, N_plus, Q_plus, J_1_plus, J_1, &
l_shell, N_LS, W_1, Q_1, L_1, S_1
!-----------------------------------------------
shell_number=asf_set_LS%csf_set_LS%nwcore+1
number_minus=asf_set_LS%csf_set_LS%parent(shell_number)%parent_minus
number_plus =asf_set_LS%csf_set_LS%parent(shell_number)%parent_plus
if (number_minus+1 /= number_plus .and. &
number_minus*number_plus /= 0) then
stop "traLSjj(): program stop A."
end if
!
if (number_minus == 0) then
jj_minus = iabs(NAK(number_plus))*2 - 3
N_minus = 0; J_1_minus = 0; Q_minus = (jj_minus + 1)/2
else
jj_minus = iabs(NAK(number_minus))*2 - 1
N_minus = IQ(number_minus,jj_number)
Q_minus = (jj_minus +1)/2 - JQS(1,number_minus,jj_number)
J_1_minus = JQS(3,number_minus,jj_number)-1
J_1 = Jcoup(number_minus,jj_number)
end if
!
if (number_plus == 0) then
jj_plus = iabs(NAK(number_minus))*2 + 1
N_plus = 0; J_1_plus = 0; Q_plus = (jj_plus + 1)/2
else
jj_plus = iabs(NAK(number_plus))*2 - 1
N_plus = IQ(number_plus,jj_number)
Q_plus = (jj_plus + 1)/2 - JQS(1,number_plus,jj_number)
J_1_plus = JQS(3,number_plus,jj_number)-1
J_1 = Jcoup(number_plus,jj_number)
end if
!
l_shell=asf_set_LS%csf_set_LS%shell(shell_number)%l
N_LS =asf_set_LS%csf_set_LS%csf(LS_number)%occupation(shell_number)
W_1 =asf_set_LS%csf_set_LS%csf(LS_number)%w(shell_number)
Q_1 =2*l_shell+1- &
asf_set_LS%csf_set_LS%csf(LS_number)%seniority(shell_number)
!
L_1 = asf_set_LS%csf_set_LS%csf(LS_number)%shellL(shell_number)
S_1 = asf_set_LS%csf_set_LS%csf(LS_number)%shellS(shell_number)
!
wa = coefLSjj(l_shell,N_LS,W_1,Q_1,L_1,S_1,J_1, &
jj_minus,N_minus,Q_minus,J_1_minus, &
jj_plus,Q_plus,J_1_plus)
if (abs(wa) > EPSNEW) then
total_number = &
asf_set_LS%csf_set_LS%nwshells - asf_set_LS%csf_set_LS%nwcore
if (total_number == 1) then
wa = wa
else if (total_number >= 2) then
do shell_number = asf_set_LS%csf_set_LS%nwcore + 2, &
asf_set_LS%csf_set_LS%nwshells
if (abs(wa) > EPSNEW) then
wa = wa * traLSjjmp(shell_number,jj_number,LS_number)
end if
end do
else
wa = zero
end if
else
wa = zero
end if
END FUNCTION traLSjj
!
!***********************************************************************
! *
FUNCTION traLSjjmp(shell_number,jj_number,LS_number) result(wa)
! *
! Return the value of main part of the transformation matrix from *
! jj- to LS-coupling scheme in the case of any number of open *
! shells. *
! *
! Calls: coefLSjj, iq, ixjtik, jqs, sixj, nine. *
! *
! Written by G. Gaigalas, *
! NIST last update: May 2011 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
USE ORB_C, ONLY: NAK
USE CONS_C, ONLY: ZERO, ONE
!-----------------------------------------------
! I n t e r f a c e B l o c k s
!-----------------------------------------------
USE ixjtik_I
USE IQ_I
USE JQS_I
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
integer, intent(in) :: shell_number,jj_number,LS_number
real(DOUBLE) :: wa
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
real(DOUBLE) :: wa_sum, RAC6, RAC9
integer :: delta_J, number_minus, number_plus, &
number_plus_1, number_minus_1, &
jj_minus, N_minus, Q_minus, &
jj_plus, N_plus, Q_plus, &
J_i_min, J_i_max, J_i_minus, &
J_i_plus, J_i, J_1_i, Jp_1_i, J_1_i1, &
l_shell, N_LS, W_i, Q_i, L_i, S_i, &
L_1_i, S_1_i, L_1_i1, S_1_i1
!-----------------------------------------------
wa = zero; wa_sum = zero
!
number_minus = asf_set_LS%csf_set_LS%parent(shell_number)%parent_minus
number_plus = asf_set_LS%csf_set_LS%parent(shell_number)%parent_plus
if (number_minus+1 /= number_plus .and. &
number_minus*number_plus /= 0) then
stop "tranLSjjmp(): program stop A."
end if
!
number_plus_1=asf_set_LS%csf_set_LS%parent(shell_number-1)%parent_plus
if (number_plus_1 == 0) then
number_plus_1 = &
asf_set_LS%csf_set_LS%parent(shell_number-1)%parent_minus
end if
!
if (number_minus == 0) then
jj_minus = iabs(NAK(number_plus))*2 - 1
N_minus = 0; J_i_minus = 0; Q_minus = (jj_minus + 1)/2
Jp_1_i = Jcoup(number_plus_1,jj_number)
else
jj_minus = iabs(NAK(number_minus))*2 - 1
N_minus = IQ(number_minus,jj_number)
Q_minus = (jj_minus + 1)/2 - JQS(1,number_minus,jj_number)
J_i_minus = JQS(3,number_minus,jj_number)-1
Jp_1_i = Jcoup(number_minus,jj_number)
end if
!
if (number_plus == 0) then
jj_plus = iabs(NAK(number_minus))*2 - 1
Q_plus = (jj_plus + 1)/2
N_plus = 0; J_i_plus = 0; J_1_i = Jp_1_i
else
jj_plus = iabs(NAK(number_plus))*2 - 1
N_plus = IQ(number_plus,jj_number)
Q_plus = (jj_plus + 1)/2 - JQS(1,number_plus,jj_number)
J_i_plus = JQS(3,number_plus,jj_number)-1
J_1_i = Jcoup(number_plus,jj_number)
end if
!
J_i_min = iabs(J_i_minus - J_i_plus); J_i_max = J_i_minus + J_i_plus
!
l_shell = asf_set_LS%csf_set_LS%shell(shell_number)%l
N_LS = asf_set_LS%csf_set_LS%csf(LS_number)%occupation(shell_number)
W_i = asf_set_LS%csf_set_LS%csf(LS_number)%w(shell_number)
Q_i = 2 * l_shell + 1 - &
asf_set_LS%csf_set_LS%csf(LS_number)%seniority(shell_number)
if (N_LS == N_minus + N_plus) then
L_i = asf_set_LS%csf_set_LS%csf(LS_number)%shellL(shell_number)
S_i = asf_set_LS%csf_set_LS%csf(LS_number)%shellS(shell_number)
!
L_1_i = asf_set_LS%csf_set_LS%csf(LS_number)%shellLX(shell_number)
S_1_i = asf_set_LS%csf_set_LS%csf(LS_number)%shellSX(shell_number)
if (shell_number == 2) then
number_minus_1 = &
asf_set_LS%csf_set_LS%parent(shell_number-1)%parent_minus
if (number_minus_1 == 0) then
number_minus_1 = &
asf_set_LS%csf_set_LS%parent(shell_number-1)%parent_plus
end if
J_1_i1 =JQS(3,number_minus_1,jj_number)-1
L_1_i1 =asf_set_LS%csf_set_LS%csf(LS_number)%shellL(shell_number-1)
S_1_i1 =asf_set_LS%csf_set_LS%csf(LS_number)%shellS(shell_number-1)
else
J_1_i1 =Jcoup(number_plus_1,jj_number)
L_1_i1 =asf_set_LS%csf_set_LS%csf(LS_number)%shellLX(shell_number-1)
S_1_i1 =asf_set_LS%csf_set_LS%csf(LS_number)%shellSX(shell_number-1)
end if
!
do J_i = J_i_min, J_i_max, 2
delta_J = ixjtik(J_i_minus,J_i_plus,J_i,J_1_i,J_1_i1,Jp_1_i)
if (delta_J /= 0) then
call nine(L_1_i1,S_1_i1,J_1_i1,L_i,S_i,J_i,L_1_i,S_1_i,J_1_i, &
1,delta_J,RAC9)
if (delta_J /= 0) then
call sixj(J_i_minus,J_i_plus,J_i,J_1_i,J_1_i1,Jp_1_i,0,RAC6)
call nine(L_1_i1,S_1_i1,J_1_i1,L_i,S_i,J_i,L_1_i,S_1_i,J_1_i,&
0,delta_J,RAC9)
wa_sum = wa_sum + (J_i + one) * RAC6 * RAC9 * &
coefLSjj(l_shell,N_LS,W_i,Q_i,L_i,S_i,J_i, &
jj_minus,N_minus,Q_minus,J_i_minus, &
jj_plus,Q_plus,J_i_plus)
end if
end if
end do
wa = wa_sum * sqrt((Jp_1_i+one)*(J_1_i1+one)*(L_1_i+one)*(S_1_i+one))
if (mod(J_i_minus+J_i_plus+J_1_i1+J_1_i,4) /= 0) wa = - wa
else
wa = zero
end if
END FUNCTION traLSjjmp
!
!***********************************************************************
! *
SUBROUTINE uniquelsj
! *
! Subroutine defines unique labels for energy levels *
! *
! Written by G. Gaigalas Vilnius, May 2017 *
! *
!***********************************************************************
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
!-----------------------------------------------
! I n t e r f a c e B l o c k s
!-----------------------------------------------
IMPLICIT NONE
!-----------------------------------------------
! L o c a l P a r a m e t e r s
!-----------------------------------------------
! The maximum number of levels in the list
INTEGER, PARAMETER :: Lev_No = 100
! The maximum number of mixing coefficients of ASF expansion
INTEGER, PARAMETER :: Vec_No = 100
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
CHARACTER, DIMENSION(Vec_No,Lev_No) :: COUPLING*256
REAL(DOUBLE), DIMENSION(Vec_No,Lev_No) :: COMP, MIX
CHARACTER, DIMENSION(Lev_No) :: Str_No*3,Str_J*5,Str_P*1
CHARACTER, DIMENSION(Lev_No) :: OPT_COUPLING*256
CHARACTER, DIMENSION(Vec_No) :: tmp_COUPLING*256
REAL(DOUBLE), DIMENSION(Lev_No) :: ENERGY, PRO
REAL(DOUBLE), DIMENSION(Vec_No) :: tmp_COMP, tmp_MIX
INTEGER, DIMENSION(Lev_No) :: MAS_MAX, ICOUNT, IPRGG
CHARACTER :: RECORD*53
CHARACTER :: FORM*11
CHARACTER :: Str*7
REAL(DOUBLE) :: MAX_COMP
INTEGER :: IOS, IERR, INUM, NUM_Lev, I1,I2,I3,I4
INTEGER :: IPR, IPRF, NUM_OPT, Lev_OPT, New_J
INTEGER*4 :: last, length
!-----------------------------------------------
REWIND(57)
read (57, '(1A53)', IOSTAT=IOS) RECORD
write(73, '(1A53)') RECORD
!
DO WHILE (New_J < 1)
NUM_Lev = 0
New_J = 1
DO
if(NUM_Lev > Lev_No) then
print*, "Please extand the arrays. Now we have Lev_No=", &
Lev_No
stop
end if
NUM_Lev = NUM_Lev + 1
read(57,'(A3,A5,5X,A1,8X,F16.9,5X,F7.3)')Str_No(NUM_Lev), &
Str_J(NUM_Lev),Str_P(NUM_Lev),Energy(NUM_Lev),PRO(NUM_Lev)
INUM = 0
DO
INUM = INUM + 1
if(INUM > Vec_No) then
print*, "Please extand the arrays. Now we have Vec_No=", &
Vec_No
stop
end if
read(57,'(A7,F12.8,3X,F11.8,3X,A)',IOSTAT=IOS)Str, &
MIX(INUM,NUM_Lev),COMP(INUM,NUM_Lev),COUPLING(INUM,NUM_Lev)
if (IOS==-1) then
MAS_MAX(NUM_Lev) = INUM - 1
go to 1
endif
if(Str /= ' ') then
backspace(57)
MAS_MAX(NUM_Lev) = INUM - 1
exit
else if &
(MIX(INUM,NUM_Lev)==0.00.and.COMP(INUM,NUM_Lev)==0.00)then
MAS_MAX(NUM_Lev) = INUM - 1
New_J = 0
go to 1
end if
END DO
END DO
1 CONTINUE
!
NUM_OPT = 0
ICOUNT = 1
IPRGG = 1
write(72,'(A)') &
" Composition Serial No. Coupling"
write(72,'(A)') " of compos."
write(72,'(A5,A)') " J = ",trim(Str_J(NUM_Lev))
write(72,'(A)') &
"--------------------------------------------------"
DO I1 = 1, NUM_Lev
2 MAX_COMP = 0.0
DO I2 = 1, NUM_Lev
if(ICOUNT(I2) == 0) CYCLE
if(COMP(1,I2) > MAX_COMP) then
Lev_OPT = I2
MAX_COMP = COMP(1,I2)
end if
END DO
NUM_OPT = NUM_OPT + 1
ICOUNT(Lev_OPT) = 0
IPR = 1
if(NUM_OPT == 1) then
IPR = 1
else
I3 = 1
DO WHILE (I3 < NUM_OPT)
if(trim(OPT_COUPLING(I3))==trim(COUPLING(IPR,Lev_OPT)))then
IPR = IPR + 1
if(IPR > Vec_No) then
print*,"Please extand the arrays. Now we have Vec_No="&
,Vec_No
stop
end if
I3 = 1
else
I3 = I3 + 1
end if
END DO
end if
IPRGG(Lev_OPT) = IPR + IPRGG(Lev_OPT) - 1
if(IPRGG(Lev_OPT) >= MAS_MAX(Lev_OPT) &
.AND. IPRGG(Lev_OPT) /= 1) then
print*, &
"The program is not able perform the identification for", &
" level = ",Lev_OPT
stop
end if
if(IPR == 1) then
OPT_COUPLING(NUM_OPT) = COUPLING(IPR,Lev_OPT)
IPRF = IPRGG(Lev_OPT)
write(72,'(A,I4,2X,F12.9,I5,3X,A,A,I4)')"Pos",Lev_OPT, &
COMP(IPR,Lev_OPT),IPRF,trim(OPT_COUPLING(NUM_OPT))
IF(IPRF > 1) THEN
tmp_MIX = 0
tmp_COMP = 0
tmp_COUPLING = ""
I4 = MAS_MAX(Lev_OPT)
tmp_MIX(2:IPRF) = MIX(I4-IPRF+2:I4,Lev_OPT)
tmp_MIX(IPRF+1:I4) = MIX(2:I4-IPRF+1,Lev_OPT)
MIX(2:I4,Lev_OPT) = tmp_MIX(2:I4)
!
tmp_COMP(2:IPRF) = COMP(I4-IPRF+2:I4,Lev_OPT)
tmp_COMP(IPRF+1:I4) = COMP(2:I4-IPRF+1,Lev_OPT)
COMP(2:I4,Lev_OPT) = tmp_COMP(2:I4)
!
tmp_COUPLING(2:IPRF) = COUPLING(I4-IPRF+2:I4,Lev_OPT)
tmp_COUPLING(IPRF+1:I4) = COUPLING(2:I4-IPRF+1,Lev_OPT)
COUPLING(2:I4,Lev_OPT) = tmp_COUPLING(2:I4)
END IF
end if
if(IPR > 1) then
tmp_MIX = 0
tmp_COMP = 0
tmp_COUPLING = ""
I4 = MAS_MAX(Lev_OPT)
tmp_MIX(1:1) = MIX(IPR:IPR,Lev_OPT)
tmp_MIX(I4-IPR+2:I4) = MIX(1:IPR-1,Lev_OPT)
tmp_MIX(2:I4-IPR+1) = MIX(IPR+1:I4,Lev_OPT)
MIX(1:I4,Lev_OPT) = tmp_MIX(1:I4)
!
tmp_COMP(1:1) = COMP(IPR:IPR,Lev_OPT)
tmp_COMP(I4-IPR+2:I4) = COMP(1:IPR-1,Lev_OPT)
tmp_COMP(2:I4-IPR+1) = COMP(IPR+1:I4,Lev_OPT)
COMP(1:I4,Lev_OPT) = tmp_COMP(1:I4)
!
tmp_COUPLING(1:1) = COUPLING(IPR:IPR,Lev_OPT)
tmp_COUPLING(I4-IPR+2:I4) = COUPLING(1:IPR-1,Lev_OPT)
tmp_COUPLING(2:I4-IPR+1) = COUPLING(IPR+1:I4,Lev_OPT)
COUPLING(1:I4,Lev_OPT) = tmp_COUPLING(1:I4)
!
ICOUNT(Lev_OPT) = 1
NUM_OPT = NUM_OPT - 1
go to 2
end if
END DO
write(72,'(A)') &
"--------------------------------------------------"
write(72,'(A)')""
!
DO I1 = 1, NUM_Lev
write(73,'(A3,A5,5X,A1,8X,F16.9,5X,F7.3,A)') &
Str_No(I1),Str_J(I1),Str_P(I1),Energy(I1),PRO(I1),"%"
DO I2 = 1, MAS_MAX(I1)
write(73,'(A7,F12.8,3X,F11.8,3X,A)') &
Str,MIX(I2,I1),COMP(I2,I1),trim(COUPLING(I2,I1))
END DO
END DO
if(New_J < 1) write(73,'(A)') " "
!
end do
END SUBROUTINE uniquelsj
!
END MODULE jj2lsj_code
|
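# Seal diet summary: read the seal diet table, keep cod (prey == 'COD') and
# plot the weight and number proportions (%) of the prey by lower prey length.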
a<-read.csv(file.path("C:","MV","SMS","data_northSea","2011-data","mammals","seal_diet.csv"))
a<-subset(a,prey=='COD')
cleanup()
X11()
par(mfcol=c(2,1))
par(mar=c(4,4,3,2)) # c(bottom, left, top, right)
b<-aggregate(preyw~prey+lowpreyl,data=a,sum)
b$preyw<-b$preyw/sum(b$preyw)*100
plot(b$lowpreyl,b$preyw,type='h',xlab='length (cm)',ylab='Weight proportion (%)',lwd=3,col='blue')
b<-aggregate(nprey~prey+lowpreyl,data=a,sum)
b$nprey<-b$nprey/sum(b$nprey)*100
plot(b$lowpreyl,b$nprey,type='h',xlab='length (cm)',ylab='Number proportion (%)',lwd=3,col='blue')
|
The reduced form of $0$ is $0$.
|
State Before: R : Type u_4
B : Type u_1
F : Type u_2
E : B → Type u_3
inst✝¹⁰ : NontriviallyNormedField R
inst✝⁹ : (x : B) → AddCommMonoid (E x)
inst✝⁸ : (x : B) → Module R (E x)
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace R F
inst✝⁵ : TopologicalSpace B
inst✝⁴ : TopologicalSpace (TotalSpace E)
inst✝³ : (x : B) → TopologicalSpace (E x)
inst✝² : FiberBundle F E
e e' : Trivialization F TotalSpace.proj
inst✝¹ : Trivialization.IsLinear R e
inst✝ : Trivialization.IsLinear R e'
b : B
hb : b ∈ e.baseSet ∩ e'.baseSet
⊢ ContinuousLinearEquiv.trans (ContinuousLinearEquiv.symm (continuousLinearEquivAt R e b (_ : b ∈ e.baseSet)))
(continuousLinearEquivAt R e' b (_ : b ∈ e'.baseSet)) =
coordChangeL R e e' b State After: case h.h
R : Type u_4
B : Type u_1
F : Type u_2
E : B → Type u_3
inst✝¹⁰ : NontriviallyNormedField R
inst✝⁹ : (x : B) → AddCommMonoid (E x)
inst✝⁸ : (x : B) → Module R (E x)
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace R F
inst✝⁵ : TopologicalSpace B
inst✝⁴ : TopologicalSpace (TotalSpace E)
inst✝³ : (x : B) → TopologicalSpace (E x)
inst✝² : FiberBundle F E
e e' : Trivialization F TotalSpace.proj
inst✝¹ : Trivialization.IsLinear R e
inst✝ : Trivialization.IsLinear R e'
b : B
hb : b ∈ e.baseSet ∩ e'.baseSet
v : F
⊢ ↑(ContinuousLinearEquiv.trans (ContinuousLinearEquiv.symm (continuousLinearEquivAt R e b (_ : b ∈ e.baseSet)))
(continuousLinearEquivAt R e' b (_ : b ∈ e'.baseSet)))
v =
↑(coordChangeL R e e' b) v Tactic: ext v State Before: case h.h
R : Type u_4
B : Type u_1
F : Type u_2
E : B → Type u_3
inst✝¹⁰ : NontriviallyNormedField R
inst✝⁹ : (x : B) → AddCommMonoid (E x)
inst✝⁸ : (x : B) → Module R (E x)
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace R F
inst✝⁵ : TopologicalSpace B
inst✝⁴ : TopologicalSpace (TotalSpace E)
inst✝³ : (x : B) → TopologicalSpace (E x)
inst✝² : FiberBundle F E
e e' : Trivialization F TotalSpace.proj
inst✝¹ : Trivialization.IsLinear R e
inst✝ : Trivialization.IsLinear R e'
b : B
hb : b ∈ e.baseSet ∩ e'.baseSet
v : F
⊢ ↑(ContinuousLinearEquiv.trans (ContinuousLinearEquiv.symm (continuousLinearEquivAt R e b (_ : b ∈ e.baseSet)))
(continuousLinearEquivAt R e' b (_ : b ∈ e'.baseSet)))
v =
↑(coordChangeL R e e' b) v State After: case h.h
R : Type u_4
B : Type u_1
F : Type u_2
E : B → Type u_3
inst✝¹⁰ : NontriviallyNormedField R
inst✝⁹ : (x : B) → AddCommMonoid (E x)
inst✝⁸ : (x : B) → Module R (E x)
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace R F
inst✝⁵ : TopologicalSpace B
inst✝⁴ : TopologicalSpace (TotalSpace E)
inst✝³ : (x : B) → TopologicalSpace (E x)
inst✝² : FiberBundle F E
e e' : Trivialization F TotalSpace.proj
inst✝¹ : Trivialization.IsLinear R e
inst✝ : Trivialization.IsLinear R e'
b : B
hb : b ∈ e.baseSet ∩ e'.baseSet
v : F
⊢ ↑(ContinuousLinearEquiv.trans (ContinuousLinearEquiv.symm (continuousLinearEquivAt R e b (_ : b ∈ e.baseSet)))
(continuousLinearEquivAt R e' b (_ : b ∈ e'.baseSet)))
v =
(↑e' (totalSpaceMk b (Trivialization.symm e b v))).snd Tactic: rw [coordChangeL_apply e e' hb] State Before: case h.h
R : Type u_4
B : Type u_1
F : Type u_2
E : B → Type u_3
inst✝¹⁰ : NontriviallyNormedField R
inst✝⁹ : (x : B) → AddCommMonoid (E x)
inst✝⁸ : (x : B) → Module R (E x)
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace R F
inst✝⁵ : TopologicalSpace B
inst✝⁴ : TopologicalSpace (TotalSpace E)
inst✝³ : (x : B) → TopologicalSpace (E x)
inst✝² : FiberBundle F E
e e' : Trivialization F TotalSpace.proj
inst✝¹ : Trivialization.IsLinear R e
inst✝ : Trivialization.IsLinear R e'
b : B
hb : b ∈ e.baseSet ∩ e'.baseSet
v : F
⊢ ↑(ContinuousLinearEquiv.trans (ContinuousLinearEquiv.symm (continuousLinearEquivAt R e b (_ : b ∈ e.baseSet)))
(continuousLinearEquivAt R e' b (_ : b ∈ e'.baseSet)))
v =
(↑e' (totalSpaceMk b (Trivialization.symm e b v))).snd State After: no goals Tactic: rfl
|
If $f$ and $g$ are functions such that $f$ is continuous at $a$ and $g$ is continuous at $f(a)$, then $g \circ f$ is continuous at $a$.
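A minimal Lean 4 sketch of this statement, added here only as an illustration (it is not the formal source the sentence was taken from); it assumes Mathlib and its composition lemma `ContinuousAt.comp`:
import Mathlib
-- If f is continuous at a and g is continuous at f a, then g ∘ f is
-- continuous at a.
example {X Y Z : Type*} [TopologicalSpace X] [TopologicalSpace Y]
    [TopologicalSpace Z] {f : X → Y} {g : Y → Z} {a : X}
    (hf : ContinuousAt f a) (hg : ContinuousAt g (f a)) :
    ContinuousAt (g ∘ f) a :=
  hg.comp hf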
|
function groupEdgeList=findEdgesInGroup(mesh,nodes)
% Create a list of edges in a given group of nodes
% See also findFacesInGroup
nVerts=length(mesh.uniqueVertices);
% This is wasteful - don't make two new sparse matrices! Just fiddle the one you've got....
% Want to eliminate entries that are not in (nodes,nodes);
diagMat=sparse(nodes,nodes,ones(length(nodes),1),nVerts,nVerts);
% make a connection matrix of just edges in this group
mesh.connectionMatrix=mesh.connectionMatrix*diagMat;
mesh.connectionMatrix=((mesh.connectionMatrix')*diagMat)';
[groupEdgeList1,groupEdgeList2]=find(triu(mesh.connectionMatrix));
groupEdgeList=[groupEdgeList1,groupEdgeList2];
groupEdgeList=sort(groupEdgeList,2); % Sort them so that the lowest numbered nodes appear first
|
import tactic -- hide
import data.real.basic -- imports the real numbers
/-
-/
open_locale classical -- allow proofs by contradiction
/-
-/
noncomputable theory -- don't fuss about the reals being noncomputable
namespace xena -- hide
-- Let a, b, c be real numbers
variables {a b c : ℝ}
/-
# Chapter ? : Max and abs
## Level 1
In this chapter we develop a basic interface for the `max a b` and `abs a`
functions on the real numbers. Before we start, you will need to know
the basic API for `≤` and `<`, which looks like this:
```
example : a ≤ a := le_refl a
example : a ≤ b → b ≤ c → a ≤ c := le_trans
example : a ≤ b → b ≤ a → a = b := le_antisymm
example : a ≤ b ∨ b ≤ a := le_total a b
example : a < b ↔ a ≤ b ∧ a ≠ b := lt_iff_le_and_ne
example : a ≤ b → b < c → a < c := lt_of_le_of_lt
example : a < b → b ≤ c → a < c := lt_of_lt_of_le
```
-/
/- Axiom : le_refl a : a ≤ a
-/
/- Axiom : le_trans : a ≤ b → b ≤ c → a ≤ c
-/
/- Axiom : le_antisymm : a ≤ b → b ≤ a → a = b
-/
/- Axiom : le_total a b : a ≤ b ∨ b ≤ a
-/
/- Axiom : lt_iff_le_and_ne : a < b ↔ a ≤ b ∧ a ≠ b
-/
/- Axiom : lt_of_le_of_lt : a ≤ b → b < c → a < c
-/
/- Axiom : lt_of_lt_of_le : a < b → b ≤ c → a < c
-/
/-
We start with `max a b := if a ≤ b then b else a`. It is
uniquely characterised by the following two properties, which are hence
all you will need to know:
```
theorem max_eq_right : a ≤ b → max a b = b
theorem max_eq_left : b ≤ a → max a b = a
```
-/
/- Axiom : max_eq_right : a ≤ b → max a b = b
-/
/- Axiom : max_eq_left : b ≤ a → max a b = a
-/
-- begin hide
def max (a b : ℝ) := if a ≤ b then b else a
-- need if_pos to do this one
theorem max_eq_right (hab : a ≤ b) : max a b = b :=
begin
unfold max,
rw if_pos hab,
end
-- need if_neg to do this one
theorem max_eq_left (hba : b ≤ a) : max a b = a :=
begin
by_cases hab : a ≤ b,
{ rw max_eq_right hab,
exact le_antisymm hba hab,
},
{ unfold max,
rw if_neg hab,
}
end
-- end hide
/-
All of these theorems are in the theorem statement box on the left.
See if you can now prove the useful `max_choice` lemma using them.
-/
/- Hint : Hint
Do a case split with `cases le_total a b`.
-/
/- Lemma
For any two real numbers $a$ and $b$, either $\max(a,b) = a$
or $\max(a,b) = b$.
-/
theorem max_choice (a b : ℝ) : max a b = a ∨ max a b = b :=
begin
cases le_total a b with hab hba,
{ right,
exact max_eq_right hab
},
{ left,
exact max_eq_left hba
}
end
end xena --hide
|
[STATEMENT]
lemma tendsto_cong_limit: "(f \<longlongrightarrow> l) F \<Longrightarrow> k = l \<Longrightarrow> (f \<longlongrightarrow> k) F"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>(f \<longlongrightarrow> l) F; k = l\<rbrakk> \<Longrightarrow> (f \<longlongrightarrow> k) F
[PROOF STEP]
by simp
|
From Formalisation Require Import String Span.
From Formalisation Require Import Nom Monotone FuelMono.
From Formalisation Require Import combinator multi sequence bin_combinators bytes.
From Formalisation Require Import ProgramLogic adequacy.
From Raffinement Require Import PHOAS.
Open Scope N_scope.
Definition span_data_wf (data : list nat8) (s : span) :=
pos s + len s <= lengthN data.
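(* Added note (not from the original development): [adequate R n e data s]
   roughly says that, provided the span [s] is well-formed for [data],
   whenever the PHOAS program [e] evaluates via [sem_PHOAS] to a result,
   the Nom computation [n] can be run with some amount of fuel to a matching
   outcome: [None] corresponds to [run] returning [NoRes], and [Some (v, t)]
   to [run] returning [Res (t, r)] for some [r] related to [v] by [R t]. *)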
Definition adequate {X Y} (R : span -> X -> type_to_Type Y -> Prop) (n : NomG X) (e : PHOASV Y)
(data : list nat8) (s : span) :=
span_data_wf data s ->
forall res,
sem_PHOAS data s e res ->
match res with
| None => exists fuel, run fuel n data s = NoRes
| Some (v, t) => exists r, R t r v /\ exists fuel, run fuel n data s = Res (t, r)
end.
Lemma adequacy_pure_PHOAS {X Y} :
forall (d : NomG X) (e : PHOASV Y) (R : span -> X -> type_to_Type Y -> Prop) data s,
adequate R d e data s ->
span_data_wf data s ->
forall v t,
sem_PHOAS data s e (Some (v, t)) ->
forall (Q : X -> Prop) (P : type_to_Type Y -> Prop),
{{ emp }} d {{ v; ⌜Q v⌝ }} ->
(forall x, Q x -> R t x v -> P v) ->
P v.
Proof.
unfold adequate. intros e h R data s ADE WF vv sres SEM Q P TRIPLE R_OK.
eapply ADE in SEM as [r [Rr [fuel RUN]]].
eapply R_OK; eauto. eapply adequacy_pure_run; eauto. auto.
Qed.
Lemma adequacy_pure_PHOAS_disjoint `{Foldable X} `{Foldable (fun s=> type_to_Type (Y s))}:
forall (e : NomG (X span)) (h : PHOASV (Y span)),
{{ emp }} e {{ v; ⌜all_disjointM v⌝ }} ->
forall (R : span -> X span -> type_to_Type (Y span) -> Prop) data s,
adequate R e h data s ->
forall vv sres,
sem_PHOAS data s h (Some (vv,sres)) ->
span_data_wf data s ->
(forall x, all_disjointM x -> R sres x vv -> all_disjointM vv) ->
all_disjointM vv.
Proof.
unfold adequate. intros e h TRIPLE R data s ADE vv sres SEM WF R_OK.
eapply adequacy_pure_PHOAS; eauto.
Qed.
Ltac simpl_existT :=
repeat match goal with
| H : existT _ _ = existT _ _ |- _ =>
eapply Eqdep_dec.inj_pair2_eq_dec in H; [idtac | intros; eapply type_eq_dec]
| H : existT _ _ = existT _ _ |- _ =>
eapply Eqdep_dec.inj_pair2_eq_dec in H; [idtac | intros; eapply N.eq_dec]
end.
Ltac next_step H := inversion H; subst; simpl_existT; subst; clear H.
Ltac sem_VAL_unif :=
match goal with
| H : sem_VAL ?t _, H0 : sem_VAL ?t _ |- _ => eapply (sem_VAL_inj _ t _ _ H) in H0
end.
Ltac VAL_unif := repeat sem_VAL_unif.
Lemma ret_adequate data s :
forall Y (va : VAL Y) (X : Type) (R : span -> X -> type_to_Type Y -> Prop) a (v : X),
sem_VAL va a ->
R s v a ->
adequate R (ret v) (Val va) data s.
Proof.
unfold adequate.
intros Y va X R a v SEMA RA WF res SEM.
next_step SEM. VAL_unif. subst. eexists. repeat split. eauto. exists O. reflexivity.
Qed.
Lemma extern_adequate :
forall ty constr l (X : Type) (v : X) data s,
adequate (fun _ _ _ => True%type) (ret v) (ExternStruct ty constr l) data s.
Proof.
unfold adequate.
intros ty constr l X v data s WF res SEM.
next_step SEM. exists v. split; auto. exists O. reflexivity.
Qed.
Lemma bind_adequate data s :
forall (X Y : Type) (X0 Y0 : type) T R (e : NomG X) (ke : X -> NomG Y)
(h : PHOASV X0) (kh : val X0 -> PHOASV Y0),
adequate T e h data s ->
(forall vres r t, T t r vres -> adequate R (ke r) (kh vres) data t) ->
adequate R (let! v := e in ke v) (let% v := h in kh v) data s.
Proof.
unfold adequate.
intros X Y X0 Y0 T R e ke h kh SEME SEMK WF res SEM.
next_step SEM.
- eapply SEME in H3 as [x [Rx [fuel P1]]]; auto.
eapply SEMK in H6. destruct res as [[v0 t]|].
+ destruct H6 as [x0 [Rx0 [fuel0 RUN]]]. exists x0.
repeat split; auto. eapply run_bind_weak; eauto.
+ destruct H6 as [fuel0 RUN]. eapply run_bind_weak; eauto.
+ eauto.
+ eapply run_mono in P1 as [P1 P2]. unfold span_data_wf in *. lia.
- eapply SEME in H2 as [fuel P1]; auto.
exists fuel. eapply run_bind_fail. auto.
Qed.
Lemma consequence_adequate :
forall (X : Type) Y (R T : span -> X -> type_to_Type Y -> Prop) (e : NomG X) (h : PHOASV Y) data s,
adequate R e h data s ->
(forall t v hv, R t v hv -> T t v hv) ->
adequate T e h data s.
Proof.
intros X Y R T e h data s ADE IMPL WF res SEM.
eapply ADE in SEM; auto. destruct res as [[v p]| ]; auto.
destruct SEM as [r [Rr [fuel RUN]]].
exists r. split; auto. exists fuel. auto.
Qed.
Definition length_spec s t x (y: type_to_Type Nat) := s = t /\ x = y /\ x = len t.
Lemma length_adequate : forall data s, adequate (length_spec s) length Length data s.
Proof.
intros data s WF res SEM. next_step SEM.
simpl. eexists. split. repeat split; econstructor. exists O. reflexivity.
Qed.
Definition span_eq data (s0 : span) (s1 : type_to_Type Span) := s0 = s1 /\ span_data_wf data s0.
Definition span_eq_take data s n t (s0 : span) (s1 : type_to_Type Span) : Prop :=
span_eq data s0 s1 /\ len s0 = n /\ len t = len s - n.
Lemma length_helper : forall data s v p1,
sem_PHOAS data s Length (Some (v, p1)) ->
s = p1 /\ v = len s.
Proof.
intros. simple inversion H; subst; simpl_existT.
inversion H0. inversion H3. inversion H4.
inversion H3. inversion H5. inversion H5.
inversion H4. inversion H4. inversion H4. inversion H2.
inversion H1. injection H3. intros. auto.
inversion H3. inversion H3. inversion H4.
inversion H5. inversion H5. inversion H5. inversion H5.
inversion H5. inversion H5.
Qed.
Lemma ite_helper : forall X data p1 hb (he1 : PHOAS X) he2 res,
sem_PHOAS data p1 (If hb Then he1 Else he2) res ->
(exists b, sem_VAL hb b /\ b = true /\ sem_PHOAS data p1 he1 res) \/
(exists b, sem_VAL hb b /\ b = false /\ sem_PHOAS data p1 he2 res).
Proof.
intros. inversion H; simpl_existT; subst X s vb ee et res.
left. exists b. split; auto.
right. exists b. split; auto.
Qed.
Lemma take_verif_adequate data s : forall (hn : VAL Nat) (n : N),
sem_VAL hn n ->
adequate (span_eq_take data s n) (take n)
(let% len := Length in
If hn <=! Var len
Then Take hn
Else Fail) data s.
Proof.
intros hn n VALN WF res SEM.
next_step SEM.
- eapply ite_helper in H6 as [[b [P0 [P1 P2]]]|[b [P0 [P1 P2]]]].
+ eapply length_helper in H3 as [P3 P4]. subst s v.
inversion P0. simpl_existT. subst X Y Z. simpl_existT. clear P0.
subst ob0 vv2 vv3 res0.
inversion H8. simpl_existT. subst n0 n1. clear H8.
next_step P2. next_step H7.
simpl in *. repeat VAL_unif. subst.
eexists. repeat split.
* unfold span_data_wf. simpl in *.
unfold span_data_wf in *. eapply N.leb_le in H6. lia.
* eapply O.
* unfold_MonSem. unfold run_take. unfold_MonSem.
rewrite H6. reflexivity.
+ eapply length_helper in H3 as [P3 P4]. subst s v.
inversion P0. simpl_existT. subst X Y Z. simpl_existT. clear P0.
subst ob0 vv2 vv3 res0.
inversion H8. simpl_existT. subst n0 n1. clear H8.
next_step P2. next_step H7.
repeat VAL_unif. subst.
exists O. simpl. unfold_MonSem. unfold run_take. unfold_MonSem.
rewrite H6. reflexivity.
- next_step H2.
Qed.
Lemma take_adequate data s : forall (hn : VAL Nat) (n : N),
sem_VAL hn n ->
n <= len s ->
adequate (span_eq_take data s n) (take n) (Take hn) data s.
Proof.
intros hn n VALN LE WF res SEM.
next_step SEM. VAL_unif. subst. simpl in *.
eexists. repeat split.
- unfold span_data_wf in *. simpl. lia.
- apply O.
- unfold_MonSem. unfold run_take. unfold_MonSem.
eapply N.leb_le in LE. rewrite LE. reflexivity.
Qed.
Lemma fail_adequate : forall data s X Y (R : span -> X -> type_to_Type Y -> Prop),
adequate R fail Fail data s.
Proof.
intros data s X Y R WF res SEM.
next_step SEM. exists O. reflexivity.
Qed.
Lemma lookupN_lt : forall pos X (data : list X),
pos < lengthN data ->
forall s v, lookupN data pos s = Res (s,nth (N.to_nat pos) data v).
Proof.
induction pos using N.peano_ind; simpl; intros X data LT s v.
- induction data; simpl in *; intros.
+ exfalso. eapply N.nlt_0_r. eexact LT.
+ rewrite lookupN_equation_2. reflexivity.
- destruct data; simpl in *.
+ exfalso. eapply N.nlt_0_r. eexact LT.
+ eapply N.succ_lt_mono in LT. eapply IHpos in LT.
rewrite <- N.succ_pos_spec. rewrite lookupN_equation_3.
rewrite N.succ_pos_spec. rewrite N.pred_succ.
rewrite N2Nat.inj_succ. eapply LT.
Qed.
Definition read_spec (s t : span) x0 (x1 : type_to_Type (NatN 8)) := s = t /\ x0 = x1.
Lemma read_verif_adequate data s : forall ht hn (n : type_to_Type Nat) (t : type_to_Type Span),
span_data_wf data t ->
sem_VAL ht t ->
sem_VAL hn n ->
adequate (read_spec s) (read t n)
(If hn <! EUna ELen ht
Then
Read ht hn
Else Fail) data s.
Proof.
intros ht hn n t RS VALR VALn WF res SEM.
eapply ite_helper in SEM as [[b [P0 [P1 P2]]]| [b [P0 [P1 P2]]]].
- inversion P0. simpl_existT. subst X Y Z. simpl_existT. subst ob0 vv2 vv3 res0. clear P0.
inversion H7. simpl_existT. subst X Y ou0 vv1 res0. clear H7.
inversion H8. simpl_existT. subst n0 n1. clear H8.
inversion H6. simpl_existT. subst s0 v1. clear H6.
next_step P2. repeat VAL_unif. subst. simpl in *.
do 2 eexists. repeat split.
exists O. unfold run_read. unfold_MonSem.
rewrite H9. erewrite lookupN_lt. reflexivity.
unfold span_data_wf in RS. eapply N.ltb_lt in H9. lia.
- inversion P0. simpl_existT. subst X Y Z. simpl_existT. subst ob0 vv2 vv3 res0. clear P0.
inversion H7. simpl_existT. subst X Y ou0 vv1 res0. clear H7.
inversion H8. simpl_existT. subst n0 n1. clear H8.
inversion H6. simpl_existT. subst s0 v1. clear H6.
next_step P2. repeat VAL_unif. subst. simpl in *.
exists O. unfold run_read. unfold_MonSem.
rewrite H9. reflexivity.
Qed.
Lemma read_adequate data s : forall ht hn (n : type_to_Type Nat) (t : type_to_Type Span),
span_data_wf data t ->
sem_VAL ht t ->
sem_VAL hn n ->
n < len t ->
adequate (read_spec s) (read t n) (Read ht hn) data s.
Proof.
intros ht hn n t RS VALr VALn LT WF res SEM.
next_step SEM.
repeat VAL_unif. subst. eexists. repeat split. simpl in *.
exists O. unfold run_read. eapply N.ltb_lt in LT. rewrite LT.
unfold_MonSem. erewrite lookupN_lt. reflexivity.
eapply N.ltb_lt in LT. unfold span_data_wf in RS. lia.
Qed.
Lemma alt_adequate data s : forall X Y R (e0 : NomG X) e1 (h0 : PHOASV Y) h1,
adequate R e0 h0 data s ->
adequate R e1 h1 data s ->
adequate R (alt e0 e1) (Alt h0 h1) data s.
Proof.
intros X Y R e0 e1 h0 h1 ADE0 ADE1 WF res SEM.
next_step SEM.
- eapply ADE0 in H2 as [x [RX [fuel0 P0]]]; auto.
exists x. repeat split; auto. exists fuel0.
simpl. unfold run_alt. unfold_MonSem. rewrite P0. reflexivity.
- eapply ADE0 in H4 as [fuel0 P0]; auto. destruct res as [[v p]|].
+ eapply ADE1 in H5 as [x [ROK [fuel1 P1]]]; auto.
eexists. repeat split; eauto.
exists (Nat.max fuel1 fuel0). simpl.
unfold run_alt. unfold_MonSem. eapply run_fuel_mono in P0.
erewrite P0. eapply run_fuel_mono in P1.
erewrite P1. reflexivity. auto. lia. auto. lia.
+ eapply ADE1 in H5 as [fuel1 P1]; auto. exists (Nat.max fuel1 fuel0).
simpl. unfold run_alt. unfold_MonSem.
eapply run_fuel_mono in P0. erewrite P0.
eapply run_fuel_mono in P1. erewrite P1. reflexivity. auto. lia. auto. lia.
Qed.
Definition local_spec {X Y} (R : X -> Y -> Prop) (s t : span) x y :=
s = t /\ R x y.
Lemma peek_adequate data s : forall X Y R (e : NomG X) (h : PHOASV Y),
adequate (fun _ => R) e h data s ->
adequate (local_spec R s) (peek e) (Local (Const ENone) h) data s.
Proof.
intros X Y R e h ADEE WF res SEM.
inversion SEM; simpl_existT; clear SEM.
- subst X0 p ex hs.
inversion H5. simpl_existT. subst lit v X0. clear H5.
inversion H3. simpl_existT. subst X0 res0. inversion H2.
- subst X0 p ex hs.
inversion H5. simpl_existT. subst lit v0 X0. clear H5.
inversion H3. simpl_existT. subst X0 res0. inversion H2.
- subst X0 p hs ex res res0. eapply ADEE in H6 as [fuel RUN].
exists fuel. simpl. unfold_MonSem. rewrite RUN. reflexivity. auto.
- subst X0 p hs ex res. eapply ADEE in H6 as [x [Rr [fuel RUN]]].
eexists. repeat split; eauto.
exists fuel. simpl. unfold_MonSem. rewrite RUN. reflexivity. auto.
Qed.
Lemma scope_adequate data s : forall (erange : VAL Span) range X Y R (e : NomG X) (h : PHOASV Y),
adequate (fun _ => R) e h data range ->
sem_VAL erange range ->
span_data_wf data range ->
adequate (local_spec R s) (scope range e) (Local (EUna ESome erange) h) data s.
Proof.
intros erange range X Y R e h ADEE VALR WFR WF res SEM.
inversion SEM; simpl_existT; clear SEM.
- subst X0 p hs ex res.
inversion H5. simpl_existT. subst Y0 X0 ou0 vv1 res. clear H5.
next_step H8. injection H5. clear H5. intro. repeat VAL_unif. subst.
eapply ADEE in H6 as [fuel RUN]. exists fuel.
simpl. unfold_MonSem. rewrite RUN. reflexivity. eauto.
- subst X0 p hs ex res.
inversion H5. simpl_existT. subst Y0 X0 ou0 vv1 res. clear H5.
next_step H8. injection H5. clear H5. intro. repeat VAL_unif. subst.
eapply ADEE in H6 as [x [Rx [fuel RUN]]]. exists x.
repeat split; auto. exists fuel.
simpl. unfold_MonSem. rewrite RUN. reflexivity. eauto.
- inversion H5. clear H5. inversion H13. simpl_existT. subst. inversion H17.
- inversion H5. clear H5. inversion H13. simpl_existT. subst. inversion H17.
Qed.
Fixpoint case_adequate {X Y} (R : span -> X -> type_to_Type Y -> Prop) (e : N -> NomG X)
(c : case_switch Y) (l : list N) data s {struct c}: Prop :=
match c in (case_switch T) return ((span -> X -> type_to_Type T -> Prop) -> Prop) with
| LSnil p => fun R => forall (n : N), n ∉ l -> adequate R (e n) p data s
| LScons n p c => fun R =>
adequate R (e n) p data s /\ case_adequate R e c (n :: l) data s
end R.
Lemma switch_adequate_bis data s :
forall X Y R (cases : case_switch Y) n (e : N -> NomG X) (hn : VAL Nat) l,
sem_VAL hn n ->
n ∉ l ->
case_adequate R e cases l data s ->
adequate R (e n) (Switch hn cases) data s.
Proof.
induction cases; simpl; intros.
- intros WF res SEM. next_step SEM. next_step H8.
repeat VAL_unif. subst. eapply H1; auto.
- destruct H1 as [P0 P1]. intros WF res SEM. next_step SEM. next_step H7.
+ repeat VAL_unif. subst. eapply P0; auto.
+ repeat VAL_unif. subst. eapply IHcases.
3 : eapply P1. eapply H. intro H1. inversion H1.
subst. contradiction. subst. contradiction. auto.
econstructor; eauto.
Qed.
Definition app {X Y} (e : X -> Y) (x : X): Y := e x.
Lemma switch_adequate :
forall (hn : VAL Nat) X Y (R : span -> X -> type_to_Type Y -> Prop) cases (e : N -> NomG X) n data s,
sem_VAL hn n ->
case_adequate R e cases nil data s ->
adequate R (e n) (Switch hn cases) data s.
Proof.
intros. eapply switch_adequate_bis; eauto. intro P. inversion P.
Qed.
Lemma match_N_adequate data s :
forall (hn : VAL Nat) X Y (R : span -> X -> type_to_Type Y -> Prop) cases (e1 : NomG X) e2 n,
sem_VAL hn n ->
case_adequate R (fun n => match n with
| 0 => e1
| Npos p => e2 p
end) cases nil data s ->
adequate R (match n with
| 0 => e1
| Npos p => e2 p
end) (Switch hn cases) data s.
Proof.
intros.
assert (match n with
| 0 => e1
| N.pos p0 => e2 p0
end = app (fun n => match n with
| 0 => e1
| N.pos p0 => e2 p0
end) n) as P by auto.
rewrite P. clear P.
eapply switch_adequate; eauto.
Qed.
Lemma natN_switch_adequate_bis :
forall X Y (R : span -> X -> type_to_Type Y -> Prop) cases p (n : natN p)
(hn : VAL Nat) (e1 : NomG X) e2 data l s,
sem_VAL hn (↓ n) ->
↓ n ∉ l ->
case_adequate R (fun n => match n with
| N0 => e1
| N.pos p => e2 p
end) cases l data s ->
adequate R (match SizeNat.val n with
| N0 => e1
| N.pos p => e2 p
end) (Switch hn cases) data s.
Proof.
intros X Y R cases p n hn e1 e2 data l s P0 P1 case.
assert (match ↓ n with
| 0 => e1
| N.pos p0 => e2 p0
end = app (fun n => match n with
| 0 => e1
| N.pos p0 => e2 p0
end) (↓ n)) by auto.
rewrite H. clear H.
eapply switch_adequate_bis; eauto.
Qed.
Lemma natN_switch_adequate :
forall p (hn : VAL Nat) X Y (R : span -> X -> type_to_Type Y -> Prop) cases (n : natN p)
(e1 : NomG X) e2 data s,
sem_VAL hn (↓ n) ->
case_adequate R (fun n => match n with
| 0 => e1
| N.pos p => e2 p
end) cases nil data s ->
adequate R (match SizeNat.val n with
| 0 => e1
| N.pos p => e2 p
end) (Switch hn cases) data s.
Proof.
intros. eapply natN_switch_adequate_bis; eauto. intro P. inversion P.
Qed.
Lemma LScons_adequate data s : forall n X Y (R : span -> X -> type_to_Type Y -> Prop) cases (e : N -> NomG X) h l,
adequate R (e n) h data s ->
case_adequate R e cases (n :: l) data s ->
case_adequate R e (LScons n h cases) l data s.
Proof. simpl. intros. split; auto. Qed.
Lemma LSnil_adequate data s: forall X Y (R : span -> X -> type_to_Type Y -> Prop) (e : N -> NomG X) h l,
(forall n, n ∉ l -> adequate R (e n) h data s) ->
case_adequate R e (LSnil h) l data s.
Proof. simpl. intros. eapply H. eapply H0. Qed.
Lemma ite_adequate data s :
forall (hb : VAL Bool) X Y R (b : bool) (et : NomG X) (ht : PHOASV Y) ef hf,
sem_VAL hb b ->
(b = true -> adequate R et ht data s) ->
(b = false -> adequate R ef hf data s) ->
adequate R (if b then et else ef) (If hb Then ht Else hf) data s.
Proof.
intros hb X Y R b et ht ef hf VALb ADET ADEF res WF SEM.
eapply ite_helper in SEM as [[b0 [P0 [P1 P2]]] | [b0 [P0 [P1 P2]]]].
- repeat VAL_unif. subst. eapply ADET; auto.
- repeat VAL_unif. subst. eapply ADEF; auto.
Qed.
Definition compt_Some n {X Y} (R : span -> X -> Y -> Prop) (RX : N -> X -> Prop) (RY : N -> Y -> Prop) s x0 x1:=
R s x0 x1 /\ RX n x0 /\ RY n x1.
Lemma repeat_Some_adequate_aux data s :
forall (hmax : VAL (Option Nat)) X Y (hb : VAL Y) (R : span -> X -> type_to_Type Y -> Prop) RX RY max e b he rb start,
sem_VAL hb rb ->
sem_VAL hmax (Some max) ->
compt_Some start R RX RY s b rb ->
(forall rv v n s,
compt_Some n R RX RY s v rv ->
n < max + start ->
adequate (compt_Some (N.succ n) R RX RY) (e v) (he rv) data s) ->
forall res,
span_data_wf data s ->
sem_PHOAS data s (Repeat hmax he hb) res ->
match res with
| None =>
exists fuel, run fuel (repeat (Some max) e b) data s = NoRes
| Some (v, t) =>
exists r,
R t r v /\
exists fuel, run fuel (repeat (Some max) e b) data s = Res (t, r)
end.
Proof.
intros hmax X Y hb R RX RY max e b he rb start VALb VALmax R'B IH res WF SEM.
next_step SEM.
- VAL_unif. subst. inversion H5.
- VAL_unif. subst. simpl in *. injection H5. intro; subst. clear H5.
clear VALmax VALb hb. revert b start R'B IH. induction H7.
+ intros. destruct R'B as [P0 [P1 P2]]. eexists. split. eauto. exists O. auto.
+ intros. eapply (IH _ _ start) in H as [r [Rr [fuel RUN]]].
eapply IHsem_repeat_Some in Rr. destruct res as [[v p]|].
* destruct Rr as [r0 [R0 [fuel0 RUN0]]]. subst.
eexists. repeat split. eauto.
exists (Nat.max fuel fuel0). rewrite N2Nat.inj_succ. unfold_MonSem.
erewrite run_fuel_mono; eauto. simpl.
erewrite repeat_some_fuel_mono. eapply RUN0.
intros. erewrite run_fuel_mono; eauto. rewrite H. auto. reflexivity. lia.
intro. rewrite H in RUN0. inversion RUN0. lia.
* destruct Rr as [fuel0 RUN0].
exists (Nat.max fuel fuel0). rewrite N2Nat.inj_succ.
unfold_MonSem. erewrite run_fuel_mono; eauto. simpl.
erewrite repeat_some_fuel_mono. eapply RUN0.
intros. erewrite run_fuel_mono; eauto. rewrite H. auto. reflexivity. lia.
intro. rewrite H in RUN0. inversion RUN0. lia.
* unfold span_data_wf in *. eapply run_mono in RUN as [P2 P3]. lia.
* intros. rewrite <- N.add_succ_comm in H0. eapply IH; eauto.
* eauto.
* lia.
* auto.
+ intros. eapply IH in H as [fuel RUN]; eauto.
exists fuel. unfold_MonSem. rewrite N2Nat.inj_succ. rewrite RUN. reflexivity. lia.
Qed.
Lemma repeat_Some_adequate :
forall (hmax : VAL (Option Nat)) Y (hb : VAL Y) X (R : span -> X -> type_to_Type Y -> Prop) RX RY max e b he rb data s,
sem_VAL hb rb ->
sem_VAL hmax (Some max) ->
compt_Some 0 R RX RY s b rb ->
(forall rv v n s,
compt_Some n R RX RY s v rv ->
n < max ->
adequate (compt_Some (N.succ n) R RX RY) (e v) (he rv) data s) ->
adequate R (repeat (Some max) e b) (Repeat hmax he hb) data s.
Proof.
intros. intros WF res SEM. eapply repeat_Some_adequate_aux; eauto.
intros. eapply H2; eauto. lia.
Qed.
Lemma repeat_Some2_adequate :
forall (hn : VAL Nat) X Y (hb : VAL Y) n e b he rb (R : span -> X -> type_to_Type Y -> Prop) data s,
sem_VAL hb rb ->
sem_VAL hn n ->
R s b rb ->
(forall rv v t, R t v rv -> adequate R (e v) (he rv) data t) ->
adequate R (repeat (Some n) e b) (Repeat (EUna ESome hn) he hb) data s.
Proof.
intros hn X Y hb n e b he rb R data s VALb VALn R'B IH.
subst. eapply repeat_Some_adequate; repeat econstructor; eauto.
instantiate (1 := fun _ _ => True%type). simpl. trivial.
instantiate (1 := fun _ _ => True%type). simpl. trivial.
intros. eapply consequence_adequate. eapply IH; eauto.
destruct H. auto. intros. repeat split; auto.
Qed.
Lemma repeat_adequate data s :
forall (ho : VAL (Option Nat)) X Y (hb : VAL Y) (R : span -> X -> type_to_Type Y -> Prop) o e b he rb,
sem_VAL hb rb ->
sem_VAL ho o ->
R s b rb ->
(forall rv v t, R t v rv -> adequate R (e v) (he rv) data t) ->
adequate R (repeat o e b) (Repeat ho he hb) data s.
Proof.
intros ho X Y hb R o e b he rb VALb VALn RB IH WF res SEM.
next_step SEM.
- VAL_unif. subst. revert b RB. induction H7.
+ intros. eapply IH in H; eauto. destruct H as [fuel P0].
do 2 eexists. repeat split; eauto. exists (S fuel).
simpl. unfold_MonSem.
erewrite run_fuel_mono; eauto. simpl. reflexivity.
+ intros. eapply IH in H; eauto. destruct H as [r [Rr [fuel RUN]]].
subst. eapply IHsem_repeat_None in Rr; eauto.
2 : instantiate (1 := Var ve); econstructor. destruct res as [[p v]|].
* destruct Rr as [r0 [Rr0 [fuel0 RUN0]]].
do 2 eexists. repeat split; eauto. exists (S (Nat.max fuel0 fuel)).
simpl. unfold_MonSem. erewrite run_fuel_mono; eauto. simpl.
simpl in RUN0. rewrite ret_neutral_right in RUN0.
unfold_MonSem. eapply repeat_none_fuel_mono in RUN0.
unfold_MonSem. erewrite RUN0. reflexivity.
intros. eapply run_fuel_mono. eapply H. auto. lia. lia. lia. auto. lia.
* destruct Rr as [fuel0 RUN0].
simpl in RUN0. rewrite ret_neutral_right in RUN0.
unfold_MonSem. destruct fuel0. inversion RUN0.
destruct (match run (S fuel0) (e r) data p1 with
| Res (s, x) =>
(fix sem_repeat_none (n : nat) (x1 : X) {struct n} : MonSem X :=
match n with
| 0%nat => λ _ : span, NoFuel
| S n0 =>
λ s0 : span,
match
match run (S fuel0) (e x1) data s0 with
| Res (s1, x3) => sem_repeat_none n0 x3 s1
| NoRes => NoRes
| NoFuel => NoFuel
end
with
| Res (s1, v) => Res (s1, v)
| NoRes => Res (s0, x1)
| NoFuel => NoFuel
end
end) fuel0 x s
| NoRes => NoRes
| NoFuel => NoFuel
end). destruct x. inversion RUN0. inversion RUN0. inversion RUN0.
* unfold span_data_wf in *. eapply run_mono in RUN as [P0 P1]. lia.
- VAL_unif. subst. eapply repeat_Some2_adequate; eauto.
instantiate (1 := Const (ENat n)). repeat econstructor.
eapply SRepeatS. repeat econstructor. eauto. auto.
Qed.
Ltac Pos_is_literal p :=
match p with
| xO ?p => Pos_is_literal p
| xI ?p => Pos_is_literal p
| xH => idtac
end.
Ltac N_is_literal n :=
match n with
| N0 => idtac
| Npos ?p => Pos_is_literal p
end.
Ltac Bool_is_literal b :=
match b with
| true => idtac
| false => idtac
end.
Ltac String_is_literal s :=
match s with
| String.EmptyString => idtac
| String.String _ ?s => String_is_literal s
end.
Ltac is_None o :=
match o with
| Datatypes.None => idtac
end.
Ltac is_Unit o :=
match o with
| Datatypes.tt => idtac
end.
Ltac natN_is_literal n :=
match n with
| SizeNat.mk_natN _ _ _ => idtac
end.
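(* Added note: [what_is e k] syntactically reifies the Gallina term [e] into a
   PHOAS [VAL] expression and hands the result to the continuation [k]: it
   first tries literal constants, then variables and hypotheses mentioning
   [e], and finally the supported binary and unary operators. *)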
Ltac what_is e k :=
match e with
(* Const *)
| ?e => N_is_literal e; k (@Const val _ (ENat e))
| ?e => natN_is_literal e; k (@Const val _ (ENatN e))
| ?e => Bool_is_literal e; k (@Const val _ (EBool e))
| ?e => String_is_literal e; k (@Const val _ (EString e))
| ?e => is_None e; k (@Const val _ ENone)
| ?e => is_Unit e; k (@Const val _ EUnit)
(* Var *)
| _ =>
match goal with
| H : sem_VAL ?h e |- _ => k h
| H : ?v = e |- _ => is_var v; k (@Var val _ v)
| H : e = ?v |- _ => is_var v; k (@Var val _ v)
(* injective functions: f v = f y -> v = y *)
| H : SizeNat.val ?v = SizeNat.val e |- _ => is_var v; k (@Var val _ v)
| H : SizeNat.val e = SizeNat.val ?v |- _ => is_var v; k (@Var val _ v)
| H : Some e = Some ?v |- _ => is_var v; k (@Var val _ v)
| H : Some ?v = Some e |- _ => is_var v; k (@Var val _ v)
| H : negb e = negb ?v |- _ => is_var v; k (@Var val _ v)
| H : negb ?v = negb e |- _ => is_var v; k (@Var val _ v)
| H : inl e = inl ?v |- _ => is_var v; k (@Var val _ v)
| H : inl ?v = inl e |- _ => is_var v; k (@Var val _ v)
| H : inr e = inr ?v |- _ => is_var v; k (@Var val _ v)
| H : inr ?v = inr e |- _ => is_var v; k (@Var val _ v)
(* the remaining patterns *)
| H : fst ?v = e |- _ => is_var v; k (@EUna val _ _ EFst (@Var val _ v))
| H : e = fst ?v |- _ => is_var v; k (@EUna val _ _ EFst (@Var val _ v))
| H : snd ?v = e |- _ => is_var v; k (@EUna val _ _ ESnd (@Var val _ v))
| H : e = snd ?v |- _ => is_var v; k (@EUna val _ _ ESnd (@Var val _ v))
| H : string_get ?n ?s = Some e |- _ =>
what_is n ltac:(fun n => what_is s ltac:(fun s => k (@EBin val _ _ _ EStringGet n s)))
end
| ?e =>
match goal with
| |- _ => is_var e; k (@Var val _ e)
end
(* EBin *)
| ?l + ?r => what_is l ltac:(fun l => what_is r ltac:(fun r => k (@EBin val _ _ _ EAdd l r)))
| ?l - ?r => what_is l ltac:(fun l => what_is r ltac:(fun r => k (@EBin val _ _ _ ESub l r)))
| ?l * ?r => what_is l ltac:(fun l => what_is r ltac:(fun r => k (@EBin val _ _ _ EMult l r)))
| ?l / ?r => what_is l ltac:(fun l => what_is r ltac:(fun r => k (@EBin val _ _ _ EDiv l r)))
| ?l `mod` ?r => what_is l ltac:(fun l => what_is r ltac:(fun r => k (@EBin val _ _ _ EMod l r)))
| ?l =? ?r => what_is l ltac:(fun l => what_is r ltac:(fun r => k (@EBin val _ _ _ EEq l r)))
| SizeNat.eqb ?l ?r => what_is l ltac:(fun l => what_is r ltac:(fun r => k (@EBin val _ _ _ EEq (EUna EVal l) (EUna EVal r))))
| ?l <=? ?r => what_is l ltac:(fun l => what_is r ltac:(fun r => k (@EBin val _ _ _ ELe l r)))
| ?l <? ?r => what_is l ltac:(fun l => what_is r ltac:(fun r => k (@EBin val _ _ _ ELt l r)))
| ?l && ?r => what_is l ltac:(fun l => what_is r ltac:(fun r => k (@EBin val _ _ _ EAnd l r)))
| ?l || ?r => what_is l ltac:(fun l => what_is r ltac:(fun r => k (@EBin val _ _ _ EOr l r)))
| string_get ?n ?s => what_is n ltac:(fun n => what_is s ltac:(fun s => k (@EBin val _ _ _ EStringGet n s)))
| (?l, ?r) => what_is l ltac:(fun l => what_is r ltac:(fun r => k (@EBin val _ _ _ EPair l r)))
| ?l ↑ ?r => what_is l ltac:(fun l => what_is r ltac:(fun r => k (@EBin val _ _ _ EpTo2p l r)))
(* EUna *)
| SizeNat.val ?n => what_is n ltac:(fun n => k (@EUna val _ _ EVal n ))
| negb ?n => what_is n ltac:(fun n => k (@EUna val _ _ ENot n))
| string_length ?v => what_is v ltac:(fun v => k (@EUna val _ _ EStringLen v))
| fst ?v => what_is v ltac:(fun v => k (@EUna val _ _ EFst v))
| snd ?v => what_is v ltac:(fun v => k (@EUna val _ _ ESnd v))
| Some ?v => what_is v ltac:(fun v => k (@EUna val _ _ ESome v))
| inl ?v => what_is v ltac:(fun v => k (@EUna val _ _ EInl v))
| inr ?v => what_is v ltac:(fun v => k (@EUna val _ _ EInr v))
| len ?s => what_is s ltac:(fun s => k (@EUna val _ _ ELen s))
end.
Lemma test: False.
String_is_literal "abc". String_is_literal "". pose (h := ""). Fail String_is_literal h.
Fail is_None (Some 1) . is_None (None : option nat).
Fail is_Some (None : option nat). is_Unit tt.
is_Unit tt. what_is tt ltac:(fun v => pose v).
Abort.
Ltac clean_up :=
cbv beta in *;
match goal with
| H : _ /\ _ |- _ =>
let P0 := fresh "P" in
let P1 := fresh "P" in
destruct H as [P0 P1]
| H : span_eq_take _ _ _ _ _ _ |- _ =>
let P0 := fresh "P" in
let P1 := fresh "P" in
let P2 := fresh "P" in
let P3 := fresh "P" in
destruct H as [[P0 P1] [P2 P3]]
| H : span_eq _ _ _ |- _ =>
let P0 := fresh "P" in
let P1 := fresh "P" in
destruct H as [P0 P1]
| H : length_spec _ _ _ _ |- _ =>
let P0 := fresh "P" in
let P1 := fresh "P" in
let P2 := fresh "P" in
destruct H as [P0 [P1 P2]]
| H : read_spec _ _ _ _ |- _ =>
let P0 := fresh "P" in
let P1 := fresh "P" in
destruct H as [P0 P1]
| H : local_spec _ _ _ _ _ |- _ =>
let P0 := fresh "P" in
let P1 := fresh "P" in
destruct H as [P0 P1]
| H : compt_Some _ _ _ _ _ _ _ |- _ =>
let P0 := fresh "P" in
let P1 := fresh "P" in
let P2 := fresh "P" in
destruct H as [P0 [P1 P2]]
end.
Ltac finish_me := subst; repeat econstructor; eauto.
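(* Added note: [step] matches the head constructor of the current [adequate]
   goal (bind, take, read, if, fail, length, peek, scope, repeat, ret) and
   applies the corresponding adequacy lemma, using [what_is] to reify the
   Gallina arguments into PHOAS values and [finish_me]/[lia] to discharge
   side conditions. *)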
Ltac step := repeat clean_up; (* subst; *)
match goal with
| |- adequate _ (let! _ := _ in _) _ _ _ =>
eapply bind_adequate; [ idtac | intros ]
| |- adequate _ (take ?e) _ _ _ =>
what_is e ltac:(fun v => eapply (take_adequate _ _ v));
[finish_me | lia]
| |- adequate _ (take ?e) _ _ _ =>
what_is e ltac:(fun v => eapply (take_verif_adequate _ _ v)); finish_me
| |- adequate _ (read ?s ?n) _ _ _ =>
what_is s ltac:(fun s => what_is n ltac:(fun n => eapply (read_adequate _ _ s n)));
[subst; eauto | finish_me | finish_me | lia]
| |- adequate _ (read ?s ?n) _ _ _ =>
what_is s ltac:(fun s => what_is n ltac:(fun n => eapply (read_verif_adequate _ _ s n)));
[subst; eauto | finish_me | finish_me]
| |- adequate _ (if ?b then ?et else ?ef) _ _ _ =>
what_is b ltac:(fun s => eapply (ite_adequate _ _ s));
[finish_me | intro | intro ]
| |- adequate _ fail _ _ _ => eapply fail_adequate
| |- adequate _ length _ _ _ => eapply length_adequate
| |- adequate _ (peek _) _ _ _ => eapply peek_adequate
| |- adequate _ (scope ?range _) _ _ _ =>
what_is range ltac:(fun range => eapply (scope_adequate _ _ range));
[idtac | finish_me | subst; eauto]
| |- adequate _ (repeat (Some ?range) _ ?b) _ _ _ =>
what_is (Some range)
ltac:(fun ho => what_is b ltac:(fun vb => eapply (repeat_Some_adequate ho _ vb)));
[finish_me | finish_me | repeat split; finish_me | intros]
| |- adequate _ (repeat ?o _ ?b) _ _ _ =>
what_is o
ltac:(fun ho => what_is b ltac:(fun vb => eapply (repeat_adequate _ _ ho _ _ vb)));
[finish_me | finish_me | finish_me | intros]
| |- adequate _ (ret ?v) _ _ _ =>
what_is v ltac:(fun v => eapply (ret_adequate _ _ _ v));
finish_me
end.
Definition rest_spec data t x y := len t = 0 /\ span_eq data x y /\ x = y.
Definition rest_adequate_sig : {code | forall data s, adequate (rest_spec data) rest code data s}.
eapply exist. intros. unfold rest. repeat step.
eapply consequence_adequate. clean_up. step.
intros t0 v hv [P2 [P3 P4]]. repeat clean_up. subst. repeat split; eauto.
rewrite P4. eapply N.sub_diag.
Defined.
Lemma rest_adequate : forall data s, adequate (rest_spec data) rest (`rest_adequate_sig) data s.
Proof. intros. destruct rest_adequate_sig as [p a]. eapply a. Qed.
Definition verify_spec {X Y} (R : span -> X -> type_to_Type Y -> Prop)
(hb : val Y -> VAL Bool) (b : X -> bool) s x y :=
R s x y /\ sem_VAL (hb y) true /\ b x = true.
Definition verify_adequate_sig {Y} h (hb : val Y -> VAL Bool) :
{code | forall X b (R : span -> X -> type_to_Type Y -> Prop),
(forall y x s,
R s x y ->
sem_VAL (hb y) (b x)) ->
forall data s (e : NomG X),
adequate R e h data s ->
adequate (verify_spec R hb b) (verify e b) code data s}.
eapply exist. intros. unfold verify.
repeat step. eauto. pose S := H1. eapply H in S. step.
eapply (ret_adequate _ _ _ (Var vres)). econstructor.
repeat split; auto. rewrite -H2. auto. step.
Defined.
Lemma verify_adequate {Y} h (hb : val Y -> VAL Bool) :
forall X b (R : span -> X -> type_to_Type Y -> Prop),
(forall y x s,
R s x y ->
sem_VAL (hb y) (b x)) ->
forall data s (e : NomG X),
adequate R e h data s ->
adequate (verify_spec R hb b) (verify e b) (proj1_sig (verify_adequate_sig h hb)) data s.
Proof. intros. destruct verify_adequate_sig as [a p]. eapply p; eauto. Qed.
Lemma decompose_app_adequate :
forall Z (hv : VAL Z) X Y v (e : type_to_Type Z -> NomG X) h (R : span -> X -> type_to_Type Y -> Prop) data s,
sem_VAL hv v ->
adequate R (e v) (h v) data s ->
adequate R (e v) (let% v := Val hv in h v) data s.
Proof.
intros Z hv X Y v e h R data s VALv ADE WF res SEM.
next_step SEM.
- next_step H3. VAL_unif. subst. eapply ADE; auto.
- next_step H2.
Qed.
Lemma decompose_app2_adequate :
forall A0 (hv0 : VAL A0) B0 (hv1 : VAL B0) A B vv0 v0 vv1 v1 X Y (e : A -> B -> NomG X)
h (R : span -> X -> type_to_Type Y -> Prop) data s,
sem_VAL hv0 vv0 ->
sem_VAL hv1 vv1 ->
adequate R (e v0 v1) (h vv0 vv1) data s ->
adequate R (e v0 v1) (let% v0 := Val hv0 in
let% v1 := Val hv1 in
h v0 v1) data s.
Proof.
intros A0 hv0 B0 hv1 A B vv0 v0 vv1 v1 X Y e h R data s VALv0 VALv1 ADE WF res SEM.
next_step SEM.
- next_step H3. next_step H6.
+ next_step H3. VAL_unif. subst. eapply ADE; auto.
+ next_step H2.
- next_step H2.
Qed.
Definition repeat_compt_Some_adequate_sig (hon : VAL (Option Nat)) {Y} (h : val Nat -> val Y -> PHOASV Y) (hb : VAL Y) :
{code | forall X vb c e (b : X) (R : span -> X -> type_to_Type Y -> Prop) data s,
sem_VAL hon (Some c) ->
sem_VAL hb vb ->
R s b vb ->
(forall n vx x t,
compt_Some n R (fun _ _ => True%type) (fun _ _ => True%type) t x vx ->
n < c ->
adequate R (e n x) (h n vx) data t) ->
adequate R (repeat_compt (Some c) e b) code data s}.
Proof.
eapply exist. intros. unfold repeat_compt.
repeat step.
eapply (repeat_Some_adequate hon _ (EBin EPair (Const (ENat 0)) hb)).
repeat econstructor; simpl; eauto. eauto.
repeat econstructor; simpl; eauto.
instantiate (1 := (fun s x y => x.1 = y.1 /\ R s x.2 y.2 )). simpl. split; auto.
instantiate (1 := (fun n x => n = x.1)). simpl. reflexivity.
instantiate (1 := (fun n x => n = x.1)). simpl. reflexivity.
intros. destruct H3 as [[P0 P3] [P1 P2]]. step.
eapply (decompose_app2_adequate _ (EUna EFst (Var rv)) _ (EUna ESnd (Var rv)));
repeat econstructor.
rewrite P0. eapply H2. repeat split; auto. lia.
eapply (ret_adequate _ _ _
(EBin EPair (EBin EAdd (EUna EFst (Var rv)) (Const (ENat 1))) (Var vres))); repeat econstructor.
simpl. rewrite P0. reflexivity. simpl. auto. simpl. lia. simpl. lia.
destruct H3.
eapply (ret_adequate _ _ _ (EUna ESnd (Var vres))); repeat econstructor. auto.
Defined.
Lemma repeat_compt_Some_adequate (hon : VAL (Option Nat)) {Y} (h : val Nat -> val Y -> PHOASV Y) (hb : VAL Y) :
forall X vb c e (b : X) (R : span -> X -> type_to_Type Y -> Prop) data s,
sem_VAL hon (Some c) ->
sem_VAL hb vb ->
R s b vb ->
(forall (vn : val Nat) n vx x t,
vn = n ->
compt_Some n R (fun _ _ => True%type) (fun _ _ => True%type) t x vx ->
n < c ->
adequate R (e n x) (h vn vx) data t) ->
adequate R (repeat_compt (Some c) e b) (proj1_sig (repeat_compt_Some_adequate_sig hon h hb)) data s.
Proof. intros. destruct repeat_compt_Some_adequate_sig as [a p]. eapply p; eauto. Qed.
Definition repeat_compt_adequate_sig (hon : VAL (Option Nat)) {Y} (h : val Nat -> val Y -> PHOASV Y) hb :
{code | forall X vb on e b (R : span -> X -> type_to_Type Y -> Prop) data s,
sem_VAL hon on ->
sem_VAL hb vb ->
R s b vb ->
(forall n vx x t, R t x vx -> adequate R (e n x) (h n vx) data t) ->
adequate R (repeat_compt on e b) code data s}.
Proof.
eapply exist. intros. unfold repeat_compt.
step. eapply (repeat_adequate _ _ hon _ _ (EBin EPair (Const (ENat 0)) hb) _);
repeat econstructor; eauto.
simpl. instantiate (1 := fun s x y => x.1 = y.1 /\ R s x.2 y.2). split; auto.
intros. destruct H3. step.
eapply (decompose_app2_adequate _ (EUna EFst (Var rv)) _ (EUna ESnd (Var rv)));
repeat econstructor.
rewrite H3. eapply H2; eauto.
eapply (ret_adequate _ _ _ (EBin EPair (EBin EAdd (EUna EFst (Var rv)) (Const (ENat 1))) (Var vres))); repeat econstructor.
simpl. lia. simpl. auto. destruct H3.
eapply (ret_adequate _ _ _ (EUna ESnd (Var vres))); repeat econstructor. simpl in *.
destruct H3. auto.
Defined.
Lemma repeat_compt_adequate (hon : VAL (Option Nat)) {Y} (h : val Nat -> val Y -> PHOASV Y) hb :
forall X vb on e b (R : span -> X -> type_to_Type Y -> Prop) data s,
sem_VAL hon on ->
sem_VAL hb vb ->
R s b vb ->
(forall n vx x t, R t x vx -> adequate R (e n x) (h n vx) data t) ->
adequate R (repeat_compt on e b) (proj1_sig (repeat_compt_adequate_sig hon h hb)) data s.
Proof. intros. destruct repeat_compt_adequate_sig as [a p]. eapply p; eauto. Qed.
Lemma string_get_lt : forall s n, n < string_length s -> exists r, string_get n s = Some r.
Proof.
induction s; simpl; intros.
- exfalso. eapply N.nlt_0_r. eauto.
- destruct n.
+ rewrite string_get_equation_2. eexists. reflexivity.
+ rewrite <- N.succ_pos_pred in H.
eapply N.succ_lt_mono in H.
eapply IHs in H. destruct H.
rewrite string_get_equation_3.
exists x. rewrite <- N.pos_pred_spec. auto.
Qed.
Definition tag_spec str s t (x : span) (y : type_to_Type Span) :=
len t = len s - string_length str /\ x = y.
Definition tag_adequate_sig (hs : VAL String) :
{code : PHOAS Span | forall str, sem_VAL hs str -> forall data s,
adequate (tag_spec str s) (tag str) code data s}.
eapply exist. intros. unfold tag.
repeat step.
eapply (repeat_compt_Some_adequate (EUna ESome (EUna EStringLen hs)) _ (Const (EBool true)));
repeat econstructor; eauto.
instantiate (1 := fun t0 x y => t = t0 /\ x = y). split; auto.
intros. destruct H1 as [[P5 P6] [P3 P4]]. step. step.
clean_up. eapply string_get_lt in H2. destruct H2. rewrite H1. step.
repeat step.
Defined.
Lemma tag_adequate (hs : VAL String) :
forall str, sem_VAL hs str -> forall data s,
adequate (tag_spec str s) (tag str) (proj1_sig (tag_adequate_sig hs)) data s.
Proof. intros. destruct tag_adequate_sig as [a p]. eapply p; eauto. Qed.
Definition recognize_adequate_sig {X} (h : PHOASV X) :
{code | forall e data s, adequate (fun _ => eq) e h data s ->
adequate (fun _ => span_eq data) (recognize e) code data s}.
eapply exist. intros. unfold recognize.
repeat step. subst. eapply H.
eapply consequence_adequate. step.
intros t1 v hv [P2 [P3 P4]]. subst. instantiate (1 := eq). reflexivity.
clean_up. eapply consequence_adequate. step.
intros. clean_up. subst. split; auto.
Defined.
Definition recognize_adequate : forall X (h : PHOASV X),
forall e data s,
adequate (fun _ => eq) e h data s ->
adequate (fun _ => span_eq data) (recognize e) (proj1_sig (recognize_adequate_sig h)) data s.
Proof. intros. destruct (recognize_adequate_sig h). eapply a; auto. Qed.
Definition be_spec n s t (x : natN (8 * n)) (y : type_to_Type (NatN (8 * n))) :=
len t = len s - n /\ x = y.
Ltac be_spec_clean :=
repeat match goal with
| H : be_spec _ _ _ _ _ |- _ => destruct H
end.
Definition be_u8_adequate_sig :
{code | forall data s, adequate (be_spec 1 s) be_u8 code data s}.
eapply exist. intros. unfold be_u8.
repeat step; repeat econstructor; eauto.
eapply consequence_adequate. step.
intros t0 v hv [P3 P4]. repeat clean_up. subst. repeat split; auto.
Defined.
Definition be_u8_adequate :
forall data s, adequate (be_spec 1 s) be_u8 (proj1_sig be_u8_adequate_sig) data s.
Proof. intros. destruct be_u8_adequate_sig as [p a]. eapply a. Qed.
Definition be_u16_adequate_sig :
{code | forall data s, adequate (be_spec 2 s) be_u16 code data s}.
Proof.
eapply exist. intros. unfold be_u16.
repeat step; repeat econstructor; eauto.
1-2 : eapply be_u8_adequate. be_spec_clean. subst. step. lia.
Defined.
Lemma be_u16_adequate : forall data s,
adequate (be_spec 2 s) be_u16 (`be_u16_adequate_sig) data s.
Proof. intro. destruct be_u16_adequate_sig as [a p]. eapply p. Qed.
Definition be_u32_adequate_sig :
{code | forall data s, adequate (be_spec 4 s) be_u32 code data s}.
Proof.
eapply exist. intros. unfold be_u32. repeat step.
1-2 : eapply be_u16_adequate. be_spec_clean. subst. step. lia.
Defined.
Lemma be_u32_adequate : forall data s,
adequate (be_spec 4 s) be_u32 (`be_u32_adequate_sig) data s.
Proof. intro. destruct be_u32_adequate_sig as [a p]. eapply p. Qed.
Definition value_not_in_string_adequate_sig (hv : VAL (NatN 8)) (hs : VAL String) :
{code | forall v,
sem_VAL hv v ->
forall str,
sem_VAL hs str ->
forall data s, adequate (fun t (x : bool) (y : type_to_Type Bool) => t = s /\ x = y)
(value_not_in_string v str) code data s}.
Proof.
eapply exist. intros. subst. unfold value_not_in_string.
eapply (repeat_compt_Some_adequate (EUna ESome (EUna EStringLen hs)) _ (Const (EBool true)));
repeat econstructor; eauto.
intros. eapply string_get_lt in H3. destruct H3. rewrite H3.
step.
Defined.
Lemma value_not_in_string_adequate : forall (hv : VAL (NatN 8)) (hs : VAL String),
forall v,
sem_VAL hv v ->
forall str,
sem_VAL hs str ->
forall data s,
adequate (fun t (x : bool) (y : type_to_Type Bool) => t = s /\ x = y)
(value_not_in_string v str) (proj1_sig (value_not_in_string_adequate_sig hv hs)) data s.
Proof. intros. destruct value_not_in_string_adequate_sig as [a p]. eapply p; eauto. Qed.
Definition is_not_adequate_sig (hs : VAL String) :
{code | forall str, sem_VAL hs str -> forall data s,
adequate (fun _ (x : span) (y : type_to_Type Span) => x = y)
(is_not str) code data s}.
eapply exist. intros. unfold is_not.
eapply consequence_adequate. eapply (recognize_adequate Unit).
eapply (repeat_adequate _ _ (Const ENone) _ _ (Const EUnit)); repeat econstructor; eauto.
intros. step. eapply be_u8_adequate. repeat step; repeat econstructor; eauto.
eapply (value_not_in_string_adequate (Var vres) hs); be_spec_clean; finish_me.
repeat step. intros. clean_up. auto.
Defined.
Lemma is_not_adequate : forall (hs : VAL String) str,
sem_VAL hs str ->
forall data s,
adequate (fun _ (x : span) (y : type_to_Type Span) => x = y)
(is_not str) (proj1_sig (is_not_adequate_sig hs)) data s.
Proof. intros. destruct is_not_adequate_sig as [a p]. eapply p; eauto. Qed.
Definition char_adequate_sig (hn : VAL (NatN 8)) :
{code | forall n, sem_VAL hn n -> forall data s,
adequate (fun t (x : nat8) (y : type_to_Type (NatN 8)) => len t = len s - 1 /\ x = y)
(char n) code data s}.
eapply exist. intros. unfold char.
repeat step. eapply be_u8_adequate. be_spec_clean. repeat step.
Defined.
Lemma char_adequate : forall (hn : VAL (NatN 8)) ,
forall n, sem_VAL hn n -> forall data s,
adequate (fun t (x : nat8) (y : type_to_Type (NatN 8)) => len t = len s - 1 /\ x = y)
(char n) (proj1_sig (char_adequate_sig hn)) data s.
Proof. intros. destruct char_adequate_sig. eapply a; eauto. Qed.
Definition length_data_adequate_sig (h : PHOAS Nat) :
{code | forall e data s,
adequate (fun _ => eq) e h data s ->
adequate (fun _ => span_eq data) (length_data e) code data s}.
eapply exist. intros. unfold length_data.
repeat step. eapply H.
eapply consequence_adequate. simpl in H0. step.
intros t0 v hv [[P0 P1] [P3 P2]]. split; auto.
Defined.
Definition length_data_adequate : forall (h : PHOAS Nat) e data s,
adequate (fun _ => eq) e h data s ->
adequate (fun _ => span_eq data) (length_data e) (proj1_sig (length_data_adequate_sig h)) data s.
Proof. intros. destruct length_data_adequate_sig as [a p]. eapply p; eauto. Qed.
Definition map_parser_adequate_sig (hc1 : PHOAS Span) {X} (hc2 : PHOAS X) :
{code | forall c1 Y (c2 : NomG Y) R' data s,
adequate (fun _ => span_eq data) c1 hc1 data s ->
(forall r res, span_data_wf data r -> r = res -> adequate (fun _ => R') c2 hc2 data res) ->
adequate (fun _ => R') (map_parser c1 c2) code data s}.
eapply exist. intros. unfold map_parser.
step. eauto. simpl in H1. eapply consequence_adequate. step.
eapply H0. eauto. reflexivity.
intros. eapply H2.
Defined.
Lemma map_parser_adequate hc1 {X} (hc2 : PHOASV X) :
forall c1 Y (c2 : NomG Y) R' data s,
adequate (fun _ => span_eq data) c1 hc1 data s ->
(forall r res, span_data_wf data r -> r = res -> adequate (fun _ => R') c2 hc2 data res) ->
adequate (fun _ => R') (map_parser c1 c2) (proj1_sig (map_parser_adequate_sig hc1 hc2)) data s.
Proof. intros. destruct map_parser_adequate_sig. auto. Qed.
(* Definition ipv4 : Type := nat8 * nat8 * nat8 * nat8. *)
(* Definition ipv4_spec (ip : Ipv4) (i : ipv4) := *)
(* a4 ip = i.1.1.1 /\ b4 ip = i.1.1.2 /\ c4 ip = i.1.2 /\ d4 ip = i.2. *)
(* Definition get_ipv4_adequate_sig : *)
(* {code | forall data s, adequate (fun t x y => len t = len s - 4 /\ ipv4_spec x y) get_ipv4 code data s}. *)
(* eapply exist. intros. unfold get_ipv4. *)
(* repeat step. *)
(* 1-4 : eapply be_u8_adequate. repeat clean_up. subst. *)
(* eapply (ret_adequate _ (EBin EPair (EBin EPair (EBin EPair (Var vres) (Var vres0)) (Var vres1)) (Var vres2))); repeat econstructor. lia. *)
(* Defined. *)
(* Lemma get_ipv4_adequate : forall data s, *)
(* adequate (fun t x y => len t = len s - 4 /\ ipv4_spec x y) *)
(* get_ipv4 (`get_ipv4_adequate_sig) data s. *)
(* Proof. intros. destruct get_ipv4_adequate_sig. eauto. Qed. *)
Definition get_ipv4_adequate_sig :
{code : PHOAS (Unknown "ipv4") | forall data s, adequate (fun _ _ _ => True%type) get_ipv4 code data s}.
eapply exist. intros. unfold get_ipv4.
repeat step.
1-4 : eapply be_u8_adequate. repeat clean_up. subst.
eapply (extern_adequate "ipv4" "create_ipv4"
(CONS (Var vres)
(CONS (Var vres0)
(CONS (Var vres1)
(CONS (Var vres2) NIL))))).
Defined.
Lemma get_ipv4_adequate : forall data s,
adequate (fun _ _ _ => True%type) get_ipv4 (`get_ipv4_adequate_sig) data s.
Proof. intros. destruct get_ipv4_adequate_sig. eauto. Qed.
Definition cond_adequate_sig {Y} (hb : VAL Bool) (h : PHOAS Y) :
{code | forall X b e (R : span -> X -> type_to_Type Y -> Prop) data s,
sem_VAL hb b ->
(b = true -> adequate R e h data s) ->
adequate (fun t (x : option X) (y : type_to_Type (Option Y)) =>
match x, y with
| None, None => t = s
| Some x, Some y => R t x y
| _,_ => False%type
end) (cond b e) code data s}.
eapply exist. intros. unfold cond. repeat step.
- eauto.
- eapply (ret_adequate _ _ _ (EUna ESome (Var vres))); repeat econstructor; eauto.
- intro. eapply (ret_adequate _ _ _ (Const ENone)); repeat econstructor; eauto.
Defined.
Lemma cond_adequate {Y} (hb : VAL Bool) h :
forall X b e (R : span -> X -> type_to_Type Y -> Prop) data s,
sem_VAL hb b ->
(b = true -> adequate R e h data s) ->
adequate (fun t (x : option X) (y : type_to_Type (Option Y)) =>
match x, y with
| None, None => t = s
| Some x, Some y => R t x y
| _,_ => False%type
end) (cond b e) (proj1_sig (cond_adequate_sig hb h)) data s.
Proof. intros. destruct cond_adequate_sig. eauto. Qed.
(* TODO: define a relation between vectors. *)
Definition VECTOR_spec {X Y}
(R : X -> type_to_Type Y -> Prop) (vecx : VECTOR X) (vecy : type_to_Type (Vector Y)) :=
True%type.
Definition many1_adequate_sig {Y} (h : PHOAS Y) :
{code | forall X e (R : X -> type_to_Type Y -> Prop) data s,
(forall t, adequate (fun _ => R) e h data t) ->
adequate (fun _ => VECTOR_spec R) (many1 e) code data s}.
eapply exist. intros. unfold many1.
repeat step. eapply H.
eapply (repeat_adequate _ _ (Const ENone) _ _ (EBin EAddVec (EUna EMake (Const (ENat 2))) (Var vres0))).
1-3 : subst; repeat econstructor; eauto.
intros. repeat step. eapply H.
eapply (ret_adequate _ _ _ (EBin EAddVec (Var rv) (Var vres3))).
repeat econstructor; eauto. trivial.
Defined.
Lemma many1_adequate {Y} h :
forall X e (R : X -> type_to_Type Y -> Prop) data s,
(forall t, adequate (fun _ => R) e h data t) ->
adequate (fun _ => VECTOR_spec R) (many1 e) (proj1_sig (many1_adequate_sig h)) data s.
Proof. destruct many1_adequate_sig. eauto. Qed.
Close Scope N_scope.
|
State Before: α : Type u
β✝ : α → Type v
β : α → Type (max u v)
inst✝¹ : FinEnum α
inst✝ : (a : α) → FinEnum (β a)
f : (a : α) → β a
⊢ f ∈ enum β State After: α : Type u
β✝ : α → Type v
β : α → Type (max u v)
inst✝¹ : FinEnum α
inst✝ : (a : α) → FinEnum (β a)
f : (a : α) → β a
⊢ ∃ a, (a ∈ pi (toList α) fun x => toList (β x)) ∧ (fun x => a x (_ : x ∈ toList α)) = f Tactic: simp [pi.enum] State Before: α : Type u
β✝ : α → Type v
β : α → Type (max u v)
inst✝¹ : FinEnum α
inst✝ : (a : α) → FinEnum (β a)
f : (a : α) → β a
⊢ ∃ a, (a ∈ pi (toList α) fun x => toList (β x)) ∧ (fun x => a x (_ : x ∈ toList α)) = f State After: no goals Tactic: refine' ⟨fun a _ => f a, mem_pi _ _, rfl⟩
|
\subsection{Overview}
ActivitySim is an agent-based modeling (ABM) platform for modeling travel demand. Like UrbanSim, the ActivitySim software is entirely open source, and hosted as a part of the Urban Data Science Toolkit\footnote{The open-source Urban Data Science Toolkit is available online at \url{https://github.com/UDST}}. ActivitySim grew in large part out of a need for metropolitan planning organizations (MPOs) to standardize the modeling tools and methods that were common between them in order to facilitate more effective collaboration and sharing of innovations.
Today, ActivitySim is both used and maintained by an active consortium of MPOs, transportation engineers, and other industry practitioners. Because of the cooperative approach taken by ActivitySim stakeholders towards its ownership, and because many of its \enquote{owners} are also its main users, the platform continues to mature in the direction that most benefits the practitioners themselves. ActivitySim development is still in beta, with an official 1.0 release scheduled for 2018.
\subsection{Inputs}
ActivitySim requires two main sets of input data, one relating to geography and the other relating to the population of synthetic agents whose travel choices are being modeled.
The geographic data are stored at the level of the traffic analysis zone (TAZ) and comprise three components: 1) land use characteristics; 2) a matrix of zone-to-zone travel impedances (travel times, distances, or costs) specific to the mode of travel and time of day; and 3) a table of user-defined measures of aggregate utility estimated for each zone. In transportation planning, these zone-level impedances and utility measures are commonly referred to as \emph{skims} and \emph{accessibilities}, respectively.
The land use data consist of zone-level population and employment characteristics, along with measures of different land use and building types. In our integrated model these data are read directly from the outputs generated by UrbanSim, but for a single simulation iteration any source of aggregate land use data would suffice.
Travel skims are typically generated by a traffic assignment model, which ActivitySim is not. ActivitySim instead expects to load the skims from an OpenMatrix (OMX) formatted data file\footnote{The OpenMatrix format is specified online at \url{https://github.com/osPlanning/omx/wiki}}. The creation of these skims is described below in Section \ref{sec:ta} on traffic assignment.
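As a rough illustration of what loading these skims looks like in practice (a sketch only: the file name, matrix names, and use of the \texttt{openmatrix} Python package are assumptions, not part of the workflow described here):
\begin{verbatim}
# Sketch: reading mode- and time-of-day-specific skims from an OMX file
import openmatrix as omx

skims = omx.open_file("skims.omx", "r")    # open the OpenMatrix container
print(skims.list_matrices())               # e.g. ['SOV_TIME__AM', 'SOV_DIST__AM', ...]
sov_time_am = skims["SOV_TIME__AM"][:]     # zone-to-zone drive-alone AM travel times
skims.close()
\end{verbatim}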
Accessibilities can be generated directly from the skims or any other graph representation of the transportation network. They are computed by aggregating mode-specific measures of access to specific amenity types across the network, most commonly employment centers, retail outlets, and transportation hubs. The measures of access can be as simple as counts of amenities reachable within a given shortest-path distance or travel time, or as complex as composite utilities generated by a discrete choice model.
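For example, the simplest cumulative-opportunities form of such a measure can be computed directly from a travel-time skim and a vector of zone-level employment (a sketch under assumed inputs: the 30-minute cutoff is arbitrary, the employment file is hypothetical, and the skim array is reused from the previous sketch):
\begin{verbatim}
# Sketch: jobs reachable within 30 minutes of each origin zone
import numpy as np

time_skim = sov_time_am                    # zone-to-zone travel times (n_zones x n_zones)
jobs = np.loadtxt("zone_jobs.txt")         # employment per destination zone (hypothetical)
reachable = time_skim <= 30.0              # boolean matrix: zones within the threshold
accessibility = reachable @ jobs           # cumulative jobs accessible from each origin
\end{verbatim}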
The second set of ActivitySim input data is the synthetic population. The synthetic population data consist of individuals and their characteristics, as well as the households and household characteristics into which the individuals are organized. The synthetic population is shared between UrbanSim and ActivitySim, although UrbanSim does not make use of individual-level characteristics.
The exhaustive details of the ActivitySim data schema are documented online\footnote{The ActivitySim data schema is available online at \url{https://udst.github.io/activitysim/dataschema.html}}.
\subsection{How it works}
ActivitySim, like UrbanSim, relies heavily on discrete choice models and random utility maximization theory \citep{mcfadden-1974}. Please refer to Section \ref{sec:urbansim} for specific details about how discrete choice models work within an agent-based microsimulation framework.
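As a brief reminder of the core mechanic (a generic sketch, not ActivitySim's actual utility specifications), a multinomial logit model converts the systematic utilities of the available alternatives into choice probabilities, from which a single alternative is drawn for each agent:
\begin{verbatim}
# Sketch: multinomial logit choice from alternative utilities
import numpy as np

utilities = np.array([-1.2, 0.4, 0.1])       # systematic utilities of three alternatives
exp_u = np.exp(utilities - utilities.max())  # shift by max for numerical stability
probabilities = exp_u / exp_u.sum()          # MNL probabilities (sum to one)
choice = np.random.default_rng(0).choice(3, p=probabilities)
\end{verbatim}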
An ActivitySim run consists of a series of sequentially executed model steps. The individual models can be grouped into the four clusters---long term decisions, coordinated daily activity patterns, tour-level decisions, and trip-level decisions---illustrated in Figure \ref{fig:asim-models} and summarized briefly here:
\begin{itemize}
\item \emph{Long-term choice models}: ActivitySim's three long-term choice models---workplace location choice, school location choice, and auto-ownership---model the choices that are not made every day in the real world but have a substantial impact on those that are. These models will eventually be migrated to run directly in the UrbanSim environment so that the time horizons of the two simulation platforms are internally consistent.
\item \emph{Coordinated Daily Activity Patterns}: the CDAP step models the group decision-making process for individual household members all seeking to maximize the utility of their daily activities together. CDAP takes into consideration mandatory and non-mandatory trips, choosing activities that maximize each individual's utility. The maximization process currently involves evaluating all possible activity combinations across the individuals within a household, and thus has the longest run-time of all ActivitySim models.
\item \emph{Tour-level decisions}: tours define chains of trips that are completed together without returning home in between. Mandatory tours include trips to and from work and school, while non-mandatory tours are entirely discretionary. Non-mandatory tour alternatives are specified in a user-defined configuration file, and thus these steps include a destination choice model as well. Mandatory tour alternatives have already been computed by the long-term decision models. Each tour type has separate model steps for estimating mode choice, departure time, and the frequency of the tour.
\item \emph{Trip-level decisions}: mode choice must be selected at the level of the individual trip as well as the tour because a given tour may include different modes for different trip legs. Trip departure and arrival times are estimated as well. The rest of the trip characteristics are inherited from the tours to which a trip belongs.
\end{itemize}
\begin{figure}[htbp]
\center
\includegraphics[width=\textwidth]{graphics/asim_flow.png}
\caption[ActivitySim model flow]{ActivitySim model flow\footnote{The ActivitySim model flow is adapted from \url{http://analytics.mtc.ca.gov/foswiki/bin/view/Main/ModelSchematic}}}
\label{fig:asim-models}
\end{figure}
\subsection{Outputs}
The output of an ActivitySim run consists of a single HDF5 data file with a single table of results corresponding to each model step, along with the versions of the input files in their final, updated states. For the purpose of generating travel demand for traffic assignment, however, we are only concerned with the output of the trip generation step. This single file contains the origin and destination zones, start and end times, and mode choice for every trip taken by every agent over the course of a day. We then take the subset of these trips that are completed by automobile and aggregate the counts by origin-destination pair and hour of departure. These hourly, zone-level demand files are finally handed off for use in traffic assignment.
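A minimal sketch of that final aggregation step, assuming the trips are available as a pandas DataFrame and using hypothetical file, table, and column names:
\begin{verbatim}
# Sketch: hourly zone-to-zone auto demand from the ActivitySim trip table
import pandas as pd

trips = pd.read_hdf("activitysim_output.h5", "trips")        # hypothetical file/table names
auto = trips[trips["trip_mode"].str.startswith("DRIVE")]     # keep automobile trips only
demand = (auto.groupby(["origin", "destination", "depart"])  # OD pair and departure hour
              .size()
              .rename("trips")
              .reset_index())
demand.to_csv("hourly_od_demand.csv", index=False)
\end{verbatim}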
\subsection{Calibration and validation}
Compared to their meso- and macro-scale counterparts, microsimulations like ActivitySim more accurately capture the nonlinearities that define most patterns of human behavior by modeling the decision-making processes of individual agents. The models themselves, however, are not meant to be interpreted on the same disaggregate scale. We do not know which individuals will use which mode to complete which activity on a given day, but rather how an entire population of individuals is likely to behave \textit{en masse}.
As such, there are a variety of data sets available to us for validating our results, including the Bay Area Travel Survey (BATS), the U.S. Census Longitudinal Employer-Household Dynamics program (LEHD), and the California Household Travel Survey (CHTS). All of these products offer data that can be aggregated to the TAZ level and compared to the output of our models.
|
function UNew = fluidDirichlet2D(varargin);
% fluidDirichlet2D: solve fluid registraion in 2D with Dirichlet
% boundary conditions
%
%
% author: Nathan D. Cahill
% email: [email protected]
% affiliation: Rochester Institute of Technology
% date: January 2014
% licence: GNU GPL v3
%
% Copyright Nathan D. Cahill
% Code available from https://github.com/tomdoel/npReg
%
%
% parse input arguments
[DU,F,mu,lambda,PixSize,NumPix,HX,HY] = parse_inputs(varargin{:});
% construct filters that implement discretized Navier-Lame equations
d1 = [1;-2;1]/(PixSize(1)^2);
d2 = [1 -2 1]/(PixSize(2)^2);
d12 = [1 0 -1;0 0 0;-1 0 1]/(4*PixSize(1)*PixSize(2));
[A11,A22] = deal(zeros(3,3));
A11(:,2) = A11(:,2) + (lambda+2*mu)*d1;
A11(2,:) = A11(2,:) + mu*d2;
A22(:,2) = A22(:,2) + mu*d1;
A22(2,:) = A22(2,:) + (lambda+2*mu)*d2;
A12 = d12*(lambda+mu)/4;
A21 = A12;
% multiply force field by adjoint of Navier-Lame equations
Fnew = zeros(NumPix(1),NumPix(2),2);
Fnew(:,:,1) = imfilter(F(:,:,1),A22,'replicate') - imfilter(F(:,:,2),A12,'replicate');
Fnew(:,:,2) = imfilter(F(:,:,2),A11,'replicate') - imfilter(F(:,:,1),A21,'replicate');
% compute sine transform of new force field
FnewF1 = imag(fft(imag(fft(Fnew(:,:,1),2*NumPix(1)-2,1)),2*NumPix(2)-2,2));
FnewF2 = imag(fft(imag(fft(Fnew(:,:,2),2*NumPix(1)-2,1)),2*NumPix(2)-2,2));
FnewF1 = FnewF1(1:NumPix(1),1:NumPix(2));
FnewF2 = FnewF2(1:NumPix(1),1:NumPix(2));
% construct images of coordinates scaled by pi/(N or M)
[alpha,beta] = ndgrid(pi*(0:(NumPix(1)-1))/(NumPix(1)-1),pi*(0:(NumPix(2)-1))/(NumPix(2)-1));
% construct LHS factor
LHSfactor = mu.*(lambda+2*mu).*(2*cos(alpha) + 2*cos(beta) - 4).^2;
% set origin term to 1, as DC term does not matter
LHSfactor(1,1) = 1;
% solve for FFT of V
VF1 = FnewF1./LHSfactor;
VF2 = FnewF2./LHSfactor;
% perform inverse DST
V1 = imag(ifft(imag(ifft(VF1,2*NumPix(1)-2,1)),2*NumPix(2)-2,2));
V2 = imag(ifft(imag(ifft(VF2,2*NumPix(1)-2,1)),2*NumPix(2)-2,2));
% crop and concatenate
V = cat(3,V1(1:NumPix(1),1:NumPix(2)),V2(1:NumPix(1),1:NumPix(2)));
% construct estimate of transformation Jacobian
J = zeros(NumPix(1),NumPix(2),2,2);
J(:,:,1,1) = 1 - imfilter(V(:,:,1),HX,'replicate','same');
J(:,:,2,1) = -imfilter(V(:,:,1),HY,'replicate','same');
J(:,:,1,2) = -imfilter(V(:,:,2),HX,'replicate','same');
J(:,:,2,2) = 1 - imfilter(V(:,:,2),HY,'replicate','same');
% now perform Euler integration to construct new displacements
UNew = zeros(NumPix(1),NumPix(2),2);
UNew(:,:,1) = J(:,:,1,1).*V(:,:,1) + J(:,:,1,2).*V(:,:,2);
UNew(:,:,2) = J(:,:,2,1).*V(:,:,1) + J(:,:,2,2).*V(:,:,2);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [DU,F,mu,lambda,PixSize,NumPix,HX,HY] = parse_inputs(varargin)
% get arguments
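% expected positions within varargin (remaining slots are not used here):
%   2: F (force field)          4: PixSize (first two entries used)
%   5,6: NumPix                 8: mu        9: lambda
%   11: DU                      12,13: HX, HY (derivative filters used
%                                     to build the Jacobian estimate)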
F = varargin{2};
PixSize = varargin{4}(1:2);
NumPix = [varargin{5} varargin{6}];
mu = varargin{8};
lambda = varargin{9};
DU = varargin{11};
HX = varargin{12};
HY = varargin{13};
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\chapter{Drawing the Nets}
\chapter{Using Renew}
\label{ch:usage}
Renew offers a graphical, user-friendly interface for
drawing reference nets and auxiliary graphical elements.
The net editor contained within Renew is based upon a Java library
called JHotDraw \cite{Gamma98}.
The basic drawing capabilities are mainly taken over from JHotDraw,
while the multi-windowing
GUI, the net editor figures and tools, and the image figure tool
have been implemented by the Renew team.
Still, this manual provides a complete description of all drawing
and editing capabilities that Renew offers.
\section{Mouse handling}
Most current computer platforms use a mouse with at least two mouse
buttons. Whenever this manual instructs you to do mouse operations,
you should use the \emph{left} mouse button. You should use the
\emph{right} mouse button only when it is especially indicated
to do so.
If your mouse has three or more buttons, the excess buttons
usually behave like the right mouse button.
Your operating system might support a special option
to switch the two mouse buttons, so that left-handed users
can operate the device more easily. This will also
switch the meanings of the mouse buttons within Renew.
Older Apple Macintosh mice have only one mouse button.
In this case, you can press and hold the Apple key
while clicking the single mouse button whenever
this manual instructs you to press the right mouse button.
On other operating systems, too, you might be able to use
a single button mouse by pressing some control key
whenever you need access to the right mouse button.
In all cases, you can substitute right mouse clicks
by appropriate menu commands or tool selections,
but the right mouse button greatly adds to
drawing speed.
Some mouse operations react to a simultaneously held down shift or control
key.
These operations alter their behavior slightly as long as the modifier key
is pressed.
For example, a resize operation can be restricted to obtain square
dimensions as long as the control key is pressed.
If you move outside a drawing window while operating with the mouse
(i.e. while the mouse button is held down), the viewable area of the
drawing is scrolled until the drawing bounds are reached.
If you are dragging a figure or handle downward or rightward beyond the
current drawing bounds, the bounds are pushed forward until you either
release the button or move the mouse back into the window.
\section{Basic Concepts}
\label{sec:usageBasics}
When working with Renew, you edit so-called drawings.
A drawing consists of many drawing elements, called figures.
Each drawing is displayed in a separate drawing window.
Since you are expected to work on many different drawings and
thus have many different windows open at the same time, it
would consume lots of valuable desktop space to repeat
a menu bar and tool buttons in every window.
To avoid this, all commands have been grouped into one
central window, the Renew window, which contains a menubar,
a toolbar and a status line (see figure~\ref{fig:renewWindow}).
This might seem a bit unfamiliar to Mac users, but it is
a consequence of the platform independence of Java.
The shortcut \texttt{Ctrl+M} activates the central Renew
window and brings it on top of all other windows.
This shortcut is useful if you are working with many large drawing windows
that have buried the central window and you need access to the menu bar or
tools.
\begin{figure}[htbp]
\centerline{%
\includegraphics[scale=\screenshotscale]{RenewWin2-5.eps}%
}
\caption{\label{fig:renewWindow}The Renew Window}
\end{figure}%
There is always one active drawing window.
Selecting a pull-down menu invokes a command which
affects the active window, its drawing, or a selection of
figures of that drawing,
unless it has a global effect only.
Examples of menu commands are saving or loading a document
or changing attributes of figures.
The menu commands are explained in Section~\ref{sec:menuCommands}.
On the other hand, the toolbar is used for selecting a current tool.
With a tool you can create or edit certain kinds of figures in a drawing.
All tools available in the toolbar are discussed in Section~\ref{sec:tools}.
Since each tool (except the selection tool) is related to a certain type
of figure, the corresponding
figure type is also explained in that section.
To manipulate figures, handles are used. Handles are small squares or
circles that appear at special points of a figure when the figure is
selected. Dragging and (double-)clicking handles has varying effects,
depending on the kind of figure and handle.
Handles are also explained in the corresponding figure's section.
To find out how to install Renew, refer to Section~\ref{sec:install}.
You should then be able to start Renew from the command line,
just typing \texttt{renew}, or using a program icon you created,
depending on your operating system.
You can also provide some drawings' file names as command line
parameters. After typing \texttt{renew}, just provide the (path and)
name of one or more files, including the extension \texttt{.rnw}, e.g.
\begin{lstlisting}[style=xnonfloating]
renew MyNet.rnw some/where/OtherNet.rnw
\end{lstlisting}
On start-up, Renew tries to load drawings from all specified files.
On Unix systems, you can even use
\begin{lstlisting}[style=xnonfloating]
renew some/where/*.rnw
\end{lstlisting}
to load all drawings in a directory.
If you have a program icon that is associated correctly, your OS
usually also supports double-clicking some \texttt{.rnw} file or
using drag \& drop.
In the rare case that Renew terminates abnormally, it should leave
an autosave file for each modified net drawing. Autosave files
are typically updated every two minutes. You can detect an autosave
file by its file extension \texttt{.aut}. Whenever possible,
the filename is derived from the main drawing's file name by
removing the old name extension \texttt{.rnw} and adding
\texttt{.aut}. If such a file exists already, a random file name
of the form \texttt{rnw99999.aut} with an arbitrary number
is chosen. In order to recover an autosave file, simply rename it,
so that it receives the \texttt{.rnw} extension.
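For example, on a Unix-like system the recovery amounts to a single
rename (the file names below are only placeholders):
\begin{lstlisting}[style=xnonfloating]
mv MyNet.aut MyNet-recovered.rnw
\end{lstlisting}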
Renew also leaves \texttt{.bak} files that constitute the
last version of the file that Renew loaded. Unlike autosave files,
these files are overwritten during subsequent runs of Renew.
\section{Tools}
\label{sec:tools}
In the toolbar, several tool buttons are displayed, which can be selected by
clicking on them.
The tool buttons are grouped in two or more toolbars (depending
on the mode of Renew). When resizing the Renew window, toolbars are
wrapped according to the size of the window.
The standard toolbars are the drawing toolbar and the Petri net
toolbar.
More toolbars can show up based on the chosen formalism.
Each single toolbar can be put into its own window by clicking
at the spot on the left of the toolbar.
Figure~\ref{fig:toolbarWindow} shows the Petri net toolbar in a
separate window.
If a toolbar window is closed, the toolbar is returned to the Renew
window.
\begin{figure}[htbp]
\centerline{%
\includegraphics[scale=\screenshotscale]{Toolbar2-5.eps}%
}
\caption{\label{fig:toolbarWindow}The Petri Net Toolbar in its own Window}
\end{figure}%
At any point in time, exactly one tool of all toolbars is selected,
which appears pushed down.
By default, a special tool, the selection tool, is selected,
whenever the work with the current tool is finished.
If you double-click a tool button, the tool will remain active until
you explicitly select another tool or right-click on an empty spot in
the drawing.
This replaces the \texttt{Toggle Sticky Tools} entry from the
\texttt{Edit} menu.
In general, double-clicking tools is most useful during the
initial creation of nets (but there are also other, probably more
elegant ways), while normal tool selection is better suited to
later modification stages.
But of course, which way to use tools also depends on your
personal preferences.
In the status line in the Renew window, a short description of the tool
is displayed if you move the mouse pointer over a tool button.
All other tools but the selection tool are used to create a certain type
of figures. Some of the tools can also be used to manipulate already existing
figures of the corresponding type.
\subsection{The Selection Tool}
\label{subsec:toolSelection}
\toolicon{SEL}
The selection tool is the most basic tool and is not related to any special
figure type.
Instead, any figure or group of figures can be selected and moved using this
tool.
If not otherwise noted, when talking about pressing a mouse button,
the primary mouse button is meant.
If the selection tool is the current tool, the following user interactions
are possible:
\begin{description}
\item[Select] By clicking on a figure, it becomes selected.
A selected figure can be recognized by its visible handles.
Depending on the type of figure, different handles appear, but in
all cases, some handles will appear. There are even non-functional
handles, which are just there to show that a figure is selected and
do not have any additional (manipulation) functionality.
If another figure is selected, the current figure becomes deselected.
To clear the current selection, click inside the drawing, but not on any
figure.
\item[Add to Selection] If the shift key is pressed while clicking on a
figure, the figure is added to or removed from the current selection,
depending on its selection status.
This way, a group of objects can be selected, which
is convenient or even required for some commands.
\item[Area Selection] If the mouse button is pressed inside a drawing, but
not inside any figure, the area selection mode is activated after a short
delay.
The starting point marks one corner of a ``rubber band'' rectangle.
While the mouse button is held down, the other corner of that rectangle can be
dragged by moving the mouse.
When the button is released, all figures that are completely inside the
rectangle area are selected.
Combining this with the ``Add to Selection'' function is possible.
\item[Inspection] Some figures have an additional inspect
function that is invoked by double-clicking them, which displays some
additional information about the figure without modifying it.
E.g., all connected text figures (see
Section~\ref{subsubsec:toolConnectedText}:
The Connected Text Tool) select their parent during inspection.
\item[Direct Modification] Some figures have an additional direct
manipulation function that is invoked by
clicking on them with the right mouse button.
E.g., all text figures switch into edit mode.
\item[Dragging] If the mouse button is pressed inside a figure and held down,
the drag mode is activated. All figures that are currently selected are moved
until the mouse button is released.
An advanced feature of dragging is that it is possible to change a
figure's parent. For more information on this function, see
Section~\ref{subsubsec:toolConnectedText}: The Connected Text Tool.
\item[Stepwise Movement] You can use the cursor movement keys on the
keyboard to move selected figures upward, downward, leftward or
rightward in steps of one pixel.
If no figure is selected, the cursor keys scroll the viewable area of the
drawing in its window.
By holding the shift key during cursor movement,
selected figures are moved in steps of 10 pixels.
\item[Manipulating] Depending on the kind of selected figure, handles
are displayed at special points within the figure. Using these handles, a
figure can be manipulated. The different types of handles are discussed in
Section~\ref{subsec:drawingTools} in the subsection of the corresponding
figure's tool.
\item[Open Target Location] If the control key is pressed while clicking on a
figure, the target location of the figure (if set) is opened (see
the subsection of the target tool in Section~\ref{subsubsec:toolTarget} for details).
\end{description}
To move a single figure, it is crucial not to hit a figure's handle,
otherwise the handle's function is invoked instead of moving the
figure(s).
When more than one figure is selected, the handles of all selected
figures are shown but have no effect.
To actually use the handles, you have to select exactly one figure.
The easiest way to do so is to click on an empty spot in the drawing
and then select the figure you want to manipulate.
In Table~\ref{tab:mani} we summarize the actions of
the inspection and direct manipulation functions
for all figures. The actions associated to the different figures
are explained in more detail
in the section that documents the corresponding tool.
Some of the entries in the table refer to
the simulation mode, which will be explained in more detail
in Section~\ref{sec:simulation}.
\begin{table}
\centerline{\begin{tabular}{l|ll}
type of figure & double click & right click \\
\hline
rectangle, ellipse, \dots & select children & select/drag \\
text & select & text edit \\
connected text & select parent & text edit \\
\hline
transition & select children & attach inscription \texttt{:s()} \\
place & select children & attach inscription \texttt{[]} \\
virtual place & select associated place & attach inscription \texttt{[]} \\
arc & select children & attach/edit inscription \\
declaration & select & text edit \\
inscription, name, label & select parent & text edit \\
\hline
transition instance & open binding window & fire arbitrary binding\\
place instance & select marking & open current marking window \\
cardinality marking & select place instance & show token marking \\
token marking & select place instance & show cardinality marking \\
\end{tabular}}
\caption{Summary of selection tool operations}\label{tab:mani}
\end{table}
\subsection{Drawing Tools}
\label{subsec:drawingTools}
Renew provides several drawing tools which create and manipulate drawing
figures.
These drawing figures do not have any semantic meaning to the net simulator,
but may be used for documentation or illustration purposes.
You may lack some functions that you are used to from your favorite drawing
tool (like adjusting line width and such), but remember that Renew is a
Petri net tool, not a drawing tool in the first place.
\subsubsection{The Rectangle Tool}
\toolicon{RECT}
The rectangle tool is used for creating new rectangle figures.
Press the mouse button at the point where the first corner is
supposed to be and drag the mouse to specify the opposite corner while
holding down the mouse button.
While dragging, you can already see the rectangle's dimension and location
which is confirmed as soon as you release the mouse button.
After a new figure has been created,
the new figure is not automatically selected.
To do so, just click on the figure with the selection tool
(see Section~\ref{subsec:toolSelection}).
Now, the figure's handles appear. In the case of a rectangle or ellipse
figure, these are sizing handles which are displayed as small white
boxes at the corners of the figure.
These handles let you change the dimension (and location) of a figure
after you created it.
Depending on the position of the handle, only certain changes are
allowed. For example, the ``east'' sizing handle only allows you to change
the width of a figure, while maintaining the location of the left side,
and the ``south-west'' sizing handle only lets you relocate the lower
left corner of a figure, while maintaining the location of the upper and
right side.
The ``south-east'' handle restricts itself to sizes of
equal height and width (squares) as long as the control key is pressed.
The control key can also be used as a modifier while you are working with
the rectangle creation tool (and many other figure creation tools).
\newtwodotfive{
With the shift key pressed, the ``south-east'' handle restricts resizing so that the figure's proportions are preserved.
}
All newly created figures have a black outline and aquamarine as the
fill color (if there is any).
To change these attributes, use the \texttt{Attributes} menu (see
Section~\ref{subsec:menuAttributes}).
\tip{To create figures with the same attributes as an existing figure,
use copy~\&~paste (see Section~\ref{subsec:menuEdit}).}
\subsubsection{The Round Rectangle Tool}
\toolicon{RRECT}
The round rectangle tool works the same way as the rectangle tool (see above),
only that the created figure is a box with rounded corners.
A round rectangle figure has the same handles as a rectangle figure plus an
additional single round yellow handle to change the size of the curvature.
Drag this handle and change your round rectangle to anything between a
rectangle and an ellipse.
\subsubsection{The Ellipse Tool}
\toolicon{ELLIPSE}
The ellipse tool works the same way as the rectangle tool (see above),
only that ellipses are created within the given rectangle area.
An ellipse figure has the same handles as a rectangle figure.
\subsubsection{The Pie Tool}
\toolicon{PIE}
The pie tool works the same way as the rectangle tool (see above),
only that segments of ellipses are created within the given rectangle area.
A pie figure has the same handles as a rectangle figure, with two
additional ``angle'' handles that are small yellow circles.
The ``angle'' handles control start and end of the arc segment that frames
the pie.
By pressing the control key while dragging these handles around, their
movement can be restricted to steps of 15 degrees.
If a pie's fill color is set to ``none'', it displays as an open arc
segment instead of a closed pie.
\subsubsection{The Diamond Tool}
\toolicon{DIAMOND}
The diamond tool works the same way as the rectangle tool (see above),
only that diamonds are created within the given rectangle area.
A diamond figure has the same handles as a rectangle figure.
%Remark: Diamonds are a girl's best friend.
\subsubsection{The Triangle Tool}
\toolicon{TRIANGLE}
The triangle tool works the same way as the rectangle tool (see above),
only that triangles are created within the given rectangle area.
A triangle figure has the same handles as a rectangle figure, with an
additional ``turn'' handle that is a small yellow circle.
This handle lets you choose the direction the triangle points to, which
is restricted to one of the centers of the four sides or one of the four
corners.
%Remark: Triangle Man hates Particle Man.
\subsubsection{The Line Tool}
\toolicon{LINE}
The line tool produces simple lines that are not connected (see also
the next section: The Connection Tool).
Creating a line is similar to creating a rectangle:
Press the primary mouse button where the starting point is
supposed to be and drag the mouse to specify the end point while
holding down the mouse button.
The line figure has two sizing handles (small white boxes) in order
to let you change the starting and end point afterward. It also has
an {\em intermediate point\/} as described in Section~\ref{subsubsec:toolConnection}:
The Connection Tool.
A line figure has no fill color, but it respects the pen color
(see Section~\ref{subsec:menuAttributes}).
\subsubsection{The Connection Tool}
\label{subsubsec:toolConnection}
\toolicon{CONN}
This tool lets you create connections (arcs) between other figures.
A connection is like a line, except that it connects two existing figures
and is automatically adapted every time one of the connected figures changes.
Consequently, the location of pressing down the mouse button does not
specify a starting point, but a starting figure. Again, the mouse button
has to be held down while dragging the end point of the connection.
If an appropriate figure is found under the mouse button, the end point
``snaps'' into this figure's center. This figure is confirmed as the
end point figure as soon as you release the mouse button.
The connecting line always is ``cut off'' at the outline of the start
and end figure, so that it just touches their borders.
A connection can be re-connected using its green square connection handles.
Just drag one of these handles to the new start or end figure.
If you release the mouse button while the connection is not ``snapped''
into a new figure, the connection will jump back into its old position.
An advanced feature is to produce {\em intermediate points\/} (or
``pin-points'') in a connection.
When selected, connection figures show
additional {\em insert point\/} handles to create new intermediate
points in the middle of each line segment.
These are depicted as small circles with a cross (plus-sign) inside.
When you click on an insert point handle, a new location handle (see
below) is created within the given line segment and can immediately be
moved. By holding Ctrl while pressing, holding, and dragging an intermediate
point, you can orient the two emerging line segments at a right angle.
A different method to create and delete intermediate points is to use the
connection tool.
Activate the tool and click on a
point on the connecting line. Now, a new location handle (white square)
is created, which you can see the next time you select the connection figure.
This handle can be dragged to an arbitrary position.
When you hold down the control key while moving a location
handle, the intermediate point
jumps to the closest position so that the adjacent line segments
form a right angle.
You can also keep the mouse button pressed down right after clicking
on an intermediate point and drag the new handle immediately (without actually
having seen the handle itself).
If you want to get rid of a pin-point, simply
select the connection and double-click the associated handle.
Another (more complicated) way to remove intermediate points is to select
the connection tool and click on the intermediate point with the left mouse
button.
\tip{If you move two figures, a straight connection is automatically moved
with them. But if the connection has intermediate points, these stay at their
old location. Solution: Just select the connection itself additionally, and
everything will move together.}
\subsubsection{The Elbow Connection Tool}
\toolicon{OCONN}
The elbow connection tool establishes a connection between two
figures just like the connection tool.
The difference is that an elbow connection does not draw a direct
line from one figure to the other, but uses straight (horizontal
or vertical) lines only.
When you select an elbow connection, you see up to three yellow
handles which adjust the position of the horizontal and vertical
lines.
\bug{Changes to these handles are not stored.
Also, if the connected figures are close together, the decision whether
to go horizontal or vertical first is quite poor.
Since no elbow connections are needed to construct reference nets,
we do not really care about these bugs.}
\subsubsection{The Scribble Tool}
\toolicon{SCRIBBL}
The scribble tool lets you scribble in a drawing
with your mouse, just like the famous Java applet.
More precisely, a scribble figure traces the mouse movement while the
button is held down and thus defines several points, which are connected
by lines. You can also define single points by single mouse clicks.
The creation mode is ended by double-clicking at the last point or
right-clicking in the drawing window.
The special feature of the scribble figure: after it has been
created, every single point can still be changed by dragging the
corresponding white, square handle.
To drag the whole figure, start dragging on a
line segment rather than inside a handle, or deselect the figure first
and then start dragging anywhere on a line of the figure.
\subsubsection{The Polygon Tool}
\toolicon{POLYGON}
A polygon is created analogous to a scribble figure (see above).
While you create the polygon, you can already see that the area
surrounded by the lines is filled with the fill color.
In contrast to the scribble figure, the surrounding line is closed
automatically. By intersecting the lines, you can create un-filled
areas. Like in the scribble figure, there are white, square handles
to drag every single point of the polygon figure. A point that is
dragged onto the direct line between its two neighboring points
is removed from the polygon.
Also, there is a round, yellow handle that can be used to
turn and to scale the entire polygon figure by dragging the handle,
which looks really nice (thanks to Doug Lea).
The round, yellow handle is restricted to pure rotation
as long as the shift key is pressed and to pure scaling as long as the
control key is pressed.
The behavior of the white, square point handles can be modified with the
control key, similar to that of the location handles of connections (see above).
It can be configured how the polygon smooths its outline by removing
intermediate points; a configuration sketch follows the list below.
The property \texttt{ch.ifa.draw.polygon.smoothing} can be set to the
following values (changes take effect the next time a polygon is
manipulated):
\begin{description}
\item[alignment]
\ This is the default behavior.
Points are removed if they are located on a straight line between their
adjacent points.
\item[distances]
\ Points are removed only if they are too close to each
other (less than 5 pixels distance horizontally \emph{and}
vertically).
\item[off]
\ No smoothing at all (no points are removed).
\end{description}
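For example, the following line would select the \texttt{distances}
behavior; where exactly the property is set (configuration file or
command line) depends on your installation, so treat this merely as a
sketch:
\begin{lstlisting}[style=xnonfloating]
ch.ifa.draw.polygon.smoothing=distances
\end{lstlisting}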
\subsubsection{The Image Tool}
\toolicon{IMAGE}
The image tool offers you the possibility to include bitmap graphics into your
drawings. When activating this tool, a file dialog box opens that lets you
choose a bitmap graphic file from your file system.
\texttt{gif} files should work on all platforms, and other formats like
\texttt{jpg} usually work as well.
Java (and thus Renew) even supports transparent GIF images.
\bug{Be aware that the Encapsulated PostScript output does not support
transparent GIF images, but some of the other export
formats (e.g. PDF and SVG) do.
}
After you confirmed the file selection, the dialog disappears and leaves you
with two options: Either you just click somewhere in your drawing, or you drag
open an area, just like when creating a rectangle. If you just click, the image
is inserted using its original dimensions (in pixels), otherwise it is scaled
to the rectangle area specified by your drag operation.
An image figure has the same handles as a rectangle figure.
\newtwodotfive{
With Renew 2.5 you can use drag and drop to add images. Just drag the image into the drawing editor.
}
\subsubsection{The Text Tool}
\label{subsubsec:toolText}
\toolicon{TEXT}
The text tool is used to arrange text with your graphical elements.
The first mouse click after activating the tool selects the upper left corner
of the text area and invokes a text editor.
Now you can type in any text,
including numbers, symbols, and so on. You can even use the cursor keys,
delete any characters, select some part of the text with the mouse and so on,
like in any other Java edit field.
Note that you can even type in several lines, as usual by pressing the return
or the enter key. This is why pressing return or enter does not end the edit
mode.
After you click somewhere outside of the text editing box, the text is entered
and all of the text is displayed.
If the editing box is empty at that moment (the entered text comprises white
spaces and line breaks only), the text figure is automatically removed from
the drawing.
The white box handles are just to show that a text figure is selected.
The dimension of a text figure cannot be changed, as it only depends on its
text contents and font selection.
The only handle to modify a text figure is a small yellow round font sizing
handle.
It can be dragged to alter the font size, which can also be done using a menu
command (see Section~\ref{subsec:menuAttributes}).
If you want to change the text contents of an existing text figure, just
make sure the text tool is activated and click on the text figure.
The text editor described above will appear.
Again, confirm your changes by clicking somewhere outside the editing area.
\tip{A fast way to enter text edit mode for any text figure (including connected
text, inscription, name, and declaration figures) is to right-click on these
figures. The corresponding tool is activated and the figure is put into text
edit mode immediately.}
\subsubsection{The Connected Text Tool}
\label{subsubsec:toolConnectedText}
\toolicon{ATEXT}
Connected text works exactly like normal text, except that it is connected
to some other figure, which is called its parent.
To create a connected text figure, select the connected text tool and
click on the figure that is to become the parent of the new connected text
figure. If you select a figure that cannot take a connected text figure
or if you select no figure at all, your selection is ignored.
If the figure was successfully chosen,
continue with editing text like with a normal text figure (see above).
Now, every time you move the parent figure, the connected text figure will
move with it. Only when you drag the connected text figure itself, the offset
to its parent is changed.
To verify which figure is the parent of some connected text figure,
double-click on the connected text figure, and the parent (if there
is any) is selected.
A special feature of connected text
is dragging a single connected text figure, or any special subclass like inscriptions
(see Section~\ref{subsubsec:toolInscription}: The Inscription Tool),
to a new parent.
Whenever the ``landing point'' of a connected text drag operation is another
potential parent, it is selected immediately to indicate that
instead of changing the offset to the old parent, the
targeted figure will become the new parent of the connected text figure
as soon as you release the mouse button.
If you drag the connected text figure to a location outside this new parent
again, its old parent (if there is any) is selected in the same manner,
to indicate that if you release the mouse button now, the parent will stay the same.
Note that the offset the connected text figure had to its old parent is re-established
for its new parent, so it might jump to another position after reconnection.
This is quite convenient if you moved an inscription to a
preferred offset to its parent (e.g.\ to the right-hand side of a transition),
and want to keep this offset even after connecting it to a new figure.
\subsubsection{The Target Tool}
\label{subsubsec:toolTarget}
\toolicon{TARGETTEXT}
The target tool can be used to add hyperlinks (target locations) to figures. These target locations can point to other Renew drawings, files in the file system, or to a location written as a URI (e.g.\ a website). The target tool works like the connected text tool with the difference that the target location is only visible while it is edited. The target location is stored as an attribute of the figure (\texttt{targetLocation}). Target locations can be opened by using the selection tool and clicking on the figure while pressing the control key (see Section~\ref{subsec:toolSelection}).
\subsection{Net Drawing Tools}
Now it is really getting interesting:
This group of tools allows you to draw Petri nets that have a
semantic meaning to the simulation engine.
Renew differentiates between a simple rectangle and a transition,
although they may look the same.
When you use the net drawing tools, some syntactic constraints
are checked immediately (see Section~\ref{sec:errors}).
\tip{Since all net element figures (transitions, places, and arcs) may have
inscriptions, Renew supports automatic inscription
%selection and
generation.
Click on a net element figure with the right mouse button,
and
%its first inscription figure is selected. If there is no inscription
%figure, a new one is created with the default inscription \texttt{x}.
a new inscription figure is created with a default inscription
depending on the type of net element.
This is especially convenient for arc inscriptions,
since these usually consist of a single variable.
Of course, in most cases, you have to change the inscription
afterward, but you do not need to use the inscription tool.
Instead, you right-click on the net element and then right-click
on the newly created inscription.
}
\subsubsection{The Transition Tool}
\label{subsubsec:toolTransition}
\toolicon{TRANS}
This tool functions almost exactly like the rectangle tool.
The differences are:
\begin{itemize}
\item Only transition figures have a semantic meaning
to the simulator. A rectangle figure is ignored by the net
execution engine.
\item To create a transition with a default size, after selecting the
transition tool, just click instead of drag. The position of the
click specifies the center of the newly created transition.
\item A transition figure offers an additional handle. The arc handle,
a small blue circle in the middle of the figure, can be used to
create new output arcs (see Section~\ref{subsubsec:toolArc}:
The Arc Tool).
\end{itemize}
The new handle has a special behavior when you stop dragging
on a figure that is not appropriate as a target for the arc.
A normal connection is deleted when there is no
appropriate end figure. However, for an arc it is quite clear what kind
of figure is supposed to be there: a place figure. And this is what happens:
Automatically, a place figure is created with its center set to the location
where you released the mouse pointer, and the newly created arc connects
the transition and the new place.
\tip{This feature offers you a very fast way to create reference nets.
Just start with a transition and use its blue arc handle to create a new arc
and the next place. Since this works for places (see below), too, you can
continue to create the next arc and transition using the arc handle of
the newly created place. If you want to reuse an existing place or transition,
just drag the arc to that figure as usual. Thus, you can create arbitrarily
complex nets without selecting any other tool!
If you combine this with the automatic inscription generation and editing
(see above), even colored nets will only take seconds to create.
}
\subsubsection{The Place Tool}
\toolicon{PLACE}
The place tool works analogously to the transition tool, only that
the arc handle (the small blue circle) creates input arcs (see
previous section). If the new arc is not released on top of an
existing transition, a new transition is created and used as the target
of the arc.
\subsubsection{The Virtual Place Tool}
\toolicon{VPLACE}
The virtual place tool is used to create virtual copies of a place.
It improves
the readability and graphical appearance of nets in which
certain places are used by many transitions.
Other Petri net tools sometimes call such a virtual copy of a place
a fusion place. If the contents of a place are needed by many
transitions, the readability of the net decreases because of
many crossing arcs. With a virtual place copy, you can draw the
same place many times, thus avoiding such crossing arcs and arcs
over long distances.
You create a virtual copy of a place by activating the virtual
place tool, then clicking on the place you want to copy (this
can also be a virtual place!) and
keeping the mouse button down while dragging the virtual place
figure to its destination location. The virtual place can be
distinguished from a normal place by the double border line
(see the graphics inside the tool button).
To find out which place a virtual place belongs to, just
double-click the virtual place.
To make this relation visible in printed versions of your nets,
you should copy the name of the place to the virtual place.
Unfortunately, the tool does not take care of the names of
virtual places automatically.
Another solution supported by the tool is to give
each group of a place and all its virtual copies a different
fill or pen color. All places belonging together
will change their colors if you change the color for
one place.
During simulation, every virtual copy of a place contains exactly
the same token multiset as its original place.
Still, it is possible to determine the marking appearance separately
for each virtual place (and the place itself)
(see Section~\ref{sec:simulation}).
\tip{A nice way to take advantage of this feature is to create
virtual copies of places with an important and extensive marking
and move these to an area outside the net. This has a similar
effect as the current marking window, but you do not get your
screen cluttered with so many windows.}
\subsubsection{The Arc Tools}
\label{subsubsec:toolArc}
The arc tool works quite the same as the connection tool (see
description in Section~\ref{subsec:drawingTools}).
The differences are, like above, that an arc has a
semantic meaning to the simulator.
A restriction coming from the Petri net structure is that
an arc always has to {\em connect one transition and one place},
not two figures of the same kind or any other figures.
The arc will not snap onto wrong figures and will disappear
if you release the mouse button over a wrong figure.
This behavior is different from when you create arcs using
the arc connection handle in places or transitions (see
Section~\ref{subsubsec:toolTransition}: The Transition Tool).
There are four arc tools for those different arc types that
are generally available:
\begin{description}
\item[Arc Tool]\hangtoolicon{ARC}
-- This tool is used for creating {\bf input} and
{\bf output arcs}, which only have one arrow tip at their ending.
If the start figure of the connection is a place (and thus, the
end figure has to be a transition), this one-way-arc is an
input arc. If the start figure is a transition, we have an
output arc.
\item[Test Arc Tool]\hangtoolicon{LINE}
-- Here, {\bf test arcs} without any arrow tips
are created. A test
arc has no direction, as no tokens are actually moved when the
transition fires (see Section \ref{subsec:reserveAndTestArcs}).
This means it does not matter whether you start a test arc
at the place or at the transition.
\item[Reserve Arc Tool]\hangtoolicon{CONN}
-- With this tool, {\bf reserve arcs} with
arrow tips at both sides are created. Again, the direction does
not matter. For the semantics of reserve arcs, see
Section~\ref{subsec:reserveAndTestArcs}.
\item[Flexible Arc Tool]\hangtoolicon{DARC}
-- An arc with \emph{two} arrow tips on one side is created.
These {\bf flexible arcs} transport a variable number of tokens.
For the semantics of flexible arcs, see
Section~\ref{subsec:flexArcs}.
\end{description}
There are two additional arc tools that are only displayed
on request, as described in
Subsection~\ref{subsec:sequentialTools}.
\begin{description}
\item[Clear Arc Tool]\hangtoolicon{DHARC}
-- This tool is used for creating {\bf clear arcs}, which
remove all tokens from a place. You have to select the place as
the start figure and the transition as the end figure
during the creation of a clear arc. For the semantics of clear arcs, see
Section~\ref{subsec:clearArcs}.
\item[Inhibitor Arc Tool]\hangtoolicon{INHIB}
-- This tool is used for creating {\bf inhibitor arcs}, which
stop the attached transition from firing as long as certain tokens
are contained in a place. This arc features circles at both
of it end points. Again, the direction does
not matter. For the semantics of inhibitor arcs, see
Section~\ref{subsec:inhibArcs}.
\end{description}
Using the \texttt{Attributes} menu, it is possible to
change the direction of an arc after its creation.
Simply select the desired value for the attribute
\texttt{Arrow}. However, you cannot currently change ordinary
arcs to flexible arcs, or vice versa. Neither can you access
inhibitor or clear arcs this way.
Let us repeat from Section~\ref{subsec:drawingTools}
that you can create intermediate points by selecting an arc tool
before clicking on an already existing figure.
You can then drag the intermediate point to its destination.
To get rid of an intermediate point, right-click the associated handle.
\subsubsection{The Inscription Tool}
\label{subsubsec:toolInscription}
\toolicon{INSCR}
Inscriptions are an important ingredient for most high-level
Petri net formalisms. An inscription is a piece of text that
is connected to a net element (place, transition, or arc).
Refer to Section~\ref{ch:reference} to find out what kind of
inscriptions are valid in our formalism. You can inscribe
types and initial markings to places. You can provide inscriptions
for arcs, in order to determine the type of tokens moved.
Transitions may carry guards, actions, uplinks, downlinks, and expressions.
Multiple transition inscriptions may be given in a single figure,
but they have to be separated by semicolons.
When editing inscription figures, you have to know that
in principle they behave like connected text figures.
This means that all functions for connected text figures
also work for inscription figures
(see Section~\ref{subsubsec:toolConnectedText}: The Connected Text Tool).
For example, to check that an inscription figure is in fact connected
to the net element you want it to be connected to,
double-click on the inscription figure.
Then, the corresponding net element should be selected.
Also, you can drag an inscription to another net element.
Again, in contrast to text figures, inscription figures have
a semantic meaning to the simulator.
By default, inscriptions are set in plain style, while labels
(text without semantic meaning) are italic.
The syntax of an inscription is checked directly after you stop
editing it (see Section~\ref{sec:errors}).
Refer to Chapter~\ref{ch:reference} for
a description of the syntax of Renew net inscriptions.
\subsubsection{The Name Tool}
\label{subsubsec:toolName}
\toolicon{NAME}
The name tool also connects text to net elements, in this case
to places and transitions only. By default, a name is set in bold
style.
The idea of a name for a place or transition is to enhance
readability of the net as well as simulation runs.
When a transition
fires, its name is printed in the simulation trace exactly like
you specified it in the name figure.
Place names are used in the simulation trace whenever tokens are removed
from or put into a place.
Also, a place's name is used in the window title of current marking windows
and a transition's name is used in the new transition binding
window (see Section~\ref{sec:simulation}).
Each place and transition should have at most one name figure connected
and each name should be unique within one net
(but the editor does not check either of these conditions).
Places and transitions without connected name figures are given
a default name like \texttt{P1}, \texttt{P2}, \dots{} and
\texttt{T1}, \texttt{T2}, \dots{}
\subsubsection{The Declaration Tool}
\toolicon{DECL}
A declaration figure is only needed if you decide to use types
(see Section~\ref{subsec:types}).
Each drawing should have at most one declaration figure.
The figure is used like a text figure, only that the text it contains
has a semantic meaning to the simulator.
The text of the declaration figure is used for import statements
as well as variable declarations (see Section~\ref{subsec:types}).
As in the case of inscriptions (see above), the content of a
declaration figure is syntax-checked as soon as you stop editing.
For an explanation of syntax errors that may occur, refer to
Section~\ref{sec:errors}.
\subsubsection{The Comment Tool}
\toolicon{COMM}
The comment tool connects comment texts to net elements.
Comment texts have a blue text color as default and no semantic meaning for the simulator.
% commented until final release of Maria mode
%\subsubsection{The Type Tool}
%
%\toolicon{MT}
%This tool is only available in the Maria mode as described in
%subsection~\ref{subsec:mariamode}. It creates special
%net inscriptions that are used to indicate place types and also
%place capacities in the Petri net tool Maria. These inscriptions
%are displayed in bold face italics.
%
%As in the case of inscriptions (see above), the content of a
%declaration figure is syntax-checked as soon as you stop editing.
%\subsection{Type Modeling Tools}
% hier kommen die Typhierarchie-Tools und Figures hin!
\section{Menu commands}
\label{sec:menuCommands}
This section contains a reference to Renew's menus and the
functions invoked by them.
\subsection{File}
As usual, the file menu contains every function that is needed
to load, save and export drawings.
In the following section, all menu items of the file menu are
explained.
\subsubsection{New Net Drawing (*.rnw)}
This menu invokes a function that creates a new drawing and
opens it in a drawing window in a default window size.
The new drawing is named ``untitled'' and is added to the
list of drawings in memory (see Section~\ref{subsec:menuDrawings}).
The keyboard shortcut for this function is \texttt{Ctrl+N}.
\subsubsection{New Drawing\dots}
Renew supports different kinds of
drawings (dependent on the installed plug-ins), this menu entry
opens a dialog where the type of drawing can be chosen.
Select the appropriate drawing type in the dialog and press the
\texttt{New} button.
\subsubsection{Open Navigator}
\label{sec:navigator}
This command opens the Renew file navigator in a new window.
The navigator displays folders and their Renew-related content in a
directory tree. The navigator is shown in Figure~\ref{fig:navigator}.
The keyboard shortcut for this function is \texttt{Ctrl+Shift+N}.
\newtwodotfive{The Navigator plugin was completely reimplemented. It is now persistent and extensible. We optionally provide some convenient extensions, such as the integration of the drawing’s diff feature (ImageNetDiff), which can now be triggered directly from the Navigator GUI. Another addition is the possibility to filter the Navigator's content and to add files and folders via drag and drop.}
\begin{wrapfigure}{l}{0.4\textwidth}
\begin{center}
\includegraphics[scale=\screenshotscale]{Navigator2-5}
\vspace{-30pt} % Move the caption close to the image
\end{center}
\caption{The Renew Navigator}
\label{fig:navigator}
\end{wrapfigure}
\paragraph{Usage of the Navigator}
At the top of the navigator window is an icon bar with eight buttons and an additional filter bar with an input field and three additional filter buttons.
We describe these buttons and their function ``left to right''.
The \texttt{Home} button displays the home directory which defaults to the
preconfigured files (see next paragraph) or, if there are none, to the current
directory.
The \texttt{NetPath} button displays all folders which are included in the netpath of Renew. This is usually empty but can be set when starting Renew or in the menu \textit{Simulation$\,\rightarrow\,$Configure Simulation$\,\rightarrow\,$Net Path}.
The \texttt{Add Folder} button opens a file chooser dialog and adds the chosen directory or file to the tree.
The \texttt{Expand} button expands the complete folder structure.
The \texttt{Collapse} button collapses all nodes of the tree.
The \texttt{Refresh} button checks for new and deleted files and updates the display in the tree area.
The \texttt{Remove} button removes a single node from the tree, while the \texttt{Remove all} button removes the whole tree.
The input field can be used to filter the Navigator's content. The first button next to the input field clears the input field and the last two buttons provide predefined filters for \texttt{.rnw} and \texttt{.java} files.
The persisted state of the Navigator is saved in the file \texttt{navigator.xml} in the \texttt{.renew} subdirectory of your home directory.
\paragraph{Configuring the Navigator}
The navigator has two properties that can be configured in the usual
configuration files: \texttt{de.\allowbreak{}renew.\allowbreak{}navigator.\allowbreak{}workspace} and
\texttt{de.\allowbreak{}renew.\allowbreak{}navigator.\allowbreak{}files\allowbreak{}At\allowbreak{}Startup}.
%
The first property defines the base directory for the navigator plugin.
It needs to be an absolute path like \texttt{/path/\allowbreak{}to/\allowbreak{}renew\renewversion{}/} or
\texttt{c:/\allowbreak{}path/\allowbreak{}to/\allowbreak{}Renew\renewversion{}/}.
The second property is a semicolon\footnote{A semicolon has to be used even
on Unix-based systems, where paths are usually separated with the colon (:).}
separated list of paths relative to the base directory.
All folders and files defined in this list will be added to the tree
area on startup.\\
Example: \texttt{MyNets;\allowbreak{}Core/\allowbreak{}samples;\allowbreak{}../\allowbreak{}../\allowbreak{}../\allowbreak{}home/\allowbreak{}renewuser/\allowbreak{}example\allowbreak{}Nets}
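Taken together, a navigator configuration might look like the following
sketch (the workspace path and folder names are placeholders; the
property names are spelled out as introduced above):
\begin{lstlisting}[style=xnonfloating]
de.renew.navigator.workspace=/path/to/renew/
de.renew.navigator.filesAtStartup=MyNets;Core/samples
\end{lstlisting}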
\subsubsection{Open URL\dots}
Renew can load drawings from a remote location
specified by a URL.
This command opens a dialog where you can type the URL and
press \texttt{OK}.
Note that Renew is not able to save drawings to URLs; it
still requires a local file name.
\subsubsection{Open Drawing\dots}
This function displays a file selector dialog that lets you select
a drawing that was saved previously.
The file selector dialog looks a little bit different depending
on the platform, but always allows you to browse the file system and
select an existing file. By pressing the OK button, the selection
is confirmed and Renew tries to load this file as a drawing.
If that does not succeed, an error message is displayed in the
application log and in the status line of the Renew window.
Otherwise, the drawing is added to the list of drawings in
memory (see Section~\ref{subsec:menuDrawings}) and opened in a new
drawing window.
The keyboard shortcut for this function is \texttt{Ctrl+O}.
The open dialog accepts the selection of multiple files.
This will result in multiple drawing windows to be opened in
the editor simultaneously.
Dependent on the set of installed plug-ins (and on your window
manager), one of several available drawing file types can be
chosen from a drop down box in the dialog.
This will restrict the display of files in the dialog.
You may override the file type filter by specifying a wildcard
pattern like \texttt{*.*} as file name and pressing
\texttt{Enter}.
\newtwodotfive{
With Renew 2.5 you can use drag and drop to open drawings. Just drag the files into the Renew menu and tools window.
}
\subsubsection{Insert Drawing\dots}
This function opens a previously saved drawing to be inserted into the
currently focused drawing editor (Opening works like in \texttt{Open Drawing\dots{}}).
All figures which are selected before insertion are deselected. Instead, all
the inserted figures are selected,
which makes it easy to move them around without disturbing the other figures.
\subsubsection{Save Drawing}
This function saves the active drawing (see Section~\ref{sec:usageBasics})
to a file using a textual format.
The drawing is saved to the last file name used, which is the file
it was loaded from or the file it was last saved to.
If the drawing has not been saved before, this function behaves
like \texttt{Save Drawing As\dots{}} (see below).
If there is an old version of the file, it is overwritten.
Depending on your operating system, overwriting a file might
need confirmation by the user (you).
The keyboard shortcut for this function is \texttt{Ctrl+S}.
\subsubsection{Save Drawing As\dots}
This function is used to determine a (new) file name for a
drawing and save it in textual format (see above).
Like in \texttt{Open Drawing\dots{}}, a file selector dialog is displayed to let
you determine the (new) file name and location.
After confirming with the OK button, the specified file name
is used to store the drawing now and during future invocations
of \texttt{Save Drawing}. The name of the drawing is
set to the file name without path and file extension.
If you cancel or do not select an appropriate file name,
the drawing will neither be saved nor renamed.
Dependent on the set of installed plug-ins (and on your window
manager), one of several available drawing file types can be
chosen from a drop down box in the dialog.
The effects are similar to the effects in the \texttt{Open
Drawing} dialog explained above.
However, the list of available file types is restricted by the
type of the drawing you are going to save.
The keyboard shortcut for this function is \texttt{Ctrl+Shift+S}.
%\subsubsection{Save Drawing As Serialized\dots}
%
%This functions saves the active drawing (see Section~\ref{sec:usageBasics})
%to a file using a binary format.
%To be precise, the Java objects the drawing consists of are converted
%to a byte stream using the Java Serialization feature.
%At this point, we recommend to use the previous menu and save in
%textual format. Firstly, serialized saving has not been tested
%intensively, and secondly, it is easier to ``patch'' a textual file
%in case of changes or loading/saving errors.
\subsubsection{Save All Drawings}
This function saves all drawings that are currently in memory
(see Section~\ref{subsec:menuDrawings}).
Before this can be done, all untitled drawings have to be given
a (file) name, which is done as in \texttt{Save Drawing As\dots{}}
(see above).
If you cancel any of the save dialogues, no drawing will be saved.
If all drawings are given a proper (file) name, they are all saved.
You should invoke this function before you exit Renew (see below).
\subsubsection{Close Drawing}
\label{subsubsec:close}
Closes the active drawing window and removes the corresponding drawing
from the list of drawings in memory (see Section~\ref{subsec:menuDrawings}).
%Be careful: Unlike most other applications, Renew does not require
%a confirmation to close an unsaved drawing!
%So save first, then close!
Before doing so, Renew checks if the drawing could have been
changed (this check is a little bit pessimistic) and if so,
asks you whether to save the drawing.
You have the options to answer \texttt{Save now}, \texttt{Close},
or \texttt{Cancel}.
\texttt{Save now} tries to save the drawing.
Drawings which already have a name are saved with that name.
If the drawing is untitled, the normal save dialog appears
(see above). Here, you still have the option to cancel, which
also cancels the closing of the drawing.
If you select \texttt{Close}, the drawing is closed and all
changes since the last save are lost (or the whole drawing,
if it was still untitled).
Last but not least, you have the option to \texttt{Cancel}
closing the drawing.
The keyboard shortcut for this function is \texttt{Ctrl+W}.
\subsubsection{Close All Drawing}
Closes all opened drawing windows and removes the corresponding drawings from the list of drawings in memory. If you cancel any of the save dialogues, the process is canceled and no further drawing windows are closed.
The keyboard shortcut for this function is \texttt{Ctrl+Shift+W}.
\subsubsection{Recently saved}
The \texttt{Recently saved} menu allows you to load recently saved files.
\subsubsection{Export}
The items in the export submenu allow you to save the active drawing
in several formats for use with other applications.
The \texttt{Export} menu has three submenus.
\texttt{Export current drawing} comprises export filters for the
active drawing only.
All these filters are available through the first menu entry
\texttt{Export current drawing (any type)}, too, where you can
choose the desired export format via a drop-down box in the file
dialog.
\texttt{Export all drawings (single file each)} provides the same
set of filters, but there they are applied to all drawings
automatically.
This results in one exported file per drawing; these files are
stored in the same location and with the same name as the respective
drawing files, but with a different extension.
\texttt{Export all drawings (merged file)} comprises export filters
that are able to merge all drawings into one file.
Since this feature must be supported by the format of the exported
file, the set of export filters in this submenu is restricted.
The export formats included in the basic Renew distribution are
listed as follows:
\paragraph{PDF}
This function produces a PDF document that contains the current drawing.
A file with the default extension of \texttt{.pdf} is generated.
The ``Renew FreeHEP Export'' plugin provides the
\texttt{de.renew.io.export.pageSize} and
\texttt{de.renew.io.export.pageOrientation} configuration properties, which
influence the page layout of generated PDF files.
Possible values for page size are: \emph{A3}, \emph{A4}, \emph{A5}, \emph{A6},
\emph{International}, \emph{Letter}, \emph{Legal}, \emph{Executive}, \emph{Ledger}
and \emph{BoundingBox}.
Possible values for orientation are: \emph{portrait} and \emph{landscape}.
The properties default to \emph{BoundingBox} for page size and \emph{portrait} for orientation.
However, orientation does not apply if page size is set to \emph{BoundingBox}.
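
For illustration, these properties might be set in your Renew configuration
like this (a minimal sketch using the documented property names and values;
the exact configuration file and its location depend on your installation):
\begin{verbatim}
# Hypothetical configuration excerpt (property names as documented above)
de.renew.io.export.pageSize = A4
de.renew.io.export.pageOrientation = landscape
\end{verbatim}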
The keyboard shortcut for this function is \texttt{Ctrl+Shift+P}.
\paragraph{EPS}
If you want to include net drawings into written material,
you can use an Encapsulated Post\-script (EPS) file. The EPS file
can be used to insert graphics into other documents,
e.g.\ in LaTeX, OpenOffice, Microsoft Office, and others.
EPS files are not of a fixed page size.
Instead, their bounding box matches exactly the dimensions of the drawing.
The keyboard shortcut for this function is \texttt{Ctrl+E}.
The EPS and PDF export features rely on the VectorGraphics
package of the FreeHEP project (see \url{java.freehep.org}).
The ``Renew FreeHEP Export'' plugin provides a property for the
configuration of the font handling
(\texttt{de.renew.io.export.epsFontHandling}).
It can be set to \texttt{Embed}, \texttt{Shapes} or \texttt{None}.
The \texttt{Shapes} option is the default as it produces the most
similar output with respect to screen display.
However, the generated files can become rather large.
The \texttt{None} option comes close to the old Renew export behavior
without any font information included.
The \texttt{Embed} option should be the best (theoretically), but it
often produces unreadable results.
The background of drawings exported to EPS can also be set to transparent
by setting the property \texttt{de.renew.io.export.eps-transparency} to true.
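
As a sketch, the EPS-related properties described above might be configured
as follows (illustrative values only; \texttt{Shapes} is already the default
for font handling):
\begin{verbatim}
# Hypothetical configuration excerpt (illustrative values)
de.renew.io.export.epsFontHandling = None
de.renew.io.export.eps-transparency = true
\end{verbatim}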
% For developers:
% The class \texttt{de.renew.util.PostScriptWriter}
% (which is a subclass of \texttt{java.awt.Graphics}
% that generates PostScript code)
% can easily be used in other applications. If you want to do so,
% please contact Frank Wienberg (see Section~\ref{ap:contact})
% to find out about capabilities and limitations of this class.
%
% \bug{The \texttt{Postscript Writer} lacks some features, so that it can not
% always produce an exact rendering of the drawing on the screen.
% Known missing features are:
% transparency in GIF images,
% transparency attribute of lines and shapes, and
% embedding of arbitrary fonts.
% Details are given in the respective tool explanations.}
\paragraph{PNG}
This function produces a PNG image that contains the current drawing.
A file with the default extension of \texttt{.png} is generated.
This export format differs from the previously mentioned formats since it
is pixel-oriented instead of vector-based.
The generated image has a fixed resolution that cannot be scaled without
loss of information.
The PNG export is based on the FreeHEP library.
The keyboard shortcut for this function is \texttt{Ctrl+0}.
\paragraph{XML}\label{subsec:xmlexp}
There are several export formats that use XML.
We provide experimental PNML support since Renew 2.0.
PNML stands for Petri net Markup language and has been presented
at the ICATPN'2003 in \cite{Billington2003}.
With Renew 2.2, the SVG format has been added.
With Renew 2.4, the support for the experimental XRN format provided in
previous releases has been discontinued.
\subparagraph{PNML http://www.informatik.hu-berlin.de/top/pntd/ptNetb}
This format saves the drawing as a P/T-net, compatible with the
P/T-net type definition in a version from summer 2003.
Note that all drawing elements which are not needed to describe
the P/T-net are omitted.
\subparagraph{PNML RefNet}
This format saves the drawing as a Renew reference net.
Graphical figures without semantic meaning (e.g. those figures
produced by the drawing tool bar) are omitted.
The underlying PNML type definition is experimental; it may be
subject to change without notice.
Please note that the PNML standard allows multiple nets to be
contained within one file.
\bug{%
The Renew PNML export and import have been developed at a time when the
PNML standard was still under development.
The code has not been revised since, so that it might not comply with
the current PNML standard.
}
\subparagraph{SVG}
This format exports the complete graphical information of a drawing into an
SVG image file which can be displayed by many modern web browsers.
Petri net semantics are not retained.
The SVG export is based on the FreeHEP library.
\paragraph{Woflan}
Woflan (see~\cite{Woflan98})
is a Workflow Analysis tool that checks if a Petri net
conforms to some restrictions that make sense for Workflows.
As Woflan only handles single, non-colored Petri nets without
synchronizations, only the structure of the active window's net
is exported. Still, if you have the Woflan tool, it makes sense to check
Renew workflow models for severe modeling errors in their structure.
For the time being, the initial place of the workflow net must
carry the name \texttt{pinit}. Otherwise, a place with this name
(but without any connected arcs) will be generated in the
exported net.
% commented until final release of Maria mode
%\paragraph{to Maria\dots}\label{subsec:mariaexp}
%
%Exports the active drawing in the Maria format.
%This command is only available in the Maria
%mode as described in subsection~\ref{subsec:mariamode}.
\paragraph{Shadow Net System}\label{subsec:expshadow}
A \emph{shadow net system} can comprise one or more nets
which can be used by the non-graphical simulator (see
section~\ref{sec:simulationServer}), the net loader or other
tools.
Only the
semantic information is contained in the shadows, but not
the visual appearance.
The current formalism (see section~\ref{subsec:formalismgui}) and the
configuration of simulation traces for individual net elements (see
section~\ref{subsubsec:trace}) will be stored within the shadow net
system.
\subparagraph{As merged file}
A shadow net system that contains all nets needed for a
simulation can be generated by the \texttt{N to 1} entry in the
\texttt{Export all drawings (merged file)} menu.
Before exporting a collection of nets to the shadow simulator, it
is recommended to do a syntax check on the net.
Although any syntax errors in the nets will be detected before the
start of a non-graphical simulation, fixing these errors requires the
editor.
\subparagraph{As single file each}
These files are well suited for the net
loading mechanism described in subsection \ref{subsec:netLoading}.
The command does not require any additional interaction, all file
names are derived from the corresponding drawing files. If a
drawing has not been assigned a file name, it is skipped during
the export.
\subsubsection{Import}
The items in the import menu allow you to load drawings from
different file formats.
\paragraph{import (any type)}
The first entry of the menu combines all import filters into
one dialog where you can choose the desired format from a
drop-down box.
For window managers where this drop-down box is not available,
the separate menu entries are still available.
\paragraph{XML}
Analogous to the export features described in
subsection~\ref{subsec:xmlexp}, Renew provides
an import filter for the PNML format.
%\subparagraph{XML}
%Imports a file in the obsolete XML format of Renew 1.6.
%Whenever possible, the graphical and semantic information
%is restored from the file.
%
%Note again that you will not be able to import an
%XML file of a different Renew version with this command!
%
%This feature has been cannot be used anymore. Its support
%has been discontinued.
\subparagraph{PNML}
Tries to import a file in PNML format.
The filter automatically guesses the net type used in the PNML
file.
It tries to extract as much graphical and semantic information
as possible from the file.
\paragraph{Shadow Net System}
Lets you import a previously exported (or automatically
generated) shadow net system (see above).
Since a shadow net system does not contain any graphical
information,
the places, transitions, arcs, and inscriptions are
located in a rather unreadable manner.
Thus, this function only makes sense for shadow net systems
automatically generated by other tools.
After importing, it is of course also possible to edit all
nodes and inscriptions in a normal fashion.
An automatic graph layout function that can
ease the task of making an imported net readable
is described in Subsection~\ref{subsubsec:netlayout}.
\subsubsection{Print\dots}
The print menu invokes a platform dependent print dialog and lets you
make hardcopies of the active drawing. Using the Java standard print
system, though, the quality of the printer output is usually very
poor. In that case, we encourage you to use the EPS or PDF export instead and print with an external tool.
The keyboard shortcut for this function is \texttt{Ctrl+P}.
\subsubsection{Exit}
Tells Renew to terminate.
All drawings are closed as if you closed them manually,
which means that now Renew asks you about saving
changed drawings (see Subsection~\ref{subsubsec:close}).
Due to the introduction of the plug-in system, other plug-ins might
still be active when the editor is closed.
With respect to the simulator plug-in, the editor asks you for
confirmation to terminate a running simulation (if there is any).
If you choose \texttt{No}, then the non-graphical simulation of
Renew will continue.
To enforce that the whole plug-in system is shut down when you
close the editor, you can configure the property
\texttt{de.renew.gui.shutdownOnClose} (see
Subsection~\ref{subsec:systemExit} for details).
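
A minimal sketch of such a configuration entry is shown below; the boolean
value is an assumption, see Subsection~\ref{subsec:systemExit} for the
authoritative description:
\begin{verbatim}
# Assumed boolean property; consult the referenced subsection for details
de.renew.gui.shutdownOnClose = true
\end{verbatim}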
\subsection{Edit}
\label{subsec:menuEdit}
The Edit menu contains functions to insert, remove and group
figures and to change a figure's Z-order.
Details can be found in the following sections.
\subsubsection{Undo, Redo}
Up to ten modifications to each drawing can be undone step by
step. The effect of an undo can be undone by the redo command. The keyboard shortcut for undo is \texttt{Ctrl+Z} and for redo it is \texttt{Ctrl+Y}.
\subsubsection{Cut, Copy, Paste}
This function group offers the typical clipboard interactions.
Cut and Copy relate to the current selection in the active drawing
window (see Section~\ref{sec:usageBasics}).
Thus, these functions are only available if there is a
current selection.
{\bf Cut} puts all selected figures into the clipboard and removes
them from the drawing. The keyboard shortcut for Cut is \texttt{Ctrl+X}.
{\bf Copy} puts all selected figures into the clipboard, but they
also remain in the drawing. The keyboard shortcut for Copy is \texttt{Ctrl+C}.
{\bf Paste} inserts the current clipboard contents into the active
drawing. The upper left corner of the object or group of objects is
placed at the coordinates of the last mouse click.
The keyboard shortcut for Paste is \texttt{Ctrl+V}.
Note that due to restrictions of Java, Renew's clipboard does not
interact with your operating system's clipboard.
The current selection is automatically extended
to include all referenced figures before copying to the clipboard.
If for example you select an arc inscription and invoke copy and
then paste, the arc, the start figure, and the end figure of the arc
will also be copied. This is sometimes not what you intended to do,
but you can easily move the copied arc inscription to the original
arc (see Section~\ref{subsubsec:toolConnectedText}) and remove the
other duplicated figures.
Of course, cut only removes the figures which were originally selected.
\tip{A better alternative for copying inscriptions is to mark and copy
  the text of the inscription while you are in text edit mode
  (\texttt{Ctrl+C}; unfortunately, this does not work on all Unix
  platforms).
Then, create a new inscription by right-clicking the net element.
Edit the new inscription by right-clicking it and paste the copied
text by pressing \texttt{Ctrl+V}.
}
%Bug fixed
%\bug{Connection figures should only be copied and pasted together with
%the figures they connect. Otherwise, pasting will create a copy of the
%connection figure that is not associated correctly to its start and end figure.
%To correct this, drag both connection handles and release them on other
%or even on the same figures.
%The same holds for connected text figures. After pasting, a connected text
%figure has to be assigned to a new parent like described in
%Section~\ref{subsubsec:toolConnectedText}: The Connected Text Tool.
%}
\subsubsection{Duplicate}
Duplicate works like Copy followed by Paste (see previous Section),
where the paste coordinates are not depending on the last mouse
click, but are just a small offset to the right and down from the
position of the original selection.
%Bug fixed:
%Mind the bug described above for the paste function!
The keyboard shortcut for Duplicate is \texttt{Ctrl+D}.
\subsubsection{Delete}
Removes the selected figures from the active drawing.
Note that if a figure is removed, all its connected text
figures and connection figures are also deleted.
The keyboard shortcut for Delete is the backspace and/or the delete key
(depending on the platform).
\subsubsection{Search, Search \& Replace}
\textbf{Search} looks for a match or substring match of a
user-given search string in all text fields of all loaded nets. Search is
case sensitive. After an occurrence of the search string is found, the next
one can be found by pressing the \texttt{search} button again.
Changes to the search string start a new search.
The keyboard shortcut for this function is \texttt{Ctrl+F}.
\textbf{Search \& Replace} gives you the opportunity to replace any
occurrence of the search string with a replace string. Each replacement is
prompted and has to be confirmed by the user. Changes to the replace string
start a new search.
The keyboard shortcut for this function is \texttt{Ctrl+G}.
%TODO{Ctrl+G does not work for me, need Shift additionally.\\}
The search window allows you to select whether the search should
be case sensitive and whether it should include all drawings or only
the active one.
\subsubsection{Group, Ungroup}
You can create a group of all currently selected figures in the active
drawing. A group is actually a new figure, which consists of all the
selected figures. You can even group a single figure, which does not
really make sense unless you want to prevent resizing of this figure.
From now on, the figures inside the group can only be moved, deleted,
etc. together, until you ``ungroup'' the group of figures again.
To release a group, one or more groups have to be selected.
Then, select the Ungroup menu, and all group participants are single
figures again (which are all selected).
\subsubsection{Select All}
The commands for selecting or deselecting large sets of nodes
allow the user to select groups of logically related net elements together.
For selecting locally related net elements or individual net
elements see Subsection~\ref{subsec:toolSelection}.
Using the select all command, all figures of a drawing are selected.
This is useful when you want to move all the net elements
to a different place. This command works even for figures that are located
off-screen.
The keyboard shortcut for this function is \texttt{Ctrl+A}.
\subsubsection{Invert Selection}
Inverts the selection of the drawing: All selected net elements will be
removed from the selection, whereas all the other net elements will be
selected.
\subsubsection{Select}
This menu hierarchy is used to select all nodes of a certain type.
E.g., it offers the possibility to select all transitions,
or all arcs, or all inscriptions that are attached to places.
This command comes in handy when you want to set attributes like
color or font size for all figures of a certain type.
\subsubsection{Add To Selection}
This command is similar to the select command, but it does not clear
the selection before it selects the net elements, thereby achieving
a union of the selection sets.
This command is especially useful when you want to select
a combination of net elements that is not naturally covered
by the select command itself. E.g., you can select all transitions
and then add all inscriptions of transitions to the selection.
\subsubsection{Remove From Selection}
This command is the opposite of the add-to-selection command.
It removes certain figures from the selection, but leaves
the selection state of the remaining figures unchanged.
This command can be used to select all figures, but \emph{not}
the transitions or \emph{not} the arcs.
\subsubsection{Restrict Selection}
Sometimes you want to select a certain type of net elements
inside a certain area. In this case, the restrict command
allows you to select the entire area as described in
Subsection~\ref{subsec:toolSelection}, but to restrict the selection
to a certain type of figures afterward.
The remove-from-selection command can be used instead of
this command, if you want to specify the figures to drop from
the selection instead of the figures to keep in the selection.
%\subsubsection{Toggle Sticky Tools}
%removed in Renew 1.4
%
% Selecting this menu toggles the Sticky Tools mode of Renew.
% By default, a tool will deactivate itself after the completion
% of a single task. Afterward, the selection tool will be
% reenabled. In Sticky Tools mode all tools will remain
% activated until you choose another tool explicitly.
% In general, Sticky Tools mode is most useful during the
% initial creation of nets and the default mode is more apt to
% later modification stages.
% But of course, which mode to use also depends on your
% personal preferences.
\subsection{Layout}
\label{subsubsec:netlayout}
The \texttt{Layout} menu allows you to snap figures to a grid, to
align a figure's position relative to other figures, to change the
Z-order of figures, and to lay out graphs automatically.
\subsubsection{Toggle Snap to Grid}
Selecting this menu toggles the Snap to Grid mode of Renew.
The grid is not absolute with respect to the page; rather, when the
grid is active, figures can only be placed at grid positions and
moved by certain offsets.
Because the editor considers offsets while moving (not absolute
coordinates), figures should be aligned first (see below) and
then moved in grid mode.
\newtwodotfive{
The grid function is also very basic, but the grid density is now
customizable.
Set the option \texttt{ch.ifa.draw.grid.size} to the desired value in
your preferences. The default is \emph{5} pixels. This preference is
dynamic, i.e.\ it can be set at runtime; changes take effect on the
next execution of a command. The grid can now also be set as the default
behavior for drawing views. Set the \texttt{ch.ifa.draw.grid.default}
property to \emph{true} (default \emph{false}).}
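
For example, a configuration using the grid properties mentioned above might
look like this (the grid size of 10 pixels is just an illustrative value;
the default is 5):
\begin{verbatim}
# Illustrative grid configuration (size in pixels)
ch.ifa.draw.grid.size = 10
ch.ifa.draw.grid.default = true
\end{verbatim}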
\subsubsection{Align}
The commands in this menu align the selected figures.
The figure selected first is the reference figure which
determines the coordinates for all others.
\paragraph{Lefts, Centers, Rights.}
These commands align the figure's $x$-coordinates, i.e.\ they move them
horizontally.
\texttt{Lefts} sets the left-hand side of all selected figures to
the $x$-coordinate of the left side of the figure that
was selected first, \texttt{Rights} does the same for the
right-hand side.
\texttt{Centers} takes into account the width of
each figure and places all figures so that their $x$-center
is below the reference figure's $x$-center.
The keyboard shortcut for aligning centers is \texttt{Ctrl+}{\textbackslash}
(the backslash character).
\bug{%
The shortcut works only on an English keyboard layout, where the
keys for the shortcut can be typed directly, i.e. without additional
modifiers like Shift.%
}
\paragraph{Tops, Middles, Bottoms.}
These commands work exactly like the commands in the previous
paragraph, except that the $y$-coor\-dinate is changed.
Thus, figures are moved vertically in order to be aligned
with their tops, middles, or bottoms.
The keyboard shortcut for aligning middles is \texttt{Ctrl+-} (the minus sign).
\subsubsection{Spread}
The items of this menu spread the selected figures equidistantly.
The two outermost figures that are selected stay at their previous location
while all other selected figures are repositioned.
The order of the figures (left-to-right, top-to-bottom or
diagonal) remains unchanged.
To use the spread commands, you must have selected at least three
figures.
\paragraph{Lefts, Centers, Rights, Distances.}
Here we spread the figures by modifying their $x$-coordinates. The
$y$-coordinate remains unchanged.
\texttt{Lefts} arranges the figures in a way that the
$x$-coordinates of their left borders are distributed equally.
\texttt{Rights} does the same with respect to the figure's right
borders, \texttt{Centers} with respect to each figure's center
point.
\texttt{Distances} arranges the figures in a way that the space
between each pair of neighboring figures has the same width.
The differences between the four commands will only be visible
when figures of different sizes are selected.
\paragraph{Tops, Middles, Bottoms, Distances.}
These functions work exactly like the functions in the previous
paragraph, except that the $y$-coordinate is changed.
Thus, the figures are moved vertically to equal the distances of
their tops, middles, bottoms or borders.
\paragraph{Diagonal.}
This command spreads the figures in both directions, horizontally and
vertically.
All figures are treated with respect to their center point.
First of all, a virtual line is drawn from the outermost figure (in
relation to the center of the bounding box of all selected
figures) to the figure farthest away from the outermost one.
Afterward, all other figures are moved onto that line with equal distances
between their center points.
The order of the figures on the line is determined by the
order of the orthogonal projections of their original location onto the
virtual line.
\subsubsection{Send to Back, Bring to Front}
The figures in a drawing have a so-called Z-order that determines
the sequence in which the figures are drawn.
If a figure is drawn early, other figures may cover it partially
or totally.
To change the Z-order of figures, the functions \texttt{Send to Back}
and \texttt{Bring to Front} are available. \texttt{Send to Back}
puts the selected figure(s) at the beginning of the figure list and
\texttt{Bring to Front} puts it/them at the end, with the result
explained above.
\tip{Sometimes, certain figures cannot be reached to select and modify
them. Using these functions it is possible to temporarily move
the covering figure to the back, select the desired figures, and
move the figure to the front again.
Another option in cases like this one is to use Area Selection
(see Section~\ref{subsec:toolSelection}).
}
\subsubsection{Figure size}
These two commands set the size of figures.
The function \texttt{copy within selection} sets
the size of all selected figures to the size of
the figure that was selected first.
The command \texttt{reset to default} resets the selected
figures' sizes to their figure-type-specific defaults. As defaults
are specified only for net element figures, the command
will not have any effect on ordinary drawing figures.
\subsubsection{Automatic Net Layout\dots}
Especially for automatically generated nets, it is nice to
have an automatic layout of the net graph, so that one gets
at least a rough overview of the structure of the net.
This menu entry starts an automatic net layout on the
current drawing.
While this mode is active, the nodes of the net are moved according
to certain rules that are to some extent inspired by
physical forces acting on a mesh of springs.
\begin{itemize}
\item Arcs have a certain optimal length that is dependent
on the size of the adjacent nodes. They will act as springs.
\item Arcs feel a torque whenever they are not horizontally
or vertically oriented. The torque works toward these optimal
positions.
\item Nodes feel a repulsive force from each other until
a certain distance is reached where this force disappears entirely.
\item Nodes feel friction, i.e., the motion that was caused
by the forces mentioned before continually slows down unless the force
is still applied and compensates the friction.
\item Nodes that would move out of the upper or left border
are pushed back into the viewable area of the drawing.
\end{itemize}
These rules will not produce the nicest net graph in many cases,
but they can ease the early stages of the layout considerably.
They might also be used to maintain a layout during early prototyping
phases when the structure of a net changes constantly.
In order to improve the layout of the graph, a special window
pops up that allows you to control some parameters of the physical model
using sliders. The first slider controls the length of the
springs. Some diagrams tend to clump together too much, which might
be a reason to raise this value. On the other hand, the
springs might be too rigid, not allowing some of them to stretch to
their optimal length. In that case, you can control the rigidity of
the springs with the second slider.
The repelling force acts only up to a certain distance.
By default, the force is quite far-reaching and establishes
a nice global spreading. But you may want to reduce this force's
maximum distance in order to exclude only overlapping nodes.
In that case, it may also be a good idea to increase the repulsion strength.
The torque strength controls whether the arcs are supposed to be
very strictly horizontal or vertical. Initially, this force might
actually inhibit the progress toward the optimal layout, but in the
end it helps to get a nice net. Try to vary this slider's position
during the layout for optimal results.
Lastly, the friction slider may be lowered, so that the motion is
faster overall. Use this slider with care, because the layout
algorithm may become unstable for very low friction values and
convergence to an equilibrium might actually slow down due to
oscillations. The optimal value depends heavily on the topology of the net.
If you feel that you cannot set some force's strength high enough,
consider lowering the other forces, and also lowering the friction
a little.
\tip{Even while the graph is changed automatically, you can still
grab a node with the selection tool and move it to a desired position.
Of course, it might fall back into the old position due to the
acting forces, but your action might establish a topologically
different situation where the forces act toward a different
equilibrium. This is especially useful when you have selected high torque
and rigid springs, but low or no repulsion.
}
After you are satisfied with the graph, switch off the layout mode.
If you add or remove nodes or arcs during the layout procedure,
you have to restart the net layout algorithm before these changes
are taken into account. Note that the start of a layout procedure
always affects the current drawing, not the drawing that was previously
used for layout.
\subsubsection{Location}
Using this menu you can declare the currently selected
figures as either fixed or dynamic. Dynamic nodes participate in the
automatic layout as usual, which is the default.
On the other hand, fixed nodes still exert forces
upon other nodes, but they are rigidly glued to their position
and move only if the user moves them.
By fixing the location of some nodes, you can select a preferred
direction or specify the basic shape of the net while leaving the details
to the layout algorithm.
\subsection{Attributes}
\label{subsec:menuAttributes}
This menu helps you to change a figure's attributes after its
creation. If several figures are selected, the attribute is
changed for all figures that support that attribute.
If you try to change an attribute that some selected figures do
not support (e.g.\ font size for rectangles), nothing
is changed for those figures, but the change is still applied
to the other figures.
\subsubsection{Fill Color}
The fill color attribute determines the color of the inner
area of a figure. All figures but the line-based figures like
connection, arc, etc.\ offer this attribute.
The value of this attribute can be any RGB color, but the
user interface only offers 14 predefined colors from which
you can choose. The default fill color is Aquamarine except for
text figures, where it is None.
When you choose \texttt{other\dots} at the end of the list of colors, you
get a full-featured color chooser dialog that provides multiple ways to
define any color.
There are four buttons at the bottom of the dialog:
\begin{description}
\item[Apply] applies the currently chosen color to selected figures.
\item[Update] chooses the color of a selected figure and makes it the
current color in the dialog.
\item[OK] closes the dialog and applies the currently chosen color to
selected figures.
\item[Cancel] closes the dialog.
\end{description}
\tip{The dialog can be used to copy color attributes between figures by a
sequence of \texttt{Update} and \texttt{Apply} actions. Similar dialogues
are provided for other attributes like pen color, text color, font and
font size.}
\subsubsection{Opaqueness}
The opaqueness attribute determines the transparency of the inner area of a figure,
of the pen color or of the font.
Each of the attributes \texttt{Fill Color}, \texttt{Pen Color} and \texttt{Font}
has its own opaqueness menu, located right below the respective menu entry.
The visibility of each item can be set to values ranging from 0\% (invisible)
to 100\% (opaque).%
\bug{The transparency attribute is ignored in EPS export.
However, transparencies are printed correctly using the Print dialog and
in the PDF, SVG and PNG export formats.}
\subsubsection{Pen Color}
The pen color attribute is used for all lines that are drawn.
All figures but the image figure support this attribute.
Note that the pen color does not change a text figure's
text color (see below), but the color of a rectangle frame
that is drawn around the text.
Again, choose the desired color from the given list.
The default pen color is black, except for text figures, where
it is None (i.e.\ transparent).
The \texttt{other\dots} entry at the end of the list of colors opens a
full-featured color chooser dialog as described under \texttt{Fill color}.
\subsubsection{Visibility}
The visibility attribute can be used for all types of figures.
A figure marked as invisible is still part of the drawing, but
it will not be displayed. As it is not visible, it cannot be
selected by the mouse any more, but the select commands from
the menus \texttt{Edit} or \texttt{Net} will still include the
figure when appropriate.
This feature is useful especially in combination with the
\texttt{Associate highlight} command from the net menu. The
invisible figure will appear in the instance drawing while it
is highlighted.
\subsubsection{Arrow}
This attribute is only valid for the connection and the
arc figure and offers four possibilities of arrow tip
appearance: \texttt{None}, \texttt{at Start}, \texttt{at End},
or \texttt{at Both} ends of the line.
If the figure is an arc, its semantics are changed accordingly.
\subsubsection{Arrow shape}
This attribute is valid only for lines or connection figures.
The style of arrow tips can be changed to one of four shapes,
which are usually used to mark different semantics of arcs in
Renew.
But as it is currently not possible to change the arc semantics
in accordance to the arrow tip shape, this attribute will not
have any effect on arc figures.
\subsubsection{Line Style}
Every line possesses a line style, which can be chosen out of
the options \texttt{normal}, \texttt{dotted}, \texttt{dashed},
\texttt{medium dashed}, \texttt{long dashed} or
\texttt{dash-dotted}.
Lines are typically created as solid, normal lines.
It is also possible to define your own line style: After
choosing the option \texttt{other\dots{}}, you can enter any
custom line style in a non-modal dialog.
The dialog has four buttons \texttt{Apply}, \texttt{Update}, \texttt{OK}
and \texttt{Cancel} that work similar as in the \texttt{Fill color} dialog
(see above).
A custom line style consists of a space-separated sequence of numbers.
The first number of the sequence determines the length (in
pixels) of the first dash. The second number is interpreted as
the length of the gap after the first dash. The third number
determines the second dash's length, then the next gap's length
follows and so on.
The sequence must consist of an even number of numbers. There is
only one exception: A single number can be used for a simple dashed
line where dashes and gaps are of the same length.
The normal solid line style can be set by applying an empty sequence.
\begin{tabbing}
\texttt{medium dashed}\quad\quad\=''7 3 1 3''\quad\quad\=\kill
Some examples from our predefined line styles:\\
\texttt{dashed} \> ``10'' \> \begin{picture}(90,1)(0,-4)
\put(0,0){\line(1,0){10}}
\put(20,0){\line(1,0){10}}
\put(40,0){\line(1,0){10}}
\put(60,0){\line(1,0){10}}
\put(80,0){\line(1,0){10}}
\end{picture} \\
\texttt{medium dashed} \> ``15 10'' \> \begin{picture}(90,1)(0,-4)
\put(0,0){\line(1,0){15}}
\put(25,0){\line(1,0){15}}
\put(50,0){\line(1,0){15}}
\put(75,0){\line(1,0){15}}
\end{picture} \\
\texttt{dash-dotted} \> ``7 3 1 3'' \> \begin{picture}(90,1)(0,-4)
\put(0,0){\line(1,0){7}}
\put(10,0){\line(1,0){1}}
\put(14,0){\line(1,0){7}}
\put(24,0){\line(1,0){1}}
\put(28,0){\line(1,0){7}}
\put(38,0){\line(1,0){1}}
\put(42,0){\line(1,0){7}}
\put(52,0){\line(1,0){1}}
\put(56,0){\line(1,0){7}}
\put(66,0){\line(1,0){1}}
\put(70,0){\line(1,0){7}}
\put(80,0){\line(1,0){1}}
\put(84,0){\line(1,0){6}}
\end{picture}
\end{tabbing}
Line styles can not only be applied to lines, connections and scribble
figures, but also to rectangles, ellipses, polygons, transitions, places
and other closed shapes.
\subsubsection{Line Shape}
With this attribute a \texttt{straight}
line can be changed to a \texttt{B-Spline} and vice
versa. Every line type can be changed to a B-spline; such lines
retain their other properties, like handles and behavior.
If this conversion is applied, additional attributes are offered to
influence the B-spline algorithm:
\begin{description}
\item[\texttt{standard}] This works as a reset
to standard settings with a degree of 2 and a segment size of 15.
\item[\texttt{Segments}] This is used to
change the number of segments to smooth the edges.
\item[\texttt{Degree}] The lower the number, the closer
the line sticks to the handles; a degree of 2 creates a maximally curved line.
The degree depends on the number of handles and is only effective
if the chosen value is not larger than the number of handles plus one.
\end{description}
\subsubsection{Round corners}
This attribute influences the behavior of round rectangles
when they are scaled.
When set to \texttt{fixed radius}, the size of the curvature
will remain unchanged regardless of the scaling of the figure.
Nevertheless, an explicit modification of the radius is still
possible by using the special yellow handle. This is the
default, which was exclusively used in previous releases of Renew.
The setting \texttt{scale with size} will adapt the curvature
size when the rectangle is scaled, so that the proportion of
the rectangle used for the curvature remains the same.
\subsubsection{Font}
Only applicable to text-based figures, this attribute
sets the font for the complete text of this text figure.
Not all fonts are available on all platforms.
It is not possible to use several fonts inside one text figure
(but still, this is a graph editor, not a word processor or
DTP application).
The \texttt{other\dots} entry at the end of the list of fonts opens a
font selection dialog that works like the color dialog described
under \texttt{Fill color}.
The font selection dialog includes other font attributes like the size or
italic and bold style options.
\emph{Caution:} If you use non-standard fonts, the text will show
up differently on systems where the fonts are not installed.
% \bug{
% Any non-standard font will cause problems with Postscript or EPS
% export of the drawing.
% The export filter is not capable to embed fonts.
% If font names contain spaces, the postscript code becomes faulty.
% }
\subsubsection{Font Size}
Only for text-based figures, select one of the predefined font sizes
given in points with this menu.
The \texttt{other\dots} entry at the end of the list opens a dialog where you
can enter any number as size.
The dialog has four buttons \texttt{Apply}, \texttt{Update}, \texttt{OK}
and \texttt{Cancel} that work similar as in the \texttt{Fill color} dialog
(see above).
\subsubsection{Font Style}
Available font styles (again, only for text-based figures) are
Italic and Bold. If you select a style, it is toggled in the selected
text figure(s), i.e.\ added or removed. Thus, you can even combine
italic and bold style. To reset the text style to normal, select
Plain.
\subsubsection{Text alignment}
The direction of text justification can be configured by this
attribute. This will affect the alignment of lines in text figures with
multiple lines as well as the direction of growth or shrinking
when a text changes its width due to a change in its text length.
By default, inscriptions and other connected text is centered at
the parent figure while other text figures are left-aligned.
\subsubsection{Text Color}
The text color attribute is only applicable to text-based figures and
sets the color of the text (sic!). This is independent of the pen and
fill color. The default text color is (of course) black.
The \texttt{other\dots} entry at the end of the list of colors opens a
full-featured color chooser dialog as described under \texttt{Fill color}.
\subsubsection{Text Type}
This attribute is quite nice to debug your reference nets quickly.
The text type determines if and what semantic meaning a text
figure has for the simulator.
If a text figure is a \texttt{Label}, it has no semantic meaning at
all. If it is an \texttt{Inscription}, it is used for the simulation
(see Section~\ref{subsubsec:toolInscription}: The Inscription Tool).
A \texttt{Name} text type does not change the simulation, but makes
the log more readable (see Section~\ref{subsubsec:toolName}: The Name Tool).
\tip{It is quite convenient to ``switch off'' certain inscriptions by
converting them to labels if you suspect them of causing problems.
This way, you can easily re-activate them by converting them back
to inscriptions.
}
You might also want to have certain inscriptions appear as transition
names during the simulation. You can achieve this by duplicating the
inscription figure, dragging the duplicate to the transition (see
Section~\ref{subsubsec:toolConnectedText}: The Connected Text Tool)
and changing the duplicate's text type to \texttt{Name}.
\subsection{Net}
\label{sec:net-menu}
This menu offers commands that are useful for nets only.
You can semantically modify figures in a drawing, check
the active drawing for problems, or configure the
graphical simulation feedback for net elements.
\subsubsection{Split transition/place}
This command provides a simple way to refine net
elements by splitting a single transition or place into two.
If a transition is split, the old transition is connected to a
newly created place. This place, in turn, is connected to a
newly created transition. The inbound arcs of the old
transition remain unchanged; the outbound arcs are reconnected
to the new transition. Reserve arcs are split into an inbound
and an outbound arc, which are handled accordingly.
If a place is split, it will be extended by a new transition and
a new place. The connected arcs are treated in the same manner
as described above (outbound arcs are reconnected to the new
place).
\subsubsection{Coarsen subnet}
This command coarsens place-bounded or transition-bounded subnets.
It is only available if a place-bounded or transition-bounded
subset of figures is selected within the drawing.
On execution,
if the selected subset is place-bounded,
all places are merged into one and all transitions are removed.
The inscriptions of the removed places are attached to the single
remaining place.
All arcs entering or leaving the selected subnet are
reconnected to this place, too.
If the selected subset is transition-bounded, transitions are merged and
places are removed, respectively.
\subsubsection{Trace}
\label{subsubsec:trace}
This menu and the next two are realized as figure attributes
that can be applied to each single net element.
However, they must be set before the simulation is started to
take effect.
They also cannot be applied to figures in net instance drawings.
Sometimes, the simulation log becomes very complex and full.
To reduce the amount of information that is logged, the trace
flag of net elements can be switched off.
\begin{itemize}
\item If a transition's trace flag is switched off, the firings of this
transition are not reported in the log window.
\item A place's trace flag determines whether the insertion of the initial
marking into the place should be logged.
\item If an arc's trace flag is switched off, the messages informing about
tokens flowing through this arc are omitted.
\end{itemize}
With the integration of the Log4j framework (see
Section~\ref{subsec:log4jConfiguration}), the need for the trace attribute has been
reduced.
The configuration of Log4j is much more flexible: it allows for multiple
log event targets with individual filter criteria, while the trace flag
globally controls the generation of log events for a net element.
A valid reason to still use the trace attribute may be the simulation
speed when you want to discard the trace anyway, but Log4j is rather
efficient in such a situation, too.
Please note that Renew provides a graphical Log4j configuration dialog
for simulation traces (see Subsection~\ref{subsec:loggingPlugin}).%
\subsubsection{Marking}
\label{subsubsec:marking}
This menu controls the default as well as the current choice
of how the contents of each place are to be displayed during simulation.
There are four ways to display the marking of a
place during simulation: Either the marked places
are simply highlighted in a different color (\texttt{highlight only}),
or the number of tokens is shown
(\texttt{Cardinality}), or the verbose multiset of tokens
(\texttt{Tokens}) is shown,
or each token and its attributes are shown in detail
(\texttt{expanded Tokens}). This is also
the default mode for current marking windows.
However, these modes can be switched at drawing time
and at simulation time using the \texttt{Marking} menu.
The expanded token mode relies on the undocumented
feature structure (fs) formalism to display object attributes.
Since the fs formalism is no longer distributed with the
base Renew distribution, this mode is not available unless you
install the FS plug-in.
In Expanded Tokens mode, token objects are shown in a UML-like (Unified
Modeling Language) notation. An object is noted by a box containing
two so-called compartments.
\begin{figure}[htbp]
\centerline{%
\includegraphics[scale=\netscale]{TokenObjectExample.eps}%
}
\caption{\label{fig:TokenObjectExample}An Example of Browsing Token
Objects in Expanded Tokens Mode}
\end{figure}%
The first compartment specifies a
temporary name of the object (Renew just gives numbers to objects),
followed by a colon (\texttt{:}), followed by this object's class name.
According to UML, the whole string is
underlined to indicate that this is an instance, not the class.
The second compartment is only shown if you click the shutter handle,
a small yellow rectangle with a cross (plus sign) inside.
Otherwise, the available information is indicated by three dots
(\texttt{...}) after the class name.
The second compartment contains a list of all attributes of the token
object and their values, which are basic types or again objects.
Multi-valued attributes (e.g.\ array values or \texttt{Enumeration}s)
are shown as lists in sharp brackets (this part is not quite UML).
After opening the attributes compartment, the handle changes to a
horizontal line (minus sign) and lets you close the compartment again
if you wish to do so.
This way, you can browse the object graph starting at the token
object.
If the value of an attribute happens to be an object that already
appeared in the open part of the object graph, only the temporary name
(number) of that object is displayed as the attribute's value.
To help you find the original object, you can click on this object
number, and all appearances of this object are highlighted by a red
frame. To get rid of the highlighting, just click on any of the
numbers again.
Figure~\ref{fig:TokenObjectExample} shows an example of a
\texttt{java.awt.MenuBar} object that is being browsed as an Expanded
Token. In the example, the menu bar contains one menu \texttt{File}
with two menu items, of which the first one is \texttt{Load\dots{}}.
The \texttt{parent} of the first menu item is again the menu, as you
can see by the highlighting. The second menu item is closed.
Renew tries to find attributes of the token object by using Java's
reflection mechanism on fields and \texttt{get}-methods.
Any method without parameters and with a return type which is not
\texttt{void} is regarded as a \texttt{get}-method.
In some cases, such methods return volatile (changing) results, but
are only queried once when the token figure is expanded.
This means you should not expect to see changes of a token object
while browsing it!
Renew stores for each place the preferred display mode chosen by
the \texttt{Marking} menu. This means that every new simulation starts
with the display mode chosen for each place, and the display mode is
also saved to disk.
The menu can also be used to change the display mode during run-time.
To do this, either the token figure or the place instance has to be
selected.
\subsubsection{Breakpoints}
Using this attribute, you can request breakpoints for certain
places and transitions. These breakpoints will be established
immediately after the start of the simulation and have exactly
the same effect as a global breakpoint that is set during the
simulation.
In the net drawing, transitions and places with a set
breakpoint attribute are marked by a small red circle in their upper
right corner.
However, the tag is not shown in instance drawings.
Attributed breakpoints, like breakpoints set during the
simulation, will show up in the breakpoint menu while the
simulation is running.
Please see subsection~\ref{subsec:breakpoint}
for a detailed description of the possible breakpoints. Note that you
can set at most one breakpoint for each net element
using this menu command.
Attributed breakpoints are established only when the net drawing is loaded
in the editor at the moment when the compiled net is passed to the
simulation engine.
For the initial drawings (that were used to start the simulation) this is
usually the case.
But if nets are loaded later by the net loader from \texttt{.sns} files
(see Subsection~\ref{subsec:netLoading}), no breakpoints are set.
This behavior is due to the fact that
the responsibility for the creation of breakpoints lies in the
graphical user interface and not in the simulation engine. Since
the breakpoint attribute is dropped when exporting shadow net
systems (see Subsection~\ref{subsec:expshadow}), the
simulator is not able to establish these breakpoints.
\subsubsection{Set Selection as Icon}
\label{subsubsec:icon}
This feature allows you to assign icons to your nets.
These icons will be displayed during simulation, whenever
a place marking is displayed in \texttt{token} mode
(see subsection~\ref{subsubsec:marking}) and
references an instance of a net
with an icon.
Select exactly one figure, which can be of any type, then select the
menu \texttt{Set Selection as Icon}.
If more than one figure was selected, nothing happens,
but in the case of a single figure, it is assigned as the
net's icon.
When the figure is removed, the net does not have a special icon, so
that references to this net are again displayed as text.
When the figure is or includes a text figure, the string
\texttt{\$ID}, contained anywhere within the text, has a special
meaning: During simulation, \texttt{\$ID} will be replaced by the
index number of the referenced net instance.
You can use net icons as in the following example which can be found
in the samples folder \texttt{icon}.
Remember the Santa Claus example from Section~\ref{sec:channels}?
Imagine you want to visualize the bag and its contents as icons.
Figures~\ref{fig:iconsanta} and \ref{fig:iconbag} show modified versions of the
nets from the Santa Claus example.
\doublenet{iconsanta}{iconbag}
Add an icon to the \texttt{bag} net by drawing an ellipse,
coloring it gray, and drawing a polygon which looks like the closure
of the bag.
Add a text with the string \texttt{BAG \$ID} to the drawing.
\iffalse
Be careful not to connect the text to any other figure (see bug
below).
\fi
Group together all new figures (\texttt{Edit | Group}).
This is necessary, since the icon of a net has to be a single figure.
Now you can select the group and then the menu \texttt{Set Selection
as Icon}.
Note that when you have to \texttt{Ungroup} the icon (e.g.\ to move one
of the included figures individually), this corresponds to removing
the group figure. So, after re-grouping the icon, you have to invoke
the menu again, or the group figure will not be set as the net's icon.
The next step to make an iconized version of the Santa Claus example
is to create a new net, add an image figure with your favorite sweet
(in my case, this is a muffin) and a text figure saying \texttt{\$ID}.
Then again group together the image and the text, select this new
group, and select the menu \texttt{Set Selection as Icon}.
Save this net as \texttt{muffin}.
Now, you can select the net \texttt{iconsanta} and start a new
simulation. After performing two steps, the running nets may look like
those in Figure~\ref{fig:santabagmuffin}.
Note that the reference to the net \texttt{bag} is now displayed as the
bag icon with \texttt{\$ID} replaced by the net instance index 1.
Without the icon, the token would have been described as
\texttt{bag[1]}. Also note that the muffins all have different index
numbers, so that you can see to which net they refer.
\begin{figure}[htbp]
\centerline{%
\includegraphics[scale=\screenshotscale]{iconsanta-screenshot2-5}%
\vspace{1pt}
\includegraphics[scale=\screenshotscale]{iconbag-screenshot2-5}%
}
\caption{\label{fig:santabagmuffin}The Santa Claus Example with
Icons During Simulation.}
\end{figure}%
\tip{The background of expanded tokens in instance/simulation drawings is not transparent by default to improve readability. It can be changed to be transparent by setting the property \texttt{de.renew.gui.noTokenBackground}.}
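
A sketch of the corresponding configuration entry follows; the property name
is taken from the tip above, while the boolean value is an assumption:
\begin{verbatim}
# Assumed boolean property controlling the token background
de.renew.gui.noTokenBackground = true
\end{verbatim}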
\iffalse
% Bug fixed. Comment probably not needed any longer.
\bug{
There is a bug when putting connected text into a
group. Whenever such a group is copied, Renew throws an
exception. For example in case of a text element in a net icon, you
have to be careful not to connect this text to the figure,
because afterward you want to group together the text with the
figure and Renew will try to copy this group figure during
simulation.
Unfortunately, connecting text happens quite easily when you move
the unconnected text, as a figure below it will become its parent.
Instead, you should move the figure below, not the text itself, and
then move the group as a whole later on.
Another possibility is to select the unconnected text and another
figure, since when multiple figures are selected, Renew will not
assign new parents.
}
\fi
\subsubsection{Associate Highlight}
It is not only possible to select the kind of feedback given for
the marking of a place (see Subsection~\ref{subsec:netinstwind}), but
also to specify arbitrary graphical elements to be highlighted
whenever a place is marked or a transition is firing.
Each net element can have at most one highlight figure, but this
figure can be any Renew drawing figure like any rectangle, line, text,
etc., even a group figure.
You can for example draw a StateChart with Renew's drawing
facilities, construct a net which simulates the StateChart's
behavior, and associate figures such that during simulation,
the StateChart is highlighted accordingly.
The first function one needs for dealing with such highlights is to
associate a highlight to a net element such as a place or a
transition.
When the menu \texttt{Associate Highlight} is invoked, exactly two
figures have to be selected, of which one has to be a place or a
transition.%
\footnote{It is even possible to associate another net element as a
highlight, but this is not recommended, as it can lead to confusion.
}
The status line tells you whether associating the highlight to the net
element was successful; otherwise, it displays an error message.
Now, during simulation, the associated figure will be
highlighted exactly when the net element is highlighted.
If the associated figure is invisible, it will be made visible whenever
it is highlighted. If the figure is already visible, its color
will change as a result of the highlighting.
\subsubsection{Select Highlight(s)}
To find the associated highlight figure (see above) to a net element,
select the net element and then this menu.
If the net element does not have any highlight figure, a corresponding
message appears in the status line.
You can also select multiple net elements; the associated
highlight figures of all net elements in the group will be selected.
\subsubsection{Unassociate Highlight}
Sometimes you also want to get rid of a highlight-association (see
above). Then, select one single net element (place or transition) with
an associated highlight figure and then invoke this menu.
When you associate a net element to a highlight figure, any old
association is automatically canceled.
\subsubsection{Syntax Check}
This menu entry checks the net for syntax errors without
starting a simulation run. Of course, most syntax errors are
immediately reported after the editing of an inscription,
but not all errors are found this way. E.g., multiple uplink
inscriptions cannot be detected immediately. You can also
invoke a syntax check when you have corrected one error,
in order to make sure that no other error remains. It is
always a good idea to keep the nets syntactically
correct at all times.
\iffalse
% The lint checks have been removed to avoid inconsistencies with
% the new formalism model.
\subsubsection{Channel Check}
This menu entry checks for patterns in the synchronous channel
inscriptions that could throw the simulator into an infinite loop.
In these cases, one or more transitions have a downlink and an uplink,
so that a transition can invoke itself through synchronous channels.
There are occasions when this is sensible, but it should only be
used in experimental models.
For productions models you should
make sure that the simulator behaves correctly by running this
check once.
If the check fails, it reports the cyclic dependency of channels
that was found. Removing this problem can cause a substantial
amount of work, because a loop needs to be made explicit
in the net structure.
The check can also fail if a downlink does not have a matching
uplink. That means the offending transition can never fire.
This is usually a misspelled channel name and not a deep problem.
It is not reported as an error when an uplink does not have a matching
downlink, because this simply indicates that some functionality
of a net is not yet needed and it might well be required in the
future. The unmatched uplink might also be needed for a
Java stub as described in Section~\ref{sec:netcall}.
\subsubsection{Name Check}
This menu entry checks for net elements with the same
name within one net. This error is not reported
in the normal syntax check, because there might be occasions
when it is sensible to have a place and a transition that share
the same name, or even two transitions with the same name.
But since a repeated name can indicate a subtle modeling
problem, this check is provided.
\subsubsection{Isolated Node Check}
This menu entry checks for net elements which are not
connected to others. This error is not reported in the normal syntax
check, because there are situations when isolated net elements might be
reasonable. Normally, an isolated node indicates a modeling
problem.
\fi
\subsubsection{Layout Check}
This menu entry checks in all loaded drawings whether
text fields overlap by more than 50\%. Such an overlap impairs a clear
representation. The check also detects the situation in which a second
inscription is accidentally assigned to an arc and hidden because of the
overlap.
\subsection{Simulation}
\label{sec:simulation}
This menu controls the execution or simulation of the net system
you created (or loaded).
Before a simulation can be started, all necessary nets must be loaded into
memory (see subsection~\ref{subsec:menuDrawings}).
The drawing window containing the net that is to be
instantiated initially has to be activated.
Refer to Section~\ref{sec:simcontr}, if you want to learn how to
monitor and influence a simulation run
that you have started using this menu.
\subsubsection{Run Simulation}
This function starts or continues a simulation run that continues
automatically until you stop the simulation.
If you want to enforce starting a new simulation run, use
\texttt{Terminate Simulation} (see below) first.
For most net models, it is almost impossible to follow what's going
on in this simulation mode. Its main application is to execute a
net system of which you know that it works.
Some syntax checking is done even while you edit the net (see
Section~\ref{subsubsec:toolInscription}: The Inscription Tool),
but when you try to run a simulation of your reference nets,
the reference net compiler is invoked and
may report further errors (see Section~\ref{sec:errors}).
You have to correct all compiler errors before you can start a simulation
run.
The keyboard shortcut for this function is \texttt{Ctrl+R}.
\subsubsection{Simulation Step}
This menu performs the next simulation step in the active simulation
run or starts a new simulation run if there is no active simulation.
If a simulation is already running in continuous mode, one more step
is executed and then the simulation is paused to be continued in
single-step mode.
Thus, it is possible to switch between continuous and single-step
simulation modes.
The keyboard shortcut for this function is \texttt{Ctrl+I}.
\subsubsection{Simulation Net Step}
This menu entry performs a series of simulation steps in the active
simulation run or starts a new simulation run if there is no active
simulation.
The simulation is paused as soon as an event occurs in the net instance
shown in the current instance window.%
The keyboard shortcut for this function is \texttt{Ctrl+Shift+I}.
\subsubsection{Halt Simulation}
This menu halts the current simulation run, which has been
started with \texttt{Run Simulation}, or terminates the
search for a possible binding in single step mode.
No further simulation steps are made, but you are free
to resume the simulation with \texttt{Run Simulation} or
\texttt{Simulation Step}.
\bug{There are situations where a net invokes a Java method that
does not terminate. In these cases Renew cannot succeed
in halting the simulation.}
The keyboard shortcut for this function is \texttt{Ctrl+H}.
On Mac OS X systems, \texttt{Cmd+H} is bound system-wide to
hide the application window.
Therefore, the shortcut key has been changed to \texttt{Shift+Cmd+H}.
\subsubsection{Terminate Simulation}
This menu entry stops the current simulation run (if there is any).
In general, the simulator cannot know whether the simulated
net is dead (it could always be re-activated from
outside, see Section~\ref{sec:netcall}), so a simulation only
ends when you invoke this command. When you issue
another simulation command after this command, a new simulation
is automatically started.
All simulation-related
windows (net instances, current markings, and
possible transition bindings) are automatically closed
when the simulation is terminated, since they cannot be used
afterwards anyway.
The keyboard shortcut for this function is \texttt{Ctrl+T}.
\subsubsection{Configure Simulation\dots{}}
\label{subsec:configureSimulation}
This dialog allows you to change some simulation-related
configuration options.
These options can also be controlled from the command line or
the configuration file \texttt{.renew.properties} (see
section~\ref{subsec:configMethods}).
All options presented in this dialog are evaluated each time a
new simulation is started.
However, the settings in this dialog are not stored
permanently.
The dialog comprises several tabs, each tab groups some
configuration options.
The buttons at the bottom of the dialog affect all tabs.
\begin{description}
\item[\texttt{Apply}] passes the current settings to the plug-in
system, so that the simulator plug-in can interpret them at the
next simulation startup.
\item[\texttt{Update}] refreshes the dialog to display the
current settings known to the plug-in system.
Unless you modify some properties concurrently, you can think
of this button as a ``revert'' button, that restores the most
recently applied configuration.
\item[\texttt{Update from simulation}] refreshes the dialog to
display the configuration of the running simulation, if there
is any.
These settings may differ from the current simulator plug-in
configuration, so you might want to press \texttt{Apply} or
\texttt{OK} afterward to bring the plug-in configuration back
in sync with the settings of the running simulation.
\item[\texttt{OK}] applies the current configuration (like
\texttt{Apply} would do) and closes the dialog.
\item[\texttt{Close}] closes the dialog and discards any setting
changes (unless they have been applied before).
\end{description}
The tabs provide the following options:
\paragraph{Engine}
The two options \texttt{Sequential mode} and
\texttt{Multiplicity} configure the concurrency of the simulation
engine.
The sequential mode is of interest when you work with a timed
formalism (see section~\ref{sec:timedNets}) or special arc types
(see section~\ref{subsec:sequentialTools}).
Multiple simulators may enhance the performance on multiprocessor
systems.
A sequential mode with multiplicity greater than one is not
sequential because it uses multiple concurrent sequential
simulators.
The settings are equivalent to the
\texttt{de.renew.simulatorMode} property mentioned in
sections~\ref{subsec:concsim} and \ref{subsec:seqmode}.
Just think of the \texttt{Sequential Mode} check box as the sign
of the \texttt{simulatorMode} value (if you enter a minus sign in
the \texttt{Multiplicity} field, it is ignored).
The \texttt{Class reinit mode} setting corresponds to the
\texttt{de.renew.classReinit} property explained in
section~\ref{subsec:classReinit}.
It allows you to reload custom classes during development.
The \texttt{Simulation priority} setting determines the priority of each
thread the simulation spawns.
Higher values allow for faster simulations but might reduce the responsiveness of the GUI.
The default value of 5 is considered a good tradeoff between simulation speed and GUI response time.
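For orientation, the same engine configuration can also be given through
properties (see section~\ref{subsec:configMethods}).
The following sketch shows a possible \texttt{.renew.properties} excerpt
for a single sequential simulator with class reloading enabled; the exact
value syntax, in particular for \texttt{de.renew.classReinit}, is assumed
here and may differ in your installation:
\begin{lstlisting}[style=xnonfloating]
# single sequential simulator (negative sign = sequential, magnitude = multiplicity)
de.renew.simulatorMode = -1
# reload custom classes at compilation (boolean value assumed)
de.renew.classReinit = true
\end{lstlisting}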
\paragraph{Remote Access}
The options provided by this tab find their equivalents in the
remote properties which are explained in
section~\ref{subsec:remoteSetup}.
When you check \texttt{Enable remote access}, the simulation will
be published over Java RMI to allow remote inspection and
simulation control (this feature needs a running RMI registry to
work).
To distinguish multiple simulations on the same registry, you can
assign a \texttt{Public name} to the simulation.
Plug-In developers might be interested in the possibility to
replace the remote \texttt{Server class} by a custom
implementation.
A custom RMI \texttt{Socket factory} can only be supplied at startup,
therefore this property cannot be changed here.
To observe the simulation from a remote editor, use the
\texttt{Remote server} command explained in
section~\ref{subsec:connectToServer}.
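The corresponding property settings (see
section~\ref{subsec:remoteSetup}) might look like the following sketch;
the public name \texttt{MySimulation} is chosen for illustration only:
\begin{lstlisting}[style=xnonfloating]
de.renew.remote.enable = true
de.renew.remote.publicName = MySimulation
\end{lstlisting}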
\paragraph{Net path}
This tab allows the manipulation of the \texttt{de.renew.netPath}
property used by the net loader (see
section~\ref{subsec:netLoading}).
On the left, you have a list of path entries, one directory per
line.
The net loader searches the directories in order from top to
bottom.
In the list, you can select one or more entries to manipulate.
On the right, there are five buttons, most of which affect the
selected set of entries.
\begin{description}
\item[\texttt{Add\dots}] opens a dialog where you can enter a new
path entry.
The directory should be entered in OS-specific syntax.
If you want to specify a directory relative to the classpath,
check the appropriate box and make sure that the path does
\emph{not} start with a slash, backslash, drive letter, or whatever
else makes a path absolute on your operating system.
\item[\texttt{Edit\dots}] opens a dialog similar to the
\texttt{Add\dots} dialog for each selected path entry.
\item[\texttt{Move up}] moves all selected entries one line above
the first selected entry (or to the top of the list, if the
topmost entry was included in the selection).
\item[\texttt{Move down}] moves all selected entries one line below
the last selected entry (or to the end of the list, if the
bottom-most entry was included in the selection).
\item[\texttt{Delete}] removes all selected entries from the list.
\end{description}
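The entries in this list correspond to the \texttt{de.renew.netPath}
property. As a rough sketch, a two-entry path might be written as
follows; the separator between entries is assumed to follow the
platform's path-separator convention (Unix syntax shown, directory names
chosen for illustration):
\begin{lstlisting}[style=xnonfloating]
de.renew.netPath = nets:samples/nets
\end{lstlisting}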
\paragraph{Logging}
This tab configures the simulation log traces (see Menu entry ``Show
simulation trace\dots'' below). %~\ref{subsec:loggingPlugin}).
In contrast to other tabs, changes to the settings on this tab take effect
immediately.
It is possible to create additional loggers that focus on net-, transition-
or place-specific parts of the simulation trace.
A click with the right mouse button on the top-level entry of the logger
tree opens a context menu where additional loggers can be added.
The logger name serves as filter criterion.
Each logger can be configured to send its data to one or more appenders.
Depending on the kind of appender, the filtered simulation trace can go to
the console, a file, or a trace window (\texttt{GuiAppender}).
Each appender can be configured with various options.
For example the buffer size (the number of viewable simulation steps) of
the \texttt{GuiAppender} can be adjusted to your needs.
%% TODO: Detailed description of configuration options.
The text field \texttt{Layout} is used to customize the logger output
using the log4j \texttt{PatternLayout} syntax.
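Since the trace is handled by Log4j, a comparable configuration can also
be written in ordinary log4j~1.x properties syntax. The following sketch
uses a hypothetical logger name and file name, which are not prescribed
by Renew and only illustrate the general shape of such a configuration:
\begin{lstlisting}[style=xnonfloating]
# hypothetical logger and appender names, for illustration only
log4j.logger.simulation.trace = INFO, traceFile
log4j.appender.traceFile = org.apache.log4j.FileAppender
log4j.appender.traceFile.File = simulation-trace.log
log4j.appender.traceFile.layout = org.apache.log4j.PatternLayout
log4j.appender.traceFile.layout.ConversionPattern = %d %-5p %c - %m%n
\end{lstlisting}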
\subsubsection{Remote server\dots{}}
\label{subsec:connectToServer}
Using this menu entry, you can list all net instances of a
Renew simulation server. To be able to do this, a simulator must be running
with remote access enabled as described in
section~\ref{subsec:remoteSetup}.
The dialog comprises two parts:
The upper buttons switch between remote simulations, the lower
part shows a list of net instances.
Initially, the list shows net instances of the local simulation
(if there is a running simulation).
The \texttt{Connect\dots} button displays another dialog which
allows you to connect to a remote simulation server.
You must specify the host on which the \texttt{Server} is running.
The server \texttt{Name} can be left at the default value unless
you specified the \texttt{de.renew.remote.publicName} property on
the server side.
If the connection has been established, the drop-down box at the
top of the \texttt{Remote Renew servers} dialog includes the
remote simulation and the list of net instances is updated.
You can switch between servers by selecting them in the drop-down
box.
The connection stays alive until you press the
\texttt{Disconnect} button, or either Renew application (local or
remote) terminates.
In the net instance list, you can select a net instance and open
it by double-click or by pressing the \texttt{Open} button.
The title of the net instance window shows that it is the
instance of another server.
You can use nearly all the interaction features of local net instance
drawings. All your modifications are executed on the server. As with
local simulation windows, events from the remote simulation
ensure that the drawings are kept up to date at all times.
\bug{The editor is not able to display two net instances with the
same name and id.
It will bring the existing net instance window to front when
you select a net instance with the same name and id from a
different simulation.
To see the other net instance, close the existing net instance
window.}
\subsubsection{Breakpoints}
\label{subsec:breakpoint}
You can set breakpoints to stop the simulation at a predefined
point of time, which is especially helpful for debugging purposes,
where the simulation might have to run for extended periods
of time, before an interesting situation arises.
The breakpoint menu consists of two sections. The first allows
you to set and clear breakpoints and the second allows you to
view all breakpoints currently set in the simulation.
A breakpoint will stop the search for enabled bindings when running
a simulation. However, the execution of those transitions
that are already firing continues. This is especially important if a
breakpoint is attached to a transition: The transition might still
run to completion while the breakpoint is reported.
That means that you will often want
to attach a breakpoint to an input place of a transition, if you
want to inspect the state of the net before a certain
transition fires. You cannot currently detect a change of enabledness
directly.
\paragraph{Set Breakpoint at Selection.}
Before setting a breakpoint you must select a place or transition
or a group thereof within a net instance window.
You can set a breakpoint either locally or globally.
A local breakpoint will affect exactly the chosen net instance
and will not cause a simulation stop if other net instances
change. A global breakpoint automatically applies to
all net instances, even those that will be created after the breakpoint
is established.
There are a number of different breakpoint types:
\begin{itemize}
\item Default. This is a convenience type that is equivalent to
a breakpoint on start of firing for transitions and on change
of marking for places. You can use it if you want to set
a breakpoint to a place and a transition simultaneously.
\item Firing starts. This breakpoint is triggered
whenever the transition starts firing. The breakpoint happens just
after all input tokens have been removed from their places and
the transition is about to execute its actions.
\item Firing completes. Unlike the previous item, the breakpoint
occurs at the end of a transition's firing. This is especially
useful in the case of net stubs, where you want to inspect the
result of a stub call.
\item Marking changes. Any change of the state of a place is detected here,
even if the change is simply due to a test arc.
\item Marking changes, ignoring test arcs.
Here it is required that tokens are actually moved and not merely
tested.
\item $+1$ token. Only a token deposit triggers this breakpoint.
\item $-1$ token. A token removal must occur before this breakpoint
is activated.
\item Test status changes. Normal arcs do not
trigger this breakpoint, but test arcs do.
\end{itemize}
Multiple breakpoint types may be set for a single net element
using this menu.
\paragraph{Clear Breakpoint at Selection.}
A breakpoint is not automatically cleared after it was invoked.
Instead, you must clear breakpoints explicitly.
Having selected the net element that contains a
breakpoint, you can either clear all local breakpoints or
all global breakpoints.
\paragraph{Clear All Breakpoints in Current Simulation.}
This command will get rid of all breakpoints that were ever set.
This is useful if you have reached a certain desired situation
and want to continue the simulation normally. Alternatively,
you might want to clear all breakpoints that were configured using
the attribute menu, if you require a completely automatic run
once in a while, but do not want to lose the information about the standard
breakpoints.
\paragraph{Breakpoint List.}
The second part of the menu allows you to view all breakpoints,
locate the associated net elements, and possibly reset individual
breakpoints.
\subsubsection{Save simulation state\dots{}}
This menu entry saves the current simulation state to a file,
so it can be restored later on by the menu command
\texttt{Load simulation state}.
The saved state also includes all net instances currently
opened in drawings and all compiled nets.
The default extension for Renew simulator state files is
\texttt{.rst}.
Points to be aware of:
\begin{itemize}
\item Saved simulation states will most likely not be compatible
between different versions of Renew.
\item All custom classes used in the current marking of the net
      must implement the interface \texttt{java.io.Serializable}
      in a sensible way to obtain a complete state file (see the
      sketch after this list).
\end{itemize}
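As a minimal sketch, a custom token class that satisfies this requirement
could look like the following; the class name and field are chosen for
illustration and are not part of Renew:
\begin{lstlisting}[style=xnonfloating]
// Illustrative token class; all fields must themselves be serializable.
public class OrderToken implements java.io.Serializable {
    private static final long serialVersionUID = 1L;
    private final String orderId;
    public OrderToken(String orderId) { this.orderId = orderId; }
    public String getOrderId() { return orderId; }
}
\end{lstlisting}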
There are also some minor side effects:
\begin{itemize}
\item This command halts the simulator, because no changes must occur
      to the current simulation state while it is being saved, in order
      to obtain a consistent state file. You can continue
the simulation afterward.
\item The binding selection window will be closed, if it is
open.
\end{itemize}
\subsubsection{Load simulation state\dots{}}
This menu entry loads a simulation state from a file
saved by the menu command \texttt{Save simulation state}
before.
You will then be able to continue the simulation as usual
from the point at which the simulation state was saved.
If all drawings used in the state are loaded, you can use
all simulation control facilities as usual.
However, it is not necessary to have all used drawings open.
If some drawing is missing, the only drawback is that its
net instances will not be displayed in instance drawings.
As a consequence, you will not be able to use the extended
control features described in Section~\ref{sec:simcontr}
for these nets, but the menu commands \texttt{Simulation step}
and \texttt{Run simulation} will still work and
trace events will still be logged.
This holds even if no drawing used by the saved simulation
state is loaded at all.
The mapping from a compiled net contained in the saved state
to an open net drawing is done by the net's name.
This mapping occurs every time you try to open an
instance drawing for any instance of the net.
If you have added transitions or places to the net drawing, or
removed any, since the simulation state was
saved, messages informing you about the problem and
its consequences are printed to the application log.
An instance drawing will still be opened, but it will not
necessarily display the same structure that the compiled net
uses.
Further points to be aware of:
\begin{itemize}
\item If you load a simulation state, any running simulation
will be terminated and all related windows are closed.
\item If the class reinit mode is selected (see Subsection
\ref{subsec:classReinit}), custom classes will be
reloaded while restoring the simulation state.
\item All custom classes used in the saved simulation state
must be available when restoring the state.
\end{itemize}
\subsubsection{Show simulation trace\dots}
\label{subsec:loggingPlugin}
This menu command opens a window that shows the trace of the
current simulation.
In previous Renew releases, the trace was always printed to the
console; now you can closely inspect the trace inside the editor.
The keyboard shortcut for this command is \texttt{Ctrl+L}.
By default, the drop-down list on top of the window provides one simulation
trace that covers the last 20 simulation steps.
You can configure additional traces of different length that focus on
specific net instances, places or transitions using the \texttt{Logging}
tab of the \texttt{Configure simulation} dialog (see above).
A double left mouse button click on a simulation trace entry opens a window
that displays the whole message, using multiple lines if appropriate.
A right mouse button click opens a context menu that allows you to display
the net template or instance that was involved in the simulation step.
It is also possible to select the individual place or
transition in the net template or instance.
\bug{The mouse actions to inspect a trace entry are not available before
you have selected any line of the simulation step it belongs to.}
\subsubsection{Formalisms}
\label{subsec:formalismgui}
This submenu configures the current formalism
used during compilation and simulation.
Please note that a running simulation will always stay with the
formalism it has been started with.
To apply the chosen formalism to the simulation, you have to
terminate it and start a new one.
The entries of this menu depend on the set of plug-ins currently installed.
The basic Renew distribution includes four formalisms, represented by their
compilers:
\begin{description}
\item[P/T Net Compiler] compiles the net as a simple place-transition net.
It accepts integer numbers as initial markings and arc weights.
Capacities are not supported.
\item[Java Net Compiler] encapsulates the reference net formalism with Java
inscriptions as described in chapter~\ref{ch:reference}.
However, this compiler does not accept time annotations.
\item[Timed Java Compiler] represents the same formalism as the
\texttt{Java Net Compiler}, but with additional time annotations as
explained in section~\ref{sec:timedNets}.
Nets compiled by this compiler must be executed in a sequential
simulation.
\item[Bool Net Compiler] compiles nets according to the formalism presented
in \cite{Langner1998}.
A bool net is a restricted colored net with exactly one color
$bool:=\{0, 1\}$ (can also be represented as $\{\mathtt{false},
\mathtt{true}\}$).
It accepts one of the propositional logic operators \texttt{and},
\texttt{or} and \texttt{xor} as transition guard inscriptions.
\end{description}
\subsubsection{Show sequential-only arcs}
\label{subsec:sequentialTools}
This option is available only when the
\texttt{Java Net Compiler} is chosen as current formalism.
Selecting this option adds another toolbar to the editor.
This toolbar comprises two additional arc types (see
section~\ref{subsubsec:toolArc}) which are allowed in
sequential simulations only.
Please note that this option is automatically enabled (although
the menu entry is not visible) when you choose the
\texttt{Timed Java Compiler} as formalism.
For your convenience, the sequential simulation mode (see
sections~\ref{subsec:configureSimulation} and
\ref{subsec:seqmode}) is activated each time you check the box or
choose the \texttt{Timed Java Compiler}.
However, the engine is not switched back to concurrent mode when
you uncheck the box or change to another formalism.
\subsection{Windows}
\label{subsec:menuDrawings}
This menu contains a list of all drawings loaded into memory.
The drawings are classified into \texttt{Nets},
\texttt{Net instances} and \texttt{Token Bags} and appear in
alphabetically sorted submenus.
A drawing can be loaded by supplying its file name to Renew as a
command-line argument or by invoking the \texttt{Open Drawing\dots{}}
menu, or it can be created through the \texttt{New Drawing} menu.
A newly created drawing can be named and any drawing can be
renamed by saving it using the \texttt{Save Drawing as\dots{}} menu.
By selecting a drawing in the \texttt{Windows} menu, its window
is raised and becomes the active drawing window.
In the menu, the name of the active drawing appears checked.
Non-modal tool and attribute dialogues are included in the windows menu in
their own categories.
These windows are raised when the corresponding menu entry is selected,
but there is no effect with respect to the list of active drawings.
\subsection{Additional Top-Level Menus}
\label{sec:additional-top-level}
The menu manager allows for the registration of a menu item by the
plugins under any top-level menu.
%
Additionally, plugins may use a new top-level name.
%
Typical candidates are \emph{Plugins}, \emph{Tools} and
\emph{Application}.
The optional plugin \emph{GuiPrompt} offers
its command under the \emph{Plugins} menu. The \emph{NetComponents}
plugin and the optional plugins%
\footnote{None of the mentioned optional plugins are part of the Renew
release. They are provided separately.} \emph{Diagram},
\emph{NetDiff} and \emph{Lola} reside under the
\emph{Tools} menu. Since version 2.3 it is also possible to
determine the position of the menu item within the menu. The
\emph{Navigator} plugin extends the \emph{File} menu.
\section{Net Simulations}
\label{sec:simcontr}
During simulation, there may be textual and graphical feedback.
The Log4j framework receives simulation events and can log them
alternatively to the console, a file, the trace window, etc.
In Subsection~\ref{subsec:loggingPlugin}, the graphical configuration dialog
for Log4j is explained.
In a trace of log events, you can see exactly which transitions fired and
which tokens were consumed and produced.
Alternatively, you can view the state of the various net instances graphically
and you can influence the simulation run.
The following sections describe the means to monitor and control the simulation.
\subsection{Net Instance Windows}\label{subsec:netinstwind}
The graphical feed-back consists of special windows, which contain
instances of your reference nets.
When a simulation run is started,
the first instance of the main reference net that is generated is
displayed in such a net instance window.
As in the simulation log, the name of a net instance
(and thus of its window) is composed of the net's name together
with a numbering in square brackets, e.g.\ \texttt{myNet[1]}.
Net instance windows can also be recognized by their special
background color (something bluish/purple), so they cannot
be confused with the windows where the nets are edited.
In a net instance window, you cannot edit the net, you cannot
even select net elements. The net is in a ``background layer'',
and only simulation relevant objects are selectable, like
current markings of places and transition instances.
Places in net instance windows are annotated with the number of
tokens they contain (if any).
If you double-click on a marking the containing place will be selected.
If you right-click on such a marking, the marking will switch between
the number of tokens and the tokens in a string representation.
If you right-click on the containing place, another window appears,
containing detailed information about the tokens.
You can display the contents of the current marking directly
inside the net instance window.
This is extremely useful when a place
contains only a few tokens (or even just one). It also helps
to limit the number of windows, which could otherwise become very
large when using Renew.
To switch between the simple (cardinality of the
multiset) and the token display of a place marking, just
right-click it.
The expanded display behaves exactly like the contents of a
current marking window, which is described in the following
section.
Tokens in markings are always displayed with a white, opaque
background.
This increases the readability of markings.
\subsection{Current Marking Windows}
A current marking window shows the name of the corresponding place
(net instance name dot place name, e.g.\ \texttt{myNet[1].myPlace})
in its window title bar.
If the token list does not fit into the current marking window,
the scroll bars can be used.
For each different token value in the multiset, a current marking
window shows the multiplicity (if different from one) and
the value itself.
%The token value is usually displayed as some String, using the Java method
%\texttt{toString()}, which is supported by every Object.
The Expanded Tokens mode described in
Subsection~\ref{subsubsec:marking} is now the default mode for
current marking windows (if the FS plug-in is installed).
There is a special function to gain access to other net instances.
If a token's value is or contains a net instance, a blue frame appears
around the name of the net instance.
If you click inside that frame, a new net instance window for
that net instance is opened or the corresponding net instance
window is activated, if it already existed.
This also works for net references contained within
a tuple, or even within a nested tuple.
Using the Expanded Tokens mode, this also works for net references
contained within a list or inside any other Java object.
\tip{You can open a net instance window,
double click all places you want to ``watch'' and close
the net instance window again. This helps to focus on the
state information you really want to see.
}
%Another feature since 1.1 is that when you double-click a token which
%is a reference to some \texttt{java.awt.Window} object (like a
%\texttt{java.awt.Frame}), this window is raised to the top.
\subsection{Simulation Control}
In a concurrent system, many transitions can be activated at
once.
Normally, the simulation engine decides which of these
transitions actually fires when the next simulation step
is executed.
For debugging and testing, it can be very convenient for
you to take care of this decision. Of course, this only
makes sense when the simulation is performed step by step
(see below).
Interactive simulation is possible.
You can force a specific enabled transition to fire in two ways:
\begin{itemize}
\item Right-click the transition. Here, the simulation engine
still decides nondeterministically about the variable bindings.
\item Double-click the transition. Then, the so-called
binding selection window is shown and switched to the
transition you double-clicked. The title of the window
says ``{\em transition-name\/}'s possible bindings'', where
{\em transition-name\/} is the full name (name of the net
instance-dot-transition-name) of the transition.
In the top part of the window a single binding is described.
Each transition instance that participates in this binding
is shown on a single line, listing
those variables that are already bound.
See Section~\ref{sec:channels} for an explanation why multiple
transition instances might participate in a single firing.
At the bottom of the window there is a list of all possible
bindings, where each binding is displayed in a single row.
When you press the \texttt{Fire} button, the binding of the
entry which is currently selected will be used in the firing.
This window should be automatically updated whenever the net's
marking changes. Use the \texttt{Update} button, if the
automatic update fails, and make sure to report this as a bug.
\texttt{Close} hides the transition binding window.
\end{itemize}
%
If the clicked transition is not activated, the status line
of the Renew window tells you so and nothing else is going to
happen.
There are situations where a transition cannot be fired
manually, although it is activated. This is the case for
all transitions with an uplink. Since a transition with an
uplink is waiting for a synchronization request from any
other transition with the corresponding downlink,
Renew cannot find such ``backward'' activations.
You have to fire the transition with the downlink instead.
You should experiment with the simulation mode using some of the
sample net systems first. Then, try to get your own reference nets
to run and enjoy the simulation!
\section{Simulation Server}
\label{sec:simulationServer}
Renew supports client/server simulations via RMI. You can set up
a simulation as a Java VM of its own. You are then able to connect
both locally and remotely, as long as the connection between the
computers allows RMI calls (e.g.\ no firewall blocks them).
As a consequence of the decomposition of Renew
into several plug-ins, any simulation can be published over RMI.
You just need to set the appropriate properties as explained in
section~\ref{subsec:remoteSetup} or use the \texttt{Configure
Simulation} dialog (see
section~\ref{subsec:configureSimulation}).
Therefore, this section does not focus on the configuration of a
remote simulation, it just describes how to set up a simulation
without using the editor's graphical user interface.
To do this, you have to export all required nets as a shadow
net system first (see~\ref{subsec:expshadow} for details). Whenever you make
changes to any net of this net system, you have to generate the
shadow net system again and start a new server with it.
Now you are ready to start the server itself, by issuing the
following command to the Renew plug-in system:
\begin{lstlisting}[style=xnonfloating]
startsimulation <net system> <primary net> [-i]
\end{lstlisting}
The parameters to this command have the following meaning:
\begin{description}
\item[\texttt{\mbox{net system}}:]
The \texttt{.sns} file, as generated in the step above.
\item[\texttt{\mbox{primary net}}:]
The name of the net of which a net instance shall be opened when the
simulation starts. Using the regular GUI, this corresponds to selecting
a net before starting the simulation.
\item[\texttt{\mbox{-i}}:] If you set this optional
flag, then the simulation is initialized only, that
is, the primary net instance is opened, but the
simulation is not started automatically.
\end{description}
As mentioned in section~\ref{sec:plugins}, the command can be passed
to the plug-in system by several means.
For example, to start a remotely accessible simulation with net
\texttt{systemnet} out of the net system \texttt{allnets.sns} direct
from the java command line, you will have to issue the following
command (in Unix syntax, the \verb:\: indicates that the printed lines
should be entered as one line):
\begin{lstlisting}[style=xnonfloating]
java -Dde.renew.remote.enable=true -jar renew§\renewversion§/loader.jar startsimulation allnets.sns systemnet
\end{lstlisting}
If you need a special simulation mode or any other Renew property to
be configured, you can add multiple \texttt{-D} options or use one of
the other configuration methods mentioned in
section~\ref{subsec:configMethods}.
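For instance, to start the same server with a single sequential simulator
(property values as described in sections~\ref{subsec:concsim} and
\ref{subsec:seqmode}), the command might be extended as follows (again to
be entered as one line):
\begin{lstlisting}[style=xnonfloating]
java -Dde.renew.remote.enable=true -Dde.renew.simulatorMode=-1 -jar renew§\renewversion§/loader.jar startsimulation allnets.sns systemnet
\end{lstlisting}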
A simulation started by the \texttt{startsimulation} command
differs slightly from a simulation started by the editor:
The net loader does not look for \texttt{.rnw} files, it loads
nets from \texttt{.sns} files only.
If you want to experiment with properties and commands, or if you need
to pause and run the simulation interactively, you should install the
Console plug-in (see section~\ref{subsec:consolePlugin}).
When a simulation is running, several commands can be entered at the
prompt to control the simulation.
These commands provide the same functionality as the menu entries
listed in section~\ref{sec:simulation}.
In fact, if you use the Console plug-in in combination with the
graphical editor, both command sets (menu and console) control the
same simulation.
The console commands are:
\begin{description}
\item[\texttt{\mbox{simulation run}}:]
Resumes a stopped simulation.
If the \texttt{-i} option was appended to the
\texttt{startsimulation} command, this command starts
the simulation.
\item[\texttt{\mbox{simulation step}}:]
Executes another simulation step.
If the \texttt{-i} option was appended to the
\texttt{startsimulation} command, this command executes
the first simulation step.
\item[\texttt{\mbox{simulation stop}}:]
Halts the simulation, but does not abandon it, in contrast to
the \texttt{term} command. The \texttt{run} command
continues it.
This is equivalent to the menu entry \texttt{Halt simulation}.
\item[\texttt{\mbox{simulation term}}:]
Ends and abandons the current simulation.
This may result in termination of the plug-in system
(see section~\ref{subsec:systemExit}).
\item[\texttt{\mbox{simulation help}}:]
Shows a short help for all available simulation commands.
\end{description}
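For example, assuming the Console plug-in is installed, a short
interactive session at the prompt might look like this:
\begin{lstlisting}[style=xnonfloating]
simulation step
simulation run
simulation stop
simulation term
\end{lstlisting}
This executes one step, continues the run automatically, halts it again,
and finally abandons the simulation.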
\section{Error Handling}
\label{sec:errors}
Renew helps you to maintain a syntactically correct model
by making an immediate syntax check whenever an inscription
has been changed. Additionally, a syntax check is done
before the first simulation step of a model.
The simulation will not start if there is any error in any net.
If an error is detected, an error window is opened,
which displays the error message. At the bottom of the window
is a button labeled \texttt{select}. Pressing this button
selects the offending net element or net elements and
raises the corresponding drawing. If the error originates
from a text figure, that figure is edited with the corresponding text edit
tool. The cursor is automatically positioned close to the
point where Renew detected the error. For more information on editing see
Section~\ref{subsubsec:toolConnectedText}: The Text Tool.
Renew displays exactly one error at a time. If a second error is
found, the old error message will be discarded and the
new error message will be displayed in the error window.
Some errors are not reported at the place where they
originate. E.g., if you are using a declaration figure,
an undefined variable is detected where
it is used, but the missing definition has to be
added to the declaration node. Similar effects might happen
due to missing import statements. This is unavoidable, because
Renew cannot tell an undeclared variable from a misspelled
variable.
\newtwodotfive{
For some errors Renew provides a Quick Fix feature, which is described in the following section. }
Other errors and possible solutions are described in the subsequent sections.
\subsection{Quick Fix}
The Quick Fix feature improves the reporting of syntax errors by providing suitable proposals for remedies and their automatic realization.
For errors of type \emph{No such constructor/field/method},
proposals for correct constructors, fields or methods are provided by
the syntax check. If a constructor with the wrong number or types of
arguments is entered, a list of existing constructor signatures is
provided. If a non-existing field name for a class or an object is
entered, a list of all known field names is provided. If a
non-existing method is entered, a list of known method signatures
where the method name is prefixed by the erroneous method name is
provided. If the method name \texttt{\_()} is entered, a list of all
known methods is provided.
For errors of type \emph{No such variable}, type proposals are provided.
By double-clicking on one of the proposals, or by selecting it and
pressing the \texttt{apply} button, you can apply the proposed fix for
the reported error. The Quick Fix changes the erroneous method/field name
or constructor into the selected one or declares the variable in the
declaration node. It can also automatically add import statements for
unambiguous types (this requires fully qualified class names).
\subsection{Parser Error Messages}
If the expression parser detects a syntax error,
it will report something like:
\begin{lstlisting}[style=xnonfloating]
Encountered "do" at line 1, column 3.
Was expecting one of:
"new" ...
<IDENTIFIER> ...
\end{lstlisting}
This gives at least a hint where the syntax error
originated and which context the parser expected.
In our case the inscription \texttt{a:do()}
was reported, because \texttt{do} is a keyword that
must not be used as a channel name.
\subsection{Early Error Messages}
These errors are determined during the immediate
syntax check following each text edit.
\subsubsection{Bad method call or no such method}
Typically you entered two pairs of parentheses
instead of one. Possibly a class name was mistaken
for a method call. Maybe a name was misspelled?
\subsubsection{Boolean expression expected}
An expression following the keyword \texttt{guard}
must be boolean. Maybe you wrote \texttt{guard x=y}, but
meant \texttt{guard x==y}?
\subsubsection{Cannot cast \dots}
An explicit cast was requested, but this cast
is forbidden by the Java typing rules.
Renew determined at compile time that this cast
can never succeed.
\subsubsection{Cannot convert \dots}
The Java type system does not support a conversion that would
be necessary at this point of the statement.
\subsubsection{Cannot make static call to instance method}
An instance method cannot be accessed statically
via the class name. A concrete reference must be provided.
Maybe the wrong method was called?
\subsubsection{Enumerable type expected}
The operator requested at the point of the error
can act only on enumerable types, but not on
floating point numbers.
\subsubsection{Expression of net instance type expected}
For a downlink expression, the expression before the colon must
denote a net instance. E.g.\ it is an error, if in \texttt{x:ch()} the
variable \texttt{x} is of type \texttt{String}.
Maybe you have to use a cast?
\subsubsection{Expression of type void not allowed here}
An expression of void type was encountered in the middle
of an expression where its result is supposed to be
processed further, e.g.\ by an operator or as an argument
to a method call. Maybe you called the wrong method?
\subsubsection{Integral type expected}
The operator requested at the point of the error
can act only on integral types, but not on
floating point numbers or booleans.
\subsubsection{Invalid left hand side of assignment}
In an \texttt{action} inscription, only variables, fields,
and array elements can occur on the left hand side of an equation.
Maybe this expression should not be an action?
\subsubsection{Multiple constructors match}
A constructor call was specified, but from the types of the
arguments it is not clear which constructor
is supposed to be called.
There are overloaded constructors, but none of
them seems to be better suited than the others.
Maybe you should use casts to indicate the intended constructor?
\subsubsection{Multiple methods match}
A method call was specified, but from the types of the
arguments it is not clear which method
is supposed to be called.
There are overloaded methods, but none of
them seems to be better suited than the others.
Maybe you should use casts to indicate the intended method?
\subsubsection{No such class}
The compiler could not find a class that matches a given class
name, but it is quite sure that a class name has to occur here.
Maybe you misspelled the class name? Maybe you forgot an import
statement in the declaration node?
\subsubsection{No such class or variable}
The meaning of a name could not be determined at all.
Maybe the name was misspelled?
Maybe a declaration or an import statement is missing?
\subsubsection{No such constructor}
A matching constructor could not be found. Maybe the
parameters are in the wrong order? Maybe the number of
parameters is not correct? Maybe the requested constructor
is not public?
\subsubsection{No such field}
A matching field could not be found.
Maybe the name was misspelled?
Maybe the requested field is not public?
\subsubsection{No such method}
A matching method could not be found.
Maybe the name was misspelled? Maybe the
parameters are in the wrong order? Maybe the number of
parameters is not correct? Maybe the requested method
is not public?
\subsubsection{No such variable}
A name that supposedly denotes a variable could not be
found in the declarations. Maybe the name was misspelled?
Maybe a declaration is missing?
\subsubsection{Not an array}
Only expressions of an array type can be postfixed with
an indexing argument in square brackets.
\subsubsection{Numeric type expected}
A boolean expression was used in a context where
only numeric expressions are allowed, possibly
after a unary numeric operator.
\subsubsection{Operator types do not match}
No appropriate version of the operator could
be found that matches both the left and the right hand
expression type, although both expressions would be valid
individually.
\subsubsection{Primitive type expected}
Most operators can act only on values of primitive type,
but the compiler detected an object type.
\subsubsection{Type mismatch in assignment}
An equality specification could not be implemented, because
the types of both sides are incompatible. One type must be a
subtype of the other, or the types must be identical.
\subsubsection{Variable must be assignable from
\texttt{de.renew.net.NetInstance}}
The variable to which a new net is assigned must
be of type \texttt{NetInstance}, i.e.\ of
exactly that type, of type \texttt{java.lang.Object}, or untyped.
E.g.\ it is an error, if in \texttt{x:new net} the
variable \texttt{x} is of type \texttt{java.lang.String}.
Maybe you have to use an intermediate variable of the proper
type and perform a cast later?
\subsubsection{Variable name expected}
The identifier to which a new net is assigned must denote
a variable. E.g.\ it is an error, if in \texttt{x:new net} the
identifier \texttt{x} is a class name.
\subsubsection{Cannot clear untyped place using typed variable}
A clear arc is inscribed with a variable that is typed.
The arc is supposed to clear an untyped place. Because it
cannot be safely assumed that all tokens in the place will have
the correct type, it might not be possible to clear the
place entirely. Consider declaring the variable that
is inscribed to the arc.
\subsubsection{Cannot losslessly convert \dots}
A typed place must hold only values of the given type.
Hence the type of an output arc expression must be
a subtype of the corresponding place type. The type
of an input arc expression is allowed to be
a subtype or a supertype, but it is not allowed
that the type is completely unrelated.
Maybe you were confused by the slight variations
of the typing rules compared to Java? Have a look at
Subsection~\ref{subsec:types}.
\subsubsection{Cannot use void expressions as arc inscriptions}
Void expressions do not compute a value. If you use
such an expression, typically a method call, as
an arc inscription, the simulator cannot determine
which kind of token to move.
\subsubsection{Class \dots\ imported twice}
In a declaration node there were two import statements
that introduced the same unqualified class name, e.g.,
\texttt{import java.lang.Double} and also
\texttt{import some.where.else.Double}.
Remove one import statement and use the fully qualified
class name for that class.
\subsubsection{Detected two nets with the same name}
The simulator must resolve textual references to nets
by net names, hence it is not allowed for two nets to
carry the same name. Maybe you have opened the same net twice?
Maybe you have created new nets, which have the name
\texttt{untitled} by default, and you have not saved
the nets yet?
\subsubsection{Flexible arcs must be inscribed}
A flexible arc is not equipped with an inscription.
Flexible arcs are supposed to move a variable amount
of tokens to or from a place, but this arc does not
depend on any variables and lacks the required variability.
Maybe you did not yet specify an inscription?
Maybe the inscription is attached to the wrong net element?
Maybe you want to use an ordinary arc instead?
\subsubsection{For non-array inscriptions the place must be untyped}
An inscription of a flexible arc is given as
a list or a vector or an enumeration, but the output place is typed.
The resulting restriction on the element types could not be verified.
Maybe it is possible to use an array inscription?
Maybe the place should not be typed?
\subsubsection{Incorrect type for flexible arc inscription}
An inscription of a flexible arc is expected to evaluate to
an array or a list or a vector.
It is only allowed to use enumerations on output arcs,
because the elements might have to be accessed multiple times
in the case of input arcs.
Use an inscription that is correctly typed.
Maybe the compiler determined the type \texttt{java.lang.Object},
but it is known that only arrays will result from the expression.
In that case, use an explicit cast to indicate this fact.
\subsubsection{Null not allowed for flexible arcs.}
An inscription of a flexible arc is expected to evaluate to
an array or a list. The compiler was able to determine that
the given expression will always evaluate to \texttt{null}.
Maybe the inscription is attached to the wrong net element?
Maybe the arc was not intended to be a flexible arc?
\subsubsection{Only one declaration node is allowed}
You have two or more declaration nodes in your net drawing.
In general, the simulator cannot determine in which
order multiple declaration nodes should be processed,
hence this is not allowed.
Maybe a declaration node was duplicated unintentionally?
Maybe you want to merge the nodes into one node?
\subsubsection{Output arc expression for typed place must be typed}
A typed place must only hold values of the given type.
An untyped output arc is not guaranteed to deliver an
appropriate value, so this might lead to potential problems.
Maybe you want to type your variables? Maybe you
want to remove the typing of the place?
\subsubsection{Place is typed more than once}
At most one type name can be inscribed to a place.
Multiple types are not allowed, even if they are identical.
Maybe a type was duplicated unintentionally?
\subsubsection{Time annotations are not allowed}
The compiler detected an annotation of the form
\texttt{...@...}, but the current compiler cannot handle
such inscriptions, which require a special net formalism.
You should switch to the \texttt{Timed Java Compiler} (see
Subsection~\ref{subsec:formalismgui}).
\subsubsection{Transition has more than one uplink}
At most one uplink can be inscribed to a transition.
Maybe an uplink was duplicated unintentionally?
Maybe one uplink has to be a downlink?
\subsubsection{Unknown net}
In a creation expression an unknown net name occurred.
Maybe the name is misspelled? Maybe you have not opened the
net in question?
\subsubsection{Variable \dots\ declared twice}
In a declaration node there were two declarations
of the same variable. Remove one variable declaration.
\subsubsection{Variable \dots\ is named identically to an imported class}
In a declaration node there was a variable declaration and
an import statement that referenced the same symbol, e.g.,
\texttt{import some.where.Name} and \texttt{String Name}.
This error is rare, because by convention class names should start
with an upper case letter and variable names should start with
a lower case letter. You should probably rename the variable.
\subsubsection{Variable of array type expected}
If a clear arc is inscribed with a typed variable,
that variable should have an array type, so that
the set of all tokens can be bound to the variable
in the form of an array. You should check whether the
correct variable is used and whether the variable is correctly
typed.
\subsection{Late Error Messages}
Here we discuss the error message that is not reported
during the immediate check, but only during the complete
check before the simulation.
\subsubsection{Unsupported arc type}
An arc of the net was of an illegal type, i.e., the
current net formalism does not support it. This can only
happen when you execute a net with a net formalism that is incompatible
with the net formalism that was used to draw the net.
Maybe you should restart Renew with another net formalism?
|
lemma countable_ran:
  assumes "countable (dom f)"
  shows "countable (ran f)"
proof -
  have "countable (map_graph f)"
    by (simp add: assms)
  then have "countable (Range (map_graph f))"
    by (simp add: Range_snd)
  thus ?thesis
    by (simp add: ran_map_graph)
qed
|
% StackExchange Signal Processing Q63449
% https://dsp.stackexchange.com/questions/63449
% Deconvolution of an Image Acquired by a Square Uniform Detector
% Release Notes
% - 1.0.000 25/01/2020
% * First release.
%% General Parameters
subStreamNumberDefault = 79;
run('InitScript.m');
figureIdx = 0; %<! Continue from Question 1
figureCounterSpec = '%04d';
generateFigures = ON;
CONVOLUTION_SHAPE_FULL = 1;
CONVOLUTION_SHAPE_SAME = 2;
CONVOLUTION_SHAPE_VALID = 3;
%% Simulation Parameters
imageFileName = 'Lenna256.png';
kernelRadius = 1;
convShape = CONVOLUTION_SHAPE_VALID;
paramLambda = 0.005;
maxSize = 256;
%% Generate Data
mA = im2double(imread(imageFileName));
numRows = size(mA, 1);
numCols = size(mA, 2);
imgSize = min(numRows, numCols);
maxSize = min(imgSize, maxSize);
mA = mA(1:imgSize, 1:imgSize, :);
mA = imresize(mA, [maxSize, maxSize]);
mK = ones((2 * kernelRadius) + 1);
% mK = rand((2 * kernelRadius) + 1);
mK = mK / sum(mK(:));
switch(convShape)
case(CONVOLUTION_SHAPE_FULL)
convShapeString = 'full';
case(CONVOLUTION_SHAPE_SAME)
convShapeString = 'same';
case(CONVOLUTION_SHAPE_VALID)
convShapeString = 'valid';
end
hFigure = figure();
hAxes = axes();
hImageObj = imshow(mA);
set(get(hAxes, 'Title'), 'String', {['Input Image - Lenna']}, ...
'FontSize', fontSizeTitle);
mB = conv2(mA, mK, convShapeString);
hFigure = figure();
hAxes = axes();
hImageObj = imshow(mB);
set(get(hAxes, 'Title'), 'String', {['Sensor Image - Lenna']}, ...
'FontSize', fontSizeTitle);
%% Solution by Linear Algebra
mKK = CreateConvMtx2D(mK, maxSize, maxSize, convShape);
% Basically, the forward (sensor) model is: vB = mKK * mA(:);
% Unregularized least squares (superseded by the regularized solve below):
% vA = mKK \ mB(:);
% vA = pinv(full(mKK)) * mB(:);
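% Tikhonov-regularized least squares: the next line solves the normal
% equations (mKK.' * mKK + paramLambda * I) * vA = mKK.' * mB(:), i.e. it
% minimizes ||mKK * vA - mB(:)||_2^2 + paramLambda * ||vA||_2^2.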
vA = ((mKK.' * mKK) + (paramLambda * speye(maxSize * maxSize))) \ (mKK.' * mB(:));
mAA = reshape(vA, maxSize, maxSize); %<! Restored Image
hFigure = figure();
hAxes = axes();
hImageObj = imshow(mAA);
set(get(hAxes, 'Title'), 'String', {['Estimated Image - Lenna']}, ...
'FontSize', fontSizeTitle);
%% Restore Defaults
% set(0, 'DefaultFigureWindowStyle', 'normal');
% set(0, 'DefaultAxesLooseInset', defaultLoosInset);
|
Describe Users/stevenlowe here.
Hi, thanks for helping with the Tanglewood page and removing the broken link to the Better Business Bureau. I updated it on the Sequoia Equities page. Users/NickSchmalenberger
|
module ProofColDivSeqPostulate
import ProofColDivSeqBase
%default total
-- %language ElabReflection
%access export
%hide Language.Reflection.P
-- from ProofColDivSeqBase
-- ########################################
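-- Limited xs is witnessed either directly by a bound on the number of
-- steps (IsLimited00, via limitedNStep), or postulated by the remaining
-- constructors: each IsLimitedNN asserts Limited for divSeq of a specific
-- arithmetic form of the argument, given Limited for divSeq of a related
-- argument.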
public export
data Limited : CoList Integer -> Type where
IsLimited00 : (step : Nat ** limitedNStep xs step = True) -> Limited xs
IsLimited01 : (k : Nat)
-> Limited $ divSeq (plus (plus k k) k)
-> Limited $ divSeq (S (plus (plus (plus (plus (plus k k) (plus k k)) (plus k k))
(S (plus (plus (plus k k) (plus k k))(plus k k))))
(S (plus (plus (plus k k) (plus k k)) (plus k k)))))
IsLimited02 : (l : Nat)
-> Limited (divSeq (plus (plus (plus l l) (plus l l)) (plus l l)))
-> Limited (divSeq (S (S (plus (plus (plus (plus (plus (plus (plus l l) l) (plus (plus l l) l))
(S (plus (plus (plus l l) l) (plus (plus l l) l))))
(S (plus (plus (plus l l) l) (plus (plus l l) l))))
(S (S (plus (plus (plus (plus (plus l l) l) (plus (plus l l) l))
(S (plus (plus (plus l l) l) (plus (plus l l) l))))
(S (plus (plus (plus l l) l) (plus (plus l l) l)))))))
(S (S (plus (plus (plus (plus (plus l l) l) (plus (plus l l) l))
(S (plus (plus (plus l l) l) (plus (plus l l) l))))
(S (plus (plus (plus l l) l) (plus (plus l l) l))))))))))
IsLimited03 : (l : Nat)
-> Limited (divSeq (S (S (S (plus (plus (plus (plus (plus l l) (plus l l)) (plus (plus l l) (plus l l)))
(S (S (S (plus (plus (plus l l) (plus l l)) (plus (plus l l) (plus l l)))))))
(S (S (S (plus (plus (plus l l) (plus l l)) (plus (plus l l) (plus l l)))))))))))
-> Limited (divSeq (S (S (S (plus (plus (plus (plus (plus (plus (plus l l) l) (S (plus (plus l l) l)))
(S (S (plus (plus (plus l l) l) (S (plus (plus l l) l))))))
(S (S (plus (plus (plus l l) l) (S (plus (plus l l) l))))))
(S (S (S (plus (plus (plus (plus (plus l l) l) (S (plus (plus l l) l)))
(S (S (plus (plus (plus l l) l) (S (plus (plus l l) l))))))
(S (S (plus (plus (plus l l) l) (S (plus (plus l l) l))))))))))
(S (S (S (plus (plus (plus (plus (plus l l) l) (S (plus (plus l l) l)))
(S (S (plus (plus (plus l l) l) (S (plus (plus l l) l))))))
(S (S (plus (plus (plus l l) l) (S (plus (plus l l) l))))))))))))))
IsLimited04 : (l : Nat)
-> Limited (divSeq (S (S (S (plus (plus (plus (plus l l) (plus l l))
(S (S (S (plus (plus l l) (plus l l))))))
(S (S (S (plus (plus l l) (plus l l))))))))))
-> Limited (divSeq (S (S (S (S (plus (plus (plus (plus (plus (plus (plus l l) l) (S (S (plus (plus l l) l))))
(S (S (S (plus (plus (plus l l) l) (S (S (plus (plus l l) l))))))))
(S (S (S (plus (plus (plus l l) l) (S (S (plus (plus l l) l))))))))
(S (S (S (S (plus (plus (plus (plus (plus l l) l) (S (S (plus (plus l l) l))))
(S (S (S (plus (plus (plus l l) l) (S (S (plus (plus l l) l))))))))
(S (S (S (plus (plus (plus l l) l) (S (S (plus (plus l l) l)))))))))))))
(S (S (S (S (plus (plus (plus (plus (plus l l) l) (S (S (plus (plus l l) l))))
(S (S (S (plus (plus (plus l l) l) (S (S (plus (plus l l) l))))))))
(S (S (S (plus (plus (plus l l) l) (S (S (plus (plus l l) l))))))))))))))))))
IsLimited05 : (j : Nat)
-> Limited (divSeq (plus (plus j j) j))
-> Limited (divSeq (S (S (plus (plus (plus (plus j j) j) (S (S (plus (plus j j) j))))
(S (S (plus (plus j j) j)))))))
IsLimited06 : (l : Nat)
-> Limited (divSeq (plus (plus l l) l))
-> Limited (divSeq (S (S (S (plus (plus (plus (plus (plus (plus l l) (plus l l)) (plus (plus l l) (plus l l))) (plus (plus l l) (plus l l)))
(S (S (S (plus (plus (plus (plus l l) (plus l l)) (plus (plus l l) (plus l l))) (plus (plus l l) (plus l l)))))))
(S (S (S (plus (plus (plus (plus l l) (plus l l)) (plus (plus l l) (plus l l))) (plus (plus l l) (plus l l)))))))))))
IsLimited07 : (o : Nat)
-> Limited (divSeq (S (S (S (S (S (S (S (plus (plus (plus (plus (plus (plus (plus o o)
(plus o o))
(plus (plus o o) (plus o o)))
(plus (plus (plus o o) (plus o o))
(plus (plus o o) (plus o o))))
(plus (plus (plus (plus o o) (plus o o))
(plus (plus o o) (plus o o)))
(plus (plus (plus o o) (plus o o))
(plus (plus o o) (plus o o)))))
(S (S (S (S (S (S (S (plus (plus (plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o)))
(plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o))))
(plus (plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o)))
(plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o
o)))))))))))))
(S (S (S (S (S (S (S (plus (plus (plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o)))
(plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o))))
(plus (plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o)))
(plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o
o)))))))))))))))))))))
-> Limited (divSeq (S (S (S (S (plus (plus (plus (plus (plus (plus (plus (plus o o) o)
(plus (plus o o) o))
(S (plus (plus (plus o o) o)
(plus (plus o o) o))))
(S (plus (plus (plus (plus o o) o) (plus (plus o o) o)) (S (plus (plus (plus o o) o)
(plus (plus o o) o))))))
(S (plus (plus (plus (plus o o) o) (plus (plus o o) o))
(S (plus (plus (plus o o) o)
(plus (plus o o) o))))))
(S (S (S (S (plus (plus (plus (plus (plus (plus o o) o)
(plus (plus o o) o))
(S (plus (plus (plus o o) o)
(plus (plus o o) o))))
(S (plus (plus (plus (plus o o) o)
(plus (plus o o) o))
(S (plus (plus (plus o o) o)
(plus (plus o o) o))))))
(S (plus (plus (plus (plus o o) o)
(plus (plus o o) o))
(S (plus (plus (plus o o) o)
(plus (plus o o) o)))))))))))
(S (S (S (S (plus (plus (plus (plus (plus (plus o o) o)
(plus (plus o o) o))
(S (plus (plus (plus o o) o)
(plus (plus o o) o))))
(S (plus (plus (plus (plus o o) o)
(plus (plus o o) o))
(S (plus (plus (plus o o) o)
(plus (plus o o) o))))))
(S (plus (plus (plus (plus o o) o) (plus (plus o o) o)) (S (plus (plus (plus o o) o)
(plus (plus o o) o))))))))))))))))
IsLimited08 : (o : Nat)
-> Limited (divSeq (S (S (S (S (S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) (plus o o))
(plus (plus o o) (plus o o)))
(plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o))))
(S (S (S (S (S (S (S (S (S (plus (plus (plus (plus o o) (plus o o))
(plus (plus o o) (plus o o)))
(plus (plus (plus o o) (plus o o))
(plus (plus o o) (plus o o)))))))))))))) (S (S (S (S (S (S (S (S (S (plus (plus (plus (plus o o) (plus o o))
(plus (plus o o) (plus o o)))
(plus (plus (plus o o) (plus o o))
(plus (plus o o)
(plus o o))))))))))))))))))))))))
-> Limited (divSeq (S (S (S (S (S (plus (plus (plus (plus (plus (plus (plus (plus o o) o)
(S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o) (S (plus (plus o o) o))))))
(S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o) (S (plus (plus o o) o)))))))))
(S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o) (S (plus (plus o o) o)))))))))
(S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o)
(S (plus (plus o o) o))))))
(S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o)
(S (plus (plus o o) o)))))))))
(S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o)
(S (plus (plus o o) o)))))))))))))))
(S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o) (S (plus (plus o o) o))))))
(S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o)
(S (plus (plus o o) o)))))))))
(S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o)
(S (plus (plus o o) o)))))))))))))))))))))
IsLimited09 : (o : Nat)
-> Limited $ divSeq (S (S (S (S (S (S (S (plus (plus (plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o)))
(S (S (S (S (S (S (S (plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o)))))))))))
(S (S (S (S (S (S (S (plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o))))))))))))))))))
-> Limited $ divSeq (S (S (S (S (S (S (plus (plus (plus (plus (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))))))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))))))))))
(S (S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))))))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o)))))))))))))))))))
(S (S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))))))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o)
(S (S (plus (plus o o) o)))))))))))))))))))))))))
IsLimited10 : (o : Nat)
-> Limited $ divSeq (S (plus (plus (plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o)))
(S (plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o)))))
(S (plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o))))))
-> Limited $ divSeq (S (S (S (S (plus (plus (plus (plus (plus (plus (plus (plus o o) o) (plus (plus o o) o))
(plus (plus (plus o o) o) (plus (plus o o) o)))
(S (plus (plus (plus (plus o o) o) (plus (plus o o) o)) (plus (plus (plus o o) o) (plus (plus o o) o)))))
(S (plus (plus (plus (plus o o) o) (plus (plus o o) o)) (plus (plus (plus o o) o) (plus (plus o o) o)))))
(S (S (S (S (plus (plus (plus (plus (plus (plus o o) o) (plus (plus o o) o)) (plus (plus (plus o o) o) (plus (plus o o) o))) (S (plus (plus (plus (plus o o) o) (plus (plus o o) o))
(plus (plus (plus o o) o) (plus (plus o o) o)))))
(S (plus (plus (plus (plus o o) o) (plus (plus o o) o))
(plus (plus (plus o o) o) (plus (plus o o) o))))))))))
(S (S (S (S (plus (plus (plus (plus (plus (plus o o) o) (plus (plus o o) o)) (plus (plus (plus o o) o) (plus (plus o o) o)))
(S (plus (plus (plus (plus o o) o) (plus (plus o o) o)) (plus (plus (plus o o) o) (plus (plus o o) o)))))
(S (plus (plus (plus (plus o o) o) (plus (plus o o) o))
(plus (plus (plus o o) o) (plus (plus o o) o))))))))))))))
IsLimited11 : (o : Nat)
-> Limited $ divSeq (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (plus (plus (plus (plus (plus (plus (plus o o) (plus o o))
(plus (plus o o) (plus o o)))
(plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o))))
(plus (plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o)))
(plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o)))))
(S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (plus (plus (plus (plus (plus o o) (plus o o))
(plus (plus o o) (plus o o))) (plus (plus (plus o o) (plus o o))
(plus (plus o o)
(plus o o))))
(plus (plus (plus (plus o o) (plus o o))
(plus (plus o o) (plus o o))) (plus (plus (plus o o) (plus o o))
(plus (plus o o)
(plus o o)))))))))))))))))))))
(S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (plus (plus (plus (plus (plus o o) (plus o o))
(plus (plus o o) (plus o o)))
(plus (plus (plus o o) (plus o o))
(plus (plus o o) (plus o o))))
(plus (plus (plus (plus o o) (plus o o))
(plus (plus o o) (plus o o)))
(plus (plus (plus o o) (plus o o))
(plus (plus o o)
(plus o o))))))))))))))))))))))))))))))))))))
-> Limited $ divSeq (S (S (S (S (S (plus (plus (plus (plus (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (plus (plus (plus o o) o) (S (plus (plus o o) o)))))
(S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (plus (plus (plus o o) o) (S (plus (plus o o) o))))))))
(S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (plus (plus (plus o o) o) (S (plus (plus o o) o))))))))
(S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (plus (plus (plus o o) o) (S (plus (plus o o) o)))))
(S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (plus (plus (plus o o) o) (S (plus (plus o o) o))))))))
(S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (plus (plus (plus o o) o) (S (plus (plus o o) o))))))))))))))
(S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (plus (plus (plus o o) o) (S (plus (plus o o) o)))))
(S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (plus (plus (plus o o) o) (S (plus (plus o o) o))))))))
(S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (plus (plus (plus o o) o) (S (plus (plus o o) o)))))))))))))))))))
IsLimited12 : (o : Nat)
-> Limited $ divSeq (S (S (S (S (S (S (S (S (S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o)))
(plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o))))
(S (S (S (S (S (S (S (S (S (S (S (S (S (plus (plus (plus (plus o o) (plus o o))
(plus (plus o o) (plus o o)))
(plus (plus (plus o o) (plus o o))
(plus (plus o o) (plus o o))))))))))))))))))
(S (S (S (S (S (S (S (S (S (S (S (S (S (plus (plus (plus (plus o o) (plus o o))
(plus (plus o o) (plus o o)))
(plus (plus (plus o o) (plus o o))
(plus (plus o o)
(plus o o)))))))))))))))))))))))))))))))
-> Limited $ divSeq (S (S (S (S (S (S (plus (plus (plus (plus (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o)))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o)))))))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o)))))))))))
(S (S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o)))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o)))))))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))))))))))))))))
(S (S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o)))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o)))))))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))))))))))))))))))))))
IsLimited13 : (o : Nat)
-> Limited $ divSeq (S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o)))
(plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o))))
(S (S (S (S (S (plus (plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o)))
(plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o))))))))))
(S (S (S (S (S (plus (plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o)))
(plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o)))))))))))))))
-> Limited $ divSeq (S (S (S (S (S (plus (plus (plus (plus (plus (plus (plus (plus o o) o) (plus (plus o o) o))
(S (plus (plus (plus o o) o) (plus (plus o o) o))))
(S (S (plus (plus (plus (plus o o) o) (plus (plus o o) o))
(S (plus (plus (plus o o) o) (plus (plus o o) o)))))))
(S (S (plus (plus (plus (plus o o) o) (plus (plus o o) o)) (S (plus (plus (plus o o) o) (plus (plus o o) o)))))))
(S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) o) (plus (plus o o) o))
(S (plus (plus (plus o o) o) (plus (plus o o) o))))
(S (S (plus (plus (plus (plus o o) o) (plus (plus o o) o))
(S (plus (plus (plus o o) o) (plus (plus o o) o)))))))
(S (S (plus (plus (plus (plus o o) o) (plus (plus o o) o))
(S (plus (plus (plus o o) o) (plus (plus o o) o)))))))))))))
(S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) o) (plus (plus o o) o))
(S (plus (plus (plus o o) o) (plus (plus o o) o))))
(S (S (plus (plus (plus (plus o o) o) (plus (plus o o) o))
(S (plus (plus (plus o o) o) (plus (plus o o) o)))))))
(S (S (plus (plus (plus (plus o o) o) (plus (plus o o) o))
(S (plus (plus (plus o o) o) (plus (plus o o) o))))))))))))))))))
IsLimited14 : (o : Nat)
-> Limited $ divSeq (S (S (S (S (S (plus (plus (plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o)))
(S (S (S (S (S (plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o)))))))))
(S (S (S (S (S (plus (plus (plus o o) (plus o o)) (plus (plus o o) (plus o o))))))))))))))
-> Limited $ divSeq (S (S (S (S (S (S (plus (plus (plus (plus (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o) (S (plus (plus o o) o))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o) (S (plus (plus o o) o))))))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o) (S (plus (plus o o) o))))))))))
(S (S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o) (S (plus (plus o o) o))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o) (S (plus (plus o o) o))))))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o) (S (plus (plus o o) o)))))))))))))))))
(S (S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o) (S (plus (plus o o) o))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o) (S (plus (plus o o) o))))))))))
(S (S (S (plus (plus (plus (plus o o) o) (S (plus (plus o o) o)))
(S (S (plus (plus (plus o o) o) (S (plus (plus o o) o)))))))))))))))))))))))
IsLimited15 : (o : Nat)
-> Limited $ divSeq (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (plus (plus (plus (plus (plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o)))
(plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o))))
(plus (plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o)))
(plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o)))))
(S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (plus (plus (plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o)))
(plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o))))
(plus (plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o)))
(plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o)))))))))))))))))))))))))))))))))))))
(S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (plus (plus (plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o)))
(plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o))))
(plus (plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o)))
(plus (plus (plus o o)
(plus o o))
(plus (plus o o)
(plus o o))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))
-> Limited $ divSeq (S (S (S (S (S (S (S (plus (plus (plus (plus (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))))))
(S (S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o)))))))))))))
(S (S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o)))))))))))))
(S (S (S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))))))
(S (S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o)
(S (S (plus (plus o o) o)))))))))))))
(S (S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o)
(S (S (plus (plus o o) o)))))))))))))))))))))
(S (S (S (S (S (S (S (plus (plus (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))))))
(S (S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o)
(S (S (plus (plus o o) o)))))))))))))
(S (S (S (S (plus (plus (plus (plus o o) o) (S (S (plus (plus o o) o))))
(S (S (S (plus (plus (plus o o) o)
(S (S (plus (plus o o) o))))))))))))))))))))))))))))
P : Nat -> Type
P n = Limited $ divSeq (n+n+n)
definiP : (n : Nat) -> P n = Limited $ divSeq (n+n+n)
definiP n = Refl
-- (A variant of) the method of infinite descent; proved in Isabelle
postulate infiniteDescent0 :
((n:Nat) -> (Not . P) (S n) -> (m ** (LTE (S m) (S n), (Not . P) m)))
-> P Z
-> P n
-- ########################################
-- from ProofColDivSeqMain
-- ########################################
-- ########################################
-- from sub0xxxxx
-- ########################################
-- ########################################
|
module Courgette
using SparseArrays
using JuMP
using LinearAlgebra: dot, I, Diagonal
import Random
include("cut_generating_cone.jl")
include("normalizing.jl")
end # module
|
[STATEMENT]
lemma is_syz_sigI:
assumes "s \<noteq> 0" and "lt s = u" and "s \<in> dgrad_sig_set d" and "rep_list s = 0"
shows "is_syz_sig d u"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. is_syz_sig d u
[PROOF STEP]
unfolding is_syz_sig_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<exists>s\<in>dgrad_sig_set' (length fs) d. s \<noteq> 0 \<and> lt s = u \<and> rep_list s = 0
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
s \<noteq> 0
lt s = u
s \<in> dgrad_sig_set' (length fs) d
rep_list s = 0
goal (1 subgoal):
1. \<exists>s\<in>dgrad_sig_set' (length fs) d. s \<noteq> 0 \<and> lt s = u \<and> rep_list s = 0
[PROOF STEP]
by blast
|
-- Copyright 2017, the blau.io contributors
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
module API.Web.DOM.Node
import IdrisScript
%access public export
%default total
||| The original specification can be found at
||| https://dom.spec.whatwg.org/#interface-documenttype
record DocumentType where
constructor New
name : String
publicId : String
systemId : String
self : JSRef
||| documentTypeFromPointer is a helper function for easily creating
||| DocumentTypes from JavaScript references.
|||
||| @ ref a pointer to a documentType
documentTypeFromPointer : (ref : JSRef) -> JS_IO $ Maybe DocumentType
documentTypeFromPointer ref = let
nameRef = jscall "%0.name" (JSRef -> JS_IO JSRef) ref
publicIdRef = jscall "%0.publicId" (JSRef -> JS_IO JSRef) ref
systemIdRef = jscall "%0.systemId" (JSRef -> JS_IO JSRef) ref
in
case !(IdrisScript.pack !nameRef) of
(JSString ** name) => case !(IdrisScript.pack !publicIdRef) of
(JSString ** pubId) => case !(IdrisScript.pack !systemIdRef) of
(JSString ** sysId) => pure $ Just $
New (fromJS name) (fromJS pubId) (fromJS sysId)
ref
_ => pure Nothing
_ => pure Nothing
_ => pure Nothing
|
Formal statement is: lemma retract_of_path_connected: "\<lbrakk>path_connected T; S retract_of T\<rbrakk> \<Longrightarrow> path_connected S" Informal statement is: If $S$ is a retract of a path-connected space $T$, then $S$ is path-connected.
|
# Joint conf-call report #4
```python
import datetime
print(str(datetime.datetime.today()))
```
2018-05-28 14:46:22.329854
# Summary
- time distributions considered
- exponential
- uniform
- Pareto type 2
- bound in general
- memoryless lower bound (quick reminder)
- residual lifetime lower bound (full explanation)
- upper bound (full explanation)
- Don's velocity approximation bound (only for exponential)
- time distributions considered
- exponential
- all bounds for exponential distribution
- uniform
- all bounds for uniform distribution
- Pareto type 2
- all bounds for pareto type 2
- training set points' domain changing
- comparison with different w's components domains
- fix applied to code causes error to change a lot
- why?
- new tests
- tests' comparison
# Introduction
In this period we've focused on studying iterations' speed.
The time taken by node $u$ is a random variable $X_u$; the $X_1,\dots,X_n$ are i.i.d.
The single-iteration velocity of $u$ is $1$ over the total time spent to move forward by one iteration.
In previous tests the only distribution taken into account was the exponential with parameter $\lambda=1$. Now we have also introduced the **Uniform** and **Type II Pareto** distributions.
# Time distributions
## Exponential
$X \sim Exp(\lambda)$.
## Uniform
## Type II Pareto (Lomax)
A heavy-tailed distribution.
# Bounds
## Memoryless lower bound
Let $Y=\max(X_1,\dots,X_k)$ be the time taken by the slowest of the $k$ dependency nodes to complete its task; then
$$Pr(Y \leq y) = \prod_{i=1}^{k} Pr(X_i \leq y) = Pr(X \leq y)^k = F_X(y)^k.$$
The expected value of $Y$ is
$$\mathbb{E}[Y] = \int_{0}^{\infty} (1-Pr(Y\leq y))\ dy = \int_{0}^{\infty} (1-F_X(y)^k)\ dy.$$
The total time $v$ has to wait is $\mathbb{E}[Y]+\mathbb{E}[X]$, so the single iteration velocity can be expressed as
$$V = \frac{1}{\mathbb{E}[Y]+\mathbb{E}[X]}.$$
Since we have considered the slowest node, one might conclude that this is a lower bound for the single-iteration velocity, but actually it is not: we assume that by the time node $v$ finishes the computation for iteration $t$, its dependencies have in turn already received the required information from their own dependencies, so they can start the calculations for iteration $t$. This assumption is not always true.
This bound makes another assumption: the dependencies of $v$ start computing only when $v$ finishes iteration $t$. This is not a problem if $X$ follows a distribution with the memorylessness property.
**Memorylessness property.**
*A distribution is memoryless when the "waiting time" until a certain event does not depend on how much time has already elapsed, so a memoryless random variable can be regenerated at any moment. The only continuous memoryless distribution is the **exponential** one.*
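As a quick numerical sanity check (a minimal Monte Carlo sketch, not the simulator's code; exponential times with $\lambda=1$ and a hypothetical $k=4$ dependencies are assumed):
```python
import numpy as np

# Estimate E[Y] = E[max(X_1, ..., X_k)] for exponential times and the
# resulting memoryless lower-bound velocity 1 / (E[Y] + E[X]).
rng = np.random.default_rng(0)
lam, k, n_samples = 1.0, 4, 100_000

samples = rng.exponential(scale=1.0 / lam, size=(n_samples, k))
E_Y = samples.max(axis=1).mean()   # empirical E[max of k exponentials]
E_X = 1.0 / lam                    # E[X]
print(E_Y, 1.0 / (E_Y + E_X))      # lower-bound velocity
```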
## Residual lifetime lower bound
This is a lower bound for non-memoryless distributions (Uniform and Pareto Type II).
As in the previous bound, the first assumption persists: when node $u$ finishes the computation for iteration $t$, its dependencies already have the information needed to start computing. However, this bound takes into account that, by the time node $u$ finishes its computation for iteration $t$, some of its dependencies (between $0$ and $k$ of them) may have already started computing; hence here we consider *residual lifetime* variables.
# Residual time CDF for non-memoryless distributions
$$F_{Xres}(x) = \frac{1}{\mathbb{E}[X]} \int_{0}^{x} (1-F_X(t))\ dt$$
## Uniform with residual time CDF
Uniform's residual time CDF:
$$
\begin{equation}
F_{Xres}(x) = \frac{1}{\mathbb{E}[X]} \int_{0}^{x} (1-F_X(t))\ dt = \frac{2}{a+b} \left[ x-\int_{0}^{x} F_X(t)\ dt \right]
\end{equation}
$$
$$
F_{Xres}(x) = \frac{2}{a+b}
\begin{cases}
x & \text{if } x \leq a \\
x-\int_{a}^{x}\frac{t-a}{b-a}\ dt & \text{if } a < x < b\\
x-\int_{a}^{b}\frac{t-a}{b-a}\ dt -(x-b)& \text{if } x \geq b
\end{cases}
$$
$$
\begin{cases}
\int_{a}^{x} \frac{t-a}{b-a}\ dt\\
u = \frac{t-a}{b-a}
\end{cases}
$$
$$
\begin{cases}
t = u(b-a)+a\\
dt = (b-a)\ du
\end{cases}
$$
$$
\begin{cases}
t = a \Rightarrow u=0\\
t = x \Rightarrow u = \frac{x-a}{b-a}
\end{cases}
$$
$$
\int_{a}^{x} \frac{t-a}{b-a}\ dt = \int_{0}^{\frac{x-a}{b-a}} u(b-a)\ du = (b-a) \frac{u^2}{2}\Biggr|_0^{\frac{x-a}{b-a}} = \frac{(x-a)^2}{2(b-a)}
$$
So
$$
F_{Xres}(x) =
\begin{cases}
\frac{2x}{a+b} & \text{if } x \leq a \\
\frac{x^2-2bx+a^2}{a^2-b^2} & \text{if } a < x < b\\
1 & \text{if } x \geq b
\end{cases}
$$
$Yres = max(Xres_1,...,Xres_k)$
$$
\begin{align}
\mathbb{E}[Yres] & = \int_{0}^{+\infty}1-F_{Xres}(y)^k\ dy =\\
& = \int_{0}^{a} 1-\left(\frac{2y}{a+b}\right)^k\ dy + \int_{a}^{b} 1- \left(\frac{y^2-2by+a^2}{a^2-b^2}\right)^k\ dy + \int_{b}^{+\infty}1-1\ dy =\\
& = a - \frac{2^k}{(a+b)^k} \int_{0}^{a} y^k\ dy +b-a - \frac{1}{(a^2-b^2)^k} \int_{a}^{b} (y^2-2by+a^2)^k\ dy=\\
& = b - \frac{2^k a^{k+1}}{(a+b)^k (k+1)} - \frac{1}{(a^2-b^2)^k} \int_{a}^{b} (y^2-2by+a^2)^k\ dy.
\end{align}
$$
For the integral $\int_{a}^{b} (y^2-2by+a^2)^k\ dy$ with generic $k\geq 1$ there is no general closed form in $k$, while it can always be computed for a fixed $k$.
For $a=0,\ b=2,\ k=1$ (cycle):
$$\mathbb{E}[Yres] = b - \frac{2^k a^{k+1}}{(a+b)^k (k+1)} - \frac{1}{(a^2-b^2)^k} \int_{a}^{b} (y^2-2by+a^2)^k\ dy = 2+\frac{1}{4}\int_{0}^{2} t^2 -4t\ dt = 2 + \frac{1}{4} \left(-\frac{16}{3}\right) = \frac{2}{3}$$
and velocity
$$\overline{V} = \frac{1}{\mathbb{E}[Yres] + \mathbb{E}[X]} = \frac{1}{\frac{2}{3} + 1} = \frac{3}{5}$$
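The $a=0$, $b=2$, $k=1$ case can be cross-checked numerically (a minimal sketch, not the simulator's code), integrating $1-F_{Xres}(y)^k$ with the piecewise CDF derived above:
```python
import numpy as np

# Numerical check of E[Yres] and of the residual-lifetime bound for a=0, b=2, k=1.
a, b, k = 0.0, 2.0, 1
y = np.linspace(a, b, 200_001)

# Middle branch of the residual-lifetime CDF; using it on all of [a, b] is
# valid here because a = 0, so the "x <= a" branch is empty.
F_res = (y**2 - 2*b*y + a**2) / (a**2 - b**2)
E_Yres = np.sum(1.0 - F_res**k) * (y[1] - y[0])   # ~ integral of (1 - F_res^k)
E_X = (a + b) / 2.0

print(E_Yres, 1.0 / (E_Yres + E_X))               # expected: ~2/3 and ~3/5
```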
# Velocity approximation
# Velocity assuming different time distributions
## Exponential time
The one always used up to now.
$$\mathbb{E}[Y] = \int_{0}^{\infty} 1-(1-e^{-\lambda y})^k\ dy$$
$$V = \frac{\lambda}{1+\sum_{i=1}^{k} \frac{1}{i}}$$
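For instance, the closed form can be evaluated directly (a small sketch; $\lambda=1$ and the values of $k$ below are just illustrative choices):
```python
def exp_velocity(lam, k):
    """Memoryless lower-bound velocity lambda / (1 + H_k) for exponential times."""
    H_k = sum(1.0 / i for i in range(1, k + 1))   # harmonic number H_k
    return lam / (1.0 + H_k)

print([round(exp_velocity(1.0, k), 3) for k in (1, 2, 4, 8)])
```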
## Uniform time
$Y = max(X_1,...,X_k)$ with $X \sim \mathcal{U}(a,b)$.
Uniform's CDF:
$$
F_X(x) =
\begin{cases}
0 & \text{if } x < a \\
\frac{x-a}{b-a} & \text{if } a \leq x < b \\
1 & \text{if } x \geq b \end{cases}
$$
<br>
$$
\begin{align}
\mathbb{E}[Y] = \int_{0}^{\infty} 1 - (F_X(y)^k) dy &= \int_{0}^{a} 1\ dy + \int_{a}^{b} 1-\left(\frac{y-a}{b-a}\right)^k dy + \int_{b}^{\infty} 1-1\ dy =\\
&= a + \int_{a}^{b} 1\ dy - \frac{1}{(b-a)^k} \int_{a}^{b} (y-a)^k dy + 0 =\\
&= b - \frac{1}{(b-a)^k} \left[\frac{(y-a)^{k+1}}{k+1}\right]_{a}^{b} =\\
&= b - \frac{1}{(b-a)^k} \frac{(b-a)^{k+1}}{k+1} =\\
&= b + \frac{a-b}{k+1} = \frac{kb+b-b+a}{k+1} =\\
&= \frac{a+kb}{k+1}
\end{align}
$$
<br>
$$
\overline{T} = \mathbb{E}[X] + \mathbb{E}[Y] = \frac{a+b}{2} + \frac{a+kb}{k+1} = \frac{a(k+3) + b(3k+1)}{2(k+1)}
$$
<br>
$$
\overline{V} = \frac{1}{\overline{T}} = \frac{1}{\mathbb{E}[X] + \mathbb{E}[Y]} = \frac{2(k+1)}{a(k+3) + b(3k+1)}.
$$
For $a=0$ and $b=2$
$$
\overline{T} = \frac{3k+1}{k+1}.
$$
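A quick Monte Carlo check of $\mathbb{E}[Y]=\frac{a+kb}{k+1}$ and of $\overline{T}$ (a sketch; $a=0$, $b=2$, $k=4$ are just example values):
```python
import numpy as np

# Compare the empirical E[max of k uniforms] with (a + k*b)/(k + 1).
rng = np.random.default_rng(1)
a, b, k = 0.0, 2.0, 4
samples = rng.uniform(a, b, size=(100_000, k))

E_Y_mc = samples.max(axis=1).mean()
E_Y_cf = (a + k * b) / (k + 1)
T_bar = (a + b) / 2 + E_Y_cf        # for a=0, b=2 this equals (3k+1)/(k+1)
print(E_Y_mc, E_Y_cf, T_bar, 1.0 / T_bar)
```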
## Type II Pareto time
Type II Pareto's CDF:
$$
F_X(x) = 1-\left(1+\frac{x}{\lambda}\right)^{-\alpha}
$$
$Y = max(X_1,...,X_k)$ with $X \sim Pareto(\alpha)$.
$$
\begin{equation}
\mathbb{E}[Y] = \int_{0}^{\infty} 1 - (F_X(y)^k)\ dy = \dots
\end{equation}
$$
*to be completed...*
## Type II Pareto with residual time
Type II Pareto CDF with parameters $\alpha$ and $\sigma$:
$$F_X(x) = 1-\left(1+\frac{x}{\sigma}\right)^{-\alpha}.$$
$$
\begin{align}
F_{Xres}(x) &= \frac{1}{\mathbb{E}[X]} \int_{0}^{x} \left(1+\frac{u}{\sigma}\right)^{-\alpha}\ du = \qquad \left(\text{with } 1+\frac{u}{\sigma}=\nu\right) \\
&= \frac{1}{\mathbb{E}[X]} \int_{1}^{1+\frac{x}{\sigma}} \sigma \nu^{-\alpha}\ d\nu \\
&= \frac{1}{\mathbb{E}[X]} \sigma \frac{\nu^{-\alpha+1}}{\alpha-1}\Biggr|_{1+\frac{x}{\sigma}}^{1} =\\
&= \frac{\sigma}{(\alpha-1)\mathbb{E}[X]}\left(1-\left(1+\frac{x}{\sigma}\right)^{-\alpha+1}\right)=\\
&= 1-\left(1+\frac{x}{\sigma}\right)^{-\alpha+1}.
\end{align}
$$
The residual-lifetime distribution of a Type II Pareto is again a classical Type II Pareto, with parameters $\alpha-1$ and $\sigma$.
$Yres = max(Xres_1,...,Xres_k)$.
$$
\begin{align}
\mathbb{E}[Yres] &= \int_{0}^{\infty} 1-\left(1-\left(1+\frac{y}{\sigma}\right)^{-\alpha+1}\right)^k\ dy =\\
&= \int_{0}^{\infty} \sum_{i=1}^{k} \left[\binom{k}{i} (-1)^{(i+1)} \left(1+\frac{y}{\sigma}\right)^{(-\alpha+1)i}\right]\ dy =\\
&= \sum_{i=1}^{k} \left[\binom{k}{i} (-1)^{(i+1)} \int_{0}^{\infty} \underbrace{\left(1+\frac{y}{\sigma}\right)^{(-\alpha+1)i}}_\text{Pareto II for $\sigma$ and $(\alpha-1)i$}\ dy\right] =\\
&= \sum_{i=1}^{k} \left[\binom{k}{i} (-1)^{(i+1)} \frac{\sigma}{(\alpha-1)i-1}\right].
\end{align}
$$
$$\overline{V}=\frac{1}{\mathbb{E}[Yres]+\mathbb{E}[X]} = \frac{1}{\sum_{i=1}^{k} \left[\binom{k}{i} (-1)^{(i+1)} \frac{\sigma}{(\alpha-1)i-1}\right] + \frac{\sigma}{\alpha-1}}$$
**Example**: ($k=1$, e.g. Cycle).
$\mathbb{E}[Yres] = \frac{\sigma}{\alpha-2}$, $\mathbb{E}[X]= \frac{\sigma}{\alpha-1}$.
For $\alpha=3$, $\sigma=2$:
$$
\overline{V} = \frac{1}{2+1} = \frac{1}{3}
$$
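The binomial-sum formula can be evaluated directly and checked against this example (a sketch; it requires $\alpha > 2$ so that all the terms are finite):
```python
from math import comb

def pareto2_residual_velocity(alpha, sigma, k):
    """Residual-lifetime bound 1 / (E[Yres] + E[X]) for Lomax(alpha, sigma) times."""
    E_Yres = sum(comb(k, i) * (-1) ** (i + 1) * sigma / ((alpha - 1) * i - 1)
                 for i in range(1, k + 1))
    E_X = sigma / (alpha - 1)
    return 1.0 / (E_Yres + E_X)

print(pareto2_residual_velocity(alpha=3, sigma=2, k=1))   # expected: 1/3
```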
---
# *Everything below this line is just a raw draft*
---
## A hypothetical real (but very pessimistic) lower bound $\forall k$
$$V_l = \frac{\lambda}{1 + \sum_{j=0}^{d-1} \sum_{i=1}^{\frac{n}{d}(d-j)} \frac{1}{i}}$$
where $d$ is the graph's diameter and $n$ is the total number of computational units.
Let $u$ be the most advanced node in computation and suppose it being at iteration $t$. *Remember "node $u$ is at iteration $t$" means that $u$ has already performed the $t$-th update of its local model, so it owns $\mathbf{w}^{(t)}_u$*.
Let $d(u,v)$ be the "directed" distance from $u$ to $v$, i.e. the minimum number of directed edges one needs to cross in order to go from $u$ to $v$ (obviously edges can only be crossed in the proper direction), then:
- nodes at distance 1 from $u$ must be at least at iteration $t-1$;
- nodes at distance 2 from $u$ must be at least at iteration $t-2$;
- in general, nodes at distance $l : l \leq d$ from $u$ must be at least at iteration $t-l$.
Due to the memorylessness property of the exponential, all the exponential variables that rule the duration of each node's iteration can be regenerated at any moment. Therefore it is legitimate to assume that right now node $u$ is starting iteration $t$ and nodes at distance $l$ from $u$ are starting iteration $t-l$, for all $l \leq d$.
The worst case for node $u$ is the following:
- by the time node $u$ finishes iteration $t$ it does not yet have the inputs from its parents needed to start iteration $t+1$. So node $u$ has finished, all the exponential variables can be regenerated, and $u$ has to wait for the slowest node among those at distance 1 from it;
- but nodes at distance 1 from $u$ cannot start iteration $t$ since they are still waiting for their parents' outcomes (i.e. nodes at distance 2 from $u$ have not performed iteration $t-1$ yet); the same holds for nodes at distance 2 from $u$: they are waiting for their parents, and so on up to the nodes at distance $d$ from $u$, among which there is a straggler that is slowing down the whole computation.
In this case, the time taken by the slowest node at distance $d$ is distributed as $max \{X_1,...,X_n\}$ (where $X_i \sim Exp(\lambda)$ and $n$ is the total amount of computational units), since in this pessimistic construction it is the maximum over all $n$ nodes' times. After waiting this time a chain of calculations starts up to the nodes at distance 1 from $u$, and after they have finished their computations $u$ can go on to the next iteration. The time taken by the slowest node at distance $d-1$ from $u$ is distributed as $max \{ X_1,...,X_{(n-\frac{n}{d})}\}$. In general, the time taken by the slowest node at distance $l$ from $u$ is distributed as $max \{ X_1,...,X_{\frac{n}{d}(d-l)}\}$. So the time $u$ has to wait before starting the next iteration is distributed as
$$Z = \sum_{l=0}^{d-1} max \{X_1,...,X_{\frac{n}{d}(d-l)}\}$$
$$\mathbb{E}[Z] = \frac{1}{\lambda} \sum_{j=0}^{d-1} \sum_{i=1}^{\frac{n}{d}(d-j)} \frac{1}{i}$$
Clearly this situation is very rare, but it gives the most pessimistic bound on the time a node may have to wait in order to perform one iteration.
**In the following tests the _almost_-lower bound will be used (and not the latter).**
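For reference, the very pessimistic bound above could be evaluated like this (a sketch with purely illustrative parameters $\lambda=1$, $n=100$, $d=10$; it assumes $d$ divides $n$):
```python
def pessimistic_velocity(lam, n, d):
    """V_l = lam / (1 + sum_{j=0}^{d-1} H_{(n/d)(d-j)}), H_m = m-th harmonic number."""
    H = lambda m: sum(1.0 / i for i in range(1, m + 1))
    total = sum(H((n // d) * (d - j)) for j in range(d))
    return lam / (1.0 + total)

print(pessimistic_velocity(lam=1.0, n=100, d=10))
```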
# Training set points' domain changing
Given how the training set is generated, we know that the target weight vector is the vector of ones.
At the beginning of a simulation, for each node $u$, its local model is now initialized to a random vector $\textbf{w}_u$ whose components are picked in $[0,2]$:
`self.W = np.random.uniform(0, 2, size=(X.shape[1] + 1,))`.
Previously $\textbf{w}_u$'s components were picked in $[0,1]$.
The expected value of $\textbf{w} = \sum_{i=1}^{n} \textbf{w}_i$ being equal to $(1,\dots,1)$, i.e. the optimal solution, is something that should be kept in mind. *Actually I don't know how this is affecting the simulations*.
# WTHeck is happening
## What needed a change
Let's consider what is happening inside a node $v$:
- the iteration counter starts from 0;
- the weight vector is initialized to a random value (the reason for and the significance of this random value will be discussed later);
- the stepwise update is computed:
-
- the iteration counter is increased after each update of the local model. For instance, at iteration 0 the first model update is applied, then the iteration counter is set to 1. I used to consider the counter increase as the last instruction concluding the update.
The problem arises because the metrics calculation is performed before the iteration counter is increased, hence
Iteration 0 used to be considered the first update of the weight vector applied by each node. The binding between iterations and weights was: iteration $i$ is paired with $w^{(i)}$. Actually, if one thinks of $w^{(0)}$, then it should be the starting point, so $w^{(1)}$ should be the first weight update instead. In my code it wasn't so.
There was `W_log`, which stored the history of $w$'s. The first element of `W_log` (i.e. `W_log[0]`) is the starting $w$, i.e. $w^{(0)}$, not the first weight update $w^{(1)}$
## Why these outcomes?
# 1e-6 alpha 10k samples 10k time GD test
Parameters:
- 100 nodes;
- 10k samples;
- 100 features/predictors;
- plotted graphs: diagonal, cycle, clique;
- 10k time limit;
- **1e-6 alpha**.
## X starts in [0,1]
## X starts in [0,2]
## X starts in 0
# Errors over iterations inspection
|
# Tutorial on Python for scientific computing
Marcos Duarte
This tutorial is a short introduction to programming and a demonstration of the basic features of Python for scientific computing. To use Python for scientific computing we need the Python program itself with its main modules and specific packages for scientific computing. [See this notebook on how to install Python for scientific computing](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/PythonInstallation.ipynb).
Once you get Python and the necessary packages for scientific computing ready to work, there are different ways to run Python, the main ones are:
- open a terminal window on your computer and type `python` or `ipython` and the Python interpreter will start
- run the IPython notebook and start working with Python in a browser
- run Spyder, an interactive development environment (IDE)
- run the IPython qtconsole, a more featured terminal
- run IPython completely in the cloud with for example, [https://cloud.sagemath.com](https://cloud.sagemath.com) or [https://www.wakari.io](https://www.wakari.io)
- run Python online in a website such as [https://www.pythonanywhere.com/](https://www.pythonanywhere.com/)
- run Python using any other Python editor or IDE
We will use the IPython Notebook for this tutorial but you can run almost all the things we will see here using the other forms listed above.
## Python as a calculator
Once in the IPython notebook, if you type a simple mathematical expression and press `Shift+Enter` it will give the result of the expression:
```python
1 + 2 - 30
```
-27
```python
4/5
```
0.8
If you are using Python version 2.x instead of Python 3.x, you should have got 0 as the result of 4 divided by 5, which is wrong! The problem is that for Python versions up to 2.x, the operator '/' performs division with integers and the result will also be an integer (this behavior was changed in version 3.x).
If you want the normal behavior for division, in Python 2.x you have two options: tell Python that at least one of the numbers is not an integer or import the new division operator (which is inoffensive if you are already using Python 3), let's see these two options:
```python
4/5.
```
0.8
```python
from __future__ import division
```
```python
4/5
```
0.8
I prefer to use the import division option (from future!); if we put this statement in the beginning of a file or IPython notebook, it will work for all subsequent commands.
Another command that changed its behavior from Python 2.x to 3.x is the `print` command.
In Python 2.x, the print command could be used as a statement:
```python
print 4/5
```
With Python 3.x, the print command behaves as a true function and has to be called with parentheses. Let's also import this future command into Python 2.x and use it from now on:
```python
from __future__ import print_function
```
```python
print(4/5)
```
0.8
With the `print` function, let's explore the mathematical operations available in Python:
```python
print('1+2 = ', 1+2, '\n', '4*5 = ', 4*5, '\n', '6/7 = ', 6/7, '\n', '8**2 = ', 8**2, sep='')
```
1+2 = 3
4*5 = 20
6/7 = 0.8571428571428571
8**2 = 64
And if we want the square-root of a number:
```python
sqrt(9)
```
We get an error message saying that the `sqrt` function is not defined. This is because `sqrt` and other mathematical functions are available in the `math` module:
```python
import math
```
```python
math.sqrt(9)
```
3.0
```python
from math import sqrt
```
```python
sqrt(9)
```
3.0
## The import function
We used the command '`import`' to be able to call certain functions. In Python functions are organized in modules and packages and they have to be imported in order to be used.
A module is a file containing Python definitions (e.g., functions) and statements. Packages are a way of structuring Python’s module namespace by using “dotted module names”. For example, the module name A.B designates a submodule named B in a package named A. To be used, modules and packages have to be imported in Python with the import function.
Namespace is a container for a set of identifiers (names), and allows the disambiguation of homonym identifiers residing in different namespaces. For example, with the command import math, we will have all the functions and statements defined in this module in the namespace '`math.`', for example, '`math.pi`' is the π constant and '`math.cos()`', the cosine function.
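For example (a small illustrative cell using the `math` namespace just described):
```python
import math

print(math.pi)      # the pi constant, accessed through the math namespace
print(math.cos(0))  # the cosine function: cos(0) = 1.0
```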
By the way, to know which Python version you are running, we can use one of the following modules:
```python
import sys
sys.version
```
'3.4.1 |Anaconda 2.0.1 (64-bit)| (default, May 19 2014, 13:02:30) [MSC v.1600 64 bit (AMD64)]'
And if you are in an IPython session:
```python
from IPython import sys_info
print(sys_info())
```
{'commit_hash': '681fd77',
'commit_source': 'installation',
'default_encoding': 'cp1252',
'ipython_path': 'C:\\Anaconda3\\lib\\site-packages\\IPython',
'ipython_version': '2.1.0',
'os_name': 'nt',
'platform': 'Windows-7-6.1.7601-SP1',
'sys_executable': 'C:\\Anaconda3\\python.exe',
'sys_platform': 'win32',
'sys_version': '3.4.1 |Anaconda 2.0.1 (64-bit)| (default, May 19 2014, '
'13:02:30) [MSC v.1600 64 bit (AMD64)]'}
The first option gives information about the Python version; the latter also includes the IPython version, operating system, etc.
## Object-oriented programming
Python is designed as an object-oriented programming (OOP) language. OOP is a paradigm that represents concepts as "objects" that have data fields (attributes that describe the object) and associated procedures known as methods.
This means that all elements in Python are objects and they have attributes which can be accessed with the dot (.) operator after the name of the object. We already experimented with that when we imported the module `sys`: it became an object, and we accessed one of its attributes: `sys.version`.
OOP as a paradigm is much more than defining objects, attributes, and methods, but for now this is enough to get going with Python.
## Python and IPython help
To get help about any Python command, use `help()`:
```python
help(math.degrees)
```
Help on built-in function degrees in module math:
degrees(...)
degrees(x)
Convert angle x from radians to degrees.
Or, if you are in the IPython environment, simply add '?' to the function and a window will open at the bottom of your browser with the same help content:
```python
math.degrees?
```
And if you add a second '?' to the statement you get access to the original script file of the function (an advantage of an open source language), unless that function is a built-in function that does not have a script file, which is the case of the standard modules in Python (but you can access the Python source code if you want; it just does not come with the standard program for installation).
So, let's see this feature with another function:
```python
import scipy.fftpack
scipy.fftpack.fft??
```
To know all the attributes of an object, for example all the functions available in `math`, we can use the function `dir`:
```python
print(dir(math))
```
['__doc__', '__loader__', '__name__', '__package__', '__spec__', 'acos', 'acosh', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'ceil', 'copysign', 'cos', 'cosh', 'degrees', 'e', 'erf', 'erfc', 'exp', 'expm1', 'fabs', 'factorial', 'floor', 'fmod', 'frexp', 'fsum', 'gamma', 'hypot', 'isfinite', 'isinf', 'isnan', 'ldexp', 'lgamma', 'log', 'log10', 'log1p', 'log2', 'modf', 'pi', 'pow', 'radians', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'trunc']
### Tab completion in IPython
IPython has tab completion: start typing the name of the command (object) and press `tab` to see the names of the objects available with these initial letters. When the name of the object is typed followed by a dot (`math.`), pressing `tab` will show all the available attributes; scroll down to the desired attribute and press `Enter` to select it.
### The four most helpful commands in IPython
These are the most helpful commands in IPython (from [IPython tutorial](http://ipython.org/ipython-doc/dev/interactive/tutorial.html)):
- `?` : Introduction and overview of IPython’s features.
- `%quickref` : Quick reference.
- `help` : Python’s own help system.
- `object?` : Details about ‘object’, use ‘object??’ for extra details.
[See these IPython Notebooks for more on IPython and the Notebook capabilities](http://nbviewer.ipython.org/github/ipython/ipython/tree/master/examples/Notebook/).
### Comments
Comments in Python start with the hash character, #, and extend to the end of the physical line:
```python
# Import the math library to access more math stuff
import math
math.pi # this is the pi constant; a useless comment since this is obvious
```
3.141592653589793
To insert comments spanning more than one line, use a multi-line string with a pair of matching triple-quotes: `"""` or `'''` (we will see the string data type later). A typical use of a multi-line comment is as documentation strings and are meant for anyone reading the code:
```python
"""Documentation strings are typically written like that.
A docstring is a string literal that occurs as the first statement
in a module, function, class, or method definition.
"""
```
'Documentation strings are typically written like that.\n\nA docstring is a string literal that occurs as the first statement\nin a module, function, class, or method definition.\n\n'
A docstring like the one above is useless and its output as a standalone statement looks ugly in the IPython Notebook, but you will see its real importance when reading and writing code.
Commenting code is an important step to make it more readable, something Python cares a lot about.
There is a style guide for writing Python code ([PEP 8](http://www.python.org/dev/peps/pep-0008/)) with a section about [how to write comments](http://www.python.org/dev/peps/pep-0008/#comments).
### Magic functions
IPython has a set of predefined ‘magic functions’ that you can call with a command line style syntax.
There are two kinds of magics, line-oriented and cell-oriented.
Line magics are prefixed with the % character and work much like OS command-line calls: they get as an argument the rest of the line, where arguments are passed without parentheses or quotes.
Cell magics are prefixed with a double %%, and they are functions that get as an argument not only the rest of the line, but also the lines below it in a separate argument.
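As a quick illustration (`%timeit` and `%%timeit` are among the standard IPython magics; output omitted here):
```python
# Line magic: works on a single line, e.g. timing one expression
%timeit sum(range(1000))

# A cell magic such as %%timeit would instead be placed on the first line
# of a cell and would time all the lines below it.
```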
## Assignment and expressions
The equal sign ('=') is used to assign a value to a variable. Afterwards, no result is displayed before the next interactive prompt:
```python
x = 1
```
Spaces around the '=' are optional but they help readability.
To see the value of the variable, call it again or use the print function:
```python
x
```
1
```python
print(x)
```
1
Of course, it is the last assignment that holds:
```python
x = 2
x = 3
x
```
3
In mathematics '=' is the symbol for identity, but in computer programming '=' is used for assignment; it means that the right part of the expression is assigned to its left part.
For example, 'x=x+1' does not make sense in mathematics but it does in computer programming:
```python
x = 1
print(x)
x = x + 1
print(x)
```
1
2
A value can be assigned to several variables simultaneously:
```python
x = y = 4
print(x)
print(y)
```
4
4
Several values can be assigned to several variables at once:
```python
x, y = 5, 6
print(x)
print(y)
```
5
6
And with that, you can do (!):
```python
x, y = y, x
print(x)
print(y)
```
6
5
Variables must be “defined” (assigned a value) before they can be used, or an error will occur:
```python
x = z
```
## Variables and types
There are different types of built-in objects in Python (and remember that everything in Python is an object):
```python
import types
print(dir(types))
```
['BuiltinFunctionType', 'BuiltinMethodType', 'CodeType', 'DynamicClassAttribute', 'FrameType', 'FunctionType', 'GeneratorType', 'GetSetDescriptorType', 'LambdaType', 'MappingProxyType', 'MemberDescriptorType', 'MethodType', 'ModuleType', 'SimpleNamespace', 'TracebackType', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', '_calculate_meta', 'new_class', 'prepare_class']
Let's see some of them now.
### Numbers: int, float, complex
Numbers can be an integer (int), a float, or a complex (with an imaginary part).
Let's use the function `type` to show the type of a number (and later of any other object):
```python
type(6)
```
int
A float is a non-integer number:
```python
math.pi
```
3.141592653589793
```python
type(math.pi)
```
float
Python (IPython) is showing `math.pi` with only 15 decimal places, but internally a float is represented with higher precision.
Floating point numbers in Python are implemented using a double (eight bytes) word; the precision and internal representation of floating point numbers are machine specific and are available in:
```python
sys.float_info
```
sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
Be aware that floating-point numbers can be tricky in computers:
```python
0.1 + 0.2
```
0.30000000000000004
```python
0.1 + 0.2 - 0.3
```
5.551115123125783e-17
These results are not correct (and the problem is not due to Python). The error arises from the fact that floating-point numbers are represented in computer hardware as base 2 (binary) fractions and most decimal fractions cannot be represented exactly as binary fractions. As a consequence, decimal floating-point numbers are only approximated by the binary floating-point numbers actually stored in the machine. [See here for more on this issue](http://docs.python.org/2/tutorial/floatingpoint.html).
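A simple way to see the stored approximation (a small illustrative example):
```python
print(format(0.1, '.20f'))   # shows that 0.1 is not stored exactly
print(0.1 + 0.2 == 0.3)      # False, because of the binary approximations
```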
A complex number has real and imaginary parts:
```python
1+2j
```
(1+2j)
```python
print(type(1+2j))
```
<class 'complex'>
Each part of a complex number is represented as a floating-point number. We can see them using the attributes `.real` and `.imag`:
```python
print((1+2j).real)
print((1+2j).imag)
```
1.0
2.0
### Strings
Strings can be enclosed in single quotes or double quotes:
```python
s = 'string (str) is a built-in type in Python'
s
```
'string (str) is a built-in type in Python'
```python
type(s)
```
str
Strings enclosed in single and double quotes are equivalent, but it may be easier to use one instead of the other:
```python
'string (str) is a Python's built-in type'
```
```python
"string (str) is a Python's built-in type"
```
"string (str) is a Python's built-in type"
But you could have done that using the Python escape character '\':
```python
'string (str) is a Python\'s built-in type'
```
"string (str) is a Python's built-in type"
Strings can be concatenated (glued together) with the + operator, and repeated with *:
```python
s = 'P' + 'y' + 't' + 'h' + 'o' + 'n'
print(s)
print(s*5)
```
Python
PythonPythonPythonPythonPython
Strings can be subscripted (indexed); like in C, the first character of a string has subscript (index) 0:
```python
print('s[0] = ', s[0], ' (s[index], start at 0)')
print('s[5] = ', s[5])
print('s[-1] = ', s[-1], ' (last element)')
print('s[:] = ', s[:], ' (all elements)')
print('s[1:] = ', s[1:], ' (from this index (inclusive) till the last (inclusive))')
print('s[2:4] = ', s[2:4], ' (from first index (inclusive) till second index (exclusive))')
print('s[:2] = ', s[:2], ' (till this index, exclusive)')
print('s[:10] = ', s[:10], ' (Python handles the index if it is larger than the string length)')
print('s[-10:] = ', s[-10:])
print('s[0:5:2] = ', s[0:5:2], ' (s[ini:end:step])')
print('s[::2] = ', s[::2], ' (s[::step], initial and final indexes can be omitted)')
print('s[0:5:-1] = ', s[::-1], ' (s[::-step] reverses the string)')
print('s[:2] + s[2:] = ', s[:2] + s[2:], ' (because of Python indexing, this sounds natural)')
```
s[0] = P (s[index], start at 0)
s[5] = n
s[-1] = n (last element)
s[:] = Python (all elements)
s[1:] = ython (from this index (inclusive) till the last (inclusive))
s[2:4] = th (from first index (inclusive) till second index (exclusive))
s[:2] = Py (till this index, exclusive)
s[:10] = Python (Python handles the index if it is larger than the string length)
s[-10:] = Python
s[0:5:2] = Pto (s[ini:end:step])
s[::2] = Pto (s[::step], initial and final indexes can be omitted)
s[0:5:-1] = nohtyP (s[::-step] reverses the string)
s[:2] + s[2:] = Python (because of Python indexing, this sounds natural)
### len()
Python has a built-in function to get the number of items of a sequence:
```python
help(len)
```
Help on built-in function len in module builtins:
len(...)
len(object)
Return the number of items of a sequence or mapping.
```python
s = 'Python'
len(s)
```
6
The function len() helps to understand how the backward indexing works in Python.
The index s[-i] should be understood as s[len(s) - i] rather than accessing directly the i-th element from back to front. This is why the last element of a string is s[-1]:
```python
print('s = ', s)
print('len(s) = ', len(s))
print('len(s)-1 = ',len(s) - 1)
print('s[-1] = ', s[-1])
print('s[len(s) - 1] = ', s[len(s) - 1])
```
s = Python
len(s) = 6
len(s)-1 = 5
s[-1] = n
s[len(s) - 1] = n
Or, strings can be surrounded in a pair of matching triple-quotes: """ or '''. End of lines do not need to be escaped when using triple-quotes, but they will be included in the string. This is how we created a multi-line comment earlier:
```python
"""Strings can be surrounded in a pair of matching triple-quotes: \""" or '''.
End of lines do not need to be escaped when using triple-quotes,
but they will be included in the string.
"""
```
'Strings can be surrounded in a pair of matching triple-quotes: """ or \'\'\'.\n\nEnd of lines do not need to be escaped when using triple-quotes,\nbut they will be included in the string.\n\n'
### Lists
Values can be grouped together using different compound types; one of them is the list, which can be written as a list of comma-separated values between square brackets. List items need not all have the same type:
```python
x = ['spam', 'eggs', 100, 1234]
x
```
['spam', 'eggs', 100, 1234]
Lists can be indexed and the same indexing rules we saw for strings are applied:
```python
x[0]
```
'spam'
The function len() works for lists:
```python
len(x)
```
4
### Tuples
A tuple consists of a number of values separated by commas, for instance:
```python
t = ('spam', 'eggs', 100, 1234)
t
```
('spam', 'eggs', 100, 1234)
The tuple type is why multiple assignments in a single line work: elements separated by commas (with or without surrounding parentheses) form a tuple, and in an expression with an '=', the right-side tuple is assigned to the left-side tuple:
```python
a, b = 1, 2
print('a = ', a, '\nb = ', b)
```
a = 1
b = 2
Is the same as:
```python
(a, b) = (1, 2)
print('a = ', a, '\nb = ', b)
```
a = 1
b = 2
### Sets
Python also includes a data type for sets. A set is an unordered collection with no duplicate elements.
```python
basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
fruit = set(basket) # create a set without duplicates
fruit
```
{'apple', 'banana', 'orange', 'pear'}
As a set is an unordered collection, it cannot be indexed like lists and tuples.
```python
set(['orange', 'pear', 'apple', 'banana'])
'orange' in fruit # fast membership testing
```
True
### Dictionaries
A dictionary is a collection of elements organized as keys and values. Unlike lists and tuples, which are indexed by a range of numbers, dictionaries are indexed by their keys:
```python
tel = {'jack': 4098, 'sape': 4139}
tel
```
{'sape': 4139, 'jack': 4098}
```python
tel['guido'] = 4127
tel
```
{'guido': 4127, 'sape': 4139, 'jack': 4098}
```python
tel['jack']
```
4098
```python
del tel['sape']
tel['irv'] = 4127
tel
```
{'guido': 4127, 'irv': 4127, 'jack': 4098}
```python
tel.keys()
```
dict_keys(['guido', 'irv', 'jack'])
```python
'guido' in tel
```
True
The dict() constructor builds dictionaries directly from sequences of key-value pairs:
```python
tel = dict([('sape', 4139), ('guido', 4127), ('jack', 4098)])
tel
```
{'guido': 4127, 'sape': 4139, 'jack': 4098}
## Built-in Constants
- **False** : false value of the bool type
- **True** : true value of the bool type
- **None** : sole value of types.NoneType. None is frequently used to represent the absence of a value.
In computer science, the Boolean or logical data type is composed of two values, true and false, intended to represent the values of logic and Boolean algebra. In Python, 1 and 0 can also be used in most situations as equivalent to the Boolean values.
## Logical (Boolean) operators
### and, or, not
- **and** : logical AND operator; `a and b` is true if both operands are true.
- **or** : logical OR operator; `a or b` is true if at least one of the two operands is true (non-zero).
- **not** : logical NOT operator; it reverses the logical state of its operand, so `not a` is false if `a` is true (see the short example after this list).
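A minimal example of these operators, using the Boolean constants introduced above:
```python
a, b = True, False

print(a and b)   # False: both operands must be true
print(a or b)    # True: at least one operand is true
print(not a)     # False: not reverses the logical state of its operand
```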
### Comparisons
The following comparison operations are supported by objects in Python:
- **==** : equal
- **!=** : not equal
- **<** : strictly less than
- **<=** : less than or equal
- **\>** : strictly greater than
- **\>=** : greater than or equal
- **is** : object identity
- **is not** : negated object identity
```python
True == False
```
False
```python
not True == False
```
True
```python
1 < 2 > 1
```
True
```python
True != (False or True)
```
False
```python
True != False or True
```
True
## Indentation and whitespace
In Python, statement grouping is done by indentation (this is mandatory), conventionally using spaces rather than tabs. Indentation is also recommended for aligning function calls that span more than one line, for better clarity.
We will see examples of indentation in the next section.
## Control of flow
### `if`...`elif`...`else`
Conditional statements (to perform something only when a condition is True or False) can be implemented using the `if` statement:
```
if expression:
statement
elif expression:
statement
else:
statement
```
`elif` (one or more) and `else` are optional.
The indentation is obligatory.
For example:
```python
if True:
pass
```
Which does nothing useful.
Let's use the `if`...`elif`...`else` statements to categorize the [body mass index](http://en.wikipedia.org/wiki/Body_mass_index) of a person:
```python
# body mass index
weight = 100 # kg
height = 1.70 # m
bmi = weight / height**2
```
```python
if bmi < 15:
c = 'very severely underweight'
elif 15 <= bmi < 16:
c = 'severely underweight'
elif 16 <= bmi < 18.5:
c = 'underweight'
elif 18.5 <= bmi < 25:
c = 'normal'
elif 25 <= bmi < 30:
c = 'overweight'
elif 30 <= bmi < 35:
c = 'moderately obese'
elif 35 <= bmi < 40:
c = 'severely obese'
else:
c = 'very severely obese'
print('For a weight of {0:.1f} kg and a height of {1:.2f} m,\n\
the body mass index (bmi) is {2:.1f} kg/m2,\nwhich is considered {3:s}.'\
.format(weight, height, bmi, c))
```
For a weight of 100.0 kg and a height of 1.70 m,
the body mass index (bmi) is 34.6 kg/m2,
which is considered moderately obese.
### for
The `for` statement iterates over the items of a sequence, executing a block of statements once for each item (a loop):
```
for iterating_var in sequence:
statements
```
```python
for i in [3, 2, 1, 'go!']:
    print(i)
```
3
2
1
go!
```python
for letter in 'Python':
    print(letter)
```
P
y
t
h
o
n
#### The `range()` function
The built-in function range() is useful when we need a sequence of numbers, for example to iterate over. In Python 3 it returns an immutable sequence object (not a list) representing an arithmetic progression:
```python
help(range)
```
Help on class range in module builtins:
class range(object)
| range(stop) -> range object
| range(start, stop[, step]) -> range object
|
| Return a sequence of numbers from start to stop by step.
|
| Methods defined here:
|
| __contains__(self, key, /)
| Return key in self.
|
| __eq__(self, value, /)
| Return self==value.
|
| __ge__(self, value, /)
| Return self>=value.
|
| __getattribute__(self, name, /)
| Return getattr(self, name).
|
| __getitem__(self, key, /)
| Return self[key].
|
| __gt__(self, value, /)
| Return self>value.
|
| __hash__(self, /)
| Return hash(self).
|
| __iter__(self, /)
| Implement iter(self).
|
| __le__(self, value, /)
| Return self<=value.
|
| __len__(self, /)
| Return len(self).
|
| __lt__(self, value, /)
| Return self<value.
|
| __ne__(self, value, /)
| Return self!=value.
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| __reduce__(...)
|
| __repr__(self, /)
| Return repr(self).
|
| __reversed__(...)
| Return a reverse iterator.
|
| count(...)
| rangeobject.count(value) -> integer -- return number of occurrences of value
|
| index(...)
| rangeobject.index(value, [start, [stop]]) -> integer -- return index of value.
| Raise ValueError if the value is not present.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| start
|
| step
|
| stop
```python
range(10)
```
range(0, 10)
```python
range(1, 10, 2)
```
range(1, 10, 2)
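To see all the elements generated by range() at once, it can be converted to a list (a small added example):
```python
list(range(1, 10, 2))   # materialize the lazy range into a list
```
[1, 3, 5, 7, 9]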
```python
for i in range(10):
n2 = i**2
print(n2),
```
0
1
4
9
16
25
36
49
64
81
### while
The `while` statement repeats a block of code as long as a condition remains true (unlike the `for` statement, which iterates once over each item of a sequence):
```
while expression:
statement
```
Let's generate the Fibonacci series using a `while` loop:
```python
# Fibonacci series: the sum of two elements defines the next
a, b = 0, 1
while b < 1000:
print(b, end=' ')
a, b = b, a+b
```
1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987
## Function definition
A function in a programming language is a piece of code that performs a specific task. Functions reduce code duplication, make code easier to reuse, and help decompose complex problems into simpler parts. The use of functions contributes to the clarity of the code.
A function is created with the `def` keyword and the statements in the block of the function must be indented:
```python
def function():
pass
```
By construction, this function does nothing when called:
```python
function()
```
The general syntax of a function definition is:
```
def function_name( parameters ):
"""Function docstring.
The help for the function
"""
function body
return variables
```
A more useful function:
```python
def fibo(N):
"""Fibonacci series: the sum of two elements defines the next.
    The series is calculated up to the input parameter N and
    returned as an output variable.
"""
a, b, c = 0, 1, []
while b < N:
c.append(b)
a, b = b, a + b
return c
```
```python
fibo(100)
```
[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
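Because we wrote a docstring for fibo, the built-in help() function displays it (a small added example):
```python
help(fibo)
```
Help on function fibo in module __main__:

fibo(N)
    Fibonacci series: the sum of two elements defines the next.
    The series is calculated up to the input parameter N and
    returned as an output variable.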
Let's implement the body mass index calculation and categorization as a function:
```python
def bmi(weight, height):
"""Body mass index calculus and categorization.
Enter the weight in kg and the height in m.
See http://en.wikipedia.org/wiki/Body_mass_index
"""
bmi = weight / height**2
if bmi < 15:
c = 'very severely underweight'
elif 15 <= bmi < 16:
c = 'severely underweight'
elif 16 <= bmi < 18.5:
c = 'underweight'
elif 18.5 <= bmi < 25:
c = 'normal'
elif 25 <= bmi < 30:
c = 'overweight'
elif 30 <= bmi < 35:
c = 'moderately obese'
elif 35 <= bmi < 40:
c = 'severely obese'
else:
c = 'very severely obese'
s = 'For a weight of {0:.1f} kg and a height of {1:.2f} m,\
the body mass index (bmi) is {2:.1f} kg/m2,\
which is considered {3:s}.'\
.format(weight, height, bmi, c)
print(s)
```
```python
bmi(73, 1.70)
```
For a weight of 73.0 kg and a height of 1.70 m, the body mass index (bmi) is 25.3 kg/m2, which is considered overweight.
## Numeric data manipulation with Numpy
Numpy is the fundamental package for scientific computing in Python and provides an N-dimensional array type that is convenient for working with numerical data. With Numpy it is much easier and faster to work with numbers grouped as 1-D arrays (vectors), 2-D arrays (like a table or matrix), or higher dimensions. Let's create 1-D and 2-D arrays in Numpy:
```python
import numpy as np
```
```python
x1d = np.array([1, 2, 3, 4, 5, 6])
print(type(x1d))
x1d
```
<class 'numpy.ndarray'>
array([1, 2, 3, 4, 5, 6])
```python
x2d = np.array([[1, 2, 3], [4, 5, 6]])
x2d
```
array([[1, 2, 3],
[4, 5, 6]])
len() and the Numpy functions size(), shape() and ndim() give information about the number of elements and the structure of a Numpy array:
```python
print('1-d array:')
print(x1d)
print('len(x1d) = ', len(x1d))
print('np.size(x1d) = ', np.size(x1d))
print('np.shape(x1d) = ', np.shape(x1d))
print('np.ndim(x1d) = ', np.ndim(x1d))
print('\n2-d array:')
print(x2d)
print('len(x2d) = ', len(x2d))
print('np.size(x2d) = ', np.size(x2d))
print('np.shape(x2d) = ', np.shape(x2d))
print('np.ndim(x2d) = ', np.ndim(x2d))
```
1-d array:
[1 2 3 4 5 6]
len(x1d) = 6
np.size(x1d) = 6
np.shape(x1d) = (6,)
np.ndim(x1d) = 1
2-d array:
[[1 2 3]
[4 5 6]]
len(x2d) = 2
np.size(x2d) = 6
np.shape(x2d) = (2, 3)
np.ndim(x2d) = 2
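Numpy arrays can be indexed and sliced with the same rules we saw for lists, extended to several dimensions (row, column); a short added example using the arrays above:
```python
print(x1d[0], x1d[-1])   # first and last elements of the 1-D array
print(x2d[1, 2])         # row 1, column 2 of the 2-D array
print(x2d[:, 0])         # first column of the 2-D array
```
1 6
6
[1 4]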
Create random data
```python
x = np.random.randn(4,3)
x
```
array([[-0.84313748, 0.7128172 , 0.26805518],
[ 1.66552011, 0.81119094, 0.72512201],
[-2.6924966 , 1.75984096, 0.58488695],
[ 0.64630267, -0.8719821 , -0.8510621 ]])
Joining (stacking together) arrays
```python
x = np.random.randint(0, 5, size=(2, 3))
print(x)
y = np.random.randint(5, 10, size=(2, 3))
print(y)
```
[[0 4 0]
[3 1 2]]
[[6 9 6]
[6 5 5]]
```python
np.vstack((x,y))
```
array([[0, 4, 0],
[3, 1, 2],
[6, 9, 6],
[6, 5, 5]])
```python
np.hstack((x,y))
```
array([[0, 4, 0, 6, 9, 6],
[3, 1, 2, 6, 5, 5]])
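Arithmetic with Numpy arrays is elementwise, which is one of the main reasons to prefer them over plain lists for numerical work; a small added example using the random integer arrays x and y above (the actual numbers will differ between runs):
```python
print(x + y)   # elementwise sum
print(x * 2)   # multiply every element by 2
print(x * y)   # elementwise (not matrix) product
```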
Create equally spaced data
```python
np.arange(start = 1, stop = 10, step = 2)
```
array([1, 3, 5, 7, 9])
```python
np.linspace(start = 0, stop = 1, num = 11)
```
array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ])
### Interpolation
Consider the following data:
```python
y = [5, 4, 10, 8, 1, 10, 2, 7, 1, 3]
```
Suppose we want to create data in between the given data points (interpolation); for instance, let's double the resolution of the data by generating twice as many data points:
```python
t = np.linspace(0, len(y), len(y)) # time vector for the original data
tn = np.linspace(0, len(y), 2 * len(y)) # new time vector for the new time-normalized data
yn = np.interp(tn, t, y) # new time-normalized data
yn
```
array([ 5. , 4.52631579, 4.05263158, 6.52631579, 9.36842105,
9.26315789, 8.31578947, 5.78947368, 2.47368421, 3.36842105,
7.63157895, 8.31578947, 4.52631579, 2.78947368, 5.15789474,
6.36842105, 3.52631579, 1.10526316, 2.05263158, 3. ])
The key is the Numpy `interp` function, from its help:
interp(x, xp, fp, left=None, right=None)
One-dimensional linear interpolation.
Returns the one-dimensional piecewise linear interpolant to a function with given values at discrete data-points.
A plot of the data will show what we have done:
```python
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(10,5))
plt.plot(t, y, 'bo-', lw=2, label='original data')
plt.plot(tn, yn, '.-', color=[1, 0, 0, .5], lw=2, label='interpolated')
plt.legend(loc='best', framealpha=.5)
plt.show()
```
For more about Numpy, see [http://wiki.scipy.org/Tentative_NumPy_Tutorial](http://wiki.scipy.org/Tentative_NumPy_Tutorial).
## Read and save files
There are two kinds of computer files: text files and binary files:
> Text file: computer file where the content is structured as a sequence of lines of electronic text. A text file contains characters (letters, numbers, and symbols) stored according to a character encoding such as Unicode (a computing industry standard for the consistent encoding, representation and handling of text expressed in most of the world's writing systems).
>
> Binary file: computer file where the content is encoded in binary form, a sequence of integers representing byte values.
Let's see how to save and read numeric data stored in a text file:
**Using plain Python**
```python
f = open("newfile.txt", "w") # open file for writing
f.write("This is a test\n") # save to file
f.write("And here is another line\n") # save to file
f.close()
f = open('newfile.txt', 'r')  # open file for reading
text = f.read()               # read the whole content
f.close()                     # close the file
print(text)
```
This is a test
And here is another line
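A more robust idiom for handling files is the with statement, which closes the file automatically even if an error occurs inside the block; a short added example:
```python
with open('newfile.txt', 'w') as f:   # the file is closed automatically at the end of the block
    f.write('This is a test\n')
    f.write('And here is another line\n')

with open('newfile.txt', 'r') as f:
    print(f.read())
```
This is a test
And here is another line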
```python
help(open)
```
Help on built-in function open in module io:
open(...)
open(file, mode='r', buffering=-1, encoding=None,
errors=None, newline=None, closefd=True, opener=None) -> file object
Open file and return a stream. Raise IOError upon failure.
file is either a text or byte string giving the name (and the path
if the file isn't in the current working directory) of the file to
be opened or an integer file descriptor of the file to be
wrapped. (If a file descriptor is given, it is closed when the
returned I/O object is closed, unless closefd is set to False.)
mode is an optional string that specifies the mode in which the file
is opened. It defaults to 'r' which means open for reading in text
mode. Other common values are 'w' for writing (truncating the file if
it already exists), 'x' for creating and writing to a new file, and
'a' for appending (which on some Unix systems, means that all writes
append to the end of the file regardless of the current seek position).
In text mode, if encoding is not specified the encoding used is platform
dependent: locale.getpreferredencoding(False) is called to get the
current locale encoding. (For reading and writing raw bytes use binary
mode and leave encoding unspecified.) The available modes are:
========= ===============================================================
Character Meaning
--------- ---------------------------------------------------------------
'r' open for reading (default)
'w' open for writing, truncating the file first
'x' create a new file and open it for writing
'a' open for writing, appending to the end of the file if it exists
'b' binary mode
't' text mode (default)
'+' open a disk file for updating (reading and writing)
'U' universal newline mode (deprecated)
========= ===============================================================
The default mode is 'rt' (open for reading text). For binary random
access, the mode 'w+b' opens and truncates the file to 0 bytes, while
'r+b' opens the file without truncation. The 'x' mode implies 'w' and
raises an `FileExistsError` if the file already exists.
Python distinguishes between files opened in binary and text modes,
even when the underlying operating system doesn't. Files opened in
binary mode (appending 'b' to the mode argument) return contents as
bytes objects without any decoding. In text mode (the default, or when
't' is appended to the mode argument), the contents of the file are
returned as strings, the bytes having been first decoded using a
platform-dependent encoding or using the specified encoding if given.
'U' mode is deprecated and will raise an exception in future versions
of Python. It has no effect in Python 3. Use newline to control
universal newlines mode.
buffering is an optional integer used to set the buffering policy.
Pass 0 to switch buffering off (only allowed in binary mode), 1 to select
line buffering (only usable in text mode), and an integer > 1 to indicate
the size of a fixed-size chunk buffer. When no buffering argument is
given, the default buffering policy works as follows:
* Binary files are buffered in fixed-size chunks; the size of the buffer
is chosen using a heuristic trying to determine the underlying device's
"block size" and falling back on `io.DEFAULT_BUFFER_SIZE`.
On many systems, the buffer will typically be 4096 or 8192 bytes long.
* "Interactive" text files (files for which isatty() returns True)
use line buffering. Other text files use the policy described above
for binary files.
encoding is the name of the encoding used to decode or encode the
file. This should only be used in text mode. The default encoding is
platform dependent, but any encoding supported by Python can be
passed. See the codecs module for the list of supported encodings.
errors is an optional string that specifies how encoding errors are to
be handled---this argument should not be used in binary mode. Pass
'strict' to raise a ValueError exception if there is an encoding error
(the default of None has the same effect), or pass 'ignore' to ignore
errors. (Note that ignoring encoding errors can lead to data loss.)
See the documentation for codecs.register or run 'help(codecs.Codec)'
for a list of the permitted encoding error strings.
newline controls how universal newlines works (it only applies to text
mode). It can be None, '', '\n', '\r', and '\r\n'. It works as
follows:
* On input, if newline is None, universal newlines mode is
enabled. Lines in the input can end in '\n', '\r', or '\r\n', and
these are translated into '\n' before being returned to the
caller. If it is '', universal newline mode is enabled, but line
endings are returned to the caller untranslated. If it has any of
the other legal values, input lines are only terminated by the given
string, and the line ending is returned to the caller untranslated.
* On output, if newline is None, any '\n' characters written are
translated to the system default line separator, os.linesep. If
newline is '' or '\n', no translation takes place. If newline is any
of the other legal values, any '\n' characters written are translated
to the given string.
If closefd is False, the underlying file descriptor will be kept open
when the file is closed. This does not work when a file name is given
and must be True in that case.
A custom opener can be used by passing a callable as *opener*. The
underlying file descriptor for the file object is then obtained by
calling *opener* with (*file*, *flags*). *opener* must return an open
file descriptor (passing os.open as *opener* results in functionality
similar to passing None).
open() returns a file object whose type depends on the mode, and
through which the standard file operations such as reading and writing
are performed. When open() is used to open a file in a text mode ('w',
'r', 'wt', 'rt', etc.), it returns a TextIOWrapper. When used to open
a file in a binary mode, the returned class varies: in read binary
mode, it returns a BufferedReader; in write binary and append binary
modes, it returns a BufferedWriter, and in read/write mode, it returns
a BufferedRandom.
It is also possible to use a string or bytearray as a file for both
reading and writing. For strings StringIO can be used like a file
opened in a text mode, and for bytes a BytesIO can be used like a file
opened in a binary mode.
**Using Numpy**
```python
import numpy as np
data = np.random.randn(3,3)
np.savetxt('myfile.txt', data, fmt="%12.6G") # save to file
data = np.genfromtxt('myfile.txt', unpack=True) # read from file
data
```
array([[-0.113108, 0.510521, -0.240193],
[-0.448538, -0.348518, -1.97899 ],
[ 0.21628 , -0.572211, -1.49621 ]])
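Numpy can also store arrays in its own binary format (.npy), which preserves the exact values and data type; a small added example:
```python
np.save('myfile.npy', data)     # save the array to a binary .npy file
data2 = np.load('myfile.npy')   # read it back
np.allclose(data, data2)        # check that the values are identical
```
True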
## Plotting with matplotlib
Matplotlib is the most widely used package for plotting data in Python. Let's see some examples of it.
```python
import matplotlib.pyplot as plt
```
Use the IPython magic `%matplotlib inline` (or `%matplotlib notebook` for an interactive figure) to plot a figure inline in the notebook with the rest of the text:
```python
%matplotlib notebook
```
```python
import numpy as np
```
```python
t = np.linspace(0, 0.99, 100)
x = np.sin(2 * np.pi * 2 * t)
n = np.random.randn(100) / 5
plt.figure(figsize=(12, 8))
plt.plot(t, x, label='sine', linewidth=2)
plt.plot(t, x + n, label='noisy sine', linewidth=2)
plt.annotate(s='$sin(4 \pi t)$', xy=(.2, 1), fontsize=20, color=[0, 0, 1])
plt.legend(loc='best', framealpha=.5)
plt.xlabel('Time [s]')
plt.ylabel('Amplitude')
plt.title('Data plotting using matplotlib')
plt.show()
```
Use the IPython magic `%matplotlib qt` to plot a figure in a separate window (from where you will be able to change some of the figure properties):
```python
%matplotlib qt
```
```python
mu, sigma = 10, 2
x = mu + sigma * np.random.randn(1000)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
ax1.plot(x, 'ro')
ax1.set_title('Data')
ax1.grid()
n, bins, patches = ax2.hist(x, 25, normed=True, facecolor='r') # histogram
ax2.set_xlabel('Bins')
ax2.set_ylabel('Probability')
ax2.set_title('Histogram')
fig.suptitle('Another example using matplotlib', fontsize=18, y=1)
ax2.grid()
plt.tight_layout()
plt.show()
```
And a window with the following figure should appear:
```python
from IPython.display import Image
Image(url="./../images/plot.png")
```
You can switch back and forth between inline and separate figure using the `%matplotlib` magic commands used above. There are plenty more examples with the source code in the [matplotlib gallery](http://matplotlib.org/gallery.html).
```python
# get back the inline plot
%matplotlib inline
```
## Signal processing with Scipy
The Scipy package has a lot of functionality for scientific computing, among its subpackages: Integration (scipy.integrate), Optimization (scipy.optimize), Interpolation (scipy.interpolate), Fourier Transforms (scipy.fftpack), Signal Processing (scipy.signal), Linear Algebra (scipy.linalg), and Statistics (scipy.stats). As an example, let's see how to use a low-pass Butterworth filter to attenuate high-frequency noise and how differentiating a signal affects its signal-to-noise content. We will also calculate the Fourier transform of these data to look at their frequency content.
```python
from scipy.signal import butter, filtfilt
import scipy.fftpack
freq = 100.
t = np.arange(0,1,.01);
w = 2*np.pi*1 # 1 Hz
y = np.sin(w*t)+0.1*np.sin(10*w*t)
# Butterworth filter
b, a = butter(4, (5/(freq/2)), btype = 'low')
y2 = filtfilt(b, a, y)
# 2nd derivative of the data
ydd = np.diff(y,2)*freq*freq # raw data
y2dd = np.diff(y2,2)*freq*freq # filtered data
# frequency content
yfft = np.abs(scipy.fftpack.fft(y))/(y.size/2); # raw data
y2fft = np.abs(scipy.fftpack.fft(y2))/(y.size/2); # filtered data
freqs = scipy.fftpack.fftfreq(y.size, 1./freq)
yddfft = np.abs(scipy.fftpack.fft(ydd))/(ydd.size/2);
y2ddfft = np.abs(scipy.fftpack.fft(y2dd))/(ydd.size/2);
freqs2 = scipy.fftpack.fftfreq(ydd.size, 1./freq)
```
And the plots:
```python
fig, ((ax1,ax2),(ax3,ax4)) = plt.subplots(2, 2, figsize=(10, 4))
ax1.set_title('Temporal domain', fontsize=14)
ax1.plot(t, y, 'r', linewidth=2, label = 'raw data')
ax1.plot(t, y2, 'b', linewidth=2, label = 'filtered @ 5 Hz')
ax1.set_ylabel('f')
ax1.legend(frameon=False, fontsize=12)
ax2.set_title('Frequency domain', fontsize=14)
ax2.plot(freqs[:yfft.size//4], yfft[:yfft.size//4], 'r', lw=2, label='raw data')
ax2.plot(freqs[:yfft.size//4], y2fft[:yfft.size//4], 'b--', lw=2, label='filtered @ 5 Hz')
ax2.set_ylabel('FFT(f)')
ax2.legend(frameon=False, fontsize=12)
ax3.plot(t[:-2], ydd, 'r', linewidth=2, label = 'raw')
ax3.plot(t[:-2], y2dd, 'b', linewidth=2, label = 'filtered @ 5 Hz')
ax3.set_xlabel('Time [s]'); ax3.set_ylabel("f ''")
ax4.plot(freqs2[:yddfft.size//4], yddfft[:yddfft.size//4], 'r', lw=2, label='raw')
ax4.plot(freqs2[:yddfft.size//4], y2ddfft[:yddfft.size//4], 'b--', lw=2, label='filtered @ 5 Hz')
ax4.set_xlabel('Frequency [Hz]'); ax4.set_ylabel("FFT(f '')")
plt.show()
```
For more about Scipy, see [http://docs.scipy.org/doc/scipy/reference/tutorial/](http://docs.scipy.org/doc/scipy/reference/tutorial/).
## Symbolic mathematics with Sympy
Sympy is a package to perform symbolic mathematics in Python. Let's see some of its features:
```python
from IPython.display import display
import sympy as sym
from sympy.interactive import printing
printing.init_printing()
```
Define some symbols and then create a second-order polynomial function (a.k.a. a parabola):
```python
x, y = sym.symbols('x y')
y = x**2 - 2*x - 3
y
```
Plot the parabola at some given range:
```python
from sympy.plotting import plot
%matplotlib inline
plot(y, (x, -3, 5));
```
And the roots of the parabola are given by:
```python
sym.solve(y, x)
```
$$\begin{bmatrix}-1, & 3\end{bmatrix}$$
We can also do symbolic differentiation and integration:
```python
dy = sym.diff(y, x)
dy
```
```python
sym.integrate(dy, x)
```
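As a small added example, the vertex (minimum) of this parabola can be found by solving dy = 0:
```python
xmin = sym.solve(dy, x)[0]   # x where the derivative is zero
xmin, y.subs(x, xmin)        # the vertex of the parabola
```
(1, -4)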
For example, let's use Sympy to represent three-dimensional rotations. Consider the problem of a coordinate system xyz rotated with respect to another coordinate system XYZ. The single rotations around each axis are illustrated by:
```python
from IPython.display import Image
Image(url="./../images/rotations.png")
```
The single 3D rotation matrices around Z, Y, and X axes can be expressed in Sympy:
```python
from IPython.core.display import Math
from sympy import symbols, cos, sin, Matrix, latex
a, b, g = symbols('alpha beta gamma')
RX = Matrix([[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]])
display(Math(latex('\\mathbf{R_{X}}=') + latex(RX, mat_str = 'matrix')))
RY = Matrix([[cos(b), 0, sin(b)], [0, 1, 0], [-sin(b), 0, cos(b)]])
display(Math(latex('\\mathbf{R_{Y}}=') + latex(RY, mat_str = 'matrix')))
RZ = Matrix([[cos(g), -sin(g), 0], [sin(g), cos(g), 0], [0, 0, 1]])
display(Math(latex('\\mathbf{R_{Z}}=') + latex(RZ, mat_str = 'matrix')))
```
$$\mathbf{R_{X}}=\left[\begin{matrix}1 & 0 & 0\\0 & \cos{\left (\alpha \right )} & - \sin{\left (\alpha \right )}\\0 & \sin{\left (\alpha \right )} & \cos{\left (\alpha \right )}\end{matrix}\right]$$
$$\mathbf{R_{Y}}=\left[\begin{matrix}\cos{\left (\beta \right )} & 0 & \sin{\left (\beta \right )}\\0 & 1 & 0\\- \sin{\left (\beta \right )} & 0 & \cos{\left (\beta \right )}\end{matrix}\right]$$
$$\mathbf{R_{Z}}=\left[\begin{matrix}\cos{\left (\gamma \right )} & - \sin{\left (\gamma \right )} & 0\\\sin{\left (\gamma \right )} & \cos{\left (\gamma \right )} & 0\\0 & 0 & 1\end{matrix}\right]$$
And using Sympy, a sequence of elementary rotations around X, Y, Z axes is given by:
```python
RXYZ = RZ*RY*RX
display(Math(latex('\\mathbf{R_{XYZ}}=') + latex(RXYZ, mat_str = 'matrix')))
```
$$\mathbf{R_{XYZ}}=\left[\begin{matrix}\cos{\left (\beta \right )} \cos{\left (\gamma \right )} & \sin{\left (\alpha \right )} \sin{\left (\beta \right )} \cos{\left (\gamma \right )} - \sin{\left (\gamma \right )} \cos{\left (\alpha \right )} & \sin{\left (\alpha \right )} \sin{\left (\gamma \right )} + \sin{\left (\beta \right )} \cos{\left (\alpha \right )} \cos{\left (\gamma \right )}\\\sin{\left (\gamma \right )} \cos{\left (\beta \right )} & \sin{\left (\alpha \right )} \sin{\left (\beta \right )} \sin{\left (\gamma \right )} + \cos{\left (\alpha \right )} \cos{\left (\gamma \right )} & - \sin{\left (\alpha \right )} \cos{\left (\gamma \right )} + \sin{\left (\beta \right )} \sin{\left (\gamma \right )} \cos{\left (\alpha \right )}\\- \sin{\left (\beta \right )} & \sin{\left (\alpha \right )} \cos{\left (\beta \right )} & \cos{\left (\alpha \right )} \cos{\left (\beta \right )}\end{matrix}\right]$$
Suppose there is a rotation only around X ($\alpha$) by $\pi/2$; we can get the numerical value of the rotation matrix by substituting the angle values:
```python
r = RXYZ.subs({a: np.pi/2, b: 0, g: 0})
r
```
$$\left[\begin{matrix}1 & 0 & 0\\0 & 6.12323399573677 \cdot 10^{-17} & -1.0\\0 & 1.0 & 6.12323399573677 \cdot 10^{-17}\end{matrix}\right]$$
And we can prettify this result:
```python
display(Math(latex(r'\mathbf{R_{(\alpha=\pi/2)}}=') +
latex(r.n(chop=True, prec=3), mat_str = 'matrix')))
```
$$\mathbf{R_{(\alpha=\pi/2)}}=\left[\begin{matrix}1.0 & 0 & 0\\0 & 0 & -1.0\\0 & 1.0 & 0\end{matrix}\right]$$
For more about Sympy, see [http://docs.sympy.org/latest/tutorial/](http://docs.sympy.org/latest/tutorial/).
## Data analysis with pandas
> "[pandas](http://pandas.pydata.org/) is a Python package providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python."
To work with labelled data, pandas has a type called DataFrame (basically, a matrix where columns and rows may have names and may hold different data types), which is also the main data type of the software [R](http://www.r-project.org/). For example:
```python
import pandas as pd
```
```python
x = 5*['A'] + 5*['B']
x
```
['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B']
```python
df = pd.DataFrame(np.random.rand(10,2), columns=['Level 1', 'Level 2'] )
df['Group'] = pd.Series(['A']*5 + ['B']*5)
plot = df.boxplot(by='Group')
```
```python
from pandas.plotting import scatter_matrix
df = pd.DataFrame(np.random.randn(100, 3), columns=['A', 'B', 'C'])
plot = scatter_matrix(df, alpha=0.5, figsize=(8, 6), diagonal='kde')
```
pandas is aware that the data are structured, and gives you basic statistics that take this into account, nicely formatted:
```python
df.describe()
```
                A           B           C
count  100.000000  100.000000  100.000000
mean     0.176822   -0.028189    0.079938
std      0.962097    1.056000    0.955360
min     -1.909003   -2.515748   -2.564882
25%     -0.434171   -0.851826   -0.531646
50%      0.113800    0.053892    0.088606
75%      0.661160    0.782833    0.600897
max      2.673605    1.989213    2.621580
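Because pandas knows the group labels, statistics can also be computed per group with the groupby() method; a small added example (df2 and its random values are introduced just for illustration):
```python
df2 = pd.DataFrame(np.random.rand(10, 2), columns=['Level 1', 'Level 2'])
df2['Group'] = pd.Series(['A']*5 + ['B']*5)
df2.groupby('Group').mean()   # mean of each numeric column, per group
```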
For more on pandas, see this tutorial: [http://pandas.pydata.org/pandas-docs/stable/10min.html](http://pandas.pydata.org/pandas-docs/stable/10min.html).
## To learn more about Python
There is a lot of good material on the internet about Python for scientific computing; here is a short list of interesting resources:
- [How To Think Like A Computer Scientist](http://www.openbookproject.net/thinkcs/python/english2e/) or [the interactive edition](http://interactivepython.org/courselib/static/thinkcspy/index.html) (book)
- [Python Scientific Lecture Notes](http://scipy-lectures.github.io/) (lecture notes)
- [Lectures on scientific computing with Python](https://github.com/jrjohansson/scientific-python-lectures#lectures-on-scientific-computing-with-python) (lecture notes)
- [IPython in depth: high-productivity interactive and parallel python](http://youtu.be/bP8ydKBCZiY) (video lectures)
|
State Before: α : Type u
β : Type v
γ : Type ?u.564674
δ : Type ?u.564677
ε : Type ?u.564680
ζ : Type ?u.564683
inst✝ : TopologicalSpace α
⊢ IsClosed (range down) State After: no goals Tactic: simp only [ULift.down_surjective.range_eq, isClosed_univ]
|
[GOAL]
G : Type u_1
α : Type u_2
inst✝⁴ : Group G
inst✝³ : MulAction G α
inst✝² : MeasurableSpace G
inst✝¹ : MeasurableSpace α
inst✝ : MeasurableSMul G α
s : Set α
hs : MeasurableSet s
a : G
⊢ MeasurableSet (a • s)
[PROOFSTEP]
rw [← preimage_smul_inv]
[GOAL]
G : Type u_1
α : Type u_2
inst✝⁴ : Group G
inst✝³ : MulAction G α
inst✝² : MeasurableSpace G
inst✝¹ : MeasurableSpace α
inst✝ : MeasurableSMul G α
s : Set α
hs : MeasurableSet s
a : G
⊢ MeasurableSet ((fun x => a⁻¹ • x) ⁻¹' s)
[PROOFSTEP]
exact measurable_const_smul _ hs
[GOAL]
G₀ : Type u_1
α : Type u_2
inst✝⁴ : GroupWithZero G₀
inst✝³ : MulAction G₀ α
inst✝² : MeasurableSpace G₀
inst✝¹ : MeasurableSpace α
inst✝ : MeasurableSMul G₀ α
s : Set α
hs : MeasurableSet s
a : G₀
ha : a ≠ 0
⊢ MeasurableSet (a • s)
[PROOFSTEP]
rw [← preimage_smul_inv₀ ha]
[GOAL]
G₀ : Type u_1
α : Type u_2
inst✝⁴ : GroupWithZero G₀
inst✝³ : MulAction G₀ α
inst✝² : MeasurableSpace G₀
inst✝¹ : MeasurableSpace α
inst✝ : MeasurableSMul G₀ α
s : Set α
hs : MeasurableSet s
a : G₀
ha : a ≠ 0
⊢ MeasurableSet ((fun x => a⁻¹ • x) ⁻¹' s)
[PROOFSTEP]
exact measurable_const_smul _ hs
[GOAL]
G₀ : Type u_1
α : Type u_2
inst✝⁶ : GroupWithZero G₀
inst✝⁵ : Zero α
inst✝⁴ : MulActionWithZero G₀ α
inst✝³ : MeasurableSpace G₀
inst✝² : MeasurableSpace α
inst✝¹ : MeasurableSMul G₀ α
inst✝ : MeasurableSingletonClass α
s : Set α
hs : MeasurableSet s
a : G₀
⊢ MeasurableSet (a • s)
[PROOFSTEP]
rcases eq_or_ne a 0 with (rfl | ha)
[GOAL]
case inl
G₀ : Type u_1
α : Type u_2
inst✝⁶ : GroupWithZero G₀
inst✝⁵ : Zero α
inst✝⁴ : MulActionWithZero G₀ α
inst✝³ : MeasurableSpace G₀
inst✝² : MeasurableSpace α
inst✝¹ : MeasurableSMul G₀ α
inst✝ : MeasurableSingletonClass α
s : Set α
hs : MeasurableSet s
⊢ MeasurableSet (0 • s)
case inr
G₀ : Type u_1
α : Type u_2
inst✝⁶ : GroupWithZero G₀
inst✝⁵ : Zero α
inst✝⁴ : MulActionWithZero G₀ α
inst✝³ : MeasurableSpace G₀
inst✝² : MeasurableSpace α
inst✝¹ : MeasurableSMul G₀ α
inst✝ : MeasurableSingletonClass α
s : Set α
hs : MeasurableSet s
a : G₀
ha : a ≠ 0
⊢ MeasurableSet (a • s)
[PROOFSTEP]
exacts [(subsingleton_zero_smul_set s).measurableSet, hs.const_smul_of_ne_zero ha]
|
The Lebesgue measure of a set is always a Lebesgue measurable set.
|
function data = read_data()
% READ_DATA  Read TrainingData.csv into a 37821-by-95 numeric matrix.
% Day names in column 5 are encoded as integers 1-7; 'NA' entries become -1000000.
fprintf('Reading data ...\n');
fid = fopen('TrainingData.csv');
fgetl(fid);              % skip the header line
data = zeros(37821,95);  % preallocate (number of rows is known in advance)
days = {'"Saturday"','"Sunday"','"Monday"','"Tuesday"','"Wednesday"','"Thursday"','"Friday"'};
row_cnt = 0;
while ~feof(fid)
    row_cnt = row_cnt + 1;
    line = fgetl(fid);
    C = strsplit(line, ',', 'CollapseDelimiters', false);  % split the CSV row into fields
    for i = 1:95
        if i == 5
            data(row_cnt,5) = find(strcmp(days, C{5}));    % encode the day of the week
        elseif strcmp(C{i}, 'NA')
            data(row_cnt,i) = -1000000;                    % sentinel value for missing entries
        else
            data(row_cnt,i) = str2double(C{i});
        end
    end
end
fclose(fid);             % close the file handle
fprintf('Read in %d rows\n', row_cnt);
|
[STATEMENT]
lemma step_openByA_cases:
assumes "step s a = (ou, s')"
and "openByA s \<noteq> openByA s'"
obtains (CloseA) uid p uid' p' where "a = Cact (cUser uid p uid' p')"
"uid' = UID1 \<or> uid' = UID2" "ou = outOK" "p = pass s uid"
"openByA s" "\<not>openByA s'"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
step s a = (ou, s')
openByA s \<noteq> openByA s'
goal (1 subgoal):
1. (\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
proof (cases a)
[PROOF STATE]
proof (state)
goal (7 subgoals):
1. \<And>x1. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Sact x1\<rbrakk> \<Longrightarrow> thesis
2. \<And>x2. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Cact x2\<rbrakk> \<Longrightarrow> thesis
3. \<And>x3. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Dact x3\<rbrakk> \<Longrightarrow> thesis
4. \<And>x4. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Uact x4\<rbrakk> \<Longrightarrow> thesis
5. \<And>x5. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Ract x5\<rbrakk> \<Longrightarrow> thesis
6. \<And>x6. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Lact x6\<rbrakk> \<Longrightarrow> thesis
7. \<And>x7. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = COMact x7\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
case (Dact da)
[PROOF STATE]
proof (state)
this:
a = Dact da
goal (7 subgoals):
1. \<And>x1. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Sact x1\<rbrakk> \<Longrightarrow> thesis
2. \<And>x2. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Cact x2\<rbrakk> \<Longrightarrow> thesis
3. \<And>x3. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Dact x3\<rbrakk> \<Longrightarrow> thesis
4. \<And>x4. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Uact x4\<rbrakk> \<Longrightarrow> thesis
5. \<And>x5. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Ract x5\<rbrakk> \<Longrightarrow> thesis
6. \<And>x6. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Lact x6\<rbrakk> \<Longrightarrow> thesis
7. \<And>x7. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = COMact x7\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
a = Dact da
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
a = Dact da
goal (1 subgoal):
1. thesis
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
a = Dact da
step s a = (ou, s')
openByA s \<noteq> openByA s'
goal (1 subgoal):
1. thesis
[PROOF STEP]
by (cases da) (auto simp: d_defs openByA_def)
[PROOF STATE]
proof (state)
this:
thesis
goal (6 subgoals):
1. \<And>x1. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Sact x1\<rbrakk> \<Longrightarrow> thesis
2. \<And>x2. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Cact x2\<rbrakk> \<Longrightarrow> thesis
3. \<And>x4. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Uact x4\<rbrakk> \<Longrightarrow> thesis
4. \<And>x5. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Ract x5\<rbrakk> \<Longrightarrow> thesis
5. \<And>x6. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Lact x6\<rbrakk> \<Longrightarrow> thesis
6. \<And>x7. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = COMact x7\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (6 subgoals):
1. \<And>x1. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Sact x1\<rbrakk> \<Longrightarrow> thesis
2. \<And>x2. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Cact x2\<rbrakk> \<Longrightarrow> thesis
3. \<And>x4. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Uact x4\<rbrakk> \<Longrightarrow> thesis
4. \<And>x5. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Ract x5\<rbrakk> \<Longrightarrow> thesis
5. \<And>x6. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Lact x6\<rbrakk> \<Longrightarrow> thesis
6. \<And>x7. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = COMact x7\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
case (Uact ua)
[PROOF STATE]
proof (state)
this:
a = Uact ua
goal (6 subgoals):
1. \<And>x1. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Sact x1\<rbrakk> \<Longrightarrow> thesis
2. \<And>x2. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Cact x2\<rbrakk> \<Longrightarrow> thesis
3. \<And>x4. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Uact x4\<rbrakk> \<Longrightarrow> thesis
4. \<And>x5. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Ract x5\<rbrakk> \<Longrightarrow> thesis
5. \<And>x6. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Lact x6\<rbrakk> \<Longrightarrow> thesis
6. \<And>x7. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = COMact x7\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
a = Uact ua
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
a = Uact ua
goal (1 subgoal):
1. thesis
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
a = Uact ua
step s a = (ou, s')
openByA s \<noteq> openByA s'
goal (1 subgoal):
1. thesis
[PROOF STEP]
by (cases ua) (auto simp: u_defs openByA_def)
[PROOF STATE]
proof (state)
this:
thesis
goal (5 subgoals):
1. \<And>x1. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Sact x1\<rbrakk> \<Longrightarrow> thesis
2. \<And>x2. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Cact x2\<rbrakk> \<Longrightarrow> thesis
3. \<And>x5. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Ract x5\<rbrakk> \<Longrightarrow> thesis
4. \<And>x6. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Lact x6\<rbrakk> \<Longrightarrow> thesis
5. \<And>x7. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = COMact x7\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (5 subgoals):
1. \<And>x1. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Sact x1\<rbrakk> \<Longrightarrow> thesis
2. \<And>x2. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Cact x2\<rbrakk> \<Longrightarrow> thesis
3. \<And>x5. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Ract x5\<rbrakk> \<Longrightarrow> thesis
4. \<And>x6. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Lact x6\<rbrakk> \<Longrightarrow> thesis
5. \<And>x7. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = COMact x7\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
case (COMact ca)
[PROOF STATE]
proof (state)
this:
a = COMact ca
goal (5 subgoals):
1. \<And>x1. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Sact x1\<rbrakk> \<Longrightarrow> thesis
2. \<And>x2. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Cact x2\<rbrakk> \<Longrightarrow> thesis
3. \<And>x5. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Ract x5\<rbrakk> \<Longrightarrow> thesis
4. \<And>x6. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Lact x6\<rbrakk> \<Longrightarrow> thesis
5. \<And>x7. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = COMact x7\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
a = COMact ca
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
a = COMact ca
goal (1 subgoal):
1. thesis
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
a = COMact ca
step s a = (ou, s')
openByA s \<noteq> openByA s'
goal (1 subgoal):
1. thesis
[PROOF STEP]
by (cases ca) (auto simp: com_defs openByA_def)
[PROOF STATE]
proof (state)
this:
thesis
goal (4 subgoals):
1. \<And>x1. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Sact x1\<rbrakk> \<Longrightarrow> thesis
2. \<And>x2. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Cact x2\<rbrakk> \<Longrightarrow> thesis
3. \<And>x5. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Ract x5\<rbrakk> \<Longrightarrow> thesis
4. \<And>x6. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Lact x6\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (4 subgoals):
1. \<And>x1. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Sact x1\<rbrakk> \<Longrightarrow> thesis
2. \<And>x2. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Cact x2\<rbrakk> \<Longrightarrow> thesis
3. \<And>x5. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Ract x5\<rbrakk> \<Longrightarrow> thesis
4. \<And>x6. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Lact x6\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
case (Sact sa)
[PROOF STATE]
proof (state)
this:
a = Sact sa
goal (4 subgoals):
1. \<And>x1. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Sact x1\<rbrakk> \<Longrightarrow> thesis
2. \<And>x2. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Cact x2\<rbrakk> \<Longrightarrow> thesis
3. \<And>x5. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Ract x5\<rbrakk> \<Longrightarrow> thesis
4. \<And>x6. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Lact x6\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
a = Sact sa
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
a = Sact sa
goal (1 subgoal):
1. thesis
[PROOF STEP]
using assms UID1_UID2
[PROOF STATE]
proof (prove)
using this:
a = Sact sa
step s a = (ou, s')
openByA s \<noteq> openByA s'
UID1 \<noteq> UID2
goal (1 subgoal):
1. thesis
[PROOF STEP]
by (cases sa) (auto simp: s_defs openByA_def)
[PROOF STATE]
proof (state)
this:
thesis
goal (3 subgoals):
1. \<And>x2. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Cact x2\<rbrakk> \<Longrightarrow> thesis
2. \<And>x5. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Ract x5\<rbrakk> \<Longrightarrow> thesis
3. \<And>x6. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Lact x6\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (3 subgoals):
1. \<And>x2. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Cact x2\<rbrakk> \<Longrightarrow> thesis
2. \<And>x5. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Ract x5\<rbrakk> \<Longrightarrow> thesis
3. \<And>x6. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Lact x6\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
case (Cact ca)
[PROOF STATE]
proof (state)
this:
a = Cact ca
goal (3 subgoals):
1. \<And>x2. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Cact x2\<rbrakk> \<Longrightarrow> thesis
2. \<And>x5. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Ract x5\<rbrakk> \<Longrightarrow> thesis
3. \<And>x6. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Lact x6\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
a = Cact ca
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
a = Cact ca
goal (1 subgoal):
1. thesis
[PROOF STEP]
using assms UID1_UID2
[PROOF STATE]
proof (prove)
using this:
a = Cact ca
step s a = (ou, s')
openByA s \<noteq> openByA s'
UID1 \<noteq> UID2
goal (1 subgoal):
1. thesis
[PROOF STEP]
proof (cases ca)
[PROOF STATE]
proof (state)
goal (5 subgoals):
1. \<And>x11 x12. \<lbrakk>a = Cact ca; step s a = (ou, s'); openByA s \<noteq> openByA s'; UID1 \<noteq> UID2; ca = cNUReq x11 x12\<rbrakk> \<Longrightarrow> thesis
2. \<And>x21 x22 x23 x24. \<lbrakk>a = Cact ca; step s a = (ou, s'); openByA s \<noteq> openByA s'; UID1 \<noteq> UID2; ca = cUser x21 x22 x23 x24\<rbrakk> \<Longrightarrow> thesis
3. \<And>x31 x32 x33. \<lbrakk>a = Cact ca; step s a = (ou, s'); openByA s \<noteq> openByA s'; UID1 \<noteq> UID2; ca = cPost x31 x32 x33\<rbrakk> \<Longrightarrow> thesis
4. \<And>x41 x42 x43 x44. \<lbrakk>a = Cact ca; step s a = (ou, s'); openByA s \<noteq> openByA s'; UID1 \<noteq> UID2; ca = cFriendReq x41 x42 x43 x44\<rbrakk> \<Longrightarrow> thesis
5. \<And>x51 x52 x53. \<lbrakk>a = Cact ca; step s a = (ou, s'); openByA s \<noteq> openByA s'; UID1 \<noteq> UID2; ca = cFriend x51 x52 x53\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
case (cUser uid p uid' p')
[PROOF STATE]
proof (state)
this:
ca = cUser uid p uid' p'
goal (5 subgoals):
1. \<And>x11 x12. \<lbrakk>a = Cact ca; step s a = (ou, s'); openByA s \<noteq> openByA s'; UID1 \<noteq> UID2; ca = cNUReq x11 x12\<rbrakk> \<Longrightarrow> thesis
2. \<And>x21 x22 x23 x24. \<lbrakk>a = Cact ca; step s a = (ou, s'); openByA s \<noteq> openByA s'; UID1 \<noteq> UID2; ca = cUser x21 x22 x23 x24\<rbrakk> \<Longrightarrow> thesis
3. \<And>x31 x32 x33. \<lbrakk>a = Cact ca; step s a = (ou, s'); openByA s \<noteq> openByA s'; UID1 \<noteq> UID2; ca = cPost x31 x32 x33\<rbrakk> \<Longrightarrow> thesis
4. \<And>x41 x42 x43 x44. \<lbrakk>a = Cact ca; step s a = (ou, s'); openByA s \<noteq> openByA s'; UID1 \<noteq> UID2; ca = cFriendReq x41 x42 x43 x44\<rbrakk> \<Longrightarrow> thesis
5. \<And>x51 x52 x53. \<lbrakk>a = Cact ca; step s a = (ou, s'); openByA s \<noteq> openByA s'; UID1 \<noteq> UID2; ca = cFriend x51 x52 x53\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
ca = cUser uid p uid' p'
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
ca = cUser uid p uid' p'
goal (1 subgoal):
1. thesis
[PROOF STEP]
using Cact assms
[PROOF STATE]
proof (prove)
using this:
ca = cUser uid p uid' p'
a = Cact ca
step s a = (ou, s')
openByA s \<noteq> openByA s'
goal (1 subgoal):
1. thesis
[PROOF STEP]
by (intro that) (auto simp: c_defs openByA_def)
[PROOF STATE]
proof (state)
this:
thesis
goal (4 subgoals):
1. \<And>x11 x12. \<lbrakk>a = Cact ca; step s a = (ou, s'); openByA s \<noteq> openByA s'; UID1 \<noteq> UID2; ca = cNUReq x11 x12\<rbrakk> \<Longrightarrow> thesis
2. \<And>x31 x32 x33. \<lbrakk>a = Cact ca; step s a = (ou, s'); openByA s \<noteq> openByA s'; UID1 \<noteq> UID2; ca = cPost x31 x32 x33\<rbrakk> \<Longrightarrow> thesis
3. \<And>x41 x42 x43 x44. \<lbrakk>a = Cact ca; step s a = (ou, s'); openByA s \<noteq> openByA s'; UID1 \<noteq> UID2; ca = cFriendReq x41 x42 x43 x44\<rbrakk> \<Longrightarrow> thesis
4. \<And>x51 x52 x53. \<lbrakk>a = Cact ca; step s a = (ou, s'); openByA s \<noteq> openByA s'; UID1 \<noteq> UID2; ca = cFriend x51 x52 x53\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
qed (auto simp: c_defs openByA_def)
[PROOF STATE]
proof (state)
this:
thesis
goal (2 subgoals):
1. \<And>x5. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Ract x5\<rbrakk> \<Longrightarrow> thesis
2. \<And>x6. \<lbrakk>\<And>uid p uid' p'. \<lbrakk>a = Cact (cUser uid p uid' p'); uid' = UID1 \<or> uid' = UID2; ou = outOK; p = pass s uid; openByA s; \<not> openByA s'\<rbrakk> \<Longrightarrow> thesis; step s a = (ou, s'); openByA s \<noteq> openByA s'; a = Lact x6\<rbrakk> \<Longrightarrow> thesis
[PROOF STEP]
qed auto
|
module InteractiveEditing
import Data.Vect
%default total
%name Vect xs, ys, zs
{-
<LocalLeader> a -> add initial clause
<LocalLeader> c -> case split
<LocalLeader> m -> add missing clause(s)
<LocalLeader> s -> proof search
<LocalLeader> w -> add with clause
-}
vzipwith : (a -> b -> c) -> Vect n a -> Vect n b -> Vect n c
vzipwith f [] [] = []
vzipwith f (x :: xs) (y :: ys) = f x y :: vzipwith f xs ys
|
module Test.Main
import Data.ByteString
import Data.List
import Data.SOP
import Data.Vect
import Hedgehog
--------------------------------------------------------------------------------
-- Generator
--------------------------------------------------------------------------------
smallBits32 : Gen Bits32
smallBits32 = bits32 (linear 0 10)
byte : Gen Bits8
byte = bits8 $ linear 0 0xff
byteList : Gen (List Bits8)
byteList = list (linear 0 30) byte
-- we make sure to not only generate `ByteString`s with
-- an offset of 0.
bytestring : Gen ByteString
bytestring =
let bs1 = pack <$> byteList
bs2 = [| substring smallBits32 smallBits32 bs1 |]
in choice [bs1, bs2]
latinString : Gen String
latinString = string (linear 0 30) latin
--------------------------------------------------------------------------------
-- Properties
--------------------------------------------------------------------------------
unpackPack : Property
unpackPack = property $ do
bs <- forAll byteList
bs === unpack (pack bs)
packUnpack : Property
packUnpack = property $ do
bs <- forAll bytestring
bs === pack (unpack bs)
singletonHead : Property
singletonHead = property $ do
b <- forAll byte
Just b === head (singleton b)
packNull : Property
packNull = property $ do
bs <- forAll byteList
null bs === null (pack bs)
packLength : Property
packLength = property $ do
bs <- forAll byteList
length bs === cast (length $ pack bs)
consUncons : Property
consUncons = property $ do
[b,bs] <- forAll $ np [byte,bytestring]
Just (b,bs) === uncons (cons b bs)
unconsCons : Property
unconsCons = property $ do
[b,bs] <- forAll $ np [byte,bytestring]
let bs' = cons b bs
Just bs' === map (uncurry cons) (uncons bs')
snocUnsnoc : Property
snocUnsnoc = property $ do
[b,bs] <- forAll $ np [byte,bytestring]
Just (b,bs) === unsnoc (snoc b bs)
unsnocSnoc : Property
unsnocSnoc = property $ do
[b,bs] <- forAll $ np [byte,bytestring]
let bs' = snoc b bs
Just bs' === map (uncurry snoc) (unsnoc bs')
appendNeutralLeft : Property
appendNeutralLeft = property $ do
bs <- forAll bytestring
bs === (neutral <+> bs)
appendNeutralRight : Property
appendNeutralRight = property $ do
bs <- forAll bytestring
bs === (bs <+> neutral)
appendAssociative : Property
appendAssociative = property $ do
[bs1,bs2,bs3] <- forAll $ np [bytestring,bytestring,bytestring]
((bs1 <+> bs2) <+> bs3) === (bs1 <+> (bs2 <+> bs3))
prop_fastConcat : Property
prop_fastConcat = property $ do
bss <- forAll (list (linear 0 10) byteList)
fastConcat (pack <$> bss) === pack (concat bss)
consHead : Property
consHead = property $ do
[b,bs] <- forAll $ np [byte, bytestring]
Just b === head (cons b bs)
snocLast : Property
snocLast = property $ do
[b,bs] <- forAll $ np [byte, bytestring]
Just b === last (snoc b bs)
consTail : Property
consTail = property $ do
[b,bs] <- forAll $ np [byte, bytestring]
Just bs === tail (cons b bs)
snocInit : Property
snocInit = property $ do
[b,bs] <- forAll $ np [byte, bytestring]
Just bs === init (snoc b bs)
prop_substring : Property
prop_substring = property $ do
[start,len,str] <- forAll $ np [smallBits32,smallBits32,latinString]
let ss = substring start len (fromString str)
fromString (substr (cast start) (cast len) str) ===
substring start len (fromString str)
reverseReverse : Property
reverseReverse = property $ do
bs <- forAll bytestring
bs === reverse (reverse bs)
fun : Bits8 -> Maybe Bits8
fun b = if b < 128 then Just (128 - b) else Nothing
prop_mapMaybe : Property
prop_mapMaybe = property $ do
bs <- forAll byteList
mapMaybe fun (pack bs) === pack (mapMaybe fun bs)
prop_filter : Property
prop_filter = property $ do
bs <- forAll byteList
filter (< 100) (pack bs) === pack (filter (< 100) bs)
prop_take : Property
prop_take = property $ do
[n,bs] <- forAll $ np [nat (linear 0 30), byteList]
take (cast n) (pack bs) === pack (take n bs)
prop_takeEnd : Property
prop_takeEnd = property $ do
[n,bs] <- forAll $ np [bits32 (linear 0 30), bytestring]
takeEnd n bs === reverse (take n $ reverse bs)
prop_drop : Property
prop_drop = property $ do
[n,bs] <- forAll $ np [nat (linear 0 30), byteList]
drop (cast n) (pack bs) === pack (drop n bs)
prop_dropEnd : Property
prop_dropEnd = property $ do
[n,bs] <- forAll $ np [bits32 (linear 0 30), bytestring]
dropEnd n bs === reverse (drop n $ reverse bs)
prop_takeWhile : Property
prop_takeWhile = property $ do
bs <- forAll byteList
takeWhile (< 100) (pack bs) ===
pack (takeWhile (< 100) bs)
prop_takeWhileEnd : Property
prop_takeWhileEnd = property $ do
bs <- forAll bytestring
takeWhileEnd (< 100) bs ===
reverse (takeWhile (< 100) $ reverse bs)
prop_dropWhileEnd : Property
prop_dropWhileEnd = property $ do
bs <- forAll bytestring
dropWhileEnd (< 100) bs ===
reverse (dropWhile (< 100) $ reverse bs)
prop_breakAppend : Property
prop_breakAppend = property $ do
bs <- forAll bytestring
let (a,b) = break (< 100) bs
bs === (a <+> b)
prop_breakFirst : Property
prop_breakFirst = property $ do
bs <- forAll bytestring
let (a,b) = break (< 100) bs
assert $ all (>= 100) a
main : IO ()
main = test . pure $ MkGroup "ByteString"
[ ("unpackPack", unpackPack)
, ("packUnpack", packUnpack)
, ("consUncons", consUncons)
, ("unsnocSnoc", unsnocSnoc)
, ("snocUnsnoc", snocUnsnoc)
, ("unconsCons", unconsCons)
, ("singletonHead", singletonHead)
, ("packNull", packNull)
, ("packLength", packLength)
, ("appendNeutralRight", appendNeutralRight)
, ("appendNeutralLeft", appendNeutralLeft)
, ("appendAssociative", appendAssociative)
, ("prop_fastConcat", prop_fastConcat)
, ("consHead", consHead)
, ("consTail", consTail)
, ("snocLast", snocLast)
, ("snocInit", snocInit)
, ("prop_substring", prop_substring)
, ("reverseReverse", reverseReverse)
, ("prop_mapMaybe", prop_mapMaybe)
, ("prop_filter", prop_filter)
, ("prop_drop", prop_drop)
, ("prop_take", prop_take)
, ("prop_takeWhile", prop_takeWhile)
, ("prop_takeWhileEnd", prop_takeWhileEnd)
, ("prop_dropWhileEnd", prop_dropWhileEnd)
, ("prop_breakAppend", prop_breakAppend)
, ("prop_breakFirst", prop_breakFirst)
]
|
/-
Copyright (c) 2018 Mario Carneiro. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Mario Carneiro, Kenny Lau
-/
import Mathlib.PrePort
import Mathlib.Lean3Lib.init.default
import Mathlib.data.list.basic
import Mathlib.PostPort
universes u v w z u_1 u_2 u_3
namespace Mathlib
namespace list
/- zip & unzip -/
@[simp] theorem zip_with_cons_cons {α : Type u} {β : Type v} {γ : Type w} (f : α → β → γ) (a : α) (b : β) (l₁ : List α) (l₂ : List β) : zip_with f (a :: l₁) (b :: l₂) = f a b :: zip_with f l₁ l₂ :=
rfl
@[simp] theorem zip_cons_cons {α : Type u} {β : Type v} (a : α) (b : β) (l₁ : List α) (l₂ : List β) : zip (a :: l₁) (b :: l₂) = (a, b) :: zip l₁ l₂ :=
rfl
@[simp] theorem zip_with_nil_left {α : Type u} {β : Type v} {γ : Type w} (f : α → β → γ) (l : List β) : zip_with f [] l = [] :=
rfl
@[simp] theorem zip_with_nil_right {α : Type u} {β : Type v} {γ : Type w} (f : α → β → γ) (l : List α) : zip_with f l [] = [] :=
list.cases_on l (Eq.refl (zip_with f [] [])) fun (l_hd : α) (l_tl : List α) => Eq.refl (zip_with f (l_hd :: l_tl) [])
@[simp] theorem zip_nil_left {α : Type u} {β : Type v} (l : List α) : zip [] l = [] :=
rfl
@[simp] theorem zip_nil_right {α : Type u} {β : Type v} (l : List α) : zip l [] = [] :=
zip_with_nil_right Prod.mk l
@[simp] theorem zip_swap {α : Type u} {β : Type v} (l₁ : List α) (l₂ : List β) : map prod.swap (zip l₁ l₂) = zip l₂ l₁ := sorry
@[simp] theorem length_zip_with {α : Type u} {β : Type v} {γ : Type w} (f : α → β → γ) (l₁ : List α) (l₂ : List β) : length (zip_with f l₁ l₂) = min (length l₁) (length l₂) := sorry
@[simp] theorem length_zip {α : Type u} {β : Type v} (l₁ : List α) (l₂ : List β) : length (zip l₁ l₂) = min (length l₁) (length l₂) :=
length_zip_with Prod.mk
theorem lt_length_left_of_zip_with {α : Type u} {β : Type v} {γ : Type w} {f : α → β → γ} {i : ℕ} {l : List α} {l' : List β} (h : i < length (zip_with f l l')) : i < length l :=
and.left
(eq.mp (Eq._oldrec (Eq.refl (i < min (length l) (length l'))) (propext lt_min_iff))
(eq.mp (Eq._oldrec (Eq.refl (i < length (zip_with f l l'))) (length_zip_with f l l')) h))
theorem lt_length_right_of_zip_with {α : Type u} {β : Type v} {γ : Type w} {f : α → β → γ} {i : ℕ} {l : List α} {l' : List β} (h : i < length (zip_with f l l')) : i < length l' :=
and.right
(eq.mp (Eq._oldrec (Eq.refl (i < min (length l) (length l'))) (propext lt_min_iff))
(eq.mp (Eq._oldrec (Eq.refl (i < length (zip_with f l l'))) (length_zip_with f l l')) h))
theorem lt_length_left_of_zip {α : Type u} {β : Type v} {i : ℕ} {l : List α} {l' : List β} (h : i < length (zip l l')) : i < length l :=
lt_length_left_of_zip_with h
theorem lt_length_right_of_zip {α : Type u} {β : Type v} {i : ℕ} {l : List α} {l' : List β} (h : i < length (zip l l')) : i < length l' :=
lt_length_right_of_zip_with h
theorem zip_append {α : Type u} {β : Type v} {l₁ : List α} {r₁ : List α} {l₂ : List β} {r₂ : List β} (h : length l₁ = length l₂) : zip (l₁ ++ r₁) (l₂ ++ r₂) = zip l₁ l₂ ++ zip r₁ r₂ := sorry
theorem zip_map {α : Type u} {β : Type v} {γ : Type w} {δ : Type z} (f : α → γ) (g : β → δ) (l₁ : List α) (l₂ : List β) : zip (map f l₁) (map g l₂) = map (prod.map f g) (zip l₁ l₂) := sorry
theorem zip_map_left {α : Type u} {β : Type v} {γ : Type w} (f : α → γ) (l₁ : List α) (l₂ : List β) : zip (map f l₁) l₂ = map (prod.map f id) (zip l₁ l₂) :=
eq.mpr (id (Eq._oldrec (Eq.refl (zip (map f l₁) l₂ = map (prod.map f id) (zip l₁ l₂))) (Eq.symm (zip_map f id l₁ l₂))))
(eq.mpr (id (Eq._oldrec (Eq.refl (zip (map f l₁) l₂ = zip (map f l₁) (map id l₂))) (map_id l₂)))
(Eq.refl (zip (map f l₁) l₂)))
theorem zip_map_right {α : Type u} {β : Type v} {γ : Type w} (f : β → γ) (l₁ : List α) (l₂ : List β) : zip l₁ (map f l₂) = map (prod.map id f) (zip l₁ l₂) :=
eq.mpr (id (Eq._oldrec (Eq.refl (zip l₁ (map f l₂) = map (prod.map id f) (zip l₁ l₂))) (Eq.symm (zip_map id f l₁ l₂))))
(eq.mpr (id (Eq._oldrec (Eq.refl (zip l₁ (map f l₂) = zip (map id l₁) (map f l₂))) (map_id l₁)))
(Eq.refl (zip l₁ (map f l₂))))
theorem zip_map' {α : Type u} {β : Type v} {γ : Type w} (f : α → β) (g : α → γ) (l : List α) : zip (map f l) (map g l) = map (fun (a : α) => (f a, g a)) l := sorry
theorem mem_zip {α : Type u} {β : Type v} {a : α} {b : β} {l₁ : List α} {l₂ : List β} : (a, b) ∈ zip l₁ l₂ → a ∈ l₁ ∧ b ∈ l₂ := sorry
theorem map_fst_zip {α : Type u} {β : Type v} (l₁ : List α) (l₂ : List β) : length l₁ ≤ length l₂ → map prod.fst (zip l₁ l₂) = l₁ := sorry
theorem map_snd_zip {α : Type u} {β : Type v} (l₁ : List α) (l₂ : List β) : length l₂ ≤ length l₁ → map prod.snd (zip l₁ l₂) = l₂ := sorry
@[simp] theorem unzip_nil {α : Type u} {β : Type v} : unzip [] = ([], []) :=
rfl
@[simp] theorem unzip_cons {α : Type u} {β : Type v} (a : α) (b : β) (l : List (α × β)) : unzip ((a, b) :: l) = (a :: prod.fst (unzip l), b :: prod.snd (unzip l)) := sorry
theorem unzip_eq_map {α : Type u} {β : Type v} (l : List (α × β)) : unzip l = (map prod.fst l, map prod.snd l) := sorry
theorem unzip_left {α : Type u} {β : Type v} (l : List (α × β)) : prod.fst (unzip l) = map prod.fst l := sorry
theorem unzip_right {α : Type u} {β : Type v} (l : List (α × β)) : prod.snd (unzip l) = map prod.snd l := sorry
theorem unzip_swap {α : Type u} {β : Type v} (l : List (α × β)) : unzip (map prod.swap l) = prod.swap (unzip l) := sorry
theorem zip_unzip {α : Type u} {β : Type v} (l : List (α × β)) : zip (prod.fst (unzip l)) (prod.snd (unzip l)) = l := sorry
theorem unzip_zip_left {α : Type u} {β : Type v} {l₁ : List α} {l₂ : List β} : length l₁ ≤ length l₂ → prod.fst (unzip (zip l₁ l₂)) = l₁ := sorry
theorem unzip_zip_right {α : Type u} {β : Type v} {l₁ : List α} {l₂ : List β} (h : length l₂ ≤ length l₁) : prod.snd (unzip (zip l₁ l₂)) = l₂ :=
eq.mpr (id (Eq._oldrec (Eq.refl (prod.snd (unzip (zip l₁ l₂)) = l₂)) (Eq.symm (zip_swap l₂ l₁))))
(eq.mpr (id (Eq._oldrec (Eq.refl (prod.snd (unzip (map prod.swap (zip l₂ l₁))) = l₂)) (unzip_swap (zip l₂ l₁))))
(unzip_zip_left h))
theorem unzip_zip {α : Type u} {β : Type v} {l₁ : List α} {l₂ : List β} (h : length l₁ = length l₂) : unzip (zip l₁ l₂) = (l₁, l₂) := sorry
theorem zip_of_prod {α : Type u} {β : Type v} {l : List α} {l' : List β} {lp : List (α × β)} (hl : map prod.fst lp = l) (hr : map prod.snd lp = l') : lp = zip l l' := sorry
theorem map_prod_left_eq_zip {α : Type u} {β : Type v} {l : List α} (f : α → β) : map (fun (x : α) => (x, f x)) l = zip l (map f l) := sorry
theorem map_prod_right_eq_zip {α : Type u} {β : Type v} {l : List α} (f : α → β) : map (fun (x : α) => (f x, x)) l = zip (map f l) l := sorry
@[simp] theorem length_revzip {α : Type u} (l : List α) : length (revzip l) = length l := sorry
@[simp] theorem unzip_revzip {α : Type u} (l : List α) : unzip (revzip l) = (l, reverse l) :=
unzip_zip (Eq.symm (length_reverse l))
@[simp] theorem revzip_map_fst {α : Type u} (l : List α) : map prod.fst (revzip l) = l :=
eq.mpr (id (Eq._oldrec (Eq.refl (map prod.fst (revzip l) = l)) (Eq.symm (unzip_left (revzip l)))))
(eq.mpr (id (Eq._oldrec (Eq.refl (prod.fst (unzip (revzip l)) = l)) (unzip_revzip l)))
(Eq.refl (prod.fst (l, reverse l))))
@[simp] theorem revzip_map_snd {α : Type u} (l : List α) : map prod.snd (revzip l) = reverse l :=
eq.mpr (id (Eq._oldrec (Eq.refl (map prod.snd (revzip l) = reverse l)) (Eq.symm (unzip_right (revzip l)))))
(eq.mpr (id (Eq._oldrec (Eq.refl (prod.snd (unzip (revzip l)) = reverse l)) (unzip_revzip l)))
(Eq.refl (prod.snd (l, reverse l))))
theorem reverse_revzip {α : Type u} (l : List α) : reverse (revzip l) = revzip (reverse l) := sorry
theorem revzip_swap {α : Type u} (l : List α) : map prod.swap (revzip l) = revzip (reverse l) := sorry
theorem nth_zip_with {α : Type u_1} {β : Type u_1} {γ : Type u_1} (f : α → β → γ) (l₁ : List α) (l₂ : List β) (i : ℕ) : nth (zip_with f l₁ l₂) i = f <$> nth l₁ i <*> nth l₂ i := sorry
theorem nth_zip_with_eq_some {α : Type u_1} {β : Type u_2} {γ : Type u_3} (f : α → β → γ) (l₁ : List α) (l₂ : List β) (z : γ) (i : ℕ) : nth (zip_with f l₁ l₂) i = some z ↔ ∃ (x : α), ∃ (y : β), nth l₁ i = some x ∧ nth l₂ i = some y ∧ f x y = z := sorry
theorem nth_zip_eq_some {α : Type u} {β : Type v} (l₁ : List α) (l₂ : List β) (z : α × β) (i : ℕ) : nth (zip l₁ l₂) i = some z ↔ nth l₁ i = some (prod.fst z) ∧ nth l₂ i = some (prod.snd z) := sorry
@[simp] theorem nth_le_zip_with {α : Type u} {β : Type v} {γ : Type w} {f : α → β → γ} {l : List α} {l' : List β} {i : ℕ} {h : i < length (zip_with f l l')} : nth_le (zip_with f l l') i h =
f (nth_le l i (lt_length_left_of_zip_with h)) (nth_le l' i (lt_length_right_of_zip_with h)) := sorry
@[simp] theorem nth_le_zip {α : Type u} {β : Type v} {l : List α} {l' : List β} {i : ℕ} {h : i < length (zip l l')} : nth_le (zip l l') i h = (nth_le l i (lt_length_left_of_zip h), nth_le l' i (lt_length_right_of_zip h)) :=
nth_le_zip_with
theorem mem_zip_inits_tails {α : Type u} {l : List α} {init : List α} {tail : List α} : (init, tail) ∈ zip (inits l) (tails l) ↔ init ++ tail = l := sorry
|
Formal statement is: lemma (in bounded_bilinear) Zfun_left: "Zfun f F \<Longrightarrow> Zfun (\<lambda>x. f x ** a) F" Informal statement is: For a bounded bilinear operation $**$, if $f$ tends to zero along the filter $F$, then the function $x \mapsto f(x) ** a$ also tends to zero along $F$.
|
\chapter{Transport domain analysis}
In this chapter, we will analyze two variants of the Transport domain: sequential and temporal. To do this, we will describe the datasets
used for devising experiments and discuss the properties of Transport
that will help us in developing better quality planners.
\section{Problem complexity}
When domain-independent planners solve a sequential Transport problem,
they face a harder task than planners that have access to domain knowledge ahead of time.
For domain-independent planners, deciding whether a plan of a given length exists
(the \textsc{Plan-Length} decision problem) is
an NEXPTIME-complete task.
Deciding if a plan exists at all (the \textsc{Plan-Existence} decision problem)
is an EXPSPACE-complete task \citep[Table~3.2]{Ghallab2004}.
That does not mean domain knowledge makes Transport easy, as is evident from
the very thorough analysis by \citet{Helmert2001, Helmert2001a}.
We will categorize our problems using Helmert's notation to
be able to apply their results to our domain.
Helmert's \textsc{Transport} task is a 9-tuple $(V, E, M, P, fuel_0, l_0, l_G, cap, road),$
where:
\begin{itemize}
\item $(V, E)$ is the road graph;
\item $M$ is a finite set of vehicles (mobiles);
\item $P$ is a finite set of packages (portables);
\item $fuel_0 : V \to \N_0$ is the fuel function;
\item $l_0: (M \cup P) \to V$ is the initial location function;
\item $l_G: P \to V$ is the goal location function;
\item $cap: M \to \N$ is the capacity function; and
\item $road: M \to 2^E$ is the movement constraints function.
\end{itemize}
$V$, $M$, and $P$ are pairwise disjoint.
No Transport domain variant assumes movement constraints; therefore, $road$ is a constant
function: $\forall m \in M : road(m) = E$.
A simplified notation is also introduced in \citet{Helmert2001a} for special cases of \textsc{Transport} tasks.
For $i,j \in \{1, \infty, *\},$ $k \in \{1, +, *\}$ a $\textsc{Transport}_{i,j,k}$ task is defined as a
general \textsc{Transport} task (defined above) that satisfies:
\begin{itemize}
\item if $i=1$, then $\forall m \in M : cap(m) = 1$ (vehicles can only carry one package);
\item if $i=\infty$, then $\forall m \in M : cap(m) = |P|$ (vehicles have unlimited capacity);
\item if $j=1$, then $\forall v \in V : fuel_0(v) = 1$ (one fuel unit per location);
\item if $j=\infty$, then $\forall v \in V : fuel_0(v) = \infty$ (unlimited fuel per location);
\item if $k=+$, then $\forall m \in M : road(m) = E$ (no movement restrictions); and
\item if $k=1$, then $M = \{m\} \And road(m) = E$ (single vehicle, no restrictions).
\end{itemize}
The $*$ value for $i$, $j$ or $k$ signifies no restriction on that property.
Note that
\textsc{Transport} refers to the notation from \citet{Helmert2001a}, while Transport refers to our studied domain.
Using this notation, the sequential Transport domain could be thought of as a $\textsc{Transport}_{c,\infty,+}$ task, where $c \in \N$
(equivalent to $\textsc{Transport}_{1,\infty,+}$).
Similarly, the temporal variant represents a $\textsc{Transport}_{c,f,+}$ task, for $c, f \in \N$
(equivalent to $\textsc{Transport}_{1,1,+}$).
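To make this notation more concrete, the following minimal Python sketch (with illustrative names that are not part of any planner discussed in this thesis) shows one possible representation of a \textsc{Transport} task together with a computation of the capacity index $i$:
\begin{verbatim}
from dataclasses import dataclass
from typing import Dict, FrozenSet, Tuple

Location = str
Edge = Tuple[Location, Location]

@dataclass(frozen=True)
class TransportTask:
    V: FrozenSet[Location]            # locations
    E: FrozenSet[Edge]                # roads
    M: FrozenSet[str]                 # vehicles (mobiles)
    P: FrozenSet[str]                 # packages (portables)
    fuel_0: Dict[Location, int]       # fuel available per location
    l_0: Dict[str, Location]          # initial locations of vehicles/packages
    l_G: Dict[str, Location]          # goal locations of packages
    cap: Dict[str, int]               # vehicle capacities
    road: Dict[str, FrozenSet[Edge]]  # movement constraints per vehicle

def capacity_index(task: TransportTask) -> str:
    """The i-index of the Transport_{i,j,k} classification."""
    caps = {task.cap[m] for m in task.M}
    if caps == {1}:
        return "1"         # every vehicle carries at most one package
    if caps == {len(task.P)}:
        return "infinity"  # capacity |P| models unlimited capacity
    return "*"             # no restriction
\end{verbatim}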
For sequential and temporal Transport without fuel, the \textsc{Plan-Existence} problem
reduces to verifying reachability of each package by at least one vehicle
and the reachability of target locations from the starting locations of all packages,
which we can do in polynomial time, as noted in \citet[Theorem 8]{Helmert2001}.
With fuel, there is no straightforward way of determining whether a plan exists, and this problem is NP-complete, as proven in \citet[Theorems 9 and 10]{Helmert2001}.
Even though fuel constraints are modeled differently than in Transport (constraints per location versus per vehicle), the proof of
NP-completeness of \textsc{Plan-Existence} for $\textsc{Transport}_{\infty,1,1}$
present in \citet[Theorem~3.9]{Helmert2001a}
can be trivially edited to prove the NP-completeness
of \textsc{Plan-Existence} for temporal Transport.
Instead of adding fuel conditions
to the entrance and exit nodes of a location,
we simply add them to the edge between them. The rest of the proof holds as originally presented.
Similarly, the \textsc{Plan-Length} problem is NP-complete for all mentioned variants of Transport
\citep[Section~3.6]{Helmert2001a}.
The fact that all the mentioned proofs work for temporal variants is explained in \citet[Section~3.5]{Helmert2001a}.
All of these results make clear that looking for an explicit planning algorithm is infeasible,
despite the advantage we gain by only focusing on one planning domain.
\section{Domain information}\label{domain-info}
There are several interesting properties and invariants that hold in both sequential and temporal Transport, which might prove useful for designing planners (a short code sketch illustrating the first two rules follows the list):
\begin{enumerate}
\item \textbf{Do not pick up delivered packages}: The simplest and trivially correct decision is to never touch packages that are already at their destinations, since there is nothing
we can do using those packages that would result in a plan with a lower total cost.
\item \textbf{Drop when at the destination}: Likewise, it is
always correct for a vehicle containing a package with a destination equal
to the vehicle's location to do a \drop{} action immediately.
\item \textbf{Do not drop and pick up}: It never makes sense to plan a \drop{} and \pickup{}
action of
the same package by the same vehicle in succession. We will only get to the same state
by using a longer plan. This rule also applies if an action of a different vehicle
gets between the two successive actions, even if it does an action with the
dropped package.
It is important to note that this is a symmetric property: picking up
and then dropping equally results in a worse plan.
\begin{enumerate}
\item \textbf{Do not drop a package where we picked it up}: A generalization
of the previous rule is that vehicles should never drop a package
at the location they last picked it up, independent of the actions they took
between the relevant \pickup{} and \drop{}. This rule is also symmetric.
\item \textbf{Never drop after picking up at a location}:
While the order of successive \pickup{} and \drop{} actions does not
influence the optimality of a plan, fixing this order makes the search space smaller and the implementation of these rules simpler,
without loss of generality.
\end{enumerate}
\item \textbf{Do not drive suboptimally}: If a vehicle does a series of
\drive{} actions from location $A$ to $B$ without ``touching'' packages
or refueling at any of the locations it visits,
it has to follow the shortest possible path from $A$ to $B$. If it does not,
the induced plan can be made less costly or shorter by swapping the actual \drive{} actions
for precalculated optimal \drive{} actions along the shortest path.
Note that it is not important for the application of this rule whether the actions are in direct succession (in a sequential plan) or not.
\begin{enumerate}
\item \textbf{Do not drive in cycles}: A special but important case of the previous rule is that vehicles should not drive in cycles.
\end{enumerate}
\item \textbf{Do not forward packages using other vehicles}:
Let $p$ be a package of size $|p|$ located at $A$.
Let $v$ be a vehicle which drove through location $A$ to location $B \neq A$
and picked up $p$ at $B$,
without having less than $|p|$ free space in any intermediate state between leaving $A$ and picking up $p$.
If this sequence of events occurs, the plan is suboptimal in a sequential setting, because
$v$ could have picked up $p$ when driving through $A$, and the total plan cost
would have gone down by at least 2. The reason is that a different vehicle had to pick up, drive, and drop package $p$ at $B$.
While we cannot say whether the \drive{} actions themselves
were redundant, the \pickup{} and \drop{} actions definitely were.
By removing them, we save
2 on the total cost.
In a temporal domain without fuel, assuming that vehicles only drive along the shortest routes,
the plan does not necessarily have to be suboptimal, but it
is of equal length or longer: due to concurrent actions, the vehicles could have driven simultaneously.
In a few cases, the other vehicle could have dropped $p$ at $B$ before $v$ wanted to pick it up,
which means that the total makespan of the partial plan did not become longer, but stayed the same.
The plan could not have become shorter, because $v$ has no idle time in this scenario during which it could perform another action.
In a temporal domain with fuel, it is not safe to say whether
such a scenario hurts the plan duration. If there is a petrol station at $B$
and $v$ wants to refuel there, the other vehicle could have enabled the parallelization
of the \refuel{} and \pickup{} actions, therefore shaving off 1 time unit in total.
However, if there is no petrol
station at $B$, this situation reduces to the no-fuel variant. Given the relative rarity of petrol stations, this will reduce the search space somewhat.
\end{enumerate}
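To illustrate how the first two rules above could be operationalized, the following minimal Python sketch shows them as pruning checks. The state and action representation (\texttt{Package}, \texttt{Vehicle}, \texttt{Action}) is hypothetical and serves only to make the conditions explicit; it is not part of any planner described later.
\begin{verbatim}
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Package:
    name: str
    location: str       # current location (or the vehicle it is loaded in)
    destination: str

@dataclass(frozen=True)
class Vehicle:
    name: str
    location: str
    cargo: Tuple[str, ...]  # names of loaded packages

@dataclass(frozen=True)
class Action:
    kind: str           # "pick-up", "drop", "drive" or "refuel"
    vehicle: str
    package: str = ""

def prune_pickups(actions: List[Action],
                  packages: List[Package]) -> List[Action]:
    """Rule 1: never pick up a package already at its destination."""
    delivered = {p.name for p in packages if p.location == p.destination}
    return [a for a in actions
            if not (a.kind == "pick-up" and a.package in delivered)]

def forced_drops(vehicle: Vehicle, packages: List[Package]) -> List[str]:
    """Rule 2: packages that should be dropped immediately, because
    their destination equals the vehicle's current location."""
    by_name = {p.name: p for p in packages}
    return [name for name in vehicle.cargo
            if by_name[name].destination == vehicle.location]
\end{verbatim}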
Another insight can only be applied to the sequential variant of Transport:
\begin{enumerate}
\item \textbf{Drop from an active vehicle only}: Without loss of generality,
we can prune all plans where a \drop{} action of a vehicle happens
right after an action of a different vehicle. It is trivial to see that if we had a plan where
a \drop{} action
occurs after an action of a different vehicle, we can swap that action with the \drop{} action without changing the total plan cost or changing the validity of the plan.
Doing this repeatedly will yield an equivalent plan, in which the \drop{} action
occurs right after a different action of the same vehicle and the plan is of the same total cost and validity as the original plan. Repeating this process for each \drop{} action will yield a plan equivalent to the original plan, which additionally satisfies this rule.
\end{enumerate}
Finally, these are the properties that only meaningfully apply to the temporal variant:
\begin{enumerate}
\item \textbf{Refueling and dropping/picking up can occur at the same time}:
A plan in which a vehicle starts to pick up a package at the same location it just refueled at is suboptimal, if there was a time point during the \refuel{} action
when the vehicle was not dropping or picking up packages and the package was already
co-located with the vehicle at that time.
\item \textbf{No fuel left means refueling or ignoring the vehicle}:
If a vehicle is stuck with no fuel left, or with less fuel than is required for
any valid \drive{} action,
the correct thing to do is to either refuel or drop all packages and ignore the vehicle in further planning. Unfortunately,
we cannot say anything about the (non-)optimality of a plan where this occurs.
\end{enumerate}
\section{Datasets \& problem instances}\label{datasets}
For evaluation and comparison with other planners, we have acquired several problem datasets from previous runs of the IPC.
Table~\ref{tab:ipc-datasets} provides an overview of the individual datasets, their associated IPC competition, the track at the competition and the domain variant the problems are modeled in.
\begin{table}[tb]
\centering
\begin{tabular}{lclc}
\toprule
{\hspace{0.75em}\textbf{Dataset}} & \textbf{Competition} & {\hspace{2.5em}\textbf{IPC Track}} & \textbf{Formulation}\\
\midrule
netben-opt-6 & \multirow{4}{*}{IPC-6} & \href{http://icaps-conference.org/ipc2008/deterministic/NetBenefitOptimization.html}{Net-benefit: optimization} & Numeric \\
seq-opt-6 & & \href{http://icaps-conference.org/ipc2008/deterministic/SequentialOptimization.html}{Sequential: optimization} & STRIPS \\
seq-sat-6 & & \href{http://icaps-conference.org/ipc2008/deterministic/SequentialSatisficing.html}{Sequential: satisficing} & STRIPS \\
tempo-sat-6 & & \href{http://icaps-conference.org/ipc2008/deterministic/TemporalSatisficing.html}{Temporal: satisficing} & Temporal \\
\midrule
seq-mco-7 & \multirow{3}{*}{IPC-7} & \href{http://www.plg.inf.uc3m.es/ipc2011-deterministic/SequentialMulticore.html}{Sequential: multi-core} & \multirow{3}{*}{STRIPS} \\
seq-opt-7 & & \href{http://www.plg.inf.uc3m.es/ipc2011-deterministic/SequentialOptimization.html}{Sequential: optimization} & \\
seq-sat-7 & & \href{http://www.plg.inf.uc3m.es/ipc2011-deterministic/SequentialSatisficing.html}{Sequential: satisficing} & \\
\midrule
seq-agl-8 & \multirow{4}{*}{IPC-8} & \href{https://helios.hud.ac.uk/scommv/IPC-14/seqagi.html}{Sequential: agile} & \multirow{4}{*}{STRIPS} \\
seq-mco-8 & & \href{https://helios.hud.ac.uk/scommv/IPC-14/seqmulti.html}{Sequential: multi-core} & \\
seq-opt-8 & & \href{https://helios.hud.ac.uk/scommv/IPC-14/seqopt.html}{Sequential: optimization} & \\
seq-sat-8 & & \href{https://helios.hud.ac.uk/scommv/IPC-14/seqsat.html}{Sequential: satisficing} & \\
\bottomrule
\end{tabular}
\caption[Transport datasets from the 2008, 2011, and 2014 IPCs.]{Transport datasets from the 2008, 2011, and 2014 IPCs. All formulations assume capacitated vehicles. Numeric and temporal formulations also contain fuel demands and capacities. The temporal formulation additionally adds concurrent actions and a notion of time. More information can be found in Section~\ref{domain-desc}.}
\label{tab:ipc-datasets}
\end{table}
Short descriptions of the various tracks and subtracks can be found in the rule pages of IPC-6,\puncfootnote{\url{http://icaps-conference.org/ipc2008/deterministic/CompetitionRules.html}}
IPC-7,\puncfootnote{\url{http://www.plg.inf.uc3m.es/ipc2011-deterministic/CompetitionRules.html}}
and IPC-8.\puncfootnote{\url{https://helios.hud.ac.uk/scommv/IPC-14/rules.html}}
We have decided to split our further research based on the tracks at the IPC: we will focus on constructing
Transport-specific planners for the seq-sat-6, seq-sat-7, seq-sat-8, and tempo-sat-6 datasets,
corresponding to the sequential and temporal variants of Transport.
The datasets labeled seq-opt correspond to sequential optimality planning tracks,
where only optimal plans for problems are accepted as correct.
Datasets labeled seq-mco are used in multi-core satisficing tracks (multi-threaded planners)
and seq-agl are used in agile tracks (minimize the CPU time required to find a satisficing plan).
The netben-opt-6 dataset contains Net Benefit problems, where the aim
is to compensate between achieving \textit{soft goals} and minimizing the total cost.
Soft goals are goals that do not necessarily have to be satisfied in a goal state,
but it is usually better for the total score if they are.
Each problem usually specifies a metric used for calculation of the score.
We will not focus on
these problems in this work.
In addition to the domain definition, we need to take a look at the individual problems to fully utilize our knowledge advantage.
Both the seq-sat-6 and tempo-sat-6 datasets contain 30 problems, while seq-sat-7 and seq-sat-8 contain only 20 problems each. Table~\ref{tab:dataset-dimensions} shows the
dimensions of each problem instance for each mentioned dataset.
While the planners (including our domain-specific ones) do not know this,
each problem was constructed with a scenario in mind. Locations in problems are not just
placed randomly, but usually belong to cities. Inside a city, the road network
tends to be dense and road lengths small, while roads connecting cities
are rare and usually significantly longer.
\begin{table}[p]
\scriptsize
\centering
\begin{subtable}[t]{0.42\textwidth}
\centering
\csvreader[tabular=rrrrrrl,
table head=\toprule\textbf{\#} & \rot{\textbf{Vehicles}} & \rot{\textbf{Packages}} & \rot{\textbf{Cities}} & \rot{\textbf{Locations}} & \rot{\textbf{Roads}} & \rot{\textbf{States}}\\\midrule,
late after line=\mbox{},
table foot=\\\bottomrule]%
{../data/seq-sat-6.csv}{Problem=\problem,Vehicles=\vehicles,Packages=\packages,Cities=\cities,Locations=\locations,%
Roads=\roads,Stateslat=\stateslat}%
{\problem & \vehicles & \packages & \cities & \locations & \roads & \stateslat}%
\caption{Problem dimensions of seq-sat-6.}
\label{tab:seq-sat-6-dims}
\end{subtable}
\quad
\begin{subtable}[t]{0.54\textwidth}
\centering
\csvreader[tabular=rrrrrrrl,
table head=\toprule\textbf{\#} & \rot{\textbf{Vehicles}} & \rot{\textbf{Packages}} & \rot{\textbf{Cities}} & \rot{\textbf{Locations}} & \rot{\textbf{Roads}} & \rot{\textbf{Petrol}} & \rot{\textbf{States}}\\\midrule,
late after line=\mbox{},
table foot=\\\bottomrule]%
{../data/tempo-sat-6.csv}{Problem=\problem,Vehicles=\vehicles,Packages=\packages,Cities=\cities,Locations=\locations,%
Roads=\roads,Petrol=\petrol,Stateslat=\stateslat}%
{\problem & \vehicles & \packages & \cities & \locations & \roads & \petrol & \stateslat}%
\caption{Problem dimensions of tempo-sat-6.}
\label{tab:tempo-sat-6-dims}
\end{subtable}
\vspace{0.21cm}
\begin{subtable}[t]{0.42\textwidth}
\centering
\csvreader[tabular=rrrrrrl,
table head=\toprule\textbf{\#} & \rot{\textbf{Vehicles}} & \rot{\textbf{Packages}} & \rot{\textbf{Cities}} & \rot{\textbf{Locations}} & \rot{\textbf{Roads}} & \rot{\textbf{States}}\\\midrule,
late after line=\mbox{},
table foot=\\\bottomrule]%
{../data/seq-sat-7.csv}{Problem=\problem,Vehicles=\vehicles,Packages=\packages,Cities=\cities,Locations=\locations,%
Roads=\roads,Stateslat=\stateslat}%
{\problem & \vehicles & \packages & \cities & \locations & \roads & \stateslat}%
\caption{Problem dimensions of seq-sat-7.}
\label{tab:seq-sat-7-dims}
\end{subtable}
\quad
\begin{subtable}[t]{0.54\textwidth}
\centering
\csvreader[tabular=rrrrrrl,
table head=\toprule\textbf{\#} & \rot{\textbf{Vehicles}} & \rot{\textbf{Packages}} & \rot{\textbf{Cities}} & \rot{\textbf{Locations}} & \rot{\textbf{Roads}} & \rot{\textbf{States}}\\\midrule,
late after line=\mbox{},
table foot=\\\bottomrule]%
{../data/seq-sat-8.csv}{Problem=\problem,Vehicles=\vehicles,Packages=\packages,Cities=\cities,Locations=\locations,%
Roads=\roads,Stateslat=\stateslat}%
{\problem & \vehicles & \packages & \cities & \locations & \roads & \stateslat}%
\caption{Problem dimensions of seq-sat-8.}
\label{tab:seq-sat-8-dims}
\end{subtable}
\caption[Problem dimensions of selected Transport IPC datasets.]{Problem dimensions of selected Transport IPC datasets.
The ``states'' value is a state space size estimate as discussed in Section~\ref{datasets} (in temporal domains calculated with $f_{max} = 100$ and the $\mt{GCD}$ of \texttt{fuel-demand}s equal to 1).
Bold problem instances correspond to Figure~\ref{fig:ipc08_seq-sat_p13} and Figure~\ref{fig:ipc08_tempo-sat_p30}, respectively.}
\label{tab:dataset-dimensions}
\end{table}
\begin{figure}[tbp]
\centering
\includegraphics[width=1.0\textwidth]{../img/ipc08_tempo-sat_p30_land}
\caption[Road network visualization of the \texttt{p30} problem from the tempo-sat track of IPC 2008.]{Road network visualization of the \texttt{p30} problem from the tempo-sat track of IPC 2008. Red dots represent locations (graph nodes), roads (graph edges) are represented by black arrows, vehicles are plotted as blue squares, and packages as purple squares. Darker red dots represent locations with petrol stations. In this specific problem, the circle of darker nodes in the center represents truck hubs and each of the attached subgraphs are individual cities.}
\label{fig:ipc08_tempo-sat_p30}
\end{figure}
All sequential problem instances in seq-sat datasets have symmetric roads and road lengths and can, therefore,
be simplified by assuming the use of an undirected graph.
All packages are always positioned at locations
in the initial state, not in vehicles (in all domain variants).
The temporal problems in tempo-sat-6 do not have the same properties:
problems 1--20 have symmetric roads and lengths, but
problems 21--30 only have symmetric roads, not symmetric lengths in general.
The same applies to the fuel demands of roads. Additionally,
these problems have vehicle target locations, which means that not only packages,
but also
vehicles will need to be positioned at specific locations
after package delivery finishes. We can interpret this goal
in a similar way as in a VRP, where a vehicle target location is thought to be
a truck depot or hub. A visualization of such a problem can be seen in Figure~\ref{fig:ipc08_tempo-sat_p30}.
No sequential problem has this requirement, even though the domain formulation allows it.
Given a specific Transport problem, we can calculate the size of the set of states $S$.
For sequential Transport, the state space size can be estimated as: $$l^v \cdot (l+v)^p,$$ where $l$ is the number of locations,
$v$ the number of vehicles, and $p$ the number of packages. The formula represents
the number of choices for the location of vehicles, combined with the number of choices
for the location of packages (these include being loaded onto a vehicle). We have eliminated
invalid states arising from inconsistent $\mt{in}(p)$ and $\mt{at}(p)$ state variable values,
but some invalid states are still left in the state size estimate (for example, states
where vehicles are loaded beyond their maximum capacity). We did not include a notion of
capacity in this estimate because it can be computed from the locations of packages.
For temporal Transport, the problem state space size estimate is more complicated,
due to actions being parallel. A reasonable estimate could be:
$$(l+r)^v \cdot \left(\frac{f_{max}}{\mt{GCD} \{\mt{fuel-demand}(l_1, l_2) | (l_1, l_2) \in R\}}\right)^v \cdot (l+v)^p,$$ where $R$ represents the set of roads, $r = |R|$ is the number of roads,
$\mt{GCD}$ is the greatest common divisor function,
and $f_{max}$ is the maximum fuel capacity for vehicles. The $f_{max}$ value presents a simplification, where all vehicles
have an equal maximum fuel capacity.
The formula expresses the choice of positions of vehicles (vehicles can now be in the middle
of a \drive{} action), the choice of the current fuel capacity of vehicles (cannot be simply
calculated from the other state variables, only from all previous actions),
and the choice of location for packages.
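For illustration, consider a small hypothetical instance (the dimensions are chosen only for this example) with $l = 5$ locations, $r = 12$ roads, $v = 2$ vehicles, $p = 2$ packages, $f_{max} = 100$, and a $\mt{GCD}$ of \texttt{fuel-demand}s equal to 1. The sequential estimate evaluates to
$$5^2 \cdot (5+2)^2 = 25 \cdot 49 \approx 1.2 \cdot 10^{3},$$
whereas the temporal estimate grows to
$$(5+12)^2 \cdot \left(\frac{100}{1}\right)^2 \cdot (5+2)^2 = 289 \cdot 10^{4} \cdot 49 \approx 1.4 \cdot 10^{8},$$
which is of the same order as the smallest instances reported in Table~\ref{tab:problem-properties}.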
We can see that problems vary not only in size but also in what features they include
and what assumptions they make.
A summary of the acquired dataset-specific insights is available in Table~\ref{tab:problem-properties}.
\begin{table}[tb]
\centering
\begin{tabular}{cccccr}
\toprule
\multirow{3}{*}{\textbf{Dataset}} & \multirow{3}{*}{\textbf{Problems}} & \textbf{Sym.} & \textbf{Sym.} & \textbf{Vehicle} & \multirow{3}{*}{\textbf{$\approx$ \# states}}\\
& & \textbf{road} & \textbf{fuel} & \textbf{target} &\\
& & \textbf{lengths} & \textbf{demands} & \textbf{locations} &\\
\midrule
seq-sat-6 & 01--30 & Yes & N/A & No & $10^{3} \to 10^{43}$\\
seq-sat-7 & 01--20 & Yes & N/A & No & $10^{25} \to 10^{59}$\\
seq-sat-8 & 01--20 & Yes & N/A & No & $10^{50} \to 10^{78}$\\\midrule
\multirow{2}{*}{tempo-sat-6} & 01--20 & Yes & Yes & No & $10^{8} \to 10^{52}$\\
& 21--30 & No & No & Yes & $10^{18} \to 10^{81}$\\
\bottomrule
\end{tabular}
\caption[Summary of problem instance properties in IPC Transport datasets.]{Summary of problem instance properties in IPC Transport datasets.
State space size estimates in temporal domains
are calculated using $f_{max} = 100$ and the $\mt{GCD}$ of \texttt{fuel-demand}s equal to 1.}
\label{tab:problem-properties}
% https://www.wolframalpha.com/input/?i=l%5Ev+*+(v%2Bl)%5Ep,+p+%3D+2,+v+%3D+2,+l+%3D+5
% https://www.wolframalpha.com/input/?i=(l%2Br)%5Ev+*+c%5Ev+*(v%2Bl)%5Ep,+p+%3D+2,+v+%3D+2,+l+%3D5,+r%3D12,+c%3D100
\end{table}
|
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include <folly/experimental/io/HugePages.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <cctype>
#include <cstring>
#include <algorithm>
#include <stdexcept>
#include <system_error>
#include <boost/regex.hpp>
#include <folly/Conv.h>
#include <folly/CppAttributes.h>
#include <folly/Format.h>
#include <folly/Range.h>
#include <folly/String.h>
#include <folly/gen/Base.h>
#include <folly/gen/File.h>
#include <folly/gen/String.h>
namespace folly {
namespace {
// Get the default huge page size
size_t getDefaultHugePageSize() {
// We need to parse /proc/meminfo
static const boost::regex regex(R"!(Hugepagesize:\s*(\d+)\s*kB)!");
size_t pageSize = 0;
boost::cmatch match;
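// Note: the pipeline below yields true only if it consumed every line without
// the lambda requesting an early stop, i.e. no "Hugepagesize:" line was matched.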
bool error = gen::byLine("/proc/meminfo") | [&](StringPiece line) -> bool {
if (boost::regex_match(line.begin(), line.end(), match, regex)) {
StringPiece numStr(
line.begin() + match.position(1), size_t(match.length(1)));
pageSize = to<size_t>(numStr) * 1024; // in KiB
return false; // stop
}
return true;
};
if (error) {
throw std::runtime_error("Can't find default huge page size");
}
return pageSize;
}
// Get raw huge page sizes (without mount points, they'll be filled later)
HugePageSizeVec readRawHugePageSizes() {
// We need to parse file names from /sys/kernel/mm/hugepages
static const boost::regex regex(R"!(hugepages-(\d+)kB)!");
boost::smatch match;
HugePageSizeVec vec;
fs::path path("/sys/kernel/mm/hugepages");
for (fs::directory_iterator it(path); it != fs::directory_iterator(); ++it) {
std::string filename(it->path().filename().string());
if (boost::regex_match(filename, match, regex)) {
StringPiece numStr(
filename.data() + match.position(1), size_t(match.length(1)));
vec.emplace_back(to<size_t>(numStr) * 1024);
}
}
return vec;
}
// Parse the value of a pagesize mount option
// Format: number, optional K/M/G/T suffix, trailing junk allowed
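// e.g. a value of "2M" parses to 2 * (1 << 20) bytes; no suffix means plain bytes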
size_t parsePageSizeValue(StringPiece value) {
static const boost::regex regex(R"!((\d+)([kmgt])?.*)!", boost::regex::icase);
boost::cmatch match;
if (!boost::regex_match(value.begin(), value.end(), match, regex)) {
throw std::runtime_error("Invalid pagesize option");
}
char c = '\0';
if (match.length(2) != 0) {
c = char(tolower(value[size_t(match.position(2))]));
}
StringPiece numStr(value.data() + match.position(1), size_t(match.length(1)));
auto const size = to<size_t>(numStr);
auto const mult = [c] {
switch (c) {
case 't':
return 1ull << 40;
case 'g':
return 1ull << 30;
case 'm':
return 1ull << 20;
case 'k':
return 1ull << 10;
default:
return 1ull << 0;
}
}();
return size * mult;
}
/**
* Get list of supported huge page sizes and their mount points, if
* hugetlbfs file systems are mounted for those sizes.
*/
HugePageSizeVec readHugePageSizes() {
HugePageSizeVec sizeVec = readRawHugePageSizes();
if (sizeVec.empty()) {
return sizeVec; // nothing to do
}
std::sort(sizeVec.begin(), sizeVec.end());
size_t defaultHugePageSize = getDefaultHugePageSize();
struct PageSizeLess {
bool operator()(const HugePageSize& a, size_t b) const {
return a.size < b;
}
bool operator()(size_t a, const HugePageSize& b) const {
return a < b.size;
}
};
// Read and parse /proc/mounts
std::vector<StringPiece> parts;
std::vector<StringPiece> options;
gen::byLine("/proc/mounts") | gen::eachAs<StringPiece>() |
[&](StringPiece line) {
parts.clear();
split(" ", line, parts);
// device path fstype options uid gid
if (parts.size() != 6) {
throw std::runtime_error("Invalid /proc/mounts line");
}
if (parts[2] != "hugetlbfs") {
return; // we only care about hugetlbfs
}
options.clear();
split(",", parts[3], options);
size_t pageSize = defaultHugePageSize;
// Search for the "pagesize" option, which must have a value
for (auto& option : options) {
// key=value
auto p = static_cast<const char*>(
memchr(option.data(), '=', option.size()));
if (!p) {
continue;
}
if (StringPiece(option.data(), p) != "pagesize") {
continue;
}
pageSize = parsePageSizeValue(StringPiece(p + 1, option.end()));
break;
}
auto pos = std::lower_bound(
sizeVec.begin(), sizeVec.end(), pageSize, PageSizeLess());
if (pos == sizeVec.end() || pos->size != pageSize) {
throw std::runtime_error("Mount page size not found");
}
if (!pos->mountPoint.empty()) {
// Only one mount point per page size is allowed
return;
}
// Store mount point
fs::path path(parts[1].begin(), parts[1].end());
struct stat st;
const int ret = stat(path.string().c_str(), &st);
if (ret == -1 && errno == ENOENT) {
return;
}
checkUnixError(ret, "stat hugepage mountpoint failed");
pos->mountPoint = fs::canonical(path);
pos->device = st.st_dev;
};
return sizeVec;
}
} // namespace
const HugePageSizeVec& getHugePageSizes() {
static HugePageSizeVec sizes = readHugePageSizes();
return sizes;
}
const HugePageSize* getHugePageSize(size_t size) {
// Linear search is just fine.
for (auto& p : getHugePageSizes()) {
if (p.mountPoint.empty()) {
continue;
}
if (size == 0 || size == p.size) {
return &p;
}
}
return nullptr;
}
const HugePageSize* getHugePageSizeForDevice(dev_t device) {
// Linear search is just fine.
for (auto& p : getHugePageSizes()) {
if (p.mountPoint.empty()) {
continue;
}
if (device == p.device) {
return &p;
}
}
return nullptr;
}
} // namespace folly
|
"""Mobjects that represent coordinate systems."""
__all__ = [
"CoordinateSystem",
"Axes",
"ThreeDAxes",
"NumberPlane",
"PolarPlane",
"ComplexPlane",
]
import fractions as fr
import numbers
from typing import Callable, Dict, Iterable, Optional, Sequence, Tuple, Union
import numpy as np
from colour import Color
from manim.mobject.opengl_compatibility import ConvertToOpenGL
from .. import config
from ..constants import *
from ..mobject.functions import ParametricFunction
from ..mobject.geometry import (
Arrow,
Circle,
DashedLine,
Dot,
Line,
Rectangle,
RegularPolygon,
)
from ..mobject.number_line import NumberLine
from ..mobject.svg.tex_mobject import MathTex
from ..mobject.types.vectorized_mobject import (
Mobject,
VDict,
VectorizedPoint,
VGroup,
VMobject,
)
from ..utils.color import (
BLACK,
BLUE,
BLUE_D,
GREEN,
LIGHT_GREY,
WHITE,
YELLOW,
color_gradient,
invert_color,
)
from ..utils.config_ops import merge_dicts_recursively, update_dict_recursively
from ..utils.simple_functions import binary_search
from ..utils.space_ops import angle_of_vector
class CoordinateSystem:
"""
Abstract class for Axes and NumberPlane
Examples
--------
.. manim:: CoordSysExample
:save_last_frame:
class CoordSysExample(Scene):
def construct(self):
# the location of the ticks depends on the x_range and y_range.
grid = Axes(
x_range=[0, 1, 0.05], # step size determines num_decimal_places.
y_range=[0, 1, 0.05],
x_length=9,
y_length=5.5,
axis_config={
"numbers_to_include": np.arange(0, 1 + 0.1, 0.1),
"number_scale_value": 0.5,
},
tips=False,
)
# Labels for the x-axis and y-axis.
y_label = grid.get_y_axis_label("y", edge=LEFT, direction=LEFT, buff=0.4)
x_label = grid.get_x_axis_label("x")
grid_labels = VGroup(x_label, y_label)
graphs = VGroup()
for n in np.arange(1, 20 + 0.5, 0.5):
graphs += grid.get_graph(lambda x: x ** n, color=WHITE)
graphs += grid.get_graph(
lambda x: x ** (1 / n), color=WHITE, use_smoothing=False
)
# Extra lines and labels for point (1,1)
graphs += grid.get_horizontal_line(grid.c2p(1, 1, 0), color=BLUE)
graphs += grid.get_vertical_line(grid.c2p(1, 1, 0), color=BLUE)
graphs += Dot(point=grid.c2p(1, 1, 0), color=YELLOW)
graphs += Tex("(1,1)").scale(0.75).next_to(grid.c2p(1, 1, 0))
title = Title(
# spaces between braces to prevent SyntaxError
r"Graphs of $y=x^{ {1}\over{n} }$ and $y=x^n (n=1,2,3,...,20)$",
include_underline=False,
scale_factor=0.85,
)
self.add(title, graphs, grid, grid_labels)
"""
def __init__(
self,
x_range=None,
y_range=None,
x_length=None,
y_length=None,
dimension=2,
):
self.dimension = dimension
default_step = 1
if x_range is None:
x_range = [
round(-config["frame_x_radius"]),
round(config["frame_x_radius"]),
default_step,
]
elif len(x_range) == 2:
x_range = [*x_range, default_step]
if y_range is None:
y_range = [
round(-config["frame_y_radius"]),
round(config["frame_y_radius"]),
default_step,
]
elif len(y_range) == 2:
y_range = [*y_range, default_step]
self.x_range = x_range
self.y_range = y_range
self.x_length = x_length
self.y_length = y_length
self.num_sampled_graph_points_per_tick = 10
def coords_to_point(self, *coords):
raise NotImplementedError()
def point_to_coords(self, point):
raise NotImplementedError()
def c2p(self, *coords):
"""Abbreviation for coords_to_point"""
return self.coords_to_point(*coords)
def p2c(self, point):
"""Abbreviation for point_to_coords"""
return self.point_to_coords(point)
def get_axes(self):
raise NotImplementedError()
def get_axis(self, index):
return self.get_axes()[index]
def get_x_axis(self):
return self.get_axis(0)
def get_y_axis(self):
return self.get_axis(1)
def get_z_axis(self):
return self.get_axis(2)
def get_x_axis_label(self, label_tex, edge=UR, direction=UR, **kwargs):
return self.get_axis_label(
label_tex, self.get_x_axis(), edge, direction, **kwargs
)
def get_y_axis_label(
self, label_tex, edge=UR, direction=UP * 0.5 + RIGHT, **kwargs
):
return self.get_axis_label(
label_tex, self.get_y_axis(), edge, direction, **kwargs
)
# move to a util_file, or Mobject()??
@staticmethod
def create_label_tex(label_tex) -> "Mobject":
"""Checks if the label is a ``float``, ``int`` or a ``str`` and creates a :class:`~.MathTex` label accordingly.
Parameters
----------
label_tex : The label to be compared against the above types.
Returns
-------
:class:`~.Mobject`
The label.
"""
if (
isinstance(label_tex, float)
or isinstance(label_tex, int)
or isinstance(label_tex, str)
):
label_tex = MathTex(label_tex)
return label_tex
def get_axis_label(
self,
label: Union[float, str, "Mobject"],
axis: "Mobject",
edge: Sequence[float],
direction: Sequence[float],
buff: float = SMALL_BUFF,
) -> "Mobject":
"""Gets the label for an axis.
Parameters
----------
label
The label. Can be any mobject or `int/float/str` to be used with :class:`~.MathTex`
axis
The axis to which the label will be added.
edge
The edge of the axes to which the label will be added. ``RIGHT`` adds to the right side of the axis
direction
Allows for further positioning of the label.
buff
The distance of the label from the line.
Returns
-------
:class:`~.Mobject`
The positioned label along the given axis.
"""
label = self.create_label_tex(label)
label.next_to(axis.get_edge_center(edge), direction, buff=buff)
label.shift_onto_screen(buff=MED_SMALL_BUFF)
return label
def get_axis_labels(
self,
x_label: Union[float, str, "Mobject"] = "x",
y_label: Union[float, str, "Mobject"] = "y",
) -> "VGroup":
"""Defines labels for the x_axis and y_axis of the graph.
Parameters
----------
x_label
The label for the x_axis
y_label
The label for the y_axis
Returns
-------
:class:`~.VGroup`
A :class:`~.VGroup` of the labels for the x_axis and y_axis.
See Also
--------
:class:`get_x_axis_label`
:class:`get_y_axis_label`
"""
self.axis_labels = VGroup(
self.get_x_axis_label(x_label),
self.get_y_axis_label(y_label),
)
return self.axis_labels
def add_coordinates(
self,
*axes_numbers: Union[
Optional[Iterable[float]], Union[Dict[float, Union[str, float, "Mobject"]]]
],
**kwargs,
):
"""Adds labels to the axes.
Parameters
----------
axes_numbers
The numbers to be added to the axes. Use ``None`` to represent an axis with default labels.
Examples
--------
.. code-block:: python
ax = ThreeDAxes()
x_labels = range(-4, 5)
z_labels = range(-4, 4, 2)
ax.add_coordinates(x_labels, None, z_labels) # default y labels, custom x & z labels
ax.add_coordinates(x_labels) # only x labels
.. code-block:: python
# specifically control the position and value of the labels using a dict
ax = Axes(x_range=[0, 7])
x_pos = [x for x in range(1, 8)]
# strings are automatically converted into a `Tex` mobject.
x_vals = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
x_dict = dict(zip(x_pos, x_vals))
ax.add_coordinates(x_dict)
"""
self.coordinate_labels = VGroup()
# if nothing is passed to axes_numbers, produce axes with default labelling
if not axes_numbers:
axes_numbers = [None for _ in range(self.dimension)]
for axis, values in zip(self.axes, axes_numbers):
if isinstance(values, dict):
labels = axis.add_labels(values, **kwargs)
else:
labels = axis.add_numbers(values, **kwargs)
self.coordinate_labels.add(labels)
return self
def get_line_from_axis_to_point(
self,
index: int,
point: Sequence[float],
line_func: Line = DashedLine,
color: Color = LIGHT_GREY,
stroke_width: float = 2,
) -> Line:
"""Returns a straight line from a given axis to a point in the scene.
Parameters
----------
index
Specifies the axis from which to draw the line. `0 = x_axis`, `1 = y_axis`
point
The point to which the line will be drawn.
line_func
The function of the :class:`~.Line` mobject used to construct the line.
color
The color of the line.
stroke_width
The stroke width of the line.
Returns
-------
:class:`~.Line`
The line from an axis to a point.
See Also
--------
:class:`get_vertical_line`
:class:`get_horizontal_line`
"""
axis = self.get_axis(index)
line = line_func(axis.get_projection(point), point)
line.set_stroke(color, stroke_width)
return line
def get_vertical_line(self, point: Sequence[float], **kwargs) -> Line:
"""A vertical line from the x-axis to a given point in the scene.
Parameters
----------
point
The point to which the vertical line will be drawn.
kwargs
Additional parameters to be passed to :class:`get_line_from_axis_to_point`
Returns
-------
:class:`Line`
A vertical line from the x-axis to the point.
"""
return self.get_line_from_axis_to_point(0, point, **kwargs)
def get_horizontal_line(self, point: Sequence[float], **kwargs) -> Line:
"""A horizontal line from the y-axis to a given point in the scene.
Parameters
----------
point
The point to which the horizontal line will be drawn.
kwargs
Additional parameters to be passed to :class:`get_line_from_axis_to_point`
Returns
-------
:class:`Line`
A horizontal line from the y-axis to the point.
"""
return self.get_line_from_axis_to_point(1, point, **kwargs)
# graphing
def get_graph(
self,
function: Callable[[float], float],
x_range: Optional[Sequence[float]] = None,
**kwargs,
):
"""Generates a curve based on a function.
Parameters
----------
function
The function used to construct the :class:`~.ParametricFunction`.
x_range
The range of the curve along the axes. ``x_range = [x_min, x_max]``.
kwargs
Additional parameters to be passed to :class:`~.ParametricFunction`.
Returns
-------
:class:`~.ParametricFunction`
The plotted curve.
"""
t_range = np.array(self.x_range, dtype=float)
if x_range is not None:
t_range[: len(x_range)] = x_range
if x_range is None or len(x_range) < 3:
# if t_range has a defined step size, increase the number of sample points per tick
t_range[2] /= self.num_sampled_graph_points_per_tick
# For axes, the third coordinate of x_range indicates
# tick frequency. But for functions, it indicates a
# sample frequency
graph = ParametricFunction(
lambda t: self.coords_to_point(t, function(t)), t_range=t_range, **kwargs
)
graph.underlying_function = function
return graph
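# Illustrative usage sketch (commented out; not part of the library source).
# Assuming an existing ``Axes`` instance ``ax``, ``get_graph`` wraps the callable
# in a ParametricFunction sampled over ``x_range`` (or the axes' own x_range):
#
#     ax = Axes(x_range=[-3, 3], y_range=[0, 9])
#     parabola = ax.get_graph(lambda x: x ** 2, x_range=[-2, 2], color=RED)
#
# The returned mobject keeps the callable in ``parabola.underlying_function``,
# which later helpers such as ``input_to_graph_point`` rely on.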
def get_parametric_curve(self, function, **kwargs):
dim = self.dimension
graph = ParametricFunction(
lambda t: self.coords_to_point(*function(t)[:dim]), **kwargs
)
graph.underlying_function = function
return graph
def input_to_graph_point(self, x: float, graph: "ParametricFunction") -> np.ndarray:
"""Returns the coordinates of the point on the ``graph``
corresponding to the input ``x`` value.
Parameters
----------
x
The x-value for which the coordinates of the corresponding point on the :attr:`graph` are to be found.
graph
The :class:`~.ParametricFunction` on which the x-value and y-value lie.
Returns
-------
:class:`np.ndarray`
The coordinates of the point on the :attr:`graph` corresponding to the :attr:`x` value.
"""
if hasattr(graph, "underlying_function"):
return graph.function(x)
else:
alpha = binary_search(
function=lambda a: self.point_to_coords(graph.point_from_proportion(a))[
0
],
target=x,
lower_bound=self.x_range[0],
upper_bound=self.x_range[1],
)
if alpha is not None:
return graph.point_from_proportion(alpha)
else:
return None
def i2gp(self, x, graph):
"""
Alias for :meth:`input_to_graph_point`.
"""
return self.input_to_graph_point(x, graph)
def get_graph_label(
self,
graph: "ParametricFunction",
label: Union[float, str, "Mobject"] = "f(x)",
x_val: Optional[float] = None,
direction: Sequence[float] = RIGHT,
buff: float = MED_SMALL_BUFF,
color: Optional[Color] = None,
dot: bool = False,
dot_config: Optional[dict] = None,
) -> Mobject:
"""Creates a properly positioned label for the passed graph,
styled with parameters and an optional dot.
Parameters
----------
graph
The curve of the function plotted.
label
The label for the function's curve. Written with :class:`MathTex` if not specified otherwise.
x_val
The x_value with which the label should be aligned.
direction
The Cartesian position of the label relative to the curve, e.g. ``LEFT``, ``RIGHT``.
buff
The buffer space between the curve and the label.
color
The color of the label.
dot
Adds a dot at the given point on the graph.
dot_config
Additional parameters to be passed to :class:`~.Dot`.
Returns
-------
:class:`Mobject`
The positioned label and :class:`~.Dot`, if applicable.
"""
if dot_config is None:
dot_config = {}
label = self.create_label_tex(label)
color = color or graph.get_color()
label.set_color(color)
if x_val is None:
# Search from right to left
for x in np.linspace(self.x_range[1], self.x_range[0], 100):
point = self.input_to_graph_point(x, graph)
if point[1] < config["frame_y_radius"]:
break
else:
point = self.input_to_graph_point(x_val, graph)
label.next_to(point, direction, buff=buff)
label.shift_onto_screen()
if dot:
label.add(Dot(point=point, **dot_config))
return label
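# Illustrative usage sketch (commented out; not part of the library source).
# When ``x_val`` is omitted, the loop above searches from x_max down to x_min and
# places the label near the right-most point of the curve that still fits on screen:
#
#     graph = ax.get_graph(lambda x: np.sin(x))
#     label = ax.get_graph_label(graph, label=MathTex(r"\sin(x)"), x_val=PI, dot=True)
#
# ``ax`` and ``graph`` are assumed placeholders for an existing Axes and curve.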
# calculus
def get_riemann_rectangles(
self,
graph: "ParametricFunction",
x_range: Optional[Sequence[float]] = None,
dx: Optional[float] = 0.1,
input_sample_type: str = "left",
stroke_width: float = 1,
stroke_color: Color = BLACK,
fill_opacity: float = 1,
color: Union[Iterable[Color], Color] = np.array((BLUE, GREEN)),
show_signed_area: bool = True,
bounded_graph: "ParametricFunction" = None,
blend: bool = False,
width_scale_factor: float = 1.001,
) -> VGroup:
"""This method returns the :class:`~.VGroup` of the Riemann Rectangles for
a particular curve.
Parameters
----------
graph
The graph whose area will be approximated by Riemann rectangles.
x_range
The minimum and maximum x-values of the rectangles. ``x_range = [x_min, x_max]``.
dx
The change in x-value that separates each rectangle.
input_sample_type
Can be any of ``"left"``, ``"right"`` or ``"center"``. Refers to where
the sample point for the height of each Riemann Rectangle
will be inside the segments of the partition.
stroke_width
The stroke_width of the border of the rectangles.
stroke_color
The color of the border of the rectangle.
fill_opacity
The opacity of the rectangles.
color
The colors of the rectangles. Creates a balanced gradient if multiple colors are passed.
show_signed_area
Indicates negative area when the curve dips below the x-axis by inverting its color.
blend
Sets the :attr:`stroke_color` to :attr:`fill_color`, blending the rectangles without clear separation.
bounded_graph
If a secondary graph is specified, encloses the area between the two curves.
width_scale_factor
The factor by which the width of the rectangles is scaled.
Returns
-------
:class:`~.VGroup`
A :class:`~.VGroup` containing the Riemann Rectangles.
"""
# setting up x_range, overwrite user's third input
if x_range is None:
if bounded_graph is None:
x_range = [graph.t_min, graph.t_max]
else:
x_min = max(graph.t_min, bounded_graph.t_min)
x_max = min(graph.t_max, bounded_graph.t_max)
x_range = [x_min, x_max]
x_range = [*x_range[:2], dx]
rectangles = VGroup()
x_range = np.arange(*x_range)
# allows passing a string to color the graph
if type(color) is str:
colors = [color] * len(x_range)
else:
colors = color_gradient(color, len(x_range))
for x, color in zip(x_range, colors):
if input_sample_type == "left":
sample_input = x
elif input_sample_type == "right":
sample_input = x + dx
elif input_sample_type == "center":
sample_input = x + 0.5 * dx
else:
raise ValueError("Invalid input sample type")
graph_point = self.input_to_graph_point(sample_input, graph)
if bounded_graph is None:
y_point = self.origin_shift(self.y_range)
else:
y_point = bounded_graph.underlying_function(x)
points = VGroup(
*list(
map(
VectorizedPoint,
[
self.coords_to_point(x, y_point),
self.coords_to_point(x + width_scale_factor * dx, y_point),
graph_point,
],
)
)
)
rect = Rectangle().replace(points, stretch=True)
rectangles.add(rect)
# checks if the rectangle is under the x-axis
if self.p2c(graph_point)[1] < y_point and show_signed_area:
color = invert_color(color)
# blends rectangles smoothly
if blend:
stroke_color = color
rect.set_style(
fill_color=color,
fill_opacity=fill_opacity,
stroke_color=stroke_color,
stroke_width=stroke_width,
)
return rectangles
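# Illustrative usage sketch (commented out; not part of the library source).
# Each rectangle spans [x, x + dx] horizontally and stretches from the baseline
# (the x-axis, or ``bounded_graph`` if given) to the sampled graph point, so a
# smaller ``dx`` approximates the signed area more closely:
#
#     rects_coarse = ax.get_riemann_rectangles(graph, x_range=[0, 4], dx=0.5)
#     rects_fine = ax.get_riemann_rectangles(graph, x_range=[0, 4], dx=0.05)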
def get_area(
self,
graph: "ParametricFunction",
x_range: Optional[Sequence[float]] = None,
color: Union[Color, Iterable[Color]] = [BLUE, GREEN],
opacity: float = 0.3,
dx_scaling: float = 1,
bounded: "ParametricFunction" = None,
):
"""Returns a :class:`~.VGroup` of Riemann rectangles sufficiently small enough to visually
approximate the area under the graph passed.
Parameters
----------
graph
The graph/curve under which the area is to be found.
x_range
The range of the minimum and maximum x-values of the area. ``x_range = [x_min, x_max]``.
color
The color of the area. Creates a gradient if a list of colors is provided.
opacity
The opacity of the area.
bounded
If a secondary :attr:`graph` is specified, encloses the area between the two curves.
dx_scaling
The factor by which the :attr:`dx` value is scaled.
Returns
-------
:class:`~.VGroup`
The :class:`~.VGroup` containing the Riemann Rectangles.
"""
dx = self.x_range[2] / 500
return self.get_riemann_rectangles(
graph,
x_range=x_range,
dx=dx * dx_scaling,
bounded_graph=bounded,
blend=True,
color=color,
show_signed_area=False,
).set_opacity(opacity=opacity)
def angle_of_tangent(
self, x: float, graph: "ParametricFunction", dx: float = 1e-8
) -> float:
"""Returns the angle to the x-axis of the tangent
to the plotted curve at a particular x-value.
Parameters
----------
x
The x-value at which the tangent must touch the curve.
graph
The :class:`~.ParametricFunction` for which to calculate the tangent.
dx
The small change in `x` with which a small change in `y`
will be compared in order to obtain the tangent.
Returns
-------
:class:`float`
The angle of the tangent with the x axis.
"""
p0 = self.input_to_graph_point(x, graph)
p1 = self.input_to_graph_point(x + dx, graph)
return angle_of_vector(p1 - p0)
def slope_of_tangent(
self, x: float, graph: "ParametricFunction", **kwargs
) -> float:
"""Returns the slope of the tangent to the plotted curve
at a particular x-value.
Parameters
----------
x
The x-value at which the tangent must touch the curve.
graph
The :class:`~.ParametricFunction` for which to calculate the tangent.
Returns
-------
:class:`float`
The slope of the tangent with the x axis.
"""
return np.tan(self.angle_of_tangent(x, graph, **kwargs))
def get_derivative_graph(
self, graph: "ParametricFunction", color: Color = GREEN, **kwargs
) -> ParametricFunction:
"""Returns the curve of the derivative of the passed
graph.
Parameters
----------
graph
The graph for which the derivative will be found.
color
The color of the derivative curve.
**kwargs
Any valid keyword argument of :class:`~.ParametricFunction`
Returns
-------
:class:`~.ParametricFunction`
The curve of the derivative.
"""
def deriv(x):
return self.slope_of_tangent(x, graph)
return self.get_graph(deriv, color=color, **kwargs)
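# Illustrative usage sketch (commented out; not part of the library source).
# The derivative curve is built numerically from ``slope_of_tangent``, i.e. a
# finite-difference slope with step ``dx``, not from a symbolic derivative:
#
#     sine = ax.get_graph(np.sin)
#     cosine_like = ax.get_derivative_graph(sine)  # visually matches cos(x)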
def get_secant_slope_group(
self,
x: float,
graph: ParametricFunction,
dx: Optional[float] = None,
dx_line_color: Color = YELLOW,
dy_line_color: Optional[Color] = None,
dx_label: Optional[Union[float, str]] = None,
dy_label: Optional[Union[float, str]] = None,
include_secant_line: bool = True,
secant_line_color: Color = GREEN,
secant_line_length: float = 10,
) -> VGroup:
"""Creates two lines representing `dx` and `df`, the labels for `dx` and `df`, and
the secant to the curve at a particular x-value.
Parameters
----------
x
The x-value at which the secant intersects the graph for the first time.
graph
The curve for which the secant will be found.
dx
The change in `x` between the two points where the secant intersects the graph.
dx_line_color
The color of the line that indicates the change in `x`.
dy_line_color
The color of the line that indicates the change in `y`. Defaults to the color of :attr:`graph`.
dx_label
The label for the `dx` line.
dy_label
The label for the `dy` line.
include_secant_line
Whether or not to include the secant line in the graph,
or just have the df and dx lines and labels.
secant_line_color
The color of the secant line.
secant_line_length
The length of the secant line.
Returns
-------
:class:`~.VGroup`
A group containing the elements: `dx_line`, `df_line`, and
if applicable also :attr:`dx_label`, :attr:`df_label`, `secant_line`.
"""
group = VGroup()
dx = dx or float(self.x_range[1] - self.x_range[0]) / 10
dx_line_color = dx_line_color
dy_line_color = dy_line_color or graph.get_color()
p1 = self.input_to_graph_point(x, graph)
p2 = self.input_to_graph_point(x + dx, graph)
interim_point = p2[0] * RIGHT + p1[1] * UP
group.dx_line = Line(p1, interim_point, color=dx_line_color)
group.df_line = Line(interim_point, p2, color=dy_line_color)
group.add(group.dx_line, group.df_line)
labels = VGroup()
if dx_label is not None:
group.dx_label = self.create_label_tex(dx_label)
labels.add(group.dx_label)
group.add(group.dx_label)
if dy_label is not None:
group.df_label = self.create_label_tex(dy_label)
labels.add(group.df_label)
group.add(group.df_label)
if len(labels) > 0:
max_width = 0.8 * group.dx_line.width
max_height = 0.8 * group.df_line.height
if labels.width > max_width:
labels.width = max_width
if labels.height > max_height:
labels.height = max_height
if dx_label is not None:
group.dx_label.next_to(
group.dx_line, np.sign(dx) * DOWN, buff=group.dx_label.height / 2
)
group.dx_label.set_color(group.dx_line.get_color())
if dy_label is not None:
group.df_label.next_to(
group.df_line, np.sign(dx) * RIGHT, buff=group.df_label.height / 2
)
group.df_label.set_color(group.df_line.get_color())
if include_secant_line:
secant_line_color = secant_line_color
group.secant_line = Line(p1, p2, color=secant_line_color)
group.secant_line.scale_in_place(
secant_line_length / group.secant_line.get_length()
)
group.add(group.secant_line)
return group
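# Illustrative usage sketch (commented out; not part of the library source).
# The returned group bundles the horizontal dx line, the vertical df line, the
# optional labels and the secant through (x, f(x)) and (x + dx, f(x + dx)):
#
#     secant = ax.get_secant_slope_group(
#         x=1, graph=graph, dx=0.5, dx_label="dx", dy_label="df"
#     )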
def get_vertical_lines_to_graph(
self,
graph: ParametricFunction,
x_range: Optional[Sequence[float]] = None,
num_lines: int = 20,
**kwargs,
) -> VGroup:
"""Obtains multiple lines from the x-axis to the curve.
Parameters
----------
graph
The graph to which the lines extend.
x_range
A list containing the lower and upper bounds of the lines -> ``x_range = [x_min, x_max]``.
num_lines
The number of evenly spaced lines.
Returns
-------
:class:`~.VGroup`
The :class:`~.VGroup` of the evenly spaced lines.
"""
x_range = x_range if x_range is not None else self.x_range
return VGroup(
*[
self.get_vertical_line(self.i2gp(x, graph), **kwargs)
for x in np.linspace(x_range[0], x_range[1], num_lines)
]
)
def get_T_label(
self,
x_val: float,
graph: "ParametricFunction",
label: Optional[Union[float, str, "Mobject"]] = None,
label_color: Color = WHITE,
triangle_size: float = MED_SMALL_BUFF,
triangle_color: Color = WHITE,
line_func: "Line" = Line,
line_color: Color = YELLOW,
) -> VGroup:
"""Creates a labelled triangle marker with a vertical line from the x-axis
to a curve at a given x-value.
Parameters
----------
x_val
The position along the curve at which the label, line and triangle will be constructed.
graph
The :class:`~.ParametricFunction` for which to construct the label.
label
The label of the vertical line and triangle.
label_color
The color of the label.
triangle_size
The size of the triangle.
triangle_color
The color of the triangle.
line_func
The function used to construct the vertical line.
line_color
The color of the vertical line.
Examples
--------
.. manim:: T_labelExample
:save_last_frame:
class T_labelExample(Scene):
def construct(self):
# defines the axes and linear function
axes = Axes(x_range=[-1, 10], y_range=[-1, 10], x_length=9, y_length=6)
func = axes.get_graph(lambda x: x, color=BLUE)
# creates the T_label
t_label = axes.get_T_label(x_val=4, graph=func, label=Tex("x-value"))
self.add(axes, func, t_label)
Returns
-------
:class:`~.VGroup`
A :class:`~.VGroup` of the label, triangle and vertical line mobjects.
"""
T_label_group = VGroup()
triangle = RegularPolygon(n=3, start_angle=np.pi / 2, stroke_width=0).set_fill(
color=triangle_color, opacity=1
)
triangle.height = triangle_size
triangle.move_to(self.coords_to_point(x_val, 0), UP)
if label is not None:
t_label = self.create_label_tex(label).set_color(label_color)
t_label.next_to(triangle, DOWN)
T_label_group.add(t_label)
v_line = self.get_vertical_line(
self.i2gp(x_val, graph), color=line_color, line_func=line_func
)
T_label_group.add(triangle, v_line)
return T_label_group
class Axes(VGroup, CoordinateSystem, metaclass=ConvertToOpenGL):
"""Creates a set of axes.
Parameters
----------
x_range
The :code:`[x_min, x_max, x_step]` values of the x-axis.
y_range
The :code:`[y_min, y_max, y_step]` values of the y-axis.
x_length
The length of the x-axis.
y_length
The length of the y-axis.
axis_config
Arguments to be passed to :class:`~.NumberLine` that influences both axes.
x_axis_config
Arguments to be passed to :class:`~.NumberLine` that influence the x-axis.
y_axis_config
Arguments to be passed to :class:`~.NumberLine` that influence the y-axis.
tips
Whether or not to include the tips on both axes.
kwargs : Any
Additional arguments to be passed to :class:`CoordinateSystem` and :class:`~.VGroup`.
"""
def __init__(
self,
x_range: Optional[Sequence[float]] = None,
y_range: Optional[Sequence[float]] = None,
x_length: Optional[float] = round(config.frame_width) - 2,
y_length: Optional[float] = round(config.frame_height) - 2,
axis_config: Optional[dict] = None,
x_axis_config: Optional[dict] = None,
y_axis_config: Optional[dict] = None,
tips: bool = True,
**kwargs,
):
VGroup.__init__(self, **kwargs)
CoordinateSystem.__init__(self, x_range, y_range, x_length, y_length)
self.axis_config = {
"include_tip": tips,
"numbers_to_exclude": [0],
"exclude_origin_tick": True,
}
self.x_axis_config = {}
self.y_axis_config = {"rotation": 90 * DEGREES, "label_direction": LEFT}
self.update_default_configs(
(self.axis_config, self.x_axis_config, self.y_axis_config),
(axis_config, x_axis_config, y_axis_config),
)
self.x_axis_config = merge_dicts_recursively(
self.axis_config, self.x_axis_config
)
self.y_axis_config = merge_dicts_recursively(
self.axis_config, self.y_axis_config
)
self.x_axis = self.create_axis(self.x_range, self.x_axis_config, self.x_length)
self.y_axis = self.create_axis(self.y_range, self.y_axis_config, self.y_length)
# Add as a separate group in case various other
# mobjects are added to self, as for example in
# NumberPlane below
self.axes = VGroup(self.x_axis, self.y_axis)
self.add(*self.axes)
# finds the middle-point on each axis
lines_center_point = [((axis.x_max + axis.x_min) / 2) for axis in self.axes]
self.shift(-self.coords_to_point(*lines_center_point))
@staticmethod
def update_default_configs(default_configs, passed_configs):
for default_config, passed_config in zip(default_configs, passed_configs):
if passed_config is not None:
update_dict_recursively(default_config, passed_config)
def create_axis(
self,
range_terms: Sequence[float],
axis_config: dict,
length: float,
) -> NumberLine:
"""Creates an axis and dynamically adjusts its position depending on where 0 is located on the line.
Parameters
----------
range_terms
The range of the axis : `(x_min, x_max, x_step)`.
axis_config
Additional parameters that are passed to :class:`NumberLine`.
length
The length of the axis.
Returns
-------
:class:`NumberLine`
Returns a number line with the provided x and y axis range.
"""
axis_config["length"] = length
axis = NumberLine(range_terms, **axis_config)
# without the call to origin_shift, graph does not exist when min > 0 or max < 0
# shifts the axis so that 0 is centered
axis.shift(-axis.number_to_point(self.origin_shift(range_terms)))
return axis
def coords_to_point(self, *coords: Sequence[float]) -> np.ndarray:
"""Transforms the vector formed from ``coords`` formed by the :class:`Axes`
into the corresponding vector with respect to the default basis.
Returns
-------
np.ndarray
A point that results from a change of basis from the coordinate system
defined by the :class:`Axes` to that of ``manim``'s default coordinate system
"""
origin = self.x_axis.number_to_point(self.origin_shift(self.x_range))
result = np.array(origin)
for axis, coord in zip(self.get_axes(), coords):
result += axis.number_to_point(coord) - origin
return result
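# Illustrative round-trip sketch (commented out; not part of the library source).
# ``coords_to_point`` and ``point_to_coords`` (defined below) are inverse changes
# of basis, up to floating-point error:
#
#     p = ax.coords_to_point(2, 3)   # scene position of the coordinate (2, 3)
#     ax.point_to_coords(p)          # approximately (2.0, 3.0)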
def point_to_coords(self, point: Sequence[float]) -> Tuple:
"""Transforms the coordinates of the point which are with respect to ``manim``'s default
basis into the coordinates of that point with respect to the basis defined by :class:`Axes`.
Parameters
----------
point
The point whose coordinates will be found.
Returns
-------
Tuple
Coordinates of the point with respect to :class:`Axes`'s basis
"""
return tuple([axis.point_to_number(point) for axis in self.get_axes()])
def get_axes(self) -> VGroup:
"""Gets the axes.
Returns
-------
:class:`~.VGroup`
A pair of axes.
"""
return self.axes
def get_line_graph(
self,
x_values: Iterable[float],
y_values: Iterable[float],
z_values: Optional[Iterable[float]] = None,
line_color: Color = YELLOW,
add_vertex_dots: bool = True,
vertex_dot_radius: float = DEFAULT_DOT_RADIUS,
vertex_dot_style: Optional[dict] = None,
**kwargs,
) -> VDict:
"""Draws a line graph.
The graph connects the vertices formed from zipping
``x_values``, ``y_values`` and ``z_values``. Also adds :class:`Dots <.Dot>` at the
vertices if ``add_vertex_dots`` is set to ``True``.
Parameters
----------
x_values
Iterable of values along the x-axis.
y_values
Iterable of values along the y-axis.
z_values
Iterable of values (zeros if z_values is None) along the z-axis.
line_color
Color for the line graph.
add_vertex_dots
Whether or not to add :class:`~.Dot` at each vertex.
vertex_dot_radius
Radius for the :class:`~.Dot` at each vertex.
vertex_dot_style
Style arguments to be passed into :class:`~.Dot` at each vertex.
kwargs
Additional arguments to be passed into :class:`~.VMobject`.
Examples
--------
.. manim:: LineGraphExample
:save_last_frame:
class LineGraphExample(Scene):
def construct(self):
plane = NumberPlane(
x_range = (0, 7),
y_range = (0, 5),
x_length = 7,
axis_config={"include_numbers": True},
)
plane.center()
line_graph = plane.get_line_graph(
x_values = [0, 1.5, 2, 2.8, 4, 6.25],
y_values = [1, 3, 2.25, 4, 2.5, 1.75],
line_color=GOLD_E,
vertex_dot_style=dict(stroke_width=3, fill_color=PURPLE),
stroke_width = 4,
)
self.add(plane, line_graph)
"""
x_values, y_values = map(np.array, (x_values, y_values))
if z_values is None:
z_values = np.zeros(x_values.shape)
line_graph = VDict()
graph = VGroup(color=line_color, **kwargs)
vertices = [
self.coords_to_point(x, y, z)
for x, y, z in zip(x_values, y_values, z_values)
]
graph.set_points_as_corners(vertices)
graph.z_index = -1
line_graph["line_graph"] = graph
if add_vertex_dots:
vertex_dot_style = vertex_dot_style or {}
vertex_dots = VGroup(
*[
Dot(point=vertex, radius=vertex_dot_radius, **vertex_dot_style)
for vertex in vertices
]
)
line_graph["vertex_dots"] = vertex_dots
return line_graph
@staticmethod
def origin_shift(axis_range: Sequence[float]) -> float:
"""Determines how to shift graph mobjects to compensate when 0 is not on the axis.
Parameters
----------
axis_range
The range of the axis : ``(x_min, x_max, x_step)``.
"""
if axis_range[0] > 0:
return axis_range[0]
if axis_range[1] < 0:
return axis_range[1]
else:
return 0
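# Worked examples for the three branches above (commented out; not part of the
# library source); ``origin_shift`` is a staticmethod, so it can be called directly:
#
#     Axes.origin_shift((-5, 5, 1))   # -> 0, zero lies inside the range
#     Axes.origin_shift((2, 6, 1))    # -> 2, the whole range is positive
#     Axes.origin_shift((-6, -2, 1))  # -> -2, the whole range is negative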
class ThreeDAxes(Axes):
"""A 3-dimensional set of axes.
Parameters
----------
x_range
The :code:`[x_min, x_max, x_step]` values of the x-axis.
y_range
The :code:`[y_min, y_max, y_step]` values of the y-axis.
z_range
The :code:`[z_min, z_max, z_step]` values of the z-axis.
x_length
The length of the x-axis.
y_length
The length of the y-axis.
z_length
The length of the z-axis.
z_axis_config
Arguments to be passed to :class:`~.NumberLine` that influence the z-axis.
z_normal
The direction of the normal.
num_axis_pieces
The number of pieces used to construct the axes.
light_source
The direction of the light source.
depth
Currently non-functional.
gloss
Currently non-functional.
kwargs : Any
Additional arguments to be passed to :class:`Axes`.
"""
def __init__(
self,
x_range: Optional[Sequence[float]] = (-6, 6, 1),
y_range: Optional[Sequence[float]] = (-5, 5, 1),
z_range: Optional[Sequence[float]] = (-4, 4, 1),
x_length: Optional[float] = config.frame_height + 2.5,
y_length: Optional[float] = config.frame_height + 2.5,
z_length: Optional[float] = config.frame_height - 1.5,
z_axis_config: Optional[dict] = None,
z_normal: Sequence[float] = DOWN,
num_axis_pieces: int = 20,
light_source: Sequence[float] = 9 * DOWN + 7 * LEFT + 10 * OUT,
# opengl stuff (?)
depth=None,
gloss=0.5,
**kwargs,
):
Axes.__init__(
self,
x_range=x_range,
x_length=x_length,
y_range=y_range,
y_length=y_length,
**kwargs,
)
self.z_range = z_range
self.z_length = z_length
self.z_axis_config = {}
self.update_default_configs((self.z_axis_config,), (z_axis_config,))
self.z_axis_config = merge_dicts_recursively(
self.axis_config, self.z_axis_config
)
self.z_normal = z_normal
self.num_axis_pieces = num_axis_pieces
self.light_source = light_source
self.dimension = 3
z_axis = self.create_axis(self.z_range, self.z_axis_config, self.z_length)
z_axis.rotate_about_zero(-PI / 2, UP)
z_axis.rotate_about_zero(angle_of_vector(self.z_normal))
z_axis.shift(self.x_axis.number_to_point(self.origin_shift(x_range)))
self.axes.add(z_axis)
self.add(z_axis)
self.z_axis = z_axis
if not config.renderer == "opengl":
self.add_3d_pieces()
self.set_axis_shading()
def add_3d_pieces(self):
for axis in self.axes:
axis.pieces = VGroup(*axis.get_pieces(self.num_axis_pieces))
axis.add(axis.pieces)
axis.set_stroke(width=0, family=False)
axis.set_shade_in_3d(True)
def set_axis_shading(self):
def make_func(axis):
vect = self.light_source
return lambda: (
axis.get_edge_center(-vect),
axis.get_edge_center(vect),
)
for axis in self:
for submob in axis.family_members_with_points():
submob.get_gradient_start_and_end_points = make_func(axis)
submob.get_unit_normal = lambda a: np.ones(3)
submob.set_sheen(0.2)
class NumberPlane(Axes):
"""Creates a cartesian plane with background lines.
Parameters
----------
x_range
The :code:`[x_min, x_max, x_step]` values of the plane in the horizontal direction.
y_range
The :code:`[y_min, y_max, y_step]` values of the plane in the vertical direction.
x_length
The width of the plane.
y_length
The height of the plane.
background_line_style
Arguments that influence the construction of the background lines of the plane.
faded_line_style
Similar to :attr:`background_line_style`, affects the construction of the scene's background lines.
faded_line_ratio
Determines the number of boxes within the background lines: :code:`2` = 4 boxes, :code:`3` = 9 boxes.
make_smooth_after_applying_functions
Currently non-functional.
kwargs : Any
Additional arguments to be passed to :class:`Axes`.
.. note:: If :attr:`x_length` or :attr:`y_length` are not defined, the plane automatically adjusts its lengths based
on the :attr:`x_range` and :attr:`y_range` values to set the unit_size to 1.
Examples
--------
.. manim:: NumberPlaneExample
:save_last_frame:
class NumberPlaneExample(Scene):
def construct(self):
number_plane = NumberPlane(
x_range=[-10, 10, 1],
y_range=[-10, 10, 1],
background_line_style={
"stroke_color": TEAL,
"stroke_width": 4,
"stroke_opacity": 0.6
}
)
self.add(number_plane)
"""
def __init__(
self,
x_range: Optional[Sequence[float]] = (
-config["frame_x_radius"],
config["frame_x_radius"],
1,
),
y_range: Optional[Sequence[float]] = (
-config["frame_y_radius"],
config["frame_y_radius"],
1,
),
x_length: Optional[float] = None,
y_length: Optional[float] = None,
background_line_style: Optional[dict] = None,
faded_line_style: Optional[dict] = None,
faded_line_ratio: int = 1,
make_smooth_after_applying_functions=True,
**kwargs,
):
# configs
self.axis_config = {
"stroke_color": WHITE,
"stroke_width": 2,
"include_ticks": False,
"include_tip": False,
"line_to_number_buff": SMALL_BUFF,
"label_direction": DR,
"number_scale_value": 0.5,
}
self.y_axis_config = {"label_direction": DR}
self.background_line_style = {
"stroke_color": BLUE_D,
"stroke_width": 2,
"stroke_opacity": 1,
}
self.update_default_configs(
(self.axis_config, self.y_axis_config, self.background_line_style),
(
kwargs.pop("axis_config", None),
kwargs.pop("y_axis_config", None),
background_line_style,
),
)
# Defaults to a faded version of line_config
self.faded_line_style = faded_line_style
self.faded_line_ratio = faded_line_ratio
self.make_smooth_after_applying_functions = make_smooth_after_applying_functions
# init
super().__init__(
x_range=x_range,
y_range=y_range,
x_length=x_length,
y_length=y_length,
axis_config=self.axis_config,
y_axis_config=self.y_axis_config,
**kwargs,
)
# dynamically adjusts x_length and y_length so that the unit_size is one by default
if x_length is None:
x_length = self.x_range[1] - self.x_range[0]
if y_length is None:
y_length = self.y_range[1] - self.y_range[0]
self.init_background_lines()
def init_background_lines(self):
"""Will init all the lines of NumberPlanes (faded or not)"""
if self.faded_line_style is None:
style = dict(self.background_line_style)
# For anything numerical, like stroke_width
# and stroke_opacity, chop it in half
for key in style:
if isinstance(style[key], numbers.Number):
style[key] *= 0.5
self.faded_line_style = style
self.background_lines, self.faded_lines = self.get_lines()
self.background_lines.set_style(
**self.background_line_style,
)
self.faded_lines.set_style(
**self.faded_line_style,
)
self.add_to_back(
self.faded_lines,
self.background_lines,
)
def get_lines(self) -> Tuple[VGroup, VGroup]:
"""Generate all the lines, faded and not faded. Two sets of lines are generated: one parallel to the X-axis, and parallel to the Y-axis.
Returns
-------
Tuple[:class:`~.VGroup`, :class:`~.VGroup`]
The first (i.e. the non-faded lines) and second (i.e. the faded lines) sets of lines, respectively.
"""
x_axis = self.get_x_axis()
y_axis = self.get_y_axis()
x_lines1, x_lines2 = self.get_lines_parallel_to_axis(
x_axis,
y_axis,
self.x_axis.x_step,
self.faded_line_ratio,
)
y_lines1, y_lines2 = self.get_lines_parallel_to_axis(
y_axis,
x_axis,
self.y_axis.x_step,
self.faded_line_ratio,
)
# TODO this was added so that we can run tests on NumberPlane
# In the future these attributes will be tacked onto self.background_lines
self.x_lines = x_lines1
self.y_lines = y_lines1
lines1 = VGroup(*x_lines1, *y_lines1)
lines2 = VGroup(*x_lines2, *y_lines2)
return lines1, lines2
def get_lines_parallel_to_axis(
self,
axis_parallel_to: NumberLine,
axis_perpendicular_to: NumberLine,
freq: float,
ratio_faded_lines: int,
) -> Tuple[VGroup, VGroup]:
"""Generate a set of lines parallel to an axis.
Parameters
----------
axis_parallel_to
The axis with which the lines will be parallel.
axis_perpendicular_to
The axis with which the lines will be perpendicular.
ratio_faded_lines
The ratio between the space between faded lines and the space between non-faded lines.
freq
Frequency of non-faded lines (number of non-faded lines per graph unit).
Returns
-------
Tuple[:class:`~.VGroup`, :class:`~.VGroup`]
The first (i.e. the non-faded lines parallel to `axis_parallel_to`) and second (i.e. the faded lines parallel to `axis_parallel_to`) sets of lines, respectively.
"""
line = Line(axis_parallel_to.get_start(), axis_parallel_to.get_end())
if ratio_faded_lines == 0: # don't show faded lines
ratio_faded_lines = 1 # i.e. set ratio to 1
step = (1 / ratio_faded_lines) * freq
lines1 = VGroup()
lines2 = VGroup()
unit_vector_axis_perp_to = axis_perpendicular_to.get_unit_vector()
# min/max used in case range does not include 0. i.e. if (2,6):
# the range becomes (0,4), not (0,6), to produce the correct number of lines
ranges = (
np.arange(
0,
min(
axis_perpendicular_to.x_max - axis_perpendicular_to.x_min,
axis_perpendicular_to.x_max,
),
step,
),
np.arange(
0,
max(
axis_perpendicular_to.x_min - axis_perpendicular_to.x_max,
axis_perpendicular_to.x_min,
),
-step,
),
)
for inputs in ranges:
for k, x in enumerate(inputs):
new_line = line.copy()
new_line.shift(unit_vector_axis_perp_to * x)
if k % ratio_faded_lines == 0:
lines1.add(new_line)
else:
lines2.add(new_line)
return lines1, lines2
def get_center_point(self) -> np.ndarray:
"""Gets the origin of :class:`NumberPlane`.
Returns
-------
np.ndarray
The center point.
"""
return self.coords_to_point(0, 0)
def get_x_unit_size(self):
return self.get_x_axis().get_unit_size()
def get_y_unit_size(self):
return self.get_y_axis().get_unit_size()
def get_axes(self) -> VGroup:
# Method already defined at Axes.get_axes, so we could remove this in a later PR.
"""Gets the pair of axes.
Returns
-------
:class:`~.VGroup`
Axes
"""
return self.axes
def get_vector(self, coords, **kwargs):
kwargs["buff"] = 0
return Arrow(
self.coords_to_point(0, 0), self.coords_to_point(*coords), **kwargs
)
def prepare_for_nonlinear_transform(self, num_inserted_curves=50):
for mob in self.family_members_with_points():
num_curves = mob.get_num_curves()
if num_inserted_curves > num_curves:
mob.insert_n_curves(num_inserted_curves - num_curves)
return self
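# Illustrative usage sketch (commented out; not part of the library source).
# Inserting extra curves before a non-linear mapping lets the straight background
# lines bend smoothly instead of staying piecewise-linear; ``apply_function`` is
# assumed from the generic Mobject API, not defined in this file:
#
#     plane = NumberPlane()
#     plane.prepare_for_nonlinear_transform()
#     plane.apply_function(lambda p: p + np.array([np.sin(p[1]), np.sin(p[0]), 0]))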
class PolarPlane(Axes):
r"""Creates a polar plane with background lines.
Parameters
----------
azimuth_step
The number of divisions in the azimuth (also known as the `angular coordinate` or `polar angle`). If ``None`` is specified then it will use the default
specified by ``azimuth_units``:
- ``"PI radians"`` or ``"TAU radians"``: 20
- ``"degrees"``: 36
- ``"gradians"``: 40
- ``None``: 1
A non-integer value will result in a partial division at the end of the circle.
size
The diameter of the plane.
radius_step
The distance between faded radius lines.
radius_max
The maximum value of the radius.
azimuth_units
Specifies a default labelling system for the azimuth. Choices are:
- ``"PI radians"``: Fractional labels in the interval :math:`\left[0, 2\pi\right]` with :math:`\pi` as a constant.
- ``"TAU radians"``: Fractional labels in the interval :math:`\left[0, \tau\right]` (where :math:`\tau = 2\pi`) with :math:`\tau` as a constant.
- ``"degrees"``: Decimal labels in the interval :math:`\left[0, 360\right]` with a degree (:math:`^{\circ}`) symbol.
- ``"gradians"``: Decimal labels in the interval :math:`\left[0, 400\right]` with a superscript "g" (:math:`^{g}`).
- ``None``: Decimal labels in the interval :math:`\left[0, 1\right]`.
azimuth_compact_fraction
If the ``azimuth_units`` choice has fractional labels, choose whether to combine the constant in a compact form :math:`\tfrac{xu}{y}` as opposed to :math:`\tfrac{x}{y}u`, where :math:`u` is the constant.
azimuth_offset
The angle offset of the azimuth, expressed in radians.
azimuth_direction
The direction of the azimuth.
- ``"CW"``: Clockwise.
- ``"CCW"``: Anti-clockwise.
azimuth_label_buff
The buffer for the azimuth labels.
azimuth_label_scale
The scale of the azimuth labels.
radius_config
The axis config for the radius.
Examples
--------
.. manim:: PolarPlaneExample
:ref_classes: PolarPlane
:save_last_frame:
class PolarPlaneExample(Scene):
def construct(self):
polarplane_pi = PolarPlane(
azimuth_units="PI radians",
size=6,
azimuth_label_scale=0.7,
radius_config={"number_scale_value": 0.7},
).add_coordinates()
self.add(polarplane_pi)
"""
def __init__(
self,
radius_max: float = config["frame_y_radius"],
size: Optional[float] = None,
radius_step: float = 1,
azimuth_step: Optional[float] = None,
azimuth_units: Optional[str] = "PI radians",
azimuth_compact_fraction: bool = True,
azimuth_offset: float = 0,
azimuth_direction: str = "CCW",
azimuth_label_buff: float = SMALL_BUFF,
azimuth_label_scale: float = 0.5,
radius_config: Optional[dict] = None,
background_line_style: Optional[dict] = None,
faded_line_style: Optional[dict] = None,
faded_line_ratio: int = 1,
make_smooth_after_applying_functions: bool = True,
**kwargs,
):
# error catching
if azimuth_units in ["PI radians", "TAU radians", "degrees", "gradians", None]:
self.azimuth_units = azimuth_units
else:
raise ValueError(
"Invalid azimuth units. Expected one of: PI radians, TAU radians, degrees, gradians or None."
)
if azimuth_direction in ["CW", "CCW"]:
self.azimuth_direction = azimuth_direction
else:
raise ValueError("Invalid azimuth units. Expected one of: CW, CCW.")
# configs
self.radius_config = {
"stroke_color": WHITE,
"stroke_width": 2,
"include_ticks": False,
"include_tip": False,
"line_to_number_buff": SMALL_BUFF,
"label_direction": DL,
"number_scale_value": 0.5,
}
self.background_line_style = {
"stroke_color": BLUE_D,
"stroke_width": 2,
"stroke_opacity": 1,
}
self.azimuth_step = (
(
{
"PI radians": 20,
"TAU radians": 20,
"degrees": 36,
"gradians": 40,
None: 1,
}[azimuth_units]
)
if azimuth_step is None
else azimuth_step
)
self.update_default_configs(
(self.radius_config, self.background_line_style),
(radius_config, background_line_style),
)
# Defaults to a faded version of line_config
self.faded_line_style = faded_line_style
self.faded_line_ratio = faded_line_ratio
self.make_smooth_after_applying_functions = make_smooth_after_applying_functions
self.azimuth_offset = azimuth_offset
self.azimuth_label_buff = azimuth_label_buff
self.azimuth_label_scale = azimuth_label_scale
self.azimuth_compact_fraction = azimuth_compact_fraction
# init
super().__init__(
x_range=np.array((-radius_max, radius_max, radius_step)),
y_range=np.array((-radius_max, radius_max, radius_step)),
x_length=size,
y_length=size,
axis_config=self.radius_config,
**kwargs,
)
# dynamically adjusts size so that the unit_size is one by default
if size is None:
size = 0
self.init_background_lines()
def init_background_lines(self):
"""Will init all the lines of NumberPlanes (faded or not)"""
if self.faded_line_style is None:
style = dict(self.background_line_style)
# For anything numerical, like stroke_width
# and stroke_opacity, chop it in half
for key in style:
if isinstance(style[key], numbers.Number):
style[key] *= 0.5
self.faded_line_style = style
self.background_lines, self.faded_lines = self.get_lines()
self.background_lines.set_style(
**self.background_line_style,
)
self.faded_lines.set_style(
**self.faded_line_style,
)
self.add_to_back(
self.faded_lines,
self.background_lines,
)
def get_lines(self) -> Tuple[VGroup, VGroup]:
"""Generate all the lines and circles, faded and not faded.
Returns
-------
Tuple[:class:`~.VGroup`, :class:`~.VGroup`]
The first (i.e. the non-faded lines and circles) and second (i.e. the faded lines and circles) sets of lines and circles, respectively.
"""
center = self.get_center_point()
ratio_faded_lines = self.faded_line_ratio
offset = self.azimuth_offset
if ratio_faded_lines == 0: # don't show faded lines
ratio_faded_lines = 1 # i.e. set ratio to 1
rstep = (1 / ratio_faded_lines) * self.x_axis.x_step
astep = (1 / ratio_faded_lines) * (TAU * (1 / self.azimuth_step))
rlines1 = VGroup()
rlines2 = VGroup()
alines1 = VGroup()
alines2 = VGroup()
rinput = np.arange(0, self.x_axis.x_max + rstep, rstep)
ainput = np.arange(0, TAU, astep)
unit_vector = self.x_axis.get_unit_vector()[0]
for k, x in enumerate(rinput):
new_line = Circle(radius=x * unit_vector)
if k % ratio_faded_lines == 0:
alines1.add(new_line)
else:
alines2.add(new_line)
line = Line(center, self.get_x_axis().get_end())
for k, x in enumerate(ainput):
new_line = line.copy()
new_line.rotate(x + offset, about_point=center)
if k % ratio_faded_lines == 0:
rlines1.add(new_line)
else:
rlines2.add(new_line)
lines1 = VGroup(*rlines1, *alines1)
lines2 = VGroup(*rlines2, *alines2)
return lines1, lines2
def get_center_point(self):
return self.coords_to_point(0, 0)
def get_x_unit_size(self):
return self.get_x_axis().get_unit_size()
def get_y_unit_size(self):
return self.get_x_axis().get_unit_size()
def get_axes(self) -> VGroup:
"""Gets the axes.
Returns
-------
:class:`~.VGroup`
A pair of axes.
"""
return self.axes
def get_vector(self, coords, **kwargs):
kwargs["buff"] = 0
return Arrow(
self.coords_to_point(0, 0), self.coords_to_point(*coords), **kwargs
)
def prepare_for_nonlinear_transform(self, num_inserted_curves=50):
for mob in self.family_members_with_points():
num_curves = mob.get_num_curves()
if num_inserted_curves > num_curves:
mob.insert_n_curves(num_inserted_curves - num_curves)
return self
def polar_to_point(self, radius: float, azimuth: float) -> np.ndarray:
r"""Gets a point from polar coordinates.
Parameters
----------
radius
The coordinate radius (:math:`r`).
azimuth
The coordinate azimuth (:math:`\theta`).
Returns
-------
numpy.ndarray
The point.
Examples
--------
.. manim:: PolarToPointExample
:ref_classes: PolarPlane Vector
:save_last_frame:
class PolarToPointExample(Scene):
def construct(self):
polarplane_pi = PolarPlane(azimuth_units="PI radians", size=6)
polartopoint_vector = Vector(polarplane_pi.polar_to_point(3, PI/4))
self.add(polarplane_pi)
self.add(polartopoint_vector)
"""
return self.coords_to_point(radius * np.cos(azimuth), radius * np.sin(azimuth))
def pr2pt(self, radius: float, azimuth: float) -> np.ndarray:
"""Abbreviation for :meth:`polar_to_point`"""
return self.polar_to_point(radius, azimuth)
def point_to_polar(self, point: np.ndarray) -> Tuple[float, float]:
r"""Gets polar coordinates from a point.
Parameters
----------
point
The point.
Returns
-------
Tuple[:class:`float`, :class:`float`]
The coordinate radius (:math:`r`) and the coordinate azimuth (:math:`\theta`).
"""
x, y = self.point_to_coords(point)
return np.sqrt(x ** 2 + y ** 2), np.arctan2(y, x)
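# Illustrative round-trip sketch (commented out; not part of the library source).
# ``polar_to_point`` and ``point_to_polar`` invert each other for r > 0, up to
# floating-point error:
#
#     plane = PolarPlane()
#     pt = plane.polar_to_point(3, PI / 4)
#     plane.point_to_polar(pt)   # approximately (3.0, 0.7853...)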
def pt2pr(self, point: np.ndarray) -> Tuple[float, float]:
"""Abbreviation for :meth:`point_to_polar`"""
return self.point_to_polar(point)
def get_coordinate_labels(
self,
r_values: Optional[Iterable[float]] = None,
a_values: Optional[Iterable[float]] = None,
**kwargs,
) -> VDict:
"""Gets labels for the coordinates
Parameters
----------
r_values
Iterable of values along the radius, by default None.
a_values
Iterable of values along the azimuth, by default None.
Returns
-------
VDict
Labels for the radius and azimuth values.
"""
if r_values is None:
r_values = [r for r in self.get_x_axis().get_tick_range() if r >= 0]
if a_values is None:
a_values = np.arange(0, 1, 1 / self.azimuth_step)
r_mobs = self.get_x_axis().add_numbers(r_values)
if self.azimuth_direction == "CCW":
d = 1
elif self.azimuth_direction == "CW":
d = -1
else:
raise ValueError("Invalid azimuth direction. Expected one of: CW, CCW")
a_points = [
{
"label": i,
"point": np.array(
[
self.get_right()[0]
* np.cos(d * (i * TAU) + self.azimuth_offset),
self.get_right()[0]
* np.sin(d * (i * TAU) + self.azimuth_offset),
0,
]
),
}
for i in a_values
]
if self.azimuth_units == "PI radians" or self.azimuth_units == "TAU radians":
a_tex = [
self.get_radian_label(i["label"])
.scale(self.azimuth_label_scale)
.next_to(
i["point"],
direction=i["point"],
aligned_edge=i["point"],
buff=self.azimuth_label_buff,
)
for i in a_points
]
elif self.azimuth_units == "degrees":
a_tex = [
MathTex(f'{360 * i["label"]:g}' + r"^{\circ}")
.scale(self.azimuth_label_scale)
.next_to(
i["point"],
direction=i["point"],
aligned_edge=i["point"],
buff=self.azimuth_label_buff,
)
for i in a_points
]
elif self.azimuth_units == "gradians":
a_tex = [
MathTex(f'{400 * i["label"]:g}' + r"^{g}")
.scale(self.azimuth_label_scale)
.next_to(
i["point"],
direction=i["point"],
aligned_edge=i["point"],
buff=self.azimuth_label_buff,
)
for i in a_points
]
elif self.azimuth_units is None:
a_tex = [
MathTex(f'{i["label"]:g}')
.scale(self.azimuth_label_scale)
.next_to(
i["point"],
direction=i["point"],
aligned_edge=i["point"],
buff=self.azimuth_label_buff,
)
for i in a_points
]
a_mobs = VGroup(*a_tex)
self.coordinate_labels = VGroup(r_mobs, a_mobs)
return self.coordinate_labels
def add_coordinates(
self,
r_values: Optional[Iterable[float]] = None,
a_values: Optional[Iterable[float]] = None,
):
"""Adds the coordinates.
Parameters
----------
r_values
Iterable of values along the radius, by default None.
a_values
Iterable of values along the azimuth, by default None.
"""
self.add(self.get_coordinate_labels(r_values, a_values))
return self
def get_radian_label(self, number, stacked=True):
constant_label = {"PI radians": r"\pi", "TAU radians": r"\tau"}[
self.azimuth_units
]
division = number * {"PI radians": 2, "TAU radians": 1}[self.azimuth_units]
frac = fr.Fraction(division).limit_denominator(max_denominator=100)
if frac.numerator == 0:
return MathTex(r"0")
elif frac.numerator == 1 and frac.denominator == 1:
return MathTex(constant_label)
elif frac.numerator == 1:
if self.azimuth_compact_fraction:
return MathTex(
r"\tfrac{" + constant_label + r"}{" + str(frac.denominator) + "}"
)
else:
return MathTex(
r"\tfrac{1}{" + str(frac.denominator) + "}" + constant_label
)
elif frac.denominator == 1:
return MathTex(str(frac.numerator) + constant_label)
else:
if self.azimuth_compact_fraction:
return MathTex(
r"\tfrac{"
+ str(frac.numerator)
+ constant_label
+ r"}{"
+ str(frac.denominator)
+ r"}"
)
else:
return MathTex(
r"\tfrac{"
+ str(frac.numerator)
+ r"}{"
+ str(frac.denominator)
+ r"}"
+ constant_label
)
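# Worked examples for the branches above (commented out; not part of the library
# source), using azimuth_units="PI radians" where ``division`` is 2 * number:
#
#     number = 0.25 -> division = 1/2 -> r"\tfrac{\pi}{2}"   (compact form)
#     number = 0.5  -> division = 1   -> r"\pi"
#     number = 0.75 -> division = 3/2 -> r"\tfrac{3\pi}{2}"  (compact form)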
class ComplexPlane(NumberPlane):
"""
Examples
--------
.. manim:: ComplexPlaneExample
:save_last_frame:
:ref_classes: Dot MathTex
class ComplexPlaneExample(Scene):
def construct(self):
plane = ComplexPlane().add_coordinates()
self.add(plane)
d1 = Dot(plane.n2p(2 + 1j), color=YELLOW)
d2 = Dot(plane.n2p(-3 - 2j), color=YELLOW)
label1 = MathTex("2+i").next_to(d1, UR, 0.1)
label2 = MathTex("-3-2i").next_to(d2, UR, 0.1)
self.add(
d1,
label1,
d2,
label2,
)
"""
def __init__(self, color=BLUE, **kwargs):
super().__init__(
color=color,
**kwargs,
)
def number_to_point(self, number):
number = complex(number)
return self.coords_to_point(number.real, number.imag)
def n2p(self, number):
return self.number_to_point(number)
def point_to_number(self, point):
x, y = self.point_to_coords(point)
return complex(x, y)
def p2n(self, point):
return self.point_to_number(point)
def get_default_coordinate_values(self):
x_numbers = self.get_x_axis().get_tick_range()
y_numbers = self.get_y_axis().get_tick_range()
y_numbers = [complex(0, y) for y in y_numbers if y != 0]
return [*x_numbers, *y_numbers]
def get_coordinate_labels(self, *numbers, **kwargs):
if len(numbers) == 0:
numbers = self.get_default_coordinate_values()
self.coordinate_labels = VGroup()
for number in numbers:
z = complex(number)
if abs(z.imag) > abs(z.real):
axis = self.get_y_axis()
value = z.imag
kwargs["unit"] = "i"
else:
axis = self.get_x_axis()
value = z.real
number_mob = axis.get_number_mobject(value, **kwargs)
self.coordinate_labels.add(number_mob)
return self.coordinate_labels
def add_coordinates(self, *numbers):
self.add(self.get_coordinate_labels(*numbers))
return self
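# Illustrative usage sketch (commented out; not part of the library source).
# ``n2p``/``p2n`` translate between complex numbers and scene points by treating
# the real part as the x-coordinate and the imaginary part as the y-coordinate:
#
#     plane = ComplexPlane()
#     p = plane.n2p(2 + 1j)   # same point as plane.coords_to_point(2, 1)
#     plane.p2n(p)            # approximately (2+1j)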
|
/*
Copyright (C) 2020 Quaternion Risk Management Ltd
All rights reserved.
This file is part of ORE, a free-software/open-source library
for transparent pricing and risk analysis - http://opensourcerisk.org
ORE is free software: you can redistribute it and/or modify it
under the terms of the Modified BSD License. You should have received a
copy of the license along with this program.
The license is also available online at <http://opensourcerisk.org>
This program is distributed on the basis that it will form a useful
contribution to risk analytics and model standardisation, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the license for more details.
*/
#include <orea/aggregation/dynamiccreditxvacalculator.hpp>
#include <boost/accumulators/accumulators.hpp>
#include <boost/accumulators/statistics/error_of_mean.hpp>
#include <boost/accumulators/statistics/mean.hpp>
#include <boost/accumulators/statistics/stats.hpp>
using namespace std;
using namespace QuantLib;
using namespace boost::accumulators;
namespace ore {
namespace analytics {
DynamicCreditXvaCalculator::DynamicCreditXvaCalculator(
//! Driving portfolio consistent with the cube below
const boost::shared_ptr<Portfolio> portfolio, const boost::shared_ptr<Market> market,
const string& configuration, const string& baseCurrency, const string& dvaName,
const string& fvaBorrowingCurve, const string& fvaLendingCurve,
const bool applyDynamicInitialMargin,
const boost::shared_ptr<DynamicInitialMarginCalculator> dimCalculator,
const boost::shared_ptr<NPVCube> tradeExposureCube,
const boost::shared_ptr<NPVCube> nettingSetExposureCube,
const boost::shared_ptr<NPVCube>& cptyCube,
const Size tradeEpeIndex, const Size tradeEneIndex,
const Size nettingSetEpeIndex, const Size nettingSetEneIndex, const Size cptySpIndex,
const bool flipViewXVA, const string& flipViewBorrowingCurvePostfix, const string& flipViewLendingCurvePostfix)
: ValueAdjustmentCalculator(portfolio, market, configuration, baseCurrency, dvaName,
fvaBorrowingCurve, fvaLendingCurve, applyDynamicInitialMargin,
dimCalculator, tradeExposureCube, nettingSetExposureCube, tradeEpeIndex, tradeEneIndex,
nettingSetEpeIndex, nettingSetEneIndex,
flipViewXVA, flipViewBorrowingCurvePostfix, flipViewLendingCurvePostfix),
cptyCube_(cptyCube), cptySpIndex_(cptySpIndex) {
// check consistency of input
QL_REQUIRE(tradeExposureCube_->numDates() == cptyCube->numDates(),
"number of dates in tradeExposureCube and cptyCube mismatch ("
<< tradeExposureCube_->numDates() << " vs " << cptyCube->numDates() << ")");
QL_REQUIRE(cptySpIndex < cptyCube->depth(), "cptySpIndex("
<< cptySpIndex << ") exceeds depth of cptyCube("
<< cptyCube->depth() << ")");
for (Size i = 0; i < tradeExposureCube_->numDates(); i++) {
QL_REQUIRE(tradeExposureCube_->dates()[i] == cptyCube->dates()[i],
"date at " << i << " in tradeExposureCube and cptyCube mismatch ("
<< tradeExposureCube_->dates()[i] << " vs " << cptyCube->dates()[i] << ")");
}
}
const Real DynamicCreditXvaCalculator::calculateCvaIncrement(
const string& tid, const string& cid, const Date& d0, const Date& d1, const Real& rr) {
Real increment = 0.0;
for (Size k = 0; k < tradeExposureCube_->samples(); ++k) {
Real s0 = d0 == asof() ? 1.0 : cptyCube_->get(cid, d0, k, cptySpIndex_);
Real s1 = cptyCube_->get(cid, d1, k, cptySpIndex_);
Real epe = tradeExposureCube_->get(tid, d1, k, tradeEpeIndex_);
increment += (s0 - s1) * epe;
}
return (1.0 - rr) * increment / tradeExposureCube_->samples();
}
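// Illustrative note (not part of the original source): the loop above estimates the
// incremental CVA over (d0, d1] by Monte Carlo averaging over the samples k,
//
//   CVA increment = (1 - RR) * (1/N) * sum_k [S(cid, d0, k) - S(cid, d1, k)] * EPE(tid, d1, k),
//
// where S is the counterparty survival probability read from cptyCube_ (taken as 1.0
// at the as-of date) and EPE is the trade-level expected positive exposure slice of
// tradeExposureCube_.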
const Real DynamicCreditXvaCalculator::calculateDvaIncrement(
const string& tid, const Date& d0, const Date& d1, const Real& rr) {
Real increment = 0.0;
for (Size k = 0; k < tradeExposureCube_->samples(); ++k) {
Real s0 = d0 == asof() ? 1.0 : cptyCube_->get(dvaName_, d0, k, cptySpIndex_);
Real s1 = cptyCube_->get(dvaName_, d1, k, cptySpIndex_);
Real ene = tradeExposureCube_->get(tid, d1, k, tradeEneIndex_);
increment += (s0 - s1) * ene;
}
return (1.0 - rr) * increment / tradeExposureCube_->samples();
}
const Real DynamicCreditXvaCalculator::calculateNettingSetCvaIncrement(
const string& nid, const string& cid, const Date& d0, const Date& d1, const Real& rr) {
Real increment = 0.0;
for (Size k = 0; k < nettingSetExposureCube_->samples(); ++k) {
Real s0 = d0 == asof() ? 1.0 : cptyCube_->get(cid, d0, k, cptySpIndex_);
Real s1 = cptyCube_->get(cid, d1, k, cptySpIndex_);
Real epe = nettingSetExposureCube_->get(nid, d1, k, nettingSetEpeIndex_);
increment += (s0 - s1) * epe;
}
return (1.0 - rr) * increment / nettingSetExposureCube_->samples();
}
const Real DynamicCreditXvaCalculator::calculateNettingSetDvaIncrement(
const string& nid, const Date& d0, const Date& d1, const Real& rr) {
Real increment = 0.0;
for (Size k = 0; k < nettingSetExposureCube_->samples(); ++k) {
Real s0 = d0 == asof() ? 1.0 : cptyCube_->get(dvaName_, d0, k, cptySpIndex_);
Real s1 = cptyCube_->get(dvaName_, d1, k, cptySpIndex_);
Real ene = nettingSetExposureCube_->get(nid, d1, k, nettingSetEneIndex_);
increment += (s0 - s1) * ene;
}
return (1.0 - rr) * increment / nettingSetExposureCube_->samples();
}
const Real DynamicCreditXvaCalculator::calculateFbaIncrement(
const string& tid, const string& cid, const string& dvaName,
const Date& d0, const Date& d1, const Real& dcf) {
Real increment = 0.0;
for (Size k = 0; k < tradeExposureCube_->samples(); ++k) {
Real s0 = (d0 == asof() || cid == "") ? 1.0 : cptyCube_->get(cid, d0, k, cptySpIndex_);
Real s1 = (d0 == asof() || dvaName == "") ? 1.0 : cptyCube_->get(dvaName_, d0, k, cptySpIndex_);
Real ene = tradeExposureCube_->get(tid, d1, k, tradeEneIndex_);
increment += s0 * s1 * ene;
}
return increment * dcf / tradeExposureCube_->samples();
}
const Real DynamicCreditXvaCalculator::calculateFcaIncrement(
const string& tid, const string& cid, const string& dvaName,
const Date& d0, const Date& d1, const Real& dcf) {
Real increment = 0.0;
for (Size k = 0; k < tradeExposureCube_->samples(); ++k) {
Real s0 = (d0 == asof() || cid == "") ? 1.0 : cptyCube_->get(cid, d0, k, cptySpIndex_);
Real s1 = (d0 == asof() || dvaName == "") ? 1.0 : cptyCube_->get(dvaName_, d0, k, cptySpIndex_);
Real epe = tradeExposureCube_->get(tid, d1, k, tradeEpeIndex_);
increment += s0 * s1 * epe;
}
return increment * dcf / tradeExposureCube_->samples();
}
const Real DynamicCreditXvaCalculator::calculateNettingSetFbaIncrement(
const string& nid, const string& cid, const string& dvaName,
const Date& d0, const Date& d1, const Real& dcf) {
Real increment = 0.0;
for (Size k = 0; k < nettingSetExposureCube_->samples(); ++k) {
Real s0 = (d0 == asof() || cid == "") ? 1.0 : cptyCube_->get(cid, d0, k, cptySpIndex_);
Real s1 = (d0 == asof() || dvaName == "") ? 1.0 : cptyCube_->get(dvaName_, d0, k, cptySpIndex_);
Real ene = nettingSetExposureCube_->get(nid, d1, k, nettingSetEneIndex_);
increment += s0 * s1 * ene;
}
return increment * dcf / nettingSetExposureCube_->samples();
}
const Real DynamicCreditXvaCalculator::calculateNettingSetFcaIncrement(
const string& nid, const string& cid, const string& dvaName,
const Date& d0, const Date& d1, const Real& dcf) {
Real increment = 0.0;
for (Size k = 0; k < nettingSetExposureCube_->samples(); ++k) {
Real s0 = (d0 == asof() || cid == "") ? 1.0 : cptyCube_->get(cid, d0, k, cptySpIndex_);
Real s1 = (d0 == asof() || dvaName == "") ? 1.0 : cptyCube_->get(dvaName_, d0, k, cptySpIndex_);
Real epe = nettingSetExposureCube_->get(nid, d1, k, nettingSetEpeIndex_);
increment += s0 * s1 * epe;
}
return increment * dcf / nettingSetExposureCube_->samples();
}
const Real DynamicCreditXvaCalculator::calculateNettingSetMvaIncrement(
const string& nid, const string& cid, const Date& d0, const Date& d1, const Real& dcf) {
Real increment = 0.0;
for (Size k = 0; k < nettingSetExposureCube_->samples(); ++k) {
Real s0 = (d0 == asof() || cid == "") ? 1.0 : cptyCube_->get(cid, d0, k, cptySpIndex_);
Real s1 = (d0 == asof() || dvaName_ == "") ? 1.0 : cptyCube_->get(dvaName_, d0, k, cptySpIndex_);
Real im = dimCalculator_->dimCube()->get(nid, d1, k);
increment += s0 * s1 * im;
}
return increment * dcf / nettingSetExposureCube_->samples();
}
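// Illustrative note (not part of the original source): analogously to the FBA/FCA
// increments above, the netting-set MVA increment over (d0, d1] is estimated as
//
//   MVA increment = dcf * (1/N) * sum_k S_cpty(cid, d0, k) * S_own(dvaName_, d0, k) * IM(nid, d1, k),
//
// with IM taken from the dynamic initial margin cube and each survival factor
// defaulting to 1.0 when the corresponding name is empty or d0 is the as-of date.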
} // namespace analytics
} // namespace ore
|
State Before: f : ℕ → ℕ
hf : Nat.Primrec f
⊢ ∃ m, ∀ (n : ℕ), f n < ack m n State After: case zero
f : ℕ → ℕ
⊢ ∃ m, ∀ (n : ℕ), (fun x => 0) n < ack m n
case succ
f : ℕ → ℕ
⊢ ∃ m, ∀ (n : ℕ), succ n < ack m n
case left
f : ℕ → ℕ
⊢ ∃ m, ∀ (n : ℕ), (fun n => (unpair n).fst) n < ack m n
case right
f : ℕ → ℕ
⊢ ∃ m, ∀ (n : ℕ), (fun n => (unpair n).snd) n < ack m n
case pair
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
IHf : ∃ m, ∀ (n : ℕ), f n < ack m n
IHg : ∃ m, ∀ (n : ℕ), g n < ack m n
⊢ ∃ m, ∀ (n : ℕ), (fun n => pair (f n) (g n)) n < ack m n
case comp
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
IHf : ∃ m, ∀ (n : ℕ), f n < ack m n
IHg : ∃ m, ∀ (n : ℕ), g n < ack m n
⊢ ∃ m, ∀ (n : ℕ), (fun n => f (g n)) n < ack m n
case prec
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
IHf : ∃ m, ∀ (n : ℕ), f n < ack m n
IHg : ∃ m, ∀ (n : ℕ), g n < ack m n
⊢ ∃ m, ∀ (n : ℕ), unpaired (fun z n => rec (f z) (fun y IH => g (pair z (pair y IH))) n) n < ack m n Tactic: induction' hf with f g hf hg IHf IHg f g hf hg IHf IHg f g hf hg IHf IHg State Before: case pair
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
IHf : ∃ m, ∀ (n : ℕ), f n < ack m n
IHg : ∃ m, ∀ (n : ℕ), g n < ack m n
⊢ ∃ m, ∀ (n : ℕ), (fun n => pair (f n) (g n)) n < ack m n
case comp
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
IHf : ∃ m, ∀ (n : ℕ), f n < ack m n
IHg : ∃ m, ∀ (n : ℕ), g n < ack m n
⊢ ∃ m, ∀ (n : ℕ), (fun n => f (g n)) n < ack m n
case prec
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
IHf : ∃ m, ∀ (n : ℕ), f n < ack m n
IHg : ∃ m, ∀ (n : ℕ), g n < ack m n
⊢ ∃ m, ∀ (n : ℕ), unpaired (fun z n => rec (f z) (fun y IH => g (pair z (pair y IH))) n) n < ack m n State After: case pair.intro.intro
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
⊢ ∃ m, ∀ (n : ℕ), (fun n => pair (f n) (g n)) n < ack m n
case comp.intro.intro
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
⊢ ∃ m, ∀ (n : ℕ), (fun n => f (g n)) n < ack m n
case prec.intro.intro
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
⊢ ∃ m, ∀ (n : ℕ), unpaired (fun z n => rec (f z) (fun y IH => g (pair z (pair y IH))) n) n < ack m n Tactic: all_goals cases' IHf with a ha; cases' IHg with b hb State Before: case zero
f : ℕ → ℕ
⊢ ∃ m, ∀ (n : ℕ), (fun x => 0) n < ack m n State After: no goals Tactic: exact ⟨0, ack_pos 0⟩ State Before: case succ
f : ℕ → ℕ
⊢ ∃ m, ∀ (n : ℕ), succ n < ack m n State After: case succ
f : ℕ → ℕ
n : ℕ
⊢ succ n < ack 1 n Tactic: refine' ⟨1, fun n => _⟩ State Before: case succ
f : ℕ → ℕ
n : ℕ
⊢ succ n < ack 1 n State After: case succ
f : ℕ → ℕ
n : ℕ
⊢ 1 + n < ack 1 n Tactic: rw [succ_eq_one_add] State Before: case succ
f : ℕ → ℕ
n : ℕ
⊢ 1 + n < ack 1 n State After: no goals Tactic: apply add_lt_ack State Before: case left
f : ℕ → ℕ
⊢ ∃ m, ∀ (n : ℕ), (fun n => (unpair n).fst) n < ack m n State After: case left
f : ℕ → ℕ
n : ℕ
⊢ (fun n => (unpair n).fst) n < ack 0 n Tactic: refine' ⟨0, fun n => _⟩ State Before: case left
f : ℕ → ℕ
n : ℕ
⊢ (fun n => (unpair n).fst) n < ack 0 n State After: case left
f : ℕ → ℕ
n : ℕ
⊢ (fun n => (unpair n).fst) n ≤ n Tactic: rw [ack_zero, lt_succ_iff] State Before: case left
f : ℕ → ℕ
n : ℕ
⊢ (fun n => (unpair n).fst) n ≤ n State After: no goals Tactic: exact unpair_left_le n State Before: case right
f : ℕ → ℕ
⊢ ∃ m, ∀ (n : ℕ), (fun n => (unpair n).snd) n < ack m n State After: case right
f : ℕ → ℕ
n : ℕ
⊢ (fun n => (unpair n).snd) n < ack 0 n Tactic: refine' ⟨0, fun n => _⟩ State Before: case right
f : ℕ → ℕ
n : ℕ
⊢ (fun n => (unpair n).snd) n < ack 0 n State After: case right
f : ℕ → ℕ
n : ℕ
⊢ (fun n => (unpair n).snd) n ≤ n Tactic: rw [ack_zero, lt_succ_iff] State Before: case right
f : ℕ → ℕ
n : ℕ
⊢ (fun n => (unpair n).snd) n ≤ n State After: no goals Tactic: exact unpair_right_le n State Before: case prec
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
IHf : ∃ m, ∀ (n : ℕ), f n < ack m n
IHg : ∃ m, ∀ (n : ℕ), g n < ack m n
⊢ ∃ m, ∀ (n : ℕ), unpaired (fun z n => rec (f z) (fun y IH => g (pair z (pair y IH))) n) n < ack m n State After: case prec.intro
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
IHg : ∃ m, ∀ (n : ℕ), g n < ack m n
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
⊢ ∃ m, ∀ (n : ℕ), unpaired (fun z n => rec (f z) (fun y IH => g (pair z (pair y IH))) n) n < ack m n Tactic: cases' IHf with a ha State Before: case prec.intro
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
IHg : ∃ m, ∀ (n : ℕ), g n < ack m n
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
⊢ ∃ m, ∀ (n : ℕ), unpaired (fun z n => rec (f z) (fun y IH => g (pair z (pair y IH))) n) n < ack m n State After: case prec.intro.intro
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
⊢ ∃ m, ∀ (n : ℕ), unpaired (fun z n => rec (f z) (fun y IH => g (pair z (pair y IH))) n) n < ack m n Tactic: cases' IHg with b hb State Before: case pair.intro.intro
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
⊢ ∃ m, ∀ (n : ℕ), (fun n => pair (f n) (g n)) n < ack m n State After: case pair.intro.intro
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
n : ℕ
⊢ max (f n) (g n) ≤ ack (max a b) n Tactic: refine'
⟨max a b + 3, fun n =>
(pair_lt_max_add_one_sq _ _).trans_le <|
(pow_le_pow_of_le_left (add_le_add_right _ _) 2).trans <|
ack_add_one_sq_lt_ack_add_three _ _⟩ State Before: case pair.intro.intro
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
n : ℕ
⊢ max (f n) (g n) ≤ ack (max a b) n State After: case pair.intro.intro
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
n : ℕ
⊢ max (f n) (g n) ≤ max (ack a n) (ack b n) Tactic: rw [max_ack_left] State Before: case pair.intro.intro
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
n : ℕ
⊢ max (f n) (g n) ≤ max (ack a n) (ack b n) State After: no goals Tactic: exact max_le_max (ha n).le (hb n).le State Before: case comp.intro.intro
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
⊢ ∃ m, ∀ (n : ℕ), (fun n => f (g n)) n < ack m n State After: no goals Tactic: exact
⟨max a b + 2, fun n =>
(ha _).trans <| (ack_strictMono_right a <| hb n).trans <| ack_ack_lt_ack_max_add_two a b n⟩ State Before: case prec.intro.intro
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
this : ∀ {m n : ℕ}, rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
⊢ ∃ m, ∀ (n : ℕ), unpaired (fun z n => rec (f z) (fun y IH => g (pair z (pair y IH))) n) n < ack m n State After: no goals Tactic: exact ⟨max a b + 9, fun n => this.trans_le <| ack_mono_right _ <| unpair_add_le n⟩ State Before: f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
⊢ ∀ {m n : ℕ}, rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n) State After: f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
⊢ rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n) Tactic: intro m n State Before: f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
⊢ rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n) State After: case zero
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m : ℕ
⊢ rec (f m) (fun y IH => g (pair m (pair y IH))) zero < ack (max a b + 9) (m + zero)
case succ
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
⊢ rec (f m) (fun y IH => g (pair m (pair y IH))) (succ n) < ack (max a b + 9) (m + succ n) Tactic: induction' n with n IH State Before: case zero
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m : ℕ
⊢ rec (f m) (fun y IH => g (pair m (pair y IH))) zero < ack (max a b + 9) (m + zero) State After: f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m : ℕ
⊢ max a b < max a b + 9 Tactic: apply (ha m).trans (ack_strictMono_left m <| (le_max_left a b).trans_lt _) State Before: f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m : ℕ
⊢ max a b < max a b + 9 State After: no goals Tactic: linarith State Before: case succ
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
⊢ rec (f m) (fun y IH => g (pair m (pair y IH))) (succ n) < ack (max a b + 9) (m + succ n) State After: case succ
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
⊢ g (pair m (pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n))) < ack (max a b + 9) (m + succ n) Tactic: simp only [ge_iff_le] State Before: case succ
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
⊢ g (pair m (pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n))) < ack (max a b + 9) (m + succ n) State After: f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
⊢ ack (b + 4) (max m (pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n))) ≤ ack (max a b + 9) (m + succ n) Tactic: apply (hb _).trans ((ack_pair_lt _ _ _).trans_le _) State Before: f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
⊢ ack (b + 4) (max m (pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n))) ≤ ack (max a b + 9) (m + succ n) State After: case inl
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : ?m.125091 < m
⊢ ack (b + 4) (max m (pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n))) ≤ ack (max a b + 9) (m + succ n)
case inr
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ ?m.125091
⊢ ack (b + 4) (max m (pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n))) ≤ ack (max a b + 9) (m + succ n) Tactic: cases' lt_or_le _ m with h₁ h₁ State Before: case inr
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
⊢ ack (b + 4) (max m (pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n))) ≤ ack (max a b + 9) (m + succ n) State After: case inr
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
⊢ ack (b + 4) (pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)) ≤ ack (max a b + 9) (m + succ n) Tactic: rw [max_eq_right h₁] State Before: case inr
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
⊢ ack (b + 4) (pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)) ≤ ack (max a b + 9) (m + succ n) State After: case inr
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
⊢ ack (b + 4 + 4) (max n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)) ≤ ack (max a b + 9) (m + succ n) Tactic: apply (ack_pair_lt _ _ _).le.trans State Before: case inr
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
⊢ ack (b + 4 + 4) (max n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)) ≤ ack (max a b + 9) (m + succ n) State After: case inr.inl
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
h₂ : ?m.125452 < n
⊢ ack (b + 4 + 4) (max n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)) ≤ ack (max a b + 9) (m + succ n)
case inr.inr
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
h₂ : n ≤ ?m.125452
⊢ ack (b + 4 + 4) (max n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)) ≤ ack (max a b + 9) (m + succ n) Tactic: cases' lt_or_le _ n with h₂ h₂ State Before: case inr.inr
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
h₂ : n ≤ rec (f m) (fun y IH => g (pair m (pair y IH))) n
⊢ ack (b + 4 + 4) (max n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)) ≤ ack (max a b + 9) (m + succ n) State After: case inr.inr
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
h₂ : n ≤ rec (f m) (fun y IH => g (pair m (pair y IH))) n
⊢ ack (b + 4 + 4) (rec (f m) (fun y IH => g (pair m (pair y IH))) n) ≤ ack (max a b + 9) (m + succ n) Tactic: rw [max_eq_right h₂] State Before: case inr.inr
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
h₂ : n ≤ rec (f m) (fun y IH => g (pair m (pair y IH))) n
⊢ ack (b + 4 + 4) (rec (f m) (fun y IH => g (pair m (pair y IH))) n) ≤ ack (max a b + 9) (m + succ n) State After: case inr.inr
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
h₂ : n ≤ rec (f m) (fun y IH => g (pair m (pair y IH))) n
⊢ ack (b + 4 + 4) (ack (max a b + 9) (m + n)) ≤ ack (max a b + 9) (m + succ n) Tactic: apply (ack_strictMono_right _ IH).le.trans State Before: case inr.inr
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
h₂ : n ≤ rec (f m) (fun y IH => g (pair m (pair y IH))) n
⊢ ack (b + 4 + 4) (ack (max a b + 9) (m + n)) ≤ ack (max a b + 9) (m + succ n) State After: case inr.inr
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
h₂ : n ≤ rec (f m) (fun y IH => g (pair m (pair y IH))) n
⊢ ack (b + (4 + 4)) (ack (max a b + 8 + 1) (m + n)) ≤ ack (max a b + 8) (ack (max a b + 8 + 1) (m + n)) Tactic: rw [add_succ m, add_succ _ 8, succ_eq_add_one, succ_eq_add_one,
ack_succ_succ (_ + 8), add_assoc] State Before: case inr.inr
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
h₂ : n ≤ rec (f m) (fun y IH => g (pair m (pair y IH))) n
⊢ ack (b + (4 + 4)) (ack (max a b + 8 + 1) (m + n)) ≤ ack (max a b + 8) (ack (max a b + 8 + 1) (m + n)) State After: no goals Tactic: exact ack_mono_left _ (Nat.add_le_add (le_max_right a b) le_rfl) State Before: case inl
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : ?m.125091 < m
⊢ ack (b + 4) (max m (pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n))) ≤ ack (max a b + 9) (m + succ n) State After: case inl
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n) < m
⊢ ack (b + 4) m ≤ ack (max a b + 9) (m + succ n) Tactic: rw [max_eq_left h₁.le] State Before: case inl
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n) < m
⊢ ack (b + 4) m ≤ ack (max a b + 9) (m + succ n) State After: no goals Tactic: exact ack_le_ack (Nat.add_le_add (le_max_right a b) <| by norm_num)
(self_le_add_right m _) State Before: f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n) < m
⊢ 4 ≤ 9 State After: no goals Tactic: norm_num State Before: case inr.inl
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
h₂ : ?m.125452 < n
⊢ ack (b + 4 + 4) (max n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)) ≤ ack (max a b + 9) (m + succ n) State After: case inr.inl
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
h₂ : rec (f m) (fun y IH => g (pair m (pair y IH))) n < n
⊢ ack (b + (4 + 4)) n ≤ ack (max a b + 9) (m + succ n) Tactic: rw [max_eq_left h₂.le, add_assoc] State Before: case inr.inl
f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
h₂ : rec (f m) (fun y IH => g (pair m (pair y IH))) n < n
⊢ ack (b + (4 + 4)) n ≤ ack (max a b + 9) (m + succ n) State After: no goals Tactic: exact
ack_le_ack (Nat.add_le_add (le_max_right a b) <| by norm_num)
((le_succ n).trans <| self_le_add_left _ _) State Before: f✝ f g : ℕ → ℕ
hf : Nat.Primrec f
hg : Nat.Primrec g
a : ℕ
ha : ∀ (n : ℕ), f n < ack a n
b : ℕ
hb : ∀ (n : ℕ), g n < ack b n
m n : ℕ
IH : rec (f m) (fun y IH => g (pair m (pair y IH))) n < ack (max a b + 9) (m + n)
h₁ : m ≤ pair n (rec (f m) (fun y IH => g (pair m (pair y IH))) n)
h₂ : rec (f m) (fun y IH => g (pair m (pair y IH))) n < n
⊢ 4 + 4 ≤ 9 State After: no goals Tactic: norm_num
|
-- Geo-level report (templated): distinct users exposed to US impressions per DMA /
-- zip / region / country, plus distinct converters for up to four pixels, taken from
-- attributed conversion events left-joined on mm_uuid over the same date range.
SELECT
imp.dma as dma,
imp.zip_code as zipcode,
imp.region as region_name,
imp.country as country_name,
COUNT(DISTINCT(imp.mm_uuid)) as geo_uu,
COUNT(DISTINCT(CASE WHEN evt.pixel_id=$pixel_id_1$ THEN evt.mm_uuid END)) as $pixel_name_1$,
COUNT(DISTINCT(CASE WHEN evt.pixel_id=$pixel_id_2$ THEN evt.mm_uuid END)) as $pixel_name_2$,
COUNT(DISTINCT(CASE WHEN evt.pixel_id=$pixel_id_3$ THEN evt.mm_uuid END)) as $pixel_name_3$,
COUNT(DISTINCT(CASE WHEN evt.pixel_id=$pixel_id_4$ THEN evt.mm_uuid END)) as $pixel_name_4$
FROM
(
SELECT DISTINCT
dma,
zip_code,
region,
country,
mm_uuid
FROM
mm_impressions_$ORG_ID$
WHERE
impression_date>="$start_date$"
AND
impression_date<="$end_date$"
AND
lower(country) LIKE "%united states%"
) as imp
LEFT OUTER JOIN
(
SELECT DISTINCT
pixel_id,
mm_uuid
FROM
mm_attributed_events_$ORG_ID$
WHERE
event_date>="$start_date$"
AND
event_date<="$end_date$"
AND
pixel_id IN ($pixel_id_1$, $pixel_id_2$, $pixel_id_3$, $pixel_id_4$)
AND
event_type IN ($event_type_conversion$)
AND
lower(country) LIKE "%united states%"
) as evt
ON
imp.mm_uuid=evt.mm_uuid
GROUP BY
imp.dma,
imp.zip_code,
imp.region,
imp.country;
|
#include "rundeck_opts.h"
MODULE LAKES_COM
!@sum LAKES_COM model variables for Lake/Rivers module
!@auth Gavin Schmidt
!@ver 1.0
USE MODEL_COM, only : IM,JM,ioread,iowrite,lhead,irerun,irsfic
* ,irsficno
#ifdef TRACERS_WATER
USE TRACER_COM, only : ntm
#endif
IMPLICIT NONE
SAVE
!@var MWL mass of lake water (kg)
REAL*8, DIMENSION(IM,JM) :: MWL
!@var GML total enthalpy of lake (J)
REAL*8, DIMENSION(IM,JM) :: GML
!@var TLAKE temperature of lake (C)
REAL*8, DIMENSION(IM,JM) :: TLAKE
!@var MLDLK mixed layer depth in lake (m)
REAL*8, DIMENSION(IM,JM) :: MLDLK
!@var FLAKE variable lake fraction (1)
REAL*8, DIMENSION(IM,JM) :: FLAKE
!@var TANLK tan(alpha) = slope for conical lake (1)
REAL*8, DIMENSION(IM,JM) :: TANLK
#ifdef TRACERS_WATER
!@var TRLAKE tracer amount in each lake level (kg)
REAL*8, DIMENSION(NTM,2,IM,JM) :: TRLAKE
#endif
END MODULE LAKES_COM
SUBROUTINE io_lakes(kunit,iaction,ioerr)
!@sum io_lakes reads and writes lake arrays to file
!@auth Gavin Schmidt
!@ver 1.0
USE LAKES_COM
IMPLICIT NONE
INTEGER kunit !@var kunit unit number of read/write
INTEGER iaction !@var iaction flag for reading or writing to file
!@var IOERR 1 (or -1) if there is (or is not) an error in i/o
INTEGER, INTENT(INOUT) :: IOERR
!@var HEADER Character string label for individual records
CHARACTER*80 :: HEADER, MODULE_HEADER = "LAKE01"
#ifdef TRACERS_WATER
!@var TRHEADER Character string label for individual records
CHARACTER*80 :: TRHEADER, TRMODULE_HEADER = "TRLAK01"
write (TRMODULE_HEADER(lhead+1:80)
* ,'(a7,i3,a)')'R8 dim(',NTM,',2,im,jm):TRLAKE'
#endif
MODULE_HEADER(lhead+1:80) = 'R8 dim(im,jm):MixLD,MWtr,Tlk,Enth'
SELECT CASE (IACTION)
CASE (:IOWRITE) ! output to standard restart file
WRITE (kunit,err=10) MODULE_HEADER,MLDLK,MWL,TLAKE,GML !,FLAKE
#ifdef TRACERS_WATER
WRITE (kunit,err=10) TRMODULE_HEADER,TRLAKE
#endif
CASE (IOREAD:) ! input from restart file
READ (kunit,err=10) HEADER,MLDLK,MWL,TLAKE,GML !,FLAKE
IF (HEADER(1:LHEAD).NE.MODULE_HEADER(1:LHEAD)) THEN
PRINT*,"Discrepancy in module version ",HEADER,MODULE_HEADER
GO TO 10
END IF
#ifdef TRACERS_WATER
SELECT CASE (IACTION)
CASE (IRERUN,IOREAD,IRSFIC,IRSFICNO) ! reruns/restarts
READ (kunit,err=10) TRHEADER,TRLAKE
IF (TRHEADER(1:LHEAD).NE.TRMODULE_HEADER(1:LHEAD)) THEN
PRINT*,"Discrepancy in module version ",TRHEADER
* ,TRMODULE_HEADER
GO TO 10
END IF
END SELECT
#endif
END SELECT
RETURN
10 IOERR=1
RETURN
END SUBROUTINE io_lakes
|
Load LFindLoad.
From lfind Require Import LFind.
From QuickChick Require Import QuickChick.
From adtind Require Import goal33.
Derive Show for natural.
Derive Arbitrary for natural.
Instance Dec_Eq_natural : Dec_Eq natural.
Proof. dec_eq. Qed.
Lemma conj14synthconj4 : forall (lv0 : natural) (lv1 : natural), (@eq natural (plus lv0 (plus Zero lv1)) (plus lv1 lv0)).
Admitted.
QuickChick conj14synthconj4.
|
(*<*)
theory AdvancedInd imports Main begin
(*>*)
text\<open>\noindent
Now that we have learned about rules and logic, we take another look at the
finer points of induction. We consider two questions: what to do if the
proposition to be proved is not directly amenable to induction
(\S\ref{sec:ind-var-in-prems}), and how to utilize (\S\ref{sec:complete-ind})
and even derive (\S\ref{sec:derive-ind}) new induction schemas. We conclude
with an extended example of induction (\S\ref{sec:CTL-revisited}).
\<close>
subsection\<open>Massaging the Proposition\<close>
text\<open>\label{sec:ind-var-in-prems}
Often we have assumed that the theorem to be proved is already in a form
that is amenable to induction, but sometimes it isn't.
Here is an example.
Since \<^term>\<open>hd\<close> and \<^term>\<open>last\<close> return the first and last element of a
non-empty list, this lemma looks easy to prove:
\<close>
lemma "xs \<noteq> [] \<Longrightarrow> hd(rev xs) = last xs"
apply(induct_tac xs)
txt\<open>\noindent
But induction produces the warning
\begin{quote}\tt
Induction variable occurs also among premises!
\end{quote}
and leads to the base case
@{subgoals[display,indent=0,goals_limit=1]}
Simplification reduces the base case to this:
\begin{isabelle}
\ 1.\ xs\ {\isasymnoteq}\ []\ {\isasymLongrightarrow}\ hd\ []\ =\ last\ []
\end{isabelle}
We cannot prove this equality because we do not know what \<^term>\<open>hd\<close> and
\<^term>\<open>last\<close> return when applied to \<^term>\<open>[]\<close>.
We should not have ignored the warning. Because the induction
formula is only the conclusion, induction does not affect the occurrence of \<^term>\<open>xs\<close> in the premises.
Thus the case that should have been trivial
becomes unprovable. Fortunately, the solution is easy:\footnote{A similar
heuristic applies to rule inductions; see \S\ref{sec:rtc}.}
\begin{quote}
\emph{Pull all occurrences of the induction variable into the conclusion
using \<open>\<longrightarrow>\<close>.}
\end{quote}
Thus we should state the lemma as an ordinary
implication~(\<open>\<longrightarrow>\<close>), letting
\attrdx{rule_format} (\S\ref{sec:forward}) convert the
result to the usual \<open>\<Longrightarrow>\<close> form:
\<close>
(*<*)oops(*>*)
lemma hd_rev [rule_format]: "xs \<noteq> [] \<longrightarrow> hd(rev xs) = last xs"
(*<*)
apply(induct_tac xs)
(*>*)
txt\<open>\noindent
This time, induction leaves us with a trivial base case:
@{subgoals[display,indent=0,goals_limit=1]}
And \<open>auto\<close> completes the proof.
If there are multiple premises $A@1$, \dots, $A@n$ containing the
induction variable, you should turn the conclusion $C$ into
\[ A@1 \longrightarrow \cdots \longrightarrow A@n \longrightarrow C. \]
Additionally, you may also have to universally quantify some other variables,
which can yield a fairly complex conclusion. However, \<open>rule_format\<close>
can remove any number of occurrences of \<open>\<forall>\<close> and
\<open>\<longrightarrow>\<close>; a small additional sketch is given after this proof.
\index{induction!on a term}%
A second reason why your proposition may not be amenable to induction is that
you want to induct on a complex term, rather than a variable. In
general, induction on a term~$t$ requires rephrasing the conclusion~$C$
as
\begin{equation}\label{eqn:ind-over-term}
\forall y@1 \dots y@n.~ x = t \longrightarrow C.
\end{equation}
where $y@1 \dots y@n$ are the free variables in $t$ and $x$ is a new variable.
Now you can perform induction on~$x$. An example appears in
\S\ref{sec:complete-ind} below.
The very same problem may occur in connection with rule induction. Remember
that it requires a premise of the form $(x@1,\dots,x@k) \in R$, where $R$ is
some inductively defined set and the $x@i$ are variables. If instead we have
a premise $t \in R$, where $t$ is not just an $n$-tuple of variables, we
replace it with $(x@1,\dots,x@k) \in R$, and rephrase the conclusion $C$ as
\[ \forall y@1 \dots y@n.~ (x@1,\dots,x@k) = t \longrightarrow C. \]
For an example see \S\ref{sec:CTL-revisited} below.
Of course, all premises that share free variables with $t$ need to be pulled into
the conclusion as well, under the \<open>\<forall>\<close>, again using \<open>\<longrightarrow>\<close> as shown above.
Readers who are puzzled by the form of statement
(\ref{eqn:ind-over-term}) above should remember that the
transformation is only performed to permit induction. Once induction
has been applied, the statement can be transformed back into something quite
intuitive. For example, applying wellfounded induction on $x$ (w.r.t.\
$\prec$) to (\ref{eqn:ind-over-term}) and transforming the result a
little leads to the goal
\[ \bigwedge\overline{y}.\
\forall \overline{z}.\ t\,\overline{z} \prec t\,\overline{y}\ \longrightarrow\ C\,\overline{z}
\ \Longrightarrow\ C\,\overline{y} \]
where $\overline{y}$ stands for $y@1 \dots y@n$ and the dependence of $t$ and
$C$ on the free variables of $t$ has been made explicit.
Unfortunately, this induction schema cannot be expressed as a
single theorem because it depends on the number of free variables in $t$ ---
the notation $\overline{y}$ is merely an informal device.
\<close>
(*<*)by auto(*>*)
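text\<open>\noindent
As promised above, here is a small additional sketch of the
premises-into-conclusion heuristic; it relies only on standard list lemmas
from the library. The premise on \<open>ys\<close> is pulled into the conclusion with
\<open>\<longrightarrow>\<close>, \<open>ys\<close> itself is universally quantified, and \<open>rule_format\<close>
removes both again:
\<close>

lemma last_append_sketch [rule_format]:
  "\<forall>ys. ys \<noteq> [] \<longrightarrow> last (xs @ ys) = last ys"
by(induct_tac xs, auto)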
subsection\<open>Beyond Structural and Recursion Induction\<close>
text\<open>\label{sec:complete-ind}
So far, inductive proofs were by structural induction for
primitive recursive functions and recursion induction for total recursive
functions. But sometimes structural induction is awkward and there is no
recursive function that could furnish a more appropriate
induction schema. In such cases a general-purpose induction schema can
be helpful. We show how to apply such induction schemas by an example.
Structural induction on \<^typ>\<open>nat\<close> is
usually known as mathematical induction. There is also \textbf{complete}
\index{induction!complete}%
induction, where you prove $P(n)$ under the assumption that $P(m)$
holds for all $m<n$. In Isabelle, this is the theorem \tdx{nat_less_induct}:
@{thm[display]"nat_less_induct"[no_vars]}
As an application, we prove a property of the following
function:
\<close>
axiomatization f :: "nat \<Rightarrow> nat"
where f_ax: "f(f(n)) < f(Suc(n))" for n :: nat
text\<open>
\begin{warn}
We discourage the use of axioms because of the danger of
inconsistencies. Axiom \<open>f_ax\<close> does
not introduce an inconsistency because, for example, the identity function
satisfies it. Axioms can be useful in exploratory developments, say when
you assume some well-known theorems so that you can quickly demonstrate some
point about methodology. If your example turns into a substantial proof
development, you should replace axioms by theorems.
\end{warn}\noindent
The axiom for \<^term>\<open>f\<close> implies \<^prop>\<open>n <= f n\<close>, which can
be proved by induction on \mbox{\<^term>\<open>f n\<close>}. Following the recipe outlined
above, we have to phrase the proposition as follows to allow induction:
\<close>
lemma f_incr_lem: "\<forall>i. k = f i \<longrightarrow> i \<le> f i"
txt\<open>\noindent
To perform induction on \<^term>\<open>k\<close> using @{thm[source]nat_less_induct}, we use
the same general induction method as for recursion induction (see
\S\ref{sec:fun-induction}):
\<close>
apply(induct_tac k rule: nat_less_induct)
txt\<open>\noindent
We get the following proof state:
@{subgoals[display,indent=0,margin=65]}
After stripping the \<open>\<forall>i\<close>, the proof continues with a case
distinction on \<^term>\<open>i\<close>. The case \<^prop>\<open>i = (0::nat)\<close> is trivial and we focus on
the other case:
\<close>
apply(rule allI)
apply(case_tac i)
apply(simp)
txt\<open>
@{subgoals[display,indent=0]}
\<close>
by(blast intro!: f_ax Suc_leI intro: le_less_trans)
text\<open>\noindent
If you find the last step puzzling, here are the two lemmas it employs:
\begin{isabelle}
@{thm Suc_leI[no_vars]}
\rulename{Suc_leI}\isanewline
@{thm le_less_trans[no_vars]}
\rulename{le_less_trans}
\end{isabelle}
%
The proof goes like this (writing \<^term>\<open>j\<close> instead of \<^typ>\<open>nat\<close>).
Since \<^prop>\<open>i = Suc j\<close> it suffices to show
\hbox{\<^prop>\<open>j < f(Suc j)\<close>},
by @{thm[source]Suc_leI}\@. This is
proved as follows. From @{thm[source]f_ax} we have \<^prop>\<open>f (f j) < f (Suc j)\<close>
(1) which implies \<^prop>\<open>f j <= f (f j)\<close> by the induction hypothesis.
Using (1) once more we obtain \<^prop>\<open>f j < f(Suc j)\<close> (2) by the transitivity
rule @{thm[source]le_less_trans}.
Using the induction hypothesis once more we obtain \<^prop>\<open>j <= f j\<close>
which, together with (2) yields \<^prop>\<open>j < f (Suc j)\<close> (again by
@{thm[source]le_less_trans}).
This last step shows both the power and the danger of automatic proofs. They
will usually not tell you how the proof goes, because it can be hard to
translate the internal proof into a human-readable format. Automatic
proofs are easy to write but hard to read and understand.
The desired result, \<^prop>\<open>i <= f i\<close>, follows from @{thm[source]f_incr_lem}:
\<close>
lemmas f_incr = f_incr_lem[rule_format, OF refl]
text\<open>\noindent
The final @{thm[source]refl} gets rid of the premise \<open>?k = f ?i\<close>.
We could have included this derivation in the original statement of the lemma:
\<close>
lemma f_incr[rule_format, OF refl]: "\<forall>i. k = f i \<longrightarrow> i \<le> f i"
(*<*)oops(*>*)
text\<open>
\begin{exercise}
From the axiom and lemma for \<^term>\<open>f\<close>, show that \<^term>\<open>f\<close> is the
identity function.
\end{exercise}
Method \methdx{induct_tac} can be applied with any rule $r$
whose conclusion is of the form ${?}P~?x@1 \dots ?x@n$, in which case the
format is
\begin{quote}
\isacommand{apply}\<open>(induct_tac\<close> $y@1 \dots y@n$ \<open>rule:\<close> $r$\<open>)\<close>
\end{quote}
where $y@1, \dots, y@n$ are variables in the conclusion of the first subgoal.
A further useful induction rule is @{thm[source]length_induct},
induction on the length of a list\indexbold{*length_induct}
@{thm[display]length_induct[no_vars]}
which is a special case of @{thm[source]measure_induct}
@{thm[display]measure_induct[no_vars]}
where \<^term>\<open>f\<close> may be any function into type \<^typ>\<open>nat\<close>.
\<close>
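text\<open>\noindent
As a quick sketch, the fact that @{thm[source]length_induct} is a special case
of @{thm[source]measure_induct} can be reproduced by instantiating the measure;
this assumes that the schematic variable is indeed named \<open>f\<close>, as in the
display above:
\<close>

lemmas length_induct_sketch = measure_induct[where f = length]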
subsection\<open>Derivation of New Induction Schemas\<close>
text\<open>\label{sec:derive-ind}
\index{induction!deriving new schemas}%
Induction schemas are ordinary theorems and you can derive new ones
whenever you wish. This section shows you how, using the example
of @{thm[source]nat_less_induct}. Assume we only have structural induction
available for \<^typ>\<open>nat\<close> and want to derive complete induction. We
must generalize the statement as shown:
\<close>
lemma induct_lem: "(\<And>n::nat. \<forall>m<n. P m \<Longrightarrow> P n) \<Longrightarrow> \<forall>m<n. P m"
apply(induct_tac n)
txt\<open>\noindent
The base case is vacuously true. For the induction step (\<^prop>\<open>m <
Suc n\<close>) we distinguish two cases: case \<^prop>\<open>m < n\<close> is true by induction
hypothesis and case \<^prop>\<open>m = n\<close> follows from the assumption, again using
the induction hypothesis:
\<close>
apply(blast)
by(blast elim: less_SucE)
text\<open>\noindent
The elimination rule @{thm[source]less_SucE} expresses the case distinction:
@{thm[display]"less_SucE"[no_vars]}
Now it is straightforward to derive the original version of
@{thm[source]nat_less_induct} by manipulating the conclusion of the above
lemma: instantiate \<^term>\<open>n\<close> by \<^term>\<open>Suc n\<close> and \<^term>\<open>m\<close> by \<^term>\<open>n\<close>
and remove the trivial condition \<^prop>\<open>n < Suc n\<close>. Fortunately, this
happens automatically when we add the lemma as a new premise to the
desired goal:
\<close>
theorem nat_less_induct: "(\<And>n::nat. \<forall>m<n. P m \<Longrightarrow> P n) \<Longrightarrow> P n"
by(insert induct_lem, blast)
text\<open>
HOL already provides the mother of
all inductions, well-founded induction (see \S\ref{sec:Well-founded}). For
example theorem @{thm[source]nat_less_induct} is
a special case of @{thm[source]wf_induct} where \<^term>\<open>r\<close> is \<open><\<close> on
\<^typ>\<open>nat\<close>. The details can be found in theory \isa{Wellfounded_Recursion}.
\<close>
(*<*)end(*>*)
|
[STATEMENT]
lemma lt_plus_1_le_word:
fixes x :: "'a::len word"
assumes bound:"n < unat (maxBound::'a word)"
shows "x < 1 + of_nat n = (x \<le> of_nat n)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (x < 1 + word_of_nat n) = (x \<le> word_of_nat n)
[PROOF STEP]
by (metis add.commute bound max_word_max word_Suc_leq word_not_le word_of_nat_less)
|
module CTL.Modalities.EG where
open import FStream.Core
open import Library
-- Possibly forever : s₀ ⊧ φ ⇔ ∃ s₀ R s₁ R ... ∀ i . sᵢ ⊧ φ
{-# NO_POSITIVITY_CHECK #-} -- Not necessary from Agda 2.6 upwards
record EG' {i : Size} {ℓ₁ ℓ₂} {C : Container ℓ₁}
(props : FStream' C (Set ℓ₂)) : Set (ℓ₁ ⊔ ℓ₂) where
coinductive
field
nowE : head props
laterE : {j : Size< i} → E (fmap EG' (inF (tail props)))
open EG' public
EG : ∀ {i : Size} {ℓ₁ ℓ₂} {C : Container ℓ₁}
→ FStream C (Set ℓ₂) → Set (ℓ₁ ⊔ ℓ₂)
EG props = EPred EG' (inF props)
mutual
EGₛ' : ∀ {i ℓ₁ ℓ₂} {C : Container ℓ₁}
→ FStream' C (Set ℓ₂) → FStream' {i} C (Set (ℓ₁ ⊔ ℓ₂))
head (EGₛ' props) = EG' props
tail (EGₛ' props) = EGₛ (tail props)
EGₛ : ∀ {i ℓ₁ ℓ₂} {C : Container ℓ₁}
→ FStream C (Set ℓ₂) → FStream {i} C (Set (ℓ₁ ⊔ ℓ₂))
inF (EGₛ props) = fmap EGₛ' (inF props)
|
section\<open>Constant Folding\<close>
theory Constant_Folding
imports
Solidity_Main
begin
text\<open>
The following function optimizes expressions w.r.t. gas consumption.
\<close>
primrec eupdate :: "E \<Rightarrow> E"
and lupdate :: "L \<Rightarrow> L"
where
"lupdate (Id i) = Id i"
| "lupdate (Ref i exp) = Ref i (map eupdate exp)"
| "eupdate (E.INT b v) =
(if (b\<in>vbits)
then if v \<ge> 0
then E.INT b (-(2^(b-1)) + (v+2^(b-1)) mod (2^b))
else E.INT b (2^(b-1) - (-v+2^(b-1)-1) mod (2^b) - 1)
else E.INT b v)"
| "eupdate (UINT b v) = (if (b\<in>vbits) then UINT b (v mod (2^b)) else UINT b v)"
| "eupdate (ADDRESS a) = ADDRESS a"
| "eupdate (BALANCE a) = BALANCE a"
| "eupdate THIS = THIS"
| "eupdate SENDER = SENDER"
| "eupdate VALUE = VALUE"
| "eupdate TRUE = TRUE"
| "eupdate FALSE = FALSE"
| "eupdate (LVAL l) = LVAL (lupdate l)"
| "eupdate (PLUS ex1 ex2) =
(case (eupdate ex1) of
E.INT b1 v1 \<Rightarrow>
if b1 \<in> vbits
then (case (eupdate ex2) of
E.INT b2 v2 \<Rightarrow>
if b2\<in>vbits
then let v=v1+v2 in
if v \<ge> 0
then E.INT (max b1 b2) (-(2^((max b1 b2)-1)) + (v+2^((max b1 b2)-1)) mod (2^(max b1 b2)))
else E.INT (max b1 b2) (2^((max b1 b2)-1) - (-v+2^((max b1 b2)-1)-1) mod (2^(max b1 b2)) - 1)
else (PLUS (E.INT b1 v1) (E.INT b2 v2))
| UINT b2 v2 \<Rightarrow>
if b2\<in>vbits \<and> b2 < b1
then let v=v1+v2 in
if v \<ge> 0
then E.INT b1 (-(2^(b1-1)) + (v+2^(b1-1)) mod (2^b1))
else E.INT b1 (2^(b1-1) - (-v+2^(b1-1)-1) mod (2^b1) - 1)
else PLUS (E.INT b1 v1) (UINT b2 v2)
| _ \<Rightarrow> PLUS (E.INT b1 v1) (eupdate ex2))
else PLUS (E.INT b1 v1) (eupdate ex2)
| UINT b1 v1 \<Rightarrow>
if b1 \<in> vbits
then (case (eupdate ex2) of
UINT b2 v2 \<Rightarrow>
if b2 \<in> vbits
then UINT (max b1 b2) ((v1 + v2) mod (2^(max b1 b2)))
else (PLUS (UINT b1 v1) (UINT b2 v2))
| E.INT b2 v2 \<Rightarrow>
if b2\<in>vbits \<and> b1 < b2
then let v=v1+v2 in
if v \<ge> 0
then E.INT b2 (-(2^(b2-1)) + (v+2^(b2-1)) mod (2^b2))
else E.INT b2 (2^(b2-1) - (-v+2^(b2-1)-1) mod (2^b2) - 1)
else PLUS (UINT b1 v1) (E.INT b2 v2)
| _ \<Rightarrow> PLUS (UINT b1 v1) (eupdate ex2))
else PLUS (UINT b1 v1) (eupdate ex2)
| _ \<Rightarrow> PLUS (eupdate ex1) (eupdate ex2))"
| "eupdate (MINUS ex1 ex2) =
(case (eupdate ex1) of
E.INT b1 v1 \<Rightarrow>
if b1 \<in> vbits
then (case (eupdate ex2) of
E.INT b2 v2 \<Rightarrow>
if b2\<in>vbits
then let v=v1-v2 in
if v \<ge> 0
then E.INT (max b1 b2) (-(2^((max b1 b2)-1)) + (v+2^((max b1 b2)-1)) mod (2^(max b1 b2)))
else E.INT (max b1 b2) (2^((max b1 b2)-1) - (-v+2^((max b1 b2)-1)-1) mod (2^(max b1 b2)) - 1)
else (MINUS (E.INT b1 v1) (E.INT b2 v2))
| UINT b2 v2 \<Rightarrow>
if b2\<in>vbits \<and> b2 < b1
then let v=v1-v2 in
if v \<ge> 0
then E.INT b1 (-(2^(b1-1)) + (v+2^(b1-1)) mod (2^b1))
else E.INT b1 (2^(b1-1) - (-v+2^(b1-1)-1) mod (2^b1) - 1)
else MINUS (E.INT b1 v1) (UINT b2 v2)
| _ \<Rightarrow> MINUS (E.INT b1 v1) (eupdate ex2))
else MINUS (E.INT b1 v1) (eupdate ex2)
| UINT b1 v1 \<Rightarrow>
if b1 \<in> vbits
then (case (eupdate ex2) of
UINT b2 v2 \<Rightarrow>
if b2 \<in> vbits
then UINT (max b1 b2) ((v1 - v2) mod (2^(max b1 b2)))
else (MINUS (UINT b1 v1) (UINT b2 v2))
| E.INT b2 v2 \<Rightarrow>
if b2\<in>vbits \<and> b1 < b2
then let v=v1-v2 in
if v \<ge> 0
then E.INT b2 (-(2^(b2-1)) + (v+2^(b2-1)) mod (2^b2))
else E.INT b2 (2^(b2-1) - (-v+2^(b2-1)-1) mod (2^b2) - 1)
else MINUS (UINT b1 v1) (E.INT b2 v2)
| _ \<Rightarrow> MINUS (UINT b1 v1) (eupdate ex2))
else MINUS (UINT b1 v1) (eupdate ex2)
| _ \<Rightarrow> MINUS (eupdate ex1) (eupdate ex2))"
| "eupdate (EQUAL ex1 ex2) =
(case (eupdate ex1) of
E.INT b1 v1 \<Rightarrow>
if b1 \<in> vbits
then (case (eupdate ex2) of
E.INT b2 v2 \<Rightarrow>
if b2\<in>vbits
then if v1 = v2
then TRUE
else FALSE
else EQUAL (E.INT b1 v1) (E.INT b2 v2)
| UINT b2 v2 \<Rightarrow>
if b2\<in>vbits \<and> b2 < b1
then if v1 = v2
then TRUE
else FALSE
else EQUAL (E.INT b1 v1) (UINT b2 v2)
| _ \<Rightarrow> EQUAL (E.INT b1 v1) (eupdate ex2))
else EQUAL (E.INT b1 v1) (eupdate ex2)
| UINT b1 v1 \<Rightarrow>
if b1 \<in> vbits
then (case (eupdate ex2) of
UINT b2 v2 \<Rightarrow>
if b2 \<in> vbits
then if v1 = v2
then TRUE
else FALSE
else EQUAL (E.INT b1 v1) (UINT b2 v2)
| E.INT b2 v2 \<Rightarrow>
if b2\<in>vbits \<and> b1 < b2
then if v1 = v2
then TRUE
else FALSE
else EQUAL (UINT b1 v1) (E.INT b2 v2)
| _ \<Rightarrow> EQUAL (UINT b1 v1) (eupdate ex2))
else EQUAL (UINT b1 v1) (eupdate ex2)
| _ \<Rightarrow> EQUAL (eupdate ex1) (eupdate ex2))"
| "eupdate (LESS ex1 ex2) =
(case (eupdate ex1) of
E.INT b1 v1 \<Rightarrow>
if b1 \<in> vbits
then (case (eupdate ex2) of
E.INT b2 v2 \<Rightarrow>
if b2\<in>vbits
then if v1 < v2
then TRUE
else FALSE
else LESS (E.INT b1 v1) (E.INT b2 v2)
| UINT b2 v2 \<Rightarrow>
if b2\<in>vbits \<and> b2 < b1
then if v1 < v2
then TRUE
else FALSE
else LESS (E.INT b1 v1) (UINT b2 v2)
| _ \<Rightarrow> LESS (E.INT b1 v1) (eupdate ex2))
else LESS (E.INT b1 v1) (eupdate ex2)
| UINT b1 v1 \<Rightarrow>
if b1 \<in> vbits
then (case (eupdate ex2) of
UINT b2 v2 \<Rightarrow>
if b2 \<in> vbits
then if v1 < v2
then TRUE
else FALSE
else LESS (E.INT b1 v1) (UINT b2 v2)
| E.INT b2 v2 \<Rightarrow>
if b2\<in>vbits \<and> b1 < b2
then if v1 < v2
then TRUE
else FALSE
else LESS (UINT b1 v1) (E.INT b2 v2)
| _ \<Rightarrow> LESS (UINT b1 v1) (eupdate ex2))
else LESS (UINT b1 v1) (eupdate ex2)
| _ \<Rightarrow> LESS (eupdate ex1) (eupdate ex2))"
| "eupdate (AND ex1 ex2) =
(case (eupdate ex1) of
TRUE \<Rightarrow> (case (eupdate ex2) of
TRUE \<Rightarrow> TRUE
| FALSE \<Rightarrow> FALSE
| _ \<Rightarrow> AND TRUE (eupdate ex2))
| FALSE \<Rightarrow> (case (eupdate ex2) of
TRUE \<Rightarrow> FALSE
| FALSE \<Rightarrow> FALSE
| _ \<Rightarrow> AND FALSE (eupdate ex2))
| _ \<Rightarrow> AND (eupdate ex1) (eupdate ex2))"
| "eupdate (OR ex1 ex2) =
(case (eupdate ex1) of
TRUE \<Rightarrow> (case (eupdate ex2) of
TRUE \<Rightarrow> TRUE
| FALSE \<Rightarrow> TRUE
| _ \<Rightarrow> OR TRUE (eupdate ex2))
| FALSE \<Rightarrow> (case (eupdate ex2) of
TRUE \<Rightarrow> TRUE
| FALSE \<Rightarrow> FALSE
| _ \<Rightarrow> OR FALSE (eupdate ex2))
| _ \<Rightarrow> OR (eupdate ex1) (eupdate ex2))"
| "eupdate (NOT ex1) =
(case (eupdate ex1) of
TRUE \<Rightarrow> FALSE
| FALSE \<Rightarrow> TRUE
| _ \<Rightarrow> NOT (eupdate ex1))"
| "eupdate (CALL i xs) = CALL i xs"
| "eupdate (ECALL e i xs r) = ECALL e i xs r"
value "eupdate (UINT 8 250)"
lemma "eupdate (UINT 8 250)
=UINT 8 250"
by(simp)
lemma "eupdate (UINT 8 500)
= UINT 8 244"
by(simp)
lemma "eupdate (E.INT 8 (-100))
= E.INT 8 (- 100)"
by(simp)
lemma "eupdate (E.INT 8 (-150))
= E.INT 8 106"
by(simp)
lemma "eupdate (PLUS (UINT 8 100) (UINT 8 100))
= UINT 8 200"
by(simp)
lemma "eupdate (PLUS (UINT 8 257) (UINT 16 100))
= UINT 16 101"
by(simp)
lemma "eupdate (PLUS (E.INT 8 100) (UINT 8 250))
= PLUS (E.INT 8 100) (UINT 8 250)"
by(simp)
lemma "eupdate (PLUS (E.INT 8 250) (UINT 8 500))
= PLUS (E.INT 8 (- 6)) (UINT 8 244)"
by(simp)
lemma "eupdate (PLUS (E.INT 16 250) (UINT 8 500))
= E.INT 16 494"
by(simp)
lemma "eupdate (EQUAL (UINT 16 250) (UINT 8 250))
= TRUE"
by(simp)
lemma "eupdate (EQUAL (E.INT 16 100) (UINT 8 100))
= TRUE"
by(simp)
lemma "eupdate (EQUAL (E.INT 8 100) (UINT 8 100))
= EQUAL (E.INT 8 100) (UINT 8 100)"
by(simp)
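text\<open>
The examples above exercise the arithmetic cases of \<open>eupdate\<close>. The following
additional lemma (a small sketch in the same style) checks that boolean
subexpressions are folded as well:
\<close>

lemma "eupdate (NOT (LESS (UINT 8 100) (UINT 8 200)))
= FALSE"
by(simp)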
lemma update_bounds_int:
assumes "eupdate ex = (E.INT b v)" and "b\<in>vbits"
shows "(v < 2^(b-1)) \<and> v \<ge> -(2^(b-1))"
proof (cases ex)
case (INT b' v')
then show ?thesis
proof cases
assume "b'\<in>vbits"
show ?thesis
proof cases
let ?x="-(2^(b'-1)) + (v'+2^(b'-1)) mod 2^b'"
assume "v'\<ge>0"
with `b'\<in>vbits` have "eupdate (E.INT b' v') = E.INT b' ?x" by simp
with assms have "b=b'" and "v=?x" using INT by (simp,simp)
moreover from `b'\<in>vbits` have "b'>0" by auto
hence "?x < 2 ^(b'-1)" using upper_bound2[of b' "(v' + 2 ^ (b' - 1)) mod 2^b'"] by simp
moreover have "?x \<ge> -(2^(b'-1))" by simp
ultimately show ?thesis by simp
next
let ?x="2^(b'-1) - (-v'+2^(b'-1)-1) mod (2^b') - 1"
assume "\<not>v'\<ge>0"
with `b'\<in>vbits` have "eupdate (E.INT b' v') = E.INT b' ?x" by simp
with assms have "b=b'" and "v=?x" using INT by (simp,simp)
moreover have "(-v'+2^(b'-1)-1) mod (2^b')\<ge>0" by simp
hence "?x < 2 ^(b'-1)" by arith
moreover from `b'\<in>vbits` have "b'>0" by auto
hence "?x \<ge> -(2^(b'-1))" using lower_bound2[of b' v'] by simp
ultimately show ?thesis by simp
qed
next
assume "\<not> b'\<in>vbits"
with assms show ?thesis using INT by simp
qed
next
case (UINT b' v')
with assms show ?thesis
proof cases
assume "b'\<in>vbits"
with assms show ?thesis using UINT by simp
next
assume "\<not> b'\<in>vbits"
with assms show ?thesis using UINT by simp
qed
next
case (ADDRESS x3)
with assms show ?thesis by simp
next
case (BALANCE x4)
with assms show ?thesis by simp
next
case THIS
with assms show ?thesis by simp
next
case SENDER
with assms show ?thesis by simp
next
case VALUE
with assms show ?thesis by simp
next
case TRUE
with assms show ?thesis by simp
next
case FALSE
with assms show ?thesis by simp
next
case (LVAL x7)
with assms show ?thesis by simp
next
case p: (PLUS e1 e2)
show ?thesis
proof (cases "eupdate e1")
case i: (INT b1 v1)
show ?thesis
proof cases
assume "b1\<in>vbits"
show ?thesis
proof (cases "eupdate e2")
case i2: (INT b2 v2)
then show ?thesis
proof cases
let ?v="v1+v2"
assume "b2\<in>vbits"
show ?thesis
proof cases
let ?x="-(2^((max b1 b2)-1)) + (?v+2^((max b1 b2)-1)) mod 2^(max b1 b2)"
assume "?v\<ge>0"
with `b1\<in>vbits` `b2\<in>vbits` i i2 have "eupdate (PLUS e1 e2) = E.INT (max b1 b2) ?x" by simp
with assms have "b=max b1 b2" and "v=?x" using p by (simp,simp)
moreover from `b1\<in>vbits` have "max b1 b2>0" by auto
hence "?x < 2 ^(max b1 b2 - 1)"
using upper_bound2[of "max b1 b2" "(?v + 2 ^ (max b1 b2 - 1)) mod 2^max b1 b2"] by simp
moreover have "?x \<ge> -(2^(max b1 b2-1))" by simp
ultimately show ?thesis by simp
next
let ?x="2^((max b1 b2)-1) - (-?v+2^((max b1 b2)-1)-1) mod (2^(max b1 b2)) - 1"
assume "\<not>?v\<ge>0"
with `b1\<in>vbits` `b2\<in>vbits` i i2 have "eupdate (PLUS e1 e2) = E.INT (max b1 b2) ?x" by simp
with assms have "b=max b1 b2" and "v=?x" using p by (simp,simp)
moreover have "(-?v+2^(max b1 b2-1)-1) mod (2^max b1 b2)\<ge>0" by simp
hence "?x < 2 ^(max b1 b2-1)" by arith
moreover from `b1\<in>vbits` have "max b1 b2>0" by auto
hence "?x \<ge> -(2^(max b1 b2-1))" using lower_bound2[of "max b1 b2" ?v] by simp
ultimately show ?thesis by simp
qed
next
assume "b2\<notin>vbits"
with p i i2 `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case u: (UINT b2 v2)
then show ?thesis
proof cases
let ?v="v1+v2"
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "b2<b1"
then show ?thesis
proof cases
let ?x="(-(2^(b1-1)) + (?v+2^(b1-1)) mod (2^b1))"
assume "?v\<ge>0"
with `b1\<in>vbits` `b2\<in>vbits` `b2<b1` i u have "eupdate (PLUS e1 e2) = E.INT b1 ?x" by simp
with assms have "b=b1" and "v=?x" using p by (simp,simp)
moreover from `b1\<in>vbits` have "b1>0" by auto
hence "?x < 2 ^(b1 - 1)" using upper_bound2[of b1] by simp
moreover have "?x \<ge> -(2^(b1-1))" by simp
ultimately show ?thesis by simp
next
let ?x="2^(b1-1) - (-?v+2^(b1-1)-1) mod (2^b1) - 1"
assume "\<not>?v\<ge>0"
with `b1\<in>vbits` `b2\<in>vbits` `b2<b1` i u have "eupdate (PLUS e1 e2) = E.INT b1 ?x" by simp
with assms have "b=b1" and "v=?x" using p i u by (simp,simp)
moreover have "(-?v+2^(b1-1)-1) mod 2^b1\<ge>0" by simp
hence "?x < 2 ^(b1-1)" by arith
moreover from `b1\<in>vbits` have "b1>0" by auto
hence "?x \<ge> -(2^(b1-1))" using lower_bound2[of b1 ?v] by simp
ultimately show ?thesis by simp
qed
next
assume "\<not> b2<b1"
with p i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "b2\<notin>vbits"
with p i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (BALANCE x4)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case THIS
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case SENDER
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case VALUE
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case TRUE
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case FALSE
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (LVAL x7)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (PLUS x81 x82)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (MINUS x91 x92)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (LESS x111 x112)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (AND x121 x122)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (OR x131 x132)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (NOT x131)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (CALL x181 x182)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with p i `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "\<not> b1\<in>vbits"
with p i show ?thesis using assms by simp
qed
next
case u: (UINT b1 v1)
show ?thesis
proof cases
assume "b1\<in>vbits"
show ?thesis
proof (cases "eupdate e2")
case i: (INT b2 v2)
then show ?thesis
proof cases
let ?v="v1+v2"
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "b1<b2"
then show ?thesis
proof cases
let ?x="(-(2^(b2-1)) + (?v+2^(b2-1)) mod (2^b2))"
assume "?v\<ge>0"
with `b1\<in>vbits` `b2\<in>vbits` `b1<b2` i u have "eupdate (PLUS e1 e2) = E.INT b2 ?x" by simp
with assms have "b=b2" and "v=?x" using p by (simp,simp)
moreover from `b2\<in>vbits` have "b2>0" by auto
hence "?x < 2 ^(b2 - 1)" using upper_bound2[of b2] by simp
moreover have "?x \<ge> -(2^(b2-1))" by simp
ultimately show ?thesis by simp
next
let ?x="2^(b2-1) - (-?v+2^(b2-1)-1) mod (2^b2) - 1"
assume "\<not>?v\<ge>0"
with `b1\<in>vbits` `b2\<in>vbits` `b1<b2` i u have "eupdate (PLUS e1 e2) = E.INT b2 ?x" by simp
with assms have "b=b2" and "v=?x" using p i u by (simp,simp)
moreover have "(-?v+2^(b2-1)-1) mod 2^b2\<ge>0" by simp
hence "?x < 2 ^(b2-1)" by arith
moreover from `b2\<in>vbits` have "b2>0" by auto
hence "?x \<ge> -(2^(b2-1))" using lower_bound2[of b2 ?v] by simp
ultimately show ?thesis by simp
qed
next
assume "\<not> b1<b2"
with p i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "b2\<notin>vbits"
with p i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case u2: (UINT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
with `b1\<in>vbits` u u2 p show ?thesis using assms by simp
next
assume "\<not>b2\<in>vbits"
with p u u2 `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (BALANCE x4)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case THIS
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case SENDER
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case VALUE
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case TRUE
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case FALSE
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (LVAL x7)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (PLUS x81 x82)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (MINUS x91 x92)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (LESS x111 x112)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (AND x121 x122)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (OR x131 x132)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (NOT x131)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (CALL x181 x182)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with p u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "\<not> b1\<in>vbits"
with p u show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with p show ?thesis using assms by simp
next
case (BALANCE x4)
with p show ?thesis using assms by simp
next
case THIS
with p show ?thesis using assms by simp
next
case SENDER
with p show ?thesis using assms by simp
next
case VALUE
with p show ?thesis using assms by simp
next
case TRUE
with p show ?thesis using assms by simp
next
case FALSE
with p show ?thesis using assms by simp
next
case (LVAL x7)
with p show ?thesis using assms by simp
next
case (PLUS x81 x82)
with p show ?thesis using assms by simp
next
case (MINUS x91 x92)
with p show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with p show ?thesis using assms by simp
next
case (LESS x111 x112)
with p show ?thesis using assms by simp
next
case (AND x121 x122)
with p show ?thesis using assms by simp
next
case (OR x131 x132)
with p show ?thesis using assms by simp
next
case (NOT x131)
with p show ?thesis using assms by simp
next
case (CALL x181 x182)
with p show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with p show ?thesis using assms by simp
qed
next
case m: (MINUS e1 e2)
show ?thesis
proof (cases "eupdate e1")
case i: (INT b1 v1)
with m show ?thesis
proof cases
assume "b1\<in>vbits"
show ?thesis
proof (cases "eupdate e2")
case i2: (INT b2 v2)
then show ?thesis
proof cases
let ?v="v1-v2"
assume "b2\<in>vbits"
with `b1 \<in> vbits` have
u_def: "eupdate (MINUS e1 e2) =
(let v = v1 - v2
in if 0 \<le> v
then E.INT (max b1 b2)
(- (2 ^ (max b1 b2 - 1)) + (v + 2 ^ (max b1 b2 - 1)) mod 2 ^ max b1 b2)
else E.INT (max b1 b2)
(2 ^ (max b1 b2 - 1) - (- v + 2 ^ (max b1 b2 - 1) - 1) mod 2 ^ max b1 b2 - 1))"
using i i2 eupdate.simps(11)[of e1 e2] by simp
show ?thesis
proof cases
let ?x="-(2^((max b1 b2)-1)) + (?v+2^((max b1 b2)-1)) mod 2^(max b1 b2)"
assume "?v\<ge>0"
with u_def have "eupdate (MINUS e1 e2) = E.INT (max b1 b2) ?x" by simp
with assms have "b=max b1 b2" and "v=?x" using m by (simp,simp)
moreover from `b1\<in>vbits` have "max b1 b2>0" by auto
hence "?x < 2 ^(max b1 b2 - 1)"
using upper_bound2[of "max b1 b2" "(?v + 2 ^ (max b1 b2 - 1)) mod 2^max b1 b2"] by simp
moreover have "?x \<ge> -(2^(max b1 b2-1))" by simp
ultimately show ?thesis by simp
next
let ?x="2^((max b1 b2)-1) - (-?v+2^((max b1 b2)-1)-1) mod (2^(max b1 b2)) - 1"
assume "\<not>?v\<ge>0"
with u_def have "eupdate (MINUS e1 e2) = E.INT (max b1 b2) ?x" using u_def by simp
with assms have "b=max b1 b2" and "v=?x" using m by (simp,simp)
moreover have "(-?v+2^(max b1 b2-1)-1) mod (2^max b1 b2)\<ge>0" by simp
hence "?x < 2 ^(max b1 b2-1)" by arith
moreover from `b1\<in>vbits` have "max b1 b2>0" by auto
hence "?x \<ge> -(2^(max b1 b2-1))" using lower_bound2[of "max b1 b2" ?v] by simp
ultimately show ?thesis by simp
qed
next
assume "b2\<notin>vbits"
with m i i2 `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case u: (UINT b2 v2)
then show ?thesis
proof cases
let ?v="v1-v2"
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "b2<b1"
with `b1 \<in> vbits` `b2 \<in> vbits` have
u_def: "eupdate (MINUS e1 e2) =
(let v = v1 - v2
in if 0 \<le> v
then E.INT b1 (- (2 ^ (b1 - 1)) + (v + 2 ^ (b1 - 1)) mod 2 ^ b1)
else E.INT b1 (2 ^ (b1 - 1) - (- v + 2 ^ (b1 - 1) - 1) mod 2 ^ b1 - 1))"
using i u eupdate.simps(11)[of e1 e2] by simp
then show ?thesis
proof cases
let ?x="(-(2^(b1-1)) + (?v+2^(b1-1)) mod (2^b1))"
assume "?v\<ge>0"
with u_def have "eupdate (MINUS e1 e2) = E.INT b1 ?x" by simp
with assms have "b=b1" and "v=?x" using m by (simp,simp)
moreover from `b1\<in>vbits` have "b1>0" by auto
hence "?x < 2 ^(b1 - 1)" using upper_bound2[of b1] by simp
moreover have "?x \<ge> -(2^(b1-1))" by simp
ultimately show ?thesis by simp
next
let ?x="2^(b1-1) - (-?v+2^(b1-1)-1) mod (2^b1) - 1"
assume "\<not>?v\<ge>0"
with u_def have "eupdate (MINUS e1 e2) = E.INT b1 ?x" by simp
with assms have "b=b1" and "v=?x" using m i u by (simp,simp)
moreover have "(-?v+2^(b1-1)-1) mod 2^b1\<ge>0" by simp
hence "?x < 2 ^(b1-1)" by arith
moreover from `b1\<in>vbits` have "b1>0" by auto
hence "?x \<ge> -(2^(b1-1))" using lower_bound2[of b1 ?v] by simp
ultimately show ?thesis by simp
qed
next
assume "\<not> b2<b1"
with m i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "b2\<notin>vbits"
with m i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (BALANCE x4)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case THIS
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case SENDER
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case VALUE
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case TRUE
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case FALSE
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (LVAL x7)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (PLUS x81 x82)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (MINUS x91 x92)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (LESS x111 x112)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (AND x121 x122)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (OR x131 x132)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (NOT x131)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (CALL x181 x182)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with m i `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "\<not> b1\<in>vbits"
with m i show ?thesis using assms by simp
qed
next
case u: (UINT b1 v1)
show ?thesis
proof cases
assume "b1\<in>vbits"
show ?thesis
proof (cases "eupdate e2")
case i: (INT b2 v2)
then show ?thesis
proof cases
let ?v="v1-v2"
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "b1<b2"
with `b1 \<in> vbits` `b2 \<in> vbits` have
u_def: "eupdate (MINUS e1 e2) =
(let v = v1 - v2
in if 0 \<le> v
then E.INT b2 (- (2 ^ (b2 - 1)) + (v + 2 ^ (b2 - 1)) mod 2 ^ b2)
else E.INT b2 (2 ^ (b2 - 1) - (- v + 2 ^ (b2 - 1) - 1) mod 2 ^ b2 - 1))"
using i u eupdate.simps(11)[of e1 e2] by simp
then show ?thesis
proof cases
let ?x="(-(2^(b2-1)) + (?v+2^(b2-1)) mod (2^b2))"
assume "?v\<ge>0"
with u_def have "eupdate (MINUS e1 e2) = E.INT b2 ?x" by simp
with assms have "b=b2" and "v=?x" using m by (simp,simp)
moreover from `b2\<in>vbits` have "b2>0" by auto
hence "?x < 2 ^(b2 - 1)" using upper_bound2[of b2] by simp
moreover have "?x \<ge> -(2^(b2-1))" by simp
ultimately show ?thesis by simp
next
let ?x="2^(b2-1) - (-?v+2^(b2-1)-1) mod (2^b2) - 1"
assume "\<not>?v\<ge>0"
with u_def have "eupdate (MINUS e1 e2) = E.INT b2 ?x" by simp
with assms have "b=b2" and "v=?x" using m i u by (simp,simp)
moreover have "(-?v+2^(b2-1)-1) mod 2^b2\<ge>0" by simp
hence "?x < 2 ^(b2-1)" by arith
moreover from `b2\<in>vbits` have "b2>0" by auto
hence "?x \<ge> -(2^(b2-1))" using lower_bound2[of b2 ?v] by simp
ultimately show ?thesis by simp
qed
next
assume "\<not> b1<b2"
with m i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "b2\<notin>vbits"
with m i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case u2: (UINT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
with `b1\<in>vbits` u u2 m show ?thesis using assms by simp
next
assume "\<not>b2\<in>vbits"
with m u u2 `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (BALANCE x4)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case THIS
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case SENDER
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case VALUE
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case TRUE
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case FALSE
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (LVAL x7)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (PLUS x81 x82)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (MINUS x91 x92)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (LESS x111 x112)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (AND x121 x122)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (OR x131 x132)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (NOT x131)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (CALL x181 x182)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with m u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "\<not> b1\<in>vbits"
with m u show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with m show ?thesis using assms by simp
next
case (BALANCE x4)
with m show ?thesis using assms by simp
next
case THIS
with m show ?thesis using assms by simp
next
case SENDER
with m show ?thesis using assms by simp
next
case VALUE
with m show ?thesis using assms by simp
next
case TRUE
with m show ?thesis using assms by simp
next
case FALSE
with m show ?thesis using assms by simp
next
case (LVAL x7)
with m show ?thesis using assms by simp
next
case (PLUS x81 x82)
with m show ?thesis using assms by simp
next
case (MINUS x91 x92)
with m show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with m show ?thesis using assms by simp
next
case (LESS x111 x112)
with m show ?thesis using assms by simp
next
case (AND x121 x122)
with m show ?thesis using assms by simp
next
case (OR x131 x132)
with m show ?thesis using assms by simp
next
case (NOT x131)
with m show ?thesis using assms by simp
next
case (CALL x181 x182)
with m show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with m show ?thesis using assms by simp
qed
next
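  (* EQUAL: constant folding of a comparison never produces an INT literal, so every
     combination of operand shapes is discharged by simp from the assumptions. *)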
case e: (EQUAL e1 e2)
show ?thesis
proof (cases "eupdate e1")
case i: (INT b1 v1)
show ?thesis
proof cases
assume "b1\<in>vbits"
show ?thesis
proof (cases "eupdate e2")
case i2: (INT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "v1=v2"
with assms show ?thesis using e i i2 `b1\<in>vbits` `b2\<in>vbits` by simp
next
assume "\<not> v1=v2"
with assms show ?thesis using e i i2 `b1\<in>vbits` `b2\<in>vbits` by simp
qed
next
assume "b2\<notin>vbits"
with e i i2 `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case u: (UINT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "b2<b1"
then show ?thesis
proof cases
assume "v1=v2"
with assms show ?thesis using e i u `b1\<in>vbits` `b2\<in>vbits` `b2<b1` by simp
next
assume "\<not> v1=v2"
with assms show ?thesis using e i u `b1\<in>vbits` `b2\<in>vbits` `b2<b1` by simp
qed
next
assume "\<not> b2<b1"
with e i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "b2\<notin>vbits"
with e i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (BALANCE x4)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case THIS
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case SENDER
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case VALUE
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case TRUE
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case FALSE
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (LVAL x7)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (PLUS x81 x82)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (MINUS x91 x92)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (LESS x111 x112)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (AND x121 x122)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (OR x131 x132)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (NOT x131)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (CALL x181 x182)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with e i `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "\<not> b1\<in>vbits"
with e i show ?thesis using assms by simp
qed
next
case u: (UINT b1 v1)
show ?thesis
proof cases
assume "b1\<in>vbits"
show ?thesis
proof (cases "eupdate e2")
case i: (INT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "b1<b2"
then show ?thesis
proof cases
assume "v1=v2"
with assms show ?thesis using e i u `b1\<in>vbits` `b2\<in>vbits` `b1<b2` by simp
next
assume "\<not> v1=v2"
with assms show ?thesis using e i u `b1\<in>vbits` `b2\<in>vbits` `b1<b2` by simp
qed
next
assume "\<not> b1<b2"
with e i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "b2\<notin>vbits"
with e i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case u2: (UINT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "v1=v2"
with assms show ?thesis using e u u2 `b1\<in>vbits` `b2\<in>vbits` by simp
next
assume "\<not> v1=v2"
with assms show ?thesis using e u u2 `b1\<in>vbits` `b2\<in>vbits` by simp
qed
next
assume "\<not>b2\<in>vbits"
with e u u2 `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (BALANCE x4)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case THIS
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case SENDER
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case VALUE
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case TRUE
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case FALSE
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (LVAL x7)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (PLUS x81 x82)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (MINUS x91 x92)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (LESS x111 x112)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (AND x121 x122)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (OR x131 x132)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (NOT x131)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (CALL x181 x182)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with e u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "\<not> b1\<in>vbits"
with e u show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with e show ?thesis using assms by simp
next
case (BALANCE x4)
with e show ?thesis using assms by simp
next
case THIS
with e show ?thesis using assms by simp
next
case SENDER
with e show ?thesis using assms by simp
next
case VALUE
with e show ?thesis using assms by simp
next
case TRUE
with e show ?thesis using assms by simp
next
case FALSE
with e show ?thesis using assms by simp
next
case (LVAL x7)
with e show ?thesis using assms by simp
next
case (PLUS x81 x82)
with e show ?thesis using assms by simp
next
case (MINUS x91 x92)
with e show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with e show ?thesis using assms by simp
next
case (LESS x111 x112)
with e show ?thesis using assms by simp
next
case (AND x121 x122)
with e show ?thesis using assms by simp
next
case (OR x131 x132)
with e show ?thesis using assms by simp
next
case (NOT x131)
with e show ?thesis using assms by simp
next
case (CALL x181 x182)
with e show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with e show ?thesis using assms by simp
qed
next
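  (* LESS: analogous to EQUAL, no operand combination yields an INT literal. *)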
case l: (LESS e1 e2)
show ?thesis
proof (cases "eupdate e1")
case i: (INT b1 v1)
show ?thesis
proof cases
assume "b1\<in>vbits"
show ?thesis
proof (cases "eupdate e2")
case i2: (INT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "v1<v2"
with assms show ?thesis using l i i2 `b1\<in>vbits` `b2\<in>vbits` by simp
next
assume "\<not> v1<v2"
with assms show ?thesis using l i i2 `b1\<in>vbits` `b2\<in>vbits` by simp
qed
next
assume "b2\<notin>vbits"
with l i i2 `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case u: (UINT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "b2<b1"
then show ?thesis
proof cases
assume "v1<v2"
with assms show ?thesis using l i u `b1\<in>vbits` `b2\<in>vbits` `b2<b1` by simp
next
assume "\<not> v1<v2"
with assms show ?thesis using l i u `b1\<in>vbits` `b2\<in>vbits` `b2<b1` by simp
qed
next
assume "\<not> b2<b1"
with l i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "b2\<notin>vbits"
with l i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (BALANCE x4)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case THIS
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case SENDER
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case VALUE
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case TRUE
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case FALSE
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (LVAL x7)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (PLUS x81 x82)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (MINUS x91 x92)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (LESS x111 x112)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (AND x121 x122)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (OR x131 x132)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (NOT x131)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (CALL x181 x182)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with l i `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "\<not> b1\<in>vbits"
with l i show ?thesis using assms by simp
qed
next
case u: (UINT b1 v1)
show ?thesis
proof cases
assume "b1\<in>vbits"
show ?thesis
proof (cases "eupdate e2")
case i: (INT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "b1<b2"
then show ?thesis
proof cases
assume "v1<v2"
with assms show ?thesis using l i u `b1\<in>vbits` `b2\<in>vbits` `b1<b2` by simp
next
assume "\<not> v1<v2"
with assms show ?thesis using l i u `b1\<in>vbits` `b2\<in>vbits` `b1<b2` by simp
qed
next
assume "\<not> b1<b2"
with l i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "b2\<notin>vbits"
with l i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case u2: (UINT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "v1<v2"
with assms show ?thesis using l u u2 `b1\<in>vbits` `b2\<in>vbits` by simp
next
assume "\<not> v1<v2"
with assms show ?thesis using l u u2 `b1\<in>vbits` `b2\<in>vbits` by simp
qed
next
assume "\<not>b2\<in>vbits"
with l u u2 `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (BALANCE x4)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case THIS
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case SENDER
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case VALUE
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case TRUE
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case FALSE
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (LVAL x7)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (PLUS x81 x82)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (MINUS x91 x92)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (LESS x111 x112)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (AND x121 x122)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (OR x131 x132)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (NOT x131)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (CALL x181 x182)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with l u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "\<not> b1\<in>vbits"
with l u show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with l show ?thesis using assms by simp
next
case (BALANCE x4)
with l show ?thesis using assms by simp
next
case THIS
with l show ?thesis using assms by simp
next
case SENDER
with l show ?thesis using assms by simp
next
case VALUE
with l show ?thesis using assms by simp
next
case TRUE
with l show ?thesis using assms by simp
next
case FALSE
with l show ?thesis using assms by simp
next
case (LVAL x7)
with l show ?thesis using assms by simp
next
case (PLUS x81 x82)
with l show ?thesis using assms by simp
next
case (MINUS x91 x92)
with l show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with l show ?thesis using assms by simp
next
case (LESS x111 x112)
with l show ?thesis using assms by simp
next
case (AND x121 x122)
with l show ?thesis using assms by simp
next
case (OR x131 x132)
with l show ?thesis using assms by simp
next
case (NOT x131)
with l show ?thesis using assms by simp
next
case (CALL x181 x182)
with l show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with l show ?thesis using assms by simp
qed
next
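  (* AND: folding a conjunction yields a Boolean or leaves the term unevaluated,
     never an INT literal; all subcases follow by simp. *)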
case a: (AND e1 e2)
show ?thesis
proof (cases "eupdate e1")
case (INT x11 x12)
with a show ?thesis using assms by simp
next
case (UINT x21 x22)
with a show ?thesis using assms by simp
next
case (ADDRESS x3)
with a show ?thesis using assms by simp
next
case (BALANCE x4)
with a show ?thesis using assms by simp
next
case THIS
with a show ?thesis using assms by simp
next
case SENDER
with a show ?thesis using assms by simp
next
case VALUE
with a show ?thesis using assms by simp
next
case t: TRUE
show ?thesis
proof (cases "eupdate e2")
case (INT x11 x12)
with a t show ?thesis using assms by simp
next
case (UINT x21 x22)
with a t show ?thesis using assms by simp
next
case (ADDRESS x3)
with a t show ?thesis using assms by simp
next
case (BALANCE x4)
with a t show ?thesis using assms by simp
next
case THIS
with a t show ?thesis using assms by simp
next
case SENDER
with a t show ?thesis using assms by simp
next
case VALUE
with a t show ?thesis using assms by simp
next
case TRUE
with a t show ?thesis using assms by simp
next
case FALSE
with a t show ?thesis using assms by simp
next
case (LVAL x7)
with a t show ?thesis using assms by simp
next
case (PLUS x81 x82)
with a t show ?thesis using assms by simp
next
case (MINUS x91 x92)
with a t show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with a t show ?thesis using assms by simp
next
case (LESS x111 x112)
with a t show ?thesis using assms by simp
next
case (AND x121 x122)
with a t show ?thesis using assms by simp
next
case (OR x131 x132)
with a t show ?thesis using assms by simp
next
case (NOT x131)
with a t show ?thesis using assms by simp
next
case (CALL x181 x182)
with a t show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with a t show ?thesis using assms by simp
qed
next
case f: FALSE
show ?thesis
proof (cases "eupdate e2")
case (INT x11 x12)
with a f show ?thesis using assms by simp
next
case (UINT x21 x22)
with a f show ?thesis using assms by simp
next
case (ADDRESS x3)
with a f show ?thesis using assms by simp
next
case (BALANCE x4)
with a f show ?thesis using assms by simp
next
case THIS
with a f show ?thesis using assms by simp
next
case SENDER
with a f show ?thesis using assms by simp
next
case VALUE
with a f show ?thesis using assms by simp
next
case TRUE
with a f show ?thesis using assms by simp
next
case FALSE
with a f show ?thesis using assms by simp
next
case (LVAL x7)
with a f show ?thesis using assms by simp
next
case (PLUS x81 x82)
with a f show ?thesis using assms by simp
next
case (MINUS x91 x92)
with a f show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with a f show ?thesis using assms by simp
next
case (LESS x111 x112)
with a f show ?thesis using assms by simp
next
case (AND x121 x122)
with a f show ?thesis using assms by simp
next
case (OR x131 x132)
with a f show ?thesis using assms by simp
next
case (NOT x131)
with a f show ?thesis using assms by simp
next
case (CALL x181 x182)
with a f show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with a f show ?thesis using assms by simp
qed
next
case (LVAL x7)
with a show ?thesis using assms by simp
next
case (PLUS x81 x82)
with a show ?thesis using assms by simp
next
case (MINUS x91 x92)
with a show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with a show ?thesis using assms by simp
next
case (LESS x111 x112)
with a show ?thesis using assms by simp
next
case (AND x121 x122)
with a show ?thesis using assms by simp
next
case (OR x131 x132)
with a show ?thesis using assms by simp
next
case (NOT x131)
with a show ?thesis using assms by simp
next
case (CALL x181 x182)
with a show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with a show ?thesis using assms by simp
qed
next
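  (* OR: analogous to AND. *)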
case o: (OR e1 e2)
show ?thesis
proof (cases "eupdate e1")
case (INT x11 x12)
with o show ?thesis using assms by simp
next
case (UINT x21 x22)
with o show ?thesis using assms by simp
next
case (ADDRESS x3)
with o show ?thesis using assms by simp
next
case (BALANCE x4)
with o show ?thesis using assms by simp
next
case THIS
with o show ?thesis using assms by simp
next
case SENDER
with o show ?thesis using assms by simp
next
case VALUE
with o show ?thesis using assms by simp
next
case t: TRUE
show ?thesis
proof (cases "eupdate e2")
case (INT x11 x12)
with o t show ?thesis using assms by simp
next
case (UINT x21 x22)
with o t show ?thesis using assms by simp
next
case (ADDRESS x3)
with o t show ?thesis using assms by simp
next
case (BALANCE x4)
with o t show ?thesis using assms by simp
next
case THIS
with o t show ?thesis using assms by simp
next
case SENDER
with o t show ?thesis using assms by simp
next
case VALUE
with o t show ?thesis using assms by simp
next
case TRUE
with o t show ?thesis using assms by simp
next
case FALSE
with o t show ?thesis using assms by simp
next
case (LVAL x7)
with o t show ?thesis using assms by simp
next
case (PLUS x81 x82)
with o t show ?thesis using assms by simp
next
case (MINUS x91 x92)
with o t show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with o t show ?thesis using assms by simp
next
case (LESS x111 x112)
with o t show ?thesis using assms by simp
next
case (AND x121 x122)
with o t show ?thesis using assms by simp
next
case (OR x131 x132)
with o t show ?thesis using assms by simp
next
case (NOT x131)
with o t show ?thesis using assms by simp
next
case (CALL x181 x182)
with o t show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with o t show ?thesis using assms by simp
qed
next
case f: FALSE
show ?thesis
proof (cases "eupdate e2")
case (INT x11 x12)
with o f show ?thesis using assms by simp
next
case (UINT x21 x22)
with o f show ?thesis using assms by simp
next
case (ADDRESS x3)
with o f show ?thesis using assms by simp
next
case (BALANCE x4)
with o f show ?thesis using assms by simp
next
case THIS
with o f show ?thesis using assms by simp
next
case SENDER
with o f show ?thesis using assms by simp
next
case VALUE
with o f show ?thesis using assms by simp
next
case TRUE
with o f show ?thesis using assms by simp
next
case FALSE
with o f show ?thesis using assms by simp
next
case (LVAL x7)
with o f show ?thesis using assms by simp
next
case (PLUS x81 x82)
with o f show ?thesis using assms by simp
next
case (MINUS x91 x92)
with o f show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with o f show ?thesis using assms by simp
next
case (LESS x111 x112)
with o f show ?thesis using assms by simp
next
case (AND x121 x122)
with o f show ?thesis using assms by simp
next
case (OR x131 x132)
with o f show ?thesis using assms by simp
next
case (NOT x131)
with o f show ?thesis using assms by simp
next
case (CALL x181 x182)
with o f show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with o f show ?thesis using assms by simp
qed
next
case (LVAL x7)
with o show ?thesis using assms by simp
next
case (PLUS x81 x82)
with o show ?thesis using assms by simp
next
case (MINUS x91 x92)
with o show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with o show ?thesis using assms by simp
next
case (LESS x111 x112)
with o show ?thesis using assms by simp
next
case (AND x121 x122)
with o show ?thesis using assms by simp
next
case (OR x131 x132)
with o show ?thesis using assms by simp
next
case (NOT x131)
with o show ?thesis using assms by simp
next
case (CALL x181 x182)
with o show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with o show ?thesis using assms by simp
qed
next
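  (* NOT: analogous to AND/OR; no subcase produces an INT literal. *)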
case o: (NOT e1)
show ?thesis
proof (cases "eupdate e1")
case (INT x11 x12)
with o show ?thesis using assms by simp
next
case (UINT x21 x22)
with o show ?thesis using assms by simp
next
case (ADDRESS x3)
with o show ?thesis using assms by simp
next
case (BALANCE x4)
with o show ?thesis using assms by simp
next
case THIS
with o show ?thesis using assms by simp
next
case SENDER
with o show ?thesis using assms by simp
next
case VALUE
with o show ?thesis using assms by simp
next
case t: TRUE
with o show ?thesis using assms by simp
next
case f: FALSE
with o show ?thesis using assms by simp
next
case (LVAL x7)
with o show ?thesis using assms by simp
next
case (PLUS x81 x82)
with o show ?thesis using assms by simp
next
case (MINUS x91 x92)
with o show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with o show ?thesis using assms by simp
next
case (LESS x111 x112)
with o show ?thesis using assms by simp
next
case (AND x121 x122)
with o show ?thesis using assms by simp
next
case (OR x131 x132)
with o show ?thesis using assms by simp
next
case (NOT x131)
with o show ?thesis using assms by simp
next
case (CALL x181 x182)
with o show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with o show ?thesis using assms by simp
qed
next
case (CALL x181 x182)
with assms show ?thesis by simp
next
case (ECALL x191 x192 x193 x194)
with assms show ?thesis by simp
qed
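
(* Unsigned counterpart of the previous bound lemma: whenever constant folding yields
   UINT b v with b \<in> vbits, the value satisfies 0 \<le> v < 2^b. *)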
lemma update_bounds_uint:
assumes "eupdate ex = UINT b v" and "b\<in>vbits"
shows "v < 2^b \<and> v \<ge> 0"
proof (cases ex)
case (INT b' v')
with assms show ?thesis
proof cases
assume "b'\<in>vbits"
show ?thesis
proof cases
assume "v'\<ge>0"
with INT show ?thesis using assms `b'\<in>vbits` by simp
next
assume "\<not> v'\<ge>0"
with INT show ?thesis using assms `b'\<in>vbits` by simp
qed
next
assume "\<not> b'\<in>vbits"
with INT show ?thesis using assms by simp
qed
next
case (UINT b' v')
then show ?thesis
proof cases
assume "b'\<in>vbits"
with UINT show ?thesis using assms by auto
next
assume "\<not> b'\<in>vbits"
with UINT show ?thesis using assms by auto
qed
next
case (ADDRESS x3)
with assms show ?thesis by simp
next
case (BALANCE x4)
with assms show ?thesis by simp
next
case THIS
with assms show ?thesis by simp
next
case SENDER
with assms show ?thesis by simp
next
case VALUE
with assms show ?thesis by simp
next
case TRUE
with assms show ?thesis by simp
next
case FALSE
with assms show ?thesis by simp
next
case (LVAL x7)
with assms show ?thesis by simp
next
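  (* PLUS: only the UINT/UINT combination with valid bit-widths folds to an unsigned
     literal, whose value (v1 + v2) mod 2 ^ max b1 b2 lies in [0, 2 ^ max b1 b2);
     all other operand shapes follow by simp. *)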
case p: (PLUS e1 e2)
show ?thesis
proof (cases "eupdate e1")
case i: (INT b1 v1)
with p show ?thesis
proof cases
assume "b1\<in>vbits"
show ?thesis
proof (cases "eupdate e2")
case i2: (INT b2 v2)
then show ?thesis
proof cases
let ?v="v1+v2"
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "?v\<ge>0"
with assms show ?thesis using p i i2 `b1\<in>vbits` `b2\<in>vbits` by simp
next
assume "\<not>?v\<ge>0"
with assms show ?thesis using p i i2 `b1\<in>vbits` `b2\<in>vbits` by simp
qed
next
assume "b2\<notin>vbits"
with p i i2 `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case u: (UINT b2 v2)
then show ?thesis
proof cases
let ?v="v1+v2"
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "b2<b1"
then show ?thesis
proof cases
assume "?v\<ge>0"
with assms show ?thesis using p i u `b1\<in>vbits` `b2\<in>vbits` `b2<b1` by simp
next
assume "\<not>?v\<ge>0"
with assms show ?thesis using p i u `b1\<in>vbits` `b2\<in>vbits` `b2<b1` by simp
qed
next
assume "\<not> b2<b1"
with p i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "b2\<notin>vbits"
with p i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (BALANCE x4)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case THIS
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case SENDER
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case VALUE
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case TRUE
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case FALSE
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (LVAL x7)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (PLUS x81 x82)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (MINUS x91 x92)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (LESS x111 x112)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (AND x121 x122)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (OR x131 x132)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (NOT x131)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (CALL x181 x182)
with p i `b1\<in>vbits` show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with p i `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "\<not> b1\<in>vbits"
with p i show ?thesis using assms by simp
qed
next
case u: (UINT b1 v1)
with p show ?thesis
proof cases
assume "b1\<in>vbits"
show ?thesis
proof (cases "eupdate e2")
case i: (INT b2 v2)
then show ?thesis
proof cases
let ?v="v1+v2"
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "b1<b2"
then show ?thesis
proof cases
assume "?v\<ge>0"
with assms show ?thesis using p i u `b1\<in>vbits` `b2\<in>vbits` `b1<b2` by simp
next
assume "\<not>?v\<ge>0"
with assms show ?thesis using p i u `b1\<in>vbits` `b2\<in>vbits` `b1<b2` by simp
qed
next
assume "\<not> b1<b2"
with p i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "b2\<notin>vbits"
with p i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case u2: (UINT b2 v2)
then show ?thesis
proof cases
let ?x="((v1 + v2) mod (2^(max b1 b2)))"
assume "b2\<in>vbits"
with `b1\<in>vbits` u u2 have "eupdate (PLUS e1 e2) = UINT (max b1 b2) ?x" by simp
with assms have "b=max b1 b2" and "v=?x" using p by (simp,simp)
moreover from `b1\<in>vbits` have "max b1 b2>0" by auto
hence "?x < 2 ^(max b1 b2)" by simp
moreover have "?x \<ge> 0" by simp
ultimately show ?thesis by simp
next
assume "\<not>b2\<in>vbits"
with p u u2 `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (BALANCE x4)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case THIS
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case SENDER
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case VALUE
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case TRUE
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case FALSE
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (LVAL x7)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (PLUS x81 x82)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (MINUS x91 x92)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (LESS x111 x112)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (AND x121 x122)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (OR x131 x132)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (NOT x131)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (CALL x181 x182)
with p u `b1\<in>vbits` show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with p u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "\<not> b1\<in>vbits"
with p u show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with p show ?thesis using assms by simp
next
case (BALANCE x4)
with p show ?thesis using assms by simp
next
case THIS
with p show ?thesis using assms by simp
next
case SENDER
with p show ?thesis using assms by simp
next
case VALUE
with p show ?thesis using assms by simp
next
case TRUE
with p show ?thesis using assms by simp
next
case FALSE
with p show ?thesis using assms by simp
next
case (LVAL x7)
with p show ?thesis using assms by simp
next
case (PLUS x81 x82)
with p show ?thesis using assms by simp
next
case (MINUS x91 x92)
with p show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with p show ?thesis using assms by simp
next
case (LESS x111 x112)
with p show ?thesis using assms by simp
next
case (AND x121 x122)
with p show ?thesis using assms by simp
next
case (OR x131 x132)
with p show ?thesis using assms by simp
next
case (NOT x131)
with p show ?thesis using assms by simp
next
case (CALL x181 x182)
with p show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with p show ?thesis using assms by simp
qed
next
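  (* MINUS: as for PLUS, only UINT/UINT operands fold to an unsigned literal, here
     (v1 - v2) mod 2 ^ max b1 b2, which is again within the required bounds. *)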
case m: (MINUS e1 e2)
show ?thesis
proof (cases "eupdate e1")
case i: (INT b1 v1)
with m show ?thesis
proof cases
assume "b1\<in>vbits"
show ?thesis
proof (cases "eupdate e2")
case i2: (INT b2 v2)
then show ?thesis
proof cases
let ?v="v1-v2"
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "?v\<ge>0"
with assms show ?thesis using m i i2 `b1\<in>vbits` `b2\<in>vbits` by simp
next
assume "\<not>?v\<ge>0"
with assms show ?thesis using m i i2 `b1\<in>vbits` `b2\<in>vbits` by simp
qed
next
assume "b2\<notin>vbits"
with m i i2 `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case u: (UINT b2 v2)
then show ?thesis
proof cases
let ?v="v1-v2"
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "b2<b1"
show ?thesis
proof cases
assume "?v\<ge>0"
with assms show ?thesis using m i u `b1\<in>vbits` `b2\<in>vbits` `b2<b1` by simp
next
assume "\<not>?v\<ge>0"
with assms show ?thesis using m i u `b1\<in>vbits` `b2\<in>vbits` `b2<b1` by simp
qed
next
assume "\<not> b2<b1"
with m i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "b2\<notin>vbits"
with m i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (BALANCE x4)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case THIS
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case SENDER
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case VALUE
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case TRUE
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case FALSE
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (LVAL x7)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (PLUS x81 x82)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (MINUS x91 x92)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (LESS x111 x112)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (AND x121 x122)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (OR x131 x132)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (NOT x131)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (CALL x181 x182)
with m i `b1\<in>vbits` show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with m i `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "\<not> b1\<in>vbits"
with m i show ?thesis using assms by simp
qed
next
case u: (UINT b1 v1)
with m show ?thesis
proof cases
assume "b1\<in>vbits"
show ?thesis
proof (cases "eupdate e2")
case i: (INT b2 v2)
then show ?thesis
proof cases
let ?v="v1-v2"
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "b1<b2"
show ?thesis
proof cases
assume "?v\<ge>0"
with assms show ?thesis using m i u `b1\<in>vbits` `b2\<in>vbits` `b1<b2` by simp
next
assume "\<not>?v\<ge>0"
with assms show ?thesis using m i u `b1\<in>vbits` `b2\<in>vbits` `b1<b2` by simp
qed
next
assume "\<not> b1<b2"
with m i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "b2\<notin>vbits"
with m i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case u2: (UINT b2 v2)
then show ?thesis
proof cases
let ?x="((v1 - v2) mod (2^(max b1 b2)))"
assume "b2\<in>vbits"
with `b1\<in>vbits` u u2 have "eupdate (MINUS e1 e2) = UINT (max b1 b2) ?x" by simp
with assms have "b=max b1 b2" and "v=?x" using m by (simp,simp)
moreover from `b1\<in>vbits` have "max b1 b2>0" by auto
hence "?x < 2 ^(max b1 b2)" by simp
moreover have "?x \<ge> 0" by simp
ultimately show ?thesis by simp
next
assume "\<not>b2\<in>vbits"
with m u u2 `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (BALANCE x4)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case THIS
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case SENDER
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case VALUE
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case TRUE
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case FALSE
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (LVAL x7)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (PLUS x81 x82)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (MINUS x91 x92)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (LESS x111 x112)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (AND x121 x122)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (OR x131 x132)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (NOT x131)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (CALL x181 x182)
with m u `b1\<in>vbits` show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with m u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "\<not> b1\<in>vbits"
with m u show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with m show ?thesis using assms by simp
next
case (BALANCE x4)
with m show ?thesis using assms by simp
next
case THIS
with m show ?thesis using assms by simp
next
case SENDER
with m show ?thesis using assms by simp
next
case VALUE
with m show ?thesis using assms by simp
next
case TRUE
with m show ?thesis using assms by simp
next
case FALSE
with m show ?thesis using assms by simp
next
case (LVAL x7)
with m show ?thesis using assms by simp
next
case (PLUS x81 x82)
with m show ?thesis using assms by simp
next
case (MINUS x91 x92)
with m show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with m show ?thesis using assms by simp
next
case (LESS x111 x112)
with m show ?thesis using assms by simp
next
case (AND x121 x122)
with m show ?thesis using assms by simp
next
case (OR x131 x132)
with m show ?thesis using assms by simp
next
case (NOT x131)
with m show ?thesis using assms by simp
next
case (CALL x181 x182)
with m show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with m show ?thesis using assms by simp
qed
next
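  (* EQUAL: folding a comparison never produces a UINT literal, so every subcase is
     discharged by simp from the assumptions. *)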
case e: (EQUAL e1 e2)
show ?thesis
proof (cases "eupdate e1")
case i: (INT b1 v1)
show ?thesis
proof cases
assume "b1\<in>vbits"
show ?thesis
proof (cases "eupdate e2")
case i2: (INT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "v1=v2"
with assms show ?thesis using e i i2 `b1\<in>vbits` `b2\<in>vbits` by simp
next
assume "\<not> v1=v2"
with assms show ?thesis using e i i2 `b1\<in>vbits` `b2\<in>vbits` by simp
qed
next
assume "b2\<notin>vbits"
with e i i2 `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case u: (UINT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "b2<b1"
then show ?thesis
proof cases
assume "v1=v2"
with assms show ?thesis using e i u `b1\<in>vbits` `b2\<in>vbits` `b2<b1` by simp
next
assume "\<not> v1=v2"
with assms show ?thesis using e i u `b1\<in>vbits` `b2\<in>vbits` `b2<b1` by simp
qed
next
assume "\<not> b2<b1"
with e i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "b2\<notin>vbits"
with e i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (BALANCE x4)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case THIS
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case SENDER
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case VALUE
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case TRUE
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case FALSE
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (LVAL x7)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (PLUS x81 x82)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (MINUS x91 x92)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (LESS x111 x112)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (AND x121 x122)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (OR x131 x132)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (NOT x131)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (CALL x181 x182)
with e i `b1\<in>vbits` show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with e i `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "\<not> b1\<in>vbits"
with e i show ?thesis using assms by simp
qed
next
case u: (UINT b1 v1)
show ?thesis
proof cases
assume "b1\<in>vbits"
show ?thesis
proof (cases "eupdate e2")
case i: (INT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "b1<b2"
then show ?thesis
proof cases
assume "v1=v2"
with assms show ?thesis using e i u `b1\<in>vbits` `b2\<in>vbits` `b1<b2` by simp
next
assume "\<not> v1=v2"
with assms show ?thesis using e i u `b1\<in>vbits` `b2\<in>vbits` `b1<b2` by simp
qed
next
assume "\<not> b1<b2"
with e i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "b2\<notin>vbits"
with e i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case u2: (UINT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "v1=v2"
with assms show ?thesis using e u u2 `b1\<in>vbits` `b2\<in>vbits` by simp
next
assume "\<not> v1=v2"
with assms show ?thesis using e u u2 `b1\<in>vbits` `b2\<in>vbits` by simp
qed
next
assume "\<not>b2\<in>vbits"
with e u u2 `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (BALANCE x4)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case THIS
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case SENDER
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case VALUE
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case TRUE
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case FALSE
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (LVAL x7)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (PLUS x81 x82)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (MINUS x91 x92)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (LESS x111 x112)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (AND x121 x122)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (OR x131 x132)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (NOT x131)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (CALL x181 x182)
with e u `b1\<in>vbits` show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with e u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "\<not> b1\<in>vbits"
with e u show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with e show ?thesis using assms by simp
next
case (BALANCE x4)
with e show ?thesis using assms by simp
next
case THIS
with e show ?thesis using assms by simp
next
case SENDER
with e show ?thesis using assms by simp
next
case VALUE
with e show ?thesis using assms by simp
next
case TRUE
with e show ?thesis using assms by simp
next
case FALSE
with e show ?thesis using assms by simp
next
case (LVAL x7)
with e show ?thesis using assms by simp
next
case (PLUS x81 x82)
with e show ?thesis using assms by simp
next
case (MINUS x91 x92)
with e show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with e show ?thesis using assms by simp
next
case (LESS x111 x112)
with e show ?thesis using assms by simp
next
case (AND x121 x122)
with e show ?thesis using assms by simp
next
case (OR x131 x132)
with e show ?thesis using assms by simp
next
case (NOT x131)
with e show ?thesis using assms by simp
next
case (CALL x181 x182)
with e show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with e show ?thesis using assms by simp
qed
next
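  (* LESS: analogous to EQUAL. *)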
case l: (LESS e1 e2)
show ?thesis
proof (cases "eupdate e1")
case i: (INT b1 v1)
show ?thesis
proof cases
assume "b1\<in>vbits"
show ?thesis
proof (cases "eupdate e2")
case i2: (INT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "v1<v2"
with assms show ?thesis using l i i2 `b1\<in>vbits` `b2\<in>vbits` by simp
next
assume "\<not> v1<v2"
with assms show ?thesis using l i i2 `b1\<in>vbits` `b2\<in>vbits` by simp
qed
next
assume "b2\<notin>vbits"
with l i i2 `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case u: (UINT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "b2<b1"
then show ?thesis
proof cases
assume "v1<v2"
with assms show ?thesis using l i u `b1\<in>vbits` `b2\<in>vbits` `b2<b1` by simp
next
assume "\<not> v1<v2"
with assms show ?thesis using l i u `b1\<in>vbits` `b2\<in>vbits` `b2<b1` by simp
qed
next
assume "\<not> b2<b1"
with l i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "b2\<notin>vbits"
with l i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (BALANCE x4)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case THIS
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case SENDER
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case VALUE
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case TRUE
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case FALSE
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (LVAL x7)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (PLUS x81 x82)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (MINUS x91 x92)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (LESS x111 x112)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (AND x121 x122)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (OR x131 x132)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (NOT x131)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (CALL x181 x182)
with l i `b1\<in>vbits` show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with l i `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "\<not> b1\<in>vbits"
with l i show ?thesis using assms by simp
qed
next
case u: (UINT b1 v1)
show ?thesis
proof cases
assume "b1\<in>vbits"
show ?thesis
proof (cases "eupdate e2")
case i: (INT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "b1<b2"
then show ?thesis
proof cases
assume "v1<v2"
with assms show ?thesis using l i u `b1\<in>vbits` `b2\<in>vbits` `b1<b2` by simp
next
assume "\<not> v1<v2"
with assms show ?thesis using l i u `b1\<in>vbits` `b2\<in>vbits` `b1<b2` by simp
qed
next
assume "\<not> b1<b2"
with l i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "b2\<notin>vbits"
with l i u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case u2: (UINT b2 v2)
then show ?thesis
proof cases
assume "b2\<in>vbits"
show ?thesis
proof cases
assume "v1<v2"
with assms show ?thesis using l u u2 `b1\<in>vbits` `b2\<in>vbits` by simp
next
assume "\<not> v1<v2"
with assms show ?thesis using l u u2 `b1\<in>vbits` `b2\<in>vbits` by simp
qed
next
assume "\<not>b2\<in>vbits"
with l u u2 `b1\<in>vbits` show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (BALANCE x4)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case THIS
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case SENDER
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case VALUE
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case TRUE
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case FALSE
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (LVAL x7)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (PLUS x81 x82)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (MINUS x91 x92)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (LESS x111 x112)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (AND x121 x122)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (OR x131 x132)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (NOT x131)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (CALL x181 x182)
with l u `b1\<in>vbits` show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with l u `b1\<in>vbits` show ?thesis using assms by simp
qed
next
assume "\<not> b1\<in>vbits"
with l u show ?thesis using assms by simp
qed
next
case (ADDRESS x3)
with l show ?thesis using assms by simp
next
case (BALANCE x4)
with l show ?thesis using assms by simp
next
case THIS
with l show ?thesis using assms by simp
next
case SENDER
with l show ?thesis using assms by simp
next
case VALUE
with l show ?thesis using assms by simp
next
case TRUE
with l show ?thesis using assms by simp
next
case FALSE
with l show ?thesis using assms by simp
next
case (LVAL x7)
with l show ?thesis using assms by simp
next
case (PLUS x81 x82)
with l show ?thesis using assms by simp
next
case (MINUS x91 x92)
with l show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with l show ?thesis using assms by simp
next
case (LESS x111 x112)
with l show ?thesis using assms by simp
next
case (AND x121 x122)
with l show ?thesis using assms by simp
next
case (OR x131 x132)
with l show ?thesis using assms by simp
next
case (NOT x131)
with l show ?thesis using assms by simp
next
case (CALL x181 x182)
with l show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with l show ?thesis using assms by simp
qed
next
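  (* AND: folding a conjunction yields a Boolean or leaves the term unevaluated,
     never a UINT literal; all subcases follow by simp. *)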
case a: (AND e1 e2)
show ?thesis
proof (cases "eupdate e1")
case (INT x11 x12)
with a show ?thesis using assms by simp
next
case (UINT x21 x22)
with a show ?thesis using assms by simp
next
case (ADDRESS x3)
with a show ?thesis using assms by simp
next
case (BALANCE x4)
with a show ?thesis using assms by simp
next
case THIS
with a show ?thesis using assms by simp
next
case SENDER
with a show ?thesis using assms by simp
next
case VALUE
with a show ?thesis using assms by simp
next
case t: TRUE
show ?thesis
proof (cases "eupdate e2")
case (INT x11 x12)
with a t show ?thesis using assms by simp
next
case (UINT x21 x22)
with a t show ?thesis using assms by simp
next
case (ADDRESS x3)
with a t show ?thesis using assms by simp
next
case (BALANCE x4)
with a t show ?thesis using assms by simp
next
case THIS
with a t show ?thesis using assms by simp
next
case SENDER
with a t show ?thesis using assms by simp
next
case VALUE
with a t show ?thesis using assms by simp
next
case TRUE
with a t show ?thesis using assms by simp
next
case FALSE
with a t show ?thesis using assms by simp
next
case (LVAL x7)
with a t show ?thesis using assms by simp
next
case (PLUS x81 x82)
with a t show ?thesis using assms by simp
next
case (MINUS x91 x92)
with a t show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with a t show ?thesis using assms by simp
next
case (LESS x111 x112)
with a t show ?thesis using assms by simp
next
case (AND x121 x122)
with a t show ?thesis using assms by simp
next
case (OR x131 x132)
with a t show ?thesis using assms by simp
next
case (NOT x131)
with a t show ?thesis using assms by simp
next
case (CALL x181 x182)
with a t show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with a t show ?thesis using assms by simp
qed
next
case f: FALSE
show ?thesis
proof (cases "eupdate e2")
case (INT x11 x12)
with a f show ?thesis using assms by simp
next
case (UINT x21 x22)
with a f show ?thesis using assms by simp
next
case (ADDRESS x3)
with a f show ?thesis using assms by simp
next
case (BALANCE x4)
with a f show ?thesis using assms by simp
next
case THIS
with a f show ?thesis using assms by simp
next
case SENDER
with a f show ?thesis using assms by simp
next
case VALUE
with a f show ?thesis using assms by simp
next
case TRUE
with a f show ?thesis using assms by simp
next
case FALSE
with a f show ?thesis using assms by simp
next
case (LVAL x7)
with a f show ?thesis using assms by simp
next
case (PLUS x81 x82)
with a f show ?thesis using assms by simp
next
case (MINUS x91 x92)
with a f show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with a f show ?thesis using assms by simp
next
case (LESS x111 x112)
with a f show ?thesis using assms by simp
next
case (AND x121 x122)
with a f show ?thesis using assms by simp
next
case (OR x131 x132)
with a f show ?thesis using assms by simp
next
case (NOT x131)
with a f show ?thesis using assms by simp
next
case (CALL x181 x182)
with a f show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with a f show ?thesis using assms by simp
qed
next
case (LVAL x7)
with a show ?thesis using assms by simp
next
case (PLUS x81 x82)
with a show ?thesis using assms by simp
next
case (MINUS x91 x92)
with a show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with a show ?thesis using assms by simp
next
case (LESS x111 x112)
with a show ?thesis using assms by simp
next
case (AND x121 x122)
with a show ?thesis using assms by simp
next
case (OR x131 x132)
with a show ?thesis using assms by simp
next
case (NOT x131)
with a show ?thesis using assms by simp
next
case (CALL x181 x182)
with a show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with a show ?thesis using assms by simp
qed
next
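  (* OR: analogous to AND. *)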
case o: (OR e1 e2)
show ?thesis
proof (cases "eupdate e1")
case (INT x11 x12)
with o show ?thesis using assms by simp
next
case (UINT x21 x22)
with o show ?thesis using assms by simp
next
case (ADDRESS x3)
with o show ?thesis using assms by simp
next
case (BALANCE x4)
with o show ?thesis using assms by simp
next
case THIS
with o show ?thesis using assms by simp
next
case SENDER
with o show ?thesis using assms by simp
next
case VALUE
with o show ?thesis using assms by simp
next
case t: TRUE
show ?thesis
proof (cases "eupdate e2")
case (INT x11 x12)
with o t show ?thesis using assms by simp
next
case (UINT x21 x22)
with o t show ?thesis using assms by simp
next
case (ADDRESS x3)
with o t show ?thesis using assms by simp
next
case (BALANCE x4)
with o t show ?thesis using assms by simp
next
case THIS
with o t show ?thesis using assms by simp
next
case SENDER
with o t show ?thesis using assms by simp
next
case VALUE
with o t show ?thesis using assms by simp
next
case TRUE
with o t show ?thesis using assms by simp
next
case FALSE
with o t show ?thesis using assms by simp
next
case (LVAL x7)
with o t show ?thesis using assms by simp
next
case (PLUS x81 x82)
with o t show ?thesis using assms by simp
next
case (MINUS x91 x92)
with o t show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with o t show ?thesis using assms by simp
next
case (LESS x111 x112)
with o t show ?thesis using assms by simp
next
case (AND x121 x122)
with o t show ?thesis using assms by simp
next
case (OR x131 x132)
with o t show ?thesis using assms by simp
next
case (NOT x131)
with o t show ?thesis using assms by simp
next
case (CALL x181 x182)
with o t show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with o t show ?thesis using assms by simp
qed
next
case f: FALSE
show ?thesis
proof (cases "eupdate e2")
case (INT x11 x12)
with o f show ?thesis using assms by simp
next
case (UINT x21 x22)
with o f show ?thesis using assms by simp
next
case (ADDRESS x3)
with o f show ?thesis using assms by simp
next
case (BALANCE x4)
with o f show ?thesis using assms by simp
next
case THIS
with o f show ?thesis using assms by simp
next
case SENDER
with o f show ?thesis using assms by simp
next
case VALUE
with o f show ?thesis using assms by simp
next
case TRUE
with o f show ?thesis using assms by simp
next
case FALSE
with o f show ?thesis using assms by simp
next
case (LVAL x7)
with o f show ?thesis using assms by simp
next
case (PLUS x81 x82)
with o f show ?thesis using assms by simp
next
case (MINUS x91 x92)
with o f show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with o f show ?thesis using assms by simp
next
case (LESS x111 x112)
with o f show ?thesis using assms by simp
next
case (AND x121 x122)
with o f show ?thesis using assms by simp
next
case (OR x131 x132)
with o f show ?thesis using assms by simp
next
case (NOT x131)
with o f show ?thesis using assms by simp
next
case (CALL x181 x182)
with o f show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with o f show ?thesis using assms by simp
qed
next
case (LVAL x7)
with o show ?thesis using assms by simp
next
case (PLUS x81 x82)
with o show ?thesis using assms by simp
next
case (MINUS x91 x92)
with o show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with o show ?thesis using assms by simp
next
case (LESS x111 x112)
with o show ?thesis using assms by simp
next
case (AND x121 x122)
with o show ?thesis using assms by simp
next
case (OR x131 x132)
with o show ?thesis using assms by simp
next
case (NOT x131)
with o show ?thesis using assms by simp
next
case (CALL x181 x182)
with o show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with o show ?thesis using assms by simp
qed
next
case o: (NOT x)
show ?thesis
proof (cases "eupdate x")
case (INT x11 x12)
with o show ?thesis using assms by simp
next
case (UINT x21 x22)
with o show ?thesis using assms by simp
next
case (ADDRESS x3)
with o show ?thesis using assms by simp
next
case (BALANCE x4)
with o show ?thesis using assms by simp
next
case THIS
with o show ?thesis using assms by simp
next
case SENDER
with o show ?thesis using assms by simp
next
case VALUE
with o show ?thesis using assms by simp
next
case t: TRUE
with o show ?thesis using assms by simp
next
case f: FALSE
with o show ?thesis using assms by simp
next
case (LVAL x7)
with o show ?thesis using assms by simp
next
case (PLUS x81 x82)
with o show ?thesis using assms by simp
next
case (MINUS x91 x92)
with o show ?thesis using assms by simp
next
case (EQUAL x101 x102)
with o show ?thesis using assms by simp
next
case (LESS x111 x112)
with o show ?thesis using assms by simp
next
case (AND x121 x122)
with o show ?thesis using assms by simp
next
case (OR x131 x132)
with o show ?thesis using assms by simp
next
case (NOT x131)
with o show ?thesis using assms by simp
next
case (CALL x181 x182)
with o show ?thesis using assms by simp
next
case (ECALL x191 x192 x193 x194)
with o show ?thesis using assms by simp
qed
next
case (CALL x181 x182)
with assms show ?thesis by simp
next
case (ECALL x191 x192 x193 x194)
with assms show ?thesis by simp
qed
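
text\<open>
  If the state provides no gas, evaluating any expression yields a gas exception.
\<close>
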
lemma no_gas:
assumes "\<not> gas st > 0"
shows "expr ex ep env cd st = Exception Gas"
proof (cases ex)
case (INT x11 x12)
with assms show ?thesis by simp
next
case (UINT x21 x22)
with assms show ?thesis by simp
next
case (ADDRESS x3)
with assms show ?thesis by simp
next
case (BALANCE x4)
with assms show ?thesis by simp
next
case THIS
with assms show ?thesis by simp
next
case SENDER
with assms show ?thesis by simp
next
case VALUE
with assms show ?thesis by simp
next
case TRUE
with assms show ?thesis by simp
next
case FALSE
with assms show ?thesis by simp
next
case (LVAL x10)
with assms show ?thesis by simp
next
case (PLUS x111 x112)
with assms show ?thesis by simp
next
case (MINUS x121 x122)
with assms show ?thesis by simp
next
case (EQUAL x131 x132)
with assms show ?thesis by simp
next
case (LESS x141 x142)
with assms show ?thesis by simp
next
case (AND x151 x152)
with assms show ?thesis by simp
next
case (OR x161 x162)
with assms show ?thesis by simp
next
case (NOT x17)
with assms show ?thesis by simp
next
case (CALL x181 x182)
with assms show ?thesis by simp
next
case (ECALL x191 x192 x193 x194)
with assms show ?thesis by simp
qed
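
text\<open>
  Lifting a binary operation over two expressions is invariant under replacing the
  operands by semantically equivalent ones; for the second operand, equivalence is only
  required in states reached by a normal evaluation of the first.
\<close>
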
lemma lift_eq:
assumes "expr e1 ep env cd st = expr e1' ep env cd st"
and "\<And>st' rv. expr e1 ep env cd st = Normal (rv, st') \<Longrightarrow> expr e2 ep env cd st'= expr e2' ep env cd st'"
shows "lift expr f e1 e2 ep env cd st=lift expr f e1' e2' ep env cd st"
proof (cases "expr e1 ep env cd st")
case s1: (n a st')
then show ?thesis
proof (cases a)
case f1:(Pair a b)
then show ?thesis
proof (cases a)
case k1:(KValue x1)
then show ?thesis
proof (cases b)
case v1: (Value x1)
then show ?thesis
proof (cases "expr e2 ep env cd st'")
case s2: (n a' st'')
then show ?thesis
proof (cases a')
case f2:(Pair a' b')
then show ?thesis
proof (cases a')
case (KValue x1')
with s1 f1 k1 v1 assms(1) assms(2) show ?thesis by auto
next
case (KCDptr x2)
with s1 f1 k1 v1 assms(1) assms(2) show ?thesis by auto
next
case (KMemptr x2')
with s1 f1 k1 v1 assms(1) assms(2) show ?thesis by auto
next
case (KStoptr x3')
with s1 f1 k1 v1 assms(1) assms(2) show ?thesis by auto
qed
qed
next
case (e e)
then show ?thesis using k1 s1 v1 assms(1) assms(2) f1 by auto
qed
next
case (Calldata x2)
then show ?thesis using k1 s1 assms(1) f1 by auto
next
case (Memory x2)
then show ?thesis using k1 s1 assms(1) f1 by auto
next
case (Storage x3)
then show ?thesis using k1 s1 assms(1) f1 by auto
qed
next
case (KCDptr x2)
then show ?thesis using s1 assms(1) f1 by fastforce
next
case (KMemptr x2)
then show ?thesis using s1 assms(1) f1 by fastforce
next
case (KStoptr x3)
then show ?thesis using s1 assms(1) f1 by fastforce
qed
qed
next
case (e e)
then show ?thesis using assms(1) by simp
qed
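
text\<open>
  Storage selection is invariant under rewriting the index expressions with a function
  f that preserves their semantics.
\<close>
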
lemma ssel_eq_ssel:
"(\<And>i st. i \<in> set ix \<Longrightarrow> expr i ep env cd st = expr (f i) ep env cd st)
\<Longrightarrow> ssel tp loc ix ep env cd st = ssel tp loc (map f ix) ep env cd st"
proof (induction ix arbitrary: tp loc ep env cd st)
case Nil
then show ?case by simp
next
case c1: (Cons i ix)
then show ?case
proof (cases tp)
case tp1: (STArray al tp)
then show ?thesis
proof (cases "expr i ep env cd st")
case s1: (n a st')
then show ?thesis
proof (cases a)
case f1: (Pair a b)
then show ?thesis
proof (cases a)
case k1: (KValue v)
then show ?thesis
proof (cases b)
case v1: (Value t)
then show ?thesis
proof (cases "less t (TUInt 256) v (ShowL\<^sub>i\<^sub>n\<^sub>t al)")
case None
with v1 k1 tp1 s1 c1.prems f1 show ?thesis by simp
next
case s2: (Some a)
then show ?thesis
proof (cases a)
case p1: (Pair a b)
then show ?thesis
proof (cases b)
case (TSInt x1)
with s2 p1 v1 k1 tp1 s1 c1.prems f1 show ?thesis by simp
next
case (TUInt x2)
with s2 p1 v1 k1 tp1 s1 c1.prems f1 show ?thesis by simp
next
case b1: TBool
show ?thesis
proof cases
assume "a = ShowL\<^sub>b\<^sub>o\<^sub>o\<^sub>l True"
from c1.IH[OF c1.prems] have
"ssel tp (hash loc v) ix ep env cd st' = ssel tp (hash loc v) (map f ix) ep env cd st'"
by simp
with mp s2 b1 p1 v1 k1 tp1 s1 c1.prems f1 show ?thesis by simp
next
assume "\<not> a = ShowL\<^sub>b\<^sub>o\<^sub>o\<^sub>l True"
with s2 p1 v1 k1 tp1 s1 c1.prems f1 show ?thesis by simp
qed
next
case TAddr
with s2 p1 v1 k1 tp1 s1 c1.prems f1 show ?thesis by simp
qed
qed
qed
next
case (Calldata x2)
with k1 tp1 s1 c1.prems f1 show ?thesis by simp
next
case (Memory x2)
with k1 tp1 s1 c1.prems f1 show ?thesis by simp
next
case (Storage x3)
with k1 tp1 s1 c1.prems f1 show ?thesis by simp
qed
next
case (KCDptr x2)
with tp1 s1 c1.prems f1 show ?thesis by simp
next
case (KMemptr x2)
with tp1 s1 c1.prems f1 show ?thesis by simp
next
case (KStoptr x3)
with tp1 s1 c1.prems f1 show ?thesis by simp
qed
qed
next
case (e e)
with tp1 c1.prems show ?thesis by simp
qed
next
case tp1: (STMap _ t)
then show ?thesis
proof (cases "expr i ep env cd st")
case s1: (n a s)
then show ?thesis
proof (cases a)
case f1: (Pair a b)
then show ?thesis
proof (cases a)
case k1: (KValue v)
from c1.IH[OF c1.prems] have
"ssel tp (hash loc v) ix ep env cd st = ssel tp (hash loc v) (map f ix) ep env cd st" by simp
with k1 tp1 s1 c1 f1 show ?thesis by simp
next
case (KCDptr x2)
with tp1 s1 c1.prems f1 show ?thesis by simp
next
case (KMemptr x2)
with tp1 s1 c1.prems f1 show ?thesis by simp
next
case (KStoptr x3)
with tp1 s1 c1.prems f1 show ?thesis by simp
qed
qed
next
case (e e)
with tp1 c1.prems show ?thesis by simp
qed
next
case (STValue x2)
then show ?thesis by simp
qed
qed
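
text\<open>
  The corresponding invariance holds for memory and calldata selection.
\<close>
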
lemma msel_eq_msel:
"(\<And>i st. i \<in> set ix \<Longrightarrow> expr i ep env cd st = expr (f i) ep env cd st) \<Longrightarrow>
msel c tp loc ix ep env cd st = msel c tp loc (map f ix) ep env cd st"
proof (induction ix arbitrary: c tp loc ep env cd st)
case Nil
then show ?case by simp
next
case c1: (Cons i ix)
then show ?case
proof (cases tp)
case tp1: (MTArray al tp)
then show ?thesis
proof (cases ix)
case Nil
thus ?thesis using tp1 c1.prems by auto
next
case c2: (Cons a list)
then show ?thesis
proof (cases "expr i ep env cd st")
case s1: (n a st')
then show ?thesis
proof (cases a)
case f1: (Pair a b)
then show ?thesis
proof (cases a)
case k1: (KValue v)
then show ?thesis
proof (cases b)
case v1: (Value t)
then show ?thesis
proof (cases "less t (TUInt 256) v (ShowL\<^sub>i\<^sub>n\<^sub>t al)")
case None
with v1 k1 tp1 s1 c1.prems f1 show ?thesis using c2 by simp
next
case s2: (Some a)
then show ?thesis
proof (cases a)
case p1: (Pair a b)
then show ?thesis
proof (cases b)
case (TSInt x1)
with s2 p1 v1 k1 tp1 s1 c1.prems f1 show ?thesis using c2 by simp
next
case (TUInt x2)
with s2 p1 v1 k1 tp1 s1 c1.prems f1 show ?thesis using c2 by simp
next
case b1: TBool
show ?thesis
proof cases
assume "a = ShowL\<^sub>b\<^sub>o\<^sub>o\<^sub>l True"
then show ?thesis
proof (cases c)
case True
then show ?thesis
proof (cases "accessStore (hash loc v) (memory st')")
case None
with s2 b1 p1 v1 k1 tp1 s1 c1.prems f1 True show ?thesis using c2 by simp
next
case s3: (Some a)
then show ?thesis
proof (cases a)
case (MValue x1)
with s2 s3 b1 p1 v1 k1 tp1 s1 c1.prems f1 True show ?thesis using c2 by simp
next
case mp: (MPointer l)
from c1.IH[OF c1.prems]
have "msel c tp l ix ep env cd st' = msel c tp l (map f ix) ep env cd st'" by simp
with mp s2 s3 b1 p1 v1 k1 tp1 s1 c1.prems f1 True show ?thesis using c2 by simp
qed
qed
next
case False
then show ?thesis
proof (cases "accessStore (hash loc v) cd")
case None
with s2 b1 p1 v1 k1 tp1 s1 c1.prems f1 False show ?thesis using c2 by simp
next
case s3: (Some a)
then show ?thesis
proof (cases a)
case (MValue x1)
with s2 s3 b1 p1 v1 k1 tp1 s1 c1.prems f1 False show ?thesis using c2 by simp
next
case mp: (MPointer l)
from c1.IH[OF c1.prems]
have "msel c tp l ix ep env cd st' = msel c tp l (map f ix) ep env cd st'" by simp
with mp s2 s3 b1 p1 v1 k1 tp1 s1 c1.prems f1 False show ?thesis using c2 by simp
qed
qed
qed
next
assume "\<not> a = ShowL\<^sub>b\<^sub>o\<^sub>o\<^sub>l True"
with s2 p1 v1 k1 tp1 s1 c1.prems f1 show ?thesis using c2 by simp
qed
next
case TAddr
with s2 p1 v1 k1 tp1 s1 c1.prems f1 show ?thesis using c2 by simp
qed
qed
qed
next
case (Calldata x2)
with k1 tp1 s1 c1.prems f1 show ?thesis using c2 by simp
next
case (Memory x2)
with k1 tp1 s1 c1.prems f1 show ?thesis using c2 by simp
next
case (Storage x3)
with k1 tp1 s1 c1.prems f1 show ?thesis using c2 by simp
qed
next
case (KCDptr x2)
with tp1 s1 c1.prems f1 show ?thesis using c2 by simp
next
case (KMemptr x2)
with tp1 s1 c1.prems f1 show ?thesis using c2 by simp
next
case (KStoptr x3)
with tp1 s1 c1.prems f1 show ?thesis using c2 by simp
qed
qed
next
case (e e)
with tp1 c1.prems show ?thesis using c2 by simp
qed
qed
next
case (MTValue x2)
then show ?thesis by simp
qed
qed
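
text\<open>
  Evaluating a reference is invariant under rewriting its index expressions with a
  semantics-preserving function f.
\<close>
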
lemma ref_eq:
assumes "\<And>e st. e \<in> set ex \<Longrightarrow> expr e ep env cd st = expr (f e) ep env cd st"
shows "rexp (Ref i ex) ep env cd st=rexp (Ref i (map f ex)) ep env cd st"
proof (cases "fmlookup (denvalue env) i")
case None
then show ?thesis by simp
next
case s1: (Some a)
then show ?thesis
proof (cases a)
case p1: (Pair tp b)
then show ?thesis
proof (cases b)
case k1: (Stackloc l)
then show ?thesis
proof (cases "accessStore l (stack st)")
case None
with s1 p1 k1 show ?thesis by simp
next
case s2: (Some a')
then show ?thesis
proof (cases a')
case (KValue _)
with s1 s2 p1 k1 show ?thesis by simp
next
case cp: (KCDptr cp)
then show ?thesis
proof (cases tp)
case (Value x1)
with mp s1 s2 p1 k1 show ?thesis by simp
next
case mt: (Calldata ct)
from msel_eq_msel have
"msel False ct cp ex ep env cd st=msel False ct cp (map f ex) ep env cd st" using assms by blast
thus ?thesis using s1 s2 p1 k1 mt cp by simp
next
case mt: (Memory mt)
from msel_eq_msel have
"msel True mt cp ex ep env cd st=msel True mt cp (map f ex) ep env cd st" using assms by blast
thus ?thesis using s1 s2 p1 k1 mt cp by simp
next
case (Storage x3)
with cp s1 s2 p1 k1 show ?thesis by simp
qed
next
case mp: (KMemptr mp)
then show ?thesis
proof (cases tp)
case (Value x1)
with mp s1 s2 p1 k1 show ?thesis by simp
next
case mt: (Calldata ct)
from msel_eq_msel have
"msel True ct mp ex ep env cd st=msel True ct mp (map f ex) ep env cd st" using assms by blast
thus ?thesis using s1 s2 p1 k1 mt mp by simp
next
case mt: (Memory mt)
from msel_eq_msel have
"msel True mt mp ex ep env cd st=msel True mt mp (map f ex) ep env cd st" using assms by blast
thus ?thesis using s1 s2 p1 k1 mt mp by simp
next
case (Storage x3)
with mp s1 s2 p1 k1 show ?thesis by simp
qed
next
case sp: (KStoptr sp)
then show ?thesis
proof (cases tp)
case (Value x1)
then show ?thesis using s1 s2 p1 k1 sp by simp
next
case (Calldata x2)
then show ?thesis using s1 s2 p1 k1 sp by simp
next
case (Memory x2)
then show ?thesis using s1 s2 p1 k1 sp by simp
next
case st: (Storage stp)
from ssel_eq_ssel have
"ssel stp sp ex ep env cd st=ssel stp sp (map f ex) ep env cd st" using assms by blast
thus ?thesis using s1 s2 p1 k1 st sp by simp
qed
qed
qed
next
case sl:(Storeloc sl)
then show ?thesis
proof (cases tp)
case (Value x1)
then show ?thesis using s1 p1 sl by simp
next
case (Calldata x2)
then show ?thesis using s1 p1 sl by simp
next
case (Memory x2)
then show ?thesis using s1 p1 sl by simp
next
case st: (Storage stp)
from ssel_eq_ssel have
"ssel stp sl ex ep env cd st=ssel stp sl (map f ex) ep env cd st" using assms by blast
thus ?thesis using s1 sl p1 st by simp
qed
qed
qed
qed
text\<open>
The following theorem proves that the update functions eupdate and lupdate preserve the semantics of expressions and l-values, respectively.
\<close>
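
text\<open>
  For example, assuming 8 \<in> vbits, eupdate folds PLUS (E.INT 8 1) (E.INT 8 2) into the
  single literal E.INT 8 3, and the theorem guarantees that the original and the folded
  expression evaluate to the same result in every state.
\<close>
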
theorem update_correctness:
"\<And>st lb lv. expr ex ep env cd st = expr (eupdate ex) ep env cd st"
"\<And>st. rexp lv ep env cd st = rexp (lupdate lv) ep env cd st"
proof (induction ex and lv)
case (Id x)
then show ?case by simp
next
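  (* References: the l-value update applies eupdate to the index expressions, so this
     case is an instance of ref_eq. *)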
case (Ref d ix)
then show ?case using ref_eq[where f="eupdate"] by simp
next
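  (* Signed integer literals: eupdate normalises the value into the signed range of its
     bit width; createSInt maps the original and the normalised value to the same word,
     so evaluation is unchanged. *)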
case (INT b v)
then show ?case
proof (cases "gas st > 0")
case True
then show ?thesis
proof cases
assume "b\<in>vbits"
show ?thesis
proof cases
let ?m_def = "(-(2^(b-1)) + (v+2^(b-1)) mod (2^b))"
assume "v \<ge> 0"
from `b\<in>vbits` True have
"expr (E.INT b v) ep env cd st = Normal ((KValue (createSInt b v), Value (TSInt b)), st)" by simp
also have "createSInt b v = createSInt b ?m_def" using `b\<in>vbits` `v \<ge> 0` by auto
also from `v \<ge> 0` `b\<in>vbits` True have
"Normal ((KValue (createSInt b ?m_def), Value (TSInt b)),st) = expr (eupdate (E.INT b v)) ep env cd st"
by simp
finally show "expr (E.INT b v) ep env cd st = expr (eupdate (E.INT b v)) ep env cd st" by simp
next
let ?m_def = "(2^(b-1) - (-v+2^(b-1)-1) mod (2^b) - 1)"
assume "\<not> v \<ge> 0"
from `b\<in>vbits` True have
"expr (E.INT b v) ep env cd st = Normal ((KValue (createSInt b v), Value (TSInt b)), st)" by simp
also have "createSInt b v = createSInt b ?m_def" using `b\<in>vbits` `\<not> v \<ge> 0` by auto
also from `\<not> v \<ge> 0` `b\<in>vbits` True have
"Normal ((KValue (createSInt b ?m_def), Value (TSInt b)),st) =expr (eupdate (E.INT b v)) ep env cd st"
by simp
finally show "expr (E.INT b v) ep env cd st = expr (eupdate (E.INT b v)) ep env cd st" by simp
qed
next
assume "\<not> b\<in>vbits"
thus ?thesis by auto
qed
next
case False
then show ?thesis using no_gas by simp
qed
next
case (UINT x1 x2)
then show ?case by simp
next
case (ADDRESS x)
then show ?case by simp
next
case (BALANCE x)
then show ?case by simp
next
case THIS
then show ?case by simp
next
case SENDER
then show ?case by simp
next
case VALUE
then show ?case by simp
next
case TRUE
then show ?case by simp
next
case FALSE
then show ?case by simp
next
case (LVAL x)
then show ?case by simp
next
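  (* Addition: if both operands update to integer literals with bit widths in vbits, the
     sum is folded into a single normalised literal; in all other cases the goal follows
     from the induction hypotheses, by simplification or via lift_eq. *)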
case p: (PLUS e1 e2)
show ?case
proof (cases "eupdate e1")
case i: (INT b1 v1)
with p.IH have expr1: "expr e1 ep env cd st = expr (E.INT b1 v1) ep env cd st" by simp
then show ?thesis
proof (cases "gas st > 0")
case True
then show ?thesis
proof (cases)
assume "b1 \<in> vbits"
with expr1 True
have "expr e1 ep env cd st=Normal ((KValue (createSInt b1 v1), Value (TSInt b1)),st)" by simp
moreover from i `b1 \<in> vbits`
have "v1 < 2^(b1-1)" and "v1 \<ge> -(2^(b1-1))" using update_bounds_int by auto
moreover from `b1 \<in> vbits` have "0 < b1" by auto
ultimately have r1: "expr e1 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v1), Value (TSInt b1)),st)"
using createSInt_id[of v1 b1] by simp
thus ?thesis
proof (cases "eupdate e2")
case i2: (INT b2 v2)
with p.IH have expr2: "expr e2 ep env cd st = expr (E.INT b2 v2) ep env cd st" by simp
then show ?thesis
proof (cases)
let ?v="v1 + v2"
assume "b2 \<in> vbits"
with expr2 True
have "expr e2 ep env cd st=Normal ((KValue (createSInt b2 v2), Value (TSInt b2)),st)" by simp
moreover from i2 `b2 \<in> vbits`
have "v2 < 2^(b2-1)" and "v2 \<ge> -(2^(b2-1))" using update_bounds_int by auto
moreover from `b2 \<in> vbits` have "0 < b2" by auto
ultimately have r2: "expr e2 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v2), Value (TSInt b2)),st)"
using createSInt_id[of v2 b2] by simp
thus ?thesis
proof (cases)
let ?x="- (2 ^ (max b1 b2 - 1)) + (?v + 2 ^ (max b1 b2 - 1)) mod 2 ^ max b1 b2"
assume "?v\<ge>0"
hence "createSInt (max b1 b2) ?v = (ShowL\<^sub>i\<^sub>n\<^sub>t ?x)" by simp
moreover have "add (TSInt b1) (TSInt b2) (ShowL\<^sub>i\<^sub>n\<^sub>t v1) (ShowL\<^sub>i\<^sub>n\<^sub>t v2)
= Some (createSInt (max b1 b2) ?v, TSInt (max b1 b2))"
using Read_ShowL_id add_def olift.simps(1)[of "(+)" b1 b2] by simp
ultimately have "expr (PLUS e1 e2) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt (max b1 b2))),st)" using r1 r2 True by simp
moreover have "expr (eupdate (PLUS e1 e2)) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt (max b1 b2))),st)"
proof -
from `b1 \<in> vbits` `b2 \<in> vbits` `?v\<ge>0`
have "eupdate (PLUS e1 e2) = E.INT (max b1 b2) ?x" using i i2 by simp
moreover have "expr (E.INT (max b1 b2) ?x) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt (max b1 b2))),st)"
proof -
from `b1 \<in> vbits` `b2 \<in> vbits` have "max b1 b2 \<in> vbits" using vbits_max by simp
with True have "expr (E.INT (max b1 b2) ?x) ep env cd st
= Normal ((KValue (createSInt (max b1 b2) ?x), Value (TSInt (max b1 b2))),st)" by simp
moreover from `0 < b1`
have "?x < 2 ^ (max b1 b2 - 1)" using upper_bound3 by simp
moreover from `0 < b1` have "0 < max b1 b2" using max_def by simp
ultimately show ?thesis using createSInt_id[of ?x "max b1 b2"] by simp
qed
ultimately show ?thesis by simp
qed
ultimately show ?thesis by simp
next
let ?x="2^(max b1 b2 -1) - (-?v+2^(max b1 b2-1)-1) mod (2^max b1 b2) - 1"
assume "\<not> ?v\<ge>0"
hence "createSInt (max b1 b2) ?v = (ShowL\<^sub>i\<^sub>n\<^sub>t ?x)" by simp
moreover have "add (TSInt b1) (TSInt b2) (ShowL\<^sub>i\<^sub>n\<^sub>t v1) (ShowL\<^sub>i\<^sub>n\<^sub>t v2)
= Some (createSInt (max b1 b2) ?v, TSInt (max b1 b2))"
using Read_ShowL_id add_def olift.simps(1)[of "(+)" b1 b2] by simp
ultimately have "expr (PLUS e1 e2) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt (max b1 b2))),st)" using True r1 r2 by simp
moreover have "expr (eupdate (PLUS e1 e2)) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt (max b1 b2))),st)"
proof -
from `b1 \<in> vbits` `b2 \<in> vbits` `\<not>?v\<ge>0`
have "eupdate (PLUS e1 e2) = E.INT (max b1 b2) ?x" using i i2 by simp
moreover have "expr (E.INT (max b1 b2) ?x) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt (max b1 b2))),st)"
proof -
from `b1 \<in> vbits` `b2 \<in> vbits` have "max b1 b2 \<in> vbits" using vbits_max by simp
with True have "expr (E.INT (max b1 b2) ?x) ep env cd st
= Normal ((KValue (createSInt (max b1 b2) ?x), Value (TSInt (max b1 b2))),st)" by simp
moreover from `0 < b1`
have "?x \<ge> - (2 ^ (max b1 b2 - 1))" using lower_bound2[of "max b1 b2" ?v] by simp
moreover from `b1 > 0` have "2^(max b1 b2 -1) > (0::nat)" by simp
hence "2^(max b1 b2 -1) - (-?v+2^(max b1 b2-1)-1) mod (2^max b1 b2) - 1 < 2 ^ (max b1 b2 - 1)"
by (simp add: algebra_simps flip: zle_diff1_eq)
moreover from `0 < b1` have "0 < max b1 b2" using max_def by simp
ultimately show ?thesis using createSInt_id[of ?x "max b1 b2"] by simp
qed
ultimately show ?thesis by simp
qed
ultimately show ?thesis by simp
qed
next
assume "\<not> b2 \<in> vbits"
with p i i2 show ?thesis by simp
qed
next
case u2: (UINT b2 v2)
with p.IH have expr2: "expr e2 ep env cd st = expr (UINT b2 v2) ep env cd st" by simp
then show ?thesis
proof (cases)
let ?v="v1 + v2"
assume "b2 \<in> vbits"
with expr2 True
have "expr e2 ep env cd st=Normal ((KValue (createUInt b2 v2), Value (TUInt b2)),st)" by simp
moreover from u2 `b2 \<in> vbits`
have "v2 < 2^b2" and "v2 \<ge> 0" using update_bounds_uint by auto
moreover from `b2 \<in> vbits` have "0 < b2" by auto
ultimately have r2: "expr e2 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v2), Value (TUInt b2)),st)"
using createUInt_id[of v2 b2] by simp
thus ?thesis
proof (cases)
assume "b2<b1"
thus ?thesis
proof (cases)
let ?x="- (2 ^ (b1 - 1)) + (?v + 2 ^ (b1 - 1)) mod 2 ^ b1"
assume "?v\<ge>0"
hence "createSInt b1 ?v = (ShowL\<^sub>i\<^sub>n\<^sub>t ?x)" using `b2<b1` by auto
moreover have "add (TSInt b1) (TUInt b2) (ShowL\<^sub>i\<^sub>n\<^sub>t v1) (ShowL\<^sub>i\<^sub>n\<^sub>t v2)
= Some (createSInt b1 ?v, TSInt b1)"
using Read_ShowL_id add_def olift.simps(3)[of "(+)" b1 b2] `b2<b1` by simp
ultimately have "expr (PLUS e1 e2) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b1)),st)" using r1 r2 True by simp
moreover have "expr (eupdate (PLUS e1 e2)) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b1)),st)"
proof -
from `b1 \<in> vbits` `b2 \<in> vbits` `?v\<ge>0` `b2<b1`
have "eupdate (PLUS e1 e2) = E.INT b1 ?x" using i u2 by simp
moreover have "expr (E.INT b1 ?x) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b1)),st)"
proof -
from `b1 \<in> vbits` True have "expr (E.INT b1 ?x) ep env cd st
= Normal ((KValue (createSInt b1 ?x), Value (TSInt b1)),st)" by simp
moreover from `0 < b1` have "?x < 2 ^ (b1 - 1)" using upper_bound2 by simp
ultimately show ?thesis using createSInt_id[of ?x "b1"] `0 < b1` by simp
qed
ultimately show ?thesis by simp
qed
ultimately show ?thesis by simp
next
let ?x="2^(b1 -1) - (-?v+2^(b1-1)-1) mod (2^b1) - 1"
assume "\<not> ?v\<ge>0"
hence "createSInt b1 ?v = (ShowL\<^sub>i\<^sub>n\<^sub>t ?x)" by simp
moreover have "add (TSInt b1) (TUInt b2) (ShowL\<^sub>i\<^sub>n\<^sub>t v1) (ShowL\<^sub>i\<^sub>n\<^sub>t v2)
= Some (createSInt b1 ?v, TSInt b1)"
using Read_ShowL_id add_def olift.simps(3)[of "(+)" b1 b2] `b2<b1` by simp
ultimately have "expr (PLUS e1 e2) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b1)),st)" using r1 r2 True by simp
moreover have "expr (eupdate (PLUS e1 e2)) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b1)),st)"
proof -
from `b1 \<in> vbits` `b2 \<in> vbits` `\<not>?v\<ge>0` `b2<b1`
have "eupdate (PLUS e1 e2) = E.INT b1 ?x" using i u2 by simp
moreover have "expr (E.INT b1 ?x) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b1)),st)"
proof -
from `b1 \<in> vbits` True have "expr (E.INT b1 ?x) ep env cd st
= Normal ((KValue (createSInt b1 ?x), Value (TSInt b1)),st)" by simp
moreover from `0 < b1` have "?x \<ge> - (2 ^ (b1 - 1))" using upper_bound2 by simp
moreover have "2^(b1-1) - (-?v+2^(b1-1)-1) mod (2^b1) - 1 < 2 ^ (b1 - 1)"
by (simp add: algebra_simps flip: int_one_le_iff_zero_less)
ultimately show ?thesis using createSInt_id[of ?x b1] `0 < b1` by simp
qed
ultimately show ?thesis by simp
qed
ultimately show ?thesis by simp
qed
next
assume "\<not> b2 < b1"
with p i u2 show ?thesis by simp
qed
next
assume "\<not> b2 \<in> vbits"
with p i u2 show ?thesis by simp
qed
next
case (ADDRESS _)
with p i show ?thesis by simp
next
case (BALANCE _)
with p i show ?thesis by simp
next
case THIS
with p i show ?thesis by simp
next
case SENDER
with p i show ?thesis by simp
next
case VALUE
with p i show ?thesis by simp
next
case TRUE
with p i show ?thesis by simp
next
case FALSE
with p i show ?thesis by simp
next
case (LVAL _)
with p i show ?thesis by simp
next
case (PLUS _ _)
with p i show ?thesis by simp
next
case (MINUS _ _)
with p i show ?thesis by simp
next
case (EQUAL _ _)
with p i show ?thesis by simp
next
case (LESS _ _)
with p i show ?thesis by simp
next
case (AND _ _)
with p i show ?thesis by simp
next
case (OR _ _)
with p i show ?thesis by simp
next
case (NOT _)
with p i show ?thesis by simp
next
case (CALL x181 x182)
with p i show ?thesis by simp
next
case (ECALL x191 x192 x193 x194)
with p i show ?thesis by simp
qed
next
assume "\<not> b1 \<in> vbits"
with p i show ?thesis by simp
qed
next
case False
then show ?thesis using no_gas by simp
qed
next
case u: (UINT b1 v1)
with p.IH have expr1: "expr e1 ep env cd st = expr (UINT b1 v1) ep env cd st" by simp
then show ?thesis
proof (cases "gas st > 0")
case True
then show ?thesis
proof (cases)
assume "b1 \<in> vbits"
with expr1 True
have "expr e1 ep env cd st=Normal ((KValue (createUInt b1 v1), Value (TUInt b1)),st)" by simp
moreover from u `b1 \<in> vbits`
have "v1 < 2^b1" and "v1 \<ge> 0" using update_bounds_uint by auto
moreover from `b1 \<in> vbits` have "0 < b1" by auto
ultimately have r1: "expr e1 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v1), Value (TUInt b1)),st)"
by simp
thus ?thesis
proof (cases "eupdate e2")
case u2: (UINT b2 v2)
with p.IH have expr2: "expr e2 ep env cd st = expr (UINT b2 v2) ep env cd st" by simp
then show ?thesis
proof (cases)
let ?v="v1 + v2"
let ?x="?v mod 2 ^ max b1 b2"
assume "b2 \<in> vbits"
with expr2 True
have "expr e2 ep env cd st=Normal ((KValue (createUInt b2 v2), Value (TUInt b2)),st)" by simp
moreover from u2 `b2 \<in> vbits`
have "v2 < 2^b2" and "v2 \<ge> 0" using update_bounds_uint by auto
moreover from `b2 \<in> vbits` have "0 < b2" by auto
ultimately have r2: "expr e2 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v2), Value (TUInt b2)),st)"
by simp
moreover have "add (TUInt b1) (TUInt b2) (ShowL\<^sub>i\<^sub>n\<^sub>t v1) (ShowL\<^sub>i\<^sub>n\<^sub>t v2)
= Some (createUInt (max b1 b2) ?v, TUInt (max b1 b2))"
using Read_ShowL_id add_def olift.simps(2)[of "(+)" b1 b2] by simp
ultimately have "expr (PLUS e1 e2) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TUInt (max b1 b2))),st)" using r1 True by simp
moreover have "expr (eupdate (PLUS e1 e2)) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TUInt (max b1 b2))),st)"
proof -
from `b1 \<in> vbits` `b2 \<in> vbits`
have "eupdate (PLUS e1 e2) = UINT (max b1 b2) ?x" using u u2 by simp
moreover have "expr (UINT (max b1 b2) ?x) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TUInt (max b1 b2))),st)"
proof -
from `b1 \<in> vbits` `b2 \<in> vbits` have "max b1 b2 \<in> vbits" using vbits_max by simp
with True have "expr (UINT (max b1 b2) ?x) ep env cd st
= Normal ((KValue (createUInt (max b1 b2) ?x), Value (TUInt (max b1 b2))),st)" by simp
moreover from `0 < b1`
have "?x < 2 ^ (max b1 b2)" by simp
moreover from `0 < b1` have "0 < max b1 b2" using max_def by simp
ultimately show ?thesis by simp
qed
ultimately show ?thesis by simp
qed
ultimately show ?thesis by simp
next
assume "\<not> b2 \<in> vbits"
with p u u2 show ?thesis by simp
qed
next
case i2: (INT b2 v2)
with p.IH have expr2: "expr e2 ep env cd st = expr (E.INT b2 v2) ep env cd st" by simp
then show ?thesis
proof (cases)
let ?v="v1 + v2"
assume "b2 \<in> vbits"
with expr2 True
have "expr e2 ep env cd st=Normal ((KValue (createSInt b2 v2), Value (TSInt b2)),st)" by simp
moreover from i2 `b2 \<in> vbits`
have "v2 < 2^(b2-1)" and "v2 \<ge> -(2^(b2-1))" using update_bounds_int by auto
moreover from `b2 \<in> vbits` have "0 < b2" by auto
ultimately have r2: "expr e2 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v2), Value (TSInt b2)),st)"
using createSInt_id[of v2 b2] by simp
thus ?thesis
proof (cases)
assume "b1<b2"
thus ?thesis
proof (cases)
let ?x="- (2 ^ (b2 - 1)) + (?v + 2 ^ (b2 - 1)) mod 2 ^ b2"
assume "?v\<ge>0"
hence "createSInt b2 ?v = (ShowL\<^sub>i\<^sub>n\<^sub>t ?x)" using `b1<b2` by auto
moreover have "add (TUInt b1) (TSInt b2) (ShowL\<^sub>i\<^sub>n\<^sub>t v1) (ShowL\<^sub>i\<^sub>n\<^sub>t v2)
= Some (createSInt b2 ?v, TSInt b2)"
using Read_ShowL_id add_def olift.simps(4)[of "(+)" b1 b2] `b1<b2` by simp
ultimately have "expr (PLUS e1 e2) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b2)),st)" using r1 r2 True by simp
moreover have "expr (eupdate (PLUS e1 e2)) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b2)),st)"
proof -
from `b1 \<in> vbits` `b2 \<in> vbits` `?v\<ge>0` `b1<b2`
have "eupdate (PLUS e1 e2) = E.INT b2 ?x" using u i2 by simp
moreover have "expr (E.INT b2 ?x) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b2)),st)"
proof -
from `b2 \<in> vbits` True have "expr (E.INT b2 ?x) ep env cd st
= Normal ((KValue (createSInt b2 ?x), Value (TSInt b2)),st)" by simp
moreover from `0 < b2` have "?x < 2 ^ (b2 - 1)" using upper_bound2 by simp
ultimately show ?thesis using createSInt_id[of ?x "b2"] `0 < b2` by simp
qed
ultimately show ?thesis by simp
qed
ultimately show ?thesis by simp
next
let ?x="2^(b2 -1) - (-?v+2^(b2-1)-1) mod (2^b2) - 1"
assume "\<not> ?v\<ge>0"
hence "createSInt b2 ?v = (ShowL\<^sub>i\<^sub>n\<^sub>t ?x)" by simp
moreover have "add (TUInt b1) (TSInt b2) (ShowL\<^sub>i\<^sub>n\<^sub>t v1) (ShowL\<^sub>i\<^sub>n\<^sub>t v2)
= Some (createSInt b2 ?v, TSInt b2)"
using Read_ShowL_id add_def olift.simps(4)[of "(+)" b1 b2] `b1<b2` by simp
ultimately have "expr (PLUS e1 e2) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b2)),st)" using r1 r2 True by simp
moreover have "expr (eupdate (PLUS e1 e2)) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b2)),st)"
proof -
from `b1 \<in> vbits` `b2 \<in> vbits` `\<not>?v\<ge>0` `b1<b2`
have "eupdate (PLUS e1 e2) = E.INT b2 ?x" using u i2 by simp
moreover have "expr (E.INT b2 ?x) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b2)),st)"
proof -
from `b2 \<in> vbits` True have "expr (E.INT b2 ?x) ep env cd st
= Normal ((KValue (createSInt b2 ?x), Value (TSInt b2)),st)" by simp
moreover from `0 < b2` have "?x \<ge> - (2 ^ (b2 - 1))" using upper_bound2 by simp
moreover have "2^(b2-1) - (-?v+2^(b2-1)-1) mod (2^b2) - 1 < 2 ^ (b2 - 1)"
by (simp add: algebra_simps flip: int_one_le_iff_zero_less)
ultimately show ?thesis using createSInt_id[of ?x b2] `0 < b2` by simp
qed
ultimately show ?thesis by simp
qed
ultimately show ?thesis by simp
qed
next
assume "\<not> b1 < b2"
with p u i2 show ?thesis by simp
qed
next
assume "\<not> b2 \<in> vbits"
with p u i2 show ?thesis by simp
qed
next
case (ADDRESS _)
with p u show ?thesis by simp
next
case (BALANCE _)
with p u show ?thesis by simp
next
case THIS
with p u show ?thesis by simp
next
case SENDER
with p u show ?thesis by simp
next
case VALUE
with p u show ?thesis by simp
next
case TRUE
with p u show ?thesis by simp
next
case FALSE
with p u show ?thesis by simp
next
case (LVAL _)
with p u show ?thesis by simp
next
case (PLUS _ _)
with p u show ?thesis by simp
next
case (MINUS _ _)
with p u show ?thesis by simp
next
case (EQUAL _ _)
with p u show ?thesis by simp
next
case (LESS _ _)
with p u show ?thesis by simp
next
case (AND _ _)
with p u show ?thesis by simp
next
case (OR _ _)
with p u show ?thesis by simp
next
case (NOT _)
with p u show ?thesis by simp
next
case (CALL x181 x182)
with p u show ?thesis by simp
next
case (ECALL x191 x192 x193 x194)
with p u show ?thesis by simp
qed
next
assume "\<not> b1 \<in> vbits"
with p u show ?thesis by simp
qed
next
case False
then show ?thesis using no_gas by simp
qed
next
case (ADDRESS x3)
with p show ?thesis by simp
next
case (BALANCE x4)
with p show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case THIS
with p show ?thesis by simp
next
case SENDER
with p show ?thesis by simp
next
case VALUE
with p show ?thesis by simp
next
case TRUE
with p show ?thesis by simp
next
case FALSE
with p show ?thesis by simp
next
case (LVAL x7)
with p show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (PLUS x81 x82)
with p show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (MINUS x91 x92)
with p show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (EQUAL x101 x102)
with p show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (LESS x111 x112)
with p show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (AND x121 x122)
with p show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (OR x131 x132)
with p show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (NOT x131)
with p show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (CALL x181 x182)
with p show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (ECALL x191 x192 x193 x194)
with p show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
qed
next
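  (* Subtraction: analogous to addition, the difference of two folded literals is again
     folded into a single normalised literal; the remaining cases follow from the
     induction hypotheses, by simplification or via lift_eq. *)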
case m: (MINUS e1 e2)
show ?case
proof (cases "eupdate e1")
case i: (INT b1 v1)
with m.IH have expr1: "expr e1 ep env cd st = expr (E.INT b1 v1) ep env cd st" by simp
then show ?thesis
proof (cases "gas st > 0")
case True
show ?thesis
proof (cases)
assume "b1 \<in> vbits"
with expr1 True
have "expr e1 ep env cd st=Normal ((KValue (createSInt b1 v1), Value (TSInt b1)),st)" by simp
moreover from i `b1 \<in> vbits`
have "v1 < 2^(b1-1)" and "v1 \<ge> -(2^(b1-1))" using update_bounds_int by auto
moreover from `b1 \<in> vbits` have "0 < b1" by auto
ultimately have r1: "expr e1 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v1), Value (TSInt b1)),st)"
using createSInt_id[of v1 b1] by simp
thus ?thesis
proof (cases "eupdate e2")
case i2: (INT b2 v2)
with m.IH have expr2: "expr e2 ep env cd st = expr (E.INT b2 v2) ep env cd st" by simp
then show ?thesis
proof (cases)
let ?v="v1 - v2"
assume "b2 \<in> vbits"
with expr2 True
have "expr e2 ep env cd st=Normal ((KValue (createSInt b2 v2), Value (TSInt b2)),st)" by simp
moreover from i2 `b2 \<in> vbits`
have "v2 < 2^(b2-1)" and "v2 \<ge> -(2^(b2-1))" using update_bounds_int by auto
moreover from `b2 \<in> vbits` have "0 < b2" by auto
ultimately have r2: "expr e2 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v2), Value (TSInt b2)),st)"
using createSInt_id[of v2 b2] by simp
from `b1 \<in> vbits` `b2 \<in> vbits` have
u_def: "eupdate (MINUS e1 e2) =
(let v = v1 - v2
in if 0 \<le> v
then E.INT (max b1 b2)
(- (2 ^ (max b1 b2 - 1)) + (v + 2 ^ (max b1 b2 - 1)) mod 2 ^ max b1 b2)
else E.INT (max b1 b2)
(2 ^ (max b1 b2 - 1) - (- v + 2 ^ (max b1 b2 - 1) - 1) mod 2 ^ max b1 b2 - 1))"
using i i2 eupdate.simps(11)[of e1 e2] by simp
show ?thesis
proof (cases)
let ?x="- (2 ^ (max b1 b2 - 1)) + (?v + 2 ^ (max b1 b2 - 1)) mod 2 ^ max b1 b2"
assume "?v\<ge>0"
hence "createSInt (max b1 b2) ?v = (ShowL\<^sub>i\<^sub>n\<^sub>t ?x)" by simp
moreover have "sub (TSInt b1) (TSInt b2) (ShowL\<^sub>i\<^sub>n\<^sub>t v1) (ShowL\<^sub>i\<^sub>n\<^sub>t v2)
= Some (createSInt (max b1 b2) ?v, TSInt (max b1 b2))"
using Read_ShowL_id sub_def olift.simps(1)[of "(-)" b1 b2] by simp
ultimately have "expr (MINUS e1 e2) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt (max b1 b2))),st)" using r1 r2 True by simp
moreover have "expr (eupdate (MINUS e1 e2)) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt (max b1 b2))),st)"
proof -
from u_def have "eupdate (MINUS e1 e2) = E.INT (max b1 b2) ?x" using `?v\<ge>0` by simp
moreover have "expr (E.INT (max b1 b2) ?x) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt (max b1 b2))),st)"
proof -
from `b1 \<in> vbits` `b2 \<in> vbits` have "max b1 b2 \<in> vbits" using vbits_max by simp
with True have "expr (E.INT (max b1 b2) ?x) ep env cd st
= Normal ((KValue (createSInt (max b1 b2) ?x), Value (TSInt (max b1 b2))),st)" by simp
moreover from `0 < b1`
have "?x < 2 ^ (max b1 b2 - 1)" using upper_bound2 by simp
moreover from `0 < b1` have "0 < max b1 b2" using max_def by simp
ultimately show ?thesis using createSInt_id[of ?x "max b1 b2"] by simp
qed
ultimately show ?thesis by simp
qed
ultimately show ?thesis by simp
next
let ?x="2^(max b1 b2 -1) - (-?v+2^(max b1 b2-1)-1) mod (2^max b1 b2) - 1"
assume "\<not> ?v\<ge>0"
hence "createSInt (max b1 b2) ?v = (ShowL\<^sub>i\<^sub>n\<^sub>t ?x)" by simp
moreover have "sub (TSInt b1) (TSInt b2) (ShowL\<^sub>i\<^sub>n\<^sub>t v1) (ShowL\<^sub>i\<^sub>n\<^sub>t v2)
= Some (createSInt (max b1 b2) ?v, TSInt (max b1 b2))"
using Read_ShowL_id sub_def olift.simps(1)[of "(-)" b1 b2] by simp
ultimately have "expr (MINUS e1 e2) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt (max b1 b2))),st)" using r1 r2 True by simp
moreover have "expr (eupdate (MINUS e1 e2)) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt (max b1 b2))),st)"
proof -
from u_def have "eupdate (MINUS e1 e2) = E.INT (max b1 b2) ?x" using `\<not> ?v\<ge>0` by simp
moreover have "expr (E.INT (max b1 b2) ?x) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt (max b1 b2))),st)"
proof -
from `b1 \<in> vbits` `b2 \<in> vbits` have "max b1 b2 \<in> vbits" using vbits_max by simp
with True have "expr (E.INT (max b1 b2) ?x) ep env cd st
= Normal ((KValue (createSInt (max b1 b2) ?x), Value (TSInt (max b1 b2))),st)" by simp
moreover from `0 < b1`
have "?x \<ge> - (2 ^ (max b1 b2 - 1))" using lower_bound2[of "max b1 b2" ?v] by simp
moreover from `b1 > 0` have "2^(max b1 b2 -1) > (0::nat)" by simp
hence "2^(max b1 b2 -1) - (-?v+2^(max b1 b2-1)-1) mod (2^max b1 b2) - 1 < 2 ^ (max b1 b2 - 1)"
by (simp add: algebra_simps flip: int_one_le_iff_zero_less)
moreover from `0 < b1` have "0 < max b1 b2" using max_def by simp
ultimately show ?thesis using createSInt_id[of ?x "max b1 b2"] by simp
qed
ultimately show ?thesis by simp
qed
ultimately show ?thesis by simp
qed
next
assume "\<not> b2 \<in> vbits"
with m i i2 show ?thesis by simp
qed
next
case u: (UINT b2 v2)
with m.IH have expr2: "expr e2 ep env cd st = expr (UINT b2 v2) ep env cd st" by simp
then show ?thesis
proof (cases)
let ?v="v1 - v2"
assume "b2 \<in> vbits"
with expr2 True
have "expr e2 ep env cd st=Normal ((KValue (createUInt b2 v2), Value (TUInt b2)),st)" by simp
moreover from u `b2 \<in> vbits`
have "v2 < 2^b2" and "v2 \<ge> 0" using update_bounds_uint by auto
moreover from `b2 \<in> vbits` have "0 < b2" by auto
ultimately have r2: "expr e2 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v2), Value (TUInt b2)),st)"
using createUInt_id[of v2 b2] by simp
thus ?thesis
proof (cases)
assume "b2<b1"
with `b1 \<in> vbits` `b2 \<in> vbits` have
u_def: "eupdate (MINUS e1 e2) =
(let v = v1 - v2
in if 0 \<le> v
then E.INT b1 (- (2 ^ (b1 - 1)) + (v + 2 ^ (b1 - 1)) mod 2 ^ b1)
else E.INT b1 (2 ^ (b1 - 1) - (- v + 2 ^ (b1 - 1) - 1) mod 2 ^ b1 - 1))"
using i u eupdate.simps(11)[of e1 e2] by simp
show ?thesis
proof (cases)
let ?x="- (2 ^ (b1 - 1)) + (?v + 2 ^ (b1 - 1)) mod 2 ^ b1"
assume "?v\<ge>0"
hence "createSInt b1 ?v = (ShowL\<^sub>i\<^sub>n\<^sub>t ?x)" using `b2<b1` by auto
moreover have "sub (TSInt b1) (TUInt b2) (ShowL\<^sub>i\<^sub>n\<^sub>t v1) (ShowL\<^sub>i\<^sub>n\<^sub>t v2)
= Some (createSInt b1 ?v, TSInt b1)"
using Read_ShowL_id sub_def olift.simps(3)[of "(-)" b1 b2] `b2<b1` by simp
ultimately have "expr (MINUS e1 e2) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b1)),st)" using r1 r2 True by simp
moreover have "expr (eupdate (MINUS e1 e2)) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b1)),st)"
proof -
from u_def have "eupdate (MINUS e1 e2) = E.INT b1 ?x" using `?v\<ge>0` by simp
moreover have "expr (E.INT b1 ?x) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b1)),st)"
proof -
from `b1 \<in> vbits` True have "expr (E.INT b1 ?x) ep env cd st
= Normal ((KValue (createSInt b1 ?x), Value (TSInt b1)),st)" by simp
moreover from `0 < b1` have "?x < 2 ^ (b1 - 1)" using upper_bound2 by simp
ultimately show ?thesis using createSInt_id[of ?x "b1"] `0 < b1` by simp
qed
ultimately show ?thesis by simp
qed
ultimately show ?thesis by simp
next
let ?x="2^(b1 -1) - (-?v+2^(b1-1)-1) mod (2^b1) - 1"
assume "\<not> ?v\<ge>0"
hence "createSInt b1 ?v = (ShowL\<^sub>i\<^sub>n\<^sub>t ?x)" by simp
moreover have "sub (TSInt b1) (TUInt b2) (ShowL\<^sub>i\<^sub>n\<^sub>t v1) (ShowL\<^sub>i\<^sub>n\<^sub>t v2)
= Some (createSInt b1 ?v, TSInt b1)"
using Read_ShowL_id sub_def olift.simps(3)[of "(-)" b1 b2] `b2<b1` by simp
ultimately have "expr (MINUS e1 e2) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b1)),st)" using r1 r2 True by simp
moreover have "expr (eupdate (MINUS e1 e2)) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b1)),st)"
proof -
from u_def have "eupdate (MINUS e1 e2) = E.INT b1 ?x" using `\<not> ?v\<ge>0` by simp
moreover have "expr (E.INT b1 ?x) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b1)),st)"
proof -
from `b1 \<in> vbits` True have "expr (E.INT b1 ?x) ep env cd st
= Normal ((KValue (createSInt b1 ?x), Value (TSInt b1)),st)" by simp
moreover from `0 < b1` have "?x \<ge> - (2 ^ (b1 - 1))" using upper_bound2 by simp
moreover have "2^(b1-1) - (-?v+2^(b1-1)-1) mod (2^b1) - 1 < 2 ^ (b1 - 1)"
by (simp add: algebra_simps flip: int_one_le_iff_zero_less)
ultimately show ?thesis using createSInt_id[of ?x b1] `0 < b1` by simp
qed
ultimately show ?thesis by simp
qed
ultimately show ?thesis by simp
qed
next
assume "\<not> b2 < b1"
with m i u show ?thesis by simp
qed
next
assume "\<not> b2 \<in> vbits"
with m i u show ?thesis by simp
qed
next
case (ADDRESS _)
with m i show ?thesis by simp
next
case (BALANCE _)
with m i show ?thesis by simp
next
case THIS
with m i show ?thesis by simp
next
case SENDER
with m i show ?thesis by simp
next
case VALUE
with m i show ?thesis by simp
next
case TRUE
with m i show ?thesis by simp
next
case FALSE
with m i show ?thesis by simp
next
case (LVAL _)
with m i show ?thesis by simp
next
case (PLUS _ _)
with m i show ?thesis by simp
next
case (MINUS _ _)
with m i show ?thesis by simp
next
case (EQUAL _ _)
with m i show ?thesis by simp
next
case (LESS _ _)
with m i show ?thesis by simp
next
case (AND _ _)
with m i show ?thesis by simp
next
case (OR _ _)
with m i show ?thesis by simp
next
case (NOT _)
with m i show ?thesis by simp
next
case (CALL x181 x182)
with m i show ?thesis by simp
next
case (ECALL x191 x192 x193 x194)
with m i show ?thesis by simp
qed
next
assume "\<not> b1 \<in> vbits"
with m i show ?thesis by simp
qed
next
case False
then show ?thesis using no_gas by simp
qed
next
case u: (UINT b1 v1)
with m.IH have expr1: "expr e1 ep env cd st = expr (UINT b1 v1) ep env cd st" by simp
then show ?thesis
proof (cases "gas st > 0")
case True
show ?thesis
proof (cases)
assume "b1 \<in> vbits"
with expr1 True
have "expr e1 ep env cd st=Normal ((KValue (createUInt b1 v1), Value (TUInt b1)),st)" by simp
moreover from u `b1 \<in> vbits`
have "v1 < 2^b1" and "v1 \<ge> 0" using update_bounds_uint by auto
moreover from `b1 \<in> vbits` have "0 < b1" by auto
ultimately have r1: "expr e1 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v1), Value (TUInt b1)),st)"
by simp
thus ?thesis
proof (cases "eupdate e2")
case u2: (UINT b2 v2)
with m.IH have expr2: "expr e2 ep env cd st = expr (UINT b2 v2) ep env cd st" by simp
then show ?thesis
proof (cases)
let ?v="v1 - v2"
let ?x="?v mod 2 ^ max b1 b2"
assume "b2 \<in> vbits"
with expr2 True
have "expr e2 ep env cd st=Normal ((KValue (createUInt b2 v2), Value (TUInt b2)),st)" by simp
moreover from u2 `b2 \<in> vbits`
have "v2 < 2^b2" and "v2 \<ge> 0" using update_bounds_uint by auto
moreover from `b2 \<in> vbits` have "0 < b2" by auto
ultimately have r2: "expr e2 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v2), Value (TUInt b2)),st)"
by simp
moreover have "sub (TUInt b1) (TUInt b2) (ShowL\<^sub>i\<^sub>n\<^sub>t v1) (ShowL\<^sub>i\<^sub>n\<^sub>t v2)
= Some (createUInt (max b1 b2) ?v, TUInt (max b1 b2))"
using Read_ShowL_id sub_def olift.simps(2)[of "(-)" b1 b2] by simp
ultimately have "expr (MINUS e1 e2) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TUInt (max b1 b2))),st)" using r1 True by simp
moreover have "expr (eupdate (MINUS e1 e2)) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TUInt (max b1 b2))),st)"
proof -
from `b1 \<in> vbits` `b2 \<in> vbits`
have "eupdate (MINUS e1 e2) = UINT (max b1 b2) ?x" using u u2 by simp
moreover have "expr (UINT (max b1 b2) ?x) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TUInt (max b1 b2))),st)"
proof -
from `b1 \<in> vbits` `b2 \<in> vbits` have "max b1 b2 \<in> vbits" using vbits_max by simp
with True have "expr (UINT (max b1 b2) ?x) ep env cd st
= Normal ((KValue (createUInt (max b1 b2) ?x), Value (TUInt (max b1 b2))),st)" by simp
moreover from `0 < b1`
have "?x < 2 ^ (max b1 b2)" by simp
moreover from `0 < b1` have "0 < max b1 b2" using max_def by simp
ultimately show ?thesis by simp
qed
ultimately show ?thesis by simp
qed
ultimately show ?thesis by simp
next
assume "\<not> b2 \<in> vbits"
with m u u2 show ?thesis by simp
qed
next
case i: (INT b2 v2)
with m.IH have expr2: "expr e2 ep env cd st = expr (E.INT b2 v2) ep env cd st" by simp
then show ?thesis
proof (cases)
let ?v="v1 - v2"
assume "b2 \<in> vbits"
with expr2 True
have "expr e2 ep env cd st=Normal ((KValue (createSInt b2 v2), Value (TSInt b2)),st)" by simp
moreover from i `b2 \<in> vbits`
have "v2 < 2^(b2-1)" and "v2 \<ge> -(2^(b2-1))" using update_bounds_int by auto
moreover from `b2 \<in> vbits` have "0 < b2" by auto
ultimately have r2: "expr e2 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v2), Value (TSInt b2)),st)"
using createSInt_id[of v2 b2] by simp
thus ?thesis
proof (cases)
assume "b1<b2"
with `b1 \<in> vbits` `b2 \<in> vbits` have
u_def: "eupdate (MINUS e1 e2) =
(let v = v1 - v2
in if 0 \<le> v
then E.INT b2 (- (2 ^ (b2 - 1)) + (v + 2 ^ (b2 - 1)) mod 2 ^ b2)
else E.INT b2 (2 ^ (b2 - 1) - (- v + 2 ^ (b2 - 1) - 1) mod 2 ^ b2 - 1))"
using u i by simp
show ?thesis
proof (cases)
let ?x="- (2 ^ (b2 - 1)) + (?v + 2 ^ (b2 - 1)) mod 2 ^ b2"
assume "?v\<ge>0"
hence "createSInt b2 ?v = (ShowL\<^sub>i\<^sub>n\<^sub>t ?x)" using `b1<b2` by auto
moreover have "sub (TUInt b1) (TSInt b2) (ShowL\<^sub>i\<^sub>n\<^sub>t v1) (ShowL\<^sub>i\<^sub>n\<^sub>t v2)
= Some (createSInt b2 ?v, TSInt b2)"
using Read_ShowL_id sub_def olift.simps(4)[of "(-)" b1 b2] `b1<b2` by simp
ultimately have "expr (MINUS e1 e2) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b2)),st)" using r1 r2 True by simp
moreover have "expr (eupdate (MINUS e1 e2)) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b2)),st)"
proof -
from u_def have "eupdate (MINUS e1 e2) = E.INT b2 ?x" using `?v\<ge>0` by simp
moreover have "expr (E.INT b2 ?x) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b2)),st)"
proof -
from `b2 \<in> vbits` True have "expr (E.INT b2 ?x) ep env cd st
= Normal ((KValue (createSInt b2 ?x), Value (TSInt b2)),st)" by simp
moreover from `0 < b2` have "?x < 2 ^ (b2 - 1)" using upper_bound2 by simp
ultimately show ?thesis using createSInt_id[of ?x "b2"] `0 < b2` by simp
qed
ultimately show ?thesis by simp
qed
ultimately show ?thesis by simp
next
let ?x="2^(b2 -1) - (-?v+2^(b2-1)-1) mod (2^b2) - 1"
assume "\<not> ?v\<ge>0"
hence "createSInt b2 ?v = (ShowL\<^sub>i\<^sub>n\<^sub>t ?x)" by simp
moreover have "sub (TUInt b1) (TSInt b2) (ShowL\<^sub>i\<^sub>n\<^sub>t v1) (ShowL\<^sub>i\<^sub>n\<^sub>t v2)
= Some (createSInt b2 ?v, TSInt b2)"
using Read_ShowL_id sub_def olift.simps(4)[of "(-)" b1 b2] `b1<b2` by simp
ultimately have "expr (MINUS e1 e2) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b2)),st)" using r1 r2 True by simp
moreover have "expr (eupdate (MINUS e1 e2)) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b2)),st)"
proof -
from u_def have "eupdate (MINUS e1 e2) = E.INT b2 ?x" using `\<not> ?v\<ge>0` by simp
moreover have "expr (E.INT b2 ?x) ep env cd st
= Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t ?x), Value (TSInt b2)),st)"
proof -
from `b2 \<in> vbits` True have "expr (E.INT b2 ?x) ep env cd st
= Normal ((KValue (createSInt b2 ?x), Value (TSInt b2)),st)" by simp
moreover from `0 < b2` have "?x \<ge> - (2 ^ (b2 - 1))" using upper_bound2 by simp
moreover have "2^(b2-1) - (-?v+2^(b2-1)-1) mod (2^b2) - 1 < 2 ^ (b2 - 1)"
by (simp add: algebra_simps flip: int_one_le_iff_zero_less)
ultimately show ?thesis using createSInt_id[of ?x b2] `0 < b2` by simp
qed
ultimately show ?thesis by simp
qed
ultimately show ?thesis by simp
qed
next
assume "\<not> b1 < b2"
with m u i show ?thesis by simp
qed
next
assume "\<not> b2 \<in> vbits"
with m u i show ?thesis by simp
qed
next
case (ADDRESS _)
with m u show ?thesis by simp
next
case (BALANCE _)
with m u show ?thesis by simp
next
case THIS
with m u show ?thesis by simp
next
case SENDER
with m u show ?thesis by simp
next
case VALUE
with m u show ?thesis by simp
next
case TRUE
with m u show ?thesis by simp
next
case FALSE
with m u show ?thesis by simp
next
case (LVAL _)
with m u show ?thesis by simp
next
case (PLUS _ _)
with m u show ?thesis by simp
next
case (MINUS _ _)
with m u show ?thesis by simp
next
case (EQUAL _ _)
with m u show ?thesis by simp
next
case (LESS _ _)
with m u show ?thesis by simp
next
case (AND _ _)
with m u show ?thesis by simp
next
case (OR _ _)
with m u show ?thesis by simp
next
case (NOT _)
with m u show ?thesis by simp
next
case (CALL x181 x182)
with m u show ?thesis by simp
next
case (ECALL x191 x192 x193 x194)
with m u show ?thesis by simp
qed
next
assume "\<not> b1 \<in> vbits"
with m u show ?thesis by simp
qed
next
case False
then show ?thesis using no_gas by simp
qed
next
case (ADDRESS x3)
with m show ?thesis by simp
next
case (BALANCE x4)
with m show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case THIS
with m show ?thesis by simp
next
case SENDER
with m show ?thesis by simp
next
case VALUE
with m show ?thesis by simp
next
case TRUE
with m show ?thesis by simp
next
case FALSE
with m show ?thesis by simp
next
case (LVAL x7)
with m show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (PLUS x81 x82)
with m show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (MINUS x91 x92)
with m show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (EQUAL x101 x102)
with m show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (LESS x111 x112)
with m show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (AND x121 x122)
with m show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (OR x131 x132)
with m show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (NOT x131)
with m show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (CALL x181 x182)
with m show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by simp
next
case (ECALL x191 x192 x193 x194)
with m show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by simp
qed
next
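  (* Equality: when both operands update to integer literals, the comparison is evaluated
     to a boolean value via createBool; the remaining cases follow from the induction
     hypotheses. *)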
case e: (EQUAL e1 e2)
show ?case
proof (cases "eupdate e1")
case i: (INT b1 v1)
with e.IH have expr1: "expr e1 ep env cd st = expr (E.INT b1 v1) ep env cd st" by simp
then show ?thesis
proof (cases "gas st > 0")
case True
then show ?thesis
proof (cases)
assume "b1 \<in> vbits"
with expr1 True
have "expr e1 ep env cd st=Normal ((KValue (createSInt b1 v1), Value (TSInt b1)),st)" by simp
moreover from i `b1 \<in> vbits`
have "v1 < 2^(b1-1)" and "v1 \<ge> -(2^(b1-1))" using update_bounds_int by auto
moreover from `b1 \<in> vbits` have "0 < b1" by auto
ultimately have r1: "expr e1 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v1), Value (TSInt b1)),st)"
using createSInt_id[of v1 b1] by simp
thus ?thesis
proof (cases "eupdate e2")
case i2: (INT b2 v2)
with e.IH have expr2: "expr e2 ep env cd st = expr (E.INT b2 v2) ep env cd st" by simp
then show ?thesis
proof (cases)
assume "b2 \<in> vbits"
with expr2 True
have "expr e2 ep env cd st=Normal ((KValue (createSInt b2 v2), Value (TSInt b2)),st)" by simp
moreover from i2 `b2 \<in> vbits`
have "v2 < 2^(b2-1)" and "v2 \<ge> -(2^(b2-1))" using update_bounds_int by auto
moreover from `b2 \<in> vbits` have "0 < b2" by auto
ultimately have r2: "expr e2 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v2), Value (TSInt b2)),st)"
using createSInt_id[of v2 b2] by simp
with r1 True have "expr (EQUAL e1 e2) ep env cd st=
Normal ((KValue (createBool ((ReadL\<^sub>i\<^sub>n\<^sub>t (ShowL\<^sub>i\<^sub>n\<^sub>t v1))=((ReadL\<^sub>i\<^sub>n\<^sub>t (ShowL\<^sub>i\<^sub>n\<^sub>t v2))))), Value TBool),st)"
using equal_def plift.simps(1)[of "(=)"] by simp
hence "expr (EQUAL e1 e2) ep env cd st=Normal ((KValue (createBool (v1=v2)), Value TBool),st)"
using Read_ShowL_id by simp
with `b1 \<in> vbits` `b2 \<in> vbits` True show ?thesis using i i2 by simp
next
assume "\<not> b2 \<in> vbits"
with e i i2 show ?thesis by simp
qed
next
case u: (UINT b2 v2)
with e.IH have expr2: "expr e2 ep env cd st = expr (UINT b2 v2) ep env cd st" by simp
then show ?thesis
proof (cases)
assume "b2 \<in> vbits"
with expr2 True
have "expr e2 ep env cd st=Normal ((KValue (createUInt b2 v2), Value (TUInt b2)),st)" by simp
moreover from u `b2 \<in> vbits`
have "v2 < 2^b2" and "v2 \<ge> 0" using update_bounds_uint by auto
moreover from `b2 \<in> vbits` have "0 < b2" by auto
ultimately have r2: "expr e2 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v2), Value (TUInt b2)),st)"
using createUInt_id[of v2 b2] by simp
thus ?thesis
proof (cases)
assume "b2<b1"
with r1 r2 True have "expr (EQUAL e1 e2) ep env cd st=
Normal ((KValue (createBool ((ReadL\<^sub>i\<^sub>n\<^sub>t (ShowL\<^sub>i\<^sub>n\<^sub>t v1))=((ReadL\<^sub>i\<^sub>n\<^sub>t (ShowL\<^sub>i\<^sub>n\<^sub>t v2))))), Value TBool),st)"
using equal_def plift.simps(3)[of "(=)"] by simp
hence "expr (EQUAL e1 e2) ep env cd st=Normal ((KValue (createBool (v1=v2)), Value TBool),st)"
using Read_ShowL_id by simp
with `b1 \<in> vbits` `b2 \<in> vbits` `b2<b1` True show ?thesis using i u by simp
next
assume "\<not> b2 < b1"
with e i u show ?thesis by simp
qed
next
assume "\<not> b2 \<in> vbits"
with e i u show ?thesis by simp
qed
next
case (ADDRESS _)
with e i show ?thesis by simp
next
case (BALANCE _)
with e i show ?thesis by simp
next
case THIS
with e i show ?thesis by simp
next
case SENDER
with e i show ?thesis by simp
next
case VALUE
with e i show ?thesis by simp
next
case TRUE
with e i show ?thesis by simp
next
case FALSE
with e i show ?thesis by simp
next
case (LVAL _)
with e i show ?thesis by simp
next
case (PLUS _ _)
with e i show ?thesis by simp
next
case (MINUS _ _)
with e i show ?thesis by simp
next
case (EQUAL _ _)
with e i show ?thesis by simp
next
case (LESS _ _)
with e i show ?thesis by simp
next
case (AND _ _)
with e i show ?thesis by simp
next
case (OR _ _)
with e i show ?thesis by simp
next
case (NOT _)
with e i show ?thesis by simp
next
case (CALL x181 x182)
with e i show ?thesis by simp
next
case (ECALL x191 x192 x193 x194)
with e i show ?thesis by simp
qed
next
assume "\<not> b1 \<in> vbits"
with e i show ?thesis by simp
qed
next
case False
then show ?thesis using no_gas by simp
qed
next
case u: (UINT b1 v1)
with e.IH have expr1: "expr e1 ep env cd st = expr (UINT b1 v1) ep env cd st" by simp
then show ?thesis
proof (cases "gas st > 0")
case True
then show ?thesis
proof (cases)
assume "b1 \<in> vbits"
with expr1 True
have "expr e1 ep env cd st=Normal ((KValue (createUInt b1 v1), Value (TUInt b1)),st)" by simp
moreover from u `b1 \<in> vbits`
have "v1 < 2^b1" and "v1 \<ge> 0" using update_bounds_uint by auto
moreover from `b1 \<in> vbits` have "0 < b1" by auto
ultimately have r1: "expr e1 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v1), Value (TUInt b1)),st)"
by simp
thus ?thesis
proof (cases "eupdate e2")
case u2: (UINT b2 v2)
with e.IH have expr2: "expr e2 ep env cd st = expr (UINT b2 v2) ep env cd st" by simp
then show ?thesis
proof (cases)
assume "b2 \<in> vbits"
with expr2 True
have "expr e2 ep env cd st=Normal ((KValue (createUInt b2 v2), Value (TUInt b2)),st)" by simp
moreover from u2 `b2 \<in> vbits`
have "v2 < 2^b2" and "v2 \<ge> 0" using update_bounds_uint by auto
moreover from `b2 \<in> vbits` have "0 < b2" by auto
ultimately have r2: "expr e2 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v2), Value (TUInt b2)),st)"
by simp
with r1 True have "expr (EQUAL e1 e2) ep env cd st=
Normal ((KValue (createBool ((ReadL\<^sub>i\<^sub>n\<^sub>t (ShowL\<^sub>i\<^sub>n\<^sub>t v1))=((ReadL\<^sub>i\<^sub>n\<^sub>t (ShowL\<^sub>i\<^sub>n\<^sub>t v2))))), Value TBool),st)"
using equal_def plift.simps(2)[of "(=)"] by simp
hence "expr (EQUAL e1 e2) ep env cd st=Normal ((KValue (createBool (v1=v2)), Value TBool),st)"
using Read_ShowL_id by simp
with `b1 \<in> vbits` `b2 \<in> vbits` show ?thesis using u u2 True by simp
next
assume "\<not> b2 \<in> vbits"
with e u u2 show ?thesis by simp
qed
next
case i: (INT b2 v2)
with e.IH have expr2: "expr e2 ep env cd st = expr (E.INT b2 v2) ep env cd st" by simp
then show ?thesis
proof (cases)
assume "b2 \<in> vbits"
with expr2 True
have "expr e2 ep env cd st=Normal ((KValue (createSInt b2 v2), Value (TSInt b2)),st)" by simp
moreover from i `b2 \<in> vbits`
have "v2 < 2^(b2-1)" and "v2 \<ge> -(2^(b2-1))" using update_bounds_int by auto
moreover from `b2 \<in> vbits` have "0 < b2" by auto
ultimately have r2: "expr e2 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v2), Value (TSInt b2)),st)"
using createSInt_id[of v2 b2] by simp
thus ?thesis
proof (cases)
assume "b1<b2"
with r1 r2 True have "expr (EQUAL e1 e2) ep env cd st=
Normal ((KValue (createBool ((ReadL\<^sub>i\<^sub>n\<^sub>t (ShowL\<^sub>i\<^sub>n\<^sub>t v1))=((ReadL\<^sub>i\<^sub>n\<^sub>t (ShowL\<^sub>i\<^sub>n\<^sub>t v2))))), Value TBool),st)"
using equal_def plift.simps(4)[of "(=)"] by simp
hence "expr (EQUAL e1 e2) ep env cd st=Normal ((KValue (createBool (v1=v2)), Value TBool),st)"
using Read_ShowL_id by simp
with `b1 \<in> vbits` `b2 \<in> vbits` `b1<b2` True show ?thesis using u i by simp
next
assume "\<not> b1 < b2"
with e u i show ?thesis by simp
qed
next
assume "\<not> b2 \<in> vbits"
with e u i show ?thesis by simp
qed
next
case (ADDRESS _)
with e u show ?thesis by simp
next
case (BALANCE _)
with e u show ?thesis by simp
next
case THIS
with e u show ?thesis by simp
next
case SENDER
with e u show ?thesis by simp
next
case VALUE
with e u show ?thesis by simp
next
case TRUE
with e u show ?thesis by simp
next
case FALSE
with e u show ?thesis by simp
next
case (LVAL _)
with e u show ?thesis by simp
next
case (PLUS _ _)
with e u show ?thesis by simp
next
case (MINUS _ _)
with e u show ?thesis by simp
next
case (EQUAL _ _)
with e u show ?thesis by simp
next
case (LESS _ _)
with e u show ?thesis by simp
next
case (AND _ _)
with e u show ?thesis by simp
next
case (OR _ _)
with e u show ?thesis by simp
next
case (NOT _)
with e u show ?thesis by simp
next
case (CALL x181 x182)
with e u show ?thesis by simp
next
case (ECALL x191 x192 x193 x194)
with e u show ?thesis by simp
qed
next
assume "\<not> b1 \<in> vbits"
with e u show ?thesis by simp
qed
next
case False
then show ?thesis using no_gas by simp
qed
next
case (ADDRESS x3)
with e show ?thesis by simp
next
case (BALANCE x4)
with e show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case THIS
with e show ?thesis by simp
next
case SENDER
with e show ?thesis by simp
next
case VALUE
with e show ?thesis by simp
next
case TRUE
with e show ?thesis by simp
next
case FALSE
with e show ?thesis by simp
next
case (LVAL x7)
with e show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (PLUS x81 x82)
with e show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (MINUS x91 x92)
with e show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (EQUAL x101 x102)
with e show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (LESS x111 x112)
with e show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (AND x121 x122)
with e show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (OR x131 x132)
with e show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (NOT x131)
with e show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (CALL x181 x182)
with e show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by simp
next
case (ECALL x191 x192 x193 x194)
with e show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by simp
qed
next
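(* LESS: analogous to EQUAL; the comparison is evaluated statically when both
   operands fold to integer literals, all other cases use lift_eq or simp. *)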
case l: (LESS e1 e2)
show ?case
proof (cases "eupdate e1")
case i: (INT b1 v1)
with l.IH have expr1: "expr e1 ep env cd st = expr (E.INT b1 v1) ep env cd st" by simp
then show ?thesis
proof (cases "gas st > 0")
case True
then show ?thesis
proof (cases)
assume "b1 \<in> vbits"
with expr1 True
have "expr e1 ep env cd st=Normal ((KValue (createSInt b1 v1), Value (TSInt b1)),st)" by simp
moreover from i `b1 \<in> vbits`
have "v1 < 2^(b1-1)" and "v1 \<ge> -(2^(b1-1))" using update_bounds_int by auto
moreover from `b1 \<in> vbits` have "0 < b1" by auto
ultimately have r1: "expr e1 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v1), Value (TSInt b1)),st)"
using createSInt_id[of v1 b1] by simp
thus ?thesis
proof (cases "eupdate e2")
case i2: (INT b2 v2)
with l.IH have expr2: "expr e2 ep env cd st = expr (E.INT b2 v2) ep env cd st" by simp
then show ?thesis
proof (cases)
assume "b2 \<in> vbits"
with expr2 True
have "expr e2 ep env cd st=Normal ((KValue (createSInt b2 v2), Value (TSInt b2)),st)" by simp
moreover from i2 `b2 \<in> vbits`
have "v2 < 2^(b2-1)" and "v2 \<ge> -(2^(b2-1))" using update_bounds_int by auto
moreover from `b2 \<in> vbits` have "0 < b2" by auto
ultimately have r2: "expr e2 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v2), Value (TSInt b2)),st)"
using createSInt_id[of v2 b2] by simp
with r1 True have "expr (LESS e1 e2) ep env cd st=
Normal ((KValue (createBool ((ReadL\<^sub>i\<^sub>n\<^sub>t (ShowL\<^sub>i\<^sub>n\<^sub>t v1))<((ReadL\<^sub>i\<^sub>n\<^sub>t (ShowL\<^sub>i\<^sub>n\<^sub>t v2))))), Value TBool),st)"
using less_def plift.simps(1)[of "(<)"] by simp
hence "expr (LESS e1 e2) ep env cd st=Normal ((KValue (createBool (v1<v2)), Value TBool),st)"
using Read_ShowL_id by simp
with `b1 \<in> vbits` `b2 \<in> vbits` show ?thesis using i i2 True by simp
next
assume "\<not> b2 \<in> vbits"
with l i i2 show ?thesis by simp
qed
next
case u: (UINT b2 v2)
with l.IH have expr2: "expr e2 ep env cd st = expr (UINT b2 v2) ep env cd st" by simp
then show ?thesis
proof (cases)
assume "b2 \<in> vbits"
with expr2 True
have "expr e2 ep env cd st=Normal ((KValue (createUInt b2 v2), Value (TUInt b2)),st)" by simp
moreover from u `b2 \<in> vbits`
have "v2 < 2^b2" and "v2 \<ge> 0" using update_bounds_uint by auto
moreover from `b2 \<in> vbits` have "0 < b2" by auto
ultimately have r2: "expr e2 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v2), Value (TUInt b2)),st)"
using createUInt_id[of v2 b2] by simp
thus ?thesis
proof (cases)
assume "b2<b1"
with r1 r2 True have "expr (LESS e1 e2) ep env cd st=
Normal ((KValue (createBool ((ReadL\<^sub>i\<^sub>n\<^sub>t (ShowL\<^sub>i\<^sub>n\<^sub>t v1))<((ReadL\<^sub>i\<^sub>n\<^sub>t (ShowL\<^sub>i\<^sub>n\<^sub>t v2))))), Value TBool),st)"
using less_def plift.simps(3)[of "(<)"] by simp
hence "expr (LESS e1 e2) ep env cd st=Normal ((KValue (createBool (v1<v2)), Value TBool),st)"
using Read_ShowL_id by simp
with `b1 \<in> vbits` `b2 \<in> vbits` `b2<b1` show ?thesis using i u True by simp
next
assume "\<not> b2 < b1"
with l i u show ?thesis by simp
qed
next
assume "\<not> b2 \<in> vbits"
with l i u show ?thesis by simp
qed
next
case (ADDRESS _)
with l i show ?thesis by simp
next
case (BALANCE _)
with l i show ?thesis by simp
next
case THIS
with l i show ?thesis by simp
next
case SENDER
with l i show ?thesis by simp
next
case VALUE
with l i show ?thesis by simp
next
case TRUE
with l i show ?thesis by simp
next
case FALSE
with l i show ?thesis by simp
next
case (LVAL _)
with l i show ?thesis by simp
next
case (PLUS _ _)
with l i show ?thesis by simp
next
case (MINUS _ _)
with l i show ?thesis by simp
next
case (EQUAL _ _)
with l i show ?thesis by simp
next
case (LESS _ _)
with l i show ?thesis by simp
next
case (AND _ _)
with l i show ?thesis by simp
next
case (OR _ _)
with l i show ?thesis by simp
next
case (NOT _)
with l i show ?thesis by simp
next
case (CALL x181 x182)
with l i show ?thesis by simp
next
case (ECALL x191 x192 x193 x194)
with l i show ?thesis by simp
qed
next
assume "\<not> b1 \<in> vbits"
with l i show ?thesis by simp
qed
next
case False
then show ?thesis using no_gas by simp
qed
next
case u: (UINT b1 v1)
with l.IH have expr1: "expr e1 ep env cd st = expr (UINT b1 v1) ep env cd st" by simp
then show ?thesis
proof (cases "gas st > 0")
case True
then show ?thesis
proof (cases)
assume "b1 \<in> vbits"
with expr1 True
have "expr e1 ep env cd st=Normal ((KValue (createUInt b1 v1), Value (TUInt b1)),st)" by simp
moreover from u `b1 \<in> vbits`
have "v1 < 2^b1" and "v1 \<ge> 0" using update_bounds_uint by auto
moreover from `b1 \<in> vbits` have "0 < b1" by auto
ultimately have r1: "expr e1 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v1), Value (TUInt b1)),st)"
by simp
thus ?thesis
proof (cases "eupdate e2")
case u2: (UINT b2 v2)
with l.IH have expr2: "expr e2 ep env cd st = expr (UINT b2 v2) ep env cd st" by simp
then show ?thesis
proof (cases)
assume "b2 \<in> vbits"
with expr2 True
have "expr e2 ep env cd st=Normal ((KValue (createUInt b2 v2), Value (TUInt b2)),st)" by simp
moreover from u2 `b2 \<in> vbits`
have "v2 < 2^b2" and "v2 \<ge> 0" using update_bounds_uint by auto
moreover from `b2 \<in> vbits` have "0 < b2" by auto
ultimately have r2: "expr e2 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v2), Value (TUInt b2)),st)"
by simp
with r1 True have "expr (LESS e1 e2) ep env cd st=
Normal ((KValue (createBool ((ReadL\<^sub>i\<^sub>n\<^sub>t (ShowL\<^sub>i\<^sub>n\<^sub>t v1))<((ReadL\<^sub>i\<^sub>n\<^sub>t (ShowL\<^sub>i\<^sub>n\<^sub>t v2))))), Value TBool),st)"
using less_def plift.simps(2)[of "(<)"] by simp
hence "expr (LESS e1 e2) ep env cd st=Normal ((KValue (createBool (v1<v2)), Value TBool),st)"
using Read_ShowL_id by simp
with `b1 \<in> vbits` `b2 \<in> vbits` show ?thesis using u u2 True by simp
next
assume "\<not> b2 \<in> vbits"
with l u u2 show ?thesis by simp
qed
next
case i: (INT b2 v2)
with l.IH have expr2: "expr e2 ep env cd st = expr (E.INT b2 v2) ep env cd st" by simp
then show ?thesis
proof (cases)
assume "b2 \<in> vbits"
with expr2 True
have "expr e2 ep env cd st=Normal ((KValue (createSInt b2 v2), Value (TSInt b2)),st)" by simp
moreover from i `b2 \<in> vbits`
have "v2 < 2^(b2-1)" and "v2 \<ge> -(2^(b2-1))" using update_bounds_int by auto
moreover from `b2 \<in> vbits` have "0 < b2" by auto
ultimately have r2: "expr e2 ep env cd st = Normal ((KValue (ShowL\<^sub>i\<^sub>n\<^sub>t v2), Value (TSInt b2)),st)"
using createSInt_id[of v2 b2] by simp
thus ?thesis
proof (cases)
assume "b1<b2"
with r1 r2 True have "expr (LESS e1 e2) ep env cd st=
Normal ((KValue (createBool ((ReadL\<^sub>i\<^sub>n\<^sub>t (ShowL\<^sub>i\<^sub>n\<^sub>t v1))<((ReadL\<^sub>i\<^sub>n\<^sub>t (ShowL\<^sub>i\<^sub>n\<^sub>t v2))))), Value TBool),st)"
using less_def plift.simps(4)[of "(<)"] by simp
hence "expr (LESS e1 e2) ep env cd st=Normal ((KValue (createBool (v1<v2)), Value TBool),st)"
using Read_ShowL_id by simp
with `b1 \<in> vbits` `b2 \<in> vbits` `b1<b2` show ?thesis using u i True by simp
next
assume "\<not> b1 < b2"
with l u i show ?thesis by simp
qed
next
assume "\<not> b2 \<in> vbits"
with l u i show ?thesis by simp
qed
next
case (ADDRESS _)
with l u show ?thesis by simp
next
case (BALANCE _)
with l u show ?thesis by simp
next
case THIS
with l u show ?thesis by simp
next
case SENDER
with l u show ?thesis by simp
next
case VALUE
with l u show ?thesis by simp
next
case TRUE
with l u show ?thesis by simp
next
case FALSE
with l u show ?thesis by simp
next
case (LVAL _)
with l u show ?thesis by simp
next
case (PLUS _ _)
with l u show ?thesis by simp
next
case (MINUS _ _)
with l u show ?thesis by simp
next
case (EQUAL _ _)
with l u show ?thesis by simp
next
case (LESS _ _)
with l u show ?thesis by simp
next
case (AND _ _)
with l u show ?thesis by simp
next
case (OR _ _)
with l u show ?thesis by simp
next
case (NOT _)
with l u show ?thesis by simp
next
case (CALL x181 x182)
with l u show ?thesis by simp
next
case (ECALL x191 x192 x193 x194)
with l u show ?thesis by simp
qed
next
assume "\<not> b1 \<in> vbits"
with l u show ?thesis by simp
qed
next
case False
then show ?thesis using no_gas by simp
qed
next
case (ADDRESS x3)
with l show ?thesis by simp
next
case (BALANCE x4)
with l show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case THIS
with l show ?thesis by simp
next
case SENDER
with l show ?thesis by simp
next
case VALUE
with l show ?thesis by simp
next
case TRUE
with l show ?thesis by simp
next
case FALSE
with l show ?thesis by simp
next
case (LVAL x7)
with l show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (PLUS x81 x82)
with l show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (MINUS x91 x92)
with l show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (EQUAL x101 x102)
with l show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (LESS x111 x112)
with l show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (AND x121 x122)
with l show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (OR x131 x132)
with l show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (NOT x131)
with l show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (CALL x181 x182)
with l show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by simp
next
case (ECALL x191 x192 x193 x194)
with l show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by simp
qed
next
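(* AND: Boolean literals in the first operand (TRUE/FALSE) lead to a nested case
   distinction on the second operand; all remaining operand shapes are discharged
   by simp or lift_eq. *)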
case a: (AND e1 e2)
show ?case
proof (cases "eupdate e1")
case (INT x11 x12)
with a show ?thesis by simp
next
case (UINT x21 x22)
with a show ?thesis by simp
next
case (ADDRESS x3)
with a show ?thesis by simp
next
case (BALANCE x4)
with a show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case THIS
with a show ?thesis by simp
next
case SENDER
with a show ?thesis by simp
next
case VALUE
with a show ?thesis by simp
next
case t: TRUE
show ?thesis
proof (cases "eupdate e2")
case (INT x11 x12)
with a t show ?thesis by simp
next
case (UINT x21 x22)
with a t show ?thesis by simp
next
case (ADDRESS x3)
with a t show ?thesis by simp
next
case (BALANCE x4)
with a t show ?thesis by simp
next
case THIS
with a t show ?thesis by simp
next
case SENDER
with a t show ?thesis by simp
next
case VALUE
with a t show ?thesis by simp
next
case TRUE
with a t show ?thesis by simp
next
case FALSE
with a t show ?thesis by simp
next
case (LVAL x7)
with a t show ?thesis by simp
next
case (PLUS x81 x82)
with a t show ?thesis by simp
next
case (MINUS x91 x92)
with a t show ?thesis by simp
next
case (EQUAL x101 x102)
with a t show ?thesis by simp
next
case (LESS x111 x112)
with a t show ?thesis by simp
next
case (AND x121 x122)
with a t show ?thesis by simp
next
case (OR x131 x132)
with a t show ?thesis by simp
next
case (NOT x131)
with a t show ?thesis by simp
next
case (CALL x181 x182)
with a t show ?thesis by simp
next
case (ECALL x191 x192 x193 x194)
with a t show ?thesis by simp
qed
next
case f: FALSE
show ?thesis
proof (cases "eupdate e2")
case (INT b v)
with a f show ?thesis by simp
next
case (UINT b v)
with a f show ?thesis by simp
next
case (ADDRESS x3)
with a f show ?thesis by simp
next
case (BALANCE x4)
with a f show ?thesis by simp
next
case THIS
with a f show ?thesis by simp
next
case SENDER
with a f show ?thesis by simp
next
case VALUE
with a f show ?thesis by simp
next
case TRUE
with a f show ?thesis by simp
next
case FALSE
with a f show ?thesis by simp
next
case (LVAL x7)
with a f show ?thesis by simp
next
case (PLUS x81 x82)
with a f show ?thesis by simp
next
case (MINUS x91 x92)
with a f show ?thesis by simp
next
case (EQUAL x101 x102)
with a f show ?thesis by simp
next
case (LESS x111 x112)
with a f show ?thesis by simp
next
case (AND x121 x122)
with a f show ?thesis by simp
next
case (OR x131 x132)
with a f show ?thesis by simp
next
case (NOT x131)
with a f show ?thesis by simp
next
case (CALL x181 x182)
with a f show ?thesis by simp
next
case (ECALL x191 x192 x193 x194)
with a f show ?thesis by simp
qed
next
case (LVAL x7)
with a show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case p: (PLUS x81 x82)
with a show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (MINUS x91 x92)
with a show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (EQUAL x101 x102)
with a show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (LESS x111 x112)
with a show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (AND x121 x122)
with a show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (OR x131 x132)
with a show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (NOT x131)
with a show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (CALL x181 x182)
with a show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by simp
next
case (ECALL x191 x192 x193 x194)
with a show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by simp
qed
next
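(* OR: handled exactly like AND, with a nested case distinction for Boolean
   literals and simp or lift_eq for the remaining shapes. *)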
case o: (OR e1 e2)
show ?case
proof (cases "eupdate e1")
case (INT x11 x12)
with o show ?thesis by simp
next
case (UINT x21 x22)
with o show ?thesis by simp
next
case (ADDRESS x3)
with o show ?thesis by simp
next
case (BALANCE x4)
with o show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case THIS
with o show ?thesis by simp
next
case SENDER
with o show ?thesis by simp
next
case VALUE
with o show ?thesis by simp
next
case t: TRUE
show ?thesis
proof (cases "eupdate e2")
case (INT x11 x12)
with o t show ?thesis by simp
next
case (UINT x21 x22)
with o t show ?thesis by simp
next
case (ADDRESS x3)
with o t show ?thesis by simp
next
case (BALANCE x4)
with o t show ?thesis by simp
next
case THIS
with o t show ?thesis by simp
next
case SENDER
with o t show ?thesis by simp
next
case VALUE
with o t show ?thesis by simp
next
case TRUE
with o t show ?thesis by simp
next
case FALSE
with o t show ?thesis by simp
next
case (LVAL x7)
with o t show ?thesis by simp
next
case (PLUS x81 x82)
with o t show ?thesis by simp
next
case (MINUS x91 x92)
with o t show ?thesis by simp
next
case (EQUAL x101 x102)
with o t show ?thesis by simp
next
case (LESS x111 x112)
with o t show ?thesis by simp
next
case (AND x121 x122)
with o t show ?thesis by simp
next
case (OR x131 x132)
with o t show ?thesis by simp
next
case (NOT x131)
with o t show ?thesis by simp
next
case (CALL x181 x182)
with o t show ?thesis by simp
next
case (ECALL x191 x192 x193 x194)
with o t show ?thesis by simp
qed
next
case f: FALSE
show ?thesis
proof (cases "eupdate e2")
case (INT b v)
with o f show ?thesis by simp
next
case (UINT b v)
with o f show ?thesis by simp
next
case (ADDRESS x3)
with o f show ?thesis by simp
next
case (BALANCE x4)
with o f show ?thesis by simp
next
case THIS
with o f show ?thesis by simp
next
case SENDER
with o f show ?thesis by simp
next
case VALUE
with o f show ?thesis by simp
next
case TRUE
with o f show ?thesis by simp
next
case FALSE
with o f show ?thesis by simp
next
case (LVAL x7)
with o f show ?thesis by simp
next
case (PLUS x81 x82)
with o f show ?thesis by simp
next
case (MINUS x91 x92)
with o f show ?thesis by simp
next
case (EQUAL x101 x102)
with o f show ?thesis by simp
next
case (LESS x111 x112)
with o f show ?thesis by simp
next
case (AND x121 x122)
with o f show ?thesis by simp
next
case (OR x131 x132)
with o f show ?thesis by simp
next
case (NOT x131)
with o f show ?thesis by simp
next
case (CALL x181 x182)
with o f show ?thesis by simp
next
case (ECALL x191 x192 x193 x194)
with o f show ?thesis by simp
qed
next
case (LVAL x7)
with o show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case p: (PLUS x81 x82)
with o show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (MINUS x91 x92)
with o show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (EQUAL x101 x102)
with o show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (LESS x111 x112)
with o show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (AND x121 x122)
with o show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (OR x131 x132)
with o show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (NOT x131)
with o show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by auto
next
case (CALL x181 x182)
with o show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by simp
next
case (ECALL x191 x192 x193 x194)
with o show ?thesis using lift_eq[of e1 ep env cd st "eupdate e1" e2 "eupdate e2"] by simp
qed
next
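(* NOT: a single case distinction on the folded operand suffices; every case
   follows by simp. *)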
case o: (NOT e)
show ?case
proof (cases "eupdate e")
case (INT x11 x12)
with o show ?thesis by simp
next
case (UINT x21 x22)
with o show ?thesis by simp
next
case (ADDRESS x3)
with o show ?thesis by simp
next
case (BALANCE x4)
with o show ?thesis by simp
next
case THIS
with o show ?thesis by simp
next
case SENDER
with o show ?thesis by simp
next
case VALUE
with o show ?thesis by simp
next
case t: TRUE
with o show ?thesis by simp
next
case f: FALSE
with o show ?thesis by simp
next
case (LVAL x7)
with o show ?thesis by simp
next
case p: (PLUS x81 x82)
with o show ?thesis by simp
next
case (MINUS x91 x92)
with o show ?thesis by simp
next
case (EQUAL x101 x102)
with o show ?thesis by simp
next
case (LESS x111 x112)
with o show ?thesis by simp
next
case (AND x121 x122)
with o show ?thesis by simp
next
case (OR x131 x132)
with o show ?thesis by simp
next
case (NOT x131)
with o show ?thesis by simp
next
case (CALL x181 x182)
with o show ?thesis by simp
next
case (ECALL x191 x192 x193 x194)
with o show ?thesis by simp
qed
next
case (CALL x181 x182)
show ?case by simp
next
case (ECALL x191 x192 x193 x194)
show ?case by simp
qed
end
|
import JuLIP.Chemistry: rnn
export rattle!, r_sum, r_dot,
swapxy!, swapxz!, swapyz!,
dist, displacement, rmin
############################################################
### Some useful utility functions
############################################################
### Robust Summation and Dot Products
############################################################
"Robust summation. Uses `sum_kbn`."
r_sum(a) = sum_kbn(a)
## NOTE: if I see this correctly, then r_dot allocates a temporary
## vector, which is likely quite a performance overhead.
## probably, we want to re-implement this.
## TODO: new version without the intermediate allocation
"Robust inner product. Defined as `r_dot(a, b) = r_sum(a .* b)`"
r_dot(a, b) = r_sum(a[:] .* b[:])
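# As noted above, `r_dot` materialises the temporary array `a[:] .* b[:]`.
# A possible allocation-free alternative is sketched below; it is not exported
# and not used anywhere yet, and the name `r_dot_kahan` is only a placeholder.
# It accumulates the products directly with Kahan compensation and assumes
# real (Float64-convertible) entries.
function r_dot_kahan(a, b)
   @assert length(a) == length(b)
   s = 0.0   # running sum
   c = 0.0   # running compensation term
   for i = 1:length(a)
      y = a[i] * b[i] - c
      t = s + y
      c = (t - s) - y
      s = t
   end
   return s
end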
"""
`rattle!(at::AbstractAtoms, r::Float64; rnn = 1.0, respect_constraint = true) -> at`
randomly perturbs the atom positions
* `r`: magnitude of perturbation
* `rnn` : nearest-neighbour distance
* `respect_constraint`: set false to also perturb the constrained atom positions
"""
function rattle!(at::AbstractAtoms, r::Float64; rnn = 1.0, respect_constraint = true)
# if there is no constraint, then revert to respect_constraint = false
if isa(constraint(at), NullConstraint)
respect_constraint = false
end
if respect_constraint
x = dofs(at)
x += r * rnn * 2.0/sqrt(3) * (rand(length(x)) - 0.5)
set_dofs!(at, x)
else
X = positions(at) |> mat
X += r * rnn * 2.0/sqrt(3) * (rand(size(X)) - 0.5)
set_positions!(at, X)
end
return at
end
"""
Use this instead of `warn`; warnings can then be turned off globally by setting
`Main.JULIPWARN = false`.
"""
function julipwarn(s)
if isdefined(Main, :JULIPWARN)
if Main.JULIPWARN == false
return false
end
end
warn(s)
end
"""
swap x and y position coordinates
"""
function swapxy!(at::AbstractAtoms)
X = positions(at) |> mat
X[[2,1],:] = X[[1,2],:]
set_positions!(at, X)
return at
end
"""
swap x and z position coordinates
"""
function swapxz!(at::AbstractAtoms)
X = positions(at) |> mat
X[[3,1],:] = X[[1,3],:]
set_positions!(at, X)
return at
end
"""
swap y and z position coordinates
"""
function swapyz!(at::AbstractAtoms)
X = positions(at) |> mat
X[[3,2],:] = X[[2,3],:]
set_positions!(at, X)
return at
end
"""
`dist(at, X1, X2, p = Inf)`
`dist(at1, at2, p = Inf)`
Returns the maximum distance (`p = Inf`, the default) or, more generally, the
p-norm of the per-atom distances between the two configurations `X1, X2` or
`at1, at2`. This implementation accounts for periodic boundary conditions (in
those coordinate directions where they are set to `true`).
"""
function dist{T}(at::AbstractAtoms,
X1::Vector{JVec{T}}, X2::Vector{JVec{T}},
p = Inf)
@assert length(X1) == length(X2)
F = cell(at)'
Finv = inv(F)
bcrem = [ p ? 1.0 : Inf for p in pbc(at) ]
d = [ norm(_project_pbc_(F, Finv, bcrem, x1 - x2))
for (x1, x2) in zip(X1, X2) ]
return norm(d, p)
end
dist(at::AbstractAtoms, X::Vector) = dist(at, positions(at), X)
function dist(at1::AbstractAtoms, at2::AbstractAtoms, p = Inf)
@assert vecnorm(cell(at1) - cell(at2), Inf) < 1e-14
return dist(at1, positions(at1), positions(at2), p)
end
function _project_pbc_(F, Finv, bcrem, x)
λ = Finv * x # convex coordinates
# convex coords projected to the unit
λp = JVecF(rem(λ[1], bcrem[1]), rem(λ[2], bcrem[2]), rem(λ[3], bcrem[3]))
return F * λp # convert back to real coordinates
end
function _project_coord_min_(λ, p)
if !p
return λ
end
λ = mod(λ, 1.0) # project to cell
if λ > 0.5 # periodic image with minimal length
λ = λ - 1.0
end
return λ
end
function _project_pbc_min_(F, Finv, p, x)
λ = Finv * x # convex coordinates
# convex coords projected to the unit
λp = _project_coord_min_.(λ, JVec{Bool}(p...))
return F * λp # convert back to real coordinates
end
function displacement{T}(at::AbstractAtoms, X1::Vector{JVec{T}}, X2::Vector{JVec{T}})
@assert length(X1) == length(X2)
F = defm(at)
Finv = inv(F)
p = pbc(at)
U = [ _project_pbc_min_(F, Finv, p, x2-x1)
for (x1, x2) in zip(X1, X2) ]
return U
end
project_min(at, u) = _project_pbc_min_(defm(at), inv(defm(at)), pbc(at), u)
function rnn(at::AbstractAtoms)
syms = unique(chemical_symbols(at))
return minimum(rnn.(syms))
end
function wrap_pbc!(at)
X = positions(at)
F = defm(at)
X = [_project_pbc_min_(F, inv(F), pbc(at), x) for x in X]
set_positions!(at, X)
end
function rmin(at::AbstractAtoms)
at2 = at * 2
X = positions(at2)
r = norm(X[1]-X[2])
for n = 1:length(X)-1, m = n+1:length(X)
r = min(r, norm(X[n]-X[m]))
end
return r
end
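# Usage sketch (illustrative only; `bulk` below is assumed to come from the
# JuLIP build interface and is not defined in this file):
#     at = bulk("Si", cubic=true) * 3
#     X0 = positions(at)
#     rattle!(at, 0.05)              # random perturbation with magnitude r = 0.05
#     dist(at, X0, positions(at))    # PBC-aware maximum displacement
#     rmin(at)                       # smallest interatomic distance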
|
From stdpp Require Export binders strings.
From stdpp Require Import gmap.
From iris.algebra Require Export ofe.
From trillium Require Export language ectx_language ectxi_language adequacy.
From trillium.fairness.heap_lang Require Export locations.
Set Default Proof Using "Type".
(** heap_lang. A fairly simple language used for common Iris examples.
- This is a right-to-left evaluated language, like CakeML and OCaml. The reason
for this is that it makes curried functions usable: Given a WP for [f a b], we
know that any effects [f] might have will not matter until after *both* [a] and
[b] are evaluated. With left-to-right evaluation, that triple is basically
useless unless the user let-expands [b].
- For prophecy variables, we annotate the reduction steps with an "observation"
and tweak adequacy such that WP knows all future observations. There is
another possible choice: Use non-deterministic choice when creating a prophecy
variable ([NewProph]), and when resolving it ([Resolve]) make the
program diverge unless the variable matches. That, however, requires an
erasure proof that this endless loop does not make specifications useless.
The expression [Resolve e p v] attaches a prophecy resolution (for prophecy
variable [p] to value [v]) to the top-level head-reduction step of [e]. The
prophecy resolution happens simultaneously with the head-step being taken.
Furthermore, it is required that the head-step produces a value (otherwise
the [Resolve] is stuck), and this value is also attached to the resolution.
A prophecy variable is thus resolved to a pair containing (1) the result
value of the wrapped expression (called [e] above), and (2) the value that
was attached by the [Resolve] (called [v] above). This allows, for example,
to distinguish a resolution originating from a successful [CmpXchg] from one
originating from a failing [CmpXchg]. For example:
- [Resolve (CmpXchg #l #n #(n+1)) #p v] will behave as [CmpXchg #l #n #(n+1)],
which means it steps to a value-boolean pair [(n', b)] while updating the heap, but
in the meantime the prophecy variable [p] will be resolved to [((n', b), v)].
- [Resolve (! #l) #p v] will behave as [! #l], that is return the value
[w] pointed to by [l] on the heap (assuming it was allocated properly),
but it will additionally resolve [p] to the pair [(w,v)].
Note that the sub-expressions of [Resolve e p v] (i.e., [e], [p] and [v])
are reduced as usual, from right to left. However, the evaluation of [e]
is restricted so that the head-step to which the resolution is attached
cannot be taken by the context. For example:
- [Resolve (CmpXchg #l #n (#n + #1)) #p v] will first be reduced (by a
context-step) to [Resolve (CmpXchg #l #n #(n+1)) #p v], and then behave as
described above.
- However, [Resolve ((λ: "n", CmpXchg #l "n" ("n" + #1)) #n) #p v] is stuck.
Indeed, it can only be evaluated using a head-step (it is a β-redex),
but the process does not yield a value.
The mechanism described above supports nesting [Resolve] expressions to
attach several prophecy resolutions to a head-redex. *)
Delimit Scope expr_scope with E.
Delimit Scope val_scope with V.
Module heap_lang.
Open Scope Z_scope.
(** Expressions and vals. *)
Definition proph_id := positive.
(** We have a notion of "poison" as a variant of unit that may not be compared
with anything. This is useful for erasure proofs: if we erased things to unit,
[<erased> == unit] would evaluate to true after erasure, changing program
behavior. So we erase to the poison value instead, making sure that no legal
comparisons could be affected. *)
Inductive base_lit : Set :=
| LitInt (n : Z) | LitBool (b : bool) | LitUnit | LitPoison
| LitLoc (l : loc) | LitProphecy (p: proph_id).
Inductive un_op : Set :=
| NegOp | MinusUnOp.
Inductive bin_op : Set :=
| PlusOp | MinusOp | MultOp | QuotOp | RemOp (* Arithmetic *)
| AndOp | OrOp | XorOp (* Bitwise *)
| ShiftLOp | ShiftROp (* Shifts *)
| LeOp | LtOp | EqOp (* Relations *)
| OffsetOp. (* Pointer offset *)
Inductive expr :=
(* Values *)
| Val (v : val)
(* Base lambda calculus *)
| Var (x : string)
| Rec (f x : binder) (e : expr)
| App (e1 e2 : expr)
(* Base types and their operations *)
| UnOp (op : un_op) (e : expr)
| BinOp (op : bin_op) (e1 e2 : expr)
| If (e0 e1 e2 : expr)
(* Products *)
| Pair (e1 e2 : expr)
| Fst (e : expr)
| Snd (e : expr)
(* Sums *)
| InjL (e : expr)
| InjR (e : expr)
| Case (e0 : expr) (e1 : expr) (e2 : expr)
(* Concurrency *)
| Fork (e : expr)
(* Heap *)
| AllocN (e1 e2 : expr) (* array length (positive number), initial value *)
| Load (e : expr)
| Store (e1 : expr) (e2 : expr)
| CmpXchg (e0 : expr) (e1 : expr) (e2 : expr) (* Compare-exchange *)
| FAA (e1 : expr) (e2 : expr) (* Fetch-and-add *)
(* Non-determinism *)
| ChooseNat
with val :=
| LitV (l : base_lit)
| RecV (f x : binder) (e : expr)
| PairV (v1 v2 : val)
| InjLV (v : val)
| InjRV (v : val).
Bind Scope expr_scope with expr.
Bind Scope val_scope with val.
(** An observation associates a prophecy variable (identifier) to a pair of
values. The first value is the one that was returned by the (atomic) operation
during which the prophecy resolution happened (typically, a boolean when the
wrapped operation is a CmpXchg). The second value is the one that the prophecy
variable was actually resolved to. *)
Definition observation : Set := proph_id * (val * val).
Notation of_val := Val (only parsing).
Definition to_val (e : expr) : option val :=
match e with
| Val v => Some v
| _ => None
end.
(** We assume the following encoding of values to 64-bit words: The least 3
significant bits of every word are a "tag", and we have 61 bits of payload,
which is enough if all pointers are 8-byte-aligned (common on 64bit
architectures). The tags have the following meaning:
0: Payload is the data for a LitV (LitInt _).
1: Payload is the data for a InjLV (LitV (LitInt _)).
2: Payload is the data for a InjRV (LitV (LitInt _)).
3: Payload is the data for a LitV (LitLoc _).
4: Payload is the data for a InjLV (LitV (LitLoc _)).
5: Payload is the data for a InjRV (LitV (LitLoc _)).
6: Payload is one of the following finitely many values, which 61 bits are more
than enough to encode:
LitV LitUnit, InjLV (LitV LitUnit), InjRV (LitV LitUnit),
LitV LitPoison, InjLV (LitV LitPoison), InjRV (LitV LitPoison),
LitV (LitBool _), InjLV (LitV (LitBool _)), InjRV (LitV (LitBool _)).
7: Value is boxed, i.e., payload is a pointer to some read-only memory area on
the heap which stores whether this is a RecV, PairV, InjLV or InjRV and the
relevant data for those cases. However, the boxed representation is never
used if any of the above representations could be used.
Ignoring (as usual) the fact that we have to fit the infinite Z/loc into 61
bits, this means every value is machine-word-sized and can hence be atomically
read and written. Also notice that the sets of boxed and unboxed values are
disjoint. *)
Definition lit_is_unboxed (l: base_lit) : Prop :=
match l with
(** Disallow comparing (erased) prophecies with (erased) prophecies, by
considering them boxed. *)
| LitProphecy _ | LitPoison => False
| _ => True
end.
Definition val_is_unboxed (v : val) : Prop :=
match v with
| LitV l => lit_is_unboxed l
| InjLV (LitV l) => lit_is_unboxed l
| InjRV (LitV l) => lit_is_unboxed l
| _ => False
end.
#[global] Instance lit_is_unboxed_dec l : Decision (lit_is_unboxed l).
Proof. destruct l; simpl; exact (decide _). Defined.
#[global] Instance val_is_unboxed_dec v : Decision (val_is_unboxed v).
Proof. destruct v as [ | | | [] | [] ]; simpl; exact (decide _). Defined.
(** We just compare the word-sized representation of two values, without looking
into boxed data. This works out fine if at least one of the to-be-compared
values is unboxed (exploiting the fact that an unboxed and a boxed value can
never be equal because these are disjoint sets). *)
Definition vals_compare_safe (vl v1 : val) : Prop :=
val_is_unboxed vl ∨ val_is_unboxed v1.
Arguments vals_compare_safe !_ !_ /.
(** The state: heaps of vals. *)
Record state : Type := {
heap: gmap loc val;
used_proph_id: gset proph_id;
}.
(** Equality and other typeclass stuff *)
Lemma to_of_val v : to_val (of_val v) = Some v.
Proof. by destruct v. Qed.
Lemma of_to_val e v : to_val e = Some v → of_val v = e.
Proof. destruct e=>//=. by intros [= <-]. Qed.
#[global] Instance of_val_inj : Inj (=) (=) of_val.
Proof. intros ??. congruence. Qed.
#[global] Instance base_lit_eq_dec : EqDecision base_lit.
Proof. solve_decision. Defined.
#[global] Instance un_op_eq_dec : EqDecision un_op.
Proof. solve_decision. Defined.
#[global] Instance bin_op_eq_dec : EqDecision bin_op.
Proof. solve_decision. Defined.
#[global] Instance expr_eq_dec : EqDecision expr.
Proof.
refine (
fix go (e1 e2 : expr) {struct e1} : Decision (e1 = e2) :=
match e1, e2 with
| Val v, Val v' => cast_if (decide (v = v'))
| Var x, Var x' => cast_if (decide (x = x'))
| Rec f x e, Rec f' x' e' =>
cast_if_and3 (decide (f = f')) (decide (x = x')) (decide (e = e'))
| App e1 e2, App e1' e2' => cast_if_and (decide (e1 = e1')) (decide (e2 = e2'))
| UnOp o e, UnOp o' e' => cast_if_and (decide (o = o')) (decide (e = e'))
| BinOp o e1 e2, BinOp o' e1' e2' =>
cast_if_and3 (decide (o = o')) (decide (e1 = e1')) (decide (e2 = e2'))
| If e0 e1 e2, If e0' e1' e2' =>
cast_if_and3 (decide (e0 = e0')) (decide (e1 = e1')) (decide (e2 = e2'))
| Pair e1 e2, Pair e1' e2' =>
cast_if_and (decide (e1 = e1')) (decide (e2 = e2'))
| Fst e, Fst e' => cast_if (decide (e = e'))
| Snd e, Snd e' => cast_if (decide (e = e'))
| InjL e, InjL e' => cast_if (decide (e = e'))
| InjR e, InjR e' => cast_if (decide (e = e'))
| Case e0 e1 e2, Case e0' e1' e2' =>
cast_if_and3 (decide (e0 = e0')) (decide (e1 = e1')) (decide (e2 = e2'))
| Fork e, Fork e' => cast_if (decide (e = e'))
| AllocN e1 e2, AllocN e1' e2' =>
cast_if_and (decide (e1 = e1')) (decide (e2 = e2'))
| Load e, Load e' => cast_if (decide (e = e'))
| Store e1 e2, Store e1' e2' =>
cast_if_and (decide (e1 = e1')) (decide (e2 = e2'))
| CmpXchg e0 e1 e2, CmpXchg e0' e1' e2' =>
cast_if_and3 (decide (e0 = e0')) (decide (e1 = e1')) (decide (e2 = e2'))
| FAA e1 e2, FAA e1' e2' =>
cast_if_and (decide (e1 = e1')) (decide (e2 = e2'))
| ChooseNat, ChooseNat => left _
| _, _ => right _
end
with gov (v1 v2 : val) {struct v1} : Decision (v1 = v2) :=
match v1, v2 with
| LitV l, LitV l' => cast_if (decide (l = l'))
| RecV f x e, RecV f' x' e' =>
cast_if_and3 (decide (f = f')) (decide (x = x')) (decide (e = e'))
| PairV e1 e2, PairV e1' e2' =>
cast_if_and (decide (e1 = e1')) (decide (e2 = e2'))
| InjLV e, InjLV e' => cast_if (decide (e = e'))
| InjRV e, InjRV e' => cast_if (decide (e = e'))
| _, _ => right _
end
for go); try (clear go gov; abstract intuition congruence).
Defined.
#[global] Instance val_eq_dec : EqDecision val.
Proof. solve_decision. Defined.
#[global] Instance base_lit_countable : Countable base_lit.
Proof.
refine (inj_countable' (λ l, match l with
| LitInt n => (inl (inl n), None)
| LitBool b => (inl (inr b), None)
| LitUnit => (inr (inl false), None)
| LitPoison => (inr (inl true), None)
| LitLoc l => (inr (inr l), None)
| LitProphecy p => (inr (inl false), Some p)
end) (λ l, match l with
| (inl (inl n), None) => LitInt n
| (inl (inr b), None) => LitBool b
| (inr (inl false), None) => LitUnit
| (inr (inl true), None) => LitPoison
| (inr (inr l), None) => LitLoc l
| (_, Some p) => LitProphecy p
end) _); by intros [].
Qed.
#[global] Instance un_op_finite : Countable un_op.
Proof.
refine (inj_countable' (λ op, match op with NegOp => 0 | MinusUnOp => 1 end)
(λ n, match n with 0 => NegOp | _ => MinusUnOp end) _); by intros [].
Qed.
#[global] Instance bin_op_countable : Countable bin_op.
Proof.
refine (inj_countable' (λ op, match op with
| PlusOp => 0 | MinusOp => 1 | MultOp => 2 | QuotOp => 3 | RemOp => 4
| AndOp => 5 | OrOp => 6 | XorOp => 7 | ShiftLOp => 8 | ShiftROp => 9
| LeOp => 10 | LtOp => 11 | EqOp => 12 | OffsetOp => 13
end) (λ n, match n with
| 0 => PlusOp | 1 => MinusOp | 2 => MultOp | 3 => QuotOp | 4 => RemOp
| 5 => AndOp | 6 => OrOp | 7 => XorOp | 8 => ShiftLOp | 9 => ShiftROp
| 10 => LeOp | 11 => LtOp | 12 => EqOp | _ => OffsetOp
end) _); by intros [].
Qed.
#[global] Instance expr_countable : Countable expr.
Proof.
set (enc :=
fix go e :=
match e with
| Val v => GenNode 0 [gov v]
| Var x => GenLeaf (inl (inl x))
| Rec f x e => GenNode 1 [GenLeaf (inl (inr f)); GenLeaf (inl (inr x)); go e]
| App e1 e2 => GenNode 2 [go e1; go e2]
| UnOp op e => GenNode 3 [GenLeaf (inr (inr (inl op))); go e]
| BinOp op e1 e2 => GenNode 4 [GenLeaf (inr (inr (inr op))); go e1; go e2]
| If e0 e1 e2 => GenNode 5 [go e0; go e1; go e2]
| Pair e1 e2 => GenNode 6 [go e1; go e2]
| Fst e => GenNode 7 [go e]
| Snd e => GenNode 8 [go e]
| InjL e => GenNode 9 [go e]
| InjR e => GenNode 10 [go e]
| Case e0 e1 e2 => GenNode 11 [go e0; go e1; go e2]
| Fork e => GenNode 12 [go e]
| AllocN e1 e2 => GenNode 13 [go e1; go e2]
| Load e => GenNode 14 [go e]
| Store e1 e2 => GenNode 15 [go e1; go e2]
| CmpXchg e0 e1 e2 => GenNode 16 [go e0; go e1; go e2]
| FAA e1 e2 => GenNode 17 [go e1; go e2]
| ChooseNat => GenNode 18 []
end
with gov v :=
match v with
| LitV l => GenLeaf (inr (inl l))
| RecV f x e =>
GenNode 0 [GenLeaf (inl (inr f)); GenLeaf (inl (inr x)); go e]
| PairV v1 v2 => GenNode 1 [gov v1; gov v2]
| InjLV v => GenNode 2 [gov v]
| InjRV v => GenNode 3 [gov v]
end
for go).
set (dec :=
fix go e :=
match e with
| GenNode 0 [v] => Val (gov v)
| GenLeaf (inl (inl x)) => Var x
| GenNode 1 [GenLeaf (inl (inr f)); GenLeaf (inl (inr x)); e] => Rec f x (go e)
| GenNode 2 [e1; e2] => App (go e1) (go e2)
| GenNode 3 [GenLeaf (inr (inr (inl op))); e] => UnOp op (go e)
| GenNode 4 [GenLeaf (inr (inr (inr op))); e1; e2] => BinOp op (go e1) (go e2)
| GenNode 5 [e0; e1; e2] => If (go e0) (go e1) (go e2)
| GenNode 6 [e1; e2] => Pair (go e1) (go e2)
| GenNode 7 [e] => Fst (go e)
| GenNode 8 [e] => Snd (go e)
| GenNode 9 [e] => InjL (go e)
| GenNode 10 [e] => InjR (go e)
| GenNode 11 [e0; e1; e2] => Case (go e0) (go e1) (go e2)
| GenNode 12 [e] => Fork (go e)
| GenNode 13 [e1; e2] => AllocN (go e1) (go e2)
| GenNode 14 [e] => Load (go e)
| GenNode 15 [e1; e2] => Store (go e1) (go e2)
| GenNode 16 [e0; e1; e2] => CmpXchg (go e0) (go e1) (go e2)
| GenNode 17 [e1; e2] => FAA (go e1) (go e2)
| GenNode 18 [] => ChooseNat
| _ => Val $ LitV LitUnit (* dummy *)
end
with gov v :=
match v with
| GenLeaf (inr (inl l)) => LitV l
| GenNode 0 [GenLeaf (inl (inr f)); GenLeaf (inl (inr x)); e] => RecV f x (go e)
| GenNode 1 [v1; v2] => PairV (gov v1) (gov v2)
| GenNode 2 [v] => InjLV (gov v)
| GenNode 3 [v] => InjRV (gov v)
| _ => LitV LitUnit (* dummy *)
end
for go).
refine (inj_countable' enc dec _).
refine (fix go (e : expr) {struct e} := _ with gov (v : val) {struct v} := _ for go).
- destruct e as [v| | | | | | | | | | | | | | | | | | |]; simpl; f_equal;
[exact (gov v)|done..].
- destruct v; by f_equal.
Qed.
#[global] Instance val_countable : Countable val.
Proof. refine (inj_countable of_val to_val _); auto using to_of_val. Qed.
#[global] Instance state_inhabited : Inhabited state :=
populate {| heap := inhabitant; used_proph_id := inhabitant |}.
#[global] Instance val_inhabited : Inhabited val := populate (LitV LitUnit).
#[global] Instance expr_inhabited : Inhabited expr := populate (Val inhabitant).
Canonical Structure stateO := leibnizO state.
Canonical Structure locO := leibnizO loc.
Canonical Structure valO := leibnizO val.
Canonical Structure exprO := leibnizO expr.
(** Evaluation contexts *)
Inductive ectx_item :=
| AppLCtx (v2 : val)
| AppRCtx (e1 : expr)
| UnOpCtx (op : un_op)
| BinOpLCtx (op : bin_op) (v2 : val)
| BinOpRCtx (op : bin_op) (e1 : expr)
| IfCtx (e1 e2 : expr)
| PairLCtx (v2 : val)
| PairRCtx (e1 : expr)
| FstCtx
| SndCtx
| InjLCtx
| InjRCtx
| CaseCtx (e1 : expr) (e2 : expr)
| AllocNLCtx (v2 : val)
| AllocNRCtx (e1 : expr)
| LoadCtx
| StoreLCtx (v2 : val)
| StoreRCtx (e1 : expr)
| CmpXchgLCtx (v1 : val) (v2 : val)
| CmpXchgMCtx (e0 : expr) (v2 : val)
| CmpXchgRCtx (e0 : expr) (e1 : expr)
| FaaLCtx (v2 : val)
| FaaRCtx (e1 : expr).
(** Contextual closure will only reduce [e] in [Resolve e (Val _) (Val _)] if
the local context of [e] is non-empty. As a consequence, the first argument of
[Resolve] is not completely evaluated (down to a value) by contextual closure:
no head steps (i.e., surface reductions) are taken. This means that contextual
closure will reduce [Resolve (CmpXchg #l #n (#n + #1)) #p #v] into [Resolve
(CmpXchg #l #n #(n+1)) #p #v], but it cannot context-step any further. *)
Definition fill_item (Ki : ectx_item) (e : expr) : expr :=
match Ki with
| AppLCtx v2 => App e (of_val v2)
| AppRCtx e1 => App e1 e
| UnOpCtx op => UnOp op e
| BinOpLCtx op v2 => BinOp op e (Val v2)
| BinOpRCtx op e1 => BinOp op e1 e
| IfCtx e1 e2 => If e e1 e2
| PairLCtx v2 => Pair e (Val v2)
| PairRCtx e1 => Pair e1 e
| FstCtx => Fst e
| SndCtx => Snd e
| InjLCtx => InjL e
| InjRCtx => InjR e
| CaseCtx e1 e2 => Case e e1 e2
| AllocNLCtx v2 => AllocN e (Val v2)
| AllocNRCtx e1 => AllocN e1 e
| LoadCtx => Load e
| StoreLCtx v2 => Store e (Val v2)
| StoreRCtx e1 => Store e1 e
| CmpXchgLCtx v1 v2 => CmpXchg e (Val v1) (Val v2)
| CmpXchgMCtx e0 v2 => CmpXchg e0 e (Val v2)
| CmpXchgRCtx e0 e1 => CmpXchg e0 e1 e
| FaaLCtx v2 => FAA e (Val v2)
| FaaRCtx e1 => FAA e1 e
end.
(** Substitution *)
Fixpoint subst (x : string) (v : val) (e : expr) : expr :=
match e with
| Val _ => e
| Var y => if decide (x = y) then Val v else Var y
| Rec f y e =>
Rec f y $ if decide (BNamed x ≠ f ∧ BNamed x ≠ y) then subst x v e else e
| App e1 e2 => App (subst x v e1) (subst x v e2)
| UnOp op e => UnOp op (subst x v e)
| BinOp op e1 e2 => BinOp op (subst x v e1) (subst x v e2)
| If e0 e1 e2 => If (subst x v e0) (subst x v e1) (subst x v e2)
| Pair e1 e2 => Pair (subst x v e1) (subst x v e2)
| Fst e => Fst (subst x v e)
| Snd e => Snd (subst x v e)
| InjL e => InjL (subst x v e)
| InjR e => InjR (subst x v e)
| Case e0 e1 e2 => Case (subst x v e0) (subst x v e1) (subst x v e2)
| Fork e => Fork (subst x v e)
| AllocN e1 e2 => AllocN (subst x v e1) (subst x v e2)
| Load e => Load (subst x v e)
| Store e1 e2 => Store (subst x v e1) (subst x v e2)
| CmpXchg e0 e1 e2 => CmpXchg (subst x v e0) (subst x v e1) (subst x v e2)
| FAA e1 e2 => FAA (subst x v e1) (subst x v e2)
| ChooseNat => ChooseNat
end.
Definition subst' (mx : binder) (v : val) : expr → expr :=
match mx with BNamed x => subst x v | BAnon => id end.
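(* For instance, substituting [x := 1] in the open term [x + y] only touches the
   free occurrence of [x]:
     subst "x" (LitV (LitInt 1)) (BinOp PlusOp (Var "x") (Var "y"))
       = BinOp PlusOp (Val (LitV (LitInt 1))) (Var "y"). *)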
(** The stepping relation *)
Definition un_op_eval (op : un_op) (v : val) : option val :=
match op, v with
| NegOp, LitV (LitBool b) => Some $ LitV $ LitBool (negb b)
| NegOp, LitV (LitInt n) => Some $ LitV $ LitInt (Z.lnot n)
| MinusUnOp, LitV (LitInt n) => Some $ LitV $ LitInt (- n)
| _, _ => None
end.
Definition bin_op_eval_int (op : bin_op) (n1 n2 : Z) : option base_lit :=
match op with
| PlusOp => Some $ LitInt (n1 + n2)
| MinusOp => Some $ LitInt (n1 - n2)
| MultOp => Some $ LitInt (n1 * n2)
| QuotOp => Some $ LitInt (n1 `quot` n2)
| RemOp => Some $ LitInt (n1 `rem` n2)
| AndOp => Some $ LitInt (Z.land n1 n2)
| OrOp => Some $ LitInt (Z.lor n1 n2)
| XorOp => Some $ LitInt (Z.lxor n1 n2)
| ShiftLOp => Some $ LitInt (n1 ≪ n2)
| ShiftROp => Some $ LitInt (n1 ≫ n2)
| LeOp => Some $ LitBool (bool_decide (n1 ≤ n2))
| LtOp => Some $ LitBool (bool_decide (n1 < n2))
| EqOp => Some $ LitBool (bool_decide (n1 = n2))
| OffsetOp => None (* Pointer arithmetic *)
end.
Definition bin_op_eval_bool (op : bin_op) (b1 b2 : bool) : option base_lit :=
match op with
| PlusOp | MinusOp | MultOp | QuotOp | RemOp => None (* Arithmetic *)
| AndOp => Some (LitBool (b1 && b2))
| OrOp => Some (LitBool (b1 || b2))
| XorOp => Some (LitBool (xorb b1 b2))
| ShiftLOp | ShiftROp => None (* Shifts *)
| LeOp | LtOp => None (* InEquality *)
| EqOp => Some (LitBool (bool_decide (b1 = b2)))
| OffsetOp => None (* Pointer arithmetic *)
end.
Definition bin_op_eval (op : bin_op) (v1 v2 : val) : option val :=
if decide (op = EqOp) then
(* Crucially, this compares the same way as [CmpXchg]! *)
if decide (vals_compare_safe v1 v2) then
Some $ LitV $ LitBool $ bool_decide (v1 = v2)
else
None
else
match v1, v2 with
| LitV (LitInt n1), LitV (LitInt n2) => LitV <$> bin_op_eval_int op n1 n2
| LitV (LitBool b1), LitV (LitBool b2) => LitV <$> bin_op_eval_bool op b1 b2
| LitV (LitLoc l), LitV (LitInt off) => Some $ LitV $ LitLoc (l +ₗ off)
| _, _ => None
end.
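(* Example (sketch): on integer literals the interpreter computes the expected
   result, e.g.
     bin_op_eval PlusOp (LitV (LitInt 2)) (LitV (LitInt 3))
       = Some (LitV (LitInt 5)),
   while [EqOp] on two boxed values (e.g. two closures) yields [None] because
   [vals_compare_safe] fails. *)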
Definition state_upd_heap (f: gmap loc val → gmap loc val) (σ: state) : state :=
{| heap := f σ.(heap); used_proph_id := σ.(used_proph_id) |}.
Arguments state_upd_heap _ !_ /.
Definition state_upd_used_proph_id (f: gset proph_id → gset proph_id) (σ: state) : state :=
{| heap := σ.(heap); used_proph_id := f σ.(used_proph_id) |}.
Arguments state_upd_used_proph_id _ !_ /.
Fixpoint heap_array (l : loc) (vs : list val) : gmap loc val :=
match vs with
| [] => ∅
| v :: vs' => {[l := v]} ∪ heap_array (l +ₗ 1) vs'
end.
Lemma heap_array_singleton l v : heap_array l [v] = {[l := v]}.
Proof. by rewrite /heap_array right_id. Qed.
Lemma heap_array_lookup l vs w k :
heap_array l vs !! k = Some w ↔
∃ j, 0 ≤ j ∧ k = l +ₗ j ∧ vs !! (Z.to_nat j) = Some w.
Proof.
revert k l; induction vs as [|v' vs IH]=> l' l /=.
{ rewrite lookup_empty. naive_solver lia. }
rewrite -insert_union_singleton_l lookup_insert_Some IH. split.
- intros [[-> ->] | (Hl & j & ? & -> & ?)].
{ exists 0. rewrite loc_add_0. naive_solver lia. }
exists (1 + j). rewrite loc_add_assoc !Z.add_1_l Z2Nat.inj_succ; auto with lia.
- intros (j & ? & -> & Hil). destruct (decide (j = 0)); simplify_eq/=.
{ rewrite loc_add_0; eauto. }
right. split.
{ rewrite -{1}(loc_add_0 l). intros ?%(inj _); lia. }
assert (Z.to_nat j = S (Z.to_nat (j - 1))) as Hj.
{ rewrite -Z2Nat.inj_succ; last lia. f_equal; lia. }
rewrite Hj /= in Hil.
exists (j - 1). rewrite loc_add_assoc Z.add_sub_assoc Z.add_simpl_l.
auto with lia.
Qed.
Lemma heap_array_map_disjoint (h : gmap loc val) (l : loc) (vs : list val) :
(∀ i, (0 ≤ i) → (i < length vs) → h !! (l +ₗ i) = None) →
(heap_array l vs) ##ₘ h.
Proof.
intros Hdisj. apply map_disjoint_spec=> l' v1 v2.
intros (j&?&->&Hj%lookup_lt_Some%inj_lt)%heap_array_lookup.
move: Hj. rewrite Z2Nat.id // => ?. by rewrite Hdisj.
Qed.
(* [h] is added on the right here to make [state_init_heap_singleton] true. *)
Definition state_init_heap (l : loc) (n : Z) (v : val) (σ : state) : state :=
state_upd_heap (λ h, heap_array l (replicate (Z.to_nat n) v) ∪ h) σ.
Lemma state_init_heap_singleton l v σ :
state_init_heap l 1 v σ = state_upd_heap <[l:=v]> σ.
Proof.
destruct σ as [h p]. rewrite /state_init_heap /=. f_equiv.
rewrite right_id insert_union_singleton_l. done.
Qed.
Inductive head_step : expr → state → expr → state → list expr → Prop :=
| RecS f x e σ :
head_step (Rec f x e) σ (Val $ RecV f x e) σ []
| PairS v1 v2 σ :
head_step (Pair (Val v1) (Val v2)) σ (Val $ PairV v1 v2) σ []
| InjLS v σ :
head_step (InjL $ Val v) σ (Val $ InjLV v) σ []
| InjRS v σ :
head_step (InjR $ Val v) σ (Val $ InjRV v) σ []
| BetaS f x e1 v2 e' σ :
e' = subst' x v2 (subst' f (RecV f x e1) e1) →
head_step (App (Val $ RecV f x e1) (Val v2)) σ e' σ []
| UnOpS op v v' σ :
un_op_eval op v = Some v' →
head_step (UnOp op (Val v)) σ (Val v') σ []
| BinOpS op v1 v2 v' σ :
bin_op_eval op v1 v2 = Some v' →
head_step (BinOp op (Val v1) (Val v2)) σ (Val v') σ []
| IfTrueS e1 e2 σ :
head_step (If (Val $ LitV $ LitBool true) e1 e2) σ e1 σ []
| IfFalseS e1 e2 σ :
head_step (If (Val $ LitV $ LitBool false) e1 e2) σ e2 σ []
| FstS v1 v2 σ :
head_step (Fst (Val $ PairV v1 v2)) σ (Val v1) σ []
| SndS v1 v2 σ :
head_step (Snd (Val $ PairV v1 v2)) σ (Val v2) σ []
| CaseLS v e1 e2 σ :
head_step (Case (Val $ InjLV v) e1 e2) σ (App e1 (Val v)) σ []
| CaseRS v e1 e2 σ :
head_step (Case (Val $ InjRV v) e1 e2) σ (App e2 (Val v)) σ []
| ForkS e σ:
head_step (Fork e) σ (Val $ LitV LitUnit) σ [e]
| AllocNS n v σ l :
0 < n →
(∀ i, 0 ≤ i → i < n → σ.(heap) !! (l +ₗ i) = None) →
head_step (AllocN (Val $ LitV $ LitInt n) (Val v)) σ
(Val $ LitV $ LitLoc l) (state_init_heap l n v σ)
[]
| LoadS l v σ :
σ.(heap) !! l = Some v →
head_step (Load (Val $ LitV $ LitLoc l)) σ (of_val v) σ []
| StoreS l v σ :
is_Some (σ.(heap) !! l) →
head_step (Store (Val $ LitV $ LitLoc l) (Val v)) σ
(Val $ LitV LitUnit) (state_upd_heap <[l:=v]> σ)
[]
| CmpXchgS l v1 v2 vl σ b :
σ.(heap) !! l = Some vl →
(* Crucially, this compares the same way as [EqOp]! *)
vals_compare_safe vl v1 →
b = bool_decide (vl = v1) →
head_step (CmpXchg (Val $ LitV $ LitLoc l) (Val v1) (Val v2)) σ
(Val $ PairV vl (LitV $ LitBool b)) (if b then state_upd_heap <[l:=v2]> σ else σ)
[]
| FaaS l i1 i2 σ :
σ.(heap) !! l = Some (LitV (LitInt i1)) →
head_step (FAA (Val $ LitV $ LitLoc l) (Val $ LitV $ LitInt i2)) σ
(Val $ LitV $ LitInt i1) (state_upd_heap <[l:=LitV (LitInt (i1 + i2))]>σ)
[]
| ChooseNatS (n:nat) σ:
head_step ChooseNat σ (Val $ LitV $ LitInt n) σ []
.
(** Basic properties about the language *)
#[global] Instance fill_item_inj Ki : Inj (=) (=) (fill_item Ki).
Proof. induction Ki; intros ???; simplify_eq/=; auto with f_equal. Qed.
Lemma fill_item_val Ki e :
is_Some (to_val (fill_item Ki e)) → is_Some (to_val e).
Proof. intros [v ?]. induction Ki; simplify_option_eq; eauto. Qed.
Lemma val_head_stuck e1 σ1 e2 σ2 efs : head_step e1 σ1 e2 σ2 efs → to_val e1 = None.
Proof. destruct 1; naive_solver. Qed.
Lemma head_ctx_step_val Ki e σ1 e2 σ2 efs :
head_step (fill_item Ki e) σ1 e2 σ2 efs → is_Some (to_val e).
Proof. revert e2. induction Ki; inversion_clear 1; simplify_option_eq; eauto. Qed.
Lemma fill_item_no_val_inj Ki1 Ki2 e1 e2 :
to_val e1 = None → to_val e2 = None →
fill_item Ki1 e1 = fill_item Ki2 e2 → Ki1 = Ki2.
Proof. revert Ki1. induction Ki2, Ki1; naive_solver eauto with f_equal. Qed.
Lemma alloc_fresh v n σ :
let l := fresh_locs (dom σ.(heap)) in
0 < n →
head_step (AllocN ((Val $ LitV $ LitInt $ n)) (Val v)) σ
(Val $ LitV $ LitLoc l) (state_init_heap l n v σ) [].
Proof.
intros.
apply AllocNS; first done.
intros. apply (not_elem_of_dom (D := gset loc)).
by apply fresh_locs_fresh.
Qed.
Definition base_locale := nat.
Definition locale_of (c: list expr) (e : expr) := length c.
Lemma locale_step_same e1 e2 t1 σ1 σ2 efs:
head_step e1 σ1 e2 σ2 efs ->
locale_of t1 e1 = locale_of t1 e2.
Proof. done. Qed.
Lemma locale_fill e K t1: locale_of t1 (fill_item K e) = locale_of t1 e.
Proof. done. Qed.
Lemma heap_locale_injective tp0 e0 tp1 tp e :
(tp, e) ∈ prefixes_from (tp0 ++ [e0]) tp1 →
locale_of tp0 e0 ≠ locale_of tp e.
Proof.
intros (?&?&->&?)%prefixes_from_spec.
rewrite /locale_of !app_length /=. lia.
Qed.
Lemma heap_lang_mixin : EctxiLanguageMixin of_val to_val fill_item head_step locale_of.
Proof.
split; apply _ || eauto using to_of_val, of_to_val, val_head_stuck,
fill_item_val, fill_item_no_val_inj, head_ctx_step_val, locale_fill, locale_step_same, heap_locale_injective.
{ intros ??? H%Forall2_length. rewrite !prefixes_from_length // in H. }
Qed.
Definition context_step (_ _: state): Prop := False.
End heap_lang.
(** Language *)
Canonical Structure heap_ectxi_lang :=
EctxiLanguage heap_lang.head_step heap_lang.context_step heap_lang.locale_of heap_lang.heap_lang_mixin.
Canonical Structure heap_ectx_lang := EctxLanguageOfEctxi heap_ectxi_lang.
Canonical Structure heap_lang := LanguageOfEctx heap_ectx_lang.
(* Prefer heap_lang names over ectx_language names. *)
Export heap_lang.
(** The following lemma is not provable using the axioms of [ectxi_language].
The proof requires a case analysis over context items ([destruct i] on the
last line), which in all cases yields a non-value. To prove this lemma for
[ectxi_language] in general, we would require that a term of the form
[fill_item i e] is never a value. *)
Lemma to_val_fill_some K e v : to_val (fill K e) = Some v → K = [] ∧ e = Val v.
Proof.
intro H. destruct K as [|Ki K]; first by apply of_to_val in H. exfalso.
assert (to_val e ≠ None) as He.
{ intro A. by rewrite fill_not_val in H. }
assert (∃ w, e = Val w) as [w ->].
{ destruct e; try done; eauto. }
assert (to_val (fill (Ki :: K) (Val w)) = None).
{ destruct Ki; simpl; apply fill_not_val; done. }
by simplify_eq.
Qed.
Lemma prim_step_to_val_is_head_step e σ1 w σ2 efs :
prim_step e σ1 (Val w) σ2 efs → head_step e σ1 (Val w) σ2 efs.
Proof.
intro H. destruct H as [K e1 e2 H1 H2].
assert (to_val (fill K e2) = Some w) as H3; first by rewrite -H2.
apply to_val_fill_some in H3 as [-> ->]. subst e. done.
Qed.
(** If [e1] makes a head step to a value under some state [σ1] then any head
step from [e1] under any other state [σ1'] must necessarily be to a value. *)
Lemma head_step_to_val e1 σ1 e2 σ2 efs σ1' e2' σ2' efs' :
head_step e1 σ1 e2 σ2 efs →
head_step e1 σ1' e2' σ2' efs' → is_Some (to_val e2) → is_Some (to_val e2').
Proof. destruct 1; inversion 1; naive_solver. Qed.
|
module Control.Arrow
import Control.Category
import Data.Either
import Data.Morphisms
infixr 5 <++>
infixr 3 ***
infixr 3 &&&
infixr 2 +++
infixr 2 \|/
public export
interface Category arr => Arrow (0 arr : Type -> Type -> Type) where
||| Converts a function from input to output into an arrow computation.
arrow : (a -> b) -> arr a b
||| Converts an arrow from `a` to `b` into an arrow on pairs, that applies
||| its argument to the first component and leaves the second component
||| untouched, thus saving its value across a computation.
first : arr a b -> arr (a, c) (b, c)
||| Converts an arrow from `a` to `b` into an arrow on pairs, that applies
||| its argument to the second component and leaves the first component
||| untouched, thus saving its value across a computation.
second : arr a b -> arr (c, a) (c, b)
second f = arrow {arr = arr} swap >>> first f >>> arrow {arr = arr} swap
where
swap : (x, y) -> (y, x)
swap (a, b) = (b, a)
||| A combinator which processes both components of a pair.
(***) : arr a b -> arr a' b' -> arr (a, a') (b, b')
f *** g = first f >>> second g
||| A combinator which builds a pair from the results of two arrows.
(&&&) : arr a b -> arr a b' -> arr a (b, b')
f &&& g = arrow dup >>> f *** g
public export
implementation Arrow Morphism where
arrow f = Mor f
first (Mor f) = Mor $ \(a, b) => (f a, b)
second (Mor f) = Mor $ \(a, b) => (a, f b)
(Mor f) *** (Mor g) = Mor $ \(a, b) => (f a, g b)
(Mor f) &&& (Mor g) = Mor $ \a => (f a, g a)
public export
implementation Monad m => Arrow (Kleislimorphism m) where
arrow f = Kleisli (pure . f)
first (Kleisli f) = Kleisli $ \(a, b) => do x <- f a
pure (x, b)
second (Kleisli f) = Kleisli $ \(a, b) => do x <- f b
pure (a, x)
(Kleisli f) *** (Kleisli g) = Kleisli $ \(a, b) => do x <- f a
y <- g b
pure (x, y)
(Kleisli f) &&& (Kleisli g) = Kleisli $ \a => do x <- f a
y <- g a
pure (x, y)
public export
interface Arrow arr => ArrowZero (0 arr : Type -> Type -> Type) where
zeroArrow : arr a b
public export
interface ArrowZero arr => ArrowPlus (0 arr : Type -> Type -> Type) where
(<++>) : arr a b -> arr a b -> arr a b
public export
interface Arrow arr => ArrowChoice (0 arr : Type -> Type -> Type) where
left : arr a b -> arr (Either a c) (Either b c)
right : arr a b -> arr (Either c a) (Either c b)
right f = arrow mirror >>> left f >>> arrow mirror
(+++) : arr a b -> arr c d -> arr (Either a c) (Either b d)
f +++ g = left f >>> right g
(\|/) : arr a b -> arr c b -> arr (Either a c) b
f \|/ g = f +++ g >>> arrow fromEither
where
fromEither : Either b b -> b
fromEither (Left b) = b
fromEither (Right b) = b
public export
implementation Monad m => ArrowChoice (Kleislimorphism m) where
left f = f +++ (arrow id)
right f = (arrow id) +++ f
f +++ g = (f >>> (arrow Left)) \|/ (g >>> (arrow Right))
(Kleisli f) \|/ (Kleisli g) = Kleisli (either f g)
public export
interface Arrow arr => ArrowApply (0 arr : Type -> Type -> Type) where
app : arr (arr a b, a) b
public export
implementation Monad m => ArrowApply (Kleislimorphism m) where
app = Kleisli $ \(Kleisli f, x) => f x
public export
data ArrowMonad : (Type -> Type -> Type) -> Type -> Type where
MkArrowMonad : (runArrowMonad : arr (the Type ()) a) -> ArrowMonad arr a
public export
runArrowMonad : ArrowMonad arr a -> arr (the Type ()) a
runArrowMonad (MkArrowMonad a) = a
public export
implementation Arrow a => Functor (ArrowMonad a) where
map f (MkArrowMonad m) = MkArrowMonad $ m >>> arrow f
public export
implementation Arrow a => Applicative (ArrowMonad a) where
pure x = MkArrowMonad $ arrow $ \_ => x
(MkArrowMonad f) <*> (MkArrowMonad x) = MkArrowMonad $ f &&& x >>> arrow (uncurry id)
public export
implementation ArrowApply a => Monad (ArrowMonad a) where
(MkArrowMonad m) >>= f =
MkArrowMonad $ m >>> (arrow $ \x => (runArrowMonad (f x), ())) >>> app
public export
interface Arrow arr => ArrowLoop (0 arr : Type -> Type -> Type) where
loop : arr (Pair a c) (Pair b c) -> arr a b
||| Applying a binary operator to the results of two arrow computations.
public export
liftA2 : Arrow arr => (a -> b -> c) -> arr d a -> arr d b -> arr d c
liftA2 op f g = (f &&& g) >>> arrow (\(a, b) => a `op` b)
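-- A minimal usage sketch (illustrative, not part of the original module):
-- pairing two plain function arrows with (&&&) on the `Morphism` instance.
-- It assumes `applyMor` is the record projection provided by `Data.Morphisms`.
addAndDouble : Morphism Integer (Integer, Integer)
addAndDouble = arrow (+ 1) &&& arrow (* 2)
-- Expected behaviour: `applyMor addAndDouble 3` yields `(4, 6)`.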
|
function varargout = panel_guidelines( varargin )
% PANEL_GUIDELINES: Load a scenario in the guidelines tab.
%
% USAGE: [bstPanel] = panel_guidelines('CreatePanel', ScenarioName)
% @=============================================================================
% This function is part of the Brainstorm software:
% https://neuroimage.usc.edu/brainstorm
%
% Copyright (c) University of Southern California & McGill University
% This software is distributed under the terms of the GNU General Public License
% as published by the Free Software Foundation. Further details on the GPLv3
% license can be found at http://www.gnu.org/copyleft/gpl.html.
%
% FOR RESEARCH PURPOSES ONLY. THE SOFTWARE IS PROVIDED "AS IS," AND THE
% UNIVERSITY OF SOUTHERN CALIFORNIA AND ITS COLLABORATORS DO NOT MAKE ANY
% WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF
% MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, NOR DO THEY ASSUME ANY
% LIABILITY OR RESPONSIBILITY FOR THE USE OF THIS SOFTWARE.
%
% For more information type "brainstorm license" at command prompt.
% =============================================================================@
%
% Authors: Francois Tadel, 2017
eval(macro_method);
end
%% ===== CREATE PANEL =====
function [bstPanelNew, panelName] = CreatePanel(ScenarioName) %#ok<DEFNU>
% Java initializations
import java.awt.*;
import javax.swing.*;
panelName = 'Guidelines';
% Load all the panels from the target scenario
switch lower(ScenarioName)
case 'epileptogenicity'
ctrl = scenario_epilepto('CreatePanels');
otherwise
error(['Unknown scenario: ' ScenarioName]);
end
% Create main panel
ctrl.jPanelContainer = gui_component('Panel');
ctrl.ScenarioName = ScenarioName;
% Create control panel
jPanelControl = gui_river([1,1], [1,3,1,3], ' ');
ctrl.jPanelContainer.add(jPanelControl, BorderLayout.EAST);
% Add buttons
buttonFormat = {Insets(0,0,0,0), Dimension(java_scaled('value', 47), java_scaled('value', 22))};
ctrl.jButtonReset = gui_component('button', jPanelControl, 'br', '<HTML><FONT COLOR="#808080"><I>Reset</I></FONT>', buttonFormat, [], @(h,ev)ResetPanel());
gui_component('label', jPanelControl, 'br', ' ');
ctrl.jButtonPrev = gui_component('button', jPanelControl, 'br', '<<', buttonFormat, [], @(h,ev)SwitchPanel('prev'));
ctrl.jButtonNext = gui_component('button', jPanelControl, 'br', '>>', buttonFormat, [], @(h,ev)SwitchPanel('next'));
ctrl.jLabelStep = gui_component('label', jPanelControl, 'br hfill', sprintf('0 / %d', length(ctrl.jPanels)));
ctrl.jLabelStep.setHorizontalAlignment(JTextField.CENTER);
gui_component('label', jPanelControl, 'br', ' ');
ctrl.jButtonSkip = gui_component('button', jPanelControl, 'br', '<HTML><FONT COLOR="#808080"><I>Skip</I></FONT>', buttonFormat, [], @(h,ev)SwitchPanel('skip'));
% Create the BstPanel object that is returned by the function
bstPanelNew = BstPanel(panelName, ...
ctrl.jPanelContainer, ...
ctrl);
end
%% =================================================================================
% === EXTERNAL CALLBACKS =========================================================
% =================================================================================
%% ===== GET PANEL CONTENTS =====
% GET Panel contents in a structure
function s = GetPanelContents() %#ok<DEFNU>
s = [];
end
%% ===== CLOSE PANEL =====
function ClosePanel(varargin) %#ok<DEFNU>
panelName = 'Guidelines';
% Hide panel
gui_hide(panelName);
% Release mutex
bst_mutex('release', panelName);
end
%% ===== GET CURRENT PANEL =====
function iPanel = GetCurrentPanel()
% Get panel controls handles
ctrl = bst_get('PanelControls', 'Guidelines');
if isempty(ctrl)
iPanel = 0;
return;
end
% Read panel index
strIndex = char(ctrl.jLabelStep.getText());
iPanel = str2num(strIndex(1:2));
end
%% ===== SWITCH PANEL =====
function SwitchPanel(command)
import java.awt.*;
% Get panel controls handles
ctrl = bst_get('PanelControls', 'Guidelines');
if isempty(ctrl)
return;
end
% Get current panel
iPanel = GetCurrentPanel();
% === VALIDATE CURRENT PANEL ===
% If there is a validation callback for this panel
if (iPanel >= 1) && ~isempty(ctrl.fcnValidate{iPanel}) && strcmpi(command, 'next')
[isValidated, errMsg] = ctrl.fcnValidate{iPanel}();
if ~isempty(errMsg)
bst_error(errMsg, sprintf('Step #%d', iPanel), 0);
return;
end
end
% If this is the last panel and moving forward: stop
if (iPanel == length(ctrl.jPanels)) && strcmpi(command, 'next')
return;
end
% === MOVE TO NEXT PANEL ===
% Remove existing panel
if (iPanel >= 1) && (iPanel <= length(ctrl.jPanels))
ctrl.jPanelContainer.remove(ctrl.jPanels(iPanel));
end
% Switch according to command
switch (command)
case 'next', iPanel = iPanel + 1;
case 'skip', iPanel = iPanel + 1;
case 'prev', iPanel = iPanel - 1;
end
% If invalid panel: stop
if (iPanel < 1) || (iPanel > length(ctrl.jPanels))
return;
end
% Add new panel
ctrl.jPanelContainer.add(ctrl.jPanels(iPanel), BorderLayout.CENTER);
% Update step number
ctrl.jLabelStep.setText(sprintf('%d / %d', iPanel, length(ctrl.jPanels)));
% Repaint
ctrl.jPanelContainer.invalidate();
ctrl.jPanelContainer.repaint();
% First panel: Disable previous button
if (iPanel == 1)
ctrl.jButtonPrev.setEnabled(0);
else
ctrl.jButtonPrev.setEnabled(1);
end
% Show/Hide Skip button
ctrl.jButtonSkip.setVisible(ctrl.isSkip(iPanel));
% % Last panel: Disable next button
% if (iPanel == length(ctrl.jPanels))
% ctrl.jButtonNext.setEnabled(0);
% else
% ctrl.jButtonNext.setEnabled(1);
% end
% === UPDATE NEW PANEL ===
% If there is an update callback for this panel
if (iPanel >= 1) && ~isempty(ctrl.fcnUpdate{iPanel})
ctrl.fcnUpdate{iPanel}();
end
end
%% ===== RESET PANEL =====
function ResetPanel()
% Get panel controls handles
ctrl = bst_get('PanelControls', 'Guidelines');
if isempty(ctrl)
return;
end
% Get current panel
iPanel = GetCurrentPanel();
if (iPanel >= 1) && ~isempty(ctrl.fcnReset{iPanel})
ctrl.fcnReset{iPanel}();
end
end
%% ===== OPTIONS: PICK FILE CALLBACK =====
function [OutputFiles, FileFormat] = PickFile(jControl, DefaultDir, SelectionMode, FilesOrDir, Filters, DefaultFormat) %#ok<DEFNU>
% Parse inputs
if (nargin < 6) || isempty(DefaultFormat)
DefaultFormat = [];
end
% Get default import directory and formats
LastUsedDirs = bst_get('LastUsedDirs');
DefaultFormats = bst_get('DefaultFormats');
% Default dir type
DefaultFile = LastUsedDirs.(DefaultDir);
% Default filter
if ~isempty(DefaultFormat) && isfield(DefaultFormats, DefaultFormat)
defaultFilter = DefaultFormats.(DefaultFormat);
else
defaultFilter = [];
DefaultFormat = [];
end
% Pick a file
[OutputFiles, FileFormat, FileFilter] = java_getfile('open', 'Select file', DefaultFile, SelectionMode, FilesOrDir, Filters, defaultFilter);
% If nothing selected
if isempty(OutputFiles)
return
end
% If only one file selected
if ~iscell(OutputFiles)
OutputFiles = {OutputFiles};
end
% Get the files
OutputFiles = file_expand_selection(FileFilter, OutputFiles);
if isempty(OutputFiles)
error(['No ' FileFormat ' file in the selected directories.']);
end
% Save default import directory
if ischar(OutputFiles)
newDir = OutputFiles;
elseif iscell(OutputFiles)
newDir = OutputFiles{1};
end
% Get parent folder if needed
if ~isdir(newDir)
newDir = bst_fileparts(newDir);
end
LastUsedDirs.(DefaultDir) = newDir;
bst_set('LastUsedDirs', LastUsedDirs);
% Save default import format
if ~isempty(DefaultFormat)
DefaultFormats.(DefaultFormat) = FileFormat;
bst_set('DefaultFormats', DefaultFormats);
end
% Get file descriptions (one/many)
if ischar(OutputFiles)
strFiles = OutputFiles;
else
if (length(OutputFiles) == 1)
strFiles = OutputFiles{1};
elseif isa(jControl, 'javax.swing.JLabel')
strFiles = sprintf('%s<BR>', OutputFiles{:});
else
strFiles = sprintf('[%d files]', length(OutputFiles));
end
end
% Update the attached control
if isempty(jControl)
%disp(strFiles);
elseif isa(jControl, 'javax.swing.JTextField')
jControl.setText(strFiles);
elseif isa(jControl, 'javax.swing.JLabel')
jControl.setText(['<HTML>' strFiles]);
end
end
|
On March 26, 1931, Ross substituted a sixth skater for goaltender Tiny Thompson in the final minute of play in a playoff game against the Montreal Canadiens. Although the Bruins lost the game 1–0, Ross became the first coach to replace his goaltender with an extra attacker, a tactic which became widespread practice in hockey. Stepping aside as coach in 1934 to focus on managing the team, Ross hired Frank Patrick as coach with a salary of $10,500, which was high for such a role. However, rumours spread during the season that Patrick was drinking heavily and not being as strict with the players as Ross wanted. After the Bruins lost their playoff series with the Toronto Maple Leafs in the 1936 playoffs, the result of an 8–1 score in the second game, a newspaper claimed that Patrick had been drinking the day of the game and had trouble controlling the team. Several days later, Ross relieved Patrick of his duties and once again assumed the role of coach.
|
%=================================
%====template for LATEX poster====
%=================================
\documentclass[final]{beamer}
%\usepackage[orientation=portrait,size=a0,
% scale=1.25 % font scale factor
% ]{beamerposter}
% modify for 36x48in poster
\usepackage{beamerposter}
\setlength{\paperwidth}{48in}
\setlength{\paperheight}{36in}
\geometry{
hmargin=2.5cm, % little modification of margins
}
%
\usepackage[utf8]{inputenc}
\linespread{1.15}
%
%==The poster style============================================================
\usetheme{sharelatex}
%==Title, date and authors of the poster=======================================
\title
[Gulf of Mexico Oil Spill \& Ecosystem Science Conference, February 5--10, 2017,
New Orleans, USA] % Conference
{ % Poster title
A Dynamical Geography of the Gulf of Mexico
}
\author{ % Authors
P. Miron\inst{1}, %, Author Two\inst{2}, Author Three\inst{2,3}
F. J. Beron-Vera\inst{1},
M. J. Olascoaga\inst{1},
Paula P\'erez-Brunius\inst{2},
Julio Sheinbaum\inst{2} and
Gary Froyland\inst{3}
}
\institute
[Very Large University] % General University
{
\inst{1} Rosenstiel School of Marine and Atmospheric Science, University of
Miami, USA
\\[0.3ex]
\inst{2} CICESE, Ensenada, Mexico
\\[0.3ex]
\inst{3} University of New South Wales, Sydney, Australia
}
\date{\today}
% other useful packages
\usepackage[super,sort&compress]{natbib}
\usepackage{siunitx}
\usepackage{tikz}
\newcommand{\PF}{\mathcal{P}}
\newcommand{\ia}{\textit{a}}
\newcommand{\ib}{\textit{b}}
\newcommand{\ic}{\textit{c}}
\newcommand{\id}{\textit{d}}
\newcommand{\ie}{\textit{e}}
\newcommand{\gom}{GoM}
\let\vaccent=\v
\renewcommand{\v}[1]{\ensuremath{\mathbf{#1}}}
\newcommand{\minus}{\scalebox{0.5}[1.0]{$-$}}
\graphicspath{{"2017. dynamical geography (gomri)/figures/"}}
\begin{document}
\begin{frame}[t]
\begin{multicols}{3}
% The poster content
\section{Introduction}
Density dispersion in fluid flow is difficult to predict as it involves many
mechanisms affecting a wide range of time and length scales. In the present
study, we use the available drifter trajectories database (from 1994--2016) to
extract almost-invariant regions and predict transport in the Gulf of Mexico
(\gom). A total of 3207 drifter trajectories from several sources were
considered and all the data points are presented in Fig.~\ref{fig:gom}. These
drifters were deployed over many years and their design differs from experiment
to experiment, so some variations in their Lagrangian properties can be
expected. For the purposes of this work, these variations are ignored as the
main goal is the analysis of the average dynamics of the \gom.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{fig03}
\caption{All data points of the drifter trajectories are plotted on the left panel,
and the drifter density in each bin (from 1 to 4266 data points per bin, with an
average of 246) is presented on the right panel.}
\label{fig:gom}
\end{figure}
\section{Method}
The eigenvector method\citep{froyland2014well} employed is rooted in
Markov-chain concepts that have led to the possibility of approximating
invariant sets in dynamical systems using short-run
trajectories\citep{dellnitz1997almost}. The dynamical system of interest is
that governing the motion of the fluid particles, which are described by
satellite-tracked drifters on the ocean surface.
Let $X$ be a closed 2D flow domain and denote by $T(x)$ the end point of a
trajectory starting at $x \in X$ after some short time. A discretization of the
dynamics can then be attained using Ulam's method\citep{ulam1960,Froyland-01}
by dividing the domain $X$ into $N$ boxes $\left(B_1,\cdots,B_N\right)$. The
probability of going from a box $B_i$ to a box $B_j$ under one application of
$T$ is (approximately) equal to
\begin{equation*}
\PF_{ij} \approx \frac{\#\lbrace d: d \in B_i \text{ and } T(d) \in
B_j\rbrace}{\#\lbrace d \in B_i\rbrace} \text{ with } d \text{ the individual
drifter.}
\end{equation*}
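For instance, with purely illustrative numbers: if 40 drifter segments start in
a box $B_i$ and 10 of them are found in $B_j$ after the chosen transition time,
the corresponding entry is estimated as $\PF_{ij} \approx 10/40 = 0.25$.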
The transition matrix $\PF$ defines a Markov Chain of the dynamics, which is a
stochastic model describing a sequence of possible events in which the
probability of each event depends only on the current state. From an initial
density $\mathbf{f}_0$, the future distribution can be approximated by
$\mathbf{f}_k = \mathbf{f}_0 \PF^k$.
The left eigenvector ($\lambda L = L \PF$) with $\lambda = 1$ describes the
limiting distribution of the system while the corresponding right eigenvector
is supported on the basin of attraction of the distribution. Furthermore,
eigenvectors associated with eigenvalues smaller than one allow the extraction
of almost-invariant sets, which are relevant in dynamical systems governing
motion that exhibits transient behavior.
\subsection{Example using a reduced Markov chain}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\columnwidth]{fig01}
\caption{The probability of moving from one state to another is shown, and each
state or group of states is identified as a communicating class (CC) or as an
absorbing closed communicating class (ACCC).}
\label{fig:ccc}
\end{figure}
In the associated transition matrix, there is one row for each state of the
system, and the value in row $i$, column $j$ represents the probability of
going from state $i$ to state $j$.
\vspace{0.5cm}
\columnbreak
From the left eigenvectors $L_{1-2}$, one can identify both ACCCs $\lbrace A,
B\rbrace$ and $\lbrace E \rbrace$, while their respective basins of attraction
are identified by the corresponding right eigenvectors $R_{1-2}$.
\begin{equation*}
\PF =
\begin{tabular}{c|ccccc}
& A & B & C & D & E\\\hline
A & 0.8 & 0.2 & 0 & 0 & 0\\
B & 0.7 & 0.3 & 0 & 0 & 0\\
C & 0 & 0.2 & 0.8 & 0 & 0\\
D & 0 & 0 & 0 & 0.7 & 0.3\\
E & 0 & 0 & 0 & 0 & 1.0
\end{tabular}\quad
L_1^T =
\begin{pmatrix}
0.83\\
0.55\\
0\\
0\\
0
\end{pmatrix}\quad
R_1 =
\begin{pmatrix}
1\\
1\\
1\\
0\\
0
\end{pmatrix}\quad
L_2^T =
\begin{pmatrix}
0\\
0\\
0\\
0\\
1
\end{pmatrix}\quad
R_2 =
\begin{pmatrix}
0\\
0\\
0\\
1\\
1
\end{pmatrix}
\end{equation*}
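As a purely illustrative check of the propagation rule $\mathbf{f}_k =
\mathbf{f}_0 \PF^k$ on this reduced chain, consider an initial density
concentrated in state $C$, i.e.\ $\mathbf{f}_0 = (0, 0, 1, 0, 0)$. One and two
applications of $\PF$ give
\begin{equation*}
\mathbf{f}_1 = \mathbf{f}_0 \PF = (0, 0.2, 0.8, 0, 0), \qquad
\mathbf{f}_2 = \mathbf{f}_0 \PF^2 = (0.14, 0.22, 0.64, 0, 0),
\end{equation*}
so the mass initially in $C$ progressively drains into the ACCC $\lbrace A,
B\rbrace$, consistent with $R_1$, which marks $\lbrace A, B, C\rbrace$ as the
basin of attraction of that set.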
\section{Results and discussions}
The eigenvector method is now applied to the transition matrix $\PF$
constructed from the drifter trajectories using a 2-day transition time. The
left eigenvector associated with the largest eigenvalue less than one
identifies the main attractor (Fig.~\ref{fig:l_eig}\ia) on the right part of
the domain, communicating with the Atlantic Ocean. The associated right
eigenvector (Fig.~\ref{fig:r_eig}\ia) covers the whole domain, as expected from
Perron--Frobenius theory, except for the small isolated closed communicating
classes (CCC). This indicates that, under the global circulation of the Gulf of
Mexico, any density is diluted, trapped by the Loop Current into the Florida
Strait and evacuated to the Atlantic Ocean.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig06}
\caption{Left eigenvectors showing the principal attractive regions.}
\label{fig:l_eig}
\end{figure}
Fig.~\ref{fig:l_eig}\ib\ identifies the main attractor inside the \gom. It is
located on the western boundary of the \gom\ on the Tamaulipas (Mexico) Gulf
Coast, while its basin of attraction (Fig.~\ref{fig:r_eig}\ib) splits the \gom\
in two to the west of the Yucat\'{a}n Channel.
Fig.~\ref{fig:l_eig}\ic\ highlights an attractive region on the West Florida
Shelf, while the next eigenvectors (bottom of Fig.~\ref{fig:l_eig}) reveal the
presence of four distinct attractors: on the south shore of Cuba, on the SW
coast of the \gom, at the northern part of the West Florida Shelf, and on the NW
shore of the \gom\ on the Texas Gulf Coast.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig07}
\caption{Right eigenvectors showing the corresponding basins of attraction of
the attractive regions in Fig.~\ref{fig:l_eig}.}
\label{fig:r_eig}
\end{figure}
\vspace{0.5cm}
\columnbreak
\subsection{Dynamical geography}
The dynamical geography is a partition of the surface of the \gom\ developed by
combining the basins of attraction presented in Fig.~\ref{fig:r_eig}. On the
left, the \gom\ is split into two regions, one connecting the Caribbean Sea to
the Atlantic, while the other is isolated on the western side. The separation of
the \gom\ fits well with the boundary of the Loop Current, identified in several
papers\citep{maze2015historical}. On the right, five weakly interacting coastal
basins are extracted by thresholding the support area of the right
eigenvectors. Each coastal basin is characterized by distinct flow dynamics
and coherence over significant time scales. These regions are consistent with
observations of slow dynamics around the Mississippi Delta, of a more intensely
mixing region around the Florida Coast, and of transient eddy structures south
of the Island of Cuba.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{fig09}
\end{figure}
The transition matrix $\PF$ can also be used to predict the dispersion of
passive tracers such as oil spills (Fig.~\ref{fig:oil}). The two rows show the
evolution of two distinct oil spills, one occurring in a coastal basin and the
other further away from the coast. While the top-row density stays mostly
inside the basin throughout the \SI{36}{d} period, the bottom-row evolution
presents a wider spread due to the increased mixing and transport further away
from the coast.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{fig_oil}
\caption{The two rows present the evolution of two different oil spills inside the
\gom. The left column shows the initial location of the density, while the 2nd
and 3rd columns present the evolution after 16 and 36 days.}
\label{fig:oil}
\end{figure}
\section{Conclusions}
We construct a Markov-chain representation of the surface-ocean Lagrangian
dynamics in the region spanned by the \gom\ and the northern Caribbean Sea
using a very large collection of drifter trajectories. From the analysis of the
eigenvalues and eigenvectors of the transition matrix associated with the
chain, we identify almost-invariant attracting sets and their basins of
attraction. With this information, we decompose the GoM's geography into weakly
dynamically interacting provinces in both forward and backward time, which has
implications for the connectivity of passive (Lagrangian) and potentially also
non-passive (e.g., chemically reacting, biologically active) tracers in the
\gom.
% copy bbl data for archive
\bibliographystyle{abbrvnat}
\begingroup
\renewcommand{\section}[2]{}%
\begin{thebibliography}{5}
\bibitem[Dellnitz and Junge(1997)]{dellnitz1997almost}
M.~Dellnitz and O.~Junge.
\newblock Almost invariant sets in chua's circuit.
\newblock \emph{International Journal of Bifurcation and Chaos}, 7\penalty0
(11):\penalty0 2475--2485, 1997.
\bibitem[Froyland(2001)]{Froyland-01}
G.~Froyland.
\newblock Extracting dynamical behaviour via markov models.
\newblock \emph{Nonlinear Dynamics and Statistics:
Proceedings of the Newton Institute, Cambridge, 1998}, pages 283--324.
Birkhauser, 2001.
\bibitem[Froyland et~al.(2014)Froyland, Stuart, and van
Sebille]{froyland2014well}
G.~Froyland, R.~M. Stuart, and E.~van Sebille.
\newblock How well-connected is the surface of the global ocean?
\newblock \emph{Chaos: An Interdisciplinary Journal of Nonlinear Science},
24\penalty0 (3):\penalty0 033126, 2014.
\bibitem[Maze et~al.(2015)Maze, Olascoaga, and Brand]{maze2015historical}
G.~Maze, M.~Olascoaga, and L.~Brand.
\newblock Historical analysis of environmental conditions during florida red
tide.
\newblock \emph{Harmful Algae}, 50:\penalty0 1--7, 2015.
\bibitem[Ulam(1960)]{ulam1960}
S.~M. Ulam.
\newblock \emph{A collection of mathematical problems}, volume~8.
\newblock Interscience Publishers, 1960.
\end{thebibliography}
\endgroup
\vspace{0.5cm}
\end{multicols}
\end{frame}
\end{document}
|
(* Title: HOL/Auth/flash_data_cub_lemma_on_inv__16.thy
Author: Yongjian Li and Kaiqiang Duan, State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
Copyright 2016 State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
*)
header{*The flash_data_cub Protocol Case Study*}
theory flash_data_cub_lemma_on_inv__16 imports flash_data_cub_base
begin
section{*All lemmas on causal relation between inv__16 and some rule r*}
lemma n_PI_Remote_GetVsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_PI_Remote_Get src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_PI_Remote_Get src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_PI_Remote_GetXVsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_PI_Remote_GetX src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_PI_Remote_GetX src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_NakVsinv__16:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Nak dst)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Nak dst" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst=p__Inv3)\<or>(dst~=p__Inv3\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv3\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Nak__part__0Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__0 src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Nak__part__0 src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Nak__part__1Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__1 src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Nak__part__1 src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Nak__part__2Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__2 src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Nak__part__2 src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Get__part__0Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Get__part__0 src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Get__part__0 src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Get__part__1Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Get__part__1 src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Get__part__1 src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Put_HeadVsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put_Head N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Put_Head N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_PutVsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Put src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Put_DirtyVsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put_Dirty src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Put_Dirty src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_Get_NakVsinv__16:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Nak src dst)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Nak src dst" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst=p__Inv3)\<or>(src=p__Inv3\<and>dst=p__Inv4)\<or>(src=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv4)\<or>(src=p__Inv3\<and>dst~=p__Inv3\<and>dst~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3\<and>dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3\<and>dst~=p__Inv3\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv3)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_Get_PutVsinv__16:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Put src dst)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Put src dst" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst=p__Inv3)\<or>(src=p__Inv3\<and>dst=p__Inv4)\<or>(src=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv4)\<or>(src=p__Inv3\<and>dst~=p__Inv3\<and>dst~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3\<and>dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3\<and>dst~=p__Inv3\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv3)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_Nak__part__0Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__0 src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__0 src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_Nak__part__1Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__1 src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__1 src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_Nak__part__2Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__2 src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__2 src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_GetX__part__0Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__0 src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__0 src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_GetX__part__1Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__1 src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__1 src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_1Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_1 N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_1 N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_2Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_2 N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_2 N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_3Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_3 N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_3 N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_4Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_4 N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_4 N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_5Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_5 N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_5 N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_6Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_6 N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_6 N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_7__part__0Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__0 N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__0 N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_7__part__1Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__1 N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__1 N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_7_NODE_Get__part__0Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__0 N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__0 N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_7_NODE_Get__part__1Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__1 N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__1 N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_8_HomeVsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeShrSet'')) (Const true)))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeShrSet'')) (Const true)))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_8_Home_NODE_GetVsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home_NODE_Get N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home_NODE_Get N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeShrSet'')) (Const true)))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeShrSet'')) (Const true)))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_8Vsinv__16:
assumes a1: "(\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8 N src pp)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src pp where a1:"src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8 N src pp" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>pp=p__Inv3)\<or>(src=p__Inv3\<and>pp=p__Inv4)\<or>(src=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv4)\<or>(src=p__Inv3\<and>pp~=p__Inv3\<and>pp~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>pp=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3\<and>pp=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3\<and>pp~=p__Inv3\<and>pp~=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv3)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_8_NODE_GetVsinv__16:
assumes a1: "(\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8_NODE_Get N src pp)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src pp where a1:"src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8_NODE_Get N src pp" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>pp=p__Inv3)\<or>(src=p__Inv3\<and>pp=p__Inv4)\<or>(src=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv4)\<or>(src=p__Inv3\<and>pp~=p__Inv3\<and>pp~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>pp=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3\<and>pp=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3\<and>pp~=p__Inv3\<and>pp~=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv3)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_9__part__0Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__0 N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__0 N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_9__part__1Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__1 N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__1 N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_10_HomeVsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_10_Home N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_10_Home N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeShrSet'')) (Const true)))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeShrSet'')) (Const true)))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_10Vsinv__16:
assumes a1: "(\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_10 N src pp)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src pp where a1:"src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_10 N src pp" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>pp=p__Inv3)\<or>(src=p__Inv3\<and>pp=p__Inv4)\<or>(src=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv4)\<or>(src=p__Inv3\<and>pp~=p__Inv3\<and>pp~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>pp=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3\<and>pp=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3\<and>pp~=p__Inv3\<and>pp~=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv3)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_11Vsinv__16:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_11 N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_11 N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Local'')) (Const true)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Local'')) (Const true)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_GetX_NakVsinv__16:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_Nak src dst)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_Nak src dst" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst=p__Inv3)\<or>(src=p__Inv3\<and>dst=p__Inv4)\<or>(src=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv4)\<or>(src=p__Inv3\<and>dst~=p__Inv3\<and>dst~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3\<and>dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3\<and>dst~=p__Inv3\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv3)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_GetX_PutXVsinv__16:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_PutX src dst)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_PutX src dst" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst=p__Inv3)\<or>(src=p__Inv3\<and>dst=p__Inv4)\<or>(src=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv4)\<or>(src=p__Inv3\<and>dst~=p__Inv3\<and>dst~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst=p__Inv3)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''Proc'') p__Inv3) ''CacheState'')) (Const CACHE_E))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3\<and>dst=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''Proc'') p__Inv4) ''CacheState'')) (Const CACHE_E))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''Proc'') dst) ''CacheState'')) (Const CACHE_E)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src=p__Inv3\<and>dst~=p__Inv3\<and>dst~=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''Proc'') dst) ''CacheState'')) (Const CACHE_E)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv3)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_PutVsinv__16:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Put dst)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_Put dst" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst=p__Inv3)\<or>(dst~=p__Inv3\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv3\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_PutXVsinv__16:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_PutX dst)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_PutX dst" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst=p__Inv3)\<or>(dst~=p__Inv3\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst=p__Inv3)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv3\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_GetX_PutX_HomeVsinv__16:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_GetX_PutX_Home dst" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_PutX__part__0Vsinv__16:
assumes a1: "r=n_PI_Local_GetX_PutX__part__0 " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_WbVsinv__16:
assumes a1: "r=n_NI_Wb " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_StoreVsinv__16:
assumes a1: "\<exists> src data. src\<le>N\<and>data\<le>N\<and>r=n_Store src data" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_3Vsinv__16:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_3 N src" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_1Vsinv__16:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_1 N src" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_GetX__part__1Vsinv__16:
assumes a1: "r=n_PI_Local_GetX_GetX__part__1 " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_GetX__part__0Vsinv__16:
assumes a1: "r=n_PI_Local_GetX_GetX__part__0 " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Remote_ReplaceVsinv__16:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_PI_Remote_Replace src" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_Store_HomeVsinv__16:
assumes a1: "\<exists> data. data\<le>N\<and>r=n_Store_Home data" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_ReplaceVsinv__16:
assumes a1: "r=n_PI_Local_Replace " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_existsVsinv__16:
assumes a1: "\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_InvAck_exists src pp" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Remote_PutXVsinv__16:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_PI_Remote_PutX dst" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Remote_Get_Put_HomeVsinv__16:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Get_Put_Home dst" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvVsinv__16:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Inv dst" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_PutXVsinv__16:
assumes a1: "r=n_PI_Local_PutX " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_Get_PutVsinv__16:
assumes a1: "r=n_PI_Local_Get_Put " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_ShWbVsinv__16:
assumes a1: "r=n_NI_ShWb N " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_PutX_HeadVld__part__0Vsinv__16:
assumes a1: "r=n_PI_Local_GetX_PutX_HeadVld__part__0 N " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_ReplaceVsinv__16:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Replace src" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Remote_GetX_Nak_HomeVsinv__16:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_GetX_Nak_Home dst" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_PutXAcksDoneVsinv__16:
assumes a1: "r=n_NI_Local_PutXAcksDone " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_PutX__part__1Vsinv__16:
assumes a1: "r=n_PI_Local_GetX_PutX__part__1 " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Remote_Get_Nak_HomeVsinv__16:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Get_Nak_Home dst" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_exists_HomeVsinv__16:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_exists_Home src" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Replace_HomeVsinv__16:
assumes a1: "r=n_NI_Replace_Home " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_PutVsinv__16:
assumes a1: "r=n_NI_Local_Put " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Nak_ClearVsinv__16:
assumes a1: "r=n_NI_Nak_Clear " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_Get_GetVsinv__16:
assumes a1: "r=n_PI_Local_Get_Get " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Nak_HomeVsinv__16:
assumes a1: "r=n_NI_Nak_Home " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_2Vsinv__16:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_2 N src" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_PutX_HeadVld__part__1Vsinv__16:
assumes a1: "r=n_PI_Local_GetX_PutX_HeadVld__part__1 N " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_FAckVsinv__16:
assumes a1: "r=n_NI_FAck " and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__16 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
end
|
theory List_Demo
imports Main
begin
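(* a minimal list datatype defined from scratch (independent of HOL's built-in
   lists), used to demonstrate recursive definitions and structural induction *)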
datatype 'a list = Nil | Cons "'a" "'a list"
term "Nil"
declare [[names_short]]
fun app :: "'a list \<Rightarrow> 'a list \<Rightarrow> 'a list" where
"app Nil ys = ys" |
"app (Cons x xs) ys = Cons x (app xs ys)"
fun rev :: "'a list \<Rightarrow> 'a list" where
"rev Nil = Nil" |
"rev (Cons x xs) = app (rev xs) (Cons x Nil)"
value "rev(Cons True (Cons False Nil))"
value "rev(Cons a (Cons b Nil))"
lemma app_Nil2[simp]: "app xs Nil = xs"
apply (induction xs)
apply auto
done
lemma app_assoc[simp]: "app (app xs ys) zs = app xs (app ys zs)"
apply (induction xs)
apply auto
done
lemma rev_app[simp]: "rev (app xs ys) = app (rev ys) (rev xs)"
apply (induction xs)
apply auto
done
theorem rev_rev[simp]: "rev (rev xs) = xs"
apply (induction xs)
apply auto
done
(* Hint for demo:
do the proof top down, discovering the lemmas one by one.
*)
end
|
library(plyr)
library(ggplot2)
library(lubridate)
## load metrics
X <- read.csv('data/clean/metrics_10011210.csv')
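## convert the 'minutes' column from character timestamps to POSIXct date-times
## (via lubridate) so it can be used for time-based aggregation and plotting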
X$minutes <- parse_date_time(as.character(X$minutes), '%y-%m-%d %H:%M:%S')
|
function [biasFieldCorrected, varargout] = segment(this, varargin)
% Segments brain images using SPM's unified segmentation approach.
% This warps the brain into a standard space and segments it there using tissue
% probability maps in this standard space.
%
% Since good warping and good segmentation are interdependent, this is done
% iteratively until a good tissue segmentation is given by probability maps
% that store how likely it is that a voxel belongs to a certain tissue type
% (either in native or standard space).
% Furthermore, a deformation field from native to standard space (or back)
% is obtained, which can be used to warp other images from the same native space.
%
% Y = MrImage()
% [biasFieldCorrected, tissueProbMaps, deformationFields, biasField] = ...
% Y.segment(...
% 'representationIndexArray', representationIndexArray, ...
% 'spmParameterName1', spmParameterValue1, ...
% ...
% 'spmParameterNameN', spmParameterValueN)
%
% This is a method of class MrImageSpm4D.
%
% NOTE: If an nD image is given, then all dimensions > 3 will be treated as
% channels.
%
% IN
% tissueTypes cell(1, nTissues) of strings to specify which
% tissue types shall be written out:
% 'GM' grey matter
% 'WM' white matter
% 'CSF' cerebrospinal fluid
% 'bone' skull and surrounding bones
% 'fat' fat and muscle tissue
% 'air' air surrounding head
%
% default: {'GM', 'WM', 'CSF'}
%
% mapOutputSpace 'native' (default) or 'warped'/'mni'/'standard'
% defines coordinate system in which images shall be
% written out;
% 'native' same space as image that was segmented
% 'warped' standard Montreal Neurological Institute
% (MNI) space used by SPM for unified segmentation
% deformationFieldDirection determines which deformation field shall be
% written out, if any
% 'none' (default) no deformation fields are stored
% 'forward' subject => mni (standard) space
% 'backward'/'inverse' mni => subject space
% 'both'/'all' = 'forward' and 'backward'
% saveBiasField 0 (default) or 1
% biasRegularisation describes the amount of expected bias field
% default: 0.001 (light)
% no: 0; extremely heavy: 10
% biasFWHM full-width-at-half-maximum of the Gaussian
% non-uniformity bias field (in mm)
% default: 60 (mm)
% fileTPM tissue probability maps for each tissue class
% default: SPM's TPMs in spm/tpm
% mrfParameter strength of the Markov Random Field cleanup
% performed on the tissue class images
% default: 1
% cleanUp crude routine for extracting the brain from
% segmented images ('no', 'light', 'thorough')
% default: 'light'
% warpingRegularization regularization for the different terms of the
% registration
% default: [0 0.001 0.5 0.05 0.2]
% affineRegularisation regularisation for the initial affine registration
% of the image to the tissue probability maps (i.e.
% into standard space)
% for example, the default ICBM templates are slightly
% larger than typical brains, so greater zooms are
% likely to be needed
% default: ICBM space template - European brains
% smoothnessFwhm fudge factor to account for correlation between
% neighbouring voxels (in mm)
% default: 0 (for MRI)
% samplingDistance approximate distance between sampled points when
% estimating the model parameters (in mm)
% default: 3
%
% Parameters for high-dim application:
%
% representationIndexArray: a selection (e.g. {'t', 1}) which is then
% applied to obtain one nD image, where all
% dimensions > 3 are treated as additional
% channels
% default representationIndexArray: all
% dimensions not the imageSpaceDims
% imageSpaceDims cell array of three dimLabels defining the
% dimensions that define the physical space
% the image is in
% default imageSpaceDims: {'x', 'y', 'z'}
% splitComplex 'ri' or 'mp'
% If the data are complex numbers, the real and
% imaginary parts or the magnitude and phase are
% segmented separately.
% default: mp (magnitude and phase)
% Typically, segmenting the magnitude and
% applying the result to the phase data makes
% most sense; otherwise, using the real and
% imaginary parts, more global phase changes
% would impact the estimation
%
% OUT
% biasFieldCorrected bias-field-corrected images
% tissueProbMaps (optional) cell(nTissues,1) of 3D MrImages
% containing the tissue probability maps in the
% respective order as volumes,
% deformationFields (optional) cell(nDeformationFieldDirections,1)
% if deformationFieldDirection is 'both', this cell
% contains the forward deformation field in the first
% entry, and the backward deformation field in the
% second cell entry; otherwise, a cell with only one
% element is returned
% biasField (optional) bias field
%
% EXAMPLE
% [biasFieldCorrected, tissueProbMaps, deformationFields, biasField] =
% Y.segment();
%
% for 7T images, stronger non-uniformity is expected
% [biasFieldCorrected, tissueProbMaps, deformationFields, biasField] = ...
% m.segment('biasRegularisation', 1e-4, 'biasFWHM', 18, ...
% 'cleanUp', 2, 'samplingDistance', 2);
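%
%   a sketch only (an assumption, not part of the original help): for an nD
%   image with an additional, hypothetical 'echo' dimension, one selection per
%   echo can be passed via representationIndexArray so that each echo is
%   segmented separately
%   [biasFieldCorrected, tissueProbMaps] = Y.segment( ...
%       'representationIndexArray', {{'echo', 1}, {'echo', 2}}, ...
%       'imageSpaceDims', {'x', 'y', 'z'});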
%
% See also MrImage spm_preproc MrImageSpm4D.segment
% Author: Saskia Bollmann & Lars Kasper
% Created: 2019-12-23
% Copyright (C) 2019 Institute for Biomedical Engineering
% University of Zurich and ETH Zurich
%
% This file is part of the TAPAS UniQC Toolbox, which is released
% under the terms of the GNU General Public License (GPL), version 3.
% You can redistribute it and/or modify it under the terms of the GPL
% (either version 3 or, at your option, any later version).
% For further details, see the file COPYING or
% <http://www.gnu.org/licenses/>.
% spm parameters (details above)
spmDefaults.tissueTypes = {'WM', 'GM', 'CSF'};
spmDefaults.mapOutputSpace = 'native';
spmDefaults.deformationFieldDirection = 'none';
spmDefaults.saveBiasField = 0;
spmDefaults.biasRegularisation = 0.001;
spmDefaults.biasFWHM = 60;
spmDefaults.fileTPM = [];
spmDefaults.mrfParameter = 1;
spmDefaults.cleanUp = 'light';
spmDefaults.warpingRegularization = [0 0.001 0.5 0.05 0.2];
spmDefaults.affineRegularisation = 'mni';
spmDefaults.smoothnessFwhm = 0;
spmDefaults.samplingDistance = 3;
[spmParameters, unusedVarargin] = tapas_uniqc_propval(varargin, spmDefaults);
% for split/apply functionality
methodParameters = {spmParameters};
% use cases: abs of complex, single index on many!
defaults.representationIndexArray = {{}}; % default: use all
defaults.imageSpaceDims = {};
defaults.splitComplex = 'mp';
args = tapas_uniqc_propval(unusedVarargin, defaults);
tapas_uniqc_strip_fields(args);
% set imageSpaceDims
if isempty(imageSpaceDims)
imageSpaceDims = {'x','y','z'};
end
% check whether real/complex
isReal = isreal(this);
if isReal
splitComplexImage = this.copyobj();
else
splitComplexImage = this.split_complex(splitComplex);
end
% prepare output container with right size
varargoutForMrImageSpm4D = cell(numel(representationIndexArray),nargout-1);
hasArgOut = nargout > 1;
hasBiasField = nargout > 3;
for iRepresentation = 1:numel(representationIndexArray)
% apply selection
inputSegment = splitComplexImage.select(...
representationIndexArray{iRepresentation});
% Merge all n>3 dims, which are not part of the representationIndexArray, into 4D array
mergeDimLabels = setdiff(inputSegment.dimInfo.dimLabels, imageSpaceDims);
% additional channels need to be in the t dimensions so they become part of
% the same nifti file
% empty mergeDimLabels just return the original object, e.g. for true 3D
% images
[mergedImage, newDimLabel] = ...
inputSegment.merge(mergeDimLabels, 'dimLabels', 't');
if hasArgOut
[biasFieldCorrected{iRepresentation}, varargoutForMrImageSpm4D{iRepresentation, :}] = ...
mergedImage.apply_spm_method_per_4d_split(@segment, ...
'methodParameters', methodParameters);
else
biasFieldCorrected{iRepresentation} = mergedImage.apply_spm_method_per_4d_split(@segment, ...
'methodParameters', methodParameters);
end
% un-do merge operation using combine
if ~isempty(mergeDimLabels)
% not necessary for 4D images - just reset dimInfo
if numel(mergeDimLabels) == 1
origDimInfo = inputSegment.dimInfo;
biasFieldCorrected{iRepresentation}.dimInfo = origDimInfo;
% also for the bias fields
if hasBiasField
varargoutForMrImageSpm4D{iRepresentation, 3}.dimInfo = origDimInfo;
end
else
% create original dimInfo per split
origDimInfo = inputSegment.dimInfo.split(mergeDimLabels);
% un-do reshape
split_array = biasFieldCorrected{iRepresentation}.split('splitDims', newDimLabel);
split_array = reshape(split_array, size(origDimInfo));
% add original dimInfo
for nSplits = 1:numel(split_array)
split_array{nSplits}.dimInfo = origDimInfo{nSplits};
end
% and combine
biasFieldCorrected{iRepresentation} = split_array{1}.combine(split_array);
% same for the bias fields
if hasBiasField
% un-do reshape
split_array = varargoutForMrImageSpm4D{iRepresentation, 3}{1}.split('splitDims', newDimLabel);
split_array = reshape(split_array, size(origDimInfo));
% add original dimInfo
for nSplits = 1:numel(split_array)
split_array{nSplits}.dimInfo = origDimInfo{nSplits};
end
varargoutForMrImageSpm4D{iRepresentation, 3} = {split_array{1}.combine(split_array)};
end
end
end
if ~isReal
% un-do complex split
biasFieldCorrected{iRepresentation} = biasFieldCorrected{iRepresentation}.combine_complex();
end
% add representation index back to TPMs and deformation field to
% combine them later
if ~isempty(representationIndexArray{iRepresentation})
for nOut = 1:nargout-2
addDim = varargoutForMrImageSpm4D{iRepresentation, nOut}{1}.dimInfo.nDims + 1;
dimLabels = representationIndexArray{iRepresentation}{1};
% only pick the first one
samplingPoints = representationIndexArray{iRepresentation}{2}(1);
for nClasses = 1:numel(varargoutForMrImageSpm4D{iRepresentation, nOut})
varargoutForMrImageSpm4D{iRepresentation, nOut}{nClasses}.dimInfo.add_dims(...
addDim, 'dimLabels', dimLabels, ...
'samplingPoints', samplingPoints);
end
end
end
end
% combine bias field corrected
biasFieldCorrected = biasFieldCorrected{1}.combine(biasFieldCorrected);
% combine varargout
for nOut = 1:nargout-1
toBeCombined = varargoutForMrImageSpm4D(:, nOut);
% the TPMs are cell-arrays of images, so we need to combine them per
% tissue class
for nClasses = 1:numel(toBeCombined{1})
for nCells = 1:size(toBeCombined, 1)
thisCombine(nCells) = toBeCombined{nCells}(nClasses);
end
combinedImage{nClasses, 1} = thisCombine{1}.combine(thisCombine);
end
varargout{1, nOut} = combinedImage;
clear toBeCombined thisCombine combinedImage;
end
end
|
\section{Systematic Uncertainty}
\label{sec:Sys uncer}
Taking systematic uncertainties into account in the \PhiTau maximum likelihood (ML) fit is necessary for the accuracy of the fit result. Four kinds of systematic uncertainties are included in this analysis: experimental uncertainties, analysis-specific uncertainties, data-driven fake-estimation uncertainties and theoretical uncertainties.
Experimental uncertainties comprise the uncertainties on the physics objects of interest (muon, electron, tau and jet uncertainties), the missing transverse energy (MET) uncertainties and the luminosity uncertainties.
Analysis-specific uncertainties, by contrast, are directly related to the \phistarCP observable reconstruction and the decay mode classification: tau decay mode classification uncertainties and \phistarCP shape uncertainties evaluated per \ztt RNN $\tau$-ID score bin.
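As a generic illustration only (the concrete construction is analysis-specific), such uncertainties typically enter the ML fit as constrained nuisance parameters $\vec{\theta}$, e.g.\ in a binned-likelihood picture
\begin{equation}
  L(\PhiTau, \vec{\theta}) \;=\; \prod_{i} P\big(n_i \,\big|\, \mu_i(\PhiTau, \vec{\theta})\big) \,\prod_{j} C_j(\theta_j),
\end{equation}
where each constraint term $C_j$ encodes one of the uncertainty sources listed above and the nuisance parameters are profiled when extracting \PhiTau.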
In Sec.~\ref{sec:preliminary}, the impact of these systematic uncertainties on our parameter of interest (POI), \PhiTau, is studied in more detail.
|
open import Data.List using () renaming
( List to ♭List ; [] to []♭ ; _∷_ to _∷♭_ ; _++_ to _++♭_
; length to length♭ ; map to ♭map )
open import Data.Maybe using ( Maybe ; just ; nothing )
open import Data.Nat using ( zero ; suc ) renaming ( _+_ to _+♭_ )
open import Function using () renaming ( _∘′_ to _∘_ )
open import Relation.Binary.PropositionalEquality using
( _≡_ ; refl ; sym ; subst ; cong ; cong₂ )
open import AssocFree.Util using ( ≡-relevant ) renaming ( lookup to lookup♭ )
open import AssocFree.DNat using ( ℕ ; ♯0 ; _+_ ; iso-resp-+ ) renaming
( ♭ to ♭ⁿ ; ♯ to ♯ⁿ ; iso to isoⁿ ; iso⁻¹ to isoⁿ⁻¹ )
module AssocFree.DList where
open Relation.Binary.PropositionalEquality.≡-Reasoning using ( begin_ ; _≡⟨_⟩_ ; _∎ )
infixr 4 _++_
infixl 3 _≪_
infixr 2 _≫_
cat : ∀ {A : Set} → ♭List A → ♭List A → ♭List A
cat = _++♭_
cat-assoc : ∀ {A} (as bs : ♭List A) → (cat (cat as bs) ≡ ((cat as) ∘ (cat bs)))
cat-assoc []♭ bs = refl
cat-assoc (a ∷♭ as) bs = cong (_∘_ (_∷♭_ a)) (cat-assoc as bs)
cat-unit : ∀ {A} (as : ♭List A) → cat as []♭ ≡ as
cat-unit []♭ = refl
cat-unit (a ∷♭ as) = cong (_∷♭_ a) (cat-unit as)
-- List A is isomorphic to ♭List A, but has associativity of concatenation
-- up to beta reduction, not just up to propositional equality.
-- We keep track of the length explicitly to ensure that length is a monoid
-- homomorphism up to beta-reduction, not just up to ≡.
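-- Intuitively (a sketch of the idea): List is a difference-list representation,
-- i.e. a list is stored as the function (cat as), so _++_ below is just function
-- composition.  For example
--   ♭ (♯ as ++ ♯ bs)  unfolds to  cat as (cat bs []♭)  =  as ++♭ (bs ++♭ []♭)
-- by computation alone, which is why ++-assoc below holds by refl.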
record List (A : Set) : Set where
constructor list
field
length : ℕ
fun : ♭List A → ♭List A
.length✓ : length ≡ ♯ⁿ (length♭ (fun []♭))
.fun✓ : fun ≡ cat (fun []♭)
open List public
-- Convert any ♭List into a List and back
♯ : ∀ {A} → ♭List A → List A
♯ as = list
(♯ⁿ (length♭ as))
(cat as)
(cong (♯ⁿ ∘ length♭) (sym (cat-unit as)))
(cong cat (sym (cat-unit as)))
♭ : ∀ {A} → List A → ♭List A
♭ (list n f n✓ f✓) = f []♭
-- Map
map : ∀ {A B} → (A → B) → List A → List B
map f as = ♯ (♭map f (♭ as))
-- Empty list
[] : ∀ {A} → List A
[] = ♯ []♭
-- Singleton
[_] : ∀ {A} → A → List A
[ a ] = ♯ (a ∷♭ []♭)
-- Concatenation
length♭-resp-++ : ∀ {A : Set} (as bs : ♭List A) → length♭ (as ++♭ bs) ≡ (length♭ as +♭ length♭ bs)
length♭-resp-++ []♭ bs = refl
length♭-resp-++ (a ∷♭ as) bs = cong (_+♭_ 1) (length♭-resp-++ as bs)
_++_ : ∀ {A} → List A → List A → List A
list m f m✓ f✓ ++ list n g n✓ g✓ = list (m + n) (f ∘ g) m+n✓ f∘g✓ where
.m+n✓ : (m + n) ≡ ♯ⁿ (length♭ (f (g []♭)))
m+n✓ = begin
m + n
≡⟨ cong₂ _+_ m✓ n✓ ⟩
♯ⁿ (length♭ (f []♭)) + ♯ⁿ (length♭ (g []♭))
≡⟨ sym (isoⁿ (♯ⁿ (length♭ (f []♭)) + ♯ⁿ (length♭ (g []♭)))) ⟩
♯ⁿ (♭ⁿ (♯ⁿ (length♭ (f []♭)) + ♯ⁿ (length♭ (g []♭))))
≡⟨ cong ♯ⁿ (iso-resp-+ (♯ⁿ (length♭ (f []♭))) (♯ⁿ (length♭ (g []♭)))) ⟩
♯ⁿ (♭ⁿ (♯ⁿ (length♭ (f []♭))) +♭ ♭ⁿ (♯ⁿ (length♭ (g []♭))))
≡⟨ cong ♯ⁿ (cong₂ _+♭_ (isoⁿ⁻¹ (length♭ (f []♭))) (isoⁿ⁻¹ (length♭ (g []♭)))) ⟩
♯ⁿ (length♭ (f []♭) +♭ length♭ (g []♭))
≡⟨ cong ♯ⁿ (sym (length♭-resp-++ (f []♭) (g []♭))) ⟩
♯ⁿ (length♭ (f []♭ ++♭ g []♭))
≡⟨ cong (λ X → ♯ⁿ (length♭ (X (g []♭)))) (sym f✓) ⟩
♯ⁿ (length♭ (f (g []♭)))
∎
.f∘g✓ : (f ∘ g) ≡ cat (f (g []♭))
f∘g✓ = begin
f ∘ g
≡⟨ cong₂ _∘_ f✓ g✓ ⟩
cat (f []♭) ∘ cat (g []♭)
≡⟨ sym (cat-assoc (f []♭) (g []♭)) ⟩
cat (f []♭ ++♭ g []♭)
≡⟨ cong (λ X → cat (X (g []♭))) (sym f✓) ⟩
cat (f (g []♭))
∎
-- Cons
_∷_ : ∀ {A} → A → List A → List A
a ∷ as = [ a ] ++ as
-- Isomorphism between List and ♭List, which respects ++ and length
inject : ∀ {A} (as bs : List A) → (length as ≡ length bs) → (fun as ≡ fun bs) → (as ≡ bs)
inject (list m f m✓ f✓) (list .m .f n✓ g✓) refl refl = refl
iso : ∀ {A} (as : List A) → ♯ (♭ as) ≡ as
iso as = inject (♯ (♭ as)) as (sym (≡-relevant (length✓ as))) (sym (≡-relevant (fun✓ as)))
iso⁻¹ : ∀ {A} (as : ♭List A) → ♭ (♯ as) ≡ as
iso⁻¹ = cat-unit
iso-resp-++ : ∀ {A} (as bs : List A) → ♭ (as ++ bs) ≡ (♭ as ++♭ ♭ bs)
iso-resp-++ (list m f m✓ f✓) (list n g n✓ g✓) = cong (λ X → X (g []♭)) (≡-relevant f✓)
iso-resp-length : ∀ {A} (as : List A) → ♭ⁿ (length as) ≡ length♭ (♭ as)
iso-resp-length as =
begin
♭ⁿ (length as)
≡⟨ cong (♭ⁿ ∘ length) (sym (iso as)) ⟩
♭ⁿ (♯ⁿ (length♭ (♭ as)))
≡⟨ isoⁿ⁻¹ (length♭ (♭ as)) ⟩
length♭ (♭ as)
∎
-- Associativity and units on the nose
++-assoc : ∀ {A} (as bs cs : List A) → ((as ++ bs) ++ cs) ≡ (as ++ (bs ++ cs))
++-assoc as bs cs = refl
++-unit₁ : ∀ {A} (as : List A) → (([] ++ as) ≡ as)
++-unit₁ as = refl
++-unit₂ : ∀ {A} (as : List A) → ((as ++ []) ≡ as)
++-unit₂ as = refl
-- Length is a monoid homomorphism on the nose
length-resp-++ : ∀ {A} (as bs : List A) → (length (as ++ bs) ≡ (length as + length bs))
length-resp-++ as bs = refl
length-resp-[] : ∀ {A} → (length {A} [] ≡ ♯0)
length-resp-[] = refl
-- Lookup
lookup : ∀ {A} → List A → ℕ → Maybe A
lookup as n = lookup♭ (♭ as) (♭ⁿ n)
lookup♭₁ : ∀ {A : Set} {a : A} as bs n →
(lookup♭ as n ≡ just a) → (lookup♭ (as ++♭ bs) n ≡ just a)
lookup♭₁ []♭ bs n ()
lookup♭₁ (a ∷♭ as) bs zero refl = refl
lookup♭₁ (a ∷♭ as) bs (suc n) as[n]≡a = lookup♭₁ as bs n as[n]≡a
lookup♭₂ : ∀ {A : Set} {a : A} as bs n →
(lookup♭ bs n ≡ just a) → (lookup♭ (as ++♭ bs) (length♭ as +♭ n) ≡ just a)
lookup♭₂ []♭ bs n bs[n]≡a = bs[n]≡a
lookup♭₂ (a ∷♭ as) bs n bs[n]≡a = lookup♭₂ as bs n bs[n]≡a
lookup₁ : ∀ {A} {a : A} as bs n →
(lookup as n ≡ just a) → (lookup (as ++ bs) n ≡ just a)
lookup₁ {A} {a} as bs n as[n]≡a =
begin
lookup♭ (♭ (as ++ bs)) (♭ⁿ n)
≡⟨ cong (λ X → lookup♭ X (♭ⁿ n)) (iso-resp-++ as bs) ⟩
lookup♭ (♭ as ++♭ ♭ bs) (♭ⁿ n)
≡⟨ lookup♭₁ (♭ as) (♭ bs) (♭ⁿ n) as[n]≡a ⟩
just a
∎
lookup₂ : ∀ {A} {a : A} as bs n →
(lookup bs n ≡ just a) → (lookup (as ++ bs) (length as + n) ≡ just a)
lookup₂ {A} {a} as bs n bs[n]≡a =
begin
lookup♭ (♭ (as ++ bs)) (♭ⁿ (length as + n))
≡⟨ cong₂ lookup♭ (iso-resp-++ as bs) (iso-resp-+ (length as) n) ⟩
lookup♭ (♭ as ++♭ ♭ bs) (♭ⁿ (length as) +♭ ♭ⁿ n)
≡⟨ cong (λ X → lookup♭ (♭ as ++♭ ♭ bs) (X +♭ ♭ⁿ n)) (iso-resp-length as) ⟩
lookup♭ (♭ as ++♭ ♭ bs) (length♭ (♭ as) +♭ ♭ⁿ n)
≡⟨ lookup♭₂ (♭ as) (♭ bs) (♭ⁿ n) bs[n]≡a ⟩
just a
∎
-- Membership
record _∈_ {A} (a : A) (as : List A) : Set where
constructor _,_
field
index : ℕ
.index✓ : lookup as index ≡ just a
open _∈_ public
-- Extending membership on the left and right
_≪_ : ∀ {A} {a : A} {as} → (a ∈ as) → ∀ bs → (a ∈ (as ++ bs))
_≪_ {A} {a} {as} (n , n✓) bs = (n , lookup₁ as bs n n✓)
_≫_ : ∀ {A} {a : A} as {bs} → (a ∈ bs) → (a ∈ (as ++ bs))
_≫_ as {bs} (n , n✓) = ((length as + n) , lookup₂ as bs n n✓)
-- Membership extensions have units
≪-unit : ∀ {A} {a : A} as (a∈as : a ∈ as) →
(a∈as ≪ []) ≡ a∈as
≪-unit as a∈as = refl
≫-unit : ∀ {A} {a : A} as (a∈as : a ∈ as) →
([] ≫ a∈as) ≡ a∈as
≫-unit as a∈as = refl
-- Membership extension is associative on the nose
≪-assoc : ∀ {A} {a : A} as bs cs (a∈as : a ∈ as) →
(a∈as ≪ bs ≪ cs) ≡ (a∈as ≪ bs ++ cs)
≪-assoc as bs cs a∈as = refl
≫-≪-assoc : ∀ {A} {a : A} as bs cs (a∈bs : a ∈ bs) →
((as ≫ a∈bs) ≪ cs) ≡ (as ≫ (a∈bs ≪ cs))
≫-≪-assoc as bs cs a∈bs = refl
≫-assoc : ∀ {A} {a : A} as bs cs (a∈cs : a ∈ cs) →
  ((as ++ bs) ≫ a∈cs) ≡ (as ≫ bs ≫ a∈cs)
≫-assoc as bs cs a∈cs = refl
-- Index is a monoid homomorphism on the nose
≪-index : ∀ {A} {a : A} as bs (a∈as : a ∈ as) →
index (a∈as ≪ bs) ≡ index a∈as
≪-index as bs a∈as = refl
≫-index : ∀ {A} {a : A} as bs (a∈bs : a ∈ bs) →
index (as ≫ a∈bs) ≡ (length as + index a∈bs)
≫-index as bs a∈bs = refl
-- Index is injective
index-inj : ∀ {A} (a : A) {as} (a∈as₁ a∈as₂ : a ∈ as) →
(index a∈as₁ ≡ index a∈as₂) → (a∈as₁ ≡ a∈as₂)
index-inj a (m , m✓) (.m , n✓) refl = refl
-- Membership of empty list
absurd : ∀ {β} {A} {B : Set β} {a : A} → (a ∈ []) → B
absurd (n , ())
-- Membership of singleton
lookup-uniq : ∀ {A} {a : A} {m} {b} n → (♯ⁿ n ≡ m) →
(lookup (b ∷ []) m ≡ just a) → (b ≡ a)
lookup-uniq zero refl refl = refl
lookup-uniq (suc n) refl ()
uniq : ∀ {A} {a b : A} → (a ∈ [ b ]) → (b ≡ a)
uniq (n , n✓) = lookup-uniq (♭ⁿ n) (isoⁿ n) (≡-relevant n✓)
singleton : ∀ {A} (a : A) → (a ∈ [ a ])
singleton a = (♯0 , refl)
-- Case on membership
data Case {A} a (as bs : List A) : Set where
inj₁ : (a∈as : a ∈ as) → Case a as bs
inj₂ : (a∈bs : a ∈ bs) → Case a as bs
-- Case function
_⋙_ : ∀ {A a} as {bs cs} → Case {A} a bs cs → Case a (as ++ bs) cs
as ⋙ inj₁ a∈bs = inj₁ (as ≫ a∈bs)
as ⋙ inj₂ a∈cs = inj₂ a∈cs
lookup♭-case : ∀ {A} {a : A} cs ds n →
.(lookup♭ (cs ++♭ ds ++♭ []♭) (n +♭ 0) ≡ just a) → (Case a (♯ cs) (♯ ds))
lookup♭-case []♭ ds n n✓ = inj₂ (♯ⁿ n , n✓)
lookup♭-case (c ∷♭ cs) ds zero n✓ = inj₁ (♯0 , n✓)
lookup♭-case (c ∷♭ cs) ds (suc n) n✓ = [ c ] ⋙ lookup♭-case cs ds n n✓
lookup-case : ∀ {A a as bs m} cs ds n →
.(lookup (as ++ bs) m ≡ just a) → (♯ cs ≡ as) → (♯ ds ≡ bs) → (♯ⁿ n ≡ m) →
Case {A} a as bs
lookup-case cs ds n n✓ refl refl refl = lookup♭-case cs ds n n✓
case : ∀ {A a} as bs → (a ∈ (as ++ bs)) → Case {A} a as bs
case {A} {a} as bs (n , n✓) =
lookup-case {A} {a} {as} {bs} {n} (♭ as) (♭ bs) (♭ⁿ n) n✓ (iso as) (iso bs) (isoⁿ n)
-- Beta for case with ≪
lookup♭-case-≪ : ∀ {A} {a : A} cs ds n .n✓₁ .n✓₂ →
lookup♭-case {A} {a} cs ds n n✓₂ ≡ inj₁ (♯ⁿ n , n✓₁)
lookup♭-case-≪ []♭ ds n () n✓₂
lookup♭-case-≪ (c ∷♭ cs) ds zero n✓₁ n✓₂ = refl
lookup♭-case-≪ (c ∷♭ cs) ds (suc n) n✓₁ n✓₂ = cong (_⋙_ [ c ]) (lookup♭-case-≪ cs ds n n✓₁ n✓₂)
lookup-case-≪ : ∀ {A} {a : A} {as bs m} cs ds n .m✓₁ .m✓₂ cs≡as ds≡bs n≡m →
lookup-case {A} {a} {as} {bs} {m} cs ds n m✓₂ cs≡as ds≡bs n≡m
≡ inj₁ (m , m✓₁)
lookup-case-≪ cs ds n m✓₁ m✓₂ refl refl refl = lookup♭-case-≪ cs ds n m✓₁ m✓₂
case-≪ : ∀ {A a as} (a∈as : a ∈ as) bs → (case {A} {a} as bs (a∈as ≪ bs) ≡ inj₁ a∈as)
case-≪ {A} {a} {as} (n , n✓) bs =
lookup-case-≪ {A} {a} {as} {bs} {n} (♭ as) (♭ bs) (♭ⁿ n) n✓
(lookup₁ as bs n n✓) (iso as) (iso bs) (isoⁿ n)
-- Beta for case with ≫
lookup♭-case-≫ : ∀ {A} {a : A} cs ds n₁ n₂ .n✓₁ .n✓₂ → (n₂ ≡ n₁) →
lookup♭-case {A} {a} cs ds (length♭ cs +♭ n₂) n✓₂ ≡ inj₂ (♯ⁿ n₁ , n✓₁)
lookup♭-case-≫ []♭ ds n .n n✓₁ n✓₂ refl = refl
lookup♭-case-≫ (c ∷♭ cs) ds n .n n✓₁ n✓₂ refl = cong (_⋙_ [ c ]) (lookup♭-case-≫ cs ds n n n✓₁ n✓₂ refl)
lookup-case-≫ : ∀ {A} {a : A} {as bs m l+m} cs ds n .m✓₁ .m✓₂ cs≡as ds≡bs l+n≡l+m → (♯ⁿ n ≡ m) →
lookup-case {A} {a} {as} {bs} {l+m} cs ds (♭ⁿ (length as + m)) m✓₂ cs≡as ds≡bs l+n≡l+m
≡ inj₂ (m , m✓₁)
lookup-case-≫ cs ds n m✓₁ m✓₂ refl refl refl refl =
lookup♭-case-≫ cs ds n (n +♭ 0) m✓₁ m✓₂ (isoⁿ⁻¹ n)
case-≫ : ∀ {A a} as {bs} (a∈bs : a ∈ bs) → (case {A} {a} as bs (as ≫ a∈bs) ≡ inj₂ a∈bs)
case-≫ {A} {a} as {bs} (n , n✓) =
lookup-case-≫ {A} {a} {as} {bs} {n} {length as + n} (♭ as) (♭ bs) (♭ⁿ n) n✓
(lookup₂ as bs n n✓) (iso as) (iso bs) (isoⁿ (length as + n)) (isoⁿ n)
-- A variant of case which remembers its argument
data Case+ {A} {a} (as bs : List A) : (a ∈ (as ++ bs)) → Set where
inj₁ : (a∈as : a ∈ as) → Case+ as bs (a∈as ≪ bs)
inj₂ : (a∈bs : a ∈ bs) → Case+ as bs (as ≫ a∈bs)
_⋙+_ : ∀ {A a} as {bs cs} {a∈bs++cs} →
Case+ {A} {a} bs cs a∈bs++cs → Case+ (as ++ bs) cs (as ≫ a∈bs++cs)
as ⋙+ inj₁ a∈bs = inj₁ (as ≫ a∈bs)
as ⋙+ inj₂ a∈cs = inj₂ a∈cs
lookup♭-case+ : ∀ {A a} cs ds n .n✓ → (Case+ {A} {a} (♯ cs) (♯ ds) (♯ⁿ n , n✓))
lookup♭-case+ []♭ ds n n✓ = inj₂ (♯ⁿ n , n✓)
lookup♭-case+ (c ∷♭ cs) ds zero n✓ = inj₁ (♯0 , n✓)
lookup♭-case+ (c ∷♭ cs) ds (suc n) n✓ = [ c ] ⋙+ lookup♭-case+ cs ds n n✓
lookup-case+ : ∀ {A a as bs m} cs ds n .m✓ → (♯ cs ≡ as) → (♯ ds ≡ bs) → (♯ⁿ n ≡ m) →
Case+ {A} {a} as bs (m , m✓)
lookup-case+ cs ds n n✓ refl refl refl = lookup♭-case+ cs ds n n✓
case+ : ∀ {A a} as bs a∈as++bs → Case+ {A} {a} as bs a∈as++bs
case+ {A} {a} as bs (n , n✓) =
lookup-case+ {A} {a} {as} {bs} {n} (♭ as) (♭ bs) (♭ⁿ n) n✓ (iso as) (iso bs) (isoⁿ n)
-- Inverse of case
case⁻¹ : ∀ {A a as bs} → Case {A} a as bs → (a ∈ (as ++ bs))
case⁻¹ {A} {a} {as} {bs} (inj₁ a∈as) = (a∈as ≪ bs)
case⁻¹ {A} {a} {as} {bs} (inj₂ a∈bs) = (as ≫ a∈bs)
case-iso : ∀ {A a} as bs (a∈as++bs : a ∈ (as ++ bs)) →
case⁻¹ (case {A} {a} as bs a∈as++bs) ≡ a∈as++bs
case-iso as bs a∈as++bs with case+ as bs a∈as++bs
case-iso as bs .(a∈as ≪ bs) | inj₁ a∈as = cong case⁻¹ (case-≪ a∈as bs)
case-iso as bs .(as ≫ a∈bs) | inj₂ a∈bs = cong case⁻¹ (case-≫ as a∈bs)
case-iso⁻¹ : ∀ {A a} as bs (a∈as++bs : Case a as bs) →
case {A} {a} as bs (case⁻¹ a∈as++bs) ≡ a∈as++bs
case-iso⁻¹ as bs (inj₁ a∈as) = case-≪ a∈as bs
case-iso⁻¹ as bs (inj₂ a∈bs) = case-≫ as a∈bs
-- ⋙ distributes through case
case-⋙ : ∀ {A} {a : A} as bs cs (a∈bs++cs : a ∈ (bs ++ cs)) →
(as ⋙ case bs cs a∈bs++cs) ≡ (case (as ++ bs) cs (as ≫ a∈bs++cs))
case-⋙ as bs cs a∈bs++cs with case+ bs cs a∈bs++cs
case-⋙ as bs cs .(a∈bs ≪ cs) | inj₁ a∈bs =
begin
as ⋙ case bs cs (a∈bs ≪ cs)
≡⟨ cong (_⋙_ as) (case-≪ a∈bs cs) ⟩
inj₁ (as ≫ a∈bs)
≡⟨ sym (case-≪ (as ≫ a∈bs) cs) ⟩
case (as ++ bs) cs (as ≫ a∈bs ≪ cs)
∎
case-⋙ as bs cs .(bs ≫ a∈cs) | inj₂ a∈cs =
begin
as ⋙ case bs cs (bs ≫ a∈cs)
≡⟨ cong (_⋙_ as) (case-≫ bs a∈cs) ⟩
inj₂ a∈cs
≡⟨ sym (case-≫ (as ++ bs) a∈cs) ⟩
case (as ++ bs) cs (as ≫ bs ≫ a∈cs)
∎
-- Three-way case, used for proving associativity properties
data Case₃ {A} (a : A) as bs cs : Set where
inj₁ : (a ∈ as) → Case₃ a as bs cs
inj₂ : (a ∈ bs) → Case₃ a as bs cs
inj₃ : (a ∈ cs) → Case₃ a as bs cs
case₂₃ : ∀ {A} {a : A} as bs cs → (Case a bs cs) → (Case₃ a as bs cs)
case₂₃ as bs cs (inj₁ a∈bs) = inj₂ a∈bs
case₂₃ as bs cs (inj₂ a∈cs) = inj₃ a∈cs
case₁₃ : ∀ {A} {a : A} as bs cs → (Case a as (bs ++ cs)) → (Case₃ a as bs cs)
case₁₃ as bs cs (inj₁ a∈as) = inj₁ a∈as
case₁₃ as bs cs (inj₂ a∈bs++cs) = case₂₃ as bs cs (case bs cs a∈bs++cs)
case₃ : ∀ {A} {a : A} as bs cs → (a ∈ (as ++ bs ++ cs)) → (Case₃ a as bs cs)
case₃ as bs cs a∈as++bs++cs = case₁₃ as bs cs (case as (bs ++ cs) a∈as++bs++cs)
-- Associating case₃ to the left gives case
caseˡ : ∀ {A} {a : A} {as bs cs} → Case₃ a as bs cs → Case a (as ++ bs) cs
caseˡ {bs = bs} (inj₁ a∈as) = inj₁ (a∈as ≪ bs)
caseˡ {as = as} (inj₂ a∈bs) = inj₁ (as ≫ a∈bs)
caseˡ (inj₃ a∈cs) = inj₂ a∈cs
caseˡ₂₃ : ∀ {A a} as bs cs {a∈bs++cs} → Case+ {A} {a} bs cs a∈bs++cs →
caseˡ (case₂₃ as bs cs (case bs cs a∈bs++cs)) ≡ case (as ++ bs) cs (as ≫ a∈bs++cs)
caseˡ₂₃ as bs cs (inj₁ a∈bs) =
begin
caseˡ (case₂₃ as bs cs (case bs cs (a∈bs ≪ cs)))
≡⟨ cong (caseˡ ∘ case₂₃ as bs cs) (case-≪ a∈bs cs) ⟩
inj₁ (as ≫ a∈bs)
≡⟨ sym (case-≪ (as ≫ a∈bs) cs) ⟩
case (as ++ bs) cs (as ≫ a∈bs ≪ cs)
∎
caseˡ₂₃ as bs cs (inj₂ a∈cs) =
begin
caseˡ (case₂₃ as bs cs (case bs cs (bs ≫ a∈cs)))
≡⟨ cong (caseˡ ∘ case₂₃ as bs cs) (case-≫ bs a∈cs) ⟩
inj₂ a∈cs
≡⟨ sym (case-≫ (as ++ bs) a∈cs) ⟩
case (as ++ bs) cs (as ≫ bs ≫ a∈cs)
∎
caseˡ₁₃ : ∀ {A a} as bs cs {a∈as++bs++cs} → Case+ {A} {a} as (bs ++ cs) a∈as++bs++cs →
caseˡ (case₁₃ as bs cs (case as (bs ++ cs) a∈as++bs++cs)) ≡ case (as ++ bs) cs a∈as++bs++cs
caseˡ₁₃ as bs cs (inj₁ a∈as) =
begin
caseˡ (case₁₃ as bs cs (case as (bs ++ cs) (a∈as ≪ bs ≪ cs)))
≡⟨ cong (caseˡ ∘ case₁₃ as bs cs) (case-≪ a∈as (bs ++ cs)) ⟩
inj₁ (a∈as ≪ bs)
≡⟨ sym (case-≪ (a∈as ≪ bs) cs) ⟩
case (as ++ bs) cs (a∈as ≪ bs ≪ cs)
∎
caseˡ₁₃ as bs cs (inj₂ a∈bs++cs) =
begin
caseˡ (case₁₃ as bs cs (case as (bs ++ cs) (as ≫ a∈bs++cs)))
≡⟨ cong (caseˡ ∘ case₁₃ as bs cs) (case-≫ as a∈bs++cs) ⟩
caseˡ (case₂₃ as bs cs (case bs cs a∈bs++cs))
≡⟨ caseˡ₂₃ as bs cs (case+ bs cs a∈bs++cs) ⟩
case (as ++ bs) cs (as ≫ a∈bs++cs)
∎
caseˡ₃ : ∀ {A} {a : A} as bs cs (a∈as++bs++cs : a ∈ (as ++ bs ++ cs)) →
caseˡ (case₃ as bs cs a∈as++bs++cs) ≡ case (as ++ bs) cs a∈as++bs++cs
caseˡ₃ as bs cs a∈as++bs++cs = caseˡ₁₃ as bs cs (case+ as (bs ++ cs) a∈as++bs++cs)
-- Associating case₃ to the right gives case
caseʳ : ∀ {A} {a : A} {as bs cs} → Case₃ a as bs cs → Case a as (bs ++ cs)
caseʳ (inj₁ a∈as) = inj₁ a∈as
caseʳ {cs = cs} (inj₂ a∈bs) = inj₂ (a∈bs ≪ cs)
caseʳ {bs = bs} (inj₃ a∈cs) = inj₂ (bs ≫ a∈cs)
caseʳ₂₃ : ∀ {A a} as bs cs {a∈bs++cs} → Case+ {A} {a} bs cs a∈bs++cs →
caseʳ (case₂₃ as bs cs (case bs cs a∈bs++cs)) ≡ inj₂ a∈bs++cs
caseʳ₂₃ as bs cs (inj₁ a∈bs) = cong (caseʳ ∘ case₂₃ as bs cs) (case-≪ a∈bs cs)
caseʳ₂₃ as bs cs (inj₂ a∈cs) = cong (caseʳ ∘ case₂₃ as bs cs) (case-≫ bs a∈cs)
caseʳ₁₃ : ∀ {A a} as bs cs {a∈as++bs++cs} → Case+ {A} {a} as (bs ++ cs) a∈as++bs++cs →
caseʳ (case₁₃ as bs cs (case as (bs ++ cs) a∈as++bs++cs)) ≡ case as (bs ++ cs) a∈as++bs++cs
caseʳ₁₃ as bs cs (inj₁ a∈as) =
begin
caseʳ (case₁₃ as bs cs (case as (bs ++ cs) (a∈as ≪ bs ≪ cs)))
≡⟨ cong (caseʳ ∘ case₁₃ as bs cs) (case-≪ a∈as (bs ++ cs)) ⟩
inj₁ a∈as
≡⟨ sym (case-≪ a∈as (bs ++ cs)) ⟩
case as (bs ++ cs) (a∈as ≪ bs ≪ cs)
∎
caseʳ₁₃ as bs cs (inj₂ a∈bs++cs) =
begin
caseʳ (case₁₃ as bs cs (case as (bs ++ cs) (as ≫ a∈bs++cs)))
≡⟨ cong (caseʳ ∘ case₁₃ as bs cs) (case-≫ as a∈bs++cs) ⟩
caseʳ (case₂₃ as bs cs (case bs cs a∈bs++cs))
≡⟨ caseʳ₂₃ as bs cs (case+ bs cs a∈bs++cs) ⟩
inj₂ a∈bs++cs
≡⟨ sym (case-≫ as a∈bs++cs) ⟩
case as (bs ++ cs) (as ≫ a∈bs++cs)
∎
caseʳ₃ : ∀ {A} {a : A} as bs cs (a∈as++bs++cs : a ∈ (as ++ bs ++ cs)) →
caseʳ (case₃ as bs cs a∈as++bs++cs) ≡ case as (bs ++ cs) a∈as++bs++cs
caseʳ₃ as bs cs a∈as++bs++cs = caseʳ₁₃ as bs cs (case+ as (bs ++ cs) a∈as++bs++cs)
|
(*This is an effort to reproduce the results presented in:
http://dl.acm.org/citation.cfm?id=976579
*)
Require Export Coq.Sets.Ensembles.
Require Export Omega.
Require Export Bool.
Require Export List.
Export ListNotations.
Require Export Arith.
Require Export Arith.EqNat.
Require Export hetList.
Require Export Coq.Program.Equality.
Require Export Coq.Logic.Classical_Prop.
Definition bvar := nat. (*boolean variables*)
Definition vvar := nat. (*vertex variables*)
Definition colors := nat. (*colors for graph vertices*)
Definition edge : Type := prod vvar vvar. (*graph edges*)
(*boolean formula: a list of three-atom clauses (3-CNF)*)
Inductive atom : Type := pos : bvar -> atom | neg : bvar -> atom.
Definition bformula := list (atom * atom * atom).
(*graph*)
Inductive graph : Type :=
|emptyGraph : graph
|newEdge : vvar -> vvar -> graph -> graph
|gunion : graph -> graph -> graph.
(*SAT atom is satisfiable*)
Inductive atomSAT : list (bvar * bool) -> atom -> Prop :=
|satp : forall eta u, In (u, true) eta -> atomSAT eta (pos u)
|satn : forall eta u, In (u, false) eta -> atomSAT eta (neg u).
(*formula is satisfiable*)
Inductive SAT' : list (bvar * bool) -> bformula -> Prop :=
|satCons : forall a1 a2 a3 tl eta, (atomSAT eta a1 \/ atomSAT eta a2 \/ atomSAT eta a3) ->
SAT' eta tl -> SAT' eta ((a1,a2,a3)::tl)
|satNil : forall eta, SAT' eta nil.
(*specification of graph coloring*)
Inductive coloring : list (vvar * colors) -> graph -> nat -> Prop :=
|cgempty : forall eta C, coloring eta emptyGraph C
|cgEdge : forall eta C1 C2 C A B G, In (A, C1) eta -> In (B, C2) eta ->
C1 <= C -> C2 <= C -> C1 <> C2 -> coloring eta G C ->
coloring eta (newEdge A B G) C
|cgUnion : forall eta G1 G2 C, coloring eta G1 C -> coloring eta G2 C ->
coloring eta (gunion G1 G2) C.
(*Recursively create new edges in a graph*)
Fixpoint newE es G :=
match es with
|(e1,e2)::es => newEdge e1 e2 (newE es G)
|_ => G
end.
(*-------------------------Reduction------------------------------*)
(*For every boolean variable in Delta, find the x that it maps to in Gamma and create
an edge from that x to the vertex variable provided (X)*)
Inductive connectX : list (bvar * vvar * vvar * vvar) -> list bvar ->
vvar -> graph -> Prop :=
|connectXNil : forall Gamma X, connectX Gamma nil X emptyGraph
|connectX_vtx : forall Gamma Delta u X G,
In (u, 3*u, 3*u+1, 3*u+2) Gamma -> connectX Gamma Delta X G ->
connectX Gamma (u::Delta) X (newEdge X (3*u+2) G).
(*Same as above, but makes edges to v and v' rather than x*)
Inductive connectV : list (bvar * vvar * vvar * vvar) -> list bvar ->
vvar -> graph -> Prop :=
|connectV_nil : forall Gamma X, connectV Gamma nil X emptyGraph
|connectV_vtx : forall Gamma Delta u X G,
In (u, 3*u, 3*u+1, 3*u+2) Gamma -> connectV Gamma Delta X G ->
connectV Gamma (u::Delta) X (newEdge X (3*u) (newEdge X (3*u+1) G)).
(*Attach the v and v' variables to the clique created in the next set of rules*)
Inductive vars_to_clique : list (bvar * vvar * vvar * vvar) -> list bvar ->
graph -> Prop :=
|vars2cliqueNil : forall Gamma, vars_to_clique Gamma nil emptyGraph
|vars2cliqueVTX : forall Gamma Delta u G1 G2 G3 G4,
In (u,3*u,3*u+1,3*u+2) Gamma -> vars_to_clique Gamma Delta G1 ->
connectX Gamma Delta (3*u) G2 -> connectX Gamma Delta (3*u+1) G3 ->
connectV Gamma Delta (3*u+2) G4 ->
vars_to_clique Gamma (u::Delta) (gunion G1 (gunion G2 (gunion G3 G4))).
(*Create a clique out of the x variables*)
Inductive clique : list (bvar * vvar * vvar * vvar) -> list bvar ->
graph -> Prop :=
|clique_empty : forall Gamma, clique Gamma nil emptyGraph
|clique_vtx : forall Gamma u Delta G1 G2,
In (u,3*u,3*u+1,3*u+2) Gamma -> clique Gamma Delta G1 -> connectX Gamma Delta (3*u+2) G2 ->
clique Gamma (u::Delta) (gunion G1 G2).
(*convert base (Gamma; Delta |- C Downarrow G)*)
Inductive convert_base : list (bvar * vvar * vvar * vvar) -> list bvar ->
colors -> graph -> Prop :=
|conv'''_base : forall Gamma C, convert_base Gamma nil C emptyGraph
|conv'''_cont : forall Gamma Delta u C G,
In (u,3*u,3*u+1,3*u+2) Gamma -> convert_base Gamma Delta C G ->
convert_base Gamma (u::Delta) C (newEdge C (3*u) (newEdge C (3*u+1) G)).
(*mkEdge c u v v' e (determines if the edge from c goes to v or v')*)
Inductive mkEdge : vvar -> atom -> vvar -> vvar -> edge -> Prop :=
|mkPos : forall c u v v', mkEdge c (pos u) v v' (c, v')
|mkNeg : forall c u v v', mkEdge c (neg u) v v' (c, v).
Definition getVar a :=
match a with
|pos u => u
|neg u => u
end.
Inductive convFormula (c:vvar) : list (bvar * vvar * vvar * vvar) -> list bvar ->
(atom * atom * atom) -> graph -> Prop :=
|conv'' : forall Gamma Delta u1 u2 u3 G e1 e2 e3 p1 p2 p3 D1 D2 D3 D4,
In (u1,3*u1,3*u1+1,3*u1+2) Gamma ->
In (u2,3*u2,3*u2+1,3*u2+2) Gamma ->
In (u3,3*u3,3*u3+1,3*u3+2) Gamma ->
convert_base Gamma (D1++D2++D3++D4) c G ->
mkEdge c p1 (3*u1) (3*u1+1) e1 -> getVar p1 = u1 -> getVar p2 = u2 -> getVar p3 = u3 ->
mkEdge c p2 (3*u2) (3*u2+1) e2 ->
mkEdge c p3 (3*u3) (3*u3+1) e3 -> Delta = D1++[u1]++D2++[u2]++D3++[u3]++D4 ->
convFormula c Gamma Delta (p1, p2, p3)
(newE [e1;e2;e3] G)
.
(*Convert Stack of Continuations (Gamma; Delta |- K => G)*)
Inductive convStack (i:vvar) : list (bvar * vvar * vvar * vvar) -> list bvar ->
bformula -> graph -> Prop :=
|conv_base : forall Gamma Delta, convStack i Gamma Delta nil emptyGraph
|conv_cons : forall Gamma Delta K F G1 G2, convStack (i+1) Gamma Delta K G1 -> convFormula i Gamma Delta F G2 ->
convStack i Gamma Delta (F::K) (gunion G1 G2)
.
(*Top Level Reduction (Gamma; Delta |- F => C C' G)*)
Inductive reduce Gamma Delta : bformula -> graph -> Prop :=
|convV : forall F G1 G2 G3,
convStack (length Gamma * 3) Gamma Delta F G1 ->
clique Gamma Delta G2 -> vars_to_clique Gamma Delta G3 ->
reduce Gamma Delta F (gunion G1 (gunion G2 G3)).
Fixpoint buildCtxt i :=
match i with
|0 => (nil, nil)
|S i =>
let (Gamma, Delta) := buildCtxt i
in (((i,3*i,3*i+1,3*i+2)::Gamma), (i::Delta))
end.
(*
Inductive buildCtxt : nat -> list (bvar*vvar*vvar*vvar) -> list bvar -> Prop :=
|buildCons : forall Gamma Delta i, buildCtxt i Gamma Delta ->
buildCtxt (S i) ((i,3*i,3*i+1,3*i+2)::Gamma) (i::Delta)
|buildNil : buildCtxt 0 nil nil.
*)
(*
(*Build the Gamma and Delta contexts (n is the number of boolean variables in the formula we are reducing)*)
Inductive buildCtxt n : nat -> list (bvar*vvar*vvar*vvar) -> list bvar -> Prop :=
|buildCons : forall Gamma Delta i, buildCtxt n (S i) Gamma Delta ->
buildCtxt n i ((i,3*i,3*i+1,3*i+2)::Gamma) (i::Delta)
|buildNil : buildCtxt n n nil nil.
*)
Theorem buildCtxtSanityChk : buildCtxt 3 = ([(2,6,7,8);(1,3,4,5);(0,0,1,2)], [2; 1; 0]).
Proof.
simpl. reflexivity.
Qed.
Hint Constructors coloring.
(*invert a hypothesis, substitute its pieces and clear it*)
Ltac inv H := inversion H; subst; clear H.
(*invert existentials and conjunctions in the entire proof context*)
Ltac invertHyp :=
match goal with
|H:exists x, ?e |- _ => inv H; try invertHyp
|H:?x /\ ?y |- _ => inv H; try invertHyp
end.
(*A list is unique with respect to some set U (usually Empty_set)*)
Inductive unique {A:Type} (U:Ensemble A) : list A -> Prop :=
|uniqueCons : forall hd tl, unique (Add A U hd) tl -> ~ Ensembles.In _ U hd ->
unique U (hd::tl)
|uniqueNil : unique U nil.
(*specifies that an atom is satisfiable, and gives back the appropriate color and
**variable for setting the graph coloring environment*)
Inductive winner eta' eta : atom -> bvar -> colors -> Prop :=
|posWinner : forall c u, In (3*u, c) eta' -> In (u, true) eta -> winner eta' eta (pos u) u c
|negWinner : forall c u, In (3*u+1, c) eta' -> In (u, false) eta ->
winner eta' eta (neg u) u c.
(*Set the c_i variables in the graph coloring environment
** (c_i corresponds to a clause in the boolean formula) *)
Inductive setCs env : nat -> bformula ->
list (vvar * colors) -> list (bvar * bool) -> Prop :=
|fstSAT : forall u eta eta' c i a1 a2 a3 F,
winner env eta a1 u c -> setCs env (S i) F eta' eta ->
setCs env i ((a1, a2, a3)::F) ((i, c)::eta') eta
|sndSAT : forall u eta eta' c i a1 a2 a3 F,
winner env eta a2 u c -> setCs env (S i) F eta' eta ->
setCs env i ((a1, a2, a3)::F) ((i, c)::eta') eta
|thirdSAT : forall u eta eta' c i a1 a2 a3 F,
winner env eta a3 u c -> setCs env (S i) F eta' eta ->
setCs env i ((a1, a2, a3)::F) ((i, c)::eta') eta
|setCsDone : forall i eta eta', setCs env i nil eta' eta.
(*N is the number of clauses in the boolean formula*)
(*specifies how a graph environment is built out of a boolean formula
**environment and reduction contexts*)
Inductive setVertices : list (bvar*vvar*vvar*vvar) -> nat -> nat ->
list (vvar * colors) -> list (bvar * bool) -> Prop :=
|setVerticesT : forall u eta eta' C Gamma,
setVertices Gamma C (S u) eta' eta-> u < C ->
setVertices ((u,3*u,3*u+1,3*u+2)::Gamma) C u
((3*u,u)::(3*u+1,C)::(3*u+2,u)::eta') ((u,true)::eta)
|setVerticesF : forall u eta eta' C Gamma,
setVertices Gamma C (S u) eta' eta -> u < C ->
setVertices ((u,3*u,3*u+1,3*u+2)::Gamma) C u
((3*u,C)::(3*u+1,u)::(3*u+2,u)::eta') ((u, false)::eta)
|setVerticesNil : forall C u, setVertices nil C u nil nil.
Theorem sanityCheck : setVertices [(0,0,1,2);(1,3,4,5);(2,6,7,8)]
3 0 [(0,0);(1,3);(2,0);(3,3);(4,1);(5,1);(6,2);(7,3);(8,2)]
[(0, true); (1, false); (2, true)].
Proof.
repeat constructor.
Qed.
(*Generates a graph coloring environment (valid might not be the greatest name
**for this, but that's what they call it in the paper)*)
Inductive valid : list (bvar*vvar*vvar*vvar) -> nat -> bformula ->
list (vvar * colors) -> list (bvar * bool) -> Prop :=
|valid_ : forall Gamma C eta eta' eta'' F res,
setVertices Gamma C 0 eta' eta ->
setCs eta' (3 * length Gamma) F eta'' eta ->
res = eta' ++ eta'' ->
valid Gamma C F res eta.
Theorem sanityCheck' :
valid [(0,0,1,2);(1,3,4,5);(2,6,7,8)]
3 [(neg 0, pos 1, pos 2)]
[(0,0);(1,3);(2,0);(3,3);(4,1);(5,1);(6,2);(7,3);(8,2);(9,2)]
[(0, true); (1, false); (2, true)].
Proof.
econstructor. repeat econstructor. Focus 2. simpl. auto.
eapply thirdSAT with (u:=2). simpl. apply posWinner. simpl. right. right.
right. right. right. right. auto. simpl. auto. constructor.
Qed.
(*Give the name of a hypothesis, and this tactic will copy it in the proof context*)
Ltac copy H :=
match type of H with
|?x => assert(x) by auto
end.
(*negation distributes over disjunction*)
Theorem notDistr : forall A B, ~(A \/ B) <-> ~ A /\ ~ B.
Proof.
intros. split; intros.
{unfold not in H. split. intros c. apply H. auto. intros c. apply H; auto. }
{unfold not in H. invertHyp. intros c. inv c. auto. auto. }
Qed.
(*If an element is in the set that a list is unique with respect to,
**then it cannot occur anywhere in that list*)
Theorem uniqueNotIn : forall A S (u:A) Delta,
Ensembles.In _ S u ->
unique S Delta -> ~ In u Delta.
Proof.
intros. induction H0.
{assert(u=hd \/ u <> hd). apply classic. inv H2.
{contradiction. }
{simpl. rewrite notDistr. split. auto. apply IHunique. constructor. auto. }
}
{intros c. auto. }
Qed.
(*invert tuple equality (this is a hack that only handles up to 4-arity)*)
Ltac invertTupEq :=
match goal with
|H:(?x1,?x2) = (?y1,?y2) |- _ => inv H; try invertTupEq
|H:(?x1,?x2,?x3) = (?y1,?y2,?y3) |- _ => inv H; try invertTupEq
|H:(?x1,?x2,?x3,?x4) = (?y1,?y2,?y3,?y4) |- _ => inv H; try invertTupEq
end.
(*invert a uniqueness hypothesis*)
Ltac invUnique :=
match goal with
|H:unique ?U (?x::?y) |- _ => inv H; try invUnique
end.
(*invert an assumption that something is in a list (creates two subgoals each time)*)
Ltac inCons :=
match goal with
|H:In ?X (?x::?y) |- _ => inv H
end.
(*If u is less than i (setVertices index) then its corresponding v, v', and x
**must not be in the graph coloring environment*)
Theorem notInEta' : forall Gamma C i eta' eta u epsilon c,
setVertices Gamma C i eta' eta -> u < i ->
(epsilon=0\/epsilon=1\/epsilon=2) ->
~ In (3 * u + epsilon, c) eta'.
Proof.
intros. genDeps {{ epsilon; u }}. induction H; intros.
{intros contra. inv contra.
{invertTupEq. omega. }
{inCons.
{invertTupEq. omega. }
{inCons.
{invertTupEq. omega. }
{assert(u0 < S u). omega. eapply IHsetVertices in H3; auto. }
}
}
}
{intros contra. inv contra.
{invertTupEq. omega. }
{inCons.
{invertTupEq. omega. }
{inCons.
{invertTupEq. omega. }
{assert(u0 < S u). omega. eapply IHsetVertices in H3; auto. }
}
}
}
{intros contra. inv contra. }
Qed.
(*color a graph with a larger environment*)
Theorem colorWeakening : forall eta1 eta2 G C,
coloring eta1 G C ->
coloring (eta1++eta2) G C.
Proof.
intros. generalize dependent eta2. induction H; intros.
{constructor. }
{econstructor. apply in_app_iff. eauto. apply in_app_iff. eauto. auto. auto.
auto. eauto. }
{constructor; eauto. }
Qed.
(*each x_i must map to u_i*)
Theorem XMapsToU : forall u Gamma C eta' eta i,
In (u,3*u,3*u+1,3*u+2) Gamma ->
setVertices Gamma C i eta' eta -> In (3*u+2,u) eta' /\ u < C.
Proof.
intros. induction H0.
{destruct (eq_nat_dec u u0).
{inCons. invertTupEq. simpl. split; auto. split. simpl. auto. omega. }
{inCons. invertTupEq. omega. eapply IHsetVertices in H2; eauto. invertHyp.
split. simpl. auto. auto. }
}
{destruct (eq_nat_dec u u0).
{inCons. invertTupEq. simpl. split; auto. split. simpl. auto. omega. }
{inCons. invertTupEq. omega. eapply IHsetVertices in H2; eauto. invertHyp.
split. simpl. auto. auto. }
}
{inv H. }
Qed.
(*v_i and v_i' must map to either u_i or C, and must be distinct*)
Theorem V_V'MapToUOrC : forall u Gamma C eta' eta i,
In (u,3*u,3*u+1,3*u+2) Gamma ->
setVertices Gamma C i eta' eta ->
(In (3*u,u) eta' /\ u < C /\ In (3*u+1,C) eta') \/
(In (3*u,C) eta' /\ u < C /\ In (3*u+1,u) eta').
Proof.
intros. induction H0.
{destruct (eq_nat_dec u u0).
{subst. inCons. invertTupEq. left. simpl. auto. eapply IHsetVertices in H2. inv H2.
invertHyp. left. simpl. auto. invertHyp. left. simpl. auto. }
{inCons. invertTupEq. omega. eapply IHsetVertices in H2. inv H2. invertHyp.
left. split. simpl. auto. split. auto. simpl. auto. invertHyp. right.
simpl. split; auto. }
}
{destruct (eq_nat_dec u u0).
{subst. inCons. invertTupEq. right. simpl. auto. eapply IHsetVertices in H2. inv H2.
invertHyp. right. simpl. auto. invertHyp. right. simpl. auto. }
{inCons. invertTupEq. omega. eapply IHsetVertices in H2. inv H2. invertHyp.
left. split. simpl. auto. split. auto. simpl. auto. invertHyp. right.
simpl. split; auto. }
}
{inv H. }
Qed.
(*color a graph with a larger environment*)
Theorem colorWeakeningApp : forall eta1 eta2 eta3 G C,
coloring (eta1++eta3) G C ->
coloring (eta1++eta2++eta3) G C.
Proof.
intros. genDeps {{ eta2 }}. remember(eta1++eta3). induction H; intros.
{constructor. }
{subst. econstructor. apply in_app_iff in H. inv H. auto. apply in_app_iff; simpl.
eauto. repeat rewrite in_app_iff. auto. apply in_app_iff in H0. inv H0;
repeat rewrite in_app_iff; eauto. auto. auto. auto. eapply IHcoloring; eauto. }
{constructor; eauto. }
Qed.
|
/-
Copyright (c) 2014 Parikshit Khanna. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Parikshit Khanna, Jeremy Avigad, Leonardo de Moura, Floris van Doorn, Mario Carneiro
-/
import data.list.big_operators.basic
/-!
# Counting in lists
> THIS FILE IS SYNCHRONIZED WITH MATHLIB4.
> Any changes to this file require a corresponding PR to mathlib4.
This file proves basic properties of `list.countp` and `list.count`, which count the number of
elements of a list satisfying a predicate and equal to a given element respectively. Their
definitions can be found in [`data.list.defs`](./defs).
-/
open nat
variables {α β : Type*} {l l₁ l₂ : list α}
namespace list
section countp
variables (p q : α → Prop) [decidable_pred p] [decidable_pred q]
@[simp] lemma countp_nil : countp p [] = 0 := rfl
@[simp] lemma countp_cons_of_pos {a : α} (l) (pa : p a) : countp p (a::l) = countp p l + 1 :=
if_pos pa
@[simp] lemma countp_cons_of_neg {a : α} (l) (pa : ¬ p a) : countp p (a::l) = countp p l :=
if_neg pa
lemma countp_cons (a : α) (l) : countp p (a :: l) = countp p l + ite (p a) 1 0 :=
by { by_cases h : p a; simp [h] }
lemma length_eq_countp_add_countp (l) : length l = countp p l + countp (λ a, ¬p a) l :=
by induction l with x h ih; [refl, by_cases p x];
[simp only [countp_cons_of_pos _ _ h, countp_cons_of_neg (λ a, ¬p a) _ (decidable.not_not.2 h),
ih, length],
simp only [countp_cons_of_pos (λ a, ¬p a) _ h, countp_cons_of_neg _ _ h, ih, length]]; ac_refl
lemma countp_eq_length_filter (l) : countp p l = length (filter p l) :=
by induction l with x l ih; [refl, by_cases (p x)];
[simp only [filter_cons_of_pos _ h, countp, ih, if_pos h],
simp only [countp_cons_of_neg _ _ h, ih, filter_cons_of_neg _ h]]; refl
lemma countp_le_length : countp p l ≤ l.length :=
by simpa only [countp_eq_length_filter] using length_filter_le _ _
@[simp] lemma countp_append (l₁ l₂) : countp p (l₁ ++ l₂) = countp p l₁ + countp p l₂ :=
by simp only [countp_eq_length_filter, filter_append, length_append]
lemma countp_join : ∀ l : list (list α), countp p l.join = (l.map (countp p)).sum
| [] := rfl
| (a :: l) := by rw [join, countp_append, map_cons, sum_cons, countp_join]
lemma countp_pos {l} : 0 < countp p l ↔ ∃ a ∈ l, p a :=
by simp only [countp_eq_length_filter, length_pos_iff_exists_mem, mem_filter, exists_prop]
@[simp] theorem countp_eq_zero {l} : countp p l = 0 ↔ ∀ a ∈ l, ¬ p a :=
by { rw [← not_iff_not, ← ne.def, ← pos_iff_ne_zero, countp_pos], simp }
@[simp] lemma countp_eq_length {l} : countp p l = l.length ↔ ∀ a ∈ l, p a :=
by rw [countp_eq_length_filter, filter_length_eq_length]
lemma length_filter_lt_length_iff_exists (l) : length (filter p l) < length l ↔ ∃ x ∈ l, ¬p x :=
by rw [length_eq_countp_add_countp p l, ← countp_pos, countp_eq_length_filter, lt_add_iff_pos_right]
lemma sublist.countp_le (s : l₁ <+ l₂) : countp p l₁ ≤ countp p l₂ :=
by simpa only [countp_eq_length_filter] using length_le_of_sublist (s.filter p)
@[simp] lemma countp_filter (l : list α) : countp p (filter q l) = countp (λ a, p a ∧ q a) l :=
by simp only [countp_eq_length_filter, filter_filter]
@[simp] lemma countp_true : l.countp (λ _, true) = l.length := by simp
@[simp] lemma countp_false : l.countp (λ _, false) = 0 := by simp
@[simp] lemma countp_map (p : β → Prop) [decidable_pred p] (f : α → β) :
∀ l, countp p (map f l) = countp (p ∘ f) l
| [] := rfl
| (a::l) := by rw [map_cons, countp_cons, countp_cons, countp_map]
variables {p q}
lemma countp_mono_left (h : ∀ x ∈ l, p x → q x) : countp p l ≤ countp q l :=
begin
induction l with a l ihl, { refl },
rw [forall_mem_cons] at h, cases h with ha hl,
rw [countp_cons, countp_cons],
refine add_le_add (ihl hl) _,
split_ifs; try { simp only [le_rfl, zero_le] },
exact absurd (ha ‹_›) ‹_›
end
lemma countp_congr (h : ∀ x ∈ l, p x ↔ q x) : countp p l = countp q l :=
le_antisymm (countp_mono_left $ λ x hx, (h x hx).1) (countp_mono_left $ λ x hx, (h x hx).2)
end countp
/-! ### count -/
section count
variables [decidable_eq α]
@[simp] lemma count_nil (a : α) : count a [] = 0 := rfl
lemma count_cons (a b : α) (l : list α) :
count a (b :: l) = if a = b then succ (count a l) else count a l := rfl
lemma count_cons' (a b : α) (l : list α) :
count a (b :: l) = count a l + (if a = b then 1 else 0) :=
begin rw count_cons, split_ifs; refl end
@[simp] lemma count_cons_self (a : α) (l : list α) : count a (a::l) = count a l + 1 := if_pos rfl
@[simp, priority 990]
lemma count_cons_of_ne {a b : α} (h : a ≠ b) (l : list α) : count a (b::l) = count a l := if_neg h
lemma count_tail : Π (l : list α) (a : α) (h : 0 < l.length),
l.tail.count a = l.count a - ite (a = list.nth_le l 0 h) 1 0
| (_ :: _) a h := by { rw [count_cons], split_ifs; simp }
lemma count_le_length (a : α) (l : list α) : count a l ≤ l.length :=
countp_le_length _
lemma sublist.count_le (h : l₁ <+ l₂) (a : α) : count a l₁ ≤ count a l₂ := h.countp_le _
lemma count_le_count_cons (a b : α) (l : list α) : count a l ≤ count a (b :: l) :=
(sublist_cons _ _).count_le _
lemma count_singleton (a : α) : count a [a] = 1 := if_pos rfl
lemma count_singleton' (a b : α) : count a [b] = ite (a = b) 1 0 := rfl
@[simp] lemma count_append (a : α) : ∀ l₁ l₂, count a (l₁ ++ l₂) = count a l₁ + count a l₂ :=
countp_append _
lemma count_join (l : list (list α)) (a : α) : l.join.count a = (l.map (count a)).sum :=
countp_join _ _
lemma count_concat (a : α) (l : list α) : count a (concat l a) = succ (count a l) :=
by simp [-add_comm]
@[simp] lemma count_pos {a : α} {l : list α} : 0 < count a l ↔ a ∈ l :=
by simp only [count, countp_pos, exists_prop, exists_eq_right']
@[simp] lemma one_le_count_iff_mem {a : α} {l : list α} : 1 ≤ count a l ↔ a ∈ l :=
count_pos
@[simp, priority 980]
lemma count_eq_zero_of_not_mem {a : α} {l : list α} (h : a ∉ l) : count a l = 0 :=
decidable.by_contradiction $ λ h', h $ count_pos.1 (nat.pos_of_ne_zero h')
lemma not_mem_of_count_eq_zero {a : α} {l : list α} (h : count a l = 0) : a ∉ l :=
λ h', (count_pos.2 h').ne' h
@[simp] lemma count_eq_zero {a : α} {l} : count a l = 0 ↔ a ∉ l :=
⟨not_mem_of_count_eq_zero, count_eq_zero_of_not_mem⟩
@[simp] lemma count_eq_length {a : α} {l} : count a l = l.length ↔ ∀ b ∈ l, a = b :=
countp_eq_length _
@[simp] lemma count_replicate_self (a : α) (n : ℕ) : count a (replicate n a) = n :=
by rw [count, countp_eq_length_filter, filter_eq_self.2, length_replicate];
exact λ b m, (eq_of_mem_replicate m).symm
lemma count_replicate (a b : α) (n : ℕ) : count a (replicate n b) = if a = b then n else 0 :=
begin
split_ifs with h,
exacts [h ▸ count_replicate_self _ _, count_eq_zero_of_not_mem $ mt eq_of_mem_replicate h]
end
theorem filter_eq (l : list α) (a : α) : l.filter (eq a) = replicate (count a l) a :=
by simp [eq_replicate, count, countp_eq_length_filter, @eq_comm _ _ a]
theorem filter_eq' (l : list α) (a : α) : l.filter (λ x, x = a) = replicate (count a l) a :=
by simp only [filter_eq, @eq_comm _ _ a]
lemma replicate_count_eq_of_count_eq_length {a : α} {l : list α} (h : count a l = length l) :
replicate (count a l) a = l :=
(le_count_iff_replicate_sublist.mp le_rfl).eq_of_length $ (length_replicate (count a l) a).trans h
@[simp] lemma count_filter {p} [decidable_pred p]
{a} {l : list α} (h : p a) : count a (filter p l) = count a l :=
by simp only [count, countp_filter, show (λ b, a = b ∧ p b) = eq a, by { ext b, constructor; cc }]
lemma count_bind {α β} [decidable_eq β] (l : list α) (f : α → list β) (x : β) :
count x (l.bind f) = sum (map (count x ∘ f) l) :=
by rw [list.bind, count_join, map_map]
@[simp] lemma count_map_of_injective {α β} [decidable_eq α] [decidable_eq β]
(l : list α) (f : α → β) (hf : function.injective f) (x : α) :
count (f x) (map f l) = count x l :=
by simp only [count, countp_map, (∘), hf.eq_iff]
lemma count_le_count_map [decidable_eq β] (l : list α) (f : α → β) (x : α) :
count x l ≤ count (f x) (map f l) :=
begin
rw [count, count, countp_map],
exact countp_mono_left (λ y hyl, congr_arg f),
end
lemma count_erase (a b : α) : ∀ l : list α, count a (l.erase b) = count a l - ite (a = b) 1 0
| [] := by simp
| (c :: l) :=
begin
rw [erase_cons],
by_cases hc : c = b,
{ rw [if_pos hc, hc, count_cons', nat.add_sub_cancel] },
{ rw [if_neg hc, count_cons', count_cons', count_erase],
by_cases ha : a = b,
{ rw [← ha, eq_comm] at hc,
rw [if_pos ha, if_neg hc, add_zero, add_zero] },
{ rw [if_neg ha, tsub_zero, tsub_zero] } }
end
@[simp] lemma count_erase_self (a : α) (l : list α) : count a (list.erase l a) = count a l - 1 :=
by rw [count_erase, if_pos rfl]
@[simp] lemma count_erase_of_ne {a b : α} (ab : a ≠ b) (l : list α) :
count a (l.erase b) = count a l :=
by rw [count_erase, if_neg ab, tsub_zero]
@[to_additive]
lemma prod_map_eq_pow_single [monoid β] {l : list α} (a : α) (f : α → β)
(hf : ∀ a' ≠ a, a' ∈ l → f a' = 1) : (l.map f).prod = (f a) ^ (l.count a) :=
begin
induction l with a' as h generalizing a,
{ rw [map_nil, prod_nil, count_nil, pow_zero] },
{ specialize h a (λ a' ha' hfa', hf a' ha' (mem_cons_of_mem _ hfa')),
rw [list.map_cons, list.prod_cons, count_cons, h],
split_ifs with ha',
{ rw [ha', pow_succ] },
{ rw [hf a' (ne.symm ha') (list.mem_cons_self a' as), one_mul] } }
end
@[to_additive]
lemma prod_eq_pow_single [monoid α] {l : list α} (a : α)
(h : ∀ a' ≠ a, a' ∈ l → a' = 1) : l.prod = a ^ (l.count a) :=
trans (by rw [map_id'']) (prod_map_eq_pow_single a id h)
end count
end list
|
module refs_C
USE vast_kind_param, ONLY: double
!...Created by Pacific-Sierra Research 77to90 4.4G 11:05:00 03/09/06
character, dimension(107) :: refmn*80, refm3*80, refam*80, refpm3*80, &
refmd*80
character, dimension(107,5) :: allref*80
equivalence (refmn(1), allref(1,1)), (refm3(1), allref(1,2)), &
& (refam(1), allref(1,3)), (refpm3(1), allref(1,4)), (refmd(1), allref(1,5))
data refmd(11)/ &
' Na: (MNDO/d): W.THIEL AND A.A.VOITYUK, J. PHYS. CHEM., 100, 616 (1996)&
& '/
data refmd(12)/ &
' Mg: (MNDO/d): W.THIEL AND A.A.VOITYUK, J. PHYS. CHEM., 100, 616 (1996)&
& '/
data refmd(13)/ &
' Al: (MNDO/d): W.THIEL AND A.A.VOITYUK, J. PHYS. CHEM., 100, 616 (1996)&
& '/
data refmd(14)/ &
' Si: (MNDO/d): W.THIEL AND A.A.VOITYUK, J. MOL. STRUCT., 313, 141 (1994&
&) '/
data refmd(15)/ &
' P : (MNDO/d): W.THIEL AND A.A.VOITYUK, J. PHYS. CHEM., 100, 616 (1996)&
& '/
data refmd(16)/ &
' S : (MNDO/d): W.THIEL AND A.A.VOITYUK, J. PHYS. CHEM., 100, 616 (1996)&
& '/
data refmd(17)/ &
' Cl: (MNDO/d): W.THIEL AND A.A.VOITYUK, INTERN. J. QUANT. CHEM., 44, 80&
&7 (1992)'/
data refmd(30)/ &
' Zn: (MNDO/d): W.THIEL AND A.A.VOITYUK, J. PHYS. CHEM., 100, 616 (1996)&
& '/
data refmd(35)/ &
' Br: (MNDO/d): W.THIEL AND A.A.VOITYUK, INTERN. J. QUANT. CHEM., 44, 80&
&7 (1992)'/
data refmd(48)/ &
' Cd: (MNDO/d): W.THIEL AND A.A.VOITYUK, J. PHYS. CHEM., 100, 616 (1996)&
& '/
data refmd(53)/ &
' I : (MNDO/d): W.THIEL AND A.A.VOITYUK, INTERN. J. QUANT. CHEM., 44, 80&
&7 (1992)'/
data refmd(80)/ &
' Hg: (MNDO/d): W.THIEL AND A.A.VOITYUK, J. PHYS. CHEM., 100, 616 (1996)&
& '/
data refmn(1)/ &
' H: (MNDO): M.J.S. DEWAR, W. THIEL, J. AM. CHEM. SOC., 99, 4899, (1977&
&) '/
data refmn(3)/ &
' Li: (MNDO): TAKEN FROM MNDOC BY W.THIEL, QCPE NO.438, V. 2, P.63,&
& (1982).'/
data refmn(4)/ &
' Be: (MNDO): M.J.S. DEWAR, H.S. RZEPA, J. AM. CHEM. SOC., 100, 777, (19&
&78) '/
data refmn(5)/ &
' B: (MNDO): M.J.S. DEWAR, M.L. MCKEE, J. AM. CHEM. SOC., 99, 5231, (19&
&77) '/
data refmn(6)/ &
' C: (MNDO): M.J.S. DEWAR, W. THIEL, J. AM. CHEM. SOC., 99, 4899, (1977&
&) '/
data refmn(7)/ &
' N: (MNDO): M.J.S. DEWAR, W. THIEL, J. AM. CHEM. SOC., 99, 4899, (1977&
&) '/
data refmn(8)/ &
' O: (MNDO): M.J.S. DEWAR, W. THIEL, J. AM. CHEM. SOC., 99, 4899, (1977&
&) '/
data refmn(9)/ &
' F: (MNDO): M.J.S. DEWAR, H.S. RZEPA, J. AM. CHEM. SOC., 100, 777, (19&
&78) '/
data refmn(11)/ &
' Na: (MNDO): ANGEW. CHEM. INT. ED. ENGL. 29, 1042 (1990).'/
data refam(11)/ &
' Na: (AM1): SODIUM-LIKE SPARKLE. USE WITH CARE. &
& '/
data refpm3(11)/ &
' Na: (PM3): SODIUM-LIKE SPARKLE. USE WITH CARE. &
& '/
data refmn(13)/ &
' Al: (MNDO): L.P. DAVIS, ET.AL. J. COMP. CHEM., 2, 433, (1981) SEE MAN&
&UAL. '/
data refmn(14)/ &
' Si: (MNDO): M.J.S.DEWAR, ET. AL. ORGANOMETALLICS 5, 375 (1986) &
& '/
data refmn(15)/ &
' P: (MNDO): M.J.S.DEWAR, M.L.MCKEE, H.S.RZEPA, J. AM. CHEM. SOC., 100 3&
&607 1978'/
data refmn(16)/ &
' S: (MNDO): M.J.S.DEWAR, C.H. REYNOLDS, J. COMP. CHEM. 7, 140-143 (198&
&6) '/
data refmn(17)/ &
' Cl: (MNDO): M.J.S.DEWAR, H.S.RZEPA, J. COMP. CHEM., 4, 158, (1983) &
& '/
data refmn(19)/ ' K: (MNDO): J. C. S. CHEM. COMM. 765, (1992).'/
data refam(19)/ &
' K: (AM1): POTASSIUM-LIKE SPARKLE. USE WITH CARE. &
& '/
data refpm3(19)/ &
' K: (PM3): POTASSIUM-LIKE SPARKLE. USE WITH CARE. &
& '/
data refmn(30)/ &
' Zn: (MNDO): M.J.S. DEWAR, K.M. MERZ, ORGANOMETALLICS, 5, 1494-1496 (19&
&86) '/
data refmn(32)/ &
' Ge: (MNDO): M.J.S.DEWAR, G.L.GRADY, E.F.HEALY,ORGANOMETALLICS 6 186-189&
&, (1987)'/
data refmn(35)/ &
' Br: (MNDO): M.J.S.DEWAR, E.F. HEALY, J. COMP. CHEM., 4, 542, (1983) &
& '/
data refmn(50)/ &
' Sn: (MNDO): M.J.S.DEWAR,G.L.GRADY,J.J.P.STEWART, J.AM.CHEM.SOC.,106 677&
&1 (1984)'/
data refmn(53)/ &
' I: (MNDO): M.J.S.DEWAR, E.F. HEALY, J.J.P. STEWART, J.COMP.CHEM., 5,35&
&8,(1984)'/
data refmn(80)/ &
' Hg: (MNDO): M.J.S.DEWAR, ET. AL. ORGANOMETALLICS 4, 1964, (1985) SEE M&
&ANUAL '/
data refmn(82)/ &
' Pb: (MNDO): M.J.S.DEWAR, ET.AL ORGANOMETALLICS 4 1973-1980 (1985) &
& '/
data refmn(90)/ &
' Si: (MNDO): M.J.S.DEWAR, M.L.MCKEE, H.S.RZEPA, J. AM. CHEM. SOC., 100 3&
&607 1978'/
data refmn(91)/ &
' S: (MNDO): M.J.S.DEWAR, H.S. RZEPA, M.L.MCKEE, J.AM.CHEM.SOC.100, 3607&
& (1978).'/
data refmn(102)/ &
' Cb: (MNDO): Capped Bond (Hydrogen-like, takes on a zero charge.) &
& '/
data refam(1)/ &
' H: (AM1): M.J.S. DEWAR ET AL, J. AM. CHEM. SOC. 107 3902-3909 (1985) &
& '/
data refam(4)/ &
' Be: (MNDO): M.J.S. DEWAR, H.S. RZEPA, J. AM. CHEM. SOC., 100, 777, (19&
&78) '/
data refam(5)/ &
' B: (AM1): M.J.S. DEWAR, C. JIE, E. G. ZOEBISCH ORGANOMETALLICS 7, 513&
& (1988) '/
data refam(6)/ &
' C: (AM1): M.J.S. DEWAR ET AL, J. AM. CHEM. SOC. 107 3902-3909 (1985) &
& '/
data refam(7)/ &
' N: (AM1): M.J.S. DEWAR ET AL, J. AM. CHEM. SOC. 107 3902-3909 (1985) &
& '/
data refam(8)/ &
' O: (AM1): M.J.S. DEWAR ET AL, J. AM. CHEM. SOC. 107 3902-3909 (1985) &
& '/
data refam(9)/ &
' F: (AM1): M.J.S. DEWAR AND E. G. ZOEBISCH, THEOCHEM, 180, 1 (1988). &
& '/
data refam(13)/ &
' Al: (AM1): M. J. S. Dewar, A. J. Holder, Organometallics, 9, 508-511 (&
&1990). '/
data refam(14)/ &
' Si: (AM1): M.J.S.DEWAR, C. JIE, ORGANOMETALLICS, 6, 1486-1490 (1987). &
& '/
data refam(15)/ &
' P: (AM1): M.J.S.DEWAR, JIE, C, THEOCHEM, 187, 1 (1989) &
& '/
data refam(16)/ &
' S: (AM1): M.J.S. DEWAR, Y-C YUAN, INORGANIC CHEMISTRY, 29, 3881:3890, &
&(1990) '/
data refam(17)/ &
' Cl: (AM1): M.J.S. DEWAR AND E. G. ZOEBISCH, THEOCHEM, 180, 1 (1988). &
& '/
data refam(30)/ &
' Zn: (AM1): M.J.S. DEWAR, K.M. MERZ, ORGANOMETALLICS, 7, 522-524 (1988)&
& '/
data refam(32)/ &
' Ge: (AM1): M.J.S.Dewar and C.Jie, Organometallics, 8, 1544, (1989) &
& '/
data refam(33)/ &
' As: (AM1): J. J. P. STEWART &
& '/
data refam(34)/ &
' Se: (AM1): J. J. P. STEWART &
& '/
data refam(35)/ &
' Br: (AM1): M.J.S. DEWAR AND E. G. ZOEBISCH, THEOCHEM, 180, 1 (1988). &
& '/
data refam(51)/ &
' Sb: (AM1): J. J. P. STEWART &
& '/
data refam(52)/ &
' Te: (AM1): J. J. P. STEWART &
& '/
data refam(53)/ &
' I: (AM1): M.J.S. DEWAR AND E. G. ZOEBISCH, THEOCHEM, 180, 1 (1988). &
& '/
data refam( 57) / &
' La: (AM1): R.O. Freire, G.B. Rocha, A.M. Simas, Inorg. Chem.44 (2005) 3&
&299. '/
data refam( 58) / &
' Ce: (AM1): R.O. Freire, G.B. Rocha, A.M. Simas, Inorg. Chem.44 (2005) 3&
&299. '/
data refam( 59) / &
' Pr: (AM1): R.O. Freire, et.al, J. Organometallic Chemistry 690 (2005) 4&
&099 '/
data refam( 60) / &
' Nd: (AM1): R.O. Freire, G.B. Rocha, A.M. Simas, Inorg. Chem.44 (2005) 3&
&299. '/
data refam( 61) / &
' Pm: (AM1): R.O. Freire, G.B. Rocha, A.M. Simas, Inorg. Chem.44 (2005) 3&
&299. '/
data refam( 62) / &
" Sm: (AM1): R.O. Freire, G.B. Rocha, A.M. Simas, Inorg. Chem.44 (2005) 3&
&299. "/
data refam( 63) / &
" Eu: (AM1): R.O. Freire, G.B. Rocha, A.M. Simas, Inorg. Chem.44 (2005) 3&
&299. "/
data refam( 64) / &
" Gd: (AM1): R.O. Freire, G.B. Rocha, A.M. Simas, Inorg. Chem.44 (2005) 3&
&299. "/
data refam( 65) / &
" Tb: (AM1): R.O. Freire, G.B. Rocha, A.M. Simas, Inorg. Chem.44 (2005) 3&
&299. "/
data refam( 66) / &
" Dy: (AM1): N.B. da Costa Jr, et.al. Inorg. Chem. Comm. 8 (2005) 831. &
& "/
data refam( 67) / &
" Ho: (AM1): R.O. Freire, G.B. Rocha, A.M. Simas, Inorg. Chem.44 (2005) 3&
&299. "/
data refam( 68) / &
" Er: (AM1): R.O. Freire, G.B. Rocha, A.M. Simas, Inorg. Chem.44 (2005) 3&
&299. "/
data refam( 69) / &
" Tm: (AM1): R.O. Freire, G.B. Rocha, A.M. Simas, Chem. Phys. Let., 411 (&
&2005) 61"/
data refam( 65) / &
" Yb: (AM1): R.O. Freire, G.B. Rocha, A.M. Simas, J. Comp. Chem., 26 (20&
&05) 1524"/
data refam( 71) / &
" Lu: (AM1): R.O. Freire, G.B. Rocha, A.M. Simas, Inorg. Chem.44 (2005) 3&
&299. "/
data refam(80)/ &
' Hg: (AM1): M.J.S.Dewar and C.Jie, Organometallics 8, 1547, (1989) &
& '/
data refpm3(1)/ &
' H: (PM3): J. J. P. STEWART, J. COMP. CHEM. 10, 209 (1989). &
& '/
data refpm3(3)/ &
' Li: (PM3): E. ANDERS, R. KOCH, P. FREUNSCHT, J. COMP. CHEM 14 1301-1&
&312 1993'/
data refpm3(4)/ &
' Be: (PM3): J. J. P. STEWART, J. COMP. CHEM. 12, 320-341 (1991). &
& '/
data refpm3(6)/ &
' C: (PM3): J. J. P. STEWART, J. COMP. CHEM. 10, 209 (1989). &
& '/
data refpm3(7)/ &
' N: (PM3): J. J. P. STEWART, J. COMP. CHEM. 10, 209 (1989). &
& '/
data refpm3(8)/ &
' O: (PM3): J. J. P. STEWART, J. COMP. CHEM. 10, 209 (1989). &
& '/
data refpm3(9)/ &
' F: (PM3): J. J. P. STEWART, J. COMP. CHEM. 10, 209 (1989). &
& '/
data refpm3(12)/ &
' Mg: (PM3): J. J. P. STEWART, J. COMP. CHEM. 12, 320-341 (1991). &
& '/
data refpm3(13)/ &
' Al: (PM3): J. J. P. STEWART, J. COMP. CHEM. 10, 209 (1989). &
& '/
data refpm3(14)/ &
' Si: (PM3): J. J. P. STEWART, J. COMP. CHEM. 10, 209 (1989). &
& '/
data refpm3(15)/ &
' P: (PM3): J. J. P. STEWART, J. COMP. CHEM. 10, 209 (1989). &
& '/
data refpm3(16)/ &
' S: (PM3): J. J. P. STEWART, J. COMP. CHEM. 10, 209 (1989). &
& '/
data refpm3(17)/ &
' Cl: (PM3): J. J. P. STEWART, J. COMP. CHEM. 10, 209 (1989). &
& '/
data refpm3(30)/ &
' Zn: (PM3): J. J. P. STEWART, J. COMP. CHEM. 12, 320-341 (1991). &
& '/
data refpm3(31)/ &
' Ga: (PM3): J. J. P. STEWART, J. COMP. CHEM. 12, 320-341 (1991). &
& '/
data refpm3(32)/ &
' Ge: (PM3): J. J. P. STEWART, J. COMP. CHEM. 12, 320-341 (1991). &
& '/
data refpm3(33)/ &
' As: (PM3): J. J. P. STEWART, J. COMP. CHEM. 12, 320-341 (1991). &
& '/
data refpm3(34)/ &
' Se: (PM3): J. J. P. STEWART, J. COMP. CHEM. 12, 320-341 (1991). &
& '/
data refpm3(35)/ &
' Br: (PM3): J. J. P. STEWART, J. COMP. CHEM. 10, 209 (1989). &
& '/
data refpm3(48)/ &
' Cd: (PM3): J. J. P. STEWART, J. COMP. CHEM. 12, 320-341 (1991). &
& '/
data refpm3(49)/ &
' In: (PM3): J. J. P. STEWART, J. COMP. CHEM. 12, 320-341 (1991). &
& '/
data refpm3(50)/ &
' Sn: (PM3): J. J. P. STEWART, J. COMP. CHEM. 12, 320-341 (1991). &
& '/
data refpm3(51)/ &
' Sb: (PM3): J. J. P. STEWART, J. COMP. CHEM. 12, 320-341 (1991). &
& '/
data refpm3(52)/ &
' Te: (PM3): J. J. P. STEWART, J. COMP. CHEM. 12, 320-341 (1991). &
& '/
data refpm3(53)/ &
' I: (PM3): J. J. P. STEWART, J. COMP. CHEM. 10, 209 (1989). &
& '/
data refpm3(80)/ &
' Hg: (PM3): J. J. P. STEWART, J. COMP. CHEM. 12, 320-341 (1991). &
& '/
data refpm3(81)/ &
' Tl: (PM3): J. J. P. STEWART, J. COMP. CHEM. 12, 320-341 (1991). &
& '/
data refpm3(82)/ &
' Pb: (PM3): J. J. P. STEWART, J. COMP. CHEM. 12, 320-341 (1991). &
& '/
data refpm3(83)/ &
' Bi: (PM3): J. J. P. STEWART, J. COMP. CHEM. 12, 320-341 (1991). &
& '/
data refpm3(102)/ &
' Cb: (PM3): Capped Bond (Hydrogen-like, takes on a zero charge.) &
& '/
end module refs_C
|
If $A$ and $B$ are measurable sets and almost every point of $M$ is in at most one of $A$ and $B$, then the measure of $A \cup B$ is the sum of the measures of $A$ and $B$.
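Equivalently, writing $\mu$ for the measure: the hypothesis says that $A \cap B$ is a null set, i.e. $\mu(A \cap B) = 0$, and the conclusion reads $\mu(A \cup B) = \mu(A) + \mu(B)$.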
|
(*************************************************************)
(* This file is distributed under the terms of the *)
(* GNU Lesser General Public License Version 2.1 *)
(*************************************************************)
(* [email protected] [email protected] *)
(*************************************************************)
(**********************************************************************
Permutation.v
Definition and properties of permutations
**********************************************************************)
Require Export List.
Require Export ListAux.
Section permutation.
Variable A : Set.
(**************************************
Definition of permutations as sequences of adjacent transpositions
**************************************)
Inductive permutation : list A -> list A -> Prop :=
| permutation_nil : permutation nil nil
| permutation_skip :
forall (a : A) (l1 l2 : list A),
permutation l2 l1 -> permutation (a :: l2) (a :: l1)
| permutation_swap :
forall (a b : A) (l : list A), permutation (a :: b :: l) (b :: a :: l)
| permutation_trans :
forall l1 l2 l3 : list A,
permutation l1 l2 -> permutation l2 l3 -> permutation l1 l3.
Hint Constructors permutation : core.
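(**************************************
  A minimal usage sketch (illustrative example with a hypothetical name):
  swapping the two elements of a pair is a single adjacent transposition
 **************************************)
Example permutation_pair_swap :
 forall a b : A, permutation (a :: b :: nil) (b :: a :: nil).
intros a b; apply permutation_swap.
Qed.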
(**************************************
Reflexivity
**************************************)
Theorem permutation_refl : forall l : list A, permutation l l.
simple induction l.
apply permutation_nil.
intros a l1 H.
apply permutation_skip with (1 := H).
Qed.
Hint Resolve permutation_refl : core.
(**************************************
Symmetry
**************************************)
Theorem permutation_sym :
forall l m : list A, permutation l m -> permutation m l.
intros l1 l2 H'; elim H'.
apply permutation_nil.
intros a l1' l2' H1 H2.
apply permutation_skip with (1 := H2).
intros a b l1'.
apply permutation_swap.
intros l1' l2' l3' H1 H2 H3 H4.
apply permutation_trans with (1 := H4) (2 := H2).
Qed.
(**************************************
Compatibility with list length
**************************************)
Theorem permutation_length :
forall l m : list A, permutation l m -> length l = length m.
intros l m H'; elim H'; simpl in |- *; auto.
intros l1 l2 l3 H'0 H'1 H'2 H'3.
rewrite <- H'3; auto.
Qed.
(**************************************
A permutation of the nil list is the nil list
**************************************)
Theorem permutation_nil_inv : forall l : list A, permutation l nil -> l = nil.
intros l H; generalize (permutation_length _ _ H); case l; simpl in |- *;
auto.
intros; discriminate.
Qed.
(**************************************
A permutation of the singleton list is the singleton list
**************************************)
Let permutation_one_inv_aux :
forall l1 l2 : list A,
permutation l1 l2 -> forall a : A, l1 = a :: nil -> l2 = a :: nil.
intros l1 l2 H; elim H; clear H l1 l2; auto.
intros a l3 l4 H0 H1 b H2.
injection H2; intros; subst; auto.
rewrite (permutation_nil_inv _ (permutation_sym _ _ H0)); auto.
intros; discriminate.
Qed.
Theorem permutation_one_inv :
forall (a : A) (l : list A), permutation (a :: nil) l -> l = a :: nil.
intros a l H; apply permutation_one_inv_aux with (l1 := a :: nil); auto.
Qed.
(**************************************
Compatibility with list membership (In)
**************************************)
Theorem permutation_in :
forall (a : A) (l m : list A), permutation l m -> In a l -> In a m.
intros a l m H; elim H; simpl in |- *; auto; intuition.
Qed.
(**************************************
Compatibility with the append function
**************************************)
Theorem permutation_app_comp :
forall l1 l2 l3 l4,
permutation l1 l2 -> permutation l3 l4 -> permutation (l1 ++ l3) (l2 ++ l4).
intros l1 l2 l3 l4 H1; generalize l3 l4; elim H1; clear H1 l1 l2 l3 l4;
simpl in |- *; auto.
intros a b l l3 l4 H.
cut (permutation (l ++ l3) (l ++ l4)); auto.
intros; apply permutation_trans with (a :: b :: l ++ l4); auto.
elim l; simpl in |- *; auto.
intros l1 l2 l3 H H0 H1 H2 l4 l5 H3.
apply permutation_trans with (l2 ++ l4); auto.
Qed.
Hint Resolve permutation_app_comp : core.
(**************************************
Swap two sublists
**************************************)
Theorem permutation_app_swap :
forall l1 l2, permutation (l1 ++ l2) (l2 ++ l1).
intros l1; elim l1; auto.
intros; rewrite <- app_nil_end; auto.
intros a l H l2.
replace (l2 ++ a :: l) with ((l2 ++ a :: nil) ++ l).
apply permutation_trans with (l ++ l2 ++ a :: nil); auto.
apply permutation_trans with (((a :: nil) ++ l2) ++ l); auto.
simpl in |- *; auto.
apply permutation_trans with (l ++ (a :: nil) ++ l2); auto.
apply permutation_sym; auto.
replace (l2 ++ a :: l) with ((l2 ++ a :: nil) ++ l).
apply permutation_app_comp; auto.
elim l2; simpl in |- *; auto.
intros a0 l0 H0.
apply permutation_trans with (a0 :: a :: l0); auto.
apply (app_ass l2 (a :: nil) l).
apply (app_ass l2 (a :: nil) l).
Qed.
(**************************************
A transposition is a permutation
**************************************)
Theorem permutation_transposition :
forall a b l1 l2 l3,
permutation (l1 ++ a :: l2 ++ b :: l3) (l1 ++ b :: l2 ++ a :: l3).
intros a b l1 l2 l3.
apply permutation_app_comp; auto.
change
(permutation ((a :: nil) ++ l2 ++ (b :: nil) ++ l3)
((b :: nil) ++ l2 ++ (a :: nil) ++ l3)) in |- *.
repeat rewrite <- app_ass.
apply permutation_app_comp; auto.
apply permutation_trans with ((b :: nil) ++ (a :: nil) ++ l2); auto.
apply permutation_app_swap; auto.
repeat rewrite app_ass.
apply permutation_app_comp; auto.
apply permutation_app_swap; auto.
Qed.
(**************************************
An element of a list can be put on top of the list to get a permutation
**************************************)
Theorem in_permutation_ex :
forall a l, In a l -> exists l1 : list A, permutation (a :: l1) l.
intros a l; elim l; simpl in |- *; auto.
intros H; case H; auto.
intros a0 l0 H [H0| H0].
exists l0; rewrite H0; auto.
case H; auto; intros l1 Hl1; exists (a0 :: l1).
apply permutation_trans with (a0 :: a :: l1); auto.
Qed.
(**************************************
A permutation of a cons can be inverted
**************************************)
Let permutation_cons_ex_aux :
forall (a : A) (l1 l2 : list A),
permutation l1 l2 ->
forall l11 l12 : list A,
l1 = l11 ++ a :: l12 ->
exists l3 : list A,
(exists l4 : list A,
l2 = l3 ++ a :: l4 /\ permutation (l11 ++ l12) (l3 ++ l4)).
intros a l1 l2 H; elim H; clear H l1 l2.
intros l11 l12; case l11; simpl in |- *; intros; discriminate.
intros a0 l1 l2 H H0 l11 l12; case l11; simpl in |- *.
exists (nil (A:=A)); exists l1; simpl in |- *; split; auto.
injection H1; intros; subst; auto.
injection H1; intros H2 H3; rewrite <- H2; auto.
intros a1 l111 H1.
case (H0 l111 l12); auto.
injection H1; auto.
intros l3 (l4, (Hl1, Hl2)).
exists (a0 :: l3); exists l4; split; simpl in |- *; auto.
injection H1; intros; subst; auto.
injection H1; intros H2 H3; rewrite H3; auto.
intros a0 b l l11 l12; case l11; simpl in |- *.
case l12; try (intros; discriminate).
intros a1 l0 H; exists (b :: nil); exists l0; simpl in |- *; split; auto.
injection H; intros; subst; auto.
injection H; intros H1 H2 H3; rewrite H2; auto.
intros a1 l111; case l111; simpl in |- *.
intros H; exists (nil (A:=A)); exists (a0 :: l12); simpl in |- *; split; auto.
injection H; intros; subst; auto.
injection H; intros H1 H2 H3; rewrite H3; auto.
intros a2 H1111 H; exists (a2 :: a1 :: H1111); exists l12; simpl in |- *;
split; auto.
injection H; intros; subst; auto.
intros l1 l2 l3 H H0 H1 H2 l11 l12 H3.
case H0 with (1 := H3).
intros l4 (l5, (Hl1, Hl2)).
case H2 with (1 := Hl1).
intros l6 (l7, (Hl3, Hl4)).
exists l6; exists l7; split; auto.
apply permutation_trans with (1 := Hl2); auto.
Qed.
Theorem permutation_cons_ex :
forall (a : A) (l1 l2 : list A),
permutation (a :: l1) l2 ->
exists l3 : list A,
(exists l4 : list A, l2 = l3 ++ a :: l4 /\ permutation l1 (l3 ++ l4)).
intros a l1 l2 H.
apply (permutation_cons_ex_aux a (a :: l1) l2 H nil l1); simpl in |- *; auto.
Qed.
(**************************************
A permutation can be directly inverted if the two lists start with the same head element
**************************************)
Theorem permutation_inv :
forall (a : A) (l1 l2 : list A),
permutation (a :: l1) (a :: l2) -> permutation l1 l2.
intros a l1 l2 H; case permutation_cons_ex with (1 := H).
intros l3 (l4, (Hl1, Hl2)).
apply permutation_trans with (1 := Hl2).
generalize Hl1; case l3; simpl in |- *; auto.
intros H1; injection H1; intros H2; rewrite H2; auto.
intros a0 l5 H1; injection H1; intros H2 H3; rewrite H2; rewrite H3; auto.
apply permutation_trans with (a0 :: l4 ++ l5); auto.
apply permutation_skip; apply permutation_app_swap.
apply (permutation_app_swap (a0 :: l4) l5).
Qed.
(**************************************
Take a list and return the list of all pairs of an element of the
list and the remaining list
**************************************)
Fixpoint split_one (l : list A) : list (A * list A) :=
match l with
| nil => nil (A:=A * list A)
| a :: l1 =>
(a, l1)
:: map (fun p : A * list A => (fst p, a :: snd p)) (split_one l1)
end.
(**************************************
Each pair produced by split_one yields a permutation of the original list
**************************************)
Theorem split_one_permutation :
forall (a : A) (l1 l2 : list A),
In (a, l1) (split_one l2) -> permutation (a :: l1) l2.
intros a l1 l2; generalize a l1; elim l2; clear a l1 l2; simpl in |- *; auto.
intros a l1 H1; case H1.
intros a l H a0 l1 [H0| H0].
injection H0; intros H1 H2; rewrite H2; rewrite H1; auto.
generalize H H0; elim (split_one l); simpl in |- *; auto.
intros H1 H2; case H2.
intros a1 l0 H1 H2 [H3| H3]; auto.
injection H3; intros H4 H5; (rewrite <- H4; rewrite <- H5).
apply permutation_trans with (a :: fst a1 :: snd a1); auto.
apply permutation_skip.
apply H2; auto.
case a1; simpl in |- *; auto.
Qed.
(**************************************
Every element of the list occurs in some pair of split_one
**************************************)
Theorem split_one_in_ex :
forall (a : A) (l1 : list A),
In a l1 -> exists l2 : list A, In (a, l2) (split_one l1).
intros a l1; elim l1; simpl in |- *; auto.
intros H; case H.
intros a0 l H [H0| H0]; auto.
exists l; left; subst; auto.
case H; auto.
intros x H1; exists (a0 :: x); right; auto.
apply
(in_map (fun p : A * list A => (fst p, a0 :: snd p)) (split_one l) (a, x));
auto.
Qed.
(**************************************
An auxiliary function to generate all permutations
**************************************)
Fixpoint all_permutations_aux (l : list A) (n : nat) {struct n} :
list (list A) :=
match n with
| O => nil :: nil
| S n1 =>
flat_map
(fun p : A * list A =>
map (cons (fst p)) (all_permutations_aux (snd p) n1)) (
split_one l)
end.
(**************************************
Generate all the permutations
**************************************)
Definition all_permutations (l : list A) := all_permutations_aux l (length l).
(**************************************
Every element of all_permutations is a permutation of the original list
**************************************)
Let all_permutations_aux_permutation :
forall (n : nat) (l1 l2 : list A),
n = length l2 -> In l1 (all_permutations_aux l2 n) -> permutation l1 l2.
intros n; elim n; simpl in |- *; auto.
intros l1 l2; case l2.
simpl in |- *; intros H0 [H1| H1].
rewrite <- H1; auto.
case H1.
simpl in |- *; intros; discriminate.
intros n0 H l1 l2 H0 H1.
case in_flat_map_ex with (1 := H1).
clear H1; intros x; case x; clear x; intros a1 l3 (H1, H2).
case in_map_inv with (1 := H2).
simpl in |- *; intros y (H3, H4).
rewrite H4; auto.
apply permutation_trans with (a1 :: l3); auto.
apply permutation_skip; auto.
apply H with (2 := H3).
apply eq_add_S.
apply trans_equal with (1 := H0).
change (length l2 = length (a1 :: l3)) in |- *.
apply permutation_length; auto.
apply permutation_sym; apply split_one_permutation; auto.
apply split_one_permutation; auto.
Qed.
Theorem all_permutations_permutation :
forall l1 l2 : list A, In l1 (all_permutations l2) -> permutation l1 l2.
intros l1 l2 H; apply all_permutations_aux_permutation with (n := length l2);
auto.
Qed.
(**************************************
Every permutation occurs in the list produced by all_permutations
**************************************)
Let permutation_all_permutations_aux :
forall (n : nat) (l1 l2 : list A),
n = length l2 -> permutation l1 l2 -> In l1 (all_permutations_aux l2 n).
intros n; elim n; simpl in |- *; auto.
intros l1 l2; case l2.
intros H H0; rewrite permutation_nil_inv with (1 := H0); auto with datatypes.
simpl in |- *; intros; discriminate.
intros n0 H l1; case l1.
intros l2 H0 H1;
rewrite permutation_nil_inv with (1 := permutation_sym _ _ H1) in H0;
discriminate.
clear l1; intros a1 l1 l2 H1 H2.
case (split_one_in_ex a1 l2); auto.
apply permutation_in with (1 := H2); auto with datatypes.
intros x H0.
apply in_flat_map with (b := (a1, x)); auto.
apply in_map; simpl in |- *.
apply H; auto.
apply eq_add_S.
apply trans_equal with (1 := H1).
change (length l2 = length (a1 :: x)) in |- *.
apply permutation_length; auto.
apply permutation_sym; apply split_one_permutation; auto.
apply permutation_inv with (a := a1).
apply permutation_trans with (1 := H2).
apply permutation_sym; apply split_one_permutation; auto.
Qed.
Theorem permutation_all_permutations :
forall l1 l2 : list A, permutation l1 l2 -> In l1 (all_permutations l2).
intros l1 l2 H; unfold all_permutations in |- *;
apply permutation_all_permutations_aux; auto.
Qed.
(**************************************
Permutation is decidable
**************************************)
Definition permutation_dec :
(forall a b : A, {a = b} + {a <> b}) ->
forall l1 l2 : list A, {permutation l1 l2} + {~ permutation l1 l2}.
intros H l1 l2.
case (In_dec (list_eq_dec H) l1 (all_permutations l2)).
intros i; left; apply all_permutations_permutation; auto.
intros i; right; contradict i; apply permutation_all_permutations; auto.
Defined.
End permutation.
(**************************************
Hints
**************************************)
Global Hint Constructors permutation : core.
Global Hint Resolve permutation_refl : core.
Global Hint Resolve permutation_app_comp : core.
Global Hint Resolve permutation_app_swap : core.
(**************************************
Implicits
**************************************)
Arguments permutation [A] _ _.
Arguments split_one [A] _.
Arguments all_permutations [A] _.
Arguments permutation_dec [A].
(**************************************
Permutation is compatible with map
**************************************)
Theorem permutation_map :
forall (A B : Set) (f : A -> B) l1 l2,
permutation l1 l2 -> permutation (map f l1) (map f l2).
intros A B f l1 l2 H; elim H; simpl in |- *; auto.
intros l0 l3 l4 H0 H1 H2 H3; apply permutation_trans with (2 := H3); auto.
Qed.
Global Hint Resolve permutation_map : core.
(**************************************
Permutation of a map can be inverted
*************************************)
Local Definition permutation_map_ex_aux :
forall (A B : Set) (f : A -> B) l1 l2 l3,
permutation l1 l2 ->
l1 = map f l3 -> exists l4, permutation l4 l3 /\ l2 = map f l4.
intros A1 B1 f l1 l2 l3 H; generalize l3; elim H; clear H l1 l2 l3.
intros l3; case l3; simpl in |- *; auto.
intros H; exists (nil (A:=A1)); auto.
intros; discriminate.
intros a0 l1 l2 H H0 l3; case l3; simpl in |- *; auto.
intros; discriminate.
intros a1 l H1; case (H0 l); auto.
injection H1; auto.
intros l5 (H2, H3); exists (a1 :: l5); split; simpl in |- *; auto.
injection H1; intros; subst; auto.
intros a0 b l l3; case l3.
intros; discriminate.
intros a1 l0; case l0; simpl in |- *.
intros; discriminate.
intros a2 l1 H; exists (a2 :: a1 :: l1); split; simpl in |- *; auto.
injection H; intros; subst; auto.
intros l1 l2 l3 H H0 H1 H2 l0 H3.
case H0 with (1 := H3); auto.
intros l4 (HH1, HH2).
case H2 with (1 := HH2); auto.
intros l5 (HH3, HH4); exists l5; split; auto.
apply permutation_trans with (1 := HH3); auto.
Qed.
Theorem permutation_map_ex :
forall (A B : Set) (f : A -> B) l1 l2,
permutation (map f l1) l2 ->
exists l3, permutation l3 l1 /\ l2 = map f l3.
intros A0 B f l1 l2 H; apply permutation_map_ex_aux with (l1 := map f l1);
auto.
Qed.
(**************************************
Permutation is compatible with flat_map
**************************************)
Theorem permutation_flat_map :
forall (A B : Set) (f : A -> list B) l1 l2,
permutation l1 l2 -> permutation (flat_map f l1) (flat_map f l2).
intros A B f l1 l2 H; elim H; simpl in |- *; auto.
intros a b l; auto.
repeat rewrite <- app_ass.
apply permutation_app_comp; auto.
intros k3 l4 l5 H0 H1 H2 H3; apply permutation_trans with (1 := H1); auto.
Qed.
|
State Before: R : Type u_1
V : Type u_2
V' : Type ?u.135022
P : Type u_3
P' : Type ?u.135028
inst✝⁶ : StrictOrderedCommRing R
inst✝⁵ : AddCommGroup V
inst✝⁴ : Module R V
inst✝³ : AddTorsor V P
inst✝² : AddCommGroup V'
inst✝¹ : Module R V'
inst✝ : AddTorsor V' P'
s : AffineSubspace R P
x y : P
v : V
hv : v ∈ direction s
⊢ SSameSide s (v +ᵥ x) y ↔ SSameSide s x y
State After: no goals
Tactic: rw [SSameSide, SSameSide, wSameSide_vadd_left_iff hv, vadd_mem_iff_mem_of_mem_direction hv]
|
#!/usr/bin/env python3
import os
import sys
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import time
from tqdm import tqdm
import numpy as np
import cv2
from skimage import measure
# RESNET: import these for slim version of resnet
import tensorflow as tf
from tensorflow.python.framework import meta_graph
class Model:
    def __init__ (self, X, anchor_th, nms_max, nms_th, is_training, path, name):
        mg = meta_graph.read_meta_graph_file(path + '.meta')
        self.predictions = tf.import_graph_def(mg.graph_def, name=name,
                                input_map={'images:0': X,
                                           'anchor_th:0': anchor_th,
                                           'nms_max:0': nms_max,
                                           'nms_th:0': nms_th,
                                           'is_training:0': is_training,
                                           },
                                return_elements=['rpn_probs:0', 'rpn_shapes:0', 'rpn_index:0'])
        self.saver = tf.train.Saver(saver_def=mg.saver_def, name=name)
        self.loader = lambda sess: self.saver.restore(sess, path)
        pass
    pass
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_string('model', None, '')
flags.DEFINE_string('input', None, '')
flags.DEFINE_string('input_db', None, '')
flags.DEFINE_integer('stride', 16, '')
flags.DEFINE_float('anchor_th', 0.5, '')
flags.DEFINE_integer('nms_max', 10000, '')
flags.DEFINE_float('nms_th', 0.2, '')
flags.DEFINE_float('max', None, 'max images from db')
def save_prediction_image (path, image, preds):
    image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
    rpn_probs, rpn_boxes, rpn_index = preds
    assert np.all(rpn_index == 0)
    rpn_boxes = np.round(rpn_boxes).astype(np.int32)
    for i in range(rpn_boxes.shape[0]):
        x1, y1, x2, y2 = rpn_boxes[i]
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 0, 255))
    #boxes = np.round(boxes).astype(np.int32)
    #for i in range(boxes.shape[0]):
    #    x1, y1, x2, y2 = boxes[i]
    #    cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0))
    cv2.imwrite(path, image)
    pass
def main (_):
    X = tf.placeholder(tf.float32, shape=(None, None, None, 3), name="images")
    is_training = tf.constant(False, name="is_training")
    anchor_th = tf.constant(FLAGS.anchor_th, tf.float32)
    nms_max = tf.constant(FLAGS.nms_max, tf.int32)
    nms_th = tf.constant(FLAGS.nms_th, tf.float32)
    model = Model(X, anchor_th, nms_max, nms_th, is_training, FLAGS.model, 'xxx')
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    with tf.Session(config=config) as sess:
        model.loader(sess)
        if FLAGS.input:
            assert os.path.exists(FLAGS.input)
            image = cv2.imread(FLAGS.input, cv2.IMREAD_COLOR)
            image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
            batch = np.expand_dims(image, axis=0).astype(dtype=np.float32)
            preds = sess.run(model.predictions, feed_dict={X: batch})
            save_prediction_image(FLAGS.input + '.prob.png', image, preds)
        if FLAGS.input_db:
            assert os.path.exists(FLAGS.input_db)
            import picpac
            from gallery import Gallery
            picpac_config = {"db": FLAGS.input_db,
                             "loop": False,
                             "shuffle": False,
                             "reshuffle": False,
                             "annotate": False,
                             "channels": 3,
                             "stratify": False,
                             "dtype": "float32",
                             "colorspace": "RGB",
                             "batch": 1,
                             "transforms": []
                             }
            stream = picpac.ImageStream(picpac_config)
            gal = Gallery('output')
            C = 0
            for _, images in stream:
                preds = sess.run(model.predictions, feed_dict={X: images, is_training: False})
                save_prediction_image(gal.next(), images[0], preds)
                C += 1
                if FLAGS.max and C >= FLAGS.max:
                    break
                pass
            pass
            gal.flush()
    pass
if __name__ == '__main__':
    tf.app.run()
|
module NTupleKmers
export
Kmer,
DNAKmer,
DNAKmer27,
DNAKmer31,
DNAKmer63,
kmertype,
capacity,
n_unused
using BioSequences
import BioSequences: twobitnucs, BioSequence, Alphabet, decode
###
### Type definition
###
# In BioSequences.jl kmers are called Mer, not Kmer so there is no name clash.
"""
Kmer{A<:NucleicAcidAlphabet{2},K,N} <: BioSequence{A}
A parametric, immutable, bitstype for representing Kmers - short sequences.
Given the number of Kmers generated from raw sequencing reads, avoiding
repetitive memory allocation and triggering of garbage collection is important,
as is the ability to effectively pack Kmers into arrays and similar collections.
In julia this means an immutable bitstype must represent such shorter Kmer
sequences. Thankfully this is not much of a limitation - kmers are rarely
manipulated and so by and large don't have to be mutable like `LongSequence`s.
Excepting their immutability, they fulfill the rest of the API and behaviours
expected from a concrete `BioSequence` type.
!!! warning
Given their immutability, `setindex` and mutating sequence transformations
are not implemented for kmers e.g. `reverse_complement!`.
!!! tip
Note that some sequence transformations that are not mutating are
available, since they can return a new kmer value as a result e.g.
`reverse_complement`.
"""
struct Kmer{A<:NucleicAcidAlphabet{2},K,N}
data::NTuple{N,UInt64}
end
###
### Shortcuts
###
"Shortcut for the type `Kmer{DNAAlphabet{2},K,N}`"
const DNAKmer{K,N} = Kmer{DNAAlphabet{2},K,N}
"Shortcut for the type `DNAKmer{27,1}`"
const DNAKmer27 = DNAKmer{27,1}
"Shortcut for the type `DNAKmer{31,1}`"
const DNAKmer31 = DNAKmer{31,1}
"Shortcut for the type `DNAKmer{63,2}`"
const DNAKmer63 = DNAKmer{63,2}
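# A usage sketch, kept in comments since it assumes BioSequences' string
# constructor for LongSequence and a sequence free of ambiguous symbols:
#   seq  = LongSequence{DNAAlphabet{2}}("ACGTACGTACGTACGTACGTACGTACG")  # 27 nt
#   kmer = DNAKmer27(seq)   # 27 bases packed into a single UInt64 word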
# TODO: Delete when Kmer is made to <: BioSequence.
Alphabet(kmer::Kmer{A,K,N}) where {A,K,N} = A()
###
### Base Functions
###
"""
kmertype(::Type{Kmer{A,K}}) where {A,K}
Fully resolve an incomplete kmer type, computing the N parameter of
`Kmer{A,K,N}`, given only `Kmer{A,K}`.
## Example
```julia
julia> DNAKmer{63}
Kmer{DNAAlphabet{2},63,N} where N
julia> kmertype(DNAKmer{63})
Kmer{DNAAlphabet{2},63,2}
```
"""
@inline function kmertype(::Type{Kmer{A,K}}) where {A,K}
return Kmer{A,K,ifelse(rem(2K, 64) != 0, div(2K, 64) + 1, div(2K, 64))}
end
@inline capacity(::Type{Kmer{A,K,N}}) where {A,K,N} = div(64N, 2)
@inline n_unused(::Type{Kmer{A,K,N}}) where {A,K,N} = capacity(Kmer{A,K,N}) - K
@inline n_unused(seq::Kmer) = n_unused(typeof(seq))
@inline function checkmer(::Type{Kmer{A,K,N}}) where {A,K,N}
c = capacity(Kmer{A,K,N})
if !(1 ≤ K ≤ c)
throw(ArgumentError("K must be within 1..$c"))
end
end
@inline Base.eltype(::Type{Kmer{A,K,N}}) where {A,K,N} = eltype(A)
@inline Base.length(x::Kmer{A,K,N}) where {A,K,N} = K
@inline Base.summary(x::Kmer{DNAAlphabet{2},K,N}) where {K,N} = string("DNA ", K, "-mer")
@inline Base.summary(x::Kmer{RNAAlphabet{2},K,N}) where {K,N} = string("RNA ", K, "-mer")
function Base.typemin(::Type{Kmer{A,K,N}}) where {A,K,N}
checkmer(Kmer{A,K,N})
return Kmer{A,K,N}(ntuple(i -> zero(UInt64), N))
end
function Base.typemax(::Type{Kmer{A,K,N}}) where {A,K,N}
checkmer(Kmer{A,K,N})
return Kmer{A,K,N}((typemax(UInt64) >> (64N - 2K), ntuple(i -> typemax(UInt64),N - 1)...))
end
function Base.rand(::Type{Kmer{A,K,N}}) where {A,K,N}
return Kmer{A,K,N}(ntuple(i -> rand(UInt64), N))
end
###
### Constructors
###
# Create a Mer from a sequence.
function (::Type{Kmer{A,K,N}})(seq::LongSequence{A}) where {A<:NucleicAcidAlphabet{2},K,N}
seqlen = length(seq)
if seqlen != K
throw(ArgumentError("seq does not contain the correct number of nucleotides ($seqlen ≠ $K)"))
end
if seqlen > capacity(Kmer{A,K,N})
throw(ArgumentError("Cannot build a mer longer than $(capacity(Kmer{A,K,N}))bp long"))
end
# Construct the head.
bases_in_head = div(64 - (64N - 2K), 2)
head = zero(UInt64)
@inbounds for i in 1:bases_in_head
nt = convert(eltype(typeof(seq)), seq[i])
head = (head << 2) | UInt64(twobitnucs[reinterpret(UInt8, nt) + 0x01])
end
# And the rest of the sequence
idx = Ref(bases_in_head + 1)
tail = ntuple(Val{N - 1}()) do i
Base.@_inline_meta
body = zero(UInt64)
@inbounds for i in 1:32
nt = convert(eltype(typeof(seq)), seq[idx[]])
body = (body << 2) | UInt64(twobitnucs[reinterpret(UInt8, nt) + 0x01])
idx[] += 1
end
return body
end
return Kmer{A,K,N}((head, tail...))
end
function (::Type{BigMer{A,K}})(seq::LongSequence{A}) where {A<:NucleicAcidAlphabet{2},K}
seqlen = length(seq)
if seqlen != K
throw(ArgumentError("seq does not contain the correct number of nucleotides ($seqlen ≠ $K)"))
end
if seqlen > BioSequences.capacity(BigMer{A,K})
throw(ArgumentError("Cannot build a mer longer than $(BioSequences.capacity(BigMer{A,K}))bp long"))
end
x = zero(BioSequences.encoded_data_type(BigMer{A,K}))
for c in seq
nt = convert(eltype(BigMer{A,K}), c)
x = (x << 2) | BioSequences.encoded_data_type(BigMer{A,K})(twobitnucs[reinterpret(UInt8, nt) + 0x01])
end
return BigMer{A,K}(x)
end
include("bitindex.jl")
include("indexing.jl")
include("predicates.jl")
include("transformations.jl")
@inline function choptail(x::NTuple{N,UInt64}) where {N}
ntuple(Val{N - 1}()) do i
Base.@_inline_meta
return @inbounds x[i]
end
end
@inline function setlast(x::NTuple{N,UInt64}, nt::DNA) where {N}
@inbounds begin
bits = UInt64(twobitnucs[reinterpret(UInt8, nt) + 0x01])
tail = (x[N] & (typemax(UInt64) - UInt64(3))) | bits
end
return (choptail(x)..., tail)
end
"""
shiftright
It is important to be able to efficiently shift all the nucleotides in a kmer
one space to the left or right, as this is a key operation when iterating through
de Bruijn graph neighbours or when building kmers one nucleotide at a time.
"""
function shiftright end
@inline shiftright(x::BigDNAMer{K}) where {K} = BigDNAMer{K}(reinterpret(UInt128, x) >> 2)
@inline function shiftright(x::Kmer{A,K,N}) where {A,K,N}
return Kmer{A,K,N}(_shiftright(zero(UInt64), x.data...))
end
@inline function _shiftright(carry::UInt64, head::UInt64, tail...)
return ((head >> 2) | carry, _shiftright((head & UInt64(3)) << 62, tail...)...)
end
@inline _shiftright(carry::UInt64) = ()
"""
shiftleft
It is important to be able to efficiently shift all the nucleotides in a kmer
one space to the left or right, as this is a key operation when iterating through
de Bruijn graph neighbours or when building kmers one nucleotide at a time.
"""
function shiftleft end
@inline shiftleft(x::BigDNAMer{K}) where {K} = BigDNAMer{K}(reinterpret(UInt128, x) << 2)
@inline function shiftleft(x::Kmer{A,K,N}) where {A,K,N}
_, newbits = _shiftleft(x.data...)
# TODO: The line below is a workaround for julia issues #29114 and #36087
newbits′ = newbits isa UInt64 ? (newbits,) : newbits
return Kmer{A,K,N}(_cliphead(64N - 2K, newbits′...))
end
@inline function shiftleft(x::NTuple{N,UInt64}) where {N}
_, newbits = _shiftleft(x...)
# TODO: The line below is a workaround for julia issues #29114 and #36087
return newbits isa UInt64 ? (newbits,) : newbits
end
@inline function _cliphead(by::Integer, head::UInt64, tail...)
return (head & (typemax(UInt64) >> by), tail...)
end
@inline function _shiftleft(head::UInt64, tail...)
carry, newtail = _shiftleft(tail...)
# TODO: The line below is a workaround for julia issues #29114 and #36087
newtail′ = newtail isa UInt64 ? (newtail,) : newtail
return head >> 62, ((head << 2) | carry, newtail′...)
end
@inline _shiftleft(head::UInt64) = (head & 0xC000000000000000) >> 62, head << 2
#=
@inline function shiftleft2(x::Kmer{A,K,N}) where {A,K,N}
return _cliphead(64N - 2K, _shiftleft2(x.data...)...)
end
@inline function _shiftleft2(head::UInt64, next::UInt64, tail...)
return (head << 2 | (next >> 62), _shiftleft2(next, tail...)...)
end
@inline _shiftleft2(head::UInt64) = head << 2
=#
end # module
|
/* BalSelSims.c
Simulation of balancing selection simulation with sampling (finite population effects)
Selection; reproduction; mutation based on deterministic recursions, then sampling
Repeats for 'Length' generations
Simulation uses routines found with the GNU Scientific Library (GSL)
(http://www.gnu.org/software/gsl/)
Since GSL is distributed under the GNU General Public License
(http://www.gnu.org/copyleft/gpl.html), you must download it
separately from this file.
This program can be compiled in e.g. GCC using a command like:
gcc -o BalSelSims BalSelSims.c -lm -lgsl -lgslcblas -I/usr/local/include -L/usr/local/lib
Then run by executing:
./BalSelSims N s rec sex self gc reps
Where:
- N is the population size
- s is the fitness disadvantage of homozygotes
- rec is recombination rate
- sex is rate of sex (a value between 0 = obligate asex, and 1 = obligate sex)
- self is selfing rate
- gc is gene conversion
- reps is how many times to introduce linked neutral allele
Note that haplotypes are defined as:
x1 = ab
x2 = Ab
x3 = aB
x4 = AB
Genotypes defined as:
g11 = g1 = ab/ab
g12 = g2 = Ab/ab
g13 = g3 = aB/ab
g14 = g4 = AB/ab
g22 = g5 = Ab/Ab
g23 = g6 = Ab/aB
g24 = g7 = Ab/AB
g33 = g8 = aB/aB
g34 = g9 = aB/AB
g44 = g10 = AB/AB
*/
/* Preprocessor statements */
#include <stdio.h>
#include <time.h>
#include <math.h>
#include <stddef.h>
#include <stdlib.h> /* needed for calloc, free, exit, strtod, getenv */
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
/* Function prototypes */
void geninit(double *geninit);
void selection(double *geninit);
void reproduction(double *geninit);
void gconv(double *geninit);
void neutinit(double *geninit,const gsl_rng *r);
double ncheck(double *geninit);
double pcheck(double *geninit);
/* Global variable declaration */
unsigned int N = 0; /* Pop size */
double s = 0; /* Fitness disadvantage of homozygotes */
double rec = 0; /* Recombination rate */
double sex = 0; /* Rate of sexual reproduction */
double self = 0; /* Rate of self-fertilisation */
double gc = 0; /* Rate of gene conversion */
/* Main program */
int main(int argc, char *argv[]){
unsigned int g, i; /* Counters. Reps counter, geno counter */
unsigned int reps; /* Length of simulation (no. of introductions of neutral site) */
double Bcheck = 0; /* Frequency of B after each reproduction */
double Acheck = 0; /* Frequency of polymorphism */
double Hsum = 0; /* Summed heterozygosity over transit time of neutral allele */
/* GSL random number definitions */
const gsl_rng_type * T;
gsl_rng * r;
/* This reads in data from command line. */
if(argc != 8){
fprintf(stderr,"Invalid number of input values.\n");
exit(1);
}
N = strtod(argv[1],NULL);
s = strtod(argv[2],NULL);
rec = strtod(argv[3],NULL);
sex = strtod(argv[4],NULL);
self = strtod(argv[5],NULL);
gc = strtod(argv[6],NULL);
reps = strtod(argv[7],NULL);
/* Arrays definition and memory assignment */
double *genotype = calloc(10,sizeof(double)); /* Genotype frequencies */
unsigned int *gensamp = calloc(10,sizeof(unsigned int)); /* New population samples */
/* create a generator chosen by the
environment variable GSL_RNG_TYPE */
gsl_rng_env_setup();
if (!getenv("GSL_RNG_SEED")) gsl_rng_default_seed = time(0);
T = gsl_rng_default;
r = gsl_rng_alloc(T);
/* Initialising genotypes */
geninit(genotype);
/* Run simulation for 2000 generations to create a burn in */
for(g = 0; g < 2000; g++){
/* Selection routine */
selection(genotype);
/* Reproduction routine */
reproduction(genotype);
/* Gene conversion routine */
gconv(genotype);
/* Sampling based on new frequencies */
gsl_ran_multinomial(r,10,N,genotype,gensamp);
for(i = 0; i < 10; i++){
*(genotype + i) = (*(gensamp + i))/(1.0*N);
}
/* Printing out results (for testing) */
/*
for(i = 0; i < 10; i++){
printf("%.10lf ", *(genotype + i));
}
printf("\n");
*/
}
/* Reintroducing neutral genotype, resetting hap sum */
neutinit(genotype,r);
Bcheck = ncheck(genotype);
Hsum = Bcheck*(1-Bcheck);
/* printf("%.10lf %.10lf\n",Bcheck,Hsum); */
/* Introduce and track neutral mutations 'reps' times */
g = 0;
while(g < reps){
/* Selection routine */
selection(genotype);
/* Reproduction routine */
reproduction(genotype);
/* Gene conversion routine */
gconv(genotype);
/* Sampling based on new frequencies */
gsl_ran_multinomial(r,10,N,genotype,gensamp);
for(i = 0; i < 10; i++){
*(genotype + i) = (*(gensamp + i))/(1.0*N);
}
/* Checking state of haplotypes: if B fixed reset so can start fresh next time */
Bcheck = ncheck(genotype);
Hsum += Bcheck*(1-Bcheck);
/* printf("%.10lf %.10lf\n",Bcheck,Hsum); */
/* If polymorphism fixed then abandon simulation */
Acheck = pcheck(genotype);
if(Acheck == 0){
g = reps;
}
if(Bcheck == 0 || Bcheck == 1){
printf("%.10lf\n",Hsum);
g++;
/* printf("Rep Number %d\n",g); */
if(Bcheck == 1){
/* Reset genotypes so B becomes ancestral allele */
*(genotype + 0) = *(genotype + 7);
*(genotype + 1) = *(genotype + 8);
*(genotype + 4) = *(genotype + 9);
*(genotype + 7) = 0;
*(genotype + 8) = 0;
*(genotype + 9) = 0;
}
/* Reintroducing neutral genotype, resetting hap sum */
neutinit(genotype,r);
Bcheck = ncheck(genotype);
Hsum = Bcheck*(1-Bcheck);
}
} /* End of simulation */
/* Freeing memory and wrapping up */
gsl_rng_free(r);
free(gensamp);
free(genotype);
/* printf("The End!\n"); */
return 0;
}
/* Initialising genotypes */
void geninit(double *geninit){
/* Basic idea: before neutral mutation introduced, g0 = 1/4; g1 = 1/2; g4 = 1/4. Deterministic frequencies. */
*(geninit + 0) = 0.25;
*(geninit + 1) = 0.50;
*(geninit + 2) = 0;
*(geninit + 3) = 0;
*(geninit + 4) = 0.25;
*(geninit + 5) = 0;
*(geninit + 6) = 0;
*(geninit + 7) = 0;
*(geninit + 8) = 0;
*(geninit + 9) = 0;
} /* End of gen initiation routine */
/* Initialising NEUTRAL allele */
void neutinit(double *geninit, const gsl_rng *r){
double *probin = calloc(4,sizeof(double)); /* Probability inputs, determine location of neutral allele */
unsigned int *probout = calloc(4,sizeof(unsigned int)); /* Output from multinomial sampling */
/* Basic idea: g11 at freq p^2; g12 at freq 2pq; g22 at freq q^2.
So weighted sampling to determine which genotype the neutral allele arises on
*/
/* Prob definitions */
*(probin + 0) = *(geninit + 0);
*(probin + 1) = (*(geninit + 1))/2.0;
*(probin + 2) = (*(geninit + 1))/2.0;
*(probin + 3) = *(geninit + 4);
gsl_ran_multinomial(r,4,1,probin,probout);
/* Redefining genotypes depending on outcome */
if(*(probout + 0) == 1){
*(geninit + 0) = (*(geninit + 0)) - 1/(1.0*N);
*(geninit + 2) = (*(geninit + 2)) + 1/(1.0*N);
}
else if(*(probout + 1) == 1){
*(geninit + 1) = (*(geninit + 1)) - 1/(1.0*N);
*(geninit + 3) = (*(geninit + 3)) + 1/(1.0*N);
}
else if(*(probout + 2) == 1){
*(geninit + 1) = (*(geninit + 1)) - 1/(1.0*N);
*(geninit + 5) = (*(geninit + 5)) + 1/(1.0*N);
}
else if(*(probout + 3) == 1){
*(geninit + 4) = (*(geninit + 4)) - 1/(1.0*N);
*(geninit + 6) = (*(geninit + 6)) + 1/(1.0*N);
}
free(probout);
free(probin);
} /* End of gen initiation routine */
/* Selection routine */
void selection(double *geninit){
double Waa, WAa, WAA; /* Fitness of locus A (bal sel locus) */
double Wmean; /* Mean fitness */
Waa = 1-s;
WAa = 1;
WAA = 1-s;
/* Mean fitness calculation */
Wmean = ((*(geninit + 0))*Waa) + ((*(geninit + 1))*WAa) + ((*(geninit + 2))*Waa) + ((*(geninit + 3))*WAa) + ((*(geninit + 4))*WAA) + ((*(geninit + 5))*WAa) + ((*(geninit + 6))*WAA) + ((*(geninit + 7))*Waa) + ((*(geninit + 8))*WAa) + ((*(geninit + 9))*WAA);
/* Changing frequencies by selection */
*(geninit + 0) = ((*(geninit + 0))*Waa)/Wmean;
*(geninit + 1) = ((*(geninit + 1))*WAa)/Wmean;
*(geninit + 2) = ((*(geninit + 2))*Waa)/Wmean;
*(geninit + 3) = ((*(geninit + 3))*WAa)/Wmean;
*(geninit + 4) = ((*(geninit + 4))*WAA)/Wmean;
*(geninit + 5) = ((*(geninit + 5))*WAa)/Wmean;
*(geninit + 6) = ((*(geninit + 6))*WAA)/Wmean;
*(geninit + 7) = ((*(geninit + 7))*Waa)/Wmean;
*(geninit + 8) = ((*(geninit + 8))*WAa)/Wmean;
*(geninit + 9) = ((*(geninit + 9))*WAA)/Wmean;
} /* End of selection routine */
/* Reproduction routine */
void reproduction(double *geninit){
/* Fed-in genotype frequencies (for ease of programming) */
double g11s, g12s, g13s, g14s, g22s, g23s, g24s, g33s, g34s, g44s;
/* Genotype frequencies after sex (outcross and selfing) */
double g11SX, g12SX, g13SX, g14SX, g22SX, g23SX, g24SX, g33SX, g34SX, g44SX;
/* Genotype frequencies after ASEX */
double g11AS, g12AS, g13AS, g14AS, g22AS, g23AS, g24AS, g33AS, g34AS, g44AS;
/* Haplotypes */
double x1, x2, x3, x4;
/* Initial definition of genotypes */
g11s = *(geninit + 0);
g12s = *(geninit + 1);
g13s = *(geninit + 2);
g14s = *(geninit + 3);
g22s = *(geninit + 4);
g23s = *(geninit + 5);
g24s = *(geninit + 6);
g33s = *(geninit + 7);
g34s = *(geninit + 8);
g44s = *(geninit + 9);
/* Baseline change in haplotype frequencies */
x1 = g11s + (g12s + g13s + g14s)/2.0 - ((g14s - g23s)*rec)/2.0;
x2 = g22s + (g12s + g23s + g24s)/2.0 + ((g14s - g23s)*rec)/2.0;
x3 = g33s + (g13s + g23s + g34s)/2.0 + ((g14s - g23s)*rec)/2.0;
x4 = g44s + (g14s + g24s + g34s)/2.0 - ((g14s - g23s)*rec)/2.0;
/* Change in SEXUAL frequencies (both outcrossing and selfing) */
g11SX = (g11s + (g12s + g13s + g14s*pow((1 - rec),2) + g23s*pow(rec,2))/4.0)*self*sex + (1 - self)*pow(x1,2)*sex;
g22SX = (g22s + (g12s + g24s + g23s*pow((1 - rec),2) + g14s*pow(rec,2))/4.0)*self*sex + (1 - self)*pow(x2,2)*sex;
g33SX = (g33s + (g13s + g34s + g23s*pow((1 - rec),2) + g14s*pow(rec,2))/4.0)*self*sex + (1 - self)*pow(x3,2)*sex;
g44SX = (g44s + (g24s + g34s + g14s*pow((1 - rec),2) + g23s*pow(rec,2))/4.0)*self*sex + (1 - self)*pow(x4,2)*sex;
g12SX = ((g12s + (g14s + g23s)*(1 - rec)*rec)*self*sex)/2.0 + 2.0*(1 - self)*x1*x2*sex;
g13SX = ((g13s + (g14s + g23s)*(1 - rec)*rec)*self*sex)/2.0 + 2.0*(1 - self)*x1*x3*sex;
g14SX = ((g14s*pow((1 - rec),2) + g23s*pow(rec,2))*self*sex)/2.0 + 2.0*(1 - self)*x1*x4*sex;
g23SX = ((g23s*pow((1 - rec),2) + g14s*pow(rec,2))*self*sex)/2.0 + 2.0*(1 - self)*x2*x3*sex;
g24SX = ((g24s + (g14s + g23s)*(1 - rec)*rec)*self*sex)/2.0 + 2.0*(1 - self)*x2*x4*sex;
g34SX = ((g34s + (g14s + g23s)*(1 - rec)*rec)*self*sex)/2.0 + 2.0*(1 - self)*x3*x4*sex;
/* Change in ASEXUAL frequencies */
g11AS = g11s*(1 - sex);
g12AS = g12s*(1 - sex);
g13AS = g13s*(1 - sex);
g14AS = g14s*(1 - sex);
g22AS = g22s*(1 - sex);
g23AS = g23s*(1 - sex);
g24AS = g24s*(1 - sex);
g33AS = g33s*(1 - sex);
g34AS = g34s*(1 - sex);
g44AS = g44s*(1 - sex);
/* Combining to give overall frequency change following reproduction */
*(geninit + 0) = g11AS + g11SX;
*(geninit + 1) = g12AS + g12SX;
*(geninit + 2) = g13AS + g13SX;
*(geninit + 3) = g14AS + g14SX;
*(geninit + 4) = g22AS + g22SX;
*(geninit + 5) = g23AS + g23SX;
*(geninit + 6) = g24AS + g24SX;
*(geninit + 7) = g33AS + g33SX;
*(geninit + 8) = g34AS + g34SX;
*(geninit + 9) = g44AS + g44SX;
} /* End of reproduction routine */
/* Gene conversion routine */
void gconv(double *geninit){
/* Fed-in genotype frequencies (for ease of programming) */
double g11r, g12r, g13r, g14r, g22r, g23r, g24r, g33r, g34r, g44r;
/* Frequencies after gene conversion */
double g11gc, g12gc, g13gc, g14gc, g22gc, g23gc, g24gc, g33gc, g34gc, g44gc;
/* Initial definition of genotypes */
g11r = *(geninit + 0);
g12r = *(geninit + 1);
g13r = *(geninit + 2);
g14r = *(geninit + 3);
g22r = *(geninit + 4);
g23r = *(geninit + 5);
g24r = *(geninit + 6);
g33r = *(geninit + 7);
g34r = *(geninit + 8);
g44r = *(geninit + 9);
/* Gene conversion equations */
g11gc = g11r + (gc*g12r)/4.0 + (gc*g13r)/4.0;
g12gc = g12r*(1 - gc/2.0) + ((g14r + g23r)*gc)/4.0;
g13gc = g13r*(1 - gc/2.0) + ((g14r + g23r)*gc)/4.0;
g14gc = (1-gc)*g14r;
g22gc = g22r + (g12r*gc)/4.0 + (g24r*gc)/4.0;
g23gc = (1-gc)*g23r;
g24gc = g24r*(1 - gc/2.0) + ((g14r + g23r)*gc)/4.0;
g33gc = g33r + (g13r*gc)/4.0 + (g34r*gc)/4.0;
g34gc = g34r*(1 - gc/2.0) + ((g14r + g23r)*gc)/4.0;
g44gc = g44r + (g24r*gc)/4.0 + (g34r*gc)/4.0;
/* Output */
*(geninit + 0) = g11gc;
*(geninit + 1) = g12gc;
*(geninit + 2) = g13gc;
*(geninit + 3) = g14gc;
*(geninit + 4) = g22gc;
*(geninit + 5) = g23gc;
*(geninit + 6) = g24gc;
*(geninit + 7) = g33gc;
*(geninit + 8) = g34gc;
*(geninit + 9) = g44gc;
} /* End of gene conversion routine */
/* Has neutral allele fixed or not? Measuring freq of B */
double ncheck(double *geninit){
/* Fed-in genotype frequencies (for ease of programming) */
double g11s, g12s, g13s, g14s, g22s, g23s, g24s, g33s, g34s, g44s;
/* Haplotypes ONLY CONTAINING B */
double x3, x4;
double Btot = 0; /* Total frequency of B */
/* Initial definition of genotypes */
g11s = *(geninit + 0);
g12s = *(geninit + 1);
g13s = *(geninit + 2);
g14s = *(geninit + 3);
g22s = *(geninit + 4);
g23s = *(geninit + 5);
g24s = *(geninit + 6);
g33s = *(geninit + 7);
g34s = *(geninit + 8);
g44s = *(geninit + 9);
/* Calculation of haplotypes containing B */
x3 = g33s + (g13s + g23s + g34s)/2.0;
x4 = g44s + (g14s + g24s + g34s)/2.0;
/* Checking */
Btot = x3 + x4;
return Btot;
} /* End of B check routine */
/* Checking if balancing polymorphism lost or not */
double pcheck(double *geninit){
/* Fed-in genotype frequencies (for ease of programming) */
double g11s, g12s, g13s, g14s, g22s, g23s, g24s, g33s, g34s, g44s;
double x2, x4;
double Atot = 0; /* Total frequency of A */
/* Initial definition of genotypes */
g11s = *(geninit + 0);
g12s = *(geninit + 1);
g13s = *(geninit + 2);
g14s = *(geninit + 3);
g22s = *(geninit + 4);
g23s = *(geninit + 5);
g24s = *(geninit + 6);
g33s = *(geninit + 7);
g34s = *(geninit + 8);
g44s = *(geninit + 9);
/* Calculation of haplotypes containing A */
x2 = g22s + (g12s + g23s + g24s)/2.0;
x4 = g44s + (g14s + g24s + g34s)/2.0;
/* Checking */
Atot = x2 + x4;
return Atot;
} /* End of A check routine */
/* End of program */
|
-- An answer to a question by Nicolas Alexander Schmidt in the idris mailing list.
%default total
interface Foo (p : Type -> Type) where
bar : {0 a : _} -> p a => a
data OnlyInt : Type -> Type where
NiceCase : OnlyInt Int
Foo OnlyInt where
bar @{NiceCase} = 0
-- Why can't we remove `BadCase` and its match from the `bar` implementation?
-- The compiler claims that the match is not exhaustive; however, everything works for `Nat`s:
data X : Nat -> Type where
X10 : X 10
bbar : {a : _} -> X a -> Int
bbar {a=10} X10 = 1
|
/*
Copyright (c) 2015, Patrick Weltevrede
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <math.h>
#include <stdlib.h>
#include <string.h>
#include "psrsalsa.h"
#include <gsl/gsl_linalg.h>
int linalg_solve_matrix_eq_gauss_jordan(double *matrixa, double *matrixb, int n, int m, verbose_definition verbose)
{
int *orig_pivot_row, *orig_pivot_column, *pivot_column_done;
int pivotnr, pivot_col, pivot_row, rowi, coli, i;
double *ptr1, *ptr2, tmpvalue;
double largest_element_value;
int currow;
orig_pivot_row = malloc(n*sizeof(int));
orig_pivot_column = malloc(n*sizeof(int));
pivot_column_done = malloc(n*sizeof(int));
if(orig_pivot_row == NULL || orig_pivot_column == NULL || pivot_column_done == NULL) {
printerror(verbose.debug, "ERROR linalg_solve_matrix_eq_gauss_jordan: Memory allocation error.");
return 1;
}
for(coli = 0; coli < n; coli++)
pivot_column_done[coli] = 0;
for(pivotnr = 0; pivotnr < n; pivotnr++) {
largest_element_value = -1.0;
for(rowi = 0; rowi < n; rowi++) {
if(pivot_column_done[rowi] != 1) {
for(coli = 0; coli < n; coli++) {
if(pivot_column_done[coli] == 0) {
tmpvalue = matrixa[rowi*n+coli];
if(fabs(tmpvalue) >= largest_element_value) {
largest_element_value = fabs(tmpvalue);
pivot_row=rowi;
pivot_col=coli;
}
}else if(pivot_column_done[coli] > 1) {
printerror(verbose.debug, "ERROR linalg_solve_matrix_eq_gauss_jordan: The matrix equation is singular, no solution can be determined.");
return 2;
}
}
}
}
if(largest_element_value <= 0.0) {
printerror(verbose.debug, "ERROR linalg_solve_matrix_eq_gauss_jordan: The matrix equation is singular, no solution can be determined.");
return 2;
}
pivot_column_done[pivot_col] += 1;
/* Swap the pivot row into place, element by element, in both matrixa and matrixb */
if(pivot_row != pivot_col) {
ptr1 = matrixa + pivot_row*n;
ptr2 = matrixa + pivot_col*n;
for(i = 0; i < n; i++) {
tmpvalue = *ptr1;
*ptr1 = *ptr2;
*ptr2 = tmpvalue;
ptr1++;
ptr2++;
}
ptr1 = matrixb + pivot_row*m;
ptr2 = matrixb + pivot_col*m;
for(i = 0; i < m; i++) {
tmpvalue = *ptr1;
*ptr1 = *ptr2;
*ptr2 = tmpvalue;
ptr1++;
ptr2++;
}
}
orig_pivot_row[pivotnr] = pivot_row;
orig_pivot_column[pivotnr] = pivot_col;
/* Normalise the pivot row by the signed pivot value (not its absolute value) */
tmpvalue = matrixa[pivot_col*n+pivot_col];
matrixa[pivot_col*n+pivot_col] = 1.0;
ptr1 = matrixa + pivot_col*n;
for(i = 0; i < n; i++) {
*ptr1 /= tmpvalue;
ptr1 += 1;
}
ptr1 = matrixb + pivot_col*m;
for(i = 0; i < m; i++) {
*ptr1 /= tmpvalue;
ptr1 += 1;
}
for(currow = 0; currow < n; currow++) {
if(currow != pivot_col) {
i = currow*n+pivot_col;
tmpvalue = matrixa[i];
matrixa[i] = 0.0;
ptr1 = matrixa + n*currow;
ptr2 = matrixa + n*pivot_col;
for(i=1; i <= n; i++) {
*ptr1 -= (*ptr2)*tmpvalue;
ptr1++;
ptr2++;
}
ptr1 = matrixb + currow*m;
ptr2 = matrixb + pivot_col*m;
for(i = 0; i < m; i++) {
*ptr1 -= (*ptr2)*tmpvalue;
ptr1++;
ptr2++;
}
}
}
}
for(coli = n-1; coli >= 0; coli--) {
if(orig_pivot_row[coli] != orig_pivot_column[coli]) {
for(rowi = 0; rowi < n; rowi++) {
tmpvalue = matrixa[rowi*n+orig_pivot_row[coli]];
matrixa[rowi*n+orig_pivot_row[coli]] = matrixa[rowi*n+orig_pivot_column[coli]];
matrixa[rowi*n+orig_pivot_column[coli]] = tmpvalue;
}
}
}
free(pivot_column_done);
free(orig_pivot_row);
free(orig_pivot_column);
return 0;
}
|
rm(list=ls())
load("HDIdata.rda")
y=array(NA,c(164,6,3))
nams=HDIdata[1:164,2]
u=unique(HDIdata[,1])
for(j in 1:6) {
for(h in 1:3) {
y[,j,h]=HDIdata[HDIdata[,1]==u[j],h+2]}}
source("codeKvariable.r")
library(snipEM)
library(mvtnorm)
set.seed(12345)
load("rl.rda")
Bs=c(10,50,100,200,500)
rlB=list()
for(iter in 1:5) {
  rlB[[iter]] = rlm(y, 0, 4, inits = inits0, hits = 50, B = Bs[iter])
}
save.image(file="B.rda")
|
YouTube Videos - See what your baby bird can learn to do!
Must See Budgie Video...Very Smart Birds!!
Disco the Parakeet....Amazing talking budgie!
Oh boy can he talk!
37 week old talking baby budgie!
See how clicker training can teach your baby bird amazing tricks!
Shows you how a Linnie can learn to talk...Very Cute!
Great information to keep your birds safe!
Great training and behaviour tips and tricks for all companion birds!
Great Canadian source for cages!
Great place to buy bird supplies online!
Great information about these fun little birds!
|
PROGRAM run_parareal_openmp
USE parareal_openmp_pipe, only: InitializePararealOpenMP_Pipe, FinalizePararealOpenMP_Pipe, PararealOpenMP_Pipe
USE params, only : Nx, Ny, Nz, dx, dy, dz, nu, N_coarse, N_fine, Niter, Tend, do_io, be_verbose, ReadParameter
IMPLICIT NONE
DOUBLE PRECISION, ALLOCATABLE, DIMENSION(:,:,:) :: Q
! -- CODE: --
CALL ReadParameter()
! Initialize
CALL InitializePararealOpenMP_Pipe(nu, Nx, Ny, Nz)
! Load initial data
ALLOCATE(Q(-2:Nx+3,-2:Ny+3,-2:Nz+3))
OPEN(unit=20, FILE='q0.dat', ACTION='read', STATUS='old')
READ(20, '(F35.25)') Q
CLOSE(20)
CALL PararealOpenMP_Pipe(Q, Tend, N_fine, N_coarse, Niter, dx, dy, dz, do_io, be_verbose)
! Finalize
CALL FinalizePararealOpenMP_Pipe;
END PROGRAM run_parareal_openmp
|
\documentclass[11pt]{ltxdoc}
\usepackage{color}
\usepackage{xspace,fancyvrb}
\usepackage[neverdecrease]{paralist}
\definecolor{myblue}{rgb}{0.02,0.04,0.48}
\definecolor{lightblue}{rgb}{0.61,.8,.8}
\definecolor{myred}{rgb}{0.65,0.04,0.07}
\usepackage[
bookmarks=true,
colorlinks=true,
linkcolor=myblue,
urlcolor=myblue,
citecolor=myblue,
hyperindex=false,
hyperfootnotes=false,
pdftitle={Polyglossia: An alternative to Babel for XeLaTeX and LuaLaTeX},
pdfauthor={F Charette, A Reutenauer},
pdfkeywords={xetex, xelatex, luatex, lualatex, multilingual, babel, hyphenation}
]{hyperref}
\usepackage{metalogo}
\let\XeTeX\undefined
\let\XeLaTeX\undefined
\usepackage[babelshorthands]{polyglossia}
\usepackage{farsical}
\setmainlanguage[variant=british,ordinalmonthday=false]{english}
\setotherlanguages{arabic,hebrew,syriac,greek,russian,catalan}
\usepackage[protrusion]{microtype}
\newcommand*\Cmd[1]{\cmd{#1}\DescribeMacro{#1}\xspace}
\newcommand*\pkg[1]{\textsf{\color{myblue}#1}}
\newcommand*\file[1]{\texttt{\color{myblue}#1}}
\newcommand*\TR[1]{\textcolor{myred}{#1}}
\newcommand*\TX[1]{\hyperref[#1]{\textcolor{myred}{#1}}}
\newcommand*\TB[1]{\textcolor{myblue}{\bf #1}}
\newcommand*\link[1]{\href{#1}{#1}}
\def\eg{\textit{e.g.,}\xspace}
\def\ie{\textit{i.e.,}\xspace}
\def\ca{\textit{ca.}\@\xspace}
\def\Eg{\textit{E.g.,}\xspace}
\def\Ie{\textit{I.e.,}\xspace}
\def\etc{\@ifnextchar.{\textit{etc}}{\textit{etc.}\@\xspace}}
%% Sidenotes << copied from fontspec.dtx
\newcommand\new[1]{%
\edef\thisversion{#1}%
\ifhmode\unskip~\fi{\ifx\thisversion\fileversion\color{blue}\else\color[gray]{0.5}\fi
$\leftarrow$}%
\marginpar{\centering
\small\ifx\thisversion\fileversion\color{blue}\else\color[gray]{0.5}\fi
\textsf{#1}}}
\newcommand\displaycmd[2]{%
\\\DescribeMacro{#2}\centerline{\cmd{#1}}}
\renewenvironment{itemize}{\begin{compactitem}[\char"2023]}%[{\fontspec{DejaVu Sans}\char"25BB}]}%
{\end{compactitem}}
\renewenvironment{enumerate}{\begin{compactenum}}{\end{compactenum}}
%% fontspec declarations:
\setmainfont{Linux Libertine O}
\setsansfont{Linux Biolinum O}
\setmonofont[Scale=MatchLowercase]{DejaVu Sans Mono}
\newfontfamily\arabicfont[Script=Arabic]{Amiri}
\newfontfamily\syriacfont[Script=Syriac]{Serto Jerusalem}
\newfontfamily\hebrewfont[Script=Hebrew]{Ezra SIL}
\linespread{1.05}
\frenchspacing
\EnableCrossrefs
\CodelineIndex
\RecordChanges
% COMMENT THE NEXT LINE TO INCLUDE THE CODE
\AtBeginDocument{\OnlyDescription}
\begin{document}
\hyphenation{Kha-li-ghi}
\GetFileInfo{polyglossia.sty}
\title{\textcolor{lightblue}{\Huge\fontspec[LetterSpace=40]{GFS Ambrosia} Πολυγλωσσια}
\\[16pt]
\color{myblue}Polyglossia: An Alternative to Babel for \XeLaTeX\ and \LuaLaTeX}
\author{\scshape\color{myblue}François Charette\\\color{myblue}Current maintainer: \scshape Arthur Reutenauer}
\date{\filedate \qquad \fileversion\\
\footnotesize (\textsc{pdf} file generated on \today)}
\maketitle
\tableofcontents
\DeleteShortVerb{\|}
\MakeShortVerb{\¦}
%\begin{abstract}
%Blablabla
%\end{abstract}
\section{Introduction}
Polyglossia is a package for facilitating multilingual typesetting with
\XeLaTeX\ and (at an early stage) \LuaLaTeX. Basically, it
can be used as an alternative to \pkg{babel} for performing the following
tasks automatically:
\begin{enumerate}
\item Loading the appropriate hyphenation patterns.
\item Setting the script and language tags of the current font (if possible and
available), via the package \pkg{fontspec}.
\item Switching to a font assigned by the user to a particular script or language.
\item Adjusting some typographical conventions according to the current language
(such as afterindent, frenchindent, spaces before or after punctuation marks,
etc.).
\item Redefining all document strings (like “chapter”, “figure”, “bibliography”).
\item Adapting the formatting of dates (for non-Gregorian calendars via external
packages bundled with polyglossia: currently the Hebrew, Islamic and Farsi
calendars are supported).
\item For languages that have their own numbering system, modifying the formatting
of numbers appropriately (this also includes redefining the alphabetic sequence
for non-Latin alphabets).\footnote{ %
For the Arabic script this is now done by the bundled package \pkg{arabicnumbers}.}
\item Ensuring proper directionality if the document contains languages
that are written from right to left (via the package \pkg{bidi},
available separately).
\end{enumerate}
Several features of \pkg{babel} that do not make sense in the \XeTeX\ world (like font
encodings, shorthands, etc.) are not supported.
Generally speaking, \pkg{polyglossia} aims to remain as compatible as possible
with the fundamental features of \pkg{babel} while being cleaner, light-weight,
and modern. The package \pkg{antomega} has been very beneficial in our attempt to
reach this objective.
\paragraph{Requirements:} The current version of \pkg{polyglossia} makes use of some convenient
macros defined in the \pkg{etoolbox} package by Philipp Lehmann. Being designed
for \XeLaTeX\ and \LuaLaTeX, it obviously also relies on \pkg{fontspec} by Will
Robertson. For languages written from right to left, it needs the package \pkg{bidi}
by Vafa Khalighi (\textarabic{وفا خليقي}). Polyglossia also bundles three packages for calendrical
computations (\pkg{hebrewcal}, \pkg{hijrical}, and \pkg{farsical}).
\section{Loading language definition files}
\subsection{The recommended way}
You can determine the default language by means of the command:
\displaycmd{\setdefaultlanguage[⟨options⟩]\{lang\}}{\setdefaultlanguage}
(or equivalently \Cmd\setmainlanguage).
Secondary languages can be loaded with
\displaycmd{\setotherlanguage[⟨options⟩]\{lang\}.}{\setotherlanguage}
These commands have the advantage of being explicit and of allowing you to set
language-specific options.\footnote{ %
More on language-specific options below.}
It is also possible to load a series of secondary languages at once using
\displaycmd{\setotherlanguages\{lang1,lang2,lang3,…\}.}{\setotherlanguages}
Language-specific options can be set or changed at any time by means of
\displaycmd{\setkeys\{⟨lang⟩\}\{opt1=value1,opt2=value2,…\}.}{\setkeys}
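For instance, a document written mainly in British English with occasional passages
in Greek and Russian could declare its languages as follows (a minimal sketch; adjust
the variants and options to your needs):
\begin{Verbatim}[formatcom=\color{myblue}]
\usepackage{polyglossia}
\setdefaultlanguage[variant=british]{english}
\setotherlanguage[variant=ancient]{greek}
\setotherlanguage{russian}
\end{Verbatim}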
\subsection{The “Babel way” – obsolete}
\new{v1.2.0}
{\color{red}\bfseries Warning}: \pkg{polyglossia} no longer supports loading
language definition files as package options!
%As with \pkg{babel}, \pkg{polyglossia} also allows you to load language definition files
%as package options. In most cases, option \texttt{⟨lang⟩} will load the file
%\file{gloss-⟨lang⟩.ldf}. Note however that the \textit{first} language listed in \\
%\centerline{\cmd{\usepackage[lang1,lang2,…]{polyglossia}}}
%will be the default language for the document, which
%is the opposite convention of \pkg{babel}.
%Note also that this method may not work in some cases, and should be
%considered deprecated.
\subsection{Supported languages}
Table~\ref{tab:lang} lists all languages currently supported.
Those in red have specific options and/or commands
that are explained in section \ref{specific} below.
\begin{table}[h]\centering
\label{tab:lang}
% Produced with tools/insert-language-list.rb -- AR, 2015-07-14
\begin{tabular}{lllll}
\hline
albanian & danish & icelandic & nko & \TX{slovenian}\\
amharic & divehi & interlingua & norsk & spanish \\
\TX{arabic} & \TX{dutch} & irish & nynorsk & swedish \\
armenian & \TX{english} & \TX{italian} & occitan & \TX{syriac} \\
asturian & \TX{esperanto} & kannada & piedmontese & tamil \\
bahasai & estonian & khmer & polish & telugu \\
bahasam & \TX{farsi} & \TX{korean} & portuges & \TX{thai} \\
basque & finnish & \TX{lao} & romanian & tibetan \\
\TX{bengali} & french & \TX{latin} & romansh & turkish \\
brazil[ian] & friulan & latvian & \TX{russian} & turkmen \\
breton & galician & lithuanian & samin & \TX{ukrainian}\\
bulgarian & \TX{german} & \TX{lsorbian} & \TX{sanskrit} & urdu \\
\TX{catalan} & \TX{greek} & \TX{magyar} & scottish & \TX{usorbian} \\
coptic & \TX{hebrew} & malayalam & \TX{serbian} & vietnamese \\
croatian & \TX{hindi} & marathi & slovak & \TX{welsh} \\
czech \\
\hline
\end{tabular}
\caption{Languages currently supported in \pkg{polyglossia}}
\end{table}
\textit{NB:} The support for Amharic\new{v1.0.1} should be considered an experimental attempt to
port the package \pkg{ethiop}.\footnote{ Feedback is welcome.}
Version 1.1.1\new{v1.1.1} added support for Asturian, %\footnote{ Provided by Kevin Godby and Xuacu Saturio.},
Lithuanian, %\footnote{ Provided by Kevin Godby and Paulius Sladkevičius.},
and Urdu. %\footnote{ Provided by Kamal Abdali.}
%
Version 1.2\new{v1.2.0} adds support for Armenian, Occitan, Bengali,
Lao, Malayalam, Marathi, Tamil, Telugu, and Turkmen.\footnote{ %
See acknowledgements at the end for due credit to the various contributors.}
Polyglossia can also be loaded with the option
‘babelshorthands’\new{v1.1.1}, which globally activates \pkg{babel}
shorthands whenever available. Currently shorthands are implemented for
Catalan, Dutch, German, Italian, and Russian: see these respective
languages for details.
Another option (turned off by default) is ‘localmarks’, which
redefines the internal \LaTeX\ macros \cmd\markboth\ and \cmd\markright.
\new{v1.2.0}Note that this was formerly turned on by default, but we
now realize that it causes more problems than it solves. For backwards-compatibility
the opposite option ‘nolocalmarks’ is still available.
There is also the option ‘quiet’ which turns off most info messages and some of the warnings
issued by \LaTeX, \pkg{fontspec} and \pkg{polyglossia}.
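For example, to activate the \pkg{babel} shorthands and suppress most of these
messages, one could load the package as follows (just a sketch of one possible
combination of options):
\begin{Verbatim}[formatcom=\color{myblue}]
\usepackage[babelshorthands,quiet]{polyglossia}
\end{Verbatim}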
\section{Language-switching commands}
Whenever a language definition file \file{gloss-⟨lang⟩.ldf} is loaded,
the command \cmd{\text⟨lang⟩[⟨options⟩]\{…\}} \DescribeMacro{\text⟨lang⟩}
becomes available for short insertions of text in that language.
For example ¦\textrussian{\today}¦ yields \textrussian{\today}.
Longer passages are better set inside the environment ¦⟨lang⟩¦
(again with the possibility of setting language options locally).
\DescribeEnv{⟨lang⟩}
For instance the following allows us to quote the beginning
of Homer’s \textit{Iliad}:
\begin{Verbatim}[formatcom=\color{myblue}]
\begin{greek}[variant=ancient]
μῆνιν ἄειδε θεὰ Πηληϊάδεω Ἀχιλῆος οὐλομένην, ἣ μυρί' Ἀχαιοῖς ἄλγε'
ἔθηκε, πολλὰς δ' ἰφθίμους ψυχὰς Ἄϊδι προί̈αψεν ἡρώων, αὐτοὺς δὲ ἑλώρια
τεῦχε κύνεσσιν οἰωνοῖσί τε πᾶσι, Διὸς δ' ἐτελείετο βουλή, ἐξ οὗ δὴ τὰ
πρῶτα διαστήτην ἐρίσαντε Ἀτρεί̈δης τε ἄναξ ἀνδρῶν καὶ δῖος Ἀχιλλεύς.
\end{greek}
\end{Verbatim}
\begin{greek}[variant=ancient]
μῆνιν ἄειδε θεὰ Πηληϊάδεω Ἀχιλῆος οὐλομένην, ἣ μυρί' Ἀχαιοῖς ἄλγε' ἔθηκε,
πολλὰς δ' ἰφθίμους ψυχὰς Ἄϊδι προί̈αψεν ἡρώων, αὐτοὺς δὲ ἑλώρια τεῦχε κύνεσσιν
οἰωνοῖσί τε πᾶσι, Διὸς δ' ἐτελείετο βουλή, ἐξ οὗ δὴ τὰ πρῶτα διαστήτην ἐρίσαντε
Ἀτρεί̈δης τε ἄναξ ἀνδρῶν καὶ δῖος Ἀχιλλεύς.
\end{greek}
\bigskip
Note that for Arabic one cannot use the environment ¦arabic¦,
as \cmd\arabic\ is defined internally by \LaTeX. In this case
we need to use the environment ¦Arabic¦ instead\DescribeEnv{Arabic}.
\subsection{Other commands}
The following commands are probably of lesser interest to the end user, but
ought to be mentioned here.
\begin{itemize}
\item \Cmd\selectbackgroundlanguage: this selects the global font setup and
the numbering definitions for the default language.
\item \Cmd\resetdefaultlanguage\ (experimental):
completely switches the default language
to another one in the middle of a document: \textit{this may have adverse effects}!
\item \Cmd\normalfontlatin: in an environment where \cmd\normalfont\ has been redefined
to a non-latin script, this will call the font defined with \cmd\setmainfont\ etc.
Likewise it is possible to use \Cmd\rmfamilylatin, \Cmd\sffamilylatin,
and \Cmd\ttfamilylatin.
\item Some macros defined in \pkg{babel}’s \file{hyphen.cfg} (and thus usually
compiled into the \XeLaTeX\ and \LuaLaTeX\ format) are redefined, but keep a similar
behaviour, namely \Cmd\selectlanguage, \Cmd\foreignlanguage,
and the environment ¦otherlanguage¦\DescribeEnv{otherlanguage}.
\end{itemize}
%
Since the \XeLaTeX\ and \LuaLaTeX\ format incorporate \pkg{babel}’s \file{hyphen.cfg},
the low-level commands for hyphenation and language switching
defined there are also accessible.
\section{Font setup}
With polyglossia it is possible to associate a specific font with any script or language
that occurs in the document. That font should always be defined as
¦\⟨script⟩font¦\ or ¦\⟨language⟩font¦.
For instance, if the default font defined by \cmd\setmainfont\
does not support Greek, then one can define the font used to display Greek with:\\
\centerline{ \cmd\newfontfamily\cmd{\greekfont[Script=Greek,⟨…⟩]\{⟨font⟩\}}. }
Note that polyglossia will use the font thus defined as is.
For instance, if ¦\arabicfont¦ is explicitly defined, then one should take care of
including the option ¦Script=Arabic¦ in that definition.
See the \pkg{fontspec} documentation for more information.
If a specific sans or monospace font is needed for a particular script or language,
it can be defined by means of \new{v1.2.0}
¦\⟨script⟩fontsf¦\ or ¦\⟨language⟩fontsf¦ and ¦\⟨script⟩fonttt¦\ or ¦\⟨language⟩fonttt¦, respectively.
Whenever a new language is activated, \pkg{polyglossia} will first check whether
a font has been defined for that language or – for languages in non-Latin scripts –
for the script it uses. If it is not defined, it will use the currently active font
and – in the case of OpenType fonts – will attempt to turn on the appropriate
OpenType tags for the script and language used, in case these are available in
the font, by means of \pkg{fontspec}’s \cmd\addfontfeature. If the current font
does not appear to support the script of that language, an error message is
displayed.
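For instance, a setup for a document mixing Latin, Greek and Arabic scripts could look as
follows (a sketch: the font names are mere placeholders for fonts installed on your system):
\begin{verbatim}
\setmainfont{Linux Libertine O}
\newfontfamily\greekfont[Script=Greek]{GFS Porson}
\newfontfamily\arabicfont[Script=Arabic]{Amiri}
\end{verbatim}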
\section{Hyphenation disabling}
In some very specific contexts (such as music score creation), \TeX{} hyphenation
is something to avoid as it may cause trouble. \pkg{polyglossia} provides two
functions for this: \cmd\disablehyphenation{} and \cmd\enablehyphenation{}. Note that when
you select a new language, hyphenation remains in the same state (enabled or
disabled) as before. When you re-enable it, the hyphenation patterns of the most
recently selected language are used.
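A minimal usage sketch:
\begin{verbatim}
\disablehyphenation
% material that must not be hyphenated, e.g. lyrics set under a music score
\enablehyphenation
\end{verbatim}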
\section{Language-specific options and commands}\label{specific}
This section gives a list of all languages for which options and end-user commands are defined.
The default value of each option is given in italic.
%\subsection{amharic}\label{amharic}
\subsection{arabic}\label{arabic}
\textbf{Options}:
\begin{itemize}
\item \TB{calendar} = \textit{gregorian} or islamic (= hijri)
\item \TB{locale} = \textit{default},\footnote{ %
For Egypt, Sudan, Yemen and the Gulf states.}
mashriq,\footnote{ %
For Iraq, Syria, Jordan, Lebanon and Palestine.}
libya, algeria, tunisia, morocco, or mauritania.
This setting influences the spelling of the month names for the Gregorian calendar,
as well as the form of the numerals (unless overridden by the following option).
\item \TB{numerals} = \textit{mashriq} or maghrib
(the latter is the default when locale = algeria, tunisia or morocco)
\item \TB{abjadjimnotail} = \textit{false} or true. \new{v1.0.3}
Set this to true if you want the \textit{abjad} form of the number three to be \textarabic{ج} – as in the manuscript tradition – instead of the modern usage \textarabic{ج}.
\end{itemize}
\textbf{Commands}:
\begin{itemize}
\item \Cmd\abjad and \Cmd\abjadmaghribi (see section \ref{abjad})
\item \Cmd\aemph to emphasize text with ¦\overline¦.\new{v1.2.0}
¦\textarabic{\aemph{اب}}¦ yields \textarabic{\aemph{اب}}.
This command is also available for Farsi, Urdu, etc.
\end{itemize}
\subsection{bengali}\label{bengali}\new{v1.2.0}
\textbf{Options}:
\begin{itemize}
\item \TB{numerals} = Western, Bengali or \textit{Devanagari}
\item \TB{changecounternumbering} = true or \textit{false} (use specified
numerals for headings and page numbers)
\end{itemize}
\subsection{catalan}\label{catalan}
\textbf{Options:}
\begin{itemize}
\item \TB{babelshorthands} = \textit{false} or true. \new{v1.1.1}
Activates the shorthands \texttt{"l} and \texttt{"L} to type geminated l’s.
\end{itemize}
\textbf{Commands}:
\begin{itemize}
\item \Cmd{\l.l} and \Cmd{\L.L} behave as in \pkg{babel} to type a geminated l, as in \textit{co\l.laborar}. \new{v1.1.1}
In polyglossia the same can also be achieved with \Cmd{\l·l} and \Cmd{\L·L}.\footnote{ %
NB: · is the glyph U+00B7 MIDDLE DOT.}
\end{itemize}
\subsection{dutch}\label{dutch}
\textbf{Options:}
\begin{itemize}
\item \TB{babelshorthands} = \textit{false} or true: \new{v1.1.1}
if this is turned on, all shorthands defined in \pkg{babel}
for fine-tuning the hyphenation of Dutch words are activated.
\begin{itemize}
\item ¦"-¦ for an explicit hyphen sign, allowing hyphenation in the rest of the word
\item ¦"~¦ for a compound word mark without a breakpoint
\item ¦"|¦ disables the ligature at this position
\item ¦""¦ is like ¦"-¦, but produces no hyphen sign
(for compound words with a hyphen, e.g., ¦foo-""bar¦)
\item ¦"/¦ to enable hyphenation in two words written together but separated by a slash.
\item In addition, the macro \Cmd\- is redefined to allow hyphens in the rest of the word.
\end{itemize}
\end{itemize}
\subsection{english}\label{english}
\textbf{Options}:
\begin{itemize}
\item \TB{variant} = \textit{american} (= us), usmax (same as ‘american’ but with additional hyphenation patterns), british (= uk), australian or newzealand
\item \TB{ordinalmonthday} = true/\textit{false} (true by default only when variant = british)
\end{itemize}
\subsection{esperanto}\label{esperanto}
\textbf{Commands}:
\begin{itemize}
\item \Cmd\hodiau\ and \Cmd\hodiaun are special forms of \cmd\today\ (see the \pkg{babel} documentation)
\end{itemize}
\subsection{farsi}\label{farsi}
\textbf{Options}:
\begin{itemize}
\item \TB{numerals} = western or \textit{eastern}
\item \TB{locale} (not yet implemented)
\item \TB{calendar} (not yet implemented)
\end{itemize}
\textbf{Commands}:
\begin{itemize}
\item \Cmd\abjad (see section \ref{abjad})
\item \Cmd\aemph (see section \ref{arabic}).
\end{itemize}
\subsection{french}\label{french}\new{v1.5.0}
\textbf{Options}:
\begin{itemize}
\item \TB{automaticspacesaroundguillemets} = \textit{true} or false (default value = true: adds a space after the opening guillemets and before the closing guillemets. Such spaces are usually not typed in the source code, and you should let polyglossia add them. However, if your source code already contains such spaces, you can set this option to false.)
\item \TB{frenchfootnote} = \textit{true} or false (default value = true: determines whether the footnote mark starting the footnote is set in normal script followed by a dot, the default, or as a superscript without a dot.)
\end{itemize}
\subsection{german}\label{german}
\textbf{Options}:
\begin{itemize}
\item\TB{variant} = \textit{german}, austrian or swiss.\new{v1.33.4}
Setting variant=austrian or variant=swiss uses some lexical variants.
With spelling=old, variant=swiss furthermore loads specific hyphenation
patterns.
\item \TB{spelling} = \textit{new} (= 1996) or old (= 1901):
indicates whether hyphenation patterns for traditional (1901) or reformed
(1996) orthography should be used. The latter is the default.
\item \TB{latesthyphen} = \textit{false} or true: if this option is set to true,
the latest (experimental) hyphenation patterns ‘(n)german-x-latest’
will be loaded instead of ‘german’ or ‘ngerman’. NB: This is based on
the file \texttt{language.dat} that comes with \TeX Live 2008 and later.
\item\TB{babelshorthands} = \textit{false} or true: \new{v1.0.3}
if this is turned on, all shorthands defined in \pkg{babel}
for fine-tuning the hyphenation of German words are activated.
\begin{itemize}
\item ¦"ck¦ for ¦ck¦ to be hyphenated as ¦k-k¦
\item ¦"ff¦ for ¦ff¦ to be hyphenated as ¦ff-f¦; this is also available for the letters l, m, n, p, r and t
\item ¦"|¦ disables the ligature at this position
\item ¦"-¦ for an explicit hyphen sign, allowing hyphenation in the rest of the word
\item ¦""¦ is like ¦"-¦, but produces no hyphen sign
(for compound words with a hyphen, e.g., ¦foo-""bar¦)
\item ¦"~¦ for a compound word mark without a breakpoint
\item ¦"=¦ for a compound word mark with a breakpoint,
allowing hyphenation in the composing words.
\item ¦"/¦ a slash that allows for a line break and maintains hyphenation points.
\end{itemize}
There are also four shorthands for quotation signs:
\begin{itemize}
\item ¦"`¦ for German left double quotes („)
\item ¦"'¦ for German right double quotes (“)
\item ¦"<¦ for French left double quotes («)
\item ¦">¦ for French right double quotes (»).
\end{itemize}
\item\TB{script} = \textit{latin} or fraktur.\new{v1.2.0}
Setting script=fraktur modifies the captions for typesetting German in Fraktur.
\end{itemize}
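For illustration, here is a sketch of how a few of these shorthands might be used
(it assumes German has been activated with ¦babelshorthands=true¦; the example words are arbitrary):
\begin{verbatim}
\setotherlanguage[babelshorthands=true]{german}
...
\begin{german}
Ein "`Beispiel"' mit Bindestrich"=Wörtern und einer Auf"|lage.
\end{german}
\end{verbatim}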
\subsection{greek}\label{greek}
\textbf{Options}:
\begin{itemize}
\item \TB{variant} = \textit{monotonic} (= mono), polytonic (= poly), or ancient
\item \TB{numerals} = \textit{greek} or arabic
\item \TB{attic} = \textit{false}/true
\end{itemize}
\textbf{Commands}:
\begin{itemize}
\item \Cmd\Greeknumber and \Cmd\greeknumber \ (see section \ref{abjad}).
\item The command \Cmd\atticnumeral (= \Cmd\atticnum) (activated with
the option ¦attic=true¦), displays numbers using the acrophonic
numbering system (defined in the Unicode range
\textsf{U+10140–U+10174}).\footnote{ %
See the documentation of the \pkg{xgreek} package for more details.}
\end{itemize}
\subsection{hebrew}\label{hebrew}
\textbf{Options}:
\begin{itemize}
\item \TB{numerals} = hebrew or \textit{arabic}
\item \TB{calendar} = hebrew or \textit{gregorian}
\end{itemize}
\textbf{Commands}:
\begin{itemize}
\item \Cmd\hebrewnumeral\ (= \Cmd\hebrewalph) (see section \ref{abjad}).
\item \Cmd\aemph (see section \ref{arabic}).
\end{itemize}
\subsection{hindi}\label{hindi}\new{v1.2.0}
\textbf{Options}:
\begin{itemize}
\item \TB{numerals} = Western or \textit{Devanagari}
\end{itemize}
\subsection{italian}\label{italian}
\textbf{Option:}
\begin{itemize}
\item \TB{babelshorthands} = \textit{false} or true. \new{v1.2.0cc}% TODO: check version
Activates the ¦"¦ character as a switch to perform etymological
hyphenation when followed by a letter, or other tasks when followed by
certain analphabetic characters; in particular ¦""¦ is used to enter
double raised open quotes (the Italian keyboard lacks the backtick),
and ¦"<¦ and ¦">¦ to insert open and closed guillemets without any
spacing after the open or before the closed sign. ¦"/¦ is made
equivalent to \slash allowing a linebreak after the slash without any
hyphen sign; ¦"-¦ produces a short rule/hyphen and a discretionary line
break allowing line breaks in the second compound word fragment.
\end{itemize}
\subsection{korean}\label{korean}\new{v1.40.0}
The language definition file includes U.S. English hyphenation patterns in order to
enable hyphenation when writing English within Korean text.
\subsection{lao}\label{lao}\new{v1.2.0}
\textbf{Options}:
\begin{itemize}
\item \TB{numerals} = lao or \textit{arabic}
\end{itemize}
\subsection{latin}\label{latin}
\textbf{Options}:
\begin{itemize}
\item \TB{variant} = classic, medieval or \textit{modern}
\end{itemize}
\subsection{lsorbian and usorbian}\label{lsorbian}\label{usorbian}
\textbf{Commands}:
\begin{itemize}
\item \Cmd\oldtoday : see the \pkg{babel} documentation.
\end{itemize}
\subsection{magyar}\label{magyar}
\textbf{Commands}:
\begin{itemize}
\item \Cmd\ontoday\ (= \Cmd\ondatemagyar): special forms of \cmd\today\
(see the \pkg{babel} documentation).
\end{itemize}
\subsection{russian}\label{russian}
\textbf{Options}:
\begin{itemize}
\item \TB{babelshorthands} = \textit{false} or true. % TODO check and document!
\item \TB{spelling} = \textit{modern} or old (for captions and date only, not for hyphenation)
\end{itemize}
\textbf{Commands}:
\begin{itemize}
\item \Cmd\Asbuk: produces the uppercase Russian alphabet, for
environments such as ¦enumerate¦
\item \Cmd\asbuk: same in lowercase
\end{itemize}
\subsection{sanskrit}\label{sanskrit}
\textbf{Options}:
\begin{itemize}
\item \TB{Script} (default = Devanagari). \new{v1.0.2}
The value is passed to \pkg{fontspec} in cases where ¦\sanskritfont¦ or
¦\devanagarifont¦ are not defined. This can be useful if you typeset
Sanskrit texts in scripts other than Devanagari.
%TODO \item Numerals <<<<
\end{itemize}
\pkg{polyglossia} currently supports the typesetting of Sanskrit in the
following writing systems: Devanagari, Gujarati, Malayalam, Bengali, Kannada,
Telugu, and Latin. Use the ¦Script=¦ option to select the writing system
you want, and enter your input in that script.
\subsection{serbian}\label{serbian}
\textbf{Options}:
\begin{itemize}
\item \TB{script} = \textit{cyrillic} or latin
\end{itemize}
\subsection{slovenian}\label{slovenian}
\textbf{Options}:
\begin{itemize}
\item \TB{localaph} = true or \textit{false}
\end{itemize}
\subsection{syriac}\label{syriac}
\textbf{Options}:
\begin{itemize}
\item \TB{numerals} = \textit{western} (i.e., 1234567890), eastern
(for which the Oriental Arabic numerals are used: \textarabic{١٢٣٤٥٦٧٨٩٠}),
or abjad. \new{v1.0.1}.
\end{itemize}
\textbf{Commands}:
\begin{itemize}
\item \Cmd\abjadsyriac (see section \ref{abjad})
\item \Cmd\aemph (see section \ref{arabic}).
\end{itemize}
\subsection{thai}\label{thai}
\textbf{Options}:
\begin{itemize}
\item \TB{numerals} = thai or \textit{arabic}
\end{itemize}
To insert the word breaks, you need to use an external processor.
See the documentation of \pkg{thai-latex} and the file \file{testthai.tex}
that comes with this package.
\subsection{ukrainian}\label{ukrainian}
\textbf{Commands}:
\begin{itemize}
\item \Cmd\Asbuk: produces the uppercase Ukrainian alphabet, for
environments such as ¦enumerate¦
\item \Cmd\asbuk: same in lowercase
\end{itemize}
\subsection{welsh}\label{welsh}
\textbf{Options}:
\begin{itemize}
\item \TB{date} = long or \textit{short}
\end{itemize}
\section{Modifying or extending captions and date formats}
To redefine internal macros, you can use the command ¦\gappto¦ from the package
\pkg{etoolbox}. For compatibility with \pkg{babel} the command ¦\addto¦ is also available
with the same effect. For instance, to change the ¦\chaptername¦ for language ¦lingua¦,
you can do this:
\begin{verbatim}
\gappto\captionslingua{\renewcommand{\chaptername}{Caput}}
\end{verbatim}
\section{Non-Western decimal digits}
Several scripts have their own versions of the decimal digits commonly called
‘Arabic numerals’. With the appropriate language option set, \pkg{polyglossia}
will automatically convert the output of internal \LaTeX\ counters to their
localized forms, for instance to display page, chapter and section numbers.
In previous versions this conversion was achieved by means of TECkit fontmappings.
If needed they can be activated with the fontspec ¦Mapping¦ option,
using ¦arabicdigits¦, ¦farsidigits¦ or ¦thaidigits¦.
For instance if \cmd\arabicfont\ is defined with the option ¦Mapping=arabicdigits¦,
then by typing ¦\textarabic{2010}¦ one will obtain \textarabic{٢٠١٠}.
With version v1.1.1\new{v1.1.1} the same conversion is achieved directly by
simple \TeX\ macros. This prevents some problems that occur when the value of a
counter has to be written and read from auxiliary files.\footnote{ %
For instance the package \pkg{lastpage} did not work with \pkg{polyglossia} in situations
where the display of counters was redefined to include a font-switching command.}
These macros (currently \Cmd\arabicdigits, \Cmd\farsidigits\ and \Cmd\thaidigits\ are provided)
are also available to the users. For instance in an Arabic environment
¦\arabicdigits{9182/738543-X}¦ yields
\textarabic{\arabicdigits{9182/738543-X}}.
\section{Alphabetic numbering in Greek, Arabic, Hebrew, Syriac and Farsi}\label{abjad}
In certain languages, numbers can be represented
by a special alphanumerical notation.\footnote{ %
See, e.g., \url{http://en.wikipedia.org/wiki/Greek_numerals},
\url{http://en.wikipedia.org/wiki/Abjad_numerals},
and \url{http://en.wikipedia.org/wiki/Hebrew_numerals}.}
%% \url{http://en.wikipedia.org/wiki/Syriac_alphabet}
The Greek numerals are obtained with \Cmd\greeknumeral (or \Cmd\Greeknumeral\ in uppercase).
Example: ¦\greeknumeral{1863}¦ yields \textgreek{\greeknumeral{1863}}.
The Arabic \textit{abjad} numbers can be generated with the command \Cmd\abjad.
Example: ¦\abjad{1863}¦ yields \textarabic{\abjad{1863}}.
In the Maghrib the conventions are somewhat different, and the maghribi forms
of the \textit{abjad} numerals are obtained with the \Cmd\abjadmaghribi\ command.
Example: ¦\abjadmaghribi{1863}¦ yields \textarabic{\abjadmaghribi{1863}}.
The code for Hebrew numerals, which was incorrect in previous versions, was
ported from the implementation in \pkg{babel} with v1.1.1\new{v1.1.1}, and the
user interface is identical to the one in \pkg{babel}.
The commands \Cmd\hebrewnumeral, \Cmd\Hebrewnumeral and \Cmd\Hebrewnumeralfinal\ behave exactly
as they do in \pkg{babel}: the second command prints the number with \textit{gereshayim} before
the last letter, and the latter uses in addition the final forms of Hebrew letters.
Examples:
¦\hebrewnumeral{1750}¦ yields \texthebrew{\hebrewnumeral{1750}},
¦\Hebrewnumeral{1750}¦ yields \texthebrew{\Hebrewnumeral{1750}},
and ¦\Hebrewnumeralfinal{1750}¦ yields \texthebrew{\Hebrewnumeralfinal{1750}}.
Support is also provided for Syriac abjad numerals, which can be generated
with \Cmd\abjadsyriac.\footnote{ %
A fine guide to numerals in Syriac can be found at \link{http://www.garzo.co.uk/documents/syriac-numerals.pdf}.}
Example: ¦\abjadsyriac{463}¦ yields \textsyriac{\abjadsyriac{463}}.
\section{Calendars}
\subsection{Hebrew calendar (hebrewcal.sty)}
The package \file{hebrewcal.sty} is almost a verbatim copy of \file{hebcal.sty}
that comes with \pkg{babel}.
The command \Cmd\Hebrewtoday\ formats the current date in the Hebrew calendar
(depending on the current writing direction this will automatically be set either
in Hebrew script or in Roman transliteration).
\subsection{Islamic calendar (hijrical.sty)}
This package computes dates in the lunar Islamic (Hijra) calendar.\footnote{ %
It makes use of the arithmetical algorithm in chapter 6 of
Reingold \& Dershowitz, \textit{Calendrical Calculations: The Millennium Edition}
(Cambridge University Press, 2001).\label{reingold}}
It provides two macros for the end-user.
The command
\displaycmd{\HijriFromGregorian\{⟨year⟩\}\{⟨month⟩\}\{⟨day⟩\}}{\HijriFromGregorian}
sets the counters ¦Hijriday¦, ¦Hijrimonth¦ and ¦Hijriyear¦.
\Cmd\Hijritoday\ formats the Hijri date for the current day.
This command is now locale-aware\new{v1.1.1}: its output will differ depending on the
currently active language. Presently \pkg{polyglossia}’s language definition files
for Arabic, Farsi, Urdu, Turkish, Bahasa Indonesia and Bahasa Melayu
provide a localized version of ¦\Hijritoday¦.
If the formatting macro for the current language is undefined, the Hijri date will be formatted
in Arabic or in Roman transliteration, depending on the current writing direction.
You can define a new format or redefine one with the command
\displaycmd{\DefineHijriDateFormat\{<lang>\}\{<code>\}.}{\DefineHijriDateFormat}
The command ¦\Hijritoday¦ also accepts an optional argument to add or subtract a correction
(in days) to the date computed by the arithmetical algorithm.\footnote{ %
The Islamic calendar is indeed a purely lunar calendar based on the observation
of the first visibility of the lunar crescent at the beginning of the lunar month,
so there can be differences between different localities, as well as between
civil and religious authorities.}
For instance if ¦\Hijritoday¦ yields the date “7 Rajab 1429” (which is the date that was
displayed on the front page of \href{http://www.aljazeera.net}{aljazeera.net} on
11th July 2008), ¦\Hijritoday[1]¦ would rather print “8 Rajab 1429” (the date
indicated the same day on the site \href{http://www.gulfnews.com}{gulfnews.com}).
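A short usage sketch (assuming, as is usual for \LaTeX\ counters, that the counters set by
¦\HijriFromGregorian¦ can be printed with ¦\theHijriday¦, ¦\theHijrimonth¦ and ¦\theHijriyear¦):
\begin{verbatim}
\HijriFromGregorian{2008}{7}{11}
Hijri date: \theHijriday/\theHijrimonth/\theHijriyear
Today: \Hijritoday\ (with a one-day correction: \Hijritoday[1])
\end{verbatim}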
\subsection{Farsi (jalālī) calendar (farsical.sty)}
This package is an almost verbatim copy of ¦Arabiftoday.sty¦ (in the \pkg{Arabi} package),
itself a slight modification of ¦ftoday.sty¦ in Farsi\TeX.\footnote{ %
One day I may rewrite \pkg{farsical} from scratch using the algorithm in
Reingold \& Dershowitz (ref.~n.~\ref{reingold}).}
Here we have renamed the command \cmd\ftoday\ to
\Cmd\Jalalitoday.
Example: today is \Jalalitoday.
%\section{Varia}
\section{Acknowledgements (by François Charette)}
\pkg{Polyglossia} is notable for being a recycle box of previous contributions
by other people. I take this opportunity to thank the following individuals,
whose splendid work has made my task almost trivial in comparison: Johannes
Braams and the numerous contributors to the \pkg{babel}{} package (in particular
Boris Lavva and others for its Hebrew support), Alexej Kryukov (\pkg{antomega}), Will
Robertson (\pkg{fontspec}), Apostolos Syropoulos (\pkg{xgreek}), Youssef Jabri
(\pkg{arabi}), and Vafa Khalighi (\pkg{xepersian} and \pkg{bidi}).
The work of Mojca Miklavec and Arthur Reutenauer on hyphenation patterns with their package
\pkg{hyph-utf8} is of course invaluable. I should also thank other
individuals for their assistance in supporting specific languages: Yves Codet
(Sanskrit), Zdenek Wagner (Hindi), Mikhal Oren (Hebrew), Sergey Astanin (Russian),
Khaled Hosny (Arabic), Sertaç Ö. Yıldız (Turkish), Kamal Abdali (Urdu),
and several other members of the \XeTeX\ user community, notably Enrico Gregorio, who
has sent me many useful suggestions and corrections and contributed the ¦\newXeTeXintercharclass¦
mechanism in xelatex.ini which is now used by polyglossia.
More recently, Kevin Godby of the \href{http://ubuntu-manual.org}{Ubuntu Manual} project has
contributed very useful feedback, bug hunting and, with the help of translators,
new language definition files for Asturian, Lithuanian, Occitan, Bengali, Malayalam, Marathi, Tamil, and Telugu.
It is particularly heartening to realize that this package is used to typeset a widely-read
document in dozens of different languages!
Support for Lao was also added thanks to Brian Wilson.
I also thank Alan Munn for kindly proof-reading the penultimate version of this documentation.
And of course my gratitude also goes to Jonathan Kew, the formidable author of \XeTeX!
\section{More acknowledgements (by Arthur Reutenauer)}
Many thanks to all the people who have contributed bugfixes and new features to
Polyglossia since I took over. Most of them can be identified from the version
control log on \href{https://github.com/reutenauer/polyglossia}{GitHub} and I won’t try to name them
all (maybe, one day ...); among the ones who sent contributions directly to me
I would like to especially thank Claudio Beccari, the indefatigable champion of
Romance languages, and beyond!
\end{document}
|
//=============================================================================================================
/**
* @file hpifit.cpp
* @author Lorenz Esch <[email protected]>;
* Ruben Dörfel <[email protected]>;
* Matti Hamalainen <[email protected]>
* @since 0.1.0
* @date March, 2017
*
* @section LICENSE
*
* Copyright (C) 2017, Lorenz Esch, Matti Hamalainen. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are permitted provided that
* the following conditions are met:
* * Redistributions of source code must retain the above copyright notice, this list of conditions and the
* following disclaimer.
* * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and
* the following disclaimer in the documentation and/or other materials provided with the distribution.
* * Neither the name of MNE-CPP authors nor the names of its contributors may be used
* to endorse or promote products derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED
* WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
* PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
* INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*
*
 * @brief     HPIFit class definition.
*
*/
//=============================================================================================================
// INCLUDES
//=============================================================================================================
#include "hpifit.h"
#include "hpifitdata.h"
#include "sensorset.h"
#include "hpidataupdater.h"
#include "signalmodel.h"
#include "hpimodelparameters.h"
#include <utils/ioutils.h>
#include <utils/mnemath.h>
#include <iostream>
#include <vector>
#include <numeric>
#include <fiff/fiff_cov.h>
#include <fiff/fiff_dig_point_set.h>
#include <fstream>
#include <fwd/fwd_coil_set.h>
//=============================================================================================================
// EIGEN INCLUDES
//=============================================================================================================
#include <Eigen/Dense>
//=============================================================================================================
// QT INCLUDES
//=============================================================================================================
#include <QVector>
#include <QFuture>
#include <QtConcurrent/QtConcurrent>
//=============================================================================================================
// USED NAMESPACES
//=============================================================================================================
using namespace Eigen;
using namespace INVERSELIB;
using namespace FIFFLIB;
using namespace FWDLIB;
//=============================================================================================================
// DEFINE GLOBAL METHODS
//=============================================================================================================
//=============================================================================================================
// DEFINE MEMBER METHODS
//=============================================================================================================
//=============================================================================================================
HPIFit::HPIFit(const SensorSet& sensorSet)
: m_sensors(sensorSet),
m_signalModel(SignalModel())
{
}
//=============================================================================================================
void HPIFit::checkForUpdate(const SensorSet &sensorSet)
{
if(m_sensors != sensorSet) {
m_sensors = sensorSet;
}
}
//=============================================================================================================
void HPIFit::fit(const MatrixXd& matProjectedData,
const MatrixXd& matProjectors,
const HpiModelParameters& hpiModelParameters,
const MatrixXd& matCoilsHead,
HpiFitResult& hpiFitResult)
{
fit(matProjectedData,matProjectors,hpiModelParameters,matCoilsHead,false,hpiFitResult);
}
//=============================================================================================================
void HPIFit::fit(const MatrixXd& matProjectedData,
const MatrixXd& matProjectors,
const HpiModelParameters& hpiModelParameters,
const MatrixXd& matCoilsHead,
const bool bOrderFrequencies,
HpiFitResult& hpiFitResult)
{
if(matProjectedData.rows() != matProjectors.rows()) {
std::cout<< "HPIFit::fit - Projector and data dimensions do not match. Returning."<<std::endl;
return;
} else if(hpiModelParameters.iNHpiCoils()!= matCoilsHead.rows()) {
std::cout<< "HPIFit::fit - Number of coils and hpi digitizers do not match. Returning."<<std::endl;
return;
} else if(matProjectedData.rows()==0 || matProjectors.rows()==0) {
std::cout<< "HPIFit::fit - No data or Projectors passed. Returning."<<std::endl;
return;
} else if(m_sensors.ncoils() != matProjectedData.rows()) {
std::cout<< "HPIFit::fit - Number of channels in sensors and data do not match. Returning."<<std::endl;
return;
}
const MatrixXd matAmplitudes = computeAmplitudes(matProjectedData,
hpiModelParameters);
const MatrixXd matCoilsSeed = computeSeedPoints(matAmplitudes,
hpiFitResult.devHeadTrans,
hpiFitResult.errorDistances,
matCoilsHead);
CoilParam fittedCoilParams = dipfit(matCoilsSeed,
m_sensors,
matAmplitudes,
hpiModelParameters.iNHpiCoils(),
matProjectors,
500,
1e-9f);
if(bOrderFrequencies) {
const std::vector<int> vecOrder = findCoilOrder(fittedCoilParams.pos,
matCoilsHead);
fittedCoilParams.pos = order(vecOrder,fittedCoilParams.pos);
hpiFitResult.hpiFreqs = order(vecOrder,hpiModelParameters.vecHpiFreqs());
}
hpiFitResult.GoF = computeGoF(fittedCoilParams.dpfiterror);
hpiFitResult.fittedCoils = getFittedPointSet(fittedCoilParams.pos);
hpiFitResult.devHeadTrans = computeDeviceHeadTransformation(fittedCoilParams.pos,
matCoilsHead);
hpiFitResult.errorDistances = computeEstimationError(fittedCoilParams.pos,
matCoilsHead,
hpiFitResult.devHeadTrans);
}
//=============================================================================================================
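// Fits the sinusoidal HPI signal model to the projected data and, for each coil, keeps
// whichever of the sine/cosine amplitude components carries more power.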
Eigen::MatrixXd HPIFit::computeAmplitudes(const Eigen::MatrixXd& matProjectedData,
const HpiModelParameters& hpiModelParameters)
{
// fit model
MatrixXd matTopo = m_signalModel.fitData(hpiModelParameters,matProjectedData);
matTopo.transposeInPlace();
// split into sine and cosine amplitudes
const int iNumCoils = hpiModelParameters.iNHpiCoils();
MatrixXd matAmpSine(matProjectedData.cols(), iNumCoils);
MatrixXd matAmpCosine(matProjectedData.cols(), iNumCoils);
matAmpSine = matTopo.leftCols(iNumCoils);
matAmpCosine = matTopo.middleCols(iNumCoils,iNumCoils);
// Select sine or cosine component depending on their contributions to the amplitudes
for(int j = 0; j < iNumCoils; ++j) {
const float fNS = matAmpSine.col(j).array().square().sum();
const float fNC = matAmpCosine.col(j).array().square().sum();
if(fNC > fNS) {
matAmpSine.col(j) = matAmpCosine.col(j);
}
}
return matAmpSine;
}
//=============================================================================================================
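// Chooses initial dipole positions for the coil fit: if the previous fit was good
// (mean error below 1 cm) the old device-to-head transform is reused, otherwise the seed
// points are placed 3 cm inwards from the sensor with the largest amplitude for each coil.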
Eigen::MatrixXd HPIFit::computeSeedPoints(const Eigen::MatrixXd& matAmplitudes,
const FIFFLIB::FiffCoordTrans& transDevHead,
const QVector<double>& vecError,
const Eigen::MatrixXd& matCoilsHead)
{
const int iNumCoils = matCoilsHead.rows();
MatrixXd matCoilsSeed = MatrixXd::Zero(iNumCoils,3);
const double dError = std::accumulate(vecError.begin(), vecError.end(), .0) / vecError.size();
if(transDevHead.trans != MatrixXd::Identity(4,4).cast<float>() && dError < 0.010) {
// if good last fit, use old trafo
matCoilsSeed = transDevHead.apply_inverse_trans(matCoilsHead.cast<float>()).cast<double>();
} else {
// if not, find max amplitudes in channels
VectorXi vecChIdcs(iNumCoils);
for (int j = 0; j < iNumCoils; j++) {
int iChIdx = 0;
VectorXd::Index indMax;
matAmplitudes.col(j).maxCoeff(&indMax);
if(indMax < m_sensors.ncoils()) {
iChIdx = indMax;
}
vecChIdcs(j) = iChIdx;
}
// and go 3 cm inwards from max channels
for (int j = 0; j < vecChIdcs.rows(); ++j) {
if(vecChIdcs(j) < m_sensors.ncoils()) {
Vector3d r0 = m_sensors.r0(vecChIdcs(j));
Vector3d ez = m_sensors.ez(vecChIdcs(j));
matCoilsSeed.row(j) = (-1 * ez * 0.03 + r0);
}
}
}
return matCoilsSeed;
}
//=============================================================================================================
CoilParam HPIFit::dipfit(const MatrixXd matCoilsSeed,
const SensorSet& sensors,
const MatrixXd& matData,
const int iNumCoils,
const MatrixXd& matProjectors,
const int iMaxIterations,
const float fAbortError)
{
//Do this in concurrent mode
//Generate QList structure which can be handled by the QConcurrent framework
QList<HPIFitData> lCoilData;
for(qint32 i = 0; i < iNumCoils; ++i) {
HPIFitData coilData;
coilData.m_coilPos = matCoilsSeed.row(i);
coilData.m_sensorData = matData.col(i);
coilData.m_sensors = sensors;
coilData.m_matProjector = matProjectors;
coilData.m_iMaxIterations = iMaxIterations;
coilData.m_fAbortError = fAbortError;
lCoilData.append(coilData);
}
//Do the concurrent filtering
CoilParam coil(iNumCoils);
if(!lCoilData.isEmpty()) {
// //Do sequential
// for(int l = 0; l < lCoilData.size(); ++l) {
// doDipfitConcurrent(lCoilData[l]);
// }
//Do concurrent
QFuture<void> future = QtConcurrent::map(lCoilData,
&HPIFitData::doDipfitConcurrent);
future.waitForFinished();
//Transform results to final coil information
for(qint32 i = 0; i < lCoilData.size(); ++i) {
coil.pos.row(i) = lCoilData.at(i).m_coilPos;
coil.mom = lCoilData.at(i).m_errorInfo.moment.transpose();
coil.dpfiterror(i) = lCoilData.at(i).m_errorInfo.error;
coil.dpfitnumitr(i) = lCoilData.at(i).m_errorInfo.numIterations;
//std::cout<<std::endl<< "HPIFit::dipfit - Itr steps for coil " << i << " =" <<coil.dpfitnumitr(i);
}
}
return coil;
}
//=============================================================================================================
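// Tries every permutation of the fitted coil positions and keeps the ordering whose rigid
// transformation to the digitized head coils yields the smallest registration error (below 1 cm).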
std::vector<int> HPIFit::findCoilOrder(const MatrixXd& matCoilsDev,
const MatrixXd& matCoilsHead)
{
// extract digitized and fitted coils
MatrixXd matCoilTemp = matCoilsDev;
const int iNumCoils = matCoilsDev.rows();
std::vector<int> vecOrder(iNumCoils);
std::iota(vecOrder.begin(), vecOrder.end(), 0);
// maximum 10 mm mean error
const double dErrorMin = 0.010;
double dErrorActual = 0.0;
double dErrorBest = dErrorMin;
MatrixXd matTrans(4,4);
std::vector<int> vecOrderBest = vecOrder;
bool bSuccess = false;
// permutation
do {
for(int i = 0; i < iNumCoils; i++) {
matCoilTemp.row(i) = matCoilsDev.row(vecOrder[i]);
}
matTrans = computeTransformation(matCoilsHead,matCoilTemp);
dErrorActual = objectTrans(matCoilsHead,matCoilTemp,matTrans);
if(dErrorActual < dErrorMin && dErrorActual < dErrorBest) {
// exit
dErrorBest = dErrorActual;
vecOrderBest = vecOrder;
bSuccess = true;
}
} while (std::next_permutation(vecOrder.begin(), vecOrder.end()));
return vecOrderBest;
}
//=============================================================================================================
double HPIFit::objectTrans(const MatrixXd& matHeadCoil,
const MatrixXd& matCoil,
const MatrixXd& matTrans)
{
// Compute the fiducial registration error - the lower, the better.
const int iNumCoils = matHeadCoil.rows();
MatrixXd matTemp = matCoil;
// homogeneous coordinates
matTemp.conservativeResize(matCoil.rows(),matCoil.cols()+1);
matTemp.block(0,3,iNumCoils,1).setOnes();
matTemp.transposeInPlace();
// apply transformation
MatrixXd matTestPos = matTrans * matTemp;
// difference between transformed fitted coils and digitized head coils
MatrixXd matDiff = matTestPos.block(0,0,3,iNumCoils) - matHeadCoil.transpose();
// compute mean registration error
double dError = matDiff.colwise().norm().mean();
return dError;
}
//=============================================================================================================
Eigen::MatrixXd HPIFit::order(const std::vector<int>& vecOrder,
const Eigen::MatrixXd& matToOrder)
{
const int iNumCoils = vecOrder.size();
MatrixXd matToOrderTemp = matToOrder;
for(int i = 0; i < iNumCoils; i++) {
matToOrderTemp.row(i) = matToOrder.row(vecOrder[i]);
}
return matToOrderTemp;
}
//=============================================================================================================
QVector<int> HPIFit::order(const std::vector<int>& vecOrder,
const QVector<int>& vecToOrder)
{
const int iNumCoils = vecOrder.size();
QVector<int> vecToOrderTemp = vecToOrder;
for(int i = 0; i < iNumCoils; i++) {
vecToOrderTemp[i] = vecToOrder[vecOrder[i]];
}
return vecToOrderTemp;
}
//=============================================================================================================
Eigen::VectorXd HPIFit::computeGoF(const Eigen::VectorXd& vecDipFitError)
{
VectorXd vecGoF(vecDipFitError.size());
for(int i = 0; i < vecDipFitError.size(); ++i) {
vecGoF(i) = 1 - vecDipFitError(i);
}
return vecGoF;
}
//=============================================================================================================
FIFFLIB::FiffCoordTrans HPIFit::computeDeviceHeadTransformation(const Eigen::MatrixXd& matCoilsDev,
const Eigen::MatrixXd& matCoilsHead)
{
const MatrixXd matTrans = computeTransformation(matCoilsHead,matCoilsDev);
return FiffCoordTrans::make(1,4,matTrans.cast<float>(),true);
}
//=============================================================================================================
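// Estimates the rigid device-to-head transformation by iterative alignment: in each of the
// 15 passes the point sets are centred on each other, the rotation is taken from an SVD of
// their cross-covariance (Kabsch-style, with a reflection check), and the resulting
// rotation/translation is composed into the accumulated 4x4 transformation matrix.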
Eigen::Matrix4d HPIFit::computeTransformation(Eigen::MatrixXd matNH, MatrixXd matBT)
{
MatrixXd matXdiff, matYdiff, matZdiff, matC, matQ;
Matrix4d matTransFinal = Matrix4d::Identity(4,4);
Matrix4d matRot = Matrix4d::Zero(4,4);
Matrix4d matTrans = Matrix4d::Identity(4,4);
double dMeanX,dMeanY,dMeanZ,dNormf;
for(int i = 0; i < 15; ++i) {
// Calculate mean translation for all points -> centroid of both data sets
matXdiff = matNH.col(0) - matBT.col(0);
matYdiff = matNH.col(1) - matBT.col(1);
matZdiff = matNH.col(2) - matBT.col(2);
dMeanX = matXdiff.mean();
dMeanY = matYdiff.mean();
dMeanZ = matZdiff.mean();
// Apply translation -> bring both data sets to the same center location
for (int j = 0; j < matBT.rows(); ++j) {
matBT(j,0) = matBT(j,0) + dMeanX;
matBT(j,1) = matBT(j,1) + dMeanY;
matBT(j,2) = matBT(j,2) + dMeanZ;
}
// Estimate rotation component
matC = matBT.transpose() * matNH;
JacobiSVD< MatrixXd > svd(matC ,Eigen::ComputeThinU | ComputeThinV);
matQ = svd.matrixU() * svd.matrixV().transpose();
//Handle special reflection case
if(matQ.determinant() < 0) {
matQ(0,2) = matQ(0,2) * -1;
matQ(1,2) = matQ(1,2) * -1;
matQ(2,2) = matQ(2,2) * -1;
}
// Apply rotation on translated points
matBT = matBT * matQ;
// Calculate GOF
dNormf = (matNH.transpose()-matBT.transpose()).norm();
// Store rotation part to transformation matrix
matRot(3,3) = 1;
for(int j = 0; j < 3; ++j) {
for(int k = 0; k < 3; ++k) {
matRot(j,k) = matQ(k,j);
}
}
// Store translation part to transformation matrix
matTrans(0,3) = dMeanX;
matTrans(1,3) = dMeanY;
matTrans(2,3) = dMeanZ;
// Save rotation and translation to the final matrix for the next iteration step
// This is safe to do since we modify one of the input point sets (matBT) in place
// ToDo: Replace this for loop with a least square solution process
matTransFinal = matRot * matTrans * matTransFinal;
}
return matTransFinal;
}
//=============================================================================================================
QVector<double> HPIFit::computeEstimationError(const Eigen::MatrixXd& matCoilsDev,
const Eigen::MatrixXd& matCoilsHead,
const FIFFLIB::FiffCoordTrans& transDevHead)
{
//Calculate Error
MatrixXd matTemp = matCoilsDev;
MatrixXd matTestPos = transDevHead.apply_trans(matTemp.cast<float>()).cast<double>();
MatrixXd matDiffPos = matTestPos - matCoilsHead;
// compute error
int iNumCoils = matCoilsDev.rows();
QVector<double> vecError(iNumCoils);
for(int i = 0; i < matDiffPos.rows(); ++i) {
vecError[i] = matDiffPos.row(i).norm();
}
return vecError;
}
//=============================================================================================================
FIFFLIB::FiffDigPointSet HPIFit::getFittedPointSet(const Eigen::MatrixXd& matCoilsDev)
{
FiffDigPointSet fittedPointSet;
const int iNumCoils = matCoilsDev.rows();
for(int i = 0; i < iNumCoils; ++i) {
FiffDigPoint digPoint;
digPoint.kind = FIFFV_POINT_EEG; //Store as EEG so they have a different color
digPoint.ident = i;
digPoint.r[0] = matCoilsDev(i,0);
digPoint.r[1] = matCoilsDev(i,1);
digPoint.r[2] = matCoilsDev(i,2);
fittedPointSet << digPoint;
}
return fittedPointSet;
}
//=============================================================================================================
void HPIFit::storeHeadPosition(float fTime,
const Eigen::MatrixXf& transDevHead,
Eigen::MatrixXd& matPosition,
const Eigen::VectorXd& vecGoF,
const QVector<double>& vecError)
{
Matrix3f matRot = transDevHead.block(0,0,3,3);
Eigen::Quaternionf quatHPI(matRot);
double dError = std::accumulate(vecError.begin(), vecError.end(), .0) / vecError.size(); // HPI estimation Error
matPosition.conservativeResize(matPosition.rows()+1, 10);
matPosition(matPosition.rows()-1,0) = fTime;
matPosition(matPosition.rows()-1,1) = quatHPI.x();
matPosition(matPosition.rows()-1,2) = quatHPI.y();
matPosition(matPosition.rows()-1,3) = quatHPI.z();
matPosition(matPosition.rows()-1,4) = transDevHead(0,3);
matPosition(matPosition.rows()-1,5) = transDevHead(1,3);
matPosition(matPosition.rows()-1,6) = transDevHead(2,3);
matPosition(matPosition.rows()-1,7) = vecGoF.mean();
matPosition(matPosition.rows()-1,8) = dError;
matPosition(matPosition.rows()-1,9) = 0;
}
|
# Teaching Physics to an AI
In this Notebook, I will run simple physics simulations, and then show how neural networks can be used to "learn" or predict future states in the simulation.
```python
import time
import numpy as np
from scipy.integrate import solve_ivp
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import pylab as py
from matplotlib.animation import FuncAnimation
from matplotlib import animation, rc
from IPython.display import HTML, Image
%config InlineBackend.figure_format = 'retina'
```
## Double Pendulum Equations of Motion
The double pendulum consists of two point masses $m_1$ and $m_2$ attached to rigid, massless rods of lengths $L_1$ and $L_2$. Let $\theta_1$, $\theta_2$ be the angles of the two rods from the vertical and $\omega_1$, $\omega_2$ the corresponding angular velocities, and write $c = \cos(\theta_1-\theta_2)$, $s = \sin(\theta_1-\theta_2)$. The equations of motion are second order in $\theta_1$ and $\theta_2$.
### Computational Solution
Scipy's ODE solver can solve any system of first-order ODEs, so we rewrite the two second-order equations as a system of four first-order ODEs in the state vector $u = (\theta_1, \omega_1, \theta_2, \omega_2)$:
$$
\begin{align}
\dot\theta_1 &= \omega_1 \\
\dot\omega_1 &= \frac{m_2 g \sin\theta_2\, c - m_2 s\,(L_1 c\,\omega_1^2 + L_2 \omega_2^2) - (m_1+m_2)\, g \sin\theta_1}{L_1 (m_1 + m_2 s^2)} \\
\dot\theta_2 &= \omega_2 \\
\dot\omega_2 &= \frac{(m_1+m_2)(L_1 \omega_1^2 s - g\sin\theta_2 + g\sin\theta_1\, c) + m_2 L_2 \omega_2^2 s\, c}{L_2 (m_1 + m_2 s^2)}
\end{align}
$$
Now let's code this up in Python.
```python
Image("double-pendulum.png")
```
```python
m1 = 2 # mass of pendulum 1 (in kg)
m2 = 1 # mass of pendulum 2 (in kg)
L1 = 1.4 # length of pendulum 1 (in meter)
L2 = 1 # length of pendulum 2 (in meter)
g = 9.8 # gravitational acceleration constant (m/s^2)
u0 = [-np.pi/2.2, 0, np.pi/1.8, 0] # initial conditions.
# u[0] = angle of the first pendulum
# u[1] = angular velocity of the first pendulum
# u[2] = angle of the second pendulum
# u[3] = angular velocity of the second pendulum
tfinal = 25.0 # Final time. Simulation time = 0 to tfinal.
Nt = 751
t = np.linspace(0, tfinal, Nt)
```
```python
# Differential equations describing the system
def double_pendulum(u,t,m1,m2,L1,L2,g):
# du = derivatives
# u = variables
# p = parameters
# t = time variable
du = np.zeros(4)
c = np.cos(u[0]-u[2]) # intermediate variables
s = np.sin(u[0]-u[2]) # intermediate variables
du[0] = u[1] # d(theta 1)
du[1] = ( m2*g*np.sin(u[2])*c - m2*s*(L1*c*u[1]**2 + L2*u[3]**2) - (m1+m2)*g*np.sin(u[0]) ) /( L1 *(m1+m2*s**2) )
du[2] = u[3] # d(theta 2)
du[3] = ((m1+m2)*(L1*u[1]**2*s - g*np.sin(u[2]) + g*np.sin(u[0])*c) + m2*L2*u[3]**2*s*c) / (L2 * (m1 + m2*s**2))
return du
```
```python
sol = odeint(double_pendulum, u0, t, args=(m1,m2,L1,L2,g))
#sol[:,0] = u1 = Θ_1
#sol[:,1] = u2 = ω_1
#sol[:,2] = u3 = Θ_2
#sol[:,3] = u4 = ω_2
u0 = sol[:,0] # theta_1
u1 = sol[:,1] # omega 1
u2 = sol[:,2] # theta_2
u3 = sol[:,3] # omega_2
# Mapping from polar to Cartesian
x1 = L1*np.sin(u0); # First Pendulum
y1 = -L1*np.cos(u0);
x2 = x1 + L2*np.sin(u2); # Second Pendulum
y2 = y1 - L2*np.cos(u2);
py.close('all')
py.figure(1)
#py.plot(t,x1)
#py.plot(t,y1)
py.plot(x1,y1,'.',color = '#0077BE',label = 'mass 1')
py.plot(x2,y2,'.',color = '#f66338',label = 'mass 2' )
py.legend()
py.xlabel('x (m)')
py.ylabel('y (m)')
#py.figure(2)
#py.plot(t,x2)
#py.plot(t,y2)
fig = plt.figure()
ax = plt.axes(xlim=(-L1-L2-0.5, L1+L2+0.5), ylim=(-2.5, 1.5))
#line, = ax.plot([], [], lw=2,,markersize = 9, markerfacecolor = "#FDB813",markeredgecolor ="#FD7813")
line1, = ax.plot([], [], 'o-',color = '#d2eeff',markersize = 12, markerfacecolor = '#0077BE',lw=2, markevery=10000, markeredgecolor = 'k') # marker + trail for mass 1
line2, = ax.plot([], [], 'o-',color = '#ffebd8',markersize = 12, markerfacecolor = '#f66338',lw=2, markevery=10000, markeredgecolor = 'k') # marker + trail for mass 2
line3, = ax.plot([], [], color='k', linestyle='-', linewidth=2)
line4, = ax.plot([], [], color='k', linestyle='-', linewidth=2)
line5, = ax.plot([], [], 'o', color='k', markersize = 10)
time_template = 'Time = %.1f s'
time_string = ax.text(0.05, 0.9, '', transform=ax.transAxes)
```
```python
ax.get_xaxis().set_ticks([]) # enable this to hide x axis ticks
ax.get_yaxis().set_ticks([]) # enable this to hide y axis ticks
# initialization function: plot the background of each frame
def init():
line1.set_data([], [])
line2.set_data([], [])
line3.set_data([], [])
line4.set_data([], [])
line5.set_data([], [])
time_string.set_text('')
return line3,line4, line5, line1, line2, time_string
```
```python
# animation function. This is called sequentially
def animate(i):
# Motion trail sizes. Defined in terms of indices. Length will vary with the time step, dt. E.g. 5 indices will span a lower distance if the time step is reduced.
trail1 = 6 # length of motion trail of weight 1
trail2 = 8 # length of motion trail of weight 2
dt = t[2]-t[1] # time step
line1.set_data(x1[i:max(1,i-trail1):-1], y1[i:max(1,i-trail1):-1]) # marker + line of first weight
line2.set_data(x2[i:max(1,i-trail2):-1], y2[i:max(1,i-trail2):-1]) # marker + line of the second weight
line3.set_data([x1[i], x2[i]], [y1[i], y2[i]]) # line connecting weight 2 to weight 1
line4.set_data([x1[i], 0], [y1[i],0]) # line connecting origin to weight 1
line5.set_data([0, 0], [0, 0])
time_string.set_text(time_template % (i*dt))
return line3, line4,line5,line1, line2, time_string
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=Nt, interval=1000*(t[2]-t[1])*0.8, blit=True)
```
```python
from IPython.display import HTML
HTML(anim.to_html5_video())
```
### Neural Network Prediction
Now let's show a neural network part of the trajectory data from this double pendulum and have it try to predict the rest.
```python
```
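A minimal sketch of one way to do this (not from the original notebook: it assumes scikit-learn is installed, and the network size and train/test split are arbitrary choices) is to train a small multilayer perceptron that maps the state at one time step to the state at the next, then roll it forward in time and compare with the true trajectory:
```python
# Minimal sketch -- assumes scikit-learn is available; hyperparameters are arbitrary.
from sklearn.neural_network import MLPRegressor

# Build (state at step i) -> (state at step i+1) training pairs from the ODE solution.
X = sol[:-1]   # states u(t_i)
Y = sol[1:]    # states u(t_{i+1})

n_train = int(0.75 * len(X))   # train on the first 75% of the trajectory
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X[:n_train], Y[:n_train])

# Roll the model forward from the last training state and compare with the truth.
pred = [X[n_train]]
for _ in range(len(X) - n_train - 1):
    pred.append(model.predict(pred[-1].reshape(1, -1))[0])
pred = np.array(pred)

plt.figure()
plt.plot(t[n_train:-1], sol[n_train:-1, 0], label='true $\\theta_1$')
plt.plot(t[n_train:-1], pred[:, 0], '--', label='predicted $\\theta_1$')
plt.xlabel('t (s)')
plt.ylabel('$\\theta_1$ (rad)')
plt.legend()
plt.show()
```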
|
# types: id, num, command (e.g., \frac{..}{..}), function (e.g., \sin(x)),
# mtable (e.g., \begin{pmatrix}...\end{pmatrix})
#
# `f(x,y)` is by default parsed as the product of a variable f and a row vector
# (x,y), instead of as a function on x,y. On the other hand, the latter case
# is the parsing result of f\apply(x,y) (this hardly makes a visual difference
# to human eyes, but the translated content MathML strings indeed differ).
# For the same reason, sin(x) is parsed as the product of an identifier named
# "sin" and (x).
#
# To make parsing simpler, we enforce the following conditions:
#
# 1. mtable-type objects (array, matrix, etc) can not be nested.
# 2. \text objects can not be nested.
#
# These conditions make sense in the context of terminal rendering (for quick
# checking of results, not for publishing a full manuscript).
#
# Supported mtable-type tags: array, cases, gathered, aligned, split, matrix,
# pmatrix, bmatrix, vmatrix, Vmatrix
mutable struct CharStream
s::String
state::Int
cur::Union{Char, Nothing}
peek::Union{Char, Nothing}
end
# Upon object creation, one can peek into it by one char, but has not READ a
# char yet (the current char is nothing). To read the a char, explicitly call
# read_char!().
function CharStream(data::String)
cs = CharStream(data, 1, nothing, nothing)
read_char!(cs)
return cs
end
peek_char(cs::CharStream) = cs.peek
# read one char from the stream and update internal states (cur, peek)
# return `cur`
function read_char!(cs::CharStream)
next = iterate(cs.s, cs.state)
if next === nothing
cs.cur, cs.peek = cs.peek, nothing
else
ch, cs.state = next
cs.cur, cs.peek = cs.peek, ch
end
return cs.cur
end
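# Usage sketch (illustration only): walk a short string one character at a time.
#   cs = CharStream("ab")      # after construction: cur == nothing, peek == 'a'
#   read_char!(cs)             # returns 'a'; peek_char(cs) is now 'b'
#   read_char!(cs)             # returns 'b'; peek_char(cs) is now nothing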
# assume the next (peeked) position in the char stream points to a numeric
# literal
function read_number!(s::CharStream)
@assert isdigit(s.peek)
num = ""
c = peek_char(s)
has_period = false
while (c !== nothing) && (isdigit(c) || ((c == '.') && !has_period))
num *= read_char!(s)
c == '.' && (has_period = true)
c = peek_char(s)
end
return num
end
# assume the next (peeked) position in the char stream points to '\'
# there is a special case: `\{` and `\{` are indeed not commands,
# but escaped chars that should be converted to <mo>. We should deal
# with this case specially: just return "{" and "}". There are
# other escaped chars, in particular '&', '_', '^'. We do not deal
# with them, i.e., we treat thoses escaped chars as errors.
function read_tex_command!(s::CharStream)
cmd = ""
peek_char(s) == '\\' && read_char!(s)
c = peek_char(s)
c in "!,:; {}|\\" && return string(read_char!(s))
c in "&_^" && error("We do not deal with escaped &, _, ^ ")
c === nothing && error("Can not process \\ at the end of stream")
while c !== nothing && isletter(c)
cmd *= read_char!(s)
c = peek_char(s)
end
isempty(cmd) && error("Can not process TeX command char: $(c)")
return cmd
end
function skip_whitespace!(s::CharStream)
c = peek_char(s)
while c !== nothing && isspace(c)
read_char!(s)
c = peek_char(s)
end
end
# Split enclosing dollar signs from the input and use them to determine if we render the
# formula inline or as a block. We assume `s` can only be enclosed by a single-dollar
# pair: s=`$...$` or a double-dollar pair: s=`$$...$$`, and there is no dollar sign inside
# the enclosing pair. All other cases are errored.
function split_top_element(s::String)
i = firstindex(s)
while isspace(s[i])
i = nextind(s, i)
end
n = lastindex(s)
while isspace(s[n])
n = prevind(s, n)
end
# many boring cases
n < i && return nothing, ""
if n == i
s[i] == '$' && error(raw"Invalid input \"$\"")
return (nothing, s)
end
start_with_D = startswith(s[i:n], "\$")
end_with_D = endswith(s[i:n], "\$")
if (start_with_D && !end_with_D) || (!start_with_D && end_with_D)
error(raw"The $ sign mismatch at the beginning and the end of input")
end
start_with_DD = end_with_DD = false
if n >= i + 2
if startswith(s[i:n], raw"$$$") || endswith(s[i:n], raw"$$$")
error(raw"Invalid input $$$")
end
start_with_DD = startswith(s[i:n], raw"$$")
end_with_DD = endswith(s[i:n], raw"$$")
if (start_with_DD && !end_with_DD) || (!start_with_DD && end_with_DD)
error(raw"$$ mismatch at the beginning and the end of input")
end
end
start_with_DD && return (Token(TK_MATH, "block"), s[i+2:prevind(s, n-1)])
start_with_D && return (Token(TK_MATH, "inline"), s[i+1:prevind(s, n)])
return (nothing, s)
end
# TODO: build a dictionary (read from a unicode char text file) to map LaTeX
# command names to UCS codepoints or MML commands.
# The main function: scan a string to get a stream of token.
# We assume the input data is just one LaTeX block that represents math
# content, not textual content. The input text can either be in the form of
# "$...$" for inline display, or be in the form of "$$...$$" for block display,
# or be bare LaTeX commands that are not wrapped by dollar sign at all. Dollar
# signs inside the input data except at the starting/ending positions are not
# specially treated.
function scan_tex(data::String)
stack = Token[]
# process the top element <math> if "$" is present in the input data
# This element is not pushed into the token stack, the returned `data`
# is stripped off of dollar signs
top, data = split_top_element(data)
s = CharStream(data)
while (ch = peek_char(s)) !== nothing
if isspace(ch)
skip_whitespace!(s)
elseif isletter(ch)
push!(stack, Token(TK_ID, read_char!(s)))
elseif isdigit(ch)
push!(stack, Token(TK_NUM, read_number!(s)))
elseif ch in OPERATOR_IDS
# "+-*/!:=,.\'"
push!(stack, Token(TK_OPID, read_char!(s)))
elseif ch in BRACKETS
# "|<>()[]"
push!(stack, Token(TK_PAREN, read_char!(s)))
elseif ch == '{'
push!(stack, Token(TK_LBRACE, read_char!(s)))
elseif ch == '}'
push!(stack, Token(TK_RBRACE, read_char!(s)))
elseif ch == '&'
push!(stack, Token(TK_AMP, read_char!(s)))
elseif ch == '^'
push!(stack, Token(TK_SUP, read_char!(s)))
elseif ch == '_'
push!(stack, Token(TK_SUB, read_char!(s)))
elseif ch in keys(unicode_to_optoken)
# a large unicode glyph sets that define a lot of
# math operators
push!(stack, unicode_to_optoken[read_char!(s)])
elseif ch == '\\'
# some tex commands are converted to identifiers and
# operator identifiers. Others are real commands
cmd = read_tex_command!(s)
if cmd in keys(command_to_token)
t = command_to_token[cmd]
push!(stack, t)
# Some commands need special treatment:
# TK_BEGIN(TK_END): tokenize proceeding env names
if t.type == TK_BEGIN || t.type == TK_END
# get env name and env attributes if present.
# Environment names are quite simple in semantics:
# each is just an atomic string that represents the
# name of an environment, not a sequence of
# identifiers. If we did not collect environment name
# here, the letters in the name would be scanned into
# individual identifiers and we would have to recover
# the name later in parsing, inefficient.
#
# format: \begin{env-name}[env-attrib]...\end{env-name}
# <env-name> is a sequence of letters and '*'.
# <env-attrib> is a sequence of letters, or numbers,
# or special chars in "=;+-*%/ "
skip_whitespace!(s)
l = read_char!(s)
# unlike \frac, \begin (\end) must be followed by `{`
l != '{' && error("Error! \\begin(\\end) not followed by {")
e, tag = "", ""
while peek_char(s) !== nothing
e = read_char!(s)
e == '}' && break
if !isletter(e) && e != '*'
error("Error! illegal environment name")
end
tag *= e
end
e != '}' && error("Error! \\begin(\\end) tag not end with }")
push!(stack, Token(TK_ENV_NAME, tag))
# TODO: continue to process environment attributes
end
else
error("unknown LaTeX command: $(cmd)")
end
else
# all other chars are treated as TK_IDs
push!(stack, Token(TK_ID, read_char!(s)))
end
end # while (ch = read_char!()) !== nothing
return (top, stack)
end # function scan_tex
# `top` indicates if the stream is enclosed by a single-dollar pair or a
# double-dollar pair
mutable struct TokenStream
tokens::Vector{Token}
top::Union{Token, Nothing}
cur::Int
end
TokenStream() = TokenStream(Token[], nothing, 0)
TokenStream(tokens::Vector{Token}, top::Token) = TokenStream(tokens, top, 1)
function TokenStream(text::String)
top, tokens = scan_tex(text)
return TokenStream(tokens, top, 1)
end
# A shift operation
function get_token!(ts::TokenStream)
if ts.cur <= length(ts.tokens)
t = ts.tokens[ts.cur]
ts.cur += 1
return t
else
return nothing
end
end
function peek(ts::TokenStream)
if ts.cur <= length(ts.tokens)
return ts.tokens[ts.cur]
else
return nothing
end
end
function print_token_stream(ts::TokenStream)
println("top: " * (ts.top === nothing ? "Nothing" : ts.top.val))
for t in ts.tokens
println(token_names[t.type] * ": " * t.val)
end
println()
end
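# Usage sketch (illustration only; assumes the Token/TK_* definitions referenced
# above are available when this file is included):
#   ts = TokenStream(raw"$\frac{a+1}{2}$")
#   print_token_stream(ts)     # prints the top-level display mode and each token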
|
\subsection{Relations and equality}
\subsubsection{Relations}
A special type of predicate is a relation. A relation takes two terms and can also be written in infix notation:
$P(x,y)\Leftrightarrow x\oplus y$
\subsubsection{Equality}
In predicate logic we define the relation of equality.
\(a=b\)
It is defined by the following:
\begin{itemize}
\item Reflexivity: \(x=x\)
\item Symmetry: \(x=y\leftrightarrow y=x\)
\item Transitivity: \(x=y\land y=z \rightarrow x=z\)
\item Substitution for functions: \(x=y\rightarrow f(x)=f(y)\)
\item Substitution for formulae: \(x=y\land P(x)\rightarrow P(y)\)
\end{itemize}
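For instance (a simple illustration), instantiating substitution for functions with \(f(z)=z+2\) yields
\[a=b \rightarrow a+2=b+2.\]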
|
# # Usage Guide
# In this example, we will present the basics of using Ripserer. We start by loading some
# packages.
using Distances
using Plots
using Ripserer
using Random # hide
Random.seed!(1337) # hide
gr() # hide
nothing # hide
# ## Using Ripserer With Point Cloud Data
# Let's start with generating some points, randomly sampled from a noisy circle.
function noisy_circle(n; r=1, noise=0.1)
points = NTuple{2,Float64}[]
for _ in 1:n
θ = 2π * rand()
push!(points, (r * sin(θ) + noise * rand(), r * cos(θ) + noise * rand()))
end
return points
end
circ_100 = noisy_circle(100)
scatter(circ_100; aspect_ratio=1, legend=false, title="Noisy Circle")
# !!! tip "Point-like data types"
# Ripserer can interpret various kinds of data as point clouds. The limitation is that
# the data set should be an `AbstractVector` with elements with the following
# properties:
# * all elements are collections of numbers;
# * all elements have the same length.
# Examples of element types that work are `Tuple`s,
# [`SVector`](https://github.com/JuliaArrays/StaticArrays.jl)s, and
# [`Point`](https://github.com/JuliaGeometry/GeometryBasics.jl)s.
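# For instance, the same circle could be passed as a vector of static vectors.
# The following is only a sketch (it assumes the StaticArrays package is
# installed and is not executed as part of this guide):
#
# ```julia
# using StaticArrays
# circ_sv = [SVector(p) for p in circ_100]
# ```
#
# `circ_sv` could then be passed to `ripserer` in place of `circ_100` below.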
# To compute the Vietoris-Rips persistent homology of this data set, run the
# following.
ripserer(circ_100)
# You can use the `dim_max` argument to set the maximum dimension persistent homology is
# computed in.
result_rips = ripserer(circ_100; dim_max=3)
# The result can be plotted as a persistence diagram or as a barcode.
plot(result_rips)
barcode(result_rips)
plot(plot(result_rips), barcode(result_rips)) # hide
# We can also plot a single diagram or a subset of all diagrams in the same manner. Keep in
# mind that the result is just a vector of [`PersistenceDiagram`](@ref)s. The
# zero-dimensional diagram is found at index 1.
plot(result_rips[2])
barcode(result_rips[2:end]; linewidth=2)
plot(plot(result_rips[2]), barcode(result_rips[2:end]; linewidth=2)) # hide
# Plotting can be further customized using the standard attributes from
# [Plots.jl](http://docs.juliaplots.org/latest/).
plot(result_rips; markeralpha=1, markershape=:star, color=[:red, :blue, :green, :purple])
# ## Changing Filtrations
# By default, calling [`ripserer`](@ref) will compute persistent homology with the
# [`Rips`](@ref) filtration. To use a different filtration, we have two options.
# The first option is to pass the filtration constructor as the first argument. Any keyword
# arguments the filtration accepts can be passed to [`ripserer`](@ref) and they will be
# forwarded to the constructor.
ripserer(EdgeCollapsedRips, circ_100; threshold=1, dim_max=3, metric=Euclidean())
# The second option is to initialize the filtration object first and use that as an argument
# to [`ripserer`](@ref). This can be useful in cases where constructing the filtration takes
# a long time.
collapsed_rips = EdgeCollapsedRips(circ_100; threshold=1, metric=Euclidean())
ripserer(collapsed_rips; dim_max=3)
# ## Distance Matrix Inputs
# In the previous example, we got our result by passing a collection of points to
# [`ripserer`](@ref). Under the hood, [`Rips`](@ref) and [`EdgeCollapsedRips`](@ref)
# actually work with distance matrices. Let's define a distance matrix of the shortest
# paths on a [regular icosahedron](https://en.wikipedia.org/wiki/Regular_icosahedron) graph.
# ```@raw html
# <img src="https://upload.wikimedia.org/wikipedia/commons/8/83/Icosahedron_graph.svg" height="200" width="200">
# ```
icosahedron = [
0 1 2 2 1 2 1 1 2 2 1 3
1 0 3 2 1 1 2 1 2 1 2 2
2 3 0 1 2 2 1 2 1 2 1 1
2 2 1 0 3 2 1 1 2 1 2 1
1 1 2 3 0 1 2 2 1 2 1 2
2 1 2 2 1 0 3 2 1 1 2 1
1 2 1 1 2 3 0 1 2 2 1 2
1 1 2 1 2 2 1 0 3 1 2 2
2 2 1 2 1 1 2 3 0 2 1 1
2 1 2 1 2 1 2 1 2 0 3 1
1 2 1 2 1 2 1 2 1 3 0 2
3 2 1 1 2 1 2 2 1 1 2 0
]
nothing # hide
# To compute the persistent homology, simply feed the distance matrix to [`ripserer`](@ref).
result_icosa = ripserer(icosahedron; dim_max=2)
# ## Thresholding
# In our next example, we will show how to use thresholding to speed up computation. We
# start by defining a sampling function that generates ``n`` points from the square
# ``[-4,4]\times[-4,4]`` with a circular hole of radius 1 in the middle.
function cutout(n)
points = NTuple{2,Float64}[]
while length(points) < n
x, y = (8rand() - 4, 8rand() - 4)
if x^2 + y^2 > 1
push!(points, (x, y))
end
end
return points
end
# We sample 2000 points from this space.
cutout_2000 = cutout(2000)
scatter(cutout_2000; markersize=1, aspect_ratio=1, legend=false, title="Cutout")
# We calculate the persistent homology and time the calculation.
@time result_cut = ripserer(cutout_2000)
nothing # hide
#
plot(result_cut)
# Notice that while there are many 1-dimensional classes, one of them stands out. This class
# represents the hole in the middle of our square. Since the intervals are sorted by
# persistence, we know the last interval in the diagram will be the most persistent.
most_persistent = result_cut[2][end]
# Notice the death time of this interval is around 1.83 and that no intervals occur after
# that time. This means that we could stop computing when we reach this time and the result
# should not change. Let's try it out.
@time result_cut_thresh_2 = ripserer(cutout_2000; threshold=2)
nothing # hide
#
plot(result_cut_thresh_2; title="Persistence Diagram, threshold=2")
# Indeed, the result is exactly the same, but it took less than a third of the time to
# compute.
@assert result_cut_thresh_2 == result_cut # hide
result_cut_thresh_2 == result_cut
# If we pick a threshold that is too low, we still detect the interval, but its death time
# becomes infinite.
@time result_cut_thresh_1 = ripserer(cutout_2000; threshold=1)
nothing # hide
#
result_cut_thresh_1[2][end]
# ## Persistence Diagrams
# The result of a computation is returned as a vector of
# [`PersistenceDiagram`](@ref)s. Let's take a closer look at one of those.
diagram = result_cut[2]
# The diagram is a structure that acts as a vector of
# [`PersistenceInterval`](@ref)s. As such, you can use standard Julia
# functions on the diagram.
# For example, to extract the last three intervals by birth time, you can do
# something like this.
sort(diagram; by=birth, rev=true)[1:3]
# To find the [`persistence`](@ref)s of all the intervals, you can use broadcasting.
persistence.(diagram)
# Unlike regular vectors, a [`PersistenceDiagram`](@ref) has additional metadata attached
# to it. To see all metadata, use
# [`propertynames`](https://docs.julialang.org/en/v1/base/base/#Base.propertynames).
propertynames(diagram)
# You can access the properties with the dot syntax.
diagram.field
# The attributes `dim` and `threshold` are given special treatment and can be extracted
# with appropriately named functions.
dim(diagram), threshold(diagram)
# Now, let's take a closer look at one of the intervals.
interval = diagram[end]
# An interval is very similar to a tuple of two `Float64`s, but also has some metadata
# associated with it.
interval[1], interval[2]
# [`birth`](@ref), [`death`](@ref), [`persistence`](@ref), and [`midlife`](@ref) can be used
# to query commonly used values.
birth(interval), death(interval), persistence(interval), midlife(interval)
# Accessing metadata works in a similar manner as with diagrams.
propertynames(interval)
#
interval.birth_simplex
#
interval.death_simplex
# ## Simplices
# In the previous section, we saw each interval has an associated [`birth_simplex`](@ref)
# and [`death_simplex`](@ref). These values are of the type [`Simplex`](@ref). Let's take a
# closer look at simplices.
simplex = interval.death_simplex
# [`Simplex`](@ref) is an internal data structure that uses some tricks to increase
# efficiency. For example, if we were to
# [`dump`](https://docs.julialang.org/en/v1/base/io-network/#Base.dump) it, we notice the
# vertices are not actually stored in the simplex itself.
dump(simplex)
# To access the vertices, we use [`vertices`](@ref).
vertices(simplex)
# Other useful attributes a simplex has are [`index`](@ref), [`dim`](@ref), and
# [`birth`](@ref).
index(simplex), dim(simplex), birth(simplex)
# A few additional notes on simplex properties.
# * A `D`-dimensional simplex is of type `Simplex{D}` and has `D + 1` vertices.
# * [`vertices`](@ref) are always sorted in descending order.
# * [`index`](@ref) and [`dim`](@ref) can be used to uniquely identify a given simplex.
# * [`birth`](@ref) determines when a simplex is added to a filtration.
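# As a quick sanity check of the first two properties, using the `simplex` from
# above (a minimal check that relies only on `vertices` and `dim`):
length(vertices(simplex)) == dim(simplex) + 1 && issorted(vertices(simplex); rev=true)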
# ## Conclusion
# This concludes the basic usage of Ripserer. For more detailed information, please check
# out the [API](@ref) page, as well as other examples.
|
from __future__ import print_function
import numpy
from pygeo import pyGeo
# ==============================================================================
# Start of Script
# ==============================================================================
naf = 3
airfoil_list = ['naca2412.dat','naca2412.dat','naca2412.dat']
chord = [1.67,1.67,1.18]
x = [0,0,.125*1.18]
y = [0,0,0]
z = [0,2.5,10.58/2]
rot_x = [0,0,0]
rot_y = [0,0,0]
rot_z = [0,0,2]
offset = numpy.zeros((naf,2))
# There are several examples that follow showing many of the different
# combinations of tip/trailing edge options that are available.
# --------- Sharp Trailing Edge / Rounded Tip -------
wing = pyGeo('liftingSurface',
xsections=airfoil_list,
scale=chord, offset=offset, x=x, y=y, z=z,
rotX=rot_x, rotY=rot_y, rotZ=rot_z,
kSpan=2, tip='rounded')
wing.writeTecplot('c172_sharp_te_rounded_tip.dat')
wing.writeIGES('c172_sharp_te_rounded_tip.igs')
# --------- Sharp Trailing Edge / Pinched Tip -------
wing = pyGeo('liftingSurface',
xsections=airfoil_list,
scale=chord, offset=offset, x=x, y=y, z=z,
rotX=rot_x, rotY=rot_y, rotZ=rot_z,
kSpan=2, tip='pinched')
wing.writeTecplot('c172_sharp_te_pinched_tip.dat')
wing.writeIGES('c172_sharp_te_pinched_tip.igs')
# --------- Sharp Trailing Edge / Rounded Tip with Fitting -------
# This option shouldn't be used except to match previously generated
# geometries
wing = pyGeo('liftingSurface',
xsections=airfoil_list, nCtl=29,
scale=chord, offset=offset, x=x, y=y, z=z,
rotX=rot_x, rotY=rot_y, rotZ=rot_z,
kSpan=2, tip='rounded')
wing.writeTecplot('c172_sharp_te_rounded_tip_fitted.dat')
wing.writeIGES('c172_sharp_te_rounded_tip_fitted.igs')
# --------- Blunt Trailing (Flat) / Rounded Tip -------
# This is the normal way of producing blunt TE geometries. The
# thickness of the trailing edge is specified with 'te_height', either
# a constant value or an array of length naf. This is in physical
# units. Alternatively, 'te_height_scaled' can be specified to have a
# scaled thickness. This option is specified as a fraction of initial
# chord, so te_height_scaled=0.002 will give a 0.2% trailing edge
# thickness
wing = pyGeo('liftingSurface',
xsections=airfoil_list,
scale=chord, offset=offset, x=x, y=y, z=z,
rotX=rot_x, rotY=rot_y, rotZ=rot_z,
bluntTe=True, teHeightScaled=0.002,
kSpan=2, tip='rounded')
wing.writeTecplot('c172_blunt_te_rounded_tip.dat')
wing.writeIGES('c172_blunt_te_rounded_tip.igs')
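# --------- Blunt Trailing (Flat) / Rounded Tip, physical thickness -------
# A sketch of the alternative mentioned above: specifying the trailing-edge
# thickness in physical units rather than as a fraction of chord. The keyword
# name 'teHeight' is assumed here by analogy with 'teHeightScaled'; check the
# pyGeo documentation before relying on it.
# wing = pyGeo('liftingSurface',
#              xsections=airfoil_list,
#              scale=chord, offset=offset, x=x, y=y, z=z,
#              rotX=rot_x, rotY=rot_y, rotZ=rot_z,
#              bluntTe=True, teHeight=0.01,
#              kSpan=2, tip='rounded')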
# --------- Blunt Trailing (Rounded) / Rounded Tip -------
# Alternative way of producing rounded trailing edges that can be easier
# to mesh and extrude with pyHyp.
wing = pyGeo('liftingSurface',
xsections=airfoil_list,
scale=chord, offset=offset, x=x, y=y, z=z,
rotX=rot_x, rotY=rot_y, rotZ=rot_z,
bluntTe=True, roundedTe=True, teHeightScaled=0.002,
kSpan=2, tip='rounded')
wing.writeTecplot('c172_rounded_te_rounded_tip.dat')
wing.writeIGES('c172_rounded_te_rounded_tip.igs')
|
/-
Copyright (c) 2022 Jannis Limperg. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Jannis Limperg
-/
import Aesop
structure MyTrue₁
structure MyTrue₂
@[aesop safe]
structure MyTrue₃ where
tt : MyTrue₁
example : MyTrue₃ := by
aesop
apply MyTrue₁.mk
@[aesop safe]
structure MyFalse where
falso : False
example : MyFalse := by
aesop
example : MyFalse := by
fail_if_success aesop (options := { terminal := true })
example : MyFalse := by
aesop (options := { warnOnNonterminal := false })
@[aesop safe]
structure MyFalse₂ where
falso : False
tt : MyTrue₃
example : MyFalse₂ := by
aesop
|
function val=getoptkey(key,default,varargin)
%
% val=getoptkey(key,default,opt)
% or
% val=getoptkey(key,default,'key1',val1,'key2',val2, ...)
%
% query the value of a key from a structure or a list of key/value pairs
%
% author: Qianqian Fang, <q.fang at neu.edu>
%
% input:
% key: a string name for the target struct field name
% default: the default value of the key is not found
% opt: a struct object; the field names will be searched to match the
% key input, opt can be a list of 'keyname'/value pairs
%
% output:
% val: val=opt.key if found, otherwise val=default
%
% -- this function is part of iso2mesh toolbox (http://iso2mesh.sf.net)
%
val=default;
if(nargin<=2) return; end
opt=varargin2struct(varargin{:});
if(isstruct(opt) && isfield(opt,key))
val=getfield(opt,key);
end
|
# Advanced: Extending lambeq
## Creating readers
### [Reader](../lambeq.rst#lambeq.reader.Reader) example: "Comb" reader
In this example we will create a reader that, given a sentence, generates the following tensor network:
<center>
</center>
Note that this particular compositional model is not appropriate for classical experiments, since the tensor that implements the layer can become very large for long sentences. However, the model can be implemented without problems on a quantum computer.
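For instance, with noun dimension $d$ and a sentence of $n$ words, the `LAYER` box of this reader is a tensor of order $n+1$ with $d^{n+1}$ entries, which quickly becomes too large to store classically.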
```python
from lambeq.reader import Reader
from lambeq.core.types import AtomicType
from discopy import Box, Id, Word
N = AtomicType.NOUN
class CombReader(Reader):
def sentence2diagram(self, sentence):
words = Id().tensor(*[Word(w, N) for w in sentence.split()])
layer = Box('LAYER', words.cod, N)
return words >> layer
diagram = CombReader().sentence2diagram('John gave Mary a flower')
diagram.draw()
```
```python
Id().tensor(*[Word(w, N) for w in ['John', 'gave', 'Mary', 'a', 'flower']]).draw()
```
## Creating rewrite rules
```python
import warnings
warnings.filterwarnings('ignore') # Ignore warnings
from lambeq.ccg2discocat import DepCCGParser
parser = DepCCGParser()
d = parser.sentence2diagram('The food is fresh')
```
### [SimpleRewriteRule](../lambeq.rst#lambeq.rewrite.SimpleRewriteRule) example: Negation functor
```python
from lambeq.rewrite import SimpleRewriteRule
from lambeq.core.types import AtomicType
from discopy.rigid import Box, Id
N = AtomicType.NOUN
S = AtomicType.SENTENCE
adj = N @ N.l
NOT = Box('NOT', S, S)
negation_rewrite = SimpleRewriteRule(
cod=N.r @ S @ S.l @ N,
template=SimpleRewriteRule.placeholder(N.r @ S @ S.l @ N) >> Id(N.r) @ NOT @ Id(S.l @ N),
words=['is', 'was', 'has', 'have'])
```
```python
from lambeq.rewrite import Rewriter
from discopy import drawing
not_d = Rewriter([negation_rewrite])(d)
drawing.equation(d, not_d, symbol='->', figsize=(14, 4))
```
### [RewriteRule](../lambeq.rst#lambeq.rewrite.RewriteRule) example: "Past" functor
```python
from lambeq.rewrite import RewriteRule
class PastRewriteRule(RewriteRule):
mapping = {
'is': 'was',
'are': 'were',
'has': 'had'
}
def matches(self, box):
return box.name in self.mapping
def rewrite(self, box):
new_name = self.mapping[box.name]
return type(box)(name=new_name, dom=box.dom, cod=box.cod)
```
```python
past_d = Rewriter([PastRewriteRule()])(d)
drawing.equation(d, past_d, symbol='->', figsize=(14, 4))
```
## Creating ansätze
```python
d = parser.sentence2diagram('We will go')
```
### [CircuitAnsatz](../lambeq.rst#lambeq.circuit.CircuitAnsatz) example: "Real-valued" ansatz
```python
from lambeq.circuit import CircuitAnsatz
from discopy.quantum.circuit import Functor, Id
from discopy.quantum.gates import Bra, CX, Ket, Ry
from lambeq.ansatz import Symbol
class RealAnsatz(CircuitAnsatz):
def __init__(self, ob_map, n_layers):
super().__init__(ob_map=ob_map, n_layers=n_layers)
self.n_layers = n_layers
self.functor = Functor(ob=self.ob_map, ar=self._ar)
def _ar(self, box):
# step 1: obtain label
label = self._summarise_box(box)
# step 2: map domain and codomain
dom, cod = self._ob(box.dom), self._ob(box.cod)
n_qubits = max(dom, cod)
n_layers = self.n_layers
# step 3: construct and return ansatz
if n_qubits == 1:
circuit = Ry(Symbol(f'{label}_0'))
else:
# this also deals with the n_qubits == 0 case correctly
circuit = Id(n_qubits)
for i in range(n_layers):
offset = i * n_qubits
syms = [Symbol(f'{label}_{offset + j}') for j in range(n_qubits)]
# adds a ladder of CNOTs
for j in range(n_qubits - 1):
circuit >>= Id(j) @ CX @ Id(n_qubits - j - 2)
# adds a layer of Y rotations
circuit >>= Id().tensor(*[Ry(sym) for sym in syms])
if cod <= dom:
circuit >>= Id(cod) @ Bra(*[0]*(dom - cod))
else:
circuit <<= Id(dom) @ Ket(*[0]*(cod - dom))
return circuit
```
```python
real_d = RealAnsatz({N: 1, S: 1}, n_layers=2)(d)
real_d.draw(figsize=(12, 10))
```
### [TensorAnsatz](../lambeq.rst#lambeq.tensor.TensorAnsatz) example: "Positive" ansatz
```python
from lambeq.tensor import TensorAnsatz
from discopy import rigid, tensor
from functools import reduce
class PositiveAnsatz(TensorAnsatz):
def _ar(self, box):
# step 1: obtain label
label = self._summarise_box(box)
# step 2: map domain and codomain
dom, cod = self._ob(box.dom), self._ob(box.cod)
# step 3: construct and return ansatz
name = self._summarise_box(box)
n_params = reduce(lambda x, y: x * y, dom @ cod, 1)
syms = Symbol(name, size=n_params)
return tensor.Box(box.name, dom, cod, syms ** 2)
```
```python
from discopy import Dim
ansatz = PositiveAnsatz({N: Dim(2), S: Dim(2)})
positive_d = ansatz(d)
positive_d.draw()
```
```python
import numpy as np
from sympy import default_sort_key
syms = sorted(positive_d.free_symbols, key=default_sort_key)
sym_dict = {k: -np.ones(k.size) for k in syms}
subbed_diagram = positive_d.lambdify(*syms)(*sym_dict.values())
subbed_diagram.eval()
```
Tensor(dom=Dim(1), cod=Dim(2), array=[8., 8.])
## Contributions
|
State Before: α : Type u_2
β : Type u_1
inst✝¹ : MeasurableSpace α
inst✝ : MeasurableSpace β
f : α → Measure β
⊢ bind 0 f = 0
State After: no goals
Tactic: simp [bind]
|
lemma compact_sequence_with_limit: fixes f :: "nat \<Rightarrow> 'a::heine_borel" shows "(f \<longlongrightarrow> l) sequentially \<Longrightarrow> compact (insert l (range f))"
|
! { dg-do run }
! Tests the fix for pr28174, in which the fix for pr28118 was
! corrupting the character lengths of arrays that shared a
! character length structure. In addition, in developing the
! fix, it was noted that intent(out/inout) arguments were not
! getting written back to the calling scope.
!
! Based on the testcase by Harald Anlauf <[email protected]>
!
program pr28174
implicit none
character(len=12) :: teststring(2) = (/ "abc def ghij", &
"klm nop qrst" /)
character(len=12) :: a(2), b(2), c(2), d(2)
integer :: m = 7, n
a = teststring
b = a
c = a
d = a
n = m - 4
! Make sure that variable substring references work.
call foo (a(:)(m:m+5), c(:)(n:m+2), d(:)(5:9))
if (any (a .ne. teststring)) call abort ()
if (any (b .ne. teststring)) call abort ()
if (any (c .ne. (/"ab456789#hij", &
"kl7654321rst"/))) call abort ()
if (any (d .ne. (/"abc 23456hij", &
"klm 98765rst"/))) call abort ()
contains
subroutine foo (w, x, y)
character(len=*), intent(in) :: w(:)
character(len=*), intent(inOUT) :: x(:)
character(len=*), intent(OUT) :: y(:)
character(len=12) :: foostring(2) = (/"0123456789#$" , &
"$#9876543210"/)
! This next is not required by the standard but tests the
! functioning of the gfortran implementation.
! if (all (x(:)(3:7) .eq. y)) call abort ()
x = foostring (:)(5 : 4 + len (x))
y = foostring (:)(3 : 2 + len (y))
end subroutine foo
end program pr28174
|
(* Property from Case-Analysis for Rippling and Inductive Proof,
Moa Johansson, Lucas Dixon and Alan Bundy, ITP 2010.
This Isabelle theory is produced using the TIP tool offered at the following website:
https://github.com/tip-org/tools
This file was originally provided as part of TIP benchmark at the following website:
https://github.com/tip-org/benchmarks
Yutaka Nagashima at CIIRC, CTU changed the TIP output theory file slightly
to make it compatible with Isabelle2017.*)
theory TIP_prop_07
imports "../../Test_Base"
begin
datatype Nat = Z | S "Nat"
fun t22 :: "Nat => Nat => Nat" where
"t22 (Z) y = y"
| "t22 (S z) y = S (t22 z y)"
fun t2 :: "Nat => Nat => Nat" where
"t2 (Z) y = Z"
| "t2 (S z) (Z) = S z"
| "t2 (S z) (S x2) = t2 z x2"
theorem property0 :
"((t2 (t22 n m) n) = m)"
oops
end
|
The UN's Sustainable Development Goal 3, which promotes health and well-being for all, calls for balancing healthcare provision with healthcare financing and for addressing the many challenges ordinary citizens face in accessing care. The complexities of healthcare access and financing were vividly illustrated by the experience of a 13-year-old Nigerian orphan named Praise Sunday, whose story was shared during a church service on Sunday 12th February 2017 at The Synagogue, Church Of All Nations (SCOAN), Lagos, Nigeria.
In an armed robbery on May 8th 2016, Praise Sunday tragically lost his mother and sister. During this ordeal, the young boy, whose father passed away several years earlier, sustained life-threatening injuries to his throat.
Praise and his extended family members sought medical assistance across Nigeria, depleting their financial means as he underwent seven surgeries. Left with a tracheostomy tube in his throat which enabled him to breathe, he was completely unable to talk and communicated only through writing. In September 2016, they sought aid at The SCOAN, a religious institution known for its extensive charitable endeavours. The General Overseer, T.B. Joshua, through the humanitarian arm of his faith-based organization, financed a delicate and complex health procedure carried out in Life Vincent Pallotti Hospital, Cape Town, South Africa.
Specialists Dr Martin Vanlierde and Professor Mark De Groot undertook the corrective surgery to restore Praise's ability to both breathe and speak normally again. The procedure was successful and his speedy recovery exceeded expectations. The total cost of Praise's travelling expenses, welfare and medical bills was US$50,000, all financed by T.B. Joshua's faith-based organisation.
During a live broadcast of his story on Emmanuel TV, T.B. Joshua addressed the congregation, encouraging faith leaders and medical doctors to work together on the societal conundrum of healthcare access today. He said: "If God's servants and doctors work together, there will be no limit to what they can achieve. The medicine doctors use comes from nature and our God is the God of nature."
The SCOAN has previously financed other medical trips, including that of a Nigerian policeman who received more than $25,000 to be flown to India for a complicated procedure to restore his urinary system, which was damaged when he was shot by gunmen while on duty.
Through such collaborative efforts, faith-based organisations such as The SCOAN, which helped turn a young boy's ordeal into an inspiring story, can greatly enhance healthcare access and its financing.
|
If $I$ is a countable set and $N_i$ is a null set for each $i \in I$, then $\bigcup_{i \in I} N_i$ is a null set.
|
{-# OPTIONS --without-K --safe #-}
open import Level using (Level)
open import Relation.Binary.PropositionalEquality
open ≡-Reasoning
open import Data.Nat using (ℕ; zero; suc)
open import Data.Vec using (Vec; []; _∷_; _++_; foldr; map; replicate)
open import Data.Vec.Properties
open import FLA.Algebra.Structures
open import FLA.Algebra.Properties.Field
open import FLA.Algebra.LinearAlgebra
open import FLA.Data.Vec.Properties
module FLA.Algebra.LinearAlgebra.Properties where
private
variable
ℓ : Level
A : Set ℓ
m n p q : ℕ
module _ ⦃ F : Field A ⦄ where
open Field F
+ⱽ-assoc : (v₁ v₂ v₃ : Vec A n)
→ v₁ +ⱽ v₂ +ⱽ v₃ ≡ v₁ +ⱽ (v₂ +ⱽ v₃)
+ⱽ-assoc [] [] [] = refl
+ⱽ-assoc (v₁ ∷ vs₁) (v₂ ∷ vs₂) (v₃ ∷ vs₃) rewrite
+ⱽ-assoc vs₁ vs₂ vs₃
| +-assoc v₁ v₂ v₃
= refl
+ⱽ-comm : (v₁ v₂ : Vec A n) → v₁ +ⱽ v₂ ≡ v₂ +ⱽ v₁
+ⱽ-comm [] [] = refl
+ⱽ-comm (x₁ ∷ vs₁) (x₂ ∷ vs₂) =
begin
x₁ + x₂ ∷ vs₁ +ⱽ vs₂ ≡⟨ cong ((x₁ + x₂) ∷_) (+ⱽ-comm vs₁ vs₂) ⟩
x₁ + x₂ ∷ vs₂ +ⱽ vs₁ ≡⟨ cong (_∷ vs₂ +ⱽ vs₁) (+-comm x₁ x₂) ⟩
x₂ + x₁ ∷ vs₂ +ⱽ vs₁ ∎
v+0ᶠⱽ≡v : (v : Vec A n) → v +ⱽ (replicate 0ᶠ) ≡ v
v+0ᶠⱽ≡v [] = refl
v+0ᶠⱽ≡v (v ∷ vs) = cong₂ _∷_ (+0ᶠ v) (v+0ᶠⱽ≡v vs)
0ᶠⱽ+v≡v : (v : Vec A n) → (replicate 0ᶠ) +ⱽ v ≡ v
0ᶠⱽ+v≡v v = trans (+ⱽ-comm (replicate 0ᶠ) v) (v+0ᶠⱽ≡v v)
0ᶠ∘ⱽv≡0ᶠⱽ : (v : Vec A n) → 0ᶠ ∘ⱽ v ≡ replicate 0ᶠ
0ᶠ∘ⱽv≡0ᶠⱽ [] = refl
0ᶠ∘ⱽv≡0ᶠⱽ (v ∷ vs) = cong₂ _∷_ (0ᶠ*a≡0ᶠ v) (0ᶠ∘ⱽv≡0ᶠⱽ vs)
c∘ⱽ0ᶠⱽ≡0ᶠⱽ : {n : ℕ} → (c : A) → c ∘ⱽ replicate {n = n} 0ᶠ ≡ replicate 0ᶠ
c∘ⱽ0ᶠⱽ≡0ᶠⱽ {zero} c = refl
c∘ⱽ0ᶠⱽ≡0ᶠⱽ {suc n} c = cong₂ _∷_ (a*0ᶠ≡0ᶠ c) (c∘ⱽ0ᶠⱽ≡0ᶠⱽ c)
v*ⱽ0ᶠⱽ≡0ᶠⱽ : {n : ℕ} → (v : Vec A n) → v *ⱽ replicate 0ᶠ ≡ replicate 0ᶠ
v*ⱽ0ᶠⱽ≡0ᶠⱽ [] = refl
v*ⱽ0ᶠⱽ≡0ᶠⱽ (v ∷ vs) = cong₂ _∷_ (a*0ᶠ≡0ᶠ v) (v*ⱽ0ᶠⱽ≡0ᶠⱽ vs)
map-*c-≡c∘ⱽ : (c : A) (v : Vec A n) → map (_* c) v ≡ c ∘ⱽ v
map-*c-≡c∘ⱽ c [] = refl
map-*c-≡c∘ⱽ c (v ∷ vs) = cong₂ _∷_ (*-comm v c) (map-*c-≡c∘ⱽ c vs)
replicate-distr-+ : {n : ℕ} → (u v : A)
→ replicate {n = n} (u + v) ≡ replicate u +ⱽ replicate v
replicate-distr-+ u v = sym (zipWith-replicate _+_ u v)
-- This should work for any linear function (I think), instead of just -_,
*ⱽ-map--ⱽ : (a v : Vec A n)
→ a *ⱽ (map -_ v) ≡ map -_ (a *ⱽ v)
*ⱽ-map--ⱽ [] [] = refl
*ⱽ-map--ⱽ (a ∷ as) (v ∷ vs) = begin
(a ∷ as) *ⱽ map -_ (v ∷ vs)
≡⟨⟩
(a * - v) ∷ (as *ⱽ map -_ vs)
≡⟨ cong ((a * - v) ∷_) (*ⱽ-map--ⱽ as vs) ⟩
(a * - v) ∷ (map -_ (as *ⱽ vs))
≡⟨ cong (_∷ (map -_ (as *ⱽ vs))) (a*-b≡-[a*b] a v) ⟩
(- (a * v)) ∷ (map -_ (as *ⱽ vs))
≡⟨⟩
map -_ ((a ∷ as) *ⱽ (v ∷ vs))
∎
-ⱽ≡-1ᶠ∘ⱽ : (v : Vec A n) → map -_ v ≡ (- 1ᶠ) ∘ⱽ v
-ⱽ≡-1ᶠ∘ⱽ [] = refl
-ⱽ≡-1ᶠ∘ⱽ (v ∷ vs) = begin
map -_ (v ∷ vs) ≡⟨⟩
(- v) ∷ map -_ vs ≡⟨ cong₂ (_∷_) (-a≡-1ᶠ*a v) (-ⱽ≡-1ᶠ∘ⱽ vs) ⟩
(- 1ᶠ * v) ∷ (- 1ᶠ) ∘ⱽ vs ≡⟨⟩
(- 1ᶠ) ∘ⱽ (v ∷ vs) ∎
*ⱽ-assoc : (v₁ v₂ v₃ : Vec A n)
→ v₁ *ⱽ v₂ *ⱽ v₃ ≡ v₁ *ⱽ (v₂ *ⱽ v₃)
*ⱽ-assoc [] [] [] = refl
*ⱽ-assoc (v₁ ∷ vs₁) (v₂ ∷ vs₂) (v₃ ∷ vs₃) rewrite
*ⱽ-assoc vs₁ vs₂ vs₃
| *-assoc v₁ v₂ v₃
= refl
*ⱽ-comm : (v₁ v₂ : Vec A n) → v₁ *ⱽ v₂ ≡ v₂ *ⱽ v₁
*ⱽ-comm [] [] = refl
*ⱽ-comm (v₁ ∷ vs₁) (v₂ ∷ vs₂) rewrite
*ⱽ-comm vs₁ vs₂
| *-comm v₁ v₂
= refl
*ⱽ-distr-+ⱽ : (a u v : Vec A n)
→ a *ⱽ (u +ⱽ v) ≡ a *ⱽ u +ⱽ a *ⱽ v
*ⱽ-distr-+ⱽ [] [] [] = refl
*ⱽ-distr-+ⱽ (a ∷ as) (u ∷ us) (v ∷ vs) rewrite
*ⱽ-distr-+ⱽ as us vs
| *-distr-+ a u v
= refl
*ⱽ-distr--ⱽ : (a u v : Vec A n)
→ a *ⱽ (u -ⱽ v) ≡ a *ⱽ u -ⱽ a *ⱽ v
*ⱽ-distr--ⱽ a u v = begin
a *ⱽ (u -ⱽ v) ≡⟨⟩
a *ⱽ (u +ⱽ (map (-_) v)) ≡⟨ *ⱽ-distr-+ⱽ a u (map -_ v) ⟩
a *ⱽ u +ⱽ a *ⱽ (map -_ v) ≡⟨ cong (a *ⱽ u +ⱽ_) (*ⱽ-map--ⱽ a v) ⟩
a *ⱽ u +ⱽ (map -_ (a *ⱽ v)) ≡⟨⟩
a *ⱽ u -ⱽ a *ⱽ v ∎
-- Homogeneity of degree 1 for linear maps
*ⱽ∘ⱽ≡∘ⱽ*ⱽ : (c : A) (u v : Vec A n)
→ u *ⱽ c ∘ⱽ v ≡ c ∘ⱽ (u *ⱽ v)
*ⱽ∘ⱽ≡∘ⱽ*ⱽ c [] [] = refl
*ⱽ∘ⱽ≡∘ⱽ*ⱽ c (u ∷ us) (v ∷ vs) rewrite
*ⱽ∘ⱽ≡∘ⱽ*ⱽ c us vs
| *-assoc u c v
| *-comm u c
| sym (*-assoc c u v)
= refl
∘ⱽ*ⱽ-assoc : (c : A) (u v : Vec A n)
→ c ∘ⱽ (u *ⱽ v) ≡ (c ∘ⱽ u) *ⱽ v
∘ⱽ*ⱽ-assoc c [] [] = refl
∘ⱽ*ⱽ-assoc c (u ∷ us) (v ∷ vs) = cong₂ (_∷_) (*-assoc c u v)
(∘ⱽ*ⱽ-assoc c us vs)
∘ⱽ-distr-+ⱽ : (c : A) (u v : Vec A n)
→ c ∘ⱽ (u +ⱽ v) ≡ c ∘ⱽ u +ⱽ c ∘ⱽ v
∘ⱽ-distr-+ⱽ c [] [] = refl
∘ⱽ-distr-+ⱽ c (u ∷ us) (v ∷ vs) rewrite
∘ⱽ-distr-+ⱽ c us vs
| *-distr-+ c u v
= refl
∘ⱽ-comm : (a b : A) (v : Vec A n) → a ∘ⱽ (b ∘ⱽ v) ≡ b ∘ⱽ (a ∘ⱽ v)
∘ⱽ-comm a b [] = refl
∘ⱽ-comm a b (v ∷ vs) = cong₂ _∷_ (trans (trans (*-assoc a b v)
(cong (_* v) (*-comm a b)))
(sym (*-assoc b a v)))
(∘ⱽ-comm a b vs)
+ⱽ-flip-++ : (a b : Vec A n) → (c d : Vec A m)
→ (a ++ c) +ⱽ (b ++ d) ≡ a +ⱽ b ++ c +ⱽ d
+ⱽ-flip-++ [] [] c d = refl
+ⱽ-flip-++ (a ∷ as) (b ∷ bs) c d rewrite +ⱽ-flip-++ as bs c d = refl
∘ⱽ-distr-++ : (c : A) (a : Vec A n) (b : Vec A m)
→ c ∘ⱽ (a ++ b) ≡ (c ∘ⱽ a) ++ (c ∘ⱽ b)
∘ⱽ-distr-++ c [] b = refl
∘ⱽ-distr-++ c (a ∷ as) b rewrite ∘ⱽ-distr-++ c as b = refl
replicate[a*b]≡a∘ⱽreplicate[b] : {n : ℕ} (a b : A) →
replicate {n = n} (a * b) ≡ a ∘ⱽ replicate b
replicate[a*b]≡a∘ⱽreplicate[b] {n = n} a b = sym (map-replicate (a *_) b n)
replicate[a*b]≡b∘ⱽreplicate[a] : {n : ℕ} (a b : A) →
replicate {n = n} (a * b) ≡ b ∘ⱽ replicate a
replicate[a*b]≡b∘ⱽreplicate[a] a b = trans (cong replicate (*-comm a b))
(replicate[a*b]≡a∘ⱽreplicate[b] b a)
sum[0ᶠⱽ]≡0ᶠ : {n : ℕ} → sum (replicate {n = n} 0ᶠ) ≡ 0ᶠ
sum[0ᶠⱽ]≡0ᶠ {n = zero} = refl
sum[0ᶠⱽ]≡0ᶠ {n = suc n} = begin
sum (0ᶠ ∷ replicate {n = n} 0ᶠ) ≡⟨⟩
0ᶠ + sum (replicate {n = n} 0ᶠ) ≡⟨ cong (0ᶠ +_) (sum[0ᶠⱽ]≡0ᶠ {n}) ⟩
0ᶠ + 0ᶠ ≡⟨ 0ᶠ+0ᶠ≡0ᶠ ⟩
0ᶠ ∎
sum-distr-+ⱽ : (v₁ v₂ : Vec A n) → sum (v₁ +ⱽ v₂) ≡ sum v₁ + sum v₂
sum-distr-+ⱽ [] [] = sym (0ᶠ+0ᶠ≡0ᶠ)
sum-distr-+ⱽ (v₁ ∷ vs₁) (v₂ ∷ vs₂) rewrite
sum-distr-+ⱽ vs₁ vs₂
| +-assoc (v₁ + v₂) (foldr (λ v → A) _+_ 0ᶠ vs₁) (foldr (λ v → A) _+_ 0ᶠ vs₂)
| sym (+-assoc v₁ v₂ (foldr (λ v → A) _+_ 0ᶠ vs₁))
| +-comm v₂ (foldr (λ v → A) _+_ 0ᶠ vs₁)
| +-assoc v₁ (foldr (λ v → A) _+_ 0ᶠ vs₁) v₂
| sym (+-assoc (v₁ + (foldr (λ v → A) _+_ 0ᶠ vs₁)) v₂ (foldr (λ v → A) _+_ 0ᶠ vs₂))
= refl
sum[c∘ⱽv]≡c*sum[v] : (c : A) (v : Vec A n) → sum (c ∘ⱽ v) ≡ c * sum v
sum[c∘ⱽv]≡c*sum[v] c [] = sym (a*0ᶠ≡0ᶠ c)
sum[c∘ⱽv]≡c*sum[v] c (v ∷ vs) = begin
sum (c ∘ⱽ (v ∷ vs)) ≡⟨⟩
c * v + sum (c ∘ⱽ vs) ≡⟨ cong (c * v +_) (sum[c∘ⱽv]≡c*sum[v] c vs) ⟩
c * v + c * sum vs ≡⟨ sym (*-distr-+ c v (sum vs)) ⟩
c * (v + sum vs) ≡⟨⟩
c * sum (v ∷ vs) ∎
⟨⟩-comm : (v₁ v₂ : Vec A n)
→ ⟨ v₁ , v₂ ⟩ ≡ ⟨ v₂ , v₁ ⟩
⟨⟩-comm v₁ v₂ = cong sum (*ⱽ-comm v₁ v₂)
-- Should we show bilinearity?
-- ∀ λ ∈ F, B(λv, w) ≡ B(v, λw) ≡ λB(v, w)
-- B(v₁ + v₂, w) ≡ B(v₁, w) + B(v₂, w) ∧ B(v, w₁ + w₂) ≡ B(v, w₁) + B(v, w₂)
-- Additivity in both arguments
⟨x+y,z⟩≡⟨x,z⟩+⟨y,z⟩ : (x y z : Vec A n)
→ ⟨ x +ⱽ y , z ⟩ ≡ (⟨ x , z ⟩) + (⟨ y , z ⟩)
⟨x+y,z⟩≡⟨x,z⟩+⟨y,z⟩ x y z = begin
⟨ x +ⱽ y , z ⟩
≡⟨⟩
sum ((x +ⱽ y) *ⱽ z )
≡⟨ cong sum (*ⱽ-comm (x +ⱽ y) z) ⟩
sum (z *ⱽ (x +ⱽ y))
≡⟨ cong sum (*ⱽ-distr-+ⱽ z x y) ⟩
sum (z *ⱽ x +ⱽ z *ⱽ y)
≡⟨ sum-distr-+ⱽ (z *ⱽ x) (z *ⱽ y) ⟩
sum (z *ⱽ x) + sum (z *ⱽ y)
≡⟨⟩
⟨ z , x ⟩ + ⟨ z , y ⟩
≡⟨ cong (_+ ⟨ z , y ⟩) (⟨⟩-comm z x) ⟩
⟨ x , z ⟩ + ⟨ z , y ⟩
≡⟨ cong (⟨ x , z ⟩ +_ ) (⟨⟩-comm z y) ⟩
⟨ x , z ⟩ + ⟨ y , z ⟩
∎
⟨x,y+z⟩≡⟨x,y⟩+⟨x,z⟩ : (x y z : Vec A n)
→ ⟨ x , y +ⱽ z ⟩ ≡ (⟨ x , y ⟩) + (⟨ x , z ⟩)
⟨x,y+z⟩≡⟨x,y⟩+⟨x,z⟩ x y z =
begin
⟨ x , y +ⱽ z ⟩ ≡⟨ ⟨⟩-comm x (y +ⱽ z) ⟩
⟨ y +ⱽ z , x ⟩ ≡⟨ ⟨x+y,z⟩≡⟨x,z⟩+⟨y,z⟩ y z x ⟩
⟨ y , x ⟩ + ⟨ z , x ⟩ ≡⟨ cong (_+ ⟨ z , x ⟩) (⟨⟩-comm y x) ⟩
⟨ x , y ⟩ + ⟨ z , x ⟩ ≡⟨ cong (⟨ x , y ⟩ +_ ) (⟨⟩-comm z x) ⟩
⟨ x , y ⟩ + ⟨ x , z ⟩ ∎
⟨a++b,c++d⟩≡⟨a,c⟩+⟨b,d⟩ : (a : Vec A m) → (b : Vec A n) → (c : Vec A m) → (d : Vec A n)
→ ⟨ a ++ b , c ++ d ⟩ ≡ ⟨ a , c ⟩ + ⟨ b , d ⟩
⟨a++b,c++d⟩≡⟨a,c⟩+⟨b,d⟩ [] b [] d rewrite 0ᶠ+ (⟨ b , d ⟩) = refl
⟨a++b,c++d⟩≡⟨a,c⟩+⟨b,d⟩ (a ∷ as) b (c ∷ cs) d =
begin
⟨ a ∷ as ++ b , c ∷ cs ++ d ⟩
≡⟨⟩
(a * c) + ⟨ as ++ b , cs ++ d ⟩
≡⟨ cong ((a * c) +_) (⟨a++b,c++d⟩≡⟨a,c⟩+⟨b,d⟩ as b cs d) ⟩
(a * c) + (⟨ as , cs ⟩ + ⟨ b , d ⟩)
≡⟨ +-assoc (a * c) ⟨ as , cs ⟩ ⟨ b , d ⟩ ⟩
((a * c) + ⟨ as , cs ⟩) + ⟨ b , d ⟩
≡⟨⟩
⟨ a ∷ as , c ∷ cs ⟩ + ⟨ b , d ⟩
∎
⟨a,b⟩+⟨c,d⟩≡⟨a++c,b++d⟩ : (a b : Vec A m) → (c d : Vec A n)
→ ⟨ a , b ⟩ + ⟨ c , d ⟩ ≡ ⟨ a ++ c , b ++ d ⟩
⟨a,b⟩+⟨c,d⟩≡⟨a++c,b++d⟩ a b c d = sym (⟨a++b,c++d⟩≡⟨a,c⟩+⟨b,d⟩ a c b d)
|
DaysToMonths<-function(days) {
# Converts days to months using formula specified by contest
#
# Args:
# any numeric variable in days
#
# Returns:
# the same variable in months
return(days/365.24*12)
}
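# Example (illustrative): DaysToMonths(365.24) returns 12 and
# DaysToMonths(30.437) returns approximately 1.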
|
insertpart | int:id |
FLUSHDFA();
C_insertpart(id);
beginpart | int:id |
FLUSHDFA();
C_beginpart(id);
endpart | int:id |
FLUSHDFA();
C_endpart(id);
pro | char *:s arith:l |
FLUSHDFA();
C_pro(s,l);
pro_narg | char *:s |
FLUSHDFA();
C_pro_narg(s);
end | arith:l |
FLUSHDFA();
C_end(l);
end_narg | |
FLUSHDFA();
C_end_narg();
df_dlb | label:l |
C_df_dlb(l);
df_dnam | char *:s |
C_df_dnam(s);
exa_dnam | char *:s |
C_exa_dnam(s);
exa_dlb | label:l |
C_exa_dlb(l);
exp | char *:s |
C_exp(s);
ina_dnam | char *:s |
C_ina_dnam(s);
ina_dlb | label:l |
C_ina_dlb(l);
inp | char *:s |
C_inp(s);
bss_cst | arith:n arith:w int:i |
C_bss_cst(n,w,i);
bss_icon | arith:n char *:s arith:sz int:i |
C_bss_icon(n,s,sz,i);
bss_ucon | arith:n char *:s arith:sz int:i |
C_bss_ucon(n,s,sz,i);
bss_fcon | arith:n char *:s arith:sz int:i |
C_bss_fcon(n,s,sz,i);
bss_dnam | arith:n char *:s arith:offs int:i |
C_bss_dnam(n,s,offs,i);
bss_dlb | arith:n label:l arith:offs int:i |
C_bss_dlb(n,l,offs,i);
bss_ilb | arith:n label:l int:i |
C_bss_ilb(n,l,i);
bss_pnam | arith:n char *:s int:i |
C_bss_pnam(n,s,i);
hol_cst | arith:n arith:w int:i |
C_hol_cst(n,w,i);
hol_icon | arith:n char *:s arith:sz int:i |
C_hol_icon(n,s,sz,i);
hol_ucon | arith:n char *:s arith:sz int:i |
C_hol_ucon(n,s,sz,i);
hol_fcon | arith:n char *:s arith:sz int:i |
C_hol_fcon(n,s,sz,i);
hol_dnam | arith:n char *:s arith:offs int:i |
C_hol_dnam(n,s,offs,i);
hol_dlb | arith:n label:l arith:offs int:i |
C_hol_dlb(n,l,offs,i);
hol_ilb | arith:n label:l int:i |
C_hol_ilb(n,l,i);
hol_pnam | arith:n char *:s int:i |
C_hol_pnam(n,s,i);
con_cst | arith:l |
C_con_cst(l);
con_icon | char *:val arith:siz |
C_con_icon(val,siz);
con_ucon | char *:val arith:siz |
C_con_ucon(val,siz);
con_fcon | char *:val arith:siz |
C_con_fcon(val,siz);
con_scon | char *:str arith:siz |
C_con_scon(str,siz);
con_dnam | char *:str arith:val |
C_con_dnam(str,val);
con_dlb | label:l arith:val |
C_con_dlb(l,val);
con_ilb | label:l |
C_con_ilb(l);
con_pnam | char *:str |
C_con_pnam(str);
rom_cst | arith:l |
C_rom_cst(l);
rom_icon | char *:val arith:siz |
C_rom_icon(val,siz);
rom_ucon | char *:val arith:siz |
C_rom_ucon(val,siz);
rom_fcon | char *:val arith:siz |
C_rom_fcon(val,siz);
rom_scon | char *:str arith:siz |
C_rom_scon(str,siz);
rom_dnam | char *:str arith:val |
C_rom_dnam(str,val);
rom_dlb | label:l arith:val |
C_rom_dlb(l,val);
rom_ilb | label:l |
C_rom_ilb(l);
rom_pnam | char *:str |
C_rom_pnam(str);
cst | arith:l |
C_cst(l);
icon | char *:val arith:siz |
C_icon(val,siz);
ucon | char *:val arith:siz |
C_ucon(val,siz);
fcon | char *:val arith:siz |
C_fcon(val,siz);
scon | char *:str arith:siz |
C_scon(str,siz);
dnam | char *:str arith:val |
C_dnam(str,val);
dlb | label:l arith:val |
C_dlb(l,val);
ilb | label:l |
C_ilb(l);
pnam | char *:str |
C_pnam(str);
mes_begin | int:ms |
C_mes_begin(ms);
mes_end | |
C_mes_end();
exc | arith:c1 arith:c2 |
C_exc(c1,c2);
|
lemma convex_epigraph_convex: "convex S \<Longrightarrow> convex_on S f \<longleftrightarrow> convex(epigraph S f)"
|
.NH "Xware 2: Games"
.PP
Congratulations on purchasing Xware 2, which is a package of X Games for \*(CO.
.PP
This package also holds the library
.BR libXpm.a ,
whose functions manipulate pix-mapped images;
a revised version of the \*(CO command
.BR make ;
some revised X header files;
and a manual page for every game.
.SH "What Is Xware?"
Xware is a series of packages from Mark Williams Company
that bundle some of the most popular and useful software
for the X Window System.
Each program in a Xware package comes with an executable binary;
source code;
instructions and scripts to help you recompile and install the program;
and a manual page for the program edited and formatted in the \*(CO
Lexicon format, which you can view with the \*(CO command
.BR man .
.PP
Other Xware packages are available or are being prepared.
These include packages of X window managers, graphics programs,
tools and utilities, and development tools.
Xware makes it easy and convenient for you
to enlarge your supply of X software.
.SH "System Resources"
.PP
The games in this package require eight megabytes of RAM.
.PP
When you install this package, it takes up about three megabytes.
You should assume that de-archiving and installing a game will
consume one-half to three-quarters of a megabyte.
The total amount of disk space consumed depends upon how many games you
install, and whether you leave the source archives on your system or
delete them.
Directions on how to install a game are given below.
.PP
The size of the screen required varies from game to game.
A few run on a 640\(mu480 desktop; most use 800\(mu600;
a few use 1024\(mu768.
Those that use the 1024\(mu768 desktop can, in most instances,
be run under 800\(mu600; however, you will have to shift the
window around to see all of the action.
.PP
Most games recompile correctly with the \*(CO C compiler.
A few must be recompiled with the GCC compiler.
Each archive contains a file named
.BR README.Coh ,
which describes how to install the game and how to recompile it.
.SH "Contents of This Package"
.PP
The individual games are kept in the form of compressed archives;
when you install this package onto your system, all of the archives are
copied into directory
.BR /usr/X11/games .
The following describes each game included in this package:
.IP "\fBcbzone\fR"
Drive a tank around a desert battlefield.
Battle enemy tanks, supertanks, helicopters, cruise missiles, and
landers.
This game features striking animation.
It uses a 1024\(mu768 window, but you can run it on an 800\(mu600 screen.
.IP "\fBroids\fR"
This is an implementation of the arcade game ``Asteroids'':
You guide a spaceship through an asteroid field and blow up rocks.
.IP "\fBspider\fR"
This implements a sophisticated game of two-deck solitaire.
It uses a 1024\(mu768 window.
You can play it on an 800\(mu600 screen, but you will have to shift the
window from time to time to see where all the cards lie.
.IP "\fBsvb\fR"
Spy vs. Bob:
Guide your spy through a skyscraper, as he is pursued by the
evil, pipe-smoking Bobs.
.IP "\fBtetris\fR"
Yet another implementation of ``Tetris''.
This game features attractive graphics.
.IP "\fBxchomp\fR"
This implements a version of the arcade game ``PacMan'', but with
some twists \(em including a number of different mazes.
.IP "\fBxconq\fR"
Fight to dominate the world!
At the beginning of the game, you control a nation; as you guide your
nation's development, it encounters one or more other nations that are
controlled by the computer.
Each nation tries to dominate all of the others, through trade, alliances,
and warfare.
The game continues until one nation rules all.
The topography and technology can be imaginary, or modelled after a
historical setting.
.IP "\fBxhextris\fR"
Yet another implementation of ``Tetris,'' except that the shapes are
built out of hexagons instead of squares.
As a result, there are more shapes, and they are more difficult to fit
together.
Not for the timid!
.IP "\fBxlander\fR"
Your space ship is in orbit around the Moon.
You must guide it to a safe landing before it crashes or runs out of fuel.
.IP "\fBxlife\fR"
This implements Conway's game of Life under X.
Also included is a library of shapes and screens that you can patch together,
just to make Life interesting!
.IP "\fBxmahjongg\fR"
This game implements the solitaire version of the Chinese game Mah-Jongg.
It uses a 1024\(mu768 window; but
you can play it on an 800\(mu600 screen if you shift the window from time
to time to see where all the tiles lie.
.IP "\fBxminesweep\fR"
This is an X implementation of the board game ``minesweeper''.
You must deduce where the mines are hidden on an otherwise blank game board.
Guess wrong, and the mine goes off!
.IP "\fBxpipeman\fR"
The water level is rising, and you must fit together randomly
selected sections of pipe to keep the water from hitting the floor.
.IP "\fBlibXpm.a.z\fR"
This archive holds the sources for the library
.BR Xpm ,
which holds X pixmap (XPM) functions.
This archive
also contains the command
.BR sxpm ,
which demonstrates how to use and display an XPM.
.IP
Note that when you install this package onto your system,
the compiled archive
.B libXpm.a
is automatically copied into directory
.BR /usr/X11/lib .
.IP "\fBxrobots\fR"
This is an X implementation of the game ``Daleks,'' seen on the Atari ST
and other machines.
You are pursued by evil robots; and your only chance is to trick them
into crashing into each other.
.SH "Installing This Package"
.PP
To install this package onto your system, log in as the superuser
.B root
and type the installation command:
either
.DM
/etc/install Xware2 /dev/fva0 3
.DE
if you have a 3.5-inch floppy-disk drive, or
.DM
/etc/install Xware2 /dev/fha0 3
.DE
if your floppy-disk drive is 5.25 inches.
.B install
copies the programs into directory
.BR /usr/X11/games ,
installs the few extra tools you need to recompile these programs,
and automatically installs the manual pages for the programs.
.SH "Installing an Individual Program"
.PP
As described above, each program is kept in a compressed archive in directory
.BR /usr/X11/games .
Each archive contains a compiled version of the program, ready for running;
the sources for the program;
a file called
.BR README.Coh ,
which describes how to recompile the program, should you wish;
and a script named
.BR Install.Coh ,
which installs the program for you.
To install a program so you can play it, do the following:
.IP \(bu 0.3i
.B cd
to directory
.BR /usr/X11/games .
.IP \(bu
Use the command
.B gtar
to de-archive and uncompress the archive that holds the program you want.
For example, if you want to install program
.BR cbzone ,
type the command:
.DM
gtar -xvzf cbzone.gtz
.DE
.IP \(bu
Change to the directory that holds the newly de-archived game.
For example, when you un-tar archive
.BR cbzone.gtz ,
the game and its source files are copied into directory
.BR cbzone .
.IP \(bu
Use the command
.B su
to become the superuser
.BR root ,
and execute script
.BR Install.Coh .
Each game has such a script, and it handles all the details of
installation for you.
.IP \(bu
After you have installed the game, you may wish to remove the source
code and other files that you do not need.
To do so,
.B cd
back to the game's parent directory, and use the command
.B "rm \-rf"
to remove the directory and its contents.
For example, after you have installed
.BR cbzone ,
you can remove this directory by typing the following commands:
.DM
cd ..
rm -rf cbzone
.DE
Note that the archive from which you copied the game,
.BR cbzone.gtz ,
is untouched by removing the directory
.BR cbzone .
You can always de-archive the game again should you wish to work with
its game files or re-install it.
.PP
If you wish to recompile a game, follow the directions given in file
.B README.Coh
in that game's source directory.
.SH "Copyrights"
.PP
Please note that the software in this package comes from third-party sources.
It is offered ``as is'', with no guarantee or warranty stated or implied,
whatsoever.
The copyrights on the source code and any executables built from it
belong to the persons who wrote the original code; full copyright
statements are included with the source code to each game.
Please read and respect these statements.
.SH "Technical Support"
.PP
The sources in this package have been compiled and run successfully under
\*(CO.
If you have a problem bringing up one of the games or with
compiling the source code as it is configured in this package, our
technical support team will attempt to help you.
However, because of the highly technical nature of the source
code in these packages, Mark Williams Company
.I cannot
give you technical assistance should you modify any of the source code
included in this package in any manner whatsoever.
.SH "A Final Word"
.PP
All of the source code for these games, and for all programs shipped in our
Xware packages, is available for free on the Mark Williams bulletin board
system, and on other publicly accessible archives.
The tutorial for
.B UUCP
in the \*(CO manual describes how to contact the BBS and make its services
available to you.
|
rm(list=ls())
library(dplyr)
library(ggplot2)
library(tidyr)
library(ggpubr)
# set dataset
dataset <- "Singhal"
setwd(paste("/Users/ChatNoir/Projects/Squam/Graphs/",dataset, sep=''))
# Read in data
load(paste("/Users/ChatNoir/Projects/Squam/Graphs/",dataset,"/Calcs_",dataset,".RData", sep=''))
# Merge datasets, then select and rename columns of interest
x <- merge(mLBF, mLGL, by = c("Locus"))
mL <- x %>% select("Locus")
mL$BF <- (x$TvS.x)/2
mL$dGLS <- x$TvS.y
hypoth <- "TvS"
t <- "ToxPoly vs. Sclero"
#mL$BF <- (x$AIvSA.x)/2
#mL$dGLS <- x$AIvSA.y
#hypoth <- "AIvSA"
#t <- "ToxAI vs. ToxSA"
#
# Get max min for graph
max(abs(min(mL$BF)),abs(max(mL$BF)),abs(min(mL$dGLS)),abs(max(mL$dGLS)))
limit <- 90
tic <- seq(-limit,limit,10)
# Names "TvS", "AIvSA", "AIvSI", "SAvAI", "SAvSI", "SIvAI", "SIvSA", "TvS_support"
color_S <- "orange"
color_TP <- "springgreen4"
color_AI <- "#2BB07FFF"
color_SA <- "#38598CFF"
color_SI <- "#C2DF23FF"
quartz()
# Set colors
color_h0 <- color_AI
color_h0 <- color_TP
graph_general <- ggplot(mL, aes(x=BF,y=dGLS)) +
geom_point(alpha=0.5, color=color_h0, size=1) + theme_bw() + theme(panel.border = element_blank()) +
theme_classic() +
theme(
axis.text = element_text(size=16, color="black"),
text = element_text(size=20),
legend.position = "none",
panel.border = element_blank(),
panel.background = element_rect(fill = "transparent"), # bg of the panel
plot.background = element_rect(fill = "transparent", color = NA), # bg of the plot
panel.grid = element_blank(), # get rid of major grid
plot.title = element_text(hjust = 0.5)
) +
coord_cartesian(ylim=tic,xlim = tic) +
scale_y_continuous(breaks = tic) +
scale_x_continuous(breaks = tic) +
labs(x='ln(BF)',y='dGLS')
graph_general
graph_custom <- graph_general +
geom_vline(xintercept=c(10,-10),color=c("gray"), linetype="dotted", size=0.5) +
geom_hline(yintercept=c(0.5,-0.5),color=c("gray"), linetype="dotted", size=0.5) +
ggtitle(paste(dataset,"\n",t,sep=""))
#geom_abline(color=c("gray"), size=0.2, linetype="solid") +
graph_custom
ggsave(paste(dataset,"_scatter_",hypoth,".pdf",sep=""), plot=graph_custom,width = 9.5, height = 6, units = "in", device = 'pdf',bg = "transparent")
|
(*
Copyright (C) 2017 M.A.L. Marques
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*)
(* type: gga_exc *)
(* prefix:
gga_c_zpbeint_params *params;
assert(p->params != NULL);
params = (gga_c_zpbeint_params * )(p->params);
*)
params_a_gamma := (1 - log(2))/Pi^2:
params_a_BB := 1:
$include "gga_c_pbe.mpl"
ff := (z, t) -> mphi(z)^(params_a_alpha*t^3):
f := (rs, z, xt, xs0, xs1) ->
f_pw(rs, z) + ff(z, tp(rs, z, xt))*fH(rs, z, tp(rs, z, xt)):
|
<p>
<div align="right">
Massimo Nocentini<br>
</div>
</p>
<br>
<div align="center">
<b>Abstract</b><br>
In this document we collect a naive <i>type system</i> based on sets.
</div>
```python
from itertools import repeat
from sympy import *
#from type_system import *
```
```python
%run ../../src/commons.py
```
```python
%run ./type-system.py
```
---
```python
init_printing()
```
```python
x,y,m,n,t,z = symbols('x y m n t z', commutative=True)
alpha, beta, gamma, eta = symbols(r'\alpha \beta \gamma \eta', commutative=True)
f,g = Function('f'), Function('g')
```
# Non-commutative symbols
```python
((1/(1-w[0]*z))*(1/(1-w[1]*z))).diff(z).series(z, n=6)
```
```python
define(f(z), z/((1-z)**2),ctor=FEq).series(z,n=10)
```
```python
define(f(z), 1/(1-alpha*z), ctor=FEq).series(z,n=10)
```
```python
define(f(z), 1/(1-(u[0]+u[1])*z), ctor=FEq).series(z,n=4)
```
```python
define(f(z), 1/(1-(o[0]+o[1])*z), ctor=FEq).series(z,n=4)
```
# Exponential gf recap
```python
define(f(z), z*(1/(1-z))*(1/(1-z)), ctor=FEq).series(z,n=10)
```
```python
define(f(z), z**3,ctor=FEq).series(z, n=10, kernel='exponential')
```
```python
define(f(z), exp(z),ctor=FEq).series(z, n=10, kernel='exponential')
```
```python
define(f(z), z*exp(z), ctor=FEq).series(z, n=10, kernel='exponential')
```
```python
define(f(z), z**2*exp(z)/factorial(2,evaluate=False),
ctor=FEq).series(z, n=10, kernel='exponential')
```
```python
define(f(z), z**3*exp(z)/factorial(3, evaluate=False),
ctor=FEq).series(z, n=10, kernel='exponential')
```
```python
define(f(z), (exp(z)+exp(-z))/2, ctor=FEq).series(z, n=20, kernel='exponential')
```
```python
define(f(z), exp(m*z), ctor=FEq).series(z, n=10, kernel='exponential')
```
```python
define(f(z), (exp(z)-1)/z, ctor=FEq).series(z, n=10, kernel='exponential')
```
```python
define(f(z), 1/(1-z), ctor=FEq).series(z, n=10, kernel='exponential')
```
```python
define(f(z), (1/(1-z))*(1/(1-z)), ctor=FEq).series(z, n=10, kernel='exponential')
```
```python
define(f(z), exp(z)**2, ctor=FEq).series(z, n=10, kernel='exponential')
```
# Linear types
```python
tyvar(x).gf()
```
```python
(tyvar(u[0]) * tyvar(u[1]) * tyvar(u[2])).gf()
```
```python
(tyvar(o[0]) * tyvar(o[1]) * tyvar(o[2])).gf()
```
```python
(tyvar(u[0]) | tyvar(u[1]) | tyvar(u[2])).gf()
```
```python
(tyvar(o[0]) | tyvar(o[1]) | tyvar(o[2])).gf()
```
```python
truth.gf() + falsehood.gf()
```
```python
boolean.gf()
```
```python
maybe(tyvar(alpha)[z]).gf()
```
# occupancies
```python
nel = 4
syms=[u[i] for i in range(nel)]
occ_prb, = cp(maybe(tyvar(u[i]*z)) for i in range(nel)).gf() # here we can use the `[z]` notation too.
occ_prb
```
```python
occupancy(occ_prb, syms, objects='unlike', boxes='unlike').series(z)
```
```python
occupancy(occ_prb, syms, objects='unlike', boxes='like').series(z)
```
```python
occupancy(occ_prb, syms, objects='like', boxes='unlike').series(z)
```
```python
occupancy(occ_prb, syms, objects='like', boxes='like').series(z)
```
---
```python
u_hat = symbols(r'␣_0:10')
nel = 3
occ_prb, = cp(tyvar(z*(sum(u[j] for j in range(nel) if j != i))) | tyvar(u_hat[i])
for i in range(nel)).gf()
occ_prb
```
```python
syms=[u[i] for i in range(nel)]+[u_hat[i] for i in range(nel)]
occupancy(occ_prb, syms, objects='unlike', boxes='unlike').series(z)
```
```python
occupancy(occ_prb, syms, objects='unlike', boxes='like').series(z)
```
```python
occupancy(occ_prb, syms, objects='like', boxes='unlike').series(z)
```
```python
occupancy(occ_prb, syms, objects='like', boxes='like').series(z)
```
---
```python
occupancy_problem, = cp(maybe(du(tyvar((u[i]*z)**(j+1)) for j in range(i+1)))
for i in range(3)).gf()
occupancy_problem
```
```python
occupancy(occupancy_problem, syms=[u[i] for i in range(3)], objects='unlike', boxes='unlike').series(z)
```
```python
occupancy(occupancy_problem, syms=[u[i] for i in range(3)], objects='unlike', boxes='like').series(z)
```
```python
occupancy(occupancy_problem, syms=[u[i] for i in range(3)], objects='like', boxes='unlike').series(z)
```
```python
occupancy(occupancy_problem, syms=[u[i] for i in range(3)], objects='like', boxes='like').series(z)
```
```python
((1+t)*(1+t+t**2)*(1+t+t**2+t**3)).series(t,n=10) # just for checking
```
---
```python
def sums_of_powers(boxes, base):
p = IndexedBase('\space')
return cp(cp() | tyvar(p[j]*z**(base**i))
for i in range(0,boxes)
for j in [Pow(base,i,evaluate=False)] # implicit let
).gf()
```
```python
occupancy, = sums_of_powers(boxes=4, base=2)
occupancy.series(z, n=32)
```
$$\times{\left (z,{\space}_{2^{3}},{\space}_{2^{1}},{\space}_{2^{2}},{\space}_{2^{0}} \right )} = \left(z {\space}_{2^{0}} + 1\right) \left(z^{2} {\space}_{2^{1}} + 1\right) \left(z^{4} {\space}_{2^{2}} + 1\right) \left(z^{8} {\space}_{2^{3}} + 1\right) = z^{15} {\space}_{2^{0}} {\space}_{2^{1}} {\space}_{2^{2}} {\space}_{2^{3}} + z^{14} {\space}_{2^{1}} {\space}_{2^{2}} {\space}_{2^{3}} + z^{13} {\space}_{2^{0}} {\space}_{2^{2}} {\space}_{2^{3}} + z^{12} {\space}_{2^{2}} {\space}_{2^{3}} + z^{11} {\space}_{2^{0}} {\space}_{2^{1}} {\space}_{2^{3}} + z^{10} {\space}_{2^{1}} {\space}_{2^{3}} + z^{9} {\space}_{2^{0}} {\space}_{2^{3}} + z^{8} {\space}_{2^{3}} + z^{7} {\space}_{2^{0}} {\space}_{2^{1}} {\space}_{2^{2}} + z^{6} {\space}_{2^{1}} {\space}_{2^{2}} + z^{5} {\space}_{2^{0}} {\space}_{2^{2}} + z^{4} {\space}_{2^{2}} + z^{3} {\space}_{2^{0}} {\space}_{2^{1}} + z^{2} {\space}_{2^{1}} + z {\space}_{2^{0}} + 1$$
```python
occupancy, = sums_of_powers(boxes=4, base=3)
occupancy.series(z, n=100)
```
$$\times{\left (z,{\space}_{3^{0}},{\space}_{3^{2}},{\space}_{3^{1}},{\space}_{3^{3}} \right )} = \left(z {\space}_{3^{0}} + 1\right) \left(z^{3} {\space}_{3^{1}} + 1\right) \left(z^{9} {\space}_{3^{2}} + 1\right) \left(z^{27} {\space}_{3^{3}} + 1\right) = z^{40} {\space}_{3^{0}} {\space}_{3^{1}} {\space}_{3^{2}} {\space}_{3^{3}} + z^{39} {\space}_{3^{1}} {\space}_{3^{2}} {\space}_{3^{3}} + z^{37} {\space}_{3^{0}} {\space}_{3^{2}} {\space}_{3^{3}} + z^{36} {\space}_{3^{2}} {\space}_{3^{3}} + z^{31} {\space}_{3^{0}} {\space}_{3^{1}} {\space}_{3^{3}} + z^{30} {\space}_{3^{1}} {\space}_{3^{3}} + z^{28} {\space}_{3^{0}} {\space}_{3^{3}} + z^{27} {\space}_{3^{3}} + z^{13} {\space}_{3^{0}} {\space}_{3^{1}} {\space}_{3^{2}} + z^{12} {\space}_{3^{1}} {\space}_{3^{2}} + z^{10} {\space}_{3^{0}} {\space}_{3^{2}} + z^{9} {\space}_{3^{2}} + z^{4} {\space}_{3^{0}} {\space}_{3^{1}} + z^{3} {\space}_{3^{1}} + z {\space}_{3^{0}} + 1$$
```python
occupancy, = sums_of_powers(boxes=4, base=5)
occupancy.series(z, n=200)
```
$$\times{\left (z,{\space}_{5^{3}},{\space}_{5^{1}},{\space}_{5^{2}},{\space}_{5^{0}} \right )} = \left(z {\space}_{5^{0}} + 1\right) \left(z^{5} {\space}_{5^{1}} + 1\right) \left(z^{25} {\space}_{5^{2}} + 1\right) \left(z^{125} {\space}_{5^{3}} + 1\right) = z^{156} {\space}_{5^{0}} {\space}_{5^{1}} {\space}_{5^{2}} {\space}_{5^{3}} + z^{155} {\space}_{5^{1}} {\space}_{5^{2}} {\space}_{5^{3}} + z^{151} {\space}_{5^{0}} {\space}_{5^{2}} {\space}_{5^{3}} + z^{150} {\space}_{5^{2}} {\space}_{5^{3}} + z^{131} {\space}_{5^{0}} {\space}_{5^{1}} {\space}_{5^{3}} + z^{130} {\space}_{5^{1}} {\space}_{5^{3}} + z^{126} {\space}_{5^{0}} {\space}_{5^{3}} + z^{125} {\space}_{5^{3}} + z^{31} {\space}_{5^{0}} {\space}_{5^{1}} {\space}_{5^{2}} + z^{30} {\space}_{5^{1}} {\space}_{5^{2}} + z^{26} {\space}_{5^{0}} {\space}_{5^{2}} + z^{25} {\space}_{5^{2}} + z^{6} {\space}_{5^{0}} {\space}_{5^{1}} + z^{5} {\space}_{5^{1}} + z {\space}_{5^{0}} + 1$$
```python
occupancy, = sums_of_powers(boxes=4, base=7)
occupancy.series(z, n=500)
```
$$\times{\left (z,{\space}_{7^{0}},{\space}_{7^{2}},{\space}_{7^{1}},{\space}_{7^{3}} \right )} = \left(z {\space}_{7^{0}} + 1\right) \left(z^{7} {\space}_{7^{1}} + 1\right) \left(z^{49} {\space}_{7^{2}} + 1\right) \left(z^{343} {\space}_{7^{3}} + 1\right) = z^{400} {\space}_{7^{0}} {\space}_{7^{1}} {\space}_{7^{2}} {\space}_{7^{3}} + z^{399} {\space}_{7^{1}} {\space}_{7^{2}} {\space}_{7^{3}} + z^{393} {\space}_{7^{0}} {\space}_{7^{2}} {\space}_{7^{3}} + z^{392} {\space}_{7^{2}} {\space}_{7^{3}} + z^{351} {\space}_{7^{0}} {\space}_{7^{1}} {\space}_{7^{3}} + z^{350} {\space}_{7^{1}} {\space}_{7^{3}} + z^{344} {\space}_{7^{0}} {\space}_{7^{3}} + z^{343} {\space}_{7^{3}} + z^{57} {\space}_{7^{0}} {\space}_{7^{1}} {\space}_{7^{2}} + z^{56} {\space}_{7^{1}} {\space}_{7^{2}} + z^{50} {\space}_{7^{0}} {\space}_{7^{2}} + z^{49} {\space}_{7^{2}} + z^{8} {\space}_{7^{0}} {\space}_{7^{1}} + z^{7} {\space}_{7^{1}} + z {\space}_{7^{0}} + 1$$
```python
assert 393 == 7**0 + 7**2 + 7**3 # _.rhs.rhs.coeff(z, 393)
```
# Differences
```python
difference = (cp() | tyvar(-gamma*z))
ones = nats * difference
ones_gf, = ones.gf()
ones_gf
```
```python
ones_gf(z,1,1,1).series(z, n=10) # check!
```
```python
one_gf, = (ones * difference).gf()
one_gf.series(z, n=10).rhs.rhs.subs({w[0]:1, w[1]:1, gamma:1})
```
---
```python
l = IndexedBase(r'\circ')
def linear_comb_of_powers(boxes, base):
return cp(lst(tyvar(Mul(l[j], z**(base**i), evaluate=False)))
for i in range(boxes)
for j in [Pow(base,i,evaluate=False)]).gf()
```
```python
occupancy, = linear_comb_of_powers(boxes=4, base=Integer(2))
occupancy.series(z, n=8)
```
```python
occupancy, = linear_comb_of_powers(boxes=4, base=3)
occupancy.series(z, n=9)
```
```python
occupancy, = linear_comb_of_powers(boxes=4, base=5)
occupancy.series(z, n=10)
```
```python
def uniform_rv(n):
return tyvar(S(1)/nel) * lst(tyvar(x))
occupancy, = uniform_rv(n=10).gf()
occupancy.series(x,n=10)
```
```python
class lst_structure_w(rec):
def definition(self, alpha):
me = self.me()
return alpha | lst(me)
def label(self):
return r'\mathcal{L}_{w}' # `_s` stands for "structure"
```
```python
lst_structure_w(tyvar(alpha)).gf()
```
```python
[gf.series(alpha) for gf in _]
```
```python
class lst_structure(rec):
def definition(self, alpha):
me = self.me()
return alpha | (lst(me) * me * me)
def label(self):
return r'\mathcal{L}_{s}' # `_s` stands for "structure"
```
```python
lst_structure(tyvar(alpha)).gf()
```
```python
_[0].series(alpha, n=10)
```
```python
class structure(rec):
def definition(self, alpha):
me = self.me()
return alpha | (bin_tree(me) * me * me)
def label(self):
return r'\mathcal{S}'
```
```python
structure(tyvar(alpha)).gf()
```
```python
gf = _[0]
```
```python
gf.simplify()
```
```python
nel = 7
s = gf.simplify().series(alpha, n=nel).rhs.rhs
[s.coeff(alpha, n=i).subs({pow(-1,S(1)/3):-1}).radsimp().powsimp() for i in range(nel)]
```
```python
class structure(rec):
def definition(self, alpha):
me = self.me()
return alpha | (nnbin_tree(me) * me)
def label(self):
return r'\mathcal{S}'
```
```python
structure(tyvar(alpha)).gf()
```
```python
gf = _[0]
```
```python
gf.simplify()
```
```python
nel = 20
s = gf.simplify().series(alpha, n=nel).rhs.rhs
[s.coeff(alpha, n=i).subs({pow(-1,S(1)/3):-1}).radsimp().powsimp() for i in range(nel)]
```
```python
class nn_structure(rec):
def definition(self, alpha):
me = self.me()
return alpha * bin_tree(nnbin_tree(me))
def label(self):
return r'\mathcal{L}_{s}^{+}' # `_s` stands for "structure"
```
```python
nn_structure(tyvar(alpha)).gf()
```
```python
_[0].series(alpha, n=10)
```
```python
class nnlst_structure(rec):
def definition(self, alpha):
me = self.me()
return alpha * lst(nnlst(me))
def label(self):
return r'\mathcal{L}_{s}^{+}' # `_s` stands for "structure"
```
```python
nnlst_structure(tyvar(alpha)).gf()
```
```python
_[0].series(alpha, n=10)
```
```python
class tree(rec):
def definition(self, alpha):
return alpha * lst(self.me())
def label(self):
return r'\mathcal{T}'
```
```python
tree(tyvar(alpha)).gf()
```
```python
_[0].series(alpha, n=10)
```
```python
class combination(rec):
def definition(self, alpha):
me = self.me()
return alpha | (me * me)
def label(self):
return r'\mathcal{C}'
```
```python
combination(tyvar(alpha)).gf()
```
```python
_[0].series(alpha, n=10)
```
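Under the usual reading of `|` as sum and `*` as product, the specification C = α | C×C becomes the functional equation C = α + C², which plain SymPy can solve directly; the coefficients are the Catalan numbers and should agree with the series above. A sketch with a fresh symbol standing in for α:

```python
from sympy import symbols, solve, series, binomial

t = symbols('t')   # stands for the atom alpha
C = symbols('C')

# C = t + C**2  =>  C(t) = (1 - sqrt(1 - 4*t)) / 2, the branch with C(0) = 0
sols = solve(C - t - C**2, C)
C_gf = next(s for s in sols if s.subs(t, 0) == 0)

print(series(C_gf, t, 0, 8))
# the coefficient of t**(n+1) is the Catalan number C_n = binomial(2n, n)/(n+1)
assert series(C_gf, t, 0, 8).removeO().coeff(t, 4) == binomial(6, 3) / 4
```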
```python
class ab_tree(rec):
def definition(self, alpha, beta):
me = self.me()
return beta | (alpha * me * me)
def label(self):
return r'\mathcal{T}_{a,b}'
```
```python
ab_tree_gfs = ab_tree(tyvar(alpha), tyvar(beta)).gf()
ab_tree_gfs
```
```python
ab_tree_gf = ab_tree_gfs[0]
```
```python
fab_eq = FEq(ab_tree_gf.lhs, ab_tree_gf.rhs.series(beta, n=20).removeO(), evaluate=False)
fab_eq
```
```python
fab_eq(x,x)
```
```python
(_*alpha).expand()
```
```python
#with lift_to_Lambda(fab_eq) as F:
B = fab_eq(x,1)
A = fab_eq(1,x)
A,B,
```
```python
(A+B).expand()
```
```python
((1+x)*A).expand()
```
```python
class dyck(rec):
def definition(self, alpha, beta):
me = self.me()
return cp() | (alpha * me * beta * me)
def label(self):
return r'\mathcal{D}'
```
```python
dyck_gfs = dyck(tyvar(alpha*x), tyvar(beta*x)).gf()
dyck_gfs
```
```python
dyck_gf = dyck_gfs[0]
```
```python
dyck_gf.series(x,n=10)
```
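With α = β = 1 the specification reads D = 1 + x²D² under the same gf reading (a plain-SymPy sketch with a fresh variable): only even powers survive, and the coefficient of x^(2n) is the Catalan number Cₙ.

```python
from sympy import symbols, solve, series, catalan

t = symbols('t')
D = symbols('D')

# D = 1 + t**2 * D**2  (alpha = beta = 1), branch with D(0) = 1
sols = solve(D - 1 - t**2 * D**2, D)
D_gf = next(s for s in sols if s.limit(t, 0) == 1)

print(series(D_gf, t, 0, 10))
# only even powers appear; the coefficient of t**(2n) is the Catalan number C_n
expansion = series(D_gf, t, 0, 10).removeO()
assert [expansion.coeff(t, 2*n) for n in range(5)] == [catalan(n) for n in range(5)]
```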
```python
class motzkin(rec):
def definition(self, alpha, beta, gamma):
me = self.me()
return cp() | (alpha * me * beta * me) | (gamma * me)
def label(self):
return r'\mathcal{M}'
```
```python
motzkin_gfs = motzkin(tyvar(alpha*x), tyvar(beta*x), tyvar(gamma*x),).gf()
motzkin_gfs
```
```python
motzkin_gf = motzkin_gfs[0]
```
```python
motzkin_gf.series(x,n=10)
```
```python
motzkin_gf(x,1,1,1).series(x,n=10)
```
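Likewise, with α = β = γ = 1 the specification becomes M = 1 + xM + x²M², whose solution is the Motzkin generating function (a plain-SymPy sketch for comparison with the series above).

```python
from sympy import symbols, solve, series

t = symbols('t')
M = symbols('M')

# M = 1 + t*M + t**2 * M**2  (alpha = beta = gamma = 1), branch with M(0) = 1
sols = solve(M - 1 - t*M - t**2 * M**2, M)
M_gf = next(s for s in sols if s.limit(t, 0) == 1)

expansion = series(M_gf, t, 0, 10)
print(expansion)   # 1, 1, 2, 4, 9, 21, 51, ... : the Motzkin numbers
assert [expansion.removeO().coeff(t, n) for n in range(7)] == [1, 1, 2, 4, 9, 21, 51]
```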
```python
class motzkin_p(rec):
def definition(self, alpha, beta, gamma, eta):
me = self.me()
return cp() | (alpha * me * beta * me) | (gamma * me) | (eta * me)
def label(self):
return r'\mathcal{M}^{+}'
```
```python
motzkinp_gfs = motzkin_p(tyvar(alpha*x), tyvar(beta*x), tyvar(gamma*x), tyvar(eta*x),).gf()
motzkinp_gfs
```
```python
motzkinp_gf = motzkinp_gfs[0]
```
```python
motzkinp_gf.series(x,n=6)
```
```python
motzkinp_gf(x,1,1,1,1).series(x,n=10)
```
```python
class fibo(rec):
def definition(self, alpha, beta):
me = self.me()
return cp() | alpha | ((beta | (alpha * beta)) * me)
def label(self):
return r'\mathcal{F}'
```
```python
fibo_gf, = fibo(tyvar(alpha*x), tyvar(beta*x),).gf()
fibo_gf
```
```python
fibo_gf.series(x,n=10)
```
```python
fibo_gf(1,x,1).series(x,n=10)
```
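For reference, the classical Fibonacci ordinary generating function is x/(1 − x − x²); specializing α = β = 1 in the definition above should give the closely related (1 + x)/(1 − x − x²), whose nth coefficient is F₍ₙ₊₂₎. A plain-SymPy sketch:

```python
from sympy import symbols, series, fibonacci

t = symbols('t')

fib_gf = t / (1 - t - t**2)           # classical Fibonacci ogf: sum F_n t**n
shifted = (1 + t) / (1 - t - t**2)    # assumed specialization alpha = beta = 1

print(series(fib_gf, t, 0, 10))
coeffs = series(shifted, t, 0, 8).removeO()
assert [coeffs.coeff(t, n) for n in range(8)] == [fibonacci(n + 2) for n in range(8)]
```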
```python
lst_of_truth_gf, = lst(tyvar(x)).gf()
lst_of_truth_gf.series(x, n=10, is_exp=True)
```
```python
lst_of_boolean_gf.series(x,n=10,is_exp=True)
```
```python
_.rhs.rhs.subs({w[0]:1,w[1]:1})
```
```python
sum((_.rhs.rhs.coeff(x,i)/factorial(i))*x**i for i in range(1,10))
```
```python
class powerset(ty):
def gf_rhs(self, ty):
return [exp(self.mulfactor() * gf.rhs) for gf in ty.gf()]
def mulfactor(self):
return 1
def label(self):
return r'\mathcal{P}'
```
```python
powerset_of_tyvar_gf, = (2**(nnlst(tyvar(alpha)))).gf()
powerset_of_tyvar_gf
```
```python
powerset_of_tyvar_gf.series(alpha, n=10, is_exp=True)
```
```python
powerset_of_tyvar_gf, = (2**(nnlst(boolean))).gf()
powerset_of_tyvar_gf
```
```python
powerset_of_tyvar_gf.series(x, n=5, is_exp=True)
```
```python
_.rhs.rhs.subs({w[0]:1,w[1]:1})
```
```python
powerset_of_tyvar_gf, _ = (2**(bin_tree(tyvar(alpha)))).gf()
powerset_of_tyvar_gf
```
```python
powerset_of_tyvar_gf.series(alpha, n=10, is_exp=True)
```
```python
l, = (2**(2**(nnlst(tyvar(alpha))))).gf()
define(l.lhs, l.rhs.ratsimp(), ctor=FEq).series(alpha,n=8,is_exp=True)
```
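The exponential construction composes: in the classical labelled setting, sets of non-empty sets are set partitions, with EGF exp(exp(x) − 1) and the Bell numbers as n!·[xⁿ] coefficients. A plain-SymPy sketch for comparison (not necessarily identical to the gf above, since `nnlst` builds lists rather than sets):

```python
from sympy import symbols, exp, series, factorial, bell

t = symbols('t')

partitions_egf = exp(exp(t) - 1)      # sets of non-empty sets = set partitions
expansion = series(partitions_egf, t, 0, 8).removeO()

# n! * [t**n] exp(exp(t) - 1) are the Bell numbers 1, 1, 2, 5, 15, 52, ...
assert [expansion.coeff(t, n) * factorial(n) for n in range(8)] \
       == [bell(n) for n in range(8)]
```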
```python
class cycle(ty):
def gf_rhs(self, ty):
return [log(gf.rhs) for gf in ty.gf()]
def label(self):
return r'\mathcal{C}'
```
```python
cycle_of_tyvar_gf, = (~(lst(tyvar(alpha)))).gf()
cycle_of_tyvar_gf
```
```python
cycle_of_tyvar_gf.series(alpha, n=10, is_exp=True)
```
```python
cycle_of_tyvar_gf, = (~(lst(boolean))).gf()
cycle_of_tyvar_gf
```
```python
cycle_of_tyvar_gf.series(x, n=8, is_exp=True)
```
```python
_.rhs.rhs.subs({w[0]:1,w[1]:1})
```
```python
Pstar_gf, = (2**(~(lst(tyvar(alpha))))).gf()
Pstar_gf.series(alpha, n=10, is_exp=True)
```
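In the labelled reading, sets of cycles are permutations: exp undoes log, so 2^(∼lst) collapses to 1/(1 − x) and the n!-scaled coefficients are n!. A plain-SymPy check of that collapse (assuming ∼lst corresponds to log(1/(1 − x)) as in the `cycle` class above):

```python
from sympy import symbols, exp, log, series, factorial, simplify

t = symbols('t')

perms_egf = exp(log(1 / (1 - t)))     # sets of cycles, in the labelled/EGF reading
assert simplify(perms_egf - 1 / (1 - t)) == 0

expansion = series(1 / (1 - t), t, 0, 8).removeO()
# n! * [t**n] 1/(1-t) = n! : the number of permutations of n labelled atoms
print([expansion.coeff(t, n) * factorial(n) for n in range(8)])
```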
```python
class ipowerset(powerset):
def mulfactor(self):
return -1
```
```python
derangements_gf, = ((-2)**tyvar(alpha)).gf()
derangements_gf.series(alpha, n=10, is_exp=True)
```
```python
derangements_gf, = ((-2)**nnlst(tyvar(alpha))).gf()
derangements_gf.series(alpha, n=10, is_exp=True)
```
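A classical combination of the last two constructions: derangements are sets of cycles of length at least 2, so their EGF is exp(log(1/(1 − x)) − x) = exp(−x)/(1 − x). A plain-SymPy sketch (whether this coincides with the `derangements_gf` computed above depends on the intended reading of (−2)**nnlst, which is not verified here):

```python
from sympy import symbols, exp, series, factorial, subfactorial

t = symbols('t')

# sets of cycles of length >= 2: exp(log(1/(1-t)) - t) = exp(-t)/(1-t)
derangement_egf = exp(-t) / (1 - t)
expansion = series(derangement_egf, t, 0, 8).removeO()

# n! * [t**n] exp(-t)/(1-t) = !n, the derangement numbers 1, 0, 1, 2, 9, 44, ...
assert [expansion.coeff(t, n) * factorial(n) for n in range(8)] \
       == [subfactorial(n) for n in range(8)]
```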
```python
[1,2][1:]
```
```python
def foldr(f, l, i):
if not l:
return i
else:
car, *cdr = l
return f(car, foldr(f, cdr, i))
class arrow(ty):
def label(self):
return r'\rightarrow'
    def gf_rhs(self, alpha, beta):
        return [foldr(lambda gf, acc: Lambda([x], acc(gf.rhs)),
                      gfs[:-1],
                      Lambda([x], gfs[-1].rhs))(x)
                for gfs in self.gfs_space()]
        # unreachable alternative formulation, kept for reference:
        return [foldr(lambda gf, acc: acc**gf.rhs, gfs[:-1], gfs[-1].rhs)
                for gfs in self.gfs_space()]
```
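A quick sanity check that `foldr` groups to the right, on plain Python lists (throwaway values, unrelated to the gf machinery):

```python
# foldr(f, [a, b, c], i) groups as f(a, f(b, f(c, i)))
assert foldr(lambda car, acc: car + acc, [1, 2, 3], 0) == 6
assert foldr(lambda car, acc: [car, acc], [1, 2, 3], []) == [1, [2, [3, []]]]
```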
```python
arr, = arrow(boolean, boolean).gf()
arr
```
```python
arr.series(x,n=5,is_exp=False)
```
```python
_.rhs.rhs.removeO().subs({w[0]:1,w[1]:1})
```
```python
arr, = arrow(lst(boolean), lst(boolean)).gf()
arr
```
```python
arr.series(x,n=5,is_exp=False)
```
```python
_.rhs.rhs.removeO().subs({w[0]:1,w[1]:1})
```
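Whatever weights the arrow construction tracks, the underlying count of functions between finite sets is |B|^|A|; a minimal plain-Python enumeration (hypothetical standalone helper, unrelated to the w-weights above) confirms the familiar numbers.

```python
from itertools import product

def count_functions(domain, codomain):
    """Count all functions domain -> codomain by enumerating tuples of images."""
    return sum(1 for _ in product(codomain, repeat=len(domain)))

booleans = [False, True]
assert count_functions(booleans, booleans) == 2**2   # 4 functions Bool -> Bool
assert count_functions(range(3), booleans) == 2**3   # 8 predicates on 3 elements
assert count_functions(booleans, range(3)) == 3**2   # 9 functions Bool -> {0,1,2}
```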
---
```python
lamda_gf = lamda(tyvar(x)).gf_rhs(tyvar(x))
lamda_gf
```
```python
lamda_gf.rhs.series(x,n=10)
```
---
This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
|
PROGRAM test_ptr4
IMPLICIT NONE
REAL, POINTER :: p1, p2, p3
REAL, TARGET :: a = 11., b = 12.5, c
NULLIFY ( p1, p2, p3) ! Nullify pointers
p1 => a ! p1 points to a
p2 => b ! p2 points to b
p3 => c ! p3 points to c
p3 = p1 + p2 ! Same as c = a + b
WRITE (*,*) 'p3 = ', p3
p2 => p1 ! p2 points to a
p3 = p1 + p2 ! Same as c = a + a
WRITE (*,*) 'p3 = ', p3
p3 = p1 ! Same as c = a
p3 => p1 ! p3 points to a
WRITE (*,*) 'p3 = ', p3
WRITE (*,*) 'a, b, c = ', a, b, c
END PROGRAM test_ptr4
|
! mpiexec -np 4 bin/Debug/WGFM_test2.exe
program mainWGF
use meshread_mod
use linalg_mod
use WMFIE_matrices
use spherePEC_mod
use mpi
implicit none
real(8), parameter :: pi = 4d0 * atan(1d0)
complex(8), parameter :: IU = (0d0, 1d0)
! Variables
type(Mesh) :: msh, msh_evl
complex(8), allocatable :: Zmat(:, :) ! WMFIE matrix
complex(8), allocatable :: Dpot(:, :, :) ! WMFIE potential
complex(8), allocatable :: Esrc(:, :)
complex(8), allocatable :: Msrc(:)
complex(8), allocatable :: Ivec(:)
complex(8), allocatable :: Eexact(:, :) ! Exact electric field solution
complex(8), allocatable :: Esol(:, :) ! WMFIE electric field solution
real(8) :: ExNorm, Error1
integer :: nEvl
! MPI variables
integer :: ierr, N_procs, id
! Parameters
real(8), parameter :: ep0 = 1d0 / (35950207149.4727056d0 * PI) ! vacuum permittivity [F/m]
real(8), parameter :: mu0 = PI * 4d-7 ! vacuum permeability [H/m]
real(8), parameter :: k0 = 2*PI ! free-space wavenumber [1/m]
real(8), parameter :: ep1_r = 1d0 ! upper half-space relative permittivity
real(8), parameter :: ep1 = ep0*ep1_r ! upper half-space permittivity [F/m]
real(8), parameter :: mu1 = mu0 ! upper half-space permeability [H/m]
real(8), parameter :: w = k0 / sqrt(ep0 * mu0) ! angular frequency [rad/s]
real(8), parameter :: k1 = w * sqrt(ep1 * mu1) ! upper half-space wavenumber [1/m]
character(len=100), parameter :: file_msh = 'meshes/hemisphere_plane/hemisphere_plane_A4_h015.msh'
character(len=100), parameter :: file_evl_msh = 'meshes/eval_msh/dome.msh'
real(8), parameter :: sph_radius = 1d0 ! hemisphere radius
real(8), parameter :: win_radius = 4d0 ! Window radius
real(8), parameter :: win_cparam = 0.7d0 ! Window 'c' parameter
! Planewave parameters
integer, parameter :: n_terms = 100 ! number of terms in Mie series
real(8), parameter :: alpha = PI/30d0 ! planewave angle of incidence in (0, pi)
logical, parameter :: TE_mode = .true. ! planewave mode: TEmode = .true.
! TMmode = .false.
!************************************************************************************
! MPI init
call mpi_init(ierr)
call mpi_comm_size(MPI_COMM_WORLD, N_procs, ierr)
call mpi_comm_rank(MPI_COMM_WORLD, id, ierr)
print *, "Total processes:", N_procs, "Process id:", id
! Set window type and size
call set_circ_window(win_radius, win_cparam)
! Load mesh
call load_gmsh(file_msh, msh)
! Load eval mesh
call load_gmsh(file_evl_msh, msh_evl)
nEvl = size(msh_evl%POS, 1)
! Compute WMFIE matrix
if (id == 0) print *, "WMFIE matrix, ", 'nEdges: ', msh%nbEdg
call genWMFIE(Zmat, msh, k1)
if (id == 0) print *, "done"
if (id == 0) then
! Compute source field
call halfsphere_on_plane_source_field(alpha, k1, msh, Esrc, TE_mode)
! WMFIE RHS
call genRHS_WMFIE(msh, Esrc, Msrc)
! Solve
allocate(Ivec(msh%nbEdg))
print *, "solving"
call diagPrecond(Zmat, Msrc)
call gmres_solver(Ivec, Zmat, Msrc)
deallocate(Msrc, Zmat)
print *, "done"
end if
! Compute WMFIE potential
if (id == 0) print *, "Potential"
call genWMFIEPot(Dpot, nEvl, msh_evl%POS, msh, k1)
if (id == 0) print *, "done"
! Compute exact solution
if (id == 0) then
print *, "Exact solution"
call halfsphere_on_plane_scattered_field(msh_evl%POS, k1, sph_radius, &
alpha, n_terms, Eexact, TE_mode)
ExNorm = maxval(sqrt(abs(Eexact(:,1))**2+abs(Eexact(:,2))**2+abs(Eexact(:,3))**2))
print *, "done"
endif
! Compute WMFIE solution
if (id == 0) then
allocate(Esol(nEvl, 3))
Esol(:,1) = matmul(Dpot(:,:,1), Ivec)
Esol(:,2) = matmul(Dpot(:,:,2), Ivec)
Esol(:,3) = matmul(Dpot(:,:,3), Ivec)
! Error
Esol = Esol - Eexact
Error1 = maxval(sqrt(abs(Esol(:,1))**2+abs(Esol(:,2))**2+abs(Esol(:,3))**2))
print *, 'nEdges ', 'h ',' Error WMFIE: '
print *, msh%nbEdg, msh%h, Error1 / ExNorm
! Save errors to evl mesh
call saveToMesh(ESol/ExNorm, file_evl_msh, "error", 'nodes', 'norm')
end if
call MPI_Finalize(ierr)
end program