"""Recalculate background images for camera feed when there is a number of frames has very little difference
than the previious background image.
It uses the new frames to recalculate a new grayscale background image"""
import cv2
import numpy as np
import sys
sys.path.extend("../../")
from app.config import MAX_CONSECUTIVE_BG, THRESH_PIXEL_CHANGE
class Calibrator:
def __init__(self, bg_img, max_consecutive_bg=MAX_CONSECUTIVE_BG):
"""
:param bg_img: initial background image, grayscale
:param max_consecutive_bg: consecutive number of background-like frames
"""
self.frames = [None] * max_consecutive_bg
self.bg_img = bg_img
self.max_consecutive_bg = max_consecutive_bg
self.counter = 0
def run(self, img):
"""
:param img: a regular bgr frame
:return:
"""
assert img.shape[:2] == self.bg_img.shape[:2]
# height, width = img.shape[:2]
if len(img.shape) == 3 and img.shape[2] == 3:
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
diff = cv2.absdiff(img.copy(), self.bg_img)
_, fg_mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
if np.sum(fg_mask / 255) <= THRESH_PIXEL_CHANGE:
self.frames[self.counter] = img
self.counter += 1
else:
for i in range(self.counter):
self.frames[i] = None
self.counter = 0
if self.counter >= self.max_consecutive_bg:
# recalculate background image
self.bg_img = np.median(self.frames, axis=0).astype(dtype=np.uint8)
for i in range(self.max_consecutive_bg):
self.frames = None
self.counter = 0
return self.bg_img
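# Hypothetical usage sketch (the camera index and loop structure are
# illustrative, not part of this module):
#
#     cap = cv2.VideoCapture(0)
#     ok, first = cap.read()
#     calib = Calibrator(cv2.cvtColor(first, cv2.COLOR_BGR2GRAY))
#     while True:
#         ok, frame = cap.read()
#         if not ok:
#             break
#         bg = calib.run(frame)  # current grayscale background image
#     cap.release()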
|
Autograd: automatic differentiation
===================================
Central to all neural networks in PyTorch is the ``autograd`` package.
Let’s first take a brief look at it, and then we will move on to
training our first neural network.
The ``autograd`` package provides automatic differentiation for all operations
on Tensors. It is a define-by-run framework, which means that your backprop is
defined by how your code is run, and that every single iteration can be
different.
Let us see this in simpler terms with some examples.
Tensor
--------
``torch.Tensor`` is the central class of the package. If you set its attribute
``.requires_grad`` as ``True``, it starts to track all operations on it. When
you finish your computation you can call ``.backward()`` and have all the
gradients computed automatically. The gradient for this tensor will be
accumulated into the ``.grad`` attribute.
To stop a tensor from tracking history, you can call ``.detach()`` to detach
it from the computation history, and to prevent future computation from being
tracked.
To prevent tracking history (and using memory), you can also wrap the code block
in ``with torch.no_grad():``. This can be particularly helpful when evaluating a
model because the model may have trainable parameters with `requires_grad=True`,
but for which we don't need the gradients.
There’s one more class which is very important for autograd
implementation - a ``Function``.
``Tensor`` and ``Function`` are interconnected and build up an acyclic
graph, that encodes a complete history of computation. Each tensor has
a ``.grad_fn`` attribute that references a ``Function`` that has created
the ``Tensor`` (except for Tensors created by the user - their
``grad_fn is None``).
If you want to compute the derivatives, you can call ``.backward()`` on
a ``Tensor``. If the ``Tensor`` is a scalar (i.e. it holds one element
of data), you don’t need to specify any arguments to ``backward()``,
however if it has more elements, you need to specify a ``gradient``
argument that is a tensor of matching shape.
```
import torch
```
Create a tensor and set requires_grad=True to track computation with it
```
x = torch.ones(2, 2, requires_grad=True)
```
Do an operation on the tensor:
```
y = x + 2
print(y)
```
``y`` was created as a result of an operation, so it has a ``grad_fn``.
```
print(y.grad_fn)
```
Do more operations on y
```
z = y * y * 3
out = z.mean()
print(z, out)
```
``.requires_grad_( ... )`` changes an existing Tensor's ``requires_grad``
flag in-place. The input flag defaults to ``True`` if not given.
```
a = torch.randn(2, 2)
a = ((a * 3) / (a - 1))
print(a.requires_grad)
a.requires_grad_(True)
print(a.requires_grad)
b = (a * a).sum()
print(b.grad_fn)
```
Gradients
---------
Let's backprop now
Because ``out`` contains a single scalar, ``out.backward()`` is
equivalent to ``out.backward(torch.tensor(1.))``.
```
out.backward()
```
print gradients d(out)/dx
```
print(x.grad)
```
You should get a matrix filled with ``4.5``.
Let’s call the ``out``
*Tensor* $o$.
We have that $o = \frac{1}{4}\sum_i z_i$,
$z_i = 3(x_i+2)^2$ and $z_i\bigr\rvert_{x_i=1} = 27$.
Therefore, by the chain rule,
\begin{equation}
\frac{\partial o}{\partial x_i} = \frac{1}{4}\frac{\partial z_i}{\partial x_i} = \frac{1}{4}\cdot 6(x_i+2) = \frac{3}{2}(x_i+2)
\end{equation}
hence
\begin{equation}
\frac{\partial o}{\partial x_i}\bigr\rvert_{x_i=1} = \frac{9}{2} = 4.5
\end{equation}
You can do many crazy things with autograd!
```
x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
    y = y * 2
print(y)
```
```
gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(gradients)
print(x.grad)
```
You can also stop autograd from tracking history on Tensors
with ``requires_grad=True`` by wrapping the code block in
``with torch.no_grad():``
```
print(x.requires_grad)
print((x ** 2).requires_grad)
with torch.no_grad():
    print((x ** 2).requires_grad)
```
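Similarly, here is a minimal sketch of ``.detach()``, which returns a new
tensor with the same content but no gradient tracking:
```
y = x.detach()
print(y.requires_grad)  # False
print(x.eq(y).all())    # the values are unchanged
```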
**Read Later:**
Documentation of ``autograd`` and ``Function`` is at
http://pytorch.org/docs/autograd
|
In October 1891, upon the formation of the first constitutional government in Zanzibar, Mathews was appointed First Minister, despite some hostility from Sultan Ali bin Said. In this capacity Mathews was "irremovable by the sultan" and answerable only to the Sultan and the British Consul. His position was so strong that one missionary on the island is quoted as saying that his powers defied "analytical examination" and that Mathews really could say "L'état, c'est moi" (I am the state). Mathews was also known as the "Strong man of Zanzibar". The principal departments of government were mostly run by Britons or British Indians, and Mathews' approval was required before they could be removed from office. Mathews was rewarded by the Zanzibar government for his role with his appointment as a first class member of the Order of the Brilliant Star of Zanzibar, which he was granted licence by Queen Victoria to accept and wear on 17 May 1886. Mathews used his position to suppress slavery in the country and in 1889 convinced the Sultan to issue a decree purchasing the freedom of all slaves who had taken refuge in his dominions and, from 1890, prohibiting the slave trade. On 1 February 1891 Mathews was appointed Her Majesty's Commissioner and Consul-General to the British Sphere of Influence in East Africa. He never took up the post and instead chose to remain in Zanzibar.
|
State Before: α : Type u
β : Type v
γ : Type ?u.59142
δ : Type ?u.59145
ε : Type ?u.59148
ζ : Type ?u.59151
inst✝⁵ : TopologicalSpace α
inst✝⁴ : TopologicalSpace β
inst✝³ : TopologicalSpace γ
inst✝² : TopologicalSpace δ
inst✝¹ : TopologicalSpace ε
inst✝ : TopologicalSpace ζ
s : Set α
t : Set β
x✝ : α × β
a : α
b : β
⊢ (a, b) ∈ closure (s ×ˢ t) ↔ (a, b) ∈ closure s ×ˢ closure t State After: no goals Tactic: simp_rw [mem_prod, mem_closure_iff_nhdsWithin_neBot, nhdsWithin_prod_eq, prod_neBot]
|
\section{\module{cgi} ---
Common Gateway Interface support.}
\declaremodule{standard}{cgi}
\modulesynopsis{Common Gateway Interface support, used to interpret
forms in server-side scripts.}
\indexii{WWW}{server}
\indexii{CGI}{protocol}
\indexii{HTTP}{protocol}
\indexii{MIME}{headers}
\index{URL}
Support module for Common Gateway Interface (CGI) scripts.%
\index{Common Gateway Interface}
This module defines a number of utilities for use by CGI scripts
written in Python.
\subsection{Introduction}
\nodename{cgi-intro}
A CGI script is invoked by an HTTP server, usually to process user
input submitted through an HTML \code{<FORM>} or \code{<ISINDEX>} element.
Most often, CGI scripts live in the server's special \file{cgi-bin}
directory. The HTTP server places all sorts of information about the
request (such as the client's hostname, the requested URL, the query
string, and lots of other goodies) in the script's shell environment,
executes the script, and sends the script's output back to the client.
The script's input is connected to the client too, and sometimes the
form data is read this way; at other times the form data is passed via
the ``query string'' part of the URL. This module is intended
to take care of the different cases and provide a simpler interface to
the Python script. It also provides a number of utilities that help
in debugging scripts, and the latest addition is support for file
uploads from a form (if your browser supports it).
The output of a CGI script should consist of two sections, separated
by a blank line. The first section contains a number of headers,
telling the client what kind of data is following. Python code to
generate a minimal header section looks like this:
\begin{verbatim}
print "Content-Type: text/html" # HTML is following
print # blank line, end of headers
\end{verbatim}
The second section is usually HTML, which allows the client software
to display nicely formatted text with header, in-line images, etc.
Here's Python code that prints a simple piece of HTML:
\begin{verbatim}
print "<TITLE>CGI script output</TITLE>"
print "<H1>This is my first CGI script</H1>"
print "Hello, world!"
\end{verbatim}
\subsection{Using the cgi module}
\nodename{Using the cgi module}
Begin by writing \samp{import cgi}. Do not use \samp{from cgi import
*} --- the module defines all sorts of names for its own use or for
backward compatibility that you don't want in your namespace.
When you write a new script, consider adding the line:
\begin{verbatim}
import cgitb; cgitb.enable()
\end{verbatim}
This activates a special exception handler that will display detailed
reports in the Web browser if any errors occur. If you'd rather not
show the guts of your program to users of your script, you can have
the reports saved to files instead, with a line like this:
\begin{verbatim}
import cgitb; cgitb.enable(display=0, logdir="/tmp")
\end{verbatim}
It's very helpful to use this feature during script development.
The reports produced by \refmodule{cgitb} provide information that
can save you a lot of time in tracking down bugs. You can always
remove the \code{cgitb} line later when you have tested your script
and are confident that it works correctly.
To get at submitted form data,
it's best to use the \class{FieldStorage} class. The other classes
defined in this module are provided mostly for backward compatibility.
Instantiate it exactly once, without arguments. This reads the form
contents from standard input or the environment (depending on the
value of various environment variables set according to the CGI
standard). Since it may consume standard input, it should be
instantiated only once.
The \class{FieldStorage} instance can be indexed like a Python
dictionary, and also supports the standard dictionary methods
\method{has_key()} and \method{keys()}. The built-in \function{len()}
is also supported. Form fields containing empty strings are ignored
and do not appear in the dictionary; to keep such values, provide
a true value for the optional \var{keep_blank_values} keyword
parameter when creating the \class{FieldStorage} instance.
For instance, the following code (which assumes that the
\mailheader{Content-Type} header and blank line have already been
printed) checks that the fields \code{name} and \code{addr} are both
set to a non-empty string:
\begin{verbatim}
form = cgi.FieldStorage()
if not (form.has_key("name") and form.has_key("addr")):
    print "<H1>Error</H1>"
    print "Please fill in the name and addr fields."
    return
print "<p>name:", form["name"].value
print "<p>addr:", form["addr"].value
...further form processing here...
\end{verbatim}
Here the fields, accessed through \samp{form[\var{key}]}, are
themselves instances of \class{FieldStorage} (or
\class{MiniFieldStorage}, depending on the form encoding).
The \member{value} attribute of the instance yields the string value
of the field. The \method{getvalue()} method returns this string value
directly; it also accepts an optional second argument as a default to
return if the requested key is not present.
If the submitted form data contains more than one field with the same
name, the object retrieved by \samp{form[\var{key}]} is not a
\class{FieldStorage} or \class{MiniFieldStorage}
instance but a list of such instances. Similarly, in this situation,
\samp{form.getvalue(\var{key})} would return a list of strings.
If you expect this possibility
(when your HTML form contains multiple fields with the same name), use
the \function{getlist()} function, which always returns a list of values (so that you
do not need to special-case the single item case). For example, this
code concatenates any number of username fields, separated by
commas:
\begin{verbatim}
value = form.getlist("username")
usernames = ",".join(value)
\end{verbatim}
If a field represents an uploaded file, accessing the value via the
\member{value} attribute or the \function{getvalue()} method reads the
entire file in memory as a string. This may not be what you want.
You can test for an uploaded file by testing either the \member{filename}
attribute or the \member{file} attribute. You can then read the data at
leisure from the \member{file} attribute:
\begin{verbatim}
fileitem = form["userfile"]
if fileitem.file:
    # It's an uploaded file; count lines
    linecount = 0
    while 1:
        line = fileitem.file.readline()
        if not line: break
        linecount = linecount + 1
\end{verbatim}
The file upload draft standard entertains the possibility of uploading
multiple files from one field (using a recursive
\mimetype{multipart/*} encoding). When this occurs, the item will be
a dictionary-like \class{FieldStorage} item. This can be determined
by testing its \member{type} attribute, which should be
\mimetype{multipart/form-data} (or perhaps another MIME type matching
\mimetype{multipart/*}). In this case, it can be iterated over
recursively just like the top-level form object.
When a form is submitted in the ``old'' format (as the query string or
as a single data part of type
\mimetype{application/x-www-form-urlencoded}), the items will actually
be instances of the class \class{MiniFieldStorage}. In this case, the
\member{list}, \member{file}, and \member{filename} attributes are
always \code{None}.
\subsection{Higher Level Interface}
\versionadded{2.2} % XXX: Is this true ?
The previous section explains how to read CGI form data using the
\class{FieldStorage} class. This section describes a higher level
interface which was added to this class to allow one to do it in a
more readable and intuitive way. The interface doesn't make the
techniques described in previous sections obsolete --- they are still
useful to process file uploads efficiently, for example.
The interface consists of two simple methods. Using these methods,
you can process form data in a generic way, without having to worry
whether one or more values were posted under a single name.
In the previous section, you learned to write the following code anytime
you expected a user to post more than one value under one name:
\begin{verbatim}
item = form.getvalue("item")
if isinstance(item, list):
    pass  # The user is requesting more than one item.
else:
    pass  # The user is requesting only one item.
\end{verbatim}
This situation is common for example when a form contains a group of
multiple checkboxes with the same name:
\begin{verbatim}
<input type="checkbox" name="item" value="1" />
<input type="checkbox" name="item" value="2" />
\end{verbatim}
In most situations, however, there's only one form control with a
particular name in a form, so you expect and need only one value
associated with that name. In that case you might write a script
containing, for example, this code:
\begin{verbatim}
user = form.getvalue("user").upper()
\end{verbatim}
The problem with the code is that you should never expect that a
client will provide valid input to your scripts. For example, if a
curious user appends another \samp{user=foo} pair to the query string,
then the script would crash, because in this situation the
\code{getvalue("user")} method call returns a list instead of a
string. Calling the \method{upper()} method on a list is not valid
(since lists do not have a method of this name) and results in an
\exception{AttributeError} exception.
Therefore, the appropriate way to read form data values used to be to
always check whether the obtained value was a single value or a list
of values. That's annoying and leads to less readable scripts.
A more convenient approach is to use the methods \method{getfirst()}
and \method{getlist()} provided by this higher level interface.
\begin{methoddesc}[FieldStorage]{getfirst}{name\optional{, default}}
This method always returns only one value associated with form field
\var{name}. If more values were posted under that name, the method
returns only the first one. Please note that the order
in which the values are received may vary from browser to browser
and should not be counted on.\footnote{Note that some recent
versions of the HTML specification do state what order the
field values should be supplied in, but knowing whether a
request was received from a conforming browser, or even from a
browser at all, is tedious and error-prone.} If no such form
field or value exists then the method returns the value specified by
the optional parameter \var{default}. This parameter defaults to
\code{None} if not specified.
\end{methoddesc}
\begin{methoddesc}[FieldStorage]{getlist}{name}
This method always returns a list of values associated with form
field \var{name}. The method returns an empty list if no such form
field or value exists for \var{name}. It returns a list consisting
of one item if only one such value exists.
\end{methoddesc}
Using these methods you can write nice compact code:
\begin{verbatim}
import cgi
form = cgi.FieldStorage()
user = form.getfirst("user", "").upper() # This way it's safe.
for item in form.getlist("item"):
    do_something(item)
\end{verbatim}
\subsection{Old classes}
These classes, present in earlier versions of the \module{cgi} module,
are still supported for backward compatibility. New applications
should use the \class{FieldStorage} class.
\class{SvFormContentDict} stores single-value form content as a
dictionary; it assumes each field name occurs in the form only once.
\class{FormContentDict} stores multiple value form content as a
dictionary (the form items are lists of values). Useful if your form
contains multiple fields with the same name.
Other classes (\class{FormContent}, \class{InterpFormContentDict}) are
present for backwards compatibility with really old applications only.
If you still use these and would be inconvenienced if they
disappeared from a future version of this module, drop me a note.
\subsection{Functions}
\nodename{Functions in cgi module}
These are useful if you want more control, or if you want to employ
some of the algorithms implemented in this module in other
circumstances.
\begin{funcdesc}{parse}{fp\optional{, keep_blank_values\optional{,
strict_parsing}}}
Parse a query in the environment or from a file (the file defaults
to \code{sys.stdin}). The \var{keep_blank_values} and
\var{strict_parsing} parameters are passed to \function{parse_qs()}
unchanged.
\end{funcdesc}
\begin{funcdesc}{parse_qs}{qs\optional{, keep_blank_values\optional{,
strict_parsing}}}
Parse a query string given as a string argument (data of type
\mimetype{application/x-www-form-urlencoded}). Data are
returned as a dictionary. The dictionary keys are the unique query
variable names and the values are lists of values for each name.
The optional argument \var{keep_blank_values} is
a flag indicating whether blank values in
URL encoded queries should be treated as blank strings.
A true value indicates that blanks should be retained as
blank strings. The default false value indicates that
blank values are to be ignored and treated as if they were
not included.
The optional argument \var{strict_parsing} is a flag indicating what
to do with parsing errors. If false (the default), errors
are silently ignored. If true, errors raise a ValueError
exception.
Use the \function{\refmodule{urllib}.urlencode()} function to convert
such dictionaries into query strings.
\end{funcdesc}
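For example, a quick interactive sketch (the exact dictionary
formatting may differ):
\begin{verbatim}
>>> cgi.parse_qs("name=Joe+Blow&name=Jane&addr=At+Home")
{'name': ['Joe Blow', 'Jane'], 'addr': ['At Home']}
\end{verbatim}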
\begin{funcdesc}{parse_qsl}{qs\optional{, keep_blank_values\optional{,
strict_parsing}}}
Parse a query string given as a string argument (data of type
\mimetype{application/x-www-form-urlencoded}). Data are
returned as a list of name, value pairs.
The optional argument \var{keep_blank_values} is
a flag indicating whether blank values in
URL encoded queries should be treated as blank strings.
A true value indicates that blanks should be retained as
blank strings. The default false value indicates that
blank values are to be ignored and treated as if they were
not included.
The optional argument \var{strict_parsing} is a flag indicating what
to do with parsing errors. If false (the default), errors
are silently ignored. If true, errors raise a ValueError
exception.
Use the \function{\refmodule{urllib}.urlencode()} function to convert
such lists of pairs into query strings.
\end{funcdesc}
\begin{funcdesc}{parse_multipart}{fp, pdict}
Parse input of type \mimetype{multipart/form-data} (for
file uploads). Arguments are \var{fp} for the input file and
\var{pdict} for a dictionary containing other parameters in
the \mailheader{Content-Type} header.
Returns a dictionary just like \function{parse_qs()}: keys are the
field names, each value is a list of values for that field. This is
easy to use but not much good if you are expecting megabytes to be
uploaded --- in that case, use the \class{FieldStorage} class instead,
which is much more flexible.
Note that this does not parse nested multipart parts --- use
\class{FieldStorage} for that.
\end{funcdesc}
\begin{funcdesc}{parse_header}{string}
Parse a MIME header (such as \mailheader{Content-Type}) into a main
value and a dictionary of parameters.
\end{funcdesc}
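For example:
\begin{verbatim}
>>> cgi.parse_header('text/html; charset="utf-8"')
('text/html', {'charset': 'utf-8'})
\end{verbatim}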
\begin{funcdesc}{test}{}
Robust test CGI script, usable as main program.
Writes minimal HTTP headers and formats all information provided to
the script in HTML form.
\end{funcdesc}
\begin{funcdesc}{print_environ}{}
Format the shell environment in HTML.
\end{funcdesc}
\begin{funcdesc}{print_form}{form}
Format a form in HTML.
\end{funcdesc}
\begin{funcdesc}{print_directory}{}
Format the current directory in HTML.
\end{funcdesc}
\begin{funcdesc}{print_environ_usage}{}
Print a list of useful (used by CGI) environment variables in
HTML.
\end{funcdesc}
\begin{funcdesc}{escape}{s\optional{, quote}}
Convert the characters
\character{\&}, \character{<} and \character{>} in string \var{s} to
HTML-safe sequences. Use this if you need to display text that might
contain such characters in HTML. If the optional flag \var{quote} is
true, the double-quote character (\character{"}) is also translated;
this helps for inclusion in an HTML attribute value, as in \code{<A
HREF="...">}. If the value to be quoted might include single- or
double-quote characters, or both, consider using the
\function{quoteattr()} function in the \refmodule{xml.sax.saxutils}
module instead.
\end{funcdesc}
\subsection{Caring about security \label{cgi-security}}
\indexii{CGI}{security}
There's one important rule: if you invoke an external program (via the
\function{os.system()} or \function{os.popen()} functions, or others
with similar functionality), make very sure you don't pass arbitrary
strings received from the client to the shell. This is a well-known
security hole whereby clever hackers anywhere on the Web can exploit a
gullible CGI script to invoke arbitrary shell commands. Even parts of
the URL or field names cannot be trusted, since the request doesn't
have to come from your form!
To be on the safe side, if you must pass a string gotten from a form
to a shell command, you should make sure the string contains only
alphanumeric characters, dashes, underscores, and periods.
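A minimal sketch of such a check, using only the standard
\refmodule{re} module (the function name is illustrative):
\begin{verbatim}
import re

def is_shell_safe(s):
    # Accept only alphanumerics, dashes, underscores, and periods.
    return re.match(r'^[A-Za-z0-9._-]+$', s) is not None
\end{verbatim}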
\subsection{Installing your CGI script on a \UNIX\ system}
Read the documentation for your HTTP server and check with your local
system administrator to find the directory where CGI scripts should be
installed; usually this is in a directory \file{cgi-bin} in the server tree.
Make sure that your script is readable and executable by ``others''; the
\UNIX{} file mode should be \code{0755} octal (use \samp{chmod 0755
\var{filename}}). Make sure that the first line of the script contains
\code{\#!} starting in column 1 followed by the pathname of the Python
interpreter, for instance:
\begin{verbatim}
#!/usr/local/bin/python
\end{verbatim}
Make sure the Python interpreter exists and is executable by ``others''.
Make sure that any files your script needs to read or write are
readable or writable, respectively, by ``others'' --- their mode
should be \code{0644} for readable and \code{0666} for writable. This
is because, for security reasons, the HTTP server executes your script
as user ``nobody'', without any special privileges. It can only read
(write, execute) files that everybody can read (write, execute). The
current directory at execution time is also different (it is usually
the server's cgi-bin directory) and the set of environment variables
is also different from what you get when you log in. In particular, don't
count on the shell's search path for executables (\envvar{PATH}) or
the Python module search path (\envvar{PYTHONPATH}) to be set to
anything interesting.
If you need to load modules from a directory which is not on Python's
default module search path, you can change the path in your script,
before importing other modules. For example:
\begin{verbatim}
import sys
sys.path.insert(0, "/usr/home/joe/lib/python")
sys.path.insert(0, "/usr/local/lib/python")
\end{verbatim}
(This way, the directory inserted last will be searched first!)
Instructions for non-\UNIX{} systems will vary; check your HTTP server's
documentation (it will usually have a section on CGI scripts).
\subsection{Testing your CGI script}
Unfortunately, a CGI script will generally not run when you try it
from the command line, and a script that works perfectly from the
command line may fail mysteriously when run from the server. There's
one reason why you should still test your script from the command
line: if it contains a syntax error, the Python interpreter won't
execute it at all, and the HTTP server will most likely send a cryptic
error to the client.
Assuming your script has no syntax errors, yet it does not work, you
have no choice but to read the next section.
\subsection{Debugging CGI scripts} \indexii{CGI}{debugging}
First of all, check for trivial installation errors --- reading the
section above on installing your CGI script carefully can save you a
lot of time. If you wonder whether you have understood the
installation procedure correctly, try installing a copy of this module
file (\file{cgi.py}) as a CGI script. When invoked as a script, the file
will dump its environment and the contents of the form in HTML form.
Give it the right mode etc, and send it a request. If it's installed
in the standard \file{cgi-bin} directory, it should be possible to send it a
request by entering a URL into your browser of the form:
\begin{verbatim}
http://yourhostname/cgi-bin/cgi.py?name=Joe+Blow&addr=At+Home
\end{verbatim}
If this gives an error of type 404, the server cannot find the script
--- perhaps you need to install it in a different directory. If it
gives another error, there's an installation problem that
you should fix before trying to go any further. If you get a nicely
formatted listing of the environment and form content (in this
example, the fields should be listed as ``addr'' with value ``At Home''
and ``name'' with value ``Joe Blow''), the \file{cgi.py} script has been
installed correctly. If you follow the same procedure for your own
script, you should now be able to debug it.
The next step could be to call the \module{cgi} module's
\function{test()} function from your script: replace its main code
with the single statement
\begin{verbatim}
cgi.test()
\end{verbatim}
This should produce the same results as those gotten from installing
the \file{cgi.py} file itself.
When an ordinary Python script raises an unhandled exception (for
whatever reason: a typo in a module name, a file that can't be
opened, etc.), the Python interpreter prints a nice traceback and
exits. While the Python interpreter will still do this when your CGI
script raises an exception, most likely the traceback will end up in
one of the HTTP server's log files, or be discarded altogether.
Fortunately, once you have managed to get your script to execute
\emph{some} code, you can easily send tracebacks to the Web browser
using the \refmodule{cgitb} module. If you haven't done so already,
just add the line:
\begin{verbatim}
import cgitb; cgitb.enable()
\end{verbatim}
to the top of your script. Then try running it again; when a
problem occurs, you should see a detailed report that will
likely make apparent the cause of the crash.
If you suspect that there may be a problem in importing the
\refmodule{cgitb} module, you can use an even more robust approach
(which only uses built-in modules):
\begin{verbatim}
import sys
sys.stderr = sys.stdout
print "Content-Type: text/plain"
print
...your code here...
\end{verbatim}
This relies on the Python interpreter to print the traceback. The
content type of the output is set to plain text, which disables all
HTML processing. If your script works, the raw HTML will be displayed
by your client. If it raises an exception, most likely after the
first two lines have been printed, a traceback will be displayed.
Because no HTML interpretation is going on, the traceback will be
readable.
\subsection{Common problems and solutions}
\begin{itemize}
\item Most HTTP servers buffer the output from CGI scripts until the
script is completed. This means that it is not possible to display a
progress report on the client's display while the script is running.
\item Check the installation instructions above.
\item Check the HTTP server's log files. (\samp{tail -f logfile} in a
separate window may be useful!)
\item Always check a script for syntax errors first, by doing something
like \samp{python script.py}.
\item If your script does not have any syntax errors, try adding
\samp{import cgitb; cgitb.enable()} to the top of the script.
\item When invoking external programs, make sure they can be found.
Usually, this means using absolute path names --- \envvar{PATH} is
usually not set to a very useful value in a CGI script.
\item When reading or writing external files, make sure they can be read
or written by the userid under which your CGI script will be running:
this is typically the userid under which the web server is running, or some
explicitly specified userid for a web server's \samp{suexec} feature.
\item Don't try to give a CGI script a set-uid mode. This doesn't work on
most systems, and is a security liability as well.
\end{itemize}
|
r=359.64
https://sandbox.dams.library.ucdavis.edu/fcrepo/rest/collection/sherry-lehmann/catalogs/d7pp4q/media/images/d7pp4q-022/svc:tesseract/full/full/359.64/default.jpg Accept:application/hocr+xml
|
Formal statement is: lemma brouwer_surjective: fixes f :: "'n::euclidean_space \<Rightarrow> 'n" assumes "compact T" and "convex T" and "T \<noteq> {}" and "continuous_on T f" and "\<And>x y. \<lbrakk>x\<in>S; y\<in>T\<rbrakk> \<Longrightarrow> x + (y - f y) \<in> T" and "x \<in> S" shows "\<exists>y\<in>T. f y = x" Informal statement is: If $T$ is a nonempty compact convex set, $f$ is continuous on $T$, and $S$ is a set such that $x + (y - f(y)) \in T$ for all $x \in S$ and $y \in T$, then every $x \in S$ has a preimage in $T$, i.e., there exists $y \in T$ with $f(y) = x$.
|
r=0.45
https://sandbox.dams.library.ucdavis.edu/fcrepo/rest/collection/sherry-lehmann/catalogs/d7hp45/media/images/d7hp45-026/svc:tesseract/full/full/0.45/default.jpg Accept:application/hocr+xml
|
#' Predict method for MBM objects
#'
#' @param x A previously-fit MBM object
#' @param newdata Optional dataset for prediction. If present, it should be a new dataset in
#' the same format used to fit the model (i.e., a site by covariate matrix). If missing,
#' predictions will be for the original data.
#' @param type Whether to return predictions on the link or response scale
#' @details Prediction to new data is possible after the fact for mbm models, however
#' there can be performance penalties for doing so with large models. Thus, it is
#' sometimes preferable to predict during model fitting via the \code{predictX}
#' argument to the \code{\link{mbm}} function.
#'
#' All prediction is done on the link scale.
#'
#' This function caches to disk, thus it is important to ensure that adequate disk
#' space is available when using large prediction datasets.
#' @return A data frame of predictions and standard deviations (on the link scale); use
#' \code{x$y_rev_transform(x$rev_link(predictions$fit))} for the response scale.
#' @export
# predict.mbm <- function(x, newdata, n_samples = NA, GPy_location = NA, pyMsg = FALSE)
predict.mbm <- function(x, newdata, type = c("link", "response")) {
    type <- match.arg(type)
    if (missing(newdata)) {
        newdata <- x$covariates
    } else {
        newdata <- as.matrix(newdata)
        if (ncol(newdata) != ncol(x$x)) {
            stop("newdata must have the same number of variables as the original data")
        }
        # parse newdata into dissimilarity format
        newdata <- x$x_scaling(newdata)
        newdata <- env_dissim(newdata)
        newdata <- as.matrix(newdata[, -which(colnames(newdata) %in% c("site1", "site2"))])
    }
    pr <- x$pyobj$gp$predict_noiseless(newdata)
    pr[[2]] <- sqrt(pr[[2]])
    names(pr) <- c("mean", "sd")
    if (type == "link") {
        as.data.frame(pr)
    } else {
        # back-transform to the response scale using the documented
        # reverse link and reverse transformation
        x$y_rev_transform(x$rev_link(pr[[1]]))
    }
}
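## Hypothetical usage sketch (object names are illustrative):
# fit <- mbm(y, x)                      # a previously-fit MBM object
# pr <- predict(fit, type = "link")     # data frame with columns mean, sd
# resp <- fit$y_rev_transform(fit$rev_link(pr$mean))  # response scale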
#' Spatial MBM prediction
#'
#' @param x A previously-fit MBM object
#' @param prdat New dataset to be used for prediction; either a raster stack or data
#' frame. See 'Details'
#' @param coords matrix with 2 columns containing X-Y coordinates for \code{prdat},
#' required if prdat does not have a \code{coordinates} method.
#' @param method How to compute the spatial predictions; see 'Details'
#' @param ... Other named parameters to pass to \code{\link{predict.mbm}}.
#' @details \code{prdat} can either be a raster stack with new variables (and spatial
#' information) for prediction, or a data frame-like object with previous
#' predictions from \code{\link{predict.mbm}} with 4 columns: 1. site1, 2. site2,
#' 3. mean, and 4. sd.
#'
#' For rasters, if a layer named 'names' is included (recommended), this layer will
#' be used as sitenames, otherwise they will be assigned unique numbers.
#'
#' If \code{method} is "slow", spatial predictions will be computed by first
#' predicting dissimilarity to all pairs of raster cells, then performing an
#' ordination on the dissimilarity matrix to produce an RGB raster of spatial
#' predictions.
#'
#' For method == 'fast' (currently not implemented), computation is sped up by first
#' performing hierarchical clustering on the predicted dissimilarity matrix for the
#' calibration data (which will have already been computed when mbm was run) to
#' produce cell categories. Each raster cell will then be assigned the category of
#' the calibration data point that is closest environmentally. Then, we compute the
#' dissimilarity matrix of the categories (based on the mean environmental
#' values). The ordination is performed as with the slow method on this
#' dissimilarity matrix.
#' @return An object of class mbmSP, which is a list with three named items: \code{fits}
#' is a 3-band gridded SpatialPointsDataFrame giving the first three principal
#' components of predicted pairwise dissimilarity, stdev is a SpatialPointsDataFrame
#' giving the mean of pairwise dissimilarities among all other sites in a given site,
#' and pcoa is the principal coordinates analysis for the fits. Both fits and stdev
#' can be made into rasters using raster::stack() and raster::raster().
#' @export
spatial_predict <- function(x, prdat, coords, method = c('slow', 'fast'), ...)
{
    method <- match.arg(method)
    if (method == 'fast') {
        stop('Fast method is not implemented, use method="slow"')
    } else {
        if (inherits(prdat, c("RasterStack", "RasterLayer", "RasterBrick"))) {
            preds <- predict_mbm_raster(x, prdat)
        } else {
            if (missing(coords))
                coords <- sp::coordinates(prdat)
            fitMat <- make_symmetric(prdat, site1 ~ site2, value.var = "fit")
            sdMat <- make_symmetric(prdat, site1 ~ site2, value.var = "stdev")
            preds <- list(fits = fitMat, stdev = sdMat, coords = coords)
        }
        fits <- x$y_rev_transform(x$rev_link(preds$fits))
        # for fits, use a PCoA to collapse to 3 axes;
        # for stdev, use row means to collapse to one
        fitPCoA <- ade4::dudi.pco(as.dist(fits), scannf = FALSE, nf = 3)
        fitPCoA_scaled <- as.data.frame(apply(as.matrix(fitPCoA$l1), 2, function(x) {
            x <- x - min(x)
            x / max(x)
        }))
        sdMeans <- data.frame(sd = rowMeans(preds$stdev, na.rm = TRUE))
        # make grids
        sp::coordinates(fitPCoA_scaled) <- sp::coordinates(sdMeans) <- preds$coords
        sp::gridded(fitPCoA_scaled) <- sp::gridded(sdMeans) <- TRUE
        ret <- list(fits = fitPCoA_scaled, stdev = sdMeans, pcoa = fitPCoA)
        class(ret) <- c("mbmSP", class(ret))
    }
    ret
}
#' Prediction for MBM from a raster dataset
#'
#' @param x A previously-fit MBM object
#' @param rasterdat Raster stack containing named layers matching the variable names in x (i.e., colnames(x$covariates)[-1]).
#' If a layer named 'names' is included, this layer will be used as sitenames, otherwise they will be assigned unique
#' numbers
#' @param ... Other named parameters to pass to \code{\link{predict.mbm}}.
#' @return A named list; \code{fits} is a cell by cell matrix of predictions (on the link scale; use \code{x$y_rev_transform(x$rev_link(predictions$fit))}
#' for the response scale), \code{stdev} is a cell by cell matrix of standard deviations, and \code{coords} is a matrix of coordinates. Row/column
#' names in \code{fits} and \code{stdev} match the rownames in \code{coords}.
#' @export
predict_mbm_raster <- function(x, rasterdat, ...)
{
    # the -1 drops the first covariate name, which is always distance
    newdata <- raster::getValues(rasterdat[[colnames(x$covariates)[-1]]])
    rows <- complete.cases(newdata)
    newdata <- newdata[rows, ]
    coords <- sp::coordinates(rasterdat)[rows, ]
    if ("names" %in% names(rasterdat)) {
        names <- raster::getValues(rasterdat[['names']])
        names <- names[rows]
    } else {
        names <- 1:nrow(newdata)
    }
    rownames(newdata) <- rownames(coords) <- names
    preds <- predict(x, newdata, ...)
    diagSites <- unique(c(preds$site1, preds$site2))
    predsDF <- rbind(preds, data.frame(site1 = diagSites, site2 = diagSites, fit = 0, stdev = NA))
    # make site by site matrices and fill in lower triangle
    fitMat <- make_symmetric(predsDF, site1 ~ site2, value.var = "fit")
    sdMat <- make_symmetric(predsDF, site1 ~ site2, value.var = "stdev")
    list(fits = fitMat, stdev = sdMat, coords = coords)
}
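## Hypothetical usage sketch for raster-based prediction (file, layer, and
## object names are illustrative):
# rs <- raster::stack("env_layers.tif")  # layers named like colnames(fit$covariates)[-1]
# sp <- spatial_predict(fit, rs)
# fits_r <- raster::stack(sp$fits)       # 3-band raster of scaled PCoA axes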
## deprecated - old hack, delete if nothing breaks
# #' Turn an MBM prediction dataframe into a symmetric matrix
# #' @param DF MBM predfiction dataframe
# #' @param formula A formula to be passed to \code{link{reshape2::acast}}
# #' @param value.var Name of value variable for \code{acast}
# #' @param ... Additional parameters for \code{acast}
# #' @return A symmetric matrix of predictions
# #' @keywords internal
# make_symmetric <- function(DF, formula, value.var, ...)
# {
# mat <- reshape2::acast(DF, formula, value.var = value.var, ...)
# mat[lower.tri(mat)] <- t(mat)[lower.tri(mat)]
# mat
# }
|
section "Quantitative Hoare Logic (due to Carbonneaux)"
theory Quant_Hoare
imports Big_StepT Complex_Main "HOL-Library.Extended_Nat"
begin
abbreviation "eq a b == (And (Not (Less a b)) (Not (Less b a)))"
type_synonym lvname = string
type_synonym assn = "state \<Rightarrow> bool"
type_synonym qassn = "state \<Rightarrow> enat" (* time bound *)
text \<open>The support of an assn2\<close>
abbreviation state_subst :: "state \<Rightarrow> aexp \<Rightarrow> vname \<Rightarrow> state"
("_[_'/_]" [1000,0,0] 999)
where "s[a/x] == s(x := aval a s)"
fun emb :: "bool \<Rightarrow> enat" ("\<up>") where
"emb False = \<infinity>"
| "emb True = 0"
subsection "Validity of quantitative Hoare Triple"
(* this definition refines the definition of validity of normal Hoare Triple for total correctness *)
definition hoare2_valid :: "qassn \<Rightarrow> com \<Rightarrow> qassn \<Rightarrow> bool"
("\<Turnstile>\<^sub>2 {(1_)}/ (_)/ {(1_)}" 50) where
"\<Turnstile>\<^sub>2 {P} c {Q} \<longleftrightarrow> (\<forall>s. P s < \<infinity> \<longrightarrow> (\<exists>t p. ((c,s) \<Rightarrow> p \<Down> t) \<and> P s \<ge> p + Q t))"
subsection "Hoare logic for quantiative reasoning"
inductive
hoare2 :: "qassn \<Rightarrow> com \<Rightarrow> qassn \<Rightarrow> bool" ("\<turnstile>\<^sub>2 ({(1_)}/ (_)/ {(1_)})" 50)
where
Skip: "\<turnstile>\<^sub>2 {%s. eSuc (P s)} SKIP {P}" |
Assign: "\<turnstile>\<^sub>2 {\<lambda>s. eSuc (P (s[a/x]))} x::=a { P }" |
If: "\<lbrakk> \<turnstile>\<^sub>2 {\<lambda>s. P s + \<up>( bval b s)} c\<^sub>1 { Q};
\<turnstile>\<^sub>2 {\<lambda>s. P s + \<up>(\<not> bval b s)} c\<^sub>2 { Q} \<rbrakk>
\<Longrightarrow> \<turnstile>\<^sub>2 {\<lambda>s. eSuc (P s)} IF b THEN c\<^sub>1 ELSE c\<^sub>2 { Q }" |
Seq: "\<lbrakk> \<turnstile>\<^sub>2 { P\<^sub>1 } c\<^sub>1 { P\<^sub>2 }; \<turnstile>\<^sub>2 {P\<^sub>2} c\<^sub>2 { P\<^sub>3 }\<rbrakk> \<Longrightarrow> \<turnstile>\<^sub>2 {P\<^sub>1} c\<^sub>1;;c\<^sub>2 {P\<^sub>3}" |
While:
"\<lbrakk> \<turnstile>\<^sub>2 { %s. I s + \<up>(bval b s) } c { %t. I t + 1 } \<rbrakk>
\<Longrightarrow> \<turnstile>\<^sub>2 {\<lambda>s. I s + 1 } WHILE b DO c {\<lambda>s. I s + \<up>(\<not> bval b s) }" |
conseq: "\<lbrakk> \<turnstile>\<^sub>2 {P}c{Q} ; \<And>s. P s \<le> P' s ; \<And>s. Q' s \<le> Q s \<rbrakk> \<Longrightarrow>
\<turnstile>\<^sub>2 {P'}c{ Q'}"
text \<open>derived rules\<close>
lemma Assign': "\<forall>s. P s \<ge> eSuc ( Q(s[a/x])) \<Longrightarrow> \<turnstile>\<^sub>2 {P} x ::= a {Q}"
by (simp add: strengthen_pre[OF _ Assign])
lemma progress: "(c, s) \<Rightarrow> p \<Down> t \<Longrightarrow> p > 0"
by (induct rule: big_step_t.induct, auto)
lemma FalseImplies: "\<turnstile>\<^sub>2 {%s. \<infinity>} c { Q}"
apply (induction c arbitrary: Q)
apply(auto intro: hoare2.Skip hoare2.Assign hoare2.Seq hoare2.conseq)
subgoal apply(rule hoare2.conseq) apply(rule hoare2.If[where P="%s. \<infinity>"]) by(auto intro: hoare2.If hoare2.conseq)
subgoal apply(rule hoare2.conseq) apply(rule hoare2.While[where I="%s. \<infinity>"]) apply(rule hoare2.conseq) by auto
done
subsection "Soundness"
text\<open>The soundness theorem:\<close>
lemma help1: assumes " enat a + X \<le> Y"
"enat b + Z \<le> X"
shows "enat (a + b) + Z \<le> Y"
using assms by (metis ab_semigroup_add_class.add_ac(1) add_left_mono order_trans plus_enat_simps(1))
lemma help2': assumes "enat p + INV t \<le> INV s"
"0 < p" "INV s = enat n"
shows "INV t < INV s"
using assms iadd_le_enat_iff by auto
lemma help2: assumes "enat p + INV t + 1 \<le> INV s"
"INV s = enat n"
shows "INV t < INV s"
using assms le_less_trans not_less_iff_gr_or_eq by fastforce
lemma Seq_sound: assumes "\<Turnstile>\<^sub>2 {P1} C1 {P2}"
"\<Turnstile>\<^sub>2 {P2} C2 {P3}"
shows "\<Turnstile>\<^sub>2 {P1} C1 ;; C2 {P3}"
unfolding hoare2_valid_def
proof (safe)
fix s
assume ninfP1: "P1 s < \<infinity>"
with assms(1)[unfolded hoare2_valid_def] obtain t1 p1
where 1: "(C1, s) \<Rightarrow> p1 \<Down> t1" and q1: "enat p1 + P2 t1 \<le> P1 s" by blast
with ninfP1 have ninfP2: "P2 t1 < \<infinity>"
using not_le by fastforce
with assms(2)[unfolded hoare2_valid_def] obtain t2 p2
where 2: "(C2, t1) \<Rightarrow> p2 \<Down> t2" and q2: "enat p2 + P3 t2 \<le> P2 t1" by blast
with ninfP2 have ninfP3: "P3 t2 < \<infinity>"
using not_le by fastforce
from Big_StepT.Seq[OF 1 2] have bigstep: "(C1;; C2, s) \<Rightarrow> p1 + p2 \<Down> t2" by simp
from help1[OF q1 q2] have potential: "enat (p1 + p2) + P3 t2 \<le> P1 s" .
show "\<exists>t p. (C1;; C2, s) \<Rightarrow> p \<Down> t \<and> enat p + P3 t \<le> P1 s "
apply(rule exI[where x="t2"])
apply(rule exI[where x="p1 + p2"])
using bigstep potential by simp
qed
theorem hoare2_sound: "\<turnstile>\<^sub>2 {P}c{ Q} \<Longrightarrow> \<Turnstile>\<^sub>2 {P}c{ Q}"
proof(induction rule: hoare2.induct)
case (Skip P)
show ?case unfolding hoare2_valid_def apply(safe)
subgoal for s apply(rule exI[where x=s]) apply(rule exI[where x="Suc 0"])
by (auto simp: eSuc_enat_iff eSuc_enat)
done
next
case (Assign P a x)
show ?case unfolding hoare2_valid_def apply(safe)
subgoal for s apply(rule exI[where x="s[a/x]"]) apply(rule exI[where x="Suc 0"])
by (auto simp: eSuc_enat_iff eSuc_enat)
done
next
case (Seq P1 C1 P2 C2 P3)
thus ?case using Seq_sound by auto
next
case (If P b c1 Q c2)
show ?case unfolding hoare2_valid_def
proof (safe)
fix s
assume "eSuc (P s) < \<infinity>"
then have i: "P s < \<infinity>"
using enat_ord_simps(4) by fastforce
show "\<exists>t p. (IF b THEN c1 ELSE c2, s) \<Rightarrow> p \<Down> t \<and> enat p + Q t \<le> eSuc (P s)"
proof(cases "bval b s")
case True
with i have "P s + emb (bval b s) < \<infinity>" by simp
with If(3)[unfolded hoare2_valid_def] obtain p t
where 1: "(c1, s) \<Rightarrow> p \<Down> t" and q: "enat p + Q t \<le> P s + emb (bval b s)" by blast
from Big_StepT.IfTrue[OF True 1] have 2: "(IF b THEN c1 ELSE c2, s) \<Rightarrow> p + 1 \<Down> t" by simp
show ?thesis apply(rule exI[where x=t]) apply(rule exI[where x="p+1"])
apply(safe) apply(fact)
using q True apply(simp)
by (metis eSuc_enat eSuc_ile_mono iadd_Suc)
next
case False
with i have "P s + emb (~ bval b s) < \<infinity>" by simp
with If(4)[unfolded hoare2_valid_def] obtain p t
where 1: "(c2, s) \<Rightarrow> p \<Down> t" and q: "enat p + Q t \<le> P s + emb (~ bval b s)" by blast
from Big_StepT.IfFalse[OF False 1] have 2: "(IF b THEN c1 ELSE c2, s) \<Rightarrow> p + 1 \<Down> t" by simp
show ?thesis apply(rule exI[where x=t]) apply(rule exI[where x="p+1"])
apply(safe) apply(fact)
using q False apply(simp)
by (metis eSuc_enat eSuc_ile_mono iadd_Suc)
qed
qed
next
case (conseq P c Q P' Q')
show ?case unfolding hoare2_valid_def
proof (safe)
fix s
assume "P' s < \<infinity>"
with conseq(2) have "P s < \<infinity>"
using le_less_trans by blast
with conseq(4)[unfolded hoare2_valid_def] obtain p t where "(c, s) \<Rightarrow> p \<Down> t" "enat p + Q t \<le> P s" by blast
with conseq(2,3) show "\<exists>t p. (c, s) \<Rightarrow> p \<Down> t \<and> enat p + Q' t \<le> P' s"
by (meson add_left_mono dual_order.trans)
qed
next
case (While INV b c)
from While(2)[unfolded hoare2_valid_def]
have WH2: "\<And>s. INV s + \<up> (bval b s) < \<infinity> \<Longrightarrow> (\<exists>t p. (c, s) \<Rightarrow> p \<Down> t \<and> enat p + INV t + 1 \<le> INV s + \<up> (bval b s))"
by (simp add: add.commute add.left_commute)
show ?case unfolding hoare2_valid_def
proof (safe)
fix s
assume ninfINV: "INV s + 1 < \<infinity>"
then have "INV s < \<infinity>"
using enat_ord_simps(4) by fastforce
then obtain n where i: "INV s = enat n" using not_infinity_eq
by auto
text \<open>In order to prove validity, we induct on the value of the Invariant, which is a finite number
and decreases in every loop iteration. For each step we show that validity holds.\<close>
have "INV s = enat n \<Longrightarrow> \<exists>t p. (WHILE b DO c, s) \<Rightarrow> p \<Down> t \<and> enat p + (INV t + emb (\<not> bval b t)) \<le> INV s + 1"
proof (induct n arbitrary: s rule: less_induct)
case (less n)
show ?case
proof (cases "bval b s")
case False
show ?thesis
using WhileFalse[OF False] one_enat_def by fastforce
next
case True
\<comment> \<open>obtain the loop body from the outer IH\<close>
with less(2) WH2 obtain t p
where o: "(c, s) \<Rightarrow> p \<Down> t"
and q: "enat p + INV t + 1 \<le> INV s " by force
\<comment> \<open>prepare premises to ...\<close>
from q have g: "INV t < INV s"
using help2 less(2) by metis
then have ninfINVt: "INV t < \<infinity>" using less(2)
using enat_ord_simps(4) by fastforce
then obtain n' where i: "INV t = enat n'" using not_infinity_eq
by auto
with less(2) have ii: "n' < n"
using g by auto
\<comment> \<open>... obtain the tail of the While loop from the inner IH\<close>
from i ii less(1) obtain t2 p2
where o2: "(WHILE b DO c, t) \<Rightarrow> p2 \<Down> t2"
and q2: "enat p2 + (INV t2 + emb (\<not> bval b t2)) \<le> INV t + 1" by blast
have ende: "~ bval b t2"
apply(rule ccontr) apply(simp) using q2 ninfINVt
by (simp add: i one_enat_def)
\<comment> \<open>combine body and tail to one loop unrolling:\<close>
\<comment> \<open>- the Bigstep Semantic\<close>
from WhileTrue[OF True o o2] have BigStep: "(WHILE b DO c, s) \<Rightarrow> 1 + p + p2 \<Down> t2" by simp
\<comment> \<open>- the potentialPreservation\<close>
from ende q2 have q2': "enat p2 + INV t2 \<le> INV t + 1" by simp
have potentialPreservation: "enat (1 + p + p2) + (INV t2 + \<up> (\<not> bval b t2)) \<le> INV s + 1"
proof -
have "enat (1 + p + p2) + (INV t2 + \<up> (\<not> bval b t2))
= enat (Suc (p + p2)) + INV t2" using ende by simp
also have "\<dots> = enat (Suc p) + enat p2 + INV t2" by fastforce
also have "\<dots> \<le> enat (Suc p) + INV t + 1" using q2'
by (metis ab_semigroup_add_class.add_ac(1) add_left_mono)
also have "\<dots> \<le> INV s + 1" using q
by (metis (no_types, hide_lams) add.commute add_left_mono eSuc_enat iadd_Suc plus_1_eSuc(1))
finally show "enat (1 + p + p2) + (INV t2 + \<up> (\<not> bval b t2)) \<le> INV s + 1" .
qed
\<comment> \<open>finally combine BigStep Semantic and TimeBound\<close>
show ?thesis
apply(rule exI[where x=t2])
apply(rule exI[where x= "1 + p + p2"])
apply(safe)
by(fact BigStep potentialPreservation)+
qed
qed
from this[OF i] show "\<exists>t p. (WHILE b DO c, s) \<Rightarrow> p \<Down> t \<and> enat p + (INV t + emb (\<not> bval b t)) \<le> INV s + 1" .
qed
qed
subsection "Completeness"
(* the WeakestPrePotential *)
definition wp2 :: "com \<Rightarrow> qassn \<Rightarrow> qassn" ("wp\<^sub>2") where
"wp\<^sub>2 c Q = (\<lambda>s. (if (\<exists>t p. (c,s) \<Rightarrow> p \<Down> t \<and> Q t < \<infinity>) then enat (THE p. \<exists>t. (c,s) \<Rightarrow> p \<Down> t) + Q (THE t. \<exists>p. (c,s) \<Rightarrow> p \<Down> t) else \<infinity>))"
lemma wp2_alt: "wp\<^sub>2 c Q = (\<lambda>s. (if \<down>(c,s) then enat (\<down>\<^sub>t (c, s)) + Q (\<down>\<^sub>s (c, s)) else \<infinity>))"
apply(rule ext) by(auto simp: bigstepT_the_state wp2_def split: if_split)
lemma wp2_Assign[simp]: "wp\<^sub>2 (x ::= e) Q = (\<lambda>s. eSuc (Q (s(x := aval e s))))"
by (auto intro!: ext simp: wp2_def ASSp ASSt ASSnot eSuc_enat)
lemma wp2_Seq[simp]: "wp\<^sub>2 (c\<^sub>1;;c\<^sub>2) Q = wp\<^sub>2 c\<^sub>1 (wp\<^sub>2 c\<^sub>2 Q)"
unfolding wp2_def (* what rule is doing: it uses the extensionality (ext) of functions *)
proof (rule, case_tac "\<exists>t p. (c\<^sub>1;; c\<^sub>2, s) \<Rightarrow> p \<Down> t \<and> Q t < \<infinity>", goal_cases)
case (1 s)
then obtain u p where ter: "(c\<^sub>1;; c\<^sub>2, s) \<Rightarrow> p \<Down> u" and Q: "Q u < \<infinity>" by blast
then obtain t p1 p2 where i: "(c\<^sub>1 , s) \<Rightarrow> p1 \<Down> t" and ii: "(c\<^sub>2 , t) \<Rightarrow> p2 \<Down> u" and p: "p1 + p2 = p" by blast
from bigstepT_the_state[OF i] have t: "\<down>\<^sub>s (c\<^sub>1, s) = t"
by blast
from bigstepT_the_state[OF ii] have t2: "\<down>\<^sub>s (c\<^sub>2, t) = u"
by blast
from bigstepT_the_cost[OF i] have firstcost: "\<down>\<^sub>t (c\<^sub>1, s) = p1"
by blast
from bigstepT_the_cost[OF ii] have secondcost: "\<down>\<^sub>t (c\<^sub>2, t) = p2"
by blast
have totalcost: "\<down>\<^sub>t(c\<^sub>1;; c\<^sub>2, s) = p1 + p2"
using bigstepT_the_cost[OF ter] p by auto
have totalstate: "\<down>\<^sub>s(c\<^sub>1;; c\<^sub>2, s) = u"
using bigstepT_the_state[OF ter] by auto
have c2: "\<exists>ta p. (c\<^sub>2, t) \<Rightarrow> p \<Down> ta \<and> Q ta < \<infinity>"
apply(rule exI[where x= u])
apply(rule exI[where x= p2]) apply safe apply fact+ done
have C: "\<exists>t p. (c\<^sub>1, s) \<Rightarrow> p \<Down> t \<and> (if \<exists>ta p. (c\<^sub>2, t) \<Rightarrow> p \<Down> ta \<and> Q ta < \<infinity> then enat (THE p. Ex (big_step_t (c\<^sub>2, t) p)) + Q (THE ta. \<exists>p. (c\<^sub>2, t) \<Rightarrow> p \<Down> ta) else \<infinity>) < \<infinity>"
apply(rule exI[where x=t])
apply(rule exI[where x=p1])
apply safe
apply fact
apply(simp only: c2 if_True)
using Q bigstepT_the_state ii by auto
show ?case
apply(simp only: 1 if_True t t2 c2 C totalcost totalstate firstcost secondcost) by fastforce
next
case (2 s)
show ?case apply(simp only: 2 if_False)
apply auto using 2
by force
qed
lemma wp2_If[simp]:
"wp\<^sub>2 (IF b THEN c\<^sub>1 ELSE c\<^sub>2) Q = (\<lambda>s. eSuc (wp\<^sub>2 (if bval b s then c\<^sub>1 else c\<^sub>2) Q s))"
apply (auto simp: wp2_def fun_eq_iff)
subgoal for x t p i ta ia xa apply(simp only: IfTrue[THEN bigstepT_the_state])
apply(simp only: IfTrue[THEN bigstepT_the_cost])
apply(simp only: bigstepT_the_cost bigstepT_the_state)
by (simp add: eSuc_enat)
apply(simp only: bigstepT_the_state bigstepT_the_cost) apply force
apply(simp only: bigstepT_the_state bigstepT_the_cost)
proof(goal_cases)
case (1 x t p i ta ia xa)
note f= IfFalse[THEN bigstepT_the_state, of b x c\<^sub>2 xa ta "Suc xa" c\<^sub>1, simplified, OF 1(4) 1(5)]
note f2= IfFalse[THEN bigstepT_the_cost, of b x c\<^sub>2 xa ta "Suc xa" c\<^sub>1, simplified, OF 1(4) 1(5)]
note g= bigstep_det[OF 1(1) 1(5)]
show ?case
apply(simp only: f f2) using 1 g
by (simp add: eSuc_enat)
next
case 2
then
show ?case
apply(simp only: bigstepT_the_state bigstepT_the_cost) apply force done
qed
lemma assumes b: "bval b s"
shows wp2WhileTrue: " wp\<^sub>2 c (wp\<^sub>2 (WHILE b DO c) Q) s + 1 \<le> wp\<^sub>2 (WHILE b DO c) Q s"
proof (cases "\<exists>t p. (WHILE b DO c, s) \<Rightarrow> p \<Down> t \<and> Q t < \<infinity>")
case True
then obtain t p where w: "(WHILE b DO c, s) \<Rightarrow> p \<Down> t" and q: "Q t < \<infinity>" by blast
from b w obtain p1 p2 t1 where c: "(c, s) \<Rightarrow> p1 \<Down> t1" and w': "(WHILE b DO c, t1) \<Rightarrow> p2 \<Down> t" and sum: "1 + p1 + p2 = p"
by auto
have g: "\<exists>ta p. (WHILE b DO c, t1) \<Rightarrow> p \<Down> ta \<and> Q ta < \<infinity>"
apply(rule exI[where x="t"])
apply(rule exI[where x="p2"])
apply safe apply fact+ done
have h: "\<exists>t p. (c, s) \<Rightarrow> p \<Down> t \<and> (if \<exists>ta p. (WHILE b DO c, t) \<Rightarrow> p \<Down> ta \<and> Q ta < \<infinity> then enat (THE p. Ex (big_step_t (WHILE b DO c, t) p)) + Q (THE ta. \<exists>p. (WHILE b DO c, t) \<Rightarrow> p \<Down> ta) else \<infinity>) < \<infinity>"
apply(rule exI[where x="t1"])
apply(rule exI[where x="p1"])
apply safe apply fact
apply(simp only: g if_True) using bigstepT_the_state bigstepT_the_cost w' q by(auto)
have "wp\<^sub>2 c (wp\<^sub>2 (WHILE b DO c) Q) s + 1 = enat p + Q t"
unfolding wp2_def apply(simp only: h if_True)
apply(simp only: bigstepT_the_state[OF c] bigstepT_the_cost[OF c] g if_True bigstepT_the_state[OF w'] bigstepT_the_cost[OF w']) using sum
by (metis One_nat_def ab_semigroup_add_class.add_ac(1) add.commute add.right_neutral eSuc_enat plus_1_eSuc(2) plus_enat_simps(1))
also have "\<dots> = wp\<^sub>2 (WHILE b DO c) Q s"
unfolding wp2_def apply(simp only: True if_True)
using bigstepT_the_state bigstepT_the_cost w apply(simp) done
finally show ?thesis by simp
next
case False
have "wp\<^sub>2 (WHILE b DO c) Q s = \<infinity>"
unfolding wp2_def
apply(simp only: False if_False) done
then show ?thesis by auto
qed
lemma assumes b: "bval b s"
shows wp2WhileTrue': "wp\<^sub>2 c (wp\<^sub>2 (WHILE b DO c) Q) s + 1 = wp\<^sub>2 (WHILE b DO c) Q s"
proof (cases "\<exists>p t. (WHILE b DO c, s) \<Rightarrow> p \<Down> t")
case True
then obtain t p where w: "(WHILE b DO c, s) \<Rightarrow> p \<Down> t" by blast
from b w obtain p1 p2 t1 where c: "(c, s) \<Rightarrow> p1 \<Down> t1" and w': "(WHILE b DO c, t1) \<Rightarrow> p2 \<Down> t" and sum: "1 + p1 + p2 = p"
by auto
then have z: "\<down> (c, s)" and z2: "\<down> (WHILE b DO c, t1)" by auto
have "wp\<^sub>2 c (wp\<^sub>2 (WHILE b DO c) Q) s + 1 = enat p + Q t"
unfolding wp2_alt apply(simp only: z if_True)
apply(simp only: bigstepT_the_state[OF c] bigstepT_the_cost[OF c] z2 if_True bigstepT_the_state[OF w'] bigstepT_the_cost[OF w'])
using sum
by (metis One_nat_def ab_semigroup_add_class.add_ac(1) add.commute add.right_neutral eSuc_enat plus_1_eSuc(2) plus_enat_simps(1))
also have "\<dots> = wp\<^sub>2 (WHILE b DO c) Q s"
unfolding wp2_alt apply(simp only: True if_True)
using bigstepT_the_state bigstepT_the_cost w apply(simp) done
finally show ?thesis by simp
next
case False
have "\<not> (\<down> (WHILE b DO c, \<down>\<^sub>s(c,s)) \<and> \<down> (c, s))"
proof (rule)
assume P: "\<down> (WHILE b DO c, \<down>\<^sub>s (c, s)) \<and> \<down> (c, s)"
then obtain t s' where A: "(c,s) \<Rightarrow> t \<Down> s'" by blast
with A P have "\<down> (WHILE b DO c, s')" using bigstepT_the_state by auto
then obtain t' s'' where B: "(WHILE b DO c,s') \<Rightarrow> t' \<Down> s''" by auto
have "(WHILE b DO c, s) \<Rightarrow> 1+t+t' \<Down> s''" apply(rule WhileTrue) using b A B by auto
then have "\<down> (WHILE b DO c, s)" by auto
thus "False" using False by auto
qed
then have "\<not>\<down> (WHILE b DO c, \<down>\<^sub>s(c,s)) \<or> \<not>\<down> (c, s)" by simp
then show ?thesis apply rule
subgoal unfolding wp2_alt apply(simp only: if_False False) by auto
subgoal unfolding wp2_alt apply(simp only: if_False False) by auto
done
qed
lemma assumes b: "~ bval b s"
shows wp2WhileFalse: " Q s + 1 \<le> wp\<^sub>2 (WHILE b DO c) Q s"
proof (cases "\<exists>t p. (WHILE b DO c, s) \<Rightarrow> p \<Down> t \<and> Q t < \<infinity>")
case True
with b obtain t p where w: "(WHILE b DO c, s) \<Rightarrow> p \<Down> t" and "Q t < \<infinity>" by blast
with b have c: "s=t" "p=Suc 0" by auto
have " wp\<^sub>2 (WHILE b DO c) Q s = Q s + 1"
unfolding wp2_def apply(simp only: True if_True)
using w c bigstepT_the_cost bigstepT_the_state by(auto simp add: one_enat_def)
then show ?thesis by auto
next
case False
have "wp\<^sub>2 (WHILE b DO c) Q s = \<infinity>"
unfolding wp2_def
apply(simp only: False if_False) done
then show ?thesis by auto
qed
lemma wp2_is_pre: "\<turnstile>\<^sub>2 {wp\<^sub>2 c Q} c { Q}"
proof (induction c arbitrary: Q)
case SKIP show ?case by (auto intro: hoare2.Skip)
next
case Assign show ?case by (auto intro:hoare2.Assign)
next
case Seq thus ?case by (auto intro:hoare2.Seq)
next
case (If x1 c1 c2 Q) thus ?case
apply (auto intro!: hoare2.If )
apply(rule hoare2.conseq)
apply(auto)
apply(rule hoare2.conseq)
apply(auto)
done
next
case (While b c)
show ?case
apply(rule conseq)
apply(rule hoare2.While[where I="%s. (if bval b s then wp\<^sub>2 c (wp\<^sub>2 (WHILE b DO c) Q) s else Q s)"])
apply(rule conseq)
apply(rule While[of "wp\<^sub>2 (WHILE b DO c) Q"])
using wp2While by auto
qed
lemma wp2_is_weakestprePotential1: "\<Turnstile>\<^sub>2 {P}c{Q} \<Longrightarrow> (\<forall>s. wp\<^sub>2 c Q s \<le> P s)"
apply(auto simp: hoare2_valid_def wp2_def)
proof (goal_cases)
case (1 s t p i)
show ?case
proof(cases "P s < \<infinity>")
case True
with 1(1) obtain t p' where i: "(c, s) \<Rightarrow> p' \<Down> t" and ii: "enat p' + Q t \<le> P s"
by auto
show ?thesis apply(simp add: bigstepT_the_state[OF i] bigstepT_the_cost[OF i] ii) done
qed simp
qed force
lemma wp2_is_weakestprePotential2: "(\<forall>s. wp\<^sub>2 c Q s \<le> P s) \<Longrightarrow> \<Turnstile>\<^sub>2 {P}c{Q}"
apply(auto simp: hoare2_valid_def wp2_def)
proof (goal_cases)
case (1 s i)
then have A: "(if \<exists>t. (\<exists>p. (c, s) \<Rightarrow> p \<Down> t) \<and> (\<exists>i. Q t = enat i) then enat (THE p. Ex (big_step_t (c, s) p)) + Q (THE t. \<exists>p. (c, s) \<Rightarrow> p \<Down> t) else \<infinity>) \<le> P s"
by fast
show ?case
proof (cases "\<exists>t. (\<exists>p. (c, s) \<Rightarrow> p \<Down> t) \<and> (\<exists>i. Q t = enat i)")
case True
then obtain t p where i: "(c, s) \<Rightarrow> p \<Down> t" by blast
from True A have "enat p + Q t \<le> P s" by (simp add: bigstepT_the_cost[OF i] bigstepT_the_state[OF i])
then have "(c, s) \<Rightarrow> p \<Down> t \<and> enat p + Q t \<le> enat i" using 1(2) i by simp
then show ?thesis by auto
next
case False
with A have "P s \<ge> \<infinity>" by auto
then show ?thesis using 1 by auto
qed
qed
theorem wp2_is_weakestprePotential: "(\<forall>s. wp\<^sub>2 c Q s \<le> P s) \<longleftrightarrow> \<Turnstile>\<^sub>2 {P}c{Q}"
using wp2_is_weakestprePotential2 wp2_is_weakestprePotential1 by metis
theorem hoare2_complete: "\<Turnstile>\<^sub>2 {P}c{Q} \<Longrightarrow> \<turnstile>\<^sub>2 {P}c{ Q}"
apply(rule conseq[OF wp2_is_pre, where Q'=Q and Q=Q, simplified])
using wp2_is_weakestprePotential1 by blast
corollary hoare2_sound_complete: " \<turnstile>\<^sub>2 {P}c{Q} \<longleftrightarrow> \<Turnstile>\<^sub>2 {P}c{ Q}"
by (metis hoare2_sound hoare2_complete)
end
|
% Basic LaTeX template for NE 204 lab report
\documentclass[11pt]{article}
%==============================================================================
%%% Everything between the "="'s is the preamble.
%%% Define packages and meta data here
% Common packages
\usepackage{amsmath} % Expanded math
\usepackage{amssymb} % Expanded math symbols
\usepackage{graphicx} % For images
\usepackage[margin=1.25in]{geometry}
\usepackage{subfig}
%\usepackage[version=3]{mhchem} % For nuclide formatting
% All images/figures will be stored in the images folder.
% Specify that here so pdflatex knows where to look for images.
\graphicspath{{./figures/}}
% Metadata
\title{Final Project}
\author{Joanna Szornel}
\date{\today}
%==============================================================================
\begin{document}
% Compile metadata from preamble into a nicely-rendered title section
\maketitle
% The *'s next to the section/subsection definitions suppress numbering
\section*{Introduction}
\label{sec:intro}
\input{text/introduction.tex}
\section*{Methods}
\label{sec:meth}
\input{text/methods.tex}
\section*{Results}
\label{sec:res}
\input{text/results.tex}
\section*{Discussion}
\label{sec:disc}
\input{text/discussion.tex}
% Bibliography
\bibliographystyle{plain}
% Refers to a bibtex file in the current dir named "references.bib"
\bibliography{references}
\nocite{*}
\end{document}
|
[STATEMENT]
lemma star_outer_increasing:
"x \<le> y\<^sup>\<star> * x * y\<^sup>\<star>"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. x \<le> y\<^sup>\<star> * x * y\<^sup>\<star>
[PROOF STEP]
by (metis star.circ_back_loop_prefixpoint star.circ_loop_fixpoint sup.boundedE)
|
(* Title: JinjaDCI/J/WellType.thy
Author: Tobias Nipkow, Susannah Mansky
Copyright 2003 Technische Universitaet Muenchen, 2019-20 UIUC
Based on the Jinja theory J/WellType.thy by Tobias Nipkow
*)
section \<open> Well-typedness of Jinja expressions \<close>
theory WellType
imports "../Common/Objects" Expr
begin
type_synonym
env = "vname \<rightharpoonup> ty"
inductive
WT :: "[J_prog,env, expr , ty ] \<Rightarrow> bool"
("_,_ \<turnstile> _ :: _" [51,51,51]50)
and WTs :: "[J_prog,env, expr list, ty list] \<Rightarrow> bool"
("_,_ \<turnstile> _ [::] _" [51,51,51]50)
for P :: J_prog
where
WTNew:
"is_class P C \<Longrightarrow>
P,E \<turnstile> new C :: Class C"
| WTCast:
"\<lbrakk> P,E \<turnstile> e :: Class D; is_class P C; P \<turnstile> C \<preceq>\<^sup>* D \<or> P \<turnstile> D \<preceq>\<^sup>* C \<rbrakk>
\<Longrightarrow> P,E \<turnstile> Cast C e :: Class C"
| WTVal:
"typeof v = Some T \<Longrightarrow>
P,E \<turnstile> Val v :: T"
| WTVar:
"E V = Some T \<Longrightarrow>
P,E \<turnstile> Var V :: T"
| WTBinOpEq:
"\<lbrakk> P,E \<turnstile> e\<^sub>1 :: T\<^sub>1; P,E \<turnstile> e\<^sub>2 :: T\<^sub>2; P \<turnstile> T\<^sub>1 \<le> T\<^sub>2 \<or> P \<turnstile> T\<^sub>2 \<le> T\<^sub>1 \<rbrakk>
\<Longrightarrow> P,E \<turnstile> e\<^sub>1 \<guillemotleft>Eq\<guillemotright> e\<^sub>2 :: Boolean"
| WTBinOpAdd:
"\<lbrakk> P,E \<turnstile> e\<^sub>1 :: Integer; P,E \<turnstile> e\<^sub>2 :: Integer \<rbrakk>
\<Longrightarrow> P,E \<turnstile> e\<^sub>1 \<guillemotleft>Add\<guillemotright> e\<^sub>2 :: Integer"
| WTLAss:
"\<lbrakk> E V = Some T; P,E \<turnstile> e :: T'; P \<turnstile> T' \<le> T; V \<noteq> this \<rbrakk>
\<Longrightarrow> P,E \<turnstile> V:=e :: Void"
| WTFAcc:
"\<lbrakk> P,E \<turnstile> e :: Class C; P \<turnstile> C sees F,NonStatic:T in D \<rbrakk>
\<Longrightarrow> P,E \<turnstile> e\<bullet>F{D} :: T"
| WTSFAcc:
"\<lbrakk> P \<turnstile> C sees F,Static:T in D \<rbrakk>
\<Longrightarrow> P,E \<turnstile> C\<bullet>\<^sub>sF{D} :: T"
| WTFAss:
"\<lbrakk> P,E \<turnstile> e\<^sub>1 :: Class C; P \<turnstile> C sees F,NonStatic:T in D; P,E \<turnstile> e\<^sub>2 :: T'; P \<turnstile> T' \<le> T \<rbrakk>
\<Longrightarrow> P,E \<turnstile> e\<^sub>1\<bullet>F{D}:=e\<^sub>2 :: Void"
| WTSFAss:
"\<lbrakk> P \<turnstile> C sees F,Static:T in D; P,E \<turnstile> e\<^sub>2 :: T'; P \<turnstile> T' \<le> T \<rbrakk>
\<Longrightarrow> P,E \<turnstile> C\<bullet>\<^sub>sF{D}:=e\<^sub>2 :: Void"
| WTCall:
"\<lbrakk> P,E \<turnstile> e :: Class C; P \<turnstile> C sees M,NonStatic:Ts \<rightarrow> T = (pns,body) in D;
P,E \<turnstile> es [::] Ts'; P \<turnstile> Ts' [\<le>] Ts \<rbrakk>
\<Longrightarrow> P,E \<turnstile> e\<bullet>M(es) :: T"
| WTSCall:
"\<lbrakk> P \<turnstile> C sees M,Static:Ts \<rightarrow> T = (pns,body) in D;
P,E \<turnstile> es [::] Ts'; P \<turnstile> Ts' [\<le>] Ts; M \<noteq> clinit \<rbrakk>
\<Longrightarrow> P,E \<turnstile> C\<bullet>\<^sub>sM(es) :: T"
| WTBlock:
"\<lbrakk> is_type P T; P,E(V \<mapsto> T) \<turnstile> e :: T' \<rbrakk>
\<Longrightarrow> P,E \<turnstile> {V:T; e} :: T'"
| WTSeq:
"\<lbrakk> P,E \<turnstile> e\<^sub>1::T\<^sub>1; P,E \<turnstile> e\<^sub>2::T\<^sub>2 \<rbrakk>
\<Longrightarrow> P,E \<turnstile> e\<^sub>1;;e\<^sub>2 :: T\<^sub>2"
| WTCond:
"\<lbrakk> P,E \<turnstile> e :: Boolean; P,E \<turnstile> e\<^sub>1::T\<^sub>1; P,E \<turnstile> e\<^sub>2::T\<^sub>2;
P \<turnstile> T\<^sub>1 \<le> T\<^sub>2 \<or> P \<turnstile> T\<^sub>2 \<le> T\<^sub>1; P \<turnstile> T\<^sub>1 \<le> T\<^sub>2 \<longrightarrow> T = T\<^sub>2; P \<turnstile> T\<^sub>2 \<le> T\<^sub>1 \<longrightarrow> T = T\<^sub>1 \<rbrakk>
\<Longrightarrow> P,E \<turnstile> if (e) e\<^sub>1 else e\<^sub>2 :: T"
| WTWhile:
"\<lbrakk> P,E \<turnstile> e :: Boolean; P,E \<turnstile> c::T \<rbrakk>
\<Longrightarrow> P,E \<turnstile> while (e) c :: Void"
| WTThrow:
"P,E \<turnstile> e :: Class C \<Longrightarrow>
P,E \<turnstile> throw e :: Void"
| WTTry:
"\<lbrakk> P,E \<turnstile> e\<^sub>1 :: T; P,E(V \<mapsto> Class C) \<turnstile> e\<^sub>2 :: T; is_class P C \<rbrakk>
\<Longrightarrow> P,E \<turnstile> try e\<^sub>1 catch(C V) e\<^sub>2 :: T"
\<comment> \<open>well-typed expression lists\<close>
| WTNil:
"P,E \<turnstile> [] [::] []"
| WTCons:
"\<lbrakk> P,E \<turnstile> e :: T; P,E \<turnstile> es [::] Ts \<rbrakk>
\<Longrightarrow> P,E \<turnstile> e#es [::] T#Ts"
(*<*)
declare WT_WTs.intros[intro!] (* WTNil[iff] *)
lemmas WT_WTs_induct = WT_WTs.induct [split_format (complete)]
and WT_WTs_inducts = WT_WTs.inducts [split_format (complete)]
(*>*)
lemma init_nwt [simp]:"\<not>P,E \<turnstile> INIT C (Cs,b) \<leftarrow> e :: T"
by(auto elim:WT.cases)
lemma ri_nwt [simp]:"\<not>P,E \<turnstile> RI(C,e);Cs \<leftarrow> e' :: T"
by(auto elim:WT.cases)
lemma [iff]: "(P,E \<turnstile> e#es [::] T#Ts) = (P,E \<turnstile> e :: T \<and> P,E \<turnstile> es [::] Ts)"
(*<*)by (rule iffI) (auto elim: WTs.cases)(*>*)
lemma [iff]: "(P,E \<turnstile> (e#es) [::] Ts) =
(\<exists>U Us. Ts = U#Us \<and> P,E \<turnstile> e :: U \<and> P,E \<turnstile> es [::] Us)"
(*<*)by (rule iffI) (auto elim: WTs.cases)(*>*)
lemma [iff]: "\<And>Ts. (P,E \<turnstile> es\<^sub>1 @ es\<^sub>2 [::] Ts) =
(\<exists>Ts\<^sub>1 Ts\<^sub>2. Ts = Ts\<^sub>1 @ Ts\<^sub>2 \<and> P,E \<turnstile> es\<^sub>1 [::] Ts\<^sub>1 \<and> P,E \<turnstile> es\<^sub>2[::]Ts\<^sub>2)"
(*<*)
proof(induct es\<^sub>1 type:list)
case (Cons a list)
let ?lhs = "(\<exists>U Us. Ts = U # Us \<and> P,E \<turnstile> a :: U \<and>
(\<exists>Ts\<^sub>1 Ts\<^sub>2. Us = Ts\<^sub>1 @ Ts\<^sub>2 \<and> P,E \<turnstile> list [::] Ts\<^sub>1 \<and> P,E \<turnstile> es\<^sub>2 [::] Ts\<^sub>2))"
let ?rhs = "(\<exists>Ts\<^sub>1 Ts\<^sub>2. Ts = Ts\<^sub>1 @ Ts\<^sub>2 \<and>
(\<exists>U Us. Ts\<^sub>1 = U # Us \<and> P,E \<turnstile> a :: U \<and> P,E \<turnstile> list [::] Us) \<and> P,E \<turnstile> es\<^sub>2 [::] Ts\<^sub>2)"
{ assume ?lhs
then have ?rhs by (auto intro: Cons_eq_appendI)
}
moreover {
assume ?rhs
then have ?lhs by fastforce
}
ultimately have "?lhs = ?rhs" by(rule iffI)
then show ?case by (clarsimp simp: Cons)
qed simp
(*>*)
lemma [iff]: "P,E \<turnstile> Val v :: T = (typeof v = Some T)"
(*<*)proof(rule iffI) qed (auto elim: WT.cases)(*>*)
lemma [iff]: "P,E \<turnstile> Var V :: T = (E V = Some T)"
(*<*)proof(rule iffI) qed (auto elim: WT.cases)(*>*)
lemma [iff]: "P,E \<turnstile> e\<^sub>1;;e\<^sub>2 :: T\<^sub>2 = (\<exists>T\<^sub>1. P,E \<turnstile> e\<^sub>1::T\<^sub>1 \<and> P,E \<turnstile> e\<^sub>2::T\<^sub>2)"
(*<*)proof(rule iffI) qed (auto elim: WT.cases)(*>*)
lemma [iff]: "(P,E \<turnstile> {V:T; e} :: T') = (is_type P T \<and> P,E(V\<mapsto>T) \<turnstile> e :: T')"
(*<*)proof(rule iffI) qed (auto elim: WT.cases)(*>*)
(*<*)
inductive_cases WT_elim_cases[elim!]:
"P,E \<turnstile> V :=e :: T"
"P,E \<turnstile> if (e) e\<^sub>1 else e\<^sub>2 :: T"
"P,E \<turnstile> while (e) c :: T"
"P,E \<turnstile> throw e :: T"
"P,E \<turnstile> try e\<^sub>1 catch(C V) e\<^sub>2 :: T"
"P,E \<turnstile> Cast D e :: T"
"P,E \<turnstile> a\<bullet>F{D} :: T"
"P,E \<turnstile> C\<bullet>\<^sub>sF{D} :: T"
"P,E \<turnstile> a\<bullet>F{D} := v :: T"
"P,E \<turnstile> C\<bullet>\<^sub>sF{D} := v :: T"
"P,E \<turnstile> e\<^sub>1 \<guillemotleft>bop\<guillemotright> e\<^sub>2 :: T"
"P,E \<turnstile> new C :: T"
"P,E \<turnstile> e\<bullet>M(ps) :: T"
"P,E \<turnstile> C\<bullet>\<^sub>sM(ps) :: T"
(*>*)
lemma wt_env_mono:
"P,E \<turnstile> e :: T \<Longrightarrow> (\<And>E'. E \<subseteq>\<^sub>m E' \<Longrightarrow> P,E' \<turnstile> e :: T)" and
"P,E \<turnstile> es [::] Ts \<Longrightarrow> (\<And>E'. E \<subseteq>\<^sub>m E' \<Longrightarrow> P,E' \<turnstile> es [::] Ts)"
(*<*)
proof(induct rule: WT_WTs_inducts)
case WTVar then show ?case by(simp add: map_le_def dom_def)
next
case WTLAss then show ?case by(force simp:map_le_def)
qed fastforce+
(*>*)
lemma WT_fv: "P,E \<turnstile> e :: T \<Longrightarrow> fv e \<subseteq> dom E"
and "P,E \<turnstile> es [::] Ts \<Longrightarrow> fvs es \<subseteq> dom E"
(*<*)
proof(induct rule:WT_WTs.inducts)
case WTVar then show ?case by fastforce
next
case WTLAss then show ?case by fastforce
next
case WTBlock then show ?case by fastforce
next
case WTTry then show ?case by fastforce
qed simp_all
(*>*)
lemma WT_nsub_RI: "P,E \<turnstile> e :: T \<Longrightarrow> \<not>sub_RI e"
and WTs_nsub_RIs: "P,E \<turnstile> es [::] Ts \<Longrightarrow> \<not>sub_RIs es"
(*<*)proof(induct rule: WT_WTs.inducts) qed(simp_all)
end
(*>*)
|
Lecture 3: Digital Audio Signals
Audio Processing, MED4, Aalborg University, 2021
By Jesper Kjær Nielsen ([email protected]) and Cumhur Erkut ([email protected])
Central aspects of the course
- [x] What is sound?
- [x] How is sound generated?
- [ ] **How is sound turned into signals (i.e., data) on a computer?**
- [ ] How can we analyse these signals (i.e. extract information from them)?
- [ ] How can we modify these signals?
# Sampling and reconstruction
In the next 20 minutes, you will learn
- What a **continuous-time** signal is.
- What a **discrete-time** signal is.
- How you turn a continuous-time signal into a discrete-time signal (i.e., **sampling**).
- How you turn a discrete-time signal into a continuous-time signal (i.e., **reconstruction**).
In the figure below, the following is happening:
1. An audio signal $p_\text{i}(t)$ is propagating through the air as **pressure variations**.
2. The microphone picks up the pressure variations and turns them into **voltage variations** $v_\text{i}(t)$.
3. The voltage is converted into a series of numbers $x_n$ via **sampling**.
4. A voltage signal $v_\text{o}(t)$ is **reconstructed** from the series of numbers $x_n$.
5. A loudspeaker converts the voltage variations into pressure variations $p_\text{o}(t)$.
*(figure omitted)*
## Continuous-time signal
A **continuous-time** signal is characterised by
- **time**: the signal has a value $x(t)$ for every possible time $t$
- **amplitude**: the signal value $x(t)$ can take on any value from a continuum of numbers (such as the real numbers).
A continuous-time signal is often also referred to as an **analog signal**.
Informally: You draw a continuous-time signal without lifting your pen from the paper.
## Sampling
- Storing a continuous-time signal on a computer requires an infinite amount of memory!
- Solution: We only measure the value of a continuous-time signal every $T_\text{s}$ seconds. This is called **sampling**.
- A very important quantity is
$$
f_\text{s} = 1/T_\text{s}
$$
where
- $f_\text{s}$ is the **sampling frequency** (measured in Hz) and describes how many times per second the continuous-time signal is sampled
- $T_\text{s}$ is the **sampling time** (measured in seconds).
We can illustrate sampling by a person controlling a contact (see figure below):
1. When $T_\text{s}$ seconds have passed, the contact is pushed and released immediately.
2. At that exact time instant, the value of the continuous-time signal $x(T_\text{s})$ is stored on the computer.
3. After another $T_\text{s}$ seconds, the contact is again pushed and released immediately so that $x(2T_\text{s})$ is now stored.
4. If we keep pushing/releasing the contact every $T_\text{s}$ seconds, then after $n$ pushes we will have stored the signal value $x(t_n)$ where
$$
t_n = nT_\text{s} = n/f_\text{s}\ .
$$
The integer $n$ is often referred to as the **sampling index**.
Note that people often write $x(t_n)$ as
$$
x(t_n) = x_n = x[n]\ .
$$
*(figure omitted)*
## Discrete-time signal
A **discrete-time** signal is characterised by
- **time**: the signal only has a value $x_n$ at certain times, i.e., $t_n=nT_\text{s}$ for $n=\ldots,-3,-2,-1,0,1,2,3,\ldots$. Therefore, the $x$-axis is often the sampling index $n$ instead of time.
- **amplitude**: the signal value $x_n$ can take on any value from a continuum of numbers (such as the real numbers).
A discrete-time signal is sometimes also referred to as a **digital signal** (although we will use this term for something slightly different later).
Informally: A discrete-time signal is a series of time-ordered numbers.
## Reconstruction
If we want to play back a discrete-time signal on, e.g., a loudspeaker, we have to convert the discrete-time signal back into a continuous-time signal.
- Converting a discrete-time signal $x_n$ into a continuous-time signal $x(t)$ is called **reconstruction**.
- Reconstruction is performed using two components:
- **hold circuit**: holds a value $x_n$ for $T_\text{s}$ seconds. This will create a **staircase** signal.
- **post filter**: smooths out the discontinuities in the staircase signal by using a low-pass filter with a cut-off frequency of $f_\text{s}/2$ Hz.
---
Note that we will talk much more about filtering in the next lectures.
## Summary
1. A **continuous-time** signal $x(t)$ can be drawn without lifting the pen from the paper.
2. A **discrete-time** signal $x_n = x(t_n)$ is a series of time-ordered numbers.
3. **Sampling** converts $x(t)$ into $x_n$ by measuring the value of $x(t)$ at the times
$$
t_n = nT_\text{s} = n/f_\text{s}
$$
where
- $n$ is the **sampling index**
- $T_\text{s}$ is the **sampling time**
- $f_\text{s}=1/T_\text{s}$ is the **sampling frequency**
4. **Reconstruction** converts $x_n$ into $x(t)$ by first creating a staircase signal from $x_n$ and then by filtering this staircase signal with a low-pass filter.
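As a small code sketch of points 3. and 4. (a minimal example assuming only `numpy` and `matplotlib`, as elsewhere in these notes; the staircase is a zero-order hold and the final low-pass post filter is omitted):
```
import numpy as np
import matplotlib.pyplot as plt

f = 1                          # signal frequency [Hz]
samplingFreq = 20              # sampling frequency [Hz]
samplingTime = 1/samplingFreq  # sampling time [s]

# continuous-time signal, approximated on a very fine grid
tFine = np.linspace(0, 2, 2000)
xCont = np.sin(2*np.pi*f*tFine)

# sampling: x_n = x(n*T_s)
n = np.arange(int(2*samplingFreq) + 1)
xn = np.sin(2*np.pi*f*n*samplingTime)

plt.plot(tFine, xCont, label='continuous-time $x(t)$')
plt.stem(n*samplingTime, xn, linefmt='C1-', markerfmt='C1o', label='samples $x_n$')
plt.step(n*samplingTime, xn, where='post', label='staircase (hold circuit)')
plt.xlabel('time [s]'), plt.ylabel('Amplitude [.]')
plt.legend();
```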
## Assignment
Assume that we have a continuous-time signal given by
$$
x(t) = \sin(2\pi f t)
$$
where the frequency of the sinusoid is $f=1$ Hz.
1. Sketch the signal from $t=-0.5$ s to $t= 2$ s.
We now sample the signal with a sampling frequency of $f_\text{s}=4$ Hz.
2. What is the sampling time $T_\text{s}$?
3. Sketch the sampled signal $x_n$ (in a new plot with the time index $n$ on the $x$-axis).
4. Sketch the staircase signal $x_\text{sc}(t)$ from $x_n$.
5. Repeat 2. to 4. with a sampling frequency of $f_\text{s}=1$ Hz.
# Aliasing
In the next 20 minutes, you will learn
- How we write a discrete-time sinusoid.
- What aliasing is.
- How we can avoid aliasing by selecting the sampling frequency $f_\text{s}$.
- What an anti-aliasing filter is and why we need it.
### Discrete-time sinusoid
As we have seen in the first two lectures, a continuous-time sinusoid can be written as
$$
x(t) = A\cos(\Omega t+\psi)
$$
where
- $A\geq 0$ is an amplitude
- $\Omega=2\pi f$ is a frequency measured in rad/s
- $\psi$ is the initial phase.
Let us now sample this signal with a sampling frequency of $f_\text{s}$ Hz. We then get the discrete-time sinusoid
\begin{align}
x_n &= x(t_n) = x(n/f_\text{s}) = A\cos(\Omega n/f_\text{s}+\psi) = A\cos((2\pi f/f_\text{s}) n+\psi)\\
&= A\cos(\omega n+\psi)
\end{align}
where
- $\omega = \Omega/f_\text{s} = 2\pi f/f_\text{s}$ is the **digital frequency** measured in **radians/sample**.
For a discrete-time signal, $\omega = 2\pi$ corresponds to the sampling frequency, and we will also write this frequency as $\omega_\text{s}$.
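For example, a 440 Hz tone sampled at $f_\text{s}=44100$ Hz has the digital frequency $\omega = 2\pi\cdot 440/44100 \approx 0.063$ rad/sample.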
## What is Aliasing?
- **Aliasing** comes from the word **alias**.
- It refers to a sinusoidal component of one frequency 'disguising' itself as a sinusoidal component of another frequency.
As an example, let us sample the two continuous-time sinusoids
\begin{align}
x(t) &= \cos(2\pi f t)\\
y(t) &= \cos(2\pi (f_\text{s}-f) t)
\end{align}
using a sampling frequency of $f_\text{s}$ Hz.
For the first sinusoid, we get that
$$
x_n = \cos(\omega n)\ .
$$
For the second sinusoid, we get that
\begin{align}
y_n &= \cos((\omega_\text{s}-\omega) n) = \text{Re}\left[\mathrm{e}^{j(\omega_\text{s}-\omega) n}\right] = \text{Re}\left[\mathrm{e}^{j\omega_\text{s} n}\mathrm{e}^{-j\omega n}\right]
\end{align}
However, we have that
$$
\mathrm{e}^{j\omega_\text{s}n} = \mathrm{e}^{j2\pi n} = \cos(2\pi n)+j\sin(2\pi n) = 1
$$
for all time indices $n$ since $n$ is an integer. This means that
$$
y_n = \text{Re}\left[\mathrm{e}^{-j\omega n}\right] = \cos(-\omega n) = \cos(\omega n)
$$
which is exactly the same as $x_n$.
**Observation:** Even though the continuous-time signals $x(t)$ and $y(t)$ have **different frequencies**, the discrete-time signals $x_n$ and $y_n$ have the **same digital frequency**.
Some consequences:
- Reconstructing a continuous-time signal from $y_n$ results in $x(t)$ - not $y(t)$.
- We say that $y(t)$ has been **aliased** when we cannot recover it again after sampling it.
- A discrete-time sinusoid of digital frequency $\omega=2\pi f/f_\text{s}$ could be a sampled continuous-time sinusoid given by
$$
y(t) = A\cos((\Omega+k2\pi f_\text{s})t + \psi)
$$
for any integer $k$.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def sinusoid(samplingIndices, digitalFreq):
'''Compute a cosine'''
return np.cos(2*np.pi*digitalFreq*samplingIndices)
nData = 100
samplingFreq = 100 # Hz
samplingTime = 1/samplingFreq # s
samplingIndices = np.arange(nData)
time = samplingIndices*samplingTime
freqA = 10 # Hz
freqB = samplingFreq - freqA # Hz (= 90 Hz, the alias of freqA)
# plot the results
plt.figure(figsize=(10,6))
plt.plot(time, sinusoid(samplingIndices,freqA/samplingFreq), linewidth=2, marker='o', label="$x(t)$")
plt.plot(time, sinusoid(samplingIndices,freqB/samplingFreq), linewidth=2, marker='o', label="$y(t)$")
plt.legend()
plt.xlim((time[0],time[nData-1])), plt.ylim((-1.5,1.5))
plt.xlabel('time [s]'), plt.ylabel('Amplitude [.]');
```
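With `freqB = samplingFreq - freqA` (i.e., 90 Hz), the two sets of samples in the plot fall exactly on top of each other: after sampling at 100 Hz, a 90 Hz sinusoid is indistinguishable from a 10 Hz one, exactly as derived above.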
## Nyquist-Shannon sampling theorem
To avoid aliasing, the maximum frequency $f_\text{max}$ in a continuous-time signal must satisfy that
$$
2f_\text{max} < f_\text{s}
$$
where $f_\text{s}$ is the sampling frequency.
We can satisfy the sampling theorem in two ways:
1. Select the sampling frequency $f_\text{s}$ high enough
2. Pre-filter the continuous-time signal with a low-pass filter (a so-called **anti-aliasing filter**) with a cut-off frequency below $f_\text{s}/2$.
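A small helper function (our own sketch, not a standard library routine) makes this frequency folding explicit: a tone at $f$ Hz sampled at $f_\text{s}$ Hz is reconstructed at the distance from $f$ to the nearest integer multiple of $f_\text{s}$.
```
def apparent_freq(f, samplingFreq):
    '''Frequency (in Hz) at which a tone of f Hz is reconstructed
    after being sampled at samplingFreq Hz.'''
    fFolded = f % samplingFreq               # fold f into [0, fs)
    return min(fFolded, samplingFreq - fFolded)

print(apparent_freq(10, 100))   # 10 -> below fs/2, no aliasing
print(apparent_freq(90, 100))   # 10 -> aliased, as in the example above
```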
### Typical sampling frequencies used for recording audio
| Application | Sampling frequency $f_\text{s}$ |
|-------------|---------------------------------|
|CD | 44100 Hz|
|Narrowband speech| 8000 Hz|
|Wideband speech, VoIP| 16000 Hz|
|Video recorders| 48000 Hz|
|DVD-audio and Blu-ray| 96000 and 192000 Hz|
## Anti-aliasing filter
- Often, we do not know the highest frequency $f_\text{max}$ in our input signal.
- Instead, we simply filter out all frequency content above $f_\text{s}/2$ to avoid aliasing.
- This filter is called an **anti-aliasing filter** and is present in all practical sampling blocks.
### Aliasing also occurs in videos and images
<center>
<a href="http://www.youtube.com/watch?feature=player_embedded&v=ttgLyWFINJI" target="_blank">Watch on YouTube</a>
</center>
## Summary
1. A discrete-time sinusoid is written as
$$
x_n = A\cos(\omega n +\psi)
$$
where $A$ and $\psi$ have the same meaning as for the continuous-time sinusoid and
- $\omega = 2\pi f/f_\text{s}$ is the **digital frequency** and measured in rad/sample
- $f_\text{s}$ is the **sampling frequency** measured in Hz.
2. Aliasing occurs when the frequency of a sinusoid appears lowered because the signal is undersampled.
3. To avoid aliasing, we must satisfy **Nyquist's sampling theorem** stating that
$$
2f_\text{max} < f_\text{s}
$$
where $f_\text{max}$ is the maximum frequency in the continuous-time input signal.
4. We can limit the maximum frequency of a continuous-time input signal by passing it through an **anti-aliasing filter**. This filter, which is a low-pass filter, ensures that aliasing does not occur.
# Binary numbers
In the next 20 minutes, you will learn
- What a binary number is
- What a bit and a byte is
- How you store data on, e.g., a computer or a CD
- To get the following slightly geeky joke :)
> There are 10 types of people in this world.
> Those who understand binary numbers and those who don't!
## The decimal number system
We are used to the decimal number system where we encounter numbers such as 3, 42, and 89809.
Let's look at the example
$$
1314\ .
$$
We can make the following observations about this decimal number:
- The number consists of four symbols, each called a **digit**
- Each digit can be one of ten possible symbols (either 0, 1, 2, 3, 4, 5, 6, 7, 8, or 9).
- The order of the digits matter. For example, the right-most one in 1314 represents the number of 10s whereas the left-most one represents the number of 1000s. A number system with ordering is called a **positional number system**.
We can rewrite the decimal number 1314 as
\begin{align}
1314 &= 1000 + 300 + 10 + 4\\
&= 1\cdot 1000 + 3\cdot 100 + 1\cdot 10 + 4\cdot 1\\
&= 1\cdot 10^3 + 3\cdot 10^2 + 1\cdot 10^1 + 4\cdot 10^0\ .
\end{align}
In general, we can write an $N$ digit decimal number $d_{N-1}d_{N-2}\cdots d_2d_1d_0$ as
$$
d_{N-1}d_{N-2}\cdots d_2d_1d_0 = \sum_{n=0}^{N-1}d_n10^n\ .
$$
Note that
- $d_n\in\{0,1,2,3,4,5,6,7,8,9\}$
- 10 is the number of symbols that $d_n$ can take on and is called the **base** of the decimal number system.
Let us now allow for an arbitrary base $b$. Then we can write numbers as
$$
d_{N-1}d_{N-2}\cdots d_2d_1d_0 = \sum_{n=0}^{N-1} d_n b^n
$$
where
- $d_n\in\{0,1,\ldots,b-1\}$
- $b$ is the base of the number.
For different values of $b$, we get different number systems. Some examples are
- $b=10$: The **decimal number system** with possible symbols 0,1,2,3,4,5,6,7,8,9
- $b=2$: The **binary number system** with possible symbols 0,1
- $b=16$: The **hexadecimal number system** with possible symbols 0,1,2,3,4,5,6,7,8,9, A, B, C, D, E, F
## The binary number system
A binary number
- has base 2 and
- is written only in terms of 0s and 1s.
An example of a binary number is
$$
0110\ 1101_2
$$
where the subscript 2 is here only added to make it explicit that $0110\ 1101$ is a binary number.
Note that
- a 'digit' in a binary number is called a **bit**
- a collection of 8 bits is called a **byte** with symbol B
- a computer represents everything (numbers, colours, text, etc.) as binary numbers
Thus, the binary number $0110\ 1101_2$ has 8 bits, i.e., it is 1 byte (1 B)
### Converting binary numbers to decimal numbers
To convert from a binary number to a decimal number, we simply use the expression
$$
d_{N-1}d_{N-2}\cdots d_2d_1d_0 = \sum_{n=0}^{N-1} d_n b^n\ .
$$
As an example, we get that $1101_2$ converts to
\begin{align}
1101_2 &= 1\cdot 2^3 + 1\cdot 2^2 + 0\cdot 2^1 + 1\cdot 2^0\\
&= 1 \cdot 8 + 1 \cdot 4 + 0\cdot 2 + 1\cdot 1\\
&= 13_{10}\ .
\end{align}
Converting from decimal numbers to binary numbers is also possible, but is not covered here.
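The conversion formula translates directly into a few lines of Python (a sketch; Python's built-in `int(s, 2)` performs the same conversion):
```
def binary_to_decimal(bits):
    '''Convert a binary string such as '1101' to a decimal integer
    using sum_n d_n * 2**n, where d_0 is the right-most bit.'''
    return sum(int(d)*2**n for n, d in enumerate(reversed(bits)))

print(binary_to_decimal('1101'))  # 13, as computed above
print(int('1101', 2))             # 13, using the built-in conversion
```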
### Adding binary numbers
You do it exactly as you learned in 2nd grade with decimal numbers. That is,
- $0_2+0_2 = 0_2$
- $0_2+1_2 = 1_2$
- $1_2+1_2 = 0_2$ with $1_2$ in carry
Using these three rules, we obtain
$$
\begin{array}[t]{r}
0100\ 1001_2 \\
+ \ 0111\ 1100_2 \\ \hline
1100\ 0101_2
\end{array}
$$
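You can verify the addition above with Python's binary literals and the built-in `bin`:
```
a = 0b01001001        # 73 in decimal
b = 0b01111100        # 124 in decimal
print(bin(a + b))     # 0b11000101, i.e. 197 = 73 + 124
```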
### Example: Representing text using binary numbers
*(figure omitted)*
### Example: Storing data on a disc
Information is stored by making tiny indentations known as **pits** on a disc.
- A pit represents a 0
- The opposite of a pit (called land) represents a 1
- The binary data is stored along one long spiral on the disc.
- **CD**: The spiral is 5.7 km long
- **DVD**: The spiral is 12.3 km long
- **Blu-ray**: The spiral is 28.4 km long
- A laser is used for reading the binary data by following this spiral path.
## Summary
1. A binary number
- only contains 0s and 1s
- consists of bits
2. A collection of eight bits is called a **byte**
3. Everything (numbers, text, images, video, audio, etc.) is stored and manipulated as binary numbers on a computer
4. The following slightly geeky joke is now funny ;)
> There are 10 types of people in this world.
> Those who understand binary numbers and those who don't!
## Assignment
1. Convert the following binary numbers to decimal numbers: $10_2$, $110_2$, and $0101_2$.
2. How many different binary numbers could you write with 4 bits?
3. How many bits can be stored on a 700 GB hard drive?
4. If you have an internet connection with a download speed of 10 Mbit/s, how much time does it then take to download a 100 MB file?
---
Bonus info:
1. With base $b=2$, a binary number can be written as
$$
d_{N-1}d_{N-2}\cdots d_2d_1d_0 = \sum_{n=0}^{N-1} d_n b^n\ .
$$
2. G means billion ($10^9$), M means million ($10^6$), k means thousand ($10^3$), and B means byte (8 bits)
# Quantisation
In the next 20 minutes, you will learn
- What quantisation is and why it is necessary
- How you will typically do it
- What signal-to-noise ratio (SNR) and dynamic range are
## Example: Storing $\pi$ on a computer
How would you store $\pi$ (or other irrational numbers) on a computer?
- It requires an **infinite** amount of memory to store $\pi$ on a computer.
- Therefore, we have to store an approximation to $\pi$ with only a **finite** number of digits. Let us call this approximation $p$.
- The approximation error $e$ can be written as
$$
e = \pi-p\ .
$$
**Example**: Let $p$ contain only the first two digits of $\pi$ after the decimal point (i.e., $p=3.14$). Then
$$
e = \pi-p = \pi-3.14 = 0.001592653589\ldots
$$
We say that we have rounded off (or **quantised**) $\pi$ to its nearest two-decimal-digit representation.
## The need for quantisation
Sampling converts a continuous-time signal into a discrete-time signal. That is, we go from a continuum of time values to a discrete set of time values.
However, we also have to do something about the signal value $x(t_n)=x_n$ for every sampling time, so that we can store this number on the computer using a finite number of digits. This is called **quantisation**.
## Uniform quantisation
Assume that we sample the signal value $x_n$ and that
- the signal value is in the interval $(-\alpha,\alpha)$
- we have $\beta$ bits available for storing this signal value.
Note that we can represent $2^\beta$ different values with $\beta$ bits.
We now do the following.
1. Divide the interval $(-\alpha,\alpha)$ into $2^\beta$ equally large cells, each of size
$$
\Delta = \frac{\alpha-(-\alpha)}{2^\beta} = \frac{2\alpha}{2^\beta} = \frac{\alpha}{2^{\beta-1}}\ .
$$
2. Round (or **quantise**) the signal value $x_n$ to the value $y_n$ at the nearest cell boundary, i.e.,
$$
y_n = Q(x_n) = \Delta \left\lfloor\frac{x_n}{\Delta}\right\rceil = \Delta \left\lfloor\frac{x_n}{\Delta}+\frac{1}{2}\right\rfloor
$$
where $\lfloor\cdot\rceil$ and $\lfloor\cdot\rfloor$ refer to the rounding and flooring operations, respectively.
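These two steps translate directly into a few lines of NumPy (a sketch with our own function name `quantise`; a real converter would additionally clip values outside $(-\alpha,\alpha)$):
```
import numpy as np

def quantise(x, alpha, beta):
    '''Uniformly quantise values in (-alpha, alpha) using beta bits.'''
    Delta = 2*alpha/2**beta              # cell size
    return Delta*np.floor(x/Delta + 0.5) # round to the nearest cell boundary

x = np.array([0.03, -0.42, 0.77])
print(quantise(x, alpha=1.0, beta=3))    # Delta = 0.25 -> [ 0.  -0.5  0.75]
```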
### Example: 3 bit quantisation
In the figure below, the continuous-time signal (dashed gray) is first sampled (green) and then quantised (orange) using a three-bit quantiser. The horizontal dashed red lines mark the quantisation levels. The final bit stream is
$$
100\ 110\ 111\ 111\ 111\ 110\ 110\ 100\ 011\ 010\ 001\ 001\ .
$$
*(figure omitted)*
## Quantisation error
The quantisation error $e_n$ is the difference between the rounded value $y_n=Q(x_n)$ and the signal value $x_n$, i.e.,
$$
e_n = Q(x_n)-x_n\ .
$$
We can rearrange this into
$$
Q(x_n) = x_n+e_n\ .
$$
That is, we can think of quantisation as **adding an error** to the signal value $x_n$.
A measure of quantisation quality is the average power of $e_n$ relative to the average power of $x_n$. The average power of, e.g., $e_n$ is defined as
$$
P_e = \frac{1}{N}\sum_{n=0}^{N-1} e_n^2\ .
$$
If we define $P_x$ in a similar way, the signal-to-noise ratio (SNR) is defined as
$$
\text{SNR} = 10\log_{10}\frac{P_x}{P_e}\ ,
$$
and it is measured in decibel (dB).
Now, assume that
- the signal values $x_n$ take on values in $(-\alpha,\alpha)$ equally often (a uniform distribution)
- the quantisation errors $e_n$ take on values in $(-\Delta/2,\Delta/2)$ equally often (a uniform distribution)
We then get
\begin{align}
P_x &= \frac{(2\alpha)^2}{12} = \frac{\alpha^2}{3}\\
P_e &= \frac{\Delta^2}{12} = \frac{1}{12}\left(\frac{\alpha}{2^{\beta-1}}\right)^2\ .
\end{align}
These results can be derived by computing the variance of a uniform distribution.
Finally, we get the SNR
\begin{align}
\text{SNR} &= 10\log_{10}\frac{P_x}{P_e} = 10\log_{10}\left(\frac{\alpha^2}{3}12\left(\frac{2^{\beta-1}}{\alpha}\right)^2\right)\\
&= 10\log_{10}\left(2^{2\beta}\right) = \beta 20 \log_{10} 2 \approx 6\beta\ .
\end{align}
Thus, for every additional bit, the SNR is improved by approximately 6 dB.
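The 6 dB-per-bit rule is easy to check numerically (a self-contained sketch under the same uniform-distribution assumptions as above):
```
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0
x = rng.uniform(-alpha, alpha, size=100_000)  # uniformly distributed signal

for beta in (8, 12, 16):
    Delta = 2*alpha/2**beta                   # quantiser cell size
    e = Delta*np.floor(x/Delta + 0.5) - x     # quantisation error e_n
    snr = 10*np.log10(np.mean(x**2)/np.mean(e**2))
    print(f'{beta} bits: SNR = {snr:5.1f} dB (approx. 6*beta = {6*beta} dB)')
```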
## Dynamic range
The **dynamic range** is the ratio between the loudest and softest values we can represent using a $\beta$ bit quantiser.
- **Softest value**: $1$
- **Loudest value**: $2^\beta$
The dynamic range of a quantiser is thus
$$
\text{DR} = 10\log_{10}\left(\left(\frac{2^\beta}{1}\right)^2\right) = 10\log_{10}\left(2^{2\beta}\right) = \beta 20 \log_{10} 2 \approx 6\beta\ .
$$
Thus, we get a dynamic range of 96 dB for a 16 bit quantiser (typical CD quality) and 144 dB for a 24 bit quantiser. Note that the dynamic range of the human ear is approximately 120 dB.
## Summary
1. A quantiser rounds the signal values to a value on a grid.
2. All the points of the grid can be represented using $\beta$ bits which results in $2^\beta$ possible values.
3. Quantisation introduces noise into the digital signal. The signal-to-noise ratio (SNR) describes how powerful the signal is compared to this quantisation noise.
4. The SNR (and dynamic range) depends on the number of bits used as
$$
\beta\, 20\log_{10} 2 \approx 6\beta\ .
$$
## Assignment
Think of ways of increasing the dynamic range without increasing the number of bits. *Hint*: Could non-uniform quantisation work?
|
\documentclass[12pt]{article}
%Sets size of page and margins
\oddsidemargin -8mm \evensidemargin -8mm
\topmargin 0pt \headheight 0pt \headsep 0pt
\textwidth 17cm
\title{Programs for Applying Symmetries of PDEs}
\author{Thomas Wolf \\
Department of Mathematics \\
Brock University \\
St.Catharines \\
Ontario, Canada L2S 3A1 \\
[email protected]}
\begin{document}
\maketitle
\begin{abstract}
In this paper the programs {\tt APPLYSYM}, {\tt QUASILINPDE} and
{\tt DETRAFO} are described which aim at the utilization
of infinitesimal symmetries of differential equations. The purpose
of {\tt QUASILINPDE} is the general solution of
quasilinear PDEs. This procedure is used by {\tt APPLYSYM}
for the application of point symmetries for either
\begin{itemize}
\item calculating similarity variables to perform a point transformation
which lowers the order of an ODE or effectively reduces the number of
explicitly occurring independent variables in a PDE(-system) or for
\item generalizing given special solutions of ODEs / PDEs with new constant
parameters.
\end{itemize}
The program {\tt DETRAFO} performs arbitrary point- and contact
transformations of ODEs / PDEs and is applied if similarity
and symmetry variables have been found.
The program {\tt APPLYSYM} is used in connection with the program
{\tt LIEPDE} for formulating and solving the conditions for point- and
contact symmetries which is described in \cite{LIEPDE}.
The actual problem solving is done in all these programs through a call
to the package {\tt CRACK} for solving overdetermined PDE-systems.
\end{abstract}
\tableofcontents
%-------------------------------------------------------------------------
\section{Introduction and overview of the symmetry \\ method}
The investigation of infinitesimal symmetries of differential equations
(DEs) with computer algebra programs attrackted considerable attention
over the last years. Corresponding programs are available in all
major computer algebra systems. In a review article by W.\ Hereman
\cite{WHer} about 200 references are given, many of them describing related
software.
One reason for the popularity of the symmetry method
is the fact that Sophus Lie's method
\cite{lie1},\cite{lie2} is the most widely
used method for computing exact solutions of non-linear DEs. Another reason is
that the first step in this
method, the formulation of the determining equation for the generators
of the symmetries, can already be very cumbersome, especially in the
case of PDEs of higher order and/or in case of many dependent and independent
variables. Also, the formulation of the conditions is a straight forward
task involving only differentiations and basic algebra - an ideal task for
computer algebra systems. Less straight forward is the automatic solution
of the symmetry conditions which is the strength of the program {\tt LIEPDE}
(for a comparison with another program see \cite{LIEPDE}).
The novelty described in this paper are programs aiming at
the final third step: Applying symmetries for
\begin{itemize}
\item calculating similarity variables to perform a point transformation
which lowers the order of an ODE or effectively reduces the number of
explicitly occurring independent variables of a PDE(-system) or for
\item generalizing given special solutions of ODEs/PDEs with new constant
parameters.
\end{itemize}
Programs which run on their own but also allow interactive user control
are indispensable for these calculations. On one hand the calculations can
become quite lengthy, like variable transformations of PDEs (of higher order,
with many variables). On the other hand the freedom of choosing the right
linear combination of symmetries and choosing the optimal new symmetry- and
similarity variables makes it necessary to `play' with the problem
interactively.
The focus in this paper is on questions of implementation and
efficiency; no fundamentally new mathematics is presented.
In the following subsections a review of the first two steps of the symmetry
method is given as well as the third, i.e.\ the application step is outlined.
Each of the remaining sections is devoted to one procedure.
%---------------------------------------
\subsection{The first step: Formulating the symmetry conditions}
To obey classical Lie-symmetries, differential equations
\begin{equation}
H_A = 0 \label{PDEs}
\end{equation}
for unknown functions $y^\alpha,\;\;1\leq \alpha \leq p$
of independent variables $x^i,\;\;1\leq i \leq q$
must be forminvariant against infinitesimal transformations
\begin{equation}
\tilde{x}^i = x^i + \varepsilon \xi^i, \;\; \;\;\;
\tilde{y}^\alpha = y^\alpha + \varepsilon \eta^\alpha \label{tran}
\end{equation}
in first order of $\varepsilon.$ To transform the equations (\ref{PDEs})
by (\ref{tran}), derivatives of $y^\alpha$ must be transformed, i.e. the part
linear in $\varepsilon$ must be determined. The corresponding formulas are
(see e.g. \cite{Olv}, \cite{Step})
\begin{eqnarray}
\tilde{y}^\alpha_{j_1\ldots j_k} & = &
y^\alpha_{j_1\ldots j_k} + \varepsilon
\eta^\alpha_{j_1\ldots j_k} + O(\varepsilon^2) \nonumber \\ \vspace{3mm}
\eta^\alpha_{j_1\ldots j_{k-1}j_k} & = &
\frac{D \eta^\alpha_{j_1\ldots j_{k-1}}}{D x^k} -
y^\alpha_{ij_1\ldots j_{k-1}}\frac{D \xi^i}{D x^k} \label{recur}
\end{eqnarray}
where $D/Dx^k$ means total differentiation w.r.t.\ $x^k$ and
from now on lower latin indices of functions $y^\alpha,$
(and later $u^\alpha$)
denote partial differentiation w.r.t.\ the independent variables $x^i,$
(and later $v^i$).
The complete symmetry condition then takes the form
\begin{eqnarray}
X H_A & = & 0 \;\; \; \; \mbox{mod} \; \; \; H_A = 0\ \label{sbed1} \\
X & = & \xi^i \frac{\partial}{\partial x^i} +
\eta^\alpha \frac{\partial}{\partial y^\alpha} +
\eta^\alpha_m \frac{\partial}{\partial y^\alpha_m} +
\eta^\alpha_{mn} \frac{\partial}{\partial y^\alpha_{mn}} + \ldots +
\eta^\alpha_{mn\ldots p} \frac{\partial}{\partial y^\alpha_{mn\ldots p}}.
\label{sbed2}
\end{eqnarray}
where mod $H_A = 0$ means that the original PDE-system is used to replace
some partial derivatives of $y^\alpha$ to reduce the number of independent
variables, because the symmetry condition (\ref{sbed1}) must be
fulfilled identically in $x^i, y^\alpha$ and all partial
derivatives of $y^\alpha.$
For point symmetries, $\xi^i, \eta^\alpha$ are functions of $x^j,
y^\beta$ and for contact symmetries they depend on $x^j, y^\beta$ and
$y^\beta_k.$ We restrict ourself to point symmetries as those are the only
ones that can be applied by the current version of the program {\tt APPLYSYM}
(see below). For literature about generalized symmetries see \cite{WHer}.
Though the formulation of the symmetry conditions (\ref{sbed1}),
(\ref{sbed2}), (\ref{recur})
is straightforward and handled in principle by all related
programs \cite{WHer}, the computational effort to formulate
the conditions (\ref{sbed1}) may cause problems if
the number of $x^i$ and $y^\alpha$ is high. This can
partially be avoided if at first only a few conditions are formulated
and solved such that the remaining ones are much shorter and quicker to
formulate.
A first step in this direction is to investigate one PDE $H_A = 0$
after another, as done in \cite{Cham}. Two methods to partition the
conditions for a single PDE are described by Bocharov/Bronstein
\cite{Alex} and Stephani \cite{Step}.
In the first method only those terms of the symmetry condition
$X H_A = 0$ are calculated which contain
at least a derivative of $y^\alpha$ of a minimal order $m.$
Setting coefficients
of these $u$-derivatives to zero provides symmetry conditions. Lowering the
minimal order $m$ successively then gradually provides all symmetry conditions.
The second method is even more selective. If $H_A$ is of order $n$
then only terms of the symmetry condition $X H_A = 0$ are generated which
contain $n'$th order derivatives of $y^\alpha.$ Furthermore these derivatives
must not occur in $H_A$ itself. They can therefore occur
in the symmetry condition
(\ref{sbed1}) only in
$\eta^\alpha_{j_1\ldots j_n},$ i.e. in the terms
\[\eta^\alpha_{j_1\ldots j_n}
\frac{\partial H_A}{\partial y^\alpha_{j_1\ldots j_n}}. \]
If only coefficients of $n'$th order derivatives of $y^\alpha$ need to be
accurate to formulate preliminary conditions
then from the total derivatives to be taken in
(\ref{recur}) only that part is performed which differentiates w.r.t.\ the
highest $y^\alpha$-derivatives.
This means, for example, to form only
$y^\alpha_{mnk} \partial/\partial y^\alpha_{mn} $
if the expression, which is to be differentiated totally w.r.t.\ $x^k$,
contains at most second order derivatives of $y^\alpha.$
The second method is applied in {\tt LIEPDE}.
Already the formulation of the remaining conditions is sped up
considerably by this iterative process. These methods can be applied if
systems of DEs or single PDEs of at least second order are investigated
concerning symmetries.
%---------------------------------------
\subsection{The second step: Solving the symmetry conditions}
The second step in applying the whole method consists in solving the
determining conditions (\ref{sbed1}), (\ref{sbed2}), (\ref{recur})
which are linear homogeneous PDEs for $\xi^i, \eta^\alpha$. The
complete solution of this system is not algorithmic any more because the
solution of a general linear PDE-system is as difficult as the solution of
its non-linear characteristic ODE-system which is not covered by algorithms
so far.
Still algorithms are used successfully to simplify the PDE-system by
calculating
its standard normal form and by integrating exact PDEs
if they turn up in this simplification process \cite{LIEPDE}.
One problem in this respect, for example,
concerns the optimization of the symbiosis of both algorithms. By that we
mean the ranking of priorities between integrating, adding integrability
conditions and doing simplifications by substitutions - all depending on
the length of expressions and the overall structure of the PDE-system.
Also the extension of the class of PDEs which can be integrated exactly is
a problem to be pursued further.
The program {\tt LIEPDE} which formulates the symmetry conditions calls the
program {\tt CRACK} to solve them. This is done in a number of successive
calls in order to formulate and solve some first order PDEs of the
overdetermined system first and use their solution to formulate and solve the
next subset of conditions as described in the previous subsection.
Also, {\tt LIEPDE} can work on DEs that contain parametric constants and
parametric functions. An ansatz for the symmetry generators can be
formulated. For more details see \cite{LIEPDE} or \cite{WoBra}.
The procedure {\tt LIEPDE} is called through \\
{\tt LIEPDE({\it problem,symtype,flist,inequ}); } \\
All parameters are lists. \vspace{6pt} \\
The first parameter specifies the DEs to be investigated: \\
{\it problem} has the form \{{\it equations, ulist, xlist}\} where
\begin{tabbing}
\hspace{0.5cm}
{\it equations } \= is a list of equations,
each has the form {\tt df(ui,..)=...} where \\
\> the LHS (left hand side) {\tt df(ui,..)} is selected such that \\
\> - The RHS (right h.s.) of an equations must not include \\
\>$\;\,$ the derivative on the LHS nor a derivative of it. \\
\> - Neither the LHS nor any derivative of it of any equation \\
\>$\;\,$ may occur in any other equation.\\
\> - Each of the unknown functions occurs on the LHS of \\
\>$\;\,$ exactly one equation. \\
\hspace{0.5cm}
{\it ulist} \> is a list of function names, which can be chosen freely \\
\hspace{0.5cm}
{\it xlist} \> is a list of variable names, which can be chosen freely
\end{tabbing}
Equations can be given as a list of single differential expressions and then
the program will try to bring them into the `solved form' {\tt df(ui,..)=...}
automatically. If equations are given in the solved form then the above
conditions are checked and execution is stopped if they are not satisfied.
An easy way to get the equations in the desired form is to use \\
\verb+ FIRST SOLVE({+{\it eq1,eq2,}...\verb+},{+{\it one highest
derivative for each function u}\verb+})+ \\
(see the example of the Karpman equations in {\tt LIEPDE.TST}).
The example of the Burgers equation in {\tt LIEPDE.TST} demonstrates
that the number of symmetries for a given maximal order of the infinitesimal
generators depends on the derivative chosen for the LHS.
The second parameter {\it symtype} of {\tt LIEPDE} is a list $\{\;\}$ that
specifies the symmetry to be calculated. {\it symtype} can have the following
values and meanings:
\begin{tabbing}
\verb+{"point"} + \= Point symmetries with $\xi^i=\xi^i(x^j,u^{\beta}),\;
\eta^{\alpha}=\eta^{\alpha}(x^j,u^{\beta})$ are \\
\> determined.\\
\verb+{"contact"}+ \> Contact symmetries with $\xi^i=0, \;
\eta=\eta(x^j,u,u_k)$ are \\
\> determined $(u_k = \partial u/\partial x^k)$, which is only applicable if a \\
\> single equation (\ref{PDEs}) with an order $>1$ for a
single function \\
\> $u$ is to be investigated. (The {\it symtype}
\verb+{"contact"}+ \\
\> is equivalent to \verb+{"general",1}+ (see below) apart from \\
\> the additional checks done for \verb+{"contact"}+.)\\
\verb+{"general"+,{\it order}\verb+}+ \> where {\it order} is an integer $>0$.
Generalized symmetries $\xi^i=0,$ \\
\> $\eta^{\alpha}=\eta^{\alpha}(x^j,u^{\beta},\ldots,u^{\beta}_K)$
of a specified order are determined \\
\> (where $_K$ is a multiple index representing {\it order} many indices.) \\
\> NOTE: Characteristic functions of generalized symmetries \\
\> ($= \eta^{\alpha}$ if $\xi^i=0$) are equivalent if they are equal on\\
\> the solution manifold. Therefore, all dependences of\\
\> characteristic functions on the substituted derivatives \\
\> and their derivatives are dropped. For example, if the heat \\
\> equation is given as $u_t=u_{xx}$ (i.e.\ $u_t$ is substituted by $u_{xx}$) \\
\> then \verb+{"general",2}+ would not include characteristic \\
\> functions depending on $u_{tx}$ or $u_{xxx}$. \\
\> THEREFORE: \\
\> If you want to find {\it all} symmetries up to a given order then either \\
\> - avoid using $H_A=0$ to substitute lower order \\
\> $\;\,$derivatives by expressions involving higher derivatives, or \\
\> - increase the order specified in {\it symtype}. \\
\> For an illustration of this effect see the two symmetry \\
\> determinations of the Burgers equation in the file \\
\> {\tt LIEPDE.TST}. \\
\verb+{xi!_+{\it x1}\verb+ =...,..., + \> \\
\verb+ eta!_+{\it u1}\verb+=...,...}+ \> It is possible to specify an
ansatz for the symmetry. Such \\
\> an ansatz must specify all $\xi^i$ for all independent variables and \\
\> all $\eta^{\alpha}$ for all dependent variables in terms of differential \\
\> expressions which may involve unknown functions/constants. \\
\> The dependences of the unknown functions have to be declared \\
\> in advance by using the {\tt DEPEND} command. For example, \\
\> \verb+ DEPEND f, t, x, u$ + \\
\> specifies $f$ to be a function of $t,x,u$. If one wants to have $f$ as \\
\> a function of derivatives of $u(t,x)$, say $f$ depending on $u_{txx}$, \\
\> then one \underline{{\it cannot}} write \\
\> \verb+ DEPEND f, df(u,t,x,2)$ + \\
\> but instead must write \\
\> \verb+ DEPEND f, u!`1!`2!`2$ + \\
\> assuming {\it xlist} has been specified as \verb+ {t,x}+.
Because $t$ is the \\
\> first variable and $x$ is the second variable in {\it xlist} and $u$ is \\
\> differentiated once w.r.t.\ $t$ and twice w.r.t.\ $x$ we therefore \\
\> use \verb+ u!`1!`2!`2+. The character {\tt !} is the escape character \\
\> to allow special characters like ` to occur in an identifier. \\
\> \hspace{4mm} For generalized symmetries one usually sets all $\xi^i=0$.\\
\> Then the $\eta^{\alpha}$ are equal to the characteristic functions.
\end{tabbing}
\noindent The third parameter {\it flist} of {\tt LIEPDE} is a list $\{\;\}$
that includes
\begin{itemize}
\item all parameters and functions in the equations which are to
be determined such that symmetries exist (if any such
parameters/functions are
specified in {\it flist} then the symmetry conditions
formulated in {\tt LIEPDE}
become non-linear conditions which may be much harder for
{\tt CRACK} to solve with many cases and subcases to be considered.)
\item all unknown functions and constants in the ansatz
\verb+xi!_..+ and \verb+eta!_..+
if that has been specified in {\it symtype}.
\end{itemize}
\noindent The fourth parameter {\it inequ} of {\tt LIEPDE} is a list $\{\;\}$
that includes all non-vanishing expressions which represent
inequalities for the functions in flist.
The result of {\tt LIEPDE} is a list with 3 elements, each of which
is a list:
\[ \{\{{\it con}_1,{\it con}_2,\ldots\},
\{{\tt xi}\__{\ldots}=\ldots, \ldots,
{\tt eta}\__{\ldots}=\ldots, \ldots\},
\{{\it flist}\}\}. \]
The first list contains remaining unsolved symmetry conditions {\it con}$_i$. It
is the empty list \{\} if all conditions have been solved. The second list
gives the symmetry generators, i.e.\ expressions for $\xi_i$ and $\eta_j$. The
last list contains all free constants and functions occurring in the first
and second list.
%That the automatic calculation of symmetries run in most practical cases
%is shown with the following example. It is insofar difficult, as many
%symmetries exist and the solution consequently more difficult is to deriv.
%
%---------------------------------------
%\subsection{Example}
%For the following PDE-system, which takes its simplest form in the
%formalism of exterior forms:
%
%\begin{eqnarray*}
%0 & = & 3k_t,_{tt}-2k_t,_{xx}-2k_t,_{yy}-2k_t,_{zz}-k_x,_{tx}-2k_zk_x,_y \\
% & & +2k_yk_x,_z-k_y,_{ty}+2k_zk_y,_x-2k_xk_y,_z-k_z,_{tz}-2k_yk_z,_x+2k_xk_z,_y \\
%0 & = & k_t,_{tx}-2k_zk_t,_y+2k_yk_t,_z+2k_x,_{tt}-3k_x,_{xx}-2k_x,_{yy} \\
% & & -2k_x,_{zz}+2k_zk_y,_t-k_y,_{xy}-2k_tk_y,_z-2k_yk_z,_t-k_z,_{xz}+2k_tk_z,_y \\
%0 & = & k_t,_{ty}+2k_zk_t,_x-2k_xk_t,_z-2k_zk_x,_t-k_x,_{xy}+2k_tk_x,_z \\
% & & +2k_y,_{tt}-2k_y,_{xx}-3k_y,_{yy}-2k_y,_{zz}+2k_xk_z,_t-2k_tk_z,_x-k_z,_{yz} \\
%0 & = & k_t,_{tz}-2k_yk_t,_x+2k_xk_t,_y+2k_yk_x,_t-k_x,_{xz}-2k_tk_x,_y \\
% & & -2k_xk_y,_t+2k_tk_y,_x-k_y,_{yz}+2k_z,_{tt}-2k_z,_{xx}-2k_z,_{yy}-3k_z,_{zz}
%\end{eqnarray*}
%---------------------------------------
\subsection{The third step: Application of infinitesimal symmetries}
If infinitesimal symmetries have been found then
the program {\tt APPLYSYM} can use them for the following purposes:
\begin{enumerate}
\item Calculation of one symmetry variable and further similarity variables.
After transforming
the DE(-system) to these variables, the symmetry variable will not occur
explicitly any more. For ODEs this has the consequence that their order has
effectively been reduced.
\item Generalization of a special solution by one or more constants of
integration.
\end{enumerate}
Both methods are described in the following section.
%-------------------------------------------------------------------------
\section{Applying symmetries with {\tt APPLYSYM}}
%---------------------------------------
\subsection{The first mode: Calculation of similarity and symmetry variables}
In the following we assume that a symmetry generator $X$, given
in (\ref{sbed2}), is known such that ODE(s)/PDE(s) $H_A=0$
satisfy the symmetry condition (\ref{sbed1}). The aim is to
find new dependent functions $u^\alpha = u^\alpha(x^j,y^\beta)$ and
new independent variables $v^i = v^i(x^j,y^\beta),\;\;
1\leq\alpha,\beta\leq p,\;1\leq i,j \leq q$
such that the symmetry generator
$X = \xi^i(x^j,y^\beta)\partial_{x^i} +
\eta^\alpha(x^j,y^\beta)\partial_{y^\alpha}$
transforms to
\begin{equation}
X = \partial_{v^1}. \label{sbed3}
\end{equation}
Inverting the above transformation to $x^i=x^i(v^j,u^\beta),
y^\alpha=y^\alpha(v^j,u^\beta)$ and setting \\
$H_A(x^i(v^j,u^\beta), y^\alpha(v^j,u^\beta),\ldots) =
h_A(v^j, u^\beta,\ldots)$
this means that
\begin{eqnarray*}
0 & = & X H_A(x^i,y^\alpha,y^\beta_j,\ldots)\;\;\; \mbox{mod} \;\;\; H_A=0 \\
& = & X h_A(v^i,u^\alpha,u^\beta_j,\ldots)\;\;\; \mbox{mod} \;\;\; h_A=0 \\
& = & \partial_{v^1}h_A(v^i,u^\alpha,u^\beta_j,\ldots)\;\;\; \mbox{mod}
\;\;\; h_A=0.
\end{eqnarray*}
Consequently, the variable $v^1$ does not occur explicitly in $h_A$.
In the case of an ODE(-system) $(v^1=v)$
the new equations $0=h_A(v,u^\alpha,du^\beta/dv,\ldots)$
are then of lower total order
after the transformation $z = z(u^1) = du^1/dv$ with now $z, u^2,\ldots u^p$
as unknown functions and $u^1$ as independent variable.
The new form (\ref{sbed3}) of $X$ leads directly to conditions for the
symmetry variable $v^1$ and the similarity variables
$v^i|_{i\neq 1}, u^\alpha$ (all functions of $x^k,y^\gamma$):
\begin{eqnarray}
X v^1 = 1 & = & \xi^i(x^k,y^\gamma)\partial_{x^i}v^1 +
\eta^\alpha(x^k,y^\gamma)\partial_{y^\alpha}v^1 \label{ql1} \\
X v^j|_{j\neq 1} = X u^\beta = 0 & = &
\xi^i(x^k,y^\gamma)\partial_{x^i}u^\beta +
\eta^\alpha(x^k,y^\gamma)\partial_{y^\alpha}u^\beta \label{ql2}
\end{eqnarray}
The general solutions of (\ref{ql1}), (\ref{ql2}) involve free functions
of $p+q-1$ arguments. From the general solution of equation (\ref{ql2}),
$p+q-1$ functionally independent special solutions have to be selected
($v^2,\ldots,v^q$ and $u^1,\ldots,u^p$),
whereas from (\ref{ql1}) only one solution $v^1$ is needed.
Together, the expressions for the symmetry and similarity variables must
define a non-singular transformation $x,y \rightarrow u,v$.
Different special solutions selected at this stage result in different
DEs which are equivalent under point transformations but may look quite
different. A more complicated transformation will in general only
complicate the new DE(s) compared with a simpler one.
We therefore seek the simplest possible special
solutions of (\ref{ql1}), (\ref{ql2}). They also
have to be simple because the transformation has to be inverted, i.e.\ solved
for the old variables, in order to carry out the transformation.
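For illustration, consider a single ODE ($p=q=1$) with the scaling symmetry
$X = x\partial_x + y\partial_y$. Equation (\ref{ql1}),
$x\,v^1,_x + y\,v^1,_y = 1$, has the simple special solution $v^1 = \ln x$,
and (\ref{ql2}), $x\,u,_x + y\,u,_y = 0$, has the special solution $u = y/x$.
The inverse transformation $x = e^{v^1},\; y = u\,e^{v^1}$ is regular for
$x>0$, and in the new variables $X = \partial_{v^1}$ as required by
(\ref{sbed3}).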
The following steps are performed in the corresponding mode of the
program {\tt APPLYSYM}:
\begin{itemize}
\item The user is asked to specify a symmetry by selecting one symmetry
from all the known symmetries or by specifying a linear combination of them.
\item Through a call of the procedure {\tt QUASILINPDE} (described in a later
section) the two linear first order PDEs (\ref{ql1}), (\ref{ql2}) are
investigated and, if possible, solved.
\item From the general solution of (\ref{ql1}) one special solution
is selected and from (\ref{ql2}) $p+q-1$ special
solutions are selected, all of which should be as simple as possible.
\item The user is asked whether the symmetry variable should be one of the
independent variables (as it has been assumed so far) or one of the new
functions (then only derivatives of this function and not the function itself
turn up in the new DE(s)).
\item Through a call of the procedure {\tt DETRAFO} the transformation
$x^i,y^\alpha \rightarrow v^j,u^\beta$ of the DE(s) $H_A=0$ is finally done.
\item The program returns to the starting menu.
\end{itemize}
%---------------------------------------
\subsection{The second mode: Generalization of special solutions}
A second application of infinitesimal symmetries is the generalization
of a known special solution given in implicit form through
$0 = F(x^i,y^\alpha)$. If one knows a symmetry variable $v^1$ and
similarity variables $v^r, u^\alpha,\;\;2\leq r\leq q$ then
$v^1$ can be shifted by a constant $c$ because of
$\partial_{v^1}H_A = 0$ and
therefore the DEs $0 = H_A(v^r,u^\alpha,u^\beta_j,\ldots)$
are unaffected by the shift. Hence from
\[0 = F(x^i, y^\alpha) = F(x^i(v^j,u^\beta), y^\alpha(v^j,u^\beta)) =
\bar{F}(v^j,u^\beta)\] follows that
\[ 0 = \bar{F}(v^1+c,v^r,u^\beta) =
\bar{F}(v^1(x^i,y^\alpha)+c, v^r(x^i,y^\alpha), u^\beta(x^i,y^\alpha))\]
defines implicitly a generalized solution $y^\alpha=y^\alpha(x^i,c)$.
This generalization works only if $\partial_{v^1}\bar{F} \neq 0$ and
if $\bar{F}$ does not already contain
a constant added to $v^1$.
The method above requires knowing $x^i=x^i(u^\beta,v^j),\;
y^\alpha=y^\alpha(u^\beta,v^j)$ \underline{and}
$u^\alpha = u^\alpha(x^j,y^\beta), v^\alpha = v^\alpha(x^j,y^\beta)$,
which may be practically impossible.
It is better to integrate $x^i,y^\alpha$ along $X$:
\begin{equation}
\frac{d\bar{x}^i}{d\varepsilon} = \xi^i(\bar{x}^j(\varepsilon),
\bar{y}^\beta(\varepsilon)), \;\;\;\;\;
\frac{d\bar{y}^\alpha}{d\varepsilon} = \eta^\alpha(\bar{x}^j(\varepsilon),
\bar{y}^\beta(\varepsilon))
\label{ODEsys}
\end{equation}
with initial values $\bar{x}^i = x^i, \bar{y}^\alpha = y^\alpha$
for $\varepsilon = 0.$
(This ODE-system is the characteristic system of (\ref{ql2}).)
Knowing only the finite transformations
\begin{equation}
\bar{x}^i = \bar{x}^i(x^j,y^\beta,\varepsilon),\;\;
\bar{y}^\alpha = \bar{y}^\alpha(x^j,y^\beta,\varepsilon) \label{ODEsol}
\end{equation}
gives immediately the inverse transformation
$x^i = x^i(\bar{x}^j,\bar{y}^\beta,\varepsilon),\;\;
y^\alpha = y^\alpha(\bar{x}^j,\bar{y}^\beta,\varepsilon)$
just by $\varepsilon \rightarrow -\varepsilon$ and renaming
$x^i,y^\alpha \leftrightarrow \bar{x}^i,\bar{y}^\alpha.$
The special solution $0 = F(x^i,y^\alpha)$
is generalized by the new constant
$\varepsilon$ through
\[ 0 = F(x^i,y^\alpha) = F(x^i(\bar{x}^j,\bar{y}^\beta,\varepsilon),
y^\alpha(\bar{x}^j,\bar{y}^\beta,\varepsilon)) \]
after dropping the $\bar{~}$.
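As a trivial illustration, for the symmetry $X = \partial_x$ of an
$x$-independent DE the system (\ref{ODEsys}) gives the finite transformation
$\bar{x} = x + \varepsilon,\; \bar{y}^\alpha = y^\alpha$, so a special
solution $0 = F(x,y^\alpha)$ is generalized to
$0 = F(x-\varepsilon,\,y^\alpha)$ with the new constant $\varepsilon$.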
The steps performed in the corresponding mode of the
program {\tt APPLYSYM} show features of both techniques:
\begin{itemize}
\item The user is asked to specify a symmetry by selecting one symmetry
from all the known symmetries or by specifying a linear combination of them.
\item The special solution to be generalized and the name of the new
constant have to be put in.
\item Through a call of the procedure {\tt QUASILINPDE}, the PDE (\ref{ql1})
is solved which amounts to a solution of its characteristic ODE system
(\ref{ODEsys}) where $v^1=\varepsilon$.
\item {\tt QUASILINPDE} returns a list of constant expressions
\begin{equation}
c_i = c_i(x^k, y^\beta, \varepsilon),\;\;1\leq i\leq p+q
\end{equation}
which are solved for
$x^j=x^j(c_i,\varepsilon),\;\; y^\alpha=y^\alpha(c_i,\varepsilon)$
to obtain the generalized solution through
\[ 0 = F(x^j, y^\alpha)
= F( x^j(c_i(x^k, y^\beta, 0), \varepsilon),
y^\alpha(c_i(x^k, y^\beta, 0), \varepsilon)). \]
\item The new solution is available for further generalizations w.r.t.\ other
symmetries.
\end{itemize}
If one would like to generalize a given special solution with $m$ new
constants because $m$ symmetries are known, then one could either run the
whole program $m$ times, each time with a different symmetry, or run the
program once with a linear combination of $m$ symmetry generators, which
again is a symmetry generator. Running the program once adds one constant,
but in addition we have the $m-1$ arbitrary constants in the linear
combination of the symmetries, so $m$ new constants are added in total.
Usually one will generalize the solution step by step, so that the difficulty
of solving (\ref{ODEsys}) increases only gradually.
%---------------------------------------
\subsection{Syntax}
The call of {\tt APPLYSYM} is
{\tt APPLYSYM}(\{{\it de}, {\it fun}, {\it var}\}, \{{\it sym}, {\it cons}\});
\begin{itemize}
\item {\it de} is a single DE or a list of DEs in the form of a vanishing
expression or in the form $\ldots=\ldots\;\;$.
\item {\it fun} is the single function or the list of functions occurring
in {\it de}.
\item {\it var} is the single variable or the list of variables in {\it de}.
\item {\it sym} is a linear combination of all symmetries, each with a
different constant coefficient, in the form of a list of the $\xi^i$ and
$\eta^\alpha$: \{xi\_\ldots=\ldots,\ldots,eta\_\ldots=\ldots,\ldots\},
where the indices after `xi\_' are the variable names and after `eta\_'
the function names.
\item {\it cons} is the list of constants in {\it sym}, one constant for each
symmetry.
\end{itemize}
The list that is the first argument of {\tt APPLYSYM} is the same as the
first argument of {\tt LIEPDE} and the
second argument is the list that {\tt LIEPDE} returns without its first
element (the unsolved conditions). An example is given below.
What {\tt APPLYSYM} returns depends on the mode performed last.
After mode 1 the return value is \\
\{\{{\it newde}, {\it newfun}, {\it newvar}\}, {\it trafo}\} \\
where
\begin{itemize}
\item {\it newde} lists the transformed equation(s)
\item {\it newfun} lists the new function name(s)
\item {\it newvar} lists the new variable name(s)
\item {\it trafo} lists the transformations $x^i=x^i(v^j,u^\beta),
y^\alpha=y^\alpha(v^j,u^\beta)$
\end{itemize}
After mode 2, {\tt APPLYSYM} returns the generalized special solution.
%---------------------------------------
\subsection{Example: A second order ODE}
Weyl's class of solutions of Einstein's field equations consists of
axisymmetric time independent metrics of the form
\begin{equation}
{\rm{d}} s^2 = e^{-2 U} \left[ e^{2 k} \left( \rm{d} \rho^2 + \rm{d}
z^2 \right)+\rho^2 \rm{d} \varphi^2 \right] - e^{2 U} \rm{d} t^2,
\end{equation}
where $U$ and $k$ are functions of $\rho$ and $z$. If one is interested in
generalizing these solutions to have a time dependence, then the resulting
DEs can be transformed such that a single (lengthy) third order ODE for $U$
results which contains only $\rho$ derivatives \cite{Markus}. Because $U$
appears only as a derivative, a substitution
\begin{equation}
g = dU/d\rho \label{g1dgl}
\end{equation}
lowers the order and the introduction of a function
\begin{equation}
h = \rho g - 1 \label{g2dgl}
\end{equation}
simplifies the ODE to
\begin{equation}
0 = 3\rho^2h\,h''
-5\rho^2\,h'^2+5\rho\,h\,h'-20\rho\,h^3h'-20\,h^4+16\,h^6+4\,h^2, \label{hdgl}
\end{equation}
where $'= d/d\rho$.
Calling {\tt LIEPDE} through
\small \begin{verbatim}
depend h,r;
prob:={{-20*h**4+16*h**6+3*r**2*h*df(h,r,2)+5*r*h*df(h,r)
-20*h**3*r*df(h,r)+4*h**2-5*r**2*df(h,r)**2},
{h}, {r}};
sym:=liepde(prob, {"point"},{},{});
end; \end{verbatim} \normalsize
gives \small \begin{verbatim}
3 2
sym := {{}, {xi_r= - c10*r - c11*r, eta_h=c10*h*r }, {c10,c11}}.
\end{verbatim} \normalsize
All conditions have been solved because the first element of {\tt sym}
is $\{\}$. The two existing symmetries are therefore
\begin{equation}
- \rho^3 \partial_{\rho} + h \rho^2 \,\partial_{h} \;\;\;\;\;\;\mbox{and}
\;\;\;\;\;\;\rho \partial_{\rho}.
\end{equation}
Corresponding finite
transformations can be calculated with {\tt APPLYSYM} through
\small \begin{verbatim}
newde:=applysym(prob,rest sym);
\end{verbatim} \normalsize
The interactive session is given below with the user input following
the prompt `{\tt Input:3:}' or following `?'. (Empty lines have been deleted.)
\small \begin{verbatim}
Do you want to find similarity and symmetry variables (enter `1;')
or generalize a special solution with new parameters (enter `2;')
or exit the program (enter `;')
Input:3: 1;
\end{verbatim} \normalsize
We enter `1;' because we want to reduce dependencies by finding similarity
variables and one symmetry variable and then doing the transformation such
that the symmetry variable does not explicitly occur in the DE.
\small \begin{verbatim}
---------------------- The 1. symmetry is:
3
xi_r= - r
2
eta_h=h*r
---------------------- The 2. symmetry is:
xi_r= - r
----------------------
Which single symmetry or linear combination of symmetries
do you want to apply?
Enter an expression with `sy_(i)' for the i'th symmetry.
sy_(1);
\end{verbatim} \normalsize
We could have entered `sy\_(2);' or a combination of both as well,
in which case the calculation would have run differently.
\small \begin{verbatim}
The symmetry to be applied in the following is
3 2
{xi_r= - r ,eta_h=h*r }
Enter the name of the new dependent variables:
Input:3: u;
Enter the name of the new independent variables:
Input:3: v;
\end{verbatim} \normalsize
This was the input part, now the real calculation starts.
\small \begin{verbatim}
The ODE/PDE (-system) under investigation is :
2 2 2 3
0 = 3*df(h,r,2)*h*r - 5*df(h,r) *r - 20*df(h,r)*h *r
6 4 2
+ 5*df(h,r)*h*r + 16*h - 20*h + 4*h
for the function(s) : h.
It will be looked for a new dependent variable u
and an independent variable v such that the transformed
de(-system) does not depend on u or v.
1. Determination of the similarity variable
2
The quasilinear PDE: 0 = r *(df(u_,h)*h - df(u_,r)*r).
The equivalent characteristic system:
3
0= - df(u_,r)*r
2
0= - r *(df(h,r)*r + h)
for the functions: h(r) u_(r).
\end{verbatim} \normalsize
The PDE is equation (\ref{ql2}).
\small \begin{verbatim}
The general solution of the PDE is given through
0 = ff(u_,h*r)
with arbitrary function ff(..).
A suggestion for this function ff provides:
0 = - h*r + u_
Do you like this choice? (Y or N)
?y
\end{verbatim} \normalsize
For the following calculation only a single special solution of the PDE is
necessary
and this has to be specified from the general solution by choosing a special
function {\tt ff}. (This function is called {\tt ff} to prevent a clash with
names of user variables/functions.) In principle any choice of {\tt ff} would
work, if it defines a non-singular coordinate transformation, i.e.\ here $r$
must be a function of $u\_$. If we have $q$ independent variables and
$p$ functions of them then {\tt ff} has $p+q$ arguments. Because of the
condition $0 = ${\tt ff} one essentially has the freedom of choosing a function
of $p+q-1$ arguments freely. This freedom is also necessary to select $p+q-1$
different functions {\tt ff} and to find as many functionally independent
solutions $u\_$, which all become the new similarity variables. $p$ of them
become the new functions $u^\alpha$ and $q-1$ of them the new variables
$v^2,\ldots,v^q$. Here we have $p=q=1$ (one single ODE).
Though the program could have done this step alone once the general solution
{\tt ff(..)} is known, the user can intervene here to enter a simpler
solution, if possible.
\small \begin{verbatim}
2. Determination of the symmetry variable
2 3
The quasilinear PDE: 0 = df(u_,h)*h*r - df(u_,r)*r - 1.
The equivalent characteristic system:
3
0=df(r,u_) + r
2
0=df(h,u_) - h*r
for the functions: r(u_) h(u_) .
New attempt with a different independent variable
The equivalent characteristic system:
2
0=df(u_,h)*h*r - 1
2
0=r *(df(r,h)*h + r)
for the functions: r(h) u_(h) .
The general solution of the PDE is given through
2 2 2
- 2*h *r *u_ + h
0 = ff(h*r,--------------------)
2
with arbitrary function ff(..).
A suggestion for this function ff(..) yields:
2 2
h *( - 2*r *u_ + 1)
0 = ---------------------
2
Do you like this choice? (Y or N)
?y
\end{verbatim} \normalsize
Similar to above.
\small \begin{verbatim}
The suggested solution of the algebraic system which will
do the transformation is:
sqrt(v)*sqrt(2)
{h=sqrt(v)*sqrt(2)*u,r=-----------------}
2*v
Is the solution ok? (Y or N)
?y
In the intended transformation shown above the dependent
variable is u and the independent variable is v.
The symmetry variable is v, i.e. the transformed expression
will be free of v.
Is this selection of dependent and independent variables ok? (Y or N)
?n
\end{verbatim} \normalsize
So far we assumed that the symmetry variable is one of the new variables, but
of course we could also choose it to be one of the new functions.
If it is one of the functions then only derivatives of this function occur
in the new DE(s), not the function itself. If it is one of the variables then
this variable will not occur explicitly.
In our case we prefer (without any strong reason) to have the function as the
symmetry variable. We therefore answered `no'. As a consequence, $u$ and
$v$ exchange names, such that all new functions are still called $u$
and all new variables $v$:
\small \begin{verbatim}
Please enter a list of substitutions. For example, to
make the variable, which is so far call u1, to an
independent variable v2 and the variable, which is
so far called v2, to an dependent variable u1,
enter: `{u1=v2, v2=u1};'
Input:3: {u=v,v=u};
The transformed equation which should be free of u:
3 6 2 3
0=3*df(u,v,2)*v - 16*df(u,v) *v - 20*df(u,v) *v + 5*df(u,v)
Do you want to find similarity and symmetry variables (enter `1;')
or generalize a special solution with new parameters (enter `2;')
or exit the program (enter `;')
Input:3: ;
\end{verbatim}
We stop here. The following is returned from our {\tt APPLYSYM} call:
\small \begin{verbatim}
3 6 2 3
{{{3*df(u,v,2)*v - 16*df(u,v) *v - 20*df(u,v) *v + 5*df(u,v)},
{u},
{v}},
sqrt(u)*sqrt(2)
{r=-----------------, h=sqrt(u)*sqrt(2)*v }}
2*u
\end{verbatim} \normalsize
The use of {\tt APPLYSYM} effectively provided us with the finite
transformation
\begin{equation}
\rho=(2\,u)^{-1/2},\;\;\;\;\;h=(2\,u)^{1/2}\,v \label{trafo1}
\end{equation}
and the new ODE
\begin{equation}
0 = 3u''v - 16u'^3v^6 - 20u'^2v^3 + 5u' \label{udgl}
\end{equation}
where $u=u(v)$ and $'=d/dv.$
Using one symmetry we reduced the second order ODE (\ref{hdgl})
to a first order ODE (\ref{udgl}) for $u'$ plus one
integration. The second symmetry can be used to reduce the remaining ODE
to an integration too by introducing a variable $w$ through $v^3d/dv = d/dw$,
i.e. $w = -1/(2v^2)$. With
\begin{equation}
p=du/dw \label{udot}
\end{equation}
the remaining ODE is
\[0 = 3\,w\,\frac{dp}{dw} + 2\,p\,(p+1)(4\,p+1) \]
with solution
\[ \tilde{c}w^{-2}/4 = \tilde{c}v^4 = \frac{p^3(p+1)}{(4\,p+1)^4},\;\;\;
\tilde{c}=const. \]
Writing (\ref{udot}) as $p = v^3(du/dp)/(dv/dp)$ we get $u$ by integration
and with (\ref{trafo1}) further a parametric solution for $\rho,h$:
\begin{eqnarray}
\rho & = & \left(\frac{3c_1^2(2p-1)}{p^{1/2}(p+1)^{1/2}}+c_2\right)^{-1/2} \\
h & = & \frac{(c_2p^{1/2}(p+1)^{1/2}+6c_1^2p-3c_1^2)^{1/2}p^{1/2}}{c_1(4p+1)}
\end{eqnarray}
where $c_1, c_2 = const.$ and $c_1=\tilde{c}^{1/4}.$ Finally, the metric
function $U(p)$ is obtained as an integral from (\ref{g1dgl}),(\ref{g2dgl}).
%---------------------------------------
\subsection{Limitations of {\tt APPLYSYM}}
Restrictions on the applicability of the program {\tt APPLYSYM} result
from limitations of the program {\tt QUASILINPDE} described in a section
below. Essentially this means that symmetry generators may only be
polynomially non-linear in $x^i, y^\alpha$.
Though even then solvability cannot be guaranteed, the
generators of Lie-symmetries are mostly so simple that the
resulting PDE (\ref{PDE}) and the corresponding characteristic
ODE-system have a good chance of being solvable.
Apart from these limitations, implied by the solution of differential
equations with {\tt CRACK} and of algebraic equations with {\tt SOLVE},
the program {\tt APPLYSYM} itself is free of restrictions,
i.e.\ if improved versions of {\tt CRACK} and {\tt SOLVE}
become available then {\tt APPLYSYM} does not have to be changed.
Currently, whenever a computational step cannot be performed,
the user is informed and has the possibility of interactively entering
the solution of the unsolved algebraic system or the unsolved linear PDE.
%-------------------------------------------------------------------------
\section{Solving quasilinear PDEs}
%---------------------------------------
\subsection{The content of {\tt QUASILINPDE}}
The generalization of special solutions of DEs as well as the computation of
similarity and symmetry variables involve the general solution of single
first order quasilinear PDEs.
The procedure {\tt QUASILINPDE} is a general procedure
for determining the general solution of
PDEs of the form
\begin{equation}
a_1(w_i,\phi)\phi_{w_1} + a_2(w_i,\phi)\phi_{w_2} + \ldots +
a_n(w_i,\phi)\phi_{w_n} = b(w_i,\phi) \label{PDE}
\end{equation}
in $n$ independent variables $w_i, i=1\ldots n$ for one unknown function
$\phi=\phi(w_i)$.
\begin{enumerate}
\item
The first step in solving a quasilinear PDE (\ref{PDE})
is the formulation of the corresponding characteristic ODE-system
\begin{eqnarray}
\frac{dw_i}{d\varepsilon} & = & a_i(w_j,\phi) \label{char1} \\
\frac{d\phi}{d\varepsilon} & = & b(w_j,\phi) \label{char2}
\end{eqnarray}
for $\phi, w_i$ regarded now as functions of one variable $\varepsilon$.
Because the $a_i$ and $b$ do not depend explicitly on $\varepsilon$, one of
the equations (\ref{char1}), (\ref{char2}) with non-vanishing right hand side
can be used to divide all the others, yielding a system
with one ODE fewer to solve.
If the equation to divide through is one of
(\ref{char1}) then the remaining system would be
\begin{eqnarray}
\frac{dw_i}{dw_k} & = & \frac{a_i}{a_k} , \;\;\;i=1,2,\ldots k-1,k+1,\ldots n
\label{char3} \\
\frac{d\phi}{dw_k} & = & \frac{b}{a_k} \label{char4}
\end{eqnarray}
with the independent variable $w_k$ instead of $\varepsilon$.
If instead we divide through equation
(\ref{char2}) then the remaining system would be
\begin{eqnarray}
\frac{dw_i}{d\phi} & = & \frac{a_i}{b} , \;\;\;i=1,2,\ldots n
\label{char3a}
\end{eqnarray}
with the independent variable $\phi$ instead of $\varepsilon$.
The equation to divide through is chosen by a
subroutine with a heuristic to find the ``simplest'' non-zero
right hand side ($a_k$ or $b$), i.e.\ one which
\begin{itemize}
\item is constant or
\item depends only on one variable or
\item is a product of factors, each of which depends only on
one variable.
\end{itemize}
One purpose of this division is to reduce the number of ODEs by one.
Secondly, the general solution of (\ref{char1}), (\ref{char2}) involves
an additive constant to $\varepsilon$ which is not relevant and would
have to be set to zero. By dividing through one ODE we eliminate
$\varepsilon$ and avoid the problem of having to identify this constant in
the general solution in order to set it to zero.
\item % from enumerate
To solve the system (\ref{char3}), (\ref{char4}) or (\ref{char3a}),
the procedure {\tt CRACK} is called.
Although designed primarily for the solution of overdetermined
PDE-systems, {\tt CRACK} can also be used to solve simple
non-overdetermined ODE-systems. This solution
process is not completely algorithmic. Improved versions of {\tt CRACK}
could be used without making any changes to {\tt QUASILINPDE}
necessary.
If the characteristic ODE-system cannot be solved in the form
(\ref{char3}), (\ref{char4}) or (\ref{char3a})
then successively all other ODEs of (\ref{char1}), (\ref{char2})
with non-vanishing right hand side are used for division until
one is found
such that the resulting ODE-system can be solved completely.
Otherwise the PDE cannot be solved by {\tt QUASILINPDE}.
\item % from enumerate
If the characteristic ODE-system (\ref{char1}), (\ref{char2}) has been
integrated completely and in full generality to the implicit solution
\begin{equation}
0 = G_i(\phi, w_j, c_k, \varepsilon),\;\;
i,k=1,\ldots,n+1,\;\;j=1,\ldots,n \label{charsol1}
\end{equation}
then according to the general theory of solving first order PDEs,
$\varepsilon$ has
to be eliminated from one of the equations and substituted into the
others, leaving $n$ equations.
Also the constant that turns up additively to $\varepsilon$
has to be set to zero. Both tasks are fulfilled automatically
if, as described above, $\varepsilon$ is already eliminated
from the beginning by dividing all equations of (\ref{char1}),
(\ref{char2})
through one of them.
Either way one ends up with $n$ equations
\begin{equation}
0=g_i(\phi,w_j,c_k),\;\;i,j,k=1\ldots n \label{charsol2}
\end{equation}
involving $n$ constants $c_k$.
The final step is to solve (\ref{charsol2}) for the $c_i$ to obtain
\begin{equation}
c_i = c_i(\phi, w_1,\ldots ,w_n) \;\;\;\;\;i=1,\ldots n . \label{cons}
\end{equation}
The final solution $\phi = \phi(w_i)$ of the PDE (\ref{PDE}) is then
given implicitly through
\[ 0 = F(c_1(\phi,w_i),c_2(\phi,w_i),\ldots,c_n(\phi,w_i)) \]
where $F$ is an arbitrary function with $n$ arguments.
\end{enumerate}
%---------------------------------------
\subsection{Syntax}
The call of {\tt QUASILINPDE} is \\
{\tt QUASILINPDE}({\it de}, {\it fun}, {\it varlist});
\begin{itemize}
\item
{\it de} is the differential expression which vanishes due to the PDE
{\it de}$\; = 0$ or, {\it de} may be the differential equation itself in the
form $\;\;\ldots = \ldots\;\;$.
\item
{\it fun} is the unknown function.
\item
{\it varlist} is the list of variables of {\it fun}.
\end{itemize}
The result of {\tt QUASILINPDE} is a list of general solutions
\[ \{{\it sol}_1, {\it sol}_2, \ldots \}. \]
If {\tt QUASILINPDE} can not solve the PDE then it returns $\{\}$.
Each solution ${\it sol}_i$ is a list of expressions
\[ \{{\it ex}_1, {\it ex}_2, \ldots \} \]
such that the dependent function ($\phi$ in (\ref{PDE})) is determined
implicitly through an arbitrary function $F$ and the algebraic
equation \[ 0 = F({\it ex}_1, {\it ex}_2, \ldots). \]
%---------------------------------------
\subsection{Examples}
{\em Example 1:}\\
To solve the quasilinear first order PDE \[1 = xu,_x + uu,_y - zu,_z\]
for the function $u = u(x,y,z),$ the input would be
\small \begin{verbatim}
depend u,x,y,z;
de:=x*df(u,x)+u*df(u,y)-z*df(u,z) - 1;
varlist:={x,y,z};
QUASILINPDE(de,u,varlist);
\end{verbatim} \normalsize
In this example the procedure returns
\[\{ \{ x/e^u, ze^u, u^2 - 2y \} \},\]
i.e. there is one general solution (because the outer list has only one
element which itself is a list) and $u$ is given implicitly through
the algebraic equation
\[ 0 = F(x/e^u, ze^u, u^2 - 2y)\]
with arbitrary function $F.$ \\
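This result can be checked by hand: the characteristic system of the PDE
reads
\[ \frac{dx}{d\varepsilon}=x, \;\;\;
\frac{dy}{d\varepsilon}=u, \;\;\;
\frac{dz}{d\varepsilon}=-z, \;\;\;
\frac{du}{d\varepsilon}=1. \]
Dividing through the last equation, i.e.\ letting $u$ play the role of
$\varepsilon$, gives $dx/du=x,\; dy/du=u,\; dz/du=-z$ with the constant
expressions $x/e^u$, $ze^u$ and $u^2-2y$, precisely the arguments of $F$
above. \\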
{\em Example 2:}\\
For the linear inhomogeneous PDE
\[ 0 = y z,_x + x z,_y - 1, \;\;\;\;\mbox{for}\;\;\;\;z=z(x,y)\]
{\tt QUASILINPDE} returns the result that for an arbitrary function $F,$ the
equation
\[ 0 = F\left(\frac{x+y}{e^z},e^z(x-y)\right) \]
defines the general solution for $z$. \\
{\em Example 3:}\\
For the linear homogeneous PDE (3.8) from \cite{KamkePDE}
\[ 0 = x w,_x + (y+z)(w,_y - w,_z), \;\;\;\;\mbox{for}\;\;\;\;w=w(x,y,z)\]
{\tt QUASILINPDE} returns the result
that for an arbitrary function $F,$ the equation
\[ 0 = F\left(w, \;y+z, \;\ln(x)(y+z)-y\right) \]
defines the general solution for $w$, i.e.\ for any function $f$
\[ w = f\left(y+z, \;\ln(x)(y+z)-y\right) \]
solves the PDE.
%---------------------------------------
\subsection{Limitations of {\tt QUASILINPDE}}
One restriction on the applicability of {\tt QUASILINPDE} results from
the program {\tt CRACK} which tries to solve the
characteristic ODE-system of the PDE. So far {\tt CRACK} can be
applied only to polynomially non-linear DEs, i.e.\ the characteristic
ODE-system (\ref{char3}),(\ref{char4}) or (\ref{char3a}) may
only be polynomially non-linear, i.e.\ in the PDE (\ref{PDE})
the expressions $a_i$ and $b$ may only be rational in $w_j,\phi$.
The task of {\tt CRACK} is simplified as (\ref{charsol1}) does not have to
be solved for $w_j, \phi$. On the other hand (\ref{charsol1}) has to be
solved for the $c_i$. This gives a
second restriction coming from the REDUCE function {\tt SOLVE}.
Though {\tt SOLVE} can be applied
to polynomial and transcendental equations, again no guarantee of
solvability can be given.
%-------------------------------------------------------------------------
\section{Transformation of DEs}
%---------------------------------------
\subsection{The content of {\tt DETRAFO}}
Finally, after the finite transformations have been found,
the program {\tt APPLYSYM} calls the procedure
{\tt DETRAFO} to perform the transformations. {\tt DETRAFO}
can also be used on its own to perform point transformations or higher
order transformations, which involve a considerable computational effort if
the differential order of the expression to be transformed is high and
if many dependent and independent variables are involved.
This is especially useful if one wants to experiment
and try out different coordinate transformations interactively,
using {\tt DETRAFO} as a standalone procedure.
To run {\tt DETRAFO}, the old functions $y^{\alpha}$ and old
variables $x^i$ must be
known explicitly in terms of algebraic or
differential expressions of the new functions $u^{\beta}$
and new variables $v^j$. Then for point transformations the identity
\begin{eqnarray}
dy^{\alpha} & = & \left(y^{\alpha},_{v^i} +
y^{\alpha},_{u^{\beta}}u^{\beta},_{v^i}\right) dv^i \\
& = & y^{\alpha},_{x^j}dx^j \\
& = & y^{\alpha},_{x^j}\left(x^j,_{v^i} +
x^j,_{u^{\beta}}u^{\beta},_{v^i}\right) dv^i
\end{eqnarray}
provides the transformation
\begin{equation}
y^{\alpha},_{x^j} = \frac{dy^\alpha}{dv^i}\cdot
\left(\frac{dx^j}{dv^i}\right)^{-1} \label{trafo}
\end{equation}
with $\det\left(dx^j/dv^i\right) \neq 0$ because of the regularity
of the transformation which is checked by {\tt DETRAFO}. Non-regular
transformations are not performed.
{\tt DETRAFO} is not restricted to point transformations.
In the case of
contact- or higher order transformations, the total
derivatives $dy^{\alpha}/dv^i$ and $dx^j/dv^i$ then only include all
$v^i-$ derivatives of $u^{\beta}$ which occur in
\begin{eqnarray*}
y^{\alpha} & = & y^{\alpha}(v^i,u^{\beta},u^{\beta},_{v^j},\ldots) \\
x^k & = & x^k(v^i,u^{\beta},u^{\beta},_{v^j},\ldots).
\end{eqnarray*}
%---------------------------------------
\subsection{Syntax}
The call of {\tt DETRAFO} is
\begin{tabbing}
{\tt DETRAFO}(\=\{{\it ex}$_1$, {\it ex}$_2$, \ldots , {\it ex}$_m$\}, \\
\>\{{\it ofun}$_1=${\it fex}$_1$, {\it ofun}$_2=${\it fex}$_2$,
\ldots ,{\it ofun}$_p=${\it fex}$_p$\}, \\
\>\{{\it ovar}$_1=${\it vex}$_1$, {\it ovar}$_2=${\it vex}$_2$, \ldots ,
{\it ovar}$_q=${\it vex}$_q$\}, \\
\>\{{\it nfun}$_1$, {\it nfun}$_2$, \ldots , {\it nfun}$_p$\},\\
\>\{{\it nvar}$_1$, {\it nvar}$_2$, \ldots , {\it nvar}$_q$\});
\end{tabbing}
where $m,p,q$ are arbitrary.
\begin{itemize}
\item
The {\it ex}$_i$ are differential expressions to be transformed.
\item
The second list is the list of old functions {\it ofun} expressed
as expressions {\it fex} in terms
of new functions {\it nfun} and new independent variables {\it nvar}.
\item
Similarly the third list expresses the old independent variables {\it ovar}
as expressions {\it vex} in terms of new functions
{\it nfun} and new independent variables {\it nvar}.
\item
The last two lists include the new functions {\it nfun}
and new independent variables {\it nvar}.
\end{itemize}
Names for {\it ofun, ovar, nfun} and {\it nvar} can be arbitrarily
chosen.
As its result {\tt DETRAFO} returns the first argument of its input,
i.e.\ the list
\[\{{\it ex}_1, {\it ex}_2, \ldots , {\it ex}_m\}\]
with all ${\it ex}_i$ transformed.
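As a small illustration (not taken from a program session in this paper), the
point transformation $x = \ln v,\; y = u$ applied to the ODE $0 = y' - y$
could be performed through
\small \begin{verbatim}
depend u,v;
DETRAFO({df(y,x) - y}, {y=u}, {x=log(v)}, {u}, {v});
\end{verbatim} \normalsize
Since $dy/dx = (du/dv)/(dx/dv) = v\,du/dv$, the returned list should be
equivalent to $\{v\,df(u,v) - u\}$.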
%---------------------------------------
\subsection{Limitations of {\tt DETRAFO}}
The only requirement is that
the old independent variables $x^i$ and old functions $y^\alpha$ must be
given explicitly in terms of new variables $v^j$ and new functions $u^\beta$
as indicated in the syntax.
Then all calculations involve only differentiations and basic algebra.
%-------------------------------------------------------------------------
\section{Availability}
The programs run under {\tt REDUCE 3.6} and are available
by anonymous ftp from \\ {\tt ftp.maths.qmw.ac.uk}, directory
{\tt pub/tw}.
\begin{thebibliography}{99}
\bibitem{WHer} W.\,Hereman, Chapter 13 in vol 3 of the CRC Handbook of
Lie Group Analysis of Differential Equations, Ed.: N.H.\,Ibragimov,
CRC Press, Boca Raton, Florida (1995).
Systems described in this paper are among others: \\
DELiA (Alexei Bocharov et al.) Pascal \\
DIFFGROB2 (Liz Mansfield) Maple \\
DIMSYM (James Sherring and Geoff Prince) REDUCE \\
HSYM (Vladimir Gerdt) Reduce \\
LIE (V. Eliseev, R.N. Fedorova and V.V. Kornyak) Reduce \\
LIE (Alan Head) muMath \\
Lie (Gerd Baumann) Mathematica \\
LIEDF/INFSYM (Peter Gragert and Paul Kersten) Reduce \\
Liesymm (John Carminati, John Devitt and Greg Fee) Maple \\
MathSym (Scott Herod) Mathematica \\
NUSY (Clara Nucci) Reduce \\
PDELIE (Peter Vafeades) Macsyma \\
SPDE (Fritz Schwarz) Reduce and Axiom \\
SYM\_DE (Stanly Steinberg) Macsyma \\
Symmgroup.c (Dominique Berube and Marc de Montigny) Mathematica \\
STANDARD FORM (Gregory Reid and Alan Wittkopf) Maple \\
SYMCAL (Gregory Reid) Macsyma and Maple \\
SYMMGRP.MAX (Benoit Champagne, Willy Hereman and Pavel Winternitz) Macsyma \\
LIE package (Khai Vu) Maple \\
Toolbox for symmetries (Mark Hickman) Maple \\
Lie symmetries (Jeffrey Ondich and Nick Coult) Mathematica.
\bibitem{lie1} S.\, Lie, Sophus Lie's 1880 Transformation Group Paper,
Translated by M.\, Ackerman, comments by R.\, Hermann, Mathematical Sciences
Press, Brookline, (1975).
\bibitem{lie2} S.\,Lie, Differentialgleichungen, Chelsea Publishing Company,
New York, (1967).
\bibitem{LIEPDE} T.\,Wolf, An efficiency improved program {\tt LIEPDE}
for determining Lie - symmetries of PDEs, Proceedings of the workshop on
Modern group theory methods in Acireale (Sicily) Nov.\,(1992).
\bibitem{Riq} C.\,Riquier, Les syst\`{e}mes d'\'{e}quations
aux d\'{e}riv\'{e}es partielles, Gauthier--Villars, Paris (1910).
\bibitem{Th} J.\,Thomas, Differential Systems, AMS, Colloquium
publications, v.\,21, N.Y.\,(1937).
\bibitem{Ja} M.\,Janet, Le\c{c}ons sur les syst\`{e}mes d'\'{e}quations aux
d\'{e}riv\'{e}es, Gauthier--Villars, Paris (1929).
\bibitem{Topu} V.L.\,Topunov, Reducing Systems of Linear Differential
Equations to a Passive Form, Acta Appl.\,Math.\,16 (1989) 191--206.
\bibitem{Alex} A.V.\,Bocharov and M.L.\,Bronstein, Efficiently
Implementing Two Methods of the Geometrical Theory of Differential
Equations: An Experience in Algorithm and Software Design, Acta.\,Appl.
Math.\,16 (1989) 143--166.
\bibitem{Olv} P.J. Olver, Applications of Lie Groups to Differential
Equations, Springer-Verlag New York (1986).
\bibitem{Reid1} G.J.\,Reid, A triangularization algorithm which
determines the Lie symmetry algebra of any system of PDEs, J.Phys.\,A:
Math.\,Gen.\,23 (1990) L853-L859.
\bibitem{FS} F.\,Schwarz, Automatically Determining Symmetries of Partial
Differential Equations, Computing 34, (1985) 91-106.
\bibitem{Fush} W.I.\,Fushchich and V.V.\,Kornyak, Computer Algebra
Application for Determining Lie and Lie--B\"{a}cklund Symmetries of
Differential Equations, J.\,Symb.\,Comp.\,7 (1989) 611--619.
\bibitem{Ka} E.\,Kamke, Differentialgleichungen, L\"{o}sungsmethoden
und L\"{o}sungen, Band 1, Gew\"{o}hnliche Differentialgleichungen,
Chelsea Publishing Company, New York, 1959.
\bibitem{KamkePDE} E.\,Kamke, Differentialgleichungen, L\"{o}sungsmethoden
und L\"{o}sungen, Band 2, Partielle Differentialgleichungen, 6.\,Aufl.,
Teubner, Stuttgart, 1979.
\bibitem{Wo} T.\,Wolf, An Analytic Algorithm for Decoupling and Integrating
systems of Nonlinear Partial Differential Equations, J.\,Comp.\,Phys.,
no.\,3, 60 (1985) 437-446 and, Zur analytischen Untersuchung und exakten
L\"{o}sung von Differentialgleichungen mit Computeralgebrasystemen,
Dissertation B, Jena (1989).
\bibitem{WoBra} T.\,Wolf, A. Brand, The Computer Algebra Package {\tt CRACK}
for Investigating PDEs, Manual for the package {\tt CRACK} in the REDUCE
network library and in Proceedings of ERCIM School on Partial
Differential Equations and Group Theory, April 1992 in Bonn, GMD Bonn.
\bibitem{WM} M.A.H.\,MacCallum, F.J.\,Wright, Algebraic Computing with REDUCE,
Clarendon Press, Oxford (1991).
\bibitem{Mal} M.A.H.\, MacCallum, An Ordinary Differential Equation
Solver for REDUCE, Proc.\, ISAAC'88, Springer Lect.\, Notes in Comp Sci.
358, 196--205.
\bibitem{Step} H.\,Stephani, Differential equations, Their solution using
symmetries, Cambridge University Press (1989).
\bibitem{Karp} V.I.\,Karpman, Phys.\,Lett.\,A 136, 216 (1989)
\bibitem{Cham} B.\,Champagne, W.\,Hereman and P.\,Winternitz, The computer
calculation of Lie point symmetries of large systems of differential
equations, Comp.\,Phys.\,Comm.\,66, 319-340 (1991)
\bibitem{Markus} M.\,Kubitza, private communication
\end{thebibliography}
\end{document}
|
"""
    get_node(graph, node_or_label)

Return the node of `graph` corresponding to `node_or_label`: a `DispatchNode`
is returned as-is, while an `AbstractString` is looked up as a node label.
"""
get_node(graph::DispatchGraph, node::T) where T<:DispatchNode = node
get_node(graph::DispatchGraph, label::T) where T<:AbstractString = begin
found = Set{DispatchNode}()
for node in nodes(graph)
if has_label(node)
get_label(node) == label && push!(found, node)
end
end
length(found) > 1 && throw(ErrorException("Labels in dispatch graph are not unique."))
length(found) < 1 && throw(ErrorException("No nodes with label $label found."))
return pop!(found)
end
get_node(graph::DispatchGraph, node::T) where T = begin
throw(ArgumentError("A node identifier can be either a " *
"::DispatchNode or ::AbstractString."))
end
"""
load_hashchain(cachedir [; compression=DEFAULT_COMPRESSION])
Loads the hashchain file found in the directory `cachedir`. Before
loading, the `compression` value is checked against the one stored
in the hashchain file (both have to match). If the file does not exist,
it is created.
"""
function load_hashchain(cachedir::String=DEFAULT_CACHE_DIR;
compression::String=DEFAULT_COMPRESSION)
cachedir = abspath(expanduser(cachedir))
file = joinpath(cachedir, DEFAULT_HASHCHAIN_FILENAME)
cachedir_outputs = joinpath(cachedir, DEFAULT_HASHCACHE_DIR)
if !ispath(cachedir_outputs)
@debug "Creating the cache directory..."
mkpath(cachedir_outputs)
end
local hashchain
if !isfile(file)
@debug "Creating a new hashchain file $file..."
hashchain = Dict{String, Any}()
store_hashchain(hashchain, cachedir, compression=compression)
else
local data
open(file, "r") do fid # read the whole JSON hashchain file
data = JSON.parse(read(fid, String))
end
if compression != data["compression"]
throw(ErrorException("Compression mismatch: $compression vs. "*
"$(data["compression"])"))
end
hashchain = data["hashchain"]
# Clean up hashchain based on what exists already on disk
# i.e. remove keys not found on disk
on_disk_hashes = map(filename->split(filename, ".")[1],
                     filter(f->isfile(joinpath(cachedir_outputs, f)),
                            readdir(cachedir_outputs)))
keys_to_delete = setdiff(keys(hashchain), on_disk_hashes)
for key in keys_to_delete
delete!(hashchain, key)
end
store_hashchain(hashchain, cachedir, compression=compression)
end
return hashchain
end
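# Hypothetical usage sketch (assumes the DEFAULT_* constants and the JSON
# dependency are defined elsewhere in this package):
#   hashchain = load_hashchain("~/.mycache", compression="gz")
#   hashchain["somenodehash"] = "someoutputhash"
#   store_hashchain(hashchain, "~/.mycache", compression="gz")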
"""
store_hashchain(hashchain, cachedir=DEFAULT_CACHE_DIR [; compression=DEFAULT_COMPRESSION, version=1])
Stores the `hashchain` object in a file named `DEFAULT_HASHCHAIN_FILENAME`,
in the directory `cachedir`. The values of `compression` and `version` are
stored as well in the file.
"""
function store_hashchain(hashchain::Dict{String, Any},
cachedir::String=DEFAULT_CACHE_DIR;
compression::String=DEFAULT_COMPRESSION,
version::Int=1)
cachedir = abspath(expanduser(cachedir))
if !ispath(cachedir)
@debug "Creating the cache directory..."
mkpath(cachedir)
end
file = joinpath(cachedir, DEFAULT_HASHCHAIN_FILENAME)
hashchain = Dict("version" => version,
"compression" => compression,
"hashchain" => hashchain)
open(file, "w+") do fid
write(fid, JSON.json(hashchain, 4))
end
end
"""
get_compressor(compression, action)
Return a `TranscodingStreams` compatible compressor or decompressor
based on the values of `compression` and `action`.
"""
function get_compressor(compression::AbstractString, action::AbstractString)
# Checks
if !(compression in ["bz2", "bzip2", "gz", "gzip", "none"])
throw(ErrorException("Unknown compression option,"*
" aborting."))
end
if !(action in ["compress", "decompress"])
throw(ErrorException("The action can only be \"compress\" or \"decompress\"."))
end
# Get compressor/decompressor
if compression == "bz2" || compression == "bzip2"
compressor = ifelse(action == "compress",
Bzip2CompressorStream,
Bzip2DecompressorStream)
elseif compression == "gz" || compression == "gzip"
compressor = ifelse(action == "compress",
GzipCompressorStream,
GzipDecompressorStream)
elseif compression == "none"
compressor = NoopStream # no compression
end
return compressor
end
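# Hypothetical usage sketch (assumes the stream types above are provided by
# CodecBzip2/CodecZlib via TranscodingStreams):
#   compressor = get_compressor("gz", "compress")
#   open("output.bin.gz", "w") do io
#       stream = compressor(io)
#       write(stream, data)
#       close(stream)
#   end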
"""
root_nodes(graph::DispatchGraph)
Return an iterable of all nodes in the graph with no input edges.
"""
function root_nodes(graph::DispatchGraph)
imap(n->graph.nodes[n], filter(1:nv(graph.graph)) do node_index
indegree(graph.graph, node_index) == 0
end)
end
|
# GraphHopper Directions API
#
# You use the GraphHopper Directions API to add route planning, navigation and route optimization to your software. E.g. the Routing API has turn instructions and elevation data and the Route Optimization API solves your logistic problems and supports various constraints like time window and capacity restrictions. Also it is possible to get all distances between all locations with our fast Matrix API.
#
# OpenAPI spec version: 1.0.0
#
# Generated by: https://github.com/swagger-api/swagger-codegen.git
#' Shipment Class
#'
#' @field id
#' @field name
#' @field priority
#' @field pickup
#' @field delivery
#' @field size
#' @field required_skills
#' @field allowed_vehicles
#' @field disallowed_vehicles
#' @field max_time_in_vehicle
#'
#' @importFrom R6 R6Class
#' @importFrom jsonlite fromJSON toJSON
#' @export
Shipment <- R6::R6Class(
'Shipment',
public = list(
`id` = NULL,
`name` = NULL,
`priority` = NULL,
`pickup` = NULL,
`delivery` = NULL,
`size` = NULL,
`required_skills` = NULL,
`allowed_vehicles` = NULL,
`disallowed_vehicles` = NULL,
`max_time_in_vehicle` = NULL,
initialize = function(`id`, `name`, `priority`, `pickup`, `delivery`, `size`, `required_skills`, `allowed_vehicles`, `disallowed_vehicles`, `max_time_in_vehicle`){
if (!missing(`id`)) {
stopifnot(is.character(`id`), length(`id`) == 1)
self$`id` <- `id`
}
if (!missing(`name`)) {
stopifnot(is.character(`name`), length(`name`) == 1)
self$`name` <- `name`
}
if (!missing(`priority`)) {
stopifnot(is.numeric(`priority`), length(`priority`) == 1)
self$`priority` <- `priority`
}
if (!missing(`pickup`)) {
stopifnot(R6::is.R6(`pickup`))
self$`pickup` <- `pickup`
}
if (!missing(`delivery`)) {
stopifnot(R6::is.R6(`delivery`))
self$`delivery` <- `delivery`
}
if (!missing(`size`)) {
stopifnot(is.list(`size`), length(`size`) != 0)
lapply(`size`, function(x) stopifnot(is.character(x)))
self$`size` <- `size`
}
if (!missing(`required_skills`)) {
stopifnot(is.list(`required_skills`), length(`required_skills`) != 0)
lapply(`required_skills`, function(x) stopifnot(is.character(x)))
self$`required_skills` <- `required_skills`
}
if (!missing(`allowed_vehicles`)) {
stopifnot(is.list(`allowed_vehicles`), length(`allowed_vehicles`) != 0)
lapply(`allowed_vehicles`, function(x) stopifnot(is.character(x)))
self$`allowed_vehicles` <- `allowed_vehicles`
}
if (!missing(`disallowed_vehicles`)) {
stopifnot(is.list(`disallowed_vehicles`), length(`disallowed_vehicles`) != 0)
lapply(`disallowed_vehicles`, function(x) stopifnot(is.character(x)))
self$`disallowed_vehicles` <- `disallowed_vehicles`
}
if (!missing(`max_time_in_vehicle`)) {
stopifnot(is.numeric(`max_time_in_vehicle`), length(`max_time_in_vehicle`) == 1)
self$`max_time_in_vehicle` <- `max_time_in_vehicle`
}
},
toJSON = function() {
ShipmentObject <- list()
if (!is.null(self$`id`)) {
ShipmentObject[['id']] <- self$`id`
}
if (!is.null(self$`name`)) {
ShipmentObject[['name']] <- self$`name`
}
if (!is.null(self$`priority`)) {
ShipmentObject[['priority']] <- self$`priority`
}
if (!is.null(self$`pickup`)) {
ShipmentObject[['pickup']] <- self$`pickup`$toJSON()
}
if (!is.null(self$`delivery`)) {
ShipmentObject[['delivery']] <- self$`delivery`$toJSON()
}
if (!is.null(self$`size`)) {
ShipmentObject[['size']] <- self$`size`
}
if (!is.null(self$`required_skills`)) {
ShipmentObject[['required_skills']] <- self$`required_skills`
}
if (!is.null(self$`allowed_vehicles`)) {
ShipmentObject[['allowed_vehicles']] <- self$`allowed_vehicles`
}
if (!is.null(self$`disallowed_vehicles`)) {
ShipmentObject[['disallowed_vehicles']] <- self$`disallowed_vehicles`
}
if (!is.null(self$`max_time_in_vehicle`)) {
ShipmentObject[['max_time_in_vehicle']] <- self$`max_time_in_vehicle`
}
ShipmentObject
},
fromJSON = function(ShipmentJson) {
ShipmentObject <- jsonlite::fromJSON(ShipmentJson)
if (!is.null(ShipmentObject$`id`)) {
self$`id` <- ShipmentObject$`id`
}
if (!is.null(ShipmentObject$`name`)) {
self$`name` <- ShipmentObject$`name`
}
if (!is.null(ShipmentObject$`priority`)) {
self$`priority` <- ShipmentObject$`priority`
}
if (!is.null(ShipmentObject$`pickup`)) {
pickupObject <- Stop$new()
pickupObject$fromJSON(jsonlite::toJSON(ShipmentObject$pickup, auto_unbox = TRUE))
self$`pickup` <- pickupObject
}
if (!is.null(ShipmentObject$`delivery`)) {
deliveryObject <- Stop$new()
deliveryObject$fromJSON(jsonlite::toJSON(ShipmentObject$delivery, auto_unbox = TRUE))
self$`delivery` <- deliveryObject
}
if (!is.null(ShipmentObject$`size`)) {
self$`size` <- ShipmentObject$`size`
}
if (!is.null(ShipmentObject$`required_skills`)) {
self$`required_skills` <- ShipmentObject$`required_skills`
}
if (!is.null(ShipmentObject$`allowed_vehicles`)) {
self$`allowed_vehicles` <- ShipmentObject$`allowed_vehicles`
}
if (!is.null(ShipmentObject$`disallowed_vehicles`)) {
self$`disallowed_vehicles` <- ShipmentObject$`disallowed_vehicles`
}
if (!is.null(ShipmentObject$`max_time_in_vehicle`)) {
self$`max_time_in_vehicle` <- ShipmentObject$`max_time_in_vehicle`
}
},
toJSONString = function() {
# Quote string fields and join vector fields so the output is valid JSON
sprintf(
'{
"id": "%s",
"name": "%s",
"priority": %s,
"pickup": %s,
"delivery": %s,
"size": [%s],
"required_skills": [%s],
"allowed_vehicles": [%s],
"disallowed_vehicles": [%s],
"max_time_in_vehicle": %s
}',
self$`id`,
self$`name`,
self$`priority`,
jsonlite::toJSON(self$`pickup`$toJSON(), auto_unbox = TRUE),
jsonlite::toJSON(self$`delivery`$toJSON(), auto_unbox = TRUE),
paste0('"', unlist(self$`size`), '"', collapse = ","),
paste0('"', unlist(self$`required_skills`), '"', collapse = ","),
paste0('"', unlist(self$`allowed_vehicles`), '"', collapse = ","),
paste0('"', unlist(self$`disallowed_vehicles`), '"', collapse = ","),
self$`max_time_in_vehicle`
)
},
fromJSONString = function(ShipmentJson) {
ShipmentObject <- jsonlite::fromJSON(ShipmentJson)
self$`id` <- ShipmentObject$`id`
self$`name` <- ShipmentObject$`name`
self$`priority` <- ShipmentObject$`priority`
StopObject <- Stop$new()
self$`pickup` <- StopObject$fromJSON(jsonlite::toJSON(ShipmentObject$pickup, auto_unbox = TRUE))
StopObject <- Stop$new()
self$`delivery` <- StopObject$fromJSON(jsonlite::toJSON(ShipmentObject$delivery, auto_unbox = TRUE))
self$`size` <- ShipmentObject$`size`
self$`required_skills` <- ShipmentObject$`required_skills`
self$`allowed_vehicles` <- ShipmentObject$`allowed_vehicles`
self$`disallowed_vehicles` <- ShipmentObject$`disallowed_vehicles`
self$`max_time_in_vehicle` <- ShipmentObject$`max_time_in_vehicle`
}
)
)
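# Hypothetical usage sketch; `Stop` is assumed to be another generated class
# of this client:
#   s <- Shipment$new(id = "s-1", name = "box", priority = 1,
#                     pickup = Stop$new(), delivery = Stop$new())
#   s$toJSON()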
|
immutable Uniform <: ContinuousUnivariateDistribution
a::Float64
b::Float64
function Uniform(a::Real, b::Real)
a < b || error("a < b required for range [a, b]")
new(float64(a), float64(b))
end
Uniform() = new(0.0, 1.0)
end
@_jl_dist_2p Uniform unif
min(d::Uniform) = d.a
max(d::Uniform) = d.b
entropy(d::Uniform) = log(d.b - d.a)
insupport(d::Uniform, x::Real) = d.a <= x <= d.b
kurtosis(d::Uniform) = -6.0 / 5.0
mean(d::Uniform) = (d.a + d.b) / 2.0
median(d::Uniform) = (d.a + d.b) / 2.0
function mgf(d::Uniform, t::Real)
a, b = d.a, d.b
return (exp(t * b) - exp(t * a)) / (t * (b - a))
end
function cf(d::Uniform, t::Real)
a, b = d.a, d.b
return (exp(im * t * b) - exp(im * t * a)) / (im * t * (b - a))
end
mode(d::Uniform) = d.a
modes(d::Uniform) = error("The uniform distribution has no modes")
rand(d::Uniform) = d.a + (d.b - d.a) * rand()
skewness(d::Uniform) = 0.0
function var(d::Uniform)
w = d.b - d.a
return w * w / 12.0
end
# fit model
function fit_mle{T <: Real}(::Type{Uniform}, x::Vector{T})
if isempty(x)
throw(ArgumentError("x cannot be empty."))
end
xmin = xmax = x[1]
for i = 2:length(x)
xi = x[i]
if xi < xmin
xmin = xi
elseif xi > xmax
xmax = xi
end
end
Uniform(xmin, xmax)
end
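# Usage sketch (Julia 0.3-era Distributions.jl syntax, matching the code
# above):
#   d = Uniform(0.0, 2.0)
#   mean(d)                           # 1.0
#   insupport(d, 1.5)                 # true
#   fit_mle(Uniform, [0.3, 1.2, 0.7]) # Uniform(0.3, 1.2)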
|
{-# LANGUAGE TemplateHaskell #-}
module FractalsTest
( runTests
) where
import Test.QuickCheck
import Data.Complex
import Fractals
import Palette
prop_toGrayscaleLength :: [Double] -> Bool
prop_toGrayscaleLength ws = length (toBitmap grayscale ws) == 4 * (length ws)
inMandelbrotSet :: Int -> Complex Double -> Bool
inMandelbrotSet nMax z0 = nMax == (fst $ order (mandelbrot z0) 2.0 nMax z0)
prop_orderMandelbrotOutside :: Positive Int -> Complex Double -> Property
prop_orderMandelbrotOutside (Positive nMax) z0 =
(mag2 z0 > 4.0) ==> not $ inMandelbrotSet nMax z0
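-- Map two complex numbers into the closed unit disk: the argument of smaller
-- magnitude is divided by the larger magnitude, so the result has magnitude
-- at most 1 (degenerate inputs map to 0).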
toUnitCircle :: RealFloat a => Complex a -> Complex a -> Complex a
toUnitCircle w z = if magW>0.0 && magZ>0.0 then mu else (0.0 :+ 0.0)
where magW = magnitude w
magZ = magnitude z
mu = if magW < magZ then w/(magZ:+0.0) else z/(magW:+0.0)
prop_orderMandelbrotCardioid :: Positive Int -- ^ the max number of iterations
-> Complex Double -- ^ required to generate a value in the unit disk
-> Complex Double -- ^ required to generate a value in the unit disk
-> Bool
prop_orderMandelbrotCardioid (Positive nMax) w z = inMandelbrotSet nMax z0
where z0 = mandelbrotCardioid $ toUnitCircle w z
prop_orderMandelbrotCircle :: Positive Int -- ^ the max number of iterations
-> Complex Double -- ^ required to generate a value in the unit disk
-> Complex Double -- ^ required to generate a value in the unit disk
-> Bool
prop_orderMandelbrotCircle (Positive nMax) w z = inMandelbrotSet nMax z0
where z0 = (-1.0) + mu * 0.25
mu = toUnitCircle w z
return []
runTests :: IO Bool
runTests = $quickCheckAll
|
```python
import numpy as np
import scipy as sp
import scipy.stats
import matplotlib.pyplot as plt
import math as mt
import scipy.special
import seaborn as sns
plt.style.use('fivethirtyeight')
from statsmodels.graphics.tsaplots import plot_acf
import pandas as pd
```
# <font face="gotham" color="orange"> Gibbs Sampling Algorithm </font>
The **Gibbs sampler** is a special case of the Metropolis sampler in which the proposal distributions exactly match the posterior conditional distributions, so proposals are naturally accepted 100% of the time.
A particular strength of the Gibbs sampler is that it allows one to estimate multiple parameters.
In this section, we will use Normal-Normal and Gamma-Normal conjugate priors to demonstrate the Gibbs sampling algorithm.
<div style="background-color:Bisque; color:DarkBlue; padding:30px;">
Suppose you want to know the average height of females in your city. In the current setting, we assume $\mu$ and $\tau$ are our parameters of interest for estimation. Note that in the conjugate prior section we assumed $\tau$ to be known; in Gibbs sampling, however, both can be estimated.<br>
<br>
A prior of _normal distribution_ will be assumed for $\mu$ with hyperparameters
$$
\text{inverse of }\sigma_0:\tau_0 = .35\\
\text{mean}:\mu_0 = 170
$$
A prior of _gamma distribution_ will be assumed for $\tau$ since it can't be negative.
$$
\text{shape}: \alpha_0 = 2\\
\text{rate}:\beta_0 = 1
$$
</div>
The priors graphically are
```python
mu_0, tau_0 = 170, .35
x_mu = np.linspace(150, 190, 100)
y_mu = sp.stats.norm(loc=mu_0, scale=1/tau_0).pdf(x_mu)
alpha_0, beta_0 = 2, 1
x_tau = np.linspace(0, 8, 100)
y_tau = sp.stats.gamma(a=alpha_0, scale=1/beta_0).pdf(x_tau)
fig, ax = plt.subplots(figsize=(15,5), nrows=1, ncols=2)
ax[0].plot(x_mu, y_mu, label =r'$\mu_0={}, \tau_0={}$'.format(mu_0, tau_0))
ax[0].set_title(r'Prior of $\mu$')
ax[0].legend()
ax[1].plot(x_tau, y_tau, label = r'$\alpha_0={}, \beta_0={}$'.format(alpha_0, beta_0))
ax[1].set_title(r'Prior of $\tau$')
ax[1].legend()
plt.show()
```
Here is a scatter plot of joint draws from the priors of $\mu$ and $\tau$.
```python
n = 30000
mu_height = sp.stats.norm(loc=mu_0, scale=1/tau_0).rvs(n)
tau_height = sp.stats.gamma(a=alpha_0, scale=1/beta_0).rvs(n)
fig, ax = plt.subplots(figsize=(8,5))
ax.scatter(mu_height, tau_height, s = 3)
plt.show()
```
Choose an initial proposal value for $\tau$, denoted $\tau_{\text{proposal},0}$; the subscript $0$ indicates the iteration, since this is the initial value.
Say
$$
\tau_{\text{proposal},0} = 7
$$
Next step is to obtain
$$
\mu_{\text{proposal},0}|\tau_{\text{proposal},0}
$$
where $\mu_{\text{proposal},0}$ is the first value of proposal $\mu$ conditional on $\tau_{\text{proposal},0}$.
Now collect some data: for instance, you measured the heights of $10$ randomly chosen women. Here is the data.
```python
heights = np.array([156, 167, 178, 182, 169, 174, 175, 164, 181, 170])
np.sum(heights)
```
1716
Recall the analytical solutions for the Normal-Normal conjugate pair derived in chapter 2
\begin{align}
\mu_{\text {posterior }} &=\frac{\tau_{0} \mu_{0}+\tau \sum x_{i}}{\tau_{0}+n \tau}\\
\tau_{\text {posterior }} &=\tau_{0}+n \tau
\end{align}
Substitute $\tau_{\text{proposal},0}$ into both formulas (with $\tau_0 = .35$ as set above).
$$
\mu_{\text {posterior},1} =\frac{\tau_{0} \mu_{0}+\tau_{\text{proposal},0} \sum_{i=1}^{10} x_{i}}{\tau_{0}+n \tau_{\text{proposal},0}}=\frac{.35\times170+7\times 1716}{.35+10\times7}\\
\tau_{\text {posterior}, 1} =\tau_{0}+n \tau_{\text{proposal},0} = .35 + 10\times 7
$$
```python
mu_post = [0]
tau_post = [0] # 0 is a placeholder; there is no 0th element, per the algorithm
tau_proposal = [7]
mu_proposal = [0] # 0 is a placeholder
mu_post.append((.35*170+tau_proposal[0]*1716)/(.35+10*tau_proposal[0]))
tau_post.append(.35+10*tau_proposal[0])
```
Draw a proposal for $\mu$ from the updated distribution, i.e. a normal with parameters $\mu_{\text{posterior}, 1}$ and $\tau_{\text{posterior}, 1}$.
```python
mu_proposal_draw = sp.stats.norm(loc=mu_post[1], scale=1/tau_post[1]).rvs()
mu_proposal.append(mu_proposal_draw)
```
Now turn to $\tau$ for a proposal. There is also a Gamma-Normal conjugate prior, which we did not derive before, but here are the posterior hyperparameters
\begin{align}
\alpha_{\text{posterior}}&=\alpha_0+\frac{n}{2}\\
\beta_{\text{posterior}}&=\beta_0+\frac{\sum_{i=1}^{n}\left(x_{i}-\mu\right)^{2}}{2}
\end{align}
```python
alpha_post = [0]
beta_post = [0]
alpha_post.append(alpha_0+10/2)
beta_post.append(beta_0+np.sum((heights-mu_post[-1])**2)/2)
tau_proposal_draw = sp.stats.gamma(a=alpha_post[-1], scale=1/beta_post[-1]).rvs()
tau_proposal.append(tau_proposal_draw)
```
```python
tau_proposal
```
[7, 0.01714209509457282]
# <font face="gotham" color="orange"> Gibbs Sampling in a Loop </font>
```python
mu_post, tau_post, alpha_post, beta_post, mu_proposal, chain_size = [], [], [], [], [], 10000
def gibbs_norm_gam_joint(mu_0=170, tau_0=.35, tau_proposal_init=7, alpha_0=2, beta_0=1, chain_size = chain_size, data=heights):
tau_proposal = [tau_proposal_init]
n = len(data)
for i in range(chain_size):
mu_post.append((tau_0*mu_0+tau_proposal[-1]*np.sum(data))/(tau_0+n*tau_proposal[-1]))
tau_post.append(tau_0+n*tau_proposal[-1])
mu_proposal_draw = sp.stats.norm(loc=mu_post[-1], scale=1/tau_post[-1]).rvs()
mu_proposal.append(mu_proposal_draw)
alpha_post.append(alpha_0+n/2)
beta_post.append(beta_0+np.sum((data-mu_post[-1])**2)/2)
tau_proposal_draw = sp.stats.gamma(a=alpha_post[-1], scale=1/beta_post[-1]).rvs()
tau_proposal.append(tau_proposal_draw)
return mu_post, tau_post
```
```python
mu_post, tau_post = gibbs_norm_gam_joint()
```
Calculate the moment matching PDF for both distributions.
```python
x_mu = np.linspace(170, 172.2, 100)
mu_dist = sp.stats.norm(loc=np.mean(mu_post), scale=np.std(mu_post)).pdf(x_mu)
```
To recover $\alpha$ and $\beta$, we need to solve the system of equations
$$
\mu=\frac{\alpha}{\beta}\\
\sigma^2=\frac{\alpha}{\beta^2}
$$
```python
from scipy.optimize import fsolve
def solve_equ(x, mu=np.mean(tau_post), sigma=np.std(tau_post)):
a, b = x[0], x[1]
F = np.empty(2)
F[0] = mu-x[0]/x[1]
F[1] = sigma**2-x[0]/(x[1]**2)
return F
xGuess = np.array([2, 2])
z = fsolve(solve_equ, xGuess)
print('alpha: {}'.format(z[0]))
print('beta: {}'.format(z[1]))
```
alpha: 0.7043480496517439
beta: 1.1937535586319403
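Since the moment equations have the closed-form solution $\beta = \mu/\sigma^2$ and $\alpha = \mu^2/\sigma^2$, the numerical solver is not strictly necessary; the direct computation below serves as a cross-check.
```python
# Closed-form method-of-moments estimates, equivalent to the fsolve result:
mu_hat, var_hat = np.mean(tau_post), np.var(tau_post)
beta_mm = mu_hat / var_hat       # beta = mu / sigma^2
alpha_mm = mu_hat**2 / var_hat   # alpha = mu^2 / sigma^2
print('alpha: {}'.format(alpha_mm))
print('beta: {}'.format(beta_mm))
```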
```python
x_tau = np.linspace(.37, 1, 100)
tau_dist = sp.stats.gamma(a=z[0],scale=1/z[1]).pdf(x_tau)
```
```python
fig, ax = plt.subplots(figsize = (18, 12), nrows = 2, ncols = 2)
ax[0,0].hist(mu_post, bins=100, density=True)
ax[0,0].plot(x_mu, mu_dist, alpha=.7)
ax[0,0].set_ylabel('Posterior distribution of $\mu$')
ax[1,0].hist(tau_post[1:], bins=100, density=True)
ax[1,0].plot(x_tau, tau_dist, alpha=.7)
ax[1,0].set_ylabel(r'Posterior distribution of $\tau$')
ax[0,1].plot(np.arange(chain_size-1),mu_post[1:],lw=.5)
ax[1,1].plot(np.arange(chain_size-1),tau_post[1:], lw=.5)
fig.suptitle('Gibbs Sampling Example of Female Height', y = .93, size=26)
plt.show()
```
|
{-# OPTIONS --type-in-type #-}
open import Data.Unit
open import Data.Product hiding ( curry ; uncurry )
open import Data.List hiding ( concat )
open import Data.String
open import Relation.Binary.PropositionalEquality
open import Function
module GenericElim.Desc where
----------------------------------------------------------------------
Label : Set
Label = String
Enum : Set
Enum = List Label
data Tag : Enum → Set where
here : ∀{l E} → Tag (l ∷ E)
there : ∀{l E} → Tag E → Tag (l ∷ E)
Branches : (E : Enum) (P : Tag E → Set) → Set
Branches [] P = ⊤
Branches (l ∷ E) P = P here × Branches E (λ t → P (there t))
case : {E : Enum} (P : Tag E → Set) (cs : Branches E P) (t : Tag E) → P t
case P (c , cs) here = c
case P (c , cs) (there t) = case (λ t → P (there t)) cs t
UncurriedBranches : (E : Enum) (P : Tag E → Set) (X : Set)
→ Set
UncurriedBranches E P X = Branches E P → X
CurriedBranches : (E : Enum) (P : Tag E → Set) (X : Set)
→ Set
CurriedBranches [] P X = X
CurriedBranches (l ∷ E) P X = P here → CurriedBranches E (λ t → P (there t)) X
curryBranches : {E : Enum} {P : Tag E → Set} {X : Set}
→ UncurriedBranches E P X → CurriedBranches E P X
curryBranches {[]} f = f tt
curryBranches {l ∷ E} f = λ c → curryBranches (λ cs → f (c , cs))
uncurryBranches : {E : Enum} {P : Tag E → Set} {X : Set}
→ CurriedBranches E P X → UncurriedBranches E P X
uncurryBranches {[]} x tt = x
uncurryBranches {l ∷ E} f (c , cs) = uncurryBranches (f c) cs
----------------------------------------------------------------------
data Desc (I : Set) : Set₁ where
End : (i : I) → Desc I
Rec : (i : I) (D : Desc I) → Desc I
Arg : (A : Set) (B : A → Desc I) → Desc I
ISet : Set → Set₁
ISet I = I → Set
El : {I : Set} (D : Desc I) → ISet I → ISet I
El (End j) X i = j ≡ i
El (Rec j D) X i = X j × El D X i
El (Arg A B) X i = Σ A (λ a → El (B a) X i)
Hyps : {I : Set} (D : Desc I) (X : ISet I) (P : (i : I) → X i → Set) (i : I) (xs : El D X i) → Set
Hyps (End j) X P i q = ⊤
Hyps (Rec j D) X P i (x , xs) = P j x × Hyps D X P i xs
Hyps (Arg A B) X P i (a , b) = Hyps (B a) X P i b
----------------------------------------------------------------------
BranchesD : (I : Set) (E : Enum) → Set
BranchesD I E = Branches E (λ _ → Desc I)
caseD : {I : Set} {E : Enum} (cs : BranchesD I E) (t : Tag E) → Desc I
caseD = case (λ _ → Desc _)
----------------------------------------------------------------------
UncurriedEl : {I : Set} (D : Desc I) (X : ISet I) → Set
UncurriedEl D X = ∀{i} → El D X i → X i
CurriedEl : {I : Set} (D : Desc I) (X : ISet I) → Set
CurriedEl (End i) X = X i
CurriedEl (Rec i D) X = (x : X i) → CurriedEl D X
CurriedEl (Arg A B) X = (a : A) → CurriedEl (B a) X
CurriedEl' : {I : Set} (D : Desc I) (X : ISet I) (i : I) → Set
CurriedEl' (End j) X i = j ≡ i → X i
CurriedEl' (Rec j D) X i = (x : X j) → CurriedEl' D X i
CurriedEl' (Arg A B) X i = (a : A) → CurriedEl' (B a) X i
curryEl : {I : Set} (D : Desc I) (X : ISet I)
→ UncurriedEl D X → CurriedEl D X
curryEl (End i) X cn = cn refl
curryEl (Rec i D) X cn = λ x → curryEl D X (λ xs → cn (x , xs))
curryEl (Arg A B) X cn = λ a → curryEl (B a) X (λ xs → cn (a , xs))
uncurryEl : {I : Set} (D : Desc I) (X : ISet I)
→ CurriedEl D X → UncurriedEl D X
uncurryEl (End i) X cn refl = cn
uncurryEl (Rec i D) X cn (x , xs) = uncurryEl D X (cn x) xs
uncurryEl (Arg A B) X cn (a , xs) = uncurryEl (B a) X (cn a) xs
----------------------------------------------------------------------
UncurriedHyps : {I : Set} (D : Desc I) (X : ISet I)
(P : (i : I) → X i → Set)
(cn : UncurriedEl D X)
→ Set
UncurriedHyps D X P cn =
∀ i (xs : El D X i) (ihs : Hyps D X P i xs) → P i (cn xs)
CurriedHyps : {I : Set} (D : Desc I) (X : ISet I)
(P : (i : I) → X i → Set)
(cn : UncurriedEl D X)
→ Set
CurriedHyps (End i) X P cn =
P i (cn refl)
CurriedHyps (Rec i D) X P cn =
(x : X i) → P i x → CurriedHyps D X P (λ xs → cn (x , xs))
CurriedHyps (Arg A B) X P cn =
(a : A) → CurriedHyps (B a) X P (λ xs → cn (a , xs))
CurriedHyps' : {I : Set} (D : Desc I) (X : ISet I)
(P : (i : I) → X i → Set)
(i : I)
(cn : El D X i → X i)
→ Set
CurriedHyps' (End j) X P i cn = (q : j ≡ i) → P i (cn q)
CurriedHyps' (Rec j D) X P i cn =
(x : X j) → P j x → CurriedHyps' D X P i (λ xs → cn (x , xs))
CurriedHyps' (Arg A B) X P i cn =
(a : A) → CurriedHyps' (B a) X P i (λ xs → cn (a , xs))
curryHyps : {I : Set} (D : Desc I) (X : ISet I)
(P : (i : I) → X i → Set)
(cn : UncurriedEl D X)
→ UncurriedHyps D X P cn
→ CurriedHyps D X P cn
curryHyps (End i) X P cn pf =
pf i refl tt
curryHyps (Rec i D) X P cn pf =
λ x ih → curryHyps D X P (λ xs → cn (x , xs)) (λ i xs ihs → pf i (x , xs) (ih , ihs))
curryHyps (Arg A B) X P cn pf =
λ a → curryHyps (B a) X P (λ xs → cn (a , xs)) (λ i xs ihs → pf i (a , xs) ihs)
uncurryHyps : {I : Set} (D : Desc I) (X : ISet I)
(P : (i : I) → X i → Set)
(cn : UncurriedEl D X)
→ CurriedHyps D X P cn
→ UncurriedHyps D X P cn
uncurryHyps (End .i) X P cn pf i refl tt =
pf
uncurryHyps (Rec j D) X P cn pf i (x , xs) (ih , ihs) =
uncurryHyps D X P (λ ys → cn (x , ys)) (pf x ih) i xs ihs
uncurryHyps (Arg A B) X P cn pf i (a , xs) ihs =
uncurryHyps (B a) X P (λ ys → cn (a , ys)) (pf a) i xs ihs
----------------------------------------------------------------------
data μ {I : Set} (D : Desc I) : ISet I where
init : UncurriedEl D (μ D)
inj : {I : Set} (D : Desc I) → CurriedEl D (μ D)
inj D = curryEl D (μ D) init
----------------------------------------------------------------------
ind :
{I : Set}
(D : Desc I)
(P : (i : I) → μ D i → Set)
(α : UncurriedHyps D (μ D) P init)
(i : I)
(x : μ D i)
→ P i x
hyps :
{I : Set}
(D₁ : Desc I)
(P : (i : I) → μ D₁ i → Set)
(α : UncurriedHyps D₁ (μ D₁) P init)
(D₂ : Desc I)
(i : I)
(xs : El D₂ (μ D₁) i)
→ Hyps D₂ (μ D₁) P i xs
ind D P α i (init xs) = α i xs (hyps D P α D i xs)
hyps D P α (End j) i q = tt
hyps D P α (Rec j A) i (x , xs) = ind D P α j x , hyps D P α A i xs
hyps D P α (Arg A B) i (a , b) = hyps D P α (B a) i b
----------------------------------------------------------------------
indCurried : {I : Set} (D : Desc I)
(P : (i : I) → μ D i → Set)
(f : CurriedHyps D (μ D) P init)
(i : I)
(x : μ D i)
→ P i x
indCurried D P f i x = ind D P (uncurryHyps D (μ D) P init f) i x
Summer : {I : Set} (E : Enum) (C : Tag E → Desc I)
→ let D = Arg (Tag E) C in
(X : ISet I) (cn : UncurriedEl D X)
(P : (i : I) → X i → Set)
→ Tag E → Set
Summer E C X cn P t =
let D = Arg (Tag E) C in
CurriedHyps (C t) X P (λ xs → cn (t , xs))
SumCurriedHyps : {I : Set} (E : Enum) (C : Tag E → Desc I)
→ let D = Arg (Tag E) C in
(P : (i : I) → μ D i → Set)
→ Tag E → Set
SumCurriedHyps E C P t =
let D = Arg (Tag E) C in
Summer E C (μ D) init P t
elimUncurried : {I : Set} (E : Enum) (C : Tag E → Desc I)
→ let D = Arg (Tag E) C in
(P : (i : I) → μ D i → Set)
→ Branches E (SumCurriedHyps E C P)
→ (i : I) (x : μ D i) → P i x
elimUncurried E C P cs i x =
let D = Arg (Tag E) C in
indCurried D P
(case (SumCurriedHyps E C P) cs)
i x
elim : {I : Set} (E : Enum) (C : Tag E → Desc I)
→ let D = Arg (Tag E) C in
(P : (i : I) → μ D i → Set)
→ CurriedBranches E
(SumCurriedHyps E C P)
((i : I) (x : μ D i) → P i x)
elim E C P = curryBranches (elimUncurried E C P)
----------------------------------------------------------------------
Soundness : Set₁
Soundness = {I : Set} (E : Enum) (C : Tag E → Desc I)
→ let D = Arg (Tag E) C in
(P : (i : I) → μ D i → Set)
(cs : Branches E (SumCurriedHyps E C P))
(i : I) (x : μ D i)
→ ∃ λ α
→ elimUncurried E C P cs i x ≡ ind D P α i x
sound : Soundness
sound E C P cs i x =
let D = Arg (Tag E) C in
(uncurryHyps D (μ D) P init (case (SumCurriedHyps E C P) cs)) , refl
Completeness : Set₁
Completeness = {I : Set} (E : Enum) (C : Tag E → Desc I)
→ let D = Arg (Tag E) C in
(P : (i : I) → μ D i → Set)
(α : UncurriedHyps D (μ D) P init)
(i : I) (x : μ D i)
→ ∃ λ cs
→ ind D P α i x ≡ elimUncurried E C P cs i x
uncurryHypsIdent : {I : Set} (D : Desc I) (X : ISet I)
(P : (i : I) → X i → Set)
(cn : UncurriedEl D X)
(α : UncurriedHyps D X P cn)
(i : I) (xs : El D X i) (ihs : Hyps D X P i xs)
→ α i xs ihs ≡ uncurryHyps D X P cn (curryHyps D X P cn α) i xs ihs
uncurryHypsIdent (End .i) X P cn α i refl tt = refl
uncurryHypsIdent (Rec j D) X P cn α i (x , xs) (p , ps) =
uncurryHypsIdent D X P (λ xs → cn (x , xs)) (λ k ys rs → α k (x , ys) (p , rs)) i xs ps
uncurryHypsIdent (Arg A B) X P cn α i (a , xs) ps =
uncurryHypsIdent (B a) X P (λ xs → cn (a , xs)) (λ j ys → α j (a , ys)) i xs ps
postulate
ext3 : {A : Set} {B : A → Set} {C : (a : A) → B a → Set} {Z : (a : A) (b : B a) → C a b → Set}
(f g : (a : A) (b : B a) (c : C a b) → Z a b c)
→ ((a : A) (b : B a) (c : C a b) → f a b c ≡ g a b c)
→ f ≡ g
toBranches : {I : Set} (E : Enum) (C : Tag E → Desc I)
→ let D = Arg (Tag E) C in
(X : ISet I) (cn : UncurriedEl D X)
(P : (i : I) → X i → Set)
(α : UncurriedHyps D X P cn)
→ Branches E (Summer E C X cn P)
toBranches [] C X cn P α = tt
toBranches (l ∷ E) C X cn P α =
curryHyps (C here) X P (λ xs → cn (here , xs)) (λ i xs → α i (here , xs))
, toBranches E (λ t → C (there t)) X
(λ xs → cn (there (proj₁ xs) , proj₂ xs))
P (λ i xs ih → α i (there (proj₁ xs) , proj₂ xs) ih)
ToBranches : {I : Set} {E : Enum} (C : Tag E → Desc I)
→ let D = Arg (Tag E) C in
(X : ISet I) (cn : UncurriedEl D X)
(P : (i : I) → X i → Set)
(α : UncurriedHyps D X P cn)
(t : Tag E)
→ let β = toBranches E C X cn P α in
case (Summer E C X cn P) β t ≡ curryHyps D X P cn α t
ToBranches C X cn P α here = refl
ToBranches C X cn P α (there t)
with ToBranches (λ t → C (there t)) X
(λ xs → cn (there (proj₁ xs) , proj₂ xs))
P (λ i xs ih → α i (there (proj₁ xs) , proj₂ xs) ih) t
... | ih rewrite ih = refl
completeα : {I : Set} (E : Enum) (C : Tag E → Desc I)
→ let D = Arg (Tag E) C in
(P : (i : I) → μ D i → Set)
(α : UncurriedHyps D (μ D) P init)
(i : I) (xs : El D (μ D) i) (ihs : Hyps D (μ D) P i xs)
→ let β = toBranches E C (μ D) init P α in
α i xs ihs ≡ uncurryHyps D (μ D) P init (case (SumCurriedHyps E C P) β) i xs ihs
completeα E C P α i (t , xs) ihs
with ToBranches C (μ D) init P α t where D = Arg (Tag E) C
... | q rewrite q = uncurryHypsIdent D (μ D) P init α i (t , xs) ihs where D = Arg (Tag E) C
complete' : {I : Set} (E : Enum) (C : Tag E → Desc I)
→ let D = Arg (Tag E) C in
(P : (i : I) → μ D i → Set)
(α : UncurriedHyps D (μ D) P init)
(i : I) (x : μ D i)
→ let β = toBranches E C (μ D) init P α in
ind D P α i x ≡ elimUncurried E C P β i x
complete' E C P α i (init (t , xs)) = cong
(λ f → ind D P f i (init (t , xs)))
(ext3 α
(uncurryHyps D (μ D) P init (case (SumCurriedHyps E C P) β))
(completeα E C P α))
where
D = Arg (Tag E) C
β = toBranches E C (μ D) init P α
complete : Completeness
complete E C P α i x =
let D = Arg (Tag E) C in
toBranches E C (μ D) init P α
, complete' E C P α i x
----------------------------------------------------------------------
ℕE : Enum
ℕE = "zero" ∷ "suc" ∷ []
VecE : Enum
VecE = "nil" ∷ "cons" ∷ []
ℕT : Set
ℕT = Tag ℕE
VecT : Set
VecT = Tag VecE
zeroT : ℕT
zeroT = here
sucT : ℕT
sucT = there here
nilT : VecT
nilT = here
consT : VecT
consT = there here
ℕC : ℕT → Desc ⊤
ℕC = caseD $
End tt
, Rec tt (End tt)
, tt
ℕD : Desc ⊤
ℕD = Arg ℕT ℕC
ℕ : ⊤ → Set
ℕ = μ ℕD
zero : ℕ tt
zero = init (zeroT , refl)
suc : ℕ tt → ℕ tt
suc n = init (sucT , n , refl)
VecC : (A : Set) → VecT → Desc (ℕ tt)
VecC A = caseD $
End zero
, Arg (ℕ tt) (λ n → Arg A λ _ → Rec n (End (suc n)))
, tt
nilD : (A : Set) → Desc (ℕ tt)
nilD A = End zero
consD : (A : Set) → Desc (ℕ tt)
consD A = Arg (ℕ tt) (λ n → Arg A (λ _ → Rec n (End (suc n))))
VecD : (A : Set) → Desc (ℕ tt)
VecD A = Arg VecT (VecC A)
Vec : (A : Set) → ℕ tt → Set
Vec A = μ (VecD A)
NilEl : (A : Set) (n : ℕ tt) → Set
NilEl A n = El (nilD A) (Vec A) n
ConsEl : (A : Set) → ℕ tt → Set
ConsEl A n = El (consD A) (Vec A) n
VecEl : (A : Set) → ℕ tt → Set
VecEl A n = El (VecD A) (Vec A) n
NilHyps : (A : Set) (P : (n : ℕ tt) → Vec A n → Set) (n : ℕ tt) (xs : NilEl A n) → Set
NilHyps A P n xs = Hyps (nilD A) (Vec A) P n xs
ConsHyps : (A : Set) (P : (n : ℕ tt) → Vec A n → Set) (n : ℕ tt) (xs : ConsEl A n) → Set
ConsHyps A P n xs = Hyps (consD A) (Vec A) P n xs
VecHyps : (A : Set) (P : (n : ℕ tt) → Vec A n → Set) (n : ℕ tt) (xs : VecEl A n) → Set
VecHyps A P n xs = Hyps (VecD A) (Vec A) P n xs
ConsUncurriedHyps : (A : Set)
(P : (n : ℕ tt) → Vec A n → Set)
(cn : UncurriedEl (consD A) (Vec A)) → Set
ConsUncurriedHyps A P cn = UncurriedHyps (consD A) (Vec A) P cn
nil : (A : Set) → Vec A zero
nil A = init (nilT , refl)
cons : (A : Set) (n : ℕ tt) (x : A) (xs : Vec A n) → Vec A (suc n)
cons A n x xs = init (consT , n , x , xs , refl)
nil2 : (A : Set) → Vec A zero
nil2 A = inj (VecD A) nilT
cons2 : (A : Set) (n : ℕ tt) (x : A) (xs : Vec A n) → Vec A (suc n)
cons2 A = inj (VecD A) consT
----------------------------------------------------------------------
module Induction where
add : ℕ tt → ℕ tt → ℕ tt
add = ind ℕD (λ _ _ → ℕ tt → ℕ tt)
(λ u t,c → case
(λ t → (c : El (ℕC t) ℕ u)
(ih : Hyps ℕD ℕ (λ u n → ℕ u → ℕ u) u (t , c))
→ ℕ u → ℕ u
)
( (λ q ih n → n)
, (λ m,q ih,tt n → suc (proj₁ ih,tt n))
, tt
)
(proj₁ t,c)
(proj₂ t,c)
)
tt
mult : ℕ tt → ℕ tt → ℕ tt
mult = ind ℕD (λ _ _ → ℕ tt → ℕ tt)
(λ u t,c → case
(λ t → (c : El (ℕC t) ℕ u)
(ih : Hyps ℕD ℕ (λ u n → ℕ u → ℕ u) u (t , c))
→ ℕ u → ℕ u
)
( (λ q ih n → zero)
, (λ m,q ih,tt n → add n (proj₁ ih,tt n))
, tt
)
(proj₁ t,c)
(proj₂ t,c)
)
tt
append : (A : Set) (m : ℕ tt) (xs : Vec A m) (n : ℕ tt) (ys : Vec A n) → Vec A (add m n)
append A = ind (VecD A) (λ m xs → (n : ℕ tt) (ys : Vec A n) → Vec A (add m n))
(λ m t,c → case
(λ t → (c : El (VecC A t) (Vec A) m)
(ih : Hyps (VecD A) (Vec A) (λ m xs → (n : ℕ tt) (ys : Vec A n) → Vec A (add m n)) m (t , c))
(n : ℕ tt) (ys : Vec A n) → Vec A (add m n)
)
( (λ q ih n ys → subst (λ m → Vec A (add m n)) q ys)
, (λ m',x,xs,q ih,tt n ys →
let m' = proj₁ m',x,xs,q
x = proj₁ (proj₂ m',x,xs,q)
q = proj₂ (proj₂ (proj₂ m',x,xs,q))
ih = proj₁ ih,tt
in
subst (λ m → Vec A (add m n)) q (cons A (add m' n) x (ih n ys))
)
, tt
)
(proj₁ t,c)
(proj₂ t,c)
)
Concat : (A : Set) (m n : ℕ tt) (xss : Vec (Vec A m) n) → Set
Concat A m n xss = Vec A (mult n m)
ConsBranch : (A : Set) (m : ℕ tt)
→ Set
ConsBranch A m = UncurriedHyps (consD (Vec A m)) (Vec (Vec A m)) (Concat A m)
(λ xs → init (consT , xs))
ConsElimBranch : (A : Set) (m : ℕ tt)
→ Set
ConsElimBranch A m = CurriedHyps (consD (Vec A m)) (Vec (Vec A m)) (Concat A m)
(λ xs → init (consT , xs))
ElimBranch : (t : VecT) (A : Set) (m : ℕ tt)
→ Set
ElimBranch t A m = SumCurriedHyps VecE (VecC (Vec A m)) (Concat A m) t
nilBranch : (A : Set) (m n : ℕ tt)
(xss : NilEl (Vec A m) n)
(ihs : NilHyps (Vec A m) (Concat A m) n xss)
→ Vec A (mult n m)
nilBranch A m n q u = subst
(λ n → Vec A (mult n m))
q (nil A)
consBranch : (A : Set) (m : ℕ tt) → ConsBranch A m
consBranch A m n n',xs,xss,q ih,u =
let n' = proj₁ n',xs,xss,q
xs = proj₁ (proj₂ n',xs,xss,q)
q = proj₂ (proj₂ (proj₂ n',xs,xss,q))
ih = proj₁ ih,u
in subst
(λ n → Vec A (mult n m))
q (append A m xs (mult n' m) ih)
ConcatConvoy : (A : Set) (m n : ℕ tt) (t : VecT) → Set
ConcatConvoy A m n t =
(xss : El (VecC (Vec A m) t) (Vec (Vec A m)) n)
(ihs : VecHyps (Vec A m) (Concat A m) n (t , xss))
→ Vec A (mult n m)
concatα : (A : Set) (m n : ℕ tt)
(xss : VecEl (Vec A m) n)
(ihs : VecHyps (Vec A m) (Concat A m) n xss)
→ Vec A (mult n m)
concatα A m n xss = case (ConcatConvoy A m n)
(nilBranch A m n , consBranch A m n , tt)
(proj₁ xss)
(proj₂ xss)
concat : (A : Set) (m n : ℕ tt) (xss : Vec (Vec A m) n) → Concat A m n xss
concat A m = ind
(VecD (Vec A m))
(Concat A m)
(concatα A m)
----------------------------------------------------------------------
module GenericElim where
add : ℕ tt → ℕ tt → ℕ tt
add = elim ℕE ℕC _
(λ n → n)
(λ m ih n → suc (ih n))
tt
mult : ℕ tt → ℕ tt → ℕ tt
mult = elim ℕE ℕC _
(λ n → zero)
(λ m ih n → add n (ih n))
tt
append : (A : Set) (m : ℕ tt) (xs : Vec A m) (n : ℕ tt) (ys : Vec A n) → Vec A (add m n)
append A = elim VecE (VecC A) (λ m xs → (n : ℕ tt) (ys : Vec A n) → Vec A (add m n))
(λ n ys → ys)
(λ m x xs ih n ys → cons A (add m n) x (ih n ys))
concat : (A : Set) (m n : ℕ tt) (xss : Vec (Vec A m) n) → Vec A (mult n m)
concat A m = elim VecE (VecC (Vec A m)) (λ n xss → Vec A (mult n m))
(nil A)
(λ n xs xss ih → append A m xs (mult n m) ih)
----------------------------------------------------------------------
|
function lambdamax = tvdiplmax(y)
% Calculate the value of lambda so that if lambda >= lambdamax, the TVD
% functional solved by TVDIP is minimized by the trivial constant
% solution x = mean(y). This can then be used to determine a useful range
% of values of lambda, for example.
%
% Usage:
% lambdamax = tvdiplmax(y)
%
% Input arguments:
% - y Original signal to denoise, size N x 1.
%
% Output arguments:
% - lambdamax Value of lambda at which x = mean(y) is the output of the
%             TVDIP function.
%
% (c) Max Little, 2010. Based around code originally written by
% S.J. Kim, K. Koh, S. Boyd and D. Gorinevsky. If you use this code for
% your research, please cite:
% M.A. Little, Nick S. Jones (2010)
% "Sparse Bayesian Step-Filtering for High-Throughput Analysis of Molecular
% Machine Dynamics", in 2010 IEEE International Conference on Acoustics,
% Speech and Signal Processing, 2010, ICASSP 2010 Proceedings.
%
% This code is released under the terms of GNU General Public License as
% published by the Free Software Foundation; version 2 or later.
error(nargchk(1,1,nargin));
y = y(:);
N = length(y);
M = N - 1;
% Construct sparse operator matrices
I1 = speye(M,M);
O1 = spalloc(M,1,M);
D = [I1 O1]-[O1 I1];
DDT = D*D';
Dy = D*y;
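% By the optimality (KKT) conditions of the TV denoising problem, the
% constant solution x = mean(y) is optimal exactly when
% lambda >= norm((D*D')\(D*y), inf); that quantity is computed below.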
lambdamax = max(abs(DDT\Dy));
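% Illustrative usage sketch (hypothetical signal, not part of this file):
% y = [zeros(100,1); ones(100,1)] + 0.1*randn(200,1);
% lmax = tvdiplmax(y); % any lambda >= lmax yields x = mean(y)
% lambdas = logspace(log10(lmax)-3, log10(lmax), 20); % candidate test range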
|
library(readr)
library(dplyr)
library(readxl)
library(tidyr)
recode <- read_csv(file="P://Outside DUSON RMT Projects//IRB 82649 Immune Development//Stewart Folder//Raw data//recode_trucount.csv")
filepref<-"P://Outside DUSON RMT Projects//IRB 82649 Immune Development//Stewart Folder//Raw data//CTOT-C02 Phenotypic flow data "
excel <- c('2009 07-08','2009 09-10','2009 11-12','2010 01-02V2','2010 03-04','2010 05-06','2010 07-08','2010 09-10','2010 11-12','2011 01-02','2011 03-04','2011 05-06','2011 07-08','2011 09-10','2011 11-12','2012 01-02','2012 03-04','2012 05-06','2012 07-08','2012 09-10','2012 11-12','2013 01-02')
#Define a function for calculating freq
freq_calc <- function(cell, lymph = Lymphocyte){
x = 100*(cell/lymph)
return(x)
}
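#Note: the mutate() calls below compute the frequencies inline rather than
#calling freq_calc; the function is kept as documentation of the formula.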
#Cycle through each of the 22 spreadsheets
for(h in 1:22){
fn <- paste('file',h,sep='') #Name for each spreadsheet
#Read in sheet by sheet, removing the extra column, filtering out the extra rows, and removing the R rows
#Then perform the transpose using gather, convert the value to numeric, add additional variables, and filter out null values
assign(fn,
read_excel(paste(filepref,excel[h],'.xlsx',sep=''),sheet = 'Trucount') %>%
filter(`A/R`!="R" & `A/R`!="R/?") %>%
mutate(`B cells_FREQ` = 100*(as.numeric(`B cells`)/Lymphocyte),
`T cells_FREQ` = 100*(as.numeric(`T cells`)/Lymphocyte),
`CD4+CD8+_FREQ` = 100*(as.numeric(`CD4+CD8+`)/Lymphocyte),
`CD4+ T cells_FREQ` = 100*(as.numeric(`CD4+ T cells`)/Lymphocyte),
`CD8+ T cells_FREQ` = 100*(as.numeric(`CD8+ T cells`)/Lymphocyte),
`CD3+CD4-CD8-_FREQ` = 100*(as.numeric(`CD3+CD4-CD8-`)/Lymphocyte),
`Lymph / NOT CD3 or CD20_FREQ` = 100*(as.numeric(`Lymph / NOT CD3 or CD20`)/Lymphocyte),
`lymph /CD3- CD20-/ CD16+ CD56-_FREQ` = 100*(as.numeric(`lymph /CD3- CD20-/ CD16+ CD56-`)/Lymphocyte),
`lymph /CD3- CD20-/ CD16+ CD56+_FREQ` = 100*(as.numeric(`lymph /CD3- CD20-/ CD16+ CD56+`)/Lymphocyte),
`lymph /CD3- CD20-/ CD16- CD56+_FREQ` = 100*(as.numeric(`lymph /CD3- CD20-/ CD16- CD56+`)/Lymphocyte),
`lymph / CD3- CD20- / CD16- CD56-_FREQ` = 100*(as.numeric(`lymph / CD3- CD20- / CD16- CD56-`)/Lymphocyte)) %>%
select(`Trucount sample`:`lymph / CD3- CD20- / CD16- CD56-`,`B cells_FREQ`:`lymph / CD3- CD20- / CD16- CD56-_FREQ`,Comments:`A/R`) %>%
gather(key="item",value="value",Lymphocyte:`lymph / CD3- CD20- / CD16- CD56-_FREQ`) %>%
mutate(value=as.numeric(value),`Entered by` = as.character(`Entered by`),Comments = as.character(Comments),`Visit #` = as.character(`Visit #`),`Collection Time`=as.character(`Collection Time`),panel_type='3',source_table='Trucount',source_file=excel[h],item=chartr('ï','i',item)) %>%
filter(!is.na(value)))
}
complete <- bind_rows(file1,file2,file3,file4,file5,file6,file7,file8,file9,file10,file11,file12,file13,file14,file15,file16,file17,file18,file19,file20,file21,file22) %>%
mutate(calc_type = ifelse(grepl('_FREQ',item),'FREQ','ABS')) %>%
mutate(item = gsub('_FREQ','',item)) %>%
merge(recode,by='item') %>%
arrange(source_file,`Trucount sample`,itemnum,calc_type) %>%
select(`Trucount sample`:`A/R`,panel_type,source_table,source_file,item,itemnum,calc_type,value)
write.csv(complete,file="P://Outside DUSON RMT Projects//IRB 82649 Immune Development//Stewart Folder//Complete//CTOTO2_TRUCOUNT_DATA_COMPLETE.csv",na='',row.names=FALSE)
|
module Issue1296.SolvedMeta where
open import Common.Prelude
open import Common.Equality
test : zero ≡ {!!}
test = refl
|
# AxiScan Example
Here we show an example of the AxiScan analysis pipeline.
## Import Code and Setup Plotting Defaults
```python
# Import basics
import numpy as np
import scipy.stats
import matplotlib as mpl
import matplotlib.pyplot as plt
import pymultinest
import corner
# Plotting Settings
mpl.rcParams['figure.figsize'] = 20, 14 # default figure size
%matplotlib inline
%load_ext autoreload
%autoreload 2
# Import the necessary MC generation and data analysis modules
from AxiScan import mc_gen # MC Generation
from AxiScan import scan # Data Analysis
import analysis_utilities as au # Convenient Utility Methods
```
# Step 1: Generate Monte Carlo
## Set the Parameters
First we generate Monte Carlo data for a scenario in which the majority of the dark matter is contained within a bulk halo following the Standard Halo Model parameters, with a subdominant fraction contained within the Sagittarius Stream. Although we have chosen to illustrate the analysis with a large signal strength, this can be easily adjusted.
This is accomplished by seeding an instance (`generator`) of the Generator class in `mc_gen` with arguments detailed below. Data on the $i^\text{th}$ day of data collection is generated by calling `generator.makePSD(i)`. The arguments for the Generator class are
| Argument | Purpose |
| ------------- | ------------- |
| ma | ma/2pi is the axion mass [Hz] |
| A | Proxy for the axion-photon coupling, $A \propto g_{a \gamma \gamma}^2$ |
| v0_Halo | Velocity dispersion of the bulk halo [km/s] |
| vDotMag_Halo | Speed of the sun with respect to the bulk halo [km/s]|
| alpha_Halo | Bulk halo annual modulation scale, $\alpha \in [0, 1]$|
| tbar_Halo | Date parameter for the bulk halo annual modulation [days] |
| v0_Sub | Speed dispersion of the substructure halo [km/s] |
| vDotMag_Sub | Speed of the sun with respect to the substructure halo [km/s]|
| alpha_Sub | Substructure halo annual modulation scale, $\alpha \in [0, 1]$|
| tbar_Sub | Date parameter for the substructure halo annual modulation [days] |
| frac_Sub | Fraction of the axion DM in the substructure |
| PSDback | Mean expected background Power Spectral Density |
| freqs | Array of frequencies to calculate the PSD at [Hz] |
The code generates data in the form of Power Spectral Densities (PSD).
```python
########################
### Seed Values ###
########################
c = 299792.458 # speed of light [km/s]
# Physics Parameters
ma = 5.5e5*2*np.pi
A = 10000.0
PSDback= 163539.36
# Bulk SHM Parameters
v0_Halo = 220.0
vDotMag_Halo = 232.36
alpha_Halo = .49
tbar_Halo = 72.40
# Sagittarius Stream Parameters
v0_Sub = 10.0
vDotMag_Sub = 418.815
alpha_Sub = .65903
tbar_Sub = 279.51
frac_Sub = 0.0
# Data Output Size
freqs = np.linspace(.99999, 1.00001, 10000)*5.5e5
PSD_Data = np.zeros((365, len(freqs)))
collectionTime = 1/(freqs[1] - freqs[0])
stacked_per_day = 86400 / collectionTime
num_stacked = 365*stacked_per_day
# Instantiate the data generator
generator = mc_gen.Generator(ma, A, PSDback, v0_Halo, vDotMag_Halo, alpha_Halo, tbar_Halo,
v0_Sub, vDotMag_Sub, alpha_Sub, tbar_Sub, frac_Sub, freqs)
```
## Generate the Data
Here we fill the `PSD_Data` array with each day of collected data. Data is generated assuming that the entire 24 hours is used for data collection. If the collection time $T$ as inferred from the user-defined frequency resolution in `freqs` is less than 24 hours, then the data generated for each day is constructed as $24$ hours / $T$ stacked copies of data collections of duration $T$.
We then stack data over the course of the year. The data stacked over the duration of a year is used for simple scans for an axion signal, while the data stacked at the level of a day may be used for more sophisticated scans and parameter estimation. The implied collection time is quantified in the quick check below.
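For intuition, the collection time follows from the frequency grid defined above: $T = 1/\Delta f \approx 909$ s here, so each day corresponds to roughly 95 stacked spectra:
```python
df = freqs[1] - freqs[0]
print("frequency resolution: {:.3e} Hz".format(df))
print("collection time T = 1/df: {:.1f} s".format(1 / df))
print("stacked copies per day: {:.1f}".format(86400 * df))
```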
```python
# Fill the PSD_Data array
for i in range(365):
PSD_Data[i] = np.array(generator.makePSD(i))
# Average over the days in the PSD_Data array for the simple scan
Stacked_PSD_Data = np.mean(PSD_Data, axis = 0)
plt.plot(freqs, Stacked_PSD_Data)
plt.xlabel('Frequency [Hz]')
plt.ylabel('PSD')
plt.show()
```
# Step 2: The Simple Scan
## Calculating the Test Statistic
Next we analyze the MC data when stacked over the duration of a year. In this analysis, we only scan over values of A and ma, and we will assume the Axion DM to follow a bulk Standard Halo Model profile with no substructure present. These steps can be repeated on real data.
The analysis is performed using `scan.TS_Scan`, which has the following arguments:
| Argument | Purpose |
| ------------- | ------------- |
| Stacked_PSD_Data | Array of PSD data associated with the measurements when stacked over the duration of a year|
| freqs | Array of frequencies associated with the data points [Hz] |
|mass_TestSet | Range of axion masses scanned for in the analysis|
| A_TestSet| Range of values of the A parameter scanned for at each mass|
| PSDback | Mean expected background Power Spectral Density |
| v0_Exp | Expected value of the SHM velocity dispersion [km/s]|
| vObs_Exp | Expected value of the sun's speed with respect to the bulk SHM Halo [km/s]|
| num_stacked | Total number of collections of duration T contained in the stacked data |
The output of `scan.TS_Scan` is `TS_Array`, the value of the test statistic TS(`ma`, `A`) at each value of `ma` and `A` in `mass_TestSet` and `A_TestSet`.
## Defining the Scan Parameters
Since we expect to be searching for a bulk SHM distribution, we take SHM parameters `v0_Exp = 220.0` and `vObs_Exp = 232.36`.
The set of masses in `mass_TestSet` is taken to be points on a log-spaced grid beginning at the mass corresponding to the minimum frequency for which we have data, with a spacing factor of `1 + v0_Exp**2 / (2 * c**2)`.
The set of `A` in `A_TestSet` is determined by the value of `A` of an injected signal expected to produce a 5$\sigma$ detection. At a given mass-point, this value of `A` can be computed using [57] and [60] of 1711.10489. To ensure a sufficiently large range, we compute the maximum value of such an `A` over all masses, denoting this `A_max`. Then at each mass-point, we scan over values from `-A_max` to `5 * A_max`.
```python
# Expectation Parameters
v0_Exp = 220.0
vObs_Exp = 232.36
# Construct the range of masses to scan over
N_testMass = int(np.log(freqs[-50] / freqs[0]) / np.log(1. + v0_Exp**2. / 2. / c**2.))
mass_TestSet = (freqs[0]*(1. + v0_Exp**2. / 2. / c**2.)**np.arange(N_testMass) * 2*np.pi)
# Construct the range of signal strengths to scan over
Sigma_A = au.getSigma_A(mass_TestSet, 365, 86400, v0_Exp, vObs_Exp, PSDback)
N_indMasses = 4 * c**2 / (3 * v0_Exp**2) * np.log(np.amax(freqs)/np.amin(freqs))
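# The threshold below corresponds to a 5 sigma global significance after a
# look-elsewhere correction: the local tail probability is the 5 sigma tail
# probability divided by the number of independent mass points.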
TS_Thresh = scipy.stats.norm.ppf(1 - (1-scipy.stats.norm.cdf(5))/N_indMasses)**2
detection_Threshold = np.sqrt(TS_Thresh)*Sigma_A
A_TestSet = np.linspace(-1.0, 5.0, 101)*np.amax(detection_Threshold)
# Run the Scan
TS_Array = np.array(scan.TS_Scan(Stacked_PSD_Data, freqs, mass_TestSet, A_TestSet, PSDback, v0_Exp, vObs_Exp, num_stacked))
```
## Extracting Scan Values and Limits
Now that we have obtained `TS_Array`, we can extract our maximum-likelihood estimates and the 95% limits of `A` at each `ma`.
At a given `ma`, the maximum-likelihood estimate of A is given by
\begin{equation}
\hat A = \text{argmax}_{A} TS(m_a, A)
\end{equation}
At a given `ma`, the 95% limit on `A` is given by solving
\begin{equation}
TS(m_a, \hat A) - TS(m_a, A_{95\%}) = 2.71, \qquad A_{95\%} \geq \hat A
\end{equation}
```python
A_Limits = np.zeros(mass_TestSet.shape) # The expected 95% constraint
A_Scans = np.zeros((mass_TestSet.shape)) # The TS maximizing value
for i in range(len(A_Limits)):
# Naive TS maximizing value
A_Scans[i] = A_TestSet[np.argmax(TS_Array[i])]
# Extracting the 95% constraint by a shift in the TS of 2.71
temp = np.copy(TS_Array[i])
temp[0:np.nanargmax(temp)] = float('nan')
temp -= np.nanmax(temp)
A_Limits[i] = A_TestSet[np.nanargmin(np.abs(temp+2.706))]
A_Limits = np.maximum(A_Limits, au.zScore(-1)*Sigma_A)
A_Scans = np.maximum(0, A_Scans)
```
```python
plt.subplot(2, 2, 1)
plt.title('Limits', size = 20)
plt.plot(mass_TestSet, A_Limits)
plt.fill_between(mass_TestSet, au.zScore(-1)*Sigma_A, au.zScore(2)*Sigma_A, color = 'yellow')
plt.fill_between(mass_TestSet, au.zScore(-1)*Sigma_A, au.zScore(1)*Sigma_A, color = 'limegreen')
plt.axvline(x=ma, ls = '--', c = 'black')
plt.subplot(2, 2, 2)
plt.title('MLE Values', size = 20)
plt.plot(mass_TestSet, A_Scans)
plt.plot(mass_TestSet, detection_Threshold)
plt.axvline(x=ma, ls = '--', c = 'black')
plt.show()
```
Above, we plot the results of the simple scan for an axion signal. In the left panel, we plot the resulting 95% constraints on `A` against the 1$\sigma$ (green) and 2$\sigma$ (yellow) expected containment bands determined by the Asimov dataset according to [56] of 1711.10489. In the right panel, we plot at each mass-point the MLE of `A` along with the value of `A` at the threshold of a 5$\sigma$ detection. In both panels, the dashed vertical line marks the injected axion mass.
# Step 3: The MultiNest Scan
Now that we have discovered a well-localized axion signal, we proceed to perform a MultiNest Scan over the data stacked at the level of a day. This will allow us to perform more detailed analysis of the signal parameters. For example, a MultiNest scan could be used to gain a more accurate estimate of `A` or `ma`, to study the annual modulation parameters, or to search for substructure. With sufficient computational resources, these could all be accomplished simultaneously.
In the example below, we will perform a very basic MultiNest scan to gain a more accurate estimate of the `A` parameter under the assumption that all other signal parameters are known with perfect accuracy.
```python
# Basic Settings
nlive = 500
chains_dir = '/nfs/turbo/bsafdi/fosterjw/github/AxiScan/examples/chains/'
pymultinest_options = {'importance_nested_sampling': False,
'resume': False, 'verbose': True,
'sampling_efficiency': 'model',
'init_MPI': False, 'evidence_tolerance': 0.5,
'const_efficiency_mode': False}
```
## A-Parameter Scan
```python
# Parameter to Scan Over
A_Prior = [.5*np.amax(A_Scans), 10*np.amax(A_Scans)]
# Formatting the prior cube as required by MultiNest
theta_min = [A_Prior[0]]
theta_max = [A_Prior[1]]
theta_interval = list(np.array(theta_max) - np.array(theta_min))
n_params = len(theta_min) # number of parameters to fit for
def prior_cube(cube, ndim=1, nparams=1):
""" Cube of priors - in the format required by MultiNest
"""
for i in range(ndim):
cube[i] = cube[i] * theta_interval[i] + theta_min[i]
return cube
# Defining the likelihood function in terms of fixed and floated parameters
def LL_Multinest(theta, ndim = 1, nparams = 1):
return scan.SHM_AnnualMod_ll(freqs, PSD_Data, ma, theta[0], v0_Halo, vDotMag_Halo,
alpha_Halo, tbar_Halo, PSDback, stacked_per_day)
# Run the MultiNest Scan
pymultinest.run(LL_Multinest, prior_cube, n_params,
outputfiles_basename=chains_dir,
n_live_points=nlive, **pymultinest_options)
# Plot the posteriors found by the MultiNest Scan
chain_file = '/nfs/turbo/bsafdi/fosterjw/github/AxiScan/examples/chains/post_equal_weights.dat'
chain = np.array(np.loadtxt(chain_file))[:, :-1]
# Now make a triangle plot using corner
corner.corner(chain, smooth=1.5,
labels = ['$A$'], truths = [A],
smooth1d=1, quantiles=[0.16, 0.5, 0.84], show_titles=True,
title_fmt='.2f', title_args={'fontsize': 14},
range=[1 for _ in range(chain.shape[1])],
plot_datapoints=False, verbose=False)
plt.show()
```
## Alpha_Halo Scan
```python
# Parameter to Scan Over
alpha_Prior = [0.0, 1.0]
# Formatting the prior cube as required by MultiNest
theta_min = [alpha_Prior[0]]
theta_max = [alpha_Prior[1]]
theta_interval = list(np.array(theta_max) - np.array(theta_min))
n_params = len(theta_min) # number of parameters to fit for
def prior_cube(cube, ndim=1, nparams=1):
""" Cube of priors - in the format required by MultiNest
"""
for i in range(ndim):
cube[i] = cube[i] * theta_interval[i] + theta_min[i]
return cube
# Defining the likelihood function in terms of fixed and floated parameters
def LL_Multinest(theta, ndim = 1, nparams = 1):
return scan.SHM_AnnualMod_ll(freqs, PSD_Data, ma, A, v0_Halo, vDotMag_Halo,
theta[0], tbar_Halo, PSDback, stacked_per_day)
# Run the MultiNest Scan
pymultinest.run(LL_Multinest, prior_cube, n_params,
outputfiles_basename=chains_dir,
n_live_points=nlive, **pymultinest_options)
# Plot the posteriors found by the MultiNest Scan
chain_file = '/nfs/turbo/bsafdi/fosterjw/github/AxiScan/examples/chains/post_equal_weights.dat'
chain = np.array(np.loadtxt(chain_file))[:, :-1]
# Now make a triangle plot using corner
corner.corner(chain, smooth=1.5,
labels = ['alpha_Halo'], truths = [alpha_Halo],
smooth1d=1, quantiles=[0.16, 0.5, 0.84], show_titles=True,
title_fmt='.2f', title_args={'fontsize': 14},
range=[1 for _ in range(chain.shape[1])],
plot_datapoints=False, verbose=False)
plt.show()
```
## tbar_Halo Scan
```python
# Parameter to Scan Over
tbar_Prior = [0, 365.0]
# Formatting the prior cube as required by MultiNest
theta_min = [tbar_Prior[0]]
theta_max = [tbar_Prior[1]]
theta_interval = list(np.array(theta_max) - np.array(theta_min))
n_params = len(theta_min) # number of parameters to fit for
def prior_cube(cube, ndim=1, nparams=1):
""" Cube of priors - in the format required by MultiNest
"""
for i in range(ndim):
cube[i] = cube[i] * theta_interval[i] + theta_min[i]
return cube
# Defining the likelihood function in terms of fixed and floated parameters
def LL_Multinest(theta, ndim = 1, nparams = 1):
return scan.SHM_AnnualMod_ll(freqs, PSD_Data, ma, A, v0_Halo, vDotMag_Halo,
alpha_Halo, theta[0], PSDback, stacked_per_day)
# Run the MultiNest Scan
pymultinest.run(LL_Multinest, prior_cube, n_params,
outputfiles_basename=chains_dir,
n_live_points=nlive, **pymultinest_options)
# Plot the posteriors found by the MultiNest Scan
chain_file = '/nfs/turbo/bsafdi/fosterjw/github/AxiScan/examples/chains/post_equal_weights.dat'
chain = np.array(np.loadtxt(chain_file))[:, :-1]
# Now make a triangle plot using corner
corner.corner(chain, smooth=1.5,
labels = ['tbar_Halo'], truths = [tbar_Halo],
smooth1d=1, quantiles=[0.16, 0.5, 0.84], show_titles=True,
title_fmt='.2f', title_args={'fontsize': 14},
range=[1 for _ in range(chain.shape[1])],
plot_datapoints=False, verbose=False)
plt.show()
```
|
r=0.59
https://sandbox.dams.library.ucdavis.edu/fcrepo/rest/collection/sherry-lehmann/catalogs/d7xw26/media/images/d7xw26-051/svc:tesseract/full/full/0.59/default.jpg Accept:application/hocr+xml
|
(*
Title: Strong-Security
Authors: Sylvia Grewe, Alexander Lux, Heiko Mantel, Jens Sauer
*)
theory Strongly_Secure_Skip_Assign
imports MWLf Parallel_Composition
begin
locale Strongly_Secure_Programs =
L : MWLf_semantics "E" "BMap"
+ SS: Strong_Security "MWLfSteps_det" "DA"
for E :: "('exp, 'id, 'val) Evalfunction"
and BMap :: "'val \<Rightarrow> bool"
and DA :: "('id, 'd::order) DomainAssignment"
begin
abbreviation USdBname ::"'d \<Rightarrow> ('exp, 'id) MWLfCom Bisimulation_type"
("\<approx>\<^bsub>_\<^esub>")
where "\<approx>\<^bsub>d\<^esub> \<equiv> USdB d"
abbreviation relatedbyUSdB :: "('exp,'id) MWLfCom list \<Rightarrow> 'd
\<Rightarrow> ('exp,'id) MWLfCom list \<Rightarrow> bool" (infixr "\<approx>\<^bsub>_\<^esub>" 65)
where "V \<approx>\<^bsub>d\<^esub> V' \<equiv> (V,V') \<in> USdB d"
-- "define when two expressions are indistinguishable with respect to a domain d"
definition d_indistinguishable :: "'d::order \<Rightarrow> 'exp \<Rightarrow> 'exp \<Rightarrow> bool"
where
"d_indistinguishable d e1 e2 \<equiv>
\<forall>m m'. ((m =\<^bsub>d\<^esub> m') \<longrightarrow> ((E e1 m) = (E e2 m')))"
abbreviation d_indistinguishable' :: "'exp \<Rightarrow> 'd::order \<Rightarrow> 'exp \<Rightarrow> bool"
( "(_ \<equiv>\<^bsub>_\<^esub> _)" )
where
"e1 \<equiv>\<^bsub>d\<^esub> e2 \<equiv> d_indistinguishable d e1 e2"
-- "symmetry of d-indistinguishable"
lemma d_indistinguishable_sym:
"e \<equiv>\<^bsub>d\<^esub> e' \<Longrightarrow> e' \<equiv>\<^bsub>d\<^esub> e"
by (simp add: d_indistinguishable_def d_equal_def, metis)
--"transitivity of d-indistinguishable"
lemma d_indistinguishable_trans:
"\<lbrakk> e \<equiv>\<^bsub>d\<^esub> e'; e' \<equiv>\<^bsub>d\<^esub> e'' \<rbrakk> \<Longrightarrow> e \<equiv>\<^bsub>d\<^esub> e''"
by (simp add: d_indistinguishable_def d_equal_def, metis)
theorem Strongly_Secure_Skip:
"[skip] \<approx>\<^bsub>d\<^esub> [skip]"
proof -
def R0 \<equiv> "{(V::('exp,'id) MWLfCom list,V'::('exp,'id) MWLfCom list).
V = [skip] \<and> V' = [skip]}"
have uptoR0: "d_Bisimulation_Up_To_USdB d R0"
proof (simp add: d_Bisimulation_Up_To_USdB_def, auto)
show "sym R0" by (simp add: R0_def sym_def)
next
fix V V'
assume "(V,V') \<in> R0"
thus "length V = length V'"
by (simp add: R0_def)
next
fix V V' i m1 m1' W m2
assume inR0: "(V,V') \<in> R0"
assume irange: "i < length V"
assume step: "\<langle>V!i,m1\<rangle> \<rightarrow> \<langle>W,m2\<rangle>"
assume dequal: "m1 =\<^bsub>d\<^esub> m1'"
from inR0 have Vassump:
"V = [skip] \<and> V' = [skip]"
by (simp add: R0_def)
with step irange have step1:
"W = [] \<and> m2 = m1"
by (simp, metis MWLf_semantics.MWLfSteps_det_cases(1))
from Vassump irange obtain m2' where step2:
"\<langle>V'!i,m1'\<rangle> \<rightarrow> \<langle>[],m2'\<rangle> \<and> m2' = m1'"
by (simp, metis MWLfSteps_det.skip)
with step1 dequal trivialpair_in_USdB show "\<exists>W' m2'.
\<langle>V'!i,m1'\<rangle> \<rightarrow> \<langle>W',m2'\<rangle> \<and>
((W,W') \<in> R0 \<or> W \<approx>\<^bsub>d\<^esub> W') \<and> m2 =\<^bsub>d\<^esub> m2'"
by auto
qed
hence "R0 \<subseteq> (\<approx>\<^bsub>d\<^esub>)"
by (rule Up_To_Technique)
thus ?thesis
by (simp add: R0_def)
qed
theorem Strongly_Secure_Assign:
assumes d_indistinguishable_exp: "e \<equiv>\<^bsub>DA x\<^esub> e'"
shows "[x := e] \<approx>\<^bsub>d\<^esub> [x := e']"
proof -
def R0 \<equiv> "{(V,V'). \<exists>x e e'. V = [x := e] \<and> V' = [x := e'] \<and>
e \<equiv>\<^bsub>DA x\<^esub> e'}"
from d_indistinguishable_exp have inR0: "([x:=e],[x:=e']) \<in> R0"
by (simp add: R0_def)
have "d_Bisimulation_Up_To_USdB d R0"
proof (simp add: d_Bisimulation_Up_To_USdB_def, auto)
from d_indistinguishable_sym show "sym R0"
by (simp add: R0_def sym_def, fastforce)
next
fix V V'
assume "(V,V') \<in> R0"
thus "length V = length V'"
by (simp add: R0_def, auto)
next
fix V V' i m1 m1' W m2
assume inR0: "(V,V') \<in> R0"
assume irange: "i < length V"
assume step: "\<langle>V!i,m1\<rangle> \<rightarrow> \<langle>W,m2\<rangle>"
assume dequal: "m1 =\<^bsub>d\<^esub> m1'"
from inR0 obtain x e e' where Vassump:
"V = [x := e] \<and> V' = [x := e'] \<and>
e \<equiv>\<^bsub>DA x\<^esub> e'"
by (simp add: R0_def, auto)
with step irange obtain v where step1:
"E e m1 = v \<and> W = [] \<and> m2 = m1(x := v)"
by (auto, metis MWLf_semantics.MWLfSteps_det_cases(2))
from Vassump irange obtain m2' v' where step2:
"E e' m1' = v' \<and> \<langle>V'!i,m1'\<rangle> \<rightarrow> \<langle>[],m2'\<rangle> \<and> m2' = m1'(x := v')"
by (auto, metis MWLfSteps_det.assign)
with Vassump dequal step step1
have dequalnext: "m1(x := v) =\<^bsub>d\<^esub> m1'(x := v')"
by (simp add: d_equal_def d_indistinguishable_def, auto)
with step1 step2 trivialpair_in_USdB show "\<exists>W' m2'.
\<langle>V'!i,m1'\<rangle> \<rightarrow> \<langle>W',m2'\<rangle> \<and> ((W,W') \<in> R0 \<or> W \<approx>\<^bsub>d\<^esub> W')
\<and> m2 =\<^bsub>d\<^esub> m2'"
by auto
qed
hence "R0 \<subseteq> (\<approx>\<^bsub>d\<^esub>)"
by (rule Up_To_Technique)
with inR0 show ?thesis
by auto
qed
end
end
|
from math import log, isnan
from scipy.stats import chi2
from denovonear.load_gene import load_gene, get_de_novos_in_transcript, \
minimise_transcripts
from denovonear.load_mutation_rates import load_mutation_rates
from denovonear.load_de_novos import load_de_novos
from denovonear.site_specific_rates import SiteRates
from denovonear.simulate import get_p_value
def fishers_method(values):
""" function to combine p values, using Fisher's method
We occasionally have multiple P values for a mutation type, obtained from
alternative transcripts for the gene. If we have only one P value for the
mutation type, we just use that value; if we don't have any data, we use
"NA"; otherwise we combine the P values using Fisher's method.
Args:
values: list of P values for a gene
Returns:
combined P-value
"""
values = [ x for x in values if not isnan(x) ]
# use Fisher's combined method to estimate the P value from multiple
# P-values. The chi square statistic is -2*sum(ln(P-values))
return chi2.sf(-2 * sum(map(log, values)), 2 * len(values))
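# Illustrative example (hypothetical values): fishers_method([0.01, 0.04])
# computes -2*(log(0.01) + log(0.04)) ~= 15.65 as a chi-square statistic on
# 4 degrees of freedom, giving a combined P value of roughly 0.0035.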
def cluster_de_novos(symbol, de_novos, gene, iterations=1000000,
mut_dict=None):
""" analysis proximity cluster of de novos in a single gene
Args:
symbol: HGNC symbol for a gene
de_novos: dictionary of de novo positions for the HGNC gene,
indexed by functional type
gene: denovonear.gencode.Gene object
iterations: number of simulations to run
mut_dict: dictionary of mutation rates, indexed by trinucleotide sequence
Returns:
a dictionary containing P values, and distances for missense, nonsense,
and synonymous de novos events. Missing data is represented by "NA".
"""
if mut_dict is None:
mut_dict = load_mutation_rates()
missense = de_novos["missense"]
nonsense = de_novos["nonsense"]
if len(gene.transcripts) == 0:
nan = float('nan')
return {'miss_dist': nan, 'miss_prob': nan, 'nons_prob': nan, 'nons_dist': nan}
# load the set of transcripts that are the minimum set of transcripts
# required to contain all the de novos, unless we can't find any coding
# transcripts that contain the de novos.
transcripts = gene.transcripts
minimized = minimise_transcripts(transcripts, missense + nonsense)
transcripts = [x for x in transcripts if x.get_name() in minimized]
if len(transcripts) == 0:
nan = float('nan')
return {'miss_dist': nan, 'miss_prob': nan, 'nons_prob': nan, 'nons_dist': nan}
probs = {"miss_prob": [], "nons_prob": []}
dists = {"miss_dist": [], "nons_dist": []}
for transcript in transcripts:
missense_events = get_de_novos_in_transcript(transcript, missense)
nonsense_events = get_de_novos_in_transcript(transcript, nonsense)
rates = SiteRates(transcript, mut_dict)
(miss_dist, miss_prob) = get_p_value(transcript, rates, iterations, "missense", missense_events)
(nons_dist, nons_prob) = get_p_value(transcript, rates, iterations, "lof", nonsense_events)
dists["miss_dist"].append(miss_dist)
dists["nons_dist"].append(nons_dist)
probs["miss_prob"].append(miss_prob)
probs["nons_prob"].append(nons_prob)
# remove the de novos analysed in the current transcript, so that
# analysis of subsequent transcripts uses independent events. NOTE THAT
# THIS MIGHT MISS SOME CLUSTERING ACROSS MUTUALLY EXCLUSIVE TRANSCRIPTS
# IF THE DE NOVO EVENTS ARE NEAR THE TRANSCRIPT DIVERGENCE.
missense = [x for x in missense if x not in missense_events]
nonsense = [x for x in nonsense if x not in nonsense_events]
for key in dists:
dists[key] = ",".join([ str(x) for x in dists[key] ])
probs = {k: fishers_method(probs[k]) for k in probs}
probs.update(dists)
return probs
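# Illustrative call (hypothetical inputs; 'gene' would come from load_gene):
# de_novos = {'missense': [1234567, 1234890], 'nonsense': [1235002]}
# results = cluster_de_novos('GENE1', de_novos, gene, iterations=1000000)
# results then maps 'miss_prob', 'nons_prob', 'miss_dist', 'nons_dist' to
# combined P values and comma-joined distances.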
|
Formal statement is: lemma continuous_at_within_divide[continuous_intros]: fixes f g :: "'a::t2_space \<Rightarrow> 'b::real_normed_field" assumes "continuous (at a within s) f" "continuous (at a within s) g" and "g a \<noteq> 0" shows "continuous (at a within s) (\<lambda>x. (f x) / (g x))" Informal statement is: If $f$ and $g$ are continuous at $a$ within a set $S$, and $g(a) \neq 0$, then the function $f/g$ is continuous at $a$ within $S$.
|
(** * Language Syntax *)
(** We follow a novel interpretation of Frege's Begriffsschrift,
which is considered the origin of using two sorts of variables.
- Our style is closely related to McKinna-Pollack's _locally-named_ style,
where variables and parameters are used:
-- (locally bound) _variables_
-- _parameters_ (also called _free_ variables)
- A main difference is that we renounce the use of parameters, i.e., free variables.
*)
Set Implicit Arguments.
Require Import decidable_IN.
(** The language syntax for first-order logic is presented.
- Implication and universal quantification are the only logical symbols.
- We assume there are two decidable sets of infinitely many "unary" predicates
and "binary" functions.
This is general enough because predicate and function symbols of different
arities can be encoded. *)
Parameter predicate : Set. (* unary predicates *)
Parameter function : Set. (* binary functions *)
(** We assume decidability of function and predicate symbols *)
Axiom pred_dec : decidable predicate.
Axiom fun_dec : decidable function.
(* ####################################################### *)
(** *** Constants as parameters *)
(** Syntactically, we don't use parameters (free variables).
- We let constants play the role of parameters.
- An important consequence at the syntactic level is that substitution
is needed only for variables.
- The language itself needs to contain infinitely many constants.
Note that every language can be conservatively extended to a language
with infinitely many constants.
- Even if we formally introduce parameters, the substitution for parameters
does not play any role
- For the formalization with parameters, see the Coq files, called [with_FreeVar_*.v].
*)
(* ####################################################### *)
(** ** Pseudo-terms *)
(** Any decidable, denumerable set can be used for the representation of variables
and constants.
Here we use [name = nat], the set of natural numbers, to make the formalization
as simple as possible.
*)
Inductive trm : Set :=
| Ltr : name -> trm (* local variables *)
| Par : name -> trm (* global variables *)
| Cst : name -> trm
| App : function -> trm -> trm -> trm.
(** ** Pseudo-formulas *)
Definition atom := (predicate * trm)%type.
Inductive fml : Set :=
| Atom : atom -> fml
| Imply : fml -> fml -> fml
| Forall : name -> fml -> fml.
Notation "A --> B" := (Imply A B) (at level 69, right associativity).
(* list of constants not NECESSARY
(** List of constants occurring in an expression *)
Fixpoint oc_t (t:trm) {struct t} : list name :=
match t with
| Ltr _ => nil
| Par _ => nil
| Cst c => c :: nil
| App _ t0 t1 => (oc_t t0) ++ (oc_t t1)
end.
Fixpoint oc (A:fml) {struct A} : list name :=
match A with
| Atom (_,t) => oc_t t
| Imply B C => oc B ++ oc C
| Forall x A => oc A
end.
Fixpoint oc_c (Ga : list fml) {struct Ga} : list name :=
match Ga with
| nil => nil
| A :: Ga' => oc A ++ oc_c Ga'
end.
*)
(** List of letters occurring free in expressions *)
Fixpoint ol_t (t : trm) {struct t} : list name :=
match t with
| Ltr x => x :: nil
| Par _ => nil
| Cst _ => nil
| App _ t0 t1 => (ol_t t0) ++ (ol_t t1)
end.
Fixpoint ol (A : fml) {struct A} : list name :=
match A with
| Atom (_,t) => ol_t t
| Imply B C => ol B ++ ol C
| Forall x B => rm_name x (ol B)
end.
Fixpoint ol_c (Ga : list fml) {struct Ga} : list name :=
match Ga with
| nil => nil
| (A :: Ga') => ol A ++ ol_c Ga'
end.
(** List of parameters occurring free in expressions *)
Fixpoint op_t (t : trm) {struct t} : list name :=
match t with
| Ltr _ => nil
| Par x => x :: nil
| Cst _ => nil
| App _ t0 t1 => (op_t t0) ++ (op_t t1)
end.
Fixpoint op (A : fml) {struct A} : list name :=
match A with
| Atom (_,t) => op_t t
| Imply B C => op B ++ op C
| Forall x B => op B
end.
Fixpoint op_c (Ga : list fml) {struct Ga} : list name :=
match Ga with
| nil => nil
| (A :: Ga') => op A ++ op_c Ga'
end.
(** Fresh constants w.r.t. an expression *)
(* Definition fresh_t c t := c # (oc_t t). *)
(* Definition fresh c A := c # (oc A). *)
(** ** Decidability of syntactic equality *)
(** A specified tactic for uninteresting cases *)
Ltac neq_case :=
match goal with
| H : ?x <> ?y |- _ =>
right; red; intro Heq; inversion Heq; congruence
end.
(** [fml] is decidable. *)
Ltac neq_trm :=
match goal with
| |- context [{Ltr ?n = Ltr ?m} + {Ltr ?n <> Ltr ?m}]
=> destruct (n == m); subst; try neq_case; left; auto
| |- context [{Par ?n = Ltr ?m} + {Par ?n <> Ltr ?m}]
=> right; discriminate
| |- context [{Cst ?n = Ltr ?m} + {Cst ?n <> Ltr ?m}]
=> right; discriminate
| |- context [{App ?f ?n ?m = Ltr ?t} + {App ?f ?n ?m <> Ltr ?t}]
=> right; discriminate
(* Par *)
| |- context [{Ltr ?n = Par ?m} + {Ltr ?n <> Par ?m}]
=> right; discriminate
| |- context [{Par ?n = Par ?m} + {Par ?n <> Par ?m}]
=> destruct (n == m); subst; try neq_case; left; auto
| |- context [{Cst ?n = Par ?m} + {Cst ?n <> Par ?m}]
=> right; discriminate
| |- context [{App ?f ?n ?m = Par ?t} + {App ?f ?n ?m <> Par ?t}]
=> right; discriminate
(* Cst *)
| |- context [{Ltr ?n = Cst ?m} + {Ltr ?n <> Cst ?m}]
=> right; discriminate
| |- context [{Par ?n = Cst ?m} + {Par ?n <> Cst ?m}]
=> right; discriminate
| |- context [{Cst ?n = Cst ?m} + {Cst ?n <> Cst ?m}]
=> destruct (n == m); subst; try neq_case; left; auto
| |- context [{App ?f ?n ?m = Cst ?t} + {App ?f ?n ?m <> Cst ?t}]
=> right; discriminate
(* App *)
| |- context [{Ltr ?n = App ?f ?m ?t} + {Ltr ?n <> App ?f ?m ?t}]
=> right; discriminate
| |- context [{Par ?n = App ?f ?m ?t} + {Par ?n <> App ?f ?m ?t}]
=> right; discriminate
| |- context [{Cst ?n = App ?f ?m ?t} + {Cst ?n <> App ?f ?m ?t}]
=> right; discriminate
end.
Lemma trm_dec :
forall (t t0 : trm), {t = t0} + {t <> t0}.
Proof.
induction t, t0; try neq_trm.
destruct (fun_dec f f0); destruct (IHt1 t0_1); destruct (IHt2 t0_2);
subst; try neq_case; left; auto.
Defined.
Lemma atom_dec : forall (P Q : atom), {P = Q} + {P <> Q}.
Proof.
destruct P as [p t]; destruct Q as [q u].
decide equality; [ apply trm_dec | apply pred_dec ].
Qed.
Lemma fml_dec : forall (A B : fml), {A = B} + {A <> B}.
Proof.
induction A as [P1 |A1 IHA1 A2 IHA2| x A IHA];
destruct B as [P2 |B1 B2| y B]; try (right; discriminate).
- decide equality; auto using atom_dec, eq_nat_dec.
- destruct (IHA1 B1); destruct (IHA2 B2);
[ subst; left; reflexivity
| neq_case
| neq_case
| neq_case
].
- destruct (eqdec x y).
+ subst x;
destruct (IHA B) as [ | neq];
[left; subst; reflexivity |
right; intro Heq; elim neq; inversion Heq; reflexivity].
+ right; intro Heq; elim n; inversion Heq; reflexivity.
Qed.
(** A tactic for destructing the decidable equality between pseudo-expressions. *)
Ltac case_lang :=
let destr t u := destruct (trm_dec t u); [try subst t | idtac] in
let destr A B := destruct (fml_dec A B); [try subst A | idtac] in
match goal with
| |- context [trm_dec ?t ?u] => destr t u
| _ : context [trm_dec ?t ?u] |- _ => destr t u
| |- context [fml_dec ?A ?B] => destr A B
| _ : context [fml_dec ?A ?B] |- _ => destr A B
end.
|
<a href="https://colab.research.google.com/github/AnilZen/centpy/blob/master/notebooks/Scalar_2d.ipynb" target="_parent"></a>
# Quasilinear scalar equation with CentPy in 2d
### Import packages
```python
# Install the centpy package
!pip install centpy
```
Collecting centpy
Downloading https://files.pythonhosted.org/packages/92/89/7cbdc92609ea7790eb6444f8a189826582d675f0b7f59ba539159c43c690/centpy-0.1-py3-none-any.whl
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from centpy) (1.18.5)
Installing collected packages: centpy
Successfully installed centpy-0.1
```python
# Import numpy and centpy for the solution
from numpy import pi, sin, cos, abs, min, max
import centpy
```
```python
# Imports functions from matplotlib and setup for the animation
import matplotlib.pyplot as plt
from matplotlib import animation
from IPython.display import HTML
```
## Equation
We solve the nonlinear scalar conservation law
\begin{equation}
\partial_t u + \partial_x \sin u + \frac{1}{3} \partial_y u^3 = 0,
\end{equation}
on the domain $(x,y,t)\in([0,2\pi]\times[0,2\pi]\times[0,6])$ with initial data
\begin{equation}
u(x,y,0) = \sin \left(x+\frac{1}{2}\right) \cos(2x+y)
\end{equation}
and periodic boundary conditions. The solution is computed using a 144 $\times$ 144 mesh and CFL number 0.9.
```python
pars = centpy.Pars2d(
x_init=0, x_final=2*pi,
y_init=0.0, y_final=2*pi,
J=144, K=144,
t_final=6.0,
dt_out=0.1,
cfl=0.9,
scheme="sd3",
)
```
```python
class Scalar2d(centpy.Equation2d):
def initial_data(self):
x = self.xx.T; y = self.yy.T
return sin(x + 0.5) * cos(2*x + y)
def boundary_conditions(self, u):
# x-boundary
u[0] = u[-4]
u[1] = u[-3]
u[-2] = u[2]
u[-1] = u[3]
# y-boundary
u[:, 0] = u[:, -4]
u[:, 1] = u[:, -3]
u[:, -2] = u[:, 2]
u[:, -1] = u[:, 3]
def flux_x(self, u):
return sin(u)
def flux_y(self, u):
return 1./3 *u**3
def spectral_radius_x(self, u):
return abs(cos(u))
def spectral_radius_y(self, u):
return u**2
```
## Solution
```python
eqn = Scalar2d(pars)
soln = centpy.Solver2d(eqn)
soln.solve()
```
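As a quick sanity check (a sketch assuming `soln.u_n` stores the solution snapshots on the padded grid, as the animation below does), the spatial mean of $u$ should be approximately conserved by this periodic, conservative scheme:
```python
interior = soln.u_n[:, 2:-2, 2:-2]  # drop the two ghost cells on each side
means = interior.mean(axis=(1, 2))
print("initial mean:", means[0], " final mean:", means[-1])
```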
## Animation
```python
# Animation
j0 = slice(2, -2)
fig = plt.figure()
ax = plt.axes(xlim=(soln.x_init,soln.x_final), ylim=(soln.y_init, soln.y_final))
ax.set_title("Nonlinear scalar")
ax.set_xlabel("x")
ax.set_ylabel("y")
contours=ax.contour(soln.x[j0], soln.y[j0], soln.u_n[0,j0,j0], 8, colors='black')
img=ax.imshow(soln.u_n[0,j0,j0], extent=[0, 6.3, 0, 6.3], origin='lower',
cmap='ocean', alpha=0.5)
fig.colorbar(img)
def animate(i):
    # Remove the previous frame's contour lines; assigning to ax.collections
    # directly is not allowed in recent matplotlib versions.
    for coll in list(ax.collections):
        coll.remove()
ax.contour(soln.x[j0], soln.y[j0], soln.u_n[i,j0,j0], 8, colors='black')
img.set_array(soln.u_n[i,j0,j0])
img.autoscale()
plt.close()
anim = animation.FuncAnimation(fig, animate, frames=soln.Nt, interval=100, blit=False);
HTML(anim.to_html5_video())
```
|
(* Title: HOL/IMPP/Hoare.thy
Author: David von Oheimb
Copyright 1999 TUM
*)
section \<open>Inductive definition of Hoare logic for partial correctness\<close>
theory Hoare
imports Natural
begin
text \<open>
Completeness is taken relative to completeness of the underlying logic.
Two versions of completeness proof: nested single recursion
vs. simultaneous recursion in call rule
\<close>
type_synonym 'a assn = "'a => state => bool"
translations
(type) "'a assn" <= (type) "'a => state => bool"
definition
state_not_singleton :: bool where
"state_not_singleton = (\<exists>s t::state. s ~= t)" (* at least two elements *)
definition
peek_and :: "'a assn => (state => bool) => 'a assn" (infixr "&>" 35) where
"peek_and P p = (%Z s. P Z s & p s)"
datatype 'a triple =
triple "'a assn" com "'a assn" ("{(1_)}./ (_)/ .{(1_)}" [3,60,3] 58)
definition
triple_valid :: "nat => 'a triple => bool" ( "|=_:_" [0 , 58] 57) where
"|=n:t = (case t of {P}.c.{Q} =>
\<forall>Z s. P Z s \<longrightarrow> (\<forall>s'. <c,s> -n-> s' \<longrightarrow> Q Z s'))"
abbreviation
triples_valid :: "nat => 'a triple set => bool" ("||=_:_" [0 , 58] 57) where
"||=n:G == Ball G (triple_valid n)"
definition
hoare_valids :: "'a triple set => 'a triple set => bool" ("_||=_" [58, 58] 57) where
"G||=ts = (\<forall>n. ||=n:G --> ||=n:ts)"
abbreviation
hoare_valid :: "'a triple set => 'a triple => bool" ("_|=_" [58, 58] 57) where
"G |=t == G||={t}"
(* Most General Triples *)
definition
MGT :: "com => state triple" ("{=}._.{->}" [60] 58) where
"{=}.c.{->} = {%Z s0. Z = s0}. c .{%Z s1. <c,Z> -c-> s1}"
inductive
hoare_derivs :: "'a triple set => 'a triple set => bool" ("_||-_" [58, 58] 57) and
hoare_deriv :: "'a triple set => 'a triple => bool" ("_|-_" [58, 58] 57)
where
"G |-t == G||-{t}"
| empty: "G||-{}"
| insert: "[| G |-t; G||-ts |]
==> G||-insert t ts"
| asm: "ts <= G ==>
G||-ts" (* {P}.BODY pn.{Q} instead of (general) t for SkipD_lemma *)
| cut: "[| G'||-ts; G||-G' |] ==> G||-ts" (* for convenience and efficiency *)
| weaken: "[| G||-ts' ; ts <= ts' |] ==> G||-ts"
| conseq: "\<forall>Z s. P Z s \<longrightarrow> (\<exists>P' Q'. G|-{P'}.c.{Q'} \<and>
(\<forall>s'. (\<forall>Z'. P' Z' s \<longrightarrow> Q' Z' s') \<longrightarrow> Q Z s'))
==> G|-{P}.c.{Q}"
| Skip: "G|-{P}. SKIP .{P}"
| Ass: "G|-{%Z s. P Z (s[X::=a s])}. X:==a .{P}"
| Local: "G|-{P}. c .{%Z s. Q Z (s[Loc X::=s'<X>])}
==> G|-{%Z s. s'=s & P Z (s[Loc X::=a s])}. LOCAL X:=a IN c .{Q}"
| Comp: "[| G|-{P}.c.{Q};
G|-{Q}.d.{R} |]
==> G|-{P}. (c;;d) .{R}"
| If: "[| G|-{P &> b }.c.{Q};
G|-{P &> (Not o b)}.d.{Q} |]
==> G|-{P}. IF b THEN c ELSE d .{Q}"
| Loop: "G|-{P &> b}.c.{P} ==>
G|-{P}. WHILE b DO c .{P &> (Not o b)}"
(*
BodyN: "(insert ({P}. BODY pn .{Q}) G)
|-{P}. the (body pn) .{Q} ==>
G|-{P}. BODY pn .{Q}"
*)
| Body: "[| G Un (%p. {P p}. BODY p .{Q p})`Procs
||-(%p. {P p}. the (body p) .{Q p})`Procs |]
==> G||-(%p. {P p}. BODY p .{Q p})`Procs"
| Call: "G|-{P}. BODY pn .{%Z s. Q Z (setlocs s (getlocs s')[X::=s<Res>])}
==> G|-{%Z s. s'=s & P Z (setlocs s newlocs[Loc Arg::=a s])}.
X:=CALL pn(a) .{Q}"
section \<open>Soundness and relative completeness of Hoare rules wrt operational semantics\<close>
lemma single_stateE:
"state_not_singleton \<Longrightarrow> \<forall>t. (\<forall>s::state. s = t) \<longrightarrow> False"
apply (unfold state_not_singleton_def)
apply clarify
apply (case_tac "ta = t")
apply blast
apply (blast dest: not_sym)
done
declare peek_and_def [simp]
subsection "validity"
lemma triple_valid_def2:
"|=n:{P}.c.{Q} = (\<forall>Z s. P Z s \<longrightarrow> (\<forall>s'. <c,s> -n-> s' \<longrightarrow> Q Z s'))"
apply (unfold triple_valid_def)
apply auto
done
lemma Body_triple_valid_0: "|=0:{P}. BODY pn .{Q}"
apply (simp (no_asm) add: triple_valid_def2)
apply clarsimp
done
(* only ==> direction required *)
lemma Body_triple_valid_Suc: "|=n:{P}. the (body pn) .{Q} = |=Suc n:{P}. BODY pn .{Q}"
apply (simp (no_asm) add: triple_valid_def2)
apply force
done
lemma triple_valid_Suc [rule_format (no_asm)]: "|=Suc n:t --> |=n:t"
apply (unfold triple_valid_def)
apply (induct_tac t)
apply simp
apply (fast intro: evaln_Suc)
done
lemma triples_valid_Suc: "||=Suc n:ts ==> ||=n:ts"
apply (fast intro: triple_valid_Suc)
done
subsection "derived rules"
lemma conseq12: "[| G|-{P'}.c.{Q'}; \<forall>Z s. P Z s \<longrightarrow>
(\<forall>s'. (\<forall>Z'. P' Z' s \<longrightarrow> Q' Z' s') --> Q Z s') |]
==> G|-{P}.c.{Q}"
apply (rule hoare_derivs.conseq)
apply blast
done
lemma conseq1: "[| G|-{P'}.c.{Q}; \<forall>Z s. P Z s \<longrightarrow> P' Z s |] ==> G|-{P}.c.{Q}"
apply (erule conseq12)
apply fast
done
lemma conseq2: "[| G|-{P}.c.{Q'}; \<forall>Z s. Q' Z s \<longrightarrow> Q Z s |] ==> G|-{P}.c.{Q}"
apply (erule conseq12)
apply fast
done
lemma Body1: "[| G Un (\<lambda>p. {P p}. BODY p .{Q p})`Procs
||- (\<lambda>p. {P p}. the (body p) .{Q p})`Procs;
pn \<in> Procs |] ==> G|-{P pn}. BODY pn .{Q pn}"
apply (drule hoare_derivs.Body)
apply (erule hoare_derivs.weaken)
apply fast
done
lemma BodyN: "(insert ({P}. BODY pn .{Q}) G) |-{P}. the (body pn) .{Q} ==>
G|-{P}. BODY pn .{Q}"
apply (rule Body1)
apply (rule_tac [2] singletonI)
apply clarsimp
done
lemma escape: "[| \<forall>Z s. P Z s \<longrightarrow> G|-{\<lambda>Z s'. s'=s}.c.{\<lambda>Z'. Q Z} |] ==> G|-{P}.c.{Q}"
apply (rule hoare_derivs.conseq)
apply fast
done
lemma "constant": "[| C ==> G|-{P}.c.{Q} |] ==> G|-{\<lambda>Z s. P Z s & C}.c.{Q}"
apply (rule hoare_derivs.conseq)
apply fast
done
lemma LoopF: "G|-{\<lambda>Z s. P Z s \<and> \<not>b s}.WHILE b DO c.{P}"
apply (rule hoare_derivs.Loop [THEN conseq2])
apply (simp_all (no_asm))
apply (rule hoare_derivs.conseq)
apply fast
done
(*
Goal "[| G'||-ts; G' <= G |] ==> G||-ts"
by (etac hoare_derivs.cut 1);
by (etac hoare_derivs.asm 1);
qed "thin";
*)
lemma thin [rule_format]: "G'||-ts \<Longrightarrow> \<forall>G. G' <= G \<longrightarrow> G||-ts"
apply (erule hoare_derivs.induct)
apply (tactic \<open>ALLGOALS (EVERY'[clarify_tac \<^context>, REPEAT o smp_tac \<^context> 1])\<close>)
apply (rule hoare_derivs.empty)
apply (erule (1) hoare_derivs.insert)
apply (fast intro: hoare_derivs.asm)
apply (fast intro: hoare_derivs.cut)
apply (fast intro: hoare_derivs.weaken)
apply (rule hoare_derivs.conseq, intro strip, tactic "smp_tac \<^context> 2 1", clarify, tactic "smp_tac \<^context> 1 1",rule exI, rule exI, erule (1) conjI)
prefer 7
apply (rule_tac hoare_derivs.Body, drule_tac spec, erule_tac mp, fast)
apply (tactic \<open>ALLGOALS (resolve_tac \<^context> ((funpow 5 tl) @{thms hoare_derivs.intros}) THEN_ALL_NEW (fast_tac \<^context>))\<close>)
done
lemma weak_Body: "G|-{P}. the (body pn) .{Q} ==> G|-{P}. BODY pn .{Q}"
apply (rule BodyN)
apply (erule thin)
apply auto
done
lemma derivs_insertD: "G||-insert t ts ==> G|-t & G||-ts"
apply (fast intro: hoare_derivs.weaken)
done
lemma finite_pointwise [rule_format (no_asm)]: "[| finite U;
\<forall>p. G |- {P' p}.c0 p.{Q' p} --> G |- {P p}.c0 p.{Q p} |] ==>
G||-(%p. {P' p}.c0 p.{Q' p}) ` U --> G||-(%p. {P p}.c0 p.{Q p}) ` U"
apply (erule finite_induct)
apply simp
apply clarsimp
apply (drule derivs_insertD)
apply (rule hoare_derivs.insert)
apply auto
done
subsection "soundness"
lemma Loop_sound_lemma:
"G|={P &> b}. c .{P} ==>
G|={P}. WHILE b DO c .{P &> (Not o b)}"
apply (unfold hoare_valids_def)
apply (simp (no_asm_use) add: triple_valid_def2)
apply (rule allI)
apply (subgoal_tac "\<forall>d s s'. <d,s> -n-> s' --> d = WHILE b DO c --> ||=n:G --> (\<forall>Z. P Z s --> P Z s' & ~b s') ")
apply (erule thin_rl, fast)
apply ((rule allI)+, rule impI)
apply (erule evaln.induct)
apply (simp_all (no_asm))
apply fast
apply fast
done
lemma Body_sound_lemma:
"[| G Un (%pn. {P pn}. BODY pn .{Q pn})`Procs
||=(%pn. {P pn}. the (body pn) .{Q pn})`Procs |] ==>
G||=(%pn. {P pn}. BODY pn .{Q pn})`Procs"
apply (unfold hoare_valids_def)
apply (rule allI)
apply (induct_tac n)
apply (fast intro: Body_triple_valid_0)
apply clarsimp
apply (drule triples_valid_Suc)
apply (erule (1) notE impE)
apply (simp add: ball_Un)
apply (drule spec, erule impE, erule conjI, assumption)
apply (fast intro!: Body_triple_valid_Suc [THEN iffD1])
done
lemma hoare_sound: "G||-ts ==> G||=ts"
apply (erule hoare_derivs.induct)
apply (tactic \<open>TRYALL (eresolve_tac \<^context> [@{thm Loop_sound_lemma}, @{thm Body_sound_lemma}] THEN_ALL_NEW assume_tac \<^context>)\<close>)
apply (unfold hoare_valids_def)
apply blast
apply blast
apply (blast) (* asm *)
apply (blast) (* cut *)
apply (blast) (* weaken *)
apply (tactic \<open>ALLGOALS (EVERY'
[REPEAT o Rule_Insts.thin_tac \<^context> "hoare_derivs _ _" [],
simp_tac \<^context>, clarify_tac \<^context>, REPEAT o smp_tac \<^context> 1])\<close>)
apply (simp_all (no_asm_use) add: triple_valid_def2)
apply (intro strip, tactic "smp_tac \<^context> 2 1", blast) (* conseq *)
apply (tactic \<open>ALLGOALS (clarsimp_tac \<^context>)\<close>) (* Skip, Ass, Local *)
prefer 3 apply (force) (* Call *)
apply (erule_tac [2] evaln_elim_cases) (* If *)
apply blast+
done
section "completeness"
(* Both versions *)
(*unused*)
lemma MGT_alternI: "G|-MGT c \<Longrightarrow>
G|-{\<lambda>Z s0. \<forall>s1. <c,s0> -c-> s1 \<longrightarrow> Z=s1}. c .{\<lambda>Z s1. Z=s1}"
apply (unfold MGT_def)
apply (erule conseq12)
apply auto
done
(* requires com_det *)
lemma MGT_alternD: "state_not_singleton \<Longrightarrow>
G|-{\<lambda>Z s0. \<forall>s1. <c,s0> -c-> s1 \<longrightarrow> Z=s1}. c .{\<lambda>Z s1. Z=s1} \<Longrightarrow> G|-MGT c"
apply (unfold MGT_def)
apply (erule conseq12)
apply auto
apply (case_tac "\<exists>t. <c,s> -c-> t" for s)
apply (fast elim: com_det)
apply clarsimp
apply (drule single_stateE)
apply blast
done
lemma MGF_complete:
"{}|-(MGT c::state triple) ==> {}|={P}.c.{Q} ==> {}|-{P}.c.{Q::state assn}"
apply (unfold MGT_def)
apply (erule conseq12)
apply (clarsimp simp add: hoare_valids_def eval_eq triple_valid_def2)
done
declare WTs_elim_cases [elim!]
declare not_None_eq [iff]
(* requires com_det, escape (i.e. hoare_derivs.conseq) *)
lemma MGF_lemma1 [rule_format (no_asm)]: "state_not_singleton \<Longrightarrow>
\<forall>pn \<in> dom body. G|-{=}.BODY pn.{->} \<Longrightarrow> WT c --> G|-{=}.c.{->}"
apply (induct_tac c)
apply (tactic \<open>ALLGOALS (clarsimp_tac \<^context>)\<close>)
prefer 7 apply (fast intro: domI)
apply (erule_tac [6] MGT_alternD)
apply (unfold MGT_def)
apply (drule_tac [7] bspec, erule_tac [7] domI)
apply (rule_tac [7] escape, tactic \<open>clarsimp_tac \<^context> 7\<close>,
rename_tac [7] "fun" y Z,
rule_tac [7] P1 = "%Z' s. s= (setlocs Z newlocs) [Loc Arg ::= fun Z]" in hoare_derivs.Call [THEN conseq1], erule_tac [7] conseq12)
apply (erule_tac [!] thin_rl)
apply (rule hoare_derivs.Skip [THEN conseq2])
apply (rule_tac [2] hoare_derivs.Ass [THEN conseq1])
apply (rule_tac [3] escape, tactic \<open>clarsimp_tac \<^context> 3\<close>,
rename_tac [3] loc "fun" y Z,
rule_tac [3] P1 = "%Z' s. s= (Z[Loc loc::=fun Z])" in hoare_derivs.Local [THEN conseq1],
erule_tac [3] conseq12)
apply (erule_tac [5] hoare_derivs.Comp, erule_tac [5] conseq12)
apply (tactic \<open>(resolve_tac \<^context> @{thms hoare_derivs.If} THEN_ALL_NEW
eresolve_tac \<^context> @{thms conseq12}) 6\<close>)
apply (rule_tac [8] hoare_derivs.Loop [THEN conseq2], erule_tac [8] conseq12)
apply auto
done
(* Version: nested single recursion *)
lemma nesting_lemma [rule_format]:
assumes "!!G ts. ts <= G ==> P G ts"
and "!!G pn. P (insert (mgt_call pn) G) {mgt(the(body pn))} ==> P G {mgt_call pn}"
and "!!G c. [| wt c; \<forall>pn\<in>U. P G {mgt_call pn} |] ==> P G {mgt c}"
and "!!pn. pn \<in> U ==> wt (the (body pn))"
shows "finite U ==> uG = mgt_call`U ==>
\<forall>G. G <= uG --> n <= card uG --> card G = card uG - n --> (\<forall>c. wt c --> P G {mgt c})"
apply (induct_tac n)
apply (tactic \<open>ALLGOALS (clarsimp_tac \<^context>)\<close>)
apply (subgoal_tac "G = mgt_call ` U")
prefer 2
apply (simp add: card_seteq)
apply simp
apply (erule assms(3-)) (*MGF_lemma1*)
apply (rule ballI)
apply (rule assms) (*hoare_derivs.asm*)
apply fast
apply (erule assms(3-)) (*MGF_lemma1*)
apply (rule ballI)
apply (case_tac "mgt_call pn \<in> G")
apply (rule assms) (*hoare_derivs.asm*)
apply fast
apply (rule assms(2-)) (*MGT_BodyN*)
apply (drule spec, erule impE, erule_tac [2] impE, drule_tac [3] spec, erule_tac [3] mp)
apply (erule_tac [3] assms(4-))
apply fast
apply (drule finite_subset)
apply (erule finite_imageI)
apply (simp (no_asm_simp))
done
lemma MGT_BodyN: "insert ({=}.BODY pn.{->}) G|-{=}. the (body pn) .{->} ==>
G|-{=}.BODY pn.{->}"
apply (unfold MGT_def)
apply (rule BodyN)
apply (erule conseq2)
apply force
done
(* requires BodyN, com_det *)
lemma MGF: "[| state_not_singleton; WT_bodies; WT c |] ==> {}|-MGT c"
apply (rule_tac P = "%G ts. G||-ts" and U = "dom body" in nesting_lemma)
apply (erule hoare_derivs.asm)
apply (erule MGT_BodyN)
apply (rule_tac [3] finite_dom_body)
apply (erule MGF_lemma1)
prefer 2 apply (assumption)
apply blast
apply clarsimp
apply (erule (1) WT_bodiesD)
apply (rule_tac [3] le_refl)
apply auto
done
(* Version: simultaneous recursion in call rule *)
(* finiteness not really necessary here *)
lemma MGT_Body: "[| G Un (%pn. {=}. BODY pn .{->})`Procs
||-(%pn. {=}. the (body pn) .{->})`Procs;
finite Procs |] ==> G ||-(%pn. {=}. BODY pn .{->})`Procs"
apply (unfold MGT_def)
apply (rule hoare_derivs.Body)
apply (erule finite_pointwise)
prefer 2 apply (assumption)
apply clarify
apply (erule conseq2)
apply auto
done
(* requires empty, insert, com_det *)
lemma MGF_lemma2_simult [rule_format (no_asm)]: "[| state_not_singleton; WT_bodies;
F<=(%pn. {=}.the (body pn).{->})`dom body |] ==>
(%pn. {=}. BODY pn .{->})`dom body||-F"
apply (frule finite_subset)
apply (rule finite_dom_body [THEN finite_imageI])
apply (rotate_tac 2)
apply (tactic "make_imp_tac \<^context> 1")
apply (erule finite_induct)
apply (clarsimp intro!: hoare_derivs.empty)
apply (clarsimp intro!: hoare_derivs.insert simp del: range_composition)
apply (erule MGF_lemma1)
prefer 2 apply (fast dest: WT_bodiesD)
apply clarsimp
apply (rule hoare_derivs.asm)
apply (fast intro: domI)
done
(* requires Body, empty, insert, com_det *)
lemma MGF': "[| state_not_singleton; WT_bodies; WT c |] ==> {}|-MGT c"
apply (rule MGF_lemma1)
apply assumption
prefer 2 apply (assumption)
apply clarsimp
apply (subgoal_tac "{}||- (%pn. {=}. BODY pn .{->}) `dom body")
apply (erule hoare_derivs.weaken)
apply (fast intro: domI)
apply (rule finite_dom_body [THEN [2] MGT_Body])
apply (simp (no_asm))
apply (erule (1) MGF_lemma2_simult)
apply (rule subset_refl)
done
(* requires Body+empty+insert / BodyN, com_det *)
lemmas hoare_complete = MGF' [THEN MGF_complete]
subsection "unused derived rules"
lemma falseE: "G|-{%Z s. False}.c.{Q}"
apply (rule hoare_derivs.conseq)
apply fast
done
lemma trueI: "G|-{P}.c.{%Z s. True}"
apply (rule hoare_derivs.conseq)
apply (fast intro!: falseE)
done
lemma disj: "[| G|-{P}.c.{Q}; G|-{P'}.c.{Q'} |]
==> G|-{%Z s. P Z s | P' Z s}.c.{%Z s. Q Z s | Q' Z s}"
apply (rule hoare_derivs.conseq)
apply (fast elim: conseq12)
done (* analogue conj non-derivable *)
lemma hoare_SkipI: "(\<forall>Z s. P Z s \<longrightarrow> Q Z s) \<Longrightarrow> G|-{P}. SKIP .{Q}"
apply (rule conseq12)
apply (rule hoare_derivs.Skip)
apply fast
done
subsection "useful derived rules"
lemma single_asm: "{t}|-t"
apply (rule hoare_derivs.asm)
apply (rule subset_refl)
done
lemma export_s: "[| !!s'. G|-{%Z s. s'=s & P Z s}.c.{Q} |] ==> G|-{P}.c.{Q}"
apply (rule hoare_derivs.conseq)
apply auto
done
lemma weak_Local: "[| G|-{P}. c .{Q}; \<forall>k Z s. Q Z s --> Q Z (s[Loc Y::=k]) |] ==>
G|-{%Z s. P Z (s[Loc Y::=a s])}. LOCAL Y:=a IN c .{Q}"
apply (rule export_s)
apply (rule hoare_derivs.Local)
apply (erule conseq2)
apply (erule spec)
done
(*
Goal "!Q. G |-{%Z s. ~(? s'. <c,s> -c-> s')}. c .{Q}"
by (induct_tac "c" 1);
by Auto_tac;
by (rtac conseq1 1);
by (rtac hoare_derivs.Skip 1);
force 1;
by (rtac conseq1 1);
by (rtac hoare_derivs.Ass 1);
force 1;
by (defer_tac 1);
###
by (rtac hoare_derivs.Comp 1);
by (dtac spec 2);
by (dtac spec 2);
by (assume_tac 2);
by (etac conseq1 2);
by (Clarsimp_tac 2);
force 1;
*)
end
|
module FunctionName where
open import OscarPrelude
record FunctionName : Set
where
constructor ⟨_⟩
field
name : Nat
open FunctionName public
instance EqFunctionName : Eq FunctionName
Eq._==_ EqFunctionName _ = decEq₁ (cong name) ∘ (_≟_ on name $ _)
|
`basis/ass_operad` := (A::set) -> map(u -> bar(op(u)),`list_elements/ord`(A));
`vec/ass_operad` := (A::set) -> proc(u)
local n,ub,uc,T,B,i;
global `vec_table/ass_operad`;
n := nops(A);
if u = 0 then
return [0$(n!)];
elif type(u,`+`) then
return map(`vec/ass_operad`(A),u);
else
if type(u,specfunc(bar)) then
ub := u;
uc := 1;
elif type(u,`*`) then
ub,uc := selectremove(type,u,specfunc(bar));
ub := map(op,bar(ub));
else
return FAIL;
fi;
if not(type(`vec_table/ass_operad`[A],table)) then
T := table():
B := `basis/ass_operad`(A);
for i from 1 to n! do
T[B[i]] := [0$(i-1),1,0$(n!-i)];
od:
`vec_table/ass_operad`[A] := eval(T);
fi;
return uc *~ `vec_table/ass_operad`[A][ub];
fi;
end:
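A minimal Python sketch of the same bookkeeping, assuming bar-words over A
are represented as tuples of the elements of A (this mirrors the Maple
routine above; the basis ordering and all names are illustrative, not part
of the original library):

from itertools import permutations

def basis_ass_operad(A):
    # one basis element per ordering (bar-word) of the set A
    return sorted(permutations(sorted(A)))

def vec_ass_operad(A, word, coeff=1):
    # coordinate vector of coeff * word with respect to the basis above
    basis = basis_ass_operad(A)
    vec = [0] * len(basis)
    vec[basis.index(tuple(word))] = coeff
    return vec

# for A = {1, 2} the basis is [(1, 2), (2, 1)]
assert vec_ass_operad({1, 2}, (2, 1)) == [0, 1]
assert vec_ass_operad({1, 2}, (1, 2), coeff=3) == [3, 0]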
|
# returns a list of adshs which contain two given tags
import datetime
import numpy as np
import csv
import h5py as h5
import datasets_lib as ds
# store starting time for calculating total processing time
time_start = datetime.datetime.now()
# set path to folder Datasets/
path = 'D:/DataSets/'
# given tags as original string
tag_to_look_1 = 'NetIncomeLoss'
tag_to_look_2 = 'ProfitLoss'
# load index_tag
index_tag=ds.list_from_file(path+'/reindexed/index_tag.txt')
# get reindexed value of the tag
tag_to_look_1_int = index_tag.index(tag_to_look_1)
tag_to_look_2_int = index_tag.index(tag_to_look_2)
# load a list of all adshs
index_adsh=ds.list_from_file(path+'/reindexed/index_adsh.txt')
# create an array for marking adshs
adsh_mark = np.zeros((len(index_adsh),2), dtype = bool)
# collect adsh statistics
with open(path + 'filter_1/filter_1_num.txt') as f:
f_object = csv.reader(f, delimiter='\t')
for row in f_object:
adsh_int = int(row[0])
tag_int = int(row[1])
# is it the tag we are looking for?
if tag_int == tag_to_look_1_int:
# if yes, mark corresponding adsh as used
adsh_mark[adsh_int,0] = True
if tag_int == tag_to_look_2_int:
adsh_mark[adsh_int,1] = True
# create vector with True elements for adsh for which both tags exist
adsh_mark_all = np.all(adsh_mark, axis=1, keepdims=True)
# load elig_adsh_list
with h5.File(path + 'filter_1/elig_adsh_list.h5', 'r') as hf:
elig_adsh_list = hf['elig_adsh_list'][:]
# create a mask of eligible adsh
elig_adsh_mask = np.zeros((len(index_adsh),1), dtype = bool)
elig_adsh_mask[elig_adsh_list] = True
# create a vector with True for adsh which we are looking for
result = np.logical_and(elig_adsh_mask, adsh_mark_all)
# extract adsh indexes
valid_adsh_list = np.nonzero(result)[0]
print('Number of adsh for which both tags exist:',len(valid_adsh_list))
print('First 20 entries of the list:')
print(valid_adsh_list[0:20])
# processing time
print('time elapsed - ', datetime.datetime.now() - time_start)
|
State Before: α : Type u_1
β : Type u_2
s : Multiset α
t : Multiset β
⊢ ↑card (disjSum s t) = ↑card s + ↑card t
State After: no goals
Tactic: rw [disjSum, card_add, card_map, card_map]
|
module Data.Rel.Complement
import Data.Rel
import Data.Fun
import Data.Fun.Extra
import Data.HVect
%default total
||| The logical complement of a relation.
public export
complement : {ts : Vect n Type} -> (p : Rel ts) -> Rel ts
complement = chain Not
||| The negation of a relation for some elements
||| is equal to the complement of the relation.
public export
notToComplement :
{0 ts : Vect n Type}
-> (p : Rel ts)
-> (elems : HVect ts)
-> Not (uncurry p elems) = uncurry (complement {ts = ts} p) elems
notToComplement p = chainUncurry p Not
|
Require Import String.
Local Open Scope string.
Definition procRqValidReg := "procRqValid".
Definition procRqReplaceReg := "procRqReplace".
Definition procRqWaitReg := "procRqWait".
Definition procRqReg := "procRq".
Definition l1MissByState := "l1MissByState".
Definition l1MissByLine := "l1MissByLine".
Definition l1Hit := "l1Hit".
Definition writeback := "writeback".
Definition upgRq := "upgRq".
Definition upgRs := "upgRs".
Definition ld := "ld".
Definition st := "st".
Definition drop := "drop".
Definition pProcess := "pProcess".
Definition cRqValidReg := "cRqValid".
Definition cRqDirwReg := "cRqDirw".
Definition cRqReg := "cRqReg".
Definition missByState := "missByState".
Definition dwnRq := "dwnRq".
Definition dwnRs_wait := "dwnRs_wait".
Definition dwnRs_noWait := "dwnRs_noWait".
Definition deferred := "deferred".
Definition rqFromProc := "rqFromProc".
Definition rsToProc := "rsToProc".
Definition rqToParent := "rqToParent".
Definition rsToParent := "rsToParent".
Definition rqFromChild := "rqFromChild".
Definition rsFromChild := "rsFromChild".
Definition fromParent := "fromParent".
Definition toChild := "toChild".
Definition line := "line".
Definition tag := "tag".
Definition cs := "cs".
Definition mcs := "mcs".
Definition mline := "mline".
Definition elt := "elt".
Definition enqName := "enq".
Definition deqName := "deq".
Definition enqP := "enqP".
Definition deqP := "deqP".
Definition empty := "empty".
Definition full := "full".
Definition firstEltName := "firstElt".
Definition addr := "addr".
Definition data := "data".
Definition dataArray := "dataArray".
Definition read := "read".
Definition write := "write".
Definition rqFromCToPRule := "rqFromCToP".
Definition rsFromCToPRule := "rsFromCToP".
Definition fromPToCRule := "fromPToC".
Definition read0 := "read0".
Definition read1 := "read1".
Definition read2 := "read2".
Definition read3 := "read3".
Definition read4 := "read4".
Definition read5 := "read5".
Definition read6 := "read6".
Definition read7 := "read7".
Definition read8 := "read8".
Definition read9 := "read9".
Close Scope string.
#[global] Hint Unfold
procRqValidReg procRqReplaceReg procRqWaitReg procRqReg
l1MissByState l1MissByLine l1Hit writeback
upgRq upgRs ld st drop pProcess
cRqValidReg cRqDirwReg cRqReg missByState
dwnRq dwnRs_wait dwnRs_noWait deferred
rqFromProc rsToProc rqToParent rsToParent
rqFromChild rsFromChild fromParent toChild
line tag cs mcs mline
elt enqName deqName enqP deqP empty full firstEltName
addr data dataArray read write
read0 read1 read2 read3 read4 read5 read6 read7 read8 read9
rqFromCToPRule rsFromCToPRule fromPToCRule
: NameDefs.
|
import logging
from typing import List
import networkx as nx
from fastapi import APIRouter, BackgroundTasks, Depends, HTTPException
from models_library.projects import ProjectID
from models_library.projects_state import RunningState
from starlette import status
from starlette.requests import Request
from tenacity import (
before_sleep_log,
retry,
retry_if_result,
stop_after_delay,
wait_random,
)
from ...models.domains.comp_tasks import (
CompTaskAtDB,
ComputationTaskCreate,
ComputationTaskDelete,
ComputationTaskOut,
ComputationTaskStop,
)
from ...models.domains.projects import ProjectAtDB
from ...models.schemas.constants import UserID
from ...modules.db.repositories.comp_pipelines import CompPipelinesRepository
from ...modules.db.repositories.comp_tasks import CompTasksRepository
from ...modules.db.repositories.projects import ProjectsRepository
from ...utils.computations import (
get_pipeline_state_from_task_states,
is_pipeline_running,
is_pipeline_stopped,
)
from ...utils.dags import create_dag_graph, find_entrypoints
from ...utils.exceptions import ProjectNotFoundError
from ..dependencies.celery import CeleryClient, get_celery_client
from ..dependencies.database import get_repository
from ..dependencies.director_v0 import DirectorV0Client, get_director_v0_client
router = APIRouter()
log = logging.getLogger(__file__)
PIPELINE_ABORT_TIMEOUT_S = 10
def celery_on_message(body):
# FIXME: this might become handy when we stop starting tasks recursively
log.warning(body)
def background_on_message(task):
# FIXME: this might become handy when we stop starting tasks recursively
log.warning(task.get(on_message=celery_on_message, propagate=False))
async def _abort_pipeline_tasks(
project: ProjectAtDB,
tasks: List[CompTaskAtDB],
computation_tasks: CompTasksRepository,
celery_client: CeleryClient,
):
await computation_tasks.mark_project_tasks_as_aborted(project)
celery_client.abort_computation_tasks([str(t.job_id) for t in tasks])
log.debug(
"Computational task stopped for project %s",
project.uuid,
)
@router.post(
"",
summary="Create and optionally start a new computation",
response_model=ComputationTaskOut,
status_code=status.HTTP_201_CREATED,
)
async def create_computation(
# pylint: disable=too-many-arguments
job: ComputationTaskCreate,
background_tasks: BackgroundTasks,
request: Request,
project_repo: ProjectsRepository = Depends(get_repository(ProjectsRepository)),
computation_pipelines: CompPipelinesRepository = Depends(
get_repository(CompPipelinesRepository)
),
computation_tasks: CompTasksRepository = Depends(
get_repository(CompTasksRepository)
),
celery_client: CeleryClient = Depends(get_celery_client),
director_client: DirectorV0Client = Depends(get_director_v0_client),
):
log.debug(
"User %s is creating a new computation from project %s",
job.user_id,
job.project_id,
)
try:
# get the project
project: ProjectAtDB = await project_repo.get_project(job.project_id)
# FIXME: this could not be valid anymore if the user deletes the project in between right?
# check if current state allow to modify the computation
comp_tasks: List[CompTaskAtDB] = await computation_tasks.get_comp_tasks(
job.project_id
)
pipeline_state = get_pipeline_state_from_task_states(
comp_tasks, celery_client.settings.publication_timeout
)
if is_pipeline_running(pipeline_state):
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail=f"Projet {job.project_id} already started, current state is {pipeline_state}",
)
# create the computational DAG
dag_graph = create_dag_graph(project.workbench)
# validate DAG
if not nx.is_directed_acyclic_graph(dag_graph):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=f"Project {job.project_id} is not a valid directed acyclic graph!",
)
if job.start_pipeline:
# find the entrypoints, if not the pipeline cannot be started
entrypoints = find_entrypoints(dag_graph)
if not entrypoints:
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
detail=f"Project {job.project_id} has no services to compute",
)
# ok so put the tasks in the db
await computation_pipelines.upsert_pipeline(
project.uuid, dag_graph, job.start_pipeline
)
await computation_tasks.upsert_tasks_from_project(
project, director_client, job.start_pipeline
)
if job.start_pipeline:
# trigger celery
task = celery_client.send_computation_task(job.user_id, job.project_id)
background_tasks.add_task(background_on_message, task)
log.debug(
"Started computational task %s for user %s based on project %s",
task.id,
job.user_id,
job.project_id,
)
return ComputationTaskOut(
id=job.project_id,
state=RunningState.PUBLISHED
if job.start_pipeline
else RunningState.NOT_STARTED,
url=f"{request.url}/{job.project_id}",
stop_url=f"{request.url}/{job.project_id}:stop"
if job.start_pipeline
else None,
)
except ProjectNotFoundError as e:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=str(e)) from e
@router.get(
"/{project_id}",
summary="Returns a computation pipeline state",
response_model=ComputationTaskOut,
status_code=status.HTTP_202_ACCEPTED,
)
async def get_computation(
user_id: UserID,
project_id: ProjectID,
request: Request,
project_repo: ProjectsRepository = Depends(get_repository(ProjectsRepository)),
computation_tasks: CompTasksRepository = Depends(
get_repository(CompTasksRepository)
),
celery_client: CeleryClient = Depends(get_celery_client),
):
log.debug("User %s getting computation status for project %s", user_id, project_id)
try:
# check that project actually exists
# TODO: get a copy of the project and process it here instead!
await project_repo.get_project(project_id)
# get the project task states
comp_tasks: List[CompTaskAtDB] = await computation_tasks.get_comp_tasks(
project_id
)
pipeline_state = get_pipeline_state_from_task_states(
comp_tasks, celery_client.settings.publication_timeout
)
log.debug(
"Computational task status by user %s for project %s is %s",
user_id,
project_id,
pipeline_state,
)
task_out = ComputationTaskOut(
id=project_id,
state=pipeline_state,
url=f"{request.url.remove_query_params('user_id')}",
stop_url=f"{request.url.remove_query_params('user_id')}:stop"
if is_pipeline_running(pipeline_state)
else None,
)
return task_out
except ProjectNotFoundError as e:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=str(e)) from e
# NOTE: this will be re-used for the prep2go API stuff... don't worry...
# task = AsyncResult(str(computation_id))
# if task.state == RunningState.SUCCESS:
# return ComputationTask(id=task.id, state=task.state, result=task.result)
# if task.state == RunningState.FAILED:
# return ComputationTask(
# id=task.id,
# state=task.state,
# result=task.backend.get(task.backend.get_key_for_task(task.id)),
# )
# return ComputationTask(id=task.id, state=task.state, result=task.info)
@router.post(
"/{project_id}:stop",
summary="Stops a computation pipeline",
response_model=None,
status_code=status.HTTP_202_ACCEPTED,
)
async def stop_computation_project(
comp_task_stop: ComputationTaskStop,
project_id: ProjectID,
request: Request,
project_repo: ProjectsRepository = Depends(get_repository(ProjectsRepository)),
computation_tasks: CompTasksRepository = Depends(
get_repository(CompTasksRepository)
),
celery_client: CeleryClient = Depends(get_celery_client),
):
log.debug(
"User %s stopping computation for project %s",
comp_task_stop.user_id,
project_id,
)
try:
# get the project
project: ProjectAtDB = await project_repo.get_project(project_id)
# check if current state allow to stop the computation
comp_tasks: List[CompTaskAtDB] = await computation_tasks.get_comp_tasks(
project_id
)
pipeline_state = get_pipeline_state_from_task_states(
comp_tasks, celery_client.settings.publication_timeout
)
if is_pipeline_running(pipeline_state):
await _abort_pipeline_tasks(
project, comp_tasks, computation_tasks, celery_client
)
return ComputationTaskOut(
id=project_id,
state=pipeline_state,
url=f"{str(request.url).rstrip(':stop')}",
)
except ProjectNotFoundError as e:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=str(e)) from e
@router.delete(
"/{project_id}",
summary="Deletes a computation pipeline",
response_model=None,
status_code=status.HTTP_204_NO_CONTENT,
)
async def delete_pipeline(
comp_task_stop: ComputationTaskDelete,
project_id: ProjectID,
project_repo: ProjectsRepository = Depends(get_repository(ProjectsRepository)),
computation_pipelines: CompPipelinesRepository = Depends(
get_repository(CompPipelinesRepository)
),
computation_tasks: CompTasksRepository = Depends(
get_repository(CompTasksRepository)
),
celery_client: CeleryClient = Depends(get_celery_client),
):
try:
# get the project
project: ProjectAtDB = await project_repo.get_project(project_id)
# check if current state allow to stop the computation
comp_tasks: List[CompTaskAtDB] = await computation_tasks.get_comp_tasks(
project_id
)
pipeline_state = get_pipeline_state_from_task_states(
comp_tasks, celery_client.settings.publication_timeout
)
if is_pipeline_running(pipeline_state):
if not comp_task_stop.force:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail=f"Projet {project_id} is currently running and cannot be deleted, current state is {pipeline_state}",
)
# abort the pipeline first
await _abort_pipeline_tasks(
project, comp_tasks, computation_tasks, celery_client
)
def return_last_value(retry_state):
"""return the result of the last call attempt"""
return retry_state.outcome.result()
@retry(
stop=stop_after_delay(PIPELINE_ABORT_TIMEOUT_S),
wait=wait_random(0, 2),
retry_error_callback=return_last_value,
retry=retry_if_result(lambda result: result is False),
reraise=False,
before_sleep=before_sleep_log(log, logging.INFO),
)
async def check_pipeline_stopped() -> bool:
comp_tasks: List[CompTaskAtDB] = await computation_tasks.get_comp_tasks(
project_id
)
pipeline_state = get_pipeline_state_from_task_states(
comp_tasks,
celery_client.settings.publication_timeout,
)
return is_pipeline_stopped(pipeline_state)
# wait for the pipeline to be stopped
if not await check_pipeline_stopped():
log.error(
"pipeline %s could not be stopped properly after %ss",
project_id,
PIPELINE_ABORT_TIMEOUT_S,
)
# delete the pipeline now
await computation_tasks.delete_tasks_from_project(project)
await computation_pipelines.delete_pipeline(project_id)
except ProjectNotFoundError as e:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=str(e)) from e
|
!
! CSEM_Model_Manager
!
! Module containing functions to manage all the model options already implemented in CSEM
! and registered in the Model_Registor_File.
!
!
! CREATION HISTORY:
! Written by: Ming Chen, 08-17-2017
! [email protected]
!
MODULE CSEM_Model_Manager
! -----------------
! Environment setup
! -----------------
! Module use
USE CSEM_Type_Kinds, ONLY: fp => CSEM_fp
! Disable implicit typing
IMPLICIT NONE
! ------------
! Visibilities
! ------------
! Everything private by default
PRIVATE
PUBLIC :: CSEM_Model_ID
PUBLIC :: Load_Model_Repo
PUBLIC :: Set_Model_Option
PUBLIC :: Inq_Model_Option
PUBLIC :: Get_Data_Path
TYPE CSEM_Model_ID
CHARACTER(LEN=10) :: CLASS
CHARACTER(LEN=30) :: NAME
CHARACTER(LEN=256) :: DATA_PATH
END TYPE CSEM_Model_ID
INTEGER, PARAMETER :: N_CSEM_Models = 100
TYPE(CSEM_Model_ID),SAVE :: Model_LIST(N_CSEM_Models)
LOGICAL, SAVE :: IS_INITED = .FALSE.
! Default algorithms
TYPE(CSEM_Model_ID), PRIVATE :: &
MW_LAND_Model = CSEM_Model_ID("MW_LAND", "NESDIS_Land_MW", "./"), &
MW_WATER_Model = CSEM_Model_ID("MW_WATER", "NESDIS_FASTEM_V6", "./"), &
MW_SNOW_Model = CSEM_Model_ID("MW_SNOW", "NESDIS_Snow_MW", "./"), &
MW_ICE_Model = CSEM_Model_ID("MW_ICE", "NESDIS_Ice_MW", "./"), &
IR_LAND_Model = CSEM_Model_ID("IR_LAND", "NPOESS_LUT", "./"), &
IR_WATER_Model = CSEM_Model_ID("IR_WATER", "NESDIS_IRW_WuSmith", "./"), &
IR_SNOW_Model = CSEM_Model_ID("IR_SNOW", "NPOESS_LUT", "./"), &
IR_ICE_Model = CSEM_Model_ID("IR_ICE", "NPOESS_LUT", "./"), &
VIS_LAND_Model = CSEM_Model_ID("VIS_LAND", "NPOESS_LUT", "./"), &
VIS_WATER_Model = CSEM_Model_ID("VIS_WATER", "NPOESS_LUT", "./"), &
VIS_SNOW_Model = CSEM_Model_ID("VIS_SNOW", "NPOESS_LUT", "./"), &
VIS_ICE_Model = CSEM_Model_ID("VIS_ICE", "NPOESS_LUT", "./")
CONTAINS
! Load_Model_Repo parses the Model_Registor_File
! and loads all the model options that are implemented in CSEM
!
FUNCTION Load_Model_Repo( Model_Registor_File ) RESULT ( Error_Status )
CHARACTER(LEN=*), INTENT(IN) :: Model_Registor_File
! Function result
INTEGER :: Error_Status
! local
CHARACTER(LEN=256) :: LINE
CHARACTER(LEN=256) :: Model_CLASS
CHARACTER(LEN=256) :: Model_REC(3)
CHARACTER(LEN=1) :: Model_ON
TYPE(CSEM_Model_ID) :: DEFAULT_Model
INTEGER :: IC, ID, ILS, IRS, IRE, IDF, IModel=1
INTEGER :: FUNIT
Error_Status = 0
IF(IS_INITED)THEN
RETURN
ENDIF
FUNIT=get_lun()
OPEN(FUNIT, FILE=TRIM(Model_Registor_File), STATUS='OLD', ACTION='READ', &
FORM='FORMATTED', IOSTAT=Error_Status)
IF (Error_Status /= 0) THEN
WRITE(*,*) 'Error in opening Model_Registor_File '//TRIM(Model_Registor_File)//' ....'
RETURN
END IF
DO WHILE(.TRUE.)
READ(FUNIT, '(A)', IOSTAT=Error_Status) LINE
IF (Error_Status /= 0) THEN
IF(Error_Status == -1) THEN
CLOSE(FUNIT)
IS_INITED=.TRUE.
Error_Status = 0
ELSE
PRINT*,'Error reading Model_Registor_File '//TRIM(Model_Registor_File)
PRINT*,'FUNIT: ', FUNIT
ENDIF
RETURN
ENDIF
LINE = StrCompress(LINE)
IF(LEN(TRIM(ADJUSTL(LINE))) < 1)CYCLE
IC=SCAN(LINE, "#")
IF(IC == 1) CYCLE
IF(IC>1) LINE=LINE(1:IC-1)
ILS=SCAN(LINE, "[")
IRS=SCAN(LINE, "]")
IF(ILS >=1 .AND. IRS > ILS+1) THEN
Model_CLASS = TRIM(LINE(ILS+1:IRS-1))
!WRITE(*,*)'Algorithm Class: ', TRIM(Model_CLASS)
CYCLE
ENDIF
ID = SCAN(LINE, ",")
IF(ID <=1) CYCLE
IRE = 1 ; Model_REC = ""
DO WHILE(ID >=1 .AND. IRE <= 3)
Model_REC(IRE) = TRIM(ADJUSTL(LINE(1:ID-1)))
IRE = IRE + 1
LINE = TRIM(ADJUSTL(LINE(ID+1:)))
ID = SCAN(LINE, ",")
ENDDO
IF(IRE <=3 ) Model_REC(IRE) = TRIM(ADJUSTL(LINE))
Model_LIST(IModel) = CSEM_Model_ID(TRIM(Model_CLASS), TRIM(Model_REC(1)), TRIM(Model_REC(3)))
Model_ON = TRIM(ADJUSTL(Model_REC(2))) ; IDF = 0
IF(LEN(Model_ON) < 1) Model_ON = '0'
READ(Model_ON,'(I1)') IDF
IF(IDF > 0 )THEN
DEFAULT_Model = Model_LIST(IModel)
CALL SET_Model_Option( DEFAULT_Model)
ENDIF
IModel = IModel + 1
END DO
CLOSE(FUNIT)
IS_INITED=.TRUE.
END FUNCTION Load_Model_Repo
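  ! As an illustration, a hypothetical registry fragment consistent with
  ! the parsing logic above (format inferred from this parser, not taken
  ! from any CSEM documentation): "#" starts a comment, "[CLASS]" opens a
  ! model class, and each record is "NAME, ON-flag, DATA_PATH", where a
  ! non-zero ON-flag selects that model as the class default via
  ! SET_Model_Option.
  !
  !   # CSEM model registry
  !   [MW_WATER]
  !   NESDIS_FASTEM_V6, 1, ./fix/fastem6/
  !   NESDIS_FASTEM_V5, 0, ./fix/fastem5/
  !   [IR_LAND]
  !   NPOESS_LUT, 1, ./fix/npoess/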
SUBROUTINE SET_Model_Option(Model, verbose)
TYPE(CSEM_Model_ID), INTENT(INOUT) :: Model
LOGICAL, OPTIONAL :: verbose
IF(.NOT. Valid_Check(Model)) THEN
WRITE(*,*)"Not Valid Model%CLASS: ", TRIM(Model%CLASS), &
" or Model%NAME: ", TRIM(Model%NAME)
RETURN
END IF
IF(PRESENT(verbose)) &
WRITE(*,*)"Setting Model%CLASS : ", TRIM(Model%CLASS), &
" Model%NAME: ", TRIM(Model%NAME)
SELECT CASE (TRIM(Model%CLASS))
CASE ("MW_LAND")
MW_LAND_Model = Model
CASE ("MW_SNOW")
MW_SNOW_Model = Model
CASE ("MW_WATER")
MW_WATER_Model = Model
CASE ("MW_ICE")
MW_ICE_Model = Model
CASE ("IR_LAND")
IR_LAND_Model = Model
CASE ("IR_SNOW")
IR_SNOW_Model = Model
CASE ("IR_WATER")
IR_WATER_Model = Model
CASE ("IR_ICE")
IR_ICE_Model = Model
CASE ("VIS_LAND")
VIS_LAND_Model = Model
CASE ("VIS_SNOW")
VIS_SNOW_Model = Model
CASE ("VIS_WATER")
VIS_WATER_Model = Model
CASE ("VIS_ICE")
VIS_ICE_Model = Model
CASE DEFAULT
WRITE(*,*)'Wrong AlgType NAME ',TRIM(Model%CLASS)
END SELECT
END SUBROUTINE SET_Model_Option
TYPE(CSEM_Model_ID) FUNCTION Inq_Model_Option(ModelClass)
CHARACTER(LEN=*), INTENT(IN) :: ModelClass
TYPE(CSEM_Model_ID) :: Model
SELECT CASE (TRIM(ModelClass))
CASE ("MW_LAND")
Model = MW_LAND_Model
CASE ("MW_SNOW")
Model = MW_SNOW_Model
CASE ("MW_WATER")
Model = MW_WATER_Model
CASE ("MW_ICE")
Model = MW_ICE_Model
CASE ("IR_LAND")
Model = IR_LAND_Model
CASE ("IR_SNOW")
Model = IR_SNOW_Model
CASE ("IR_WATER")
Model = IR_WATER_Model
CASE ("IR_ICE")
Model = IR_ICE_Model
CASE ("VIS_LAND")
Model = VIS_LAND_Model
CASE ("VIS_SNOW")
Model = VIS_SNOW_Model
CASE ("VIS_WATER")
Model = VIS_WATER_Model
CASE ("VIS_ICE")
Model = VIS_ICE_Model
CASE DEFAULT
WRITE(*,*)'Wrong ModelClass NAME'
Model = CSEM_Model_ID(ModelClass,"","")
END SELECT
Inq_Model_Option = Model
END FUNCTION Inq_Model_Option
LOGICAL FUNCTION Valid_Check(Model)
TYPE(CSEM_Model_ID), INTENT(INOUT) :: Model
INTEGER :: i
Valid_Check = .FALSE.
DO i = 1, N_CSEM_Models
IF(TRIM(ADJUSTL(Model%CLASS)) .EQ. TRIM(ADJUSTL(Model_LIST(i)%CLASS)) .AND. &
TRIM(ADJUSTL(Model%NAME)) .EQ. TRIM(ADJUSTL(Model_LIST(i)%NAME))) THEN
Model%DATA_PATH = Model_LIST(i)%DATA_PATH
Valid_Check = .TRUE.
RETURN
ENDIF
ENDDO
END FUNCTION Valid_Check
FUNCTION GET_DATA_PATH(ModelClass, ModelName) RESULT(ModelPath)
CHARACTER(LEN=*), INTENT(IN) :: ModelClass, ModelName
CHARACTER(LEN=256) :: ModelPath
INTEGER :: i
ModelPath='./'
DO i = 1, N_CSEM_Models
IF(TRIM(ADJUSTL(ModelClass)) .EQ. TRIM(ADJUSTL(Model_LIST(i)%CLASS)) .AND. &
TRIM(ADJUSTL(ModelName)) .EQ. TRIM(ADJUSTL(Model_LIST(i)%NAME))) THEN
ModelPath = TRIM(Model_LIST(i)%DATA_PATH)
RETURN
ENDIF
ENDDO
END FUNCTION GET_DATA_PATH
FUNCTION StrCompress( Input_String, n ) RESULT( Output_String )
! Arguments
CHARACTER(*), INTENT(IN) :: Input_String
INTEGER, OPTIONAL, INTENT(OUT) :: n
! Function result
CHARACTER(LEN(Input_String)) :: Output_String
! Local parameters
INTEGER, PARAMETER :: IACHAR_SPACE = 32
INTEGER, PARAMETER :: IACHAR_TAB = 9
! Local variables
INTEGER :: i, j
INTEGER :: IACHAR_Character
! Setup
! -----
! Initialise output string
Output_String = ' '
! Initialise output string "useful" length counter
j = 0
! Loop over string contents character by character
! ------------------------------------------------
DO i = 1, LEN(Input_String)
! Convert the current character to its position
! in the ASCII collating sequence
IACHAR_Character = IACHAR(Input_String(i:i))
! If the character is NOT a space ' ' or a tab '->|'
! copy it to the output string.
IF ( IACHAR_Character /= IACHAR_SPACE .AND. &
IACHAR_Character /= IACHAR_TAB ) THEN
j = j + 1
Output_String(j:j) = Input_String(i:i)
END IF
END DO
! Save the non-whitespace count
! -----------------------------
IF ( PRESENT(n) ) n = j
END FUNCTION StrCompress
FUNCTION Get_Lun() RESULT( Lun )
INTEGER :: Lun
LOGICAL :: Is_Open
LOGICAL :: Existence
! Initialise logical unit number
Lun = 9
! Start open loop for Lun Search
Lun_Search: DO
Lun = Lun + 1
INQUIRE( UNIT = LUN, EXIST = Existence )
IF ( .NOT. Existence ) THEN
Lun = -1
EXIT Lun_Search
END IF
INQUIRE( UNIT = Lun, OPENED = Is_Open )
IF ( .NOT. Is_Open ) EXIT Lun_Search
END DO Lun_Search
END FUNCTION Get_Lun
END MODULE CSEM_Model_Manager
|
\section{New Features Available in Stock Synthesis Version 3.30}
Stock Synthesis version 3.30 was designed specifically to provide more precise control in modeling temporal changes in biology, expected values for data, and for recruitment. In addition, a large number of new features that make substantial changes to the input formats have been introduced.
\begin{center}
{\renewcommand{\arraystretch}{1.5}%
\begin{longtable}{p{1.75cm} p{9.5cm}}
\hline
Item & Description\\
\hline
\endfirsthead
\hline
\toprule
Item & Description\\
\hline
\endhead
\hline
\endfoot
\endlastfoot
\multicolumn{1}{l}{\hyperlink{GenericFleets}{Generic Fleets}} &
Fleet specification section of the data file has substantially changed and now includes a specific fleet type: fishery fleets, bycatch fleets, and surveys, which can be specified in any order.\\
\multicolumn{1}{l}{\hyperlink{ListBased}{List-oriented inputs}} &
Older versions of SS (3.24 and earlier) required users to specify the number of items to be read; now SS3 can determine the number of lines to read through the application of a terminator line, using -9999 in the first field of the read vector. \\
\multicolumn{1}{l}{\hyperlink{SubSeas}{Internal sub-seasons}} &
SS3.24 inherently has 2 subseasons within each season (begin and middle) at which the age-length-key is calculated; now the user specifies an even number of sub-seasons to use (2 to many). \\
\multicolumn{1}{l}{\hyperlink{ObsTiming}{Observation Timing}} &
Timing of observations is now input as year and month (e.g., April 15 is 4.5). The age-length-key used for each observation is calculated to the nearest sub-season month. The old "survey\_timing" input is replaced by the month-specific inputs. Season is calculated at runtime from the input month and the input season durations. \\
\multicolumn{1}{l}{\hyperlink{ALK}{Speed}} &
SS3 is now smarter about when to re-calculate the age-length-key and trims the tails of the size-at-age distribution so that calculations avoid many inconsequential cells of the age-length matrix. Age-length-key tail compression is specified in the starter file.\\
\multicolumn{1}{l}{\hyperlink{Convert} {Converter}} &
A special version of SS3, ss\_trans.exe, will read files in SS3.24 format and write *.ss\_new files in SS3.30 format. This is the advised method for converting previous version files, but always do a side-by-side comparison of the old 3.24 model and the newly translated 3.30 model.\\
\multicolumn{1}{l}{\hyperlink{WAAparm} {Weight-at-Age}} & Implementing empirical weight-at-age is now specified separately in the control file rather than under the maturity options.\\
\multicolumn{1}{l}{\hyperlink{Priors}{Prior Type}} & Change in the prior numbering for parameters. Now, 0 indicates no prior, and 6 indicates a normal distribution prior.\\
\multicolumn{1}{l}{\hyperlink{CatchMult}{Catch multiplier}} &
Each fishing fleet's catch can now have a catchability (Q) that is a parameter in the mortality-growth parameter section.\\
\multicolumn{1}{l}{\hyperlink{CatchFormat}{Catch input}} &
Catch input now as list: year, season, fleet, amount, standard error. \\
\multicolumn{1}{l}{\hyperlink{CompTiming}{Observations}} &
Fishery composition observations can be related to season long catch-at-age, or to a month-specific timing.\\
\multicolumn{1}{l}{\hyperlink{DomeRetention}{Retention}} &
Option for dome-shaped retention function and for age-based retention. \\
\multicolumn{1}{l}{Scaling Options} &
New non-parametric selectivity types that are scaled by the raw values at particular ages, rather than the max age.\\
\multicolumn{1}{l}{\hyperlink{2DAR}{2D AR Selectivity}} &
Two-dimensional autoregressive selectivity was implemented in SS3.30.10.\\
\multicolumn{1}{l}{\hyperlink{SpecialSurvey}{Special survey types}} &
Special selectivity options (type 30 or $>$) are no longer specified within the control file. Specifying the use of one of these selectivity types is now done within the data file by selecting the survey "units". \\
\multicolumn{1}{l}{\hyperlink{Qsetup}{Link functions}} &
Q\_power is now one of several, and growing, set of link functions for catchability. \\
\multicolumn{1}{l}{\hyperlink{Qsetup}{Catchability setup}} &
Major reorganization of catchability (Q) setup, including the link specification. \\
\multicolumn{1}{l}{\hyperlink{Qsetup}{Q as a parameter}} &
Each survey now must have a Q parameter and its value still can float (as old option 5).\\
\multicolumn{1}{l}{\hyperlink{Shepherd}{Shepherd SRR}} &
The traditional 3-parameter Shepherd stock-recruitment curve is now an option.\\
% \multicolumn{1}{l}{\hyperlink{Shepherd2}{Shepherd SRR re-parameterization}} &
% A re-parameterized 3-parameter Shepherd stock-recruitment curve, distinct from the traditional parameterization, is now an option in SS v.3.30.11 and higher.\\
\multicolumn{1}{l}{\hyperlink{Ricker2}{Ricker SRR}} &
A 3-parameter Ricker stock-recruitment curve is now an option in SS3.30.11 and higher.\\
\multicolumn{1}{l}{\hyperlink{RecrTiming}{Recruitment timing}} &
Replace "birthseason" with "settlement event" that has explicit timing offset from spawning. Month of spawning and each settlement event must be specified and need not be at beginning of a season.\\
\multicolumn{1}{l}{Global MSY} &
Global MSY is based on knife-edged age selection; a calculation with single-age selection is also done. The global MSY value will automatically be included in the report file.\\
\multicolumn{1}{l}{ Mean recruitment distribution} &
In a multi-area model, users can now specify a range of years to use for the average recruitment distribution for forecasting. This feature is not yet implemented. \\
\multicolumn{1}{l}{Process error} &
Propagate random walk in mortality-growth parameters, catchability, and selectivity into the forecast. Specifying the end year for process error in the forecast period will implement this option. This option has only been partially implemented at this juncture and will be completed in later versions.\\
\multicolumn{1}{l}{\hyperlink{MGorder}{Parameter order}} &
Mortality growth parameters now have maturity, fecundity, sex ratio, and weight-length by growth pattern.\\
\multicolumn{1}{l}{\hyperlink{SexRatio}{Sex ratio}} &
Change sex ratio at birth from a constant to a morph-specific mortality growth parameter. This feature was not correctly implemented in SS3.30.11 and earlier. \\
\multicolumn{1}{l}{\hyperlink{GrowthCessation}{Growth cessation}} &
New growth option which allows for growth cessation, implemented in SS3.30.13. \\
\multicolumn{1}{l}{\hyperlink{GcompVar}{Input variance adjuster}} &
Added variance adjustment factor for generalized size comp. \\
\multicolumn{1}{l}{Deviation vectors} &
Variance of deviation vectors is now specified with 2 parameters for standard error and auto-correlation (rho), so they can be estimated.\\
\multicolumn{1}{l}{\hyperlink{Dirichlet}{Dirichlet multinomial}} &
Dirichlet multinomial now a fleet-specific option; takes one parameter per fleet. \\
\multicolumn{1}{l}{\hyperlink{paraOrder}{Parameter order}} & The prior standard deviation column for all parameter lines has been moved before the prior type column. This modification improves formatting output between integer and decimal inputs.\\
\multicolumn{1}{l}{Density dependence} &
Beginning of year summary biomass and the recruitment deviation parameters are mapped to the "environmental" matrix so that parameters can be density-dependent based on environmental factors.\\
\multicolumn{1}{l}{\hyperlink{tvOrder}{Re-order}} &
Pay attention to the new order of the time-varying adjustments to parameters (block/trend, then environmental, then deviations). \\
\multicolumn{1}{l}{\hyperlink{time-vary}{Time-varying parameters}} &
Long parameter lines for spawner-recruit relationship (SRR), catchability (Q), and tag parameters and complete re-vamp of the way that time-varying parameters are implemented for SRR and Q. Now shares same internal code as mortality-growth and selectivity parameters for time-varying capabilities.\\
\multicolumn{1}{l}{Version numbering} & The implementation of a new version control scheme has changed how executable versions are specified. The executable releases are now named SS3.3x.xx.xx, representing, in order: major features, minor features, and code fixes. \\
\hline
\end{longtable}}
\end{center}
\subsection{SS3.24 Issues Detected}
The process of updating and adding new features within SS3.30 exposed several issues with the previous version that have been corrected:
\begin{enumerate}
\item Recruitment timing in multi-season models: When spawning occurred in a late season one year and recruits occurred at the beginning of a season the next year, the recruits were starting at age-0, which was illogical. SS3.30 corrects this so that recruits are age-0 only if recruiting at or between the time of spawning and the end of the year, and recruits after January 1st start at age-1. A manual option in the control file allows users to replicate the SS3.24 protocol.
\item Lorenzen $M$ and time-varying growth interaction: There needs to be a revision to SS3.30 so that growth can be updated each season prior to calculating Lorenzen $M$.
\item Length at maximum age: SS3.24 intended to decay numbers at the maximum length over time at $M + F$, decreasing the abundance of fish implicitly older than the maximum age (agemax). However, this decay was only implemented in years for which time-varying growth was updated.
\item SS3.24 had a lower bound of 1 when adjusting annual sample size (Nsamp) downward for composition data (length and age). The variance adjustment factors specified in the control file are multiplied across all annual sample size values for each data source (fleet and composition type). The issue with the lower bound of 1 resulted in sample size adjustment not being constant across small and large sample size years, possibly resulting in smaller samples having a higher impact than may be desired (a small numerical sketch follows this list). SS3.30 has reduced this lower bound to a value of 0.001 but has retained user control over this value within the data file ("minsamplesize" column in the Composition Data Structure matrix at the top of the length and age data sections) to allow comparison with older model versions.
\end{enumerate}
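As a minimal numerical sketch of item 4 above (the annual sample sizes and
the adjustment factor are hypothetical, chosen only to show the effect of
the lower bound):

\begin{verbatim}
# hypothetical annual sample sizes and a downweighting factor of 0.1
nsamp = [2, 5, 50, 200]
factor = 0.1

adj_ss324 = [max(1.0, factor * n) for n in nsamp]    # SS3.24 lower bound
adj_ss330 = [max(0.001, factor * n) for n in nsamp]  # SS3.30 lower bound

print(adj_ss324)  # [1.0, 1.0, 5.0, 20.0] -> small years clamped upward
print(adj_ss330)  # [0.2, 0.5, 5.0, 20.0] -> adjustment uniform across years
\end{verbatim}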
|
{- Byzantine Fault Tolerant Consensus Verification in Agda, version 0.9.
Copyright (c) 2020, 2021, Oracle and/or its affiliates.
Licensed under the Universal Permissive License v 1.0 as shown at https://opensource.oracle.com/licenses/upl
-}
open import LibraBFT.Prelude
open import LibraBFT.Lemmas
open import LibraBFT.Abstract.Types
open import LibraBFT.Abstract.Types.EpochConfig
open WithAbsVote
-- This module defines the notion of one Record r "extending" another
-- Record r' (denoted r' ← r), ensuring rules about rounds and that r
-- correctly identifies r'
module LibraBFT.Abstract.Records.Extends
(UID : Set)
(_≟UID_ : (u₀ u₁ : UID) → Dec (u₀ ≡ u₁))
(NodeId : Set)
(𝓔 : EpochConfig UID NodeId)
(𝓥 : VoteEvidence UID NodeId 𝓔)
where
open import LibraBFT.Abstract.Records UID _≟UID_ NodeId 𝓔 𝓥
-- Most of the conditions in section 4.2 of the paper (see
-- LibraBFT.Abstract.RecordChain.Properties) would be checked
-- by the implementation to validate data received.
--
-- In the Abstract model, however, we are only concerned with
-- proving the properties; only round numbers and identifiers
-- for previous records are actually critical to thm S5!
data _←_ : Record → Record → Set where
I←B : {b : Block}
→ 0 < getRound b
→ bPrevQC b ≡ nothing
→ I ← B b
Q←B : {q : QC} {b : Block}
→ getRound q < getRound b
→ just (qCertBlockId q) ≡ bPrevQC b
→ Q q ← B b
B←Q : {b : Block} {q : QC}
→ getRound q ≡ getRound b
→ bId b ≡ qCertBlockId q
→ B b ← Q q
-- Equivalent records extend equivalent records (modulo injectivity
-- failure of bId).
←-≈Rec : ∀{r₀ r₁ s₀ s₁} → s₀ ← r₀ → s₁ ← r₁
→ r₀ ≈Rec r₁
→ NonInjective-≡ bId ⊎ (s₀ ≈Rec s₁)
←-≈Rec (I←B x x₁) (I←B x₂ x₃) hyp = inj₂ eq-I
←-≈Rec (I←B x x₁) (Q←B x₂ x₃) (eq-B refl)
= ⊥-elim (maybe-⊥ (sym x₃) x₁)
←-≈Rec (Q←B x x₁) (I←B x₂ x₃) (eq-B refl)
= ⊥-elim (maybe-⊥ (sym x₁) x₃)
←-≈Rec (Q←B {q₀} x refl) (Q←B {q₁} x₂ refl) (eq-B refl)
= inj₂ (eq-Q refl) -- Here is where we wouldn't be able to
-- complete the proof if we required round equality
-- in eq-Q
←-≈Rec (B←Q {b₀} x refl) (B←Q {b₁} w refl) (eq-Q refl)
with b₀ ≟Block b₁
...| no hb = inj₁ ((b₀ , b₁) , (λ x → hb x) , refl)
...| yes prf = inj₂ (eq-B prf)
←-irrelevant : Irrelevant _←_
←-irrelevant (I←B r₁ h₁) (I←B r₂ h₂)
= cong₂ I←B (≤-irrelevant r₁ r₂) (≡-irrelevant h₁ h₂)
←-irrelevant (Q←B r₁ h₁) (Q←B r₂ h₂)
= cong₂ Q←B (≤-irrelevant r₁ r₂) (≡-irrelevant h₁ h₂)
←-irrelevant (B←Q r₁ h₁) (B←Q r₂ h₂)
= cong₂ B←Q (≡-irrelevant r₁ r₂) (≡-irrelevant h₁ h₂)
←-round-≤ : ∀{r₀ r₁} → r₀ ← r₁ → round r₀ ≤ round r₁
←-round-≤ (I←B r h) = z≤n
←-round-≤ (Q←B r h) = <⇒≤ r
←-round-≤ (B←Q refl h) = ≤-refl
←←-round-< : ∀{r r₀ r₁} → r ← r₀ → r₀ ← r₁
→ round r < round r₁
←←-round-< (I←B r h) (B←Q refl _) = r
←←-round-< (Q←B r h) rr = ≤-trans r (←-round-≤ rr)
←←-round-< (B←Q refl h) (Q←B prf _) = prf
-- LemmaS1, clause 2: injectivity of _←_
lemmaS1-2 : ∀{r₀ r₁ r₂ r₂'}
→ r₂ ≈Rec r₂'
→ r₀ ← r₂ → r₁ ← r₂'
→ uid r₀ ≡ uid r₁
lemmaS1-2 {i₀} {i₁} {b} hyp (I←B _ i₀←b) (I←B _ i₁←b) = refl
lemmaS1-2 {q} {i} {b} (eq-B refl) (Q←B _ ()) (I←B _ refl)
lemmaS1-2 {i} {q} {b} (eq-B refl) (I←B _ refl) (Q←B _ ())
lemmaS1-2 {q₀} {q₁} {b} (eq-B refl) (Q←B _ refl) (Q←B _ refl) = refl
lemmaS1-2 {b₀} {b₁} {q} (eq-Q refl) (B←Q _ refl) (B←Q _ refl) = refl
-- A better name for lemmaS1-2
←-inj : ∀{r₀ r₁ r₂}
→ r₀ ← r₂ → r₁ ← r₂
→ uid r₀ ≡ uid r₁
←-inj = lemmaS1-2 ≈Rec-refl
|
"""
Return the 6 boundary surfaces of a B-spline solid.
"""
function _bsplinesurfaces(M::AbstractBSplineManifold)
a = controlpoints(M)
P1, P2, P3 = bsplinespaces(M)
I1, I2, I3 = bsplineunity(P1), bsplineunity(P2), bsplineunity(P3)
n1, n2, n3 = dim(P1), dim(P2), dim(P3)
t_⚀ = minimum(I1)
t_⚁ = minimum(I2)
t_⚂ = minimum(I3)
t_⚃ = maximum(I1)
t_⚄ = maximum(I2)
t_⚅ = maximum(I3)
B_⚀ = [bsplinebasis(i,P1,t_⚀) for i in 1:n1]
B_⚁ = [bsplinebasis(i,P2,t_⚁) for i in 1:n2]
B_⚂ = [bsplinebasis(i,P3,t_⚂) for i in 1:n3]
B_⚃ = [bsplinebasis(i,P1,t_⚃) for i in 1:n1]
B_⚄ = [bsplinebasis(i,P2,t_⚄) for i in 1:n2]
B_⚅ = [bsplinebasis(i,P3,t_⚅) for i in 1:n3]
a_⚀ = sum(a[i1,:,:,:]*B_⚀[i1] for i1 in 1:n1)
a_⚁ = sum(a[:,i2,:,:]*B_⚁[i2] for i2 in 1:n2)
a_⚂ = sum(a[:,:,i3,:]*B_⚂[i3] for i3 in 1:n3)
a_⚃ = sum(a[i1,:,:,:]*B_⚃[i1] for i1 in 1:n1)
a_⚄ = sum(a[:,i2,:,:]*B_⚄[i2] for i2 in 1:n2)
a_⚅ = sum(a[:,:,i3,:]*B_⚅[i3] for i3 in 1:n3)
M_⚀ = BSplineSurface([P2,P3], a_⚀)
M_⚁ = BSplineSurface([P1,P3], a_⚁)
M_⚂ = BSplineSurface([P1,P2], a_⚂)
M_⚃ = BSplineSurface([P2,P3], a_⚃)
M_⚄ = BSplineSurface([P1,P3], a_⚄)
M_⚅ = BSplineSurface([P1,P2], a_⚅)
return (M_⚀, M_⚁, M_⚂, M_⚃, M_⚄, M_⚅)
end
|
\chapter{Discussion}
\label{chapter:discussion}
In this chapter, we interpret the results described from the previous chapter
and discuss their implications in order to answer our research questions.
Similar to the previous chapter, we break this chapter into three sections with
each section addressing one research question. Aside from answering the research
questions, we also gather interesting observations and propose hypotheses which could
motivate future research.
\section{RQ1: Evaluating performance of SoPa++}
To answer our first research question on whether \ac{spp} can deliver competitive
performance on the \ac{fmtod} English language intent classification task, we compare
the mean performance metrics of our \ac{spp} models against those from other
recent studies as mentioned in Section \ref{section:fmtod_performance}.
Referring to our accuracy ranges from Table \ref{tab:results_evaluation}, we
observe that the \ac{spp} models show a mean accuracy range of 97.6-98.3$\%$ for the
best performing models given their respective sizes. This falls into the general
accuracy range of 96.6-99.5$\%$ observed in other studies as per Table
\ref{tab:fmtod_examples}; albeit in the lower end of this spectrum. We can
therefore conclude that \ac{spp} offers competitive performance on the \ac{fmtod}'s
English language intent classification task.
While \ac{spp}'s performance range falling in the lower end of the aforementioned
spectrum can be seen as disadvantageous, it is worth noting that the models
\ac{spp} is being compared against are vastly different. For one, the \ac{bert}
models shown in Table \ref{tab:fmtod_results} had parameter counts ranging from
$\sim$110-340 million parameters \citep{devlin-etal-2019-bert}; which are
$\sim$100-300 times larger than our \ac{spp} models. In addition, models from
\citet{zhang-etal-2020-intent} showed an exceptionally high accuracy of 99.5$\%$
mainly because of pre-training on the external WikiHow data set for general
intent classification tasks. Finally, many of the models described in Table
\ref{tab:fmtod_results} were jointly trained on both \ac{fmtod} intent classification
and slot filling tasks; which could have contributed to certain joint-task
performance benefits. These significant differences between \ac{spp} and the
aforementioned models should be taken into account when comparing \ac{spp}'s
performance with other studies.
\section{RQ2: Evaluating explanations by simplification}
To answer our second research question on whether \ac{spp} provides effective
explanations by simplification, we summarize the minimum differences in \ac{spp}
and \ac{re} proxy model pair performance metrics, as well as the minimum
distance metrics observed as per Table \ref{tab:explain_evaluate_performance}.
Regarding performance metrics, we observe the lowest accuracy score differences
to be 0.7$\%$ for small-sized models, 0.2$\%$ for medium-sized models and 0.1$\%$ for
large-sized models. Regarding distance metrics, we observe the lowest
$\overline{\delta_{\sigma}}$ and $\overline{\delta_{b}}$ to be 10.0$\%$ and
12.4$\%$ for small-sized models, 5.8 $\%$ and 13.5$\%$ for medium-sized models
and 4.3$\%$ and 14.2$\%$ for large-sized models respectively. These minimum
performance metric differences and distance metrics are typically observed with
larger $\tau$-thresholds ranging from 0.50-1.00. Unlike the case for RQ1, we do
not have an objective range of competitive accuracy differences or distance
metrics to compare against with other studies. As a result, our interpretation
of the effectiveness of the explanations by simplification technique will be
subjective. That being said, we still believe that accuracy differences as low
as 0.1$\%$ and softmax distance norms as small as 4.3$\%$ provide significant
evidence towards a high degree of resemblance between \ac{spp} and \ac{re} proxy
models. In summary, we find that the explanations by simplification post-hoc
explainability technique in \ac{spp} is effective, in particular for medium and
large-sized models with $\tau$-thresholds ranging from 0.50-1.00.
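As a minimal sketch of how such a mean distance metric can be computed,
assuming for illustration that $\overline{\delta_{\sigma}}$ is the mean
Euclidean norm of the difference between paired softmax outputs (an
illustrative definition with names of our own choosing, not necessarily the
exact formulation used elsewhere in this thesis):

\begin{verbatim}
import numpy as np

def mean_softmax_distance(p_spp, p_re):
    # p_spp, p_re: (n_samples, n_classes) softmax outputs of the
    # SoPa++ model and its RE proxy on the same evaluation samples
    return np.linalg.norm(p_spp - p_re, axis=1).mean()

# toy check with two near-identical distributions over three classes
p1 = np.array([[0.90, 0.05, 0.05]])
p2 = np.array([[0.88, 0.06, 0.06]])
print(mean_softmax_distance(p1, p2))  # ~0.024, i.e. ~2.4%
\end{verbatim}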
In the interest of objectivity, we would like to provide some perspectives in
which the explanations by simplification technique is not effective. For one,
explanations by simplification as a post-hoc explainability technique as per
Definition \ref{def:explain_simplify} explicitly requires the simplified proxy
model to be more transparent than its antecedent counterpart. While we made a
case for the transparency of the \ac{re} proxy model in Section
\ref{section:re_transparency}, one could also provide arguments for the \ac{re} proxy
model being non-transparent, especially when the \ac{re} lookup layer contains too
many regular expressions for a human to comprehend. This could indeed be the
case for medium and large-sized \ac{re} proxy models which have \ac{re} lookup layers
containing tens of thousands of "activating" regular expressions. In such cases,
it would not be possible for a human to understand all of the regular
expressions, which could ultimately render the \ac{re} proxy model as yet another
black-box model; the explanations by simplification post-hoc explainability
technique would then likely be ineffective.
With these arguments set aside, we now proceed to discuss some interesting
observations in regards to our results for RQ2. Firstly, we can observe the
performance-interpretability tradeoff from Section
\ref{section:performance_interpretability_tradeoff} in Table
\ref{tab:explain_evaluate_performance} with the more transparent \ac{re} proxy
models almost always performing worse than their black-box \ac{spp}
counterparts. Next, as per Table \ref{tab:explain_evaluate_performance}; we
observe that \ac{re} proxy models tend to perform better as the $\tau$-threshold
increases. We hypothesize that this occurs mainly because larger
$\tau$-threshold forces the memorization of higher scoring paths which
ultimately reduces the chances of the \ac{re} proxy model memorizing superfluous
or unimportant regular expressions in the \ac{re} lookup layer. Finally as per
Figure \ref{fig:explain_evaluate}, we observe that the $\overline{\delta_{b}}$
metric continues to decrease as the $\tau$-threshold increases, while the
$\overline{\delta_{\sigma}}$ metric plateaus beforehand and then slightly
increases. This could be seen as counter-intuitive, since more similar
\ac{tauste} binary vectors should imply more similar softmax distributions. It
would be interesting to further explore these trends with even higher
$\tau$-thresholds.
\section{RQ3: Interesting and relevant explanations}
\label{section:discussion_regex}
To answer our third research question on interesting and relevant explanations
derived from \ac{spp} on the \ac{fmtod} data set, we refer back to our results
and attempt to interpret them. Since this research question is
more open-ended than the previous two, our approach to answer it will also be
opinionated and subjective. One interesting observation is in the relative
linear weights applied to the \ac{tauste} neurons in Figure \ref{fig:neuron_weights}.
We can observe that the weights are generally continuously distributed across all
neurons, with some exceptions such as neurons 19, 25 and 17, where the weights
are more skewed towards the alarm, reminder and weather domains respectively.
This implies that \ac{spp} and \ac{re} proxy models still distribute feature
importance across \ac{tauste} neurons in a highly connective sense, which also implies that
each \ac{tauste} neuron has a non-negligible impact on all classification decisions.
With the identification of the salient \ac{tauste} neurons 19, 25 and 17 specializing
in the alarm, reminder and weather domains respectively, we draw out ten regular
expression samples from the \ac{re} lookup layer corresponding to each of these neurons
as reflected in Figures \ref{fig:regex_example_neuron_alarm},
\ref{fig:regex_example_neuron_reminder} and
\ref{fig:regex_example_neuron_weather} respectively. To extract interesting and
relevant explanations, we attempt to interpret these sampled regular
expressions. Firstly, we can observe a segmentation of lexical information
between the regular expressions corresponding to these neurons. For example,
many of the regular expressions corresponding to neuron 19 use words related to
alarms such as \textit{"snooze"} and \textit{"clock"}; while those corresponding to neuron 17 use
words related to weather such as \textit{"fahrenheit"} and \textit{"forecast"}. Next, we can
observe transition branching in the sampled regular expressions across
all three \ac{tauste} neurons. This branching phenomenon is interesting because words
in these branches can sometimes have very similar lexical semantics. For
example, in the third regular expression from the bottom in Figure
\ref{fig:regex_example_neuron_weather}, we observe branching with three
different digital tokens \textit{"44"}, \textit{"70"} and \textit{"67"} which
represent the temperatures encountered in the training data. Similarly, the
third regular expression from the top in Figure
\ref{fig:regex_example_neuron_weather} shows branching with the tokens
\textit{"atlanta"}, \textit{"omaha"} and \textit{"hawaii"}, which all represent
locations in the USA encountered in the training data. Finally, we can observe
interesting positional, or possibly syntactic, features in the regular
expressions in Figures \ref{fig:regex_example_neuron_alarm} and
\ref{fig:regex_example_neuron_reminder}, which all have an $\omega$-transition in
the same position.
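To make this branching behavior concrete, the following short Python sketch
uses hypothetical stand-in patterns (the actual expressions come from the
trained model's \ac{re} lookup layer, not from this sketch):
\begin{verbatim}
import re

# Hypothetical alternations mirroring the sampled branches.
temperature_re = re.compile(r"\b(44|70|67)\b")
location_re = re.compile(r"\b(atlanta|omaha|hawaii)\b")

print(temperature_re.findall("set it to 70 degrees"))  # ['70']
print(location_re.findall("weather in omaha today"))   # ['omaha']
\end{verbatim}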
Finally, the sampled regular expressions allow us to identify various inductive
biases incorporated by \ac{spp} and its \ac{re} proxy models from the training data.
Going back to the digital and location-based tokens mentioned in the previous
paragraph, we can observe how the training data induces USA-centric biases
pertaining to locations such as \textit{"atlanta"} and hard-coded Fahrenheit
temperatures such as \textit{"70"}. As a result, we can extrapolate that the
\ac{spp} and \ac{re} proxy models will likely only perform well on unseen data based in
USA-centric domains since they likely would not have encountered tokens from
non-USA-centric domains. An advantageous aspect of the \ac{re} proxy model is that
these inductive biases can be easily identified and also corrected. In the case
of correcting USA-centric locations, we could manually add more non-USA-based
locations in the branching transition of the third regular expression from the
top in Figure \ref{fig:regex_example_neuron_weather}. Another possible inductive
bias could be in the third regular expression from the top in Figure
\ref{fig:regex_example_neuron_alarm}, where the first transition only allows for
the pronoun \textit{"i"}. This inductive bias could be corrected in the \ac{re} proxy
model by augmenting it with all other pronouns available in the English
language. Finally, we can propagate these manual corrections in the \ac{re} proxy
model back to \ac{spp} by copying the word embeddings and transition matrix
diagonals of the biased word to now represent those of the manually added new
words.
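As a minimal illustration of such a manual correction (the sentence template
and pattern below are hypothetical, not drawn from the actual \ac{re} lookup
layer), widening a location branch amounts to extending the alternation:
\begin{verbatim}
import re

pattern = r"weather in (atlanta|omaha|hawaii)"
corrected = pattern.replace("atlanta|omaha|hawaii",
                            "atlanta|omaha|hawaii|london|nairobi")

# The corrected pattern now also accepts a non-USA location.
assert re.search(corrected, "weather in london") is not None
\end{verbatim}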
% LocalWords: pre WikiHow explainability softmax interpretability tradeoff
% LocalWords: fahrenheit omaha hawaii centric atlanta embeddings
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "main"
%%% End:
|
[STATEMENT]
lemma span_eq_iff[simp]: "span s = s \<longleftrightarrow> subspace s"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (span s = s) = subspace s
[PROOF STEP]
unfolding span_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (subspace hull s = s) = subspace s
[PROOF STEP]
by (rule hull_eq) (rule subspace_Inter)
|
[STATEMENT]
lemma bd_G_Cinfinite: "Cinfinite bd_G"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. Cinfinite bd_G
[PROOF STEP]
using bd_G_card_order bd_G_cinfinite card_order_on_Card_order
[PROOF STATE]
proof (prove)
using this:
card_order bd_G
cinfinite bd_G
card_order_on ?A ?r \<Longrightarrow> ?A = Field ?r \<and> Card_order ?r
goal (1 subgoal):
1. Cinfinite bd_G
[PROOF STEP]
by blast
|
Interview with Dr. Zingaro:
What Chiropractic College did you attend and why did you choose that school?
When I decided to attend chiropractic school, I had heard that some schools did not have a strong academic program. I visited four colleges and chose to attend Northwestern College of Chiropractic. I chose Northwestern because it had a very strong academic program, especially in the area of science. The clinical science program was taught by Ph.D.s in their field. I also liked the fact that many Northwestern graduates passed the National Board Exams without needing to take review courses, unlike graduates of the other colleges I visited.
Why did you become a chiropractor?
At first, I thought I wanted to be a medical doctor. I started college in a six-year medical program. In this program, a student completed the premed requirements in the first two years of school and then went to medical school. After completing my premed requirements, I became disillusioned with conventional medicine and decided to pursue the body-mind connection and studied psychology. I graduated with a Master's Degree in Counseling. Although I worked as a counselor for six years, my interest in medicine was still strong and I was continually reading medical literature. At this time, I became very good friends with an elderly medical doctor. I saw that even though this doctor wanted to treat patients with nutrition and other more natural methods, she mostly wrote prescriptions for her patients. I was very aware that prescription medicine had side effects, and for this reason I wanted to choose a way to treat patients that did not include prescription medication. I then became interested in manual medicine and decided to become a chiropractor.
What distinguishes you from other chiropractors?
Well, I think Chiropractic, like most work you do with your hands, is an art as well as a science. And this art reflects the soul of the practitioner. So, I would say I am unique (as is everyone else), because of my background as a counselor and the blend of techniques I use. I am also a massage therapist and I have studied many soft tissue and chiropractic techniques.
What techniques do you use?
I use a blend of massage and chiropractic techniques. The type of chiropractic techniques I use is called diversified technique. I have also studied cranial sacral therapy, orthobionomy, somatic therapy, Swedish massage and many other soft tissue techniques.
How do you treat a condition?
I blend effective and gentle chiropractic techniques with massage techniques to address a problem. I examine and treat each person as a unique individual. Even individuals with the same diagnosis may receive different treatments depending on that person's particular needs. Each patient receives my complete attention for at least 45 minutes.
What conditions do you treat?
I treat any musculoskeletal condition. Some of the conditions I treat are: headaches, neck pain, back pain, muscle sprains, carpal tunnel syndrome, rotator cuff injuries, auto injuries, arthritis pain, any joint pain, chronic pain, stress conditions, fibromyalgia, and chronic fatigue syndrome.
How long have you been in Davis and why did you choose this area?
I have worked in the Davis area for almost 20 years. I love this area and now have two young children who attend school here. I think it's a small town with a lot of good opportunities. I have a saying: "If you can't be healthy in Davis, you can't be healthy." With the emphasis on bicycles and swim programs, to name a few, it's a very health-conscious community.
Past reviews can be found under Dr. Zingaro's old location, the Davis Holistic Health Center
From therapy to pure relaxation, Davis offers a variety of Massage Services
Visit our Chiropractors page for a listing of other Doctors of Chiropractic in Davis.
2010-01-16 15:30:12: I highly recommend Dr. Zingaro. She was my anatomy/physiology instructor when I attended Integrative Therapy Massage School in 1992. She is the most gentle chiropractor. I have been to about 6 of them on and off, as needed, over about a 25-year period, starting way back when my back was killing me while I was working two waitress jobs at a time. Now, she is treating me after a car accident with massage, ultrasound, and adjustments. She is an extremely caring person, I really like her, and she is helping me a lot.
I am getting better, and am very satisfied. Rating AAA+++++++++++++++++++++++
Maureen Turner
[email protected] Users/deals911
2010-06-05 07:51:58: Dr. Zingaro is wonderful! She is very sweet and sensitive to your needs. I have had back pain for years and she is the first person to help me relieve it! I enjoy her use of massage and gentle chiropractic techniques. She is a great listener and has a good intuition for what you need. I feel blessed that she is in Davis and highly recommend her! Users/sophieee
2010-09-30 22:30:30: I had a painful knot in my neck for about a month and my coworker recommended that I see Dr. Zingaro. I had never been to a chiropractor before, but was tired of the pain and thought I would give it a try. She was very kind and did gentle manipulation. I went to bed that night and the next morning my pain was completely gone. I felt like a car that went to the mechanic and got fixed. I feel very fortunate to have her in our town. I cannot recommend her enough. Users/annemeck
2011-01-13 12:27:56: Dr. Zingaro is excellent. Users/JimStewart
2011-12-03 23:15:37: I highly recommend Dr. Zingaro. As a massage therapist, I am very careful about chiropractic referrals to my clients. There is a lot of variation across chiropractic styles and I think we've all heard negative stories about chiropractors that are too forceful or not in tune with the client's body. Dr. Zingaro has the benefit of being a massage therapist as well as a chiropractor, so she understands the importance of loosening tight muscles and making sure they are ready to receive bony adjustments. She is also extremely kind, forthright, and puts you at ease right away. I am happy to have her to refer to! And her office is BEAUTIFUL. Give her a call! Users/LilyS
2013-09-24 21:45:37: I had persistent neck pain from the base of my skull all the way to my fingertips; it made it hard to turn my head or sleep, and sometimes I'd get a zap feeling down the neck. It also felt like one of my vertebrae was a little further to the right than the others. I saw my general practice doctor for it, and she took X-rays, then told me that it was probably nerve pain and there wasn't much she could do, as the X-rays were normal. She suggested looking for a chiropractor.
I found Dr. Zingaro on my own and went in for an appointment. She immediately asked me if I'd seen a GP doctor and if they'd done any tests, and went through my medical history with me before starting, which I really liked. She got the overly tense muscles which were pulling on my vertebrae to relax, and then put the vertebrae back into place, and I felt better IMMEDIATELY. Not 100% cessation of pain, but significant relief which improved with time. She gave me an at-home care regimen and some advice on posture to prevent the problem from returning. I'd had the problem for years and never thought too much of it until it worsened, and after my appointment it was better than it had been since high school.
Overall, she was gentle, thorough, very considerate, and extremely skilled. I'm really happy with the results. Dr. Zingaro herself is extremely professional and compassionate! Users/SarahBon
|
// This file is part of slideio project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://slideio.com/license.html.
#include "slideio/drivers/afi/afiimagedriver.hpp"
#include "slideio/drivers/afi/afislide.hpp"
#include <boost/filesystem.hpp>
slideio::AFIImageDriver::AFIImageDriver()
{
}
slideio::AFIImageDriver::~AFIImageDriver()
{
}
std::string slideio::AFIImageDriver::getID() const
{
return std::string("AFI");
}
std::shared_ptr<slideio::CVSlide> slideio::AFIImageDriver::openFile(const std::string& filePath)
{
return AFISlide::openFile(filePath);
}
std::string slideio::AFIImageDriver::getFileSpecs() const
{
static std::string pattern("*.afi");
return pattern;
}
|
theory Refine_ScaleR2
imports
Refine_Unions
Refine_Interval
Refine_String
begin
definition "scaleR2 l u X = (\<lambda>(r, (x, y)). (x, r *\<^sub>R y)) ` (ereal -` {l .. u} \<times> X)"
lemma scaleR2_1_1[simp]: "scaleR2 1 1 = (\<lambda>x::(_\<times>'x::real_vector)set. x)"
by (force simp: scaleR2_def[abs_def] image_def vimage_def)
consts i_scaleR2::"interface\<Rightarrow>interface"
abbreviation "ereal_rel \<equiv> (Id::ereal rel)"
definition scaleR2_rel where scaleR2_rel_internal:
"scaleR2_rel A = ((ereal_rel \<times>\<^sub>r ereal_rel) \<times>\<^sub>r A) O
br (\<lambda>((l, u), X). scaleR2 l u X) (\<lambda>((l, u), _). ereal -` {l..u} \<noteq> {})"
definition [refine_vcg_def]: "scaleR2_rep X = SPEC (\<lambda>((l, u), Y). ereal -` {l..u} \<noteq> {} \<and> X = scaleR2 l u Y)"
definition [refine_vcg_def]: "scaleRe_ivl_spec l u X = SPEC (\<lambda>Y. Y = scaleR2 l u X)"
definition [simp]: "op_image_fst_colle = (`) fst"
definition [simp]: "op_image_fste = (`) fst"
definition "scaleR2_rep_coll X = do {
XS \<leftarrow> sets_of_coll X;
FORWEAK XS (RETURN ((0, 0), op_empty_coll)) (\<lambda>X. do {
((l, u), Y) \<leftarrow> scaleR2_rep X;
RETURN ((l, u), mk_coll Y)
}) (\<lambda>((l, u), Y) ((l', u'), Y'). RETURN ((inf l' l, sup u' u), Y' \<union> Y))
}"
abbreviation "elvivl_rel \<equiv> \<langle>lvivl_rel\<rangle>scaleR2_rel"
definition [simp]: "op_times_UNIV_coll X = X \<times> UNIV"
definition [simp]: "op_inter_fst X Y = X \<inter> Y \<times> UNIV"
definition "scaleRe_ivl_coll_spec l u X = do {
XS \<leftarrow> sets_of_coll X;
FORWEAK XS (RETURN op_empty_coll)
(\<lambda>X. do {I \<leftarrow> scaleRe_ivl_spec l u X; RETURN (mk_coll I)})
(\<lambda>X X'. RETURN (X' \<union> X))
}"
definition "op_inter_fst_ivl_scaleR2 X Y = do {
((l, u), X) \<leftarrow> scaleR2_rep X;
(i, s) \<leftarrow> ivl_rep (op_inter_fst X Y);
let R = op_inter_fst (op_atLeastAtMost_ivl i s) Y;
scaleRe_ivl_coll_spec l u (filter_empty_ivls (mk_coll R))
}"
definition "op_inter_fst_ivl_coll_scaleR2 X Y = do {
Xs \<leftarrow> sets_of_coll X;
FORWEAK Xs (RETURN op_empty_coll) (\<lambda>X. op_inter_fst_ivl_scaleR2 X Y) (\<lambda>X X'. RETURN (X' \<union> X))
}"
definition [refine_vcg_def]: "op_image_fst_ivl X = SPEC (\<lambda>R. R = fst ` X)"
definition "op_image_fst_ivl_coll X = do {
Xs \<leftarrow> sets_of_coll X;
FORWEAK Xs (RETURN op_empty_coll) (\<lambda>X. do {i \<leftarrow> op_image_fst_ivl X; RETURN (mk_coll i)}) (\<lambda>X' X. RETURN (X' \<union> X))
}"
lemma scaleR2_rel_def:
"\<langle>A\<rangle>scaleR2_rel = ((ereal_rel \<times>\<^sub>r ereal_rel) \<times>\<^sub>r A) O
br (\<lambda>((l, u), X). scaleR2 l u X) (\<lambda>((l, u), _). ereal -` {l..u} \<noteq> {})"
by (auto simp: relAPP_def scaleR2_rel_internal)
lemmas [autoref_rel_intf] = REL_INTFI[of scaleR2_rel i_scaleR2]
lemma fst_scaleR2_image[simp]: "ad \<le> ereal r \<Longrightarrow> ereal r \<le> bd \<Longrightarrow> fst ` scaleR2 ad bd be = fst ` be"
by (cases ad; cases bd; force simp: scaleR2_def image_image split_beta' vimage_def)
lemma scaleR2_rel_br: "\<langle>br a I\<rangle>scaleR2_rel =
br (\<lambda>((x, xa), y). scaleR2 x xa (a y)) (\<lambda>((l, u), y). I y \<and> ereal -` {l..u} \<noteq> {})"
unfolding scaleR2_rel_def
unfolding Id_br br_rel_prod br_chain o_def
by (auto simp: split_beta')
context includes autoref_syntax begin
lemma [autoref_rules]:
"(sup, sup) \<in> ereal_rel \<rightarrow> ereal_rel \<rightarrow> ereal_rel"
"(inf, inf) \<in> ereal_rel \<rightarrow> ereal_rel \<rightarrow> ereal_rel"
by auto
lemma [autoref_rules]:
"(ereal, ereal) \<in> rnv_rel \<rightarrow> ereal_rel"
"((*), (*)) \<in> ereal_rel \<rightarrow> ereal_rel \<rightarrow> ereal_rel"
by auto
lemma [autoref_rules]: "(\<infinity>, \<infinity>) \<in> ereal_rel"
by auto
lemma lift_scaleR2:
"(\<lambda>(lu, x). (lu, fi x), f) \<in> \<langle>A\<rangle>scaleR2_rel \<rightarrow> \<langle>B\<rangle>scaleR2_rel"
if "(fi, f) \<in> A \<rightarrow> B"
"\<And>l u x. x \<in> Range A \<Longrightarrow> ereal -` {l..u} \<noteq> {} \<Longrightarrow> scaleR2 l u (f x) = f (scaleR2 l u x)"
using that
apply (auto simp: scaleR2_rel_def )
apply (rule relcompI)
apply (rule prod_relI)
apply (rule IdI)
apply (drule fun_relD, assumption, assumption)
apply (auto simp: br_def vimage_def)
done
lemma appr1e_rep_impl[autoref_rules]:
"(\<lambda>x. RETURN x, scaleR2_rep) \<in> \<langle>A\<rangle>scaleR2_rel \<rightarrow> \<langle>(ereal_rel \<times>\<^sub>r ereal_rel) \<times>\<^sub>r A\<rangle>nres_rel"
by (force simp: nres_rel_def scaleR2_rep_def scaleR2_rel_def image_image split_beta'
dest!: brD intro!: RETURN_SPEC_refine)
lemma [autoref_op_pat]: "fst ` X \<equiv> (OP op_image_fste) $ X"
by simp
lemma scaleRe_ivl_impl[autoref_rules]:
"(\<lambda>l u X. if l < u \<or> l > - \<infinity> \<and> l \<le> u \<and> u < \<infinity> then RETURN ((l, u), X) else SUCCEED,
scaleRe_ivl_spec) \<in> ereal_rel \<rightarrow> ereal_rel \<rightarrow> A \<rightarrow> \<langle>\<langle>A\<rangle>scaleR2_rel\<rangle>nres_rel"
apply (auto simp: scaleRe_ivl_spec_def scaleR2_rep_def scaleR2_rel_def nres_rel_def
RETURN_RES_refine_iff
intro!: RETURN_SPEC_refine )
apply (rule relcompI)
apply (rule prod_relI)
apply (rule IdI)
apply assumption defer
apply (rule relcompI)
apply (rule prod_relI)
apply (rule IdI)
apply assumption defer
apply (auto intro!: brI)
subgoal for a b c d
apply (cases a; cases b)
by (auto simp: vimage_def)
subgoal for a b c d
apply (cases a; cases b)
by (auto simp: vimage_def)
done
lemma is_empty_scaleR2_rel[autoref_rules]:
assumes "GEN_OP ie is_empty (A \<rightarrow> bool_rel)"
shows "(\<lambda>(_, b). ie b, is_empty) \<in> (\<langle>A\<rangle>scaleR2_rel \<rightarrow> bool_rel)"
using assms[THEN GEN_OP_D, param_fo]
by (auto simp: scaleR2_rep_def scaleR2_rel_def scaleR2_def vimage_def
dest!: brD)
lemma sv_appr1e_rel[relator_props]: "single_valued A \<Longrightarrow> single_valued (\<langle>A\<rangle>scaleR2_rel)"
by (auto simp: scaleR2_rep_def scaleR2_rel_def intro!: relator_props)
schematic_goal scaleR2_rep_coll_impl:
assumes [THEN PREFER_sv_D, relator_props]: "PREFER single_valued A"
assumes [autoref_rules]: "(ai, a) \<in> clw_rel (\<langle>A\<rangle>scaleR2_rel)"
shows "(nres_of ?r, scaleR2_rep_coll a) \<in> \<langle>(ereal_rel \<times>\<^sub>r ereal_rel) \<times>\<^sub>r clw_rel A\<rangle>nres_rel"
unfolding scaleR2_rep_coll_def
including art
by autoref_monadic
concrete_definition scaleR2_rep_coll_impl for ai uses scaleR2_rep_coll_impl
lemmas scaleR2_rep_coll_impl_refine[autoref_rules] =
scaleR2_rep_coll_impl.refine[autoref_higher_order_rule (1)]
lemma fst_imageIcc:
"fst ` {a::'a::ordered_euclidean_space\<times>'c::ordered_euclidean_space .. b} =
(if a \<le> b then {fst a .. fst b} else {})"
by (auto intro!: simp: less_eq_prod_def)
lemma
interval_inter_times_UNIVI:
assumes "{fst a .. fst b} \<inter> {c .. d} = {fst e .. fst f}"
assumes "{snd a .. snd b} = {snd e .. snd f}"
shows "{a::('a::ordered_euclidean_space \<times> 'c::ordered_euclidean_space) .. b} \<inter>
({c .. d} \<times> UNIV) = {e .. f}"
using assms
by (cases a; cases b; cases e; cases f) (auto simp: subset_iff set_eq_iff)
lemma op_inter_fst_impl:
assumes "DIM_precond TYPE('a::executable_euclidean_space) D"
assumes "GEN_OP intr (op_inter_ivl::('a) set\<Rightarrow>_) (lvivl_rel \<rightarrow> lvivl_rel \<rightarrow> lvivl_rel)"
assumes "GEN_OP le ((\<le>) ::'a\<times>('b::executable_euclidean_space) \<Rightarrow>_) (lv_rel \<rightarrow> lv_rel \<rightarrow> bool_rel)"
shows "(\<lambda>x y.
if le (fst x) (snd x) then
case (intr (pairself (take D) x) y, pairself (drop D) x) of
((i, s), j, t) \<Rightarrow> (i @ j, s @ t)
else x,
op_inter_fst::('a \<times> 'b) set \<Rightarrow> 'a set \<Rightarrow> ('a \<times> 'b) set) \<in> lvivl_rel \<rightarrow> lvivl_rel \<rightarrow> lvivl_rel"
proof (auto simp: split: prod.splits, goal_cases)
case (1 a b c d e f g h)
from 1 have lens: "length a = DIM('a) + DIM('b)" "length b = DIM('a) + DIM('b)"
by (auto simp: lvivl_rel_br br_def)
have f_eq: "f = {eucl_of_list d .. eucl_of_list e}"
and c_eq: "c = {eucl_of_list a .. eucl_of_list b}"
using 1
by (auto simp: lvivl_rel_br br_def set_of_ivl_def)
from 1 assms(1,2) assms(3)[THEN GEN_OP_D, param_fo, OF lv_relI lv_relI, of a b]
have "((take D a, take D b), fst ` c) \<in> \<langle>lv_rel\<rangle>ivl_rel"
apply (auto simp: lv_rel_def ivl_rel_def dest!: brD)
apply (rule relcompI)
apply (rule prod_relI)
apply (rule brI)
apply (rule refl)
apply (simp;fail)
apply (rule brI)
apply (rule refl)
apply (simp;fail)
apply (rule brI)
apply (simp add: set_of_ivl_def fst_imageIcc)
by (auto simp: eucl_of_list_prod)
from assms(1) assms(2)[THEN GEN_OP_D, param_fo, OF this 1(2)]
show ?case
unfolding 1
apply (auto simp: lv_rel_def ivl_rel_def dest!: brD)
apply (rule relcompI)
apply (rule prod_relI)
apply (rule brI)
apply (rule refl)
apply (simp add: lens;fail)
apply (rule brI)
apply (rule refl)
apply (simp add: lens;fail)
apply (rule brI)
apply (simp add: set_of_ivl_def fst_imageIcc)
defer apply (simp; fail)
apply (cases "(eucl_of_list (take DIM('a) a)::'a) \<le> eucl_of_list (take DIM('a) b) \<and>
(eucl_of_list (drop DIM('a) a)::'b) \<le> eucl_of_list (drop DIM('a) b)")
subgoal apply (simp split: if_splits add: c_eq f_eq)
apply (rule interval_inter_times_UNIVI)
by (auto simp: eucl_of_list_prod fst_imageIcc split: if_splits)
subgoal
by (auto simp: eucl_of_list_prod fst_imageIcc c_eq f_eq)
done
next
case (2 a b c d e f g h)
from assms(3)[THEN GEN_OP_D, param_fo, OF lv_relI lv_relI, of a b] assms(1) 2
show ?case
apply (auto simp: lv_rel_def ivl_rel_def dest!: brD)
apply (rule relcompI)
apply (rule prod_relI)
apply (rule brI)
apply (rule refl)
apply (simp;fail)
apply (rule brI)
apply (rule refl)
apply (simp;fail)
apply (rule brI)
apply (simp add: set_of_ivl_def fst_imageIcc)
apply (simp; fail)
done
qed
concrete_definition op_inter_fst_impl uses op_inter_fst_impl
lemmas [autoref_rules] = op_inter_fst_impl.refine
definition "op_inter_fst_coll XS Y = do {
XS \<leftarrow> sets_of_coll XS;
FORWEAK XS (RETURN op_empty_coll) (\<lambda>X. RETURN (mk_coll (op_inter_fst X Y))) (\<lambda>X X'. RETURN (X' \<union> X))
}"
schematic_goal op_inter_fst_coll_impl:
assumes [autoref_rules_raw]: "DIM_precond TYPE('a::executable_euclidean_space) D"
assumes [THEN GEN_OP_D, autoref_rules]: "GEN_OP le ((\<le>) ::'a\<times>'b::executable_euclidean_space \<Rightarrow>_) (lv_rel \<rightarrow> lv_rel \<rightarrow> bool_rel)"
assumes [autoref_rules]: "(XSi, XS::('a \<times> 'b) set) \<in> clw_rel lvivl_rel"
"(Yi, Y::'a set) \<in> lvivl_rel"
shows "(nres_of ?r, op_inter_fst_coll XS Y) \<in> \<langle>clw_rel lvivl_rel\<rangle>nres_rel"
unfolding op_inter_fst_coll_def
by autoref_monadic
concrete_definition op_inter_fst_coll_impl uses op_inter_fst_coll_impl
lemmas op_inter_fst_coll_impl_refine[autoref_rules] =
op_inter_fst_coll_impl.refine[autoref_higher_order_rule(1 2)]
lemma [autoref_op_pat]: "X \<inter> Y \<times> UNIV \<equiv> OP op_inter_fst $ X $ Y"
by auto
schematic_goal scaleRe_ivl_coll_impl:
assumes [relator_props]: "single_valued A"
assumes [autoref_rules]: "(li, l) \<in> ereal_rel" "(ui, u) \<in> ereal_rel" "(Xi, X) \<in> clw_rel A"
shows "(nres_of ?r, scaleRe_ivl_coll_spec l u X) \<in> \<langle>clw_rel (\<langle>A\<rangle>scaleR2_rel)\<rangle>nres_rel"
unfolding scaleRe_ivl_coll_spec_def
including art
by autoref_monadic
concrete_definition scaleRe_ivl_coll_impl uses scaleRe_ivl_coll_impl
lemma scaleRe_ivl_coll_impl_refine[autoref_rules]:
"PREFER single_valued A \<Longrightarrow>
(\<lambda>li ui Xi. nres_of (scaleRe_ivl_coll_impl li ui Xi), scaleRe_ivl_coll_spec)
\<in> ereal_rel \<rightarrow> ereal_rel \<rightarrow> clw_rel A \<rightarrow> \<langle>clw_rel (\<langle>A\<rangle>scaleR2_rel)\<rangle>nres_rel"
using scaleRe_ivl_coll_impl.refine by force
schematic_goal op_inter_fst_ivl_scaleR2_impl:
assumes [autoref_rules_raw]: "DIM_precond TYPE('a::executable_euclidean_space) E"
assumes [THEN GEN_OP_D, autoref_rules]: "GEN_OP le ((\<le>) ::'a\<times>'b::executable_euclidean_space \<Rightarrow>_) (lv_rel \<rightarrow> lv_rel \<rightarrow> bool_rel)"
assumes [autoref_rules]: "(XSi, XS::('a\<times>'b) set) \<in> elvivl_rel"
"(Yi, Y::'a set) \<in> lvivl_rel"
shows "(nres_of ?r, op_inter_fst_ivl_scaleR2 XS Y) \<in> \<langle>clw_rel elvivl_rel\<rangle>nres_rel"
unfolding op_inter_fst_ivl_scaleR2_def
including art
by autoref_monadic
concrete_definition op_inter_fst_ivl_scaleR2_impl uses op_inter_fst_ivl_scaleR2_impl
lemmas op_inter_fst_ivl_scaleR2_impl_refine[autoref_rules] =
op_inter_fst_ivl_scaleR2_impl.refine[autoref_higher_order_rule(1 2)]
schematic_goal op_inter_fst_ivl_coll_scaleR2_impl:
assumes [autoref_rules_raw]: "DIM_precond TYPE('a::executable_euclidean_space) E"
assumes [THEN GEN_OP_D, autoref_rules]: "GEN_OP le ((\<le>) ::'a\<times>'b::executable_euclidean_space \<Rightarrow>_) (lv_rel \<rightarrow> lv_rel \<rightarrow> bool_rel)"
assumes [autoref_rules]: "(XSi, XS::('a\<times>'b) set) \<in> clw_rel elvivl_rel"
"(Yi, Y::'a set) \<in> lvivl_rel"
shows "(nres_of ?r, op_inter_fst_ivl_coll_scaleR2 XS Y) \<in> \<langle>clw_rel elvivl_rel\<rangle>nres_rel"
unfolding op_inter_fst_ivl_coll_scaleR2_def
including art
by autoref_monadic
concrete_definition op_inter_fst_ivl_coll_scaleR2_impl uses op_inter_fst_ivl_coll_scaleR2_impl
lemmas op_inter_fst_ivl_coll_scaleR2_impl_refine[autoref_rules]
= op_inter_fst_ivl_coll_scaleR2_impl.refine[autoref_higher_order_rule(1 2)]
definition "op_inter_ivl_coll_scaleR2 X Y = do {
eivls \<leftarrow> op_inter_fst_ivl_coll_scaleR2 X Y;
((l, u), ivls) \<leftarrow> scaleR2_rep_coll eivls;
ivl \<leftarrow> op_ivl_of_ivl_coll ivls;
let R = op_inter_fst ivl Y;
scaleRe_ivl_coll_spec l u (filter_empty_ivls (mk_coll R))
}"
definition "op_single_inter_ivl a fxs = do {
let isa = (op_inter_ivl_coll (fxs:::clw_rel lvivl_rel) (a:::lvivl_rel));
(if op_coll_is_empty isa then RETURN op_empty_coll else do {
ivl \<leftarrow> op_ivl_of_ivl_coll isa;
RETURN (mk_coll ((ivl:::lvivl_rel) \<inter> a))
})
}"
schematic_goal op_inter_ivl_coll_scaleR2_impl:
assumes [autoref_rules_raw]: "DIM_precond TYPE('a::executable_euclidean_space) D"
assumes [autoref_rules_raw]: "DIM_precond TYPE('b::executable_euclidean_space) E"
assumes [THEN GEN_OP_D, autoref_rules]: "GEN_OP le ((\<le>) ::'a\<times>'b::executable_euclidean_space \<Rightarrow>_) (lv_rel \<rightarrow> lv_rel \<rightarrow> bool_rel)"
assumes [autoref_rules]: "(XSi, XS::('a\<times>'b) set) \<in> clw_rel elvivl_rel"
"(Yi, Y::'a set) \<in> lvivl_rel"
shows "(nres_of ?r, op_inter_ivl_coll_scaleR2 XS Y) \<in> \<langle>clw_rel elvivl_rel\<rangle>nres_rel"
unfolding op_inter_ivl_coll_scaleR2_def
including art
by autoref_monadic
concrete_definition op_inter_ivl_coll_scaleR2_impl uses op_inter_ivl_coll_scaleR2_impl
lemmas op_inter_ivl_coll_scaleR2_impl_refine[autoref_rules] =
op_inter_ivl_coll_scaleR2_impl.refine[autoref_higher_order_rule(1 2 3)]
lemma op_image_fst_ivl[autoref_rules]:
assumes [autoref_rules_raw]: "DIM_precond TYPE('a::executable_euclidean_space) D"
assumes [THEN GEN_OP_D, autoref_rules]: "GEN_OP le ((\<le>) ::'a\<times>'b::executable_euclidean_space \<Rightarrow>_) (lv_rel \<rightarrow> lv_rel \<rightarrow> bool_rel)"
shows "(\<lambda>(l,u). nres_of (if le l u then dRETURN (pairself (take D) (l, u)) else dSUCCEED)
, op_image_fst_ivl::('a\<times>'b) set\<Rightarrow>_) \<in> lvivl_rel \<rightarrow> \<langle>lvivl_rel\<rangle>nres_rel"
using assms
apply (auto simp: ivl_rel_def nres_rel_def op_image_fst_ivl_def RETURN_RES_refine_iff
dest!: brD intro!: )
apply (rule relcompI)
apply (rule prod_relI)
apply (rule lv_relI)
apply (simp add: lv_rel_def br_def)
apply (rule lv_relI)
apply (simp add: lv_rel_def br_def)
apply (rule brI)
subgoal for a b
apply (drule fun_relD)
apply (rule lv_relI[where x=a])
apply (simp add: lv_rel_def br_def)
apply (drule fun_relD)
apply (rule lv_relI[where x=b])
apply (simp add: lv_rel_def br_def)
apply (auto simp: set_of_ivl_def lv_rel_def br_def fst_imageIcc eucl_of_list_prod)
done
subgoal by simp
done
schematic_goal op_image_fst_ivl_coll_impl[autoref_rules]:
assumes [autoref_rules_raw]: "DIM_precond TYPE('a::executable_euclidean_space) D"
assumes "GEN_OP le ((\<le>) ::'a\<times>'b::executable_euclidean_space \<Rightarrow>_) (lv_rel \<rightarrow> lv_rel \<rightarrow> bool_rel)"
assumes [autoref_rules]: "(Xi, X) \<in> clw_rel lvivl_rel"
shows "(nres_of ?r, (op_image_fst_ivl_coll::('a\<times>'b) set\<Rightarrow>_) X) \<in> \<langle>clw_rel lvivl_rel\<rangle>nres_rel"
unfolding op_image_fst_ivl_coll_def
by autoref_monadic
concrete_definition op_image_fst_ivl_coll_impl uses op_image_fst_ivl_coll_impl
lemmas op_image_fst_ivl_coll_impl_refine[autoref_rules] =
op_image_fst_ivl_coll_impl.refine[autoref_higher_order_rule(1 2)]
schematic_goal op_single_inter_ivl_impl:
assumes [autoref_rules_raw]: "DIM_precond TYPE('a::executable_euclidean_space) D"
assumes [autoref_rules]: "(FXSi, FXS) \<in> clw_rel lvivl_rel" "(Ai, A::'a set) \<in> lvivl_rel"
shows "(nres_of ?r, op_single_inter_ivl A FXS) \<in> \<langle>clw_rel lvivl_rel\<rangle>nres_rel"
unfolding op_single_inter_ivl_def
by autoref_monadic
concrete_definition op_single_inter_ivl_impl for Ai FXSi uses op_single_inter_ivl_impl
lemmas op_single_inter_ivl_impl_refine[autoref_rules]
= op_single_inter_ivl_impl.refine[autoref_higher_order_rule (1)]
definition [refine_vcg_def]: "le_post_inter_granularity_op ro r = SPEC(\<lambda>x::bool. True)"
lemma le_post_inter_granularity_op_itype[autoref_itype]:
"le_post_inter_granularity_op ::\<^sub>i A \<rightarrow>\<^sub>i \<langle>i_rnv\<rangle>\<^sub>ii_ivl \<rightarrow>\<^sub>i \<langle>i_bool\<rangle>\<^sub>ii_nres"
by auto
definition partition_ivle::
"_ \<Rightarrow> ('a::executable_euclidean_space \<times> 'c::executable_euclidean_space) set \<Rightarrow> _ set nres"
where
"partition_ivle ro xse =
(if op_coll_is_empty xse then RETURN (op_empty_coll:::clw_rel (elvivl_rel)) else do {
(_, xs) \<leftarrow> scaleR2_rep_coll xse;
xsf \<leftarrow> op_image_fst_ivl_coll xs;
r \<leftarrow> op_ivl_of_ivl_coll (xsf:::clw_rel (lvivl_rel));
(i, s) \<leftarrow> ivl_rep r;
CHECK (\<lambda>_. ()) (i \<le> s);
(rs, ps) \<leftarrow>
WHILE\<^bsup>(\<lambda>(rs, ps). xse \<subseteq> (rs \<times> UNIV) \<union> ps)\<^esup> (\<lambda>(rs, ps). \<not> op_coll_is_empty (rs:::clw_rel lvivl_rel))
(\<lambda>(rs, ps).
do {
(r, rs') \<leftarrow> (split_spec_exact rs:::\<langle>lvivl_rel \<times>\<^sub>r clw_rel lvivl_rel\<rangle>nres_rel);
okay \<leftarrow> le_post_inter_granularity_op ro r;
if okay then do {
I \<leftarrow> op_inter_ivl_coll_scaleR2 (xse) (r);
RETURN (rs', I \<union> ps)
} else do {
(a, b) \<leftarrow> split_spec_ivl DIM('a) r;
fxs \<leftarrow> op_image_fst_ivl_coll xs;
ra' \<leftarrow> op_single_inter_ivl a fxs;
rb' \<leftarrow> op_single_inter_ivl b fxs;
RETURN (ra' \<union> rb' \<union> rs', ps)
}
}) (mk_coll r:::clw_rel lvivl_rel, op_empty_coll :::clw_rel elvivl_rel);
RETURN ps
})"
schematic_goal partition_ivle_nres:
assumes [autoref_rules_raw]: "DIM_precond TYPE('a::executable_euclidean_space) F"
assumes [autoref_rules_raw]: "DIM_precond TYPE('c::executable_euclidean_space) E"
assumes okgs[THEN GEN_OP_D, autoref_rules]:
"GEN_OP okay_granularityi (le_post_inter_granularity_op::_\<Rightarrow>'a set\<Rightarrow>_) (A \<rightarrow> lvivl_rel \<rightarrow> \<langle>bool_rel\<rangle>nres_rel)"
assumes [unfolded autoref_tag_defs, refine_transfer]:
"\<And>ro X. TRANSFER (nres_of (okay_granularityd ro X) \<le> okay_granularityi ro X)"
assumes [autoref_rules]:
"(xsi, xs::('a\<times>'c::executable_euclidean_space) set)\<in> clw_rel elvivl_rel"
assumes [autoref_rules]: "(roi, ro) \<in> A"
shows "(nres_of ?f, partition_ivle ro xs)\<in>\<langle>clw_rel elvivl_rel\<rangle>nres_rel"
unfolding partition_ivle_def[abs_def]
including art
by autoref_monadic
concrete_definition partition_ivle_nres for okay_granularityd xsi uses partition_ivle_nres
lemmas [autoref_rules] = partition_ivle_nres.refine[autoref_higher_order_rule(1 2 3 4)]
definition "reduce_ivl (X::('a::executable_euclidean_space\<times>'b::executable_euclidean_space)set) b = do {
(i, s) \<leftarrow> ivl_rep X;
CHECK (\<lambda>_. ST ''reduce_ivl strange basis'') (b \<in> set Basis_list);
CHECK (\<lambda>_. ST ''reduce_ivl strange ivl'') (i \<le> s);
let (i0, i1) = split_lv_rel i;
let (s0, s1) = split_lv_rel s;
let ivl2 = op_atLeastAtMost_ivl i1 s1;
P \<leftarrow> project_set_ivl ivl2 b 0;
(iP, sP) \<leftarrow> ivl_rep P;
if iP \<le> 0 \<and> 0 \<le> sP then
if i1 \<bullet> b > 0 then do {
let s = (i1 \<bullet> b) *\<^sub>R b;
let P' = op_atLeastAtMost_ivl (Pair_lv_rel i0 (iP + s)) (Pair_lv_rel s0 (sP + s));
scaleRe_ivl_spec 1 \<infinity> P'
} else if s1 \<bullet> b < 0 then do {
let s = (s1 \<bullet> b) *\<^sub>R b;
let P' = op_atLeastAtMost_ivl (Pair_lv_rel i0 (iP + s)) (Pair_lv_rel s0 (sP + s));
scaleRe_ivl_spec 1 \<infinity> P'
} else scaleRe_ivl_spec 1 1 X
else scaleRe_ivl_spec 1 1 X
}"
definition "reduce_ivle Y b = do {
((l, u), X) \<leftarrow> scaleR2_rep Y;
R \<leftarrow> reduce_ivl X b;
((l', u'), R) \<leftarrow> scaleR2_rep R;
CHECK (\<lambda>_. ()) (0 < l' \<and> 0 < l \<and> 0 \<le> u \<and> l \<le> u \<and> l' \<le> u');
scaleRe_ivl_spec (l'*l) (u' * u) R
}"
definition "reduces_ivle (X::('a::executable_euclidean_space\<times>'b::executable_euclidean_space)set) =
FOREACH\<^bsup>\<lambda>B R. X \<subseteq> R\<^esup> (set Basis_list:::\<langle>lv_rel\<rangle>list_set_rel) (\<lambda>b X. reduce_ivle X b) X"
definition "setse_of_ivlse (X:: ('a::executable_euclidean_space \<times> 'c::executable_euclidean_space) set) = do {
Xs \<leftarrow> sets_of_coll X;
FORWEAK Xs (RETURN op_empty_coll) (\<lambda>X. do {
((l, u), x) \<leftarrow> scaleR2_rep X;
(i, s) \<leftarrow> ivl_rep x;
if i \<le> s then do {
x \<leftarrow> scaleRe_ivl_spec l u {i .. s};
RETURN (mk_coll x)
} else RETURN op_empty_coll
}) (\<lambda>X' X. RETURN (X' \<union> X))
}"
schematic_goal reduce_ivl_impl:
assumes [autoref_rules_raw]: "DIM_precond TYPE('a::executable_euclidean_space) D"
assumes [autoref_rules_raw]: "DIM_precond TYPE('b::executable_euclidean_space) E"
assumes [autoref_rules]:
"(Yi, Y::('a\<times>'b::executable_euclidean_space) set) \<in> lvivl_rel"
"(bi, b::'b) \<in> lv_rel"
shows "(nres_of ?r, reduce_ivl Y b) \<in> \<langle>elvivl_rel\<rangle>nres_rel"
unfolding autoref_tag_defs
unfolding reduce_ivl_def
including art
by autoref_monadic
concrete_definition reduce_ivl_impl for Yi bi uses reduce_ivl_impl
lemmas [autoref_rules] = reduce_ivl_impl.refine[autoref_higher_order_rule(1 2)]
schematic_goal reduce_ivle_impl:
assumes [autoref_rules_raw]: "DIM_precond TYPE('a::executable_euclidean_space) D"
assumes [autoref_rules_raw]: "DIM_precond TYPE('b::executable_euclidean_space) E"
assumes [autoref_rules]:
"(Yi, Y::('a\<times>'b::executable_euclidean_space) set) \<in> elvivl_rel"
"(bi, b::'b) \<in> lv_rel"
shows "(nres_of ?r, reduce_ivle Y b) \<in> \<langle>elvivl_rel\<rangle>nres_rel"
unfolding autoref_tag_defs
unfolding reduce_ivle_def
including art
by autoref_monadic
concrete_definition reduce_ivle_impl for Yi bi uses reduce_ivle_impl
lemmas [autoref_rules] = reduce_ivle_impl.refine[autoref_higher_order_rule(1 2)]
schematic_goal reduces_ivle_impl:
assumes [autoref_rules_raw]: "DIM_precond TYPE('a::executable_euclidean_space) D"
assumes [autoref_rules_raw]: "DIM_precond TYPE('b::executable_euclidean_space) E"
assumes [autoref_rules]: "(Yi, Y::('a\<times>'b::executable_euclidean_space) set) \<in> elvivl_rel"
shows "(nres_of ?r, reduces_ivle Y) \<in> \<langle>elvivl_rel\<rangle>nres_rel"
unfolding autoref_tag_defs
unfolding reduces_ivle_def
including art
by autoref_monadic
concrete_definition reduces_ivle_impl for Yi uses reduces_ivle_impl
lemmas [autoref_rules] = reduces_ivle_impl.refine[autoref_higher_order_rule(1 2)]
lemma scaleR2_subset:
assumes "x \<in> scaleR2 i' j' k'"
assumes "i \<le> i'" "j' \<le> j" "k' \<subseteq> k"
shows "x \<in> scaleR2 i j k"
using assms
by (force simp: scaleR2_def vimage_def image_def)
lemma subset_scaleR2_fstD: "X \<subseteq> scaleR2 l u Y \<Longrightarrow> fst ` X \<subseteq> fst ` Y"
by (force simp: scaleR2_def subset_iff image_def vimage_def)
lemma mem_scaleR2_union[simp]: "x \<in> scaleR2 l u (A \<union> B) \<longleftrightarrow> x \<in> scaleR2 l u A \<or> x \<in> scaleR2 l u B"
by (force simp: scaleR2_def vimage_def image_def)
lemma scaleR2_empty[simp]: "scaleR2 l u {} = {}"
by (auto simp: scaleR2_def)
lemma scaleR2_eq_empty_iff:
"scaleR2 l u X = {} \<longleftrightarrow> X = {} \<or> ereal -` {l..u} = {}"
by (auto simp: scaleR2_def)
lemma scaleR2_id[simp]: "scaleR2 (1::ereal) 1 = (\<lambda>(x::('d \<times> 'c::real_vector) set). x)"
by (rule scaleR2_1_1)
end
end
|
lemma homeomorphism_moving_point_1: fixes a :: "'a::euclidean_space" assumes "affine T" "a \<in> T" and u: "u \<in> ball a r \<inter> T" obtains f g where "homeomorphism (cball a r \<inter> T) (cball a r \<inter> T) f g" "f a = u" "\<And>x. x \<in> sphere a r \<Longrightarrow> f x = x"
|
module JS.Inheritance
import Control.Monad.Either
import JS.Util
import Data.List.Elem
import Data.String
import Data.SOP
--------------------------------------------------------------------------------
-- Upcasting
--------------------------------------------------------------------------------
||| A `JSType` describes a type's inheritance chains and implemented
||| mixins. It is used to safely and conveniently cast a value to a
||| less specific type mentioned either in the list of
||| mixins or parent types by means of function `up` and operator `:>`.
public export
interface JSType a where
||| The inheritance chain of parent types of this data type
||| (starting at the direct supertype). At runtime, such an inheritance
||| chain can be inspected by recursively calling the Javascript
||| function `Object.getPrototypeOf`.
parents : List Type
||| A Mixin is a concept from WebIDL: it is a programming interface
||| shared by several types. Unlike a WebIDL interface, a mixin does
||| not describe a type but just a set of shared functions and
||| attributes. Mixins are not observable by means of inspecting
||| a value's prototype chain. It is therefore much harder
||| (and right now not supported in this library) to check at
||| runtime whether a value implements a given mixin.
mixins : List Type
||| Convenience alias for `parents`, which takes an explicit
||| erased type argument.
public export
0 Types : (0 a : Type) -> JSType a => List Type
Types a = a :: parents {a} ++ mixins {a}
||| Safe upcasting. This uses `believe_me` internally and is
||| therefore of course only safe if the `JSType` implementation
||| is correct according to some specification and the backend
||| properly adheres to this specification.
public export %inline
up : (0 _ : JSType a) => a -> {auto 0 _ : Elem b (Types a)} -> b
up v = believe_me v
infixl 1 :>
||| Operator version of `up`.
public export %inline
(:>) : (0 _ : JSType a) => a -> (0 b : Type) -> {auto 0 _ : Elem b (Types a)} -> b
a :> _ = up a
--------------------------------------------------------------------------------
-- Downcasting
--------------------------------------------------------------------------------
%foreign #"""
javascript:lambda:(s,v)=>{
var o = v;
while (o != null) {
var p = Object.getPrototypeOf(o);
var cn = p.constructor.name;
if (cn === s) {
return 1;
} else if (cn === "Object") {
return 0;
}
o = p;
}
return 0;
}
"""#
prim__hasProtoName : String -> AnyPtr -> Double
||| This is an interface which should be implemented by external
||| types, the type of which can be inspected at runtime.
|||
||| This allows us to try, at runtime, to safely cast any value
||| to the type implementing this interface.
|||
||| Typically, there are two mechanisms for inspecting a value's
||| type at runtime: Function `typeof`, which is mainly useful
||| for primitives, and function `unsafeCastOnPrototypeName`, which
||| inspects a value's prototype chain
||| ([see also](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Inheritance_and_the_prototype_chain)).
|||
||| Note, that the intention of this interface is to use it
||| on *external* types and *primitives*, but not on types
||| defined in Idris2. If you need to marshal Idris2 values
||| from and to the FFI, use interfaces `ToJS` and `FromJS`.
public export
interface SafeCast a where
safeCast : {0 x : Type} -> x -> Maybe a
public export %inline
castTo : (0 a : Type) -> SafeCast a => x -> Maybe a
castTo _ v = safeCast v
||| Tries to create an n-ary sum by trying all possible
||| casts. The first successful cast will determine the
||| result.
export
safeCastNS : (np : NP SafeCast ts) => x -> Maybe (NS I ts)
safeCastNS x = choiceMap runNS $ apInjsNP np
where runNS : NS SafeCast ts -> Maybe (NS I ts)
runNS = htraverse (\sc => safeCast x)
||| This is a utility function to implement instances of
||| `SafeCast`. Only use, if you know what you are doing.
export
unsafeCastOnPrototypeName : String -> a -> Maybe b
unsafeCastOnPrototypeName s a =
if prim__hasProtoName s (believe_me a) == 1.0
then Just (believe_me a)
else Nothing
||| This is a utility function to implement instances of
||| `SafeCast`. Only use, if you know what you are doing.
export
unsafeCastOnTypeof : String -> a -> Maybe b
unsafeCastOnTypeof s a =
if typeof a == s then Just (believe_me a) else Nothing
export
SafeCast Integer where
safeCast = unsafeCastOnTypeof "bigint"
export
SafeCast Double where
safeCast = unsafeCastOnTypeof "number"
export
SafeCast String where
safeCast = unsafeCastOnTypeof "string"
-- As far as I understand, there are no "single characters"
-- in Javascript, only strings of length 1. That's why we go via
-- String here
export
SafeCast Char where
safeCast v = safeCast v >>= \s => case strM s of
StrCons x "" => Just x
_ => Nothing
export
bounded : Num a => (min : Integer) -> (max : Integer) -> x -> Maybe a
bounded min max ptr =
safeCast ptr >>= \n => if n >= min && n <= max
then Just (fromInteger n)
else Nothing
export
SafeCast Bits8 where
safeCast = bounded 0 0xff
export
SafeCast Bits16 where
safeCast = bounded 0 0xffff
export
SafeCast Bits32 where
safeCast = bounded 0 0xffffffff
export
SafeCast Bits64 where
safeCast = bounded 0 0xffffffffffffffff
export
SafeCast Int8 where
safeCast = bounded (-0x80) 0x7f
export
SafeCast Int16 where
safeCast = bounded (-0x8000) 0x7fff
export
SafeCast Int32 where
safeCast = bounded (-0x80000000) 0x7fffffff
export
SafeCast Int64 where
safeCast = bounded (-0x8000000000000000) 0x7fffffffffffffff
export
SafeCast Int where
safeCast = bounded (- 0x80000000) (0x7fffffff)
export
tryCast : SafeCast a => (fun : Lazy String) -> x -> JSIO a
tryCast fun val = case safeCast val of
Just a => pure a
Nothing => throwError $ CastErr fun val
export
tryCast_ : (0 a : Type) -> SafeCast a => (fun : Lazy String) -> x -> JSIO a
tryCast_ _ = tryCast
export
castingTo : SafeCast a => (fun : String) -> JSIO x -> JSIO a
castingTo fun io = io >>= tryCast fun
|
If $m^k \leq n < (m+1)^k$, then $\lfloor n^{1/k} \rfloor = m$.
|
######################################################################
`eta/ord_simplex_interior` := proc(A::set)
if nops(A) <> 1 then return FAIL; fi;
return [`eta/ord`(A),`eta/simplex_interior`(A)];
end;
`gamma/ord_simplex_interior` := (A::set,B::set) -> (p) -> proc(U,V)
local RU,RV,xU,xV,b;
RU := U[1];
RV := table([seq(b = eval(V[b][1]),b in B)]);
xU := U[2];
xV := table([seq(b = eval(V[b][2]),b in B)]);
return [`gamma/ord`(A,B)(p)(RU,RV),
`gamma/simplex_interior`(A,B)(p)(xU,xV)];
end;
|
lemma continuous_at_split: "continuous (at x) f \<longleftrightarrow> continuous (at_left x) f \<and> continuous (at_right x) f" for x :: "'a::linorder_topology"
|
The convex hull of a set $S$ and a point $a$ is the set of all points that can be written as a convex combination of $a$ and a point in $S$.
|
\documentclass[12pt]{article}
\usepackage[pdftex]{graphicx}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\kg}{\mathrm{kg}}
\newcommand{\m}{\mathrm{m}}
\newcommand{\cm}{\mathrm{cm}}
\newcommand{\mm}{\mathrm{mm}}
\newcommand{\km}{\mathrm{km}}
\newcommand{\mi}{\mathrm{mi}}
\newcommand{\s}{\mathrm{s}}
\newcommand{\ms}{\mathrm{ms}}
\newcommand{\h}{\mathrm{h}}
\newcommand{\N}{\mathrm{N}}
\newcommand{\W}{\mathrm{W}}
\newcommand{\hp}{\mathrm{hp}}
\newcommand{\rad}{\mathrm{rad}}
\newcounter{problem}
\stepcounter{problem}
\newcounter{answer}[problem]
\newenvironment{problem}{\noindent\begin{minipage}{\textwidth}\sloppy\sloppypar\raggedright\textbf{\theproblem.}\refstepcounter{problem}\stepcounter{answer}---}{\end{minipage}\\[2ex]}
\newcommand{\source}[1]{[{#1}]}
\newenvironment{answers}{\\[0.5ex]}{}
\newcommand{\answer}[1]{\textbf{\Alph{answer}:}\refstepcounter{answer}~\mbox{#1\hspace{3ex}}}
\newcommand{\longanswer}[1]{\textbf{\Alph{answer}:}\refstepcounter{answer}~{#1}\\}
\begin{document}
\section*{NYU General Physics 1---Term Exam 3}
\begin{problem}
\source{from lecture 2013-10-24} You analyzed a mass $M$ sitting on
a table-top of mass $m$ held up on two fulcra (points of contact),
separated by a distance $L$. If the mass $M$ is put directly over
the \emph{left} fulcrum, what is the magnitude of the contact force
at the \emph{right} fulcrum? By ``contact force'' we mean the force
of the fulcrum on the table top.
\begin{answers}
\answer{$\displaystyle \frac{m\,g}{2}$}
\answer{$\displaystyle \frac{M\,g}{2}+\frac{m\,g}{2}$}
\answer{$\displaystyle M\,g+\frac{mg}{2}$}
\answer{$\displaystyle M\,g-\frac{mg}{2}$}
\answer{$\displaystyle \frac{Mg}{2}$}
\end{answers}
\end{problem}
\begin{problem}
\source{from lecture 2013-10-29} We considered a hanging sign, with
the cable at an angle of $\theta$ to the horizontal, as shown.
\\\includegraphics{../mp/hanging_sign2.pdf}\\ If we choose the
pin (black dot) connecting the beam and the wall as the axis (reference point), what is the magnitude of
the torque on the beam coming from the cable marked ``$T_1$''?
\begin{answers}
\answer{$T_1\,\cos\theta$}
\answer{$T_1\,\sin\theta$}
\answer{$T_1\,L\,\cos\theta$}
\answer{$T_1\,L\,\sin\theta$}
\end{answers}
\end{problem}
\begin{problem}
\source{from lecture 2013-10-31} Just like we did in lecture,
consider a harmonic oscillator consisting of a mass $m$ attached to
a spring of spring constant $k$ (force per unit length) on a
frictionless table. It oscillates with amplitude $A$. The period
of the oscillator:
\begin{answers}
\answer{does not depend on the amplitude}
\answer{does not depend on the mass}
\answer{does not depend on the spring constant}
\answer{changes with time}
\end{answers}
\end{problem}
\begin{problem}
\source{from lecture 2013-11-05} We considered a stretched brass
string of density $\rho$ and diameter $D$, fixed at both ends,
tightened to tension $T$. Assuming $\rho$, $D$, and $T$ remain
fixed, how does the fundamental frequency of the string depend on
length $L$? (Hint: As you change the length, you also change the
total mass $M$ of the string.)
\begin{answers}
\answer{$f\propto 1/L$}
\answer{$f\propto 1/\sqrt{L}$}
\answer{$f$ is independent of $L$}
\answer{$f\propto \sqrt{L}$}
\answer{$f\propto L$}
\end{answers}
\end{problem}
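% Worked check (added note, not part of the original exam): with linear
% density mu = rho * (pi D^2 / 4) fixed, the wave speed v = sqrt(T/mu) is
% independent of L, so the fundamental f = v / (2 L) scales as 1/L, even
% though the total mass M = mu * L grows with L.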
\begin{problem}
\source{from lecture 2013-11-07} We discussed a traveling sinusoidal
wave on a stretched string. The wave moved in the $x$ direction,
and the bits of string oscillated in the $y$ direction. Which
statement is \emph{false}?
\begin{answers}
\answer{The wave was transverse.}
\answer{Each bit of string moves in both the $x$ and $y$ directions.}
\answer{Every bit of string oscillates at precisely the same frequency.}
\answer{The plots of $y$ vs time and $y$ vs $x$ were both sinusoidal in form.}
\end{answers}
\end{problem}
\begin{problem}
\source{from lecture 2013-11-12} The amplitude of the standing wave
in a piano string, fixed at both ends, declines with time. What is
one possible reason?
\begin{answers}
\answer{It is subject to static friction.}
\answer{It is under tension.}
\answer{It is emitting a sound into the air.}
\answer{The oscillation frequency is changing with time.}
\end{answers}
\end{problem}
\begin{problem}
\source{from problem set 8, problem 1} Where is the center of mass
of this object, which you considered in the problem set?
\\\includegraphics{../py/com_shape.pdf}
\begin{answers}
\answer{$x=2.5\,\cm$, $y=3.0\,\cm$}
\answer{$x=3.0\,\cm$, $y=3.0\,\cm$}
\answer{$x=2.5\,\cm$, $y=3.5\,\cm$}
\answer{$x=3.0\,\cm$, $y=3.5\,\cm$}
\answer{None of the above}
\end{answers}
\end{problem}
\begin{problem}
\source{from problem set 8, problem 2} Consider the topmost,
horizontal beam in this bridge, which you considered in the problem
set.
\\\includegraphics{../py/bridge.pdf}\\
Is that top beam under tension or compression? Assume that the
joints are bearings, free to rotate!
\begin{answers}
\answer{tension}
\answer{compression}
\answer{the beam is not under stress}
\answer{could be either, depending on other details}
\end{answers}
\end{problem}
\begin{problem}
\source{from problem set 8, problem 3} If you hold your right arm
such that your upper arm is vertical and your forearm is horizontal,
and you have a $10\,\kg$ grocery bag hanging from your right hand,
\emph{very roughly} what is the contact force between the bones at
your elbow joint?
\begin{answers}
\answer{$0.25\,\N$}
\answer{$5\,\N$}
\answer{$100\,\N$}
\answer{$2000\,\N$}
\end{answers}
\end{problem}
\begin{problem}
\source{from problem set 9, problem 1} You solved this problem
(ladder leaning against the wall) with two choices of ``axis of
rotation'' or reference point. There were some differences between
the two calculations. In each case you had some forces and some
torques. What is true about the corresponding forces and torques
across the two calculations?
\begin{answers}
\longanswer{the corresponding torques and corresponding forces were all the same across the two calculations}
\longanswer{the corresponding torques were different in the two calculations, but forces were the same}
\longanswer{the corresponding forces were different in the two calculations, but torques were the same}
\longanswer{the corresponding torques and forces were all different across the two calculations}
\end{answers}
\end{problem}
\begin{problem}
\source{from problem set 9, problem 2} A long, thin rod of length
$L$ and cross-sectional area $A$ and elastic (Young's) modulus $E$
has mass $M$. Think of the rod as being like a Hooke's Law spring;
it can be stretched by applying a force. What is the spring
constant $k$ for this spring?
\begin{answers}
\answer{$\displaystyle{E\,L}$}
\answer{$\displaystyle\frac{E\,A}{L}$}
\answer{$\displaystyle\frac{E}{A\,L}$}
\answer{$\displaystyle\frac{L}{E\,A}$}
\answer{none of these}
\end{answers}
\end{problem}
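% Worked check (added note, not part of the original exam): Hooke's law for
% a rod reads F = E A (Delta L / L), so matching this against F = k Delta L
% gives k = E A / L.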
\begin{problem}
\source{from problem set 9, problem 2} You compared the vibration
frequency $f_1$ of a bone supporting your mass with the pendulum
frequency $f_2$ of the bone swinging freely under the influence of
gravity. What did you find?
\begin{answers}
\answer{$f_1 > f_2$}
\answer{$f_1 = f_2$}
\answer{$f_1 < f_2$}
\end{answers}
\end{problem}
\begin{problem}
\source{from problem set 9, problem 3} You plotted the kinetic
energy of a simple harmonic oscillator as a function of time. The
plot of the kinetic energy $K$ as a function of time has what
property?
\begin{answers}
\longanswer{the average kinetic energy is zero}
\longanswer{kinetic energy increases and decreases repeatedly with time}
\longanswer{the plot looks quadratic (like a parabola) everywhere}
\longanswer{the kinetic energy is at a maximum when the position is at a maximum}
\end{answers}
\end{problem}
\begin{problem}
\source{from problem set 10, problem 1} I asked you to differentiate
the expression
$$ x(t) = A\,\sin(\omega\,t+\phi) $$ twice with respect to time.
The second derivative with respect to time of this function is
\begin{answers}
\answer{$\displaystyle \frac{\dd^2 x}{\dd t^2} = A\,\cos(\omega\,t+\phi)$}
\answer{$\displaystyle \frac{\dd^2 x}{\dd t^2} = -\omega^2\,A\,\sin(\omega\,t+\phi)$}
\answer{$\displaystyle \frac{\dd^2 x}{\dd t^2} = \omega\,A\,\cos(\omega\,t+\phi)$}
\answer{$\displaystyle \frac{\dd^2 x}{\dd t^2} = \omega^2\,A\,\sin(\omega\,t+\phi)$}
\answer{$\displaystyle \frac{\dd^2 x}{\dd t^2} = -A\,\sin(\omega\,t+\phi)$}
\end{answers}
\end{problem}
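% Worked check (added note, not part of the original exam): differentiating
% once gives dx/dt = omega A cos(omega t + phi); differentiating again gives
% d^2x/dt^2 = -omega^2 A sin(omega t + phi).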
\begin{problem}
\source{from problem set 10, problem 2} You made plots of the expression
$$ y(x,t) = A\,\cos(\frac{2\pi\,x}{\lambda} - \frac{2\pi\,t}{T}) $$
at various different times. All these plots were sinusoidal, but
differed from one another in what respect?
\begin{answers}
\answer{the amplitude was different in each plot}
\answer{the wavelength was different in each plot}
\answer{each plot was shifted in the $x$-direction relative to the others}
\answer{each plot was shifted in the $y$-direction relative to the others}
\end{answers}
\end{problem}
\begin{problem}
\source{from problem set 10, problem 2} I gave you the expression
$$ y(x,t) = A\,\cos(\frac{2\pi\,x}{\lambda} - \frac{2\pi\,t}{T}) $$
where $A$, $\lambda$, and $T$ are constants. This represents a
traveling wave. At what speed (magnitude of velocity) is it
traveling?
\begin{answers}
\answer{$\displaystyle \lambda\,T$}
\answer{$\displaystyle \frac{\lambda}{T}$}
\answer{$\displaystyle \frac{T}{\lambda}$}
\answer{$\displaystyle 2\,\pi\,\frac{\lambda}{T}$}
\answer{$\displaystyle 2\,\pi\,\frac{T}{\lambda}$}
\end{answers}
\end{problem}
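% Worked check (added note, not part of the original exam): a point of
% constant phase satisfies 2 pi x / lambda - 2 pi t / T = const, i.e.
% x = (lambda / T) t + const, so the wave travels at speed lambda / T.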
\begin{problem}
\source{from problem set 10, problem 3} The speed of sound in air is
about
\begin{answers}
\answer{$0.3\,\m\,\s^{-1}$}
\answer{$3\,\m\,\s^{-1}$}
\answer{$30\,\m\,\s^{-1}$}
\answer{$300\,\m\,\s^{-1}$}
\end{answers}
\end{problem}
\begin{problem}
\source{from \textit{Conservation of Energy} lab} You used a
pendulum consisting of a mass on a light string. Why does the
tension force in the string not do any work on the mass? That is,
why do you not have to consider the tension force when you
calculate the potential or kinetic energies?
\begin{answers}
\longanswer{The tension force is always perpendicular to the direction of motion.}
\longanswer{The tension force is zero.}
\longanswer{The tension force does not act on the mass.}
\longanswer{The force of the mass on the string is opposite to the force of the string on the mass.}
\end{answers}
\end{problem}
\begin{problem}
\source{from \textit{Collisions in One Dimension} lab} In the lab
you did some collisions in which the lab write-up said that ``energy
is not conserved.'' What was the more \emph{precise} meaning of this
in the context of the lab?
\begin{answers}
\longanswer{The laws of physics don't hold in Meyer Hall.}
\longanswer{Kinetic energy is not always conserved.}
\longanswer{Momentum is not always conserved.}
\longanswer{It is impossible to calculate the kinetic energy in these kinds of collision problems.}
\end{answers}
\end{problem}
\begin{problem}
\source{from \textit{Ballistic Pendulum} lab} You inferred the speed
of the ball out of the gun by using an equation that looked like
$$ v = D \left(\frac{g}{2\,d}\right)^{\frac{1}{2}} $$ This equation
depends on some premises or assumptions. Which of the following is
\emph{not} one of those premises?
\begin{answers}
\longanswer{Air resistance is negligible.}
\longanswer{The gun fires the ball horizontally.}
\longanswer{The ball is not spinning when it is launched by the gun.}
\longanswer{The gravitational force affects only the vertical component of velocity.}
\end{answers}
\end{problem}
\end{document}
|
If $f$ converges to $L$ at $a$, then for every $r > 0$ there exists a neighborhood of $a$ such that $f$ is within $r$ of $L$ on that neighborhood.
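In $\varepsilon$--$\delta$ form (a standard restatement): for every $r > 0$ there exists $\delta > 0$ such that $|f(x) - L| < r$ whenever $0 < |x - a| < \delta$.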
|
# Helper assumed by the original snippet: counts of each letter a-z in `word`.
letter.count <- function(word) {
  chars <- strsplit(tolower(word), "")[[1]]
  vapply(letters, function(l) sum(chars == l), integer(1))
}
# A word qualifies if its sorted nonzero letter counts are exactly 1, 2, ..., k
# ('sleeveless' has one v, two l's, three s's, four e's).
sleeveless <- function(word) {
  a <- letter.count(word)
  a <- a[a != 0]
  a <- sort(a)
  sum(a == seq_along(a)) == length(a)
}
words <- c('sleeveless')  # example input; the original `words` vector is not shown
lapply(words, sleeveless)
|
function scanQuery(command,varargin)
% scanQuery
%
% Queries the scan Intensity values and displays in the CERR Status Bar
%
% written DK
%
% Copyright 2010, Joseph O. Deasy, on behalf of the CERR development team.
%
% This file is part of The Computational Environment for Radiotherapy Research (CERR).
%
% CERR development has been led by: Aditya Apte, Divya Khullar, James Alaly, and Joseph O. Deasy.
%
% CERR has been financially supported by the US National Institutes of Health under multiple grants.
%
% CERR is distributed under the terms of the Lesser GNU Public License.
%
% This version of CERR is free software: you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published by
% the Free Software Foundation, either version 3 of the License, or
% (at your option) any later version.
%
% CERR is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
% without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
% See the GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with CERR. If not, see <http://www.gnu.org/licenses/>.
global planC stateS
switch upper(command)
case 'SCANQUERYSTART'
cP = get(gcbo, 'CurrentPoint');
%delete([findobj('tag', 'scanQueryPoint')]);
delete(stateS.handle.scanQueryPoint)
stateS.handle.scanQueryPoint = [];
%line([cP(1,1) cP(1,1)], [cP(2,2) cP(2,2)], 'tag', 'scanQueryPoint', 'userdata', gcbo, 'eraseMode', 'xor', 'parent', gcbo, 'marker', '+', 'color', [1 1 1], 'hittest', 'off');
stateS.handle.scanQueryPoint = line([cP(1,1) cP(1,1)], [cP(2,2) cP(2,2)], 'tag', 'scanQueryPoint', 'userdata', gcbo, 'parent', gcbo, 'marker', '+', 'color', [1 1 1], 'hittest', 'off');
return;
case 'SCANQUERYMOTION'
%dQP = findobj('tag', 'scanQueryPoint');
hAxis = get(stateS.handle.scanQueryPoint, 'userdata');
[view, coord, scanSets] = getAxisInfo(hAxis, 'view', 'coord', 'scanSets');
if isempty(scanSets)
CERRStatusString('Cannot query scan in this axis: no scan is being displayed.')
return;
end
scanSet = scanSets(1);
if isempty(varargin)
cP = get(hAxis, 'CurrentPoint');
set(stateS.handle.scanQueryPoint, 'XData', [cP(1,1) cP(1,1)]);
set(stateS.handle.scanQueryPoint, 'YData', [cP(2,2) cP(2,2)]);
else
xd = get(stateS.handle.scanQueryPoint, 'XData');
yd = get(stateS.handle.scanQueryPoint, 'YData');
cP = [xd(:) yd(:)];
end
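% Convert the 2D in-plane point and the slice coordinate into a 3D (x,y,z) scan position, depending on the view orientation.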
switch lower(view)
case 'transverse'
x = cP(1,1); y = cP(2,2); z = coord;
case 'sagittal'
y = cP(1,1); z = cP(2,2); x = coord;
case 'coronal'
x = cP(1,1); z = cP(2,2); y = coord;
otherwise
return;
end
%Get scan's transM, and convert requested point to scan coords.
transM = getTransM('scan', scanSet, planC);
[xD, yD, zD] = applyTransM(inv(transM), x, y, z);
%Get the actual scan value using the converted point.
scan = getScanAt(scanSet,xD,yD,zD,planC);
indexS = planC{end};
%imageType = planC{indexS.scan}(scanSet).scanInfo(1).imageType;
if ~isempty(planC{indexS.scan}(scanSet).scanInfo(1).CTOffset)
scan = scan - planC{indexS.scan}(scanSet).scanInfo(1).CTOffset;
end
% if strfind(upper(imageType), 'CT')
% CTOffset = planC{indexS.scan}(scanSet).scanInfo(1).CTOffset;
% scan = scan - CTOffset;
% end
CERRStatusString(['x = ' num2str(x) ', y = ' num2str(y) ', z = ' num2str(z) ' scan: ' num2str(scan)], 'gui');
return;
case 'SCANQUERYMOTIONDONE'
hFig = gcbo;
set(hFig, 'WindowButtonMotionFcn', '', 'WindowButtonUpFcn', '');
return;
end
|
section \<open>Nested datatypes\<close>
theory Nested_Datatype
imports MainRLT
begin
subsection \<open>Terms and substitution\<close>
datatype ('a, 'b) "term" =
Var 'a
| App 'b "('a, 'b) term list"
primrec subst_term :: "('a \<Rightarrow> ('a, 'b) term) \<Rightarrow> ('a, 'b) term \<Rightarrow> ('a, 'b) term"
and subst_term_list :: "('a \<Rightarrow> ('a, 'b) term) \<Rightarrow> ('a, 'b) term list \<Rightarrow> ('a, 'b) term list"
where
"subst_term f (Var a) = f a"
| "subst_term f (App b ts) = App b (subst_term_list f ts)"
| "subst_term_list f [] = []"
| "subst_term_list f (t # ts) = subst_term f t # subst_term_list f ts"
lemmas subst_simps = subst_term.simps subst_term_list.simps
text \<open>\<^medskip> A simple lemma about composition of substitutions.\<close>
lemma
"subst_term (subst_term f1 \<circ> f2) t =
subst_term f1 (subst_term f2 t)"
and
"subst_term_list (subst_term f1 \<circ> f2) ts =
subst_term_list f1 (subst_term_list f2 ts)"
by (induct t and ts rule: subst_term.induct subst_term_list.induct) simp_all
lemma "subst_term (subst_term f1 \<circ> f2) t = subst_term f1 (subst_term f2 t)"
proof -
let "?P t" = ?thesis
let ?Q = "\<lambda>ts. subst_term_list (subst_term f1 \<circ> f2) ts =
subst_term_list f1 (subst_term_list f2 ts)"
show ?thesis
proof (induct t rule: subst_term.induct)
show "?P (Var a)" for a by simp
show "?P (App b ts)" if "?Q ts" for b ts
using that by (simp only: subst_simps)
show "?Q []" by simp
show "?Q (t # ts)" if "?P t" "?Q ts" for t ts
using that by (simp only: subst_simps)
qed
qed
subsection \<open>Alternative induction\<close>
lemma "subst_term (subst_term f1 \<circ> f2) t = subst_term f1 (subst_term f2 t)"
proof (induct t rule: term.induct)
case (Var a)
show ?case by (simp add: o_def)
next
case (App b ts)
then show ?case by (induct ts) simp_all
qed
end
|
#! @Title Skeletal vector spaces
#! @Author Sebastian Gutsche and Sebastian Posur
#! @Date 01/08/2018
#! @Chapter HandsOn
#! @Section Objects and morphisms
LoadPackage( "CAP" );
LoadPackage( "MatricesForHomalg" );
#################################
##
## Technical declarations
##
#################################
## Objects
DeclareCategory( "IsQVectorSpace",
IsCapCategoryObject );
## Morphisms
DeclareCategory( "IsQVectorSpaceMorphism",
IsCapCategoryMorphism );
#################################
##
## Attributes
##
#################################
#! @Description
#! The argument is an object $V$ in the category of rational vector spaces.
#! The output is its dimension.
#! @Returns a non-negative integer
#! @Arguments V
DeclareAttribute( "Dimension",
IsQVectorSpace );
#! @Description
#! The argument is a morphism $\alpha$ in the category of rational vector spaces.
#! The output is its underlying matrix.
#! @Returns a homalg matrix
#! @Arguments alpha
DeclareAttribute( "UnderlyingMatrix",
IsQVectorSpaceMorphism );
#################################
##
## Operations
##
#################################
#! @Description
#! The argument is a non-negative integer $d$.
#! The output is a rational vector space of dimension $d$.
#! @Returns an object
#! @Arguments d
DeclareOperation( "QVectorSpace",
[ IsInt ] );
#! @Description
#! The first argument is a rational vector space $V$.
#! The second argument $A$ is either a homalg matrix defined over the rationals
#! or an input that can be used as the first argument in the HomalgMatrix constructor.
#! The third argument is a rational vector space $W$.
#! The output is a vector space morphism from $V$ to $W$ defined by $A$.
#! @Returns a morphism in $\mathrm{Hom}(V,W)$
#! @Arguments V,A,W
DeclareOperation( "QVectorSpaceMorphism",
[ IsQVectorSpace, IsObject, IsQVectorSpace ] );
#################################
##
## Creation of category
##
#################################
BindGlobal( "vecspaces", CreateCapCategory( "QVectorSpaces" ) );
AddObjectRepresentation( vecspaces, IsQVectorSpace );
AddMorphismRepresentation( vecspaces, IsQVectorSpaceMorphism );
SetIsAbelianCategory( vecspaces, true );
#################################
##
## Creation of Q
##
#################################
BindGlobal( "VECTORSPACES_FIELD", HomalgFieldOfRationals( ) );
#################################
##
## Constructors for objects and morphisms
##
#################################
##
InstallMethod( QVectorSpace,
[ IsInt ],
function( dim )
local space;
if dim < 0 then
Error( "the argument has to be a non-negative integer" );
fi;
space := rec( );
ObjectifyObjectForCAPWithAttributes( space, vecspaces,
Dimension, dim
);
return space;
end );
##
InstallMethod( QVectorSpaceMorphism,
[ IsQVectorSpace, IsObject, IsQVectorSpace ],
function( source, matrix, range )
local morphism;
if not IsHomalgMatrix( matrix ) then
matrix := HomalgMatrix( matrix, Dimension( source ), Dimension( range ), VECTORSPACES_FIELD );
fi;
morphism := rec( );
ObjectifyMorphismForCAPWithAttributes( morphism, vecspaces,
Source, source,
Range, range,
UnderlyingMatrix, matrix
);
return morphism;
end );
#################################
##
## View
##
#################################
##
InstallMethod( ViewObj,
[ IsQVectorSpace ],
function( obj )
Print( "<A rational vector space of dimension ", String( Dimension( obj ) ), ">" );
end );
##
InstallMethod( ViewObj,
[ IsQVectorSpaceMorphism ],
function( mor )
Print( "A rational vector space homomorphism with matrix: \n" );
Display( UnderlyingMatrix( mor ) );
end );
#################################
##
## Functions to be added to category
##
#################################
##
identity_morphism := function( obj )
return QVectorSpaceMorphism( obj, HomalgIdentityMatrix( Dimension( obj ), VECTORSPACES_FIELD ), obj );
end;
AddIdentityMorphism( vecspaces, identity_morphism );
##
pre_compose := function( mor_left, mor_right )
local composition;
composition := UnderlyingMatrix( mor_left ) * UnderlyingMatrix( mor_right );
return QVectorSpaceMorphism( Source( mor_left ), composition, Range( mor_right ) );
end;
AddPreCompose( vecspaces, pre_compose );
##
is_equal_for_objects := function( vecspace_1, vecspace_2 )
return Dimension( vecspace_1 ) = Dimension( vecspace_2 );
end;
AddIsEqualForObjects( vecspaces, is_equal_for_objects );
##
is_equal_for_morphisms := function( a, b )
return UnderlyingMatrix( a ) = UnderlyingMatrix( b );
end;
AddIsEqualForMorphisms( vecspaces, is_equal_for_morphisms );
##
kernel_emb := function( morphism )
local syzygies, kernel_obj;
syzygies := SyzygiesOfRows( UnderlyingMatrix( morphism ) );
kernel_obj := QVectorSpace( NrRows( syzygies ) );
return QVectorSpaceMorphism( kernel_obj, syzygies, Source( morphism ) );
end;
AddKernelEmbedding( vecspaces, kernel_emb );
##
lift := function( alpha, beta )
local solution;
solution := RightDivide( UnderlyingMatrix( alpha ), UnderlyingMatrix( beta ) );
if solution = fail then
return fail;
fi;
return QVectorSpaceMorphism( Source( alpha ),
solution,
Source( beta ) );
end;
AddLift( vecspaces, lift );
##
cokernel_proj := function( morphism )
local syzygies, cokernel_obj;
syzygies := SyzygiesOfColumns( UnderlyingMatrix( morphism ) );
cokernel_obj := QVectorSpace( NrColumns( syzygies ) );
return QVectorSpaceMorphism( Range( morphism ),
syzygies, cokernel_obj );
end;
AddCokernelProjection( vecspaces, cokernel_proj );
##
colift := function( alpha, beta )
local solution;
solution := LeftDivide( UnderlyingMatrix( alpha ), UnderlyingMatrix( beta ) );
if solution = fail then
return fail;
fi;
return QVectorSpaceMorphism( Range( alpha ),
solution,
Range( beta ) );
end;
AddColift( vecspaces, colift );
##
zero_object := function( )
return QVectorSpace( 0 );
end;
AddZeroObject( vecspaces, zero_object );
##
universal_morphism_into_zero_object := function( source )
return QVectorSpaceMorphism( source,
HomalgZeroMatrix( Dimension( source ), 0, VECTORSPACES_FIELD ),
QVectorSpace( 0 ) );
end;
AddUniversalMorphismIntoZeroObject( vecspaces, universal_morphism_into_zero_object );
##
universal_morphism_into_zero_object_with_given_zero_object := function( source, terminal_object )
return QVectorSpaceMorphism( source,
HomalgZeroMatrix( Dimension( source ), 0, VECTORSPACES_FIELD ),
terminal_object );
end;
AddUniversalMorphismIntoZeroObjectWithGivenZeroObject( vecspaces, universal_morphism_into_zero_object_with_given_zero_object );
##
universal_morphism_from_zero_object := function( sink )
return QVectorSpaceMorphism( QVectorSpace( 0 ),
HomalgZeroMatrix( 0, Dimension( sink ), VECTORSPACES_FIELD ),
sink );
end;
AddUniversalMorphismFromZeroObject( vecspaces, universal_morphism_from_zero_object );
##
universal_morphism_from_zero_object_with_given_zero_object := function( sink, initial_object )
return QVectorSpaceMorphism( initial_object,
HomalgZeroMatrix( 0, Dimension( sink ), VECTORSPACES_FIELD ),
sink );
end;
AddUniversalMorphismFromZeroObjectWithGivenZeroObject( vecspaces, universal_morphism_from_zero_object_with_given_zero_object );
##
addition_for_morphisms := function( a, b )
return QVectorSpaceMorphism( Source( a ),
UnderlyingMatrix( a ) + UnderlyingMatrix( b ),
Range( a ) );
end;
AddAdditionForMorphisms( vecspaces, addition_for_morphisms );
##
additive_inverse_for_morphisms := function( a )
return QVectorSpaceMorphism( Source( a ),
- UnderlyingMatrix( a ),
Range( a ) );
end;
AddAdditiveInverseForMorphisms( vecspaces, additive_inverse_for_morphisms );
##
direct_sum := function( object_product_list )
local dim;
dim := Sum( List( object_product_list, c -> Dimension( c ) ) );
return QVectorSpace( dim );
end;
AddDirectSum( vecspaces, direct_sum );
##
injection_of_cofactor_of_direct_sum := function( object_product_list, injection_number )
local components, dim, dim_pre, dim_post, dim_cofactor, coproduct, number_of_objects, injection_of_cofactor;
components := object_product_list;
number_of_objects := Length( components );
dim := Sum( components, c -> Dimension( c ) );
dim_pre := Sum( components{ [ 1 .. injection_number - 1 ] }, c -> Dimension( c ) );
dim_post := Sum( components{ [ injection_number + 1 .. number_of_objects ] }, c -> Dimension( c ) );
dim_cofactor := Dimension( object_product_list[ injection_number ] );
coproduct := QVectorSpace( dim );
injection_of_cofactor := HomalgZeroMatrix( dim_cofactor, dim_pre, VECTORSPACES_FIELD );
injection_of_cofactor := UnionOfColumns( injection_of_cofactor,
HomalgIdentityMatrix( dim_cofactor, VECTORSPACES_FIELD ) );
injection_of_cofactor := UnionOfColumns( injection_of_cofactor,
HomalgZeroMatrix( dim_cofactor, dim_post, VECTORSPACES_FIELD ) );
return QVectorSpaceMorphism( object_product_list[ injection_number ], injection_of_cofactor, coproduct );
end;
AddInjectionOfCofactorOfDirectSum( vecspaces, injection_of_cofactor_of_direct_sum );
##
universal_morphism_from_direct_sum := function( diagram, sink )
local dim, coproduct, components, universal_morphism, morphism;
components := sink;
dim := Sum( components, c -> Dimension( Source( c ) ) );
coproduct := QVectorSpace( dim );
universal_morphism := UnderlyingMatrix( sink[1] );
for morphism in components{ [ 2 .. Length( components ) ] } do
universal_morphism := UnionOfRows( universal_morphism, UnderlyingMatrix( morphism ) );
od;
return QVectorSpaceMorphism( coproduct, universal_morphism, Range( sink[1] ) );
end;
AddUniversalMorphismFromDirectSum( vecspaces, universal_morphism_from_direct_sum );
##
projection_in_factor_of_direct_sum := function( object_product_list, projection_number )
local components, dim, dim_pre, dim_post, dim_factor, direct_product, number_of_objects, projection_in_factor;
components := object_product_list;
number_of_objects := Length( components );
dim := Sum( components, c -> Dimension( c ) );
dim_pre := Sum( components{ [ 1 .. projection_number - 1 ] }, c -> Dimension( c ) );
dim_post := Sum( components{ [ projection_number + 1 .. number_of_objects ] }, c -> Dimension( c ) );
dim_factor := Dimension( object_product_list[ projection_number ] );
direct_product := QVectorSpace( dim );
projection_in_factor := HomalgZeroMatrix( dim_pre, dim_factor, VECTORSPACES_FIELD );
projection_in_factor := UnionOfRows( projection_in_factor,
HomalgIdentityMatrix( dim_factor, VECTORSPACES_FIELD ) );
projection_in_factor := UnionOfRows( projection_in_factor,
HomalgZeroMatrix( dim_post, dim_factor, VECTORSPACES_FIELD ) );
return QVectorSpaceMorphism( direct_product, projection_in_factor, object_product_list[ projection_number ] );
end;
AddProjectionInFactorOfDirectSum( vecspaces, projection_in_factor_of_direct_sum );
##
universal_morphism_into_direct_sum := function( diagram, sink )
local dim, direct_product, components, universal_morphism, morphism;
components := sink;
dim := Sum( components, c -> Dimension( Range( c ) ) );
direct_product := QVectorSpace( dim );
universal_morphism := UnderlyingMatrix( sink[1] );
for morphism in components{ [ 2 .. Length( components ) ] } do
universal_morphism := UnionOfColumns( universal_morphism, UnderlyingMatrix( morphism ) );
od;
return QVectorSpaceMorphism( Source( sink[1] ), universal_morphism, direct_product );
end;
AddUniversalMorphismIntoDirectSum( vecspaces, universal_morphism_into_direct_sum );
#################################
##
## Finalize category
##
#################################
Finalize( vecspaces );
#################################
##
## Test the basic operations
##
#################################
# Creating objects and morphisms
V := QVectorSpace( 2 );
CapCategory( V );
Dimension( V );
W := QVectorSpace( 3 );
alpha := QVectorSpaceMorphism( V, [ [ 1, 1, 1 ], [ -1, -1, -1 ] ], W );
CapCategory( alpha );
UnderlyingMatrix( alpha );
# Testing the KernelEmbedding
KernelEmbedding( alpha );
KernelObject( alpha );
# Computing an intersection
M1 := QVectorSpace( 2 );
M2 := QVectorSpace( 2 );
N := QVectorSpace( 3 );
iota1 := QVectorSpaceMorphism( M1, [ [ 1, 0, 0 ], [ 0, 1, 1 ] ], N );
IsMonomorphism( iota1 );
iota2 := QVectorSpaceMorphism( M2, [ [ 1, 1, 0 ], [ 0, 0, 1 ] ], N );
IsMonomorphism( iota2 );
pi1 := ProjectionInFactorOfDirectSum( [ M1, M2 ], 1 );
pi2 := ProjectionInFactorOfDirectSum( [ M1, M2 ], 2 );
lambda := PostCompose( iota1, pi1 );
phi := lambda - PostCompose( iota2, pi2 );
kappa := KernelEmbedding( phi );
PostCompose( lambda, kappa );
PreCompose( ProjectionInFactorOfFiberProduct( [ iota1, iota2 ], 1 ), iota1 );
#################################
##
## A function for computing homology
##
#################################
HomologyObject := function( alpha, beta )
local iota, lambda;
if not IsZero( PreCompose( alpha, beta ) ) then
Error( "the composition of the given morphisms has to be zero" );
fi;
iota := ImageEmbedding( alpha );
lambda := KernelLift( beta, iota );
return CokernelObject( lambda );
end;
HomologyObject( alpha, CokernelProjection( alpha ) );
HomologyObject( KernelEmbedding( alpha ), alpha );
pi1 := ProjectionInFactorOfDirectSum( [ V, V, V ], 1 );
iota1 := InjectionOfCofactorOfDirectSum( [ V, V, V ], 1 );
pi2 := ProjectionInFactorOfDirectSum( [ V, V, V ], 2 );
iota2 := InjectionOfCofactorOfDirectSum( [ V, V, V ], 2 );
HomologyObject( PreCompose( pi1, iota1 ), PreCompose( pi2, iota2 ) );
|
[STATEMENT]
lemma length_map_upt: "length (map f [a..<b]) = b - a"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. length (map f [a..<b]) = b - a
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. length (map f [a..<b]) = b - a
[PROOF STEP]
have "length [a..<b] = b - a"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. length [a..<b] = b - a
[PROOF STEP]
using length_upt
[PROOF STATE]
proof (prove)
using this:
length [?i..<?j] = ?j - ?i
goal (1 subgoal):
1. length [a..<b] = b - a
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
length [a..<b] = b - a
goal (1 subgoal):
1. length (map f [a..<b]) = b - a
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
length [a..<b] = b - a
goal (1 subgoal):
1. length (map f [a..<b]) = b - a
[PROOF STEP]
have "length (map f [a..<b]) = length [a..<b]"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. length (map f [a..<b]) = length [a..<b]
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
length (map f [a..<b]) = length [a..<b]
goal (1 subgoal):
1. length (map f [a..<b]) = b - a
[PROOF STEP]
ultimately
[PROOF STATE]
proof (chain)
picking this:
length [a..<b] = b - a
length (map f [a..<b]) = length [a..<b]
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
length [a..<b] = b - a
length (map f [a..<b]) = length [a..<b]
goal (1 subgoal):
1. length (map f [a..<b]) = b - a
[PROOF STEP]
by argo
[PROOF STATE]
proof (state)
this:
length (map f [a..<b]) = b - a
goal:
No subgoals!
[PROOF STEP]
qed
|
Require Import Crypto.Arithmetic.PrimeFieldTheorems.
Require Import Crypto.Specific.solinas32_2e251m9_10limbs.Synthesis.
(* TODO : change this to field once field isomorphism happens *)
Definition freeze :
{ freeze : feBW_tight -> feBW_limbwidths
| forall a, phiBW_limbwidths (freeze a) = phiBW_tight a }.
Proof.
Set Ltac Profiling.
Time synthesize_freeze ().
Show Ltac Profile.
Time Defined.
Print Assumptions freeze.
|
`is_element/FFbar` := (N::posint) -> (A::set) -> proc(Q)
local B,C,P,P1;
global reason;
P := {op(`list_elements/big_subsets`(A))};
if not(is_table_on(P)(Q)) then
reason := [convert(procname,string),"Q is not a table indexed by big subsets of A",eval(Q)];
return false;
fi;
for B in P do
if not(`is_element/SCP`(N)(B)(Q[B])) then
reason := [convert(procname,string),"Q[B] is not in SCP(N)(B)",eval(Q[B]),N,B,reason];
return false;
fi;
P1 := {op(`list_elements/big_subsets`(B))} minus {B};
for C in P1 do
if not(`is_element/SCP2`(N)(B,C)([Q[B],Q[C]])) then
reason := [convert(procname,string),"(Q[B],Q[C]) is not in SCP(N)(B,C)",
eval(Q[B]),eval(Q[C]),N,B,C,reason];
return false;
fi;
od;
od;
return true;
end;
######################################################################
`is_equal/FFbar` := (N::posint) -> (A::set) -> proc(Q0,Q1)
local B,P;
global reason;
P := `list_elements/big_subsets`(A);
for B in P do
if not(`is_equal/SCP`(N)(B)(Q0[B],Q1[B])) then
reason := [convert(procname,string),"Q0[B] <> Q1[B]",B,Q0[B],Q1[B],reason];
return false;
fi;
od;
return true;
end;
######################################################################
`is_leq/FFbar` := (N) -> (A) -> proc(Q0,Q1)
local B,P;
global reason;
P := `list_elements/big_subsets`(A);
for B in P do
if not(`is_leq/SCP`(N)(B)(Q0[B],Q1[B])) then
reason := [convert(procname,string),"Q0[B] is not <= Q1[B]",B,Q0[B],Q1[B],reason];
return false;
fi;
od;
return true;
end;
######################################################################
`build/FFbar` := (N::posint) -> (A::set) -> proc(RTTu)
local R,TT,i,j,k,n,m,ij,A0,C,TT1,TTS,TTc,T,Q,Q0,D,u,v,M,L,M0,M1,RT,BS,UU,U,a,b;
R,TT,u := op(RTTu);
n := nops(A);
A0 := {seq(i,i=1..n)};
C := children_map(A0)(TT);
TT1 := select(T -> nops(T) > 1,TT);
TTS := sort(map(T -> sort([op(T)]),[op(TT1)]));
Q := table():
for T in TTS do
D := sort([op(C[{op(T)}])],(U,V) -> (U[1] <= V[1]));
D := map(U -> sort([op(U)]),D);
m := nops(D);
v := [seq(u[max(D[i])],i=1..nops(D)-1)];
M := [seq(i,i=1..m)];
Q0 := `build/ICP`(N)({op(M)})([M,v]);
L := [];
for k from 1 to N do
M0 := Q0[k];
M1 := NULL;
for ij in M0 do
i,j := op(ij);
M1 := M1,seq(seq([R[a],R[b]],a in D[i]),b in D[j]);
od;
M1 := {M1};
L := [op(L),M1];
od;
RT := {seq(R[a],a in T)};
Q[RT] := L;
od:
TT1 := map(op,{indices(Q)});
BS := {op(`list_elements/big_subsets`(A))};
TTc := BS minus TT1;
for T in TTc do
UU := select(U -> T minus U = {},TT1);
m := min(map(nops,UU));
UU := select(U -> nops(U) = m,UU);
U := UU[1];
Q[T] := `res/ICP`(N)(U,T)(Q[U]);
od:
return eval(Q);
end:
`unbuild/FFbar` := (N::posint) -> (A::set) -> proc(Q)
local n,R,r,i,TT,TT0,u,a,b,UU,d,U,P,k;
n := nops(A);
R := `totalise/FFbar/ord`(N)(A)(Q);
r := table():
for i from 1 to n do r[R[i]] := i; od;
TT := `critical_tree/FFbar`(N)(A)(Q);
TT0 := map(U -> map(a -> r[a],U),TT);
u := NULL;
for i from 1 to n-1 do
a := R[i];
b := R[i+1];
UU := select(U -> member(a,U) and member(b,U),TT);
d := min(map(nops,UU));
UU := select(U -> nops(U) = d,UU);
U := UU[1];
P := Q[U];
k := 1;
while k < N and member([b,a],P[k]) do k := k+1; od;
u := u,k;
od;
u := [u];
return [R,TT0,u];
end:
######################################################################
`is_interior/FFbar` := (N::posint) -> (A::set) -> proc(Q)
return `is_separated/SCP`(N)(A)(Q[N]);
end;
######################################################################
`inc/ICP/FFbar` := (N::posint) -> (A::set) -> proc(Q0)
local TT,T,Q;
Q := table();
TT := `list_elements/big_subsets`(A);
for T in TT do
Q[T] := `res/ACP`(N)(A,T)(Q0);
od;
return eval(Q);
end:
######################################################################
`res/FFbar/ICP` := (N::posint) -> (A::set) -> proc(Q)
if not(`is_interior/FFbar`(N)(A)(Q)) then
return FAIL;
fi;
return Q[A];
end;
######################################################################
# The functions below could be made much more efficient, but for
# the moment we will stick with the most obvious approach.
`is_critical/FFbar` := (N::posint) -> (A::set) -> proc(Q,U)
local UU,a;
UU := `top/autorel`(U);
for a in A minus U do
if UU minus Q[U union {a}][N] <> {} then
return false;
fi;
od;
return true;
end;
##################################################
`critical_tree/FFbar` := (N::posint) -> (A::set) -> proc(Q)
select(U -> `is_critical/FFbar`(N)(A)(Q,U),
{op(`list_elements/nonempty_subsets`(A))});
end;
##################################################
`rank/FFbar` := (N::posint) -> (A::set) -> proc(Q)
local TT,TT1,T;
TT := `critical_tree/FFbar`(N)(A)(Q);
TT1 := select(T -> nops(T) > 1,TT);
return add(`rank/ACP`(N)(T)(Q[T])-1,T in TT1);
end;
##################################################
`totalise/FFbar/ord` := (N::posint) -> (A::set) -> proc(Q)
local C,P,k;
C := proc(a,b)
option remember;
if a = b then return true; fi;
P := Q[{a,b}];
for k from 1 to N do
if not(member([b,a],P[k])) then
return true;
fi;
if not(member([a,b],P[k])) then
return false;
fi;
od;
return true;
end:
return sort([op(A)],C);
end:
##################################################
`random_element/FFbar` := (N::posint) -> (A::set) -> proc()
local n,R,TT,u,i;
n := nops(A);
R := `random_element/ord`(A)();
TT := `random_element/standard_stasheff_trees`(n)();
u := [seq(rand(1..N)(),i=1..n-1)];
return `build/FFbar`(N)(A)([R,TT,u]);
end:
`list_elements/FFbar` := (N::posint) -> proc(A::set)
local n,RR,R,U,u,i,j,TTT,TT;
n := nops(A);
RR := `list_elements/ord`(A);
U := [[]];
for i from 1 to n-1 do
U := [seq(seq([op(u),j],j=1..N),u in U)];
od;
TTT := `list_elements/standard_stasheff_trees`(n);
return([
seq(seq(seq(
`build/FFbar`(N)(A)([R,TT,u]),
u in U),TT in TTT),R in RR)
]);
end:
`count_elements/FFbar` := (N::posint) -> (A::set) ->
nops(A)! * N^(nops(A)-1) *
`count_elements/standard_stasheff_trees`(nops(A));
`list_ordered_elements/FFbar` := (N::posint) -> proc(A::{set,list})
local n,A0,R,U,u,i,j,TTT,TT;
n := nops(A);
A0 := {op(A)};
R := [op(A)];
U := [[]];
for i from 1 to n-1 do
U := [seq(seq([op(u),j],j=1..N),u in U)];
od;
TTT := `list_elements/standard_stasheff_trees`(n);
return([
seq(seq(
`build/FFbar`(N)(A)([R,TT,u]),
u in U),TT in TTT)
]);
end:
`count_ordered_elements/FFbar` := (N::posint) -> (A::set) ->
N^(nops(A)-1) *
`count_elements/standard_stasheff_trees`(nops(A));
######################################################################
`eta/FFbar` := (N::posint) -> (A::set) -> `if`(nops(A) = 1,table(),FAIL);
######################################################################
`gamma/FFbar` := (N::posint) -> (A::set,B::set) -> (p) -> proc(Q,P)
local R,TT,T,U,M,i;
R := table();
TT := `list_elements/big_subsets`(A);
for T in TT do
U := map(a -> p[a],T);
if nops(U) > 1 then
M := Q[U];
R[T] := [seq(select(u -> member([p[u[1]],p[u[2]]],M[i]),`top/autorel`(T)),i=1..N)];
else
R[T] := eval(P[op(U)][T]);
fi;
od;
return eval(R);
end;
######################################################################
`mu/Fbar/FFbar` := (N::posint) -> (A::set) -> proc(x)
local Q,TT,T;
Q := table();
TT := `list_elements/big_subsets`(A);
for T in TT do
Q[T] := `mu/W/ACP`(N)(T)(x[T]);
od;
return eval(Q);
end;
######################################################################
`sigma/Fbar/FFbar` := (N::posint) -> (A::set) -> proc(Q)
local x,TT,T;
x := table();
TT := `list_elements/big_subsets`(A);
for T in TT do
x[T] := `sigma/ACP/W`(N)(T)(Q[T]);
od;
return eval(x);
end;
######################################################################
`describe/FFbar` := (N::posint) -> (A::set) -> proc(Q)
local TT,TT1,T,s;
TT := `critical_tree/FFbar`(N)(A)(Q);
TT1 := select(T -> nops(T) > 1,TT);
TT1 := sort([op(TT1)],(U,V) -> nops(U) >= nops(V));
s := "";
for T in TT1 do
if s <> "" then s := cat(s,"\n"); fi;
s := cat(s,`describe/ACP`(N)(T)(Q[T]));
od:
return s;
end:
|
module TestTasks
# using Revise
using Test
using MLJBase
XX = (Crim = [0.00632, 0.02731, 0.02729],
Zn = [18.0, 0.0, 0.0],
Indus = [2.31, 7.07, 7.07],
Chas = [0, 0, 0],
NOx = [0.538, 0.469, 0.469],
Rm = [6.575, 6.421, 7.185],
Age = [65.2, 78.9, 61.1],
Dis = [4.09, 4.9671, 4.9671],
Rad = [1.0, 2.0, 2.0],
Tax = [296.0, 242.0, 242.0],
PTRatio = [15.3, 17.8, 17.8],
Black = [396.9, 396.9, 392.83],
LStat = [4.98, 9.14, 4.03],
MedV = [24.0, 21.6, 34.7],)
allnames = collect(MLJBase.schema(XX).names)
task = SupervisedTask(data=XX, target=[:Crim, :Zn], is_probabilistic=true, ignore=:Dis)
y = y_(task);
X = X_(task);
t = MLJBase.schema(X);
@test collect(t.names) == filter(allnames) do ftr
!(ftr in [:Crim, :Zn, :Dis])
end
y1 = [(XX.Crim[i], XX.Zn[i]) for i in 1:length(y)];
@test y == y1
task = SupervisedTask(data=XX, target=:Crim, is_probabilistic=true, ignore=[:Dis, :Rm])
y = y_(task);
X = X_(task);
t = MLJBase.schema(X)
@test collect(t.names) == filter(allnames) do ftr
!(ftr in [:Crim, :Dis, :Rm])
end
task = UnsupervisedTask(data=XX, ignore=[:Dis, :Rm])
X = task();
t = MLJBase.schema(X)
@test collect(t.names) == filter(allnames) do ftr
!(ftr in [:Dis, :Rm])
end
task = UnsupervisedTask(data=XX, ignore=:Rm)
X = X_(task);
t = MLJBase.schema(X)
@test collect(t.names) == filter(allnames) do ftr
!(ftr in [:Rm, ])
end
# single feature for input:
task = UnsupervisedTask(data=(Crim=XX.Crim,))
@test task.X == XX.Crim
task = SupervisedTask(data=MLJBase.selectcols(XX, [:Crim, :Zn]), target=:Zn, is_probabilistic=true)
@test task.X == XX.Crim
@test task.y == XX.Zn
end # module
true
|
\documentclass[]{article}
%opening
\title{MTH 343 Numerical Analysis: Lecture 12}
\author{Sheikh Abdul Raheem Ali}
\begin{document}
\maketitle
\section*{Interpolation}
\begin{enumerate}
\item Solving Systems of Equations
\item Lagrange Polynomials
\end{enumerate}
\begin{tabular}{c c}
$ x_i $ & $ f_i $ \\
$ x_0 $ & $ f_0 $ \\
$ x_1 $ & $ f_1 $ \\
$ x_2 $ & $ f_2 $
\end{tabular}
\[ P_2(x) = \frac{(x-x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}f_0 + \frac{(x-x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)}f_1 + \frac{(x-x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}f_2 \]
\begin{tabular}{c c}
$ x_i $ & $ f_i $ \\
$ 0 $ & $ 1 $ \\
$ -1 $ & $ 2 $ \\
$ 2 $ & $ 3 $
\end{tabular}
\[ P_2(x) = \frac{(x+1)(x - 2)}{(0 + 1)(0 - 2)} + \frac{(x-0)(x - 2)}{(-1 - 0)(-1 - 2)}2 + \frac{(x-0)(x + 1)}{(2 - 0)(2 + 1)}3 \]
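Expanding and collecting terms is a useful sanity check; here it gives
\[ P_2(x) = \tfrac{2}{3}x^2 - \tfrac{1}{3}x + 1, \]
which indeed reproduces $P_2(0) = 1$, $P_2(-1) = 2$, and $P_2(2) = 3$.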
$ P_n(x) = \sum_{i=0}^{n} L_{n,i}(x)\,f(x_i) $
\section*{Lagrange Polynomial Error Function}
\[ E(x) = |f(x) - P_n(x)| = \frac{\left|(x-x_0)(x-x_1)\cdots(x-x_n)\,f^{(n+1)}(\xi)\right|}{(n+1)!} \]
where $\xi$ is some point in the smallest interval containing $x_0, x_1, \dots, x_n$ and $x$.
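For example, if $f(x) = \sin x$ and we interpolate at three nodes ($n = 2$), then $|f'''(\xi)| \le 1$ for every $\xi$, so $E(x) \le \frac{|(x-x_0)(x-x_1)(x-x_2)|}{3!}$ without needing to locate $\xi$.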
\end{document}
|
import numpy as np
from findpairs import findpairs
from findtriplets import findtriplets
# Solution for advent of code 2020 #1
# Load a list as a numpy array, call "findpair" to find which pairs have a joint sum of
# 2020 and then multiplies each found pair
# first a test
testlist = np.array([1, 2019, 2, 1, 2018, 4, 5, 6])
matched_pairs = findpairs(testlist,2020)
matched_triplets = findtriplets(testlist,2020)
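# Expected by hand (assuming the helpers return one entry per match): the pairs
# 1 + 2019 and 2 + 2018 sum to 2020, as does the triplet 1 + 1 + 2018.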
# test seems fine, now with real data
lista = np.loadtxt('input.txt')
matched_pairs = findpairs(lista,2020)
matched_triplets = findtriplets(lista,2020)
# compute the multiplication and outputs it
for n in matched_pairs:
print(n[0]*n[1])
for n in matched_triplets:
print(n[0]*n[1]*n[2])
|
\documentclass[15pt]{beamer}
\usetheme{Boadilla}
\usepackage{tikz}
\usepackage{graphicx}
\usepackage{amsmath}
\usetikzlibrary{shapes.geometric,arrows, positioning, fit}
\title{Presentation Arrowhead framework}
\author{}
\institute{Luleå University of Technology}
\date{\today}
\begin{document}
\begin{frame}
\titlepage
\end{frame}
\begin{frame}
\frametitle{Overview}
\tableofcontents
\end{frame}
\section{Core systems}
\subsection{Service Registry}
\subsection{Orchestrator}
\subsection{Authorization system}
\input{frames/core_systems}
\section{Service Registry}
\input{frames/service_registry}
\section{Authorization system}
\input{frames/authorization_system}
\section{Orchestrator}
\input{frames/orchestrator}
\section{Sequence diagram}
\input{frames/sequence_diagram}
\section{Implementation}
\input{frames/implementation}
\section{Advantages and purpose}
\input{frames/purpose}
\begin{frame}
\begin{center}
\Huge Questions?
\end{center}
\end{frame}
\end{document}
|
#' List unique values of `data.table` object columns
#'
#' Compute a named list whose element `i` corresponds to the unique values of
#' the data.table `DT` column `z[i]`. If `exclude_na` is `NULL` or `FALSE`,
#' missing values are included in the list of unique values. If `exclude_na =
#' TRUE`, missing values are excluded for all columns in `z`. If `exclude_na` is
#' a subset of `z`, the missing values of those columns will be excluded.
#'
#'
#' @param DT the data.table object
#' @param z the column names whose unique values are to be computed
#' @param exclude_na either a logical value or a subset of `z` values
#'
#' @import data.table
.lunique <- function(DT, z, exclude_na = NULL) {
exclude_na <- .excludena_algebra(z, exclude_na)
l <- lapply(z, function(zn) {
u <- DT[, unique(get(zn))]
if (zn %in% exclude_na) u <- na.exclude(u)
return(u)
})
names(l) <- z
return(l)
}
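# A minimal usage sketch (hypothetical data):
#   DT <- data.table::data.table(a = c(1, 1, NA), b = c("x", "y", NA))
#   .lunique(DT, c("a", "b"), exclude_na = "a")
#   # => list(a = 1, b = c("x", "y", NA)); NA is dropped only for column "a"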
.excludena_algebra <- function(vlist, vexclude) {
  # isTRUE/isFALSE avoid invalid `if` conditions when vexclude is a character vector
  if (is.null(vexclude) || isFALSE(vexclude)) return(NULL)
  if (isTRUE(vexclude)) return(vlist)
  return(intersect(vlist, vexclude))
}
#' Given a list of IDates, compute associated ages.
#'
#' @param vdate0 `IDate` vector
#' @param date1 `IDate` vector of length one, or of the same length as `vdate0`
#'
#' @export
#' @import data.table
age_calc <- function(vdate0, date1 = as.Date(Sys.time())) {
y1 <- data.table::year(date1)
m1 <- data.table::month(date1)
d1 <- data.table::mday(date1)
y0 <- data.table::year(vdate0)
m0 <- data.table::month(vdate0)
d0 <- data.table::mday(vdate0)
idx0 <- m0 > m1
idx1 <- m1 == m0
idx2 <- d0 > d1
ret0 <- y1 - y0
ret0[!is.na(idx0) & idx0] <- ret0[!is.na(idx0) & idx0] - 1
ret0[!is.na(idx1) & !is.na(idx2) & idx1 & idx2] <-
ret0[!is.na(idx1) & !is.na(idx2) & idx1 & idx2] - 1
return(ret0)
}
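# A minimal usage sketch (hypothetical dates): a person born 2000-06-15 is
# 19 on 2020-06-14 and turns 20 on 2020-06-15.
#   age_calc(data.table::as.IDate("2000-06-15"), data.table::as.IDate("2020-06-14"))  # 19
#   age_calc(data.table::as.IDate("2000-06-15"), data.table::as.IDate("2020-06-15"))  # 20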
#' Create a maximal ID based on 2 id'ing schemes
#'
#' @param DT the data.table object
#' @param c1 a string with the column name for the first id
#' @param c2 a string with the column name for the second id
#' @param name_join a string with the name of the new joint-id column
#' @param maxiter the maximum number of propagation iterations before giving up
#'
#' @export
#' @import data.table
union_ids <- function(DT, c1, c2, name_join, maxiter=50) {
if (name_join %in% colnames(DT)) name_join <- paste0(name_join, "_new")
DT[!is.na(get(c1)), .c1 := .GRP, c1]
DT[!is.na(get(c2)), .c2 := .GRP, c2]
max1 <- DT[, max(.c1, na.rm = TRUE)]
max2 <- DT[, max(.c2, na.rm = TRUE)]
DT[is.na(.c1), .c1 := max1 + .I]
DT[is.na(.c2), .c2 := max2 + .I]
DT[, .join_sv := seq_len(.N)]
DT[!is.na(.c1), .join := min(.join_sv), .c1]
sep <- c(".c1" = ".c2",
".c2" = ".c1")
i <- 1
x0 <- ".c2"
allsame <- DT[, all(.join == .join_sv)]
while (!allsame && i <= maxiter) {
DT[!is.na(get(x0)), .join := min(.join), x0]
allsame <- DT[, all(.join_sv == .join, na.rm = TRUE)]
DT[, .join_sv := .join][]
x0 <- sep[x0]
i <- i + 1
}
if (!allsame) {
warning(glue::glue("[union_ids] Could not join IDs after {i} iterations; consider changing the `maxiter` parameter"))
}
DT[is.na(get(c1)) & is.na(get(c2)), .join := NA]
DT[, c(".join_sv", ".c1", ".c2") := NULL]
setnames(DT, ".join", name_join)
return(DT)
}
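# A minimal usage sketch (hypothetical ids): rows 1-2 share the first id and
# rows 2-3 share the second, so the propagation loop assigns all three rows
# one joint id.
#   DT <- data.table::data.table(id1 = c("a", "a", NA), id2 = c(NA, "x", "x"))
#   union_ids(DT, "id1", "id2", "id")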
|
!
! -- MAGMA (version 2.1.0) --
! Univ. of Tennessee, Knoxville
! Univ. of California, Berkeley
! Univ. of Colorado, Denver
! @date August 2016
!
! @generated from testing/testing_zgetrf_f.f90, normal z -> c, Tue Aug 30 09:39:19 2016
!
program testing_cgetrf_f
use magma
external slamch, clange, cgemm, cgesv, cgetrs
real clange, slamch
real :: rnumber(2), Anorm, Bnorm, Xnorm, Rnorm
real, allocatable :: work(:)
complex, allocatable :: A(:), B(:), X(:)
complex, allocatable :: A2(:)
integer, allocatable :: ipiv(:)
complex :: c_one, c_neg_one
integer :: i, n, info, lda
integer :: nrhs
real(kind=8) :: flops, t, tstart, tend
PARAMETER ( nrhs = 1, c_one = 1., c_neg_one = -1. )
call cublas_init()
n = 2048
lda = n
!------ Allocate CPU memory
allocate(A(lda*n))
allocate(A2(lda*n))
allocate(B(lda*nrhs))
allocate(X(lda*nrhs))
allocate(ipiv(n))
allocate(work(n))
!---- Initialize the matrix
do i=1,n*n
call random_number(rnumber)
A(i) = rnumber(1)
end do
A2(:) = A(:)
do i=1,n*nrhs
call random_number(rnumber)
B(i) = rnumber(1)
end do
X(:) = B(:)
!---- Call magma LU ----------------
call magmaf_wtime(tstart)
call magmaf_cgetrf(n, n, A, lda, ipiv, info)
call magmaf_wtime(tend)
if ( info .ne. 0 ) then
write(*,*) "Info : ", info
end if
!---- Call solve -------------
call cgetrs('n', n, nrhs, A, lda, ipiv, X, lda, info)
if ( info .ne. 0 ) then
write(*,*) "Info : ", info
end if
!---- Compare the two results ------
Anorm = clange('I', n, n, A2, lda, work)
Bnorm = clange('I', n, nrhs, B, lda, work)
Xnorm = clange('I', n, nrhs, X, lda, work)
call cgemm('n', 'n', n, nrhs, n, c_one, A2, lda, X, lda, c_neg_one, B, lda)
Rnorm = clange('I', n, nrhs, B, lda, work)
write(*,*)
write(*,* ) 'Solving A x = b using LU factorization:'
write(*,105) ' || A || = ', Anorm
write(*,105) ' || b || = ', Bnorm
write(*,105) ' || x || = ', Xnorm
write(*,105) ' || b - A x || = ', Rnorm
flops = 2. * n * n * n / 3.
t = tend - tstart
write(*,*) ' Gflops = ', flops / t / 1e9
write(*,*)
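!---- Normalized backward-error check: scale the residual by
!     (||A||*||x|| + ||b||) * n * eps, so values of order one indicate a
!     numerically correct solve; anything above 60 is flagged as suspicious.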
Rnorm = Rnorm / ( (Anorm*Xnorm+Bnorm) * n * slamch('E') )
if ( Rnorm > 60. ) then
write(*,105) ' Solution is suspicious, ', Rnorm
else
write(*,105) ' Solution is CORRECT'
end if
!---- Free CPU memory
deallocate(A, A2, B, X, ipiv, work)
!---- Free GPU memory
call cublas_shutdown()
105 format((a35,es10.3))
end
|
1960–61, 1974–75, 1976–77, 1993–94, 1995–96
|
module Data.Vect.Extras
import Data.Vect
import Data.Vect.Views
%default total
%access export
foldrM : Monad m => (func : elem -> acc -> m acc) -> (init : acc) -> (input : Vect n elem) -> m acc
foldrM func init input with (snocVect input)
foldrM _ init [] | Empty = pure init
foldrM func init (xs ++ [x]) | Snoc rec = foldrM func !(func x init) xs | rec
-- Apply a monadic function to each element, preserving the input order.
mapM : Monad m => (func : elem -> m r) -> (input : Vect n elem) -> m (Vect n r)
mapM func [] = pure []
mapM func (x :: xs) = do r <- func x
                         rs <- mapM func xs
                         pure (r :: rs)
|
const AVAILABLE_THREADS = Base.RefValue{Channel{Int}}()
# Somehow, fetch doesn't do a very good job at preserving
# stacktraces. So, we catch any error thrown inside spawnbg
# and return it as a CapturedException, and then use checked_fetch to
# rethrow any such exception.
function checked_fetch(future)
value = fetch(future)
value isa Exception && throw(value)
return value
end
"""
spawnbg(f)
Spawn work on any available background thread.
Captures any exception thrown in the thread, to give better stacktraces.
You can use `checked_fetch(spawnbg(f))` to rethrow any exception.
** Warning ** this doesn't compose with other ways of scheduling threads.
So, one should use `spawnbg` exclusively in each Julia process.
"""
function spawnbg(f)
# -1, because we don't spawn on foreground thread 1
nbackground = Threads.nthreads() - 1
if nbackground == 0
# we don't run in threaded mode, so we just run things async
# to not block forever
@warn("No threads available, running in foreground thread")
return @async try
return f()
catch e
# If we don't do this, we get pretty bad stack traces... not sure why!?
return CapturedException(e, Base.catch_backtrace())
end
end
# Initialize dynamically, could also do this in __init__ but it's nice to keep things in one place
if !isassigned(AVAILABLE_THREADS)
# Allocate a Channel with n background threads
c = Channel{Int}(nbackground)
AVAILABLE_THREADS[] = c
# fill queue with available threads
foreach(i -> put!(c, i + 1), 1:nbackground)
end
# take the next free thread... Will block/wait until a thread becomes free
thread_id = take!(AVAILABLE_THREADS[])
return ThreadPools.@tspawnat thread_id begin
try
return f()
catch e
# If we don't do this, we get pretty bad stack traces...
# not sure why something so basic just doesn't work well \_(ツ)_/¯
return CapturedException(e, Base.catch_backtrace())
finally
# Make thread available again after work is done!
put!(AVAILABLE_THREADS[], thread_id)
end
end
end
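# A minimal usage sketch (assuming Julia was started with `--threads N` for N > 1):
#
#     future = spawnbg() do
#         sum(rand(10^6))   # some work, run on a background thread
#     end
#     result = checked_fetch(future)  # rethrows any captured exception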
|
[STATEMENT]
lemma mult_L_star_mult_below:
"(x * L)\<^sup>\<star> * y \<le> y \<squnion> x * L"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (x * L)\<^sup>\<star> * y \<le> y \<squnion> x * L
[PROOF STEP]
by (metis sup_right_isotone mult_assoc mult_right_isotone n_L_below_L star_left_induct)
|
From Coq Require Import ZArith.
Require Import coqutil.Z.Lia.
Require Import coqutil.Z.div_mod_to_equations.
Require Import Coq.Lists.List. Import ListNotations.
Require Import coqutil.Map.Interface coqutil.Map.Properties.
Require Import coqutil.Word.Interface coqutil.Word.Properties.
Require Import riscv.Utility.Monads.
Require Import riscv.Utility.Utility.
Require Import riscv.Platform.Memory.
Require Import riscv.Spec.Machine.
Require Import riscv.Platform.RiscvMachine.
Require Import riscv.Platform.MetricRiscvMachine.
Require Import riscv.Spec.Primitives.
Require Import riscv.Spec.MetricPrimitives.
Require Import riscv.Platform.MetricLogging.
Require Import riscv.Platform.Run.
Require Import riscv.Spec.Execute.
Require Import coqutil.Tactics.Tactics.
Require Import compiler.SeparationLogic.
Require Export coqutil.Word.SimplWordExpr.
Require Import compiler.DivisibleBy4.
Require Import bedrock2.ptsto_bytes.
Require Import bedrock2.Scalars.
Require Import riscv.Utility.Encode.
Require Import riscv.Proofs.EncodeBound.
Require Import riscv.Proofs.DecodeEncode.
Require Import riscv.Platform.MetricSane.
Require Import coqutil.Decidable.
Require Import coqutil.Tactics.Simp.
Require Import riscv.Utility.runsToNonDet.
Require Import coqutil.Datatypes.ListSet.
Import Utility.
Section Go.
Context {width} {BW: Bitwidth width} {word: word.word width} {word_ok: word.ok word}.
Context {Registers: map.map Z word}.
Context {mem: map.map word byte}.
Context {mem_ok: map.ok mem}.
Local Notation RiscvMachineL := MetricRiscvMachine.
Context {M: Type -> Type}.
Context {MM: Monad M}.
Context {RVM: RiscvProgram M word}.
Context {PRParams: PrimitivesParams M MetricRiscvMachine}.
Context {PR: MetricPrimitives PRParams}.
Add Ring wring : (word.ring_theory (word := word))
(preprocess [autorewrite with rew_word_morphism],
morphism (word.ring_morph (word := word)),
constants [word_cst]).
Lemma spec_Bind_det{A B: Type}: forall (initialL: RiscvMachineL)
(post: B -> RiscvMachineL -> Prop) (m: M A) (f : A -> M B) (a: A) (mid: RiscvMachineL),
mcomp_sat m initialL (fun a' mid' => a' = a /\ mid' = mid) ->
mcomp_sat (f a) mid post ->
mcomp_sat (Bind m f) initialL post.
Proof.
intros. eapply spec_Bind. eexists. split; [exact H|]. intros. simpl in *.
destruct H1. subst. assumption.
Qed.
(* redefine mcomp_sat to simplify for the case where no answer is returned *)
Definition mcomp_sat(m: M unit)(initialL: RiscvMachineL)(post: RiscvMachineL -> Prop): Prop :=
mcomp_sat m initialL (fun (_: unit) => post).
Lemma mcomp_sat_weaken: forall initialL m (post1 post2: RiscvMachineL -> Prop),
(forall mach, post1 mach -> post2 mach) ->
mcomp_sat m initialL post1 ->
mcomp_sat m initialL post2.
Proof.
intros. eapply mcomp_sat_weaken; [|eassumption].
simpl. intros _. assumption.
Qed.
(* nicer version of mcomp_sat_weaken which gives you two more hypotheses while proving P -> Q *)
Lemma run1_get_sane: forall iset (P Q: RiscvMachineL -> Prop) mach,
valid_machine mach ->
mcomp_sat (run1 iset) mach P ->
(forall mach': RiscvMachineL,
(exists diff, mach'.(getLog) = diff ++ mach.(getLog)) ->
valid_machine mach' ->
P mach' ->
Q mach') ->
mcomp_sat (run1 iset) mach Q.
Proof.
intros.
pose proof run1_sane as A.
unfold mcomp_sane in A.
specialize A with (1 := H) (2 := H0).
apply proj2 in A.
eapply mcomp_sat_weaken. 2: exact A. cbv beta.
intros. destruct H2 as ((? & (diff & ?)) & ?).
eapply H1; eauto.
Qed.
Lemma runsTo_sane: forall iset (P: RiscvMachineL -> Prop) mach,
runsTo (mcomp_sat (run1 iset)) mach P ->
valid_machine mach ->
runsTo (mcomp_sat (run1 iset)) mach (fun mach' =>
(P mach' /\ exists diff, mach'.(getLog) = diff ++ mach.(getLog)) /\ valid_machine mach').
Proof.
induction 1; intros.
- eapply runsToDone. ssplit; try assumption. exists nil. reflexivity.
- pose proof run1_sane as A.
unfold mcomp_sane in A.
specialize A with (1 := H2) (2 := H).
apply proj2 in A.
eapply runsToStep. 1: exact A.
cbv beta.
intros. destruct H3 as ((? & (diff & ?)) & ?). eapply runsTo_weaken.
+ eapply H1; eassumption.
+ cbv beta. intros. destruct H6 as ((? & (diff' & ?)) & ?).
ssplit; try eassumption.
rewrite H7. rewrite H4. rewrite app_assoc. eexists. reflexivity.
Qed.
(* a nicer version of runsTo_weaken which gives you two more hypotheses while proving P -> Q *)
Lemma runsTo_get_sane: forall iset (P Q: RiscvMachineL -> Prop) mach,
valid_machine mach ->
runsTo (mcomp_sat (run1 iset)) mach P ->
(forall mach': RiscvMachineL,
(exists diff, mach'.(getLog) = diff ++ mach.(getLog)) ->
valid_machine mach' ->
P mach' ->
Q mach') ->
runsTo (mcomp_sat (run1 iset)) mach Q.
Proof.
intros.
eapply runsTo_weaken.
- eapply runsTo_sane; eassumption.
- cbv beta. intros. destruct H2 as ((? & ?) & ?).
eapply H1; assumption.
Qed.
Lemma spec_Bind_unit: forall (initialL: RiscvMachineL)
(mid post: RiscvMachineL -> Prop) (m1: M unit) (m2 : M unit),
mcomp_sat m1 initialL mid ->
(forall middle, mid middle -> mcomp_sat m2 middle post) ->
mcomp_sat (Bind m1 (fun _ => m2)) initialL post.
Proof.
intros. eapply spec_Bind. eexists. split; [exact H|]. intros. simpl in *.
apply H0. assumption.
Qed.
Lemma ExecuteFetchP: forall (addr: word) xAddrs, Execute = Fetch -> isXAddr4 addr xAddrs.
Proof. intros. discriminate. Qed.
Ltac t lem :=
intros;
try (eapply spec_Bind_det; [|eassumption]); (* try because go_step doesn't need Bind *)
apply lem;
rewrite_match;
eauto 10 using ExecuteFetchP.
Lemma go_getRegister: forall (initialL: RiscvMachineL) (x: Z) v post (f: word -> M unit),
valid_register x ->
map.get initialL.(getRegs) x = Some v ->
mcomp_sat (f v) initialL post ->
mcomp_sat (Bind (getRegister x) f) initialL post.
Proof. t spec_getRegister. Qed.
Lemma go_getRegister0: forall (initialL: RiscvMachineL) post (f: word -> M unit),
mcomp_sat (f (ZToReg 0)) initialL post ->
mcomp_sat (Bind (getRegister Register0) f) initialL post.
Proof. t spec_getRegister. Qed.
Lemma go_setRegister: forall (initialL: RiscvMachineL) x v post (f: unit -> M unit),
valid_register x ->
mcomp_sat (f tt) (withRegs (map.put initialL.(getRegs) x v) initialL) post ->
mcomp_sat (Bind (setRegister x v) f) initialL post.
Proof. t spec_setRegister. Qed.
Lemma go_setRegister0: forall (initialL: RiscvMachineL) v post (f: unit -> M unit),
mcomp_sat (f tt) initialL post ->
mcomp_sat (Bind (setRegister Register0 v) f) initialL post.
Proof. t spec_setRegister. Qed.
Lemma go_loadByte: forall (initialL: RiscvMachineL) addr (v: w8) (f: w8 -> M unit) post,
Memory.loadByte initialL.(getMem) addr = Some v ->
mcomp_sat (f v) (updateMetrics (addMetricLoads 1) initialL) post ->
mcomp_sat (Bind (Machine.loadByte Execute addr) f) initialL post.
Proof. t spec_loadByte. Qed.
Lemma go_loadHalf: forall (initialL: RiscvMachineL) addr (v: w16) (f: w16 -> M unit) post,
Memory.loadHalf initialL.(getMem) addr = Some v ->
mcomp_sat (f v) (updateMetrics (addMetricLoads 1) initialL) post ->
mcomp_sat (Bind (Machine.loadHalf Execute addr) f) initialL post.
Proof. t spec_loadHalf. Qed.
Lemma go_loadWord: forall (initialL: RiscvMachineL) addr (v: w32) (f: w32 -> M unit) post,
Memory.loadWord initialL.(getMem) addr = Some v ->
mcomp_sat (f v) (updateMetrics (addMetricLoads 1) initialL) post ->
mcomp_sat (Bind (Machine.loadWord Execute addr) f) initialL post.
Proof. t spec_loadWord. Qed.
Lemma go_loadWord_Fetch: forall (initialL: RiscvMachineL) addr (v: w32) (f: w32 -> M unit) post,
isXAddr4 addr initialL.(getXAddrs) ->
Memory.loadWord initialL.(getMem) addr = Some v ->
mcomp_sat (f v) (updateMetrics (addMetricLoads 1) initialL) post ->
mcomp_sat (Bind (Machine.loadWord Fetch addr) f) initialL post.
Proof. t spec_loadWord. Qed.
Lemma go_loadDouble: forall (initialL: RiscvMachineL) addr (v: w64) (f: w64 -> M unit) post,
Memory.loadDouble initialL.(getMem) addr = Some v ->
mcomp_sat (f v) (updateMetrics (addMetricLoads 1) initialL) post ->
mcomp_sat (Bind (Machine.loadDouble Execute addr) f) initialL post.
Proof. t spec_loadDouble. Qed.
Lemma go_storeByte: forall (initialL: RiscvMachineL) kind addr v m' post (f: unit -> M unit),
Memory.storeByte initialL.(getMem) addr v = Some m' ->
mcomp_sat (f tt) (withXAddrs (invalidateWrittenXAddrs 1 addr initialL.(getXAddrs))
(withMem m' (updateMetrics (addMetricStores 1) initialL))) post ->
mcomp_sat (Bind (Machine.storeByte kind addr v) f) initialL post.
Proof. t spec_storeByte. Qed.
Lemma go_storeHalf: forall (initialL: RiscvMachineL) kind addr v m' post (f: unit -> M unit),
Memory.storeHalf initialL.(getMem) addr v = Some m' ->
mcomp_sat (f tt) (withXAddrs (invalidateWrittenXAddrs 2 addr initialL.(getXAddrs))
(withMem m' (updateMetrics (addMetricStores 1) initialL))) post ->
mcomp_sat (Bind (Machine.storeHalf kind addr v) f) initialL post.
Proof. t spec_storeHalf. Qed.
Lemma go_storeWord: forall (initialL: RiscvMachineL) kind addr v m' post (f: unit -> M unit),
Memory.storeWord initialL.(getMem) addr v = Some m' ->
mcomp_sat (f tt) (withXAddrs (invalidateWrittenXAddrs 4 addr initialL.(getXAddrs))
(withMem m' (updateMetrics (addMetricStores 1) initialL))) post ->
mcomp_sat (Bind (Machine.storeWord kind addr v) f) initialL post.
Proof. t spec_storeWord. Qed.
Lemma go_storeDouble: forall (initialL: RiscvMachineL) kind addr v m' post (f: unit -> M unit),
Memory.storeDouble initialL.(getMem) addr v = Some m' ->
mcomp_sat (f tt) (withXAddrs (invalidateWrittenXAddrs 8 addr initialL.(getXAddrs))
(withMem m' (updateMetrics (addMetricStores 1) initialL))) post ->
mcomp_sat (Bind (Machine.storeDouble kind addr v) f) initialL post.
Proof. t spec_storeDouble. Qed.
Lemma go_getPC: forall (initialL: RiscvMachineL) (f: word -> M unit) post,
mcomp_sat (f initialL.(getPc)) initialL post ->
mcomp_sat (Bind getPC f) initialL post.
Proof. t spec_getPC. Qed.
Lemma go_setPC: forall (initialL: RiscvMachineL) v post (f: unit -> M unit),
mcomp_sat (f tt) (withNextPc v (updateMetrics (addMetricJumps 1) initialL)) post ->
mcomp_sat (Bind (setPC v) f) initialL post.
Proof.
intros.
t (spec_setPC initialL v (fun a' mid' => a' = tt /\
mid' = withNextPc v (updateMetrics (addMetricJumps 1) initialL))).
Qed.
Lemma go_endCycleNormal: forall (initialL: RiscvMachineL) (post: RiscvMachineL -> Prop),
post (withPc initialL.(getNextPc)
(withNextPc (word.add initialL.(getNextPc) (word.of_Z 4))
(updateMetrics (addMetricInstructions 1) initialL))) ->
mcomp_sat endCycleNormal initialL post.
Proof. t spec_endCycleNormal. Qed.
Lemma go_done: forall (initialL: RiscvMachineL) (post: RiscvMachineL -> Prop),
post initialL ->
mcomp_sat (Return tt) initialL post.
Proof. intros. apply spec_Return. exact H. Qed.
Lemma go_left_identity{A: Type}: forall (initialL: RiscvMachineL) post a
(f : A -> M unit),
mcomp_sat (f a) initialL post ->
mcomp_sat (Bind (Return a) f) initialL post.
Proof.
intros. rewrite left_identity. assumption.
Qed.
Lemma go_right_identity: forall (initialL: RiscvMachineL) post
(m: M unit),
mcomp_sat m initialL post ->
mcomp_sat (Bind m Return) initialL post.
Proof.
intros. rewrite right_identity. assumption.
Qed.
Lemma go_associativity{A B: Type}: forall (initialL: RiscvMachineL) post
(m: M A)
(f : A -> M B) (g : B -> M unit),
mcomp_sat (Bind m (fun x : A => Bind (f x) g)) initialL post ->
mcomp_sat (Bind (Bind m f) g) initialL post.
Proof.
intros. rewrite associativity. assumption.
Qed.
Local Arguments Z.of_nat: simpl never.
Local Arguments Z.mul: simpl never.
Local Arguments Z.add: simpl never.
Definition unchecked_store_program(addr: word)(p: list Decode.Instruction)(m: mem): mem :=
unchecked_store_byte_list addr (Z32s_to_bytes (List.map encode p)) m.
Lemma unchecked_store_byte_list_None: forall (l: list byte) (z: Z) m (addr: word),
0 < z ->
z + Z.of_nat (length l) < 2 ^ width ->
map.get m addr = None ->
map.get (unchecked_store_byte_list (word.add addr (word.of_Z z)) l m) addr = None.
Proof.
intros. unfold unchecked_store_byte_list, unchecked_store_bytes.
apply putmany_of_footprint_None; try assumption; try blia.
Qed.
Fixpoint in_tuple{T: Type}(a: T){n: nat}: HList.tuple T n -> Prop :=
match n with
| O => fun _ => False
| S n' => fun '(PrimitivePair.pair.mk t ts) => a = t \/ in_tuple a ts
end.
Lemma ptsto_bytes_putmany_of_tuple: forall n addr vs (R: mem -> Prop) m,
Z.of_nat n < 2 ^ width ->
R m ->
(forall k, in_tuple k (footprint addr n) -> map.get m k = None) ->
(ptsto_bytes n addr vs * R)%sep (map.putmany_of_tuple (footprint addr n) vs m).
Proof.
assert (2 ^ width > 0) as Gz. {
destruct width_cases as [E | E]; rewrite E; reflexivity.
}
induction n; intros.
- simpl. unfold ptsto_bytes. destruct vs. simpl. apply sep_emp_l. auto.
- simpl. unfold ptsto_bytes. destruct vs as [v vs].
simpl.
replace (Z.of_nat (S n)) with (1 + Z.of_nat n) in H by blia.
match goal with
| |- (?A * ?B * ?C)%sep ?m => assert ((A * (B * C))%sep m); [|ecancel_assumption]
end.
eapply sep_on_undef_put.
+ apply putmany_of_footprint_None; try blia.
eapply H1.
simpl. left. reflexivity.
+ apply IHn; blia || assumption || idtac.
intros. eapply H1.
simpl. right. assumption.
Qed.
Lemma ptsto_bytes_putmany_of_tuple_empty: forall n (addr: word) vs,
Z.of_nat n < 2 ^ width ->
ptsto_bytes n addr vs (map.putmany_of_tuple (footprint addr n) vs map.empty).
Proof.
induction n; intros.
- cbv. auto.
- simpl. unfold ptsto_bytes. destruct vs as [v vs].
simpl.
replace (Z.of_nat (S n)) with (1 + Z.of_nat n) in H by blia.
eapply sep_on_undef_put.
+ apply putmany_of_footprint_None; try blia.
apply map.get_empty.
+ apply IHn. blia.
Qed.
Lemma ptsto_bytes_array: forall (l: list byte) (addr: word),
iff1 (array ptsto (word.of_Z 1) addr l)
(ptsto_bytes (length l) addr (HList.tuple.of_list l)).
Proof.
induction l; intros.
- simpl. reflexivity.
- simpl. unfold ptsto_bytes. simpl. apply iff1_sep_cancel. apply IHl.
Qed.
Lemma array_on_undef_store_byte_list: forall addr l (R: mem -> Prop) m,
Z.of_nat (length l) < 2 ^ width ->
R m ->
(forall k, in_tuple k (footprint addr (length l)) -> map.get m k = None) ->
(array ptsto (word.of_Z 1) addr l * R)%sep (unchecked_store_byte_list addr l m).
Proof.
intros.
seprewrite ptsto_bytes_array.
apply ptsto_bytes_putmany_of_tuple; assumption.
Qed.
Lemma mod_eq_to_diff: forall e1 e2 m,
m <> 0 ->
e1 mod m = e2 mod m ->
(e1 - e2) mod m = 0.
Proof.
intros. rewrite !Z.mod_eq in H0 by assumption.
replace (e1 - e2) with (m * (e1 / m) - m * (e2 / m)) by blia.
rewrite Z.mod_eq by assumption.
rewrite <- Z.mul_sub_distr_l.
rewrite (Z.mul_comm m (e1 / m - e2 / m)).
rewrite Z.div_mul by assumption.
rewrite Z.mul_comm.
apply Z.sub_diag.
Qed.
Ltac word_simpl :=
rewrite <-? word.add_assoc;
rewrite <-? word.ring_morph.(morph_add);
simpl.
Lemma pow2width_nonzero: 2 ^ width <> 0.
Proof.
destruct width_cases as [E | E]; rewrite E; cbv; discriminate.
Qed.
Lemma ptsto_subset_to_isXAddr1: forall (a : word) (v : Init.Byte.byte) xAddrs,
subset (footpr (ptsto a v)) (of_list xAddrs) ->
isXAddr1 a xAddrs.
Proof.
unfold subset, footpr, footprint_underapprox, ptsto, elem_of, of_list, isXAddr1.
intros.
eapply H.
intros.
subst.
eexists.
apply map.get_put_same.
Qed.
Context (iset: Decode.InstructionSet).
Lemma ptsto_instr_subset_to_isXAddr4: forall (a: word) i xAddrs,
subset (footpr (ptsto_instr iset a i)) (of_list xAddrs) ->
isXAddr4 a xAddrs.
Proof.
unfold isXAddr4, ptsto_instr, truncated_scalar, littleendian, ptsto_bytes, array. simpl.
intros.
ssplit; eapply ptsto_subset_to_isXAddr1;
(eapply shrink_footpr_subset; [eassumption|wcancel]).
Qed.
Definition not_InvalidInstruction(inst: Decode.Instruction): Prop :=
match inst with
| Decode.InvalidInstruction _ => False
| _ => True
end.
Lemma go_fetch_inst{initialL: RiscvMachineL} {inst pc0 R Rexec} (post: RiscvMachineL -> Prop):
pc0 = initialL.(getPc) ->
subset (footpr (program iset pc0 [inst] * Rexec)%sep) (of_list initialL.(getXAddrs)) ->
(program iset pc0 [inst] * Rexec * R)%sep initialL.(getMem) ->
not_InvalidInstruction inst ->
mcomp_sat (Bind (execute inst) (fun _ => endCycleNormal))
(updateMetrics (addMetricLoads 1) initialL) post ->
mcomp_sat (run1 iset) initialL post.
Proof.
intros. subst.
unfold run1.
apply go_getPC.
unfold program in *.
unfold array, ptsto_instr in H1.
match goal with
| H: (?T * ?P1 * ?P2 * emp True * Rexec * R)%sep ?m |- _ =>
assert ((T * R * Rexec * P1 * P2)%sep m) as A by ecancel_assumption; clear H
end.
do 2 (apply sep_emp_r in A; destruct A as [A ?]).
eapply go_loadWord_Fetch.
- eapply ptsto_instr_subset_to_isXAddr4.
eapply shrink_footpr_subset. 1: eassumption. simpl. ecancel.
- unfold Memory.loadWord.
unfold truncated_scalar, littleendian, Memory.bytes_per in A.
eapply load_bytes_of_sep with (n:=(length (LittleEndianList.le_split 4 (encode inst)))).
(* TODO: here it would be useful if seplog unfolded Memory.bytes_per for me,
i.e., did more than just syntactic unification *)
ecancel_assumption.
- change 4%nat with (length (LittleEndianList.le_split 4 (encode inst))).
rewrite LittleEndian.combine_eq, HList.tuple.to_list_of_list, LittleEndianList.le_combine_split.
assert (0 <= encode inst < 2 ^ width) as F. {
pose proof (encode_range inst) as P.
destruct width_cases as [E | E]; rewrite E; split. all: blia.
}
rewrite Z.mod_small; try assumption; try apply encode_range.
destruct H1.
+ rewrite decode_encode; assumption.
+ exfalso. unfold not_InvalidInstruction, valid_InvalidInstruction in *. simp. contradiction.
Qed.
(* go_load/storeXxx lemmas phrased in terms of separation logic instead of
Memory.load/storeXxx *)
Lemma go_loadByte_sep:
forall (initialL : RiscvMachineL) (addr : word) (v : w8)
(f : w8 -> M unit) (post : RiscvMachineL -> Prop) (R: mem -> Prop),
(ptsto_bytes 1 addr v * R)%sep initialL.(getMem) ->
mcomp_sat (f v) (updateMetrics (addMetricLoads 1) initialL) post ->
mcomp_sat (Bind (loadByte Execute addr) f) initialL post.
Proof.
intros.
eapply go_loadByte; [|eassumption].
eapply load_bytes_of_sep. eassumption.
Qed.
Lemma preserve_subset_of_xAddrs: forall m Rexec n (R: mem -> Prop) (xAddrs: list word) addr v,
subset (footpr Rexec) (of_list xAddrs) ->
(ptsto_bytes n addr v * R * Rexec)%sep m ->
subset (footpr Rexec) (of_list (invalidateWrittenXAddrs n addr xAddrs)).
Proof.
induction n; intros.
- simpl. assumption.
- destruct v as [v vs]. unfold ptsto_bytes in *. simpl in *.
assert (exists R',
(array ptsto (word.of_Z 1) (word.add addr (word.of_Z 1)) (HList.tuple.to_list vs)
* R' * Rexec)%sep m) as F by (eexists; ecancel_assumption).
destruct F as [R' F].
specialize IHn with (2 := F).
change removeXAddr with (@List.removeb word word.eqb).
rewrite ListSet.of_list_removeb.
unfold subset.
intros x Hx.
destr (word.eqb x addr).
+ subst. exfalso. clear F IHn.
unfold sep, map.split in H0.
simp.
unfold elem_of, footpr, footprint_underapprox in Hx.
specialize (Hx _ H0p2).
destruct Hx as [w Hx].
rename H0p1p1p1 into B.
unfold ptsto in B.
subst.
unfold map.disjoint in *.
eapply H0p0p1. 2: exact Hx.
rewrite map.get_putmany_left; cycle 1. {
destr (map.get mq0 addr); [exfalso|reflexivity].
eapply H0p1p0p1. 2: exact E.
rewrite map.get_putmany_left; cycle 1. {
destr (map.get mq1 addr); [exfalso|reflexivity].
eapply H0p1p1p0p1. 2: exact E0.
rewrite map.get_put_same. reflexivity.
}
rewrite map.get_put_same. reflexivity.
}
rewrite map.get_putmany_left; cycle 1. {
destr (map.get mq1 addr); [exfalso|reflexivity].
eapply H0p1p1p0p1. 2: exact E.
rewrite map.get_put_same. reflexivity.
}
rewrite map.get_put_same. reflexivity.
+ unfold diff, elem_of, singleton_set. split; [|congruence].
eapply IHn; assumption.
Qed.
Lemma go_storeByte_sep:
forall (initialL : RiscvMachineL) (addr : word) (v_old v_new : w8)
(post : RiscvMachineL -> Prop) (f : unit -> M unit) (R Rexec: mem -> Prop),
subset (footpr Rexec) (of_list initialL.(getXAddrs)) ->
(ptsto_bytes 1 addr v_old * R * Rexec)%sep initialL.(getMem) ->
(forall m': mem,
subset (footpr Rexec) (of_list (invalidateWrittenXAddrs 1 addr initialL.(getXAddrs))) ->
(ptsto_bytes 1 addr v_new * R * Rexec)%sep m' ->
mcomp_sat (f tt) (withXAddrs (invalidateWrittenXAddrs 1 addr initialL.(getXAddrs))
(withMem m' (updateMetrics (addMetricStores 1) initialL))) post) ->
mcomp_sat (Bind (storeByte Execute addr v_new) f) initialL post.
Proof.
intros.
pose proof (store_bytes_of_sep (mem_ok := mem_ok)) as P.
edestruct P as [m' [P1 P2]]; cycle 2.
- eapply go_storeByte.
+ exact P1.
+ exact P2.
- ecancel_assumption.
- cbv beta. intros m' Hm'.
eapply H1. 2: ecancel_assumption.
eapply preserve_subset_of_xAddrs; eassumption.
Qed.
Lemma go_loadHalf_sep:
forall (initialL : RiscvMachineL) (addr : word) (v : w16)
(f : w16 -> M unit) (post : RiscvMachineL -> Prop) (R: mem -> Prop),
(ptsto_bytes 2 addr v * R)%sep initialL.(getMem) ->
mcomp_sat (f v) (updateMetrics (addMetricLoads 1) initialL) post ->
mcomp_sat (Bind (loadHalf Execute addr) f) initialL post.
Proof.
intros.
eapply go_loadHalf; [|eassumption].
eapply load_bytes_of_sep. eassumption.
Qed.
Lemma go_storeHalf_sep:
forall (initialL : RiscvMachineL) (addr : word) (v_old v_new : w16)
(post : RiscvMachineL -> Prop) (f : unit -> M unit) (R Rexec: mem -> Prop),
subset (footpr Rexec) (of_list initialL.(getXAddrs)) ->
(ptsto_bytes 2 addr v_old * R * Rexec)%sep initialL.(getMem) ->
(forall m': mem,
subset (footpr Rexec) (of_list (invalidateWrittenXAddrs 2 addr initialL.(getXAddrs))) ->
(ptsto_bytes 2 addr v_new * R * Rexec)%sep m' ->
mcomp_sat (f tt) (withXAddrs (invalidateWrittenXAddrs 2 addr initialL.(getXAddrs))
(withMem m' (updateMetrics (addMetricStores 1) initialL))) post) ->
mcomp_sat (Bind (storeHalf Execute addr v_new) f) initialL post.
Proof.
intros.
pose proof (store_bytes_of_sep (mem_ok := mem_ok)) as P.
edestruct P as [m' [P1 P2]]; cycle 2.
- eapply go_storeHalf.
+ exact P1.
+ exact P2.
- ecancel_assumption.
- cbv beta. intros m' Hm'.
eapply H1. 2: ecancel_assumption.
eapply preserve_subset_of_xAddrs; eassumption.
Qed.
Lemma go_loadWord_sep:
forall (initialL : RiscvMachineL) (addr : word) (v : w32)
(f : w32 -> M unit) (post : RiscvMachineL -> Prop) (R: mem -> Prop),
(ptsto_bytes 4 addr v * R)%sep initialL.(getMem) ->
mcomp_sat (f v) (updateMetrics (addMetricLoads 1) initialL) post ->
mcomp_sat (Bind (loadWord Execute addr) f) initialL post.
Proof.
intros.
eapply go_loadWord; [|eassumption].
eapply load_bytes_of_sep. eassumption.
Qed.
Lemma go_storeWord_sep:
forall (initialL : RiscvMachineL) (addr : word) (v_old v_new : w32)
(m': mem) (post : RiscvMachineL -> Prop) (f : unit -> M unit) (R: mem -> Prop),
(ptsto_bytes 4 addr v_old * R)%sep initialL.(getMem) ->
(ptsto_bytes 4 addr v_new * R)%sep m' ->
mcomp_sat (f tt) (withXAddrs (invalidateWrittenXAddrs 4 addr initialL.(getXAddrs))
(withMem m' (updateMetrics (addMetricStores 1) initialL))) post ->
mcomp_sat (Bind (storeWord Execute addr v_new) f) initialL post.
Proof.
intros.
eapply go_storeWord; [|eassumption].
unfold Memory.storeWord.
pose proof (unchecked_store_bytes_of_sep (mem_ok := mem_ok)) as P.
specialize P with (1 := H). specialize (P v_new).
(* Does not hold: if R does not completely determine the contents of the memory,
initialL.(getMem) and m' could differ in locations other than addr,
and post could check for that. So if the post in the hypothesis requires some specific
value in m', this value might not be present in initialL.(getMem), and still not be
present after the storeWord operation, in which case the conclusion would not hold. *)
Abort.
Lemma go_storeWord_sep:
forall (initialL : RiscvMachineL) (addr : word) (v_old v_new : w32)
(post : RiscvMachineL -> Prop) (f : unit -> M unit) (R Rexec: mem -> Prop),
subset (footpr Rexec) (of_list initialL.(getXAddrs)) ->
(ptsto_bytes 4 addr v_old * R * Rexec)%sep initialL.(getMem) ->
(let m' := Memory.unchecked_store_bytes 4 (getMem initialL) addr v_new in
let xaddrs' := invalidateWrittenXAddrs 4 addr initialL.(getXAddrs) in
subset (footpr Rexec) (of_list xaddrs') ->
(ptsto_bytes 4 addr v_new * R * Rexec)%sep m' ->
mcomp_sat (f tt) (withXAddrs xaddrs'
(withMem m' (updateMetrics (addMetricStores 1) initialL))) post) ->
mcomp_sat (Bind (storeWord Execute addr v_new) f) initialL post.
Proof.
intros.
pose proof (unchecked_store_bytes_of_sep (mem_ok := mem_ok)) as P.
assert ((ptsto_bytes 4 addr v_old * (R * Rexec))%sep initialL.(getMem)) as H0'
by ecancel_assumption.
specialize P with (1 := H0'). specialize (P v_new).
cbv zeta in H1.
assert ((ptsto_bytes 4 addr v_new * R * Rexec)%sep
(Memory.unchecked_store_bytes 4 (getMem initialL) addr v_new)) as P'
by ecancel_assumption.
specialize H1 with (2 := P').
eapply go_storeWord; cycle 1. {
eapply H1.
eapply preserve_subset_of_xAddrs; eassumption.
}
unfold Memory.storeWord, store_bytes.
erewrite load_bytes_of_sep; eauto using unchecked_store_bytes_of_sep.
Qed.
Lemma go_storeWord_sep_holds_but_results_in_evars_out_of_scope:
forall (initialL : RiscvMachineL) (addr : word) (v_old v_new : w32)
(post : RiscvMachineL -> Prop) (f : unit -> M unit) (R: mem -> Prop),
(ptsto_bytes 4 addr v_old * R)%sep initialL.(getMem) ->
(forall m': mem,
(ptsto_bytes 4 addr v_new * R)%sep m' ->
mcomp_sat (f tt) (withXAddrs (invalidateWrittenXAddrs 4 addr initialL.(getXAddrs))
(withMem m' (updateMetrics (addMetricStores 1) initialL))) post) ->
mcomp_sat (Bind (storeWord Execute addr v_new) f) initialL post.
Proof.
intros.
pose proof (store_bytes_of_sep (mem_ok := mem_ok)) as P.
specialize P with (1 := H) (2 := H0).
destruct P as (m' & P & Q).
eapply go_storeWord; eassumption.
Qed.
Lemma go_loadDouble_sep:
forall (initialL : RiscvMachineL) (addr : word) (v : w64)
(f : w64 -> M unit) (post : RiscvMachineL -> Prop) (R: mem -> Prop),
(ptsto_bytes 8 addr v * R)%sep initialL.(getMem) ->
mcomp_sat (f v) (updateMetrics (addMetricLoads 1) initialL) post ->
mcomp_sat (Bind (loadDouble Execute addr) f) initialL post.
Proof.
intros.
eapply go_loadDouble; [|eassumption].
eapply load_bytes_of_sep. eassumption.
Qed.
Lemma go_storeDouble_sep:
forall (initialL : RiscvMachineL) (addr : word) (v_old v_new : w64)
(post : RiscvMachineL -> Prop) (f : unit -> M unit) (R Rexec: mem -> Prop),
subset (footpr Rexec) (of_list initialL.(getXAddrs)) ->
(ptsto_bytes 8 addr v_old * R * Rexec)%sep initialL.(getMem) ->
(forall m': mem,
subset (footpr Rexec) (of_list (invalidateWrittenXAddrs 8 addr initialL.(getXAddrs))) ->
(ptsto_bytes 8 addr v_new * R * Rexec)%sep m' ->
mcomp_sat (f tt) (withXAddrs (invalidateWrittenXAddrs 8 addr initialL.(getXAddrs))
(withMem m' (updateMetrics (addMetricStores 1) initialL))) post) ->
mcomp_sat (Bind (storeDouble Execute addr v_new) f) initialL post.
Proof.
intros.
pose proof (store_bytes_of_sep (mem_ok := mem_ok)) as P.
edestruct P as [m' [P1 P2]]; cycle 2.
- eapply go_storeDouble.
+ exact P1.
+ exact P2.
- ecancel_assumption.
- cbv beta. intros m' Hm'.
eapply H1. 2: ecancel_assumption.
eapply preserve_subset_of_xAddrs; eassumption.
Qed.
End Go.
Ltac simpl_MetricRiscvMachine_get_set :=
cbn [
withMetrics
updateMetrics
getMachine
getMetrics
getRegs
getPc
getNextPc
getMem
getXAddrs
getLog
withRegs
withPc
withNextPc
withMem
withXAddrs
withLog
withLogItem
withLogItems
RiscvMachine.withRegs
RiscvMachine.withPc
RiscvMachine.withNextPc
RiscvMachine.withMem
RiscvMachine.withXAddrs
RiscvMachine.withLog
RiscvMachine.withLogItem
RiscvMachine.withLogItems
].
Ltac simpl_MetricRiscvMachine_mem :=
unfold getPc, getMem in *;
simpl RiscvMachine.getPc in *;
simpl RiscvMachine.getMem in *.
Ltac sidecondition_hook := idtac.
#[export] Hint Resolve Forall_impl : sidecondition_hints.
Ltac subst_if_not_in x t :=
lazymatch t with
| context[x] => fail
| _ => progress subst x
end.
Ltac subst_sep_var_only_in_lhs lhs rhs :=
match lhs with
| context[sep ?x _] => is_var x; subst_if_not_in x rhs
| context[sep _ ?x] => is_var x; subst_if_not_in x rhs
end.
Ltac subst_sep_vars :=
match goal with
| |- iff1 ?LHS ?RHS =>
repeat (subst_sep_var_only_in_lhs LHS RHS);
repeat (subst_sep_var_only_in_lhs RHS LHS)
end.
Ltac sidecondition :=
simpl; simpl_MetricRiscvMachine_get_set;
match goal with
(* these branches are allowed to instantiate evars in a controlled manner: *)
| H: map.get _ _ = Some _ |- _ => exact H
| |- map.get _ _ = Some _ =>
simpl;
match goal with
| |- map.get (map.put _ ?x _) ?y = Some _ =>
constr_eq x y; apply map.get_put_same
end
| |- @sep ?K ?V ?M ?P ?Q ?m => simpl in *;
simpl_MetricRiscvMachine_get_set;
use_sep_assumption;
wwcancel
| |- iff1 ?x _ =>
simpl_MetricRiscvMachine_get_set;
(tryif is_var x then
lazymatch goal with
| H: iff1 x _ |- _ => etransitivity; [exact H|]
end
else idtac);
subst_sep_vars;
wwcancel
| H: subset (footpr _) _ |- subset (footpr ?F) _ =>
tryif is_evar F then
eassumption
else
(simpl in H |- *;
eapply rearrange_footpr_subset; [ exact H | solve [sidecondition] ])
| |- _ => reflexivity
| A: map.get ?lH ?x = Some _, E: map.extends ?lL ?lH |- map.get ?lL ?x = Some _ =>
eapply (map.extends_get A E)
(* but we don't have a general "eassumption" branch, only "assumption": *)
| |- _ => solve [auto with sidecondition_hints]
| |- ?G => assert_fails (has_evar G); solve [eauto with sidecondition_hints]
| |- Memory.load ?sz ?m ?addr = Some ?v =>
unfold Memory.load, Memory.load_Z in *;
simpl_MetricRiscvMachine_mem;
erewrite load_bytes_of_sep; [ reflexivity | ecancel_assumption ]
| |- Memory.load ?sz ?m ?addr = Some ?v => eassumption
| |- Memory.store ?sz ?m ?addr ?val = Some ?m' => eassumption
| |- _ => sidecondition_hook
end.
(* eapply and rapply don't always work (they failed in compiler.MMIO), so we use refine below.
Trick to test whether the number of underscores is right:
let c := open_constr:(go_associativity _ _ _ _ _ _) in
let t := type of c in idtac t. *)
Ltac simulate_step :=
first (* lemmas packing multiple primitives need to go first: *)
[ refine (go_fetch_inst _ _ _ _ _ _ _); [sidecondition..|]
(* single-primitive lemmas: *)
(* lemmas about Register0 need to go before lemmas about other Registers *)
| refine (go_getRegister0 _ _ _ _); [sidecondition..|]
| refine (go_setRegister0 _ _ _ _ _); [sidecondition..|]
| refine (go_getRegister _ _ _ _ _ _ _ _); [sidecondition..|]
| refine (go_setRegister _ _ _ _ _ _ _); [sidecondition..|]
(* Note: One might not want these, but rather the separation logic versions, or
the versions expressed in terms of compile_load/store, so they're commented out:
| eapply go_loadByte ; [sidecondition..|]
| eapply go_storeByte ; [sidecondition..|]
| eapply go_loadHalf ; [sidecondition..|]
| eapply go_storeHalf ; [sidecondition..|]
| eapply go_loadWord ; [sidecondition..|]
| eapply go_storeWord ; [sidecondition..|]
| eapply go_loadDouble ; [sidecondition..|]
| eapply go_storeDouble ; [sidecondition..|]
*)
| refine (go_getPC _ _ _ _); [sidecondition..|]
| refine (go_setPC _ _ _ _ _); [sidecondition..|]
| refine (go_endCycleNormal _ _ _); [sidecondition..|]
(* monad law lemmas: *)
| refine (go_left_identity _ _ _ _ _); [sidecondition..|]
| refine (go_right_identity _ _ _ _); [sidecondition..|]
| refine (go_associativity _ _ _ _ _ _); [sidecondition..|] ].
Ltac simulate := repeat simulate_step.
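(* Example usage (a sketch): for a goal of the form
     mcomp_sat (run1 iset) initialL post
   the "simulate" tactic repeatedly applies the go_* lemmas above, starting with
   go_fetch_inst, and discharges the generated side conditions with "sidecondition". *)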
|
function TetraFile = fem_hexa2tetra(HexaFile)
% FEM_HEXA2TETRA: Converts hexahedral mesh to tetrahedral mesh
%
% USAGE: TetraFile = fem_hexa2tetra(HexaFile)
% @=============================================================================
% This function is part of the Brainstorm software:
% https://neuroimage.usc.edu/brainstorm
%
% Copyright (c) University of Southern California & McGill University
% This software is distributed under the terms of the GNU General Public License
% as published by the Free Software Foundation. Further details on the GPLv3
% license can be found at http://www.gnu.org/copyleft/gpl.html.
%
% FOR RESEARCH PURPOSES ONLY. THE SOFTWARE IS PROVIDED "AS IS," AND THE
% UNIVERSITY OF SOUTHERN CALIFORNIA AND ITS COLLABORATORS DO NOT MAKE ANY
% WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF
% MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, NOR DO THEY ASSUME ANY
% LIABILITY OR RESPONSIBILITY FOR THE USE OF THIS SOFTWARE.
%
% For more information type "brainstorm license" at command prompt.
% =============================================================================@
%
% Authors: Takfarinas Medani, Francois Tadel, 2020
% Load file
bst_progress('start', 'Convert FEM mesh', ['Loading file "' HexaFile '"...']);
HexaFile = file_fullpath(HexaFile);
FemMat = load(HexaFile);
% Already tetrahedral
if (size(FemMat.Elements,2) == 4)
disp(['BST> Warning: Mesh is already tetrahedral: ' HexaFile])
TetraFile = HexaFile;
return;
end
% Convert to tetrahedral
bst_progress('text', 'Converting to tetrahedral...');
[tetraElem, tetraNode, tetraLabel] = hex2tet(FemMat.Elements, FemMat.Vertices, FemMat.Tissue, 4);
% Update output structure
FemMat.Vertices = tetraNode;
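% Swap the first two nodes of each tetrahedron, flipping the element orientation returned by hex2tet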
FemMat.Elements = tetraElem(:, [2 1 3 4]);
FemMat.Tissue = tetraLabel;
FemMat.Comment = sprintf('FEM %dV (hexa2tetra, %d layers)', length(FemMat.Vertices), length(FemMat.TissueLabels));
% Add history
FemMat = bst_history('add', FemMat, 'fem_hexa2tetra', 'Converted to tetrahedral');
% Output filename
[fPath, fBase, fExt] = bst_fileparts(HexaFile);
TetraFile = file_unique(bst_fullfile(fPath, [fBase, '_tetra', fExt]));
% Get subject
[sSubject, iSubject] = bst_get('SurfaceFile', HexaFile);
% Save new surface in Brainstorm format
bst_progress('text', 'Saving tetra mesh...');
bst_save(TetraFile, FemMat, 'v7');
db_add_surface(iSubject, TetraFile, FemMat.Comment);
% Close progress bar
bst_progress('stop');
|
#################################
# Using the direct Einstein tensor
# equation to build IREs
#################################
read("eq_ire.mpl"):
res2:=grcomponent(HSR):
tg2:=grcomponent(TG2(up),[x]):
with(DEtools):
addcoords(compact,[t,x],[t,ctfm(x)]);
ire_fields := [res2, tg2, restt, restx, resxx, resthth, ire_rpsi_direct]:
c_ire_fields := [0, 0, 0 , 0 , 0 , 0 , 0]:
# Compactifying Einstein's equation:
for ii from 1 to nops(ire_fields) do
c_ire_fields[ii] := PDEchangecoords(ire_fields[ii],[t,x],compact,[t,x]);
c_ire_fields[ii] := subs({diff(ctfm(x),x)=ctfmp(x),diff(ctfm(x),x,x)=ctfmpp(x)},c_ire_fields[ii]);
for kk from 1 to nops(grid_functions) do
fn := grid_functions[kk];
c_ire_fields[ii] := subs(fn(t,ctfm(x))=fn(t,x),c_ire_fields[ii]);
end do:
end do:
printf("checking compactification... all residuals should be zero:\n");
for ii from 1 to nops(ire_fields) do
res := simplify(ire_fields[ii] - eval(c_ire_fields[ii],{ctfm(x) = x, ctfmp(x) = 1, ctfmpp(x) = 0})):
printf("res = %a\n",res);
end do:
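# Rewrite second derivatives of the compactifying map via octfmp(x),
# defined so that ctfmpp(x) = octfmp(x)*ctfmp(x)^2, then collect in
# inverse powers of ctfmp(x) and ctfm(x):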
for ii from 1 to nops(ire_fields) do
c_ire_fields[ii] := collect(collect(eval(expand(c_ire_fields[ii]),{ctfmpp(x)=octfmp(x)*ctfmp(x)^2}),1/ctfmp(x)),1/ctfm(x));
end do:
res2 := c_ire_fields[1]:
tg2 := c_ire_fields[2]:
restt := c_ire_fields[3]:
restx := c_ire_fields[4]:
resxx := c_ire_fields[5]:
resthth := c_ire_fields[6]:
ire_rpsi_direct := c_ire_fields[7]:
Gen_Eval_Code(restx,input="c",proc_name="ire_restx");
Gen_Eval_Code(resxx,input="c",proc_name="ire_resxx");
Gen_Eval_Code(restt,input="c",proc_name="ire_restt");
Gen_Eval_Code(resthth,input="c",proc_name="ire_resthth");
Gen_Eval_Code(tg2-Lamx(t,x),input="c",proc_name="ire_val_lamx");
Gen_Eval_Code(ire_rpsi_direct,input="c",proc_name="ire_rpsi_direct");
pl:=table([ t=[-1,-1],x=[-1,0],y=[-1,-1],z=[-1,-1] ]):
Update_FD_Table(2,pl):
Gen_Eval_Code(res2,input="c",proc_name="ire_hs");
|
[STATEMENT]
lemma nonzero_of_rat_inverse: "a \<noteq> 0 \<Longrightarrow> of_rat (inverse a) = inverse (of_rat a)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. a \<noteq> 0 \<Longrightarrow> of_rat (inverse a) = inverse (of_rat a)
[PROOF STEP]
by (rule inverse_unique [symmetric]) (simp add: of_rat_mult [symmetric])
|
Require Import ProofCheckingEuclid.euclidean_axioms.
Require Import ProofCheckingEuclid.lemma_congruencesymmetric.
Require Import ProofCheckingEuclid.lemma_congruencetransitive.
Require Import ProofCheckingEuclid.lemma_localextension.
Require Import ProofCheckingEuclid.proposition_02.
Section Euclid.
Context `{Ax:euclidean_neutral_ruler_compass}.
Lemma lemma_extension_eq_B_P :
forall A B P Q,
neq A B ->
neq P Q ->
eq B P ->
exists X, BetS A B X /\ Cong B X P Q.
Proof.
intros A B P Q.
intros neq_A_B.
intros neq_P_Q.
intros eq_B_P.
assert (neq B Q) as neq_B_Q by (rewrite eq_B_P; exact neq_P_Q).
pose proof (lemma_localextension _ _ _ neq_A_B neq_B_Q) as (X & BetS_A_B_X & Cong_BX_BQ).
assert (Cong B X P Q) as Cong_BX_PQ by (rewrite <- eq_B_P; exact Cong_BX_BQ).
exists X.
split.
exact BetS_A_B_X.
exact Cong_BX_PQ.
Qed.
Lemma lemma_extension_neq_B_P :
forall A B P Q,
neq A B ->
neq P Q ->
neq B P ->
exists X, BetS A B X /\ Cong B X P Q.
Proof.
intros A B P Q.
intros neq_A_B.
intros neq_P_Q.
intros neq_B_P.
pose proof (proposition_02 _ _ _ neq_B_P neq_P_Q) as (D & Cong_BD_PQ).
pose proof (lemma_congruencesymmetric _ _ _ _ Cong_BD_PQ) as Cong_PQ_BD.
pose proof (axiom_nocollapse _ _ _ _ neq_P_Q Cong_PQ_BD) as neq_B_D.
pose proof (lemma_localextension _ _ _ neq_A_B neq_B_D) as (X & BetS_A_B_X & Cong_BX_BD).
pose proof (lemma_congruencetransitive _ _ _ _ _ _ Cong_BX_BD Cong_BD_PQ) as Cong_BX_PQ.
exists X.
split.
exact BetS_A_B_X.
exact Cong_BX_PQ.
Qed.
Lemma lemma_extension :
forall A B P Q,
neq A B ->
neq P Q ->
exists X, BetS A B X /\ Cong B X P Q.
Proof.
intros A B P Q.
intros neq_A_B.
intros neq_P_Q.
assert (eq B P \/ neq B P) as eq_B_P_or_neq_B_P by (apply Classical_Prop.classic).
destruct eq_B_P_or_neq_B_P as [eq_B_P | neq_B_P].
{
pose proof (
lemma_extension_eq_B_P _ _ _ _ neq_A_B neq_P_Q eq_B_P
) as (X & BetS_A_B_X & Cong_BX_PQ).
exists X.
split.
exact BetS_A_B_X.
exact Cong_BX_PQ.
}
{
pose proof (
lemma_extension_neq_B_P _ _ _ _ neq_A_B neq_P_Q neq_B_P
) as (X & BetS_A_B_X & Cong_BX_PQ).
exists X.
split.
exact BetS_A_B_X.
exact Cong_BX_PQ.
}
Qed.
End Euclid.
|
" There 's Got to Be a Way " ( 7 " remix )
|
\section{Instance Generation}
The instances are generated randomly \cite{bib:instances-CVRP, bib:constrained-knapsack, bib:grasp-and-tabu}. First the graph is generated, and then the weight of each vertex is chosen. The knapsack capacity is selected so that, on average, a fraction $m$ of the vertices fit in it. The following subsections analyze each of those aspects.
Consider the parameters:
\begin{enumerate}
\item $n$: number of vertices;
\item $K$: average number of branches;
\item $L$: maximum number of leaf vertices;
\item $H$: the maximum value of an entry of the weight of each vertex;
\item $m$: fraction of the average number of elements that fit in the knapsack;
\end{enumerate}
\subsection{How to Generate the Precedences}
The process of generating the precedences is specified in \algref{algorith:generate-precedences}, which uses \algref{algorith:find-trees}. \figref{fig:precedence-generation} shows an example of this procedure, and the parameters listed above control the generation.
\begin{algorithm}[ht!]
\caption{Find-Trees}
\label{algorith:find-trees}
\begin{algorithmic}[1]
\Require{
$\vertices$: vertices in the 2D plane,
$K$: average number of branches,
$L$: maximum number of leaf vertices
}
\State{$k \gets $ random number from 1 to $K$}
\State{$\tuple{R, \mathcal{V}} \gets $ find $k$ clusters in $V$}
\Comment{$R$: a set of centers}\\
\Comment{$\mathcal{V}$: a set whose elements are the vertex sets of the clusters}
\State{$\mathcal{T} \gets \emptyset$}
\For{each pair $r \in R $ and $V' \in \mathcal{V}$}
\If{$\abs{V'} \leqslant L$}
\State{$T \gets $ tree with $r$ as the root node and $V'$ as the leaves}
\Else
\State{$T \gets $ tree with $r$ as the root node of the subtree Find-Trees($V', K, L$)}
\EndIf
\State{$\mathcal{T} \gets \mathcal{T} \cup \Set{T}$}
\EndFor
\\\Return{$\mathcal{T}$}
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[ht]
\caption{Generate-Precedences}
\label{algorith:generate-precedences}
\begin{algorithmic}[1]
\Require{
$n$: number of vertices,
$K$: average number of branches,
$L$: maximum number of leaf vertices
}
\State{$V \gets $ generate $n$ points in the 2D plane randomly}
\State{$\mathcal{T} \gets $ Find-Trees($V, K, L$)}
\\\Return{$\mathcal{T}$}
\end{algorithmic}
\end{algorithm}
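For concreteness, the listing below gives a minimal Python sketch of this procedure (an illustration, not part of the formal specification above): it assumes scikit-learn's \texttt{KMeans} for the find-$k$-clusters step and takes the first vertex of each cluster as its root, rather than computing a separate center.
\begin{verbatim}
import random
import numpy as np
from sklearn.cluster import KMeans

def find_trees(V, K, L):
    # V: array of 2D points; returns a forest of nested (root, children) pairs
    k = min(random.randint(1, K), len(V))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(V)
    forest = []
    for c in range(k):
        Vc = V[labels == c]
        root, rest = Vc[0], Vc[1:]
        if len(rest) <= L:
            forest.append((root, [tuple(p) for p in rest]))
        else:
            forest.append((root, find_trees(rest, K, L)))
    return forest

def generate_precedences(n, K, L):
    V = np.random.rand(n, 2)  # n random points in the unit square
    return find_trees(V, K, L)
\end{verbatim}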
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{images/precedence_construction.jpg}
\caption{Precedence generation. The root nodes are the green, red, and lemon vertices. Red has four leaf vertices. Green has two branches: the pink with one leaf and the purple with two leaves. Lemon has one leaf and one branch with two leaves.}
\label{fig:precedence-generation}
\end{figure}
\subsection{How to Generate the Weights}
Generate each entry of the weight of each vertex uniformly at random in the interval $\interval{0}{H}$.
\subsection{How to Generate the Knapsack Capacity}
Generate each entry of the knapsack capacity $\maximumWeight$ randomly in the interval $\interval{0}{m \cdot n \cdot H}$. Since each weight entry averages $H/2$, the expected capacity $m \cdot n \cdot H / 2$ equals a fraction $m$ of the expected total weight $n \cdot H / 2$, so that, on average, a fraction $m$ of the vertices fit in the knapsack.
|
{-# OPTIONS --cubical --no-import-sorts --safe #-}
module Cubical.Algebra.Base where
open import Cubical.Core.Everything
------------------------------------------------------------------------
-- Unary and binary operations
Op₁ : ∀ {ℓ} → Type ℓ → Type ℓ
Op₁ A = A → A
Op₂ : ∀ {ℓ} → Type ℓ → Type ℓ
Op₂ A = A → A → A
------------------------------------------------------------------------
-- Left and right actions
Opₗ : ∀ {a b} → Type a → Type b → Type _
Opₗ A B = A → B → B
Opᵣ : ∀ {a b} → Type a → Type b → Type _
Opᵣ A B = B → A → B
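-- A usage sketch (kept as a comment so this base module stays
-- import-free beyond Cubical.Core.Everything): with ℕ and _+_ from
-- Cubical.Data.Nat in scope,
--
--   _+_ : Op₂ ℕ
--
-- and the scalar multiplication K → V → V of a vector space V over a
-- field K inhabits Opₗ K V.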
|
module BinaryFormats
import Control.Pipeline
import Data.Vect
import Data.ZZ
import BinaryFormats.Data.Bit
import BinaryFormats.Data.List
%default total
mutual
||| A universe of binary data formats
data Format : Type where
FBad : Format
FEnd : Format
FBit : Format
FByte : Format
FChar : Format
FU8 : Format
FU16 : Format -- TODO: Endianness
FU32 : Format -- TODO: Endianness
FS8 : Format
FS16 : Format -- TODO: Endianness
FS32 : Format -- TODO: Endianness
FRef : Format
FPtr : Nat -> List Bit -> Format -> Format
FVect : Nat -> Format -> Format
FPlus : Format -> Format -> Format
FSkip : Format -> Format -> Format
FRead : (f : Format) -> (embed f -> Format) -> Format
||| Interprets `Format` as an Idris type
embed : Format -> Type
embed FBad = Void
embed FEnd = Unit
embed FBit = Bit
embed FByte = Bits8
embed FChar = Char
embed FU8 = Nat
embed FU16 = Nat
embed FU32 = Nat
embed FS8 = ZZ
embed FS16 = ZZ
embed FS32 = ZZ
embed FRef = List Bit
embed (FPtr _ _ f) = Lazy (Maybe (embed f))
embed (FVect n a) = Vect n (embed a)
embed (FPlus f1 f2) = Either (embed f1) (embed f2)
embed (FSkip _ f) = embed f
embed (FRead f1 f2) = (f : embed f1 ** embed (f2 f))
||| Require a predicate to be satisfied
satisfy : (f : Format) -> (embed f -> Bool) -> Format
satisfy f pred =
FRead f (\x => if pred x then FEnd else FBad)
||| Require a character literal to be parsed
char : Char -> Format
char c =
satisfy FChar ((==) c)
||| Sequence two binary formats, one after the other
(>>) : Format -> Format -> Format
(>>) x f = FSkip x f
||| Parse one format and then use it to figure out what to parse next
(>>=) : (f : Format) -> (embed f -> Format) -> Format
(>>=) x f = FRead x f
Parser : Type -> Type
Parser a = (bits : List Bit) -> Maybe (a, List Bit)
parseBit : Parser Bit
parseBit [] = Nothing
parseBit (bit :: bits) = Just (bit, bits)
parseUNum : {a : Type} -> Num a => Nat -> Parser a
parseUNum size bits =
trySplitAt size bits
|> map (\(headBits, tailBits) => (go headBits 0, tailBits))
where
go : {a : Type} -> Num a => List Bit -> a -> a
go [] acc = acc
go (O :: bits) acc = go bits (2 * acc)
go (I :: bits) acc = go bits (1 + (2 * acc))
parseUInt : Nat -> Parser Nat
parseUInt = parseUNum
-- FIXME: Two's complement?
parseSInt : Nat -> Parser ZZ
parseSInt Z bits = Just (0, bits)
parseSInt (S _) [] = Nothing
parseSInt (S size) (O :: bits) = parseUInt size bits |> map (\(n, bits') => (Pos n, bits'))
parseSInt (S size) (I :: bits) = parseUInt size bits |> map (\(n, bits') => (negNat n, bits'))
parseChar : Parser Char
parseChar bits =
parseUInt 16 bits -- Idris' `Char`s are supposedly 2 bytes wide
|> map (\(n, bits') => (chr (toIntNat n), bits'))
mutual
parseVect : {n : Nat} -> (f : Format) -> Parser (Vect n (embed f))
parseVect {n} f = rewrite plusCommutative Z n in go n []
where
go : {m : Nat} -> (n : Nat) -> Vect m (embed f) -> Parser (Vect (n + m) (embed f))
go {m} Z acc bits = Just (reverse acc, bits)
go {m} (S k) acc bits with (parse f bits)
| Nothing = Nothing
| Just (elem, bits'') =
rewrite plusSuccRightSucc k m in go k (elem :: acc) bits''
||| Interpret a binary format specification as a parser
parse : (f : Format) -> Parser (embed f)
parse FBad bits = Nothing
parse FEnd bits = Just ((), bits)
parse FBit bits = parseBit bits
parse FByte bits = parseUNum 8 bits
parse FChar bits = parseChar bits
parse FU8 bits = parseUInt 8 bits
parse FU16 bits = parseUInt 16 bits
parse FU32 bits = parseUInt 32 bits
parse FS8 bits = parseSInt 8 bits
parse FS16 bits = parseSInt 16 bits
parse FS32 bits = parseSInt 32 bits
parse FRef bits = Just (bits, bits)
parse (FPtr offset refBits f) bits with (tryDrop offset refBits)
| Nothing = Nothing
| Just refBits' = Just (Delay (parse f refBits' |> map fst), bits)
parse (FVect n f) bits = parseVect f bits
parse (FPlus f1 f2) bits with (parse f1 bits)
| (Just (x, bits')) = Just (Left x, bits')
| Nothing with (parse f2 bits)
| (Just (y, bits')) = Just (Right y, bits')
| Nothing = Nothing
parse (FSkip f1 f2) bits with (parse f1 bits)
| Nothing = Nothing
| (Just (x, bits')) = parse f2 bits'
parse (FRead f1 f2) bits with (parse f1 bits)
| Nothing = Nothing
| (Just (x, bits')) with (parse (f2 x) bits')
| Nothing = Nothing
| Just (y, bits'') = Just ((x ** y), bits'')
||| PBM binary format
pbm : Format
pbm = do
char 'p'
char '4'
char ' '
n <- FU16
char ' '
m <- FU16
char '\n'
bs <- FVect n (FVect m FBit)
FEnd
||| Parse PBM data from a string of bits
parsePbm : Parser (embed BinaryFormats.pbm)
parsePbm = parse pbm
test : Format
test = do
x <- FBit
case x of
O => FU8
I => FS8
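-- A usage sketch (hypothetical input): with a leading O bit, `test`
-- reads the remaining eight bits as an unsigned byte (here 11111111 = 255)
exampleBits : List Bit
exampleBits = O :: replicate 8 I

exampleResult : Maybe (embed BinaryFormats.test, List Bit)
exampleResult = parse test exampleBits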
testPtr : Format
testPtr = do
start <- FRef
offset <- FU16
ptr <- FPtr offset start <| do
namesLen <- FU16
names <- FVect namesLen <| do
nameLen <- FU16
name <- FVect nameLen FChar
FEnd
FEnd
FEnd
|
#
# Read("~/Workspace/Chevalley.gap/init.gi"); Read(Filename(test_dir,"_allTests.gi"));
#
LogTo(Filename(test_dir,"_allTests.log"));
#
# Polynomial stuff
#
Print("[Polynomial stuff]\n");
Read(Filename(test_dir,"poly.test.init.gi"));
Read(Filename(test_dir,"poly.test.gi"));
#
# Root system
#
Print("[Root system]\n");
Read(Filename(test_dir,"rsys.test.init.gi"));
Read(Filename(test_dir,"rsys.test.gi"));
#
# Parabolic system
#
Print("[Parabolic system]\n");
Read(Filename(test_dir,"psys.test.init.gi"));
Read(Filename(test_dir,"psys.test.gi"));
#
# Chevalley group stuff
#
Print("[Chevalley group stuff]\n");
Read(Filename(home_dir,"lib/chvadj.gd"));
Read(Filename(home_dir,"lib/chvadj.gi"));
Read(Filename(test_dir,"chvadj.test.gi"));
#
# Nilpotent elements
#
Print("[Nilpotent elements]\n");
Read(Filename(home_dir,"lib/nilchv.gd"));
Read(Filename(home_dir,"lib/nilchv.gi"));
Read(Filename(test_dir,"nilchv.test.gi"));
#
# Algebraic unipotent Sylow stuff
#
Print("[Algebraic unipotent Sylow stuff]\n");
Read(Filename(home_dir,"lib/algU.gd"));
Read(Filename(home_dir,"lib/algU.gi"));
Read(Filename(test_dir,"algU.test.gi"));
#
# Unipotent elements
#
Print("[Unipotent elements]\n");
Read(Filename(home_dir,"lib/unichv.gd"));
Read(Filename(home_dir,"lib/witt.gd"));
Read(Filename(home_dir,"lib/unialg.gd"));
#
Read(Filename(home_dir,"lib/unichv.gi"));
Read(Filename(test_dir,"unichv.test.gi"));
#
# Arithmetics modulo p
#
Print("[Arithmetics modulo p]\n");
Read(Filename(home_dir,"lib/unimod.gd"));
Read(Filename(home_dir,"lib/unimod.gi"));
#Read(Filename(test_dir,"unimod.test.gi"));
#
# Witt groups
#
Print("[Witt groups]\n");
Read(Filename(home_dir,"lib/witt.gi"));
Read(Filename(test_dir,"witt.test.gi"));
#
# Unipotent elements over polynomials
#
Print("[Unipotent elements over polynomials]\n");
Read(Filename(home_dir,"lib/unialg.gi"));
Read(Filename(test_dir,"unialg.test.gi"));
|
# NRPy+'s Reference Metric Interface
## Author: Zach Etienne
### Formatting improvements courtesy Brandon Clark
### NRPy+ Source Code for this module: [reference_metric.py](../edit/reference_metric.py)
## Introduction:
### Why use a reference metric? Benefits of choosing the best coordinate system for the problem
When solving a partial differential equation on the computer, it is useful to first pick a coordinate system well-suited to the geometry of the problem. For example, if we are modeling a spherically-symmetric star, it would be hugely wasteful to model the star in 3-dimensional Cartesian coordinates ($x$,$y$,$z$). This is because in Cartesian coordinates, we would need to choose high sampling in all three Cartesian directions. If instead we chose to model the star in spherical coordinates ($r$,$\theta$,$\phi$), so long as the star is centered at $r=0$, we would not need to model the star with more than one point in the $\theta$ and $\phi$ directions!
A similar argument holds for stars that are *nearly* spherically symmetric. Such stars may exhibit density distributions that vary slowly in the $\theta$ and $\phi$ directions (e.g., isolated neutron stars or black holes). In these cases, the number of points needed to sample the angular directions will still be much smaller than in the radial direction.
Thus choice of an appropriate reference metric may directly mitigate the [Curse of Dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality).
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows:
1. [Step 1](#define_ref_metric): Defining a reference metric, [`reference_metric.py`](../edit/reference_metric.py)
1. [Step 2](#define_geometric): Defining geometric quantities, **`ref_metric__hatted_quantities()`**
1. [Step 3](#prescribed_ref_metric): Prescribed reference metrics in [`reference_metric.py`](../edit/reference_metric.py)
1. [Step 3.a](#sphericallike): Spherical-like coordinate systems
1. [Step 3.a.i](#spherical): **`reference_metric::CoordSystem = "Spherical"`**
1. [Step 3.a.ii](#sinhspherical): **`reference_metric::CoordSystem = "SinhSpherical"`**
1. [Step 3.a.iii](#sinhsphericalv2): **`reference_metric::CoordSystem = "SinhSphericalv2"`**
1. [Step 3.b](#cylindricallike): Cylindrical-like coordinate systems
1. [Step 3.b.i](#cylindrical): **`reference_metric::CoordSystem = "Cylindrical"`**
1. [Step 3.b.ii](#sinhcylindrical): **`reference_metric::CoordSystem = "SinhCylindrical"`**
1. [Step 3.b.iii](#sinhcylindricalv2): **`reference_metric::CoordSystem = "SinhCylindricalv2"`**
1. [Step 3.c](#cartesianlike): Cartesian-like coordinate systems
1. [Step 3.c.i](#cartesian): **`reference_metric::CoordSystem = "Cartesian"`**
1. [Step 3.d](#prolatespheroidal): Prolate spheroidal coordinates
1. [Step 3.d.i](#symtp): **`reference_metric::CoordSystem = "SymTP"`**
1. [Step 3.d.ii](#sinhsymtp): **`reference_metric::CoordSystem = "SinhSymTP"`**
1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='define_ref_metric'></a>
# Step 1: Defining a reference metric, [`reference_metric.py`](../edit/reference_metric.py) \[Back to [top](#toc)\]
$$\label{define_ref_metric}$$
***Note that currently only orthogonal reference metrics of dimension 3 or fewer are supported. This can be extended if desired.***
NRPy+ assumes all curvilinear coordinate systems map directly from a uniform, Cartesian numerical grid with coordinates $(x,y,z)$=(`xx[0]`,`xx[1]`,`xx[2]`). Thus when defining reference metrics, all defined coordinate quantities must be in terms of the `xx[]` array. As we will see, this adds a great deal of flexibility.
For example, [**reference_metric.py**](../edit/reference_metric.py) requires that the *orthogonal coordinate scale factors* be defined. As described [here](https://en.wikipedia.org/wiki/Curvilinear_coordinates), the $i$th scale factor is the positive square root of the metric element $g_{ii}$. In ordinary spherical coordinates $(r,\theta,\phi)$, with line element $ds^2 = g_{ij} dx^i dx^j = dr^2+ r^2 d \theta^2 + r^2 \sin^2\theta \ d\phi^2$, we would first define
* $r = xx_0$
* $\theta = xx_1$
* $\phi = xx_2$,
so that the scale factors are defined as
* `scalefactor_orthog[0]` = $1$
* `scalefactor_orthog[1]` = $r$
* `scalefactor_orthog[2]` = $r \sin \theta$
Here is the corresponding code:
```python
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import NRPy_param_funcs as par # NRPy+: parameter interface
import reference_metric as rfm # NRPy+: Reference metric support
r = rfm.xx[0]
th = rfm.xx[1]
ph = rfm.xx[2]
rfm.scalefactor_orthog[0] = 1
rfm.scalefactor_orthog[1] = r
rfm.scalefactor_orthog[2] = r*sp.sin(th)
# Notice that the scale factor will be given
# in terms of the fundamental Cartesian
# grid variables, and not {r,th,ph}:
print("r*sin(th) = "+str(rfm.scalefactor_orthog[2]))
```
r*sin(th) = xx0*sin(xx1)
Next suppose we wish to modify our radial coordinate $r(xx_0)$ to be an exponentially increasing function, so that our numerical grid $(xx_0,xx_1,xx_2)$ will map to a spherical grid with radial grid spacing ($\Delta r$) that *increases* with $r$. Generally we will find it useful to define $r(xx_0)$ to be an odd function, so let's choose
$$r(xx_0) = a \sinh(xx_0/s),$$
where $a$ is an overall radial scaling factor, and $s$ denotes the scale (in units of $xx_0$) over which exponential growth will take place. In our implementation below, note that we use the relation
$$\sinh(x) = \frac{e^x - e^{-x}}{2},$$
as SymPy finds it easier to evaluate exponentials than hyperbolic trigonometric functions.
```python
a,s = sp.symbols('a s',positive=True)
xx0_rescaled = rfm.xx[0] / s
r = a*(sp.exp(xx0_rescaled) - sp.exp(-xx0_rescaled))/2
# Must redefine the scalefactors since 'r' has been updated!
rfm.scalefactor_orthog[0] = 1
rfm.scalefactor_orthog[1] = r
rfm.scalefactor_orthog[2] = r*sp.sin(th)
print(rfm.scalefactor_orthog[2])
```
a*(exp(xx0/s) - exp(-xx0/s))*sin(xx1)/2
Often we will find it useful to also define the appropriate mappings from (`xx[0]`,`xx[1]`,`xx[2]`) to Cartesian coordinates (for plotting purposes) and ordinary spherical coordinates (e.g., in case the initial data for a PDE are most naturally written in spherical coordinates). For this purpose, reference_metric.py also declares lists **`xxCart[]`** and **`xxSph[]`**, which in this case are defined as
```python
rfm.xxSph[0] = r
rfm.xxSph[1] = th
rfm.xxSph[2] = ph
rfm.xxCart[0] = r*sp.sin(th)*sp.cos(ph)
rfm.xxCart[1] = r*sp.sin(th)*sp.sin(ph)
rfm.xxCart[2] = r*sp.cos(th)
# Here we show off SymPy's pretty_print()
# and simplify() functions. Nice, no?
sp.pretty_print(sp.simplify(rfm.xxCart[0]))
```
⎛xx₀⎞
a⋅sin(xx₁)⋅cos(xx₂)⋅sinh⎜───⎟
⎝ s ⎠
<a id='define_geometric'></a>
# Step 2: Define geometric quantities, `ref_metric__hatted_quantities()` \[Back to [top](#toc)\]
$$\label{define_geometric}$$
Once `scalefactor_orthog[]` has been defined, the function **`ref_metric__hatted_quantities()`** within [reference_metric.py](../edit/reference_metric.py) can be called to define a number of geometric quantities useful for solving PDEs in curvilinear coordinate systems.
Adopting the notation of [Baumgarte, Montero, Cordero-Carrión, and Müller, PRD 87, 044026 (2012)](https://arxiv.org/abs/1211.6632), geometric quantities related to the reference metric are named "hatted" quantities. For example, the reference metric is defined as $\hat{g}_{ij}$=`ghatDD[i][j]`:
```python
rfm.ref_metric__hatted_quantities()
sp.pretty_print(sp.Matrix(rfm.ghatDD))
```
⎡1 0 0 ⎤
⎢ ⎥
⎢ 2 ⎥
⎢ ⎛ xx₀ -xx₀ ⎞ ⎥
⎢ ⎜ ─── ─────⎟ ⎥
⎢ 2 ⎜ s s ⎟ ⎥
⎢ a ⋅⎝ℯ - ℯ ⎠ ⎥
⎢0 ─────────────────── 0 ⎥
⎢ 4 ⎥
⎢ ⎥
⎢ 2 ⎥
⎢ ⎛ xx₀ -xx₀ ⎞ ⎥
⎢ ⎜ ─── ─────⎟ ⎥
⎢ 2 ⎜ s s ⎟ 2 ⎥
⎢ a ⋅⎝ℯ - ℯ ⎠ ⋅sin (xx₁)⎥
⎢0 0 ─────────────────────────────⎥
⎣ 4 ⎦
In addition to $\hat{g}_{ij}$, **`ref_metric__hatted_quantities()`** also provides:
* The rescaling "matrix" `ReDD[i][j]`, used for separating singular (due to chosen coordinate system) pieces of smooth rank-2 tensor components from the smooth parts, so that the smooth parts can be used within temporal and spatial differential operators.
* Inverse reference metric: $\hat{g}^{ij}$=`ghatUU[i][j]`.
* Reference metric determinant: $\det\left(\hat{g}_{ij}\right)$=`detgammahat`.
* First and second derivatives of the reference metric: $\hat{g}_{ij,k}$=`ghatDD_dD[i][j][k]`; $\hat{g}_{ij,kl}$=`ghatDD_dDD[i][j][k][l]`
* Christoffel symbols associated with the reference metric, $\hat{\Gamma}^i_{jk}$ = `GammahatUDD[i][j][k]` and their first derivatives $\hat{\Gamma}^i_{jk,l}$ = `GammahatUDD_dD[i][j][k][l]`
For example, the Christoffel symbol $\hat{\Gamma}^{xx_1}_{xx_2 xx_2}=\hat{\Gamma}^1_{22}$ is given by `GammahatUDD[1][2][2]`:
```python
sp.pretty_print(sp.simplify(rfm.GammahatUDD[1][2][2]))
```
-sin(2⋅xx₁)
────────────
2
Given the trigonometric identity $2\sin(x)\cos(x) = \sin(2x)$, notice that the above expression is equivalent to Eq. 18 of [Baumgarte, Montero, Cordero-Carrión, and Müller, PRD 87, 044026 (2012)](https://arxiv.org/abs/1211.6632). This is expected since the sinh-radial spherical coordinate system is equivalent to ordinary spherical coordinates in the angular components.
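As a quick sanity check (a sketch, assuming `rfm` still carries the sinh-radial quantities computed above), we can ask SymPy to verify the equivalence directly:

```python
# The identity 2*sin(x)*cos(x) = sin(2*x) implies that
# GammahatUDD[1][2][2] + sin(2*xx1)/2 should simplify to zero.
print(sp.simplify(rfm.GammahatUDD[1][2][2] + sp.sin(2*rfm.xx[1])/2))
```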
<a id='prescribed_ref_metric'></a>
# Step 3: Prescribed reference metrics in [`reference_metric.py`](../edit/reference_metric.py) \[Back to [top](#toc)\]
$$\label{prescribed_ref_metric}$$
One need not manually define scale factors or other quantities for reference metrics, as a number of prescribed reference metrics are already defined in [reference_metric.py](../edit/reference_metric.py). These can be accessed by first setting the parameter **reference_metric::CoordSystem** to one of the following, and then calling the function **`rfm.reference_metric()`**.
```python
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import grid as gri # NRPy+: Functions having to do with numerical grids
# Step 0a: Initialize parameters
thismodule = __name__
par.initialize_param(par.glb_param("char", thismodule, "CoordSystem", "Spherical"))
# Step 0b: Declare global variables
xx = gri.xx
xxCart = ixp.zerorank1(DIM=4) # Must be set in terms of xx[]s
Cart_to_xx = ixp.zerorank1(DIM=4) # Must be set in terms of xx[]s
Cartx,Carty,Cartz = sp.symbols("Cartx Carty Cartz", real=True)
Cart = [Cartx,Carty,Cartz]
xxSph = ixp.zerorank1(DIM=4) # Must be set in terms of xx[]s
scalefactor_orthog = ixp.zerorank1(DIM=4) # Must be set in terms of xx[]s
have_already_called_reference_metric_function = False
CoordSystem = par.parval_from_str("reference_metric::CoordSystem")
M_PI,M_SQRT1_2 = par.Cparameters("#define",thismodule,["M_PI","M_SQRT1_2"],"")
global xxmin
global xxmax
global UnitVectors
UnitVectors = ixp.zerorank2(DIM=3)
```
We will find the following plotting function useful for analyzing coordinate systems in which the radial coordinate is rescaled.
```python
def create_r_of_xx0_plots(CoordSystem, r_of_xx0,rprime_of_xx0):
import matplotlib.pyplot as plt # matplotlib: Python module specializing in plotting capabilities
plt.clf()
Nr = 20
dxx0 = 1.0 / float(Nr)
xx0s = []
rs = []
deltars = []
rprimes = []
for i in range(Nr):
xx0 = (float(i) + 0.5)*dxx0
xx0s.append(xx0)
rs.append( sp.sympify(str(r_of_xx0 ).replace("xx0",str(xx0))))
rprimes.append(sp.sympify(str(rprime_of_xx0).replace("xx0",str(xx0))))
if i>0:
deltars.append(sp.log(rs[i]-rs[i-1],10))
else:
deltars.append(sp.log(2*rs[0],10))
# fig, ax = plt.subplots()
fig = plt.figure(figsize=(12,12)) # 12 in x 12 in
ax = fig.add_subplot(221)
ax.set_title('$r(xx_0)$ for '+CoordSystem,fontsize='x-large')
ax.set_xlabel('$xx_0$',fontsize='x-large')
ax.set_ylabel('$r(xx_0)$',fontsize='x-large')
ax.plot(xx0s, rs, 'k.', label='Spacing between\nadjacent gridpoints')
# legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
# legend.get_frame().set_facecolor('C1')
ax = fig.add_subplot(222)
ax.set_title('Grid spacing for '+CoordSystem,fontsize='x-large')
ax.set_xlabel('$xx_0$',fontsize='x-large')
ax.set_ylabel(r'$\log_{10}(\Delta r)$',fontsize='x-large')
ax.plot(xx0s, deltars, 'k.', label='Spacing between\nadjacent gridpoints\nin $r(xx_0)$ plot')
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
ax = fig.add_subplot(223)
ax.set_title('$r\'(xx_0)$ for '+CoordSystem,fontsize='x-large')
ax.set_xlabel('$xx_0$',fontsize='x-large')
ax.set_ylabel('$r\'(xx_0)$',fontsize='x-large')
ax.plot(xx0s, rprimes, 'k.', label='Nr=96')
# legend = ax.legend(loc='upper left', shadow=True, fontsize='x-large')
# legend.get_frame().set_facecolor('C1')
plt.tight_layout(pad=2)
plt.show()
```
<a id='sphericallike'></a>
## Step 3.a: Spherical-like coordinate systems \[Back to [top](#toc)\]
$$\label{sphericallike}$$
<a id='spherical'></a>
### Step 3.a.i: **`reference_metric::CoordSystem = "Spherical"`** \[Back to [top](#toc)\]
$$\label{spherical}$$
Standard spherical coordinates, with $(r,\theta,\phi)=(xx_0,xx_1,xx_2)$
```python
if CoordSystem == "Spherical":
# Adding assumption real=True can help simplify expressions involving xx[0] & xx[1] below.
xx[0] = sp.symbols("xx0", real=True)
xx[1] = sp.symbols("xx1", real=True)
RMAX = par.Cparameters("REAL", thismodule, ["RMAX"],10.0)
xxmin = [sp.sympify(0), sp.sympify(0), -M_PI]
xxmax = [ RMAX, M_PI, M_PI]
r = xx[0]
th = xx[1]
ph = xx[2]
Cart_to_xx[0] = sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2)
Cart_to_xx[1] = sp.acos(Cartz / Cart_to_xx[0])
Cart_to_xx[2] = sp.atan2(Carty, Cartx)
xxSph[0] = r
xxSph[1] = th
xxSph[2] = ph
# Now define xCart, yCart, and zCart in terms of x0,xx[1],xx[2].
# Note that the relation between r and x0 is not necessarily trivial in SinhSpherical coordinates. See above.
xxCart[0] = xxSph[0]*sp.sin(xxSph[1])*sp.cos(xxSph[2])
xxCart[1] = xxSph[0]*sp.sin(xxSph[1])*sp.sin(xxSph[2])
xxCart[2] = xxSph[0]*sp.cos(xxSph[1])
scalefactor_orthog[0] = sp.diff(xxSph[0],xx[0])
scalefactor_orthog[1] = xxSph[0]
scalefactor_orthog[2] = xxSph[0]*sp.sin(xxSph[1])
# Set the unit vectors
UnitVectors = [[ sp.sin(xxSph[1])*sp.cos(xxSph[2]), sp.sin(xxSph[1])*sp.sin(xxSph[2]), sp.cos(xxSph[1])],
[ sp.cos(xxSph[1])*sp.cos(xxSph[2]), sp.cos(xxSph[1])*sp.sin(xxSph[2]), -sp.sin(xxSph[1])],
[ -sp.sin(xxSph[2]), sp.cos(xxSph[2]), sp.sympify(0) ]]
```
Now let's analyze $r(xx_0)$ for **"Spherical"** coordinates.
```python
%matplotlib inline
CoordSystem = "Spherical"
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric()
RMAX = 10.0
r_of_xx0 = sp.sympify(str(rfm.xxSph[0] ).replace("RMAX",str(RMAX)))
rprime_of_xx0 = sp.sympify(str(sp.diff(rfm.xxSph[0],rfm.xx[0])).replace("RMAX",str(RMAX)))
create_r_of_xx0_plots(CoordSystem, r_of_xx0,rprime_of_xx0)
```
<a id='sinhspherical'></a>
### Step 3.a.ii: **`reference_metric::CoordSystem = "SinhSpherical"`** \[Back to [top](#toc)\]
$$\label{sinhspherical}$$
Spherical coordinates, but with $$r(xx_0) = \text{AMPL} \frac{\sinh\left(\frac{xx_0}{\text{SINHW}}\right)}{\sinh\left(\frac{1}{\text{SINHW}}\right)}.$$
SinhSpherical uses two parameters: `AMPL` and `SINHW`. `AMPL` sets the outer boundary distance, and `SINHW` sets the focusing of the coordinate points near $r=0$: a small `SINHW` ($\sim 0.125$) will greatly focus the points near $r=0$, while a large `SINHW` will look more like an ordinary spherical polar coordinate system.
```python
if CoordSystem == "SinhSpherical":
xxmin = [sp.sympify(0), sp.sympify(0), -M_PI]
xxmax = [sp.sympify(1), M_PI, M_PI]
AMPL, SINHW = par.Cparameters("REAL",thismodule,["AMPL","SINHW"],[10.0,0.2])
# Set SinhSpherical radial coordinate by default; overwrite later if CoordSystem == "SinhSphericalv2".
r = AMPL * (sp.exp(xx[0] / SINHW) - sp.exp(-xx[0] / SINHW)) / \
(sp.exp(1 / SINHW) - sp.exp(-1 / SINHW))
th = xx[1]
ph = xx[2]
Cart_to_xx[0] = SINHW*sp.asinh(sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2)*sp.sinh(1/SINHW)/AMPL)
Cart_to_xx[1] = sp.acos(Cartz / sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2))
Cart_to_xx[2] = sp.atan2(Carty, Cartx)
xxSph[0] = r
xxSph[1] = th
xxSph[2] = ph
# Now define xCart, yCart, and zCart in terms of x0,xx[1],xx[2].
# Note that the relation between r and x0 is not necessarily trivial in SinhSpherical coordinates. See above.
xxCart[0] = xxSph[0]*sp.sin(xxSph[1])*sp.cos(xxSph[2])
xxCart[1] = xxSph[0]*sp.sin(xxSph[1])*sp.sin(xxSph[2])
xxCart[2] = xxSph[0]*sp.cos(xxSph[1])
scalefactor_orthog[0] = sp.diff(xxSph[0],xx[0])
scalefactor_orthog[1] = xxSph[0]
scalefactor_orthog[2] = xxSph[0]*sp.sin(xxSph[1])
# Set the unit vectors
UnitVectors = [[ sp.sin(xxSph[1])*sp.cos(xxSph[2]), sp.sin(xxSph[1])*sp.sin(xxSph[2]), sp.cos(xxSph[1])],
[ sp.cos(xxSph[1])*sp.cos(xxSph[2]), sp.cos(xxSph[1])*sp.sin(xxSph[2]), -sp.sin(xxSph[1])],
[ -sp.sin(xxSph[2]), sp.cos(xxSph[2]), sp.sympify(0) ]]
```
Now we explore $r(xx_0)$ for `SinhSpherical` assuming `AMPL=10.0` and `SINHW=0.2`:
```python
%matplotlib inline
CoordSystem = "SinhSpherical"
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric()
AMPL = 10.0
SINHW = 0.2
r_of_xx0 = sp.sympify(str(rfm.xxSph[0] ).replace("AMPL",str(AMPL)).replace("SINHW",str(SINHW)))
rprime_of_xx0 = sp.sympify(str(sp.diff(rfm.xxSph[0],rfm.xx[0])).replace("AMPL",str(AMPL)).replace("SINHW",str(SINHW)))
create_r_of_xx0_plots(CoordSystem, r_of_xx0,rprime_of_xx0)
```
<a id='sinhsphericalv2'></a>
### Step 3.a.iii: **`reference_metric::CoordSystem = "SinhSphericalv2"`** \[Back to [top](#toc)\]
$$\label{sinhsphericalv2}$$
The same as SinhSpherical coordinates, but with an additional `AMPL*const_dr*xx_0` term:
$$r(xx_0) = \text{AMPL} \left[\text{const_dr}\ xx_0 + \frac{\sinh\left(\frac{xx_0}{\text{SINHW}}\right)}{\sinh\left(\frac{1}{\text{SINHW}}\right)}\right].$$
```python
if CoordSystem == "SinhSphericalv2":
# SinhSphericalv2 adds the parameter "const_dr", which allows for a region near xx[0]=0 to have
# constant radial resolution of const_dr, provided the sinh() term does not dominate near xx[0]=0.
xxmin = [sp.sympify(0), sp.sympify(0), -M_PI]
xxmax = [sp.sympify(1), M_PI, M_PI]
AMPL, SINHW = par.Cparameters("REAL",thismodule,["AMPL","SINHW"],[10.0,0.2])
const_dr = par.Cparameters("REAL",thismodule,["const_dr"],0.0625)
r = AMPL*( const_dr*xx[0] + (sp.exp(xx[0] / SINHW) - sp.exp(-xx[0] / SINHW)) /
(sp.exp(1 / SINHW) - sp.exp(-1 / SINHW)) )
th = xx[1]
ph = xx[2]
# NO CLOSED-FORM EXPRESSION FOR RADIAL INVERSION.
# Cart_to_xx[0] = "NewtonRaphson"
# Cart_to_xx[1] = sp.acos(Cartz / sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2))
# Cart_to_xx[2] = sp.atan2(Carty, Cartx)
xxSph[0] = r
xxSph[1] = th
xxSph[2] = ph
# Now define xCart, yCart, and zCart in terms of x0,xx[1],xx[2].
# Note that the relation between r and x0 is not necessarily trivial in SinhSpherical coordinates. See above.
xxCart[0] = xxSph[0]*sp.sin(xxSph[1])*sp.cos(xxSph[2])
xxCart[1] = xxSph[0]*sp.sin(xxSph[1])*sp.sin(xxSph[2])
xxCart[2] = xxSph[0]*sp.cos(xxSph[1])
scalefactor_orthog[0] = sp.diff(xxSph[0],xx[0])
scalefactor_orthog[1] = xxSph[0]
scalefactor_orthog[2] = xxSph[0]*sp.sin(xxSph[1])
# Set the unit vectors
UnitVectors = [[ sp.sin(xxSph[1])*sp.cos(xxSph[2]), sp.sin(xxSph[1])*sp.sin(xxSph[2]), sp.cos(xxSph[1])],
[ sp.cos(xxSph[1])*sp.cos(xxSph[2]), sp.cos(xxSph[1])*sp.sin(xxSph[2]), -sp.sin(xxSph[1])],
[ -sp.sin(xxSph[2]), sp.cos(xxSph[2]), sp.sympify(0) ]]
```
Now we explore $r(xx_0)$ for `SinhSphericalv2` assuming `AMPL=10.0`, `SINHW=0.2`, and `const_dr=0.05`. Notice that the `const_dr` term significantly increases the grid spacing near $xx_0=0$ relative to `SinhSpherical` coordinates.
```python
%matplotlib inline
CoordSystem = "SinhSphericalv2"
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric()
AMPL = 10.0
SINHW = 0.2
const_dr = 0.05
r_of_xx0 = sp.sympify(str(rfm.xxSph[0] ).replace("AMPL",str(AMPL)).replace("SINHW",str(SINHW)).replace("const_dr",str(const_dr)))
rprime_of_xx0 = sp.sympify(str(sp.diff(rfm.xxSph[0],rfm.xx[0])).replace("AMPL",str(AMPL)).replace("SINHW",str(SINHW)).replace("const_dr",str(const_dr)))
create_r_of_xx0_plots(CoordSystem, r_of_xx0,rprime_of_xx0)
```
<a id='cylindricallike'></a>
## Step 3.b: Cylindrical-like coordinate systems \[Back to [top](#toc)\]
$$\label{cylindricallike}$$
<a id='cylindrical'></a>
### Step 3.b.i: **`reference_metric::CoordSystem = "Cylindrical"`** \[Back to [top](#toc)\]
$$\label{cylindrical}$$
Standard cylindrical coordinates, with $(\rho,\phi,z)=(xx_0,xx_1,xx_2)$
```python
if CoordSystem == "Cylindrical":
# Assuming the cylindrical radial coordinate
# is positive makes nice simplifications of
# unit vectors possible.
xx[0] = sp.symbols("xx0", real=True)
RHOMAX,ZMIN,ZMAX = par.Cparameters("REAL",thismodule,["RHOMAX","ZMIN","ZMAX"],[10.0,-10.0,10.0])
xxmin = [sp.sympify(0), -M_PI, ZMIN]
xxmax = [ RHOMAX, M_PI, ZMAX]
RHOCYL = xx[0]
PHICYL = xx[1]
ZCYL = xx[2]
Cart_to_xx[0] = sp.sqrt(Cartx ** 2 + Carty ** 2)
Cart_to_xx[1] = sp.atan2(Carty, Cartx)
Cart_to_xx[2] = Cartz
xxCart[0] = RHOCYL*sp.cos(PHICYL)
xxCart[1] = RHOCYL*sp.sin(PHICYL)
xxCart[2] = ZCYL
xxSph[0] = sp.sqrt(RHOCYL**2 + ZCYL**2)
xxSph[1] = sp.acos(ZCYL / xxSph[0])
xxSph[2] = PHICYL
scalefactor_orthog[0] = sp.diff(RHOCYL,xx[0])
scalefactor_orthog[1] = RHOCYL
scalefactor_orthog[2] = sp.diff(ZCYL,xx[2])
# Set the unit vectors
UnitVectors = [[ sp.cos(PHICYL), sp.sin(PHICYL), sp.sympify(0)],
[-sp.sin(PHICYL), sp.cos(PHICYL), sp.sympify(0)],
[ sp.sympify(0), sp.sympify(0), sp.sympify(1)]]
```
Next let's plot **"Cylindrical"** coordinates.
```python
%matplotlib inline
import numpy as np # NumPy: A numerical methods module for Python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
R = np.linspace(0, 2, 24)
h = 2
u = np.linspace(0, 2*np.pi, 24)
x = np.outer(R, np.cos(u))
y = np.outer(R, np.sin(u))
z = h * np.outer(np.ones(np.size(u)), np.ones(np.size(u)))
r = np.arange(0,2,0.25)
theta = 2*np.pi*r*0
fig = plt.figure(figsize=(12,12)) # 12 in x 12 in
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1 = plt.axes(projection='polar')
ax1.set_rmax(2)
ax1.set_rgrids(r,labels=[])
thetas = np.linspace(0,360,24, endpoint=True)
ax1.set_thetagrids(thetas,labels=[])
# ax.grid(True)
ax1.grid(True,linewidth='1.0')
ax1.set_title("Top Down View")
plt.show()
ax2 = plt.axes(projection='3d', xticklabels=[], yticklabels=[], zticklabels=[])
#ax2.plot_surface(x,y,z, alpha=.75, cmap = 'viridis') # z in case of disk which is parallel to XY plane is constant and you can directly use h
x=np.linspace(-2, 2, 100)
z=np.linspace(-2, 2, 100)
Xc, Zc=np.meshgrid(x, z)
Yc = np.sqrt(4-Xc**2)
rstride = 10
cstride = 10
ax2.plot_surface(Xc, Yc, Zc, alpha=1.0, rstride=rstride, cstride=cstride, cmap = 'viridis')
ax2.plot_surface(Xc, -Yc, Zc, alpha=1.0, rstride=rstride, cstride=cstride, cmap = 'viridis')
ax2.set_title("Standard Cylindrical Grid in 3D")
ax2.grid(False)
plt.axis('off')
plt.show()
```
<a id='sinhcylindrical'></a>
### Step 3.b.ii" **`reference_metric::CoordSystem = "SinhCylindrical"`** \[Back to [top](#toc)\]
$$\label{sinhcylindrical}$$
Cylindrical coordinates, but with
$$\rho(xx_0) = \text{AMPLRHO} \frac{\sinh\left(\frac{xx_0}{\text{SINHWRHO}}\right)}{\sinh\left(\frac{1}{\text{SINHWRHO}}\right)}$$
and
$$z(xx_2) = \text{AMPLZ} \frac{\sinh\left(\frac{xx_2}{\text{SINHWZ}}\right)}{\sinh\left(\frac{1}{\text{SINHWZ}}\right)}$$
```python
if CoordSystem == "SinhCylindrical":
# Assuming the cylindrical radial coordinate
# is positive makes nice simplifications of
# unit vectors possible.
xx[0] = sp.symbols("xx0", real=True)
xxmin = [sp.sympify(0), -M_PI, sp.sympify(-1)]
xxmax = [sp.sympify(1), M_PI, sp.sympify(+1)]
AMPLRHO, SINHWRHO, AMPLZ, SINHWZ = par.Cparameters("REAL",thismodule,
["AMPLRHO","SINHWRHO","AMPLZ","SINHWZ"],
[ 10.0, 0.2, 10.0, 0.2])
# Set SinhCylindrical radial & z coordinates by default; overwrite later if CoordSystem == "SinhCylindricalv2".
RHOCYL = AMPLRHO * (sp.exp(xx[0] / SINHWRHO) - sp.exp(-xx[0] / SINHWRHO)) / (sp.exp(1 / SINHWRHO) - sp.exp(-1 / SINHWRHO))
# phi coordinate remains unchanged.
PHICYL = xx[1]
ZCYL = AMPLZ * (sp.exp(xx[2] / SINHWZ) - sp.exp(-xx[2] / SINHWZ)) / (sp.exp(1 / SINHWZ) - sp.exp(-1 / SINHWZ))
Cart_to_xx[0] = SINHWRHO*sp.asinh(sp.sqrt(Cartx ** 2 + Carty ** 2)*sp.sinh(1/SINHWRHO)/AMPLRHO)
Cart_to_xx[1] = sp.atan2(Carty, Cartx)
Cart_to_xx[2] = SINHWZ*sp.asinh(Cartz*sp.sinh(1/SINHWZ)/AMPLZ)
xxCart[0] = RHOCYL*sp.cos(PHICYL)
xxCart[1] = RHOCYL*sp.sin(PHICYL)
xxCart[2] = ZCYL
xxSph[0] = sp.sqrt(RHOCYL**2 + ZCYL**2)
xxSph[1] = sp.acos(ZCYL / xxSph[0])
xxSph[2] = PHICYL
scalefactor_orthog[0] = sp.diff(RHOCYL,xx[0])
scalefactor_orthog[1] = RHOCYL
scalefactor_orthog[2] = sp.diff(ZCYL,xx[2])
# Set the unit vectors
UnitVectors = [[ sp.cos(PHICYL), sp.sin(PHICYL), sp.sympify(0)],
[-sp.sin(PHICYL), sp.cos(PHICYL), sp.sympify(0)],
[ sp.sympify(0), sp.sympify(0), sp.sympify(1)]]
```
Next let's plot **"SinhCylindrical"** coordinates.
```python
plt.clf()
fig = plt.figure()
ax = plt.subplot(1,1,1, projection='polar')
ax.set_rmax(2)
Nr = 20
xx0s = np.linspace(0,2,Nr, endpoint=True) + 1.0/(2.0*Nr)
rs = []
AMPLRHO = 1.0
SINHW = 0.4
for i in range(Nr):
rs.append(AMPLRHO * (np.exp(xx0s[i] / SINHW) - np.exp(-xx0s[i] / SINHW)) / \
(np.exp(1.0 / SINHW) - np.exp(-1.0 / SINHW)))
ax.set_rgrids(rs,labels=[])
thetas = np.linspace(0,360,25, endpoint=True)
ax.set_thetagrids(thetas,labels=[])
# ax.grid(True)
ax.grid(True,linewidth='1.0')
plt.show()
```
<a id='sinhcylindricalv2'></a>
### Step 3.b.iii: **`reference_metric::CoordSystem = "SinhCylindricalv2"`** \[Back to [top](#toc)\]
$$\label{sinhcylindricalv2}$$
Cylindrical coordinates, but with
$$\rho(xx_0) = \text{AMPLRHO} \left[\text{const_drho}\ xx_0 + \frac{\sinh\left(\frac{xx_0}{\text{SINHWRHO}}\right)}{\sinh\left(\frac{1}{\text{SINHWRHO}}\right)}\right]$$
and
$$z(xx_2) = \text{AMPLZ} \left[\text{const_dz}\ xx_2 + \frac{\sinh\left(\frac{xx_2}{\text{SINHWZ}}\right)}{\sinh\left(\frac{1}{\text{SINHWZ}}\right)}\right]$$
```python
if CoordSystem == "SinhCylindricalv2":
# Assuming the cylindrical radial coordinate
# is positive makes nice simplifications of
# unit vectors possible.
xx[0] = sp.symbols("xx0", real=True)
# SinhCylindricalv2 adds the parameters "const_drho", "const_dz", which allows for regions near xx[0]=0
# and xx[2]=0 to have constant rho and z resolution of const_drho and const_dz, provided the sinh() terms
# do not dominate near xx[0]=0 and xx[2]=0.
xxmin = [sp.sympify(0), -M_PI, sp.sympify(-1)]
xxmax = [sp.sympify(1), M_PI, sp.sympify(+1)]
AMPLRHO, SINHWRHO, AMPLZ, SINHWZ = par.Cparameters("REAL",thismodule,
["AMPLRHO","SINHWRHO","AMPLZ","SINHWZ"],
[ 10.0, 0.2, 10.0, 0.2])
const_drho, const_dz = par.Cparameters("REAL",thismodule,["const_drho","const_dz"],[0.0625,0.0625])
RHOCYL = AMPLRHO * ( const_drho*xx[0] + (sp.exp(xx[0] / SINHWRHO) - sp.exp(-xx[0] / SINHWRHO)) / (sp.exp(1 / SINHWRHO) - sp.exp(-1 / SINHWRHO)) )
PHICYL = xx[1]
ZCYL = AMPLZ * ( const_dz *xx[2] + (sp.exp(xx[2] / SINHWZ ) - sp.exp(-xx[2] / SINHWZ )) / (sp.exp(1 / SINHWZ ) - sp.exp(-1 / SINHWZ )) )
# NO CLOSED-FORM EXPRESSION FOR RADIAL OR Z INVERSION.
# Cart_to_xx[0] = "NewtonRaphson"
# Cart_to_xx[1] = sp.atan2(Carty, Cartx)
# Cart_to_xx[2] = "NewtonRaphson"
xxCart[0] = RHOCYL*sp.cos(PHICYL)
xxCart[1] = RHOCYL*sp.sin(PHICYL)
xxCart[2] = ZCYL
xxSph[0] = sp.sqrt(RHOCYL**2 + ZCYL**2)
xxSph[1] = sp.acos(ZCYL / xxSph[0])
xxSph[2] = PHICYL
scalefactor_orthog[0] = sp.diff(RHOCYL,xx[0])
scalefactor_orthog[1] = RHOCYL
scalefactor_orthog[2] = sp.diff(ZCYL,xx[2])
# Set the unit vectors
UnitVectors = [[ sp.cos(PHICYL), sp.sin(PHICYL), sp.sympify(0)],
[-sp.sin(PHICYL), sp.cos(PHICYL), sp.sympify(0)],
[ sp.sympify(0), sp.sympify(0), sp.sympify(1)]]
```
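Since no closed-form inversion exists (hence the `NewtonRaphson` placeholders in the commented-out `Cart_to_xx[]` lines above), $xx_2$ must be recovered from $z$ numerically. Here is a hedged standalone sketch of such a 1D Newton-Raphson solve; it is illustrative only, not the NRPy+ implementation:
```python
# Standalone sketch: invert z(xx2) for SinhCylindricalv2 with Newton-Raphson.
# z(xx2) is strictly increasing, so the iteration converges from xx2 = 0.
import numpy as np

AMPLZ, SINHWZ, const_dz = 10.0, 0.2, 0.0625

def z_of_xx2(xx2):
    return AMPLZ*(const_dz*xx2 + np.sinh(xx2/SINHWZ)/np.sinh(1.0/SINHWZ))

def dz_dxx2(xx2):
    return AMPLZ*(const_dz + np.cosh(xx2/SINHWZ)/(SINHWZ*np.sinh(1.0/SINHWZ)))

def xx2_of_z(z_target, xx2=0.0, tol=1e-13, maxit=100):
    for _ in range(maxit):
        dxx2 = (z_of_xx2(xx2) - z_target)/dz_dxx2(xx2)
        xx2 -= dxx2
        if abs(dxx2) < tol:
            break
    return xx2

print(np.isclose(xx2_of_z(z_of_xx2(0.6)), 0.6))  # expect: True
```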
For example, let's set up **`SinhCylindricalv2`** coordinates and output the Christoffel symbol $\hat{\Gamma}^{xx_2}_{xx_2 xx_2}$, or more simply $\hat{\Gamma}^2_{22}$:
```python
par.set_parval_from_str("reference_metric::CoordSystem","SinhCylindricalv2")
rfm.reference_metric()
sp.pretty_print(sp.simplify(rfm.GammahatUDD[2][2][2]))
```
⎛ 2⋅xx₂ ⎞ 1
⎜ ────── ⎟ ──────
⎜ SINHWZ ⎟ SINHWZ
-⎝ℯ - 1⎠⋅ℯ
────────────────────────────────────────────────────────────────────────
⎛ ⎛ 2 ⎞ xx₂ ⎛ 2⋅xx₂ ⎞ 1 ⎞
⎜ ⎜ ────── ⎟ ────── ⎜ ────── ⎟ ──────⎟
⎜ ⎜ SINHWZ ⎟ SINHWZ ⎜ SINHWZ ⎟ SINHWZ⎟
SINHWZ⋅⎝- SINHWZ⋅const_dz⋅⎝ℯ - 1⎠⋅ℯ - ⎝ℯ + 1⎠⋅ℯ ⎠
As we will soon see, defining these "hatted" quantities will be quite useful when expressing hyperbolic ([wave-equation](https://en.wikipedia.org/wiki/Wave_equation)-like) PDEs in non-Cartesian coordinate systems.
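As a cross-check on such hatted quantities, the Christoffel symbols can be recomputed independently from the diagonal reference metric $\hat{g}_{ij} = \text{diag}(h_0^2, h_1^2, h_2^2)$ built from the scale factors. A minimal standalone SymPy sketch for plain Cylindrical coordinates (scale factors $1, \rho, 1$; not part of the NRPy+ module):
```python
# Compute Gammahat^i_{jk} = (1/2) ghat^{il} (d_j ghat_{lk} + d_k ghat_{lj} - d_l ghat_{jk})
# directly from ghat = diag(1, rho^2, 1) for plain Cylindrical coordinates.
import sympy as sp

rho, phi, z = sp.symbols("rho phi z", positive=True)
xx   = [rho, phi, z]
ghat = sp.diag(1, rho**2, 1)
ginv = ghat.inv()

Gammahat = [[[sp.simplify(sp.Rational(1, 2)*sum(
                 ginv[i, l]*(sp.diff(ghat[l, k], xx[j])
                           + sp.diff(ghat[l, j], xx[k])
                           - sp.diff(ghat[j, k], xx[l])) for l in range(3)))
              for k in range(3)] for j in range(3)] for i in range(3)]

print(Gammahat[0][1][1])  # expect: -rho   (Gammahat^rho_{phi phi})
print(Gammahat[1][0][1])  # expect: 1/rho  (Gammahat^phi_{rho phi})
```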
<a id='cartesianlike'></a>
## Step 3.c: Cartesian-like coordinate systems \[Back to [top](#toc)\]
$$\label{cartesianlike}$$
<a id='cartesian'></a>
### Step 3.c.i: **`reference_metric::CoordSystem = "Cartesian"`** \[Back to [top](#toc)\]
$$\label{cartesian}$$
Standard Cartesian coordinates, with $(x,y,z)=$ `(xx0,xx1,xx2)`
```python
if CoordSystem == "Cartesian":
xmin, xmax, ymin, ymax, zmin, zmax = par.Cparameters("REAL",thismodule,
["xmin","xmax","ymin","ymax","zmin","zmax"],
[ -10.0, 10.0, -10.0, 10.0, -10.0, 10.0])
xxmin = ["xmin", "ymin", "zmin"]
xxmax = ["xmax", "ymax", "zmax"]
xxCart[0] = xx[0]
xxCart[1] = xx[1]
xxCart[2] = xx[2]
xxSph[0] = sp.sqrt(xx[0] ** 2 + xx[1] ** 2 + xx[2] ** 2)
xxSph[1] = sp.acos(xx[2] / xxSph[0])
xxSph[2] = sp.atan2(xx[1], xx[0])
Cart_to_xx[0] = Cartx
Cart_to_xx[1] = Carty
Cart_to_xx[2] = Cartz
scalefactor_orthog[0] = sp.sympify(1)
scalefactor_orthog[1] = sp.sympify(1)
scalefactor_orthog[2] = sp.sympify(1)
# Set the transpose of the matrix of unit vectors
UnitVectors = [[sp.sympify(1), sp.sympify(0), sp.sympify(0)],
[sp.sympify(0), sp.sympify(1), sp.sympify(0)],
[sp.sympify(0), sp.sympify(0), sp.sympify(1)]]
```
```python
%matplotlib inline
import numpy as np # NumPy: A numerical methods module for Python
import matplotlib.pyplot as plt # matplotlib: Python module specializing in plotting capabilities
plt.clf()
fig = plt.figure()
ax = fig.gca()
Nx = 16
ax.set_xticks(np.arange(0, 1., 1./Nx))
ax.set_yticks(np.arange(0, 1., 1./Nx))
for tick in ax.get_xticklabels():
tick.set_rotation(60)
# plt.scatter(x, y)
ax.set_aspect('equal')
plt.grid()
# plt.savefig("Cartgrid.png",dpi=300)
plt.show()
# plt.close(fig)
```
<a id='sinhcartesian'></a>
### Step 3.c.ii: **`reference_metric::CoordSystem = "SinhCartesian"`** \[Back to [top](#toc)\]
$$\label{sinhcartesian}$$
In this coordinate system, all three coordinates behave like the $z$-coordinate in SinhCylindrical coordinates, i.e.
$$
\begin{align}
x(xx_0) &= \text{AMPLX} \left[\frac{\sinh\left(\frac{xx_0}{\text{SINHWZ}}\right)}{\sinh\left(\frac{1}{\text{SINHWX}}\right)}\right]\ ,\\
y(xx_1) &= \text{AMPLY} \left[\frac{\sinh\left(\frac{xx_1}{\text{SINHWZ}}\right)}{\sinh\left(\frac{1}{\text{SINHWY}}\right)}\right]\ ,\\
z(xx_2) &= \text{AMPLZ} \left[\frac{\sinh\left(\frac{xx_2}{\text{SINHWZ}}\right)}{\sinh\left(\frac{1}{\text{SINHWZ}}\right)}\right]\ .
\end{align}
$$
```python
if CoordSystem == "SinhCartesian":
    # SinhCartesian coordinates allow us to push the outer boundary of the
# computational domain a lot further away, while keeping reasonably high
# resolution towards the center of the computational grid.
# Set default values for min and max (x,y,z)
xxmin = [sp.sympify(-1), sp.sympify(-1), sp.sympify(-1)]
xxmax = [sp.sympify(+1), sp.sympify(+1), sp.sympify(+1)]
# Declare basic parameters of the coordinate system and their default values
AMPLX,SINHWX,AMPLY,SINHWY,AMPLZ,SINHWZ = par.Cparameters("REAL",thismodule,
["AMPLX","SINHWX","AMPLY","SINHWY","AMPLZ","SINHWZ"],
[ 10.0, 0.2, 10.0, 0.2, 10.0, 0.2])
# Compute (xxCart0,xxCart1,xxCart2) from (xx0,xx1,xx2)
xxCart[0] = AMPLX*(sp.exp(xx[0]/SINHWX) - sp.exp(-xx[0]/SINHWX))/(sp.exp(1/SINHWX) - sp.exp(-1/SINHWX))
xxCart[1] = AMPLY*(sp.exp(xx[1]/SINHWY) - sp.exp(-xx[1]/SINHWY))/(sp.exp(1/SINHWY) - sp.exp(-1/SINHWY))
xxCart[2] = AMPLZ*(sp.exp(xx[2]/SINHWZ) - sp.exp(-xx[2]/SINHWZ))/(sp.exp(1/SINHWZ) - sp.exp(-1/SINHWZ))
    # Compute (r,th,ph) from (xxCart0,xxCart1,xxCart2)
xxSph[0] = sp.sqrt(xxCart[0] ** 2 + xxCart[1] ** 2 + xxCart[2] ** 2)
xxSph[1] = sp.acos(xxCart[2] / xxSph[0])
xxSph[2] = sp.atan2(xxCart[1], xxCart[0])
# Compute (xx0,xx1,xx2) from (Cartx,Carty,Cartz)
    Cart_to_xx[0] = SINHWX*sp.asinh(Cartx*(sp.exp(1/SINHWX) - sp.exp(-1/SINHWX))/(2*AMPLX))
    Cart_to_xx[1] = SINHWY*sp.asinh(Carty*(sp.exp(1/SINHWY) - sp.exp(-1/SINHWY))/(2*AMPLY))
    Cart_to_xx[2] = SINHWZ*sp.asinh(Cartz*(sp.exp(1/SINHWZ) - sp.exp(-1/SINHWZ))/(2*AMPLZ))
# Compute scale factors
scalefactor_orthog[0] = sp.diff(xxCart[0],xx[0])
scalefactor_orthog[1] = sp.diff(xxCart[1],xx[1])
scalefactor_orthog[2] = sp.diff(xxCart[2],xx[2])
# Set the transpose of the matrix of unit vectors
UnitVectors = [[sp.sympify(1), sp.sympify(0), sp.sympify(0)],
[sp.sympify(0), sp.sympify(1), sp.sympify(0)],
[sp.sympify(0), sp.sympify(0), sp.sympify(1)]]
```
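A quick standalone round-trip check of the forward map and the `Cart_to_xx[]` inversion above (plain NumPy; parameter values are the defaults):
```python
# Standalone sketch: round-trip xx0 -> x -> xx0 for SinhCartesian.
import numpy as np

AMPLX, SINHWX = 10.0, 0.2
xx0 = -0.4
x = AMPLX * np.sinh(xx0/SINHWX) / np.sinh(1.0/SINHWX)
xx0_back = SINHWX * np.arcsinh(x * np.sinh(1.0/SINHWX) / AMPLX)
print(np.isclose(xx0, xx0_back))  # expect: True
```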
```python
%matplotlib inline
import numpy as np # NumPy: A numerical methods module for Python
import matplotlib.pyplot as plt # matplotlib: Python module specializing in plotting capabilities
plt.clf()
fig = plt.figure()
ax = fig.gca()
# Set plot title
ax.set_title(r"$z=0$ slice of the 3D grid")
# Set SINH parameters. Here we assume:
#
# AMPLX = AMPLY = SINHA
# SINHWX = SINHWY = SINHW
SINHA = 10.0
SINHW = 0.3
# Set number of points. We assume the same point
# distribution along the (x,y)-directions
Nxxs = 20
xxis = np.linspace(-1,1,Nxxs, endpoint=True)
# Compute axis ticks by evaluating x and y using SinhCartesian coordinates
axis_ticks = []
for i in range(Nxxs):
axis_ticks.append(SINHA * (np.exp(xxis[i] / SINHW) - np.exp(-xxis[i] / SINHW)) / \
(np.exp(1.0 / SINHW) - np.exp(-1.0 / SINHW)))
# Set the axis ticks
ax.set_xticks(axis_ticks)
ax.set_yticks(axis_ticks)
# Set x and y labels. Initialize array with empty strings
labelsx = ["" for i in range(Nxxs)]
labelsy = ["" for i in range(Nxxs)]
# Set x_min and x_max tick label
labelsx[0] = r"-AMPLX"
labelsx[-1] = r"AMPLX"
# Set y_min and y_max tick label
labelsy[0] = r"-AMPLY"
labelsy[-1] = r"AMPLY"
# Set tick labels
ax.set_xticklabels(labelsx)
ax.set_yticklabels(labelsy)
# Rotate x labels by 60 degrees
for tick in ax.get_xticklabels():
tick.set_rotation(60)
# Draw the x=0 and y=0 ticklabel
ax.text(0,-11,"0",ha="center",va="center")
ax.text(-11,0,"0",ha="center",va="center")
# plt.scatter(x, y)
ax.set_aspect('equal')
plt.grid()
# plt.savefig("Cartgrid.png",dpi=300)
plt.show()
# plt.close(fig)
```
<a id='prolatespheroidal'></a>
## Step 3.d: [Prolate spheroidal](https://en.wikipedia.org/wiki/Prolate_spheroidal_coordinates)-like coordinate systems \[Back to [top](#toc)\]
$$\label{prolatespheroidal}$$
<a id='symtp'></a>
### Step 3.d.i: **`reference_metric::CoordSystem = "SymTP"`** \[Back to [top](#toc)\]
$$\label{symtp}$$
Symmetric TwoPuncture coordinates, with $(\rho,\phi,z)=(xx_0\sin(xx_1), xx_2, \sqrt{xx_0^2 + \text{bScale}^2}\cos(xx_1))$
```python
if CoordSystem == "SymTP":
var1, var2= sp.symbols('var1 var2',real=True)
bScale, AW, AMAX, RHOMAX, ZMIN, ZMAX = par.Cparameters("REAL",thismodule,
["bScale","AW","AMAX","RHOMAX","ZMIN","ZMAX"],
[0.5, 0.2, 10.0, 10.0, -10.0, 10.0])
# Assuming xx0, xx1, and bScale
# are positive makes nice simplifications of
# unit vectors possible.
xx[0],xx[1] = sp.symbols("xx0 xx1", real=True)
xxmin = [sp.sympify(0), sp.sympify(0),-M_PI]
xxmax = [ AMAX, M_PI, M_PI]
AA = xx[0]
if CoordSystem == "SinhSymTP":
AA = (sp.exp(xx[0]/AW)-sp.exp(-xx[0]/AW))/2
var1 = sp.sqrt(AA**2 + (bScale * sp.sin(xx[1]))**2)
var2 = sp.sqrt(AA**2 + bScale**2)
RHOSYMTP = AA*sp.sin(xx[1])
PHSYMTP = xx[2]
ZSYMTP = var2*sp.cos(xx[1])
xxCart[0] = AA *sp.sin(xx[1])*sp.cos(xx[2])
xxCart[1] = AA *sp.sin(xx[1])*sp.sin(xx[2])
xxCart[2] = ZSYMTP
xxSph[0] = sp.sqrt(RHOSYMTP**2 + ZSYMTP**2)
xxSph[1] = sp.acos(ZSYMTP / xxSph[0])
xxSph[2] = PHSYMTP
rSph = sp.sqrt(Cartx ** 2 + Carty ** 2 + Cartz ** 2)
thSph = sp.acos(Cartz / rSph)
phSph = sp.atan2(Carty, Cartx)
# Mathematica script to compute Cart_to_xx[]
# AA = x1;
# var2 = Sqrt[AA^2 + bScale^2];
# RHOSYMTP = AA*Sin[x2];
# ZSYMTP = var2*Cos[x2];
# Solve[{rSph == Sqrt[RHOSYMTP^2 + ZSYMTP^2],
# thSph == ArcCos[ZSYMTP/Sqrt[RHOSYMTP^2 + ZSYMTP^2]],
# phSph == x3},
# {x1, x2, x3}]
Cart_to_xx[0] = sp.sqrt(-bScale**2 + rSph**2 +
sp.sqrt(bScale**4 + 2*bScale**2*rSph**2 + rSph**4 -
4*bScale**2*rSph**2*sp.cos(thSph)**2))*M_SQRT1_2 # M_SQRT1_2 = 1/sqrt(2); define this way for UnitTesting
# The sign() function in the following expression ensures the correct root is taken.
Cart_to_xx[1] = sp.acos(sp.sign(Cartz)*(
sp.sqrt(1 + rSph**2/bScale**2 -
sp.sqrt(bScale**4 + 2*bScale**2*rSph**2 + rSph**4 -
4*bScale**2*rSph**2*sp.cos(thSph)**2)/bScale**2)*M_SQRT1_2)) # M_SQRT1_2 = 1/sqrt(2); define this way for UnitTesting
Cart_to_xx[2] = phSph
```
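The Mathematica-derived inversion can be verified numerically. Below is a hedged standalone NumPy sketch (the sample point and `bScale` value are arbitrary) that maps a SymTP point to Cartesian coordinates and back:
```python
# Standalone sketch: numerical round-trip (xx0,xx1,xx2) -> (x,y,z) -> (xx0,xx1,xx2)
# for SymTP, using the closed-form inversion above.
import numpy as np

bScale = 0.5
xx0, xx1, xx2 = 1.3, 0.7, 0.4                      # arbitrary sample point
rho = xx0*np.sin(xx1)
z   = np.sqrt(xx0**2 + bScale**2)*np.cos(xx1)
x, y = rho*np.cos(xx2), rho*np.sin(xx2)

rSph  = np.sqrt(x**2 + y**2 + z**2)
thSph = np.arccos(z/rSph)
disc  = np.sqrt(bScale**4 + 2*bScale**2*rSph**2 + rSph**4
                - 4*bScale**2*rSph**2*np.cos(thSph)**2)
xx0_back = np.sqrt(-bScale**2 + rSph**2 + disc) / np.sqrt(2.0)
xx1_back = np.arccos(np.sign(z)*np.sqrt(1 + rSph**2/bScale**2 - disc/bScale**2)/np.sqrt(2.0))
xx2_back = np.arctan2(y, x)
print(np.allclose([xx0_back, xx1_back, xx2_back], [xx0, xx1, xx2]))  # expect: True
```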
<a id='sinhsymtp'></a>
### Step 3.d.ii: **`reference_metric::CoordSystem = "SinhSymTP"`** \[Back to [top](#toc)\]
$$\label{sinhsymtp}$$
Symmetric TwoPuncture coordinates, but with the radial-like coordinate remapped as $$xx_0 \to AA = \text{AMAX}\,\frac{\sinh\left(\frac{xx_0}{\text{AMAX}\ \text{SINHWAA}}\right)}{\sinh\left(\frac{1}{\text{SINHWAA}}\right)}$$
```python
if CoordSystem == "SinhSymTP":
var1, var2= sp.symbols('var1 var2',real=True)
    # SINHWAA (the sinh width for AA) is used below, so declare it here too
    # (the default value chosen here is illustrative).
    bScale, AW, AMAX, RHOMAX, ZMIN, ZMAX, SINHWAA = par.Cparameters("REAL",thismodule,
                                                                    ["bScale","AW","AMAX","RHOMAX","ZMIN","ZMAX","SINHWAA"],
                                                                    [0.5, 0.2, 10.0, 10.0, -10.0, 10.0, 0.065])
# Assuming xx0, xx1, and bScale
# are positive makes nice simplifications of
# unit vectors possible.
xx[0],xx[1] = sp.symbols("xx0 xx1", real=True)
xxmin = [sp.sympify(0), sp.sympify(0),-M_PI]
xxmax = [ AMAX, M_PI, M_PI]
AA = xx[0]
if CoordSystem == "SinhSymTP":
# With xxmax[0] == AMAX, sinh(xx0/AMAX) will evaluate to a number between 0 and 1.
# Similarly, sinh(xx0/(AMAX*SINHWAA)) / sinh(1/SINHWAA) will also evaluate to a number between 0 and 1.
# Then AA = AMAX*sinh(xx0/(AMAX*SINHWAA)) / sinh(1/SINHWAA) will evaluate to a number between 0 and AMAX.
        AA = AMAX * (sp.exp(xx[0] / (AMAX*SINHWAA)) - sp.exp(-xx[0] / (AMAX*SINHWAA))) / (sp.exp(1 / SINHWAA) - sp.exp(-1 / SINHWAA))
var1 = sp.sqrt(AA**2 + (bScale * sp.sin(xx[1]))**2)
var2 = sp.sqrt(AA**2 + bScale**2)
RHOSYMTP = AA*sp.sin(xx[1])
PHSYMTP = xx[2]
ZSYMTP = var2*sp.cos(xx[1])
xxCart[0] = AA *sp.sin(xx[1])*sp.cos(xx[2])
xxCart[1] = AA *sp.sin(xx[1])*sp.sin(xx[2])
xxCart[2] = ZSYMTP
xxSph[0] = sp.sqrt(RHOSYMTP**2 + ZSYMTP**2)
xxSph[1] = sp.acos(ZSYMTP / xxSph[0])
xxSph[2] = PHSYMTP
scalefactor_orthog[0] = sp.diff(AA,xx[0]) * var1 / var2
scalefactor_orthog[1] = var1
scalefactor_orthog[2] = AA * sp.sin(xx[1])
# Set the transpose of the matrix of unit vectors
UnitVectors = [[sp.sin(xx[1]) * sp.cos(xx[2]) * var2 / var1,
sp.sin(xx[1]) * sp.sin(xx[2]) * var2 / var1,
AA * sp.cos(xx[1]) / var1],
[AA * sp.cos(xx[1]) * sp.cos(xx[2]) / var1,
AA * sp.cos(xx[1]) * sp.sin(xx[2]) / var1,
-sp.sin(xx[1]) * var2 / var1],
[-sp.sin(xx[2]), sp.cos(xx[2]), sp.sympify(0)]]
```
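As a consistency check, the unit-vector matrix above should be orthonormal, i.e. $U U^T = I$, given the definitions of `var1` and `var2`. A standalone SymPy sketch (not the NRPy+ module) confirms this:
```python
# Verify that the SinhSymTP unit-vector matrix is orthonormal (U * U^T = identity),
# given var1, var2 as defined above.
import sympy as sp

AA, bScale, xx1, xx2 = sp.symbols("AA bScale xx1 xx2", positive=True)
var1 = sp.sqrt(AA**2 + (bScale*sp.sin(xx1))**2)
var2 = sp.sqrt(AA**2 + bScale**2)
U = sp.Matrix([
    [sp.sin(xx1)*sp.cos(xx2)*var2/var1, sp.sin(xx1)*sp.sin(xx2)*var2/var1, AA*sp.cos(xx1)/var1],
    [AA*sp.cos(xx1)*sp.cos(xx2)/var1,   AA*sp.cos(xx1)*sp.sin(xx2)/var1,  -sp.sin(xx1)*var2/var1],
    [-sp.sin(xx2),                      sp.cos(xx2),                       0]])
print(sp.simplify(U*U.T))  # expect: the 3x3 identity matrix
```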
<a id='latex_pdf_output'></a>
# Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-Reference_Metric.pdf](Tutorial-Reference_Metric.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```python
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Reference_Metric")
```
Created Tutorial-Reference_Metric.tex, and compiled LaTeX file to PDF file
Tutorial-Reference_Metric.pdf
|
lemma tendsto_0_le: "(f \<longlongrightarrow> 0) F \<Longrightarrow> eventually (\<lambda>x. norm (g x) \<le> norm (f x) * K) F \<Longrightarrow> (g \<longlongrightarrow> 0) F"
|
abstract type AbstractProblem{dType <: Number, tType <: Real, arrayType <: AbstractArray{dType}} end
"Returns the parent equation object of the problem."
equation(prob::AbstractProblem) = error("equation() not implemented for ", typeof(prob), ".")
"Returns a NamedTuple containing all functions (e.g. vector fields) provided by the equation."
functions(prob::AbstractProblem) = functions(equation(prob))
"Returns a NamedTuple containing all solutions provided by the equation."
solutions(prob::AbstractProblem) = solutions(equation(prob))
"Returns a NamedTuple containing all invariants provided by the equation."
invariants(prob::AbstractProblem) = invariants(equation(prob))
tspan(prob::AbstractProblem) = error("tspan() not implemented for ", typeof(prob), ".")
tstep(prob::AbstractProblem) = error("tstep() not implemented for ", typeof(prob), ".")
tbegin(prob::AbstractProblem) = tspan(prob)[begin]
tend(prob::AbstractProblem) = tspan(prob)[end]
GeometricBase.parameters(prob::AbstractProblem) = error("parameters() not implemented for ", typeof(prob), ".")
GeometricBase.periodicity(prob::AbstractProblem) = periodicity(equation(prob))
hassolution(prob::AbstractProblem) = hassolution(equation(prob))
hasvectorfield(prob::AbstractProblem) = hasvectorfield(equation(prob))
hasprimary(prob::AbstractProblem) = hasprimary(equation(prob))
hassecondary(prob::AbstractProblem) = hassecondary(equation(prob))
hasinvariants(prob::AbstractProblem) = hasinvariants(equation(prob))
hasparameters(prob::AbstractProblem) = hasparameters(equation(prob))
hasperiodicity(prob::AbstractProblem) = hasperiodicity(equation(prob))
hashamiltonian(prob::AbstractProblem) = hashamiltonian(equation(prob))
haslagrangian(prob::AbstractProblem) = haslagrangian(equation(prob))
|
module A.Path.Of.Dires.Second
import public A.Path.Of.Dires.First
%default total
export
example : Tree Nat
example = Node (Node (Leaf 0) (Leaf 1)) (Leaf 2)
|
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l m : Language α
a b x✝ x : List α
⊢ x ∈ 1 ↔ x = []
[PROOFSTEP]
rfl
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
l : Language α
x✝ : List α
h : x✝ ∈ []
⊢ x✝ ∈ l
[PROOFSTEP]
contradiction
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
l : Language α
⊢ 1 * l = l
[PROOFSTEP]
simp [mul_def, one_def]
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
l : Language α
⊢ l * 1 = l
[PROOFSTEP]
simp [mul_def, one_def]
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l m : Language α
a b x : List α
n : ℕ
⊢ NatCast.natCast (n + 1) = NatCast.natCast n + 1
[PROOFSTEP]
cases n
[GOAL]
case zero
α : Type u_1
β : Type u_2
γ : Type u_3
l m : Language α
a b x : List α
⊢ NatCast.natCast (Nat.zero + 1) = NatCast.natCast Nat.zero + 1
[PROOFSTEP]
simp [Nat.cast, add_def, zero_def]
[GOAL]
case succ
α : Type u_1
β : Type u_2
γ : Type u_3
l m : Language α
a b x : List α
n✝ : ℕ
⊢ NatCast.natCast (Nat.succ n✝ + 1) = NatCast.natCast (Nat.succ n✝) + 1
[PROOFSTEP]
simp [Nat.cast, add_def, zero_def]
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
l : Language α
⊢ ↑(map id) l = l
[PROOFSTEP]
simp [map]
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
g : β → γ
f : α → β
l : Language α
⊢ ↑(map g) (↑(map f) l) = ↑(map (g ∘ f)) l
[PROOFSTEP]
simp [map, image_image]
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
l : Language α
⊢ l∗ = {x | ∃ S, x = join S ∧ ∀ (y : List α), y ∈ S → y ∈ l ∧ y ≠ []}
[PROOFSTEP]
ext x
[GOAL]
case h
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝ : List α
l : Language α
x : List α
⊢ x ∈ l∗ ↔ x ∈ {x | ∃ S, x = join S ∧ ∀ (y : List α), y ∈ S → y ∈ l ∧ y ≠ []}
[PROOFSTEP]
constructor
[GOAL]
case h.mp
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝ : List α
l : Language α
x : List α
⊢ x ∈ l∗ → x ∈ {x | ∃ S, x = join S ∧ ∀ (y : List α), y ∈ S → y ∈ l ∧ y ≠ []}
[PROOFSTEP]
rintro ⟨S, rfl, h⟩
[GOAL]
case h.mp.intro.intro
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
l : Language α
S : List (List α)
h : ∀ (y : List α), y ∈ S → y ∈ l
⊢ join S ∈ {x | ∃ S, x = join S ∧ ∀ (y : List α), y ∈ S → y ∈ l ∧ y ≠ []}
[PROOFSTEP]
refine'
⟨S.filter fun l ↦ ¬List.isEmpty l, by simp, fun y hy ↦ _⟩
-- Porting note: The previous code was:
-- rw [mem_filter, empty_iff_eq_nil] at hy
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
l : Language α
S : List (List α)
h : ∀ (y : List α), y ∈ S → y ∈ l
⊢ join S = join (filter (fun l => decide ¬isEmpty l = true) S)
[PROOFSTEP]
simp
[GOAL]
case h.mp.intro.intro
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
l : Language α
S : List (List α)
h : ∀ (y : List α), y ∈ S → y ∈ l
y : List α
hy : y ∈ filter (fun l => decide ¬isEmpty l = true) S
⊢ y ∈ l ∧ y ≠ []
[PROOFSTEP]
rw [mem_filter, decide_not, Bool.decide_coe, Bool.not_eq_true', ← Bool.bool_iff_false, isEmpty_iff_eq_nil] at hy
[GOAL]
case h.mp.intro.intro
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
l : Language α
S : List (List α)
h : ∀ (y : List α), y ∈ S → y ∈ l
y : List α
hy : y ∈ S ∧ ¬y = []
⊢ y ∈ l ∧ y ≠ []
[PROOFSTEP]
exact ⟨h y hy.1, hy.2⟩
[GOAL]
case h.mpr
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝ : List α
l : Language α
x : List α
⊢ x ∈ {x | ∃ S, x = join S ∧ ∀ (y : List α), y ∈ S → y ∈ l ∧ y ≠ []} → x ∈ l∗
[PROOFSTEP]
rintro ⟨S, hx, h⟩
[GOAL]
case h.mpr.intro.intro
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝ : List α
l : Language α
x : List α
S : List (List α)
hx : x = join S
h : ∀ (y : List α), y ∈ S → y ∈ l ∧ y ≠ []
⊢ x ∈ l∗
[PROOFSTEP]
exact ⟨S, hx, fun y hy ↦ (h y hy).1⟩
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l m : Language α
a b x : List α
l₁ l₂ m₁ m₂ : Language α
⊢ l₁ ≤ m₁ → l₂ ≤ m₂ → l₁ * l₂ ≤ m₁ * m₂
[PROOFSTEP]
intro h₁ h₂ x hx
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l m : Language α
a b x✝ : List α
l₁ l₂ m₁ m₂ : Language α
h₁ : l₁ ≤ m₁
h₂ : l₂ ≤ m₂
x : List α
hx : x ∈ l₁ * l₂
⊢ x ∈ m₁ * m₂
[PROOFSTEP]
simp only [mul_def, exists_and_left, mem_image2, image_prod] at hx ⊢
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l m : Language α
a b x✝ : List α
l₁ l₂ m₁ m₂ : Language α
h₁ : l₁ ≤ m₁
h₂ : l₂ ≤ m₂
x : List α
hx : ∃ a, a ∈ l₁ ∧ ∃ x_1, x_1 ∈ l₂ ∧ a ++ x_1 = x
⊢ ∃ a, a ∈ m₁ ∧ ∃ x_1, x_1 ∈ m₂ ∧ a ++ x_1 = x
[PROOFSTEP]
tauto
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝ : List α
l : Language α
x : List α
n : ℕ
⊢ x ∈ l ^ n ↔ ∃ S, x = join S ∧ length S = n ∧ ∀ (y : List α), y ∈ S → y ∈ l
[PROOFSTEP]
induction' n with n ihn generalizing x
[GOAL]
case zero
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝¹ : List α
l : Language α
x✝ x : List α
⊢ x ∈ l ^ Nat.zero ↔ ∃ S, x = join S ∧ length S = Nat.zero ∧ ∀ (y : List α), y ∈ S → y ∈ l
[PROOFSTEP]
simp only [mem_one, pow_zero, length_eq_zero]
[GOAL]
case zero
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝¹ : List α
l : Language α
x✝ x : List α
⊢ x = [] ↔ ∃ S, x = join S ∧ S = [] ∧ ∀ (y : List α), y ∈ S → y ∈ l
[PROOFSTEP]
constructor
[GOAL]
case zero.mp
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝¹ : List α
l : Language α
x✝ x : List α
⊢ x = [] → ∃ S, x = join S ∧ S = [] ∧ ∀ (y : List α), y ∈ S → y ∈ l
[PROOFSTEP]
rintro rfl
[GOAL]
case zero.mp
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝ : List α
l : Language α
x : List α
⊢ ∃ S, [] = join S ∧ S = [] ∧ ∀ (y : List α), y ∈ S → y ∈ l
[PROOFSTEP]
exact ⟨[], rfl, rfl, fun _ h ↦ by contradiction⟩
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝¹ : List α
l : Language α
x x✝ : List α
h : x✝ ∈ []
⊢ x✝ ∈ l
[PROOFSTEP]
contradiction
[GOAL]
case zero.mpr
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝¹ : List α
l : Language α
x✝ x : List α
⊢ (∃ S, x = join S ∧ S = [] ∧ ∀ (y : List α), y ∈ S → y ∈ l) → x = []
[PROOFSTEP]
rintro ⟨_, rfl, rfl, _⟩
[GOAL]
case zero.mpr.intro.intro.intro
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝ : List α
l : Language α
x : List α
right✝ : ∀ (y : List α), y ∈ [] → y ∈ l
⊢ join [] = []
[PROOFSTEP]
rfl
[GOAL]
case succ
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝¹ : List α
l : Language α
x✝ : List α
n : ℕ
ihn : ∀ {x : List α}, x ∈ l ^ n ↔ ∃ S, x = join S ∧ length S = n ∧ ∀ (y : List α), y ∈ S → y ∈ l
x : List α
⊢ x ∈ l ^ Nat.succ n ↔ ∃ S, x = join S ∧ length S = Nat.succ n ∧ ∀ (y : List α), y ∈ S → y ∈ l
[PROOFSTEP]
simp only [pow_succ, mem_mul, ihn]
[GOAL]
case succ
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝¹ : List α
l : Language α
x✝ : List α
n : ℕ
ihn : ∀ {x : List α}, x ∈ l ^ n ↔ ∃ S, x = join S ∧ length S = n ∧ ∀ (y : List α), y ∈ S → y ∈ l
x : List α
⊢ (∃ a b, a ∈ l ∧ (∃ S, b = join S ∧ length S = n ∧ ∀ (y : List α), y ∈ S → y ∈ l) ∧ a ++ b = x) ↔
∃ S, x = join S ∧ length S = Nat.succ n ∧ ∀ (y : List α), y ∈ S → y ∈ l
[PROOFSTEP]
constructor
[GOAL]
case succ.mp
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝¹ : List α
l : Language α
x✝ : List α
n : ℕ
ihn : ∀ {x : List α}, x ∈ l ^ n ↔ ∃ S, x = join S ∧ length S = n ∧ ∀ (y : List α), y ∈ S → y ∈ l
x : List α
⊢ (∃ a b, a ∈ l ∧ (∃ S, b = join S ∧ length S = n ∧ ∀ (y : List α), y ∈ S → y ∈ l) ∧ a ++ b = x) →
∃ S, x = join S ∧ length S = Nat.succ n ∧ ∀ (y : List α), y ∈ S → y ∈ l
[PROOFSTEP]
rintro ⟨a, b, ha, ⟨S, rfl, rfl, hS⟩, rfl⟩
[GOAL]
case succ.mp.intro.intro.intro.intro.intro.intro.intro
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a✝ b x✝ : List α
l : Language α
x a : List α
ha : a ∈ l
S : List (List α)
hS : ∀ (y : List α), y ∈ S → y ∈ l
ihn : ∀ {x : List α}, x ∈ l ^ length S ↔ ∃ S_1, x = join S_1 ∧ length S_1 = length S ∧ ∀ (y : List α), y ∈ S_1 → y ∈ l
⊢ ∃ S_1, a ++ join S = join S_1 ∧ length S_1 = Nat.succ (length S) ∧ ∀ (y : List α), y ∈ S_1 → y ∈ l
[PROOFSTEP]
exact ⟨a :: S, rfl, rfl, forall_mem_cons.2 ⟨ha, hS⟩⟩
[GOAL]
case succ.mpr
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝¹ : List α
l : Language α
x✝ : List α
n : ℕ
ihn : ∀ {x : List α}, x ∈ l ^ n ↔ ∃ S, x = join S ∧ length S = n ∧ ∀ (y : List α), y ∈ S → y ∈ l
x : List α
⊢ (∃ S, x = join S ∧ length S = Nat.succ n ∧ ∀ (y : List α), y ∈ S → y ∈ l) →
∃ a b, a ∈ l ∧ (∃ S, b = join S ∧ length S = n ∧ ∀ (y : List α), y ∈ S → y ∈ l) ∧ a ++ b = x
[PROOFSTEP]
rintro ⟨_ | ⟨a, S⟩, rfl, hn, hS⟩
[GOAL]
case succ.mpr.intro.nil.intro.intro
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝ : List α
l : Language α
x : List α
n : ℕ
ihn : ∀ {x : List α}, x ∈ l ^ n ↔ ∃ S, x = join S ∧ length S = n ∧ ∀ (y : List α), y ∈ S → y ∈ l
hn : length [] = Nat.succ n
hS : ∀ (y : List α), y ∈ [] → y ∈ l
⊢ ∃ a b, a ∈ l ∧ (∃ S, b = join S ∧ length S = n ∧ ∀ (y : List α), y ∈ S → y ∈ l) ∧ a ++ b = join []
[PROOFSTEP]
cases hn
[GOAL]
case succ.mpr.intro.cons.intro.intro
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a✝ b x✝ : List α
l : Language α
x : List α
n : ℕ
ihn : ∀ {x : List α}, x ∈ l ^ n ↔ ∃ S, x = join S ∧ length S = n ∧ ∀ (y : List α), y ∈ S → y ∈ l
a : List α
S : List (List α)
hn : length (a :: S) = Nat.succ n
hS : ∀ (y : List α), y ∈ a :: S → y ∈ l
⊢ ∃ a_1 b, a_1 ∈ l ∧ (∃ S, b = join S ∧ length S = n ∧ ∀ (y : List α), y ∈ S → y ∈ l) ∧ a_1 ++ b = join (a :: S)
[PROOFSTEP]
cases hn
[GOAL]
case succ.mpr.intro.cons.intro.intro.refl
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a✝ b x✝ : List α
l : Language α
x a : List α
S : List (List α)
hS : ∀ (y : List α), y ∈ a :: S → y ∈ l
ihn :
∀ {x : List α},
x ∈ l ^ Nat.add (length S) 0 ↔
∃ S_1, x = join S_1 ∧ length S_1 = Nat.add (length S) 0 ∧ ∀ (y : List α), y ∈ S_1 → y ∈ l
⊢ ∃ a_1 b,
a_1 ∈ l ∧
(∃ S_1, b = join S_1 ∧ length S_1 = Nat.add (length S) 0 ∧ ∀ (y : List α), y ∈ S_1 → y ∈ l) ∧
a_1 ++ b = join (a :: S)
[PROOFSTEP]
rw [forall_mem_cons] at hS
[GOAL]
case succ.mpr.intro.cons.intro.intro.refl
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a✝ b x✝ : List α
l : Language α
x a : List α
S : List (List α)
hS : a ∈ l ∧ ∀ (x : List α), x ∈ S → x ∈ l
ihn :
∀ {x : List α},
x ∈ l ^ Nat.add (length S) 0 ↔
∃ S_1, x = join S_1 ∧ length S_1 = Nat.add (length S) 0 ∧ ∀ (y : List α), y ∈ S_1 → y ∈ l
⊢ ∃ a_1 b,
a_1 ∈ l ∧
(∃ S_1, b = join S_1 ∧ length S_1 = Nat.add (length S) 0 ∧ ∀ (y : List α), y ∈ S_1 → y ∈ l) ∧
a_1 ++ b = join (a :: S)
[PROOFSTEP]
exact ⟨a, _, hS.1, ⟨S, rfl, rfl, hS.2⟩, rfl⟩
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
l : Language α
⊢ l∗ = ⨆ (i : ℕ), l ^ i
[PROOFSTEP]
ext x
[GOAL]
case h
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝ : List α
l : Language α
x : List α
⊢ x ∈ l∗ ↔ x ∈ ⨆ (i : ℕ), l ^ i
[PROOFSTEP]
simp only [mem_kstar, mem_iSup, mem_pow]
[GOAL]
case h
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝ : List α
l : Language α
x : List α
⊢ (∃ L, x = join L ∧ ∀ (y : List α), y ∈ L → y ∈ l) ↔ ∃ i S, x = join S ∧ length S = i ∧ ∀ (y : List α), y ∈ S → y ∈ l
[PROOFSTEP]
constructor
[GOAL]
case h.mp
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝ : List α
l : Language α
x : List α
⊢ (∃ L, x = join L ∧ ∀ (y : List α), y ∈ L → y ∈ l) → ∃ i S, x = join S ∧ length S = i ∧ ∀ (y : List α), y ∈ S → y ∈ l
[PROOFSTEP]
rintro ⟨S, rfl, hS⟩
[GOAL]
case h.mp.intro.intro
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
l : Language α
S : List (List α)
hS : ∀ (y : List α), y ∈ S → y ∈ l
⊢ ∃ i S_1, join S = join S_1 ∧ length S_1 = i ∧ ∀ (y : List α), y ∈ S_1 → y ∈ l
[PROOFSTEP]
exact ⟨_, S, rfl, rfl, hS⟩
[GOAL]
case h.mpr
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x✝ : List α
l : Language α
x : List α
⊢ (∃ i S, x = join S ∧ length S = i ∧ ∀ (y : List α), y ∈ S → y ∈ l) → ∃ L, x = join L ∧ ∀ (y : List α), y ∈ L → y ∈ l
[PROOFSTEP]
rintro ⟨_, S, rfl, rfl, hS⟩
[GOAL]
case h.mpr.intro.intro.intro.intro
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
l : Language α
S : List (List α)
hS : ∀ (y : List α), y ∈ S → y ∈ l
⊢ ∃ L, join S = join L ∧ ∀ (y : List α), y ∈ L → y ∈ l
[PROOFSTEP]
exact ⟨S, rfl, hS⟩
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
f : α → β
l : Language α
⊢ ↑(map f) l∗ = (↑(map f) l)∗
[PROOFSTEP]
rw [kstar_eq_iSup_pow, kstar_eq_iSup_pow]
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
f : α → β
l : Language α
⊢ ↑(map f) (⨆ (i : ℕ), l ^ i) = ⨆ (i : ℕ), ↑(map f) l ^ i
[PROOFSTEP]
simp_rw [← map_pow]
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
f : α → β
l : Language α
⊢ ↑(map f) (⨆ (i : ℕ), l ^ i) = ⨆ (i : ℕ), ↑(map f) (l ^ i)
[PROOFSTEP]
exact image_iUnion
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
l : Language α
⊢ l∗ * l = l * l∗
[PROOFSTEP]
simp only [kstar_eq_iSup_pow, mul_iSup, iSup_mul, ← pow_succ, ← pow_succ']
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
l : Language α
⊢ 1 + l * l∗ = l∗
[PROOFSTEP]
simp only [kstar_eq_iSup_pow, mul_iSup, ← pow_succ, ← pow_zero l]
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
l : Language α
⊢ l ^ 0 + ⨆ (i : ℕ), l ^ (i + 1) = ⨆ (i : ℕ), l ^ i
[PROOFSTEP]
exact sup_iSup_nat_succ _
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a b x : List α
l : Language α
⊢ 1 + l∗ * l = l∗
[PROOFSTEP]
rw [mul_self_kstar_comm, one_add_self_mul_kstar_eq_kstar]
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m : Language α
a✝ b x : List α
src✝¹ : Semiring (Language α) := instSemiring
src✝ : CompleteAtomicBooleanAlgebra (Set (List α)) := Set.completeAtomicBooleanAlgebra
a : Language α
l : List α
hl : l ∈ 1
⊢ ∀ (y : List α), y ∈ [] → y ∈ a
[PROOFSTEP]
simp
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m✝ : Language α
a b x : List α
src✝¹ : Semiring (Language α) := instSemiring
src✝ : CompleteAtomicBooleanAlgebra (Set (List α)) := Set.completeAtomicBooleanAlgebra
l m : Language α
h : m * l ≤ m
⊢ m * l∗ ≤ m
[PROOFSTEP]
rw [kstar_eq_iSup_pow, mul_iSup]
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m✝ : Language α
a b x : List α
src✝¹ : Semiring (Language α) := instSemiring
src✝ : CompleteAtomicBooleanAlgebra (Set (List α)) := Set.completeAtomicBooleanAlgebra
l m : Language α
h : m * l ≤ m
⊢ ⨆ (i : ℕ), m * l ^ i ≤ m
[PROOFSTEP]
refine' iSup_le (fun n ↦ _)
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m✝ : Language α
a b x : List α
src✝¹ : Semiring (Language α) := instSemiring
src✝ : CompleteAtomicBooleanAlgebra (Set (List α)) := Set.completeAtomicBooleanAlgebra
l m : Language α
h : m * l ≤ m
n : ℕ
⊢ m * l ^ n ≤ m
[PROOFSTEP]
induction' n with n ih
[GOAL]
case zero
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m✝ : Language α
a b x : List α
src✝¹ : Semiring (Language α) := instSemiring
src✝ : CompleteAtomicBooleanAlgebra (Set (List α)) := Set.completeAtomicBooleanAlgebra
l m : Language α
h : m * l ≤ m
⊢ m * l ^ Nat.zero ≤ m
[PROOFSTEP]
simp
[GOAL]
case succ
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m✝ : Language α
a b x : List α
src✝¹ : Semiring (Language α) := instSemiring
src✝ : CompleteAtomicBooleanAlgebra (Set (List α)) := Set.completeAtomicBooleanAlgebra
l m : Language α
h : m * l ≤ m
n : ℕ
ih : m * l ^ n ≤ m
⊢ m * l ^ Nat.succ n ≤ m
[PROOFSTEP]
rw [pow_succ, ← mul_assoc m l (l ^ n)]
[GOAL]
case succ
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m✝ : Language α
a b x : List α
src✝¹ : Semiring (Language α) := instSemiring
src✝ : CompleteAtomicBooleanAlgebra (Set (List α)) := Set.completeAtomicBooleanAlgebra
l m : Language α
h : m * l ≤ m
n : ℕ
ih : m * l ^ n ≤ m
⊢ m * l * l ^ n ≤ m
[PROOFSTEP]
exact le_trans (le_mul_congr h le_rfl) ih
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m✝ : Language α
a b x : List α
src✝¹ : Semiring (Language α) := instSemiring
src✝ : CompleteAtomicBooleanAlgebra (Set (List α)) := Set.completeAtomicBooleanAlgebra
l m : Language α
h : l * m ≤ m
⊢ l∗ * m ≤ m
[PROOFSTEP]
rw [kstar_eq_iSup_pow, iSup_mul]
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m✝ : Language α
a b x : List α
src✝¹ : Semiring (Language α) := instSemiring
src✝ : CompleteAtomicBooleanAlgebra (Set (List α)) := Set.completeAtomicBooleanAlgebra
l m : Language α
h : l * m ≤ m
⊢ ⨆ (i : ℕ), l ^ i * m ≤ m
[PROOFSTEP]
refine' iSup_le (fun n ↦ _)
[GOAL]
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m✝ : Language α
a b x : List α
src✝¹ : Semiring (Language α) := instSemiring
src✝ : CompleteAtomicBooleanAlgebra (Set (List α)) := Set.completeAtomicBooleanAlgebra
l m : Language α
h : l * m ≤ m
n : ℕ
⊢ l ^ n * m ≤ m
[PROOFSTEP]
induction' n with n ih
[GOAL]
case zero
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m✝ : Language α
a b x : List α
src✝¹ : Semiring (Language α) := instSemiring
src✝ : CompleteAtomicBooleanAlgebra (Set (List α)) := Set.completeAtomicBooleanAlgebra
l m : Language α
h : l * m ≤ m
⊢ l ^ Nat.zero * m ≤ m
[PROOFSTEP]
simp
[GOAL]
case succ
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m✝ : Language α
a b x : List α
src✝¹ : Semiring (Language α) := instSemiring
src✝ : CompleteAtomicBooleanAlgebra (Set (List α)) := Set.completeAtomicBooleanAlgebra
l m : Language α
h : l * m ≤ m
n : ℕ
ih : l ^ n * m ≤ m
⊢ l ^ Nat.succ n * m ≤ m
[PROOFSTEP]
rw [pow_succ', mul_assoc (l ^ n) l m]
[GOAL]
case succ
α : Type u_1
β : Type u_2
γ : Type u_3
l✝ m✝ : Language α
a b x : List α
src✝¹ : Semiring (Language α) := instSemiring
src✝ : CompleteAtomicBooleanAlgebra (Set (List α)) := Set.completeAtomicBooleanAlgebra
l m : Language α
h : l * m ≤ m
n : ℕ
ih : l ^ n * m ≤ m
⊢ l ^ n * (l * m) ≤ m
[PROOFSTEP]
exact le_trans (le_mul_congr le_rfl h) ih
|
/*
File: ACComponentResources.r
Abstract: ACComponentResources.r
Version: 1.1
Disclaimer: IMPORTANT: This Apple software is supplied to you by Apple
Inc. ("Apple") in consideration of your agreement to the following
terms, and your use, installation, modification or redistribution of
this Apple software constitutes acceptance of these terms. If you do
not agree with these terms, please do not use, install, modify or
redistribute this Apple software.
In consideration of your agreement to abide by the following terms, and
subject to these terms, Apple grants you a personal, non-exclusive
license, under Apple's copyrights in this original Apple software (the
"Apple Software"), to use, reproduce, modify and redistribute the Apple
Software, with or without modifications, in source and/or binary forms;
provided that if you redistribute the Apple Software in its entirety and
without modifications, you must retain this notice and the following
text and disclaimers in all such redistributions of the Apple Software.
Neither the name, trademarks, service marks or logos of Apple Inc. may
be used to endorse or promote products derived from the Apple Software
without specific prior written permission from Apple. Except as
expressly stated in this notice, no other rights or licenses, express or
implied, are granted by Apple herein, including but not limited to any
patent rights that may be infringed by your derivative works or by other
works in which the Apple Software may be incorporated.
The Apple Software is provided by Apple on an "AS IS" basis. APPLE
MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION
THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE, REGARDING THE APPLE SOFTWARE OR ITS USE AND
OPERATION ALONE OR IN COMBINATION WITH YOUR PRODUCTS.
IN NO EVENT SHALL APPLE BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL
OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION,
MODIFICATION AND/OR DISTRIBUTION OF THE APPLE SOFTWARE, HOWEVER CAUSED
AND WHETHER UNDER THEORY OF CONTRACT, TORT (INCLUDING NEGLIGENCE),
STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
Copyright (C) 2014 Apple Inc. All Rights Reserved.
*/
#ifndef GEN_MISSING
#define GEN_MISSING 0
#endif
#ifndef thng_RezTemplateVersion
#define thng_RezTemplateVersion 2
#endif
//=============================================================================
// Includes
//=============================================================================
#include "ConditionalMacros.r"
#include "MacTypes.r"
#include "Components.r"
//=============================================================================
// Platform constants for the thng resources
//=============================================================================
#if TARGET_OS_MAC && TARGET_API_MAC_OSX
#define Target_PlatformType 1000
#define Target_CodeResType 'dlle'
#define kUseDLLEResource 1
#elif TARGET_OS_WIN32
#define Target_PlatformType platformWin32
#define Target_CodeResType 'dlle'
#define kUseDLLEResource 1
#else
#define Target_PlatformType platformPowerPC
#define Target_CodeResType 'tppc'
#define kUseDLLEResource 0
#endif
#if kComponentIsThreadSafe
#ifndef cmpThreadSafeOnMac // so we don't need Panther headers to build
#define cmpThreadSafeOnMac 0x10000000
#endif
#define COMPONENT_FLAGS cmpThreadSafeOnMac
#else
#define COMPONENT_FLAGS 0
#endif
//=============================================================================
// The thng and related resources
//
// The definitions below use the following macros, all of which must be
// defined. Note that kPrimaryResourceID is used to define two 'STR '
// resources with consecutive IDs so be sure to space them at least two
// apart. Here's a sample of how to do the defines:
//
// #define kPrimaryResourceID 128
// #define kComponentType 'aenc'
// #define kComponentSubtype 'ima4'
// #define kComponentManufacturer 'appl'
// #define kComponentFlags 0
// #define kComponentVersion 0x00010000
// #define kComponentName "Apple IMA4 Encoder"
// #define kComponentInfo "An AudioCodec that encodes linear PCM data into IMA4"
// #define kComponentEntryPoint "ACAppleIMA4EncoderEntry"
// #define kComponentPublicResourceMapType 0
// #define kComponentIsThreadSafe 1
//=============================================================================
#ifndef AC_LOCALIZED
resource 'strn' (kPrimaryResourceID, purgeable)
{
kComponentName
};
resource 'stri' (kPrimaryResourceID, purgeable)
{
kComponentInfo
};
#endif
#if !GEN_MISSING
#if kUseDLLEResource
resource 'dlle' (kPrimaryResourceID)
{
kComponentEntryPoint
};
#endif
#define kComponentRegistrationFlags componentHasMultiplePlatforms | componentDoAutoVersion | componentLoadResident
resource 'thng' (kPrimaryResourceID, kComponentName)
{
kComponentType, // Component type
kComponentSubtype, // Component subtype
kComponentManufacturer, // Component manufacturer
kComponentFlags, // Component flags
0, // Component flags mask
0, 0, // Code type, Code ID
'strn', kPrimaryResourceID, // Name resource type, resource ID
'stri', kPrimaryResourceID, // Info resource type, resource ID
0, 0, // Icon resource type, resource ID
kComponentVersion, // Component version
kComponentRegistrationFlags, // Registration flags
0, // Icon family resource ID
{ // Beginning of platform info
COMPONENT_FLAGS, // Component flags
Target_CodeResType, kPrimaryResourceID, // Code resource type, resource ID
Target_PlatformType, // Platform type
},
#if thng_RezTemplateVersion >= 2
kComponentPublicResourceMapType, kPrimaryResourceID // Resource map type, resource map ID
#endif
};
#else // GEN_MISSING
resource 'thga' (kPrimaryResourceID) {
kComponentType, // Component type
kComponentSubtype, // Component subtype
kComponentManufacturer, // Component manufacturer
kComponentFlags, // Component flags
0, // Component flags mask
0, 0, // Code type, Code ID
'strn', kPrimaryResourceID, // Name resource type, resource ID
'stri', kPrimaryResourceID, // Info resource type, resource ID
0, 0, // Icon resource type, resource ID
'miss', // Alias component type
'base', // Alias component subtype
0, // Alias component manufacturer
0, // Alias component flags
0, // Alias component flags mask
#if thng_RezTemplateVersion >= 2
kComponentPublicResourceMapType, kPrimaryResourceID, // Resource map type, resource map ID
cmpAliasNoFlags // Alias flags
#endif
};
#endif // GEN_MISSING
#undef kPrimaryResourceID
#undef kComponentType
#undef kComponentSubtype
#undef kComponentManufacturer
#undef kComponentVersion
#undef kComponentRegistrationFlags
#undef kComponentName
#undef kComponentInfo
#undef kComponentEntryPoint
#undef kComponentPublicResourceMapType
#undef Target_PlatformType
#undef Target_CodeResType
#undef kUseDLLEResource
|
/-
Copyright (c) 2019 Scott Morrison. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Scott Morrison
-/
import algebraic_geometry.presheafed_space
import category_theory.limits.final
import topology.sheaves.stalks
/-!
# Stalks for presheaved spaces
This file lifts constructions of stalks and pushforwards of stalks to work with
the category of presheafed spaces. Additionally, we prove that restriction of
presheafed spaces does not change the stalks.
-/
noncomputable theory
universes v u v' u'
open category_theory
open category_theory.limits category_theory.category category_theory.functor
open algebraic_geometry
open topological_space
open opposite
variables {C : Type u} [category.{v} C] [has_colimits C]
local attribute [tidy] tactic.op_induction'
open Top.presheaf
namespace algebraic_geometry.PresheafedSpace
/--
The stalk at `x` of a `PresheafedSpace`.
-/
abbreviation stalk (X : PresheafedSpace C) (x : X) : C := X.presheaf.stalk x
/--
A morphism of presheafed spaces induces a morphism of stalks.
-/
def stalk_map {X Y : PresheafedSpace C} (α : X ⟶ Y) (x : X) : Y.stalk (α.base x) ⟶ X.stalk x :=
(stalk_functor C (α.base x)).map (α.c) ≫ X.presheaf.stalk_pushforward C α.base x
@[simp, elementwise, reassoc]
lemma stalk_map_germ {X Y : PresheafedSpace C} (α : X ⟶ Y) (U : opens Y.carrier)
(x : (opens.map α.base).obj U) :
Y.presheaf.germ ⟨α.base x, x.2⟩ ≫ stalk_map α ↑x = α.c.app (op U) ≫ X.presheaf.germ x :=
by rw [stalk_map, stalk_functor_map_germ_assoc, stalk_pushforward_germ]
section restrict
/--
For an open embedding `f : U ⟶ X` and a point `x : U`, we get an isomorphism between the stalk
of `X` at `f x` and the stalk of the restriction of `X` along `f` at `x`.
-/
def restrict_stalk_iso {U : Top} (X : PresheafedSpace C)
{f : U ⟶ (X : Top.{v})} (h : open_embedding f) (x : U) :
(X.restrict h).stalk x ≅ X.stalk (f x) :=
begin
-- As a left adjoint, the functor `h.is_open_map.functor_nhds x` is initial.
haveI := initial_of_adjunction (h.is_open_map.adjunction_nhds x),
-- Typeclass resolution knows that the opposite of an initial functor is final. The result
-- follows from the general fact that postcomposing with a final functor doesn't change colimits.
exact final.colimit_iso (h.is_open_map.functor_nhds x).op
((open_nhds.inclusion (f x)).op ⋙ X.presheaf),
end
@[simp, elementwise, reassoc]
lemma restrict_stalk_iso_hom_eq_germ {U : Top} (X : PresheafedSpace C) {f : U ⟶ (X : Top.{v})}
(h : open_embedding f) (V : opens U) (x : U) (hx : x ∈ V) :
(X.restrict h).presheaf.germ ⟨x, hx⟩ ≫ (restrict_stalk_iso X h x).hom =
X.presheaf.germ ⟨f x, show f x ∈ h.is_open_map.functor.obj V, from ⟨x, hx, rfl⟩⟩ :=
colimit.ι_pre ((open_nhds.inclusion (f x)).op ⋙ X.presheaf)
(h.is_open_map.functor_nhds x).op (op ⟨V, hx⟩)
@[simp, elementwise, reassoc]
lemma restrict_stalk_iso_inv_eq_germ {U : Top} (X : PresheafedSpace C) {f : U ⟶ (X : Top.{v})}
(h : open_embedding f) (V : opens U) (x : U) (hx : x ∈ V) :
X.presheaf.germ ⟨f x, show f x ∈ h.is_open_map.functor.obj V, from ⟨x, hx, rfl⟩⟩ ≫
(restrict_stalk_iso X h x).inv = (X.restrict h).presheaf.germ ⟨x, hx⟩ :=
by rw [← restrict_stalk_iso_hom_eq_germ, category.assoc, iso.hom_inv_id, category.comp_id]
lemma restrict_stalk_iso_inv_eq_of_restrict {U : Top} (X : PresheafedSpace C)
{f : U ⟶ (X : Top.{v})} (h : open_embedding f) (x : U) :
(X.restrict_stalk_iso h x).inv = stalk_map (X.of_restrict h) x :=
begin
ext V,
induction V using opposite.rec,
let i : (h.is_open_map.functor_nhds x).obj ((open_nhds.map f x).obj V) ⟶ V :=
hom_of_le (set.image_preimage_subset f _),
erw [iso.comp_inv_eq, colimit.ι_map_assoc, colimit.ι_map_assoc, colimit.ι_pre],
simp_rw category.assoc,
erw colimit.ι_pre ((open_nhds.inclusion (f x)).op ⋙ X.presheaf)
(h.is_open_map.functor_nhds x).op,
erw ← X.presheaf.map_comp_assoc,
exact (colimit.w ((open_nhds.inclusion (f x)).op ⋙ X.presheaf) i.op).symm,
end
instance of_restrict_stalk_map_is_iso {U : Top} (X : PresheafedSpace C)
{f : U ⟶ (X : Top.{v})} (h : open_embedding f) (x : U) :
is_iso (stalk_map (X.of_restrict h) x) :=
by { rw ← restrict_stalk_iso_inv_eq_of_restrict, apply_instance }
end restrict
namespace stalk_map
@[simp] lemma id (X : PresheafedSpace C) (x : X) : stalk_map (𝟙 X) x = 𝟙 (X.stalk x) :=
begin
dsimp [stalk_map],
simp only [stalk_pushforward.id],
rw [←map_comp],
convert (stalk_functor C x).map_id X.presheaf,
tidy,
end
-- TODO understand why this proof is still gross (i.e. requires using `erw`)
@[simp] lemma comp {X Y Z : PresheafedSpace C} (α : X ⟶ Y) (β : Y ⟶ Z) (x : X) :
stalk_map (α ≫ β) x =
(stalk_map β (α.base x) : Z.stalk (β.base (α.base x)) ⟶ Y.stalk (α.base x)) ≫
(stalk_map α x : Y.stalk (α.base x) ⟶ X.stalk x) :=
begin
dsimp [stalk_map, stalk_functor, stalk_pushforward],
ext U,
induction U using opposite.rec,
cases U,
simp only [colimit.ι_map_assoc, colimit.ι_pre_assoc, colimit.ι_pre,
whisker_left_app, whisker_right_app,
assoc, id_comp, map_id, map_comp],
dsimp,
simp only [map_id, assoc, pushforward.comp_inv_app],
-- FIXME Why doesn't simp do this:
erw [category_theory.functor.map_id],
erw [category_theory.functor.map_id],
erw [id_comp, id_comp],
end
/--
If `α = β` and `x = x'`, we would like to say that `stalk_map α x = stalk_map β x'`.
Unfortunately, this equality is not well-formed, as their types are not _definitionally_ the same.
To get a proper congruence lemma, we therefore have to introduce these `eq_to_hom` arrows on
either side of the equality.
-/
lemma congr {X Y : PresheafedSpace C} (α β : X ⟶ Y) (h₁ : α = β) (x x': X) (h₂ : x = x') :
stalk_map α x ≫ eq_to_hom (show X.stalk x = X.stalk x', by rw h₂) =
eq_to_hom (show Y.stalk (α.base x) = Y.stalk (β.base x'), by rw [h₁, h₂]) ≫ stalk_map β x' :=
stalk_hom_ext _ $ λ U hx, by { subst h₁, subst h₂, simp }
lemma congr_hom {X Y : PresheafedSpace C} (α β : X ⟶ Y) (h : α = β) (x : X) :
stalk_map α x =
eq_to_hom (show Y.stalk (α.base x) = Y.stalk (β.base x), by rw h) ≫ stalk_map β x :=
by rw [← stalk_map.congr α β h x x rfl, eq_to_hom_refl, category.comp_id]
lemma congr_point {X Y : PresheafedSpace C} (α : X ⟶ Y) (x x' : X) (h : x = x') :
stalk_map α x ≫ eq_to_hom (show X.stalk x = X.stalk x', by rw h) =
eq_to_hom (show Y.stalk (α.base x) = Y.stalk (α.base x'), by rw h) ≫ stalk_map α x' :=
by rw stalk_map.congr α α rfl x x' h
instance is_iso {X Y : PresheafedSpace C} (α : X ⟶ Y) [is_iso α] (x : X) :
is_iso (stalk_map α x) :=
{ out := begin
let β : Y ⟶ X := category_theory.inv α,
have h_eq : (α ≫ β).base x = x,
{ rw [is_iso.hom_inv_id α, id_base, Top.id_app] },
-- Intuitively, the inverse of the stalk map of `α` at `x` should just be the stalk map of `β`
-- at `α x`. Unfortunately, we have a problem with dependent type theory here: Because `x`
-- is not *definitionally* equal to `β (α x)`, the map `stalk_map β (α x)` has not the correct
-- type for an inverse.
-- To get a proper inverse, we need to compose with the `eq_to_hom` arrow
-- `X.stalk x ⟶ X.stalk ((α ≫ β).base x)`.
refine ⟨eq_to_hom (show X.stalk x = X.stalk ((α ≫ β).base x), by rw h_eq) ≫
(stalk_map β (α.base x) : _), _, _⟩,
{ rw [← category.assoc, congr_point α x ((α ≫ β).base x) h_eq.symm, category.assoc],
erw ← stalk_map.comp β α (α.base x),
rw [congr_hom _ _ (is_iso.inv_hom_id α), stalk_map.id, eq_to_hom_trans_assoc,
eq_to_hom_refl, category.id_comp] },
{ rw [category.assoc, ← stalk_map.comp, congr_hom _ _ (is_iso.hom_inv_id α),
stalk_map.id, eq_to_hom_trans_assoc, eq_to_hom_refl, category.id_comp] },
end }
/--
An isomorphism between presheafed spaces induces an isomorphism of stalks.
-/
def stalk_iso {X Y : PresheafedSpace C} (α : X ≅ Y) (x : X) :
Y.stalk (α.hom.base x) ≅ X.stalk x :=
as_iso (stalk_map α.hom x)
@[simp, reassoc, elementwise]
lemma stalk_specializes_stalk_map {X Y : PresheafedSpace C} (f : X ⟶ Y) {x y : X} (h : x ⤳ y) :
Y.presheaf.stalk_specializes (f.base.map_specialization h) ≫ stalk_map f x =
stalk_map f y ≫ X.presheaf.stalk_specializes h :=
by { delta PresheafedSpace.stalk_map, simp [stalk_map] }
end stalk_map
end algebraic_geometry.PresheafedSpace
|
lemma borel_cantelli_AE1: assumes [measurable]: "\<And>n. A n \<in> sets M" and "\<And>n. emeasure M (A n) < \<infinity>" "summable (\<lambda>n. measure M (A n))" shows "AE x in M. eventually (\<lambda>n. x \<in> space M - A n) sequentially"
|
f : Nat -> Nat
f n = case n of case_val => ?f_rhs
g : Nat -> Nat
g n = (case n of case_val => ?g_rhs)
h : Nat -> Nat
h n = (case n of
case_val => ?h_rhs )
data Test = One
| Two Nat
| Three String Nat
| Four
toTest : Nat -> Test
i : Nat -> Nat
i n = case toTest n of case_val => ?i_rhs
j : Nat -> Nat
j n = j_Where n where
j_Where : Nat -> Nat
j_Where k = (case toTest k of case_val => ?j_Where_rhs )
k : Nat -> Nat
k n = (case toTest n of
case_val => ?k_rhs)
l : Nat -> Nat -> Unit
l n m = case n of foo => case toTest m of case_val => ?l_rhs
m : Nat -> Nat -> Unit
m n k = (case n of foo => case toTest k of case_val => ?m_rhs )
n : Nat -> Nat -> Unit
n k m = case k of foo => case toTest m of
case_val => ?n_rhs
o : Nat -> Nat -> Unit
o n m = (case n of foo => case toTest m of
case_val => ?o_rhs )
|
Fatto a Mano restaurant is located in the North Laines of Brighton and specialises in soft and pillowy pizza, baked in a wood-fired oven at over 450 degrees. They also have two other locations: one in Hove and one in Brighton’s London Road, but this one in the North Laines is the most central of them all.
Fatto a Mano is definitely one of the most well-renowned pizza restaurants in Brighton – it’s the one that kept coming up when I asked for suggestions or googled Brighton pizzerias, so it was about time that I tried it out for myself!
The Fatto a Mano restaurant in the North Laines has a lovely little outside area for sunny days, and Jim and I were lucky enough to grab the last available table during our visit. This branch is reserved only for walk-ins, and even in the other branches you can only book a table for 6 people or more, so I suggest going early if you want to nab a table – it was very busy when we visited during lunchtime on Saturday.
The decor and branding throughout the restaurant is very simple and streamlined: the biggest colours are blue and white, which to me felt very nautical, though I’m not sure if this was on purpose. Whatever the colours symbolise, I love the feel they create – the whole space feels very fresh and made me feel like I was on a Mediterranean holiday, which is never a bad thing!
We were trying to be good and opted for sparkling water with lemon instead of Italian wine – next time, though!
The pizza menu is divided into four sections: red (with a tomato sauce base), white (without tomato sauce), vegan and saltimbocca (folded filled pizza).
I went for the Marinara from the vegan menu which consisted of tomato sauce, garlic, oregano and basil, and no cheese. The Marinara pizza costs £6 as it is, but I added a few extra toppings (sundried tomatoes and fresh chilli) for a few extra pounds. I really loved the flavours: they definitely weren’t being stingy when it came to the toppings, which meant you could really taste all the different ingredients very distinctively.
Personally, I’m a thin and crispy base kind of girl myself, so a soft and fluffy base was never going to be completely to my tastes, but if that’s your jam, you will definitely love the dough here as the base was very well baked and so pillowy.
Overall I really liked Fatto a Mano – the atmosphere was lovely and the people watching opportunities endless. The pizza was really good, too, even if the base wasn’t quite to my tastes.
Another good thing is the variety of the menu: you can take pretty much anyone here as the restaurant caters for most diets – there are three vegan pizzas and you can also get a gluten-free base for a £2 surcharge. Personally, I really want to try out those folded and filled pizzas next!
You can find the North Laines branch of Fatto a Mano restaurant at 25 Gloucester Road – enjoy!
|
#' Render Table of Contents
#'
#' A simple function to extract headers from an xaringan RMarkdown
#' and build a table of contents. Returns a markdown list with links to the
#' headers using the `name:` attribute of each slide. An optional `text:` line
#' immediately following `name:` overrides the displayed link text.
#'
#' @section Usage:
#' Just drop in a chunk where you want the toc to appear (set `echo=FALSE`):
#'
#' # Table of Contents
#'
#' ```{r echo=FALSE}
#' render_toc("/path/to/the/file.Rmd")
#' ```
#'
#' @param filename Name of xaringan RMarkdown
render_toc <- function(filename) {
x <- readLines(filename, warn = FALSE)
x5 <- stringr::str_sub(x, 1, 5)
xname <- stringr::str_which(x5, "name:")
xtext <- stringr::str_which(x5, "text:")
xtext <- stringr::str_trim(sapply(x[xtext], function(i){stringr::str_split(i, "text:")[[1]][2]}))
has.xtext <- stringr::str_detect(x5[xname+1], "text:")
header_slug <- stringr::str_trim(sapply(x[xname], function(i){stringr::str_split(i, "name:")[[1]][2]}))
header_text <- header_slug
header_text[has.xtext] <- xtext
x <- paste0("* [", header_text, "](#", header_slug, ")")
x <- c(".flexcolumn[",paste0("* [", header_text, "](#", header_slug, ")"), "]")
knitr::asis_output(paste(x, collapse = "\n"))
}
|
Require Import Problem Arith.
From mathcomp Require Import ssreflect.
Lemma L0: forall n m, product_of_range n (S m) = (S n) * product_of_range (S n) m.
Proof.
by rewrite /=.
Qed.
Lemma L1: forall n m, product_of_range n (S m) = (n + S m) * product_of_range n m.
Proof.
intros n m.
revert n.
elim: m => [| m' IHm'].
intros n.
by rewrite /= !Nat.mul_1_r Nat.add_1_r.
intros.
rewrite L0.
rewrite IHm'.
rewrite L0.
by ring.
Qed.
Theorem solution: task.
Proof.
rewrite /task.
intros n m.
revert n.
elim m => [| m' IHm].
rewrite /product_of_range.
exists 1.
by rewrite /=.
intros n.
elim n => [| n' IHn].
exists 1.
by rewrite /=.
case: (IHm (S n')) => p Hp.
case: IHn => k Hk.
exists (p + k).
rewrite L1.
rewrite L0 in Hk.
repeat rewrite Nat.mul_add_distr_r.
rewrite Hk.
rewrite Hp.
rewrite (_ : S m' * (p * product_of_range 0 m') = p * ((0 + S m') * product_of_range 0 m')).
rewrite -L1.
by rewrite Nat.add_comm.
rewrite Nat.add_0_l.
rewrite !Nat.mul_assoc.
by rewrite (Nat.mul_comm (S m') p).
Qed.
|
function G = GCVstopfun(alpha, u, s, beta, m, n)
%
% G = GCVstopfun(alpha, u, s, beta, m, n)
% This function evaluates the GCV function G(i, alpha), that will be used
% to determine a stopping iteration.
%
% Input:
% alpha - regularization parameter at the kth iteration of HyBR
% u - P_k^T e_1 where P_k contains the left singular vectors of B_k
% s - singular values of bidiagonal matrix B_k
% beta - norm of rhs b
% m,n - size of the ORIGINAL problem (matrix A)
% Silvia Gazzola, University of Bath
% Per Christian Hansen, Technical University of Denmark
% James G. Nagy, Emory University
% April, 2018.
% This file is part of the IR Tools package and is distributed under the
% 3-Clause BSD License. A separate license file should be provided as part
% of the package.
k = length(s);
beta2 = beta^2;
s2 = abs(s) .^ 2;
alpha2 = alpha^2;
t1 = 1 ./ (s2 + alpha2);
t2 = abs(alpha2*u(1:k) .* t1) .^2;
t3 = s2 .* t1;
num = beta2*(sum(t2) + abs(u(k+1))^2)/n;
den = ( (m - sum(t3))/n )^2;
G = num / den;
|
Many characteristics of the Comet type are said to be shown in the Kaimanawa horses today, although the varied gene input has produced a wide range of sizes, colours, and body types among the wild horses. The Kaimanawa breed varies widely in general appearance, with heights ranging between 12.2 and 15 hands (50 and 60 inches, 127 and 152 cm). Any coat colour or pattern marking is acceptable. They are usually well-muscled. Their feral way of life has given them the ability to adapt quickly and live on very little, and they are usually sure-footed and tough. They have a medium-sized head in good proportion to their body, with wide variation in shape due to the different conformation of their ancestors. Kaimanawa horses have a short, deep neck with a thick throat area, straight shoulders, a deep girth, and a short to medium back. The hindquarters vary from sloping to well-rounded. The legs are long and well-muscled, with strong hooves, and hind hooves that are generally smaller than the front ones. All horses are considered to age a year on the first of August, regardless of their actual foaling date.
|
C *********************************************************
C * *
C * TEST NUMBER: 06.01.02/09 *
C * TEST TITLE : Visual effect of modelling *
C * transformation *
C * *
C * PHIGS Validation Tests, produced by NIST *
C * *
C *********************************************************
COMMON /GLOBNU/ CTLHND, ERRSIG, ERRFIL, IERRCT, UNERR,
1 TESTCT, IFLERR, PASSSW, ERRSW, MAXLIN,
2 CONID, MEMUN, WKID, WTYPE, GLBLUN, INDLUN,
3 DUMINT, DUMRL
INTEGER CTLHND, ERRSIG, ERRFIL, IERRCT, UNERR,
1 TESTCT, IFLERR, PASSSW, ERRSW, MAXLIN,
2 CONID, MEMUN, WKID, WTYPE, GLBLUN, INDLUN,
3 DUMINT(20), ERRIND
REAL DUMRL(20)
COMMON /GLOBCH/ PIDENT, GLBERR, TSTMSG, FUNCID,
1 DUMCH
CHARACTER PIDENT*40, GLBERR*60, TSTMSG*900, FUNCID*80,
1 DUMCH(20)*20
COMMON /DIALOG/ DOUTYP, DINTYP, DSTDNR, DSTRID, PSTRID, DTCLIM,
1 SCRMOD, DTXCI, SPECWT,
2 DSIZE, EFRAC, DYXRAT, SYXRAT, MTRPDC, WCPDC, QVIS
INTEGER DOUTYP, DINTYP, DSTDNR, DSTRID, PSTRID, DTCLIM,
1 SCRMOD, DTXCI, SPECWT
REAL DSIZE, EFRAC, DYXRAT, SYXRAT, MTRPDC, WCPDC, QVIS
C
C Aspect source
C BUNDLED INDIVIDUAL
INTEGER PBUNDL, PINDIV
PARAMETER (PBUNDL = 0, PINDIV = 1)
INTEGER IDUM1,IDUM2,IDUM3, IDUM4
INTEGER PICSTR, TXCI, NGBOX, MODTRN
REAL NOMMS, MSCF, RDUM1, RDUM2
CALL INITGL ('06.01.02/09')
CALL XPOPPH (ERRFIL, MEMUN)
C
C Set-up of workstation and dialogue area
PICSTR = 101
TXCI = 1
CALL SETDLG (PICSTR, 801,TXCI)
CALL POPST (PICSTR)
C By convention, view #1 is for picture
CALL PSVWI (1)
C Use individual attributes
CALL SETASF (PINDIV)
C Adjust polymarker size
CALL PQPMF (SPECWT, 0, ERRIND, IDUM1,IDUM2,IDUM3, NOMMS, RDUM1,
1 RDUM2, IDUM4)
CALL CHKINQ ('pqpmf', ERRIND)
MSCF = .02 / (NOMMS*WCPDC)
CALL PSMKSC (MSCF)
C Adjust polyline width
CALL PQPLF (SPECWT, 0, ERRIND, IDUM1,IDUM2,IDUM3, NOMMS, RDUM1,
1 RDUM2, IDUM4)
CALL CHKINQ ('pqplf', ERRIND)
MSCF = .01 / (NOMMS*WCPDC)
CALL PSLWSC (MSCF)
CALL PEXST (106)
CALL PEXST (102)
CALL PCLST
C *** *** *** *** *** 3D Transformations *** *** *** *** ***
CALL SETMSG ('1 8 9 15 16 17', 'The modelling coordinates of ' //
1 'a 3D polyline should be transformed into world ' //
2 'coordinates by 3D local and global modelling ' //
3 'transformation.')
NGBOX = MODTRN(3)
CALL DCHPFV ('3D MODELLING TRANSFORMATION: Which box ' //
1 'contains something other than a single ' //
2 'line segment with circled endpoints? ', 6, NGBOX)
CALL PEMST (102)
CALL PEMST (106)
C *** *** *** *** *** 2D Transformations *** *** *** *** ***
CALL SETMSG ('4 8 12 15 16 17 18', 'The modelling coordinates ' //
1 'of a 3D polyline should be transformed into ' //
2 'world coordinates by 2D local and global ' //
3 'modelling transformation.')
NGBOX = MODTRN(2)
CALL DCHPFV ('2D MODELLING TRANSFORMATION: Which box ' //
1 'contains something other than a single ' //
2 'line segment with circled endpoints? ', 6, NGBOX)
C Wrap it up.
CALL ENDIT
END
|