repo | file | language | license | content
---|---|---|---|---|
https://github.com/Dherse/typst-glossary | https://raw.githubusercontent.com/Dherse/typst-glossary/main/README.md | markdown | # Typst glossary
A simple, easily customizable typst glossary. You can see an example showing the different features in [`example.typ`](example/example.typ).
 |
|
https://github.com/HiiGHoVuTi/requin | https://raw.githubusercontent.com/HiiGHoVuTi/requin/main/lang/dangling.typ | typst |
#import "../lib.typ": *
#show heading: heading_fct
In a programming language such as `C` or `OCaml`, one runs into a problem known as the _dangling else_.
Consider the following grammar:
$
S -> "if b then" S | "if b then" S "else" S | "a"
$
#question(0)[Show that this grammar is ambiguous.]
#question(0)[Explain why this could be a problem in `C` or `OCaml`.]
#question(1)[Changing the syntax if necessary, propose an unambiguous grammar recognising `if-else` expressions.]
One could propose the following grammar:
$
S -> "a" | "if b then" S | "if b then" N "else" S
N -> "a" | "if b then" N
$
which tries to associate each $"else"$ with the nearest $"then"$.
#question(2)[Show that this grammar is ambiguous or does not generate the right language.]
Finally, we propose the following grammar:
$
S -> F | O \
O -> "if b then" S | "if b then" F "else" O \
F -> "if b then" F "else F" | "a"
$
#question(1)[Interpret the language generated by each variable.]
#question(2)[Prove that this grammar generates the same language as the first one.]
#question(3)[Prove that this grammar is unambiguous.]
|
|
https://github.com/mumblingdrunkard/mscs-thesis | https://raw.githubusercontent.com/mumblingdrunkard/mscs-thesis/master/src/computer-architecture-fundamentals/abstractions-and-implementations.typ | typst | == Abstractions and Implementations
#quote(block: true, [
If you wish to make an apple pie from scratch, you must first invent the universe.
],
attribution: "<NAME>",
)
We take the above quote as a reflection on how easy it often is to forget the complexities we deal with.
This is true for hardware development too.
Almost everything in the field of computing is an _abstraction_.
There are multiple _layers_ of abstraction.
There are _contracts/interfaces_ between those layers, specifying a common _language_ that the layer above speaks, and that the layer below understands.
_A programmer describes a program in a defined language._
The language standard defines how parts of the language affect an _abstract machine_.
Programs are written with _intent_ and are written for _machines_.
The fundamental job of a compiler or interpreter is to take the program code and transform it into a different form that is executable on a _target_ machine while preserving the behaviour of the program (the intent) as it would have executed on the abstract machine.
This target machine is no different from the abstract machine:
The interface of the target machine is defined by a document that specifies a language---_instructions_ and instruction _encodings_---and the effects that this language has on the state of the machine.
This is referred to as the _instruction set architecture_ (ISA).
When programmers write this language, it is usually in the form of _assembly_ which is a human-readable encoding that has a direct and obvious mapping to the machine version of the same language.
The machine specified by the ISA is an abstraction.
Computer hardware engineers are tasked with _implementing_ a machine that behaves like this abstract machine using transistors and wires.
Different requirements of the hardware and its use-cases will motivate different implementations.
Some use-cases require low power consumption---others might require the most computing performance possible.
=== Logic Fundamentals
The most basic unit of computation is the transistor.
It is a switch that can be turned on or off using electricity.
By using clever organisations of transistors, it is possible to express boolean logic.
Boolean logic concerns itself with two values: true and false, 1 and 0, yes and no, on and off, high and low.
Transistors are grouped together to form basic _logic gates_ that perform fundamental boolean operations such as $"AND"$, $"OR"$, and $"XOR"$.
When working with digital circuits, it is common to describe them in terms of logic gates.
This is an abstraction to more easily focus on the logic and not the physical implementation, though it is a trivial mapping, like assembly to machine code.
A logic gate has one or more inputs and outputs.
It always has the same output for the same input.
The behaviour of a logic gate can be expressed through truth tables such as the one shown in @tab:truth-tables.
#figure(caption: "Truth table for two-input, single-output AND, OR, and XOR gates", {
show "F": set text(fill: gray.darken(20%))
table(
columns: (auto, ) * 5,
$p$, $q$, $p "AND" q$, $p "OR" q$, $p "XOR" q$,
[F], [F], [ F], [ F], [ F],
[F], [T], [ F], [ T], [ T],
[T], [F], [ F], [ T], [ T],
[T], [T], [ T], [ T], [ F],
)})<tab:truth-tables>
Here, the values 'F' and 'T' stand for "False" and "True", respectively.
$p$ and $q$ are the inputs and the remaining three columns show the output of three types of gates.
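As an illustrative aside that is not part of the thesis text, the same three gates can be written as tiny Python functions and the truth table reproduced by enumerating all inputs:

```python
# Minimal sketch: the two-input gates from the truth table as Python functions.
def AND(p, q): return p and q
def OR(p, q):  return p or q
def XOR(p, q): return p != q

# Reproduce the truth table by enumerating every input combination.
for p in (False, True):
    for q in (False, True):
        print(p, q, AND(p, q), OR(p, q), XOR(p, q))
```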
Logic gates can be arranged in larger circuits to perform more complex operations.
==== Selecting From Several Sources
As an example of how gates can be arranged into larger circuits, consider the multiplexer, or "mux" for short, a fundamental kind of circuit.
It has at least three inputs: $p$, $q$, and $s$, and an output $o$.
The truth table for a mux is shown in @tab:mux-truth-table.
#figure(caption: "Truth table for a two-input multiplexer", {
show "F": set text(fill: gray.darken(20%))
table(
columns: (auto, ) * 4,
$p$, $q$, $s$, $o$,
[F], [F], [F], [F],
[F], [T], [F], [F],
[T], [F], [F], [T],
[T], [T], [F], [T],
[F], [F], [T], [F],
[F], [T], [T], [T],
[T], [F], [T], [F],
[T], [T], [T], [T],
)})<tab:mux-truth-table>
The basic operation of a mux is that $s = "F" ==> o = p$, and $s = "T" ==> o = q$.
In other terms: when $s$ is false, the output is set to the first input and when $s$ is true, the output is set to the second input;
$s$ _selects_ which input to assign to the output.
A mux can, as an example, be implemented as $(p "AND" ("NOT" s)) "OR" (q "AND" s)$.
The unary $"NOT"$-gate simply inverts its input.
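As a hedged aside, not part of the thesis text, the mux expression can be checked directly in Python against the selection behaviour described above:

```python
# Minimal sketch: the mux expression (p AND (NOT s)) OR (q AND s).
def mux(p, q, s):
    return (p and (not s)) or (q and s)

# s selects the output: when s is false the output follows p, otherwise q.
assert mux(True, False, False) == True
assert mux(True, False, True) == False
assert mux(False, True, True) == True
```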
==== Working with Numbers
"True" and "False" can be used to represent the ones and zeroes of a binary number.
It is simple to create a logic circuit that performs, for example, long-addition on these numbers.
The most basic version is called a _half-adder_ which takes two input bits $a$ and $b$ and sums them up.
It has two outputs: sum $s = a "XOR" b$, and carry $c = a "AND" b$.
A _full-adder_ is like a half-adder, but it also accounts for a third input bit: carry-in.
An adder is constructed by chaining full-adders, connecting the carry output of one full-adder into the carry-in of the next.
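As an illustrative sketch that is not part of the thesis, the half-adder, full-adder, and ripple-carry chaining described above can be expressed directly in Python:

```python
# Minimal sketch: half-adder, full-adder, and a ripple-carry adder built by
# chaining full-adders (carry-out of one stage feeds the carry-in of the next).
def half_adder(a, b):
    return a ^ b, a & b              # (sum, carry)

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2               # (sum, carry-out)

def ripple_add(xs, ys):
    """Add two little-endian bit lists of equal length."""
    out, carry = [], 0
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

print(ripple_add([1, 1, 0], [1, 0, 1]))   # 3 + 5 = 8 -> [0, 0, 0, 1]
```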
==== Circuits with Memory
Logic is useful, but computers require _state_---as in "state of being".
When building circuits, it is a good idea to ensure logic does not directly depend on its own result.
That is to say: the input of any one gate cannot depend on its own output, directly or transitively; there is no path from the output of the gate back to the input.
Such a path is called a _combinational loop_ and tools generally do not accept them in a design.
An exception is made for the _register_ cell which is constructed by using logic gates that connect back to themselves with positive feedback.
A register cell _stores_ data that can be read back out at a later time.
It will usually have two inputs: data $d$, and enable $e$.
The operation of the register cell can be described thus:
When enable $e$ is true, the data $d$ are stored in the cell.
@fig:register-cell-diagram shows a basic register cell as described.
Notice how the outputs of the two rightmost NOT-gates feed back into each other's inputs.
Because of this feedback, when one output is "True", the other must be "False".
#figure(
```monosketch
┌───┐
╭┤NOT├┬───┐
│└───┘│AND├┬──┐ ┌───┐
│ ╭──┴───┘│OR├─┤NOT├┬──── o
│ │ ╭─┴──┘ └───┘│
│ │ ╭─│───────────╯
│ │ │ ╰───────────╮
│ │ ╰───┬──┐ ┌───┐│
d ──────┴──│──┬───┐│OR├─┤NOT├┴──── o'
│ │AND├┴──┘ └───┘
e ─────────┴──┴───┘
```,
caption: [A register cell using logic gates],
kind: image,
)<fig:register-cell-diagram>
With registers in place, _time_ is introduced as a factor.
The output of the circuit is no longer purely a function of the current input, but can depend on previous inputs and an initial state.
For example: the operation of a register cell is shown in @fig:register-cell-waveform.
This kind of diagram is called a _waveform_.
#figure(
```monosketch
╭─╮ ╭─╮ ╭─╮
e ─╯ ╰─╯ ╰─────╯ ╰─
───────╮
d ╰─────────
╭───────────╮
o ─╯ ╰───
```,
caption: [How the output $o$ changes over time with the three inputs for a register cell],
kind: image
)<fig:register-cell-waveform>
The storage element shown here is actually called a _latch_ and it updates continuously while the enable signal $e$ is active.
Another kind of register cell is the _flip-flop_, which can be constructed from two latches where the output of the first one (called the primary) is fed into a second (called the secondary).
The enable input of the secondary latch $e'$ is the inverted value of the enable input $e$ of the primary latch.
In this way, the primary latch can receive an updated value while the enable signal is high, and the secondary latch is only updated once the enable signal goes low again.
It is difficult to ensure that all latches update at the same time in a reliable manner.
Because of this, registers are usually implemented using flip-flops to give more timing tolerance.
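A small behavioural sketch, not from the thesis, of the difference just described: the latch is transparent while its enable is high, while a flip-flop built from two latches only exposes a new value once the enable goes low again.

```python
# Minimal behavioural sketch of a latch and a primary/secondary flip-flop.
class Latch:
    def __init__(self):
        self.q = False
    def tick(self, d, e):
        if e:
            self.q = d               # transparent: updates while e is high
        return self.q

class FlipFlop:
    """Two latches: the primary is enabled on e, the secondary on (not e)."""
    def __init__(self):
        self.primary, self.secondary = Latch(), Latch()
    def tick(self, d, e):
        p = self.primary.tick(d, e)           # captures d while e is high
        return self.secondary.tick(p, not e)  # exposes it once e goes low
```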
==== Register-Transfer Level
Registers and combinational logic are the basic building blocks of the _register-transfer level_ (RTL).
This is an abstraction level where circuits are modelled as flows of data between registers.
A _clock_ signal that toggles between on and off can be attached to the enable input $e$ of all flip-flops in the circuit to ensure a common time for when values change.
The space between two _rising edges_ (where the signal goes from low to high), is called a _clock cycle_.
When drawing diagrams, the clock signal is often left out for brevity.
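As a rough sketch of the register-transfer view, not taken from the thesis, the circuit's state can be modelled as a set of registers, with combinational logic computing the next values and every register updating together once per clock cycle:

```python
# Minimal sketch: registers hold state, combinational logic computes the next
# values, and all registers latch those values on each rising clock edge.
def combinational(state):
    return {"counter": state["counter"] + 1}   # purely a function of its inputs

state = {"counter": 0}
for cycle in range(3):            # each iteration models one clock cycle
    state = combinational(state)  # all registers update at once on the edge
print(state)                      # {'counter': 3}
```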
=== Elements of an Instruction Set Architecture
An ISA defines an abstract machine, the instructions it executes, and what the effects of those instructions are.
That is, an implementation should behave as if there is some set of resources, and instructions that use and modify those resources.
In this section, we cover the most basic elements of such a specification.
Most ISA documents will specify all of these concepts.
==== Memory Space
Values can be loaded from or stored to memory at an _address_ which is an index into a large array of values.
Different ranges of addresses may be mapped to different types of memory.
The main memory stores program data and instructions and has no side-effects---i.e. using load and store instructions on the main memory has no other observable effect than to read or write those values.
Other address ranges may be mapped to various devices and can have side-effects.
ISAs designed for running operating systems (OSs) usually contain specifications for _memory virtualisation_.
Virtualised memory uses _virtual addresses_ and a _translation_ scheme to translate from these virtual addresses to the "real" physical addresses.
This way, individual applications can access the same virtual address, but refer to different values.
Thus, an operating system can, for example, start two instances of the same program without them interfering with each other's values.
Modern virtual memory is handled at the granularity of _pages_, where a fixed-size virtual address range is mapped contiguously to an equally sized section of physical memory.
Pages that are adjacent---according to their addresses---in virtual memory are not necessarily adjacent in physical memory.
Virtual memory is transparent.
I.e.: it does not matter to an individual application whether the memory space it uses is virtualised or not.
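As an illustration that is not drawn from any particular ISA, page-based translation amounts to splitting a virtual address into a page number and an offset, then replacing the page number with the physical frame found in a translation table (the page size and table contents below are made up):

```python
# Minimal sketch of page-based address translation with assumed 4 KiB pages.
PAGE_SIZE = 4096

page_table = {0: 7, 1: 3}              # virtual page -> physical frame (made up)

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1123)))          # virtual page 1 -> frame 3 => 0x3123
```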
==== Program Counter
The _program counter_ (PC) holds the memory address of the next instruction to be executed.
==== Register File
Most ISAs state that the machine should have a set of registers, often called the _register file_.
This is storage that instructions will have fast and direct access to.
The ISA defines how many registers there should be and how large they are.
Each register in the file is assigned a number and instructions can refer to the particular register by its number.
==== Arithmetic and Logic Instructions
These instructions perform arithmetic and logic.
They read values from the register file, perform some computation with the values, and write the result to a destination in the register file.
==== Memory Instructions
Memory instructions load from or store to memory.
A load instruction has a destination register that it loads into, and a source register where the address comes from.
A store instruction has a source register where the address comes from, and another source register where the data come from.
==== Branch and Jump Instructions
Branch instructions take two source registers and compare them.
If the result of the comparison fulfills some condition, the program counter is updated with some new value.
The new value can come from a register, but often it will be constructed by adding the current program counter to a value encoded in the instruction, called an _immediate_.
Most instruction types can have immediate values.
Jump instructions are like branch instructions, except there are no registers to compare and the condition is always true.
Jump instructions come in several variants, but _jump-and-link_ (JAL) is a common one.
Jump-and-link writes the current value of the program counter to a destination register and jumps to the specified location.
This is useful for function calls and returns.
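As a toy sketch that does not correspond to any real ISA encoding, the program-counter updates described above can be written as a small interpreter step (register names, the 4-byte instruction size, and the instruction tuples are all made up for illustration):

```python
# Minimal sketch: PC updates for branch and jump-and-link in a toy interpreter.
def step(pc, regs, instr):
    op, *args = instr
    if op == "beq":                   # branch if equal, PC-relative immediate
        rs1, rs2, imm = args
        return pc + imm if regs[rs1] == regs[rs2] else pc + 4
    if op == "jal":                   # jump-and-link: save the return address
        rd, target = args
        regs[rd] = pc + 4             # assumes 4-byte instructions
        return target
    return pc + 4                     # every other instruction falls through

regs = {"x1": 0, "x2": 5, "x3": 5}
print(step(100, regs, ("beq", "x2", "x3", 40)))          # 140: condition holds
print(step(100, regs, ("jal", "x1", 400)), regs["x1"])   # 400 104
```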
==== Instruction Encoding Formats
Along with instructions and their effects, the ISA document must also specify what instructions "look like" to the processor: which sequences of bits and bytes correspond to each instruction.
=== _An_ Implementation
@fig:basic-computer shows an implementation of a compute-capable architecture.
Components with double borders are registers (storage), while those with a single border perform logic.
#figure(
```monosketch
┏ ━ ━ ━ ━ ━ ━ ━ ━ ━ ━ ╔════╗ ╔════════╗
┃ ║ADDR◀─────┐◁──▶ REG ║
┃ ╚═╤══╝ │ ╚════════╝
┃ ╔═▼════╗ ├─────┬────┐
┃ CTRL ║ MEM ◀──▷│ ╔═▼═╗╔═▼═╗
┃ ╚══════╝ │ ║OP1║║OP2║
┃ ◀────────────▷│ ╚═╤═╝╚═╤═╝
┃ ╔══════╗ │ ┌─▼────▼─┐
┃ ║ PC ◀──▷│◁──┤ ALU │
━ ━ ━ ━ ━ ━ ━ ━ ━ ━ ┛ ╚══════╝ └────────┘
```,
caption: [A basic computer with a shared bus],
kind: image,
)<fig:basic-computer>
The components are as follows:
- The shared bus, which is the line that runs vertically between the components,
- `ADDR`, the memory address to load from or store to in the memory,
- `MEM`, the memory of the processor,
- `REG`, the register file,
- `OP1` and `OP2`, the source operands of the
- `ALU`, the _arithmetic-logic unit_, and
- `PC`, the program counter.
- Finally, the control logic: `CTRL`.
Not shown are the connections from `CTRL` to all of the other components' control signals.
The solid arrowheads indicate that there is always a connection.
The unfilled arrowheads indicate that the connection is optional.
Because this architecture uses a shared bus, components must be able to disconnect their outputs from the bus to prevent interfering with values from other components.
==== Control Signals
- `ADDR`, `OP1`, and `OP2` all have input signals for write-enable.
- `MEM` has an input signal for write-enable and another for output-enable that controls whether `MEM` is outputting to the bus, in addition to the address coming from `ADDR`.
- `REG` likewise has input signals for write-enable and output-enable, plus an input signal for register-select that selects which register is being read or written.
- `PC` only has write-enable and output-enable signals.
- `ALU` has a function-select signal that specifies what operation it should perform on the two values in `OP1` and `OP2` (add, subtract, compare...).
It also has an output-enable.
==== Control Logic
Without going into too much detail, the control logic contains components that interpret encoded instructions and determine what and when control signals should be set to certain values to perform the instructions.
We will assume everything runs on a common clock.
The first thing the control logic should do is to load the next instruction from memory.
Cycle by cycle:
+ `PC` output-enable, `ADDR` write-enable.
+ `MEM` output-enable, `CTRL` stores the resulting value from the bus in some internal register.
If the instruction is an addition, the following should happen:
+ `REG` register-select set to first source register, `REG` output-enable, `OP1` write-enable.
+ `REG` register-select set to second source register, `REG` output-enable, `OP2` write-enable.
+ `ALU` function-select set to addition, `ALU` output-enable, `REG` register-select set to destination register, `REG` write-enable.
The `PC` then needs to be updated by incrementing the stored value:
+ `PC` output-enable, `OP1` write-enable.
+ `CTRL` puts increment value on bus, `OP2` write-enable.
+ `ALU` output-enable, `PC` write-enable.
And so it continues.
Notice that even a basic instruction like addition requires at least eight cycles---likely more, as the control logic has to determine which operations to perform in each step.
There are some easy optimisations, such as adding a separate connection from `MEM` to `CTRL` so the instruction can be read directly rather than over the shared bus, or adding specialised hardware to increment `PC`.
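The sequence above can also be written out as plain data, one set of asserted control signals per clock cycle. This is only an illustrative sketch with made-up signal names, not something taken from the thesis or a real design:

```python
# Minimal sketch: the cycle-by-cycle control sequence for fetching and executing
# one addition on the shared-bus machine, expressed as data.
FETCH = [
    {"PC.output_enable", "ADDR.write_enable"},
    {"MEM.output_enable", "CTRL.latch_instruction"},
]
ADD = [
    {"REG.select=rs1", "REG.output_enable", "OP1.write_enable"},
    {"REG.select=rs2", "REG.output_enable", "OP2.write_enable"},
    {"ALU.function=add", "ALU.output_enable", "REG.select=rd", "REG.write_enable"},
]
INCREMENT_PC = [
    {"PC.output_enable", "OP1.write_enable"},
    {"CTRL.drive_increment", "OP2.write_enable"},
    {"ALU.function=add", "ALU.output_enable", "PC.write_enable"},
]

micro_ops = FETCH + ADD + INCREMENT_PC
print(len(micro_ops))   # 8 cycles for a single addition
```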
=== Microarchitecture vs. Instruction Set Architecture
The presented computer is an example of how any given ISA can be physically implemented.
It is not the only possible implementation.
Just like the language standard does not specify which machine instructions should be used to implement specific concepts, ISAs do not specify what circuits to use, or where transistors should be placed relative to each other.
Herein lies the distinction between the ISA and what is called _microarchitecture_.
For an ISA, the basic unit of a program is an instruction.
However, as shown, any single instruction may require multiple steps, such as asserting various output-enable and write-enable signals at different times.
These steps are called _micro-operations_ (uOPs, u resembling the Greek letter #math.mu, the SI-prefix for micro-).
This under-specification of what an implementation must do gives a lot of freedom in choosing an appropriate microarchitecture for various use-cases.
Throughout this thesis, we present and discuss various microarchitectural patterns and optimisations.
|
|
https://github.com/matthiasGmayer/structural-independence-typst | https://raw.githubusercontent.com/matthiasGmayer/structural-independence-typst/Typst/presentation.typ | typst | #import "symbols.typ": *
#import "@preview/touying:0.5.2": *
#import themes.university: *
#import "@preview/cetz:0.2.2"
#import "@preview/fletcher:0.5.1" as fletcher: node, edge
#import "@preview/ctheorems:1.1.2": *
#import "@preview/numbly:0.1.0": numbly
#let disintegrates = "disintegrates"
#let generates = "generates"
#let given = "given"
// cetz and fletcher bindings for touying
#let cetz-canvas = touying-reducer.with(reduce: cetz.canvas, cover: cetz.draw.hide.with(bounds: true))
#let fletcher-diagram = touying-reducer.with(reduce: fletcher.diagram, cover: fletcher.hide)
// Theorems configuration by ctheorems
#show: thmrules.with(qed-symbol: $square$)
#let _thmargs = (inset:0pt,padding:(top:0pt,bottom:0pt))
#let _thmbox(str) = thmbox("theorem",str,.._thmargs)
#let theorem = _thmbox("Theorem")
#let definition = _thmbox("Definition")
#let exercise = _thmbox("Exercise")
#let lemma = _thmbox("Lemma")
#let remark = _thmbox("Remark")
#let example = _thmbox("Example")
#let corollary = _thmbox("Corollary")
#let conjecture = _thmbox("Conjecture")
#let observation = _thmbox("Observation")
#let proof = thmproof("proof","Proof",.._thmargs)
// #let theorem = thmbox("theorem", "Theorem", inset:(x:0pt, top:0pt),padding:(top:0pt,bottom:0pt),base:none)
// #let lemma = thmbox("theorem", "Lemma", inset:(x:0pt, top:0pt),padding:(top:0pt,bottom:0pt), base:none)
// #let corollary = thmbox(
// "theorem",
// "Corollary",
// base: none,
// inset : 0pt,
// padding:(top:0pt,bottom:0pt)
// )
// #let definition = thmbox("theorem", "Definition", inset: (x: 0em, top: 0em),padding:(top:0pt,bottom:0pt), base:none)
// #let conjecture = thmbox("theorem", "Conjecture", inset: (x: 0em, top: 0em),padding:(top:0pt,bottom:0pt), base:none)
// #let example = thmplain("example", "Example").with(numbering: none)
// #let proof = thmproof("proof", "Proof")
#let animations = true
// Animations On/Off
// #let animations = false
#show: university-theme.with(
aspect-ratio: "16-9",
// config-common(handout: true),
config-info(
title: [A Theory of Structural Independence],
author: [<NAME>],
date: [2024-10-25],
// institution: [],
logo: none,
),
)
#set heading(numbering: numbly("{1}.", default: "1.1"))
#title-slide()
// == Outline <touying:hidden>
// #components.adaptive-columns(outline(title: none, indent: 1em))
#align(center)[#text(size:1.4em)[*A Theory of Structural Independence*]]
#align(center)[<NAME>]
#text(size:0.9em)[_Abstract:
We will review the usage of Bayesian networks, $d$-separation
and causal discovery, and their limitations for making sense of structure in observed data distributions. We will highlight d-separation as the central object in classical causal discovery and present its generalization, "structural independence" as a combinatorial property of random variables on a product space.
The main theorem justifies this definition by showing the equivalence to
independence in all product probability distributions on the product space, generalizing soundness and completeness of $d$-separation._]
= Background
== Structure
Let $V$ be a set of variables and $PP$ a distribution over them.
What can we tell about the structure that the variables $V$ have?
(Assume finiteness for now)
// #set node(radius: 1em)
#pause
#table(columns: 2, stroke: none, column-gutter: 1em)[
#fletcher-diagram(
node-stroke: .1em,
node-fill: gradient.radial(white, blue.lighten(80%), center: (30%, 20%), radius: 80%),
spacing: 2em,
node((0,0), $V_1$, radius: 1em, name:<N1>),
edge("-|>"),
node((-1/2,1), $V_2$, radius: 1em, name:<N2>),
edge("-|>"),
node((0,2), $V_4$, radius: 1em, name:<N4>),
node((1/2,1), $V_3$, radius: 1em, name:<N3>),
edge(<N1>,<N3>,"-|>"),
edge(<N3>,<N4>,"-|>"),
)
#pause
][
#v(-15pt)
#definition[
Let $G=(V,E)$ be a directed acyclic graph (DAG).
#pause
$PP$ is compatible with $G$ if the following holds:
#pause
// A bayesian network is a directed graph $G=(V,E)$
// with a probability distribution $PP$ over $V$, s.t.
for all $X,Y in V$, s.t. there is no path from $X$ to $Y$, we have $X indep_PP Y | PA(X)$.
#pause
// \
Let $distributions(G)$ be the set of all $PP$ compatible with $G$.
#only("6-7")[
In this example: \
$V_2 indep_PP V_3 | V_1$
]
#only("7")[
$quad quad quad V_4 indep_PP V_1 | V_2, V_3$
]
#only("8")[
Equivalently, for all $v = (v_X)_(X in V) in Val(V)$,
$ PP(V=v) = product_(X in V) PP(X=v_X|PA(X)=v_PA(X)) $
]
]
// #pause
// Such a pair $(G,PP)$ is called bayesian network.
// A directed graph without $PP$ is a qualitative bayesian network. It characterizes admissible distributions.
]
#slide[
That means all $PP in distributions(G)$ arise as follows:
#figure[#fletcher-diagram(
node-stroke: .1em,
node-fill: gradient.radial(white, blue.lighten(80%), center: (30%, 20%), radius: 80%),
spacing: 3em,
node((0,0), $V_1$, radius: 1em, name:<N1>),
edge("-|>"),
node((-1/2,1), $V_2$, radius: 1em, name:<N2>),
edge("-|>"),
node((0,2), $V_4$, radius: 1em, name:<N4>),
node((1/2,1), $V_3$, radius: 1em, name:<N3>),
edge(<N1>,<N3>,"-|>"),
edge(<N3>,<N4>,"-|>"),
node((0.75,0), $PP(V_1 in dot| nothing)$, stroke: 0em, fill: white, name:<T1>),
node((-1.30,1), $PP(V_2 in dot| V_1)$, stroke: 0em, fill: white, name:<T2>),
node((1.30,1), $PP(V_3 in dot| V_1)$, stroke: 0em, fill: white, name:<T3>),
node((0.9,2), $PP(V_4 in dot| V_2, V_3)$, stroke: 0em, fill: white, name:<T4>),
edge(<T1>, <N1>, "-->"),
edge(<T2>, <N2>, "-->"),
edge(<T3>, <N3>, "-->"),
edge(<T4>, <N4>, "-->"),
)
]
]
== $d$-separation
Given a DAG, can we characterize all independencies implied by the assumption $PP in distributions(G)$?
#pause
There is a nice graphical criterion:
#pause
#definition[
Given a DAG $G$ and sets of nodes $X$,$Y$ and $Z$.
$X$ and $Y$ are
#box(stroke: (bottom: 1pt), outset: (bottom: 2pt))[$d$-connected]
given $Z$ if
there is a walk $W$ from a node in $X$ to a node in $Y$ s.t.
$w in W$ is a collider w.r.t. $W$ if and only if $w in Z$.
#pause
#fletcher-diagram(
node-stroke: .1em,
node-fill: gradient.radial(white, blue.lighten(80%), center: (30%, 20%), radius: 80%),
spacing: 1em,
edge((-3,0),<N1>,"--", label:$W_i$),
node((0,0),radius:1em, name:<N1>),
edge("-|>", label:$W_(i+1)$),
node((3,0),radius:1em,name:<N2>),
node((6,0),radius:1em, name:<N3>),
edge(<N2>,"-|>",label:$W_(i+2)$),
edge((9,0),<N3>,"--", label:$W_(i+3)$),
node((3,0.8),[Collider], fill:none, stroke:none),
(pause,),
node((6,0.8),[#h(-90pt)$arrow.l$ iff in $Z$], fill:none, stroke:none),
)
#pause
$X$ and $Y$ given $Z$ are
#box(stroke: (bottom: 1pt), outset: (bottom: 2pt))[$d$-separated]
if they are not $d$-connected given $Z$.
]
#let diagram1(fill1,fill2,fill3,fill4,fill5,fill6,fill7, caption:[Blue nodes are $d$-connected given yellow node.]) = figure(supplement:none, caption:caption )[#fletcher-diagram(
node-stroke: .1em,
node-fill: gradient.radial(white, blue.lighten(80%), center: (30%, 20%), radius: 80%),
spacing: 1em,
node((-2,0), $V_1$, radius: 1em, name:<N1>,fill:fill1),
edge("-|>"),
node((-1,1), $V_2$, radius: 1em, name:<N2>,fill:fill2),
node((2,0), $V_3$, radius: 1em, name:<N3>,fill:fill3),
edge("-|>"),
node((1,1), $V_4$, radius: 1em, name:<N4>,fill:fill4),
edge("-|>"),
node((0,2), $V_5$, radius: 1em, name:<N5>, fill:fill5),
edge("-|>"),
node((0,4), $V_6$, radius: 1em, name:<N6>, fill:fill6),
edge(<N2>,<N5>,"-|>"),
node((3,1), $V_7$, radius: 1em, name:<N7>, fill:fill7),
edge(<N3>,<N7>,"-|>"),
)
]
// #[
#let nf = gradient.radial(white, blue.lighten(80%), center: (30%, 20%), radius: 80%)
#let cf = gradient.radial(yellow.lighten(80%),yellow, center: (30%, 20%), radius: 80%)
#let sf = gradient.radial(blue.lighten(80%), blue, center: (30%, 20%), radius: 80%)
#let nrepeat = 9
#if animations==false {
nrepeat = 1
}
#slide(repeat: nrepeat, self => [
#let (uncover, only, alternatives) = utils.methods(self)
#let diaglist = (
diagram1(nf,nf,nf,nf,nf,nf,nf,caption:[#h(-60pt) Example Graph]),
// diagram1(nf,nf,nf,nf,nf,nf,nf,caption:[#h(-60pt) Example Graph]),
diagram1(nf,sf,nf,sf,cf,nf,nf),
diagram1(sf,nf,nf,sf,cf,nf,nf),
diagram1(sf,nf,sf,nf,cf,nf,nf),
diagram1(sf,nf,nf,nf,cf,nf,sf),
diagram1(sf,nf,nf,nf,nf,cf,sf),
diagram1(sf,cf,nf,nf,nf,cf,sf, caption:[Blue nodes are #text(red)[#underline[not]] $d$-connected given the yellow nodes, \ i.e. $d$-separated.]),
diagram1(sf,cf,nf,nf,nf,nf,sf, caption:[Blue nodes are #text(red)[#underline[not]] $d$-connected given the yellow nodes, \ i.e. $d$-separated.]),
// diagram1(sf,nf,nf,nf,cf,nf,sf),
diagram1(sf,nf,nf,cf,nf,nf,sf, caption:[Blue nodes are #text(red)[#underline[not]] $d$-connected given the yellow nodes, \ i.e. $d$-separated.]),
diagram1(sf,nf,cf,nf,nf,nf,sf, caption:[Blue nodes are #text(red)[#underline[not]] $d$-connected given the yellow nodes, \ i.e. $d$-separated.]),
)
#diaglist.at(self.subslide, default:none)
])
#slide[
We write $orth_d$ for the $d$-separation relation.
#theorem[
Let $X,Y,Z$ be sets of nodes.
$ X orth_d Y | Z <=>forall P in distributions(G) : X indep_P Y | Z. $
]
This reveals that $d$-separation is a type of 'structural independence'.
#pause
#figure[
#fletcher-diagram(
spacing:1em,
node-outset:4pt,
node((0,0),$G$,name:<G>),
edge("->",label:$PP in distributions(G)$),
node((3,0), $(G,PP)$,name:<GP>, outset:8pt),
pause,
node((0,1),$X orth_d Y | Z$,name:<d>),
edge("->"),
node((3,1),$X indep_PP Y | Z$,name:<p>),
edge(<G>,<d>,"-->", label:""),
edge(<GP>,<p>,"-->", label:""),
)
]
]
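As an aside not contained in the original slides, the criterion above can be checked mechanically. The following is a minimal Python sketch using the standard moralisation formulation of $d$-separation (ancestral subgraph, marry the parents, drop directions, delete $Z$, test connectivity), applied to the example graph from the earlier slides:

```python
# Minimal sketch: d-separation via the moralisation test on a DAG given as a
# parents map. Not part of the presentation; names and structure are made up.
from collections import deque

def ancestors(parents, nodes):
    """All ancestors of `nodes`, including the nodes themselves."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(parents, X, Y, Z):
    keep = ancestors(parents, set(X) | set(Y) | set(Z))
    adj = {v: set() for v in keep}
    for v in keep:
        ps = [p for p in parents.get(v, ()) if p in keep]
        for p in ps:                          # undirected parent--child edges
            adj[v].add(p); adj[p].add(v)
        for i, p in enumerate(ps):            # "marry" the parents
            for q in ps[i + 1:]:
                adj[p].add(q); adj[q].add(p)
    blocked = set(Z)                          # delete the conditioning set
    frontier = deque(set(X) - blocked)
    seen = set(frontier)
    while frontier:                           # search for a path from X to Y
        v = frontier.popleft()
        if v in Y:
            return False                      # connected => d-connected
        for w in adj[v] - blocked - seen:
            seen.add(w)
            frontier.append(w)
    return True

# The example DAG from the slides: V1 -> V2 -> V4 and V1 -> V3 -> V4.
parents = {"V1": set(), "V2": {"V1"}, "V3": {"V1"}, "V4": {"V2", "V3"}}
print(d_separated(parents, {"V2"}, {"V3"}, {"V1"}))         # True
print(d_separated(parents, {"V4"}, {"V1"}, {"V2", "V3"}))   # True
print(d_separated(parents, {"V2"}, {"V3"}, {"V4"}))         # False: collider in Z
```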
// #table(columns: 2, stroke: none, column-gutter: 1em)[
// ][
// #definition[
// $PP$ is compatible with a directed graph $G=(V,E)$ if
// // A bayesian network is a directed graph $G=(V,E)$
// // with a probability distribution $PP$ over $V$, s.t.
// $X indep_PP Y | PA(X)$ for all $X,Y in V$, s.t. $X cancel(-->,stroke:#1.5pt) Y$.
// Let $distributions(G)$ be the set of such $PP$.
// Then it holds that
// $ PP(V=v) = product_(X in V) PP(X=v_X|PA(X)=v_PA(X)) $
// ]
// Such a pair $(G,PP)$ is called bayesian network.
// // A directed graph without $PP$ is a qualitative bayesian network. It characterizes admissible distributions.
// ]
== Causal Discovery
We want to infer the structure of the graph given $PP$. \
#pause
Realizability assumption: $PP$ is compatible with a graph and
all independencies are characterized by $d$-separation. We say $PP$ is a perfect map.
// #pause
// #theorem[
// $X orth_d Y | Z <=> {P in distributions(G): X indep_P Y | Z}$ has interior.
// ]
#table(columns: 2, stroke: none, column-gutter: 1em)[
#fletcher-diagram(
node-stroke: .1em,
node-fill: gradient.radial(white, blue.lighten(80%), center: (30%, 20%), radius: 80%),
spacing: 2em,
node((0,0), $V_1$, radius: 1em, name:<N1>),
node((-1/2,1), $V_2$, radius: 1em, name:<N2>),
node((0,2), $V_4$, radius: 1em, name:<N4>),
node((1/2,1), $V_3$, radius: 1em, name:<N3>),
edge(<N1>,<N3>,[?],"-|>"),
edge(<N2>,<N3>,[?],"-|>"),
)
][
#pause
In a sense, we can see that this is not too strong an assumption.
#pause
#theorem[
$X orth_d Y | Z <=> {P in distributions(G): X indep_P Y | Z}$ has interior.
]
// In a graph, the set of $P$ where \
// $X indep_P Y | Z$ holds contrary to $d$-separation is nowhere dense and closed.
#pause
It is known that it is impossible to infer the whole graph in general.
The equivalence classes that you can infer are called _Markov equivalence classes_.
// #definition[
// A bayesian network is a directed graph $G=(V,E)$
// with a probability distribution $PP$ over $V$, s.t.
// $X indep Y | PA(X)$ for all $X,Y in V$, s.t. $X cancel(arrow,stroke:#1.5pt) Y$.
// It holds that
// $ PP(V=v) = product_(X in V) PP(X=v_X|PA(X)=v_PA(X)) $
// ]
// A directed graph without $PP$ is a qualitative bayesian network. It characterizes admissible distributions.
]
#slide[
For example, the following graphs have the same independencies:
#figure[#table(columns:2,column-gutter: 2em,stroke:none)[
#fletcher-diagram(
node-stroke: .1em,
node-fill: gradient.radial(white, blue.lighten(80%), center: (30%, 20%), radius: 80%),
spacing: 2em,
node((0,0),$V_1$, radius:1em),
edge("-|>"),
node((1,0),$V_2$, radius:1em),
edge("-|>"),
node((2,0),$V_3$, radius:1em),
)
][
#fletcher-diagram(
node-stroke: .1em,
node-fill: gradient.radial(white, blue.lighten(80%), center: (30%, 20%), radius: 80%),
spacing: 2em,
node((2,0),$V_3$, radius:1em),
edge("-|>"),
node((1,0),$V_2$, radius:1em),
edge("-|>"),
node((0,0),$V_1$, radius:1em),
)
]
]
namely, $V_1 orth_d V_3 | V_2$.
#pause
Or even simpler,
#figure[#table(columns:2,column-gutter: 5em,stroke:none)[
#fletcher-diagram(
node-stroke: .1em,
node-fill: gradient.radial(white, blue.lighten(80%), center: (30%, 20%), radius: 80%),
spacing: 2em,
node((0,0),$V_1$, radius:1em),
edge("-|>"),
node((1,0),$V_2$, radius:1em),
)
][
#fletcher-diagram(
node-stroke: .1em,
node-fill: gradient.radial(white, blue.lighten(80%), center: (30%, 20%), radius: 80%),
spacing: 2em,
node((1,0),$V_2$, radius:1em),
edge("-|>"),
node((0,0),$V_1$, radius:1em),
)
]
]
with no structural independencies.
]
#slide(repeat:8, self => [
But we only used the independencies between nodes!
#pause
There are many more potential independencies like
$V_1 indep (V_1+V_2)$.
#pause
We want to extend '$d$-separation' to arbitrary random variables on the graph,
so that we can infer more structure.
#pause
#if self.subslide <= 5 {
figure[
#fletcher-diagram(
node-stroke: .1em,
node-fill: gradient.radial(white, blue.lighten(80%), center: (30%, 20%), radius: 80%),
spacing: 1em,
node((0,0),$V_1$, radius:1em),
edge("-|>",shift:5pt, label:"?"),
edge("<|-",shift:-5pt),
node((2,0), $V_2$,radius:1em),
node((4,0), $V_1 cancel(indep,stroke:#1.5pt) V_2$, stroke:none,fill:none),
pause,
node((1,1), $V_1 + V_2$,shape:fletcher.shapes.ellipse,extrude:10pt),
node((4.3,1), $V_1 indep V_1 + V_2$, stroke:none,fill:none),
)
]
} else if self.subslide == 6 {
figure[
#fletcher-diagram(
node-stroke: .1em,
node-fill: gradient.radial(white, blue.lighten(80%), center: (30%, 20%), radius: 80%),
spacing: 1em,
node((0,0),$V_1$, radius:1em),
edge("-|>"),
node((2,0), $V_2$,radius:1em),
node((4,0), $V_1 cancel(indep,stroke:#1.5pt) V_2$, stroke:none,fill:none),
node((1,1), $V_1 + V_2$,shape:fletcher.shapes.ellipse,extrude:10pt),
edge((2,0),"-|>"),
node((4.3,1), $V_1 indep V_1 + V_2$, stroke:none,fill:none),
)
]
} else {
figure[
#fletcher-diagram(
node-stroke: .1em,
node-fill: gradient.radial(white, blue.lighten(80%), center: (30%, 20%), radius: 80%),
spacing: 1em,
node((0,0),$V_1$, radius:1em),
edge("-|>"),
node((2,0), $V_2$,radius:1em),
node((4,0), $V_1 cancel(indep,stroke:#1.5pt) V_2$, stroke:none,fill:none),
node((1,1), $V_1 + V_2$,shape:fletcher.shapes.ellipse,extrude:10pt,outset:10pt, fill:none, stroke:(dash:"dashed")),
edge((2,0),"--|>"),
node((4.3,1), $V_1 indep V_1 + V_2$, stroke:none,fill:none),
)
]
[
#pause
#pause
#pause
#pause
Therefore there is value to be gained from a general account of structural independence.
]
}
])
// #pause
//
// The product $PP(V=v) = product_(X in V) PP(X=v_X|PA(X)=v_PA(X))$
// reveals a product space structure.
// One factor corresponding to $PP(X=x| PA(X)="pa")$ with fixed 'pa'.
// #[
= Structural Independence #h(-13pt)
// ]
== Setting
Let $I$ be an arbitrary index set.
#pause
$(Omega, AS) := Times.circle_(i in I) (Omega_i, AS_i)$.
#pause
Reference measure: $PP = times.big_(i in I) PP_i$, where $PP_i in distributions (Omega_i)$.
#pause
$distributionstimes (Omega) := {Times_(i in I) P_i << PP | forall i in I: P_i in distributions(Omega_i)}$.
#pause
Question: Can we characterize
$forall P in distributionstimes (Omega) : X indep_P Y | Z?
$
Recall: $X indep_P Y | Z :<=> forall A in sigma(X), B in sigma(Y) : P(A|Z)P(B|Z) =^"a.s." P(A,B|Z) #h(-100pt) $
#pause
#figure[
#fletcher-diagram(
spacing:1em,
node-outset:4pt,
node((0,0),$Omega$,name:<G>),
edge("->",label:$P in distributionstimes(Omega)$),
node((3,0), $(Omega,P)$,name:<GP>, outset:8pt),
pause,
node((0,1),$X orth Y | Z$,name:<d>),
edge("->"),
node((3,1),$X indep_P Y | Z$,name:<p>),
edge(<G>,<d>,"-->", label:""),
edge(<GP>,<p>,"-->", label:""),
)
]
== Definitions
#definition[Index-set function][We call a measurable mapping $J : Omega -> 2^I$ an index-set function.]
#pause
For $J subset.eq I$, let $pi_J : Omega -> Times_(i in J) Omega_i$ be the projection on the $J$ components. #h(-100pt)
#pause
#definition[Generalized projection][
For an index-set function $J$, let $pi_J (omega) = pi_(J (omega)) (omega)$ with signature
$ pi_J : Omega -> Union_(I_0 subset.eq I) (times.big_(i in I_0) Omega_i) $
]
#let x-max = 5
#let y-max = 5
#let start = (
cetz.draw.translate(x:0, y:-100),
cetz.draw.scale(1.4),
)
#let basic = (
cetz.draw.rect((-1,7.25),(6,5),fill:white,stroke:none),
cetz.draw.rect((-1,-2.25),(6,0),fill:white,stroke:none),
cetz.draw.rect((-1,-1),(0,6),fill:white,stroke:none),
cetz.draw.rect((6.5,-1),(5,6),fill:white,stroke:none),
cetz.draw.line((0, 0), (x-max+0.4, 0), mark: (end: ">")),
cetz.draw.line((0, 0), (0, y-max+0.4), mark: (end: ">")),
// Origin label
cetz.draw.content((0, -0.5), [0]),
cetz.draw.content((-0.5, 0), [0]),
cetz.draw.content((-0.5, y-max), [1]),
cetz.draw.content((x-max,-0.5), [1]),
cetz.draw.line((0,y-max),(x-max,y-max)),
cetz.draw.line((x-max,0),(x-max,y-max))
)
#let colors = (
rgb(173, 216, 230), // Light Blue
rgb(135, 206, 235), // Sky Blue
rgb(176, 224, 230), // Powder Blue
rgb(175, 238, 238), // Pale Turquoise
rgb(202, 225, 255), // Light Steel Blue
)
// #let grid(n) = (
// if n == 0 {
// none
// } else if n == 1 {
// cetz.draw.grid((0,0),(1,1),grid:(x:0.1,y:1))
// } else if n == 2 {
// cetz.draw.grid((0,0),(1,1),grid:(x:1,y:0.1))
// } else if n == 3 {
// cetz.draw.grid((0,0),(1,1),grid:(x:0.1,y:0.1))
// }
// )
#let covering = (
cetz.draw.scale(x-max),
cetz.draw.hobby(
(0, 0),
(0.4, 0.3),
(0.7, 0.5),
(1, 0.4),
(1, 0),
close: true,
fill: colors.at(0)
),
cetz.draw.hobby(
(0, 0),
(0.5, 0.3),
(0.1, 0.7),
(0, 0.6),
close: true,
fill: colors.at(1)
),
cetz.draw.hobby(
(0.3, 0.5),
(1, 0.4),
(1, 1),
(0.2, 0.9),
(0.2, 0.8),
close: true,
fill: colors.at(3)
),
cetz.draw.hobby(
(0, 0.6),
(0.3, 0.7),
(0.6, 0.8),
(0.8, 0.9),
(1, 1),
(0, 1),
close: true,
fill: colors.at(4)
),
cetz.draw.scale(1/x-max),
)
#let rects = (
cetz.draw.scale(5),
cetz.draw.rect((0, 0), (0.6, 0.4), fill:rgb(173, 216, 230)),
cetz.draw.rect((0.6, 0), (1, 0.4), fill:rgb(135, 206, 235)),
cetz.draw.rect((0, 0.4), (0.3, 1), fill:rgb(176, 224, 230)),
cetz.draw.rect((0.3, 0.4), (0.8, 0.7), fill:rgb(175, 238, 238)),
cetz.draw.rect((0.8, 0.4), (1, 1), fill:rgb(202, 225, 255)),
cetz.draw.rect((0.3, 0.7), (0.8, 1), fill:rgb(173, 216, 230)),
cetz.draw.scale(1/5),
)
#slide(repeat: 4, self => [
#v(-30pt)
#if self.subslide==1 {
figure[
#cetz-canvas({
start
basic
cetz.draw.content((2.5,-1.5),[$Omega = [0,1] times [0,1]$])
})]
} else if self.subslide == 2 {
figure[#cetz-canvas({
start
covering
basic
cetz.draw.content((2.5,-1.5),[$Omega = [0,1] times [0,1]$])
})]
} else if self.subslide == 3 {
figure[#cetz-canvas({
start
covering
basic
cetz.draw.content((2.5,-1.5),[$J:Omega->2^I$])
cetz.draw.content((1.2,4.2),[$nothing$])
cetz.draw.content((0.8,1.2),[${1}$])
cetz.draw.content((3.8,0.8),[${2}$])
cetz.draw.content((3.3,2.8),[${1,2}$])
})]
} else {
figure[#cetz-canvas({
start
covering
basic
cetz.draw.content((2.5,-1.5),[$sigma(pi_J)$])
let step = 0.5
let y1 = (0,0,0,0,0.4,1.5,1.47,1.4,1.45,1.63)
let y2 = (0,2.97,3.2,3.5,3.71,3.84,4,4.27,4.5,4.68)
for i in range(calc.min(y1.len(),y2.len())) {
cetz.draw.line((step*i,y1.at(i)),(step*i,y2.at(i)))
}
let x1 = (0,2.08,2.37,2.5,1.94,1.5,1.25,1.5,3,4)
let x2 = (0,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5)
for i in range(calc.min(x1.len(),x2.len())) {
cetz.draw.line((x1.at(i),step*i),(x2.at(i),step*i))
}
// cetz.draw.content((1.2,4.2),[$nothing$])
// cetz.draw.content((0.8,1.2),[${1}$])
// cetz.draw.content((3.8,0.8),[${2}$])
// cetz.draw.content((3.3,2.8),[${1,2}$])
})]
}
])
== Definitions
#slide(repeat:5, self => [
#definition[Disintegration][$J disintegrates Z$ iff
$ forall P in distributionstimes(Omega): pi_J indep_P pi_comp(J) | Z. $]
#pause
#definition[Generation][Let $J$ be a $Z ms$ index-set function.
$J$ generates $X$ given $Z$ if
$X$ is $sigma(pi_J,Z) ms$ and $J$ disintegrates $Z$.
]
#pause
#lemma[
If $J generates X$ given $Z$ and $K generates Y$ given $Z$ and
$J sect K =^"a.s." nothing$ then
$forall P in distributionstimes(Omega) : X indep_P Y | Z$.
]
#pause
#proof[
By the definition of generation, we have $sigma(X) subset.eq sigma(pi_J,Z)$ and likewise
by assumption $sigma(Y) subset.eq sigma(pi_comp(J),Z).$
#pause
Disintegration gives
$(pi_J,Z) indep_P (pi_comp(J),Z) | Z$ for any $P in distributionstimes(Omega)$.
// #v(-100pt)
]
])
== Results
#lemma[
There exists a minimal generating index-set function for $X$ given $Z$.
Denote this minimal element by $history (X|Z)$. I.e.
for any generating $J$, we have
$history(X|Z) subset.eq J "a.s."$
]
#pause
#proof[idea][
$Sect_(s in S) A_s$ exists up to nullsets.
]
#definition[history][
$history(X|Z) : Omega -> 2^I$ is the almost surely unique minimal index-set function that generates
$X$ given $Z$.
]
#slide[#figure[#cetz-canvas({
start
basic
rects
cetz.draw.content((2.5,-1.5),$sigma(Z)$)
})]
]
#slide[#figure[#cetz-canvas({
start
basic
rects
cetz.draw.content((2.5,-1.5),$A subset.eq Omega$)
// cetz.draw.line((3.5,0),(3.5,4),stroke:(dash:"dashed"))
// cetz.draw.line((3.5,4),(0,4),stroke:(dash:"dashed"))
cetz.draw.rect((0,0),(3.5,4),fill:rgb(100,100,100,130), stroke:(dash:"dashed"))
cetz.draw.content((2,2),text(size:2em)[$A$])
})]
]
#slide[#figure[#cetz-canvas({
start
basic
rects
cetz.draw.content((2.5,-1.5),$history(1_A|Z)$)
// cetz.draw.line((3.5,0),(3.5,4),stroke:(dash:"dashed"))
// cetz.draw.line((3.5,4),(0,4),stroke:(dash:"dashed"))
cetz.draw.rect((0,0),(3.5,4),fill:rgb(100,100,100,130), stroke:(dash:"dashed"))
// cetz.draw.content((2,2),text(size:2em)[$A$])
// content()
cetz.draw.content((0.75,3.5), ${2}$)
cetz.draw.content((1.5,1.0), $nothing$)
cetz.draw.content((4.0,1.0), ${1}$)
cetz.draw.content((2.75,2.75), ${1}$)
cetz.draw.content((2.75,4.35), ${1,2}$)
cetz.draw.content((4.5,3.5), $nothing$)
})]
]
#slide[#figure[#cetz-canvas({
start
basic
cetz.draw.rect((0,0),(5,5), fill: rgb(173, 216, 230))
cetz.draw.rect((2,2),(5,5), fill: rgb(202, 225, 255))
cetz.draw.content((2.5,-1.5),$sigma(Z)$)
})]
]
#slide[#figure[#cetz-canvas({
start
basic
cetz.draw.rect((0,0),(5,5), fill: rgb(173, 216, 230))
cetz.draw.rect((2,2),(5,5), fill: rgb(202, 225, 255))
cetz.draw.rect((0,0),(1.5,5),fill:rgb(100,100,100,130), stroke:(dash:"dashed"))
cetz.draw.content((0.75,2.5),text(size:2em)[$A$])
cetz.draw.content((2.5,-1.5),$A subset.eq Omega$)
})]
]
#slide[#figure[#cetz-canvas({
start
basic
cetz.draw.rect((0,0),(5,5), fill: rgb(173, 216, 230))
cetz.draw.rect((2,2),(5,5), fill: rgb(202, 225, 255))
cetz.draw.rect((0,0),(1.5,5),fill:rgb(100,100,100,130), stroke:(dash:"dashed"))
cetz.draw.content((1.35,1.25),${1,2}$)
cetz.draw.content((3.35,3.25),$nothing$)
cetz.draw.content((2.5,-1.5),$history(1_A | Z)$)
})]
]
#slide[
#definition[Structural independence][
$ X orth Y | Z :<=> history(X|Z) sect history(Y|Z) = nothing "a.s." $
]
#pause
#theorem[
$X orth Y | Z <=> forall P in distributionstimes (Omega) : X indep_P Y | Z.
$
]
#pause
#lemma[If ${P in distributionstimes(Omega):X indep_P Y | Z}$ has interior, it is equal to $distributionstimes(Omega)$. ]
#pause
#corollary[
If $forall P in distributionstimes(Omega): X indep_P Y | Z$ then \
$forall P in distributionstimes (Omega) : pi_(history(X|Z)) indep_P
pi_(history(Y|Z)) | Z$.
]
]
#slide[
#figure[#cetz-canvas({
start
basic
rects
cetz.draw.content((2.5,-1.25),$forall P in distributionstimes(Omega): A indep_P B$)
cetz.draw.content((2.5,-2.1), $=> forall P in distributionstimes(Omega) : pi_history(1_A|Z) indep_P B$)
// cetz.draw.line((3.5,0),(3.5,4),stroke:(dash:"dashed"))
// cetz.draw.line((3.5,4),(0,4),stroke:(dash:"dashed"))
cetz.draw.rect((0,0),(3.5,4),fill:rgb(100,100,100,130), stroke:(dash:"dashed"))
cetz.draw.content((2,2),text(size:2em)[$A$])
})]
]
#slide[#figure[#cetz-canvas({
start
basic
rects
// cetz.draw.line((3.5,0),(3.5,4),stroke:(dash:"dashed"))
// cetz.draw.line((3.5,4),(0,4),stroke:(dash:"dashed"))
cetz.draw.content((2.5,-1.25),$forall P in distributionstimes(Omega): A indep_P B$)
cetz.draw.content((2.5,-2.1), $=> forall P in distributionstimes(Omega) : pi_history(1_A|Z) indep_P B$)
cetz.draw.rect((0,0),(3.5,4),fill:rgb(100,100,100,130), stroke:(dash:"dashed"))
// cetz.draw.content((2,2),text(size:2em)[$A$])
// content()
cetz.draw.content((0.75,3.5), ${2}$)
cetz.draw.content((1.5,1.0), $nothing$)
cetz.draw.content((4.0,1.0), ${1}$)
cetz.draw.content((2.75,2.75), ${1}$)
cetz.draw.content((2.75,4.35), ${1,2}$)
cetz.draw.content((4.5,3.5), $nothing$)
})]
]
#let rectgrids = (
cetz.draw.grid((0,2),(1.5,5), step:(5,0.2)),
cetz.draw.grid((1.5,3.5),(4,5), step:(0.2,0.2)),
cetz.draw.grid((1.5,2.0),(4,3.5), step:(0.2,2.2)),
cetz.draw.grid((3,0.0),(5,2.0), step:(0.2,2.2)),
)
#slide[#figure[#cetz-canvas({
start
basic
rects
// cetz.draw.line((3.5,0),(3.5,4),stroke:(dash:"dashed"))
// cetz.draw.line((3.5,4),(0,4),stroke:(dash:"dashed"))
cetz.draw.content((2.5,-1.25),$forall P in distributionstimes(Omega): A indep_P B$)
cetz.draw.content((2.5,-2.1), $=> forall P in distributionstimes(Omega) : pi_history(1_A|Z) indep_P B$)
cetz.draw.rect((0,0),(3.5,4),fill:rgb(100,100,100,130), stroke:(dash:"dashed"))
rectgrids
// cetz.draw.content((2,2),text(size:2em)[$A$])
// content()
})]
]
#slide[
#figure[#cetz-canvas({
start
basic
rects
// cetz.draw.content((2.5,-1.25),$forall P in distributionstimes(Omega): A indep_P B$)
// cetz.draw.content((2.5,-2.1), $=> forall P in distributionstimes(Omega) : pi_history(1_A|Z) indep_P B$)
// cetz.draw.line((3.5,0),(3.5,4),stroke:(dash:"dashed"))
// cetz.draw.line((3.5,4),(0,4),stroke:(dash:"dashed"))
cetz.draw.rect((0,0),(3.5,4),fill:rgb(100,100,100,130), stroke:(dash:"dashed"))
cetz.draw.content((2,2),text(size:2em)[$A$])
})]
]
#slide[
#figure[#cetz-canvas({
start
basic
rects
// cetz.draw.content((2.5,-1.25),$forall P in distributionstimes(Omega): A indep_P B$)
// cetz.draw.content((2.5,-2.1), $=> forall P in distributionstimes(Omega) : pi_history(1_A|Z) indep_P B$)
// cetz.draw.line((3.5,0),(3.5,4),stroke:(dash:"dashed"))
// cetz.draw.line((3.5,4),(0,4),stroke:(dash:"dashed"))
// cetz.draw.rect((0,0),(3.5,4),fill:rgb(100,100,100,130), stroke:(dash:"dashed"))
cetz.draw.rect((1.5,1),(5,3),fill:rgb(000,200,100,130), stroke:(dash:"dashed"))
// cetz.draw.content((2,2),text(size:2em)[$A$])
cetz.draw.content((3,2),text(size:2em)[$B$])
})]
]
#slide[
#figure[#cetz-canvas({
start
basic
rects
// cetz.draw.content((2.5,-1.25),$forall P in distributionstimes(Omega): A indep_P B$)
// cetz.draw.content((2.5,-2.1), $=> forall P in distributionstimes(Omega) : pi_history(1_A|Z) indep_P B$)
// cetz.draw.line((3.5,0),(3.5,4),stroke:(dash:"dashed"))
// cetz.draw.line((3.5,4),(0,4),stroke:(dash:"dashed"))
cetz.draw.rect((1.5,1),(5,3),fill:rgb(000,200,100,130), stroke:(dash:"dashed"))
cetz.draw.rect((0,0),(3.5,4),fill:rgb(100,100,100,130), stroke:(dash:"dashed"))
cetz.draw.content((1.4,2),text(size:2em)[$A$])
cetz.draw.content((3.5,2),text(size:2em)[$B$])
})]
]
// #slide(repeat:4, self =>[
// #if self.subslide == 2 {
// figure[#cetz-canvas({
// covering
// basic
// cetz.draw.content((2.5,-1.5),[$sigma(Z)$])
// })]
// } else if self.subslide == 3 {
// figure[#cetz-canvas({
// covering
// basic
// cetz.draw.content((2.5,-1.5),[Let $B subset.eq Omega$, if $forall P in distributionstimes(Omega): A indep_P B | Z$,])
// let step = 0.5
// let y1 = (0,0,0,0,0.4,1.5,1.47,1.4,1.45,1.63)
// let y2 = (0,2.97,3.2,3.5,3.71,3.84,4,4.27,4.5,4.68)
// for i in (2,) {
// cetz.draw.line((step*i,y1.at(i)),(step*i,y2.at(i)))
// }
// let x1 = (0,2.08,2.37,2.5,1.94,1.5,1.25,1.5,3,4)
// let x2 = (0,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5)
// for i in (2,) {
// cetz.draw.line((x1.at(i),step*i),(x2.at(i),step*i))
// // cetz.draw.circle((3.3,3),radius:1)
// cetz.draw.line((2,3.7),(5,2))
// cetz.draw.content((2.3,1.8),text(size:2em)[$A$])
// // cetz.draw.arc-through((3.5,),(2,2),(2,3))
// }
// })]
// } else if self.subslide == 4 {
// figure[#cetz-canvas({
// covering
// basic
// cetz.draw.content((2.5,-1.5),[Let $B subset.eq Omega$, if $forall P in distributionstimes(Omega): A indep_P B | Z$,])
// cetz.draw.content((2.5,-2.7),[then $forall P in distributionstimes(Omega): pi_J indep_P B | Z.$])
// let step = 0.5
// let y1 = (0,0,0,0,0.4,1.5,1.47,1.4,1.45,1.63)
// let y2 = (0,2.97,3.2,3.5,3.71,3.84,4,4.27,4.5,4.68)
// for i in range(calc.min(y1.len(),y2.len())) {
// cetz.draw.line((step*i,y1.at(i)),(step*i,y2.at(i)))
// }
// let x1 = (0,2.08,2.37,2.5,1.94,1.5,1.25,1.5,3,4)
// let x2 = (0,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5)
// for i in range(calc.min(x1.len(),x2.len())) {
// cetz.draw.line((x1.at(i),step*i),(x2.at(i),step*i))
// }
// })]
// }
// ])
== Compositional Semigraphoid
#counter("theorem").update(12)
#theorem[compositional semigraphoid axioms][
#pause
#let given = math.bar
$
&"1." quad&& X perp Y given Z <=> Y perp X given Z &"(symmetry)" \
&"2."&& X perp (Y,W) given Z => X perp Y given Z &#h(-100pt)"(decomposition)"\
&"3."&& X perp (Y,W) given Z => X perp Y given (Z,W) &"(weak union)"\
&"4."&& X perp Y given Z and X perp W given (Z,Y) =>
X perp (Y,W) given Z & "(contraction)" \
&&&"(Semigraphoid Axioms)"#pause \
&"5."&& X perp Y given Z and X perp W given Z =>
X perp (Y,W) given Z &"(composition)"\
$
]
By composition, pairwise structural independence and structural independence are equivalent.
== Disintegration
The definition of disintegration is not satisfactory.
Can we characterize
$forall P in distributionstimes(Omega) : pi_J indep_P pi_comp(J) | Z$ without
a quantifier?
#pause
#lemma[
$J$ disintegrates $Z$ iff $pi_J indep_PP pi_comp(J) | Z$.
]
#pause
#proof[idea][
$P = f dot PP$, $thick EE_f := integral dot dif PP$.
#pause
$thick EE_f (X|Z) = EE(f X|Z)/EE(f|Z). $
#pause
$f = product_(i in I) f_i =product_(i in I)
underbrace((1_(i in J) f_i + 1_(i in.not J)), pi_J ms)
underbrace((1_(i in J) + 1_(i in.not J) f_i), pi_comp(J) ms).
$
]
#pause
#lemma[
If $sigma(Z)$ corresponds to a countable partition, then $J disintegrates Z$ if
every atom $C in sigma(Z)$ satisfies
$C = pi_J (C) times pi_comp(J) (C)$. This is not true in general.
]
#slide[
#figure[#cetz-canvas(
start,
basic,
rects,
cetz.draw.content((2.5,-1.5),${1} disintegrates Z.$)
)]
]
#slide(repeat:4, self => [
#lemma[
If $J disintegrates Z$, then for $A in sigma(pi_J, Z), B in sigma(pi_comp(J),Z)$, with $A sect B = nothing$ there is $C in sigma(Z)$ s.t.
$A subset.eq C$, $B subset.eq C^c$.
]
#pause
#if self.subslide == 2 {figure[#cetz-canvas(
basic,
rects,
cetz.draw.content((2.5,-1.5),$A in sigma(pi_1,Z), quad B in sigma(pi_2,Z)$),
cetz.draw.rect((0,0),(2,2),stroke:(dash:"dashed"), fill:rgb(100,100,100,100)),
cetz.draw.content((1,1),$A$),
cetz.draw.rect((3,0),(5,1.5), stroke:(dash:"dashed"), fill:rgb(100,100,100,100)),
cetz.draw.content((4,0.75),$B$),
)]} else [
#pause
Interaction between $(pi_J,Z)$ and $(pi_comp(J),Z)$ happens only through $Z$.
#pause
When $sigma(Z)$ corresponds to a countable partition, this characterizes disintegration.
]
])
== Conjectures
We can put a metric on sets modulo nullsets by
$d(A,B) = PP(A triangle.t B)$.
#pause
#lemma[
$A_n -> nothing $ if and only if every subsequence has a subsequence s.t.
$lim sup A_n = nothing$
]
#pause
#conjecture[
There is a sense of convergence of $sigma$-algebras that only depends on nullsets, s.t.
$J$ disintegrates $Z$ iff there is a sequence of
partitions $BS_n$ with rectangular parts, s.t.
$sigma(BS_n) -> sigma(Z)$.
]
|
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/meta/link_05.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Styled with underline and color.
#show link: it => underline(text(fill: rgb("283663"), it))
You could also make the
#link("https://html5zombo.com/")[link look way more typical.]
|
https://github.com/Fr4nk1inCs/typreset | https://raw.githubusercontent.com/Fr4nk1inCs/typreset/master/src/bundles/homework.typ | typst | MIT License | #import "../styles/homework.typ": style
#import "../utils/question.typ": simple_question, complex_question |
https://github.com/i-am-wololo/cours | https://raw.githubusercontent.com/i-am-wololo/cours/master/TP/i22/tp3.typ | typst | #import "templates.typ": *
#show: project.with(title: "TP 3: combinational circuits")
This document is meant to record every step of my procedure, but it can also be used as a guide.
= Controlled inverter
A component that takes 2 inputs, a data input and a parameter input; depending on the parameter, it returns either the input signal or its negation.
Example: $C=1: O=not E; quad C=0: O=E$
The truth table is on the sheet attached to this document, and the circuit is as follows:
#image("./inverseur.png", width: 70%)
= 2-to-1 multiplexer
3 inputs, 1 output; one input determines which channel goes to the output.
#image("mux.png", width: 70%)
= 4-to-2 encoder
4 inputs, 2 outputs, simplified by ignoring the insignificant bits. Among the 4 inputs, one is ignored.
The formula is $(s 1, s 0) = E 3+E 2, E 2+not E 1* E 0 $
#image("./encoder.png", width: 70%)
Answer to: "How do we handle the ambiguity in the encoding between a value being present on input 0 and no value being present on any input?"
We can add a third output $G$ to indicate whether the encoder is active, i.e. that there is at least one active signal among the inputs, so that $G = I 0 + I 1 + I 2 + I 3$.
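A hedged sketch, not necessarily the circuit drawn on the worksheet, of a 4-to-2 priority encoder with such a group output $G$:

```python
# Illustrative sketch: 4-to-2 priority encoder with a "group" output G that is
# high whenever at least one input is active, so an all-zero output can be told
# apart from "input 0 is active".
def encode(e0, e1, e2, e3):
    g = e0 | e1 | e2 | e3            # G = I0 + I1 + I2 + I3
    s1 = e2 | e3
    s0 = e3 | (e1 & (1 - e2))        # priority to the highest-numbered input
    return s1, s0, g

print(encode(0, 0, 0, 0))   # (0, 0, 0): no input active
print(encode(1, 0, 0, 0))   # (0, 0, 1): input 0, distinguished from idle by G
print(encode(0, 0, 1, 1))   # (1, 1, 1): input 3 wins
```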
= 2-to-4 decoder
2 inputs, 4 outputs. It is the inverse of the encoder:
#image("./decoder.png", width:70%)
I was not able to do this one from memory; I had to consult the course notes to get it.
|
|
https://github.com/LeoJuguet/report-internship2024 | https://raw.githubusercontent.com/LeoJuguet/report-internship2024/main/report.typ | typst | // Setup
#import "@preview/ansi-render:0.6.1": *
#import "@preview/fletcher:0.5.1" as fletcher: diagram, node, edge
#import "@preview/cetz:0.2.2"
#import fletcher.shapes: diamond
#set page(numbering: "1")
#let head(title, authors, date) = align(center)[
#text(17pt)[*#title*]\
#authors \
#date \
]
#set heading(numbering:"1.")
#set raw(syntaxes: "jasmin.sublime-syntax")
#show raw.where(block: true) : block.with(
fill : luma(240),
inset : 10pt,
stroke : black,
)
#let def_counter = counter("definition")
#let def(title,text) = block(
fill: rgb("#edFCe6"),
stroke: green,
radius : 2pt,
inset: 1em,
width: 100%,
)[
#def_counter.step()
*Definition #context def_counter.display(): #title * \
#text
]
// Defined some symbols
#let widen = math.gradient
#let join = math.union
#let meet = math.sect
#let Init = (
init : "INIT",
inot : "NOT_INIT",
maybe : "MAYBE_INIT",
)
#let maybe_eq = math.tilde.eq
#let maybe_le = math.lt.tilde
#let maybe_ge = math.gt.tilde
#let eq = math.eq
#let le = math.lt.eq
#let ge = math.gt.eq
#let len = "len"
#let semOf(expr, isabstract: true, domain : "",name ) = [
#let abstrcat = if isabstract {
math.hash
} else {
""
}
#text(red)[$name^abstrcat_domain [|$]$expr$#text(red)[$|]$]
]
#let semExpr(expr, isabstract : false) = semOf(expr, isabstract: isabstract, $EE$)
#let semStmt(stmt, isabstract : false) = semOf(stmt, isabstract: isabstract, $SS$)
#let semCond(cond, isabstract : false) = semOf(cond, isabstract: isabstract, $CC$)
#let semExprA(expr, isabstract : true,domain : "") = semOf(expr, isabstract: isabstract,domain: domain , $EE$)
#let semStmtA(stmt, isabstract : true,domain : "") = semOf(stmt, isabstract: isabstract,domain: domain, $SS$)
#let semCondA(cond, isabstract : true,domain : "") = semOf(cond, isabstract: isabstract,domain: domain, $CC$)
// Truc
#let title = [
Jasa : The new Jasmin Safety Analyser based on MOPSA
]
#set text(11pt)
#set page(
header: [
#align(
right + horizon,
title
)
//#line(length: 100%)
],
)
#head(title, "<NAME>", "March - July 2024")
#align(center)[
#set par(justify:false)
*Abstract* \
We present a new safety checker for Jasmin, a programming language for high-assurance and high-speed
cryptographic code implementation. The safety checker checks the initialization of scalars, the initialization of
arrays to a certain extent, and out-of-bounds accesses, without unrolling loops, thanks to widening. This new safety
checker is mostly based on the MOPSA library. It is significantly faster than the previous one,
thanks to its support for function contracts and modular analysis.
Built on MOPSA, this new safety checker also enables checking more properties than the previous
one, with the ability to analyze values.
]
= Introduction <intro>
Jasmin @jasmin is a programming language that aims to be secure, particularly for cryptographic code.
Its compiler is written in Coq and provides a proof that the code will be correctly translated to assembly, under certain assumptions.
These assumptions are checked by a safety checker, but the previous one was too slow and not as precise as we wanted. Moreover,
that safety checker is hard to maintain today and doesn't allow modular analysis.
The goal of this internship was to create a new, more efficient, more maintainable, and more precise
safety analyzer able to check that the safety conditions of any Jasmin code are verified.
For this task, I used MOPSA, a static analyzer library that aims to be modular, in order to easily add a frontend for the Jasmin language.
By relying on the third-party library MOPSA, which maintains the backend of the abstract interpretation, the resulting codebase is more maintainable.
= Context
Writing a secure program is hard, as there are many considerations to take into account, and often small mistakes are
involuntarily made by programmers, such as badly definining variables or accessing out-of-bound memory. These mistakes
can lead to writing at unsafe locations, leading to an information leak. Hence, a tool is needed to detect mistakes.
In Jasmin, a safety analyzer already exists, but there is a main problem:
The current implementation is too slow for big programs, it's impossible to run an analysis on an independent function without
inlining other functions.
Moreover the safety checker also asks to maintain a backend to be able to do abstract interpretation.
The main goal of this internship was to explore MOPSA to see if it could be used to replace the old safety checker.
== Contribution
During this internship, I wrote a brand new safety analyzer for Jasmin @jasmin using the MOPSA @mopsa-phd library.
This safety checker is modular, meaning that functions can be analyzed independently. It can check that scalars are
properly initialized, as well as arrays and memory, and ensure there are no out-of-bounds accesses.
It relies on contracts to check properties at function boundaries, and it is also faster than the previous one.
#outline(indent : auto)
= Jasmin
Jasmin is a programming language for writing high-assurance and high-speed cryptography code.
The compiler is mostly written and verified in Coq; except for the parser, the remaining code, written in OCaml, is also verified in Coq @jasmin.
Jasmin has tools to translate a Jasmin program to EasyCrypt or CryptoLine, allowing developers to prove that their code is correct.
However, this translation and the compilation are correct only if the safety properties are verified.
The safety properties are checked by a safety checker.
Jasmin has a low-level approach, with a syntax that is a mix between assembly, C, and Rust. Jasmin has the following types:
- bool for booleans
- inline integers, which are not stored anywhere but are substituted directly into the code when the compiler unrolls loops
- words of size in ${8,16,32,64,128,256}$ bits
- arrays of words of fixed length, known at compile time
- reg variables that can hold addresses
The user has to specify whether a variable lives in a register or on the stack, but this does not affect the safety analysis.
A Jasmin program is a collection of:
- parameters (that are removed right after the parsing)
- global variables (that are immutable)
- functions (they can't be recursive)
The control flow of a Jasmin program is simple, with blocks, if statements, for loops, and while loops, all of which have the classic semantics of an imperative language.
== Safety
In Jasmin, safety is formally defined as "having a well-defined semantics", as specified in Coq.
A function is deemed safe if it can reach a final memory state and result values such that executing
the function from the initial state successfully terminates in that final state. The properties that must be verified include the following:
#def("Safety properties")[
The safety properties of a Jasmin program are :
- all scalar arguments are well initialized
- all return scalars are well initialized
- there is no out-of-bounds access for arrays
- there is no division by zero
- termination
- alignment of memory access
]
The absence of out-of-bounds accesses is a common condition in software verification, as it prevents writes to unauthorized memory locations and avoids information leaks.
Checking for division by zero is also crucial to ensure that the program does not crash. Memory access alignment is required by certain architectures and
can enhance execution speed.
Termination is important, especially in cryptographic programs, as most of them need to execute in constant time. Scalars must be initialized to ensure
that they are properly represented in the generated assembly code.
In the remainder of this report, we will not discuss division by zero, even though a prototype of division-by-zero detection is implemented,
because Jasmin is focused on cryptographic code, and in particular on constant-time programs, where division operations are not used
since they are not constant-time.
We will focus on the initialization of scalars and arrays, as well as on out-of-bounds accesses to arrays.
= Overview of Abstract Interpretation <abstract-interpretation>
*Note*: this part and @overview-mopsa are mostly a summary of @mopsa-phd. For proofs and more details, please refer to that document.
Abstract interpretation is a technique in the field of static analysis of programs. It analyzes a program by computing an over-approximation of its behavior,
without having to execute the program.
Abstract interpretation was formalized by <NAME> and <NAME> @cousot-lattice. All concrete values are replaced by an abstract
value, such that all values possible in a normal execution are included in the concretization of the abstract value.
A perfect abstract interpretation would compute the exact set of possible states of a program during or at the end of its execution.
However, it is generally impossible to compute such a set. Therefore, we compute an approximation. To do this, we define a poset over the set
of possible states of a program, and we say that the abstraction is sound, meaning it is a correct approximation, if a
superset of the possible states of the program can be derived from the abstraction.
#def("Poset")[
A poset, or partially ordered set, is a pair $(X, subset.sq)$ with $X$ a set and $subset.sq subset.eq X times X$ a reflexive,
anti-symmetric, and transitive relation.
]
#def("Sound abstraction")[
Let $A$ be a set, $(C,subset.eq)$ a poset, and $gamma in A -> C$ the concretization function.
$a in A$ is a sound abstraction of $c in C$ if $c subset.eq gamma(a)$.
]
Intuitively, the abstraction computes an over-approximation of the actually possible values. The abstraction
is still correct if it reports a bug that cannot occur, but it is incorrect if it fails to detect a bug that can occur.
The notion of sound abstraction can also be extended to functions.
#def("Sound abstraction operator")[
Let $A$ be a set, $(C,subset.eq)$ a poset, and $gamma in A -> C$ the concretization function.
$f^hash in A -> A$ is a sound abstraction of $f in C -> C$ if
$forall a in A, f(gamma(a)) subset.eq gamma(f^hash (a))$
]
#figure([
#grid(
columns: (30%,70%),
[
```jasmin
fn f(reg u64 x)
{
reg u64 a;
if x > 0 {
a = 5;
}else{
a = 7;
}
}
```]
,
[
#text(10pt)[
#figure(
[#diagram(
node-stroke: 1pt,
node((0.5,0), [$ a = top, x = top$], name: <init>),
edge("-|>",[then]),
node((0,1), [$ a = 5, x > 0$], name: <then>),
node((1,1), [$ a = 7, x <= 0$], name: <else>),
edge(<init>,<else>,"-|>" ,[else]),
node((0.5,2), [$ a = [5;7], x = top$], name:<end>),
edge(<then>,<end>,"-|>" ),
edge(<else>,<end>,"-|>" ),
node((0.5,1.5),$join$,stroke : none),
)],
)<hass-diagram-init>
]]
)],
caption: [ Example of abstraction of a program]
)<example-abstraction>
For example, in @example-abstraction, we first abstract $a$ and $x$ with the $top$ value,
which represents the full range of possible values. When we execute the `then` branch, we have the approximation that
$x > 0$ and $a = 5$. However, in the `else` branch, we have $x <= 0$ and $a = 7$.
At the end, we join the two branches to limit the number of different abstract states we have,
and the approximation gives that $a$ is in the range $[5; 7]$ while $x$ can take all values.
This is an overapproximation of the possible states, because $a = 6$ is impossible with the code in the example.
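To make this join concrete, the following OCaml sketch shows a minimal interval abstraction with a convex join; the type and function names are purely illustrative and are not taken from MOPSA.
```ocaml
(* A minimal interval abstraction: Bot is the empty interval,
   Itv (lo, hi) represents all the integers between lo and hi. *)
type itv =
  | Bot
  | Itv of int * int

(* Convex join: the smallest interval containing both arguments. *)
let join (a : itv) (b : itv) : itv =
  match a, b with
  | Bot, x | x, Bot -> x
  | Itv (l1, h1), Itv (l2, h2) -> Itv (min l1 l2, max h1 h2)

(* Joining the two branches of the example: a = 5 and a = 7. *)
let () =
  match join (Itv (5, 5)) (Itv (7, 7)) with
  | Itv (l, h) -> Printf.printf "a in [%d; %d]\n" l h (* prints "a in [5; 7]" *)
  | Bot -> print_endline "unreachable"
```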
== Widening <widening>
The widening operator was initially introduced in the paper by Cousot and Cousot @cousot-lattice.
It was defined to find a fixpoint in the approximation of while loops, since it is
impossible to compute the fixpoint of every loop with a guarantee that the computation terminates.
The semantics of while loops is given in @sem-stat.
#def("widening")[
A binary operator $widen$ is defined by
- $widen : A times A -> A$
- $forall (C,C') in A^2, C <= C widen C', C' <= C widen C'$
- $forall (y_i)_(i in NN) in A$, the sequence $(x_i)_(i in NN)$ computed as $x_0 = y_0, x_(i+1) = x_i widen y_(i+1)$,$exists k >= 0 : x_(k+1) = x_k$
]
The third condition on the widening operator is important to guarantee that the analysis terminates, but it does not
allow us to conclude that the loop itself terminates, unless the index used in the loop is bounded.
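As an illustration, the classic widening on intervals pushes any unstable bound to infinity, so that increasing chains stabilize after finitely many steps. The OCaml sketch below is only illustrative (it is not MOPSA's implementation) and uses `None` to represent an infinite bound.
```ocaml
(* Intervals with possibly infinite bounds: None stands for -oo or +oo. *)
type bound = int option
type itv = Bot | Itv of bound * bound

(* Classic interval widening: any bound that is not stable jumps to infinity,
   which is what guarantees the third condition of the definition above. *)
let widen (a : itv) (b : itv) : itv =
  match a, b with
  | Bot, x | x, Bot -> x
  | Itv (l1, h1), Itv (l2, h2) ->
    let lo =
      match l1, l2 with
      | Some l1, Some l2 when l2 >= l1 -> Some l1 (* lower bound is stable *)
      | _ -> None                                 (* otherwise it jumps to -oo *)
    in
    let hi =
      match h1, h2 with
      | Some h1, Some h2 when h2 <= h1 -> Some h1 (* upper bound is stable *)
      | _ -> None                                 (* otherwise it jumps to +oo *)
    in
    Itv (lo, hi)

(* [0;0] widened with [0;1] gives [0;+oo): the upper bound was not stable. *)
let _ = widen (Itv (Some 0, Some 0)) (Itv (Some 0, Some 1))
```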
= Overview of MOPSA <overview-mopsa>
MOPSA, which stands for Modular Open Platform for Static Analysis, is a research project
that develops a tool of the same name in OCaml. Its goal is to build static analyzers based
on the modular composition of abstract domains.
MOPSA works with domains, called composable abstract domains in @mopsa-phd.
These domains handle an abstract representation of the things they want to abstract
(e.g. an interval of values, or initialization states).
They also provide partial expression and statement transfer functions, which indicate how certain instructions
modify the abstract representation (e.g. `a = a + 2` has the effect of adding 2 to the interval representation of `a`).
#def("Composable abstract domain")[
A composable abstract domain is:
- an abstract poset $(D^hash, subset.sq.eq\ ^hash)$
- a smallest element $bot$ and a largest element $top$
- Sound abstractions of set union and intersection $union.sq.big^hash, meet.big.sq^hash$
- a widening operator $widen$
- partial expression and statement transfer functions, operating on the global abstraction
state $Sigma^hash$. $semExprA("expr" in Epsilon,domain: D^hash)$, $semStmtA("stmt" in S, domain: D^hash)$
- Concrete input and output states of the abstract domain, written $D^"in"$ and $D^"out"$
- a concretisation operator $gamma : D^hash -> P(P(D^"in") times P(D^"out"))$
]
In a configuration file, the programmer defines how different domains coexist.
When composing a domain, the system takes the first domain that can handle a given expression or statement.
@figure-mopsa-domains illustrates a simplified configuration.
The red domains are defined specifically for Jasmin. The blue domains are defined by MOPSA for its universal language.
Some domains only translate Jasmin statements into equivalent
statements in the universal language offered by Mopsa, like the loop domain that translates Jasmin's while and
for loops into a while loop of the universal language.
Other domains, like array initialization, provide a special abstraction for Jasmin, in this case to handle
the initialization of arrays (@init-array). In this simple example, the system first tries to see if the
intraproc domain from the Jasmin frontend can handle the current expression or statement. If so, that domain
will handle the case. Otherwise, the system checks the following domains.
For some statements, two different domains are executed, like array out-of-bounds and array initialization,
where one domain checks for out-of-bounds accesses and the other checks and updates the abstraction to determine
whether an array is properly initialized. A domain can also call other domains.
For example, if we have `a = b[i]`, the domains that handle integer assignments will ask for the value of the
expression `b[i]`, and the same mechanism described before will be used to find the right domains (`array initialization` to obtain the value and
`array out-of-bounds` to check that there is no error).
#figure(
[
#diagram(
node-stroke: 1pt,
spacing: 1em,
node((0,0),text(red)[jasmin],stroke: 0pt),
node((0.5,0),"intraproc", name:<intraproc>),
node((0.5,1),"interproc", name:<interproc>),
node((0.5,2),"loop", name:<loop>),
node((0,3),"array out-of-bounds", name:<arr-out>),
node((1,3),"array initialization", name:<arr-init>),
node((0,5),text(blue)[Universal],stroke: 0pt),
node((0.5,5),"intraproc", name:<intraproc-mopsa>),
node((0.5,6),"loop", name:<loop-mopsa>),
edge(<intraproc>,<interproc>, "-|>"),
edge(<interproc>,<loop>, "-|>"),
edge(<loop>,<arr-out>, "-|>"),
edge(<loop>,<arr-init>, "-|>"),
edge(<arr-init>,<intraproc-mopsa>, "-|>"),
edge(<arr-out>,<intraproc-mopsa>, "-|>"),
edge(<intraproc-mopsa>,<loop-mopsa>, "-|>"),
{
let tint(c) = (stroke: c, fill: rgb(..c.components().slice(0,3), 5%), inset: 10pt)
node(enclose: ((-1.5,-0.5),(1,3) ), ..tint(red))
node(enclose: ((0,5),(1.5,6.5) ), ..tint(blue))
},
)
],
caption: "Simplified domain configuration"
)<figure-mopsa-domains>
This way of handling abstraction offers a simple method for developers to add abstractions.
The developer only needs to extend the AST of MOPSA to translate their language into the MOPSA AST,
which is straightforward with OCaml's extensible types. After that, the developer simply defines new domains
to handle the new AST cases they have added. Moreover, MOPSA provides reduced domain signatures, such as the `Value Abstract Domain`,
so that the developer does not have to reimplement a full domain for things that are already well defined.
This Value Abstract Domain was used to analyze the initialization of scalar variables (@section-init-scalar).
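The extension mechanism relies on OCaml's extensible variant types. The sketch below only shows the general idea, with purely fictional constructor names; it does not reproduce the actual MOPSA or Jasmin AST.
```ocaml
(* The host analyzer declares an open (extensible) expression type... *)
type expr = ..

(* ...and a frontend can later add its own cases, here a fictional
   array-access expression for a Jasmin-like language. *)
type expr += E_example_array_get of string * expr

(* A domain only matches the cases it knows how to handle and returns
   None otherwise, so that another domain in the configuration can try. *)
let eval_example (e : expr) : string option =
  match e with
  | E_example_array_get (a, _) -> Some ("access to array " ^ a)
  | _ -> None
```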
#def("Value abstract domain")[
A value abstract domain is :
- a poset $(D^hash, subset.sq.eq\ ^hash)$
- a smallest element $bot$ and a largest element $top$
- a sound abstraction of :
  - constants and intervals
- binary operators
- set union and intersection
- a widening operator $widen$
- a concretisation operator $gamma : D^hash -> P(ZZ)$
]
== Semantic of statements <sem-stat>
=== Semantic of Jasmin
We write $semExpr(e) : S -> P(ZZ)$ for the semantics of an expression $e$.
Variables are evaluated in a context $sigma$.
$
semExpr(v)sigma = { sigma(v) }\
semExpr(z in ZZ)sigma = {z} \
semExpr(e_1 smash e_2)sigma = semExpr(e_1)sigma smash semExpr(e_2)sigma "with" smash in {+,-,times,div,%}\
$
With $A smash B = {x smash y | x in A, y in B} "for" smash in {+,-,times}$ and
$A smash B = {x smash y | x in A, y in B, y != 0} "for" smash in {div,%}$
We write $semStmt(s) : S -> P(S)$ for the semantics of a statement $s$.
We also define a conditional operator $semCond(c) : P(S) -> P(S)$ that filters the states satisfying the condition
$c$.
$
semCond( e_1 square e_2)Sigma = { sigma in Sigma | exists v_1 in semExpr(e_1)sigma, exists v_2 in semExpr(e_2)sigma, v_1 square v_2 }
$
$
semStmt(s_1\; ...\; s_n) = semStmt(s_n) compose ... compose semStmt(s_1) \
semStmt("if" c {s_t} "else" {s_f}) = semStmt(s_t) compose semCond(c) join semStmt(s_f) compose semCond(not c) \
semStmt( x = e)Sigma = {sigma[x <- v] | sigma in Sigma, v in semExpr(e)sigma} \
semStmt("for" v = c_1 "to" c_2 { s }) = semStmt(v = c_1\; "while" v < s { s; v = v + 1;}) \
semStmt("while" c { s_1 } (s_2)) = semStmt( s_2 \; "while" c { s_1 \; s_2 } ) \
semStmt("while" c { s_1 })Sigma = semCond(not c) ( union.big_(n in NN) (semStmt(s) compose semCond(c))^n Sigma)
$
In Jasmin, a function has only one `return` statement, at the end of the function.
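To illustrate the set-valued concrete semantics above, here is a small OCaml sketch that evaluates expressions to sets of integers in a store. The AST and names are a simplification written for this report, not the Jasmin compiler's AST.
```ocaml
module IntSet = Set.Make (Int)

(* A tiny expression language mirroring the concrete semantics above. *)
type expr =
  | Var of string
  | Cst of int
  | Add of expr * expr
  | Div of expr * expr

(* A store maps each variable to its value. *)
type store = (string * int) list

(* Lift a binary operation to sets of values; [ok] filters out forbidden
   right operands (e.g. 0 for division). *)
let lift op ok (a : IntSet.t) (b : IntSet.t) : IntSet.t =
  IntSet.fold
    (fun x acc ->
      IntSet.fold
        (fun y acc -> if ok y then IntSet.add (op x y) acc else acc)
        b acc)
    a IntSet.empty

let rec eval (sigma : store) (e : expr) : IntSet.t =
  match e with
  | Var v -> IntSet.singleton (List.assoc v sigma)
  | Cst z -> IntSet.singleton z
  | Add (e1, e2) -> lift ( + ) (fun _ -> true) (eval sigma e1) (eval sigma e2)
  | Div (e1, e2) -> lift ( / ) (fun y -> y <> 0) (eval sigma e1) (eval sigma e2)
```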
=== Semantic of Jasmin Abstraction
Since it is not possible to compute the exact semantics of Jasmin, we compute an over-approximation of the states.
In particular, for while loops, we do not try to compute the fixpoint by enumerating every possible iteration; instead, we use the
widening operator (defined in @widening). We add a $hash$ superscript to mark that the computation is done in the abstraction.
$
semStmtA(s_1\; ...\; s_n) = semStmtA(s_n) compose ... compose semStmtA(s_1) \
semStmtA("if" c {s_t} "else" {s_f}) = semStmtA(s_t) compose semCondA(c) join semStmtA(s_f) compose semCondA(not c) \
semStmtA( x = e)Sigma = {sigma[x <- v] | sigma in Sigma, v in semExprA(e)sigma} \
semStmtA("for" v = c_1 "to" c_2 { s }) = semStmtA(v = c_1\; "while" v < s { s; v = v + 1;}) \
semStmtA("while" c { s_1 } (s_2)) = semStmtA( s_2 \; "while" c { s_1 \; s_2 } ) \
semStmtA("while" c { s_1 })sigma^hash = semCondA(not c) lim delta^n (bot) "with" delta(x^hash) = x^hash widen (sigma^hash union.sq^hash semStmtA(s) compose semCondA(c) x^hash)
$
To obtain a better approximation of loops, MOPSA always unrolls the first iteration of the loop.
For loops, if the user has the guarantee that the loops terminate, it is also possible to unroll
the loop to have a better approximation. However, this slows down the analysis, because each iteration
of the loop is simulated and there is no guarantee that the analysis will terminate.
The user can also choose to unroll only a fixed number of iterations, and then perform the widening
operation afterwards. This also provides a better approximation, but it slows down the analysis.
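Putting the pieces together, the abstract analysis of a while loop can be sketched as a small fixpoint computation: optionally unroll a few iterations, then iterate with widening until stabilization. The OCaml sketch below is generic in the abstract state and only illustrates the scheme described above; the names are not MOPSA's, and the filtering by the negated loop guard still has to be applied to the result.
```ocaml
(* Generic post-fixpoint computation for a while loop.
   [body x] abstracts "filter by the guard c, then run the loop body";
   [leq], [join] and [widen] come from the abstract domain. *)
let analyze_while ~leq ~join ~widen ~body ~(unroll : int) (init : 'a) : 'a =
  (* Optionally unroll a few iterations first (a simplification of real
     unrolling: here we simply accumulate them with a join). *)
  let rec unroll_iters n x =
    if n = 0 then x else unroll_iters (n - 1) (join x (body x))
  in
  let start = unroll_iters unroll init in
  (* Then iterate delta(x) = x widen (start join body(x)) until it stabilizes. *)
  let rec iterate x =
    let next = widen x (join start (body x)) in
    if leq next x then next else iterate next
  in
  iterate start
```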
The semantics of these statements and instructions are classic in abstract interpretation, so Mopsa already has domains
for its universal language that can be reused. The Jasmin expressions and statements can be simply translated to the universal language,
and this translation is natural without any surprises.
= Initialisation of scalar <section-init-scalar>
In Jasmin, all scalar arguments or return values of a function have to be initialized.
Moreover, a variable is well initialized if it is assigned a well-initialized expression.
Well-initialized expressions are defined according to @concrete-semantic-init.
#figure(
[
$
semExpr(a star b) = cases("if" semExpr(a) = Init.init and semExpr(b) = Init.init "then" Init.init, "else" Init.inot) "with" star in { + , - , * , \/ , %} \
semExpr(circle.filled.small a) = semExpr( a ) "with" circle.filled.small in { + , - } \
semExpr(c) = Init.init "with" c "a constant" \
semStmt( v = e)Sigma = { sigma{ v <- i} | sigma in Sigma, i in semExpr(e)sigma}
$]
, caption: "concrete semantic of variable initialization"
)<concrete-semantic-init>
We define a slightly modified value abstract domain, where we add a $Init.maybe$ value in order to have a more precise abstraction.
#figure(
diagram(
node((0.5,0), [ MAYBE_INIT], name: <maybe>),
node((0,1), [INIT], name: <init>),
node((1,1), [NOT_INIT], name:<not_init>),
node((0.5,2), [$bot$], name:<bot>),
edge(<init>,<maybe>, "->"),
edge(<not_init>,<maybe>, "->"),
edge(<bot>, <init>, "->"),
edge(<bot>,<not_init>, "->"),
spacing: 1em,
),
caption: "Poset of Init domain"
)<poset-init-domain>
The $bot$ value of this abstraction is never reached, and is only present to obtain a lattice.
$Init.maybe$ plays the role of $top$.
#let init_domain = "Init"
#figure(
[
$
a meet^hash b = a widen^hash b = a join^hash b = cases(
a "if" a = b,
Init.maybe "if" a != b,
)
$
$
semExprA(a star b, domain: #init_domain) = semExprA(a, domain: #init_domain) union^hash semExprA(b, domain: #init_domain) "with" star in { + , - , * , \/ , %} \
semExprA(circle.filled.small a, domain: #init_domain) = semExprA( a, domain: #init_domain ) "with" circle.filled.small in { + , - } \
semExprA(c, domain: #init_domain) = Init.init "with" c "a constant" \
$],
caption: "Abstract semantic of variable initialization"
)
We also add a special instruction `InitVariable(var)` that takes a variable and returns the same
context where the variable given as argument is now initialized.
Within the abstract interpretation, these values can be interpreted as:
- $Init.maybe$ : there exists a path on which the variable is initialized (but possibly not on all of them)
- $Init.init$ : on all paths, the variable is initialized
- $Init.inot$ : on all paths, the variable is not initialized
We also define the concretization function to numerical values $gamma_(#init_domain -> "num")$ such that $gamma_(#init_domain -> "num") (v) = {top_("typeof"(v))} $
and to the concrete `Init` domain $gamma_(#init_domain -> "concrete init") (v) = cases( { Init.init } "if" v = Init.init, { Init.inot } "if" v = Init.inot, {Init.init, Init.inot} "if" v = Init.maybe)$.
This is a value abstract domain, so by property 2.5 of @mopsa-phd, MOPSA builds a
non-relational abstract semantics that is a sound abstraction of the concrete semantics.
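A minimal OCaml sketch of this initialization abstraction is given below. It only shows the lattice operations and the abstract evaluation rules from the figures above; it is a simplification and does not follow the exact value-domain signature required by MOPSA (see the appendix for the real domain signature).
```ocaml
(* Abstraction of the initialization status of a scalar variable. *)
type init =
  | Bot       (* never reached, only present to obtain a lattice *)
  | Init      (* initialized on every path *)
  | NotInit   (* uninitialized on every path *)
  | MaybeInit (* initialized on some paths only *)

(* On this small lattice, join, meet and widening all coincide. *)
let join a b =
  match a, b with
  | Bot, x | x, Bot -> x
  | _ -> if a = b then a else MaybeInit

let meet = join
let widen = join

(* Abstract evaluation rules of the figure: a constant is initialized,
   and a binary operation is abstracted by the join of its operands. *)
let eval_const = Init
let eval_binop a b = join a b

(* Concretization to the concrete Init domain of the report. *)
let gamma = function
  | Init -> ["INIT"]
  | NotInit -> ["NOT_INIT"]
  | MaybeInit -> ["INIT"; "NOT_INIT"]
  | Bot -> []
```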
= Initialisation of arrays <init-array>
Determining which parts of an array are initialized is essential for validating functions that access arrays.
== The difficulty <difficulty-array-init>
The main difficulty in proving that an array is well-initialized lies in ensuring that the over-approximation
of the index used to set a cell in a loop does not lead to a situation where it falsely appears that the entire
array is initialized. For example, in @ex1, the over-approximation might suggest that `a[x] = 5` for `x` $in [2; 510]$,
while in reality, only the even indexes are initialized.
#figure(
```jasmin
inline int i;
stack u32 a[512];
for i = 0 to 256 {
a[2*i] = 5;
}
```,
caption: [partial initialisation of an array],
) <ex1>
== Approaches
To initialize an array, different methods exist:
*Expanding the array:* A first approach is to define a variable for each cell of an array. This can work in Jasmin because
arrays have a fixed size. However, it is memory-consuming, and there is a risk of slowing down the analysis, because
the analysis has to iterate over all cells whenever an assignment uses an index whose value is over-approximated,
and it requires unrolling loops.
*Array smashing:* This approach abstracts an array by a single variable that represents its entire content,
but this does not work in our case to prove initialization.
*Parametric segmentation functor:* The approach described in @cousot-p @cousot-article defines a
functor that takes 3 different domains to represent values and bounds. The advantage of this algorithm is that it can
show that an array is initialized even if it is not initialized from one border towards the center. However, the problem with
the implementation described is that the functor takes three domains and manipulates values of these domains directly,
in particular through symbolic equalities on the bounds. This way of dealing with domains is not really in the spirit of
MOPSA, which prefers to compose domains while knowing as little as possible about their internals.
The symbolic equalities would require reimplementing the relational domain already available in MOPSA, in order to be able
to express symbolic equalities of a variable in two different contexts; this would require modifying MOPSA itself.
We did try to do this, but we finally chose a simpler implementation, for reasons of time and because the simpler algorithm
covers a sufficiently large number of Jasmin programs.
== 3 Segments <3-array-segs>
We want to deal with two constraints. First, we want to avoid having one variable for
each cell of an array. Creating a variable for each cell means
that each cell has a representation in each abstract domain; since we do not
know how many domains are in use, the memory needed to represent a simple array
can grow rapidly. Moreover, this also means that we need to modify and check each
variable, which can also slow down our safety checker.
Secondly, we want to be able to prove that an array is well initialized without
unrolling loops, because loop unrolling slows down the analysis, even though it gives a more
precise analysis for loops whose invariants are difficult to infer (see @perf).
So we finally moved to a simpler algorithm that can only prove the initialization of arrays if
they are initialized from a border of the array towards the center. In practice, this is the case for
the majority of Jasmin programs.
This abstraction is a form of array partitioning, but
with the number of partitions fixed at 3.
=== Representation
#let arr_domain = "Arr"
We suppose that the configuration of the analyzer provides a numerical domain $D^hash_"num"$, if possible a relational one,
and a domain that represents initialization, $D^hash_"init"$
(a numerical domain can also be used to analyze the values of the array).
We represent each array with 3 segments, so only 7 variables are needed to represent an array.
The bounds of the segments are represented by variables handled by the $D^hash_"num"$ domain,
while the values of the segments are represented in the $D^hash_"init"$ domain.
Four variables represent the limits of the segments, and three represent the values inside the segments.
So an array $a$ can be represented as $a = {b_0} s_1 {b_1} s_2 {b_2} s_3 {b_3}$ with the property $0 <= b_1 <= b_2 <= "len"(a)$.
Moreover, $b_0$ and $b_3$ are always constant, with values $0$ and $len(a)$ respectively.
The variables themselves are never modified in place: we only reassign them with new values. This makes it possible to delegate
operations such as union and intersection to the domains where the variables live.
Initially we have:
$
b_0 = 0,
b_1 = 0,
b_2 = b_3,
b_3 = "len"(a),
s_1 = s_2 = s_3 = top "and not init"
$
#figure(
[
#box(
stroke:black,
inset: 3em,
)[
#grid(
columns: (20%,60%,20%),
stroke: black,
inset: 1em,
$s_1$, $s_2$,$s_3$
)
#grid(
columns: (9%,11%,1%,59%,1%,13%,15%),
$b_0=0$,$$, $b_1$,$$, $b_2$,$$, $b_3=len(a)$
)]
],
caption : [Representation of an array]
)<repr-array>
This method is quite simple to implement in MOPSA. The only cases that need special treatment are reading a
value from the array and assigning a value to it.
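As a concrete picture, the seven variables of this representation can be summarized by the following OCaml record. In the actual implementation the bounds and segment contents are abstract variables handled by the numerical and initialization domains rather than plain values, so this is only a simplified sketch.
```ocaml
(* Simplified picture of the 3-segment abstraction of an array.
   In the analyzer, b0..b3 are variables of the numerical domain and
   s1..s3 live in the initialization (or value) domain. *)
type 'v segments = {
  b0 : int; (* always 0 *)
  b1 : int; (* 0 <= b1 <= b2 *)
  b2 : int; (* b1 <= b2 <= b3 *)
  b3 : int; (* always len(a) *)
  s1 : 'v;  (* abstraction of the cells in [b0; b1) *)
  s2 : 'v;  (* abstraction of the cells in [b1; b2) *)
  s3 : 'v;  (* abstraction of the cells in [b2; b3) *)
}

(* Initial abstraction of an array of length len, fully uninitialized. *)
let init_array ~(len : int) ~(top : 'v) : 'v segments =
  { b0 = 0; b1 = 0; b2 = len; b3 = len; s1 = top; s2 = top; s3 = top }
```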
=== Setter
#block(
fill: rgb("#f28585" ),
width: 100%,
stroke: red,
inset: 1em,
)[
*The intuition*:
The representation with three segments has two "movable" bounds, $b_1$ and $b_2$.
In many cases, the center segment is not initialized, meaning that the extremities
contain more precise information. To avoid losing this information, we extend the extremity
segments toward the middle only when we are certain that the bounds will always be included in the defined interval.
]
Let us suppose that we execute $a[i] = e$, with $i$ and $e$ two expressions.
First, we assume that $gamma(semExpr(i)) subset.eq [0; len(a)]$; if this is not the case,
we raise an out-of-bounds alarm and continue with this assumption.
We have the following semantics:
#let bound = $compose semCondA(i in [0;len(a)])$
$
semStmtA(a[i] = e\;) = union.big cases(
semStmtA(b_1 = i + len\; s_1 = cases(e "if" i = 0,s_1 join^hash e )) compose semCondA(b_1 = i and i + len <= b_2) bound,
semStmtA(b_1 = i + len \; b_2 = i + len \; s_1 = cases(e "if" i = 0,s_1 join^hash e)) compose semCondA(b_1 = i and i + len > b_2) bound,
semStmtA(b_1 = i \; b_2 = i + len \; s_2 = e) compose semCondA(b_1 = i + len and i + len = b_2) bound,
semStmtA(s_2 = s_2 join^hash e) compose semCondA(b_1 < i and i + len < b_2) bound,
semStmtA(b_2 = i \; s_3 = cases(e "if" i + len = len(a),s_3 join^hash e)) compose semCondA(b_1 < i and b_2 = i + len) bound,
semStmtA(s_3 = s_3 join^hash e) compose semCondA(b_1 < i and b_2 < i + len and b_2 < i) bound,
semStmtA(b_2 = i \; s_3 = s_3 join^hash e) compose semCondA(b_1 < i and b_2 < i + len and b_2 >= i) bound,
semStmtA(s_1 = s_1 join^hash e) compose semCondA(i < b_1 and i + len < b_1) bound,
semStmtA(b_1 = i + len \; s_1 = s_1 join^hash e) compose semCondA(i < b_1 and i + len >= b_1) bound,
)
$
In practice, we use a $join^hash$ operation that joins the possible values of two expressions. This is effectively a convex join in the interval domain.
The semantics above is essentially a case disjunction that determines whether the left or the right segment can be extended towards the middle.
When using the numeric relational domain of MOPSA, in the majority of cases, if we initialize from left to right, the first case is the one that is verified,
even with the widening of bounds and the index in loops.
This simpler approach, while potentially less precise than a full symbolic equality implementation, has proven sufficient for a large number of Jasmin programs,
allowing us to deliver a working analysis within the given time and resource constraints.
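The following OCaml sketch illustrates only the first and fourth cases of this disjunction (the strong update when writing right after the left bound, and the weak update inside the middle segment), with a deliberately coarse fallback for the remaining cases; `width` plays the role of the written length in the formulas. It is an illustration, not the actual implementation.
```ocaml
(* Stripped-down version of the segment record, without the constant bounds. *)
type 'v arr = { b1 : int; b2 : int; s1 : 'v; s2 : 'v; s3 : 'v }

(* Sketch of a[i] = v for a write of [width] cells, assuming the
   out-of-bounds check has already been performed. *)
let set ~join (a : 'v arr) ~(i : int) ~(width : int) (v : 'v) : 'v arr =
  if i = a.b1 && i + width <= a.b2 then
    (* First case: the write starts exactly at b1, so the left segment can
       be extended towards the middle (strong update when it starts at 0). *)
    { a with b1 = i + width; s1 = (if i = 0 then v else join a.s1 v) }
  else if a.b1 < i && i + width < a.b2 then
    (* Write strictly inside the middle segment: weak update only. *)
    { a with s2 = join a.s2 v }
  else
    (* The remaining cases of the disjunction are omitted here; a sound
       (but imprecise) fallback is to weakly update every segment. *)
    { a with s1 = join a.s1 v; s2 = join a.s2 v; s3 = join a.s3 v }
```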
=== Access
To access an element `a[i]` of an array, we use a simple algorithm.
We fold over the bounds from left to right. If two consecutive bounds are *always* equal,
we do not take the value between them; otherwise, if the interval $[|i;i + "len"|]$
overlaps with the interval delimited by the two bounds, we keep the corresponding segment.
At the end, we take the union of all the segments we kept.
So we have:
$
semExprA(a[i]) = union.big_(j in [|0;2|])^hash (semExprA(s_(j+1)) compose semCondA(b_j <= i <= b_(j+1)))
$
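The corresponding read can be sketched as a fold over the segments, joining every segment whose index range may overlap the accessed cells, exactly as in the formula above; empty segments (two equal consecutive bounds) are skipped. Again, this is an illustrative simplification.
```ocaml
(* Sketch of the getter for a read of [width] cells starting at index i.
   [bounds] is [b0; b1; b2; b3], [segs] is [s1; s2; s3], and [bot] is the
   neutral element of [join]. *)
let get ~join ~bot ~(bounds : int list) ~(segs : 'v list) ~(i : int)
    ~(width : int) : 'v =
  let rec go acc bs ss =
    match bs, ss with
    | lo :: (hi :: _ as rest), s :: ss' ->
      (* Keep the segment if it is non-empty and overlaps [i; i + width). *)
      let acc =
        if lo < hi && i < hi && lo < i + width then join acc s else acc
      in
      go acc rest ss'
    | _ -> acc
  in
  go bot bounds segs
```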
=== Example
In @ex1, with our abstraction, we have the following state before the loop:
$
a={0} Init.inot{b_1=0} Init.inot{b_2 =len(a)} Init.inot{len(a)}
$
At the end of the first iteration, we have
$
a={0} Init.init{b_1= 1 = 2i+1} Init.inot{b_2 =len(a)} Init.inot{len(a)} ", with" i=0
$
The widening operator is applied, resulting in $i=[1;255]$. Then, when we execute `a[2*i] = 5`, we find that $2i=[2;510]$.
Thus, we have
$
a={0} Init.init{b_1 =1} Init.maybe{b_2 =len(a)} Init.inot{len(a)}
$
At the end of the loop, we can only conclude that index 0 is initialized, and that some indices between $1$ and $511$ may be initialized.
#block()[
*Note :*
Here we only discuss the initialization of the array. However, with the help of MOPSA and the separation of domains, we also obtain a range of
possible values for each segment.
The only subtlety is that in Jasmin, an array can be cast to an array of the same size in bytes but with a different type interpretation for its values
(e.g. casting an array of `u64` to an array of `u32`). In this case, for the moment, we return the $top$ value in the numerical domains to ensure soundness
and avoid dealing with the integer representation.
]
=== Concretization
We define the concretization function as follows: $gamma_#arr_domain (a[i]) = gamma(semExprA(a[i])) $.
With the $join$ in the setter and getter, we lose some precision, but we are sure that all possible values are represented.
In practice, this covers the majority of initialization cases.
= Contract and function call <contract>
Like in many programming languages, it is possible in Jasmin to make function calls.
However, a function call does not have any side effects; that is, a Jasmin program does not produce any side effects.
A function call takes a predefined number of arguments and returns a tuple of values that is immediately unpacked when
we assign the result of the function call. This is written as follows in Jasmin:
```jasmin v1, v2, v3 = f(a1,a2, a3,a4);```
In this example, the function takes 4 arguments and returns 3 values, but these numbers are not fixed.
We refer to a contract as a list of preconditions or postconditions that a function must satisfy.
Mopsa already has a contract language for C (see @contract-paper), which is similar to Frama-C's ACSL,
but these are used only if the function source is not available. However, in the Jasmin team, we want to be able to
check each function independently. Therefore, we slightly modified the behavior of contracts by adjusting our domain on our side.
The new behavior is as follows:
When we have a function call, we check if there is a contract associated with this function. If so, we verify that
the preconditions are satisfied and apply the postconditions to the variables on the left side of the assignment. If there is no contract,
we only check that the scalar arguments are well initialized, and we set the return values to $top$ (marking them as initialized in the case of scalar variables).
Jasmin has an experimental feature for contracts that allows them to be sent to EasyCrypt or CryptoLine.
We reuse the syntax tree to capture expressions of the following form, which we can translate to Mopsa stubs.
The supported contracts in Jasmin for this experimentation are:
- Conjunction of propositions
- `init_array(v : var, offset: int, len: int)` a proposition that affirms that the array v is well-initialized between the indexes offset and len
- Postconditions and preconditions
*Remark :*
The contract language on the user side does not have a proposition to indicate that a scalar variable is initialized, as this is a precondition and postcondition
that must be checked for every function.
= Memory
#emoji.warning *Warning:* This part is not yet implemented in the safety checker.
The memory model in Jasmin is quite simple; it can be viewed as a large array.
In the previous safety checker, it was possible to infer which regions of memory had to be initialized. However, this mechanism does not seem useful,
and constraints provided by the programmer are preferred, to ensure that the programmer knows what they are doing. For this reason, we prefer that
the programmer provides a contract through the Jasmin annotation system.
If we assume that each pointer provided by the programmer points to a distinct region (with no overlap between the regions defined by each pointer),
we can reuse the 3-segment implementation that we previously used for arrays @3-array-segs.
The planned contracts for the moment consist of three predicates:
- `init_memory(v: var, offset: int, len: int)` defines that the region $[v + "offset"; v + "offset" + "len"]$ is readable, indicating that it is an initialized region.
- `write_memory(v: var, offset: int, len: int)` defines that the region $[v + "offset"; v + "offset" + "len"]$ is writable.
- `assign_memory(v: var, offset: int, len: int)` defines that the region $[v + "offset"; v + "offset" + "len"]$ is initialized at the output of the function.
To check for out-of-bounds access in memory, i.e., access to locations where we cannot write, we can easily verify that we are within the bounds declared as readable.
In Jasmin, the memory is allocated before the call, and generally, there are not many arguments to function calls, so the number of segments will be finite and not
expensive. To check if we can write to a location, the same check as before can be performed.
The complicated part is checking if a region that is not initialized at the function call becomes well-initialized at some point during the execution of the call
or at the end, to verify the `assign_memory` predicate. If we assume the strong assumption that each pointer points to a different region of
memory without overlap, we can consider each pointer as an array of length $max_(("offset", len) in P_v) ("offset" + len)$ (with $P_v$ being the set of pairs
of offset and length that appear in a predicate related to the variable $v$). We can then mark as initialized the regions that we know are initialized if
they are on one side of the array that abstracts $v$.
This method, which unfortunately has not been tested in practice, seems to handle the majority of cases. The only real problem arises from the inability to
handle pointer aliasing.
= Performance and Implementation <perf>
#figure(
[
#set align(left)
#ansi-render(
read("example_output"),
theme: terminal-themes.tango-dark
)],
caption: "Example of output of the safety checker when there is an error or a warning"
)
To give an idea of the performance of the new safety checker, we tested it on the `ntt` function @ntt-function.
With the old safety checker, the analysis of the function took `30 seconds`; this is now possible in less than `3 seconds`.
However, with widening, in both cases we are only able to prove that the array is well initialized (because it is well initialized beforehand).
Due to the over-approximation of the scalars' upper bounds caused by widening in the loop, we are unable to prove that there is no out-of-bounds access.
If we unroll the loop, we can prove that the array is well-initialized, that there is no out-of-bounds access, and that the loop terminates
(as the analysis terminates with loop unrolling). This takes `45 seconds` with the new safety checker. With the older safety checker, after `3 hours`,
the analysis was not finished.
In another file with 6 different functions #footnote([#link("https://github.com/LeoJuguet/jasmin/blob/cryptoline-mopsa/compiler/jasa/tests/test_poly.jazz")]),
the analysis takes around `0.430 seconds` for the new checker and around `6.37 seconds` for the previous checker.
(See @details-performances-test for details about the tests.)
The final implementation for the arrays feature consists of around 3,000 lines of code. The previous safety checker,
which performed more checks, had 7,000 #footnote([calculated with cloc; the code can be found here #link("https://github.com/LeoJuguet/jasmin/blob/cryptoline-mopsa/compiler/jasa/")]) lines of code. However, around 1,000 lines of code in the new safety checker
are dedicated to extending the MOPSA AST and translating the Jasmin AST to MOPSA.
= Conclusion and future work
This new safety checker, written with MOPSA, checks that scalar variables are well-initialized,
that array accesses are in bounds, and successfully verifies that arrays are well-initialized when
they are initialized from one border to the middle. Moreover, the support for contracts allows for independent
function checks and property verification, such as the initialization of arrays. Furthermore, with the modularity of MOPSA,
the code is simple to extend and maintain. This implementation is also faster than the previous one, and the way MOPSA is built
provides ideas for adding more checks inside the safety checker.
However, implementing support for basic memory will be necessary to accommodate a larger proportion of Jasmin programs.
Without a doubt, it is possible to implement this with MOPSA without significant loss in speed.
It seems that with MOPSA, it is possible to handle more checks on values, such as verifying if an array is initialized with only 0s.
Detecting loop termination will also be interesting to check, but due to the over-approximation approach of MOPSA,
this may not be easy to implement. Currently, termination is only proven when loops are fully unrolled, and the analysis terminates.
Other tasks need to be completed to finish the prototype and move towards a more "production-ready" product.
In Jasmin, it is possible to call a CPU instruction and get the result and flags, which depend on the target CPU.
For now, the outputs are set to the top value and, for scalar values, marked as initialized, but an approach similar to function calls will be possible,
with specific cases handled. Similar functionality can also be implemented for system calls, which are not yet handled.
For a more advanced future, it is also planned to communicate to EasyCrypt which properties have already been proven by the safety checker,
to establish a workflow where developers run the safety checker and subsequently prove parts in EasyCrypt that have not yet been verified.
= Acknowledgements
Thank you to the MPI-SP and Gilles's group for hosting me during this internship.
Thank you to the Formosa group for their hospitality.
Thanks to <NAME> for his supervision and to <NAME>, <NAME> and <NAME>
for following the project and answering questions about Jasmin.
Thanks to <NAME> from the MOPSA team for taking the time to answer my questions about abstract interpretation
and MOPSA and for monitoring the project.
= Meta-Information
- 1.5 months were spent understanding the safety conditions of Jasmin and starting to grasp MOPSA, along with a first implementation to detect division by zero.
- 0.5 months were dedicated to checking for array initialization.
- 0.5 months were spent implementing the initialization of scalar values.
- 1 week was allocated for contracts and function calls.
- 1.5 months were used to understand the segmentation functor and attempt its implementation.
- 2 weeks to implement the final abstraction of arrays.
Some of the time was also spent fixing bugs and attending talks at the institute.
Additionally, I had the opportunity to participate in two seminars with the Formosa team,
where I met other team members in person and discussed safety and the future of Jasmin.
#bibliography("report.bib", full : true, style : "association-for-computing-machinery")
#pagebreak()
#set heading(outlined: false)
= Appendix
== Test unrolling loop <test-speed-unroll-loop>
```jasmin
fn f() -> reg u64
{
reg u64 a;
inline int i;
for i = 0 to 256 {
a = i;
}
return a;
}
```
For this pretty simple function, which is safe and has only
two simple scalar variables, the safety checker takes `0.255s` if we unroll the loop,
but only `0.007s` if we do not unroll it.
So even for a function as simple as this one, the difference is significant.
== Ntt function <ntt-function>
#figure(
[
#raw(read( "ntt.jazz"), block: true, lang: "jasmin")
], caption: [ `ntt` function used for the performance test in @perf.
The difficulty in proving this function comes from the nested loops and from the difficulty, for the
numerical domain, of determining the relation between `len`, `start` and `j` in the presence of the
shift on `len`; as a consequence, a possible out-of-bounds
access is reported when loops are not unrolled.
]
)
== Details of performances test <details-performances-test>
The test was executed on a machine with:
- CPU: `AMD Ryzen 7 7840HS w/ Radeon 780M Graphics (16) @ 5.137GHz`
- GPU: `Radeon 780M`
- Memory: `16GB DDR5 5600MHz`
- OS: `Linux`
To test the performance of the new safety checker, we ran the following command in the `compiler` folder of the project:
```bash
time ./jasa.exe -config jasa/share/config_default.jazz file_test.jazz
```
The additional argument `-loop-full-unrolling=true` was added to perform the test with full unrolling.
To test the old safety checker, we ran the following command:
```bash
time jasminc -checksafety file_test.jazz
```
Before running this test, we verified that all functions were marked for export.
To test unrolling with the old safety checker, we generated a config file with `-safetymakeconfigdoc` and
modified the `k_unroll` argument to `10000`, which simulates a large amount of loop unrolling.
= Mopsa implementation
#figure(
```ocaml
(* Section 3.3.2.2 *)
type ('a, 't) man = {
get : 'a -> 't;
set : 't -> 'a -> 'a;
lattice : 'a lattice;
exec : stmt -> 'a flow -> 'a post;
eval : expr -> 'a flow -> 'a eval;
ask : ('a,'r) query -> 'a flow -> 'r;
print_expr : 'a flow -> (printer -> expr -> unit);
get_effects : teffect -> teffect;
set_effects : teffect -> teffect -> teffect;
}
(* Section 3.3.2.4 *)
type 'a post = ('a, unit) cases
type 'a eval = ('a, expr) cases
module type DOMAIN =
sig
(* Section 3.3.2.1 *)
type t
val id : t id
val name : string
val bottom: t
val top: t
val is_bottom: t -> bool
val subset: t -> t -> bool
val join: t -> t -> t
val meet: t -> t -> t
val widen: 'a ctx -> t -> t -> t
(* Section 3.3.2.5 *)
val init : program -> ('a, t) man -> 'a flow -> 'a flow
val exec : stmt -> ('a, t) man -> 'a flow -> 'a post option
val eval : expr -> ('a, t) man -> 'a flow -> 'a eval option
(* Section 3.3.2.6 *)
val merge: t -> t * effect -> t * effect -> t
val ask : ('a,'r) query -> ('a, t) man -> 'a flow -> 'r option
val print_state : printer -> t -> unit
val print_expr : ('a,t) man -> 'a flow -> printer -> expr -> unit
end
```,
caption: [Domain signature of MOPSA; the section comments refer to sections of @mopsa-phd ]
)<domain-mopsa-ocaml>
|
|
https://github.com/AU-Master-Thesis/thesis | https://raw.githubusercontent.com/AU-Master-Thesis/thesis/main/figures/template.typ | typst | MIT License | #set page(width: auto, height: auto)
#import "@preview/cetz:0.2.2"
#import "@preview/funarray:0.4.0": windows
#import "../lib/vec2.typ"
#import "../lib/catppuccin.typ": catppuccin
#let theme = catppuccin.theme
// #import "common.typ": *
#let coordinate-grid(x, y) = {
import cetz.draw: *
let x-max = x
let y-max = y
let x-min = -x-max
let y-min = -y-max
grid((x-min, y-min), (x-max,y-max), stroke: gray.lighten(50%))
line((x-min, 0), (x-max, 0), stroke: red)
line((0, y-min), (0, y-max), stroke: green)
}
#cetz.canvas({
import cetz.draw: *
coordinate-grid(5, 5)
})
|
https://github.com/refparo/24xx-typst | https://raw.githubusercontent.com/refparo/24xx-typst/master/README.md | markdown | Creative Commons Attribution 4.0 International | # 24XX in Typst
This is [24XX SRD](https://jasontocci.itch.io/24xx) by <NAME> recreated using the [Typst](https://typst.app/) typesetting engine. You can use this as a template to create your own game if you are like me who feel more comfortable composing a document with code than using desktop publishing software.
The Simplified Chinese translation of 24XX SRD is also in this repo (`24xx-zh.typ`).
Fonts used:
- Barlow (including Condensed and Semi-Condensed versions)
- IBM Plex Sans (for `₡`)
- Noto Sans CJK SC (for `➡` and `▶`, and the Simplified Chinese translation)
These fonts are all available in the Typst web app. You can also download them from Google Fonts (*Noto Sans CJK SC* should be renamed to *Noto Sans SC* if you use this way).
## License
CC-BY-4.0, as is the original.
|
https://github.com/alex-touza/fractal-explorer | https://raw.githubusercontent.com/alex-touza/fractal-explorer/main/paper/planning/videos.typ | typst | - *Hausdorff Measure Through Example*, Young Measures https://www.youtube.com/watch?v=YbuzXemwwlY&t=0s
- *Fractals are typically not self-similar*, 3Blue1Brown https://www.youtube.com/watch?v=gB9n2gHsHN4&t=322s
- *What is the distance between two sets of points? | Hausdorff Distance*, CHALK https://www.youtube.com/watch?v=zGKfU91-hU0 |
|
https://github.com/liuxu89/Principles | https://raw.githubusercontent.com/liuxu89/Principles/main/src/chap-1/sec-2.typ | typst | #import "@preview/physica:0.9.2": *
#import "/book.typ": book-page
#show: book-page.with(title: "Hello, typst")
= The Polarization of photons
The discussion in the preceding section about the limit to the gentleness with which observations can be made and the consequent indeterminacy in the results of those observations dose not provide any quantitative basis for the building up of quantum mechanics.
//
For this purpose a new set of accurate laws of nature is required.
//
One of the most fundamental and most drastic of these is the #underline[Principle of Superposition of States.]
//
We shall lead up to a general formulation of this through a consideration of some special cases,
taking first the example provided by the polarization of light.
It is known experimentally that when plane-polarized light is used for ejecting photo-electrons,
there is a preferential direction for the electron emission.
//
Thus the polarization properties of light are closely connected with its corpuscular properties and one must ascribe a polarization to the photons.
//
One must consider, for instance, a beam of light plane-polarized in a certain direction as consisting of photons each of which is plane-polarized in that direction and a beam of circularly polarized light as consisting of photons each circularly polarized.
//
Every photon is in a certain #underline[state of polarization, ] as we shall say.
//
The problem we must now consider is how to fit in these ideas with the known facts about the resolution of light into polarized components and the recombination of these components.
Let us take a definite case.
//
Suppose we have a beam of light passing through a crystal of tourmaline,
which has the property of letting through only light plane-polarized perpendicular to its optic axis.
//
Classical electrodynamics tells us what will happen for any given polarization of the incident beam.
//
If this beam is polarized perpendicular to the optic axis,
it will all go through the crystal;
if parallel to the axis,
none of it will go through;
while if polarized at an angle $alpha$ to the axis,
a fraction $sin^2 alpha$ will go through.
//
How are we to understand these results on a photon basis?
A beam that is plane-polarized in a certain direction is to be pictured as made up of photons each plane-polarized in that direction.
//
This picture leads to no difficulty in the cases when our incident beam is polarized perpendicular or parallel to the optic axis.
//
We merely have to suppose that each photon polarized perpendicular to the axis passes unhindered and unchanged through the crystal,
while each photon polarized parallel to the axis is stopped and absorbed.
//
A difficulty arises,
however, in the case of the obliquely polarized incident beam.
//
Each of the incident photons is then obliquely polarized and it is not clear what will happen to such a photon when it reaches the tourmaline.
A question about what will happen to a particular photon under certain conditions is not really very precise.
//
To make it precise one must imagine some experiment performed having a bearing on the question and inquire what will be the result of the experiment.
//
Only questions about the results of experiments have a real significance and it is only such questions that theoretical physics has to consider.
In our present example the obvious experiment is to use an incident beam consisting of only a single photon and to observe what appears on the back side of the crystal.
//
According to quantum mechanics the result of this experiment will be that sometimes one will find a whole photon,
of energy equal to the energy of the incident photon,
on the back side and other times one will find nothing.
//
When one find a whole photon,
it will be polarized perpendicular to the optic axis.
//
One will never find only a part of a photon on the back side.
//
If one repeats the experiment a large number of times,
one will find the photon on the back side in a fraction $sin^2 alpha$ of the total number of times.
//
Thus we may say that the photon has a probability $sin^2 alpha$ of passing through the tourmaline and appearing on the back side polarized perpendicular to the axis and a probability $cos^2 alpha$ of being absorbed.
//
These values for the probabilities lead to the correct classical results for an incident beam containing a large number of photons.
In this way we preserve the individuality of the photon in all cases.
//
We are able to do this,
however, only because we abandon the determinacy of the classical theory.
//
The result of an experiment is not determined,
as it would be according to classical ideas,
by conditions under the control of the experimenter.
//
The most that can be predicted is a set of possible results,
with a probability of occurrence for each.
The foregoing discussion about the result of an experiment with a single obliquely polarized photon incident on a crystal of tourmaline answers all that can legitimately be asked about what happens to an obliquely polarized photon when it reaches the tourmaline.
//
Questions about what decides whether the photon is to go through or not and how it changes its direction of polarization when it does go through cannot be investigated by experiment and should be regarded as outside the domain of science.
//
Nevertheless some further description is necessary in order to correlate the results of this experiment with the results of other experiments that might be performed with photons and to fit them all into a general scheme.
//
Such further description should be regarded,
not as an attempt to answer questions outside the domain of science,
but as an aid to the formulation of rules for expressing concisely the results of large numbers of experiments.
The further description provided by quantum mechanics runs as follows.
//
It is supposed that a photon polarized obliquely to the optic axis may be regarded as being partly in the state of polarization parallel to the axis and partly in the state of polarization perpendicular to the axis.
//
The state of oblique polarization may be considered as the results of some kind of superposition process applied to the two states of parallel and perpendicular polarization.
//
This implies a certain special kind of relationship between the various states of polarization,
a relationship similar to that between polarized beams in classical optics,
but which is now to be applied,
not to beams,
but to states of polarization of one particular photon.
//
This relationship allows any state of polarization to be resolved into,
or expressed as a superposition of,
any two mutually perpendicular states of polarization.
When we make the photon meet a tourmaline crystal,
we are subjecting it to an observation.
//
We are observing whether it is polarized parallel or perpendicular to the optic axis.
//
The effect of making this observation is to force the photon entirely into the state of parallel or entirely into the state of perpendicular polarization.
//
It has to make a sudden jump from being partly in each of these two states to being entirely in one or other of them.
//
Which of the two states it will jump into cannot be predicted,
but is governed only by probability laws.
//
If it jumps into the parallel state it gets absorbed and if it jumps into the perpendicular state it passes through the crystal and appears on the other side preserving the state of polarization.
|
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/040%20-%20Zendikar%20Rising/003_Episode%202%3A%20Race%20to%20the%20Murasa%20Skyclave.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Episode 2: Race to the Murasa Skyclave",
set_name: "Zendikar Rising",
story_date: datetime(day: 09, month: 09, year: 2020),
author: "<NAME>",
doc
)
Nahiri was pleased, and also infuriated. Pleased because the ancient key was snug in her pocket, the solution to her problem close at hand. Infuriated because her most recent adventure with Nissa made it abundantly clear she could not visit the Murasa Skyclave alone and hope to survive. As much as she'd like to think otherwise, if Nissa hadn't been in the Akoum Skyclave with her, she wouldn't have been able to obtain the key.
Fortunately, standing in front of Sea Gate's towering entrance, she knew where to find the best team of adventurers in Zendikar.
It had been a long time since she last visited Sea Gate. It didn't look quite the same as she remembered. The war with the Eldrazi razed the original city to the ground, and though Sea Gate had been rebuilt, there were still scars on its buildings.
And on its people.
Guilt hounded Nahiri as she strode through the streets, and she kept her gaze fixed straight ahead. She did not linger at the magnificent lighthouse towering over the city's entrance or peruse its open-air markets where humans, kor, and merfolk lingered and haggled at stalls. She barely glanced at the new war memorial as she passed it by—a huge circle platform, with six massive stone hedrons equally spaced, surrounded by pieces of the original Sea Gate's wreckage. Unlike the citizens of this city, Nahiri didn't need a huge monument to remind her of what she'd lost.
As she drew closer to the Guilds, the streets became narrower, filling with the scents of fresh fish and grilled meats from taverns. Hawkers and hungry mercenaries approached her but quickly changed course when they caught the look in her eye. She did not have time to waste with the ordinary adventurers. The key in her pocket weighed heavy.
When she arrived at the Sea Gate Expeditionary House and pushed through its wrought-iron door, she was struck immediately by the noise, the heat, and the smell of stale ale and travelers. It was not a large space and was crammed with people from all races, seated around battered tables with tankards or in heated debates as potential clients haggled with adventurers. And in the middle of the chaos, like the eye of a storm, sat Kesenya, the head of the expeditionary house.
She was a tall, proud kor in silver armor and rich purple clothes. Her white hair was plaited into a complex pattern, and around her neck, there was a brilliant red necklace, which could only be the legendary Dragon's Frill. She was surrounded by patrons and admirers all vying for her attention. When she spotted Nahiri, however, she immediately stood, offering some excuse or another to the people around her, and made her way across the room.
"Benefactor," she said, quietly, "always an honor to see you."
"I'm pleased to see my investment is flourishing," replied Nahiri, in a low voice. "Let's speak privately."
"Of course." Kesenya led her to a back room that was small but well furnished, with cushions on the benches and maps of Skyclaves on the walls. A fresh round of ale was brought in and set on the table.
"I'll be honest," said Kesenya, taking a seat across from her, "I'm surprised to see you here. You're usually a little more~aloof."
"I am who I need to be," Nahiri replied, with a slight edge to her voice. She touched the key in her pocket. "And now I need a brave and capable team to retrieve something very precious and very powerful."
"You've come to the right place," the other kor said. "I'm assuming you had a team in mind?"
Nahiri smiled.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#figure(image("003_Episode 2: Race to the Murasa Skyclave/01.jpg", width: 100%), caption: [Nahiri, Heir of the Ancients | Art by: <NAME>], supplement: none, numbering: none)
Four adventurers sat before Nahiri in the private meeting room of the Expeditionary House. Akiri, a kor woman who was renowned throughout Zendikar for her line-slinging. A small human wizard with a large carved staff named Kaza, who was rumored to adore fire and had a mischievous twinkle in her eye. Orah, a kor cleric with a long white beard and a library's worth of knowledge within his head. And Zareth, a merfolk with shock-red hair and a braided beard. Of the four, he was the only one not sitting at the table. Instead, he leaned against the back wall, arms crossed, watching her with an air of mistrust, and Nahiri knew instantly she would have to be cautious around him.
"I'm Nahiri," she said. "I've been on adventures worthy of legends. I'm asking you to join me on one of those expeditions now."
The adventurers didn't reply. They all wore expressions of various levels of skepticism.
#emph[Good] , she thought. #emph[They don't accept things at face value.]
"What has Kesenya told you about me?" Nahiri asked, leaning back in her chair.
"Only the basics," Akiri replied, slowly. She, Nahiri could tell, was their leader.
"She said you could travel to other realms. That stone obeys your command. That you're powerful enough to face down an Eldrazi alone," Orah said, leaning forward. "Is it true?"
#emph[I wish the last one was] , Nahiri thought, bitterly.
"Yes," she said, after a pause.
Orah grinned, looking like a delighted child who discovered his favorite stories were real. Akiri looked behind her and exchanged a glance with Zareth.
"Then why do you need humble adventurers like us?" Zareth asked, straightening and coming to the table.
#emph[Because I'm probably walking into a trap] , she thought.
"There's an ancient object called the Lithoform Core," Nahiri said. She paused, swallowed her pride. "And I need help retrieving it."
"Where?" Akiri asked, crossing her arms.
"In Murasa. In a newly risen Skyclave there," replied Nahiri, noticing how the name made the adventurers lean in slightly with interest. "You've heard of it, yes?"
They exchanged looks again. "No one's been able to climb that," Kaza said, sounding nervous.
"No one's asked the best yet," said Nahiri, smiling inwardly as they sat up a bit taller.
"What's in it for us?" Zareth said. Akiri shot him a look, but he raised a hand and said, "No—if we're putting our lives at risk, we should know what for."
Nahiri's nostrils flared slightly, but she tamped down her impatience. "The object I seek will heal Zendikar of all its scars. It will make this world a safe and prosperous place again, just like it was before the Eldrazi arrived." Nahiri took a long, slow sip of ale, pausing for effect. "Imagine the riches and fame that will come to the people who save this world."
"The damage to this world," said Akiri, "is immense."
"I lost my entire family to the Eldrazi," Orah said, quietly.
"I lost friends," Kaza said.
"We all lost someone," Akiri said, looking back at Zareth again, "and I think all of us dream of a safer world. It sounds impossible." Akiri turned and stared right at Nahiri, and Nahiri saw a spark of hope in her eyes. "But if half of your accomplishments are true, maybe there is a chance." Akiri leaned back, and that brief glimmer of hope disappeared. "That is, if we believe you."
"I don't," Zareth said. "What's stopping us from getting the Core without you?"
Nahiri smiled, but it didn't reach her eyes. "I have the key," she said, pulling it out of her pocket. It was like pulling out a small star, and she set it on the table. The key pulsed brightly, and the four adventurers instinctively drew back.
"Wow," Kaza breathed.
Nahiri returned the key to her pocket, reminding herself to be patient.
"Before we decide, maybe you'll humor me with a game?" Zareth said.
Nahiri's eyes narrowed with suspicion. But if she was being completely honest, there was a small part of her that was also intrigued. "What sort of game?"
"Zareth," said Akiri with a warning in her voice.
"A card game," he replied, then turned to Akiri. "We do this with all our potential clients. Why should she be different?"
Akiri frowned, and Nahiri seriously doubted that they did this with any of their clients. But she was curious. "So," she said, "tell me the rules."
Akiri relinquished her seat to Zareth but put a hand on his shoulder as she stood behind him. He grinned up at her with affection, placing his hand on top of hers.
With his free hand he produced a deck of worn cards seemingly from thin air. "Adventuring parties like to call this little game #emph[Conquest] ." With practiced ease, he dealt fifteen cards in a circle on the table. Then, he tapped the tabletop in the center of the ring, and the cards began to hover and spin in midair.
"It works like this," said Zareth, "a card is chosen at random." At his words, a card from the spinning ring slid into the center and flipped over. It was a beautiful drawing of an intricate motif of gems and eyes. On the center was a single word: #emph[Cunning] . "And we have to tell a #emph[true ] story about how we accomplished the word on the card. If your story isn't impressive enough, another player has a chance to capture the card."
"Sounds simple enough," Nahiri said. Too simple.
"Oh, it is," replied Zareth, "but here's the catch. If I win, you tell us exactly what that Core will do to Zendikar."
Nahiri leaned back in her chair, steepling her fingers. "And if I win, you and your companions come with me to the Murasa Skyclave."
The four adventurers exchanged looks again, and Akiri gave Nahiri a single nod.
"I'll start." Zareth studied the card intensely, as if struggling to find a suitably cunning story. "One time, I met a book trader who was more thief than scholar. I pretended to have a rare and dangerous spell scroll, and while we bargained, I stole back the tomes he #emph[borrowed] from the Sea Gate library. He never noticed."
The #emph[Cunning] card flew into Zareth's outstretched hand. Nahiri raised an eyebrow, and he smirked. "I'm known as the Trickster."
#emph[Meaning, I can't trust you] , Nahiri thought, eyes narrowing.
"My turn," she said. Again, a card disengaged from the ring and floated to the center. On it was the word #emph[Foe] .
Nahiri smiled. This was an easy one. "There was someone who was like a father to me. But after centuries, he betrayed my trust. Not long ago, I fought him during a world-ending battle. And I won."
Zareth and the others stared at her.
"You're not really that old," said Kaza.
"And there's been no battles of that scale since the Eldrazi," Orah said, slowly.
Nahiri took a long swallow of ale, smirking. Calmly, she stretched out her hand, and the card snapped into her palm. "Not in this realm, no."
For a moment, Zareth's self-assuredness seemed to waver.
#emph[Good] , thought Nahiri.
"I want to play, too," said Kaza as she scooted her seat closer to the table. Her card read #emph[Victory] .
Kaza launched into a tale about how she once destroyed an entire Eldrazi brood with a handful of spells and one well-placed exploding vial. But Nahiri was only half listening. She suspected there was more to this simple card game, and she waited for the trap to spring.
It didn't.
Until she felt something. The fingers touching her pocket were light, the barest whisper. She wouldn't have noticed it at all if the floor wasn't stone and she couldn't feel the Trickster's movements through it. But when she looked up from her cards, both of his hands were on the table again.
"Your turn," Zareth said with a sly smile.
The upturned card read #emph[Power] .
Nahiri leaned back, studied her opponent for a long moment.
Then she snapped her fingers and turned all the cards to granite. Zareth and Kaza jumped in surprise and dropped the cards they were holding. They clattered noisily to the table. Nahiri reached out her hand and the entire deck flew into her open palm.
"I win," Nahiri said as she stared at Zareth. "Now, give it back." She held out other her hand.
Stunned, Zareth fished the key out of his tunic and handed it to her without a word of protest.
Beside him, Kaza crumpled with laughter. "Oh, she got you, Zareth."
"She did win," Akiri said, "though the word #emph[fair] can never be applied when playing with you." She wrapped an arm around his shoulders. To Nahiri, she said, "When do we head out?"
Nahiri stood. She won, but for some reason, the victory didn't taste sweet. She headed toward the door. "Tomorrow. First light."
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#figure(image("003_Episode 2: Race to the Murasa Skyclave/02.jpg", width: 100%), caption: [<NAME>, the Trickster | Art by: <NAME>], supplement: none, numbering: none)
Zareth cursed himself for returning the key to Nahiri. The others teased him for losing so spectacularly to the strange kor woman, but they let up when he didn't answer back with his usual sarcasm.
Instead they let him be as they finished their ales and went to prepare for the journey ahead. But Zareth didn't leave. No. He sat in the Sea Gate Expeditionary House and nursed another drink as the hours slipped by and the crowded room emptied out.
Who gave Nahiri the right to change his world anyway?
It was close to midnight when he was the only one left.
Well, him and Kesenya. Which suited him just fine. Zareth wondered if the house leader ever slept.
"Don't you set out in the morning?" she asked, coming up next to him.
"Yes," he replied, "but I want to enjoy this evening, just in case it's my last."
Kesenya studied him for a long moment. "Liar," she said.
"Fine," Zareth said. "This object we're seeking in the Murasa Skyclave—I'm worried about it."
The house head didn't say anything, just gestured for him to continue.
"She said she was going to change Zendikar with it," said Zareth, "return it to the way it was before the Eldrazi were imprisoned here."
Kesenya gave a small laugh. "You make that sound like a bad thing, Trickster."
"You've seen those ancient ruins," he replied sharply, the anger he'd been bottling up all day beginning to leak out. "Do you think there will be any place for people like us in a world of fortresses and armies?"
For the first time he could ever remember, the house head didn't look sure. "It's not that simple. Nahiri is~more important than she seems."
Zareth shook his head. "All I'm asking you to do is find a buyer for the Core who is rich and stupid. Someone who won't actually #emph[use ] it," he said. "I'll handle the rest."
Kesenya hesitated, conflicted. "Get me the Core, and I'll consider it," she said, finally.
Zareth smiled. It wasn't a yes, but it wasn't a no either. It was good enough for him.
For now.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#figure(image("003_Episode 2: Race to the Murasa Skyclave/03.jpg", width: 100%), caption: [Forest | Art by: <NAME>], supplement: none, numbering: none)
When they finally arrived at Sunder Bay on Murasa, Akiri was the first to dismount her griffin and put her feet on the ground. The formidable cliffs of the island rose above them, and a forest of giant harabaz trees surrounded them. But Akiri's focus was fixed on the Murasa Skyclave which loomed high above the harabaz trees' intricate tangle of branches. The ancient floating ruin was massive, covered in greenery and small trees where waterfalls streamed down. Its pieces shifted in the air currents, and even from the limited viewpoint of the ground, Akiri could tell it was going to be a dangerous climb.
She smiled. She loved a challenge.
"Whoa," Kaza said, staring up, "that looks tough. Good thing she hired us."
"This will be one for the legends," Akiri agreed.
"We should get moving," Nahiri said, hopping down from her griffin. "The Lithoform Core is near."
"How will we know where to find it once we get up there?" asked Zareth, arms crossed. Akiri shot him a warning look. Throughout the journey, he had been pestering Nahiri with questions about the Core, never quite hiding his disapproval.
Nahiri gave him a withering look. "I'll know." She turned and strode to where Kaza and Orah were going through their packs.
"That's not an answer," grumbled Zareth, but quietly so only Akiri could hear. "I don't trust her." He reached out for Akiri's hand and intertwined his fingers in hers.
Akiri sighed. She could see the tension in his posture, feel the worry rolling off him.
"I know," she said, "but I get the feeling she's extremely protective of Zendikar. I can't believe she'd harm it, though I don't know why exactly." There were many things in this world Akiri didn't understand, and Nahiri was one of them. She squeezed Zareth's hand once, firmly, before letting go and making her way to the others. A moment later, she heard his long strides behind her.
"How big is this Core exactly?" Orah was saying as he slung a coil of ropes over his shoulder.
Nahiri frowned. "I'm not sure."
"Well," said Kaza, cheerfully, "I could probably levitate it if needed. Or blow it up. I can #emph[definitely] do that."
"Noted," said Nahiri with a small smile.
"And how do we know this Core will even work?" said Zareth.
Nahiri turned toward him and went still, her whole expression and her posture becoming as hard as stone. For one terrible moment, Akiri thought she was going to attack Zareth. She instinctively coiled, about to move into action.
But Nahiri was quicker.
With one blur of a movement, Nahiri withdrew the shining key from her pocket, held it up toward Zareth, and spoke a word Akiri didn't understand. Akiri rushed forward but was halted by a flash so bright that she needed to shield her eyes.
"Zareth!" she shouted, panicked.
It took one, long, agonizing second for her vision to clear.
When it did, Akiri noticed two things.
First, Zareth was standing in the same place, unharmed and blinking, too. Akiri exhaled, relief flooding through her.
Second, there was a large, angry stomper hovering mid-leap behind Zareth, frozen in place. Its mouth was open, exposing its long fangs, and two of its six legs were inches from him, ready to pounce. It was clear that the ferocious beast was hunting to kill, and it was stopped at the last possible moment.
Akiri reached for her ropes, ready to lasso that beast and tie it down.
But before she could, the stomper began to melt away into sand. Within moments, there was no trace left of the creature except a handful of black grains.
"That," said Nahiri, tucking the key away, "is just a taste of the Core's power."
"Where was this Core when we were fighting the Eldrazi?" asked Akiri, her voice hushed with wonder. "We needed it then."
Nahiri went still again, but this time, her face was full of guilt and pain. "We should move," she said, stiffly. "We shouldn't stay on the ground."
"Start climbing the trees," Akiri said. She gave Zareth and the others a quick nod. "I'll catch up in a minute."
Akiri pretended to check her gear again as the others began to make their way up the harabaz tree. When they were out of sight and she could barely hear them, she let her shoulders slump. This #emph[would ] be an adventure for the legends.
"If any benevolent god is listening"—Akiri whispered to the cliffs and the trees. She rarely believed in more than being prepared and being quick, but today felt different—"please keep my party safe today."
It wasn't much of a prayer, but she didn't like to bother the gods. Akiri slung her lines over her shoulder and began to climb.
From the corner of her eye, she saw something move. She tensed, turned, and spotted a dark spot swelling under one of the nearby trees, right where Nahiri had used the key. It looked like a tentacle of black sand. It grew slowly, twisting its way around the trunk, withering leaves, branches, and bark, transforming them into something rigid and unmovable.
Like stone.
Akiri shivered.
There were many things in this world she didn't understand, and this was one of them.
Quickly, she began to climb.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
When Jace arrived in Sea Gate, he wondered if he was beginning in the right place. He knew from what Nissa told him on Ravnica that Nahiri was here, on Zendikar. And he deduced that Nissa returned to this plane as well. The question was, of course, where?
Sea Gate, he reasoned, was a good place to start.
He hadn't been here since the battle with the Eldrazi, when the city was practically razed to the ground. Then, the lighthouse tower at the entrance had been shattered and corruption had spilled all over the city's streets in Kozilek's wake.
Now, the lighthouse was rebuilt, tall and proud, and the streets were gleaming and clean. Jace walked through them, hoping that he'd run into Nissa, that he'd be able to make things right with her again. She was his friend, and though he wasn't always a good friend, he wanted to try to be a better one.
#emph[I wish Chandra was here] , he thought. He had tried to find her before coming here, but with no luck. And Jace suspected he didn't have much time before Nahiri acted on her plan.
So, lost in his own thoughts, Jace didn't react at first when someone called out to him.
"Hey, hero," shouted someone from behind him. "You were one of the defenders of this city during the war, right?"
Jace turned and saw a woman approaching him. She wore light armor of leather and metal, colored red and gold, with a leaf-green cloak. Her hair was black and pulled back in a braid, and her face was lined, but her bright green eyes shone. Or one eye rather. The right half of her face was a twist of scars that Jace recognized as wounds from corruption. Her right hand was curled, and she had a slight limp.
"Yes, that's me," Jace replied.
"Thought so," she said with a grin. "I remember that blue cloak. I was fighting not far from you."
"You were?" Jace searched his memories, but there had been so much chaos that day. So much ruin.
"Yeah. I was holding back a swarm of broodlings. Was doing great too~until the corruption got me." She shrugged.
"I'm sorry," he said, unsure of what to say. Suddenly, he wished he and the other planeswalkers had been quicker, more decisive during that fight.
The woman gave him a curious look. "Don't be. I managed to help a dozen people escape before they got me. And if I had to make that choice again, I wouldn't change it." She grinned and Jace had to admit it was a charming smile. "My name's Mara. I'm on my way to the memorial. Would you care to join me?"
"I'd be honored," Jace replied, and he meant it.
Together, they walked to the massive platform with its six upright hedrons. Together, they knelt at the base of one. He could hear Mara murmuring, asking the friends she lost in the fight for forgiveness. For not being able to save them. For outliving them.
Jace's chest constricted. He didn't know which of his friends he should beg forgiveness from.
He thought of Nahiri and how she was so desperate to turn back the clock for this plane. He thought of Nissa who blamed herself for trying to do what was right for the world she loved.
He thought of Gideon, who gave himself up willingly for this plane.
"I am guilty, too," he whispered, softly, so softly so that Mara next to him couldn't hear, "but I will make it right."
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Unfortunately, not everyone in Sea Gate was so forthcoming. Many people came up to him, but they were mostly vendors or solo adventurers looking for a patron. He couldn't make it ten steps without someone trying to get his attention. At first, he inquired after Tazri, a fierce general in the battle against the Eldrazi and his friend. But he learned she was away in Guul Draz hunting down some terrible beast. Then, he redirected the conversation to ask if they'd seen anyone matching Nahiri's or Nissa's description, but the adventurers shook their heads and the merchants launched into another sales pitch.
Eventually, Jace grew so frustrated that he cast an illusion to disguise himself as a merfolk with a long white beard, wearing muted browns and greens. He passed through the streets of Sea Gate then, mostly overlooked, though this time, he peered into the heads of some of the more scarred and serious-looking adventurers, hoping for glimpses of the other two planeswalkers.
He found nothing.
Perhaps that's why when he stepped inside of the Sea Gate Expeditionary House, he realized he was looking in the wrong place the entire time. The room was filled with adventurers in bright new gear sporting the house emblem—a jagged red outline of the Dragon's Frill. Everyone laughed loudly and boasted of the most recent successes.
"Can I help you?" asked a man, by the door.
"I'm looking for the head of the house," Jace replied. The man raised an eyebrow and looked Jace up and down.
"Right"—Jace dropped the disguise—"I'm <NAME>. Tell her we need to talk."
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
The head of the Sea Gate Expeditionary House sat across from Jace in one of the private rooms, and Jace could immediately sense her wariness.
#emph[I wonder why? ] He fought the urge to peek in her head.
The room was comfortable with soft cushions and tea set before him. There were maps on the wall and ink and parchment in the corner for contracts.
"I'm here to help," he told her. He realized he was going to have to earn her trust. Somehow.
Kesenya raised an eyebrow. "Help how?"
Jace relayed what Nissa had told him about the Core. He stressed that he wanted to find a reasonable solution, explaining that he, Nissa, and Nahiri had worked together in the past.
"Except, I don't know where Nissa or Nahiri are at the moment," Jace finished.
Kesenya's expression was unreadable. Knowing he shouldn't, but growing desperate, Jace glanced at her thoughts.
#emph[Zareth was right] , she thought.
But instead, she said, "I'm afraid I can't help you."
Jace pulled back slightly, surprised. "You aren't worried?"
"I #emph[am] worried," she replied. "She took my best adventuring party."
#emph[And they are looking for something that maybe shouldn't be found] , she thought.
"Zendikar is a beautiful place," Jace said, evenly. "I want the chance to reason with Nahiri before she changes it. But I need to know where to look."
He saw a moment of indecisiveness flicker across Kesenya's face, and Jace dared to hope.
Then her expression hardened.
"I'm sorry. I can't help you," she said, standing. "This house takes the privacy of its patrons seriously."
"I understand," Jace said, then quietly, mostly to himself added, "unfortunately, this world is quite large."
"Yes. If you need lodging, here's the address of a respectable inn," she said, grabbing a quill and a scrap of parchment from the table in the corner and writing it down quickly. "Best of luck." She held out the parchment.
"Thank you," Jace said, taking it, heart sinking. He wondered if he should use his power to force her to tell him what he wanted.
But no, that was crossing a line. Jace could almost hear Gideon chiding him for peering into Kesenya's thoughts. He could almost see Gideon's disappointed frown.
He left the expeditionary house, head racing, trying to figure out his next strategy. He was halfway down the street before he looked at Kesenya's note.
On it was the address for Scholar and Sea Inn. But scrawled on the bottom of the parchment, there was a single word: #emph[Murasa] .
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#figure(image("003_Episode 2: Race to the Murasa Skyclave/04.jpg", width: 100%), caption: [Swarm Shambler | Art by: <NAME>], supplement: none, numbering: none)
Jace had traveled to many planes and many places, but Murasa was different from any island he'd ever been to before. He wasn't sure he liked it.
For starters, the cliffs around him were dizzying, taller than the highest towers of Ravnica, and their sheer white stone face promised danger. Around him, massive harabaz trees soared up into the sky and their roots spanned like arches overhead. His boots sank slightly into the damp, coarse sand on the ground, and the smell of brine and kelp was almost overwhelming.
Jace shivered. The Sunder Bay reminded him too much of being trapped in the jungles of Ixalan. He wished he'd been able to bring Chandra or another member of the Gatewatch with him here, but no one had answered his call.
Fortunately, he could see the Skyclave above him, though it was far up, and the trek looked treacherous.
"Well, I do love a challenge," he said. If his time on Ixalan had taught him anything, it was how to build calluses on his hands.
He heard it before he saw it. Something large was tearing through the fauna behind him, its heavy footfalls making the ground quiver. Jace turned just in time to see a huge, ravenous monster emerge from the trees. It had six gnarled legs, a crablike torso, and its back was filled with large pale mushrooms.
"Oh, not now," hissed Jace and turned himself invisible.
The massive creature halted, turning this way and that. It brought its grotesque forearms together with a loud #emph[clap] , causing the clusters of mushrooms on its back to quiver and shake. Then, it turned its massive body toward him.
And charged.
Jace rolled out of the way. An instant later, the creature smashed into a tree behind him.
#emph[Damn it] , Jace thought. #emph[New plan. ] He dropped his invisibility and created an illusion of himself, placing the other Jace as far away from himself as possible. The monster paused, looking between the two planeswalkers. It brought its forearms together again in an earsplitting #emph[crack] . Jace covered his ears, wincing. When he looked up again, the creature was staring straight at him.
It wasn't fooled.
#emph[It's using echolocation] , he realized a moment too late.
The monster charged. Jace slid out of the way, avoiding it by inches.
"Why does everything on this plane want to kill me?" he murmured as he put two fingers to his temple and attempted to overthrow the beast's mind.
But whatever was controlling this monster, it wasn't its head.
And it was close to him now. Too close. Jace could smell the waves of rot rolling off it. Panic swelled up in Jace. Why didn't his mind control work?
#emph[Oh!] #emph[It's controlled by the mushrooms on its back! ] But the realization came too late again. The creature lifted its massive twisted forearms over him.
Jace threw up a barrier and braced for impact.
The impact never came.
As suddenly as the monster arrived, so did something else.
At first, Jace couldn't make sense of what it was. Fighting the monster, there was a second creature that could have blended in with the trees around it; its torso was thick and gray, but its limbs were identical to the massive tree roots above him.
The second creature struck the monster once, twice with a sickening #emph[thwack] , dislodging a few of the bulbous mushrooms on its back. The monster shrieked and recoiled.
#emph[What are you?] Jace thought.
His savior advanced, hitting the monster again and again and again. Jace realized it was like the embodiment of the harabaz trees around him. Giant, overarching, and indomitable. The answer hit Jace like a punch.
#emph[It's an elemental.] Jace spun around, searching for the other planeswalker.
Sure enough, Nissa was perched in one of the great trees, hand outstretched, looking every bit the guardian of this plane.
The expression on her face was absolutely murderous.
Within seconds, the harabaz elemental destroyed the monster, its mammoth body crumbled to the ground, broken and inert.
"Are you all right?" Nissa asked, jumping down from her perch as lightly as if it were a mere step down, not a twenty-foot drop.
"Yes," Jace replied. "Thank you."
"Of course." She smiled, but it didn't reach her eyes. Her gaze slid to the harabaz elemental that was prowling in front of the monster's corpse as if daring it to try again. "I've never summoned a harabaz elemental before. I think Gideon would have liked it."
"It #emph[is] impressive," Jace admitted.
"#emph[It] is Zendikar," said Nissa, stiffly. "Of course, it is."
Internally, Jace kicked himself. "I didn't mean to imply—"
"I know," she said, softly. "The elementals are just~really important to me. They were there for me before anyone else was. I can't let Nahiri harm them."
Jace put a hand on her shoulder. "I don't pretend to completely understand," he said, "but these elementals mean a lot to you, so I'll help you protect them."
Nissa broke into a smile, the first one he'd seen from her in a long time. It made his heart lift.
"Thank you," she said. "Nahiri went up there." Nissa pointed up to the imposing Skyclave.
"How do you know?"
"Zendikar told me."
Jace's brows knitted in confusion. He would never understand this plane. "What's the best way to get there?" he asked
"My vines," replied Nissa, but then looked embarrassed. "They're not as quick as Nahiri's stone craft. It won't be easy. Are you ready for this, Jace?"
She bit her lip and twisted her hands in front of her. Jace realized she was expecting him to refuse.
Jace's stomach twisted with guilt. It was true, the old Jace would have said no. The Jace that hadn't survived Ixalan with Vraska.
And for the sake of Nissa's friendship, for the sake of the Gatewatch and the battles to come, he had to do this.
"Yes," he said. "I'm ready."
|
|
https://github.com/bradmartin333/TypstNotes | https://raw.githubusercontent.com/bradmartin333/TypstNotes/main/sections/test1.typ | typst | = Test1
A triumphant success! |
|
https://github.com/shiki-01/typst | https://raw.githubusercontent.com/shiki-01/typst/main/sample/sample1/main.typ | typst | #import "../../lib/conf.typ": conf
#import "../../lib/component/comment.typ": come
#import "@preview/codelst:2.0.1": sourcecode
#show: doc => conf(
title: [情報システム],
date: [2024年4月11日],
doc,
)
#let dummy = [
私は今度とにかくその相違児というのの以上で連れなでし。てんで近頃を参考らはいったいこの衰弱たなまでが云いでいるですをは攻撃かかるないんて、少々には払っででなけれた。社会でいうないことももし今がまあたたた。
]
= タイトル1
#come( "こめんとのたいとるだよー", "info" )[
#dummy
]
#come("","comment")[
#dummy
]
#come("","important")[いんぽーたんとだよん]
#come("","sucsess")[*せいこうだよ!*]
= タイトル
ああああああ
== タイトル
ああああああ
こんな風に色々かくのはたのしいぞい!
でもまあ\
これが\
こんな風にかけるのはいいことだよね\
#dummy
#pagebreak()
= taitoru
#sourcecode()[```js
let str = "hello world!";
function sample(lang:test) {
let n = 1
if (str.lenght = n) {
n = 3;
}
}
```] |
|
https://github.com/Alkon-2024-contest-editorial/hello-alkon | https://raw.githubusercontent.com/Alkon-2024-contest-editorial/hello-alkon/main/problems.typ | typst | #let raw_problems = (
// Div.2 Number, Div.1 Number, Title, Difficulty, 예상 티어 "bsgpdr" 중 하나
("A","", "수학은 체육과목 입니다", "B3", "b"),
("B","", "나머지", "B2", "b"),
("C","", "단어 공부", "B1", "b"),
("D","", "좋은 구간", "S4", "s"),
("E","A", "패션왕 신혜빈", "S3", "s"),
("","B", "숨바꼭질", "S1", "s"),
("","C", "하노이 탑 K", "G4", "g"),
("","D", "공장 컨설턴트 호석", "G3", "g"),
("","E", "색깔 통일하기", "G2", "g"),
)
#let contest_problems = raw_problems.map( problem => {
(
d2: problem.at(0),
d1: problem.at(1),
title: problem.at(2),
difftext: problem.at(3),
diff: problem.at(4)
)
})
|
|
https://github.com/RY997/Thesis | https://raw.githubusercontent.com/RY997/Thesis/main/proposal.typ | typst | MIT License | #import "proposal_template.typ": *
#import "common/titlepage.typ": *
#import "common/metadata.typ": *
#titlepage(
title: titleEnglish,
titleGerman: titleGerman,
degree: degree,
program: program,
supervisor: supervisor,
advisors: advisors,
author: author,
startDate: startDate,
submissionDate: submissionDate
)
#show: project.with(
title: titleEnglish,
titleGerman: titleGerman,
degree: degree,
program: program,
supervisor: supervisor,
advisors: advisors,
author: author,
startDate: startDate,
submissionDate: submissionDate
)
// TODO: Remove this block
#rect(
width: 100%,
radius: 10%,
stroke: 0.5pt,
fill: red,
)[
Before you start with your thesis, have a look at our guides on confluence! \ https://confluence.ase.in.tum.de/display/EduResStud/How+to+thesis
]
#set heading(numbering: none)
= Abstract
// TODO: Remove this block
#rect(
width: 100%,
radius: 10%,
stroke: 0.5pt,
fill: yellow,
)[
*Abstract*
- Provide a brief summary of the proposed work
- What is the main content, the main contribution?
- What is your methodology? How do you proceed?
]
#set heading(numbering: "1.1")
= Introduction
// TODO: Remove this block
#rect(
width: 100%,
radius: 10%,
stroke: 0.5pt,
fill: yellow,
)[
*Introduction*
- Introduce the reader to the general setting
- What is the environment?
- What are the tools in use?
]
= Problem
// TODO: Remove this block
#rect(
width: 100%,
radius: 10%,
stroke: 0.5pt,
fill: yellow,
)[
*Problem description*
- What is/are the problem(s)?
- Identify the actors and use these to describe how the problem negatively influences them.
- Do not present solutions or alternatives yet!
- Present the negative consequences in detail
]
= Motivation
// TODO: Remove this block
#rect(
width: 100%,
radius: 10%,
stroke: 0.5pt,
fill: yellow,
)[
*Thesis Motivation*
- Outline why it is important to solve the problem
- Again use the actors to present your solution, but don’t be to specific
- Be visionary!
- If applicable, motivate with existing research, previous work
]
#pagebreak(weak: true)
= Objective
// TODO: Remove this block
#rect(
width: 100%,
radius: 10%,
stroke: 0.5pt,
fill: yellow,
)[
*Thesis Objective*
- What are the main goals of your thesis?
]
= Schedule
// TODO: Remove this block
#rect(
width: 100%,
radius: 10%,
stroke: 0.5pt,
fill: yellow,
)[
*Thesis Schedule*
- When will the thesis Start (Always 15th of Month)
- Create a rough plan for your thesis (separate the time in sprints with a length of 2-4 Weeks)
- Each sprint should contain several work items - Again keep it high-level and make to keep your plan realistic
- Make sure the work-items are measurable and deliverable
- No writing related tasks! (e.g. ”Write Analysis Chapter”)
]
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/051_March%20of%20the%20Machine.typ | typst | #import "@local/mtgset:0.1.0": conf
#show: doc => conf("March of the Machine", doc)
#include "./051 - March of the Machine/001_Episode 1: Triumph of the Fleshless.typ"
#include "./051 - March of the Machine/002_Episode 2: Holding Your Breath.typ"
#include "./051 - March of the Machine/003_Episode 3: Mother, Son, and Story.typ"
#include "./051 - March of the Machine/004_Episode 4: Beneath Eyes Unblinking.typ"
#include "./051 - March of the Machine/005_Arcavios: A Radiant Heart.typ"
#include "./051 - March of the Machine/006_Ikoria: Survival of the Fittest.typ"
#include "./051 - March of the Machine/007_Episode 5: Cathartic Reunion.typ"
#include "./051 - March of the Machine/008_Ixalan: Three Hundred Steps under the Sun.typ"
#include "./051 - March of the Machine/009_Innistrad: Family Game Night.typ"
#include "./051 - March of the Machine/010_Eldraine: The Adventures of Rankle, Master of Love.typ"
#include "./051 - March of the Machine/011_Episode 6: The Last to Leave.typ"
#include "./051 - March of the Machine/012_Episode 7: Divine Intervention.typ"
#include "./051 - March of the Machine/013_Episode 8: Wrenn and Eight.typ"
#include "./051 - March of the Machine/014_Ravnica: One and the Same.typ"
#include "./051 - March of the Machine/015_New Capenna: The Fall of Park Heights.typ"
#include "./051 - March of the Machine/016_Zendikar: Battles in the Field and in the Mind.typ"
#include "./051 - March of the Machine/017_Episode 9: The Old Sins of New Phyrexia.typ"
#include "./051 - March of the Machine/018_Episode 10: The Rhythms of Life.typ"
|
|
https://github.com/Ngan-Ngoc-Dang-Nguyen/thesis | https://raw.githubusercontent.com/Ngan-Ngoc-Dang-Nguyen/thesis/main/typst-orange.typ | typst | #import("my-outline.typ"): *
#import("my-index.typ"): *
#import("theorems.typ"): *
#let mathcal = (it) => {
set text(size: 1.3em, font: "OPTIOriginal", fallback: false)
it
h(0.1em)
}
#let normalText = 1em
#let largeText = 3em
#let hugeText = 16em
#let title_main_1 = 2.5em
#let title_main_2 = 1.8em
#let title_main_3 = 2.2em
#let title1 = 2.2em
#let title2 = 1.5em
#let title3 = 1.3em
#let title4 = 1.2em
#let title5 = 1.1em
#let outlinePart = 1.5em;
#let outlineHeading1 = 1.3em;
#let outlineHeading2 = 1.1em;
#let outlineHeading3 = 1.1em;
#let nocite(citation) = {
place(hide[#cite(citation)])
}
#let language_state = state("language_state", none)
#let main_color_state = state("main_color_state", none)
#let appendix_state = state("appendix_state", none)
#let heading_image = state("heading_image", none)
#let supplement_part_state = state("supplement_part", none)
#let part_style_state = state("part_style", 0)
#let part_state = state("part_state", none)
#let part_location = state("part_location", none)
#let part_counter = counter("part_counter")
#let part_change = state("part_change", false)
// pagebreak(to: "odd") is not working correctly
#let pagebreak_until_odd() = {
pagebreak()
counter(page).display(i => if calc.even(i) {
pagebreak()
})
}
#let part(title) = {
pagebreak(to: "odd")
part_change.update(x =>
true
)
part_state.update(x =>
title
)
part_counter.step()
[
#locate(loc => [
#part_location.update(x =>
loc
)
])
#locate(loc => [
#let mainColor = main_color_state.at(loc)
#let part_style = part_style_state.at(loc)
#let supplement_part = supplement_part_state.at(loc)
#if part_style == 0 [
#set par(justify: false)
#place(block(width:100%, height:100%, outset: (x: 3cm, bottom: 2.5cm, top: 3cm), fill: mainColor.lighten(70%)))
#place(top+right, text(fill: black, size: largeText, weight: "bold", box(width: 60%, part_state.display())))
#place(top+left, text(fill: mainColor, size: hugeText, weight: "bold", part_counter.display("I")))
] else if part_style == 1 [
#set par(justify: false)
#place(block(width:100%, height:100%, outset: (x: 3cm, bottom: 2.5cm, top: 3cm), fill: mainColor.lighten(70%)))
#place(top+left)[
#block(text(fill: black, size: 2.5em, weight: "bold", supplement_part + " " + part_counter.display("I")))
#v(1cm, weak: true)
#move(dx: -4pt, block(text(fill: mainColor, size: 6em, weight: "bold", part_state.display())))
]
]
#align(bottom+right, my-outline-small(title, appendix_state, part_state, part_location,part_change,part_counter, mainColor, textSize1: outlinePart, textSize2: outlineHeading1, textSize3: outlineHeading2, textSize4: outlineHeading3))
])
]
}
#let chapter(title, image:none, l: none) = {
pagebreak(to: "odd")
heading_image.update(x =>
image
)
if l != none [
#heading(level: 1, title) #label(l)
] else [
#heading(level: 1, title)
]
part_change.update(x =>
false
)
}
#let make-index(title: none) = {
make-index-int(title:title, main_color_state: main_color_state)
}
#let appendices(title, doc) = {
counter(heading).update(0)
appendix_state.update(x =>
title
)
set heading ( numbering: (..nums) => {
let vals = nums.pos()
if vals.len() == 1 {
return str(numbering("A.1", ..vals)) + "."
}
else {
locate(loc => [
#let mainColor = main_color_state.at(loc)
#let color = mainColor
#if vals.len() == 4 {
color = black
}
#return place(dx:-4.5cm, box(width: 4cm, align(right, text(fill: color)[#numbering("A.1", ..vals)])))
])
}
},
)
doc
}
#let my-bibliography(file, image:none) = {
pagebreak_until_odd()
counter(heading).update(0)
heading_image.update(x =>
image
)
file
}
#let theorem(name: none, body) = {
locate(loc => {
let language = language_state.at(loc)
let mainColor = main_color_state.at(loc)
thmbox("theorem", if language=="en" {"Định lý"} else {"Teorema"},
stroke: 0.5pt + mainColor,
radius: 0em,
inset: 0.65em,
padding: (top: 0em, bottom: 0em),
namefmt: x => [(*#x*)],
separator: h(0.2em),
titlefmt: x => text(weight: "bold", fill: mainColor, x),
fill: black.lighten(95%),
base_level: 1)(name:name, body)
})
}
#let corollary = thmplain(
"corollary",
"Hệ quả",
base: "theorem",
titlefmt: strong
)
#let proposition = thmplain(
"proposition",
"Mệnh đề",
base: "theorem",
titlefmt: strong
)
#let lemma = thmplain(
"lemma",
"Bổ đề",
base: "theorem",
titlefmt: strong
)
#let definition = thmbox("definition", "Definition", inset: (x: 1.2em, top: 1em))
#let example = thmplain("example", "Ví dụ").with(numbering: none)
#let proof = thmplain(
"proof",
"Chứng minh",
base: "theorem",
bodyfmt: body => [#body #h(1fr) $square$]
).with(numbering: none)
#let project(title: "", subtitle: "", date: "", author: (), logo: none, cover: none, imageIndex:none, body, mainColor: blue,copyright: [], lang: "en", listOfFigureTitle: none, listOfTableTitle: none, supplementChapter: "Chapter", supplementPart: "Part", fontSize: 10pt, part_style: 0) = {
set document(author: author, title: title)
set text(size: fontSize, lang: lang)
set par(leading: 0.5em)
set enum(numbering: "1.a.i.")
set list(marker: ([•], [--], [◦]))
// show math.equation.where(block: true): e => {
// counter(math.equation).step()
// locate(loc => {
// pad(left: 1cm, {
// box(baseline: 50%, e)
// h(1fr)
// box(baseline: 50%, "(" + str(counter(heading).at(loc).at(0)) + "." + str(counter(math.equation).at(loc).first()) + ")")
// })
// })
// }
set figure(gap: 1.3em,
numbering: it => {
locate(loc => {
let chapter = counter(heading.where(level: 1)).at(loc).first()
box[#chapter.#it]
})
})
show figure: it => align(center)[
#it
#v(2.6em, weak: true)
]
show terms: set par(first-line-indent: 0em)
set page(
paper: "a4",
margin: (x: 3cm, bottom: 2.5cm, top: 3cm),
header: locate(loc => {
set text(size: title5)
let page_number = counter(page).at(loc).first()
let odd_page = calc.odd(page_number)
// Are we on an odd page?
// if odd_page {
// return text(0.95em, smallcaps(title))
// }
// Are we on a page that starts a chapter? (We also check
// the previous page because some headings contain pagebreaks.)
let all = query(heading.where(level: 1), loc)
if all.any(it => it.location().page() == page_number) {
return
}
let appendix = appendix_state.at(loc)
if odd_page {
let before = query(selector(heading.where(level: 2)).before(loc), loc)
let counterInt = counter(heading).at(loc)
if before != () and counterInt.len()> 2 {
box(width: 100%, inset: (bottom: 5pt), stroke: (bottom: 0.5pt))[
#text(if appendix != none {numbering("A.1", ..counterInt.slice(1,3)) + " " + before.last().body} else {numbering("1.1", ..counterInt.slice(1,3)) + " " + before.last().body})
#h(1fr)
#page_number
]
}
} else{
let before = query(selector(heading.where(level: 1)).before(loc), loc)
let counterInt = counter(heading).at(loc).first()
if before != () and counterInt > 0 {
box(width: 100%, inset: (bottom: 5pt), stroke: (bottom: 0.5pt))[
#page_number
#h(1fr)
#text(weight: "bold", if appendix != none {numbering("A.1", counterInt) + ". " + before.last().body} else{before.last().supplement + " " + str(counterInt) + ". " + before.last().body})
]
}
}
})
)
show cite: it => {
show regex("\[(\d+)"): set text(mainColor)
it
}
set heading(
numbering: (..nums) => {
let vals = nums.pos()
if vals.len() == 1 {
return str(vals.first()) + "."
}
else if vals.len() <=4 {
let color = mainColor
if vals.len() == 4 {
color = black
}
return place(dx:-4.5cm, box(width: 4cm, align(right, text(fill: color)[#nums.pos().map(str).join(".")])))
}
},
supplement: supplementChapter
);
show heading: it => {
set text(size: fontSize)
if it.level == 1 {
//set par(justify: false)
counter(figure.where(kind: image)).update(0)
counter(figure.where(kind: table)).update(0)
counter(math.equation).update(0)
locate(loc => {
let img = heading_image.at(loc)
if img != none {
set image(width: 21cm, height: 9.4cm)
place(move(dx: -3cm, dy: -3cm, img))
place( move(dx: -3cm, dy: -3cm, block(width: 21cm, height: 9.4cm, align(right + bottom, pad(bottom: 1.2cm, block(
width: 86%,
stroke: (
right: none,
rest: 2pt + mainColor,
),
inset: (left:2em, rest: 1.6em),
fill: rgb("#FFFFFFAA"),
radius: (
right: 0pt,
left: 10pt,
),
align(left, text(size: title1, it))
))))))
v(8.4cm)
}
else{
move(dx: 3cm, dy: -0.5cm, align(right + top, block(
width: 100% + 3cm,
stroke: (
right: none,
rest: 2pt + mainColor,
),
inset: (left:2em, rest: 1.6em),
fill: white,
radius: (
right: 0pt,
left: 10pt,
),
align(left, text(size: title1, it))
)))
v(1.5cm, weak: true)
}
})
}
else if it.level == 2 or it.level == 3 or it.level == 4 {
let size
let space
let color = mainColor
if it.level == 2 {
size= title2
space = 1em
}
else if it.level == 3 {
size= title3
space = 0.9em
}
else {
size= title4
space = 0.7em
color = black
}
set text(size: size)
it
v(space, weak: true)
}
else {
it
}
}
set underline(offset: 3pt)
// Title page.
page(margin: 0cm, header: none)[
#set text(fill: black)
#language_state.update(x => lang)
#main_color_state.update(x => mainColor)
#part_style_state.update(x => part_style)
#supplement_part_state.update(x => supplementPart)
//#place(top, image("images/background2.jpg", width: 100%, height: 50%))
#if cover != none {
set image(width: 100%, height: 100%)
place(bottom, cover)
}
#if logo != none {
set image(width: 3cm)
place(top + center, pad(top:1cm, logo))
}
#align(center + horizon, block(width: 100%, fill: mainColor.lighten(70%), height: 7.5cm, pad(x:2cm, y:1cm)[
#par(leading: 0.4em)[
#text(size: title_main_1, weight: "black", title)
]
#v(1cm, weak: true)
#text(size: title_main_2, subtitle)
#v(1cm, weak: true)
#text(size: title_main_3, weight: "bold", author)
]))
]
if (copyright!=none){
set text(size: 10pt)
show link: it => [
#set text(fill: mainColor)
#it
]
show par: set block(spacing: 2em)
pagebreak()
align(bottom, copyright)
}
heading_image.update(x =>
imageIndex
)
my-outline(appendix_state, part_state, part_location,part_change,part_counter, mainColor, textSize1: outlinePart, textSize2: outlineHeading1, textSize3: outlineHeading2, textSize4: outlineHeading3)
my-outline-sec(listOfFigureTitle, figure.where(kind: image), outlineHeading3)
my-outline-sec(listOfTableTitle, figure.where(kind: table), outlineHeading3)
// Main body.
show par: set block(spacing: 0.5em)
set par(
first-line-indent: 1em,
justify: true,
)
show link: set text(fill: mainColor)
body
}
|
|
https://github.com/SkiFire13/master-thesis | https://raw.githubusercontent.com/SkiFire13/master-thesis/master/chapters/3-algorithm.typ | typst | #import "../config/common.typ": *
#import "@preview/cetz:0.2.2": canvas, draw
= Symbolic local algorithm <section-algorithm>
== Adapting the algorithm
Our goal will be to adapt and improve the local strategy iteration algorithm to solve systems of fixpoint equations expressed as parity games using the symbolic formulation.
=== Handling finite plays
The parity game formulation of a system of fixpoint equations admits positions where a player has no available moves, namely it is not a total parity game. However the strategy improvement algorithm requires a total parity game, so we need to convert a generic parity game into a "compatible" total parity game that can be handled by it, for some definition of "compatible.
The way we do this transformation is by extending the parity game, inserting auxiliary vertices that will be used as successors for those vertices that do not have one. We call this the _extended total parity game_, for short _extended game_, since it extends the original parity game to make it total. In particular we will add two vertices $w0$ and $w1$ representing vertices that are both controlled by and winning for respectively player 0 and 1. The vertices $w0$ and $w1$ will in turn also need successors, and these will be respectively $l1$ and $l0$, representing vertices that are controlled by and losing for respectively player 1 and 0. Likewise, the vertices $l0$ and $l1$ will need at least one successor, at that will be respectively $w1$ and $w0$. The vertices $w0$ and $l1$ will thus form a forced cycle, as well as $w1$ and $l0$. This, along with priorities chosen as favorable for the player that should win these cycles, will guarantee that the winner will actually be the expected one. Then, vertices that have no successors in the general game, meaning they are losing for the player controlling them, in the game will have as successor $w0$ or $w1$, that is controlled by and winning for the opposing player.
#definition("extended total parity game")[
Let $G = (V_0, V_1, E, p)$ be a parity game. The extended total parity game of $G$ is the parity game $G' = (V'_0, V'_1, E', p')$ where:
#baseline-list[
- $V'_0 = V_0 union { w0, l0 }$
- $V'_1 = V_1 union { w1, l1 }$
- #box(baseline: 2em, $
E' = E &union { (v, w_i) | i in {0,1} and v in V_(1-i) and v in S_G } \ &union { (w0, l1), (l1, w0), (w1, l0), (l0, w1) }
$)
- #box(baseline: 2em, math.equation(block: true, $p'(v) = cases(
p(v) & "if" v in V \
0 & "if" v in { w_0, l_1 } \
1 & "if" v in { w_1, l_0 }
)$))
]
]
We now want to prove that this new parity game is "compatible" with the original one, for a suitable definition of "compatible". In particular for our purposes we are interested that in the new game the winner for vertices which were already in the old game remains unchanged.
#definition("compatible parity games")[
Let $G = (V_0, V_1, E, p)$ and $G' = (V'_0, V'_1, E', p')$ be two parity games with $V_i subset.eq V'_i$. Let $W_0$ and $W_1$ be the winning sets for $G$ and $W'_0$ and $W'_1$ the winning sets for $G'$. We say that $G'$ is compatible with $G$ if $W_0 subset.eq W'_0$ and $W_1 subset.eq W'_1$.
]
#definition("extended strategies")[
Let $G = (V_0, V_1, E, p)$ be a parity game and $G' = (V'_0, V'_1, E', p')$ be the extended game from $G$. Let $sigma$ a strategy on $G$ for player $i$. We say that $sigma$ induces the following extended strategy $sigma'$ on $G'$:
$
sigma'(v) = cases(
sigma (v) & "if" v in V_i and v in.not S_G \
W_(1-i) & "if" v in V_i and v in S_G \
W_(1-i) & "if" v = L_i \
L_(1-i) & "if" v = W_i
)
$
]
It can be observed that strategies on a parity game and their extended counterparts create a bijection. In fact notice that the condition $v in V_i and v in.not S_G$ in the first case of $sigma'$ is equivalent to requiring $v in dom(sigma)$, meaning that restricting $sigma'$ to $dom(sigma)$ will result in $sigma$ itself.
The bijection is not only limited to this. It can be showed that strategies that are related by this bijection will also induce plays with the same winner.
#theorem("plays on extended strategies")[
Let $G = (V_0, V_1, E, p)$ be a parity game and $G' = (V'_0, V'_1, E', p')$ be the extended game from $G$. Let $sigma_0$ and $sigma_1$ be two strategies on $G$ and $sigma'_0$ and $sigma'_1$ be the unique corresponding strategies on $G'$. Let $v in V_0 union V_1$ and consider the plays starting from $v_0$ on the instances $I = (G, sigma_0, sigma_1)$ and $I' = (G', sigma'_0, sigma'_1)$. The two plays have the same winner.
]
#proof[
We will prove that for all $i$ the play induced by $I$ is won by player $i$ if and only if the induced play by $I'$ is also won by player $i$:
- $=> \)$: We distinguish two cases on the play induced by $I$:
- the play is infinite: $v_0 v_1 v_2 ... $, then every vertex is in $dom(sigma_i)$ for some $i$ and thus $sigma'_i$ are defined to be equal to $sigma_i$ and will induce the same play, which is won by player $i$;
- the play is finite: $v_0 v_1 ... v_n$, with $v_n in V_(1-i)$ because the play is won by player $i$. For the same reason as the previous point the two induced plays are the same until $v_n$, which is not in $dom(sigma_(1-i))$ but is in $dom(sigma'_(1-i))$. The play induced by $I'$ is $v_0 v_1 ... v_n w_i l_(1-i) w_i ...$ which is also won by player $i$ because only the vertices $w_i$ and $l_(1-i)$ repeat infinitely often, and they have both priority favorable to player $i$.
- $arrow.l.double \)$: We distinguish the following cases on the play induced by $I'$:
- the play never reaches the $w_0, w_1, l_0$ or $l_1$ vertices: $v_0 v_1 v_2 ...$, then only the first case of $sigma'_i$ is ever used and thus every vertex is in $dom(sigma_i)$. Thus $I$ induces the same play, which is won by player $i$;
- the play reaches $w_i$: $v_0 v_1 ... v_n w_i l_(1-i) w_i ...$, then $v_n$ is not in $dom(sigma_(1-i))$ and $I$ induces the finite play $v_0 v_1 ... v_n$ which is won by player $i$ because $v_n in V_(1-i)$ due to its successor being controlled by player $i$;
- the play reaches $w_(1-i)$: this is impossible because it would be winning for player $1-i$, which contradicts the hypothesis;
- the play reaches $l_i$ or $l_(1-i)$ before $w_i$ or $w_(1-i)$: this is impossible because the only edges leading to $l_i$ or $l_(1-i)$ start from $w_(1-i)$ and $w_i$.
#v(-1.8em)
]
#theorem("compatibility of extended games")[
Let $G = (V_0, V_1, E, p)$ be a parity game and $G' = (V'_0, V'_1, E', p')$ be the extended game from $G$. Then $G'$ is compatible with $G$, that is $forall i. W_i subset.eq W'_i$.
]
#proof[
Let $v in W_i$, then there exist a winning strategy $sigma_i$ for player $i$. We claim that the extended strategy $sigma'_i$ for player $i$ on $G'$ is also winning. In fact consider any strategy $sigma'_(1-i)$ for player $1-i$ on $G'$, then it is the extended strategy of some strategy $sigma_(1-i)$ on $G$. We know that the play starting from $v$ on the instance $(G', sigma'_0, sigma'_1)$ is won by the same player as the play starting from $v$ on the instance $(G, sigma_0, sigma_1)$. Moreover since $sigma_i$ is a winning strategy for player $i$ we know that these plays are won by player $i$, thus $v in W'_i$ and so $W_i subset.eq W'_i$.
]
=== Generalizing subgames with subset of edges
The local strategy improvement algorithm gives a way to consider only a subset of the vertices, but still assumes all edges between such vertices to be known. However this is not necessarily true in the symbolic formulation, as the list of successors of vertices in $V_0$ is computed lazily, and this might include vertices already in the subgame. We thus have to update the local algorithm to handle this case by extending the idea of escape set. Instead of identifying those vertices that can reach the $U$-exterior we will identify those vertices that can reach an "unexplored" edge, that is an edge present in the full game but not in the subgame. We will call the vertices directly connected to such edges _incomplete vertices_. Note that the resulting set will be a superset of the $U$-exterior, since edges that lead outside $U$ cannot be part of the subgame.
#definition("subgame")[
Let $G = (V_0, V_1, E, p)$ be a parity game, $U subset.eq V$ and $E' subset.eq E sect (U times U)$, then $G' = (V_0 sect U, V_1 sect U, E', p|_U)$ is a subgame of $G$, where $p|_U$ is the function $p$ with domain restricted to $U$. We will write $G' = (G, U, E')$ for brevity.
]
#definition("escape set (updated)")[
Let $G = (V_0, V_1, E, p)$ be a parity game and $G' = (G, U, E')$ a subgame of $G$. Let $L = (G|_U, sigma, tau)$ be an instance of the subgame. Let $E_sigma^*$ (resp. $E_tau^*$) be the transitive-reflexive closure of $E_sigma$ (resp. $E_tau$) and $I_G = {v | v E != v E'}$ the set of vertices that have unexplored outgoing edges. The (updated) escape set for player 0 (resp. 1) from vertex $v in U$ is the set $E_L^0 (v) = v E_sigma^* sect I_G$ (resp. $E_L^1 (v) = v E_tau^* sect I_G$).
]
#theorem("definitive winning set is sound")[
Let $G = (V_0, V_1, E, p)$ be a parity game and $G' = (G, U, E')$ a subgame of $G$. Let $G = (V_0, V_1, E, p)$ be a parity game and $U subset.eq V$. Let $L = (G|_U, sigma, tau)$ be an instance of the subgame where $sigma$ and $tau$ are optimal strategies. Then $W'_0 subset.eq W_0$ and $W'_1 subset.eq W_1$.
]
#proof[
Let $v in W'_i$, then there exist a strategy $sigma_i$ on $G'$ the for player $i$ such that for any strategy $sigma_(1-i)$ for player $1-i$ on $G'$ the resulting play is winning for player $i$. Moreover $E_L^(1-i) (v) = varempty$ by definition of $W'_i$, meaning that in the graph restricted to the strategy $sigma_i$, any vertex controlled by player $1-i$ that has unexplored edges is not reachable. This in turn means that on the full game $G$ the strategy $sigma_i$ is still winning, because for any strategy $sigma'_(1-i)$ for player $1-i$ on $G$ the resulting play will still be within the subgame, since no unexplored edge can be reached, and any such play is winning for player $i$, hence $v in W_i$.
]
=== Expansion scheme
In the local strategy iteration the expansion scheme is based on the idea of expanding the subgame by adding new vertices. In our adaptation it will instead add new edges, and vertices will be implicitly added if they are the endpoint of a new edge. This does not however change much of the logic behind it, since the expansion schemes defined in @friedmann_local are all based on picking some unexplored successor, which is equivalent to picking the unexplored edge that leads to it.
More formally, the $epsilon_1$ and $epsilon_2$ functions now take the set of edges in the subgame and output a set of new edges to add to the subgame. The requirements remain similar, in that $epsilon_1$ must return a non-empty set of edges that are not already in the subgame and $epsilon_2$ must return a set of outgoing edges from the given vertex. Moreover if a vertex has no successor then $epsilon_2$ must also be non-empty in order to given that vertex a successor and make the game total.
#definition("expansion scheme (updated)")[
Let $G = (V_0, V_1, E, p)$ be a parity game and $G' = (G, U, E')$ a subgame of $G$. An expansion scheme is a pair of functions $epsilon_1 : 2^E -> 2^E$ and $epsilon_2 : 2^E times V -> 2^E$ such that:
- $varempty subset.neq epsilon_1 (E') subset.eq E without E'$
- $epsilon_2 (E', v) subset.eq ({v} times v E) without E'$
- $v E = D_G (U, v) => epsilon_2 (E', v) != varempty$
]
As before the expansion is computed by first applying $epsilon_1$ and then by repeatedly applying $epsilon_2$.
#let expand = text(font: "", smallcaps("Expand"))
$
expand(E') &= expand_2 (E', epsilon_1 (E')) \ \
expand_2 (E', E'') &= cases(
E' & "if" E'' = varempty \
expand_2 (E' union E'', union.big_((u, v) in E'') epsilon_2 (E' union E'', v)) & "otherwise"
)
$
For our implementation we decided to adapt the symmetric expansion scheme from @friedmann_local. The adapted $epsilon_1$ picks any unexplored edge from a vertex in the escape set of $v^*$ for the losing player $i$, that is, $epsilon_1 (E') = { e }$ for some $e in ({v} times v E) without E'$, some $v in E^i_L (v^*)$ and $p((phi(v^*))_1) "mod" 2 equiv 1 - i$, while the adapted $epsilon_2$ picks any unexplored edge from the given vertex $v$ if it has no successors, that is $epsilon_2 (E', v) = { e }$ for some $e in ({v} times v E) without E'$ if $v E' = varempty$, otherwise $epsilon_2 (E', v) = varempty$. The choices it makes are almost the same as those of the original symmetric algorithm if each chosen edge is replaced with its head vertex, with the exception that it may select edges that lead to already explored vertices.
It should be noted that the upper bound on the number of expansions grows from $O(|V|)$, reached when each expansion adds only a single vertex to the subgame, to $O(|E|)$, now reached when each expansion adds only one edge to the subgame. As shown in @friedmann_local, a big number of expansions might not be ideal because each will require at least one strategy iteration, which in the long run can end up being slower than directly running the global algorithm.
On the other hand a lazier expansion scheme can take better advantage of the ability to perform simplifications on symbolic moves, which allows removing a lot of edges with little work. An eager expansion scheme may instead visit all those edges, just to ultimately find out that they were all losing for the same reason. There is thus a tradeoff between expanding too much in a single step, which loses some of the benefits of using symbolic moves, and expanding too little, which instead leads to too many strategy iterations.
=== Symbolic formula iterators and simplification
Differently from the implementation in @flori, we need to generate symbolic moves lazily in order to take advantage of the local algorithm and the simplification of formulas. To do this we represent the generator for symbolic moves described in @upward-logic as a sequence of moves rather than as a set. Then, we can generate moves in the same order they appear in the sequence, and keep track of which point we have reached.
For the sake of simplicity we assume that every $and$ and $or$ operator with a single subformula can be first simplified to that subformula itself, while $and$ and $or$ operators with more than two subformulas can be rewritten to nested $and$ and $or$ operators each with exactly two subformulas using the associative property. We thus define the sequence of moves for each type of formula as follows, where for the recursive cases we take $M(phi_1) = (tup(X)_(1 1), tup(X)_(1 2), ..., tup(X)_(1 n))$ and $M(phi_2) = (tup(X)_(2 1), tup(X)_(2 2), ..., tup(X)_(2 m))$:
$
M([b, i]) &= (tup(X)) "with" X_i = {b} "and" forall j != i. X_j = varempty \
M(tt) &= (tup(X)) "with" forall i. X_i = varempty \
M(ff) &= () \
M(phi_1 or phi_2) &= (tup(X)_11, tup(X)_12, ..., tup(X)_(1 n), tup(X)_21, tup(X)_22, ..., tup(X)_(2 m)) \
M(phi_1 and phi_2) &= (tup(X)_11 union tup(X)_21, ..., tup(X)_(1 1) union tup(X)_(2 m), tup(X)_12 union tup(X)_21, ..., tup(X)_(1 n) union tup(X)_(2 m))
$
Intuitively a formula $[b, i]$ represents a sequence consisting of a single element, $tt$ also represents a sequence of a single winning move for player 0, while $ff$ represents an empty sequence which is thus losing for player 0. The $or$ operator represents concatenating the two (or more) sequences, with the left one first, and the $and$ operator is equivalent to the Cartesian product of the two (or more) sequences, obtained by fixing an element of the first sequence and joining it with each element of the second sequence, then repeating this for all elements of the first sequence.
// For the operator $and$ in particular it can be helpful to imagine its sequence as listing 2-digits numbers, with the tens digit representing the move from the left subformula and the ones digit representing the move from the right subformula. Intuitively doing this means fixing the left digit first and incrementing the second one
In practice the implementation is based on _formula iterators_, on which we define three operations:
- getting the current move;
- advancing the iterator to the next move, optionally signaling the end of the moves sequence;
- resetting the iterator, thus making it start again from the first move.
These are implemented for every type of formula, as also sketched in the code after the following list:
- for $[b, i]$ formula iterators:
- the current move is always $tup(X)$ with $X_i = {b}$ and $forall j != i. X_j = varempty$;
  - advancing the iterator always signals that the sequence has ended, since there is only ever one move;
  - resetting the iterator always does nothing, since the iterator always points at its single move.
- for $phi_1 or phi_2$ formula iterators:
- the current move is the current move of the currently active subformula iterator, which is kept as part of the iterator state;
  - advancing the iterator means advancing the iterator of the currently active subformula, and if that signals the end of its sequence then the next subformula becomes the active one. If there is no next subformula then the end of the sequence is signaled;
- resetting the iterator means resetting the iterators for both subformulas and making $phi_1$ the currently active subformula.
- for $phi_1 and phi_2$ formula iterators:
- the current move is always the union of the current move of the two subformula iterators;
  - advancing the iterator means advancing the iterator of the right subformula, and if that reports the end of the sequence then it is reset and the iterator for the left subformula is advanced. If that also reports the end of its sequence then this iterator also reports the end of its sequence;
- resetting the iterator means resetting the iterators of both subformulas.
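The following is a minimal sketch (here in Rust) of one possible way to represent such formula iterators; the language, the names and the representation of atoms and moves are purely illustrative and need not match the actual implementation.

```rust
use std::collections::BTreeSet;

// Illustrative types: an atom stands for a [b, i] pair and a move is the set
// of atoms selected by the current position of the iterator.
type Atom = (usize, usize);
type Move = BTreeSet<Atom>;

enum FormulaIter {
    Atom(Atom),                  // [b, i]: exactly one move
    True,                        // tt: one empty (winning) move
    False,                       // ff: no moves at all
    Or(Vec<FormulaIter>, usize), // subformulas plus the index of the active one
    And(Vec<FormulaIter>),       // current move = union of the children's moves
}

impl FormulaIter {
    /// The move currently pointed to, or `None` for `ff`.
    fn current(&self) -> Option<Move> {
        match self {
            FormulaIter::Atom(a) => Some(Move::from([*a])),
            FormulaIter::True => Some(Move::new()),
            FormulaIter::False => None,
            FormulaIter::Or(children, active) => children[*active].current(),
            FormulaIter::And(children) => {
                let mut m = Move::new();
                for c in children {
                    m.extend(c.current()?);
                }
                Some(m)
            }
        }
    }

    /// Advance to the next move; returns `false` when the sequence has ended.
    fn advance(&mut self) -> bool {
        match self {
            FormulaIter::Atom(_) | FormulaIter::True | FormulaIter::False => false,
            FormulaIter::Or(children, active) => {
                // Advance the active child; when it ends, the next child
                // (already at its first move) becomes the active one.
                // Subformulas equal to ff are assumed to have been simplified away.
                if children[*active].advance() {
                    true
                } else if *active + 1 < children.len() {
                    *active += 1;
                    true
                } else {
                    false
                }
            }
            FormulaIter::And(children) => {
                // Odometer style: advance the rightmost child that can still
                // advance and reset every child to its right.
                for i in (0..children.len()).rev() {
                    if children[i].advance() {
                        for c in &mut children[i + 1..] {
                            c.reset();
                        }
                        return true;
                    }
                }
                false
            }
        }
    }

    /// Restart the sequence from its first move.
    fn reset(&mut self) {
        match self {
            FormulaIter::Atom(_) | FormulaIter::True | FormulaIter::False => {}
            FormulaIter::Or(children, active) => {
                for c in children.iter_mut() {
                    c.reset();
                }
                *active = 0;
            }
            FormulaIter::And(children) => {
                for c in children.iter_mut() {
                    c.reset();
                }
            }
        }
    }
}
```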
#example("formula iterator")[
Consider for example the formula $(a or b) and (c or d)$, where for sake of simplicity we have represented atoms by a single variable letter. The sequence of its moves would then be ${a,c}$, ${a,d}$, ${b,c}$ and ${b,d}$. Initially the formula iterator would start with the following state, where red edges represent the currently active subformula of an $or$ formula:
#figure(
canvas({
import draw: *
set-style(content: (padding: .2))
let node(pos, name, label) = content(pos, label, name: name)
node((1.5, 2), "and", $and$)
node((0.5, 1), "ab", $or$)
node((2.5, 1), "cd", $or$)
node((0, 0), "a", $a$)
node((1, 0), "b", $b$)
node((2, 0), "c", $c$)
node((3, 0), "d", $d$)
line("and", "ab")
line("and", "cd")
line("ab", "a", stroke: red)
line("ab", "b")
line("cd", "c", stroke: red)
line("cd", "d")
}),
caption: [Example of formula iterator]
)
The current move would then be ${a, c}$, since the $and$ formula performs the union of the moves of its two subformulas, while the two $or$ subformulas select their left subformula as active.
Advancing the iterator would result in advancing the iterator for the right subformula of the $and$, which happens without reaching its end, thus resulting in the following formula iterator:
#figure(
canvas({
import draw: *
set-style(content: (padding: .2))
let node(pos, name, label) = content(pos, label, name: name)
node((1.5, 2), "and", $and$)
node((0.5, 1), "ab", $or$)
node((2.5, 1), "cd", $or$)
node((0, 0), "a", $a$)
node((1, 0), "b", $b$)
node((2, 0), "c", $c$)
node((3, 0), "d", $d$)
line("and", "ab")
line("and", "cd")
line("ab", "a", stroke: red)
line("ab", "b")
line("cd", "c")
line("cd", "d", stroke: red)
}),
caption: [Example of formula iterator after one step]
)
The current move would then be ${a,d}$, which is also the next move in the original sequence.
Advancing again the iterator would result in the right subformula signaling it has reached its end, and thus the $and$ subformula advances the left subformula and resets the right one, resulting in the following iterator:
#figure(
canvas({
import draw: *
set-style(content: (padding: .2))
let node(pos, name, label) = content(pos, label, name: name)
node((1.5, 2), "and", $and$)
node((0.5, 1), "ab", $or$)
node((2.5, 1), "cd", $or$)
node((0, 0), "a", $a$)
node((1, 0), "b", $b$)
node((2, 0), "c", $c$)
node((3, 0), "d", $d$)
line("and", "ab")
line("and", "cd")
line("ab", "a")
line("ab", "b", stroke: red)
line("cd", "c", stroke: red)
line("cd", "d")
}),
caption: [Example of formula iterator after two steps]
)
This time the current move is ${b,c}$, the next one in the sequence.
Advancing would again advance the right subformula:
#figure(
canvas({
import draw: *
set-style(content: (padding: .2))
let node(pos, name, label) = content(pos, label, name: name)
node((1.5, 2), "and", $and$)
node((0.5, 1), "ab", $or$)
node((2.5, 1), "cd", $or$)
node((0, 0), "a", $a$)
node((1, 0), "b", $b$)
node((2, 0), "c", $c$)
node((3, 0), "d", $d$)
line("and", "ab")
line("and", "cd")
line("ab", "a")
line("ab", "b", stroke: red)
line("cd", "c")
line("cd", "d", stroke: red)
}),
caption: [Example of formula iterator after three steps]
)
The current move is now ${b,d}$, the last move in the sequence. In fact advancing again would result in the right subformula signaling it has reached its end, causing the left subformula to also advance and reach its end, ultimately resulting in the whole formula iterator reaching its end.
]
As mentioned briefly in @upward-logic, in LCSFE @flori formulas are simplified once before exploring their moves according to the assumptions on the winner for each vertex made at that point in the exploration. This is however not applicable to our case since we lazily explore moves, and thus have to simplify formulas whose moves have already been partially explored. An option would be performing simplifications anyway, losing the information about which moves have already been explored and thus needing to explore them again. We however want to preserve this information to avoid exploring moves over and over, and thus need a way to simplify formulas while tracking the effects on their iterator.
The way we do this is by considering how the operation of simplifying a formula iterator can be seen on its sequence. It turns out that simplifying a formula is equivalent to removing some elements from its sequence, in particular simplifying a formula to $ff$ removes all the moves from its sequence, while simplifying a formula to $tt$ removes all the moves from its sequence except the first winning one. Simplifying a formula iterator then requires simplifying its subformula iterators, which in turn might remove moves from the parent formula iterator. Most importantly, the current move may also be among those removed moves, in which case the iterator needs to be advanced to the next remaining move, potentially reaching its end. Note that a formula iterator might also need to be adjusted depending on whether a subformula has been advanced or reached its end after being simplified; for example if the left subformula of an $and$ formula is advanced, even just once, then from the point of view of the sequence of moves of the $and$ formula a lot of moves might have been skipped, corresponding to all the pairs between the skipped move on the left subformula and all the moves in the right subformula.
#example("formula iterator simplification")[
Consider again an iterator for the formula $(a or b) and (c or d)$ in the following state:
#figure(
canvas({
import draw: *
set-style(content: (padding: .2))
let node(pos, name, label) = content(pos, label, name: name)
node((1.5, 2), "and", $and$)
node((0.5, 1), "ab", $or$)
node((2.5, 1), "cd", $or$)
node((0, 0), "a", $a$)
node((1, 0), "b", $b$)
node((2, 0), "c", $c$)
node((3, 0), "d", $d$)
line("and", "ab")
line("and", "cd")
line("ab", "a", stroke: red)
line("ab", "b")
line("cd", "c")
line("cd", "d", stroke: red)
}),
caption: [Example of formula iterator simplification]
)
If it becomes known that the position represented by the atom $c$ is winning, then we might want to simplify the $c or d$ branch to just $c$, since $c$ will always be a best move for player 0. This is similar to assigning $tt$ to $c$, resulting in $c or d$ also being $tt$, though, from the point of view of the sequence of moves, keeping $c$ is more intuitive since we ultimately want a winning move. Note however that we also want to update its current move, and since $c$ was already considered due to appearing on the left side of the $or$ formula, the new iterator is thus considered as having reached its end.
From the point of view of the sequence of moves for the $and$ formula however, this is equivalent to discarding all the moves derived from the $d$ in the right subformula and instead considering only those derived from $c$, thus the original sequence with ${a,c}$, ${a,d}$, ${b,c}$ and ${b,d}$ would become just ${a,c}$ and ${b,c}$. Notice however how the iterator has already considered the move ${a,c}$, and thus it should advance to the next move ${b,c}$. This can be inferred by the fact that the right subformula has reached its end, so just like when advancing the $and$ formula, the left subformula is advanced to $b$ and the right subformula is reset, which for a formula iterator consisting of just $c$ does nothing. Thus we end up with the following simplified formula iterator:
#figure(
canvas({
import draw: *
set-style(content: (padding: .2))
let node(pos, name, label) = content(pos, label, name: name)
node((1.5, 2), "and", $and$)
node((0.5, 1), "ab", $or$)
node((2.5, 1), "c", $c$)
node((0, 0), "a", $a$)
node((1, 0), "b", $b$)
line("and", "ab")
line("and", "c")
line("ab", "a")
line("ab", "b", stroke: red)
}),
caption: [Example of formula iterator simplification]
)
Notice how this iterator would consider exactly the moves ${a,c}$ and ${b,c}$ if restarted, but instead is currently considering the move ${b,c}$, because the original iterator already considered the move ${a,c}$ and it would be a waste to consider it again.
]
When simplifying we will be interested, for every subformula, in whether it has been simplified to $tt$, $ff$ or whether its truth value is still unknown. In case it has not been simplified to $ff$ we will also care about whether it has reached the end of its sequence after the simplification, and if not whether the current move has changed or not. This will be useful to update the current move of the parent formula iterators (a sketch of this information is given after the following list). In particular:
- for $[b, i]$ formulas, simplifying them depends on whether it is known that the position for player 0 corresponding to that atom is definitely winning or not:
- if it is definitely winning the iterator remains unchanged, since the only move in the sequence it represents is winning, while the information that it is winning is propagated to the parent formula iterator;
- if it is definitely losing the iterator is replaced with $ff$, effectively removing all moves from the sequence;
- if it is neither of them then the iterator is not changed.
- $tt$ and $ff$ formulas do not need to be simplified, since they are already as much simplified as possible;
- for $or$ formulas, each subformula is simplified, thus any move that is removed from those subformulas sequences is also removed from the $or$ sequence. Then:
  - if one of the subformulas is simplified to $tt$ then this formula simplifies to $tt$. The current move is updated based on whether the winning move was before the current move, in which case the iterator reaches its end, the current move itself, in which case it remains the same, or after the current move, in which case the current move is updated to the winning move.
- if all the subformulas are simplified to $ff$ then this formula is also simplified to $ff$ and reaches its end;
- otherwise the current move is updated to the new current move of the current subformula if it has not reached its end, to the first move of the next subformula if that exists, which becomes the new active subformula, or the iterator signals having reached the end of the sequence.
- for $and$ formulas each subformula is simplified and moves that use removed moves from any subformulas are removed. Then:
  - if any subformula has been simplified to $ff$ then the whole formula is also simplified to $ff$ and reaches its end;
  - if all subformulas have been simplified to $tt$ then this formula also simplifies to $tt$. If the current move is the winning one nothing changes; otherwise, if the first subformula whose winning move is not the current one has already considered that move, the iterator reaches its end, and if not the current move is advanced until the winning one;
- otherwise the first subformula from the left that has reached its end causes the advance of the subformula on its left and the reset of itself and all the ones on its right. If there is no subformula on its left the whole iterator has reached its end.
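For illustration purposes, the information propagated by simplifying a subformula iterator could be summarised by types like the following; the names are illustrative and the actual implementation may represent this information differently.

```rust
/// Outcome of simplifying a (sub)formula iterator, as reported to its parent.
enum SimplifyResult {
    /// The subformula became `ff`: all of its moves disappeared.
    False,
    /// The subformula became `tt`; we record where its winning move sits
    /// relative to the move it was pointing at before the simplification.
    True(WinningMovePos),
    /// The truth value is still unknown; we record what happened to the
    /// current move of the subformula.
    Unknown(CurrentMoveStatus),
}

enum WinningMovePos {
    AlreadyConsidered, // a winning move came before the current one
    Current,           // the current move is itself winning
    NotYetConsidered,  // the winning move still lies ahead
}

enum CurrentMoveStatus {
    Unchanged, // the current move survived the simplification
    Changed,   // it was removed and a later move took its place
    Exhausted, // it was removed and no later move remains
}
```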
// #example("simplification on iterators")[
// Consider two formulas $phi_1$ and $phi_2$ each with moves $M(phi_i) = (tup(X)_(i 1), ..., tup(X)_(i n))$. We now show some examples of simplifications of $or$ and $and$ formulas involving $phi_1$ and $phi_2$ to give an intuition about how this works in practice. We will represent the current move by placing a vertical bar right before it, or at the end of the list of moves if the iterator has reached its end.
// - if $phi_1$ simplifies to $tt$, with the winning move being $X_(1 k)$ then we would have:
// $
// M(phi_1 or phi_2) = \(underbrace(tX_(1 1)\, ...\, tX_(1 n), M(phi_1)), underbrace(tX_(2 1)\, ... | ...\, tX_(2 m), M(phi_2))\) -> (tX_(1 k) |) \
// M(phi_1 or phi_2) = \(underbrace(tX_(1 1)\, ...\, tX_(1 k)\, ... | ... \, tX_(1 n), M(phi_1)), underbrace(..., M(phi_2))\) -> (tX_(1 k) |) \
// M(phi_1 or phi_2) = \(underbrace(tX_(1 1)\, ... | ...\, tX_(1 k)\, ... \, tX_(1 n), M(phi_1)), underbrace(..., M(phi_2))\) -> (| tX_(1 k)) \
// M(phi_2 or phi_1) = \(underbrace(tX_(2 1)\, ... | ... \, tX_(2 m), M(phi_2)), underbrace(tX_(1 1)\, ...\, tX_(1 n), M(phi_1))\) -> (| tX_(1 k)) \
// $
// - if $phi_1$ simplifies to $ff$ then we would have:
// $
// M(phi_1 or phi_2) = \(underbrace(tX_(1 1)\, ... | ... \, tX_(1 n), M(phi_1)), underbrace(tX_(2 1)\, ...\, tX_(2 m), M(phi_2))\) -> \(| underbrace(tX_(2 1)\, ...\, tX_(2 m), M(phi_2))\)
// $
// ]
// If a subformula is simplified to $tt$ then its sequence of moves is replaced with one containing its first winning move, while if a subformula is simplified to $ff$ then its sequence of moves is replaced with an empty one. From the point of view of the parent formula iterator, this is equivalent to removing all the moves that involve one of the moves of that subformula.
// If the current move is included in this removed moves then the iterator must advance to the next remaining move, if it exists, or signal that it has reached its end.
// To do this we need to track some informations about the subformulas iterator when they are simplified, namely:
// - whether those iterators also became $tt$ or $ff$;
// - if they did not became $tt$ or $ff$, which of the following three cases they fall on:
// - the current move is still the same;
// - the current move has been removed and a new one has taken its place;
// - the current move has been removed and the sequence has ended.
// - if they did became $tt$, whether a winning move:
// - is the current move;
// - has been considered before the current one;
// - has yet to be considered.
// The simplification algorithm then works similarly to the existing one for simplifying a formula iterator, but in addition:
// - for $phi_1 or phi_2$ formula iterators:
// - if the formula has been simplified to $tt$ then consider the first subformula that has been simplified to $tt$:
// - if it is the subformula before the current subformula, or it is the current subformula but it reports to have already considered a winning move, then this formula has also already considered a winning move;
// - if it is the current subformula and it reports that the winning move is the current one, then the winning move of the whole formula is also the current one;
// - otherwise the winning move has yet to be considered.
// - if the formula has not been simplified to either $tt$ or $ff$, then:
// - if the current subformula has been simplified to $ff$ or has reached its end then this iterator needs to advance;
// - otherwise the current move is the same as the current subformula iterator one, which might still be the same or have changed to a new one.
// - for $phi_1 and phi_2$ formula iterators:
// - if the formula has been simplified to $tt$ then:
// - if the left subformula has already considered a winning move then this iterator has also already considered a winning move;
// - if the left subformula current move is winning then this iterator winning move depends on when the right subformula winning move;
// - if the left subformula has not considered a winning move yet then this iterator has also not considered a winning move yet.
// - if the formula has not been simplified to either $tt$ or $ff$, then:
// - if the left subformula has been simplified to $tt$:
// - if it has already considered a winning move, then this iterator has reached its end;
// - if its current move is winning then the current move is the same as the right subformula iterator one, which might still be the same or have changed to a new one;
// - if it has not considered a winning move yet, then reset the right subformula iterator, and the current move has changed;
// - if the right subformula has been simplified to $tt$:
// - if it has already considered a winning move then advance the left subformula iterator and the current move has changed;
// - if its current move is winning then the current move remains the same;
// - it it has yet to consider a winning move then the current move has changed.
// - if neither has been simplified to $tt$ then:
// - if the left subformula iterator has reached its end then the whole formula also has;
// - if the left subformula current move has changed then reset the right subformula iterator, and the current move of this iterator has also changed;
// - if the left subformula current move is the same and the right subformula iterator has reached its end then advance the left subformula iterator:
// - if that reaches its end then this iterator also has reached its end;
// - otherwise this iterator current move has changed.
// - if the left subformula current move is the same and the right subformula iterator has not reached its end then
== Improvements
=== Graph simplification
In the local strategy iteration it may happen that we learn about the winner on a vertex that is not the one we are interested in. When this happens we will do a lot of wasted work in the subsequent valuation steps, since they will have to visit its edges again and again.
We now propose a transformation that produces a compatible graph and reduces the number of edges of vertices in the definitely winning sets, thus decreasing the amount of work that the valuation step needs to perform. Informally, the idea will be to replace all outgoing edges from vertices in a definitely winning set with one pointing to one of the four auxiliary vertices $w0$, $l0$, $w1$ or $l1$, in such a way that its winner is preserved and the graph remains bipartite (a small code sketch of this rule is given at the end of this subsection).
#definition("simplified graph")[
Let $G = (V'_0, V'_1, E', p)$ be the extended game of some game $(V_0, V_1, E, p)$, let $G' = (G, U, E'')$ be a partially expanded game with ${w0, l0, w1, l1} subset.eq U$ and let $W'_0$ and $W'_1$ be the definitely winning sets of $G'$. Let $v in (V_0 union V_1) sect (W'_0 union W'_1)$, then $G$ can be simplified to the graph $G'' = (V'_0, V'_1, E''', p)$ where:
  - if $v in V_0 sect W'_0$ then $E''' = (E' without ({v} times v E')) union {(v, l1)}$;
  - if $v in V_0 sect W'_1$ then $E''' = (E' without ({v} times v E')) union {(v, w1)}$;
  - if $v in V_1 sect W'_1$ then $E''' = (E' without ({v} times v E')) union {(v, l0)}$;
  - if $v in V_1 sect W'_0$ then $E''' = (E' without ({v} times v E')) union {(v, w0)}$;
]
#theorem("simplified graph compatible")[
Let $G = (V_0, V_1, E, p)$ be an extended parity game which has been simplified to $G'' = (V'_0, V'_1, E', p)$ according to the previous definition. Then $G''$ is compatible with $G$.
]
#proof[
We want to prove that the winning sets in $G$ are equal to the ones in $G''$, that is $forall i. W_i = W''_i$. Without loss of generality we assume the simplification has happened on a vertex $v in W'_0$.
Consider now any vertex $u in W_i$, that is winning for some player $i$ in $G$. We want to prove that $u in W''_i$ too. Consider any winning strategy for player $i$ and any other strategy for player $1-i$ in $G$. Any play in $G$ induced by these two strategies will be winning for player $i$ since we have $u in W_i$. We now distinguish two cases:
- $i = 0$, then these plays could reach vertex $v$. The corresponding play in $G''$ would then also reach $v$, but would then only be able to reach $l1$, $w0$ and loop between them. The resulting play is however also won by player 0, hence $u in W''_0$.
- $i = 1$, then it is not possible for the play in $G$ to reach vertex $v$, since otherwise player 0 would have a strategy to continue the play and win it, resulting in $u in W_0$ instead of $W_1$. Hence all plays in $G$ do not go through $v$ and remain the same in $G''$, thus remaining winning for player 1 and $u in W''_1$.
]
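As an illustration, the choice of auxiliary vertex prescribed by the simplified graph definition above can be summarised as follows; the types and names are purely illustrative and need not match the actual implementation.

```rust
#[derive(Clone, Copy, PartialEq, Eq)]
enum Player { P0, P1 }

/// The four auxiliary vertices of the extended game.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Aux { W0, L0, W1, L1 }

/// The target of the single remaining outgoing edge of a definitely solved
/// vertex, given its owner and the player winning on it. This mirrors the
/// four cases of the definition: the target is always owned by the opponent
/// of the vertex owner, so the graph stays bipartite, and the winner on the
/// vertex is preserved.
fn simplification_target(owner: Player, winner: Player) -> Aux {
    match (owner, winner) {
        (Player::P0, Player::P0) => Aux::L1,
        (Player::P0, Player::P1) => Aux::W1,
        (Player::P1, Player::P1) => Aux::L0,
        (Player::P1, Player::P0) => Aux::W0,
    }
}
```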
=== Computing play profiles of the expansion
Each game expansion is normally followed by a strategy iteration step, which computes the play profile of each vertex and then tries to improve the current strategy. We can notice however that the play profiles of all the vertices are known right before the expansion, and if we keep the current strategies fixed, both for player 0 and 1, then the newer vertices cannot influence the play profiles for the existing vertices, since the existing strategies will force any play to remain within the edges in the old subgame. Hence, we can compute the play profiles for the newer vertices in isolation, and only then determine if the existing strategies can be improved given the newer vertices.
It is known that a play profile on a vertex depends on the vertex itself and on the play profile of its successor according to the strategy for the player controlling that vertex. In particular, given a vertex $x$ and its successor $y$ we know the following about its play profile components $phi_0$, $phi_1$ and $phi_2$:
$
phi_0 (x) &= phi_0 (y) \
phi_1 (x) &= cases(
phi_1 (y) & "if" x < phi_0 (x) \
varempty & "if" x = phi_0 (x) \
phi_1 (y) union {x} & "if" x > phi_0 (x)
) \
phi_2 (x) &= cases(
phi_2 (y) + 1 & "if" x != phi_0 (x) \
0 & "if" x = phi_0 (x)
)
$
Notice however how this can result in a cyclic dependency if we need to compute the play profiles of multiple vertices creating a cycle. We thus distinguish two cases:
- if the expansion stops by reaching an existing vertex then its play profile was already known and there is no cyclic dependency. Each play profile can be computed based on the one of the successor, starting with the play profile of the last new vertex found;
- if the expansion stops by reaching a vertex found in the current expansion then there is a cyclic dependency. It can however be broken by finding the most relevant vertex $w$ of the cycle, for which we know that $phi (w) = (w, varempty, 0)$; knowing the play profile of one of the vertices in the cycle, we can compute the play profiles of the rest like in the previous case.
By computing the play profiles after an expansion step we can thus perform an improvement right away without having to go through a valuation step to recompute the play profiles. We can further improve this by noticing that the play profiles of existing vertices did not change, thus allowing us to skip the improvement check for any vertex that did not have an outgoing edge just added.
Ultimately this allows us to skip a lot of valuation steps, which are relatively expensive. It also reduces some of the downsides of the local algorithm, among which is the increased number of valuation steps required.
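A possible way to fold the equations above along the chain of vertices added by an expansion, starting from a successor whose play profile is already known, is sketched below; the types, names and the relevance comparison are illustrative and need not match the actual implementation.

```rust
use std::collections::BTreeSet;

type Vertex = u32;

/// A play profile (phi_0, phi_1, phi_2): the most relevant vertex of the
/// cycle, the more relevant vertices seen before reaching it, and the
/// distance to it.
#[derive(Clone)]
struct PlayProfile {
    most_relevant: Vertex,
    before: BTreeSet<Vertex>,
    distance: u32,
}

/// Profile of `x` given the profile of its successor under the fixed
/// strategies; `more_relevant` stands for the relevance order on vertices.
fn profile_of(
    x: Vertex,
    succ: &PlayProfile,
    more_relevant: &impl Fn(Vertex, Vertex) -> bool,
) -> PlayProfile {
    if x == succ.most_relevant {
        // x is the most relevant vertex of the cycle itself.
        PlayProfile { most_relevant: x, before: BTreeSet::new(), distance: 0 }
    } else {
        let mut before = succ.before.clone();
        if more_relevant(x, succ.most_relevant) {
            before.insert(x);
        }
        PlayProfile {
            most_relevant: succ.most_relevant,
            before,
            distance: succ.distance + 1,
        }
    }
}

/// Profiles of the new vertices, listed from the one whose successor already
/// has a known profile back towards the vertex that started the expansion.
fn profiles_along_chain(
    chain: &[Vertex],
    known_successor_profile: PlayProfile,
    more_relevant: &impl Fn(Vertex, Vertex) -> bool,
) -> Vec<PlayProfile> {
    let mut acc = known_successor_profile;
    let mut out = Vec::with_capacity(chain.len());
    for &v in chain {
        acc = profile_of(v, &acc, more_relevant);
        out.push(acc.clone());
    }
    out
}
```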
=== Exponentially increasing expansions
While lazier expansion schemes are intuitively better when paired with symbolic moves simplification, and the incremental play profiles computation often removes the need to perform an expensive valuation step, it can still happen that games fall into the worst case of expanding only a handful of edges in each iteration without being able to perform significant simplifications. This can be avoided by expanding more eagerly, like in the asymmetric expansion scheme for the local strategy improvement algorithm, but ideally we would like to be lazier when possible.
We thus changed the expansion logic to repeatedly expand until a minimum number of edges has been added to the game. We chose this number to be initially quite small in order to favour the locality of the algorithm, but made it increase to favour more eager expansions once it becomes clear that the winner cannot be quickly determined locally.
There are multiple ways to perform this increase, and this will influence the final complexity of the algorithm. In our case we chose to increase this number exponentially, thus guaranteeing that the maximum number of expansions is logarithmic in the number of edges and keeping the cost of the worst cases under control.
To see why this is the case consider the sum of the number of edges $e_i$ added in each expansion $i$. We require each $e_i$ to be at least $a times b^i$ for some constants $a > 0$ and $b > 1$. This creates a geometric progression, whose sum is known to be $a (b^(n+1) - 1) / (b - 1)$, though for our purposes we can focus only on bounding it from below by $a b ^ n$.
$
"#edges added"
&= e_0 + e_1 + e_2 + ... + e_n \
&>= a + a b + a b ^ 2 + ... + a b ^ n \
&>= a b ^ n
$
Then we know that in the worst case we can add at most $|E|$ edges, since those are all the edges. This gives the inequality $|E| >= a b ^ n$, which, solved for $n$, gives $n <= log_b (|E| / a)$.
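For illustration, the expansion loop with an exponentially increasing budget could be structured as follows; the constants and names are illustrative and need not match the actual implementation.

```rust
/// Expansion driver with an exponentially growing budget: each phase keeps
/// expanding until at least `budget` edges have been added, then multiplies
/// the budget by `growth`, so the number of phases stays logarithmic in |E|.
struct Expander {
    budget: f64, // minimum number of edges to add in the next phase (the a * b^i term)
    growth: f64, // the factor b > 1
}

impl Expander {
    fn new(initial: f64, growth: f64) -> Self {
        Self { budget: initial, growth }
    }

    /// `expand_once` stands for one application of the expansion scheme and
    /// is assumed to return how many edges it actually added.
    fn expand_phase(&mut self, mut expand_once: impl FnMut() -> usize) {
        let mut added = 0;
        while (added as f64) < self.budget {
            let n = expand_once();
            if n == 0 {
                break; // nothing left to expand
            }
            added += n;
        }
        self.budget *= self.growth;
    }
}
```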
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/scholarly-tauthesis/0.4.0/tauthesis.typ | typst | Apache License 2.0 | /** tauthesis.typ
*
* This module defines the Tampere University thesis template structure. You
* should not need to touch this file. Define your own commands in preamble.typ
* instead.
*
***/
#import "template/meta.typ"
#import "template/content/abstract.typ"
#import "template/content/tiivistelmä.typ"
//// Counters and kinds
#let IMAGECOUNTER = counter("IMAGE")
#let TABLECOUNTER = counter("TABLE")
#let CODECOUNTER = counter("CODE")
#let TAUTHEOREMCOUNTER = counter("tautheorem")
#let EQUATIONCOUNTER = counter("EQUATION")
#let TAUTHEOREMKIND = "TAUTHEOREMKIND"
#let EQNUMDEPTHSTATE = state("EQNUMDEPTHSTATE", 1)
#let FIGNUMDEPTHSTATE = state("FIGNUMDEPTHSTATE", 1)
#let PREFACEPAGENUMBERCOUNTER = counter("PREFACEPAGENUMBERCOUNTER")
#let MAINMATTERPAGENUMBERCOUNTER = counter("MAINMATTERPAGENUMBERCOUNTER")
/** DOCPARTSTATE
*
* An integer which keeps track of which part of the document we are in. The
* integers denote the following thing:
*
* 0 ↦ initial state
* 1 ↦ title page
* 2 ↦ document preamble
* 3 ↦ main matter
* 4 ↦ bibliography
* 5 ↦ appendix
*
***/
#let INITIALPART = 0
#let TITLEPAGEPART = 1
#let DOCPREAMBLEPART = 2
#let MAINMATTERPART = 3
#let BIBLIOGRAPHYPART = 4
#let APPENDIXPART = 5
#let DOCPARTSTATE = state("DOCPARTSTATE", INITIALPART)
//// Constants
#let templatename = "tauthesis"
#let tunipurple = rgb(78,0,148)
#let tunifontsize = 12pt
#let tunicodefontsize = 10pt
#let finnish = "fi"
#let english = "en"
#let headingsep = 2 * " "
#let tunicodeblockinset = 0.5em
#let codesupplement = if meta.language == finnish [Listaus] else [Listing]
#let appendixprefix = if meta.language == finnish { [Liite ] } else { [Appendix ] }
//// Utility functions
// This function will give Finnish month names by month numbers.
// https://typst.app/docs/reference/foundations/datetime/
// Unfortunately, when choosing the word representation, it can
// currently only display the English version. In the future, it is
// planned to support localization.
#let localize_month_fi(month) = {
if month < 1 or month > 12 {
panic("Month must be between [1,12]")
}
if month == 1 {
return "Tammikuu"
} else if month == 2 {
return "Helmikuu"
} else if month == 3 {
return "Maaliskuu"
} else if month == 4 {
return "Huhtikuu"
} else if month == 5 {
return "Toukokuu"
} else if month == 6 {
return "Kesäkuu"
} else if month == 7 {
return "Heinäkuu"
} else if month == 8 {
return "Elokuu"
} else if month == 9 {
return "Syyskuu"
} else if month == 10 {
return "Lokakuu"
} else if month == 11 {
return "Marraskuu"
} else if month == 12 {
return "Joulukuu"
} else {
return "<Parametrina ollut numero ei ole kuukausi>"
}
}
#let localize_month_fi_partitive(month) = {
if month < 1 or month > 12 {
panic("Month must be between [1,12]")
}
if month == 1 {
return "tammikuuta"
} else if month == 2 {
return "helmikuuta"
} else if month == 3 {
return "maaliskuuta"
} else if month == 4 {
return "huhtikuuta"
} else if month == 5 {
return "toukokuuta"
} else if month == 6 {
return "kesäkuuta"
} else if month == 7 {
return "heinäkuuta"
} else if month == 8 {
return "elokuuta"
} else if month == 9 {
return "syyskuuta"
} else if month == 10 {
return "lokakuuta"
} else if month == 11 {
return "marraskuuta"
} else if month == 12 {
return "joulukuuta"
} else {
return "<Parametrina ollut numero ei ole kuukausi>"
}
}
/** int2appendixletter(ii)
*
* Converts a given integer into a letter that might be used in
* appendix numbering.
*
***/
#let int2appendixletter(ii) = {
let Acodepoint = 65
let Zcodepoint = 90
let letterrangelen = Zcodepoint - Acodepoint
let nnquo = calc.quo( ii, letterrangelen )
let nnrem = calc.rem( ii, letterrangelen )
let letter = str.from-unicode( Acodepoint + nnrem - 1 )
let string = letter
let counter = 0
while counter < nnquo { string += letter; counter += 1 }
string
}
/** array2sectionnumbering(ctr, inappendix)
*
* When given an array of integers, returns a string that might be
* used as a section number.
*
***/
#let array2sectionnumbering(arr, inappendix) = {
if arr.len() < 1 { panic("Arrays of zero length cannot be used to construct a section number string. Received " + arr.map(str).join(",") ) }
if not ( arr.all( elem => type(elem) == int ) ) { panic("Array elements used to construct section numberings need to contain integers.") }
let firstnum = arr.first()
let chaptersymbol = if inappendix {
int2appendixletter(firstnum)
} else {
str(firstnum)
}
let rest = arr.slice(1).map(str)
let symbols = (chaptersymbol, ..rest)
symbols.join(".")
}
/** refshowrule(it, eqnumberwithinlevel : 1)
*
* Defines how a cross-reference within a document is laid out.
*
***/
#let refshowrule(it, eqnumberwithinlevel : 1) = {
set text( fill: tunipurple )
let el = it.element
if el != none {
let eloc = el.location()
let headingctr = counter(heading).at(eloc)
let docpart = DOCPARTSTATE.at(eloc)
let inappendix = docpart == APPENDIXPART
if el.func() == figure {
let (carray, supplement) = if el.body.func() == image {
(IMAGECOUNTER.at(eloc), el.supplement)
} else if el.body.func() == table {
(TABLECOUNTER.at(eloc), el.supplement)
} else if el.body.func() == raw {
(CODECOUNTER.at(eloc), codesupplement)
} else if el.kind == TAUTHEOREMKIND {
(TAUTHEOREMCOUNTER.at(eloc), el.supplement)
} else {
(none, none)
}
let supplement = if it.citation.supplement != none { it.supplement } else { supplement }
carray.last() += 1
link(eloc,text(fill:tunipurple)[*#supplement #array2sectionnumbering(carray,inappendix)*])
}
else if el.func() == heading {
let ctr = counter(heading).at(eloc)
let prefix = if it.citation.supplement == none and el.level == 1 and inappendix {
appendixprefix
} else if it.citation.supplement != none {
[#it.supplement ]
} else {
[]
}
link(
eloc
)[
*#text(
fill : tunipurple,
)[
#prefix#array2sectionnumbering(ctr, inappendix)
]*
]
}
else if el.func() == math.equation {
let numdepth = EQNUMDEPTHSTATE.at(eloc)
let headctr = EQUATIONCOUNTER.at(eloc)
let headctr = headctr.slice(0,calc.min(headctr.len(),numdepth))
let eqctr = counter(math.equation).at(eloc)
headctr.push( eqctr.last() )
link(
eloc,
text(fill:tunipurple)[$(array2sectionnumbering(headctr, inappendix))$]
)
}
else {
it
}
}
else {
it
}
}
/** figshowrule(it)
*
* Defines how figures given as argument are laid out in the document. Also
* updates counters related to numbering figures.
*
***/
#let figshowrule(it) = {
let figloc = it.location()
let docpart = DOCPARTSTATE.at(figloc)
let inappendix = docpart == APPENDIXPART
block(
breakable : false,
if it.kind == image {
IMAGECOUNTER.update( (..args) => {
let argsarr = args.pos()
argsarr.last() += 1
argsarr
} )
let ctr = IMAGECOUNTER.at(figloc)
ctr.last() += 1
align( center )[
#block(it.body)
#block[
#text( fill:tunipurple)[*#it.supplement #array2sectionnumbering(ctr,inappendix):* ]
#it.caption.body
]
]
}
else if it.kind == table {
set figure.caption ( position : top )
TABLECOUNTER.update( (..args) => {
let argsarr = args.pos()
argsarr.last() += 1
argsarr
} )
let ctr = TABLECOUNTER.at(figloc)
ctr.last() += 1
align( center )[
#block[
#text( fill:tunipurple)[*#it.supplement #array2sectionnumbering(ctr,inappendix):* ]
#it.caption.body
]
#block(it.body)
]
}
else if it.kind == raw {
set figure.caption ( position : top )
CODECOUNTER.update( (..args) => {
let argsarr = args.pos()
argsarr.last() += 1
argsarr
} )
let ctr = CODECOUNTER.at(figloc)
ctr.last() += 1
align( center )[
#block[
#text( fill:tunipurple)[*#codesupplement #array2sectionnumbering(ctr,inappendix):* ]
#it.caption.body
]
]
block(it.body)
}
else if it.kind == TAUTHEOREMKIND {
TAUTHEOREMCOUNTER.update( (..args) => {
let argsarr = args.pos()
argsarr.last() += 1
argsarr
} )
it
}
else { it }
)
}
/** headingshowrule(it)
*
* Defines how headings are displayed. Also updates counters related to
* numbering figures and equations.
*
***/
#let headingshowrule(it) = [
#if it.level == 1 {
pagebreak(weak: true)
}
#let eqnumberwithinlevel = EQNUMDEPTHSTATE.get()
#let fignumberwithinlevel = FIGNUMDEPTHSTATE.get()
#let counterlistlen = fignumberwithinlevel + 1
#let followingheadings = query( heading.where(outlined : true).after(here()))
#let followingheading = followingheadings.at(0, default: none)
#let fhloc = if followingheading != none {
followingheading.location()
} else {
here()
}
#if it.level <= eqnumberwithinlevel and it.outlined {
counter(math.equation).update( 0 )
let ctr = counter(heading).at(fhloc)
let minlen = calc.min(ctr.len(),eqnumberwithinlevel)
EQUATIONCOUNTER.update( ctr.slice(0,minlen) )
}
#if it.outlined and it.level <= counterlistlen {
IMAGECOUNTER.update( (..args) => {
let argsarr = args.pos()
argsarr.at(it.level - 1) += 1
argsarr.last() = 0
argsarr
} )
TABLECOUNTER.update( (..args) => {
let argsarr = args.pos()
argsarr.at(it.level - 1) += 1
argsarr.last() = 0
argsarr
} )
CODECOUNTER.update( (..args) => {
let argsarr = args.pos()
argsarr.at(it.level - 1) += 1
argsarr.last() = 0
argsarr
} )
TAUTHEOREMCOUNTER.update( (..args) => {
let argsarr = args.pos()
argsarr.at(it.level - 1) += 1
argsarr.last() = 0
argsarr
} )
}
#let docpart = DOCPARTSTATE.get()
#let inappendix = docpart == APPENDIXPART
#let inbibliography = docpart == BIBLIOGRAPHYPART
#let textsize = if it.level == 1 {
21pt
} else if it.level == 2 {
17pt
} else if it.level == 3 or it.level == 4 {
13pt
}
#set text( size : textsize )
#block(
above: 1.4em,
below: 2em,
[
#let prefix = if it.level == 1 and inappendix {
appendixprefix
} else {
[]
}
#prefix
#if it.outlined {
if it.level == 1 and inappendix {
counter(heading).display("A.1:")
}
else if inappendix {
counter(heading).display("A.1")
}
else if inbibliography {
[]
}
else {
counter(heading).display()
}
}
#smallcaps( it.body )
]
)
]
/** outlineentryshowrule(it)
*
* Defines how outline entries are to be displayed. This is to be used within a
* show rule, so that context and therefore information about page numbers and
* such can be resolved.
*
***/
#let outlineentryshowrule(it) = {
set text( size : 13pt )
let el = it.element
let eloc = el.location()
let elpagenum = MAINMATTERPAGENUMBERCOUNTER.at(eloc).first() - 1
let docpart = DOCPARTSTATE.at(eloc)
let inappendix = docpart == APPENDIXPART
let inbibliography = docpart == BIBLIOGRAPHYPART
if el.func() == figure {
let (carray, supplement) = if el.body.func() == image {
(IMAGECOUNTER.at(eloc), el.supplement)
} else if el.body.func() == table {
(TABLECOUNTER.at(eloc), el.supplement)
} else if el.body.func() == raw {
(CODECOUNTER.at(eloc), codesupplement)
} else if el.kind == TAUTHEOREMKIND {
(TAUTHEOREMCOUNTER.at(eloc), el.supplement)
} else {
(none, none)
}
if carray != none {
carray.last() += 1
link(
eloc,
[
#text(fill:tunipurple)[*#supplement #array2sectionnumbering(carray, inappendix):*]
#set text( fill : black )
#el.caption.body
#box(width:1fr,repeat[.])
#elpagenum
]
)
} else {
it
}
} else if el.func() == heading {
let tsize = if el.level == 1 { 14pt } else if el.level == 2 { 13pt } else if el.level == 3 { 12pt } else { 11pt }
let ctr = counter(heading).at(eloc)
let (prefix, numbering, colon, bodycolor, bodyweight) = if inappendix {
if el.level == 1 {
(appendixprefix, array2sectionnumbering(ctr, inappendix), ":", black, "regular")
} else {
([], array2sectionnumbering(ctr, inappendix), [], black, "regular")
}
} else if inbibliography {
([],[],[], tunipurple, "bold")
} else {
([], array2sectionnumbering(ctr, inappendix), [], black, "regular")
}
smallcaps(
link(
eloc
)[
*#text(
fill : tunipurple,
size : tsize,
)[
#prefix
#numbering#colon
]*
#text(
fill:bodycolor,
weight:bodyweight,
size:tsize,
)[
#el.body
]
#text(
fill:black,
)[
#box(width:1fr,repeat[.])
#elpagenum
]
]
)
} else {
it
}
}
/** titlepage(title, authors, examiners, submissiondate)
*
* Defines the look of the title page, and returns the content related to it.
*
***/
#let titlepage() = {
DOCPARTSTATE.update(TITLEPAGEPART)
let examinerprefix = if meta.language == finnish {
"Tarkastaja"
} else {
"Examiner"
}
set page(
paper: "a4",
header: none,
footer: none,
margin: (left: 4cm, top: 4.5cm,),
numbering: none,
)
place(top + left, float:true, dx: -2cm, dy:-2cm, image("template/images/tau-logo-fin-eng.svg", width : 8cm))
align(right)[
#text(
20pt,
meta.author
)
#v(1cm, weak: true)
#set par( leading : 0.4em )
#text(
35pt,
fill: tunipurple,
smallcaps(
if meta.language == finnish {
meta.otsikko
} else if meta.language == english {
meta.title
}
)
)
#v(0.9cm, weak: true)
#text(
20pt,
fill: tunipurple,
smallcaps(
if meta.language == finnish {
meta.alaotsikko
} else if meta.language == english {
meta.subtitle
}
)
)
#v(1fr)
#set par( leading : 0.65em )
#text(
12pt,
[
#if meta.language == english [
#meta.thesistype\
#meta.university\
#meta.faculty
] else if meta.language == finnish [
#meta.työntyyppi\
#meta.koulu\
#meta.tiedekunta
]\
#meta.examiners.map(value => examinerprefix + ": " + value.title + " " + value.firstname + " " + value.lastname).join("\n")\
#if meta.language == english [
#meta.date.display("[month repr:long] [year]")
] else if meta.language == finnish [
#localize_month_fi(meta.date.month()) #meta.date.year()
]
]
)
]
}
/** abstract(content,language)
*
* Defines the structure and look of Tampere University thesis
* abstract.
*
***/
#let abstract(content, language) = {
[
#if language == english [
= Abstract
#meta.author: #meta.title\
#meta.thesistype\
#meta.university\
#meta.date.display("[month repr:long] [year]")\
] else if language == finnish [
= Tiivistelmä
#meta.author: #meta.otsikko\
#meta.työntyyppi\
#meta.koulu\
#localize_month_fi(meta.date.month()) #meta.date.year()\
]
#line(
length: 100%,
stroke: 2pt + tunipurple,
)
#content
#if language == finnish [
*Avainsanat:* #meta.avainsanat.join(", ")
Tämän julkaisun alkuperäisyys on tarkastettu Turnitin OriginalityCheck -ohjelmalla.
] else [
*Keywords:* #meta.keywords.join(", ")
The originality of this thesis has been checked using the Turnitin OriginalityCheck service.
]
]
}
/** preface()
*
* Typesets the preface of this thesis.
*
***/
#let preface() = {
let text = include "template/content/preface.typ"
if meta.language == finnish [
= Alkusanat
#text
#meta.sijainti #meta.date.day().
#localize_month_fi_partitive(meta.date.month())
#meta.date.year(),\
#meta.author
] else [
= Preface
#text
In #meta.location on #meta.date.day(). #meta.date.display("[month repr:long] [year]"),\
#meta.author
]
}
/** glossary
*
* Typesets the glossary based on the file content/glossary.typ.
*
***/
#let glossary() = {
import "template/content/glossary.typ"
set rect(
inset: 0pt,
fill: none,
stroke: none,
width: 100%,
)
if meta.language == finnish [
= Lyhenteet ja merkinnät
] else [
= Glossary
]
set text( size : 12pt )
stack(
dir: ttb,
for key in glossary.glossary_words.keys().sorted() {
let name = glossary.glossary_words.at(key).name
let description = glossary.glossary_words.at(key).description
stack(
dir: ltr,
spacing : 5%,
rect(width: 20%)[
#set text(fill:tunipurple)
*#align(left,name)*
],
rect(width: 75%)[#description]
)
}
)
}
/** tautheoremblock
*
* Defines the basic shape of Tampere University mathematics theorem blocks.
* Below are also defined some common mathematical theorem blocks.
*
**/
#let tautheoremblock(
fill: rgb("#ffffff"),
supplement: [TAUTheoremBlock],
title: [],
reflabel: "",
content
) = context {
let docpart = DOCPARTSTATE.get()
let inappendix = docpart == APPENDIXPART
let ctr = TAUTHEOREMCOUNTER.get()
ctr.last() += 1
[
#figure(
kind: TAUTHEOREMKIND,
supplement: supplement,
block(
fill: fill,
inset: 8pt,
radius: 4pt,
stroke: tunipurple,
width: 100%,
breakable: true,
align(
left,
[
#text(weight: "bold", fill: tunipurple, [#supplement #array2sectionnumbering(ctr,inappendix)])
// Content converted to string with repr always has a lenght ≥ 2.
#if repr(title).len() > 2 [
(#text(weight: "bold", fill: tunipurple, title))
]
#content
],
)
)
)
#if reflabel.len() > 0 { label(reflabel) }
]
}
#let definition(title: "", reflabel: "", content) = tautheoremblock(
supplement: "Definition",
reflabel: reflabel,
title: title,
content
)
#let lemma(title: "", reflabel: "", content) = tautheoremblock(
supplement: "Lemma",
reflabel: reflabel,
title: title,
content
)
#let theorem(title: "", reflabel: "", content) = tautheoremblock(
supplement: "Theorem",
reflabel: reflabel,
title: title,
content
)
#let corollary(title: "", reflabel: "", content) = tautheoremblock(
supplement: "Corollary",
reflabel: reflabel,
title: title,
content
)
#let example(title: "", reflabel: "", content) = tautheoremblock(
supplement: "Example",
reflabel: reflabel,
title: title,
content
)
#let määritelmä(title: "", reflabel: "", content) = tautheoremblock(
supplement: "Määritelmä",
reflabel: reflabel,
title: title,
content
)
#let apulause(title: "", reflabel: "", content) = tautheoremblock(
supplement: "Apulause",
reflabel: reflabel,
title: title,
content
)
#let lause(title: "", reflabel: "", content) = tautheoremblock(
supplement: "Lause",
reflabel: reflabel,
title: title,
content
)
#let seurauslause(title: "", reflabel: "", content) = tautheoremblock(
supplement: "Seurauslause",
reflabel: reflabel,
title: title,
content
)
#let esimerkki(title: "", reflabel: "", content) = tautheoremblock(
supplement: "Esimerkki",
reflabel: reflabel,
title: title,
content
)
/** mainmatterpagesettings(set-page-number : false, it)
*
* Defines page dimensions in the main matter.
*
***/
#let mainmatterpagesettings(it) = {
set page(
numbering: "1",
number-align: right + top,
header: context {
MAINMATTERPAGENUMBERCOUNTER.step()
let headings_before = query(
heading.where(level: 1).and(
heading.where(outlined: true)
).before(here())
)
let headings_after = query(
heading.where(outlined: true).after(here()),
)
let lastheadbefore = headings_before.at(-1, default: none)
let firstheadafter = headings_after.at(0, default: none)
let currentpage = here().page()
// Display current chapter title in left header, if we have
// moved past the chapter start page. Lower level section
// titles are not displayed.
let lefttext = box(
if lastheadbefore != none {
let lastpagebefore = lastheadbefore.location().page()
let locpagediff = currentpage - lastpagebefore
if firstheadafter != none {
let firstpageafter = firstheadafter.location().page()
if firstheadafter.level == 1 and currentpage < firstpageafter {
// On the last page before next chapter.
smallcaps( lastheadbefore.body )
}
else if locpagediff >= 1 and (firstheadafter.level > lastheadbefore.level) {
// Pages following chapter title but not last page before next chapter.
smallcaps( lastheadbefore.body )
}
else {
// On an outlined chapter-starting page.
[]
}
}
else {
// No following outlined chapters or lower-level sections. Usually the last page of the document.
smallcaps( lastheadbefore.body )
}
}
else {
// No preceding outlined chapters.
[]
}
)
let rightext = MAINMATTERPAGENUMBERCOUNTER.display()
lefttext + h(1fr) + rightext
},
footer: {},
margin: (
top: 2.5cm,
bottom: 2.5cm,
left: 4cm,
right: 2cm,
)
)
it
}
/** content_figures()
*
* A list of images.
*
***/
#let content_figures() = {
outline(
title:
if meta.language == finnish
[Kuvaluettelo]
else if meta.language == english
[List of figures],
target: figure.where(kind: image),
indent: auto,
)
}
/** content_tables()
*
* A list of tables.
*
***/
#let content_tables() = {
outline(
title:
if meta.language == finnish
[Taulukkoluettelo]
else if meta.language == english
[List of tables],
target: figure.where(kind: table),
indent: auto,
)
}
/** content_listings()
*
* A list of listings.
*
***/
#let content_listings() = {
outline(
title:
if meta.language == finnish
[Ohjelma- ja algoritmiluettelo]
else if meta.language == english
[Listings],
target: figure.where(kind: raw),
indent: auto,
)
}
/** tauthesis( fignumberwithinlevel : 1, eqnumberwithinlevel : 1, textfont : ("Open Sans", "Arial", "Helvetica", "Fira Sans", "DejaVu Sans"), mathfont : ("New Computer Modern Math"), codefont : ("Fira Mono", "JuliaMono", "Cascadia Code", "DejaVu Sans Mono"), doc)
*
* Defines the structure and look of Tampere University theses.
*
***/
#let tauthesis(
fignumberwithinlevel : 1,
eqnumberwithinlevel : 1,
textfont : ("Open Sans", "Arial", "Helvetica", "Fira Sans", "DejaVu Sans"),
mathfont : ("New Computer Modern Math"),
codefont : ("Fira Mono", "JuliaMono", "Cascadia Code", "DejaVu Sans Mono"),
doc
) = {
if not meta.language in (finnish, english) {
panic( templatename + ": unrecognized primary language '" + meta.language + "' set in the meta.typ file. Only 'fi' and 'en' are accepted." )
}
if fignumberwithinlevel < 1 or fignumberwithinlevel > 4 {
panic( "Argument fignumberwithinlevel must be in the set {1,2,3,4}. Received " + str(eqnumberwithinlevel) + ".")
}
if eqnumberwithinlevel < 1 or eqnumberwithinlevel > 4 {
panic( "Argument eqnumberwithinlevel must be in the set {1,2,3,4}. Received " + str(eqnumberwithinlevel) + ".")
}
set document(
author: meta.author,
title: if meta.language == finnish {
meta.otsikko
} else {
meta.title
},
keywords: if meta.language == finnish {
meta.avainsanat
} else {
meta.keywords
},
date: meta.date,
)
// Color different kinds of links.
show link: it => {
set text ( fill: rgb("#0038FF") )
it
}
show cite: it => {
set text( fill: rgb("#FF1B24") )
it
}
// Initialize helper counters in array format.
let counterlistlen = fignumberwithinlevel + 1
IMAGECOUNTER.update((0,) * counterlistlen)
TABLECOUNTER.update((0,) * counterlistlen)
CODECOUNTER.update((0,) * counterlistlen)
TAUTHEOREMCOUNTER.update((0,) * counterlistlen)
FIGNUMDEPTHSTATE.update( fignumberwithinlevel )
EQNUMDEPTHSTATE.update( eqnumberwithinlevel )
// Update helper counters when associated figures are shown.
show figure: it => figshowrule(it)
show heading: set block( above: 1.4em, below: 2em )
// Note: this automatic addition of page breaks before level
// 1 headings might be partially responsible for the
// introduction page numbering bug discussed below.
show heading: it => headingshowrule(it)
show raw: set text( font : codefont, size: tunicodefontsize)
show raw.where( block : false ) : it => {
box(
inset : (x : 0.2 * tunicodefontsize),
outset : (y : 0.2 * tunicodefontsize),
fill : luma(230),
radius : 0.25 * tunicodefontsize,
it
)
}
show raw.where( block : true ): it => {
show raw.line: it => [
#box(width : 0.4cm)[#align( right )[#it.number]]
#h(0.1cm)
#it.body
]
align(
left,
block(
radius : 0.2cm,
width: 100%,
fill: rgb(250, 251, 249),
inset: tunicodeblockinset,
it
)
)
}
set text( font: textfont, lang: meta.language, size: 11pt )
show math.equation: set text(font: mathfont, size: 12pt)
titlepage()
set page(
numbering: "i",
number-align: bottom + center,
footer: context {
PREFACEPAGENUMBERCOUNTER.step()
align(center)[#PREFACEPAGENUMBERCOUNTER.display("i")]
}
)
PREFACEPAGENUMBERCOUNTER.update(1)
DOCPARTSTATE.update(DOCPREAMBLEPART)
set heading(numbering: none, outlined: false)
set par( justify : true )
let abstracttext = include "template/content/abstract.typ"
let tiivistelmäteksti = include "template/content/tiivistelmä.typ"
if meta.language == finnish {
abstract(tiivistelmäteksti, finnish)
abstract(abstracttext, english)
} else if meta.language == english {
abstract(abstracttext, english)
abstract(tiivistelmäteksti, finnish)
} else {
panic( templatename + ": received an unknown language " + meta.language + "from document metadata. Must be one of {\"fi\", \"en\"}" )
}
// Adjust how ToC entries appear in their respective listings.
show outline.entry: it => outlineentryshowrule(it)
// Hide any citations before main matter, so that citation counter is not
// incremented before main matter starts.
{
set footnote.entry( separator: none)
show footnote.entry: hide
show ref: none
show footnote: none
outline(depth: 3,indent:auto)
preface()
glossary()
content_figures()
content_tables()
content_listings()
}
DOCPARTSTATE.update(MAINMATTERPART)
// This first pair of page and page counter settings, which
// is almost identical to the immediately following one is
// needed to make sure the introduction page has a number of
// 1. The difference to the following pair is that in this
// one, we update the page number to 1 in the header setting.
MAINMATTERPAGENUMBERCOUNTER.update( 1 )
show: doc => mainmatterpagesettings(doc)
set math.vec(delim : "[")
set math.mat(delim : "[")
set math.equation(
numbering: nn => context {
let ctrarr = counter(heading).get()
let ctrarr = ctrarr.slice(0,calc.min(eqnumberwithinlevel,ctrarr.len()))
"(" + array2sectionnumbering(ctrarr, false) + "." + str(nn) + ")"
},
supplement: none
)
set par( leading: 0.55em, first-line-indent: 0em, justify: true )
show par: set block(spacing: 1em)
set heading( numbering: "1.1" + headingsep, outlined: true, supplement: none )
// Adjust references, so they display figures and such using
// the auxiliary counters.
show ref : it => refshowrule( it, eqnumberwithinlevel : eqnumberwithinlevel)
doc
}
/** bibsettings( doc )
*
* Lets show rules know that we are currently displaying the bibliography.
*
***/
#let bibsettings( doc ) = {
DOCPARTSTATE.update(BIBLIOGRAPHYPART)
doc
}
/** appendix( fignumberwithinlevel : 1, eqnumberwithinlevel : 1, doc )
*
* Defines the appearance of the appendix, supposing that the
* above tauthesis has already defined the general appearance of
* the document.
*
***/
#let appendix( fignumberwithinlevel : 1, eqnumberwithinlevel : 1, doc ) = {
if fignumberwithinlevel < 1 or fignumberwithinlevel > 4 {
panic( "Argument fignumberwithinlevel must be in the set {1,2,3,4}. Received " + str(eqnumberwithinlevel) + ".")
}
if eqnumberwithinlevel < 1 or eqnumberwithinlevel > 4 {
panic( "Argument eqnumberwithinlevel must be in the set {1,2,3,4}. Received " + str(eqnumberwithinlevel) + ".")
}
// Reset counters at the start of appendix.
let counterlistlen = fignumberwithinlevel + 1
IMAGECOUNTER.update((0,) * counterlistlen)
TABLECOUNTER.update((0,) * counterlistlen)
CODECOUNTER.update((0,) * counterlistlen)
TAUTHEOREMCOUNTER.update((0,) * counterlistlen)
counter(heading).update( 0 )
counter(math.equation).update( 0 )
set heading(numbering: "A.1", supplement: none)
show heading: set block( above: 1.4em, below: 2em )
show heading : it => headingshowrule(it)
set math.equation(
numbering: nn => context {
let ctrarr = counter(heading).get()
let ctrarr = ctrarr.slice(0,calc.min(eqnumberwithinlevel,ctrarr.len()))
"(" + array2sectionnumbering(ctrarr, true) + "." + str(nn) + ")"
},
supplement: none
)
show ref : it => refshowrule( it, eqnumberwithinlevel : eqnumberwithinlevel)
DOCPARTSTATE.update(APPENDIXPART)
FIGNUMDEPTHSTATE.update( fignumberwithinlevel )
EQNUMDEPTHSTATE.update( eqnumberwithinlevel )
show heading.where(level: 4): it =>[
#block(it.body)
]
show heading: it => {
if it.level > 4 {
panic("Headings with level greater than 4 are not allowed in a Tampere University thesis. Please reconsider your document structure.")
}
smallcaps(it)
}
doc
}
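/** Usage sketch (illustrative only, not part of the template itself):
 * a thesis main file might combine the wrappers defined in this file
 * roughly as follows. The file name and option values shown here are
 * invented examples, not values prescribed by the template.
 *
 *   #show: tauthesis              // general appearance, defined earlier in this file
 *   = Introduction
 *   ...
 *   #show: bibsettings
 *   #bibliography("sources.bib")
 *   #show: appendix.with(fignumberwithinlevel: 1, eqnumberwithinlevel: 1)
 *   = First appendix
 ***/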
|
https://github.com/SWATEngineering/Docs | https://raw.githubusercontent.com/SWATEngineering/Docs/main/src/2_RTB/VerbaliInterni/VerbaleInterno_231208/meta.typ | typst | MIT License | #let data_incontro = "08-12-2023"
#let inizio_incontro = "10:30"
#let fine_incontro = "12:00"
#let luogo_incontro = "Discord" |
https://github.com/alperari/cyber-physical-systems | https://raw.githubusercontent.com/alperari/cyber-physical-systems/main/week12/solution.typ | typst | #import "@preview/diagraph:0.1.2": *
#set text(
size: 15pt,
)
#set page(
paper: "a4",
margin: (x: 1.8cm, y: 1.5cm),
)
#align(center, text(21pt)[
*Cyber Physical Systems - Discrete Models \
Exercise Sheet 12 Solution*
])
#grid(
columns: (1fr, 1fr),
align(center)[
<NAME> \
<EMAIL>
],
align(center)[
<NAME> \
<EMAIL>
]
)
#align(center)[
January 21, 2023
]
== Exercise 1: Lecture Evaluation
We did the lecture evaluation.
== Exercise 2: LTL Properties
=== (a)
$
&phi_1 = a and circle b : tau = {a}{b}^omega tack.double phi_1 \
&phi_2 : tau = {a}{a}{a}{b}^omega \
&phi_3 : tau = {a}{a}{b}{a}^omega \
&phi_4 : tau = {b}{b}{c}{a}^omega \
&phi_5 : tau = {c}{c}{a}^omega \
&phi_6 : tau = {b}{b}({a}{c})^omega \
$
=== (b)
$
& not phi_1 : tau = {a}^omega \
& not phi_2 : tau = {c}^omega \
& not phi_3 : tau = {a}{b}^omega \
& not phi_4 : tau = {c}^omega \
& not phi_5 : tau = ({b}{a})^omega \
& not phi_6 : tau = {c}{a}^omega \
$
=== (c)
Let $T$ be the Transition System
- $T tack.double.not phi_1$. Counterexample: $"trace"(s_0 s_2 ...) = {b} {a} ...$
- $T tack.double phi_2$. Because first trace is ${b} {a} ...$ which immediately starts with $b$ therefore satisfies and the second trace is ${a, c} {a} {a, b} ...$ which also contains $a$ until $b$.
- $T tack.double.not phi_3$. Counterexample: $"trace"(s_1 s_2 s_3^omega) = {a, c} {a} {a, b}^omega$, which satisfies $a union square b$ and therefore violates $phi_3$.
- $T tack.double.not phi_4$. Counterexample: $"trace"(s_0 s_2 s_3) = {b} {a} {a, b}^omega$ doesn't contain $a$ in the initial state and also there is no eventually $c$ for the first state. Therefore it is not in $"Words"(phi_4)$.
- $T tack.double phi_5$. The infinite parts of each trace satisfies "always $a$". Therefore, all traces are in $"Words"(phi_5)$.
- $T tack.double.not phi_6$. Counterexample: $"trace"(s_0 s_2 s_3^omega) = {b} {a} {a, b}^omega$ doesn't have $c$ at all. Therefore, "eventually $c$" can't be satisfied.
=== (d)
$
& "Words"(phi_1) = \
& {A_0 A_1 ... in (2^"AP")^omega | a in A_0 and b in A_1} \
& "Words"(phi_2) = \
& {A_0 A_1 ... in (2^"AP")^omega | exists i in NN . space (forall j < i . space a in A_j) and b in A_i} \
& "Words"(phi_3) = \
& {A_0 A_1 ... in (2^"AP")^omega | forall i in NN . space (exists j < i . space a in.not A_j) or (exists j >= i . space b in.not A_j)} \
& "Words"(phi_4) = \
& {A_0 A_1 ... in (2^"AP")^omega | exists i in NN . space (forall j < i . space (exists k >= j . space c in A_k)) and (forall j >= i . space a in A_j)} \
& "Words"(phi_5) = \
& {A_0 A_1 ... in (2^"AP")^omega | exists i in NN . space (forall j >= i . space a in A_j)} \
& "Words"(phi_6) = \
& {A_0 A_1 ... in (2^"AP")^omega | forall i in NN . space exists j >= i . space c in A_j} \
$
#pagebreak()
== Exercise 3: Stating properties in LTL
$
phi_a = square (not "Peter"."use" or not "Betsy"."use") \
$
The wording is ambiguous. "a user can print only for a finite amount of time" can be either interpreted as:
1. For each time the user starts printing, user stops printing in a finite amount of time.
2. Each user only prints finitely many times in total.
We choose interpretation 1.
$
phi_b =
& square ("Peter.use" -> diamond not "Peter.use") and \
& square ("Betsy.use" -> diamond not "Betsy.use") \
$
$
phi_c =
&square ("Peter"."request" -> diamond "Peter"."use") and \
&square("Betsy"."request" -> diamond "Betsy"."use") \
$
$
phi_d =
&(square ("Peter"."request" -> diamond not "Peter"."request")) and \
&(square ("Betsy"."request" -> diamond not "Betsy"."request")) \
$
$
phi_e =
&square ("Peter.use" -> (not "Peter.use") union "Betsy.use") and \
&square ("Betsy.use" -> (not "Betsy.use") union "Peter.use")
$
== Exercise 4: Equivalence of LTL formulas
Note: Atomic propositions of the Transition System are notated under the state name.
- $square a and circle diamond a limits(eq.triple)^? square a = "true"$
- $diamond a and circle square a limits(eq.triple)^? diamond a = "false"$. Counter example TS:
#raw-render()[```dot
digraph {
rankdir=LR;
node [fixedsize=true, width=0.75, height=0.75];
start [fixedsize=true; width=0, height=0, label=""];
s0 [label="s0\n{a}"];
s1 [label="s1\n{b}"];
start -> s0;
s0 -> s1;
s1 -> s1;
}
```]
satisfies $diamond a$ but not $circle square a$.
- $square a -> diamond b limits(eq.triple)^? a union (b or not a) = "true"$.
- $a union "false" limits(eq.triple)^? square a = "false"$. Counter example TS:
#raw-render()[```dot
digraph {
rankdir=LR;
node [fixedsize=true, width=0.75, height=0.75];
start [fixedsize=true; width=0, height=0, label=""];
s0 [label="s0\n{a}"];
start -> s0;
s0 -> s0;
}
```]
satisfies $square a$ but not $a union "false"$.
- $square circle b limits(eq.triple)^? square b = "false"$. Counter example:
#raw-render()[```dot
digraph {
rankdir=LR;
node [fixedsize=true, width=0.75, height=0.75];
start [fixedsize=true; width=0, height=0, label=""];
s0 [label="s0\n{a}"];
s1 [label="s1\n{b}"];
start -> s0;
s0 -> s1;
s1 -> s1;
}
```]
satisfies $square circle b$ but not $square b$.
=== Proofs
==== Proof 1: $square a and circle diamond a eq.triple square a$
Assuming $"Words"(square a) subset.eq "Words"(circle square a)$, $square a and circle diamond a eq.triple square a$ because intersection with a subset results with the subset.
Proving $"Words"(square a) subset.eq "Words"(circle diamond a)$:
$
"Words"(square a) = { A_0 A_1 ... in (2^"AP")^omega | forall i in NN . space a in A_i } \
"Words"(circle diamond a) = { A_0 A_1 ... in (2^"AP")^omega | forall i > 0 . space exists j >= i . space a in A_i } \
$
Let $sigma in "Words"(square a)$. $sigma in "Words"(circle diamond a)$ because for any $sigma$, we can take $i = 1$ and $j = 1$ which contains $a$ and therefore $sigma tack.double circle diamond a$.
$qed$
==== Proof 2: $square a -> diamond b eq.triple a union (b or not a)$
$a union (b or not a) eq.triple ("true" union (b or not a))$: if $b or not a$ holds for the first time at some position, then at every earlier position neither $b$ nor $not a$ holds, so $a$ holds there and the until-condition on $a$ is satisfied automatically. Also $"true" union (b or not a) eq.triple diamond (b or not a)$ by the definition of the $diamond$ operator.
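Written as a chain in the same style as the one below (this only restates the two steps just given; nothing new is assumed):
$
a union (b or not a)
& eq.triple "true" union (b or not a) \
& eq.triple diamond (b or not a) \
& eq.triple diamond (not a or b)
$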
For $square a -> diamond b $:
$
square a -> diamond b
& eq.triple not square a or diamond b \
& eq.triple diamond not a or diamond b \
& eq.triple diamond (not a or b)
$
Since both formulas are equivalent to the same LTL formula $diamond (not a or b)$, they are equivalent to each other as well.
$qed$
|
|
https://github.com/EunTilofy/Compiler2024 | https://raw.githubusercontent.com/EunTilofy/Compiler2024/main/hw/编译原理-Chapter3.typ | typst | #import "../template.typ": *
#show: project.with(
course: "编译原理",
title: "Compilers Principals - Chapter3",
date: "2024.3.07",
authors: "<NAME>, 3210106357",
has_cover: false
)
// #show: rest => columns(2, rest)
*Problems:3.6,3.9,3.13,3.14.*
= Problem 3.6
#HWProb[
a. Calculate nullable, FIRST, and FOLLOW for this grammar:
$
&S arrow u B D z \
&B arrow B v \
&B arrow w \
&D arrow E F \
&E arrow y \
&E arrow \
&F arrow x \
&F arrow
$
b. Construct the LL(1) parsing table.
c. Give evidence that this grammar is not LL(1).
d. Modify the grammar *as little as possible* to make an LL(1) grammar that accepts the same language.
]
#solution[ \
a.
#figure(caption: "nullable, FIRST and FOLLOW")[
#tablem[
|| *nullable* | *FIRST* | *FOLLOW* |
| - | ---- | - | - |
| S | no | u | |
| B | no | w | x, y, z|
| D | yes | x, y | z |
| E | yes | y | x, z |
| F | yes | x | z |
]
]
b.
#figure(caption: "LL(1) parsing table")[
#tablem[
|| u | v | w | x | y | z |
| - | - | - | - | - | - | - |
| S | $S arrow u B D z$ | | | | | |
| B | | | $B arrow w, B arrow B v$ | | | |
| D | | | | $D arrow E F$ | $D arrow E F$ | $D arrow E F$ |
| E | | | | $E arrow $ | $E arrow y$ | $E arrow $ |
| F | | | | $F arrow x$ | | $F arrow $ |
]
]
c. No, according to the parsing table, we get two possible production when
the top of stack of non-terminal is B and the input token is w, $B arrow w, B arrow B v$.
\
d. I modify the grammar as follows:
$
&S arrow u B E F z \
&B arrow w B' \
&B' arrow v B' \
&B' arrow \
&E arrow y \
&E arrow \
&F arrow x \
&F arrow
$
]
= Problem 3.9
#HWProb[
Diagram the LR(0) states for Grammar 3.26, build the SLR parsing table, and identify the conflicts.
]
#solution[ \
#import "@preview/diagraph:0.2.0": *
#let renderc(code) = render(code.text)
#box(scale(70%, origin: top)[
#figure(caption: "LR(0) state diagram")[
#renderc(
```
digraph G {
rankdir = TB;
ranksep = 0.2
100 [style = invis, label = ""];
node [shape = rectangle];
1 [label = "1.
S' → S$
S → .V=E
S → .E
E → .V
V → .x
V → .*E"];
3 [label = "3. S → E"];
2 [label = "2. S → V.=E
E → V"];
4 [label = "4.
V → *.E
E → .V
V → .x
V → .*E"];
5 [label = "5.E → V."];
6 [label = "6. V → *E."];
7 [label = "7. V → x."];
8 [label = "8.
S → V=.E
E → .V
V → .x
V → .*E"];
9 [label = "9. S → V=E"];
10 [label = "10. S' → S.$"]
{rank=same; 1, 7;}
{rank=same; 2, 3, 5, 4;}
// assis1 [style = invis, label = ""];
// 3->assis1 [style = invis];
{rank=same; 8, 9;}
{rank=same; 10, 6;}
100->1;
1->2 [label = "V"];
1->10 [label = "S"];
1->3 [label = "E"];
1->4 [label = "*"];
4->4 [label = "*"];
4->6 [label = "E"];
4->5 [label = "V"];
4->7 [label = "x"];
1->7 [label = "x"];
8->7 [label = "x"];
8->4 [label = "*"];
8->9 [label = "E"];
8->5 [label = "V"];
2->8 [label = "="];
}
```
)]])
#figure(caption: "SLR parsing table")[
#tablem[
|| x | \* | \$ | = | E | V | S |
| - | - | - | - | - | - | - | - |
| 1 | s7 | s4 | | | g3 | g2 | g10 |
| 2 | | | r3 | s8, r3 | | | |
| 3 | r10 | r10 | r10 | r10 | | | |
| 4 | s7 | s4 | | | g6 | g5 | |
| 5 | | | r6 | r6 | | | |
| 6 | | | r5 | r5 | | | |
| 7 | | | r2 | r2 | | | |
| 8 | s7 | s4 | | | g9 | g5 | |
| 9 | | | r10 | | | | |
| 10 | | | a | | | | |
]
]
]
= Problem 3.13
#HWProb[
Show that this grammar is LALR(1) but not SLR:
$
&kk_0 quad S arrow X kk \$
quad quad quad quad
kk_3 quad X arrow d kk c \
&kk_1 quad X arrow M kk a
quad quad quad kk kk
kk_4 quad X arrow b kk d kk a \
&kk_2 quad X arrow b kk M kk c
quad quad quad
kk_5 quad M arrow d \
$
]
#solution[\
Follow(X) = {\$}, Follow(M) = {a, c}
#import "@preview/diagraph:0.2.0": *
#let renderc(code) = render(code.text)
#box(scale(70%, origin: top)[
#figure(caption: "SLR state diagram")[
#renderc(
```
digraph G {
rankdir = TB;
ranksep = 0.2
100 [style = invis, label = ""];
node [shape = rectangle];
1 [label = "1.
S → .X$
X → .Ma
X → .bMc
X → .dc
X → .bda
M → .d"];
3 [label = "3. X → M.a"];
2 [label = "2. S → X.$ "];
4 [label = "4.
X → b.Mc
M → .d
X → b.da"];
5 [label = "5.
M → d.
X → d.c"];
6 [label = "6. X → dc."];
7 [label = "7. X → Ma."];
8 [label = "8. X → bM.c"];
9 [label = "9. M → d.
X → bd.a"];
10 [label = "10. X → bMc."]
11 [label = "11. x → bda."]
{rank=same; 1;}
{rank=same; 2, 3, 4, 5, 6;}
// assis1 [style = invis, label = ""];
// 3->assis1 [style = invis];
{rank=same; 7, 8, 9;}
{rank=same; 10, 11;}
100->1;
1->2 [label = "X"];
1->5 [label = "d"];
1->3 [label = "M"];
1->4 [label = "b"];
4->8 [label = "M"];
4->9 [label = "d"];
5->6 [label = "c"];
3->7 [label = "a"];
8->10 [label = "c"];
9->11 [label = "a"];
}
```
)]])
At state 5 with input c there are two possible actions:
either reduce by $M arrow d$ (since c is in Follow(M))
or shift to state 6, a shift-reduce conflict.
So it is not SLR.
#box(scale(70%, origin: top)[
#figure(caption: "LR(1) state diagram")[
#renderc(
```
digraph G {
rankdir = TB;
ranksep = 0.2
100 [style = invis, label = ""];
node [shape = rectangle];
1 [label = "1.
S → .X$ ?
X → .Ma $
X → .bMc $
X → .dc $
X → .bda $
M → .d a"];
3 [label = "3. X → M.a"];
2 [label = "2. S → X.$ ?"];
4 [label = "4.
X → b.Mc $
M → .d c
X → b.da $"];
5 [label = "5.
M → d. a
X → d.c $"];
6 [label = "6. X → dc. $"]
7 [label = "7. X → Ma. $"];
8 [label = "8. X → bM.c $"];
9 [label = "9. M → d. c
X → bd.a $"];
10 [label = "10. X → bMc. $"]
11 [label = "11. x → bda. $"]
{rank=same; 1;}
{rank=same; 2, 3, 4, 5, 6;}
// assis1 [style = invis, label = ""];
// 3->assis1 [style = invis];
{rank=same; 7, 8, 9;}
{rank=same; 10, 11;}
100->1;
1->2 [label = "X"];
1->5 [label = "d"];
1->3 [label = "M"];
1->4 [label = "b"];
4->8 [label = "M"];
4->9 [label = "d"];
5->6 [label = "c"];
3->7 [label = "a"];
8->10 [label = "c"];
9->11 [label = "a"];
}
```
)]])
The grammar is LALR(1): merging the LR(1) states with equal cores introduces no conflicts, as the parsing table below shows.
#figure(caption: "LALR(1) parsing table")[
#tablem[
|| a | b | c | d | \$ | X | M |
| - | - | - | - | - | - | - | - |
| 1 | | s4 | | s5 | | g2 | g3 |
| 2 | | | | | a | | |
| 3 | s6 | | | | | | |
| 4 | | | | s8 | | | g7 |
| 5 | r3 | | | s9 | | | |
| 6 | | | | | r2 | | |
| 7 | | | s10 | | | | |
| 8 | s11 | | r7 | | | | |
| 9 | | | | | r2 | | |
| 10 | | | | | r2 | | |
| 11 | | | | | r2 | | |
]
]
]
= Problem 3.14
#show math.equation: set align(left)
#HWProb[
Show that this grammar is LL(1) but not LALR(1):
$
quad quad quad quad quad quad quad quad quad quad quad quad quad quad quad
&kk_1 quad S arrow ( kk X
quad quad quad
&kk_5 quad X arrow F kk ] \
&kk_2 quad S arrow E kk ]
&kk_6 quad E arrow A \
&kk_3 quad S arrow F kk)
&kk_7 quad F arrow A \
&kk_4 quad X arrow E kk)
&kk_8 quad A arrow \
$
]
#solution[
The grammar is LL(1), since every cell of the parsing table below contains at most one production.
#figure(caption: "LL(1) Parsing table")[
#tablex(
columns: 5,
// auto-hlines: false,
auto-vlines: false,
(), vlinex(), (), (), (),
[] , "(", ")", "[", "]",
[X], [], $X arrow E)$, [], $X arrow F ]$,
[S], [$S arrow (X$], [$S arrow F)$], [], [$S arrow E ]$],
[E], [], [$E arrow A$], [], [$E arrow A$],
[F], [], [$F arrow A$], [], [$F arrow A$],
[A], [], [$A arrow$], [], [$A arrow$]
)]
In state 7 both $E arrow A$ and $F arrow A$ are complete items with the same lookaheads ) and ], a reduce-reduce conflict, so the grammar is not LALR(1).
#import "@preview/diagraph:0.2.0": *
#let renderc(code) = render(code.text)
#block[
#box(scale(60%, origin: top)[
#figure(caption: "LALR(1) state diagram")[
#renderc(
```
digraph G {
rankdir = TB;
ranksep = 0.2
100 [style = invis, label = ""];
node [shape = rectangle];
1 [label = "1.
S → .( X $
S → .E ] $
S → .F ) $
E → . A ]
X → . A ]
M → . )]"];
3 [label = "3. S → E.]$"];
2 [label = "2.
S → (.X $
X → . F ] $
X → . E ) $
E → . A )
F → . A ]
A → . ) ]"];
4 [label = "4. S → F.) $"];
5 [label = "5. S → E ] $"];
6 [label = "6. S → F). $"];
7 [label = "7. E → A. )]
F → A. )]"];
8 [label = "8. S → (X. $"];
9 [label = "9. X → F]. $"];
10 [label = "10. X → E.) $"]
11 [label = "11. X → F].$"]
12 [label = "12. X → E).$"]
{rank=same; 1, 2;}
{rank=same; 3, 4, 7, 9, 8, 10;}
// assis1 [style = invis, label = ""];
// 3->assis1 [style = invis];
{rank=same; 5, 6, 11, 12;}
100->1;
1->3 [label = "E"];
1->4 [label = "F"];
1->7 [label = "A"];
3->5 [label = "]"];
4->6 [label = ")"];
1->2 [label = "("];
2->7 [label = "A"];
2->9 [label = "F"];
9->11 [label = "]"];
2->10 [label = "E"];
10->12 [label = ")"];
2->8 [lable = "X"];
}
```
)]])]
] |
|
https://github.com/EpicEricEE/typst-equate | https://raw.githubusercontent.com/EpicEricEE/typst-equate/master/tests/number-mode/test.typ | typst | MIT License | #import "../../src/lib.typ": equate
#set page(width: 6cm, height: auto, margin: 1em)
#show: equate.with(number-mode: "label")
// Test correct counter incrementation with number-mode "label".
#set math.equation(numbering: "(1.1)")
$ a + b #<label> $
$ c + d \
e + f #<label> \
g + h \
i + j #<label> $
$ k + l $
#set math.equation(numbering: "(1a)")
$ m + n \
o + p $ <label>
$ q + r #<label> \
s + t $ <label>
$ u + v \
w + x $ <equate:revoke>
$ y + z \
1 + 2 #<equate:revoke> \
3 + 4 $ <label>
#show: equate.with(sub-numbering: true, number-mode: "label")
#set math.equation(numbering: "(1.1)")
$ a + b $ <label>
$ c + d \
e + f #<label> \
g + h \
i + j #<label> $
$ k + l $
#set math.equation(numbering: "(1a)")
$ m + n \
o + p $ <label>
$ q + r #<label> \
s + t $ <label>
$ u + v \
w + x $ <equate:revoke>
$ y + z \
1 + 2 #<equate:revoke> \
3 + 4 $ <label>
|
https://github.com/yhtq/Notes | https://raw.githubusercontent.com/yhtq/Notes/main/.VSCodeCounter/2024-10-17_10-15-14/details.md | markdown | # Details
Date : 2024-10-17 10:15:14
Directory /home/yhtq/学习/课程
Total : 104 files, 39765 codes, 706 comments, 1463 blanks, all 41934 lines
[Summary](results.md) / Details / [Diff Summary](diff.md) / [Diff Details](diff-details.md)
## Files
| filename | language | code | comment | blank | total |
| :--- | :--- | ---: | ---: | ---: | ---: |
| [template.typ](/template.typ) | Typst | 403 | 122 | 31 | 556 |
| [typst-sympy-calculator.typ](/typst-sympy-calculator.typ) | Typst | 69 | 3 | 11 | 83 |
| [代数学二/main.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/main.typ) | Typst | 10 | 2 | 0 | 12 |
| [代数学二/习题课.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/%E4%B9%A0%E9%A2%98%E8%AF%BE.typ) | Typst | 384 | 2 | 2 | 388 |
| [代数学二/作业/hw1.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/%E4%BD%9C%E4%B8%9A/hw1.typ) | Typst | 215 | 2 | 4 | 221 |
| [代数学二/作业/hw10.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/%E4%BD%9C%E4%B8%9A/hw10.typ) | Typst | 172 | 0 | 8 | 180 |
| [代数学二/作业/hw11.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/%E4%BD%9C%E4%B8%9A/hw11.typ) | Typst | 97 | 0 | 5 | 102 |
| [代数学二/作业/hw12.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/%E4%BD%9C%E4%B8%9A/hw12.typ) | Typst | 218 | 0 | 4 | 222 |
| [代数学二/作业/hw2.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/%E4%BD%9C%E4%B8%9A/hw2.typ) | Typst | 259 | 2 | 4 | 265 |
| [代数学二/作业/hw3.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/%E4%BD%9C%E4%B8%9A/hw3.typ) | Typst | 335 | 23 | 7 | 365 |
| [代数学二/作业/hw4.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/%E4%BD%9C%E4%B8%9A/hw4.typ) | Typst | 253 | 15 | 13 | 281 |
| [代数学二/作业/hw5.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/%E4%BD%9C%E4%B8%9A/hw5.typ) | Typst | 291 | 11 | 6 | 308 |
| [代数学二/作业/hw6.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/%E4%BD%9C%E4%B8%9A/hw6.typ) | Typst | 480 | 0 | 8 | 488 |
| [代数学二/作业/hw7.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/%E4%BD%9C%E4%B8%9A/hw7.typ) | Typst | 119 | 0 | 6 | 125 |
| [代数学二/作业/hw8.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/%E4%BD%9C%E4%B8%9A/hw8.typ) | Typst | 331 | 1 | 5 | 337 |
| [代数学二/作业/hw9.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/%E4%BD%9C%E4%B8%9A/hw9.typ) | Typst | 255 | 0 | 12 | 267 |
| [代数学二/章节/test.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/%E7%AB%A0%E8%8A%82/test.typ) | Typst | 0 | 0 | 1 | 1 |
| [代数学二/章节/上半学期.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/%E7%AB%A0%E8%8A%82/%E4%B8%8A%E5%8D%8A%E5%AD%A6%E6%9C%9F.typ) | Typst | 3,111 | 11 | 74 | 3,196 |
| [代数学二/章节/下半学期.typ](/%E4%BB%A3%E6%95%B0%E5%AD%A6%E4%BA%8C/%E7%AB%A0%E8%8A%82/%E4%B8%8B%E5%8D%8A%E5%AD%A6%E6%9C%9F.typ) | Typst | 2,455 | 2 | 45 | 2,502 |
| [几何学/main.typ](/%E5%87%A0%E4%BD%95%E5%AD%A6/main.typ) | Typst | 2,646 | 10 | 123 | 2,779 |
| [复变函数/main.typ](/%E5%A4%8D%E5%8F%98%E5%87%BD%E6%95%B0/main.typ) | Typst | 2,358 | 4 | 46 | 2,408 |
| [复变函数/作业/hw1.typ](/%E5%A4%8D%E5%8F%98%E5%87%BD%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw1.typ) | Typst | 102 | 2 | 8 | 112 |
| [复变函数/作业/hw10.typ](/%E5%A4%8D%E5%8F%98%E5%87%BD%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw10.typ) | Typst | 65 | 2 | 0 | 67 |
| [复变函数/作业/hw2.typ](/%E5%A4%8D%E5%8F%98%E5%87%BD%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw2.typ) | Typst | 120 | 2 | 3 | 125 |
| [复变函数/作业/hw3.typ](/%E5%A4%8D%E5%8F%98%E5%87%BD%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw3.typ) | Typst | 87 | 2 | 1 | 90 |
| [复变函数/作业/hw4.typ](/%E5%A4%8D%E5%8F%98%E5%87%BD%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw4.typ) | Typst | 108 | 2 | 1 | 111 |
| [复变函数/作业/hw5.typ](/%E5%A4%8D%E5%8F%98%E5%87%BD%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw5.typ) | Typst | 56 | 2 | 2 | 60 |
| [复变函数/作业/hw6.typ](/%E5%A4%8D%E5%8F%98%E5%87%BD%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw6.typ) | Typst | 46 | 2 | 0 | 48 |
| [复变函数/作业/hw7.typ](/%E5%A4%8D%E5%8F%98%E5%87%BD%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw7.typ) | Typst | 91 | 2 | 0 | 93 |
| [复变函数/作业/hw8.typ](/%E5%A4%8D%E5%8F%98%E5%87%BD%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw8.typ) | Typst | 321 | 2 | 6 | 329 |
| [复变函数/作业/hw9.typ](/%E5%A4%8D%E5%8F%98%E5%87%BD%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw9.typ) | Typst | 81 | 2 | 1 | 84 |
| [常微分方程/main.typ](/%E5%B8%B8%E5%BE%AE%E5%88%86%E6%96%B9%E7%A8%8B/main.typ) | Typst | 2,911 | 8 | 37 | 2,956 |
| [常微分方程/作业/2100012990 郭子荀 常微分方程 3.typ](/%E5%B8%B8%E5%BE%AE%E5%88%86%E6%96%B9%E7%A8%8B/%E4%BD%9C%E4%B8%9A/2100012990%20%E9%83%AD%E5%AD%90%E8%8D%80%20%E5%B8%B8%E5%BE%AE%E5%88%86%E6%96%B9%E7%A8%8B%203.typ) | Typst | 341 | 9 | 4 | 354 |
| [常微分方程/作业/2100012990 郭子荀 常微分方程 4.typ](/%E5%B8%B8%E5%BE%AE%E5%88%86%E6%96%B9%E7%A8%8B/%E4%BD%9C%E4%B8%9A/2100012990%20%E9%83%AD%E5%AD%90%E8%8D%80%20%E5%B8%B8%E5%BE%AE%E5%88%86%E6%96%B9%E7%A8%8B%204.typ) | Typst | 194 | 2 | 2 | 198 |
| [常微分方程/作业/2100012990 郭子荀 常微分方程 5.typ](/%E5%B8%B8%E5%BE%AE%E5%88%86%E6%96%B9%E7%A8%8B/%E4%BD%9C%E4%B8%9A/2100012990%20%E9%83%AD%E5%AD%90%E8%8D%80%20%E5%B8%B8%E5%BE%AE%E5%88%86%E6%96%B9%E7%A8%8B%205.typ) | Typst | 135 | 2 | 2 | 139 |
| [常微分方程/作业/2100012990 郭子荀 常微分方程 6.typ](/%E5%B8%B8%E5%BE%AE%E5%88%86%E6%96%B9%E7%A8%8B/%E4%BD%9C%E4%B8%9A/2100012990%20%E9%83%AD%E5%AD%90%E8%8D%80%20%E5%B8%B8%E5%BE%AE%E5%88%86%E6%96%B9%E7%A8%8B%206.typ) | Typst | 132 | 2 | 3 | 137 |
| [常微分方程/作业/2100012990 郭子荀 常微分方程 7.typ](/%E5%B8%B8%E5%BE%AE%E5%88%86%E6%96%B9%E7%A8%8B/%E4%BD%9C%E4%B8%9A/2100012990%20%E9%83%AD%E5%AD%90%E8%8D%80%20%E5%B8%B8%E5%BE%AE%E5%88%86%E6%96%B9%E7%A8%8B%207.typ) | Typst | 340 | 2 | 6 | 348 |
| [常微分方程/作业/2100012990 郭子荀 常微分方程 8.typ](/%E5%B8%B8%E5%BE%AE%E5%88%86%E6%96%B9%E7%A8%8B/%E4%BD%9C%E4%B8%9A/2100012990%20%E9%83%AD%E5%AD%90%E8%8D%80%20%E5%B8%B8%E5%BE%AE%E5%88%86%E6%96%B9%E7%A8%8B%208.typ) | Typst | 256 | 2 | 5 | 263 |
| [常微分方程/作业/hw1.typ](/%E5%B8%B8%E5%BE%AE%E5%88%86%E6%96%B9%E7%A8%8B/%E4%BD%9C%E4%B8%9A/hw1.typ) | Typst | 345 | 8 | 5 | 358 |
| [常微分方程/作业/hw2.typ](/%E5%B8%B8%E5%BE%AE%E5%88%86%E6%96%B9%E7%A8%8B/%E4%BD%9C%E4%B8%9A/hw2.typ) | Typst | 568 | 7 | 7 | 582 |
| [并行与分布式计算/main.typ](/%E5%B9%B6%E8%A1%8C%E4%B8%8E%E5%88%86%E5%B8%83%E5%BC%8F%E8%AE%A1%E7%AE%97/main.typ) | Typst | 140 | 0 | 3 | 143 |
| [抽象代数/main.typ](/%E6%8A%BD%E8%B1%A1%E4%BB%A3%E6%95%B0/main.typ) | Typst | 13 | 6 | 2 | 21 |
| [抽象代数/作业/hw10.typ](/%E6%8A%BD%E8%B1%A1%E4%BB%A3%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw10.typ) | Typst | 719 | 36 | 12 | 767 |
| [抽象代数/作业/hw11.typ](/%E6%8A%BD%E8%B1%A1%E4%BB%A3%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw11.typ) | Typst | 228 | 12 | 3 | 243 |
| [抽象代数/作业/hw12.typ](/%E6%8A%BD%E8%B1%A1%E4%BB%A3%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw12.typ) | Typst | 324 | 2 | 5 | 331 |
| [抽象代数/作业/hw2.typ](/%E6%8A%BD%E8%B1%A1%E4%BB%A3%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw2.typ) | Typst | 111 | 2 | 3 | 116 |
| [抽象代数/作业/hw3.typ](/%E6%8A%BD%E8%B1%A1%E4%BB%A3%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw3.typ) | Typst | 598 | 37 | 73 | 708 |
| [抽象代数/作业/hw4.typ](/%E6%8A%BD%E8%B1%A1%E4%BB%A3%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw4.typ) | Typst | 237 | 2 | 6 | 245 |
| [抽象代数/作业/hw5.typ](/%E6%8A%BD%E8%B1%A1%E4%BB%A3%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw5.typ) | Typst | 344 | 28 | 6 | 378 |
| [抽象代数/作业/hw6.typ](/%E6%8A%BD%E8%B1%A1%E4%BB%A3%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw6.typ) | Typst | 194 | 2 | 27 | 223 |
| [抽象代数/作业/hw7.typ](/%E6%8A%BD%E8%B1%A1%E4%BB%A3%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw7.typ) | Typst | 654 | 99 | 22 | 775 |
| [抽象代数/作业/hw8.typ](/%E6%8A%BD%E8%B1%A1%E4%BB%A3%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw8.typ) | Typst | 460 | 30 | 10 | 500 |
| [抽象代数/作业/hw9.typ](/%E6%8A%BD%E8%B1%A1%E4%BB%A3%E6%95%B0/%E4%BD%9C%E4%B8%9A/hw9.typ) | Typst | 232 | 2 | 13 | 247 |
| [抽象代数/章节/模、域.typ](/%E6%8A%BD%E8%B1%A1%E4%BB%A3%E6%95%B0/%E7%AB%A0%E8%8A%82/%E6%A8%A1%E3%80%81%E5%9F%9F.typ) | Typst | 1,414 | 0 | 15 | 1,429 |
| [抽象代数/章节/环.typ](/%E6%8A%BD%E8%B1%A1%E4%BB%A3%E6%95%B0/%E7%AB%A0%E8%8A%82/%E7%8E%AF.typ) | Typst | 795 | 0 | 29 | 824 |
| [抽象代数/章节/群论.typ](/%E6%8A%BD%E8%B1%A1%E4%BB%A3%E6%95%B0/%E7%AB%A0%E8%8A%82/%E7%BE%A4%E8%AE%BA.typ) | Typst | 1,186 | 0 | 238 | 1,424 |
| [数学模型/main.typ](/%E6%95%B0%E5%AD%A6%E6%A8%A1%E5%9E%8B/main.typ) | Typst | 2,443 | 3 | 50 | 2,496 |
| [数学模型/作业/hw2.typ](/%E6%95%B0%E5%AD%A6%E6%A8%A1%E5%9E%8B/%E4%BD%9C%E4%B8%9A/hw2.typ) | Typst | 328 | 9 | 5 | 342 |
| [数学模型/作业/hw3.typ](/%E6%95%B0%E5%AD%A6%E6%A8%A1%E5%9E%8B/%E4%BD%9C%E4%B8%9A/hw3.typ) | Typst | 158 | 2 | 4 | 164 |
| [数学模型/作业/hw4.typ](/%E6%95%B0%E5%AD%A6%E6%A8%A1%E5%9E%8B/%E4%BD%9C%E4%B8%9A/hw4.typ) | Typst | 37 | 2 | 0 | 39 |
| [数学模型/作业/hw5.typ](/%E6%95%B0%E5%AD%A6%E6%A8%A1%E5%9E%8B/%E4%BD%9C%E4%B8%9A/hw5.typ) | Typst | 139 | 9 | 3 | 151 |
| [数学模型/作业/hw6.typ](/%E6%95%B0%E5%AD%A6%E6%A8%A1%E5%9E%8B/%E4%BD%9C%E4%B8%9A/hw6.typ) | Typst | 109 | 6 | 4 | 119 |
| [数学模型/作业/hw7.typ](/%E6%95%B0%E5%AD%A6%E6%A8%A1%E5%9E%8B/%E4%BD%9C%E4%B8%9A/hw7.typ) | Typst | 166 | 2 | 3 | 171 |
| [数学模型/论文/pkuthss-typst/changelog.typ](/%E6%95%B0%E5%AD%A6%E6%A8%A1%E5%9E%8B/%E8%AE%BA%E6%96%87/pkuthss-typst/changelog.typ) | Typst | 80 | 0 | 35 | 115 |
| [数学模型/论文/pkuthss-typst/contributors.typ](/%E6%95%B0%E5%AD%A6%E6%A8%A1%E5%9E%8B/%E8%AE%BA%E6%96%87/pkuthss-typst/contributors.typ) | Typst | 4 | 0 | 2 | 6 |
| [数学模型/论文/pkuthss-typst/template.typ](/%E6%95%B0%E5%AD%A6%E6%A8%A1%E5%9E%8B/%E8%AE%BA%E6%96%87/pkuthss-typst/template.typ) | Typst | 663 | 107 | 77 | 847 |
| [数学模型/论文/pkuthss-typst/thesis.typ](/%E6%95%B0%E5%AD%A6%E6%A8%A1%E5%9E%8B/%E8%AE%BA%E6%96%87/pkuthss-typst/thesis.typ) | Typst | 923 | 5 | 30 | 958 |
| [数理逻辑/main.typ](/%E6%95%B0%E7%90%86%E9%80%BB%E8%BE%91/main.typ) | Typst | 484 | 0 | 10 | 494 |
| [数理逻辑/作业/ml-1_1-hw.typ](/%E6%95%B0%E7%90%86%E9%80%BB%E8%BE%91/%E4%BD%9C%E4%B8%9A/ml-1_1-hw.typ) | Typst | 141 | 0 | 6 | 147 |
| [数理逻辑/作业/ml-1_2-hw.typ](/%E6%95%B0%E7%90%86%E9%80%BB%E8%BE%91/%E4%BD%9C%E4%B8%9A/ml-1_2-hw.typ) | Typst | 165 | 0 | 3 | 168 |
| [数理逻辑/作业/ml-2_1-hw.typ](/%E6%95%B0%E7%90%86%E9%80%BB%E8%BE%91/%E4%BD%9C%E4%B8%9A/ml-2_1-hw.typ) | Typst | 214 | 1 | 8 | 223 |
| [数理逻辑/作业/ml-2_2-hw.typ](/%E6%95%B0%E7%90%86%E9%80%BB%E8%BE%91/%E4%BD%9C%E4%B8%9A/ml-2_2-hw.typ) | Typst | 112 | 0 | 0 | 112 |
| [数理逻辑/作业/ml-3_1-hw.typ](/%E6%95%B0%E7%90%86%E9%80%BB%E8%BE%91/%E4%BD%9C%E4%B8%9A/ml-3_1-hw.typ) | Typst | 81 | 0 | 2 | 83 |
| [机器学习数学导引/main.typ](/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E6%95%B0%E5%AD%A6%E5%AF%BC%E5%BC%95/main.typ) | Typst | 286 | 0 | 8 | 294 |
| [机器学习数学导引/作业/hw1.typ](/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E6%95%B0%E5%AD%A6%E5%AF%BC%E5%BC%95/%E4%BD%9C%E4%B8%9A/hw1.typ) | Typst | 84 | 0 | 6 | 90 |
| [机器学习数学导引/作业/hw2.typ](/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E6%95%B0%E5%AD%A6%E5%AF%BC%E5%BC%95/%E4%BD%9C%E4%B8%9A/hw2.typ) | Typst | 241 | 0 | 3 | 244 |
| [机器学习数学导引/作业/hw3.typ](/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E6%95%B0%E5%AD%A6%E5%AF%BC%E5%BC%95/%E4%BD%9C%E4%B8%9A/hw3.typ) | Typst | 120 | 0 | 5 | 125 |
| [经济学原理/hw/hw10.typ](/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E5%8E%9F%E7%90%86/hw/hw10.typ) | Typst | 104 | 0 | 0 | 104 |
| [经济学原理/hw/hw11.typ](/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E5%8E%9F%E7%90%86/hw/hw11.typ) | Typst | 150 | 0 | 1 | 151 |
| [经济学原理/hw/hw12.typ](/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E5%8E%9F%E7%90%86/hw/hw12.typ) | Typst | 83 | 0 | 1 | 84 |
| [经济学原理/hw/hw13.typ](/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E5%8E%9F%E7%90%86/hw/hw13.typ) | Typst | 95 | 0 | 1 | 96 |
| [经济学原理/hw/hw14.typ](/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E5%8E%9F%E7%90%86/hw/hw14.typ) | Typst | 41 | 0 | 2 | 43 |
| [经济学原理/hw/hw3.typ](/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E5%8E%9F%E7%90%86/hw/hw3.typ) | Typst | 51 | 0 | 1 | 52 |
| [经济学原理/hw/hw4.typ](/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E5%8E%9F%E7%90%86/hw/hw4.typ) | Typst | 46 | 0 | 4 | 50 |
| [经济学原理/hw/hw5.typ](/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E5%8E%9F%E7%90%86/hw/hw5.typ) | Typst | 18 | 0 | 0 | 18 |
| [经济学原理/hw/hw6.typ](/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E5%8E%9F%E7%90%86/hw/hw6.typ) | Typst | 122 | 0 | 4 | 126 |
| [经济学原理/hw/hw7.typ](/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E5%8E%9F%E7%90%86/hw/hw7.typ) | Typst | 24 | 8 | 0 | 32 |
| [经济学原理/hw/hw8.typ](/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E5%8E%9F%E7%90%86/hw/hw8.typ) | Typst | 51 | 0 | 0 | 51 |
| [经济学原理/hw/hw9.typ](/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E5%8E%9F%E7%90%86/hw/hw9.typ) | Typst | 49 | 0 | 0 | 49 |
| [经济学原理/微观部分.typ](/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E5%8E%9F%E7%90%86/%E5%BE%AE%E8%A7%82%E9%83%A8%E5%88%86.typ) | Typst | 315 | 2 | 28 | 345 |
| [计算方法B/code/hw1/hw1.typ](/%E8%AE%A1%E7%AE%97%E6%96%B9%E6%B3%95B/code/hw1/hw1.typ) | Typst | 98 | 0 | 0 | 98 |
| [计算方法B/code/hw2/hw2.typ](/%E8%AE%A1%E7%AE%97%E6%96%B9%E6%B3%95B/code/hw2/hw2.typ) | Typst | 100 | 0 | 1 | 101 |
| [计算方法B/main.typ](/%E8%AE%A1%E7%AE%97%E6%96%B9%E6%B3%95B/main.typ) | Typst | 418 | 3 | 10 | 431 |
| [计算机网络/main.typ](/%E8%AE%A1%E7%AE%97%E6%9C%BA%E7%BD%91%E7%BB%9C/main.typ) | Typst | 13 | 2 | 1 | 16 |
| [计算机网络/数据链路层.typ](/%E8%AE%A1%E7%AE%97%E6%9C%BA%E7%BD%91%E7%BB%9C/%E6%95%B0%E6%8D%AE%E9%93%BE%E8%B7%AF%E5%B1%82.typ) | Typst | 284 | 0 | 3 | 287 |
| [计算机网络/第一节.typ](/%E8%AE%A1%E7%AE%97%E6%9C%BA%E7%BD%91%E7%BB%9C/%E7%AC%AC%E4%B8%80%E8%8A%82.typ) | Typst | 679 | 0 | 57 | 736 |
| [计算机网络/第一节_o.typ](/%E8%AE%A1%E7%AE%97%E6%9C%BA%E7%BD%91%E7%BB%9C/%E7%AC%AC%E4%B8%80%E8%8A%82_o.typ) | Typst | 571 | 0 | 46 | 617 |
| [计算机网络/网络层.typ](/%E8%AE%A1%E7%AE%97%E6%9C%BA%E7%BD%91%E7%BB%9C/%E7%BD%91%E7%BB%9C%E5%B1%82.typ) | Typst | 289 | 0 | 21 | 310 |
| [软件分析/hw/2100012990-郭子荀-软分第一次作业.typ](/%E8%BD%AF%E4%BB%B6%E5%88%86%E6%9E%90/hw/2100012990-%E9%83%AD%E5%AD%90%E8%8D%80-%E8%BD%AF%E5%88%86%E7%AC%AC%E4%B8%80%E6%AC%A1%E4%BD%9C%E4%B8%9A.typ) | Typst | 16 | 0 | 0 | 16 |
| [软件分析/hw/2100012990-郭子荀-软分第三次作业.typ](/%E8%BD%AF%E4%BB%B6%E5%88%86%E6%9E%90/hw/2100012990-%E9%83%AD%E5%AD%90%E8%8D%80-%E8%BD%AF%E5%88%86%E7%AC%AC%E4%B8%89%E6%AC%A1%E4%BD%9C%E4%B8%9A.typ) | Typst | 191 | 0 | 28 | 219 |
| [软件分析/hw/2100012990-郭子荀-软分第二次作业.typ](/%E8%BD%AF%E4%BB%B6%E5%88%86%E6%9E%90/hw/2100012990-%E9%83%AD%E5%AD%90%E8%8D%80-%E8%BD%AF%E5%88%86%E7%AC%AC%E4%BA%8C%E6%AC%A1%E4%BD%9C%E4%B8%9A.typ) | Typst | 42 | 0 | 0 | 42 |
| [软件分析/hw/2100012990-郭子荀-软分第五次作业.typ](/%E8%BD%AF%E4%BB%B6%E5%88%86%E6%9E%90/hw/2100012990-%E9%83%AD%E5%AD%90%E8%8D%80-%E8%BD%AF%E5%88%86%E7%AC%AC%E4%BA%94%E6%AC%A1%E4%BD%9C%E4%B8%9A.typ) | Typst | 61 | 15 | 2 | 78 |
| [软件分析/hw/2100012990-郭子荀-软分第四次作业.typ](/%E8%BD%AF%E4%BB%B6%E5%88%86%E6%9E%90/hw/2100012990-%E9%83%AD%E5%AD%90%E8%8D%80-%E8%BD%AF%E5%88%86%E7%AC%AC%E5%9B%9B%E6%AC%A1%E4%BD%9C%E4%B8%9A.typ) | Typst | 43 | 0 | 0 | 43 |
| [软件分析/main.typ](/%E8%BD%AF%E4%BB%B6%E5%88%86%E6%9E%90/main.typ) | Typst | 424 | 0 | 13 | 437 |
[Summary](results.md) / Details / [Diff Summary](diff.md) / [Diff Details](diff-details.md) |
|
https://github.com/topdeoo/NENU-Thesis-Typst | https://raw.githubusercontent.com/topdeoo/NENU-Thesis-Typst/master/pages/toc-page.typ | typst | #import "@preview/outrageous:0.2.0"
#import "../utils/format.typ": invisible-heading
#import "../fonts/fonts.typ": font-family, font-size
// Generate the table of contents
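// Usage sketch (my own illustrative example, not taken from this template):
// after the front matter of the thesis one might call, e.g.,
//   #toc(depth: 2)
// or override leaders and spacing, e.g. #toc(fill: ("", auto), vspace: (25pt, 14pt)).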
#let toc(
two-side: false,
fonts: (:),
//! Other parameters
depth: 3,
title: "目 录",
outlined: true,
title-vspace: 0pt,
title-text-args: auto,
reference-font: auto,
reference-size: font-size.小四,
weight: ("bold", "regular",),
//! Vertical spacing
vspace: (25pt, 14pt),
indent: (0pt, 18pt, 28pt),
//! Leader between an entry and its page number
fill: ("", auto),
..args,
) = {
//! Default parameters
fonts = font-family + fonts
if (title-text-args == auto) {
title-text-args = (font: fonts.宋体, size: font-size.四号, weight: "bold")
}
if (reference-font == auto) {
reference-font = fonts.宋体
}
// TODO improve the page-number indexing
set page(numbering: (..idx) => {
text(size: font-size.五号, numbering("I", idx.pos().first()))
})
//! Actual rendering
pagebreak(
weak: true,
to: if two-side {
"odd"
},
)
set text(font: reference-font, size: reference-size)
{
set align(center)
text(..title-text-args, title)
invisible-heading(level: 1, outlined: outlined, "目录")
}
v(title-vspace)
show outline.entry: outrageous.show-entry.with(
..outrageous.presets.typst,
body-transform: (level, it) => {
// TODO page numbers of level-1 headings need to be bold
set text(
font: fonts.宋体,
size: font-size.五号,
weight: weight.at(calc.min(level, weight.len()) - 1),
)
// Compute the indent
let indent-list = indent + range(level - indent.len()).map(it => indent.last())
let indent-length = indent-list.slice(0, count: level).sum()
if "children" in it.fields() {
let (number, space, ..text) = it.children
style(styles => {
[#h(indent-length) #number #h(.5em) #text.join()]
})
} else {
h(indent-length) + it
}
},
vspace: vspace,
fill: fill,
..args,
)
//! Show the outline
outline(title: none, depth: depth)
// TODO master's and doctoral theses also need lists of figures and tables
} |
|
https://github.com/songguokunsgg/HUNNU-Typst-Master-Thesis | https://raw.githubusercontent.com/songguokunsgg/HUNNU-Typst-Master-Thesis/master/hnu-thesis/utils/theorem.typ | typst | MIT License | #import "@preview/ctheorems:1.1.1": *
#show: thmrules // enable the theorem environments
#set heading(numbering: "1.1.")
#let theorem = thmbox("theorem", "定理", inset: (x: 1.2em, top: 1em))
#let corollary = thmplain(
"corollary",
"推论",
base: "theorem",
titlefmt: strong
)
#let definition = thmbox("definition", "定义", inset: (x: 2.2em, top: 1em))
#let example = thmplain("example", "示例").with(numbering: none)
#let proof = thmplain(
"proof",
"证明",
base: "theorem",
bodyfmt: body => [#body #h(1fr) $square$]
).with(numbering: none)
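// Usage sketch (illustrative only; the statement text below is made up):
//
// #definition("Convexity")[A set $C$ is convex if it contains every segment
// between two of its points.]
// #theorem("Weierstrass")[A continuous function on a compact set attains its
// maximum.]
// #proof[Take a maximizing sequence and extract a convergent subsequence.]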
#let proposition = thmbox("proposition", "命题", inset: (x: 1.2em, top: 1em)) |
https://github.com/jujimeizuo/ZJSU-typst-template | https://raw.githubusercontent.com/jujimeizuo/ZJSU-typst-template/master/contents/info.typ | typst | Apache License 2.0 | // 中文题目
#let zh_title = "浙江工商大学毕业设计 Typst 模板"
// English title
#let en_title = "ZJGSU graduating thesis Typst Template"
// College
#let college = "计算机科学与技术学院"
// Major
#let major = "计算机科学与技术"
// Student ID
#let student_id = "1912190000"
// Student name
#let student_name = "张三"
// Supervisor
#let college_advisor = "李四"
// Company supervisor
#let company_advisor = "王五"
// Start and end dates
#let start_and_end_date = "00000000 - ffffffff" |
https://github.com/aarneng/Outline-Summaryst | https://raw.githubusercontent.com/aarneng/Outline-Summaryst/main/examples/example.typ | typst | MIT License | #import "../src/outline-summaryst.typ": style-outline, make-heading
#show outline: style-outline.with(outline-title: "Table of Contents")
#outline()
#make-heading("Part One", "This is the summary for part one")
#lorem(500)
#make-heading("Chapter One", "Summary for chapter one in part one", level: 2)
#lorem(300)
#make-heading("Chapter Two", "This is the summary for chapter two in part one", level: 2)
#lorem(300)
#make-heading("Part Two", "And here we have the summary for part two")
#lorem(500)
#make-heading("Chapter One", "Summary for chapter one in part two", level: 2)
#lorem(300)
#make-heading("Chapter Two", "Summary for chapter two in part two", level: 2)
#lorem(300)
|
https://github.com/linhduongtuan/BKHN-Thesis_template_typst | https://raw.githubusercontent.com/linhduongtuan/BKHN-Thesis_template_typst/main/template/body.typ | typst | Apache License 2.0 | #import "font.typ": *
#import "utils.typ": *
#pagebreak()
#counter(page).update(1)
// Chapter counter, record formula hierarchy
#let counter_chapter = counter("chapter")
#let counter_equation = counter(math.equation)
#let counter_image = counter(figure.where(kind: image))
#let counter_table = counter(figure.where(kind: table))
// Format of Figures, Tables, and Equations
#show figure: it => [
#v(6pt)
#set text(font_size.scriptsize)
#set align(center)
#if not it.has("kind") {
it
} else if it.kind == image {
it.body
[
#textbf("Figure")
#locate(loc => {
[#counter_chapter.at(loc).first().#counter_image.at(loc).first()]
})
#it.caption
]
} else if it.kind == table {
[
#textbf("Table")
#locate(loc => {
[#counter_chapter.at(loc).first().#counter_table.at(loc).first()]
})
#it.caption
]
it.body
} else {
it.body
}
#v(6pt)
]
// Math Formulas
#set math.equation(
numbering: (..nums) => locate( loc => {
numbering("(1.1)", counter_chapter.at(loc).first(), ..nums)
})
)
// Format citations
#show ref: it => {
locate(loc => {
let elems = query(it.target, loc)
if elems == () {
it
} else {
let elem = elems.first()
let elem_loc = elem.location()
if numbering != none {
if elem.func() == math.equation {
link(elem_loc, [#textbf("Equation")
#counter_chapter.at(elem_loc).first().#counter_equation.at(elem_loc).first()
])
} else if elem.func() == figure{
if elem.kind == image {
link(elem_loc, [#textbf("Figure")
#counter_chapter.at(elem_loc).first().#counter_image.at(elem_loc).first()
])
} else if elem.kind == table {
link(elem_loc, [#textbf("Table")
#counter_chapter.at(elem_loc).first().#counter_table.at(elem_loc).first()
])
}
}
} else {
it
}
}
})
}
#set heading(numbering: (..nums) =>
if nums.pos().len() == 1 {
"Chapter "+ RomanNumbers(nums.pos().first()) + "."
}
else {
nums.pos().map(str).join(".")
})
#show heading: it => {
if it.level == 1 {
set align(center)
set text(font: arial, size: font_size.large, weight: "bold")
counter_chapter.step()
counter_equation.update(())
counter_image.update(())
counter_table.update(())
it
v(12pt)
par(leading: 1.5em)[#text(size:0.0em)[#h(0.0em)]]
} else if it.level == 2 {
set text(font: arial, size: font_size.normalsize, weight: "bold")
it
v(18pt)
par(leading: 1.5em)[#text(size:0.0em)[#h(0.0em)]]
} else if it.level == 3 {
set text(font: arial, size: font_size.small, weight: "thin")
it
v(18pt)
par(leading: 1.5em)[#text(size:0.0em)[#h(0.0em)]]
} else if it.level == 4 {
set text(font: arial, size: font_size.small, weight: "thin")
it
v(18pt)
par(leading: 1.5em)[#text(size:0.0em)[#h(0.0em)]]
}
}
// Format text
#set text(font: arial, size: font_size.footnotesize)
#set par(justify: true, leading: 1em, first-line-indent: 2em)
#show par: it => {
it
}
// import contents from the context file
#include "../contents/context.typ" |
https://github.com/Dav1com/minerva-report-fcfm | https://raw.githubusercontent.com/Dav1com/minerva-report-fcfm/master/docs/meta.typ | typst | MIT No Attribution | #import "../meta.typ": *
#import "../minerva-report-fcfm.typ" as minerva
#let titulo = "Informe Minerva"
#let subtitulo = "Informes rápidos y fáciles."
#let tema = "v" + package-version
#let url = "https://github.com/Dav1com/minerva-report-fcfm"
#let departamento = minerva.departamentos.dcc
#let curso = ""
#let fechas = ( // dictionary of dates, in case the cover page does not support them
Creación: minerva.util.formato-fecha(datetime.today())
)
#let lugar = "Santiago, Chile"
#let autores = "<NAME>"
#let equipo-docente = none
#let resumen = [
*Minerva Report FCFM* es un template de Typst para informes de tareas, laboratorios o trabajos. Pensado para estudiantes y académicos de la Facultad de Ciencias Físicas y Matemáticas de la Universidad de Chile.
]
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/cetz/0.2.0/src/draw/styling.typ | typst | Apache License 2.0 | #import "/src/util.typ"
/// Set current style
///
/// - ..style (style): Style key-value pairs
#let set-style(..style) = {
assert.eq(
style.pos().len(),
0,
message: "set-style takes no positional arguments",
)
(ctx => {
ctx.style = util.merge-dictionary(ctx.style, style.named())
return (ctx: ctx)
},)
}
/// Set current fill style
///
/// Shorthand for `set-style(fill: <fill>)`
///
/// - fill (paint): Fill style
#let fill(fill) = set-style(fill: fill)
/// Set current stroke style
///
/// Shorthand for `set-style(stroke: <stroke>)`
///
/// - stroke (stroke): Stroke style
#let stroke(stroke) = set-style(stroke: stroke)
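// Usage sketch (illustrative; assumes the usual cetz canvas/draw entry points):
//
// #cetz.canvas({
//   import cetz.draw: *
//   set-style(stroke: 0.4pt, fill: gray)   // set several style keys at once
//   fill(blue)                             // shorthand for set-style(fill: blue)
//   stroke(2pt + red)                      // shorthand for set-style(stroke: 2pt + red)
//   circle((0, 0), radius: 1)
// })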
|
https://github.com/OverflowCat/astro-typst | https://raw.githubusercontent.com/OverflowCat/astro-typst/master/src/pages/simp.typ | typst | #set page(height: auto, margin: 1em, width: auto)
haha 123 $oo RR$
|
|
https://github.com/Raunak12775/aloecius-aip | https://raw.githubusercontent.com/Raunak12775/aloecius-aip/main/template/main.typ | typst | #import "@preview/aloecius-aip:0.0.1": *
#show: article.with(
title: "Typst Template for Journal of Chemical Physics (Draft)",
authors: (
"Author 1": author-meta(
"GU",
email: "<EMAIL>",
),
"Author 2": author-meta(
"GU",
cofirst: false
),
"Author 3": author-meta(
"UG"
)
),
affiliations: (
"UG": "University of Global Sciences",
"GU": "Institute for Chemistry, Global University of Sciences"
),
abstract: [
Here goes the abstract. This is an unofficial AIP-like template for drafting physics and computational chemistry papers, aimed mainly at the Journal of Chemical Physics. The author of this template does not claim any link to AIP or other associates of AIP. It is an attempt to mimic the draft version of the AIP LaTeX template available on Overleaf.
],
bib: bibliography("./reference.bib")
)
= Introduction
#indent
This work has been going on for years and it had impacted the field with so many different new things. I have been working for 34 years and this is what we are going to discuss. Every section needs to be started with this `#indent` variable because it is a dirty workaround to add an indent for the first paragraph.
From the next paragraph the indent is automatically added, just make a line gap by pressing `enter` and done. Or one can also use instead the `#parbreak()` function to achieve the same functionality.
#parbreak()
The same outcome, do whatever suits you but I feel the line break by a keyboard input is more natural. Cheers!
= Methods
#indent
This is a very fancy method that obeys the Einstein's Mass and Energy equivalence relations, the following equation is by default numbered as `(1)`
$
E = m times c^2
$<eq1>
one can label the equation through `<eq1>` macro and refer in the body of the text as shown in @eq1 by invoking another macro `@eq1`. Line equation is also possible $E=m c^2$. For more go to the #link("https://typst.app/docs/reference/math/")[typst math docs].
Also one can write chemical formulae with the help of #link("https://typst.app/universe/package/whalogen")[whalogen] package with the `#ce()` function that would produce the result #ce("NO2^2+"). If one fancies some physics stuff it is also available through the #link("https://typst.app/universe/package/physica")[physica] package.
$
A = mat(1,2,3,4;1,2,3,4;2,3,4,6;5,4,6,7)
$
or some fancy stuff like the following. For more please read the documentation of the above mentioned packages
$
H(f) = hmat(f;x,y,z; big:#true)
$
== Some Fancy Method explanation
//#linebreak()#indent
There is a fancy method that is taken from some fancy article which can be cited like `@liebLowerBound1979` @liebLowerBound1979 and it would automatically be reflected in the *References* section. Some diagrams can be drawn using the `cetz` package in `typst` like some fancy
#align(center)[
#v(0.4em)
#cetz.canvas({
import cetz.draw: *
let up-height = 2.0
let down-height = -2.0
line((-4,0),(-3,0), stroke: (thickness: 1.75pt))
content((-5,0), [*1s*])
line((4,0),(3,0), stroke: (thickness: 1.75pt))
content((5,0),[*1s*])
line((-0.5,up-height),(0.5,up-height), stroke: (thickness: 1.75pt))
content((1.5,up-height),$sigma^*$)
line((-0.5,down-height),(0.5,down-height), stroke: (thickness: 1.75pt))
content((1.5,down-height),$sigma$)
line((-3,0),(-0.5,up-height), stroke: (dash: "dashed"))
line((3,0),(0.5,up-height), stroke: (dash: "dashed"))
line((3,0),(0.5,down-height), stroke: (dash: "dashed"))
line((-3,0),(-0.5,down-height), stroke: (dash: "dashed"))
line((-3.3,0.5),(-3.3,-0.5),mark:(start: ">", fill: black), stroke:(thickness: 0.5pt))
line((3.7,0.5),(3.7,-0.5),mark:(start: ">", fill: black), stroke:(thickness: 0.5pt))
let start-elec-up = up-height + 0.5
let end-elec-up = up-height - 0.5
let start-elec-down = down-height - 0.5
let end-elec-down = down-height + 0.5
line((0.2,start-elec-down),(0.2,end-elec-down),mark:(start: ">", fill: black), stroke:(thickness: 0.5pt))
line((-0.2,start-elec-down),(-0.2,end-elec-down),mark:(end: ">", fill: black), stroke:(thickness: 0.5pt))
})
Figure 1: A Fancy Molecular Orbital Diagram
]
= Results
#indent
Some interesting results can be shown be shown through plots like the following
#figure(
image("./plot.svg", width: 60%),
caption:[Fancy Plot]
)
Some can represent the numbers in a table like the following
#align(center)[
#table(
columns: (auto, auto, auto),
align: center + horizon,
table.header(
[column 1], [column 2], [column 3],
),
[1],[2],[5],
[2],[3],[6],
[3],[4],[5]
)
Table 1 : A Thought Provoking Table
]
= Conclusions
#indent
There are still some things off and there should be more to do with this template, I am open to suggestions and feedback. And feel free to customize this template to do anything else.
= Acknowledgements
#indent
The author of the template is really thankful for the packages that have been used here, which are listed below (ordering is irrelevant)
- `physica`
- `whalogen`
- `cetz`
Also the author thank the "journal starter article" template in typst template repository which got this started
https://typst.app/universe/package/starter-journal-article
|
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/compiler/type-compatibility.typ | typst | Apache License 2.0 | // Test compatibility between types and strings.
// Ref: false
---
#test(type(10), int)
#test(type(10), "integer")
#test("is " + type(10), "is integer")
#test(int in ("integer", "string"), true)
#test(int in "integers or strings", true)
#test(str in "integers or strings", true)
|
https://github.com/jomaway/typst-teacher-templates | https://raw.githubusercontent.com/jomaway/typst-teacher-templates/main/ttt-exam/lib/i18n.typ | typst | MIT License | #import "@preview/linguify:0.4.0": *
// load linguify database file
#let ling_db = toml("assets/lang.toml")
#let ling(key) = linguify(key, from: ling_db)
|
https://github.com/choglost/LessElegantNote | https://raw.githubusercontent.com/choglost/LessElegantNote/main/layouts/preface.typ | typst | MIT License | // 前言
#let preface(
// parameters passed in from documentclass
twoside: false,
// other parameters
..args,
it,
) = {
// page break
// if (twoside) {
// pagebreak() //+ " "
// }
set heading(numbering: "A.")
it
} |
https://github.com/AugustinWinther/structured-uib | https://raw.githubusercontent.com/AugustinWinther/structured-uib/main/template/main.typ | typst | MIT License | // IMPORTS
#import "@preview/structured-uib:0.1.0": *
// TEMPLATE SETTINGS
#show: report.with(
task-no: "1",
task-name: "Måling og behandling av måledata",
authors: (
"<NAME>",
"<NAME>",
"<NAME>"
),
mails: (
"<EMAIL>",
"<EMAIL>",
"<EMAIL>"
),
group: "1-1",
date: "29. Apr. 2024",
supervisor: "Professor Professorsen",
)
// INNHOLDSFORTEGNELSE (automatisk fyllt ut)
#outline()
// 1 - MÅLSETTING
= Oppgavens målsetting
// 2 - MÅLEOPPSTILLING
= Beskrivelse av måleoppstilling
// 3 - UTFØRELSE
= Utførelse og målinger
// 4 - KONKLUSJON
= Konklusjon og diskusjon
// REFERANSER (automatisk fyllt ut)
#bibliography("references.bib")
// Appendiks innhold etter denne "show" linjen
#show: appendices
// APPENDIKS A - KODE
= Kode
|
https://github.com/SillyFreak/typst-stack-pointer | https://raw.githubusercontent.com/SillyFreak/typst-stack-pointer/main/tests/unit/test.typ | typst | MIT License | #import "/src/lib.typ" as stack-pointer
// the output is not relevant for this test
#set page(width: 0pt, height: 0pt)
#let execution = {
import stack-pointer: *
// 1: int main() {
// 2: int x = foo();
// 3: return 0;
// 4: }
// 5
// 6: int foo() {
// 7: return 0;
// 8: }
execute({
let foo() = func("foo", 6, l => {
l(0)
l(1); retval(0) // (1)
})
let main() = func("main", 1, l => {
l(0)
l(1)
let (x, ..rest) = foo(); rest // (2)
l(1, push("x", x))
l(2)
})
main(); l(none)
})
};
#let step(line, stack) = (step: (line: line), state: (stack: stack))
#assert.eq(execution, (
step(1, ((name: "main", vars: (:)),)),
step(2, ((name: "main", vars: (:)),)),
step(6, ((name: "main", vars: (:)), (name: "foo", vars: (:)))),
step(7, ((name: "main", vars: (:)), (name: "foo", vars: (:)))),
step(2, ((name: "main", vars: (x: 0)),)),
step(3, ((name: "main", vars: (x: 0)),)),
step(none, ()),
))
|
https://github.com/goshakowska/Typstdiff | https://raw.githubusercontent.com/goshakowska/Typstdiff/main/tests/test_complex/ordered_list/ordered_list_mix_result.typ | typst | + The climate
- Precipitation
- Temperature#underline[ ];#underline[factors]
+ degree
- hot
- cold
- #strike[warm]
+ #strike[sun]
+ #strike[The];#underline[Something] #strike[geology];#underline[new]
+ #underline[Monkey]
|
|
https://github.com/An-314/Notes_of_Electrodynamics | https://raw.githubusercontent.com/An-314/Notes_of_Electrodynamics/master/chap1.typ | typst | #import"@preview/physica:0.9.2":*
#import "template.typ": *
#counter(heading).update(0)
= 概述
电动力学是关于电磁场的基本特性、运动规律以及电磁场与带电物质之间相互作用的理论。
#grid(columns: 2, rows: 20pt)[场(feild)][场论][势(potential)][规范场]
*经典电动力学简介*
1. 电磁学和电动力学简史
2. Maxwell之前的定律
3. Maxwell方程和边界条件
4. 宏观介质中的Maxwell方程
5. Helmholtz波方程和色散
6. 张量符号中的Maxwell方程
7. 经典电动力学概述
== From Coulomb’s Law to Gauss’s Law
experiments:Inverse Square Law 平方反比定律 Cavendish实验
$
"【Coulomb定律】" &E(x) = k q_1 (x-x_1)/abs(x-x_1)^3 = k q_1 1/R^2 vb(n) =>\
"多个电荷"&E(x) = 1/(4 pi epsilon_0) sum_(i=1)^n q_i (x - x_i)/abs(x - x_i)^3 =>x`\
"连续分布的电荷"& E(x) = 1/(4 pi epsilon_0) integral_V rho(x') (x - x')/abs(x - x')^3 dd(x', 3)
$
#figure(
image("pic/2024-09-11-11-10-21.png", width: 20%),
numbering: none,
)
对面元$dd(a)$积分
$
vb(E) dot vb(n) dd(a) = q/(4 pi epsilon_0) (cos theta)/r^2 dd(a) =^(cos theta dd(a) = r^2 dd(Omega)) q/(4 pi epsilon_0) dd(Omega) \
integral.cont_S vb(E) dot vb(n) dd(a) = 1/epsilon_0 sum_i q_i = 1/epsilon_0 integral_V rho(x) dd(x,3)\
integral_V (div E) dd(x,3) = 1/epsilon_0 integral_V rho(x) dd(x,3)\
"【Gauss定律】" div E = rho/epsilon_0
$
== From Biot and Savart Law to Ampere’s Law
$
"【Biot-Savart定律】"dd(vb(B)) = k I (dd(vb(l)) times vb(x))/abs(x)^3
$
$
"【Amprere定律】" integral.cont_C vb(B) dot vb(l) = mu_0 I
$
$
B(x) = mu_0/(4 pi) curl integral vb(J(x'))/abs(x - x') dd(x',3) \
div vb(B) = 0\
curl vb(B) = mu_0 vb(J)
$
== From Lenz’s Law to Faraday’s Law of Induction
$
"Electromotive force" & cal(E) = integral.cont_C vb(E') dot vb(l)
$
Lenz定律:感应电流(和伴随的磁通量)的方向与通过电路的磁通量变化相反。
$
"【Faraday定律】" integral.cont_S vb(E') dot vb(l) = - k dd("")/dd(t) integral_S vb(B) dot vb(n) dd(a)\
curl vb(E) + partialderivative(vb(B), t) = 0
$
== Before Maxwell Equations
在静电、磁静电和准静态场下获得的:
$
cases(
"Coulomb’s Law" & div D = rho,
"Ampere’s Law"(curl J = 0) & curl H = J,
"Faraday’s Law" & curl E = -(partial B)/(partial t),
"Lenz’s Law" & div B = 0,
)
$
Inconsistency for time dependent fields!
$
div vb(J) = div(curl H) = 0 <=> div vb(J) + partialderivative(rho, t) = 0
$
== Maxwell Equations
Displacement Current 引入位移电流
$
div(vb(J) + partialderivative(vb(D), t)) = 0
$
得到Maxwell方程组:
$
cases(
div vb(D) = rho,
curl vb(H) = vb(J) + partialderivative(vb(D), t),
div vb(B) = 0,
curl vb(E) + partialderivative(vb(B), t) = 0
)
$
$
"【Lorentz力】" vb(F) = q (vb(E) + vb(v) times vb(B))
$
$
"【Newton’s Second Law】" dv(vb(P),t) = vb(F)
$
Maxwell预测了光是一种电磁波现象,可以产生各种频率的电磁波。
=== Boundary Conditions
#figure(
image("pic/2024-09-11-15-34-44.png", width: 80%),
numbering: none,
)
=== Maxwell Equations in Vacuum
- The relationship between B&H,D&E
电介质、磁介质
- Microscopic Fields
- From Microscopic Equations to Macroscopic Equations
- Vector and Scalar Potentials
- Lorentz Gauge & Coulomb Gauge (Radiation/Transverse)
- Plane Wave in a Nonconducting Medium
- Electromagnetic Waves in Vacuum |
|
https://github.com/Shuenhoy/modern-zju-thesis | https://raw.githubusercontent.com/Shuenhoy/modern-zju-thesis/master/pages/graduate-title-en.typ | typst | MIT License | #import "../utils/fonts.typ": 字号, 字体
#import "../utils/datetime-display.typ": datetime-display
#import "../utils/twoside.typ": *
#let graduate-title-en(
info: (:),
// 其他参数
stroke-width: 0.5pt,
row-gutter: 11.5pt,
degree: "硕士",
) = {
if type(info.submit-date) == datetime {
info.submit-date = datetime-display(info.submit-date)
}
context {
twoside-pagebreak
counter(page).update(0)
v(-40pt)
set grid(
row-gutter: row-gutter,
rows: 1em,
stroke: (x, y) => (
bottom: if x == 1 {
stroke-width
} else {
none
},
),
)
set align(center)
v(20pt)
block(
width: 80%,
[
#set text(size: 16pt, weight: "bold")
#grid(
columns: (1fr),
align: (center),
stroke: (bottom: stroke-width),
info.title-en.first(),
..info.title-en.slice(1),
grid.cell(stroke: none)[], grid.cell(stroke: none)[],
)
],
)
v(-40pt)
[#image("../assets/zju-emblem.svg", width: page.width * 0.15)<mzt:no-header-footer>]
block(
width: 60%,
[
#set text(size: 字号.三号, weight: "bold")
#grid(
columns: (auto, 0.8fr),
align: (end, center),
"Author's signature:", [],
"Supervisor's signature:", [],
grid.cell(stroke: none)[], grid.cell(stroke: none)[],
)
],
)
block(
width: 70%,
[
#set text(size: 字号.四号)
#grid(
columns: (auto, 1fr),
align: (end, center),
..info.reviewer-en.enumerate(start: 0).map(v => ([Thesis reviewer #(v.at(0)+1):], v.at(1))).flatten(),
grid.cell(stroke: none)[], grid.cell(stroke: none)[],
)
#grid(
columns: (auto, 15em),
align: (end, center),
"Chair:", info.committe-en.at(0),
)
#v(-1em)
#align(left)[#h(-1em)#text(size: 字号.五号)[(Committee of oral defence)]]
#grid(
columns: (auto, 1fr),
align: (end, center),
..info.committe-en.enumerate(start: 0).slice(1).map(v => ([Committeeman #(v.at(0)):], v.at(1))).flatten(),
grid.cell(stroke: none)[], grid.cell(stroke: none)[],
)
#align(center)[
#grid(
columns: (auto, 10em),
align: (start, center),
"Date of oral defence:", info.defense-date.at(1),
)
]
],
)
}
twoside-emptypage
} |
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/methods-01.typ | typst | Other | // Test mutating indexed value.
#{
let matrix = (((1,), (2,)), ((3,), (4,)))
matrix.at(1).at(0).push(5)
test(matrix, (((1,), (2,)), ((3, 5), (4,))))
}
|
https://github.com/nicolasfara/plenaria-2024-06-commonwears | https://raw.githubusercontent.com/nicolasfara/plenaria-2024-06-commonwears/master/plenaria-2024-06-commonwears.typ | typst | Apache License 2.0 | #import "@preview/polylux:0.3.1": *
#import "@preview/fontawesome:0.1.0": *
#import themes.metropolis: *
#show: metropolis-theme.with(
aspect-ratio: "16-9",
footer: [Plenaria Commonwears 2024],
)
#set text(font: "Inter", weight: "light", size: 20pt)
#show math.equation: set text(font: "Fira Math")
#set strong(delta: 350)
#set par(justify: true)
#set raw(tab-size: 4)
#show raw.where(block: true): block.with(
fill: luma(240),
inset: 1em,
radius: 0.7em,
width: 100%,
)
#set table.hline(stroke: .6pt)
#title-slide(
title: "Pulverisation and Beyond",
// subtitle: "Subtitle",
author: "<NAME>",
date: datetime.today().display("[day] [month repr:long] [year]"),
)
#new-section-slide("Enhanced Pulverisation Evaluation")
#slide(title: "Dynamic Reconfiguration")[
#table(
columns: 2,
inset: 0.65em,
stroke: none,
[
We evaluated _PulvReAKt_ in a simulated _large-scale city event_ where the user were equipped with a smartphone.
Via *pulverisation* we moved the $#math.beta$ from the cloud and the devices based on the smartphone charge threshold.
#alert[We improved the simulation introducing more heterogeneity and improving the consumption model.]
],
[
#figure(
image("figures/simulation-screenshot-poi.png")
)
]
)
]
#slide(title: "Improved Power Consumption Model")[
#align(center)[
#table(
columns: 3,
align: (left, center, center),
stroke: (x: none),
inset: 0.65em,
row-gutter: (2.2pt, auto),
fill: (x, y) => if y == 4 or y ==5 { rgb("FAAB3630") },
table.header(
[*Device*],
[*Allocated Components*],
[*Avg. Drain Time*],
),
table.cell(rowspan: 3)[_Smartphone_], [$#math.beta + #math.sigma + #math.chi +$ _OS_], [6h],
[$#math.beta + #math.chi +$ _OS_], [10h],
[_OS_], [24h],
table.cell(rowspan: 2)[_Wearable_], [$#math.sigma + #math.chi +$ _OS_], [6h],
[_OS_], [24h],
)
]
]
#slide(title: "Simulation Parameters")[
#align(center)[
#table(
columns: 2,
align: (left, left),
stroke: (x: none),
inset: 0.65em,
row-gutter: (2.2pt, auto),
fill: (x, y) => if y == 3 { rgb("FAAB3630") },
table.header(
[*Simulation Parameter*],
[*Values*],
),
[_PoIs_], [$15$],
[$#math.beta$ Offloadin Thresholds], [$scripts(#sym.arrow.t.b.double)_x | x #math.in {0, 10, 20, 30, 40, 100}$],
[$#math.sigma$ Offloading Policies], [smartphone, wearable, hybrid],
[Device Count], [$300$],
[Random Seed], [$0, 1, #math.dots, 1000$],
)
]
]
#slide(title: "(Improved) Results")[
#align(center)[#text(size: 0.85em)[Traveled Distance (last 30 minutes)]]
#figure(
image("figures/travel_distance.svg")
)
The *hybrid* policy enables the user to travel longer distances before the battery runs out.
The *wearable* only policy makes vane the $#math.beta$ thresholds reulting in the worst performance.
]
#slide(title: "(Improved) Results")[
#table(
columns: 3,
stroke: none,
inset: 0.35em,
[
#align(center)[#text(size: 0.85em)[Charging Time]]
#figure(
image("figures/charging_time.svg")
)
],
[
#align(center)[#text(size: 0.85em)[Cloud Cost]]
#figure(
image("figures/cloud_cost.svg")
)
],
[
#align(center)[#text(size: 0.85em)[Power Consumption]]
#figure(
image("figures/power_consumption.svg")
)
],
)
The *wearable* only allocation is the worst (discharge faster).
The *hybrid* policy performs visibly better when when no other compensation strategy is in place.
It becomes less significant when behavior offloading is enabled: the _cloud compensate_ similarly for battery discharge (but at a higher cost).
]
#new-section-slide("Macro-program Partitioning")
#slide(title: "Motivation")[
In the *ECC* we must deal with heterogeneity and different device capabilities.
#figure(
image("figures/ecc.svg", width: 80%)
)
A _macro-program_ partitioning is needed to cope with device capabilities (e.g resources constraints, sensors availability),
or simply for improve the system performance.
]
#slide(title: "Research Questions")[
- _RQ1_: *_How can a macro-programmed system with local and collective services be modularised to favour its deployment on heterogeneous multi-tier infrastructures?_*
- _RQ2_: *_How can execution be coordinated to preserve the functional correctness at the system-level (w.r.t. the deployment of a monolithic macro-program)?_*
- _RQ3_: *_Does the approach offer non-functional benefits?_*
]
#slide(title: "Partitioning Model")[
= System Model
- _Physical System_: a network of *physical devices* capable of exchanging data according to a _physical neighbourhood_ relation
- _Macro-program_: the application logic executed by a subset of the _physical devices_ called *application devices*
- _Infrastructural devices_: a subset of the _physical devices_ that *can support the execution* of some computation on behalf of some application devices
]
#slide(title: "Partitioning Model")[
= Macro-programming Model
- _Macro-program as a DAG_: the macro-program is a set of *components* connected to each other via *bindings*
- _Component_: an atomic functional unit of the macro-program taking a *list of inputs* and producing a *single output*; such values are received and produced via *ports*
- _Binding_: indicates that the output _port_ of a component is connected to one or more input _ports_ of other components
- _Collective Component_: if it *requires interaction* with instances of the same component in *neighbouring devices*
- _Local Component_: if its execution is just a transformation of *local* inputs to *local* outputs
]
#slide(title: "Partitioned Macro-program")[
#figure(
image("figures/macro-system-definition-2.svg")
)
]
#slide(title: "Forward Reference")[
There may be cases in which an _application device_ cannot execute all the component instances of the macro-program.
In this case, the component instance must be *forwarded* to an _infrastructural device_.
Such *forwarding* can be iterative, determining a *forwarding chain* from the owner (i.e. the _application device_) to the surrogate _infrastructural device_.
]
#slide(title: "Deployment Perspective")[
#figure(
image("figures/deployment-mapping-diagram.svg")
)
]
#slide(title: "Deployment Independence and Self-stabilisation")[
We provided a formal definition of the model via an operational semantics,
and we proved that the model supports *deployment independence* and *self-stabilisation*.
== Simulation setup
We simulated a rescue scenario in a _city event_ with real GPS traces to empirically prove _functional correctness_ and _non-functional benefits_ (_RQ2 and RQ3_).
]
#slide(title: "Evaluation: Functional Correctness")[
#figure(
image("figures/gradient_convergence.svg")
)
#figure(
image("figures/gradient_convergence_error.svg")
)
]
#slide(title: "Evaluation: Power Consumption and Message Overhead")[
#figure(
image("figures/power_consumption_modularisation.svg")
)
The modularised approach, while increasing the message overhead,
reduces the devices' battery consumption.
]
|
https://github.com/WalrusGumboot/Typst-documents | https://raw.githubusercontent.com/WalrusGumboot/Typst-documents/main/aca/subdir/lie-algebras.typ | typst | #import "../template.typ": *
#show: doc.with(
title: "Lie-algebra's",
course: "Algebraïsche structuren"
)
= Definition
Def. Let $F$ be a field. A _Lie algebra_ $L$ is a vector space together with a bilinear map called the Lie bracket, $[dot, dot]: L times L -> L$, such that the following statements hold:
+ The map is _alternating_: $forall x in L: [x, x] = 0$
+ The _Jacobi identity_ holds: $forall x, y, z in L: [x, [y, z]] + [z, [x, y]] + [y, [z, x]] = 0$
= Consequences of the definition
Prop. If $op("char")(F) eq.not 2$, then $[x, y] = -[y, x] forall x, y in L$. \
Proof. $ [y, x] &= [y, x] - [x + y, x + y] & " (adding zero)" \ &= [y, x] - [x, x] - [x, y] - [y, x] - [y, y] & " (distributivity of the Lie bracket)" \ &= [y, x] - [y, x] - [x, y] & " (dropping Lie brackets with equal arguments)" \ &= -[x, y] \
qed $
Conversely, we also have Prop. $[x, x] = -[x, x] <=> 2[x, x] = 0 <=> op("char")(F) = 2 or [x, x] = 0$.
= Important Lie algebras
== The general linear group
Def. The _general linear group_ of a unitary ring $R$ and of order $n$ consists of the invertible $n times n$ matrices with entries in $R$, with matrix multiplication as the group operation.
Not. $op("GL")_n (R)$
Prop. If $R$ is not only a unitary ring but also a field, then its general linear group is also a Lie algebra, with the commutator #footnote([Def. The commutator of two matrices $A$ and $B$ equals $A B - B A$, for all matrices for which this definition makes sense.]) as the Lie bracket. \
Proof. In order to be a Lie algebra, the commutator must be alternating and satisfy the Jacobi identity. The first is trivial to show:
$ [A, A] = A^2 - A^2 = 0 $
The Jacobi identity
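can be verified by a direct computation; a minimal sketch (expanding each commutator, all twelve products cancel pairwise):
$ [A, [B, C]] + [C, [A, B]] + [B, [C, A]] &= A(B C - C B) - (B C - C B)A \ &+ C(A B - B A) - (A B - B A)C \ &+ B(C A - A C) - (C A - A C)B \ &= 0 $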
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/container_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test box in paragraph.
A #box[B \ C] D.
// Test box with height.
Spaced \
#box(height: 0.5cm) \
Apart
|
https://github.com/qujihan/toydb-book | https://raw.githubusercontent.com/qujihan/toydb-book/main/src/chapter4/engine.typ | typst | #import "../../typst-book-template/book.typ": *
#let path-prefix = figure-root-path + "src/pics/"
== Engine
=== SQL引擎的Engine接口
#code("", "")[
```rust
```
]
=== 本地存储的SQL引擎
#code("", "")[
```rust
```
]
=== 基于Raft的分布式SQL引擎
#code("", "")[
```rust
```
]
=== Session
#code("", "")[
```rust
```
]
=== 总结 |
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/045%20-%20Kamigawa%3A%20Neon%20Dynasty/003_Episode%202%3A%20Lies%2C%20Promises%2C%20and%20Neon%20Flames.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Episode 2: Lies, Promises, and Neon Flames",
set_name: "Kamigawa: Neon Dynasty",
story_date: datetime(day: 24, month: 01, year: 2022),
author: "<NAME>",
doc
)
Ten years had passed since Kaito left the walls of Eiganjo, but some habits were hard to break.
Rain danced across the roof tiles like a song. Kaito leaned forward, brows pinched as he searched the streets below.
Towashi was filled with even more color than usual. A parade of modified umbrellas seemed to float across the pavement, each of them glowing with a shield of neon energy that kept the people beneath them dry. A glass panel crackled with life as the evening menu for a teahouse rolled across the screen. Overhead, a pair of massive, fiery orange koi flicked their silken tails as they swam toward an ocean of sky and starlight.
Normally, Kaito loved the vibrancy of Towashi at night, but he didn't have time for nostalgia. He was looking for someone.
#figure(image("003_Episode 2: Lies, Promises, and Neon Flames/01.jpg", width: 100%), caption: [Kaito Shizuki | Art by: Yongjae Choi], supplement: none, numbering: none)
Kaito pressed a finger to his temple, and the live feed from his drone appeared in his vision. The device was hovering above a darkened alley, and Kaito ushered the tanuki-shaped drone toward the bustling rainbow-hued street.
A decade in technological advancements meant it was far more sophisticated than the origami crane drone from Kaito's childhood. The crane had been easy to recycle. But Kaito wasn't sure he'd ever be able to replace his current drone, no matter how outdated she would someday become.
The tanuki and Kaito had too much history.
Tameshi warned him not to get attached to a single piece of tech. #emph["Everything new someday becomes old," ] his friend had said.
Most of the time Kaito was happy to listen to Tameshi's wisdom. He'd been both a friend and a mentor over the years. But Tameshi had also spent his life trying not to get attached to anyone, or anything.
Kaito was the opposite. He felt tethered to the people he cared about most, and he'd do anything to protect those bonds.
The tanuki drone, Himoto, was more than just a piece of tech—she represented the kami who had changed Kaito's life forever.
It was also a reminder that his friend was still missing.
The drone stilled near a corner before making her way down a row of street venders, all of them sheltered by glowing canopies edged with hanging lanterns.
A Kami of Street Vendors sat moodily at the edge of a counter, its dough-like face puckered into a frown. Three gyoza hovered around him like they were teetering on the edge of hope.
The nearby vendor scooped a generous portion of ramen into a bowl before adding bright pink and white fishcake, slices of boiled egg, and a sprinkle of green onion across the top—all cooked to perfection. He passed the meal to a waiting customer, and the kami let out a disappointed moan.
Even through the drone's camera, Kaito caught the worry in the vendor's eye. Kami were everywhere on Kamigawa, but not everyone wanted to eat a meal beside a spirit with a temper. Sometimes it was better to keep them happy.
With a sigh, the man reached behind the counter for his own bowl of bloated noodles that had been sitting in hot broth for far too long. The kami's pudgy face morphed into pure delight, and the dish barely touched the counter before he threw his face into the ramen and slurped furiously.
The vendor rolled his eyes and turned to his next customer.
Kaito nudged the drone skyward toward the height of the nearby buildings. With a bird's-eye view, they scanned the city, searching the alleys one by one until he spotted a group of Imperial samurai standing outside of an apartment. Most of the upper windows were sealed shut with their curtains drawn, but one had been left slightly ajar. The gap was so small, it was hardly noticeable to the average person.
Kaito cracked a half-smile. He didn't see the impossible—he saw an invitation.
He tapped his temple again, shutting off the drone's visuals, and slid down the roof tiles and onto the balcony below. Following the railing to the corner of the building, Kaito slipped through the bars and scaled the wall back down to the pavement, losing himself in the nearby crowd.
Nobody noticed the drone approaching, or how Kaito snatched the device from the air in one fluid motion. In his hands, the metal shifted like paper, folding and refolding again until she became a mask.
The embodiment of the tanuki-shaped kami that caused Kaito's spark to ignite.
He'd become a planeswalker that day. Someone destined for greatness even beyond the borders of Kamigawa.
Kaito brushed a finger against the edge of the device. He'd followed the Kami of the Spark all the way to Boseiju hoping to find the emperor. But it wasn't his friend who was waiting for him in the forest district—it was his fate.
He shrugged against the rain, pressed the mask to his face, and straightened his hood. Kaito didn't care about greatness; he just wanted his friend back.
Kaito left the streets for the shadows like an undetectable wraith. Turning corner after corner, he followed the alleyways toward the heart of Towashi. When he reached the looming apartment building, he began to climb.
The Imperial samurai were posted at the door. He could hear the shift of their metal armor every time they moved. Armor made for battle, not stealth.
#emph[So much unnecessary noise] , Kaito thought to himself before swinging over the ledge and planting his feet on the small balcony.
Time had changed him. He was no longer a child, or an Imperial.
But he was still the Kaito who climbed across rooftops and snuck through windows he wasn't supposed to.
Pushing the glass with his gloved hand, Kaito slipped into the room without making a sound.
The bedroom crackled with firelight. Kaito sensed movement, gaze snapping toward the hearth. He expected to find a kami nestled inside the flame, but there was no one there. Just an amber glow that caused shadows to flicker across the wall.
"Don't worry," a familiar voice said. "We're alone."
Kaito lifted a brow but didn't turn. "It was thoughtful of you to leave the window open."
There was an irritated huff. "We both know you weren't planning on using the front door. If your interplanar travels didn't teach you a thing about etiquette, I'm sure it's pointless to even hope."
"If I really wanted to learn about etiquette," Kaito began, turning to face his sister, "I'd have come straight to you."
Eiko's lips curled into a smile. "It's good to see you."
Kaito pushed his tanuki mask behind his head. "It's good to see you, too." He hesitated, trying to piece together the joy of seeing his sister with the reality of who she grew up to become. He didn't hate that she was an Imperial, but he hated the distance that came with it. "I know you don't leave the palace much these days."
"It's difficult," Eiko admitted. "Risona has supporters all over Kamigawa—not just the Asari Uprisers in the Sokenzan mountains, but spies in Towashi and the Undercity, too. It~isn't always safe for someone like me to travel without protection."
"I remember you being quite capable of taking care of yourself," Kaito countered.
"I'm an Imperial advisor now. There is more to consider than just myself," she said quietly.
Kaito twisted his mouth. Her words reminded him of someone. Someone he hadn't spoken to since the day he fled Eiganjo. "Is Light-Paws here with you?"
Their paths hadn't crossed in a decade. Kaito wasn't #emph[intentionally] avoiding his former teacher, but he also wasn't looking for a reunion. Because when it came to Light-Paws, Kaito still felt the hurt like a bruise beneath the surface.
Eiko shook her head. "Light-Paws doesn't have time for diplomacy meetings. She's too busy trying to prevent a rebellion and keep the rest of the court from turning on each other in their bid to gain more power." There was an edge of frustration in her voice. "The Asari Uprisers grow bolder every day. The longer the throne remains empty, the more Risona is able to gain supporters who believe the empire should be abolished."
As much as Kaito believed the Imperials could be restrictive, he never wanted a complete end to the Imperial's guidance. They played an important role when it came to negotiating with the kami. And the last thing Kaito wanted was for Kamigawa to break out in violence.
"The emperor will come home." Kaito's throat tightened at the memory of what he lost. What #emph[Kamigawa ] lost.
He'd traveled across the Multiverse to find her, but he was no closer than he had been when his spark ignited. All Kaito knew was that the man with the metal arm was a planeswalker.
Which meant the emperor could be anywhere.
Eiko nodded solemnly. "I hope you're right. For Kamigawa's sake, and yours."
Kaito looked away. Sometimes, when the days were long and the evenings quiet, the ache in his heart felt just as powerful as it had been when he'd stood in Kyodai's chamber and realized his friend was gone.
But now was not the time for aching hearts. It had been months since he'd seen his sister—he didn't want to waste their time together being sad.
"You always did love to worry about me." He turned back, lifting a brow like he was baiting her. "Is that why you risked sending an Imperial drone all the way to Towashi?"
Eiko pursed her lips. Their dynamic was familiar—it made it easy to fall back into old habits. And banter had always been one of their favorites. "You are not the only one with eyes in this city, Kaito. If you were in trouble, I would know about it."
The words slipped out of Kaito's mouth too easily. "I didn't realize I'd have to leave this plane just to get a bit of privacy."
Eiko blinked. "You wouldn't leave again without saying goodbye." It wasn't a question; it was a reminder of Kaito's promise.
He didn't regret leaving the palace, but he did regret how he'd left Eiko without any warning at all.
They'd come a long way since then. Eiko had been in Boseiju with Kaito when they'd been tracking the Kami of the Spark. She was the only Imperial on Kamigawa who believed he was right about the man with the metal arm having something to do with the emperor's disappearance.
But some memories left their marks. And sometimes those marks stung.
Kaito lifted his shoulders, sheepish. "Oh, come on. You must've forgiven me by now. I basically saved your life in the forest district. #emph[Twice] ."
"That's not even close to what happened. You were barreling through kami territory without any sense at all. You're lucky you even #emph[survived] long enough to ignite that spark of yours."
"You always were my favorite kami diplomat."
"I'd say flattery will get you nowhere, but we both know that's pretty much the only card you have."
His laugh echoed through the space. "You wound me."
Fire flickered in Eiko's glimmering stare. "I've heard the rumors about Futurists looking into illegal bio-enhancements that can mess with the physics of reality. Perhaps you could ask your friends to help you grow thicker skin."
Kaito's smile faded slightly. Another memory that carried a sting. "We're not the villains, you know. We believe experimenting with technology is the way forward—not to hurt people, or start wars, but to help. To #emph[heal] ."
"We already #emph[have] tech for that."
"Yes, but who has access to it? Anyone who can't afford permits or upgraded motherboards has to hope for a kami's blessing to have any kind of power. And we both know how rare that is."
"Have you ever stopped to think that maybe not everyone #emph[should ] have access to power?" Eiko countered. "The Imperials are not trying to be regressive. But we have merge gates to build and protect, and kami who see our expanding cities as threats to their homes. What happens if they start to see our inventions as a threat to their existence?" She shook her head. "For the good of the mortal realm and the spirit realm, there needs to be balance."
"Giving power to a few will always create a divide. Technology levels the playing field. Not just for the wealthy and the elite, but for #emph[everyone] . We no longer have to rely on kami magic—we can protect ourselves."
"Who is it you need protecting from so badly? Because last I checked, the only people trying to start a war are the ones who want the same thing you do," Eiko remarked coolly.
"I do not support the Uprisers," Kaito said clearly. "But Kyodai hasn't been herself since the emperor vanished. What happens if something goes wrong with the merge gates and destructive kami are unleashed? It could take millennia for the mortal and spirit realms to become one. That's a thousand years of uncertainty. The Kami Wars may be legend, but can you honestly guarantee we won't see a threat like that again?"
Eiko stiffened. "The kami are not our enemy."
"We don't even #emph[know] who our enemy is." Kaito stilled, swallowing the lump in his throat. "The emperor was taken from Kyodai's temple, and no one even saw it happen."
He hadn't been able to stop it. He'd been too late, too weak, and too unprepared.
They all had.
Eiko's face hardened. "There was nothing you could've done that night."
"If we'd had better tech—"
"Unregulated tech might very well have been how the emperor was taken in the first place!" Eiko interjected, cheeks turning pink.
Kaito scowled. "The Imperials already blamed the Futurists—and the Uprisers—without a shred of evidence. They almost caused a #emph[war] ."
Eiko was silent for too many seconds.
Kaito saw the hesitation in his sister's eyes. She was holding something back. "What have you heard?" He blinked, hope crashing against his ribcage like a tidal wave. "Do you know where the emperor is?"
"No," Eiko said. "But I received some intelligence." Even in the firelight, Kaito could see the strain behind her eyes.
Whatever she knew, she wasn't supposed to tell Kaito.
He took a step forward, urgent. "If this is about the emperor—"
"It's about Tameshi," Eiko interrupted.
Kaito's thoughts stalled. He wasn't sure he'd heard her right. "What does Tameshi have to do with any of this?"
#figure(image("003_Episode 2: Lies, Promises, and Neon Flames/02.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none)
Eiko's eyes latched onto the doorway. Kaito had never seen her so nervous. She reached inside her robes and pulled out a small device shaped like a paper fan. With a brush of her thumb, the edges expanded into a small dome. Energy radiated outward, creating a cocoon of white light around the siblings.
"A noise suppressor?" Kaito folded his arms across his chest. "This must be serious if you don't even trust your guards."
Eiko took a slow breath. "My sources have been watching Tameshi for some time. He is involved in trading illegal merge studies involving kami, and—"
"If you think I'm going to provide information on my #emph[best friend] to the Imperials, you are sorely mistaken." Kaito gritted his teeth. "Eiko, you're my sister, and I love you. But if you're asking me to be your spy~"
"I'm not telling you this because I want your help. I'm not supposed to be telling you this #emph[at all] . But—" Eiko pinched the bridge of her nose. "It's not just the merge studies. It's who he was with." She sighed, dropping her hand. "Tameshi was seen meeting with a man in the Undercity. A man with a metal arm."
The fire snapped behind him, but Kaito could feel embers burning in his own chest. A flame of defiance. "Whatever your sources think they saw, they're wrong."
Tameshi had met them both in the forest district. He knew about the emperor, and the kami, and the man Kaito was searching for. There wasn't a single secret Kaito kept from his friend—not then, and not now.
It didn't make sense that Tameshi would hide something so important. He #emph[couldn't ] have.
"Believe me, I would never share intel with you if I wasn't absolutely certain it was true." Eiko's shoulders fell. "Especially not about this."
Kaito's voice was hollow. "Tameshi wouldn't betray me."
"I'm telling you what I know," Eiko said. "Ten years ago, you swore you would never stop looking for the emperor. If Tameshi knows the man with the metal arm and never told you, don't you want to find out why?"
"I won't feed you intel," Kaito said seriously. "And I won't turn in my friend to be used as a scapegoat."
"I'm not asking you to. We want the same thing—to find the emperor, as soon as possible. I'm simply giving you the information you need to do the right thing. To find out the #emph[truth] ."
Kaito looked away bitterly. There must've been another explanation. Tameshi would never lie to him. He would never betray him.
Would he?
"Kamigawa needs a ruler," Eiko said carefully. "Someone to restore the balance between our people."
Kaito tilted his chin, facing his sister. "I don't believe a throne is the key to restoring balance. But I will do whatever it takes to find the emperor."
He'd trusted Tameshi for a long time.
But he'd trusted Eiko much longer.
Whether her intel was right or wrong, Kaito believed in his sister enough to follow the trail. And if spying on his best friend was the only way to prove his innocence, then Tameshi would just have to forgive him.
Kaito moved for the window, leaving the protection of the dome behind him.
Eiko flicked the metal fan closed before tucking it back into her robes. "I'm sorry, Kaito." He paused near the ledge, listening to her somber voice. "I know what he means to you."
Kaito didn't want to believe it. But if Tameshi really was working with the man with the metal arm, and if he'd had knowledge of what happened to the emperor all this time~
Then maybe their friendship was never what Kaito had thought.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Kaito stood at the edge of the alley, back pressed against the stone wall. Otawara was high up in the clouds, and sometimes Kaito was convinced the floating city had fewer shadows because of it, and far fewer places to hide.
His tanuki drone was too recognizable to be any help around Tameshi, so Kaito had to tail him the old-fashioned way—with his eyes, ears, and pure determination. As the weeks passed and he continued to gather information, Kaito found himself hating the new mask he wore.
The mask of a liar.
It hurt to teeter on the edge of betraying his friend. More than once, Kaito nearly convinced himself it was all a misunderstanding. That Eiko has been wrong, and that it #emph[couldn't] have been the same planeswalker he saw in Kyodai's temple.
But every piece of intel Kaito uncovered only confirmed what his sister had told him.
Tameshi was hiding something. Not just from Kaito, but from the other Futurists, too.
#figure(image("003_Episode 2: Lies, Promises, and Neon Flames/03.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none)
Kaito watched him day after day, working at the Futurist department like his beliefs were aligned with the rest of Otawara. And then the sun would set, and when everyone else went home, Tameshi stayed inside his lab, carrying on with a project no one else in the compound seemed to know about.
Kaito had seen Tameshi making handshake deals in dark alleys for stolen power-enhancers. He'd watched his friend sneak boxes of cargo into his secretive workplace in the dead of night. And he'd watched the unregulated drones leave the compound and head straight for the Towashi Undercity.
It was evidence Tameshi was doing something illegal. But Kaito needed proof of his #emph[betrayal] .
Tameshi appeared in the compound's main entrance, drone already unfolded in his hand. With a gentle push, he sent the ribbon-like dragon into the air, watching as it arched beyond the trees in search of the ground below. He checked the panel on his sleeve, observing the time, and in one graceful step, he flew into the air and vanished toward the horizon.
Kaito didn't wait long. There was no one else around; Tameshi's colleagues had gone home hours before. He removed a device from his belt—a small throwing star that glowed blue along the edges—and aimed for the nearest security camera. He flicked his wrist sharply, and the star spun through the air before attaching itself to the camera's side. It blinked once. Twice.
And then the glowing edges turned green.
Masked with his hood pulled tight, Kaito left the alley and walked straight through the front doors of Tameshi's compound. No camera would see him now—not with the device freezing the footage.
But he needed to be quick. There was no way of knowing when Tameshi would be back.
The tiled floors echoed like bone, cold and empty. Kaito fought the chill running through him. It wasn't his first time inside the compound, but it was the first time he felt like an outsider.
#emph[Like a traitor] , his mind hummed, and he tried to push it away.
If Kaito was wrong about Tameshi, he would accept the consequences that might come his way. But if Tameshi had lied to him all this time~
Kaito wasn't ready to face the truth, but for the emperor's sake, he would do it anyway.
He passed the metal door leading to Tameshi's lab without a second glance. There was no point trying to break in—not without a key card, or a giant mech.
And Kaito didn't think blowing the lab to pieces would do much good from an evidence standpoint.
Instead, Kaito headed straight for Tameshi's office. An enormous desk sat in the center, toppled with a combination of data chips and paperwork. A round lantern sat in the corner, and Kaito pressed a finger against the side to bring it to life. Using his telekinesis, Kaito lifted the lantern into the air until it hovered steadily beside him.
He looked through every drawer and cabinet in the room and went over every scrap of paper on Tameshi's desk. Most of it was unimportant, but there was a coded message that Kaito found tucked beneath a file. It didn't take Kaito long to decipher it—he'd learned every trick Tameshi was willing to teach him.
It was vague as far as messages went, but it seemed to be a request to meet in the Undercity. It was a long way for a meeting; the Undercity was on the surface, wedged between the shadows of Boseiju—the oldest and largest living tree on Kamigawa—and Towashi's skyscrapers. Certainly not a practical location for run-of-the-mill business.
To go so far from Otawara~implied an even greater need for secrecy than the average Futurist required.
Kaito drummed his fingers along the edge of the desk, frowning at the message as he double-checked the date and time. The meeting was happening tonight, and soon—it must have been where Tameshi was headed.
But who was he meeting? And why?
As Kaito began to stand, he spotted a data chip tucked behind some of the paperwork. He connected it with his tanuki mask, took a few moments to bypass the encryption, and watched as the image appeared on the inside panel.
It was a blueprint of a strange device, thin and square, with wire-like arms reaching out of it like a jellyfish. Kaito had never seen anything like it before.
#figure(image("003_Episode 2: Lies, Promises, and Neon Flames/04.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none)
But it still didn't prove that Tameshi knew the man with the metal arm.
For that, Kaito would have to crash a meeting in the Undercity.
Kaito removed the data chip from his tanuki mask and tucked it beneath the pages. He stepped away from the desk and flexed his fingers, sending the lantern back into its cradle, and hurried outside.
When he was safely through the doors, Kaito flicked his fingers, and the device unlatched from the camera and floated back to him. He plucked it from the air, shoved it in his belt, and headed for the sky ferry.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Kaito skirted through the belly of the Undercity with ease. Even if it hadn't been the middle of the night, no sunlight ever made it to the surface, and the neon city lights didn't seem to reach the narrower streets. The murky waterways were blanketed with cherry blossoms, but even the charming aesthetics couldn't mask the stench of sweat and sewage in the air.
It didn't take long to track Tameshi—moonfolk didn't often venture into the Undercity, and more than one stranger was happy to trade intel for a small fee.
Kaito followed his friend's trail all the way to the docks. Light sources were scarce at the edge of the city, and the canal water was nearly pitch-black. Kaito could taste an unpleasant sourness on his lips—the chemical stain that came from being so close to the Undercity sewers.
He grimaced, keeping his hands loose at his sides, ready to reach for his sword at the first sign of trouble. He didn't know what he'd find, but if he saw the man with the metal arm again, he wasn't planning on letting him get away.
The steam from the nearby vents helped mask Kaito's footsteps as he wandered across the edge of the docks. Metal containers were stacked in uniform rows, offering plenty of places to hide. But Kaito's attention was pinned on the warehouse up ahead, where light spilled out of a pair of wide bay doors.
Kaito removed his mask, letting Himoto transform back into a four-legged drone. Silently, she flew toward the warehouse. At the same time, Kaito reached for the sword at his back. With a flick of the handle, the edges expanded into two rows of sharp, jagged teeth.
Tightening his grip, Kaito crept after the drone and pressed a finger to his temple.
Inside the warehouse, the tanuki drone floated toward a dark space above the rafters. Metal crates filled most of the room, but at the far end was an open area filled with tables and lab equipment. Glowing beakers were contorted around one another like a maze of glass. Some of them bubbled with neon liquid, and others sparked with energy. Surgical instruments littered the surface—there were oddly shaped knives that looked like wide triangles, and others that were thin as twigs. Smaller beakers filled with strange, metallic-hued serums were laid out on a worktable, and fragments of metal and frayed wires surrounded each other like puzzle pieces that didn't quite fit.
Unease ran through Kaito's bones. These experiments~they looked nothing like the Futurist work he'd seen on Otawara. Or #emph[anywhere ] on Kamigawa, for that matter.
Kaito ushered Himoto closer to the lab equipment when an enormous shadow made him stop. He could hear voices from the doorway. An argument that had long been underway.
Even with the drone's camera, all Kaito could see below was a too-large shadow. When it shifted, Kaito heard an unnatural tremor in the figure's voice, like metal grating against metal.
"The fleshling's previous utility is irrelevant. Doubt and weakness must be excised from the whole."
#figure(image("003_Episode 2: Lies, Promises, and Neon Flames/05.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none)
There was a shuffle of footsteps near one of the containers, and a gruff voice mumbled something Kaito couldn't hear.
The enormous figure shifted once more, stepping further into the light. Kaito held his breath as a strange monster appeared. Its body was made of chrome, with clawed arms and a curved spine. Exposed ribs and pointed vertebrae were displayed like metalwork. His face and mouth were monstrous and bird-like, with too many sharp points and long, flat teeth that made Kaito stumble backward, even outside the warehouse.
This wasn't the man with the metal arm Kaito had seen in Kyodai's temple. This was something else entirely.
The creature moved across the space with an unnatural gait, and Kaito pulled his drone further into the shadows for cover.
"Finalize collecting the necessary materials and transfer the acquired specimens." The monster turned, jaws opening as the metallic rattle of his voice made Kaito's stomach go hollow. "Do not slow the progress. The subjects regaining a fully conscious state is an inevitable outcome."
A group of henchmen appeared on the drone's camera. Reckoners, by the look of their clothing. There were nearly a dozen of them, hurriedly shuffling equipment from the table and into a waiting vehicle. They left the beakers behind, still bubbling with color, and began shifting one of the larger containers toward the vehicle instead.
#emph[The perfect size to store some kind of weapon] , Kaito thought blandly. #emph[What has Tameshi gotten involved in?]
The henchman had the cargo partially loaded into the vehicle when a scream rang out from inside. Kaito's heart pinched at the thought of the emperor in a cage somewhere, but the noises inside the cargo were more animal than human.
They sounded like #emph[kami] .
He wanted to investigate. He knew whatever was inside the container could provide answers about what was going on in the warehouse.
But there was no time, and Kaito couldn't take on a dozen Reckoners and a monster all by himself.
"What do we do about the beakers?" one of the henchmen asked.
The creature made its way toward the vehicle without looking back. "All evidence must be eradicated. Expansion of the work will continue in a more optimal location, courtesy of the fleshling."
With a sneer, the Reckoner turned, picked up one of the solo beakers, and threw it against the rest of the glass with force.
The explosion made Kaito jump in alarm, grip tightening around his sword. He blinked quickly, pressing his temple to call his drone back, and listened to the rumble of the vehicle as it sped away.
The angry snap and hiss of the fire was a warning.
But there was also evidence inside. Evidence that was being destroyed.
Kaito ran into the warehouse without another thought, hurrying toward the colorful flames that were already climbing up the side of the warehouse. In a few more minutes, the entire building would be engulfed.
His eyes scanned the tables for something to grab—something that could help him—but everything that was left behind was already burning, too fierce and bright to stop.
Kaito's shoulders fell, just as a moan sounded somewhere behind him.
He spun, sword raised, and saw Tameshi slumped in the corner beside one of the containers. The fabric of his robes had been slashed across the middle, clawed by a metal arm. His face had paled to a shade Kaito had never seen on his friend before. And all around him, blood began to pool.
Tameshi was mortally wounded.
Shoving his sword back into its hilt, Kaito ran to his friend's side and sank to his knees. There were so many things he wanted to say. So many things he wanted to ask.
But in that moment, his heart felt as shattered as the glass burning behind him, and he didn't have any words at all.
Tameshi lifted his head, eyes fighting to stay open. "Kaito~"
Kaito shook his head again and again. This couldn't be happening. He would not lose another friend.
But death had other plans.
Tameshi's voice was as faint as ash. "I—I've made so many mistakes. But lying to you was the worst of them."
Kaito felt the fire roar behind him. He couldn't move Tameshi—it would only speed up the inevitable. And if seconds were all he had left~
He wanted to tell him not to worry. To give him peace and forgiveness in those final moments. To give him everything a friend deserved.
But he had another friend, too. And there was still time to help her.
"Tell me what you know about the emperor," Kaito begged, fighting the sting in his eyes. "How do you know the man with the metal arm?"
Tameshi's eyes fluttered.
"No!" Kaito grabbed his friend's robes and tugged. "Don't you go yet. Not without telling me the truth."
Tameshi's last breath halted like the last whisper of a fight. He stared up at Kaito with so much broken, irreparable regret.
He was out of time.
"Tezzeret," Tameshi whispered like he was breaking a spell. And then he was gone.
Kaito's cry became a choked gasp, and the tears streaming down his cheeks felt like they were being seared into his skin. He felt the heat reach his back, the fire growing dangerously close.
Clenching his teeth, Kaito pressed his hand to Tameshi's eyes and mouthed a silent goodbye. Even though it felt wrong, he reached into his friend's pocket and removed the key card to his lab.
Tameshi may be dead, but Kaito had more to do.
His tanuki drone appeared at his side, folding itself back into a mask. Kaito covered his face and stood over his friend's body, fighting the anguish that rocked his shoulders, and walked away.
Behind him, the warehouse blazed.
|
|
https://github.com/PmaFynn/cv | https://raw.githubusercontent.com/PmaFynn/cv/master/src/content/en/skills.typ | typst | The Unlicense | #import "../../template.typ": *
#cvSection("Technical Skills")
/*
#cvSkill(
type: [Languages],
tags: ("Python", "TypeScript", "React", "HTML + CSS")
)
#divider()
#cvSkill(
type: [Technologies],
tags: ("Unix", "Git", "PostgreSQL")
)
*/
#skill("SQL (PostgreSQL, Oracle)", 4)
#skill("Rust, Python, Java, R", 3)
#skill("Web Technologies", 3)
#divider()
#skill("Git", 4)
#skill("Docker", 2)
#divider()
#skill("Power Automate", 5)
#skill("Sharepoint", 5)
#skill("MS Office", 4)
#skill("LaTeX, Typst", 4)
|
https://github.com/Alignof/typst_template | https://raw.githubusercontent.com/Alignof/typst_template/master/style/article.typ | typst | MIT License | // This function gets your whole document as its `body` and formats
// it as an article in the style of the IEEE.
#let quote_block(body) = {
block(
width: 100%,
fill: silver,
inset: 8pt,
body
)
}
#let terminal(body) = {
block(
width: 100%,
fill: black,
inset: 8pt,
text(white, body)
)
}
#let style(
// The paper's title.
title: "Paper Title",
// An array of authors. For each author you can specify a name,
// department, organization, location, and email. Everything but
// but the name is optional.
authors: (),
// The paper's abstract. Can be omitted if you don't have one.
abstract: none,
// A list of index terms to display after the abstract.
index-terms: (),
// The article's paper size. Also affects the margins.
paper-size: "a4",
// The path to a bibliography file if you want to cite some external
// works.
bibliography-file: none,
// The paper's content.
body
) = {
// Set document metadata.
set document(title: title, author: authors.map(author => author.name))
// Set the body font.
set text(lang: "ja", size: 10pt, font: "Noto Serif CJK JP")
// Set image size.
set image(width: 80%)
// Set table caption upper.
show figure.where(
kind: table
): set figure.caption(position: top)
// Configure the page.
set page(
paper: paper-size,
// The margins depend on the paper size.
margin: if paper-size == "a4" {
(x: 41.5pt, top: 80.51pt, bottom: 89.51pt)
} else {
(
x: (50pt / 216mm) * 100%,
top: (55pt / 279mm) * 100%,
bottom: (64pt / 279mm) * 100%,
)
},
header: align(left, text(8pt)[
xxx学会論文誌
]),
footer: align(left, text(8pt)[
$copyright$2023 xyz Society of Japan
]),
numbering: "1",
)
set raw(theme: "../monokai.tmTheme", tab-size: 4)
show raw: it => block(
width: 100%,
fill: rgb("#1d2433"),
inset: 8pt,
radius: 5pt,
text(fill: rgb("#c2cacc"), it)
)
show raw.line: it => {
box(
width: 100%,
align(horizon, stack(
dir: ltr,
box(
width: 17pt,
inset: (x: 7pt),
align(right, text(white)[
#it.number
]
)),
it.body,
))
)
}
// Configure equation numbering and spacing.
set math.equation(numbering: "(1)")
show math.equation: set block(spacing: 0.65em)
// Configure lists.
set enum(indent: 10pt, body-indent: 9pt)
set list(indent: 10pt, body-indent: 9pt)
// Configure headings.
set heading(numbering: "1.")
// Display the paper's title.
v(3pt, weak: true)
align(center, text(18pt)[*#title*])
v(8.35mm, weak: true)
// Display the author name
v(20pt, weak: true)
align(center, text(16pt)[
#authors.map(author => text()[*#author.name* #footnote(numbering: "*")[#author.organization, #author.email]]).join(", ")
])
set par(justify: true, first-line-indent: 1em)
show par: set block(spacing: 0.65em)
// Display abstract and index terms.
if abstract != none [
#set text(size: 9pt, weight: 500)
#v(15pt)
#align(center)[
#box(width: 80%)[ #align(left)[
#set par(justify: false)
*概要*: #abstract
]]
#box(width: 80%)[ #align(left)[
#if index-terms != () [
#v(5pt)
*_キーワード_*---#index-terms.join(", ")
]
]]
]
#v(2pt)
]
v(15pt)
// Display the paper's contents.
body
// Display bibliography.
if bibliography-file != none {
show bibliography: set text(8pt)
set text(lang: "en")
bibliography(bibliography-file, title: text(10pt)[References], style: "ieee")
}
}
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/math/multiline-05.typ | typst | Other | // Test multiline subscript.
$ sum_(n in NN \ n <= 5) n = (5(5+1))/2 = 15 $
|
https://github.com/GermanHeim/Informe-Typst-Template | https://raw.githubusercontent.com/GermanHeim/Informe-Typst-Template/main/main.typ | typst | #import "template.typ": *
#show: project.with(
title: "Informe del Trabajo Practico N° 1",
authors: (
"Frayssinet, <NAME>",
"<NAME>",
"Heim, <NAME>",
),
// Insert your abstract after the colon, wrapped in brackets.
// Example: `abstract: [This is my abstract...]`
abstract: lorem(59),
date: "Agosto 22, 2023", // Cambiar esto
)
/* Useful packages
#import "@preview/whalogen:0.1.0": * // Chemistry
#import "@preview/tablex:0.0.5": * // Tables
*/
// Objectives
#include "./secciones/objetivos.typ"
// Materials and methodology
#include "./secciones/materiales_y_metodologias.typ"
// Results and discussion
#include "./secciones/resultados_y_discusion.typ"
// Conclusions
#include "./secciones/conclusiones.typ"
// Bibliography
#show bibliography: set heading(numbering: "1.")
#bibliography("./bibliografia.yml", title: "Bibliografia")
#pagebreak()
// Appendices
#include "./secciones/anexos.typ"
|
|
https://github.com/ChHecker/unify | https://raw.githubusercontent.com/ChHecker/unify/main/examples/example.typ | typst | MIT License | #import "../lib.typ": *
#set text(lang:"en")
Working with english characters:
$ num("-1.32865+-0.50273e-6") $
$ qty("1.3+1.2-0.3e3", "erg/cm^2/s", space: "#h(2mm)") $
$ numrange("1,1238e-2", "3,0868e5", thousandsep: "'") $
$ qtyrange("1e3", "2e3", "meter per second squared", per: "/", delimiter: "\"to\"") $
#set text(lang:"ru")
Работа пакета с русскими символами:
$ num("-1.32865+-0.50273e-6") $
$ qty("1.3+1.2-0.3e3", "erg/cm^2/s", space: "#h(2mm)") $
$ numrange("1,1238e-2", "3,0868e5", thousandsep: "'") $
$ qtyrange("1e3", "2e3", "meter per second squared", per: "/", delimiter: "\"до\"") $
#set text(lang:"xx")
Working with undefined characters (English by default)
$ num("-1.32865+-0.50273e-6") $
$ qty("1.3+1.2-0.3e3", "erg/cm^2/s", space: "#h(2mm)") $
$ numrange("1,1238e-2", "3,0868e5", thousandsep: "'") $
$ qtyrange("1e3", "2e3", "meter per second squared", per: "/", delimiter: "\"to\"") $
Working with monetary unit:
$ qty("55.36", "usd") $
$ qty("1150,57", "eur") $
$ qty("1000000", "yen") $
|
https://github.com/Ttajika/auto-ref-numbery | https://raw.githubusercontent.com/Ttajika/auto-ref-numbery/main/readme.md | markdown | MIT License | # auto-ref-numbery
A package for numbering equations only when they are referred to.
## Usage
_This is not an official package._
1. Download the package:
Download the folder "0.0.1" and store it in {data-dir}/typst/packages/local/auto-ref-numbery
{data-dir} is
* $XDG_DATA_HOME or ~/.local/share on Linux
* ~/Library/Application Support on macOS
* %APPDATA% on Windows
1. Import the package
```typst
#import "@local/auto-ref-numbery:0.0.1": *
```
1. Apply the reference rule:
```typst
#show ref: it => eq_refstyle(it)
```
## Functions
### auto-numbering-equation
The function is named _auto-numbering-equation_, or _aeq_ for short.
#### Example
```
The number appears only when referenced as
#aeq(<sum>)[
$
sum_(i=1)^n i = n(n+1)/2.
$
]
@sum is an equation
```

#### Parameters
This function requires a label in its first argument. The type of label should be either _str_ or _label_.
```typst
aeq(
[str][label],
name: [str]
tag: [bool],
[content]
) -> [content]
```
##### tag [bool]
Sets a tag instead of an equation number.
Default: false
##### name [str]
Specifies the name of the tag. If none is provided, the tag's name will be the first argument.
Default: none
example:
```typst
#aeq(<ic>,name:"IC")[
$
u(x,theta)>= u(x,theta')
$
]
@ic is referred.
```

##### body [content]
The content of the equation.
### eq_refstyle
This function modifies the ref function to work with auto-numbering equations.
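It is applied as a show rule, as in the Usage section above; for instance:

```typst
#show ref: it => eq_refstyle(it)

#aeq(<sum>)[
  $
  sum_(i=1)^n i = n(n+1)/2.
  $
]

@sum is an equation
```

With this rule in place, `aeq` equations only receive a number when a matching `@label` reference appears somewhere in the document.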
### refs
This function is used to reference multiple labels.
#### example
```typst
#figure(caption:"")[#table([],[])]<b>
#figure(caption:"")[#table([],[])]<c>
#figure(caption:"")[#table([],[])]<d>
@b is a table.
#refs(<b>,<c>) are tables.
#refs(<b>, <c>, <d>) are three tables.
```

#### parameters
```
refs(
..[label],
dict: [dictionary],
add: [str],
comma: [str]
) -> content
```
##### dict
A dictionary for plurals.
Default: plurals_dic
`plurals_dic` is a predefined dictionary as follows:
```typst
plurals_dic = (
"Proposition": "Propositions",
"Theorem":"Theorems",
"Lemma":"Lemmata",
"Definition":"Definitions",
"Table":"Tables",
"Assumption":"Assumptions",
"Figure":"Figures",
"Example": "Examples",
"Fact":"Facts",
)
```
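
Passing `dict:` replaces this default dictionary, so custom plural forms can be supplied. A minimal sketch (the `<claim1>`/`<claim2>` labels and the `"Claim"` entry are illustrative, not part of the package):

```typst
#refs(<claim1>, <claim2>, dict: ("Claim": "Claims"))
```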
##### add
Specifies the conjunction name.
Default: "and"
example:
```
#refs(<b>, <c>, <d>, add:"or") are three tables.
```
##### comma
Specifies the separator between items.
Default: ", "
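
Both options can be combined; for instance, reusing the table labels from the example above:

```typst
#refs(<b>, <c>, <d>, add: "or", comma: "; ") are three tables.
```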
|
https://github.com/kdog3682/2024-typst | https://raw.githubusercontent.com/kdog3682/2024-typst/main/src/te-inscribed-triangle.typ | typst | //
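// Sketch of a figure inscribed in a circle, drawn with cetz.
// Note: `directions`, `l2d`, and the drawing primitives (`circle`, `line`,
// `cetz.angle.angle`) are assumed to be provided by the surrounding project's
// imports; they are not defined in this file.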
#let generate-key(name) = {
let dirs = directions.map((alias, dir) => (alias, name + "." + dir))
let dirs = l2d(dirs)
return (
label: name,
dirs: dirs
)
}
#let foo(name, k-factor: 2) = {
let key = generate-key(name)
circle((0,0), name: key.label, radius: k-factor, fill: yellow.lighten(20%))
let keys = (
"sam.east",
"sam.north-west",
"sam.west",
"sam.south-west",
)
let middle = ("sam.west", "sam.east")
line(..keys, close: true, fill: white)
line(..middle, stroke: (thickness: 0.5pt, paint: blue, dash: "dotted"))
cetz.angle.angle("sam.west", "sam.north-west", "sam.east", label: {
text(size: 12pt, "hi")
},
label-radius: 65%,
radius: 0.95,
inner: true,
name: "western-angle",
mark: (fill: blue, stroke: none),
)
}
|
|
https://github.com/flavio20002/typst-orange-template | https://raw.githubusercontent.com/flavio20002/typst-orange-template/main/lib.typ | typst | MIT No Attribution | #import("my-outline.typ"): *
#import("my-index.typ"): *
#import("theorems.typ"): *
#let mathcal = (it) => {
set text(size: 1.3em, font: "OPTIOriginal", fallback: false)
it
h(0.1em)
}
#let normal-text = 1em
#let large-text = 3em
#let huge-text = 16em
#let title-main-1 = 2.5em
#let title-main-2 = 1.8em
#let title-main-3 = 2.2em
#let title1 = 2.2em
#let title2 = 1.5em
#let title3 = 1.3em
#let title4 = 1.2em
#let title5 = 11pt
#let outline-part = 1.5em;
#let outline-heading1 = 1.3em;
#let outline-heading2 = 1.1em;
#let outline-heading3 = 1.1em;
#let nocite(citation) = {
place(hide[#cite(citation)])
}
#let language-state = state("language-state", none)
#let main-color-state = state("main-color-state", none)
#let appendix-state = state("appendix-state", none)
#let appendix-state-hide-parent = state("appendix-state-hide-parent", none)
#let heading-image = state("heading-image", none)
#let supplement-part-state = state("supplement_part", none)
#let part-style-state = state("part-style", 0)
#let part-state = state("part-state", none)
#let part-location = state("part-location", none)
#let part-counter = counter("part-counter")
#let part-change = state("part-change", false)
#let part(title) = {
pagebreak(to: "odd")
part-change.update(x =>
true
)
part-state.update(x =>
title
)
part-counter.step()
[
#context{
let her = here()
part-location.update(x =>
her
)
}
#context{
let main-color = main-color-state.at(here())
let part-style = part-style-state.at(here())
let supplement_part = supplement-part-state.at(here())
if part-style == 0 [
#set par(justify: false)
#place(block(width:100%, height:100%, outset: (x: 3cm, bottom: 2.5cm, top: 3cm), fill: main-color.lighten(70%)))
#place(top+right, text(fill: black, size: large-text, weight: "bold", box(width: 60%, part-state.get())))
#place(top+left, text(fill: main-color, size: huge-text, weight: "bold", part-counter.display("I")))
] else if part-style == 1 [
#set par(justify: false)
#place(block(width:100%, height:100%, outset: (x: 3cm, bottom: 2.5cm, top: 3cm), fill: main-color.lighten(70%)))
#place(top+left)[
#block(text(fill: black, size: 2.5em, weight: "bold", supplement_part + " " + part-counter.display("I")))
#v(1cm, weak: true)
#move(dx: -4pt, block(text(fill: main-color, size: 6em, weight: "bold", part-state.get())))
]
]
align(bottom+right, my-outline-small(title, appendix-state, part-state, part-location,part-change,part-counter, main-color, textSize1: outline-part, textSize2: outline-heading1, textSize3: outline-heading2, textSize4: outline-heading3))
}
]
}
#let chapter(title, image:none, l: none) = {
heading-image.update(x =>
image
)
if l != none [
#heading(level: 1, title) #label(l)
] else [
#heading(level: 1, title)
]
}
#let update-heading-image(image:none) = {
heading-image.update(x =>
image
)
}
#let make-index(title: none) = {
make-index-int(title:title, main-color-state: main-color-state)
}
#let appendices(title, doc, hide-parent: false) = {
counter(heading).update(0)
appendix-state.update(x =>
title
)
appendix-state-hide-parent.update(x =>
hide-parent
)
set heading ( numbering: (..nums) => {
let vals = nums.pos()
if vals.len() == 1 {
return str(numbering("A.1", ..vals)) + "."
}
else {
context{
let main-color = main-color-state.at(here())
let color = main-color
if vals.len() == 4 {
color = black
}
return place(dx:-4.5cm, box(width: 4cm, align(right, text(fill: color)[#numbering("A.1", ..vals)])))
}
}
},
)
doc
}
#let my-bibliography(file, image:none) = {
counter(heading).update(0)
heading-image.update(x =>
image
)
file
}
#let theorem(name: none, body) = {
context{
let language = language-state.at(here())
let main-color = main-color-state.at(here())
thmbox("theorem", if language=="en" {"Theorem"} else {"Teorema"},
stroke: 0.5pt + main-color,
radius: 0em,
inset: 0.65em,
namefmt: x => [*--- #x.*],
separator: h(0.2em),
titlefmt: x => text(weight: "bold", fill: main-color, x),
fill: black.lighten(95%),
base_level: 1)(name:name, body)
}
}
#let definition(name: none, body) = {
context{
let language = language-state.at(here())
let main-color = main-color-state.at(here())
thmbox("definition", if language=="en" {"Definition"} else {"Definizione"},
stroke: (left: 4pt + main-color),
radius: 0em,
inset: (x: 0.65em),
namefmt: x => [*--- #x.*],
separator: h(0.2em),
titlefmt: x => text(weight: "bold", x),
base_level: 1)(name:name, body)
}
}
#let corollary(name: none, body) = {
context{
let language = language-state.at(here())
let main-color = main-color-state.at(here())
thmbox("corollary", if language=="en" {"Corollary"} else {"Corollario"},
stroke: (left: 4pt + gray),
radius: 0em,
inset: 0.65em,
namefmt: x => [*--- #x.*],
separator: h(0.2em),
titlefmt: x => text(weight: "bold", x),
fill: black.lighten(95%),
base_level: 1)(name:name, body)
}
}
#let proposition(name: none, body) = {
context{
let language = language-state.at(here())
let main-color = main-color-state.at(here())
thmbox("proposition", if language=="en" {"Proposition"} else {"Proposizione"},
radius: 0em,
inset: 0em,
namefmt: x => [*--- #x.*],
separator: h(0.2em),
titlefmt: x => text(weight: "bold", fill: main-color, x),
base_level: 1)(name:name, body)
}
}
#let notation(name: none, body) = {
context{
let language = language-state.at(here())
let main-color = main-color-state.at(here())
thmbox("notation", if language=="en" {"Notation"} else {"Nota"},
stroke: none,
radius: 0em,
inset: 0em,
namefmt: x => [*--- #x.*],
separator: h(0.2em),
titlefmt: x => text(weight: "bold", x),
base_level: 1)(name:name, body)
}
}
#let exercise(name: none, body) = {
context{
let language = language-state.at(here())
let main-color = main-color-state.at(here())
thmbox("exercise", if language=="en" {"Exercise"} else {"Esercizio"},
stroke: (left: 4pt + main-color),
radius: 0em,
inset: 0.65em,
namefmt: x => [*--- #x.*],
separator: h(0.2em),
titlefmt: x => text(fill: main-color, weight: "bold", x),
fill: main-color.lighten(90%),
base_level: 1)(name:name, body)
}
}
#let example(name: none, body) = {
context{
let language = language-state.at(here())
let main-color = main-color-state.at(here())
thmbox("example", if language=="en" {"Example"} else {"Esempio"},
stroke: none,
radius: 0em,
inset: 0em,
namefmt: x => [*--- #x.*],
separator: h(0.2em),
titlefmt: x => text(weight: "bold", x),
base_level: 1)(name:name, body)
}
}
#let problem(name: none, body) = {
context{
let language = language-state.at(here())
let main-color = main-color-state.at(here())
thmbox("problem", if language=="en" {"Problem"} else {"Problema"},
stroke: none,
radius: 0em,
inset: 0em,
namefmt: x => [*--- #x.*],
separator: h(0.2em),
titlefmt: x => text(fill: main-color, weight: "bold", x),
base_level: 1)(name:name, body)
}
}
#let vocabulary(name: none, body) = {
context{
let language = language-state.at(here())
let main-color = main-color-state.at(here())
thmbox("vocabulary", if language=="en" {"Vocabulary"} else {"Vocabolario"},
stroke: none,
radius: 0em,
inset: 0em,
namefmt: x => [*--- #x.*],
separator: h(0.2em),
titlefmt: x => [■ #text(weight: "bold", x)],
base_level: 1)(name:name, body)
}
}
#let remark(body) = {
context{
let main-color = main-color-state.at(here())
set par(first-line-indent: 0em)
grid(
columns: (1.2cm, 1fr),
align: (center, left),
rows: (auto),
circle(radius: 0.3cm, fill: main-color.lighten(70%), stroke: main-color.lighten(30%))[
#set align(center + horizon)
#set text(fill: main-color, weight: "bold")
R
],
body)
}
}
#let book(title: "", subtitle: "", date: "", author: (), paper-size: "a4", logo: none, cover: none, image-index:none, body, main-color: blue, copyright: [], lang: "en", list-of-figure-title: none, list-of-table-title: none, supplement-chapter: "Chapter", supplement-part: "Part", font-size: 10pt, part-style: 0, lowercase-references: false) = {
set document(author: author, title: title)
set text(size: font-size, lang: lang)
set par(leading: 0.5em)
set enum(numbering: "1.a.i.")
set list(marker: ([•], [--], [◦]))
set ref (supplement: (it)=>{lower(it.supplement)}) if lowercase-references
set math.equation(numbering: num =>
numbering("(1.1)", counter(heading).get().first(), num)
)
set figure(numbering: num =>
numbering("1.1", counter(heading).get().first(), num)
)
set figure(gap: 1.3em)
show figure: it => align(center)[
#it
#v(2.6em, weak: true)
]
show terms: set par(first-line-indent: 0em)
set page(
paper: paper-size,
margin: (x: 3cm, bottom: 2.5cm, top: 3cm),
header: context{
set text(size: title5)
let page_number = counter(page).at(here()).first()
let odd_page = calc.odd(page_number)
let part_change = part-change.at(here())
// Are we on an odd page?
// if odd_page {
// return text(0.95em, smallcaps(title))
// }
// Are we on a page that starts a chapter? (We also check
// the previous page because some headings contain pagebreaks.)
let all = query(heading.where(level: 1))
if all.any(it => it.location().page() == page_number) or part_change {
return
}
let appendix = appendix-state.at(here())
if odd_page {
let before = query(selector(heading.where(level: 2)).before(here()))
let counterInt = counter(heading).at(here())
if before != () and counterInt.len()> 2 {
box(width: 100%, inset: (bottom: 5pt), stroke: (bottom: 0.5pt))[
#text(if appendix != none {numbering("A.1", ..counterInt.slice(1,3)) + " " + before.last().body} else {numbering("1.1", ..counterInt.slice(1,3)) + " " + before.last().body})
#h(1fr)
#page_number
]
}
} else{
let before = query(selector(heading.where(level: 1)).before(here()))
let counterInt = counter(heading).at(here()).first()
if before != () and counterInt > 0 {
box(width: 100%, inset: (bottom: 5pt), stroke: (bottom: 0.5pt))[
#page_number
#h(1fr)
#text(weight: "bold", if appendix != none {numbering("A.1", counterInt) + ". " + before.last().body} else{before.last().supplement + " " + str(counterInt) + ". " + before.last().body})
]
}
}
}
)
show cite: it => {
show regex("\[(\d+)"): set text(main-color)
it
}
set heading(
hanging-indent: 0pt,
numbering: (..nums) => {
let vals = nums.pos()
let pattern = if vals.len() == 1 { "1." }
else if vals.len() <= 4 { "1.1" }
if pattern != none { numbering(pattern, ..nums) }
}
)
show heading.where(level: 1): set heading(supplement: supplement-chapter)
show heading: it => {
set text(size: font-size)
if it.level == 1 {
pagebreak(to: "odd")
//set par(justify: false)
counter(figure.where(kind: image)).update(0)
counter(figure.where(kind: table)).update(0)
counter(math.equation).update(0)
context{
let img = heading-image.at(here())
if img != none {
set image(width: 21cm, height: 9.4cm)
place(move(dx: -3cm, dy: -3cm, img))
place( move(dx: -3cm, dy: -3cm, block(width: 21cm, height: 9.4cm, align(right + bottom, pad(bottom: 1.2cm, block(
width: 86%,
stroke: (
right: none,
rest: 2pt + main-color,
),
inset: (left:2em, rest: 1.6em),
fill: rgb("#FFFFFFAA"),
radius: (
right: 0pt,
left: 10pt,
),
align(left, text(size: title1, it))
))))))
v(8.4cm)
}
else{
move(dx: 3cm, dy: -0.5cm, align(right + top, block(
width: 100% + 3cm,
stroke: (
right: none,
rest: 2pt + main-color,
),
inset: (left:2em, rest: 1.6em),
fill: white,
radius: (
right: 0pt,
left: 10pt,
),
align(left, text(size: title1, it))
)))
v(1.5cm, weak: true)
}
}
part-change.update(x =>
false
)
}
else if it.level == 2 or it.level == 3 or it.level == 4 {
let size
let space
let color = main-color
if it.level == 2 {
size= title2
space = 1em
}
else if it.level == 3 {
size= title3
space = 0.9em
}
else {
size= title4
space = 0.7em
color = black
}
set text(size: size)
let number = if it.numbering != none {
set text(fill: main-color) if it.level < 4
let num = counter(heading).display(it.numbering)
let width = measure(num).width
let gap = 7mm
place(dx: -width - gap, num)
}
block(number + it.body)
v(space, weak: true)
}
else {
it
}
}
set underline(offset: 3pt)
// Title page.
page(margin: 0cm, header: none)[
#set text(fill: black)
#language-state.update(x => lang)
#main-color-state.update(x => main-color)
#part-style-state.update(x => part-style)
#supplement-part-state.update(x => supplement-part)
//#place(top, image("images/background2.jpg", width: 100%, height: 50%))
#if cover != none {
set image(width: 100%, height: 100%)
place(bottom, cover)
}
#if logo != none {
set image(width: 3cm)
place(top + center, pad(top:1cm, logo))
}
#align(center + horizon, block(width: 100%, fill: main-color.lighten(70%), height: 7.5cm, pad(x:2cm, y:1cm)[
#par(leading: 0.4em)[
#text(size: title-main-1, weight: "black", title)
]
#v(1cm, weak: true)
#text(size: title-main-2, subtitle)
#v(1cm, weak: true)
#text(size: title-main-3, weight: "bold", author)
]))
]
if (copyright!=none){
set text(size: 10pt)
show link: it => [
#set text(fill: main-color)
#it
]
set par(spacing: 2em)
align(bottom, copyright)
}
heading-image.update(x =>
image-index
)
my-outline(appendix-state, appendix-state-hide-parent, part-state, part-location,part-change,part-counter, main-color, textSize1: outline-part, textSize2: outline-heading1, textSize3: outline-heading2, textSize4: outline-heading3)
my-outline-sec(list-of-figure-title, figure.where(kind: image), outline-heading3)
my-outline-sec(list-of-table-title, figure.where(kind: table), outline-heading3)
// Main body.
set par(
first-line-indent: 1em,
justify: true,
spacing: 0.5em
)
set block(spacing: 1.2em)
show link: set text(fill: main-color)
body
}
|
https://github.com/barrel111/readings | https://raw.githubusercontent.com/barrel111/readings/main/problems/external/1910SU/week5.typ | typst | #import "@local/preamble:0.1.0": *
#import "@preview/cetz:0.2.2": canvas, plot, draw
#show: project.with(
course: "1910SU",
sem: "Summer",
title: "Group Discussion: L'Hôpital's Rule",
subtitle: "Solutions",
contents: false,
)
= Limits with Exponents.
\
== Compute the limit $lim_(x -> oo) (1 + 2/x)^x $
First we compute the following.
$ lim_(x -> infinity) ln ((1 + 2/x)^x) = lim_(x -> infinity) x ln (1 + 2/x) $
This is an indeterminate form of the type $infinity dot 0$. We write the expression above to resemble the form $0 / 0$ so that we can use L'Hôpital's rule.
$ = lim_(x -> infinity) (ln(1 + 2/x)/(1/x)) $
Taking the derivatives of the numerator and denominator, we obtain
$ = lim_(x -> infinity) -x^2 dot 1/(1 + 2/x) dot (-2)/x^2 \
= lim_(x -> oo) 2/(1 + 2/x) = 2. $
Thus,
$ lim_(x -> oo) (1 + 2/x)^x =e^(lim_(x -> infinity) x ln (1 + 2/x)) = e^2. $
== Compute the limit $lim_(x -> oo) (1 + 1/x^2)^x $
First we compute the following.
$ lim_(x -> infinity) ln((1 + 1/x^2)^x) = lim_(x -> infinity) x ln(1 + 1/x^2 ) $
This is again an indeterminate form of the type $infinity dot 0$. We write the expression above to resemble the form $0/0$ so that we can use L'Hôpital's rule.
$ = lim_(x -> oo) ((ln (1 + 1/x^2))/(1/x)) $
Taking the derivatives of the numerator and denominator, we obtain
$ = lim_(x -> oo) -x^2 dot 1/(1 + 1/x^2) dot (-2)/x^3 \ = lim_(x -> oo) (2 x^2)/(x^3 + x) = 0. $
Thus,
$ lim_(x -> infinity) (1 + 1/x^2)^x = e^(lim_(x -> oo) x ln (1 + 1/x^2)) = e^0 = 1. $
== Compute the limit $lim_(x -> 0^+) x^(sin x) $
First we compute the following.
$ lim_(x arrow.b 0^+) sin x ln x $
This is an indeterminate form of the type $0 dot (- infinity)$. We rewrite this to resemble $oo/oo$ so that we can use L'Hôpital's rule.
$ = lim_(x arrow.b 0^+) (ln x)/(csc x) $
Taking the derivative of the numerator and denominator gives us,
$ = lim_(x arrow.b 0^+) (1\/x)/(-cot x csc x) \ = lim_(x arrow.b 0^+) -(sin^2 x)/(x cos x) $
This is an indeterminate form of the type $0/0$. We apply L'Hopital's rule again.
$ = lim_(x arrow.b 0^+) - (2 sin x dot cos x)/(cos x - x sin x) = 0/1 = 0. $
Thus, we have
$ lim_(x -> 0^+) x^(sin x) = e^(lim_(x -> 0^+) sin x ln x) = e^0 = 1. $
= (Non) Indeterminate Form $0^infinity$.
\
As $lim_(x -> a) g(x) = infinity$ and $lim_(x -> a) ln f(x) = lim_(z -> 0^+) ln z = -infinity$. So,
$ lim_(x -> a) g(x) ln f(x) = -infinity. $
#remark[Note that, technically, we need to assume that $f(x)$ approaches $0$ from above as otherwise $ln f$ will be undefined.]
Thus, $lim_(x -> a) f(x)^(g(x)) = lim_(x -> a) e^(g(x) ln f(x)) = lim_(y -> -infinity) e^y = 0.$
|
|
https://github.com/dyc3/senior-design | https://raw.githubusercontent.com/dyc3/senior-design/main/lib/requirements.typ | typst |
#let necessity_box(necessity, bgcolor: rgb("#cecfcf")) = {
box(
fill: bgcolor,
inset: (x: 2pt, y: 2pt),
outset: (x: 2pt, y: 2pt),
necessity
)
}
#let mustHave = necessity_box("Must Have", bgcolor: rgb("#ff6365"))
#let shouldHave = necessity_box("Should Have", bgcolor: rgb("#ecff09"))
#let couldHave = necessity_box("Could Have", bgcolor: rgb("#1ae53e"))
#let wouldBeNiceToHave = necessity_box("Would Be Nice To Have", bgcolor: rgb("#009dff"))
#let req(text, necessity, usecase: "") = {
let metadata = [#necessity]
if (usecase != "") {
let content = box(
fill: rgb("#94ffe2"),
inset: (x: 2pt, y: 2pt),
outset: (x: 2pt, y: 2pt),
usecase
)
metadata = [#metadata #content]
}
figure(
metadata,
caption: text,
supplement: [Requirement],
kind: "req")
};
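
// Added usage sketch (illustrative only; the identifiers are the ones defined above):
//   #req([The system shall let users log in.], mustHave, usecase: "UC-1")
//   #req([The interface could offer a dark mode.], couldHave)
// Each call yields a figure of kind "req", so a labelled requirement can be
// cross-referenced elsewhere in the document.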
|
|
https://github.com/desid-ms/desid_report | https://raw.githubusercontent.com/desid-ms/desid_report/main/_extensions/desid_report/definitions.typ | typst | MIT License | // Some definitions presupposed by pandoc's typst output.
#let blockquote(body) = [
#set text( size: 0.8em )
#align(right, block(inset: (right: 5em, top: 0.2em, bottom: 0.2em))[#body])
]
#let horizontalrule = [
#line(start: (25%,0%), end: (75%,0%))
]
#let endnote(num, contents) = [
#stack(dir: ltr, spacing: 3pt, super[#num], contents)
]
#show terms: it => {
it.children
.map(child => [
#strong[#child.term]
#block(inset: (left: 1.5em, top: -0.4em))[#child.description]
])
.join()
}
// Some quarto-specific definitions.
#show raw.where(block: true): block.with(
fill: luma(230),
width: 100%,
inset: 8pt,
radius: 2pt
)
#let block_with_new_content(old_block, new_content) = {
let d = (:)
let fields = old_block.fields()
fields.remove("body")
if fields.at("below", default: none) != none {
// TODO: this is a hack because below is a "synthesized element"
// according to the experts in the typst discord...
fields.below = fields.below.amount
}
return block.with(..fields)(new_content)
}
#let empty(v) = {
if type(v) == "string" {
// two dollar signs here because we're technically inside
// a Pandoc template :grimace:
v.matches(regex("^\\s*$$")).at(0, default: none) != none
} else if type(v) == "content" {
if v.at("text", default: none) != none {
return empty(v.text)
}
for child in v.at("children", default: ()) {
if not empty(child) {
return false
}
}
return true
}
}
// Subfloats
// This is a technique that we adapted from https://github.com/tingerrr/subpar/
#let quartosubfloatcounter = counter("quartosubfloatcounter")
#let quarto_super(
kind: str,
caption: none,
label: none,
supplement: str,
position: none,
subrefnumbering: "1a",
subcapnumbering: "(a)",
body,
) = {
context {
let figcounter = counter(figure.where(kind: kind))
let n-super = figcounter.get().first() + 1
set figure.caption(position: position)
[#figure(
kind: kind,
supplement: supplement,
caption: caption,
{
show figure.where(kind: kind): set figure(numbering: _ => numbering(subrefnumbering, n-super, quartosubfloatcounter.get().first() + 1))
show figure.where(kind: kind): set figure.caption(position: position)
show figure: it => {
let num = numbering(subcapnumbering, n-super, quartosubfloatcounter.get().first() + 1)
show figure.caption: it => {
num.slice(2) // I don't understand why the numbering contains output that it really shouldn't, but this fixes it shrug?
[ ]
it.body
}
quartosubfloatcounter.step()
it
counter(figure.where(kind: it.kind)).update(n => n - 1)
}
quartosubfloatcounter.update(0)
body
}
)#label]
}
}
// callout rendering
// this is a figure show rule because callouts are crossreferenceable
#show figure: it => {
if type(it.kind) != "string" {
return it
}
let kind_match = it.kind.matches(regex("^quarto-callout-(.*)")).at(0, default: none)
if kind_match == none {
return it
}
let kind = kind_match.captures.at(0, default: "other")
kind = upper(kind.first()) + kind.slice(1)
// now we pull apart the callout and reassemble it with the crossref name and counter
// when we cleanup pandoc's emitted code to avoid spaces this will have to change
let old_callout = it.body.children.at(1).body.children.at(1)
let old_title_block = old_callout.body.children.at(0)
let old_title = old_title_block.body.body.children.at(2)
// TODO use custom separator if available
let new_title = if empty(old_title) {
[#kind #it.counter.display()]
} else {
[#kind #it.counter.display(): #old_title]
}
let new_title_block = block_with_new_content(
old_title_block,
block_with_new_content(
old_title_block.body,
old_title_block.body.body.children.at(0) +
old_title_block.body.body.children.at(1) +
new_title))
block_with_new_content(old_callout,
new_title_block +
old_callout.body.children.at(1))
}
// 2023-10-09: #fa-icon("fa-info") is not working, so we'll eval "#fa-info()" instead
#let callout(body: [], title: "Callout", background_color: rgb("#dddddd"), icon: none, icon_color: black) = {
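  // Added note (not part of the original definitions): this helper is invoked by
  // the generated document body, e.g. #callout(title: "Note", body: [Check the sources.]),
  // and draws a tinted title bar followed by a white body block when `body` is non-empty.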
block(
breakable: false,
fill: background_color,
stroke: (paint: icon_color, thickness: 0.5pt, cap: "round"),
width: 100%,
radius: 2pt,
block(
inset: 1pt,
width: 100%,
below: 0pt,
block(
fill: background_color,
width: 100%,
inset: 8pt)[#text(icon_color, weight: 900)[#icon] #title]) +
if(body != []){
block(
inset: 1pt,
width: 100%,
block(fill: white, width: 100%, inset: 8pt, body))
}
)
}
|
https://github.com/hongjr03/shiroa-page | https://raw.githubusercontent.com/hongjr03/shiroa-page/main/DIP/chapters/5图像复原.typ | typst | #import "../template.typ": *
#import "@preview/fletcher:0.5.0" as fletcher: diagram, node, edge
#import fletcher.shapes: house, hexagon, ellipse
#import "@preview/pinit:0.1.4": *
#import "@preview/cetz:0.2.2"
#import "/book.typ": book-page
#show: book-page.with(title: "数字图像处理基础 | DIP")
= 图像复原 Image Restoration
#definition[
*图像复原*:指的是从已知的图像中恢复原始图像的过程。
]
和图像增强对比:
#definition[
*图像增强*:指的是通过增加图像的对比度、亮度等来改善图像的视觉效果。
]
图像复原是客观地恢复原始图像,而图像增强是主观地改善图像的视觉效果。
== 图像退化模型 Image Degradation Model
图像退化模型是指图像在传输、采集、处理等过程中受到的影响。一般来说,图像退化模型可以表示为:
$
g(x, y) = h(x, y) * f(x, y) + eta(x, y)
$
其中,$g(x, y)$ 为退化图像,$f(x, y)$ 为原始图像,$h(x, y)$ 为退化函数,$eta(x, y)$ 为噪声。
== 噪声模型 Noise Models
噪声来源:
- 图像获取:环境条件(光照)、传感器质量
- 图像传输:无线信号被干扰
刻画噪声:
- 空间域和频率域特点
- 白噪声:傅里叶变换后为常数
- 噪声是否和图像内容相关
=== 噪声的概率密度函数
- #grid(
columns: (6fr, 4fr),
)[高斯噪声:$
p(z) = 1 / (sqrt(2 pi) sigma) e^(-(z - macron(z))^2 / (2 sigma^2))
$
- 电路噪声,传感器噪声
- 期望值 $E(z) = macron(z)$,标准差 $sigma$
- 去除:均值滤波、几何均值滤波][
#figure(
[
#set text(size: 9pt)
#set par(leading: 1em)
#cetz.canvas({
import cetz.draw: *
import cetz.plot
import cetz.palette: *
plot.plot(
size: (5, 4),
x-tick-step: none,
y-tick-step: none,
x-label: [灰度级 $z$],
y-label: [$p(z)$],
axis-style: "left",
name: "Gaussian",
{
let Gaussian(x) = {
calc.exp(-calc.pow(x - 32, 2) / (2 * 10 * 10)) / (calc.sqrt(2 * calc.pi) * 20)
}
plot.add(domain: (0, 64), Gaussian)
plot.add-vline(32)
},
)
})],
caption: "高斯噪声概率密度函数",
)
]
- #grid(
columns: (6fr, 4fr),
)[Rayleigh 噪声:$
p(z) = cases(
2/b (z - a) e^(-(z - a)^2 / b) ",当" z >= a,
0 ",其他"
)
$
- 范围成像
- 期望值 $E(z) = a + sqrt(pi b / 4)$,方差 $sigma^2 = b(4 - pi) / 4$][
#figure(
[
#set text(size: 9pt)
#set par(leading: 1em)
#cetz.canvas({
import cetz.draw: *
import cetz.plot
import cetz.palette: *
plot.plot(
size: (5, 4),
x-tick-step: none,
y-tick-step: none,
x-label: [灰度级 $z$],
y-label: [$p(z)$],
axis-style: "left",
name: "Rayleigh",
{
let Rayleigh(x) = {
if x >= 4 {
2 / 20 * (x - 4) * calc.exp(-calc.pow(x - 4, 2) / 20)
} else {
0
}
}
plot.add(domain: (0, 20), Rayleigh)
// plot.add-vline(32)
},
)
})],
caption: "Rayleigh 噪声概率密度函数",
)
]
- #grid(
columns: (6fr, 4fr),
)[Gamma 噪声:$
p(z) = cases(
(a^b z^(b - 1))/ (b-1)! e^(-a z) ",当" z >= 0,
0 ",其他"
), a > 0, b "为正整数"
$
- 激光成像
    - 期望值 $E(z) = b / a$,方差 $sigma^2 = b / a^2$
- $a = 1$时,Gamma 分布就是指数分布][#figure(
[
#set text(size: 9pt)
#set par(leading: 1em)
#cetz.canvas({
import cetz.draw: *
import cetz.plot
import cetz.palette: *
plot.plot(
size: (5, 4),
x-tick-step: none,
y-tick-step: none,
y-max: 0.5,
x-label: [灰度级 $z$],
y-label: [$p(z)$],
axis-style: "left",
name: "Gamma",
{
let Gamma(x) = {
if x >= 0 {
calc.pow(2, 4) * calc.pow(x, 4 - 1) * calc.exp(-2 * x) / (3 * 2 * 1)
} else {
0
}
}
plot.add(domain: (0, 5), Gamma)
let Exponential(x) = {
if x >= 0 {
calc.pow(4, 1) * calc.exp(-4 * x)
} else {
0
}
}
plot.add(domain: (0, 5), Exponential)
plot.add-anchor("pt1", (0.5, 0.5))
},
)
content("Gamma.pt1", "指数分布", anchor: "west", padding: .1)
})],
caption: "Gamma 噪声概率密度函数",
)]
- #grid(
columns: (6fr, 4fr),
)[均匀噪声:$
p(z) = 1 / (b - a), a <= z <= b
$
- 仿真生成随机数
- 期望值 $E(z) = (a + b) / 2$,方差 $sigma^2 = (b - a)^2 / 12$][#figure(
[
#set text(size: 9pt)
#set par(leading: 1em)
#cetz.canvas({
import cetz.draw: *
import cetz.plot
import cetz.palette: *
plot.plot(
size: (5, 4),
x-tick-step: none,
y-tick-step: none,
x-label: [灰度级 $z$],
y-label: [$p(z)$],
axis-style: "left",
name: "Uniform",
{
let Uniform(x) = {
if x >= 3 and x <= 7 {
1 / 10
} else {
0
}
}
plot.add(domain: (0, 10), Uniform)
// plot.add-vline(32)
},
)
})],
caption: "均匀噪声概率密度函数",
)]
- 脉冲(椒盐)噪声:$
p(z) = cases(
P_a ",当" z = a,
P_b ",当" z = b,
0 ",其他"
)
$
- 快速过渡,错误开关
- 白色的点和黑色的点,所以叫椒盐噪声,$P$ 可正可负
- 胡椒噪声可以用 $Q > 0$ 的逆谐波均值滤波器去除,盐噪声可以用 $Q < 0$ 的逆谐波均值滤波器去除;或者最大最小值滤波器
- 用中值滤波可以去除椒盐噪声,但是滤波太多遍会丢失很多细节
=== 周期性噪声
傅里叶变换之后可以识别周期噪声。周期性噪声以能量脉冲出现,利用选择性滤波器去掉噪声。
*选择性滤波器*:
- 带阻滤波器:去除特定频率的噪声
- 带通滤波器:保留特定频率的信号
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/paddling-tongji-thesis/0.1.1/init-files/sections/05_conclusion.typ | typst | Apache License 2.0 | #import "@preview/paddling-tongji-thesis:0.1.1": *
= 总结与未来工作展望
本节通常用于对论文进行总结和归纳,并提出未来工作的展望和建议。
在总结部分,需要回顾研究内容和方法,对研究结果进行分析和归纳,并阐述研究工作的贡献。同时,也要对研究过程中存在的问题和不足进行反思和总结,为未来的研究提供参考和启示。
在未来工作展望部分,需要具体提出研究计划和建议,为后续研究提供方向和指导。同时,也要对本文提出的方法和技术进行展望,探索其在未来研究中的应用前景和发展方向。此外,还可以指出当前领域中存在的未解决问题,为未来研究提供新的研究思路和方向。
此外,未来工作展望中还可以对本文研究的局限性进行讨论和说明,提出改进和扩展的方向。同时,也要注意将未来工作展望与本文研究内容相互关联,以确保研究的连续性和完整性。
最后,需要强调本文研究的意义和价值,并对读者进行总结和启示,为相关领域的研究提供借鉴和参考。在撰写总结与未来工作展望时,需要遵循逻辑清晰、表达准确、语言简练的原则,使得整篇论文的结论和建议具有可读性和可信度。
|
https://github.com/Error-418-SWE/Documenti | https://raw.githubusercontent.com/Error-418-SWE/Documenti/src/3%20-%20PB/Documentazione%20interna/Verbali/24-02-22/24-02-22.typ | typst | #import "/template.typ": *
#show: project.with(
date: "22/02/24",
subTitle: "Meeting post colloquio con Proponente",
docType: "verbale",
authors: (
"<NAME>",
),
timeStart: "16:10",
timeEnd: "16:25",
);
= Ordine del giorno
A seguito dell'incontro con il Proponente, il gruppo ha svolto un meeting interno riguardante:
- Considerazioni scaturite dal meeting esterno:
- riflessioni sul feedback ricevuto riguardo al testing dell'applicazione;
- riflessioni sul feedback ricevuto riguardo al deploy dell'applicazione;
- predisposizione modifiche necessarie al diagramma ER;
- To do:
- Norme di Progetto;
- Piano di Progetto;
- Analisi dei Requisiti.
== Considerazioni scaturite dal meeting esterno
=== Riflessioni sul feedback ricevuto riguardo al testing dell'applicazione
Il gruppo ha esaminato le osservazioni del Proponente relative ai metodi e ai software di testing suggeriti.
Come risultato di questa valutazione, è stata presa la decisione di focalizzarsi temporaneamente sullo unit testing.
La scelta di implementare le altre forme di testing proposte (end-to-end e integration) sarà valutata durante lo svolgimento dei lavori.
=== Riflessioni sul feedback ricevuto riguardo al deploy dell'applicazione
Il Proponente ha valutato positivamente la scelta delle modalità di deploy dell'applicazione, con particolare apprezzamento per i meccanismi di Continuous Integration e Continuous Delivery proposti.
Il gruppo ha quindi deciso di approvare il percorso scelto e proseguire con l'implementazione.
=== Predisposizione modifiche necessarie al diagramma ER
Il gruppo ha esaminato il feedback ricevuto sulla modifica del diagramma ER in base alle nuove direttive del Proponente.
Di conseguenza, è stata presa la decisione di pianificare la revisione del diagramma ER modificato entro il meeting di retrospettiva del 25/02/24.
== To do
Il gruppo ha fatto il punto sulla situazione dello sprint in corso, segnalando un andamento positivo e rimanendo quindi fiducioso sull'andamento dei lavori.
In particolare le task scaturite dall'esito di questo meeting riguardano:
+ aggiornamento dello schema ER del database a seguito del feedback del Proponente in particolare:
- rimuovere attributo altezza da scaffale perché derivabile;
- rimuovere generalizzazione posizione;
- aggiungere attributi coordinate X e Y allo scaffale;
- aggiungere possibilità di avere bin di larghezza diversa sullo stesso ripiano;
+ definire codice identificativo bin sulla base dell'esempio fornito dal proponente. |
|
https://github.com/Myriad-Dreamin/tinymist | https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/syntaxes/textmate/tests/unit/basic/context.typ | typst | Apache License 2.0 | #context 1{}
#context 1;{}
#context {}{}
#context [][]
#context {
here().page()
}{}
#context [Citation @distress on page #here().page()]{} |
https://github.com/heinwol/master-thesis | https://raw.githubusercontent.com/heinwol/master-thesis/main/typst/template_to_export.typ | typst | #let theorem(cnt) = [*Теорема:* #cnt]
#let proposition(cnt) = [*Предложение:* #cnt]
#let proof(cnt) = [*Доказательство:* #cnt]
#let definition(cnt) = [*Определение:* #cnt]
#let lemma(cnt) = [*Лемма:* #cnt]
#let corollary(cnt) = [*Следствие:* #cnt]
#let remark(cnt) = [*Замечание:* #cnt]
#let thmrules(it) = it
#let code(..args) = [..args]
#let with(func: function, ..k, content: content) = {
set func(..k)
content
}
// #show terms: it => {
// for item in it.children {
// definition(item.term, item.description)
// }
// }
#let template(body) = {
// set document(author: "dds", title: "ds")
// Set the basic text properties.
set text(
font: "Liberation Serif",
lang: "ru",
size: 12pt,
// fallback: true,
// hyphenate: false,
)
// Set the basic page properties.
set page(
paper: "a4",
number-align: center,
margin: (top: 10mm, bottom: 10mm, left: 30mm, right: 10mm),
numbering: "1",
// footer: rect(fill: aqua)[Footer],
)
counter(page).update(2)
// Set the basic paragraph properties.
set par(
leading: 1.25em,
justify: true,
first-line-indent: 1.25em,
// hanging-indent: 1.25em,
)
// block spacing
// set block(spacing: 3.65em,)
// Additionally styling for list.
set enum(indent: 0.5cm)
set list(indent: 0.5cm)
set heading(numbering: "1.1.")
show heading: set align(center)
show heading: it => {it; v(1em)}
show heading.where(level: 1): it => { pagebreak(); it }
show heading.where(level: 3): set heading(numbering: none, outlined: false)
// set math.equation(
// numbering: num =>
// "(" + ((counter(heading).get().at(0),) + (num,)).map(str).join(".") + ")"
// )
// set math.equation(supplement: none)
// show math.cases: set align(left)
// show figure.caption: set text(size: 0.8em)
// show figure.caption: set par(leading: 1em)
// show figure.where(kind: 1): ""
// set figure(supplement: "рис.")
// see https://github.com/typst/typst/issues/311#issuecomment-1722331318
// show regex("^!!"): context h(par.leading)
show <nonum>: set heading(numbering: none)
show <nonum>: set math.equation(numbering: none)
body
}
|
|
https://github.com/Zuttergutao/Typstdocs-Zh-CN- | https://raw.githubusercontent.com/Zuttergutao/Typstdocs-Zh-CN-/main/Classified/main.typ | typst | #import "format.typ":*
// 设置页面
#set page(
paper:"a4",
margin: (
top:27.5mm,
bottom:25.4mm,
left:35.7mm,
right:27.7mm
),
header:[
#set text(10pt)
#h(1fr)
#emph("Typst 中文文档 translated by Casea")
#v(-0.8em)
#line(length:100%,stroke:1pt)
],
numbering:"1/1",
number-align:center,
)
// 设置正文文字格式
#set text(
font:("Times New Roman","SimSun"),
style:"normal",
weight:"regular",
size: 12pt,
)
// 设置段落
#set par(
leading:20pt,
justify: true,
first-line-indent: 2em,
)
// 设置标题格式
#set heading(numbering: "1.1.1.1")
#show heading: it => locate(loc => {
let levels = counter(heading).at(loc)
let deepest = if levels != () {
levels.last()
} else {
1
}
set text(12pt)
if it.level == 1 [
#if deepest !=1 {
// pagebreak()
}
#set par(first-line-indent: 0pt)
#set align(center)
#let is-ack = it.body in ([Acknowledgment], [Acknowledgement])
#set text(if is-ack { 15pt } else { 15pt },font:("Times New Roman","SimSun"))
#v(36pt, weak: true)
#if it.numbering != none and not is-ack {
numbering("第 1 章", deepest)
h(3pt, weak: true)
}
#it.body
#v(36pt, weak: true)
] else if it.level == 2 [
#set par(first-line-indent: 0pt)
#set text(size:14pt,font:("Times New Roman","SimSun"))
#v(24pt, weak: true)
#if it.numbering != none {
numbering("1.1 ",..levels)
h(3pt, weak: true)
}
#it.body
#v(24pt, weak: true)
] else if it.level == 3 [
#set par(first-line-indent: 0pt)
#set text(size:14pt,font:("Times New Roman","SimSun"))
#v(15pt, weak: true)
#if it.numbering != none {
numbering("1.1.1 ",..levels)
h(3pt, weak: true)
}
#it.body
#v(15pt, weak: true)
] else [
#set par(first-line-indent: 0pt)
#set text(size:12pt,font:("Times New Roman","SimSun"))
#v(12pt, weak: true)
#if it.numbering != none {
numbering("1.1.1.1 ",..levels)
h(3pt, weak: true)
}
#it.body
#v(12pt, weak: true)
]
})
// 设置代码块样式
#show raw.where(block: false): box.with(
fill: luma(240),
inset: (x: 3pt, y: 0pt),
outset: (y: 3pt),
radius: 2pt,
)
#show raw.where(lang:"typ"): it=>{
block(width:100%,fill:luma(245),inset:10pt,radius: 5pt,stroke:0.8pt+rgb("#00B7FF").darken(10%))[
#par(leading: 1em,
justify:true,
linebreaks: "optimized",
first-line-indent: 0em,
text(font: "Monospac821 BT",
style: "normal",
weight:"regular",
size:10pt,
spacing: 100%,
text(font: "Inria Serif", weight:700,size:11pt,fill: rgb("#FF0000"),"#Code") + it
)
)
]
}
#show raw.where(lang:"para"): it=>{
block(width:100%,fill:luma(245),inset:10pt,radius: 5pt,stroke:0.8pt+rgb("#000000").darken(10%))[
#par(leading: 1em,
justify:true,
linebreaks: "optimized",
first-line-indent: 0em,
text(font: "Inria Serif",
style: "normal",
weight:"regular",
size:10pt,
spacing: 200%,
text(font: "Inria Serif",weight:700,size:11pt,fill: rgb("#FF0000"),"#Func") + it
)
)
]
}
#include "Cover.typ"
#include "Abstract.typ"
#include "changelogs.typ"
#include "outlines.typ"
#include "Part I.typ"
#include "Part II.typ"
#include "Part III.typ"
#include "Symbol.typ"
|
|
https://github.com/Myriad-Dreamin/tinymist | https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/crates/tinymist-query/src/fixtures/type_check/tuple_map.typ | typst | Apache License 2.0 | #let a = (1,);
#let f = x => str(x);
#let b = a.map(f);
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/thesist/0.1.0/template/Chapters/Appendix-A.typ | typst | Apache License 2.0 | #import "@preview/thesist:0.1.0": flex-caption, subfigure-grid
#import "@preview/glossarium:0.4.1": gls, glspl
= An appendix
#lorem(500)
|
https://github.com/ufodauge/master_thesis | https://raw.githubusercontent.com/ufodauge/master_thesis/main/src/template/components/common/page.typ | typst | MIT License | #let pbWithOutFirst = state("first-page-rendered", none)
#let Page(
body
) = [
#pbWithOutFirst.display()
#pbWithOutFirst.update(x => pagebreak())
#body
] |
https://github.com/Skimmeroni/Appunti | https://raw.githubusercontent.com/Skimmeroni/Appunti/main/Metodi%20Algebrici/Interi/Basi.typ | typst | Creative Commons Zero v1.0 Universal | #import "../Metodi_defs.typ": *
#theorem("Esistenza ed unicitá della rappresentazione dei numeri interi in una certa base")[
Sia $b$ un intero maggiore o uguale a 2. Ogni numero intero $n$ non
negativo può essere scritto in uno ed un solo modo nella forma:
$ n = d_(k) b^(k) + d_(k − 1) b^(k − 1) + ... + d_(1) b + d_(0)
" con" 0 lt.eq d_(i) < b " " forall i = 0, ..., k " " d_(k) != 0
"per" k > 0 $
]
#proof[
La dimostrazione prevede di applicare il principio di induzione forte
su $n$. Per $n = 0$ la proposizione é verificata immediatamente. Si
assuma allora che la proposizione sia vera per ogni $m$ con $0 lt.eq
m < n$ e la si dimostri per $n$.
Innanzitutto, si osservi come sia possibile dividere $n$ per $b$,
ottenendo:
$ n = b q + r " con" 0 lt.eq r < b $
per un certo $q$ ed un certo $r$. Per la definizione di divisione,
si ha $q < n$. Ma allora $q$ é uno degli $m$ per i quali é valida
l'ipotesi assunta, ovvero che esiste uno ed un solo modo per
scrivere $q$ nella forma:
$ q = c_(k - 1) b^(k - 1) + c_(k − 2) b^(k − 2) + ... + c_(1) b + c_(0) $
Per certi $k$ valori $c_(i)$ tali per cui $0 lt.eq c_(i) < b$.
Sostituendo la seconda espressione nella prima, si ha:
$ n = b q + r = b (c_(k - 1) b^(k - 1) + c_(k − 2) b^(k − 2) + ... +
c_(1) b + c_(0)) + r = c_(k - 1) b^(k) + c_(k − 2) b^(k − 1) + ... +
c_(1) b^(2) + c_(0) b + r $
Ponendo $d_(k) = c_(k − 1), d_(k − 1) = c_(k − 2), ..., d_(1) =
c_(0), d_(0) = r$, si ha:
$ n = d_(k) b^(k) + d_(k − 1) b^(k − 1) + ... + d_(1) b + d_(0)
" con" 0 lt.eq d_(i) < b " " forall i = 0, ..., k $
Che é l'ipotesi che si voleva dimostrare.
Per quanto riguarda l'unicità di questa scrittura, questa segue
dall'unicità di $q$ e di $r$.
]
Dati $b in ZZ$ con $b gt.eq 2$ e un numero naturale $n$ tale che:
$ n = d_(k) b^(k) + d_(k − 1) b^(k − 1) + ... + d_(1) b + d_(0)
" con" 0 lt.eq d_(i) < b " " forall i = 0, ..., k " " d_(k) != 0
"per" k > 0 $
Gli interi $d_(0), d_(1), ..., d_(k)$ si dicono le *cifre* di $n$ in *base*
$b$.
Per indicare in quale base $n$ sta venendo espresso, se ne riportano
ordinatamente le cifre aggiungendo la base in pedice alla cifra piú a
destra. Nel caso in cui il pedice sia assente, si sta sottointendendo
che tale numero sta venendo espresso in base $10$.
Una base $b$ fa uso di un numero di cifre pari a $b - 1$, partendo da
$0$; nel caso in cui la base sia maggiore di $10$, si usano dei simboli
extra per rappresentare le cifre mancanti.
Se é nota la (unica) rappresentazione di un numero intero non negativo
in una certa base $b$, é sempre possibile ricavarne la rappresentazione
in base 10 semplicemente svolgendo l'equazione della definizione. Si noti
peró come tale equazione possa anche essere riscritta come:
#set math.mat(delim: none)
$ mat(
n &= d_(k) b^(k) + d_(k − 1) b^(k − 1) + d_(k − 2) b^(k − 2) +
d_(k − 3) b^(k − 3) + ... + d_(1) b + d_(0);
&= (d_(k) b + d_(k − 1)) b^(k − 1) + d_(k − 2) b^(k − 2) +
d_(k − 3) b^(k − 3) + ... + d_(1) b + d_(0);
&= ((d_(k) b + d_(k − 1)) b + d_(k − 2)) b^(k − 2) +
d_(k − 3) b^(k − 3) + ... + d_(1) b + d_(0);
&= (((d_(k) b + d_(k − 1)) b + d_(k − 2)) b +
d_(k − 3)) b^(k − 3) + ... + d_(1) b + d_(0);
&= ...;
&= (... (((d_(k) b + d_(k − 1)) b + d_(k − 2)) b +
d_(k − 3)) b^(k − 3) + ... + d_(1)) b + d_(0);
) $
Questa forma é nettamente piú convoluta, ma piú semplice da utilizzare per
effettuare la conversione. Infatti, sono necessarie solo $k$ moltiplicazioni
per $b$ e $k$ addizioni.
#example[
$ 61405_(7) = (((6 dot 7 + 1)7 + 4)7 + 0)7 + 5 =
((42 + 1)7 + 4)49 + 5 = (301 + 4)49 + 5 = 14950 $
]
Per effettuare la conversione inversa, ovvero ricavare la rappresentazione
di un numero $n$ in base $b$ a partire dalla sua rappresentazione in base
10, si osservi come le cifre $d_(0), d_(1), ..., d_(k)$ di $n$ non siano
altro che i resti delle divisioni:
$ mat(
n & = b q + d_(0), 0 lt.eq d_(0) < b;
q & = q_(1) b + d_(1), 0 lt.eq d_(1) < b;
q_(1) & = q_(2) b + d_(2), 0 lt.eq d_(2) < b;
& ...
) $
E cosı̀ via, finchè non si ottiene quoziente nullo.
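
A titolo puramente illustrativo (aggiunta non presente nel testo originale), l'algoritmo
delle divisioni successive si può tradurre in poche righe di codice Typst; i nomi usati
sono di fantasia:

```typ
#let cifre(n, b) = {
  // restituisce le cifre di n in base b, dalla piú significativa alla meno significativa
  let m = n
  let d = ()
  while m > 0 {
    d.insert(0, calc.rem(m, b))
    m = calc.quo(m, b)
  }
  if d.len() == 0 { d = (0,) }
  d
}
// #cifre(14950, 7) produce (6, 1, 4, 0, 5), ovvero 61405 in base 7.
```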
#example[
$ mat(
14950 & = 7 dot 2135 + 5;
2135 & = 7 dot 305 + 0;
305 & = 7 dot 43 + 4;
43 & = 7 dot 6 + 1;
6 & = 7 dot 0 + 6;
) $
Leggendo dal basso verso l'alto, si ha $14950 = 61405_(7)$
]
#lemma[
Sia $n$ un numero intero non negativo e sia $b$ una base. Il
numero di cifre in base $b$ necessarie a rappresentare $n$ è
dato da $floor(ln(n)/ln(b)) + 1$
]
Le somme e le sottrazioni fra numeri in base $n$ operano allo stesso modo
di quelle in base $10$, l'unica accortezza sta nel fatto che il _riporto_
massimo é $n - 1$.
#example[
#grid(
columns: (0.5fr, 0.5fr),
[$ mat(space, 3, 1, 4, 2, + ;
space, 3, 2, 4, 4, = ;
       1, 1, 4, 4, 1) $],
[$ mat(&4 + 2 = 1, "con" "riporto di", 1;
&4 + 4 + 1 = 4, "con" "riporto di", 1;
&2 + 1 + 1 = 4, "senza riporto", ;
&3 + 3 = 1, "con" "riporto di", 1) $]
)
]
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/enum-align_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Alignment shouldn't affect number
#set align(horizon)
+ ABCDEF\ GHIJKL\ MNOPQR
+ INNER\ INNER\ INNER
+ BACK\ HERE
|
https://github.com/liuguangxi/fractusist | https://raw.githubusercontent.com/liuguangxi/fractusist/main/src/dragon.typ | typst | MIT License | //==============================================================================
// The dragon curve
//
// Public functions:
// dragon-curve
//==============================================================================
#import "util.typ": gen-svg
//----------------------------------------------------------
// Public functions
//----------------------------------------------------------
// Generate dragon curve
//
// iterations: [0, 16]
// axiom: "FX"
// rule set:
// "X" -> "X+YF+"
// "Y" -> "-FX-Y"
// angle: 90 deg
//
// Arguments:
// n: the number of iterations
// step-size: step size (in pt), optional
// stroke-style: stroke style, can be none or color or gradient or stroke object, optional
// width: the width of the image, optional
// height: the height of the image, optional
// fit: how the image should adjust itself to a given area, "cover" / "contain" / "stretch", optional
//
// Returns:
// content: generated vector graphic
#let dragon-curve(n, step-size: 10, stroke-style: black + 1pt, width: auto, height: auto, fit: "cover") = {
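  // Added usage note (not in the original source): a typical call from a document
  // is e.g. `#dragon-curve(10, step-size: 4, stroke-style: blue + 1pt)`, which draws
  // the curve after 10 rewriting iterations with 4pt segments.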
assert(type(n) == int and n >= 0 and n <= 16, message: "`n` should be in range [0, 16]")
assert(step-size > 0, message: "`step-size` should be positive")
if stroke-style != none {stroke-style = stroke(stroke-style)}
let stroke-width = if (stroke-style == none) {0} else if (stroke-style.thickness == auto) {1} else {calc.abs(stroke-style.thickness.pt())}
let axiom = "FX"
let rule-set = (X: "X+YF+", Y: "-FX-Y")
let dx = (step-size, 0, -step-size, 0)
let dy = (0, step-size, 0, -step-size)
let dx-str = dx.map(i => repr(i))
let dy-str = dy.map(i => repr(i))
let s = axiom
for i in range(n) {s = s.replace(regex("X|Y"), x => rule-set.at(x.text))}
let dir = 0
let path-d = "M0 0 "
for c in s {
if c == "-" {
dir -= 1
if dir < 0 {dir = 3}
} else if c == "+" {
dir += 1
if (dir == 4) {dir = 0}
} else if c == "F" {
path-d += "l" + dx-str.at(dir) + " " + dy-str.at(dir) + " "
}
}
let tbl-x-min = (0, 0, 0, -2, -4, -5, -5, -5, -5, -5, -10, -42, -74, -85, -85, -85, -85)
let tbl-x-max = (0, 1, 1, 1, 1, 1, 2, 10, 18, 21, 21, 21, 21, 21, 42, 170, 298)
let tbl-y-min = (0, 0, 0, 0, -1, -5, -9, -10, -10, -10, -10, -10, -21, -85, -149, -170, -170)
let tbl-y-max = (0, 1, 2, 2, 2, 2, 2, 2, 5, 21, 37, 42, 42, 42, 42, 42, 85)
let margin = calc.max(5, stroke-width)
let x-min = tbl-x-min.at(n) * step-size
let x-max = tbl-x-max.at(n) * step-size
let y-min = tbl-y-min.at(n) * step-size
let y-max = tbl-y-max.at(n) * step-size
x-min = calc.floor(x-min - margin)
x-max = calc.ceil(x-max + margin)
y-min = calc.floor(y-min - margin)
y-max = calc.ceil(y-max + margin)
let svg-code = gen-svg(path-d, (x-min, x-max, y-min, y-max), none, stroke-style)
return image.decode(svg-code, width: width, height: height, fit: fit)
}
|
https://github.com/Amelia-Mowers/typst-tabut | https://raw.githubusercontent.com/Amelia-Mowers/typst-tabut/main/doc/example-snippets/combine.typ | typst | MIT License | #import "@preview/tabut:<<VERSION>>": tabut
#let employees = (
(id: 3251, first: "Alice", last: "Smith", middle: "Jane"),
(id: 4872, first: "Carlos", last: "Garcia", middle: "Luis"),
(id: 5639, first: "Evelyn", last: "Chen", middle: "Ming")
);
#tabut(
employees,
(
(header: [*ID*], func: r => r.id ),
(header: [*Full Name*], func: r => [#r.first #r.middle.first(), #r.last] ),
),
fill: (_, row) => if calc.odd(row) { luma(240) } else { luma(220) },
stroke: none
) |
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/gentle-clues/0.8.0/lib/clues.typ | typst | Apache License 2.0 | // gentle-clues
#import "@preview/linguify:0.4.0": *
// Helper
#let if-auto-then(val,ret) = {
if (val == auto){
ret
} else {
val
}
}
// Global states
#let __gc_clues_breakable = state("breakable", false)
#let __gc_clues_headless = state("headless", false)
#let __gc_clue_width = state("clue-width", auto)
#let __gc_header_inset = state("header-inset", 0.5em)
#let __gc_content_inset = state("content-inset", 1em)
#let __gc_border_radius = state("border-radius", 2pt)
#let __gc_border_width = state("border-width", 0.5pt)
#let __gc_stroke_width = state("stroke-width", 2pt)
#let __gc_task-counter = counter("gc-task-counter")
#let __gc_enable-task-counter = state("gc-task-counter", true)
// load linguify language database
#let lang_database = toml("lang.toml")
/// Config Init
#let gentle-clues(
breakable: false,
headless: false,
header-inset: 0.5em,
// default-title: auto, // string or none
// default-icon: emoji.magnify.l, // file or symbol
// default-color: navy, // color profile name
width: auto, // length
stroke-width: 2pt,
border-radius: 2pt, // length
border-width: 0.5pt, // length
content-inset: 1em, // length
show-task-counter: false, // [bool]
body
) = {
// Conf linguify to lang parameter
// linguify_set_database(toml("lang.toml"));
// Update breakability
__gc_clues_breakable.update(breakable);
// Update clues width
__gc_clue_width.update(width);
// Update headless state
__gc_clues_headless.update(headless);
// Update header inset
__gc_header_inset.update(header-inset);
// Update border radius
__gc_border_radius.update(border-radius);
// Update border width
__gc_border_width.update(border-width);
// Update stroke width
__gc_stroke_width.update(stroke-width);
// Update content inset
__gc_content_inset.update(content-inset);
// Update if task counter should be shown
__gc_enable-task-counter.update(show-task-counter);
body
}
// Basic gentle-clue (clue) template
#let clue(
content,
title: "", // string or none
icon: emoji.magnify.l, // file or symbol
accent-color: navy, // color
border-color: auto,
header-color: auto,
body-color: none,
width: auto, // length
radius: auto, // length
border-width: auto, // length
content-inset: auto, // length
header-inset: auto, // length
breakable: auto,
) = {
context {
// Set default color:
let _stroke-color = luma(70);
let _header-color = _stroke-color.lighten(85%);
let _border-color = _header-color.darken(10%);
let _border-width = if-auto-then(border-width, __gc_border_width.get());
let _border-radius = if-auto-then(radius, __gc_border_radius.get())
let _stroke-width = if-auto-then(auto, __gc_stroke_width.get())
let _clip-content = true
// setting bg and stroke color from color argument
assert(type(accent-color) in (color, gradient), message: "expected color or gradient, found " + type(accent-color));
if (header-color != auto) {
assert(type(header-color) in (color, gradient, pattern), message: "expected color or gradient, found " + type(header-color));
}
if (border-color != auto) {
assert(type(border-color) == color, message: "expected color, found " + type(border-color));
}
if (body-color != none) {
assert(type(body-color) in (color, gradient, pattern), message: "expected color, found " + type(body-color));
}
if (type(accent-color) == color) {
_stroke-color = accent-color;
_header-color = if-auto-then(header-color, accent-color.lighten(85%));
_border-color = if-auto-then(border-color, accent-color.lighten(70%));
} else if (type(accent-color) == gradient) {
_stroke-color = accent-color
_header-color = if-auto-then(header-color, accent-color);
_border-color = if-auto-then(border-color, accent-color);
}
// Disable Heading numbering for those headings
set heading(numbering: none, outlined: false, supplement: "Box")
// Header Part
let header = box(
fill: _header-color,
width: 100%,
radius: (top-right: _border-radius),
inset: if-auto-then(header-inset, __gc_header_inset.get()),
stroke: (right: _border-width + _header-color )
)[
#if icon == none { strong(title) } else {
grid(
columns: (auto, auto),
align: (horizon, left + horizon),
gutter: 1em,
box(height: 1em)[
#if type(icon) == symbol {
text(1em,icon)
} else {
image(icon, fit: "contain")
}
],
strong(title)
)
}
]
// Content-Box
let content-box(content) = block(
breakable: if-auto-then(breakable, __gc_clues_breakable.get()),
width: 100%,
fill: body-color,
inset: if-auto-then(content-inset, __gc_content_inset.get()),
radius: (
top-left: 0pt,
bottom-left: 0pt,
top-right: if (title != none){0pt} else {_border-radius},
rest: _border-radius
),
)[#content]
// Wrapper-Block
block(
breakable: if-auto-then(breakable, __gc_clues_breakable.get()),
width: if-auto-then(width, __gc_clue_width.get()),
inset: (left: 1pt),
radius: (right: _border-radius, left: 0pt),
stroke: (
left: (thickness: _stroke-width, paint: _stroke-color, cap: "butt"),
top: if (title != none){_border-width + _header-color} else {_border-width + _border-color},
rest: _border-width + _border-color,
),
clip: _clip-content,
)[
#set align(start)
#stack(dir: ttb,
if __gc_clues_headless.get() == false and title != none {
header
},
content-box(content)
)
] // block end
}
}
#let increment_task_counter() = {
context {
if (__gc_enable-task-counter.get() == true){
__gc_task-counter.step()
}
}
}
#let get_task_number() = {
context {
if (__gc_enable-task-counter.get() == true){
" " + __gc_task-counter.display()
}
}
}
// Helper for fetching the translated title
#let get-title-for(clue) = {
assert.eq(type(clue),str);
return linguify(clue, from: lang_database, default: linguify(clue, lang: "en", default: clue));
}
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/valkyrie/0.1.0/src/types/number.typ | typst | Apache License 2.0 | #import "../base-type.typ": base-type, assert-base-type
#import "../context.typ": context
/// Valkyrie schema generator for integer- and floating-point numbers
///
/// - name (internal):
/// - default (integer, float, none): Default value to set if none is provided. *MUST* respect all other validation requirements.
/// - min (integer, none): If not none, the minimum value that satisfies the validation. The program is *ILL-FORMED* if `min` is greater than `max`.
/// - max (integer, none): If not none, the maximum value that satisfies the validation. The program is *ILL-FORMED* if `max` is less than `min`.
/// - custom (function, none): If not none, a function that, if itself returns none, will produce the error set by `custom-error`.
/// - custom-error (string, none): If set, the error produced upon failure of `custom`.
/// - transform (function): a mapping function called after validation.
/// - types (internal):
/// -> schema
#let number(
name: "number",
default: none,
min: none,
max: none,
custom: none,
custom-error: auto,
transform: it=>it,
types: (float, int),
) = {
// Type safety
assert( type(default) in (..types, type(none)),
message: "Default of number must be of type integer, float, or none (possibly narrowed)")
assert( type(min) in (int, float, type(none)), message: "Minimum value must be an integer or float")
assert( type(max) in (int, float, type(none)), message: "Maximum value must be an integer or float")
assert( type(custom) in (function, type(none)), message: "Custom must be a function")
assert( type(custom-error) in (str, type(auto)), message: "Custom-error must be a string")
assert( type(transform) == function, message: "Transform must be a function that takes a single number and return a number")
return (:..base-type(),
name: name,
default: default,
min: min,
max: max,
custom: custom,
custom-error: custom-error,
transform: transform,
types: types,
validate: (self, it, ctx: context(), scope: ()) => {
// TO DO: Coercion
// Default value
if ( it == none ){ it = self.default }
// Assert type
if not (self.assert-type)(self, it, ctx: ctx, scope: scope, types: types){
return none
}
// Minimum value
if ( self.min != none ) and (it < self.min ){
return (self.fail-validation)( self, it, ctx: ctx, scope: scope,
message: "Value less than specified minimum of " + str(self.min))
}
// Maximum value
if ( self.max != none ) and (it > self.max ){
return (self.fail-validation)( self, it, ctx: ctx, scope: scope,
message: "Value greater than specified maximum of " + str(self.max))
}
// Custom
if ( self.custom != none ) and ( not (self.custom)(it) ){
let message = "Failed on custom check: " + repr(self.custom)
if ( self.custom-error != auto ){ message = self.custom-error }
return (self.fail-validation)(self, it, ctx: ctx, scope: scope, message: message)
}
return (self.transform)(it)
}
)
}
/// Specialization of @@number() that is only satisfied by whole numbers. Parameters of @@number remain available for further requirments.
#let integer = number.with( name: "integer", types: (int,))
/// Specialization of @@number() that is only satisfied by floating point numbers. Parameters of @@number remain available for further requirments.
#let floating-point = number.with( name: "float", types: (float,))
/// Specialization of @@integer() that is only satisfied by positive whole numbers. Parameters of @@number remain available for further requirments.
#let natural = number.with( name: "natural number", types: (int,), min: 0) |
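
// Added usage sketch (not part of the original file): schemas built from these
// constructors are dictionaries whose `validate` entry is driven by valkyrie's
// parsing machinery, so treat the direct calls below as illustrative only.
//   #let port = integer(min: 1, max: 65535)
//   #let score = floating-point(min: 0, max: 1, default: 0.5)
//   // (port.validate)(port, 8080) yields 8080, while 70000 would fail the max check.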
https://github.com/elteammate/typst-shell-escape | https://raw.githubusercontent.com/elteammate/typst-shell-escape/main/shell-escape.typ | typst | #let hex-digits = "0123456789abcdef"
#let hex(x) = {
hex-digits.at(int(x / 16))
hex-digits.at(calc.rem(x, 16))
}
#let ascii-table = {
let result = (:)
for (i, c) in range(128)
.map(c => eval("[\u{" + hex(c) + "}]").text)
.enumerate() {
result.insert(c, i)
}
result
}
#let encode-int(x) = {
let alpha = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
let base = alpha.len()
while x > 0 {
alpha.at(calc.rem(x, base))
x = int(x / base)
}
}
#let hash(obj) = {
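  // Added explanatory note: a lightweight polynomial rolling hash over the ASCII
  // codes of repr(obj), taken with four independent multiplier/modulus pairs and
  // concatenated after base-62 encoding. It only builds stable file names for the
  // shell-escape handshake and is not meant to be cryptographically strong.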
obj = repr(obj)
let (a1, a2, a3, a4, h1, h2, h3, h4, mod1, mod2, mod3, mod4) = (
911, 1642, 7256, 5134, 60298, 134587, 18096, 109863,
1000000007, 1300000721, 1500004447, 1800003419
)
for c in obj.clusters() {
let utf-8 = ascii-table.at(c)
h1 = calc.rem(h1 * a1 + utf-8, mod1)
h2 = calc.rem(h2 * a2 + utf-8, mod2)
h3 = calc.rem(h3 * a3 + utf-8, mod3)
h4 = calc.rem(h4 * a4 + utf-8, mod4)
}
encode-int(h1)
encode-int(h2)
encode-int(h3)
encode-int(h4)
}
#let encode-hex(s) = {
for c in s.clusters() {
hex(ascii-table.at(c))
}
}
#let encode-url(s) = {
let representable = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_~"
let result = ""
for c in s.clusters() {
if representable.contains(c) {
result += c
} else {
result += "%" + hex(ascii-table.at(c))
}
}
result
}
#let shell-escape-root = "//tmp/typst-shell-escape/shell-escape/"
#let do-with-shell-escape(action, hash, fn: read) = {
let path = shell-escape-root + hash + "_" + action
fn(path)
}
#let chunks(s, n) = {
let result = ()
for (i, c) in s.clusters().enumerate() {
if calc.rem(i, n) == 0 {
result.push(c)
} else {
result.last() += c
}
}
result
}
#let reset-and-terminate-all(discriminator: "") = {
assert.eq("!", do-with-shell-escape("reset", discriminator))
}
#let exec-command-async(
command,
discriminator: "",
) = {
let disc-hash = hash(discriminator + "gIbBeRiSh" + command)
reset-and-terminate-all(discriminator: disc-hash)
for part in chunks(command, 32) {
let part-hash = hash(encode-hex(part) + disc-hash)
assert.eq("!", do-with-shell-escape(encode-hex(part), part-hash))
}
assert.eq("!", do-with-shell-escape("exec", disc-hash))
}
#let wait-one(
discriminator: "",
allow-non-zero-error-code: true,
) = {
let disc-hash = hash(discriminator + "gIbBeRiSh")
assert.eq("!", do-with-shell-escape("wait", disc-hash))
do-with-shell-escape("diagnostics", disc-hash, fn: json)
}
#let get-stdout(discriminator: "", method: read, format: "") = {
let disc-hash = hash(discriminator + "gIbBeRiSh")
do-with-shell-escape("stdout" + format, disc-hash, fn: method)
}
#let get-stderr(discriminator: "", method: read, format: "") = {
let disc-hash = hash(discriminator + "gIbBeRiSh")
do-with-shell-escape("stderr" + format, disc-hash, fn: method)
}
#let exec-command(
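  // Added explanatory note: blocking wrapper around the pieces above; it queues the
  // command, waits for the helper process to report completion, checks that the echoed
  // command matches the requested one, then reads stdout/stderr back with the supplied
  // `method-*` readers (read, json, image, ...).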
command,
method-stdout: read,
method-stderr: read,
format-stdout: "",
format-stderr: "",
custom-hash: "",
allow-non-zero-error-code: true,
) = {
let command-hash = hash(command + "GiBbErIsH" + custom-hash)
exec-command-async(command, discriminator: command-hash)
let data = wait-one(discriminator: command-hash)
if data.command.trim() != command.trim() {
panic("Executed command mismatches with the one requested: " + data.command + " != " + command)
}
if not data.result.ran {
panic("Failed to execute command: ", data.result.error)
}
if not allow-non-zero-error-code {
assert.eq(data.result.error_code, 0, message: "Exit code is not zero")
}
let stdout = get-stdout(discriminator: command-hash, method: method-stdout, format: format-stdout)
let stderr = get-stderr(discriminator: command-hash, method: method-stderr, format: format-stderr)
(stdout: stdout, stderr: stderr, error-code: data.result.error_code)
}
#let http-get(url, method: read, format: "") = {
let command = "curl -sS \"" + url + "\""
let result = exec-command(command, method-stdout: method, format-stdout: format)
if result.error-code != 0 {
panic("Failed to execute command: ", result.stderr)
}
result.stdout
}
/*
// #exec-command("ls -la /")
// #exec-command("sleep 1")
#set page(paper: "a8")
$
2 + 2 dot 2 = #exec-command("python -c \"print(2 + 2 * 2)\"").stdout
$
#http-get(
"https://latex.codecogs.com/svg.image?%5Cfrac%7B4%7D%7B5%7D+%5Cpi%5COmega%5Cint_%7B2%5Cpi%7D%5E%7B%5Cinfty%7D%7B5%5Cleft%5C(%5Cfrac%7B%5Ctau+3%7D%7B2%7D%5Cright%5C)d%5Comega%7D)",
method: image,
format: ".svg",
)
*/
|
|
https://github.com/frectonz/the-pg-book | https://raw.githubusercontent.com/frectonz/the-pg-book/main/book/027.%20gap.html.typ | typst | gap.html
Mind the Gap
May 2004When people care enough about something to do it well, those who
do it best tend to be far better than everyone else. There's a
huge gap between Leonardo and second-rate contemporaries like
Borgognone. You see the same gap between <NAME> and the
average writer of detective novels. A top-ranked professional chess
player could play ten thousand games against an ordinary club player
without losing once.Like chess or painting or writing novels, making money is a very
specialized skill. But for some reason we treat this skill
differently. No one complains when a few people surpass all the
rest at playing chess or writing novels, but when a few people make
more money than the rest, we get editorials saying this is wrong.Why? The pattern of variation seems no different than for any other
skill. What causes people to react so strongly when the skill is
making money?I think there are three reasons we treat making money as different:
the misleading model of wealth we learn as children; the disreputable
way in which, till recently, most fortunes were accumulated; and
the worry that great variations in income are somehow bad for
society. As far as I can tell, the first is mistaken, the second
outdated, and the third empirically false. Could it be that, in a
modern democracy, variation in income is actually a sign of health?

The Daddy Model of Wealth

When I was five I thought electricity was created by electric
sockets. I didn't realize there were power plants out there
generating it. Likewise, it doesn't occur to most kids that wealth
is something that has to be generated. It seems to be something
that flows from parents.Because of the circumstances in which they encounter it, children
tend to misunderstand wealth. They confuse it with money. They
think that there is a fixed amount of it. And they think of it as
something that's distributed by authorities (and so should be
distributed equally), rather than something that has to be created
(and might be created unequally).In fact, wealth is not money. Money is just a convenient way of
trading one form of wealth for another. Wealth is the underlying
stuff—the goods and services we buy. When you travel to a
rich or poor country, you don't have to look at people's bank
accounts to tell which kind you're in. You can see
wealth—in buildings and streets, in the clothes and the health
of the people.Where does wealth come from? People make it. This was easier to
grasp when most people lived on farms, and made many of the things
they wanted with their own hands. Then you could see in the house,
the herds, and the granary the wealth that each family created. It
was obvious then too that the wealth of the world was not a fixed
quantity that had to be shared out, like slices of a pie. If you
wanted more wealth, you could make it.This is just as true today, though few of us create wealth directly
for ourselves (except for a few vestigial domestic tasks). Mostly
we create wealth for other people in exchange for money, which we
then trade for the forms of wealth we want.
[1]Because kids are unable to create wealth, whatever they have has
to be given to them. And when wealth is something you're given,
then of course it seems that it should be distributed equally.
[2]
As in most families it is. The kids see to that. "Unfair," they
cry, when one sibling gets more than another.In the real world, you can't keep living off your parents. If you
want something, you either have to make it, or do something of
equivalent value for someone else, in order to get them to give you
enough money to buy it. In the real world, wealth is (except for
a few specialists like thieves and speculators) something you have
to create, not something that's distributed by Daddy. And since
the ability and desire to create it vary from person to person,
it's not made equally.You get paid by doing or making something people want, and those
who make more money are often simply better at doing what people
want. Top actors make a lot more money than B-list actors. The
B-list actors might be almost as charismatic, but when people go
to the theater and look at the list of movies playing, they want
that extra oomph that the big stars have.Doing what people want is not the only way to get money, of course.
You could also rob banks, or solicit bribes, or establish a monopoly.
Such tricks account for some variation in wealth, and indeed for
some of the biggest individual fortunes, but they are not the root
cause of variation in income. The root cause of variation in income,
as Occam's Razor implies, is the same as the root cause of variation
in every other human skill.In the United States, the CEO of a large public company makes about
100 times as much as the average person.
[3]
Basketball players
make about 128 times as much, and baseball players 72 times as much.
Editorials quote this kind of statistic with horror. But I have
no trouble imagining that one person could be 100 times as productive
as another. In ancient Rome the price of slaves varied by
a factor of 50 depending on their skills.
[4]
And that's without
considering motivation, or the extra leverage in productivity that
you can get from modern technology.Editorials about athletes' or CEOs' salaries remind me of early
Christian writers, arguing from first principles about whether the
Earth was round, when they could just walk outside and check.
[5]
How much someone's work is worth is not a policy question. It's
something the market already determines."Are they really worth 100 of us?" editorialists ask. Depends on
what you mean by worth. If you mean worth in the sense of what
people will pay for their skills, the answer is yes, apparently.A few CEOs' incomes reflect some kind of wrongdoing. But are there
not others whose incomes really do reflect the wealth they generate?
<NAME> saved a company that was in a terminal decline. And not
merely in the way a turnaround specialist does, by cutting costs;
he had to decide what Apple's next products should be. Few others
could have done it. And regardless of the case with CEOs, it's
hard to see how anyone could argue that the salaries of professional
basketball players don't reflect supply and demand.It may seem unlikely in principle that one individual could really
generate so much more wealth than another. The key to this mystery
is to revisit that question, are they really worth 100 of us?
Would a basketball team trade one of their players for 100
random people? What would Apple's next product look like if you
replaced <NAME> with a committee of 100 random people?
[6]
These
things don't scale linearly. Perhaps the CEO or the professional
athlete has only ten times (whatever that means) the skill and
determination of an ordinary person. But it makes all the difference
that it's concentrated in one individual.When we say that one kind of work is overpaid and another underpaid,
what are we really saying? In a free market, prices are determined
by what buyers want. People like baseball more than poetry, so
baseball players make more than poets. To say that a certain kind
of work is underpaid is thus identical with saying that people want
the wrong things.Well, of course people want the wrong things. It seems odd to be
surprised by that. And it seems even odder to say that it's
unjust that certain kinds of work are underpaid.
[7]
Then
you're saying that it's unjust that people want the wrong things.
It's lamentable that people prefer reality TV and corndogs to
Shakespeare and steamed vegetables, but unjust? That seems like
saying that blue is heavy, or that up is circular.The appearance of the word "unjust" here is the unmistakable spectral
signature of the Daddy Model. Why else would this idea occur in
this odd context? Whereas if the speaker were still operating on
the Daddy Model, and saw wealth as something that flowed from a
common source and had to be shared out, rather than something
generated by doing what other people wanted, this is exactly what
you'd get on noticing that some people made much more than others.

When we talk about "unequal distribution of income," we should
also ask, where does that income come from?
[8]
Who made the wealth
it represents? Because to the extent that income varies simply
according to how much wealth people create, the distribution may
be unequal, but it's hardly unjust.

Stealing It

The second reason we tend to find great disparities of wealth
alarming is that for most of human history the usual way to accumulate
a fortune was to steal it: in pastoral societies by cattle raiding;
in agricultural societies by appropriating others' estates in times
of war, and taxing them in times of peace.

In conflicts, those on the winning side would receive the estates
confiscated from the losers. In England in the 1060s, when William
<NAME> distributed the estates of the defeated Anglo-Saxon
nobles to his followers, the conflict was military. By the 1530s,
when <NAME> distributed the estates of the monasteries to his
followers, it was mostly political.
[9]
But the principle was the
same. Indeed, the same principle is at work now in Zimbabwe.

In more organized societies, like China, the ruler and his officials
used taxation instead of confiscation. But here too we see the
same principle: the way to get rich was not to create wealth, but
to serve a ruler powerful enough to appropriate it.

This started to change in Europe with the rise of the middle class.
Now we think of the middle class as people who are neither rich nor
poor, but originally they were a distinct group. In a feudal
society, there are just two classes: a warrior aristocracy, and the
serfs who work their estates. The middle class were a new, third
group who lived in towns and supported themselves by manufacturing
and trade.

Starting in the tenth and eleventh centuries, petty nobles and
former serfs banded together in towns that gradually became powerful
enough to ignore the local feudal lords.
[10]
Like serfs, the middle
class made a living largely by creating wealth. (In port cities
like Genoa and Pisa, they also engaged in piracy.) But unlike serfs
they had an incentive to create a lot of it. Any wealth a serf
created belonged to his master. There was not much point in making
more than you could hide. Whereas the independence of the townsmen
allowed them to keep whatever wealth they created.

Once it became possible to get rich by creating wealth, society as
a whole started to get richer very rapidly. Nearly everything we
have was created by the middle class. Indeed, the other two classes
have effectively disappeared in industrial societies, and their
names been given to either end of the middle class. (In the original
sense of the word, <NAME> is middle class.)

But it was not till the Industrial Revolution that wealth creation
definitively replaced corruption as the best way to get rich. In
England, at least, corruption only became unfashionable (and in
fact only started to be called "corruption") when there started to
be other, faster ways to get rich.

Seventeenth-century England was much like the third world today,
in that government office was a recognized route to wealth. The
great fortunes of that time still derived more from what we would
now call corruption than from commerce.
[11]
By the nineteenth
century that had changed. There continued to be bribes, as there
still are everywhere, but politics had by then been left to men who
were driven more by vanity than greed. Technology had made it
possible to create wealth faster than you could steal it. The
prototypical rich man of the nineteenth century was not a courtier
but an industrialist.

With the rise of the middle class, wealth stopped being a zero-sum
game. Jobs and Wozniak didn't have to make us poor to make themselves
rich. Quite the opposite: they created things that made our lives
materially richer. They had to, or we wouldn't have paid for them.

But since for most of the world's history the main route to wealth
was to steal it, we tend to be suspicious of rich people. Idealistic
undergraduates find their unconsciously preserved child's model of
wealth confirmed by eminent writers of the past. It is a case of
the mistaken meeting the outdated.

"Behind every great fortune, there is a crime," Balzac wrote. Except
he didn't. What he actually said was that a great fortune with no
apparent cause was probably due to a crime well enough executed
that it had been forgotten. If we were talking about Europe in
1000, or most of the third world today, the standard misquotation
would be spot on. But Balzac lived in nineteenth-century France,
where the Industrial Revolution was well advanced. He knew you
could make a fortune without stealing it. After all, he did himself,
as a popular novelist.
[12]

Only a few countries (by no coincidence, the richest ones) have
reached this stage. In most, corruption still has the upper hand.
In most, the fastest way to get wealth is by stealing it. And so
when we see increasing differences in income in a rich country,
there is a tendency to worry that it's sliding back toward becoming
another Venezuela. I think the opposite is happening. I think
you're seeing a country a full step ahead of Venezuela.

The Lever of Technology

Will technology increase the gap between rich and poor? It will
certainly increase the gap between the productive and the unproductive.
That's the whole point of technology. With a tractor an energetic
farmer could plow six times as much land in a day as he could with
a team of horses. But only if he mastered a new kind of farming.

I've seen the lever of technology grow visibly in my own time. In
high school I made money by mowing lawns and scooping ice cream at
Baskin-Robbins. This was the only kind of work available at the
time. Now high school kids could write software or design web
sites. But only some of them will; the rest will still be scooping
ice cream.

I remember very vividly when in 1985 improved technology made it
possible for me to buy a computer of my own. Within months I was
using it to make money as a freelance programmer. A few years
before, I couldn't have done this. A few years before, there was
no such thing as a freelance programmer. But Apple created
wealth, in the form of powerful, inexpensive computers, and programmers
immediately set to work using it to create more.

As this example suggests, the rate at which technology increases
our productive capacity is probably exponential, rather than linear.
So we should expect to see ever-increasing variation in individual
productivity as time goes on. Will that increase the gap between
rich and the poor? Depends which gap you mean.

Technology should increase the gap in income, but it seems to
decrease other gaps. A hundred years ago, the rich led a different
kind of life from ordinary people. They lived in houses
full of servants, wore elaborately uncomfortable clothes, and
travelled about in carriages drawn by teams of horses which themselves
required their own houses and servants. Now, thanks to technology,
the rich live more like the average person.

Cars are a good example of why. It's possible to buy expensive,
handmade cars that cost hundreds of thousands of dollars. But there
is not much point. Companies make more money by building a large
number of ordinary cars than a small number of expensive ones. So
a company making a mass-produced car can afford to spend a lot more
on its design. If you buy a custom-made car, something will always
be breaking. The only point of buying one now is to advertise that
you can.

Or consider watches. Fifty years ago, by spending a lot of money
on a watch you could get better performance. When watches had
mechanical movements, expensive watches kept better time. Not any
more. Since the invention of the quartz movement, an ordinary Timex
is more accurate than a Patek Philippe costing hundreds of thousands
of dollars.
[13]
Indeed, as with expensive cars, if you're determined
to spend a lot of money on a watch, you have to put up with some
inconvenience to do it: as well as keeping worse time, mechanical
watches have to be wound.

The only thing technology can't cheapen is brand. Which is precisely
why we hear ever more about it. Brand is the residue left as the
substantive differences between rich and poor evaporate. But what
label you have on your stuff is a much smaller matter than having
it versus not having it. In 1900, if you kept a carriage, no one
asked what year or brand it was. If you had one, you were rich.
And if you weren't rich, you took the omnibus or walked. Now even
the poorest Americans drive cars, and it is only because we're so
well trained by advertising that we can even recognize the especially
expensive ones.
[14]

The same pattern has played out in industry after industry. If
there is enough demand for something, technology will make it cheap
enough to sell in large volumes, and the mass-produced versions
will be, if not better, at least more convenient.
[15]
And there
is nothing the rich like more than convenience. The rich people I
know drive the same cars, wear the same clothes, have the same kind
of furniture, and eat the same foods as my other friends. Their
houses are in different neighborhoods, or if in the same neighborhood
are different sizes, but within them life is similar. The houses
are made using the same construction techniques and contain much
the same objects. It's inconvenient to do something expensive and
custom.

The rich spend their time more like everyone else too. <NAME>er seems long gone. Now, most people who are rich enough not
to work do anyway. It's not just social pressure that makes them;
idleness is lonely and demoralizing.

Nor do we have the social distinctions there were a hundred years
ago. The novels and etiquette manuals of that period read now
like descriptions of some strange tribal society. "With respect
to the continuance of friendships..." hints Mrs. Beeton's Book
of Household Management (1880), "it may be found necessary, in
some cases, for a mistress to relinquish, on assuming the responsibility
of a household, many of those commenced in the earlier part of her
life." A woman who married a rich man was expected to drop friends
who didn't. You'd seem a barbarian if you behaved that way today.
You'd also have a very boring life. People still tend to segregate
themselves somewhat, but much more on the basis of education than
wealth.
[16]

Materially and socially, technology seems to be decreasing the gap
between the rich and the poor, not increasing it. If Lenin walked
around the offices of a company like Yahoo or Intel or Cisco, he'd
think communism had won. Everyone would be wearing the same clothes,
have the same kind of office (or rather, cubicle) with the same
furnishings, and address one another by their first names instead
of by honorifics. Everything would seem exactly as he'd predicted,
until he looked at their bank accounts. Oops.

Is it a problem if technology increases that gap? It doesn't seem
to be so far. As it increases the gap in income, it seems to
decrease most other gaps.

Alternative to an Axiom

One often hears a policy criticized on the grounds that it would
increase the income gap between rich and poor. As if it were an
axiom that this would be bad. It might be true that increased
variation in income would be bad, but I don't see how we can say
it's axiomatic.

Indeed, it may even be false, in industrial democracies. In a
society of serfs and warlords, certainly, variation in income is a
sign of an underlying problem. But serfdom is not the only cause
of variation in income. A 747 pilot doesn't make 40 times as much
as a checkout clerk because he is a warlord who somehow holds her
in thrall. His skills are simply much more valuable.

I'd like to propose an alternative idea: that in a modern society,
increasing variation in income is a sign of health. Technology
seems to increase the variation in productivity at faster than
linear rates. If we don't see corresponding variation in income,
there are three possible explanations: (a) that technical innovation
has stopped, (b) that the people who would create the most wealth
aren't doing it, or (c) that they aren't getting paid for it.

I think we can safely say that (a) and (b) would be bad. If you
disagree, try living for a year using only the resources available
to the average Frankish nobleman in 800, and report back to us.
(I'll be generous and not send you back to the stone age.)

The only option, if you're going to have an increasingly prosperous
society without increasing variation in income, seems to be (c),
that people will create a lot of wealth without being paid for it.
That Jobs and Wozniak, for example, will cheerfully work 20-hour
days to produce the Apple computer for a society that allows them,
after taxes, to keep just enough of their income to match what they
would have made working 9 to 5 at a big company.

Will people create wealth if they can't get paid for it? Only if
it's fun. People will write operating systems for free. But they
won't install them, or take support calls, or train customers to
use them. And at least 90% of the work that even the highest tech
companies do is of this second, unedifying kind.

All the unfun kinds of wealth creation slow dramatically in a society
that confiscates private fortunes. We can confirm this empirically.
Suppose you hear a strange noise that you think may be due to a
nearby fan. You turn the fan off, and the noise stops. You turn
the fan back on, and the noise starts again. Off, quiet. On,
noise. In the absence of other information, it would seem the noise
is caused by the fan.

At various times and places in history, whether you could accumulate
a fortune by creating wealth has been turned on and off. Northern
Italy in 800, off (warlords would steal it). Northern Italy in
1100, on. Central France in 1100, off (still feudal). England in
1800, on. England in 1974, off (98% tax on investment income).
United States in 1974, on. We've even had a twin study: West
Germany, on; East Germany, off. In every case, the creation of
wealth seems to appear and disappear like the noise of a fan as you
switch on and off the prospect of keeping it.

There is some momentum involved. It probably takes at least a
generation to turn people into East Germans (luckily for England).
But if it were merely a fan we were studying, without all the extra
baggage that comes from the controversial topic of wealth, no one
would have any doubt that the fan was causing the noise.

If you suppress variations in income, whether by stealing private
fortunes, as feudal rulers used to do, or by taxing them away, as
some modern governments have done, the result always seems to be
the same. Society as a whole ends up poorer.

If I had a choice of living in a society where I was materially
much better off than I am now, but was among the poorest, or in one
where I was the richest, but much worse off than I am now, I'd take
the first option. If I had children, it would arguably be immoral
not to. It's absolute poverty you want to avoid, not relative
poverty. If, as the evidence so far implies, you have to have one
or the other in your society, take relative poverty.

You need rich people in your society not so much because in spending
their money they create jobs, but because of what they have to do
to get rich. I'm not talking about the trickle-down effect
here. I'm not saying that if you let <NAME> get rich, he'll
hire you as a waiter at his next party. I'm saying that he'll make
you a tractor to replace your horse.

Notes

[1]
Part of the reason this subject is so contentious is that some
of those most vocal on the subject of wealth—university
students, heirs, professors, politicians, and journalists—have
the least experience creating it. (This phenomenon will be familiar
to anyone who has overheard conversations about sports in a bar.)

Students are mostly still on the parental dole, and have not stopped
to think about where that money comes from. Heirs will be on the
parental dole for life. Professors and politicians live within
socialist eddies of the economy, at one remove from the creation
of wealth, and are paid a flat rate regardless of how hard they
work. And journalists as part of their professional code segregate
themselves from the revenue-collecting half of the businesses they
work for (the ad sales department). Many of these people never
come face to face with the fact that the money they receive represents
wealth—wealth that, except in the case of journalists, someone
else created earlier. They live in a world in which income is
doled out by a central authority according to some abstract notion
of fairness (or randomly, in the case of heirs), rather than given
by other people in return for something they wanted, so it may seem
to them unfair that things don't work the same in the rest of the
economy.

(Some professors do create a great deal of wealth for
society. But the money they're paid isn't a quid pro quo.
It's more in the nature of an investment.)

[2]
When one reads about the origins of the Fabian Society, it
sounds like something cooked up by the high-minded Edwardian
child-heroes of <NAME>'s The Wouldbegoods.

[3]
According to a study by the Corporate Library, the median total
compensation, including salary, bonus, stock grants, and the exercise
of stock options, of S&P 500 CEOs in 2002 was $3.65 million.
According to Sports Illustrated, the average NBA player's
salary during the 2002-03 season was $4.54 million, and the average
major league baseball player's salary at the start of the 2003
season was $2.56 million. According to the Bureau of Labor
Statistics, the mean annual wage in the US in 2002 was $35,560.

[4]
In the early empire the price of an ordinary adult slave seems
to have been about 2,000 sestertii (e.g. Horace, Sat. ii.7.43).
A servant girl cost 600 (Martial vi.66), while Columella (iii.3.8)
says that a skilled vine-dresser was worth 8,000. A doctor, P.
<NAME>, paid 50,000 sestertii for his freedom (Dessau,
Inscriptiones 7812). Seneca (Ep. xxvii.7) reports
that one Calvisius Sabinus paid 100,000 sestertii apiece for slaves
learned in the Greek classics. Pliny (Hist. Nat. vii.39)
says that the highest price paid for a slave up to his time was
700,000 sestertii, for the linguist (and presumably teacher) Daphnis,
but that this had since been exceeded by actors buying their own
freedom.

Classical Athens saw a similar variation in prices. An ordinary
laborer was worth about 125 to 150 drachmae. Xenophon (Mem.
ii.5) mentions prices ranging from 50 to 6,000 drachmae (for the
manager of a silver mine).

For more on the economics of ancient slavery see:

<NAME>., "Slavery in the Ancient World," Economic History
Review, 2:9 (1956), 185-199, reprinted in Finley, M. I. (ed.),
Slavery in Classical Antiquity, Heffer, 1964.

[5]
Eratosthenes (276—195 BC) used shadow lengths in different
cities to estimate the Earth's circumference. He was off by only
about 2%.

[6]
No, and Windows, respectively.

[7]
One of the biggest divergences between the Daddy Model and
reality is the valuation of hard work. In the Daddy Model, hard
work is in itself deserving. In reality, wealth is measured by
what one delivers, not how much effort it costs. If I paint someone's
house, the owner shouldn't pay me extra for doing it with a toothbrush.

It will seem to someone still implicitly operating on the Daddy
Model that it is unfair when someone works hard and doesn't get
paid much. To help clarify the matter, get rid of everyone else
and put our worker on a desert island, hunting and gathering fruit.
If he's bad at it he'll work very hard and not end up with much
food. Is this unfair? Who is being unfair to him?

[8]
Part of the reason for the tenacity of the Daddy Model may be
the dual meaning of "distribution." When economists talk about
"distribution of income," they mean statistical distribution. But
when you use the phrase frequently, you can't help associating it
with the other sense of the word (as in e.g. "distribution of alms"),
and thereby subconsciously seeing wealth as something that flows
from some central tap. The word "regressive" as applied to tax
rates has a similar effect, at least on me; how can anything
regressive be good?

[9]
"From the beginning of the reign <NAME> was an assiduous
courtier of the young Henry VIII and was soon to reap the rewards.
In 1525 he was made a Knight of the Garter and given the Earldom
of Rutland. In the thirties his support of the breach with Rome,
his zeal in crushing the Pilgrimage of Grace, and his readiness to
vote the death-penalty in the succession of spectacular treason
trials that punctuated Henry's erratic matrimonial progress made
him an obvious candidate for grants of monastic property."

Stone, Lawrence, Family and Fortune: Studies in Aristocratic
Finance in the Sixteenth and Seventeenth Centuries, Oxford
University Press, 1973, p. 166.

[10]
There is archaeological evidence for large settlements earlier,
but it's hard to say what was happening in them.

Hodges, Richard and <NAME>, <NAME> and
the Origins of Europe, Cornell University Press, 1983.

[11]
<NAME> and his son Robert were each in turn the most
powerful minister of the crown, and both used their position to
amass fortunes among the largest of their times. Robert in particular
took bribery to the point of treason. "As Secretary of State and
the leading advisor to <NAME> on foreign policy, [he] was a
special recipient of favour, being offered large bribes by the Dutch
not to make peace with Spain, and large bribes by Spain to make
peace." (Stone, op. cit., p. 17.)[12]
Though Balzac made a lot of money from writing, he was notoriously
improvident and was troubled by debts all his life.

[13]
A Timex will gain or lose about .5 seconds per day. The most
accurate mechanical watch, the Patek Philippe 10 Day Tourbillon,
is rated at -1.5 to +2 seconds. Its retail price is about $220,000.

[14]
If asked to choose which was more expensive, a well-preserved
1989 Lincoln Town Car ten-passenger limousine ($5,000) or a 2004
Mercedes S600 sedan ($122,000), the average Edwardian might well
guess wrong.

[15]
To say anything meaningful about income trends, you have to
talk about real income, or income as measured in what it can buy.
But the usual way of calculating real income ignores much of the
growth in wealth over time, because it depends on a consumer price
index created by bolting end to end a series of numbers that are
only locally accurate, and that don't include the prices of new
inventions until they become so common that their prices stabilize.

So while we might think it was very much better to live in a world
with antibiotics or air travel or an electric power grid than
without, real income statistics calculated in the usual way will
prove to us that we are only slightly richer for having these things.

Another approach would be to ask, if you were going back to the
year x in a time machine, how much would you have to spend on trade
goods to make your fortune? For example, if you were going back
to 1970 it would certainly be less than $500, because the processing
power you can get for $500 today would have been worth at least
$150 million in 1970. The function goes asymptotic fairly quickly,
because for times over a hundred years or so you could get all you
needed in present-day trash. In 1800 an empty plastic drink bottle
with a screw top would have seemed a miracle of workmanship.

[16]
Some will say this amounts to the same thing, because the rich
have better opportunities for education. That's a valid point. It
is still possible, to a degree, to buy your kids' way into top
colleges by sending them to private schools that in effect hack the
college admissions process.

According to a 2002 report by the National Center for Education
Statistics, about 1.7% of American kids attend private, non-sectarian
schools. At Princeton, 36% of the class of 2007 came from such
schools. (Interestingly, the number at Harvard is significantly
lower, about 28%.) Obviously this is a huge loophole. It does at
least seem to be closing, not widening.

Perhaps the designers of admissions processes should take a lesson
from the example of computer security, and instead of just assuming
that their system can't be hacked, measure the degree to which it
is.
|
|
https://github.com/soul667/typst | https://raw.githubusercontent.com/soul667/typst/main/PPT/typst-slides-fudan/themes/polylux/book/src/diy/quiz.typ | typst | #import "../../../polylux.typ": *
#set page(paper: "presentation-16-9", fill: teal.lighten(90%))
#set text(size: 25pt, font: "Blogger Sans")
#polylux-slide[
#set align(horizon + center)
= My fabulous talk
<NAME>
Conference on Advances in Slide Making
]
#polylux-slide[
== My slide title
Hello, world!
]
#polylux-slide[
== A quiz
What is the capital of the Republic of Benin?
#uncover(2)[Cotonou]
]
|
|
https://github.com/jneug/typst-codelst | https://raw.githubusercontent.com/jneug/typst-codelst/main/docs/manual.typ | typst | MIT License | #import "@local/mantys:0.1.1": *
#import "../src/codelst.typ"
#show: mantys.with(
name: "codelst",
title: "The codelst Package",
subtitle: [A *Typst* package to render source code],
authors: "<NAME>",
url: "https://github.com/jneug/typst-codelst",
version: "2.0.2",
date: "2023-07-19",
abstract: [
#package[codelst] is a *Typst* package inspired by LaTeX packages like #package[listings]. It adds functionality to render source code with line numbers, highlighted lines and more.
],
examples-scope: (
codelst: codelst,
sourcecode: codelst.sourcecode,
sourcefile: codelst.sourcefile,
code-frame: codelst.code-frame,
lineref: codelst.lineref,
),
)
#let footlink(url, label) = [#link(url, label)#footnote(link(url))]
#let gitlink(repo) = footlink("https://github.com/" + repo, repo)
// End preamble
= About
This package was created to render source code on my exercise sheets for my computer science classes. The exercises required source code to be set with line numbers that could be referenced from other parts of the document, to highlight certain lines and to load code from external files into my documents.
Since I used LaTeX before, I got inspired by packages like #footlink("https://ctan.org/package/listings", package("listings")) and attempted to replicate some of its functionality. CODELST is the result of this effort.
This document is a full description of all available commands and options. The first part provides examples of the major features. The second part is a command reference for CODELST.
See `example.typ`/`example.pdf` for some quick examples how to use CODELST.
= Usage
== Use as a package (Typst 0.9.0 and later)
For Typst 0.9.0 and later, CODELST can be imported from the preview repository:
#sourcecode(numbering: none)[```typ
#import "@preview/codelst:2.0.2": sourcecode
```]
Alternatively, the package can be downloaded and saved into the system dependent local package repository.
Either download the current release from GitHub#footnote[#link("https://github.com/jneug/typst-codelst/releases/latest")] and unpack the archive into your system dependent local repository folder#footnote[#link("https://github.com/typst/packages#local-packages")] or clone it directly:
#codesnippet[
```shell-unix-generic
git clone https://github.com/jneug/typst-codelst.git codelst-2.0.2
```]
In either case, make sure the files are placed in a folder with the correct version number: `codelst-2.0.2`
After installing the package, just import it inside your `typ` file:
#codesnippet[```typ
#import "@local/codelst:2.0.2": sourcecode
```]
== Use as a module
To use CODELST as a module for one project, get the file `codelst.typ` from the repository and save it in your project folder.
Import the module as usual:
#codesnippet[```typ
#import "codelst.typ": sourcecode
```]
== Rendering source code
CODELST adds the #cmd[sourcecode] command with various options to render code blocks. It wraps around any #cmd-[raw] block to adds some functionality and formatting options to it:
#example[````
#sourcecode[```typ
#show "ArtosFlow": name => box[
#box(image(
"logo.svg",
height: 0.7em,
))
#name
]
This report is embedded in the
ArtosFlow project. ArtosFlow is a
project of the Artos Institute.
```]
````]
CODELST adds line numbers and some default formatting to the code. Line numbers can be configured with a variety of options and #arg[frame] sets a custom wrapper function for the code. Setting #arg(frame: none) disables the code frame.
#example[````
#sourcecode(
numbers-side: right,
numbering: "I",
numbers-start: 10,
numbers-first: 11,
numbers-step: 4,
numbers-style: (i) => align(right, text(fill:blue, emph(i))),
frame: none
)[```typ
#show "ArtosFlow": name => box[
#box(image(
"logo.svg",
height: 0.7em,
))
#name
]
This report is embedded in the
ArtosFlow project. ArtosFlow is a
project of the Artos Institute.
```]
````]
Since it is common to highlight code blocks by putting them inside a #cmd-[block] element, CODELST does so with a light gray background and a border.
The frame can be modified by setting #arg[frame] to a function with one argument. To do this globally, an alias for the #cmd-[sourcecode] command can be created:
#example[````
#let codelst-sourcecode = sourcecode
#let sourcecode = codelst-sourcecode.with(
frame: block.with(
fill: fuchsia.lighten(96%),
stroke: 1pt + fuchsia,
radius: 2pt,
inset: (x: 10pt, y: 5pt)
)
)
#sourcecode[```typ
#show "ArtosFlow": name => box[
#box(image(
"logo.svg",
height: 0.7em,
))
#name
]
This report is embedded in the
ArtosFlow project. ArtosFlow is a
project of the Artos Institute.
```]
````]
Line numbers can be formatted with the #opt[numbers-style] option:
#example[````
#sourcecode(
gutter:2em,
numbers-style: (lno) => text(fill:luma(120), size:10pt, emph(lno) + sym.arrow.r)
)[```typ
#show "ArtosFlow": name => box[
#box(image(
"logo.svg",
height: 0.7em,
))
#name
]
This report is embedded in the
ArtosFlow project. ArtosFlow is a
project of the Artos Institute.
```]
````]
CODELST handles whitespace in the code to save space and display the code as intended (and indented). Unnecessary blank lines at the beginning and end will be removed, alongside superfluous indentation:
#example[````
#sourcecode[```java
class HelloWorld {
public static void main( String[] args ) {
System.out.println("Hello World!");
}
}
```]
````]
This behavior can be disabled or modified:
#example[````
#sourcecode(showlines:true, gobble:1, tab-size:4)[```java
class HelloWorld {
public static void main( String[] args ) {
System.out.println("Hello World!");
}
}
```]
````]
To show code from a file, load it with #cmd[read] and pass the result to #cmd[sourcefile] alongside the filename:
#example(raw("#sourcefile(read(\"typst.toml\"), file:\"typst.toml\")"))[
#codelst.sourcefile(read("../typst.toml"), file: "typst.toml")
]
It is useful to define an alias for #cmd-[sourcefile]:
#codesnippet[```typc
let codelst-sourcefile = sourcefile
let sourcefile( filename, ..args ) = codelst-sourcefile(
read(filename), file:filename, ..args
)
```]
#cmd-[sourcefile] takes the same arguments as #cmd-[sourcecode]. For example, to limit the output to a range of lines:
#example[```
#sourcefile(
showrange: (2, 4),
read("typst.toml"),
file:"typst.toml"
)
```][
#codelst.sourcefile(
showrange: (2, 4),
read("../typst.toml"),
file: "typst.toml",
)
]
Specific lines can be highlighted:
#example[```
#sourcefile(
highlighted: (2, 3, 4),
read("typst.toml"),
file:"typst.toml"
)
```][
#codelst.sourcefile(
highlighted: (2, 3, 4, 8, 9),
read("../typst.toml"),
file: "typst.toml",
)
]
To reference a line from other parts of the document, CODELST looks for labels in the source code and makes them available to Typst. The regex to look for labels can be modified to be compatible with different source syntaxes:
#example[```
#sourcefile(
label-regex: regex("\"(codelst.typ)\""),
highlight-labels: true,
highlight-color: lime,
read("typst.toml"),
file:"typst.toml"
)
See #lineref(<codelst.typ>) for the _entrypoint_.
```][
#codelst.sourcefile(
label-regex: regex("\"(codelst.typ)\""),
highlight-labels: true,
highlight-color: lime,
read("../typst.toml"),
file: "typst.toml",
)
See line 4 for the _entrypoint_. (Note how the label was removed from the sourcecode before highlighting.)
]
== Formatting
#cmd-[sourcecode] can be used inside #cmd[figure] and will show the correct supplement. It is recommended to allow page breaks for `raw` figures:
#sourcecode[```typ
#show figure.where(kind: raw): set block(breakable: true)
```]
Instead of the build in styles, custom functions can be used:
#example(```typ
#sourcecode(
numbers-style: (lno) => text(
size: 2em,
fill:rgb(220, 65, 241),
font:("Comic Sans MS"),
str(lno)
),
frame: (code) => block(
width:100%,
inset:(x:10%, y:0pt),
block(fill: green, width:100%, code)
), raw("*some*
_source_
= code", lang:"typc"))
```)
Using other packages like #package[showybox] is easy:
#example[````
#import "@preview/showybox:2.0.1": showybox
#let showycode = sourcecode.with(
frame: (code) => showybox(
frame: (
title-color: red.darken(40%),
body-color: red.lighten(90%),
border-color: black,
thickness: 2pt
),
title: "Source code",
code
)
)
#showycode[```typ
*some*
_source_
= code
```]
````]
This is nice in combination with figures:
#example[````
#import "@preview/showybox:2.0.1": showybox
#show figure.where(kind: raw): (fig) => showybox(
frame: (
title-color: red.darken(40%),
body-color: red.lighten(90%),
border-color: black,
thickness: 2pt
),
title: [#fig.caption.body #h(1fr) #fig.supplement #fig.counter.display()],
fig.body
)
#figure(
sourcecode(frame: none)[```typ
*some*
_source_
= code
```],
caption: "Some code"
)
````]
=== Using CODELST for all raw text <sec-catchall>
#ibox[
  Since Typst 0.9.0, using a #var-[show] rule should be possible, but this is not yet fully implemented in CODELST.
]
Using a #var-[show] rule to set all #cmd-[raw] blocks inside #cmd-[sourcecode] is not possible, since the command internally creates a new #cmd-[raw] block and would cause Typst to crash with an overflow error. Using a custom #arg[lang] can work around this, though:
#example[````
#show raw.where(lang: "clst-typ"): (code) => sourcecode(lang:"typ", code)
```clst-typ
*some*
_source_
= code
```
````][
#show raw.where(lang: "clst-typ"): code => codelst.sourcecode(lang: "typ", code)
```clst-typ
*some*
_source_
= code
```]
CODELST provides two ways to get around this issue, however. One is to set up a custom language that is directly followed by a colon and the true language tag:
#sourcecode[```codelst :typ
*some*
_source_
= code
```]
This is a robust way to send anything to CODELST. But since this might prevent proper syntax highlighting in IDEs, a reversed syntax is possible:
#sourcecode[```typ :codelst
*some*
_source_
= code
```]
This will look at the first line of every `raw` text and if it matches `:codelst`, it will remove the activation tag and send the code to #cmd-[sourcecode].
Setting up one of these catchall methods is easily done by using the #cmd[codelst] function in a #var-[show] rule. Any arguments will be passed on to #cmd-[sourcecode]:
#sourcecode[```typ
#show: codelst( ..sourcecode-args )
// or
#show: codelst( reversed: true, ..sourcecode-args )
```]
== Command overview
#command(
"sourcecode",
..args(
lang: auto,
numbering: "1",
numbers-start: auto,
numbers-side: left,
numbers-width: auto,
numbers-style: "function",
numbers-first: 1,
numbers-step: 1,
// continue-numbering: false,
gutter: 10pt,
tab-indent: 2,
gobble: auto,
highlighted: (),
highlight-color: rgb(234, 234,189),
label-regex: regex("// <([a-z-]{3,})>$"),
highlight-labels: false,
showrange: none,
showlines: false,
frame: "code-frame",
[code]),
)[
#argument("numbering", types: ("string", "function", none), default: "1")[
A #doc("meta/numbering", name:"numbering pattern") to use for line numbers. Set to #value(none) to disable line numbers.
]
#argument("numbers-start", types: (1, auto), default: auto)[
The number of the first code line. If set to #value(auto), the first line will be set to the start of #arg[showrange] or #value(1) otherwise.
]
#argument("numbers-side", default: choices(left, right, default: left), types: "alignment")[
On which side of the code the line numbers should appear.
]
#argument("numbers-width", types: (auto, 1pt), default: auto)[
The width of the line numbers column. Setting this to #value(auto) will measure the maximum size of the line numbers and size the column accordingly. Giving a negative length will move the numbers out of the frame into the margin.
]
#argument("numbers-first", default: 1)[
The first line number to show. Compared to #arg[numbers-start], this will not change the numbers but hide all numbers before the given number.
]
#argument("numbers-step", default: 1)[
The step size for line numbers.
For #arg[numbers-step]: $n$ only every $n$-th line number is shown.
]
#argument("numbers-style", default: "(i) => i", types: "function")[
A function of one argument to format the line numbers. Should return #dtype[content].
]
// #argument("continue-numbering", default:false)[
// If set to #value(true), the line numbers will continue from the last call of #cmd-[sourcecode].
// ]
// #side-by-side[````
// #sourcecode[```
// one
// two
// ```]
// #lorem(10)
// #sourcecode(continue-numbering: true)[```
// three
// four
// ```]
// ````]
#argument("gutter", default: 10pt)[
Gutter between line numbers and code lines.
]
#argument("tab-indent", default: 2)[
Number of spaces to replace tabs at the start of each line with.
]
#argument("gobble", default: auto, types: (auto, "integer", "boolean"))[
How many whitespace characters to remove from each line. By default, the number is automatically determined by finding the maximum number of whitespace all lines have in common. If #arg(gobble: false), no whitespace is removed.
]
#argument("highlighted", default: ())[
Line numbers to highlight.
Note that the numbers will respect #arg[numbers-start]. To highlight the second line with #arg(numbers-start: 15), pass #arg(highlighted: (17,))
]
#argument("highlight-color", default: rgb(234, 234, 189))[
Color for highlighting lines.
]
#argument("label-regex", types: "regular expression")[
A #dtype("regular expression") for matching labels in the source code. The default value will match labels with at least three characters at the end of lines, separated with a line comment (`//`). For example:
```typ
#strong[Some text] // <my-line-label>
```
If this line matches on a line, the full match will be removed from the output and the content of the first capture group will be used as the label's name (`my-line-label` in the example above).
Note that to be valid, the expression needs to have at least one capture group.
To reference a line, #cmd[lineref] should be used.
]
#argument("highlight-labels", default: false)[
If set to #value(true), lines matching #arg[label-regex] will be highlighted.
]
#argument("showrange", default: none, types: (none, "array"))[
If set to an array with exactly two #dtype("integer")s, the code-lines will be sliced to show only the lines within that range.
For example, #arg(showrange: (5, 10)) will only show the lines 5 to 10.
If setting this and #arg(numbers-start: auto), the line numbers will start at the number indicated by the first number in #arg[showrange]. Otherwise, the numbering will start as specified with #arg[numbers-start].
]
#argument("showlines", default: false)[
If set to #value(true), no blank lines will be stripped from the start and end of the code. Otherwise, those lines will be removed from the output.
Line numbering will not be adjusted to the removed lines (other than with #arg[showrange]).
]
#argument("frame", types: "function", default: "code-frame")[
A function of one argument to frame the source code. The default is #cmd[code-frame]. #value(none) disables any frame.
]
]
#command("sourcefile", arg[code], arg(file: none), arg(lang: auto), sarg[args])[
Takes a text string #arg[code] loaded via the #cmd[read] function and passes it to #cmd-[sourcecode] for display. If #arg[file] is given, the code language is guessed by the file's extension. Otherwise, #arg[lang] can be provided explicitly.
Any other #arg[args] will be passed to #cmd-[sourcecode].
#example(raw("#sourcefile(read(\"typst.toml\"), file:\"typst.toml\")"))[
#codelst.sourcefile(read("../typst.toml"), lang: "toml")
]
#ibox[
The idea for #cmd-[sourcefile] was to read the provided filename without the need for the user to call #cmd-[read]. Due to the security measure, that packages can only read files from their own directory, the call to #cmd-[read] needs to happen outside #cmd-[sourcefile] in the document.
For this reason, the command differs from #cmd-[sourcecode] only insofar as it accepts a #dtype("string") instead of `raw` #dtype("content").
Future releases might use the #arg[filename] for other purposes, though.
To deal with this, simply add the following code to the top of your document to define a local alias for #cmd-[sourcefile]:
```typ
#let codelst-sourcefile = sourcefile
#let sourcefile( filename, ..args ) = codelst-sourcefile(read(filename), file:filename, ..args)
```
]
]
#command("lineref", arg[label], arg(supplement: "line"))[
Creates a reference to a code line with a label. #arg[label] is the label to reference.
#example[````
#sourcecode[```java
class HelloWorld {
public static void main( String[] args ) { // <main-method>
System.out.println("Hello World!");
}
}
```]
See #lineref(<main-method>) for a main method in Java.
````][
#codelst.sourcecode[```java
class HelloWorld {
public static void main( String[] args ) { // <main-method>
System.out.println("Hello World!");
}
}
```]
See line 2 for a main method in Java.
]
How to set labels for lines, refer to the documentation of #arg[label-regex] at #cmdref("sourcecode").
]
#command(
"code-frame",
..args(
fill: luma(250),
stroke: 1pt + luma(200),
inset: (x: 5pt, y: 10pt),
radius: 4pt,
[code],
),
)[
Convenience function to create a #cmd-[block] to wrap code inside. The arguments are passed to #doc("layout/block").
The default values create the default gray box around source code.
Should be used with the #arg[frame] argument in #cmd-[sourcecode].
#example[```
#code-frame(lorem(20))
```]
#example[````
#sourcecode(
frame: code-frame.with(
fill: green.lighten(90%),
stroke: green
)
)[```typc
lorem(20)
```]
````]
]
#command(
"codelst",
..args(
tag: "codelst",
reversed: false,
),
sarg[sourcecode-args],
)[
Sets up a default style for raw blocks. Read @sec-catchall for details on how it works.
#sourcecode[```typ
#show: codelst()
```]
]
= Limitations and alternatvies
== Limitations and Issues
To lay out the code and line numbers correctly, CODELST needs to know the available space before calculating the correct sizes. This will lead to problems when changing the layout of the code later on, for example with a #var-[show] rule.
The way line numbers are laid out, the alignment might drift off for large code blocks. Page breaks are a major cause for this. If applicable, it can help to split large blocks of code into smaller chunks, for example by using #arg[showrange].
The insets for line highlights are slightly off.
== Alternatives
There are some alternatives to CODELST that fill similar purposes, but have more or other functionality. If CODELST does not suit your needs, one of those might do the trick.
/ #gitlink("platformer/typst-algorithms"): _Typst module for writing algorithms. Use the algo function for writing pseudocode and the code function for writing code blocks with line numbers._
/ #gitlink("hugo-s29/typst-algo"): _This package helps you typeset [pseudo] algorithms in Typst._
|
https://github.com/astrale-sharp/typstfmt | https://raw.githubusercontent.com/astrale-sharp/typstfmt/main/CHANGELOG.md | markdown | Apache License 2.0 | # Latest
# Release 0.2.7
- String literal preserved in math mode @monaqa
- fix indent problems with raw code (hacky)
- global config: renamed `default-config` to `typstfmt` @Andrew Voynov
- improves max_len checking (first line of node still doesn't respect it)
- Implement Math Block Align @taooceros
# Release 0.2.6
- remove header in stdout unless there is a panic
- Add a flag to print the path of the global config file
- If no config file exists, read a global configuration file
- Optimized show_all.sh and made it POSIX-compliant
- Removed "-config" from config file name
- remove trailing comma logic in math fmt
- compat add --stdout
# Release 0.2.5
- one less indent for trailing blocks
- prints "up to date" if the file wasn't changed
...
# Release 0.2.1#1817538
- adds conditional formatting, nested if else etc
- fix a bug where push_raw_indent was trimming lines
- improve behavior of formatting arguments in a breaking manner
- Some cleanups, nitpicks etc.
# Release 0.2.0
Features:
- Linewrap for content
- On Off feature
- Config Files
- Enum and List formatting
- Codeblock formatting
- Many comments handling fixes
- Args breaking in function calls with trailing comma
- Parenthesized formatting
- Binop formatting |
https://github.com/AHaliq/CategoryTheoryReport | https://raw.githubusercontent.com/AHaliq/CategoryTheoryReport/main/preamble/catt.typ | typst | #import "@preview/xarrow:0.3.0": xarrow
#import "@preview/fletcher:0.5.1" as fletcher: diagram, node, edge
#let Ob(category) = $attach(br: category, upright(bold("Ob")))$
#let Hom(category, s: none, t: none) = (
$attach(br: category, upright(bold("Hom")))#if s != none and t != none { $(#s,#t)$ }$
)
#let arr(f, a, b) = $#f: #a -> #b$
#let mono(f, a, b) = $#f: #a >-> #b$
#let epi(f, a, b) = $#f: #a ->> #b$
#let comp = $compose$
#let iso = $tilde.equiv$
#let op(category) = $attach(tr: "op", category)$
#let slice(category, obj) = $#category slash #obj$
#let coslice(category, obj) = $#category backslash #obj$
#let Set = $upright(bold("Sets"))$
#let Rel = $upright(bold("Rel"))$
#let Mon = $upright(bold("Mon"))$
#let Cone = $upright(bold("Cone"))$
#let Pos = $upright(bold("Posets"))$
#let dom = $upright(bold("dom"))$
#let cod = $upright(bold("cod"))$
#let lim(j) = $upright(attach(limits(bold("lim")), b: attach(limits(<--), b: #j)))$
// diagram macros
#let sstroke = 1pt + silver
#let corner-mark = (
inherit: "straight",
sharpness: 45deg,
stroke: black,
rev: false,
)
#let corner(..a) = edge(..a, stroke: white, marks: (corner-mark,))
|
|
https://github.com/protohaven/printed_materials | https://raw.githubusercontent.com/protohaven/printed_materials/main/common-tools/drill_press_wood.typ | typst |
#import "../environment/env-protohaven-class_handouts.typ": *
= Drill Press (Wood)
== Usage Notes
=== Safety
- Keep hands three inches from the moving drill bit
- Always clamp to table
- Long stock should be braced securely
=== Care
- Do not use excessive force
- Never leave the chuck key in the chuck
- Submit a maintenance request when needed
=== Cleanup
== Parts of the Drill Press
===
Basic Anatomy
Table
Table adjustment handle
– used to adjust the height of the table
– unlock the table support lock before adjusting the height.
Table support lock
– locks the table at the current height and position around the column.
– always lock the table in place before drilling
Chuck
Chuck key
– used to open and close the fingers of the chuck
Quill
Feed handles
– used to move the chuck up and down to perform the drilling process
Pulleys
– used to select the speed at which the spindle turns
– Larger bits should be run slower than smaller bits
– Always unplug the machine before changing the pulley configuration.
CARE CLEANUP
1. Sweep the floor and vacuum debris
2. Recycle waste in the scrap bin
3. Empty scrap bin and dust collection when full
– used to support the work piece. Make sure not to drill into the table surface.
– holds the drill bit. The bit should be centered relative to all three fingers. – do not attempt to drill if the bit does not spin true.
– always unplug the machine before inserting or removing a drill bit.
– contains the rotating spindle and moves up and downBelt guard
– Keeps dust and body parts away from the pulleys. Should be closed when the
machine is plugged in, and especially when running.
Tension lock
– loosen to give slack to the pulleys when changing the speed. Tighten to lock the
pulleys into position
Depth scale
– shows the depth of the hole being drilled
Bevel scale
– shows the angle of the table
Bevel lock
– locks the table to the angle shown on the bevel scale
Power switch
– turns the motor on and off
Drill bit
– performs the actual removal of material
– only sharp bits should be used to prevent binding and heat generation
– will often be hot after drilling a hole. Use caution when removing a recently used drill bit to prevent burns
Drill Press Safety
1. Keep hands at least 3 inches away from the bit when drilling. A bit can fragment, or throw the head sideways.
2. Always clamp work to the table. Your fingers are not stronger than the motor and you can be seriously injured if the bit binds in the hole.
3. Long stock should be braced against the column to prevent it from spinning into the operator.
4. Never leave the chuck key in the chuck. It can be flung from a spinning chuck and cause injury.
5. Do not use excessive force on the feed handle or you risk breaking the bit.
Basic Operation
● When drilling deep, periodically lift the bit to clear the swarf from the hole and flutes
● Center punching the location of a hole will help keep a drill bit from wandering
● When drilling large holes, a pilot hole will help larger chisel-tip bits cut properly
● Placing a sacrificial piece of stock below the workpiece can help prevent tearout
● If drilling thick material, drilling from both sides using a pilot hole as a positioning guide can also be useful |
|
https://github.com/SergeyGorchakov/russian-phd-thesis-template-typst | https://raw.githubusercontent.com/SergeyGorchakov/russian-phd-thesis-template-typst/main/common/glossary.typ | typst | MIT License | #let glossary-entries = {(
(
key: "typst",
short: "Typst",
desc: "Новая система набора текста на основе разметки для науки.",
),
)} |
https://github.com/MattiaOldani/Informatica-Teorica | https://raw.githubusercontent.com/MattiaOldani/Informatica-Teorica/master/capitoli/complessità/23_situazione_finale.typ | typst | #import "../alias.typ": *
= Situazione finale
Dopo tutto ciò che è stato visto in queste dispense, ecco un'illustrazione che mostra qual è la situazione attuale per quanto riguarda la classificazione di problemi.
#v(12pt)
#figure(
image("assets/situazione-finale.svg", width: 100%)
)
#v(12pt)
_NC_ è la classe di problemi risolti da *algoritmi paralleli efficienti*, ovvero algoritmi che hanno tempo parallelo _"o piccolo"_ del tempo sequenziale e un buon numero di processori.
L'unica inclusione propria dimostrata e nota è $P subset.neq exptime$, grazie al problema di decidere se una DTM si arresta entro $n$ passi. In tutti gli altri casi è universalmente accettato (_ma non dimostrato_) che le inclusioni siano proprie.
|
|
https://github.com/denizenging/site | https://raw.githubusercontent.com/denizenging/site/master/page/works/index.en.typ | typst | #import "@local/pub-page:0.0.0": *
#show: template(
title: "Works",
description: "My publications, given speechs, and more!",
menu: (3, "certificate"),
)
You can find my ORCID iD under my profile picture on the left if you're on a comptuer or at the top if you're on mobile. Alternatively, you can visit #link("/page/links")[my links page].
= Papers
To be added...
= Talks
To be added...
= Posters
To be added...
|
|
https://github.com/vEnhance/1802 | https://raw.githubusercontent.com/vEnhance/1802/main/r11.typ | typst | MIT License | #import "@local/evan:1.0.0":*
#show: evan.with(
title: [Notes for 18.02 Recitation 11],
subtitle: [18.02 Recitation MW9],
author: "<NAME>",
date: [9 October 2024],
)
#quote(attribution: [Randall Munroe in XKCD 1073: Weekend])[
We all hate Mondays. We're all working for the weekend.
But our chains exist only in our minds. \
Calendars are just social consensus.
Nature doesn't know the day of the week. \
My friends --- we can make today Saturday. \
We can make it Saturday _forever_.
]
This handout (and any other DLC's I write) are posted at
#url("https://web.evanchen.cc/1802.html").
Hope you're excited for the long weekend! (I know I am.)
= Minimization and maximization
Now that you have learned what $nabla f$, this entire part of the class parallels 18.01,
when you set the derivative equal to zero and then inspected all the critical points.
The same caveats in 18.01 apply in 18.02 as well:
#warning[
- Keep in mind that each of the implications
$ "Global minimum" ==> "Local minimum" ==> "Critical point, i.e. " nabla f = bf(0) $
is true only one way, not conversely.
So a local minimum may not be a global minimum;
and a point with gradient zero might not be a minimum, even locally.
You should still find all the critical points,
just be aware a lot of them may not actually be min's or max's.
- There may not be _any_ global minimum or maximum at all.
For example, the function $f(x) = x^2$ has no global maximum on $RR$
(but has a global minimum at $x=0$).
]
#recipe[
To find global minimum and maximums over a region $cal(R)$,
try all of the following.
1. Set the gradient equal to zero and solve for the *critical points*.
2. Check "edge cases".
It is beyond the scope of 18.02 to give a precise definition of "edge case",
because this is not a proof-based class. Here's a list of things you should check:
- Any points with undefined behavior (e.g. $nabla f$ is undefined).
- Behavior at $oo$, if applicable (meaning that $cal(R)$ is not bounded)
- Boundary of $cal(R)$, if applicable
]
#tip[
The following 18.100 theorem might help you check your work:
Suppose $cal(R)$ is some region in $RR^n$
which is both _closed_ and _bounded_.#footnote[In 18.100,
they use the word "compact" instead, which is better, but out of scope.]
"Closed" means roughly that it's composed of $<=$ constraints rather than $<$ ones
(e.g. closed interval), "bounded" means it fits inside a sphere with finite radius.
Then if $f : cal(R) -> RR^n$ is defined everywhere continuously,
you are *promised at least one global minimum and at least one global maximum exist*.
Useful thing to know for something like Q1 below,
(where $cal(R)$ is a closed square of length $2$);
you know in advance the answer won't be "no global min" or "no global max".
]
= Updates on course logistics
- *Midterm 2* is probably up to Lagrange multipliers, covered this Friday.
- *Midterm 2 review session* is likely to be Monday, October 21, 3pm-5pm, run by just me.
I'll email out once the date/time/room is finalized.
- It will be a mock exam for one hour then solution presentation for an hour.
- I will post the exam beforehand, so if you can only make
4pm-5pm you can try the mock yourself and then come to solutions.
- My *LAMV course notes* are coming along! 🎉
(This is the mega-file titled "Linear Algebra and Multivariable Calculus" on my page.)
- Gradient (up to last recitation) pretty much all written up now.
- If things go really well I may have all MT2 material written up by Wednesday, Oct 16.
- However, my thesis defense is in a couple months, so no promises LOL.
- Over time I may try to backfill MT1 material as well (for future years of 18.02).
- In the future, I'll also update LAMV rather than fill in post-recitation notes.
= Recitation questions from official course
/ 1.: Consider the function $f (x , y) = x y (1 - x - y)$ defined on the
region $- 1 lt.eq x lt.eq 1$, $- 1 lt.eq y lt.eq 1$.
- Find the critical points of $f$ (that is the points where
$arrow(nabla) f = upright(bold(0))$).
- Find the global maximum and global minimum values for $f (x , y)$ on
the region $R$.
/ 2.: Consider the surface $S$ consisting of points in $upright(bold(R))^3$ of
the form $(x , y , 2 \/ sqrt(x y))$, with $x , y > 0$. Find the point on
the surface whose distance from $(0 , 0 , 0)$ is minimized, by consider
the function given by the square of the distance. Given that the surface
is unbounded, why does the minimum exist?
/ 3.: Consider the two curves
$ C_1 : y = 1 \/ x , 0.1 lt.eq x lt.eq 100 ; C_2 : y = - 2 - x , - 102 lt.eq x lt.eq 98 . $
We want to find the minimal distance between $C_1$ and $C_2$. By
parametrizing both curves in terms of $x$, argue that finding the
minimal distance is equivalent to finding the minimal value of the
function $ f (x_1 , x_2) = (x_1 - x_2)^2 + (1 / x_1 + 2 + x_2)^2 $ on
the rectangle $R = [0.1 , 100] times [- 102 , 98]$.
Find the global minimum of $f (x , y)$ and determine the minimal
distance between the two curves.
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/array-30.typ | typst | Other | // Error: 2-22 cannot join boolean with boolean
#(true, false).join()
|
https://github.com/smorad/um_cisc_7026 | https://raw.githubusercontent.com/smorad/um_cisc_7026/main/conv_renders.typ | typst | #import "@preview/cetz:0.2.2"
#import "@preview/cetz:0.2.2": canvas, draw, plot
#let stonks = {
set text(size: 25pt)
canvas(length: 1cm, {
plot.plot(size: (8, 6),
x-tick-step: 2,
y-tick-step: 20,
y-min: 0,
y-max: 100,
x-label: $ t $,
y-label: $ x(t) $,
{
plot.add(
domain: (0, 5),
label: [Stock Price (MOP)],
style: (stroke: (thickness: 5pt, paint: red)),
t => 100 * (0.3 * calc.sin(0.2 * t) +
0.1 * calc.sin(1.5 * t) +
0.05 * calc.sin(3.5 * t) +
0.02 * calc.sin(7.0 * t))
)
})
})}
#let waveform = {
set text(size: 25pt)
canvas(length: 1cm, {
plot.plot(size: (8, 6),
x-tick-step: 0.5,
y-tick-step: 500,
y-min: -1000,
y-max: 1000,
x-label: $ t $,
y-label: $ x(t) $,
{
plot.add(
domain: (0, 1),
label: [dBm],
style: (stroke: (thickness: 5pt, paint: red)),
t => 500 * (
calc.sin(calc.pi * 1320 * t)
)
)
})
})}
#let waveform_left = {
set text(size: 25pt)
canvas(length: 1cm, {
plot.plot(size: (8, 6),
x-tick-step: 0.5,
y-tick-step: 500,
x-min: 0,
x-max: 1,
y-min: -1000,
y-max: 1000,
x-label: "Time (Seconds)",
y-label: "dBm",
{
plot.add(
domain: (0, 0.33),
style: (stroke: (thickness: 5pt, paint: red)),
t => 500 * (
calc.sin(2 * calc.pi * 9.1 * t)
)
)
plot.add(
domain: (.33, 1.0),
style: (stroke: (thickness: 5pt, paint: red)),
t => t
)
})
})}
#let hello = {
set text(size: 25pt)
canvas(length: 1cm, {
plot.plot(size: (8, 6),
x-tick-step: 0.5,
y-tick-step: 1,
x-min: 0,
x-max: 1,
y-min: 0,
y-max: 1,
x-label: "Time (Seconds)",
y-label: "Hello",
{
plot.add(
domain: (0, 0.33),
style: (stroke: (thickness: 5pt, paint: red)),
t => 1
)
plot.add(
domain: (.33, 1.0),
style: (stroke: (thickness: 5pt, paint: red)),
t => 0
)
plot.add-vline(
0.33,
style: (stroke: (thickness: 5pt, paint: red)),
)
})
})}
#let waveform_right = {
set text(size: 25pt)
canvas(length: 1cm, {
plot.plot(size: (8, 6),
x-tick-step: 0.5,
y-tick-step: 500,
x-min: 0,
x-max: 1,
y-min: -1000,
y-max: 1000,
x-label: "Time (Seconds)",
y-label: "Frequency (Hz)",
{
plot.add(
domain: (0.66, 1.0),
style: (stroke: (thickness: 5pt, paint: red)),
t => 500 * (
calc.sin(2 * calc.pi * 9.1 * t)
)
)
plot.add(
domain: (0, 0.66),
style: (stroke: (thickness: 5pt, paint: red)),
t => t
)
})
})}
#let implot = {
set text(size: 25pt)
canvas(length: 1cm, {
plot.plot(size: (7, 7),
x-tick-step: 64,
y-tick-step: 64,
y-min: 0,
y-max: 256,
x-min: 0,
x-max: 256,
x-label: [$u$ (Pixels)],
y-label: [$v$ (Pixels)],
{
plot.add(
label: $ x(u, v) $,
domain: (0, 1),
style: (stroke: (thickness: 0pt, paint: red)),
t => 1000 * (
calc.sin(calc.pi * 1320 * t)
)
)
plot.annotate({
import cetz.draw: *
content((128, 128), image("figures/lecture_7/ghost_dog_bw.svg", width: 7cm))
})
})
})}
#let implot_color = {
set text(size: 25pt)
canvas(length: 1cm, {
plot.plot(size: (7, 7),
x-tick-step: 64,
y-tick-step: 64,
y-min: 0,
y-max: 256,
x-min: 0,
x-max: 256,
x-label: [$u$ (Pixels)],
y-label: [$v$ (Pixels)],
{
plot.add(
label: $ x(u, v) $,
domain: (0, 1),
style: (stroke: (thickness: 0pt, paint: red)),
t => 1000 * (
calc.sin(calc.pi * 1320 * t)
)
)
plot.annotate({
import cetz.draw: *
content((128, 128), image("figures/lecture_1/dog.png", width: 7cm))
})
})
})}
#let implot_left = {
set text(size: 25pt)
canvas(length: 1cm, {
plot.plot(size: (7, 7),
x-tick-step: 64,
y-tick-step: 64,
y-min: 0,
y-max: 256,
x-min: 0,
x-max: 256,
x-label: [$u$ (Pixels)],
y-label: [$v$ (Pixels)],
{
plot.add(
domain: (0, 1),
style: (stroke: (thickness: 0pt, paint: red)),
t => 1000 * (
calc.sin(calc.pi * 1320 * t)
)
)
plot.annotate({
import cetz.draw: *
rect((0, 0), (256, 256), fill: gray)
})
plot.annotate({
import cetz.draw: *
content((64, 64), image("figures/lecture_7/ghost_dog_bw.svg", width: 3cm))
})
})
})}
#let implot_right = {
set text(size: 25pt)
canvas(length: 1cm, {
plot.plot(size: (7, 7),
x-tick-step: 64,
y-tick-step: 64,
y-min: 0,
y-max: 256,
x-min: 0,
x-max: 256,
x-label: [$u$ (Pixels)],
y-label: [$v$ (Pixels)],
{
plot.add(
domain: (0, 1),
style: (stroke: (thickness: 0pt, paint: red)),
t => 1000 * (
calc.sin(calc.pi * 1320 * t)
)
)
plot.annotate({
import cetz.draw: *
rect((0, 0), (256, 256), fill: gray)
})
plot.annotate({
import cetz.draw: *
content((192, 192), image("figures/lecture_7/ghost_dog_bw.svg", width: 3cm))
})
})
})}
#let conv_signal_plot = {
set text(size: 25pt)
canvas(length: 1cm, {
plot.plot(size: (20, 2.7),
y-tick-step: none,
x-tick-step: none,
y-min: 0,
y-max: 1,
x-min: 0,
x-max: 1,
x-label: "t",
y-label: "",
{
plot.add(
domain: (0, 0.4),
label: $ x(t) $,
style: (stroke: (thickness: 5pt, paint: red)),
t => 0.05 + 0.05 * (
calc.sin(calc.pi * 50 * t)
)
)
plot.add(
domain: (0.4, 0.6),
style: (stroke: (thickness: 5pt, paint: red)),
//t => (t - 0.4) / 0.3 + 0.05 * (
// calc.sin(calc.pi * 50 * t)
//)
t => 0.05 * calc.sin(calc.pi * 50 * t) + 0.8 * 1 / (1 + calc.exp(-30 * (t - 0.5)))
)
plot.add(
domain: (0.6, 1.0),
style: (stroke: (thickness: 5pt, paint: red)),
t => 0.75 + 0.05 * (
calc.sin(calc.pi * 50 * t)
)
)
})
})}
#let conv_filter_plot = {
set text(size: 25pt)
canvas(length: 1cm, {
plot.plot(size: (20, 2.75),
y-tick-step: none,
x-tick-step: none,
y-min: 0,
y-max: 1,
x-min: 0,
x-max: 1,
x-label: "t",
y-label: "",
{
plot.add(
domain: (0, 0.2),
label: $ g(t) $,
style: (stroke: (thickness: 5pt, paint: blue)),
t => calc.exp(-calc.pow((t - 0.1), 2) / 0.002)
)
})
})}
#let conv_result_plot = {
set text(size: 25pt)
canvas(length: 1cm, {
plot.plot(size: (20, 2.75),
y-tick-step: none,
x-tick-step: none,
y-min: 0,
y-max: 1,
x-min: 0,
x-max: 1,
x-label: "t",
y-label: "",
{
plot.add(
domain: (0, 0.4),
label: $ x(t) * g(t) $,
style: (stroke: (thickness: 5pt, paint: purple)),
t => 0.05
)
plot.add(
domain: (0.4, 0.6),
style: (stroke: (thickness: 5pt, paint: purple)),
t => 0.01 + 0.8 * 1 / (1 + calc.exp(-30 * (t - 0.5)))
)
plot.add(
domain: (0.6, 1.0),
style: (stroke: (thickness: 5pt, paint: purple)),
t => 0.77
)
})
})} |
|
https://github.com/LDemetrios/Typst4k | https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/math/call.typ | typst | // Test math function call edge cases.
// Note: 2d argument calls are tested for matrices in `mat.typ`
--- math-call-non-func ---
$ pi(a) $
$ pi(a,) $
$ pi(a,b) $
$ pi(a,b,) $
--- math-call-repr ---
#let args(..body) = body
#let check(it, r) = test-repr(it.body.text, r)
#check($args(a)$, "([a])")
#check($args(a,)$, "([a])")
#check($args(a,b)$, "([a], [b])")
#check($args(a,b,)$, "([a], [b])")
#check($args(,a,b,,,)$, "([], [a], [b], [], [])")
--- math-call-2d-non-func ---
// Error: 6-7 expected content, found array
// Error: 8-9 expected content, found array
$ pi(a;b) $
--- math-call-2d-semicolon-priority ---
// If the semicolon directly follows a hash expression, it terminates that
// instead of indicating 2d arguments.
$ mat(#"math" ; "wins") $
$ mat(#"code"; "wins") $
--- math-call-2d-repr ---
#let args(..body) = body
#let check(it, r) = test-repr(it.body.text, r)
#check($args(a;b)$, "(([a],), ([b],))")
#check($args(a,b;c)$, "(([a], [b]), ([c],))")
#check($args(a,b;c,d;e,f)$, "(([a], [b]), ([c], [d]), ([e], [f]))")
--- math-call-2d-repr-structure ---
#let args(..body) = body
#let check(it, r) = test-repr(it.body.text, r)
#check($args( a; b; )$, "(([a],), ([b],))")
#check($args(a; ; c)$, "(([a],), ([],), ([c],))")
#check($args(a b,/**/; b)$, "((sequence([a], [ ], [b]), []), ([b],))")
#check($args(a/**/b, ; b)$, "((sequence([a], [b]), []), ([b],))")
#check($args( ;/**/a/**/b/**/; )$, "(([],), (sequence([a], [b]),))")
#check($args( ; , ; )$, "(([],), ([], []))")
#check($args(/**/; // funky whitespace/trivia
, /**/ ;/**/)$, "(([],), ([], []))")
--- math-call-empty-args-non-func ---
// Trailing commas and empty args introduce blank content in math
$ sin(,x,y,,,) $
// with whitespace/trivia:
$ sin( ,/**/x/**/, , /**/y, ,/**/, ) $
--- math-call-empty-args-repr ---
#let args(..body) = body
#let check(it, r) = test-repr(it.body.text, r)
#check($args(,x,,y,,)$, "([], [x], [], [y], [])")
// with whitespace/trivia:
#check($args( ,/**/x/**/, , /**/y, ,/**/, )$, "([], [x], [], [y], [], [])")
--- math-call-value-non-func ---
$ sin(1) $
// Error: 8-9 expected content, found integer
$ sin(#1) $
--- math-call-pass-to-box ---
// When passing to a function, we lose the italic styling if we wrap the content
// in a non-math function unless it's already nested in some math element (lr,
// attach, etc.)
//
// This is not good, so this test should fail and be updated once it is fixed.
#let id(body) = body
#let bx(body) = box(body, stroke: blue+0.5pt, inset: (x:2pt, y:3pt))
#let eq(body) = math.equation(body)
$
x y &&quad x (y z) &quad x y^z \
id(x y) &&quad id(x (y z)) &quad id(x y^z) \
eq(x y) &&quad eq(x (y z)) &quad eq(x y^z) \
bx(x y) &&quad bx(x (y z)) &quad bx(x y^z) \
$
--- math-call-unknown-var-hint ---
// Error: 4-6 unknown variable: ab
// Hint: 4-6 if you meant to display multiple letters as is, try adding spaces between each letter: `a b`
// Hint: 4-6 or if you meant to display this as text, try placing it in quotes: `"ab"`
$ 5ab $
--- issue-3774-math-call-empty-2d-args ---
$ mat(;,) $
// Add some whitespace/trivia:
$ mat(; ,) $
$ mat(;/**/,) $
$ mat(;
,) $
$ mat(;// line comment
,) $
$ mat(
1, , ;
,1, ;
, ,1;
) $
--- issue-2885-math-var-only-in-global ---
// Error: 7-10 unknown variable: rgb
// Hint: 7-10 `rgb` is not available directly in math, try adding a hash before it: `#rgb`
$text(rgb(0, 0, 0), "foo")$
|
|
https://github.com/Otto-AA/dashy-todo | https://raw.githubusercontent.com/Otto-AA/dashy-todo/main/lib/side-margin.typ | typst | MIT No Attribution | #let rel-to-abs = (rel, size) => rel.length + rel.ratio * size
// must run within a context
#let calc-side-margin(side) = {
// https://typst.app/docs/reference/layout/page/#parameters-margin
let auto-margin = calc.min(page.width, page.height) * 2.5 / 21
if page.margin == auto {
auto-margin
} else if type(page.margin) == relative {
rel-to-abs(page.margin, page.width)
} else {
if side == left and page.margin.left == auto or side == right and page.margin.right == auto {
auto-margin
} else {
if side == left {
rel-to-abs(page.margin.left, page.width)
} else {
rel-to-abs(page.margin.right, page.width)
}
}
}
}
// must run within a context
#let calculate-page-margin-box(side) = {
assert(side in (left, right))
let margin = calc-side-margin(side)
if side == left {
(
"x": 0pt,
"y": 0pt,
"width": margin,
"height": page.height,
)
} else {
(
"x": page.width - margin,
"y": 0pt,
"width": margin,
"height": page.height,
)
}
} |
https://github.com/deadManAlive/ui-thesis-typst-template | https://raw.githubusercontent.com/deadManAlive/ui-thesis-typst-template/master/primer/publ.typ | typst | #import "../config.typ": cfg
#let publ = [
= Halaman Pernyataan Persetujuan Publikasi Tugas Akhir Untuk Kepentingan Akademis
Sebagai sivitas akademika Universitas Indonesia, saya yang bertanda tangan di bawah ini:
#table(
columns: 3,
stroke: none,
[Nama], [:], [#cfg.name],
[NPM], [:], [#cfg.npm],
[Program Studi], [:], [#cfg.program],
[Fakultas],[:],[#cfg.faculty],
[Jenis Karya], [:], [Skripsi],
)
demi mengembangkan ilmu pengetahuan, menyetujui untuk memberikan kepada Universitas Indonesia *Hak Bebas Royalti Noneksklusif (_Non-Exclusive Royalty-Free Right_)* atas karya ilmiah saya yang berjudul:
#v(1em)
#[
#set align(center)
#strong(cfg.title)
]
#v(1em)
beserta perangkat yang ada (jika diperlukan). Dengan Hak Bebas Royalti Noneksklusif ini, Universitas Indonesia berhak menyimpan, mengalihmedia/format-kan, mengelola dalam bentuk pangkalan data (_database_), merawat, dan memublikasikan tugas akhir saya selama tetap mencatumkan nama saya sebagai penulis/pencipta dan sebagai pemilik Hak Cipta.
Demikian pernyataan ini saya buat dengan sebenarnya.
#[
#set align(center)
#table(
columns: 3,
stroke: none,
align: left,
[Dibuat di], [:], [#cfg.location],
[Pada tanggal], [:], [#cfg.time],
)
Yang menyatakan,
#v(3em)
(#cfg.name)
]
] |
|
https://github.com/gongke6642/tuling | https://raw.githubusercontent.com/gongke6642/tuling/main/Math/mat.typ | typst | #set text(
size:10pt,
)
#set page(
paper:"a5",
margin:(x:1.8cm,y:1.5cm),
)
#set par(
justify: true,
leading: 0.52em,
)
= 矩阵
矩阵。
同一行的元素用逗号分隔,各行之间用分号分隔。 分号语法将前面使用逗号分隔的参数合并为一个数组。 您还可以使用“数学函数调”用的特殊语法,定义接受二维数据的自定义函数。
同一行的元素可以使用对齐符 & 进行对齐。
= 例
#image("34.png")
= 参数
#image("35.png")
= 分隔符
要使用的分隔符。
#image("36.png")
违约:"("
= 增加
在矩阵中绘制增强线。
- none: 没有画线。
- 单个数字:在指定的列号之后绘制一条垂直增强线。负数则从末尾开始。
- 字典:使用字典,可以在水平和垂直方向上绘制多个增强线。此外,还可以设置线的样式。字典可以包含以下键:
- hline: 应绘制水平线的偏移量。例如,偏移量为 2 将导致在矩阵的第二行之后绘制一条水平线。接受单行的整数或多行的整数数组。
- vline: 应绘制垂直线的偏移量。例如,偏移量为 2 将导致在矩阵的第二列之后绘制一条垂直线。接受单行的整数或多行的整数数组。
- stroke: 如何 绘制 线条。如果设置为 auto, 会默认使用 0.05em 厚度与方形线帽。
默认:none
= 间距
行和列之间的间距。
默认:0pt
= 行间距
行与行之间的间距。优先于 .gap
默认:0.5em
= 列间距
列之间的间距。优先于 .gap
默认:0.5em
= 数组
包含矩阵行的数组数组。
|
|
https://github.com/Rhinemann/mage-hack | https://raw.githubusercontent.com/Rhinemann/mage-hack/main/src/Mage%20Conversion.typ | typst | #import "templates/cover.typ": front_cover, temp_cover, back_cover
#import "templates/interior_template.typ": *
#set document(title: "Mage: the Ascension Cortex", author: "Rhinemann");
#front_cover
#include "chapters/Credits.typ"
#include "chapters/Outline.typ"
#include "chapters/Intro.typ"
#include "chapters/Game Rules.typ"
#include "chapters/Storyteller Characters.typ"
// Character Traits
#include "chapters/Distinctions.typ"
#include "chapters/Attributes.typ"
#include "chapters/Skills.typ"
#include "chapters/True Magick.typ"
#include "chapters/Assets.typ"
#include "chapters/Quintessence.typ"
#include "chapters/Paradox.typ"
#include "chapters/Talents.typ"
#include "chapters/Consequences.typ"
#include "chapters/Character Creation.typ"
#include "chapters/Character Advancement.typ"
#back_cover |
|
https://github.com/Toniolo-Marco/git-for-dummies | https://raw.githubusercontent.com/Toniolo-Marco/git-for-dummies/main/slides/components/thmbox.typ | typst | #import "@preview/ctheorems:1.1.2": *
// simil tail box with custom title
#let custom-box = thmbox(
"id", // identifier - same as that of theorem
"Title", // head
inset: (x: 1.2em, top: 1em, bottom: 1em),
fill: rgb("#87c1c8"),
).with(numbering:none)
// quite red box with custom title
#let alert-box = thmbox(
"id", // identifier - same as that of theorem
"title",
titlefmt: title => text(fill: rgb("#4b0414"), weight: "bold")[#title],
inset: (x: 1.2em, top: 1em, bottom: 1em),
fill: rgb("#c88d86"),
).with(numbering:none) |
|
https://github.com/akagiyuu/math-document | https://raw.githubusercontent.com/akagiyuu/math-document/main/integral/\frac{1}{1+sin(x)^4}.typ | typst | $
integral_0^(pi/2) 1/(1 + sin(x)^4) dif x
&= integral_0^(pi/2) (1/cos(x)^4)/(1/cos(x)^4 + tan(x)^4) dif x \
&arrow.long^(t=tan(x)) integral_0^(+infinity) (t^2 + 1)/((t^2 + 1)^2+ t^4) dif t \
&= integral_0^(+infinity) (t^2 + 1)/(2t^4 + 2t^2 + 1) dif t \
&= integral_0^(+infinity) (1 + t^(-2))/(2t^2 + 2 + t^(-2)) dif t \
&= (sqrt(2) - 2)/4 integral_0^(+infinity) (sqrt(2) - t^(-2))/((sqrt(2)t + t^(-1))^2 + 2 - 2sqrt(2)) dif t + (sqrt(2) + 2)/4 integral_0^(+infinity) (sqrt(2) + t^(-2))/((sqrt(2)t - t^(-1))^2 + 2 + 2sqrt(2)) dif t \
&arrow.long^(u=sqrt(2)t + t^(-1)) (sqrt(2) - 2)/4 integral_(+infinity)^(+infinity) 1/(u^2 + 2 - 2sqrt(2)) dif u + (sqrt(2) + 2)/4 integral_0^(+infinity) (sqrt(2) + t^(-2))/((sqrt(2)t - t^(-1))^2 + 2 + 2sqrt(2)) dif t \
&= (sqrt(2) + 2)/4 integral_0^(+infinity) (sqrt(2) + t^(-2))/((sqrt(2)t - t^(-1))^2 + 2 + 2sqrt(2)) dif t \
&arrow.long^(u=sqrt(2)t - t^(-1)) (sqrt(2) + 2)/4 integral_(-infinity)^(+infinity) 1/(u^2 + 2 + 2sqrt(2)) dif u \
&= (sqrt(2) + 2)/4 1/sqrt(2 + 2sqrt(2)) pi \
&= (pi sqrt(1+sqrt(2)))/4
$
|
|
https://github.com/Enter-tainer/typstyle | https://raw.githubusercontent.com/Enter-tainer/typstyle/master/tests/assets/unit/math/equation-flavor.typ | typst | Apache License 2.0 | $
F(x) = integral_0^x f(t) dif t
$
$ F(x) = integral_0^x f(t) dif t
$
|
https://github.com/Mc-Zen/zero | https://raw.githubusercontent.com/Mc-Zen/zero/main/README.md | markdown | MIT License |
# $Z\cdot e^{ro}$
_Advanced scientific number formatting._
[](https://typst.app/universe/package/zero)
[](https://github.com/Mc-Zen/zero/actions/workflows/run_tests.yml)
[](https://github.com/Mc-Zen/zero/blob/main/LICENSE)
- [Introduction](#introduction)
- [Quick Demo](#quick-demo)
- [Documentation](#documentation)
- [Table alignment](#table-alignment)
- [Zero for third-party packages](#zero-for-third-party-packages)
## Introduction
Proper number formatting requires some love for detail to guarantee a readable and clear output. This package provides tools to ensure consistent formatting and to simplify the process of following established publication practices. Key features are
- **standardized** formatting,
- digit [**grouping**](#grouping), e.g., $`299\,792\,458`$ instead of $299792458$,
- **plug-and-play** number [**alignment in tables**](#table-alignment),
- quick scientific notation, e.g., `"2e4"` becomes $2\times10^4$,
- symmetric and asymmetric [**uncertainties**](#specifying-uncertainties),
- [**rounding**](#rounding) in various modes,
- and some specials for package authors.
<!-- - and localization? -->
A number in scientific notation consists of three parts of which the latter two are optional. The first part is the _mantissa_ that may consist of an _integer_ and a _fractional_ part. In many fields of science, values are not known exactly and the corresponding _uncertainty_ is then given along with the mantissa. Lastly, to facilitate reading very large or small numbers, the mantissa may be multiplied with a _power_ of 10 (or another base).
The anatomy of a formatted number is shown in the following figure.
<p align="center">
<picture>
<source media="(prefers-color-scheme: light)" srcset="docs/figures/anatomy.svg">
<source media="(prefers-color-scheme: dark)" srcset="docs/figures/anatomy-dark.svg">
<img alt="Anatomy of a formatted number" src="docs/figures/anatomy.svg">
</picture>
</p>
<!-- For generating formatted numbers, *Zero* provides the `num` type along with the types `coefficient`, `uncertainty`, and `power` that allow for fine-grained customization with `show` and `set` rules. -->
## Quick Demo
| Code | Output | Code | Output |
|------|--------|------|--------|
| `num("1.2e4")` | $1.2\times 10^4$ | `num[1.2e4]` | $1.2\times 10^4$ |
| `num("-5e-4")` | $-5\times 10^{-4}$ | `num(fixed: -2)[0.02]` | $2\times 10^{-2}$ |
| `num("9.81+-.01")` | $9.81\pm 0.01$ | `num("9.81+0.02-.01")` | $9.81^{+0.02}_{-0.01}$ |
| `num("9.81+-.01e2")` | $(9.81\pm0.01)\times 10^2$| `num(base: 2)[3e4]` | $3\times 2^4$ |
## Documentation
- [Function `num`](#num)
- [Grouping](#grouping)
- [Rounding](#rounding)
- [Uncertainties](#specifying-uncertainties)
- [Table alignment](#table-alignment)
### `num`
The function `num()` is the heart of *Zero*. It provides a wide range of number formatting utilities and its default values are configurable via `set-num()` which takes the same named arguments as `num()`.
```typ
#let num(
number: str | content | int | float | dictionary | array,
digits: auto | int = auto,
fixed: none | int = none,
decimal-separator: str = ".",
product: content = sym.times,
tight: boolean = false,
math: boolean = true,
omit-unity-mantissa: boolean = true,
positive-sign: boolean = false,
positive-sign-exponent: boolean = false,
base: int | content = 10,
uncertainty-mode: str = "separate",
round: dictionary,
group: dictionary,
)
```
- `number: str | content | int | float | array` : Number input; `str` is preferred. If the input is `content`, it may only contain text nodes. Numeric types `int` and `float` are supported but not encouraged because of information loss (e.g., the number of trailing "0" digits or the exponent). The remaining types `dictionary` and `array` are intended for advanced use, see [below](#zero-for-third-party-packages).
- `digits: auto | int = auto` : Truncates the number at a given (positive) number of decimal places or pads the number with zeros if necessary. This is independent of [rounding](#rounding).
- `fixed: none | int = none` : If not `none`, forces a fixed exponent. Additional exponents given in the number input are taken into account.
- `decimal-separator: str = "."` : Specifies the marker that is used for separating integer and decimal part.
- `product: content = sym.times` : Specifies the multiplication symbol used for scientific notation.
- `tight: boolean = false` : If true, tight spacing is applied between operands (applies to $\times$ and $\pm$).
- `math: boolean = true` : If set to `false`, the parts of the number won't be wrapped in a `math.equation` wherever feasible. This makes it possible to use `num()` with non-math fonts to some extent. Powers are always rendered in math mode.
- `omit-unity-mantissa: boolean = false` : Determines whether a mantissa of 1 is omitted in scientific notation, e.g., $10^4$ instead of $1\cdot 10^4$.
- `positive-sign: boolean = false` : If set to `true`, positive coefficients are shown with a $+$ sign.
- `positive-sign-exponent: boolean = false` : If set to `true`, positive exponents are shown with a $+$ sign.
- `base: int | content = 10` : The base used for scientific power notation.
- `uncertainty-mode: str = "separate"` : Selects one of the modes `"separate"`, `"compact"`, or `"compact-separator"` for displaying uncertainties. The different behaviors are shown below:
| `"separate"` | `"compact"` | `"compact-separator"` |
|---|---|---|
| $1.7\pm0.2$ | $1.7(2)$ | $1.7(2)$ |
| $6.2\pm2.1$ | $6.2(21)$ | $6.2(2.1)$ |
| $1.7^{+0.2}_{-0.5}$ | $1.7^{+2}_{-5}$ | $1.7^{+2}_{-5}$ |
| $1.7^{+2.0}_{-5.0}$ | $1.7^{+20}_{-50}$ | $1.7^{+2.0}_{-5.0}$ |
- `round: dictionary` : You can provide one or more rounding options in a dictionary. Also see [rounding](#rounding).
- `group: dictionary` : You can provide one or more grouping options in a dictionary. Also see [grouping](#grouping).
Configuration example:
```typ
#set-num(product: math.dot, tight: true)
```
### Grouping
Digit grouping is important for keeping large figures readable. It is customary to separate thousands with a thin space, a period, comma, or an apostrophe (however, we discourage using a period or a comma to avoid confusion since both are used for decimal separators in various countries).
<p align="center">
<picture>
<source media="(prefers-color-scheme: light)" srcset="docs/figures/grouping.svg">
<source media="(prefers-color-scheme: dark)" srcset="docs/figures/grouping-dark.svg">
<img alt="Digit grouping" src="docs/figures/grouping.svg">
</picture>
</p>
Digit grouping can be configured with the `set-group()` function.
```typ
#let set-group(
size: int = 3,
separator: content = sym.space.thin,
threshold: int = 5
)
```
- `size: int = 3` : Determines the size of the groups.
- `separator: content = sym.space.thin` : Separator between groups.
- `threshold: int = 5` : Necessary number of digits needed for digit grouping to kick in. Four-digit numbers for example are usually not grouped at all since they can still be read easily.
Configuration example:
```typ
#set-group(separator: "'", threshold: 4)
```
Grouping can be turned off altogether by setting the `threshold` to `calc.inf`.
### Rounding
Rounding can be configured with the `set-round()` function.
```typ
#let set-round(
mode: none | str = none,
precision: int = 2,
pad: boolean = true,
direction: str = "nearest",
)
```
- `mode: none | str = none` : Sets the rounding mode. The possible options are
- `none` : Rounding is turned off.
- `"places"` : The number is rounded to the number of decimal places given by the `precision` parameter.
- `"figures"` : The number is rounded to a number of significant figures given by the `precision` parameter.
- `"uncertainty"` : Requires giving an uncertainty value. The uncertainty is
rounded to significant figures according to the `precision` argument and
then the number is rounded to the same number of decimal places as the
uncertainty.
- `precision: int = 2` : The precision to round to. Also see parameter `mode`.
- `pad: boolean = true` : Whether to pad the number with zeros if the
number has fewer digits than the rounding precision.
- `direction: str = "nearest"` : Sets the rounding direction.
- `"nearest"`: Rounding takes place in the usual fashion, rounding to the nearer
number, e.g., 2.34 → 2.3 and 2.36 → 2.4.
- `"down"`: Always rounds down, e.g., 2.38 → 2.3 and 2.30 → 2.3.
- `"up"`: Always rounds up, e.g., 2.32 → 2.4 and 2.30 → 2.3.
### Specifying uncertainties
There are two ways of specifying uncertainties:
- Applying an uncertainty to the least significant digits using parentheses, e.g., `2.3(4)`,
- Denoting an absolute uncertainty, e.g., `2.3+-0.4` becomes $2.3\pm0.4$.
Zero supports both and can convert between these two, so that you can pick the displayed style (configured via `uncertainty-mode`, see above) independently of the input style.
How do uncertainties interplay with exponents? The uncertainty needs to come first, and the exponent applies to both the mantissa and the uncertainty, e.g., `num("1.23+-.04e2")` becomes
$$ (1.23\pm0.04)\times 10^2. $$
Note that the mantissa is now put in parentheses to disambiguate the application of the power.
In some cases, the uncertainty is asymmetric which can be expressed via `num("1.23+0.02-0.01")`
$$ 1.23^{+0.02}_{-0.01}. $$
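
Because the display style is chosen independently of the input style, a sketch like the following (illustrative) renders a `+-` input in the compact notation:

```typ
#set-num(uncertainty-mode: "compact")
#num("2.3+-0.4") // expected to render as 2.3(4)
```
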
### Table alignment
In scientific publication, presenting many numbers in a readable fashion can be a difficult discipline. A good starting point is to align numbers in a table at the decimal separator. With _Zero_, this can be accomplished by using `ztable`, a wrapper for the built-in `table` function. It features an additional parameter `format` which takes an array of `none`, `auto`, or `dictionary` values to turn on number alignment for specific columns.
```typ
#ztable(
columns: 3,
align: center,
format: (none, auto, auto),
$n$, $α$, $β$,
[1], [3.45], [-11.1],
..
)
```
Non-number entries (e.g., in the header) are automatically recognized in some cases and will not be aligned. In ambiguous cases, adding a leading or trailing space tells _Zero_ not to apply alignment to this cell, e.g., `[Angle ]` instead of `[Angle]`.
<p align="center">
<picture>
<source media="(prefers-color-scheme: light)" srcset="docs/figures/table1.svg">
<source media="(prefers-color-scheme: dark)" srcset="docs/figures/table1-dark.svg">
<img alt="Number alignment in tables" src="docs/figures/table1.svg">
</picture>
</p>
Zero not only aligns numbers at the decimal point but also at the uncertainty and exponent part. Moreover, by passing a `dictionary` instead of `auto`, a set of `num()` arguments to apply to all numbers in a column can be specified.
```typ
#ztable(
columns: 4,
align: center,
format: (none, auto, auto, (digits: 1)),
$n$, $α$, $β$, $γ$,
[1], [3.45e2], [-11.1+-3], [0],
..
)
```
<p align="center">
<picture>
<source media="(prefers-color-scheme: light)" srcset="docs/figures/table2.svg">
<source media="(prefers-color-scheme: dark)" srcset="docs/figures/table2-dark.svg">
<img alt="Advanced number alignment in tables" src="docs/figures/table2.svg">
</picture>
</p>
## Zero for third-party packages
This package provides some useful extras for third-party packages that generate formatted numbers (for example graphics libraries).
Instead of passing a `str` to `num()`, it is also possible to pass a dictionary of the form
```typ
(
mantissa: str | int | float,
e: none | str,
pm: none | array
)
```
This way, parsing the number can be avoided which makes especially sense for packages that generate numbers (e.g., tick labels for a diagram axis) with independent mantissa and exponent.
Furthermore, `num()` also allows `array` arguments for `number` which allows for more efficient batch-processing of numbers with the same setup. In this case, the caller of the function needs to provide `context`.
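
For instance, a package that generates tick labels and already has the mantissa and exponent split apart could skip parsing entirely (a rough sketch; the field values are purely illustrative):

```typ
#num((mantissa: 2.5, e: "4", pm: none)) // expected to render as 2.5 × 10^4
```
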
## Changelog
### Version 0.2.0
- Added support for using non-math fonts for `num` via the option `math`. This can be activated by calling `#set-num(math: false)`.
- Performance improvements for both `num()` and `ztable()`
### Version 0.1.0
|
https://github.com/ivaquero/lang-romanic | https://raw.githubusercontent.com/ivaquero/lang-romanic/main/fr-4-time+verb.typ | typst | #import "@local/scibook:0.1.0": *
#show: doc => conf(
title: "时间与动词",
author: ("github@ivaquero"),
footer-cap: "github@ivaquero",
header-cap: "音速法语",
outline-on: false,
doc,
)
= 时间
== 一天
#let data = csv("fr/fr-date-day.csv")
#figure(
ktable(data, 2),
caption: "",
supplement: [表],
kind: table,
)
== 一周
#let data = csv("fr/fr-date-week.csv")
#figure(
ktable(data, 2),
caption: "",
supplement: [表],
kind: table,
)
== 月份
#let data = csv("fr/fr-date-month.csv")
#figure(
ktable(data, 2),
caption: "",
supplement: [表],
kind: table,
)
== 时间段
#let data = csv("fr/fr-date-time.csv")
#figure(
ktable(data, 2),
caption: "",
supplement: [表],
kind: table,
)
|
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/math/delimited_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test automatic matching.
#set page(width:122pt)
$ (a) + {b/2} + abs(a)/2 + (b) $
$f(x/2) < zeta(c^2 + abs(a + b/2))$
|
https://github.com/FkHiroki/ex-E5 | https://raw.githubusercontent.com/FkHiroki/ex-E5/main/sections/preparation.typ | typst | MIT No Attribution | // タイトル
#align(center, text(18pt, "E5 モンテカルロシミュレーション 予習課題")) \
#align(right, text(12pt, "62217149 福原博樹"))
= 1. result.txt の内容
\
#import "@preview/codelst:2.0.0": sourcecode
#sourcecode(
frame: block.with(
stroke: 1pt + gray,
inset: (x: 10pt, y: 5pt),
radius: 5pt,
fill: luma(96%)
)
)[
#raw(read("../figs/result.txt"))
]
= 2. 出力画像
#figure(
image("../figs/distrb-ave.png", width: 90%),
caption: [課題(1) 平均のグラフ],
) <fig:distrb-ave>
#figure(
image("../figs/distrb-var.png", width: 90%),
caption: [課題(1) 分散のグラフ],
) <fig:distrb-var>
#figure(
image("../figs/buffon.png", width: 90%),
caption: [課題(2) $pi$ への収束の様子],
) <fig:buffon> |