repo | file | language | license | content
---|---|---|---|---|
https://github.com/RyushiAok/iml-resume-typst | https://raw.githubusercontent.com/RyushiAok/iml-resume-typst/main/resume/main.typ | typst | #import "template.typ": *
#show: iml-resume.with(
title: [知的機構研究室 #linebreak() レジュメテンプレート(Typst版)],
bibliography-file: "refs.bib",
authors: ((
name: "姓 名",
department: [],
organization: [],
location: [],
email: "",
),),
)
= はじめに
むかしむかし、とある国のある城に王さまが住んでいました。王さまはぴっかぴかの新しい服が大好きで、服を買うことばかりにお金を使っていました。王さまののぞむことといったら、いつもきれいな服を着て、みんなにいいなぁと言われることでした。戦いなんてきらいだし、おしばいだって面白くありません。だって、服を着られればそれでいいんですから。新しい服だったらなおさらです。一時間ごとに服を着がえて、みんなに見せびらかすのでした。ふつう、めしつかいに王さまはどこにいるのですか、と聞くと、「王さまは会議室にいらっしゃいます。」と言うものですが、ここの王さまはちがいます。「王さまは衣装部屋にいらっしゃいます。」と言うのです。
城のまわりには町が広がっていました。とても大きな町で、いつも活気に満ちていました。世界中のあちこちから知らない人が毎日、おおぜいやって来ます。
ある日、二人のさぎ師が町にやって来ました。二人は人々に、自分は布織職人だとウソをつきました。それも世界でいちばんの布が作れると言いはり、人々に信じこませてしまいました。
「とてもきれいな色合いともようをしているのだけれど、この布はとくべつなのです。」とさぎ師は言います。「自分にふさわしくない仕事をしている人と、バカな人にはとうめいで見えない布なのです。」
その話を聞いた人々はたいそうおどろきました。たいへんなうわさになって、たちまちこのめずらしい布の話は王さまの耳にも入りました。
「そんな布があるのか。わくわくするわい。」と、服が大好きな王さまは思いました。「もしわしがその布でできた服を着れば、けらいの中からやく立たずの人間や、バカな人間が見つけられるだろう。それで服が見えるかしこいものばかり集めれば、この国ももっとにぎやかになるにちがいない。さっそくこの布で服を作らせよう。」@THE_EMPERORS_NEW_SUIT
= Methods
== Paper overview
#lorem(45)
Pretraining on the Test Set Is All You Need @schaeffer2023pretraining
=== Detail
#lorem(80)
$ a + b = gamma $
#lorem(130) |
|
https://github.com/yhtq/Notes | https://raw.githubusercontent.com/yhtq/Notes/main/代数学二/作业/hw12.typ | typst | #import "../../template.typ": *
#import "@preview/commute:0.2.0": node, arr, commutative-diagram
#show: note.with(
title: "作业12",
author: "YHTQ",
date: none,
logo: none,
withOutlined : false,
withTitle :false,
withHeadingNumbering: false
)
= ex 1.5
== 1
#let CC = $cone(C)$
注意到:
$
d^CC_n vec(c_(n-1), c_n) = vec(-d_(n-1) (c_(n-1)), d_n (c_n) - c_(n-1))
$
由于它确实是复形,验证正合只需任取 $vec(c_(n-1), c_n) in ker d^CC_n$,验证它在 $im d^CC_(n+1)$ 中即可。事实上:
$
vec(-d_(n-1) (c_(n-1)), d_n (c_n) - c_(n-1)) = 0 <=> c_(n-1) in ker d_(n-1) sect im d_n, c_(n-1) = d_n (c_(n))\
d^CC_(n+1) vec(-c_(n), 0) = vec(d_(n) (c_(n)), c_(n)) = vec(c_(n-1), c_(n) )
$
上两式表明:
$
forall vec(c_(n-1), c_n) in ker d^CC_n, vec(c_(n-1), c_(n) ) = d^CC_(n+1) vec(-c_(n), 0) in im d^CC_(n+1)
$
足以给出正合性,同时表明题设的 $s$ 满足:
$
d s|_(ker d) = id \
d s|_(ker d) d = d \
d s|_(im d) d = d \
d s d = d \
$
表明确实是分裂映射
== 2
零调的定义是存在 $s$ 满足
$
f = d_D s + s d_C
$<def-null>
而题设映射是复形间态射等价于:
$
(-s, f) mat(-d_C, 0;-id, d_C) vec(c', c) = d_D (-s, f) vec(c', c)\
(s d_C - f, f d_C) vec(c', c) = d_D (-s, f) vec(c', c)\
$
表明:
$
cases(
s d_C - f = -d_D s,\
f d_C = d_D f
)
$
其中 $f d_C = d_D f$ 来源于复形同态的定义,当然有上式与@def-null 等价
== 3
$f tilde g$ 的定义是存在 $s$ 使得:
$
f - g = d_D s + s d_C
$<def-hom>
而题设映射是复形间同态等价于:
$
(f, s, g)mat(d_C, id, 0;0, -d_C, 0;0, -id, d_C) = d_D (f, s, g)
$
也即:
$
cases(
f d_C = d_D f,
g d_C = d_D g,
f - s d_C - g = d_D s
)
$
其中前两者是复形同态的定义,当然与@def-hom 等价
= ex 2.1
== 2
只证明同调 $delta-$函子情形,余调是类似的\
任给 $delta-$函子 $S$ 和自然变换 $f_0 :S_0 -> T_0$,既然 $T_n = 0$,当然 $f_n : S_n -> T_n$ 在 $n >= 1$ 时取并且只能取零映射,这就给出了泛性质
= ex 2.2
== 1
- 首先,假设 $P$ 是 $C$ 中投射对象。显然它的每一项都是投射对象,由前面的命题只需证明存在 $s: P -> P$ 使得 $(-s, id): cone(P) -> P$ 是复形间的同态。
不难发现有交换图:
#align(center)[#commutative-diagram(
node((0, 0), $...$, 1),
node((0, 1), $P_(n-1) directSum P_n$, 2),
node((0, 2), $P_(n-2) directSum P_(n-1)$, 3),
node((1, 0), $...$, 4),
node((1, 1), $P_(n-1)$, 5),
node((1, 2), $P_(n-2)$, 6),
arr(1, 2, $$),
arr(2, 3, $mat(-d_C, 0;-id, d_C)$),
arr(4, 5, $$),
arr(5, 6, $-d_(n-1)$),
arr(1, 4, $$),
arr(2, 5, $id directSum 0$),
arr(3, 6, $id directSum 0$),)]
换言之,$cone(P) -> P$ 间存在满同态,由投射对象的性质,将有交换图:
$
#align(center)[#commutative-diagram(
node((0, 0), $cone(P)$, 1),
node((0, 1), $P$, 2),
node((1, 0), $P$, 3),
arr(1, 2, $id directSum 0$),
arr(3, 2, $id$),
arr(3, 1, $exists s'$),)]
$
设 $s' = vec(s'_(1), s'_(2))$,交换图给出:
$
s'_(1) = id: P_(n-1) -> P_(n-1)\
$
是复形间同态给出:
$
mat(-d, 0;-id, d) vec(s'_(1), s'_2) = vec(s'_(1), s'_2)( -d) \
-id + d s'_2 = -s'_2 d\
d s'_2 + s'_2 d = id
$
由熟知的定理,这给出 $P$ 确实是分裂的正合列
- 反之,假设 $P$ 是分裂的投射对象构成的正合列,任给 $X ->^f Y$ 是满射和 $P ->^g Y$,至少存在逐项的同态 $h: P_n -> X_n$ 使得:
$
f h = g
$
注意到:
$
id = d s + s d\
$
令 $h' = d h s + h s d$,将有:
$
f h' = f d h s + f h s d = d f h s + f h s d = d g s + g s d = g(d s + s d) = g\
d h' = d h s d = h' d
$
表明 $h'$ 就是要找的复形间同态
== 2
设有复形 $X_i$,并且 $X_i$ 分别成为投射对象 $P_i$ 的满射像。设 $pi: P_i -> X_i$ 是满射
断言有交换图表:
#align(center)[#commutative-diagram(
node((0, 0), $X_i$, 1),
node((0, 1), $X_(i-1)$, 2),
node((1, 0), $P_i directSum P_(i+1)$, 3),
node((1, 1), $P_(i-1) directSum P_i$, 4),
arr(1, 2, $d$),
arr(3, 1, $(pi, d pi)$),
arr(4, 2, $(pi, d pi)$),
arr(3, 4, $mat(0, 0;1, 0)$),)]
不难验证下面一行是投射模构成的链复形,且:
$
d(pi, d pi) = (pi, d pi) mat(0, 0;1, 0)
$
因此交换图表成立,这是链复形之间的满态射。此外,取:
$
s = mat(0, 1;0, 0): P_(i-1) directSum P_i -> P_i directSum P_(i+1)
$
不难验证将有:
$
d s d = d
$
因此下面一行构成分裂的正合列,上面结论表明它是投射对象,证毕
= ex 2.3
== 1
#let xl = $overline(x)$
#let yl = $overline(y)$
#let ul = $overline(u)$
#let vl = $overline(v)$
#let dl = $overline(d)$
#let pl = $overline(p)$
由 Baer 法则,只需验证任给 $R = ZZ quo m ZZ$ 的理想 $J$ 以及 $f: J -> R$ 都可以延拓到 $R -> R$\
事实上,注意到 $R$ 是主理想环,可设 $J = (xl)$ 以及 $f(xl) = yl$(此时 $f$ 被唯一确定)\
由于 $f$ 是加法循环群之间的同态,将有:
$
yl^(ord(xl)) = 0 => ord(yl) | ord(xl)
$
根据循环群的结论,$R$ 中阶为 $ord(xl)$ 子群 $(xl)$ 恰有一个,而 $(yl)$ 只能是其子群,进而可设:
$
yl = ul xl
$
因此:
$
f' := [t: R -> ul t]
$
就是 $f$ 的一个延拓,证毕
对于第二个结论,假设 $d|m, p | m, m /d$,则 $ ZZ quo d ZZ$ 是 $R$ 的一个理想,也是加法子群,进而可设为唯一的 $d$ 阶子群 $(xl)$,同时设 $m/d$ 阶子群为 $(yl)$,$p$ 阶子群为 $(pl)$,则有:
$
(pl) subset (xl), (yl)
$
考虑群同态(也是模同态):
$
funcDef(f, (yl), (xl), yl, pl)
$
假设它是 $f': R -> (xl)$ 的一个限制,并设:
$
f'(1) = ul xl
$
则 $f'(yl) = ul xl yl = pl$\
然而注意到:
$
d = ord(xl) = m/(gcd(x, m))\
m/d = ord(yl) = m/(gcd(y, m))\
gcd(x, m) gcd(y, m) = m \
m/(gcd(x, m)) = gcd(y, m) | y => m | y gcd(x, m) = gcd(x y, m) => gcd(x y, m) = m
$
意味着 $xl yl = 0 => pl = 0$,矛盾!
== 2
设 $a in A, e_A (a) = 0$,由定义,这将意味着:
$
forall f in Hom(A, QQ quo ZZ), f(a) = 0
$
然而考虑映射:
$
funcDef(h, (a), QQ quo ZZ, a, 1/2)
$
只要 $a != 0$ 上式就是群同态\
由内射对象的性质,有交换图:
#align(center)[#commutative-diagram(
node((0, 0), $(a)$, 1),
node((0, 1), $A$, 2),
node((1, 0), $QQ quo ZZ$, 3),
arr(1, 2, $id$, inj_str),
arr(2, 3, $exists h'$),
arr(1, 3, $h$),)]
显然 $h' in Hom(A, QQ quo ZZ), h'(a) != 0$,与假设矛盾。这表明只有 $a = 0$ 时才有 $e_A (a) = 0$,也即 $e_A$ 是单射
== 3
$A = 0$ 时显有 $Hom(A, QQ quo ZZ) = 0$;反之,上面给出了单射 $A -> Hom(Hom(A, QQ quo ZZ), QQ quo ZZ)$,若 $Hom(A, QQ quo ZZ) = 0$ 则后者为零,从而 $A = 0$ |
|
https://github.com/HiiGHoVuTi/requin | https://raw.githubusercontent.com/HiiGHoVuTi/requin/main/template.typ | typst |
#import "@preview/jogs:0.2.3": *
#let code = ```
function toRepublicanCalendar(year, month, day) {
const months = [
"Vendémiaire",
"Brumaire",
"Frimaire",
"Nivôse",
"Pluviôse",
"Ventôse",
"Germinal",
"Floreal",
"Prairial",
"Messidor",
"Thermidor",
"Fructidor",
"Sansculottides",
];
function isLeapYear(year) {
return (
year % 4 === 0 &&
(year % 100 !== 0 || year % 400 === 0) &&
year % 4000 !== 0
);
}
function toRepublicanCalendar(year, month, day) {
let startYear = year;
if (month < 9 || (month === 9 && day < 22)) {
startYear--;
}
const startRepublican = new Date(startYear, 8, 22);
const currentDate = new Date(year, month - 1, day);
let dayOfYear = Math.floor(
(currentDate - startRepublican) / (24 * 60 * 60 * 1000)
);
const daysInYear = isLeapYear(startYear) ? 366 : 365;
if (dayOfYear >= daysInYear - 5) {
const sansculottidesDay = dayOfYear - (daysInYear - 6);
return {
day: sansculottidesDay,
month: "Sansculottides",
year: startYear - 1791,
};
}
if (isLeapYear(startYear)) {
if (dayOfYear >= 365) {
return {
day: dayOfYear - 364,
month: "Sansculottides",
year: startYear - 1791,
};
}
dayOfYear++;
} else {
if (dayOfYear >= 364) {
return {
day: dayOfYear - 363,
month: "Sansculottides",
year: startYear - 1791,
};
}
}
const republicanMonth = Math.floor(dayOfYear / 30);
const republicanDay = (dayOfYear % 30) + 1;
return {
day: republicanDay,
month: months[republicanMonth],
year: startYear - 1791,
};
}
return toRepublicanCalendar(year, month, day);
}
function format(y, m, d){
const {year, month, day} = toRepublicanCalendar(y, m, d);
return `version du ${day} ${month}, an ${year}`;
}
function romanise (num) {
if (isNaN(num))
return NaN;
var digits = String(+num).split(""),
key = ["","<KEY>",
"","X","XX","XXX","XL","L","LX","LXX","LXXX","XC",
"","I","II","III","IV","V","VI","VII","VIII","IX"],
roman = "",
i = 3;
while (i--)
roman = (key[+digits.pop() + (i * 10)] || "") + roman;
return Array(+digits.join("") + 1).join("M") + roman;
}
```
#let bytecode = compile-js(code)
#let pset(class: "6.100",
title: "PSET 0",
author: "Juliette",
date: datetime.today(),
subproblems: "I.1.a.i",
collaborators: (),
doc
) = {[
#let dateRev = call-js-function(bytecode, "format", date.year(), date.month(), date.day())
/* Convert collaborators to a string if necessary */
#let collaborators=if type(collaborators) == array {collaborators.join(", ")} else {collaborators}
/* Problem + subproblem headings */
#set heading(numbering: (..nums) => {
nums = nums.pos()
let roman = call-js-function(bytecode, "romanise", nums.at(0))
if nums.len() == 1 {
[ Chapitre #roman.
]
} else if nums.len() == 2 {
[Problème #roman.#nums.at(1):]
}
else {
numbering(subproblems, ..(..nums).slice(2))
}
})
/* Set metadata */
#set document(
title: [#class - #title],
author: author,
date: date,
)
/* Set up page numbering and continued page headers */
#set page(
numbering: "1",
header: context {
let elems = query(selector(heading.where(level: 1)).before(here()),)
let chapter = if elems.len() == 0 { [Sommaire] } else {elems.last().body }
if counter(page).get().first() > 1 [
#set text(style: "italic", size: 12pt)
#title -- #chapter
#h(1fr)
#author
// #h(1fr)
// #if collaborators != none {[w/ #collaborators]}
#block(line(length: 100%, stroke: 0.5pt), above: 0.6em)
]
},
footer: context {
set text(size: 12pt)
block(line(length: 100%, stroke: 0.5pt), above: 0.6em)
"Page "; counter(page).display("1/1", both: true)
h(1fr)
set text(style: "italic")
dateRev
})
/* Add numbering and some color to code blocks */
#show raw.where(block: true): it => {
block[
#h(1fr)
#box(
width: 100%-0.5em,
radius: 0.3em,
stroke: luma(50%),
inset: 1em,
fill: luma(98%)
)[
#show raw.line: l => {
box(width:measure([#it.lines.last().count]).width, align(right, text(fill: luma(50%))[#l.number]))
h(0.5em)
l.body
}
#it
]
]
}
/* Make the title */
#align(center, {
text(size: 1.6em, weight: "bold")[#title \ ]
text(size: 1.2em, weight: "semibold")[#author \ ]
emph[
// #date.display("[year]-[month]-[day]")
#dateRev
#if collaborators != none {[
\ Collaborators: #collaborators
]}
]
box(line(length: 100%, stroke: 1pt))
})
#doc
]}
|
|
https://github.com/0xPARC/0xparc-intro-book | https://raw.githubusercontent.com/0xPARC/0xparc-intro-book/main/src/plonk.typ | typst | #import "preamble.typ":*
#let rbox(s) = [#text(red)[#ellipse(stroke: red, inset: 2pt, s)]]
#let bbox(s) = [#text(blue)[#rect(stroke: blue, inset: 4pt, s)]]
= PLONK, a zkSNARK protocol <plonk>
== Arithmetization <arith-intro>
The promise of programmable cryptography is that we should be able to
perform proofs for arbitrary functions.
That means we need a "programming language" that we'll write our function in.
For PLONK, the choice that's used is:
*systems of quadratic equations over $FF_q$*. In other words, PLONK is going to
give us the ability to prove that we have solutions to a system of quadratic
equations.
#situation[
Suppose we have a system of $m$ equations in $k$ variables $x_1, dots, x_k$:
$
Q_1 (x_1 , dots, x_k) & = 0 \
dots.v \
Q_m (x_1 , dots, x_k) & = 0.
$
Of these $k$ variables,
the first $ell$ variables $x_1, dots, x_ell$ have publicly known, fixed values;
the remaining $k - ell$ are unknown.
PLONK will let Peggy prove to Victor the following claim:
I know $k - ell$ values $x_(ell+1), dots, x_k$ such that
(when you combine them with the $ell$ public fixed values
$x_1, dots, x_ell$)
the $k$ values $x_1, dots, x_k$ satisfy all $m$ quadratic equations.
]
This leads to the natural question of how a function like SHA-256 can be encoded
into a system of quadratic equations. This process of encoding a problem
into algebra is called _arithmetization_. It turns out that solving systems of quadratic equations
over $FF_q$, viewed as an NP problem called Quad-SAT, is _NP-complete_;
in other words, any NP problem can be rewritten as a system of quadratic equations.
If you are not familiar with this concept, the upshot is that Quad-SAT
being NP-complete means it can serve as a reasonable arithmetization that can
express most reasonable (NP) problems.
#remark([Example of Quad-SAT encoding 3-SAT])[
We assume familiarity with 3-SAT and the fact that it is NP-complete.
The following example instance illustrates how to convert
any instance of 3-SAT into a Quad-SAT problem:
$
x_i^2 &= x_i #h(1em) forall 1 <= i <= 1000 & \
y_1 &= (1-x_(42)) dot x_(17), & #h(1em) & 0 = y_1 dot x_(53) & \
y_2 &= (1-x_(19)) dot (1-x_(52)) & #h(1em) & 0 = y_2 dot (1-x_(75)) & \
y_3 &= x_(25) dot x_(64), &#h(1em) & 0 = y_3 dot x_(81) & \
&dots.v
$
(imagine many more such pairs of equations).
The $x_i$'s are variables which are constrained to be either $0$ or $1$ by the first family of equations.
And then each pair of equations with $y_i$ corresponds to a clause of 3-SAT.
] <np-complete>
So for example, any NP decision problem should be encodable.
Still, such a theoretical reduction might not be usable in practice:
polynomial factors might not matter in complexity theory,
but they do matter a lot to engineers and end users.
But it turns out that Quad-SAT is actually reasonably code-able.
This is the goal of projects like
#cite("https://docs.circom.io/", "Circom"),
which gives a high-level language that compiles a function like SHA-256
into a system of equations over $FF_q$ that can be used in practice.
Systems like this are called _arithmetic circuits_,
and Circom is appropriately short for "circuit compiler".
If you're curious, you can see how SHA-256 is implemented in Circom on
#cite("https://github.com/iden3/circomlib/blob/master/circuits/sha256/sha256.circom",
"GitHub").
So, the first step in proving a claim like
"I have a message $M$ such that
$sha(M) = "0xa91af3ac..."$"
is to translate the claim into a system of quadratic equations.
This process is called "arithmetization."
One approach (suggested by @np-complete)
is to represent each bit involved in the calculation
by a variable $x_i$
(which would then be constrained to be either 0 or 1
by an equation $x_i^2 = x_i$).
In this setup, the value $"0xa91af3ac"$
would be represented by 32 public bits $x_1, dots, x_32$;
the unknown message $M$ would be represented by
some private variables;
and the calculation of $op("sha")$
would introduce a series of constraints,
maybe involving some additional variables.
We won't get into any more details of arithmetization here.
== An instance of PLONK
PLONK is going to prove solutions to
systems of quadratic equations of a very particular form:
#definition[
An instance of PLONK consists of two pieces,
the _gate constraints_ and the _copy constraints_.
The _gate constraints_ are a system of $n$ equations,
$ q_(L,i) a_i + q_(R,i) b_i + q_(O,i) c_i + q_(M,i) a_i b_i + q_(C,i) = 0 $
for $i = 1, ..., n$,
in the $3n$ variables $a_i$, $b_i$, $c_i$,
while the $q_(*,i)$ are coefficients in $FF_q$, which are globally known.
The confusing choice of subscripts stands for "Left", "Right", "Output",
"Multiplication", and "Constant", respectively.
The _copy constraints_ are a bunch of assertions that some of the
$3n$ variables should be equal to each other,
so e.g. "$a_1 = c_7$", "$b_17 = b_42$", and so on.
]
#remark("From Quad-SAT to PLONK")[
PLONK might look less general than Quad-SAT,
but it turns out you can convert any Quad-SAT problem to PLONK.
First off, note that if we set
$ ( q_(L,i), q_(R,i), q_(O,i), q_(M,i), q_(C,i)) = ( 1, 1, -1, 0, 0 ), $
we get an "addition" gate
$a_i + b_i = c_i,$
while if we set
$ ( q_(L,i), q_(R,i), q_(O,i), q_(M,i), q_(C,i)) = ( 0, 0, -1, 1, 0 ), $
we get a "multiplication" gate
$a_i b_i = c_i.$
Finally, if $kappa$ is any constant, then
$ ( q_(L,i), q_(R,i), q_(O,i), q_(M,i), q_(C,i)) = ( 1, 0, 0, 0, -kappa ), $
gives the constraint
$a_i = kappa.$
Now imagine we want to encode some quadratic equation
like
$y = x^2 + 2$
in PLONK. We'll break this down into three gates:
$ x dot x & = (x^2) text(" (multiplication)") \
t & = 2 text(" (constant)") \
(x^2) + t & = y text(" (addition)"). $
We'll assign the variables $a_i, b_i, c_i$ for these three gates
by looking at the equations:
$ (a_1, b_1, c_1) & = (x, x, x^2) \
(a_2, b_2, c_2) & = (t = 2, 0, 0) \
(a_3, b_3, c_3) & = (x^2, t = 2, y). $
And finally, we'll assign copy constraints
to make sure the variables are faithfully copied
from line to line:
$ a_1 & = b_1 \
c_1 & = a_3 \
a_2 & = b_3. $
If the variables $a_i, b_i, c_i$ satisfy the gate and copy constraints,
then $x = a_1$ and $y = c_3$ are forced to satisfy
the original equation $y = x^2 + 2$.
]
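As a quick sanity check of the construction above, here is a small sketch (Python, with ordinary integers standing in for $FF_q$; not part of the original text) that evaluates the three gates and the copy constraints for a concrete $x$:

```python
# Gate equation: qL*a + qR*b + qO*c + qM*a*b + qC = 0
def gate(qL, qR, qO, qM, qC, a, b, c):
    return qL * a + qR * b + qO * c + qM * a * b + qC

x = 5
t = 2
y = x * x + 2

# (selectors, (a_i, b_i, c_i)) for the three gates of y = x^2 + 2
rows = [
    ((0, 0, -1, 1, 0),  (x,     x, x * x)),   # multiplication: a*b = c
    ((1, 0,  0, 0, -t), (t,     0, 0)),       # constant: a = 2
    ((1, 1, -1, 0, 0),  (x * x, t, y)),       # addition: a + b = c
]
assert all(gate(*q, *w) == 0 for q, w in rows)    # gate constraints
(a1, b1, c1), (a2, _, _), (a3, b3, c3) = (w for _, w in rows)
assert a1 == b1 and c1 == a3 and a2 == b3         # copy constraints
```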
Back to PLONK: Our protocol needs to do the following:
Peggy and Victor have a PLONK instance given to them.
Peggy has a solution to the system of equations,
i.e. an assignment of values to each $a_i$, $b_i$, $c_i$ such that
all the gate constraints and all the copy constraints are satisfied.
Peggy wants to prove this to Victor succinctly
and without revealing the solution itself.
The protocol then proceeds by having:
1. Peggy sends a polynomial commitment corresponding to $a_i$, $b_i$, and $c_i$
(the details of what polynomial are described below).
2. Peggy proves to Victor that the commitment from Step 1
satisfies the gate constraints.
3. Peggy proves to Victor that the commitment from Step 1
also satisfies the copy constraints.
Let's now explain how each step works.
== Step 1: The commitment
In PLONK, we'll assume that $q equiv 1 mod n$, which means that
we can fix $omega in FF_q$ to be a primitive $n$-th root of unity.
Then, by polynomial interpolation, Peggy chooses polynomials $A(X)$, $B(X)$,
and $C(X)$ in $FF_q [X]$ such that
#eqn[
$ A(omega^i) = a_i, B(omega^i) = b_i, C(omega^i) = c_i
" for all " i = 1, 2, ..., n. $
<plonk-setup>
]
We specifically choose $omega^i$ because that way,
if we use @root-check on the set ${omega, omega^2, ..., omega^n}$,
then the polynomial called $Z$ is just
$Z(X) = (X-omega) ... (X-omega^n) = X^n-1$, which is really nice.
In fact, often $n$ is chosen to be a power of $2$ so that $A$, $B$, and $C$
are very easy to compute, using a fast Fourier transform.
(Note: When you're working in a finite field, the fast Fourier transform
is sometimes called the "number theoretic transform" (NTT)
even though it's exactly the same as the usual FFT.)
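As a quick numerical illustration of why this choice of evaluation points is so convenient (a sketch with a toy field, not taken from the original text), one can check that the product $(X - omega)(X - omega^2) dots.c (X - omega^n)$ really is $X^n - 1$:

```python
# Toy check: q = 17, n = 4, so q ≡ 1 (mod n); ω = 4 is a primitive 4th root of unity.
q, n, w = 17, 4, 4
assert pow(w, n, q) == 1 and all(pow(w, k, q) != 1 for k in range(1, n))

# Two degree-n polynomials agreeing on q > n points are equal, so check every x in F_q.
for x in range(q):
    prod = 1
    for i in range(1, n + 1):
        prod = prod * (x - pow(w, i, q)) % q
    assert prod == (pow(x, n, q) - 1) % q
```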
Then:
#algorithm("Commitment step of PLONK")[
1. Peggy interpolates $A$, $B$, $C$ as in @plonk-setup.
2. Peggy sends $Com(A)$, $Com(B)$, $Com(C)$ to Victor.
]
To reiterate, each commitment is a
single value -- a 256-bit elliptic curve point --
that can later be "opened" at any value $x in FF_q$.
== Step 2: Gate-check
Both Peggy and Victor know the PLONK instance,
so they can interpolate a polynomial
$Q_L (X) in FF_q [X]$ of degree $n-1$ such that
$ Q_L (omega^i) = q_(L,i) #h(1em) " for " i = 1, ..., n. $
The analogous polynomials $Q_R$, $Q_O$, $Q_M$, $Q_C$
are defined in the same way.
Now, what do the gate constraints amount to?
Peggy is trying to convince Victor that the equation
#eqn[
$ Q_L (x) A (x) + Q_R (x) B (x) + Q_O (x) C (x) & \
#hide[0] + Q_M (x) A (x) B (x) + Q_C (x) & = 0 $
<plonk-gate>
]
// stopgap: #hide[0] equivalent to latex phantom for plus sign spacing
is true for the $n$ numbers $x = omega, omega^2, ..., omega^n$.
However, Peggy has committed $A$, $B$, $C$ already,
while all the $Q_*$ polynomials are globally known.
So this is a direct application of @root-check:
#algorithm[Gate-check][
1. Both parties interpolate five polynomials $Q_* in FF_q [X]$
from the $5n$ coefficients $q_*$
(globally known from the PLONK instance).
2. Peggy uses @root-check to convince Victor that @plonk-gate
holds for $X = omega^i$
(that is, the left-hand side is indeed divisible by $Z(X) := X^n-1$).
]
/*
However, that's equivalent to the _polynomial_
$ Q_L (X) A_i (X) + Q_R (X) B_i (X) + Q_O (X) C_i (X)
+ Q_M (X) A_i (X) B_i (X) + Q_C (X) in FF_q [X] $
being divisible by the degree $n$ polynomial
$ Z(X) = (X-omega)(X-omega^2) ... (X-omega^n) = X^n - 1. $
In other words, it suffices for Peggy to convince Victor that there
is a polynomial $H(X) in FF_q [X]$ such that
#eqn[
$ Q_L (X) A_i (X) &+ Q_R (X) B_i (X) + Q_O (X) C_i (X) \
&+ Q_M (X) A_i (X) B_i (X) + Q_C (X) = Z(X) H(X). $
<plonkpoly>
]
And this can be done using polynomial commitments pretty easily:
Peggy should send $Com(H)$,
and then Victor just verifies @plonkpoly at random values in $FF_q$.
As both sides are polynomials of degree up to $3(n-1)$,
either the equation holds for every input
or there are at most $3n-4$ values for which it's true
(two different polynomials of degree $3(n-1)$ can agree at up to $3n-4$ points).
#algorithm("Proving PLONK satisfies the gate constraints")[
1. Peggy computes $H(X) in FF_q [X]$ using polynomial long division
and sends $Com(H)$ to Victor.
2. Victor picks a random challenge and asks Peggy to open
all of $Com(A)$, $Com(B)$, $Com(C)$, $Com(H)$ at that challenge.
3. Victor accepts if and only if @plonkpoly is true at the random challenge.
]
*/
== Step 3: Proving the copy constraints <copy-constraint-deferred>
The copy constraints are the trickiest step.
There are a few moving parts to this idea, so we skip it for now and dedicate
@copy-constraints to it.
== Step 4: Public and private witnesses
The last thing to be done is to reveal the value of public witnesses,
so the prover can convince the verifier that those values are correct.
This is simply an application of @root-check.
Let's say the public witnesses are the values $a_i$, for all $i$ in some set $S$.
(If some of the $b$'s and $c$'s are also public, we'll just do the same thing for them.)
The prover can interpolate another polynomial, $A^"public"$,
such that $A^"public" (omega^i) = a_i$ if $i in S$, and $A^"public" (omega^i) = 0$ if $i in.not S$.
Actually, both the prover and the verifier can compute $A^"public"$, since
all the values $a_i$ are publicly known!
Now the prover runs @root-check to prove that $A - A^"public"$ vanishes on $S$.
(And similarly for $B$ and $C$, if needed.)
And we're done.
|
|
https://github.com/yhtq/Notes | https://raw.githubusercontent.com/yhtq/Notes/main/软件分析/hw/2100012990-郭子荀-软分第五次作业.typ | typst | #import "../../template.typ": proof, note, corollary, lemma, theorem, definition, example, remark
#import "../../template.typ": *
#show: note.with(
title: "作业5",
author: "YHTQ",
date: datetime.today().display(),
logo: none,
withOutlined : false,
withTitle : false,
withHeadingNumbering: false
)
= #empty
#let Inv(f, ori, res) = $"Inv"(#f\;#ori\;#res)$
#let bT = $bold("True")$
#let bF = $bold("False")$
#let T = $"True"$
#let F = $"False"$
用 $[a, b]$ 表示一个整数区间,布尔值的抽象域为 ${bot, bT, bF, top}$ ,(真实值记为 $#T, #F$),$Inv(f, "ori", "res")$ 表示运算 $f$ 在参数初始状态为 ori ,运算结果为 res 时的反向语义。$\_$ 表示任意抽象值,规则按照从上到下匹配。
== 逻辑与
- $Inv(and, (\_, \_), bot) = (bot, bot)$
- $Inv(and, (\_, \_), bT) = (bT, bT)$
// - $Inv(and, (bT, x), bT) = (bT, x)$
// - $Inv(and, (x, bT), bT) = (x, bT)$
- $Inv(and, (bT, \_), bF) = (bT, bF)$
- $Inv(and, (\_, bT), bF) = (bF, bT)$
- $Inv(and, (bot, bot), bF "or" top) = (top, top)$
- $Inv(and, (bot, x), bF "or" top) = (top, x)$
- $Inv(and, (x, bot), bF "or" top) = (x, top)$
- $Inv(and, (x, y), bF "or" top) = (x, y)$
== 逻辑非
- $Inv(not, \_, bot) = (bot, bot)$
- $Inv(not, \_, bT) = (bF)$
- $Inv(not, \_, bF) = (bT)$
- $Inv(not, bot, top) = (top)$
- $Inv(not, x, top) = (x)$
== 大于
- $Inv(>, ([\_, \_], [\_, \_]), bot) = ([-infinity, infinity], [-infinity, infinity])$
- $Inv(>, ([a, b], [c, d]), bT) = ([max(a, c + 1), b], [c, min(b - 1, d)])$
- $Inv(>, ([a, b], [c, d]), bF) = ([a, min(b, d)], [max(a, c), b])$
- $Inv(>, ([a, b], [c, d]), top) = ([a, b], [c, d])$
== 加法
- $Inv(+, ([\_, \_], [\_, \_]), bot) = ([-infinity, infinity], [-infinity, infinity])$
- $Inv(+, ([a, b], [c, d]), [e, f]) = ([max(a, e - d), min(b, f - c)], [max(c, e - b), min(d, f - a)])$
对于表达式:
$
(x >= 0) and (x >= 1 + y) and (y >= 1 + x) = #T
$
(假设对于三个表达式从右向左更新)
- 第一轮结束:$(x, y): ([0, +infinity], [-infinity, +infinity])$
- 第二轮结束:$(x, y): ([2, +infinity], [1, +infinity])$
- 第三轮结束:$(x, y): ([4, +infinity], [3, +infinity])$
- ......
迭代过程将无法终止。不过,上面的反向语义是单调的,同时若假设表达式是可满足的,则半格的高度将有限。具体而言,布尔值的半格当然是有限的,而对于任意变量 $X$,假设 $x$ 是一个可以使表达式成立的真实值,在算法的运行过程中,要么 $X$ 的下界为 $-infinity$,要么在某一轮被更新为某个值 $X_m$,由单调性,算法过程中下界将不会小于 $X_m$。同时,由算法的安全性,$X$ 的下界也不大于 $x$,因此 $X$ 下界的可能值是有限的,上界同理,从而 $X$ 的区间也是有限半格。
= #empty
```c
x = 1; // x: [1, 1]
while (x < 100)
{
// 第一次进入 x: [1, 1]
// 第二次进入 [2, 2] 与原值 [1, 1] 加宽合并,再与 [1, 1] 取并得到 x: [1, +infinity]
// 第三次进入 [2, 100] 与原值 [1, +infinity] 加宽合并,再与 [1, 1] 取并得到 x: [1, +infinity],若不变窄则算法收敛
// 若使用变窄算法,第四次进入 [2, 100] 与 [1, 1] 取并得到 x: [1, 100]
// 若使用变窄算法,第五次进入 [2, 100] 与 [1, 1] 取并得到 x: [1, 100],算法收敛
assert(x < 100);
// 第一次进入 x: [1, 1]
// 第二次进入将 [1, +infinity] 压缩到 [1, 99]
// 若使用变窄算法,第三次进入将 [1, +infinity] 压缩到 [1, 99]
// 若使用变窄算法,第四次进入将 [1, 100] 压缩到 [1, 99]
x++;
// 第一次进入 x: [2, 2]
// 第二次进入 x: [2, 100]
// 若使用变窄算法,第三次进入 x: [2, 100]
// 若使用变窄算法,第四次进入 x: [2, 100]
}
skip;
```
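下面给出与上述循环对应的加宽/变窄迭代的一个简化示例(Python 示意性草稿,并非作业原文;加宽、变窄算子均为最基本的区间实现):

```python
# 对应程序:x = 1; while (x < 100) x++;
INF = float('inf')

def widen(old, new):        # 加宽:增长的界直接放到无穷
    lo = old[0] if new[0] >= old[0] else -INF
    hi = old[1] if new[1] <= old[1] else INF
    return (lo, hi)

def narrow(old, new):       # 变窄:只细化无穷的界
    lo = new[0] if old[0] == -INF else old[0]
    hi = new[1] if old[1] == INF else old[1]
    return (lo, hi)

def step(head):             # 循环体的一次抽象执行,并与初值 [1,1] 取并
    body = (head[0], min(head[1], 99))      # assert(x < 100)
    after = (body[0] + 1, body[1] + 1)      # x++
    return (min(1, after[0]), max(1, after[1]))

head = (1, 1)
while True:                 # 加宽迭代
    widened = widen(head, step(head))
    if widened == head:
        break
    head = widened
print(head)                 # (1, inf)

head = narrow(head, step(head))
print(head)                 # (1, 100) —— 变窄一步即可恢复上界 100
```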
|
|
https://github.com/FlandiaYingman/note-me | https://raw.githubusercontent.com/FlandiaYingman/note-me/main/note-me-more.typ | typst | MIT License | #import "note-me.typ": admonition
#let todo(children) = admonition(
icon-path: "icons/question.svg",
title: "TODO",
color: rgb(209, 36, 47),
children
) |
https://github.com/SergeyGorchakov/russian-phd-thesis-template-typst | https://raw.githubusercontent.com/SergeyGorchakov/russian-phd-thesis-template-typst/main/parts/intro.typ | typst | MIT License | #import "../lib.typ": *
#set heading(numbering: none)
#show: fix-indent()
= Введение <intro> // Название и ссылка на него
Обзор, введение в тему, место в мировой науке.
Данные для диссертации и автореферата берутся из файла `common/data.typ`.
*Целью* данной работы является ...
Для достижения цели необходимо было решить следующие *задачи*:
+ Исследовать, разработать, вычислить ...
+ Исследовать, разработать, вычислить ...
+ Исследовать, разработать, вычислить ...
+ Исследовать, разработать, вычислить ...
~
*Научная новизна* исследования состоит в:
+ Впервые ...
+ Впервые ...
+ Было выполнено оригинальное исследование ...
~
*Практическая значимость*
*Методы*
*Основные положения, выносимые на защиту*
+ Положение ...
+ Положение ...
+ Положение ...
+ Положение ...
~
*Достоверность* полученных результатов обеспечивается ... Результаты находятся в соответствии с результатами, полученными другими авторами.
*Апробация работы.*
Основные результаты работы докладывались на: ...
*Личный вклад.* Все результаты исследований были получены автором лично.
*Публикации.* Основные результаты по теме исследования изложены в ... печатных изданиях, ... из которых в журналах ВАК.
// здесь нужно дополнить автоматически создаваемым списком публикаций автора.
*Объем и структура работы*. Диссертация состоит из введения, #total_part глав, заключения и #total_appendix приложений. Полный объем диссертации составляет #total_page страниц, включая #total_fig рисунков, #total_table таблиц. Список литературы содержит #total_bib наименований. |
https://github.com/Clamentos/FabRISC | https://raw.githubusercontent.com/Clamentos/FabRISC/main/src/spec/Section4.typ | typst | Creative Commons Attribution Share Alike 4.0 International | ///
#import "Macros.typ": *
///
#section(
[Memory],
[This section is dedicated to present the memory model used by FabRISC including data alignment, addressing modes, synchronization, consistency, as well as possible cache coherence considerations and reminders.],
///.
subSection(
[Data Alignment],
[FabRISC, overall, treats the main memory and the MMIO regions as collections of byte-addressable locations in little-endian order with a range of $2^"WLEN"$ addresses in total. The specification leaves to the hardware designers the choice of supporting aligned or unaligned memory accesses or both for data. If aligned is decided to be the only supported scheme, the hart must generate the `MISD` (Misaligned Data) fault every time the constraint is violated. When it comes to instructions, it is mandatory to align the code at the 16-bit boundary, which is the greatest common divisor of the instruction lengths. If the hardware detects that the program counter isn't aligned for any reason, the `MISI` (Misaligned Instruction) fault must be generated. Detecting `PC` misalignment simply amounts to checking whether the last bit is zero; if it isn't, the fault is generated. Because instructions are aligned at the 2-byte boundary, the branch offsets can be shifted by one to the left, thus doubling the range in terms of bytes.],
[The `DALIGN` module must be implemented if the hardware doesn't support misaligned data accesses. The module provides the architecture with the aforementioned `MISD` fault and it can be implemented in unprivileged microarchitectures as well like any other event. Compatibility between two microarchitectures that differ in data alignment capabilities, can be solved by simulating unaligned memory accesses in software as an event handler for the `MISD` fault. This would render the code interchangeable between the two systems at the cost of performance.],
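[As a minimal illustration (a Python sketch, not part of the specification; the helper names are hypothetical), the checks described above reduce to simple arithmetic on the address bits:

```python
# Sketch of the alignment checks described above: instructions are 16-bit aligned,
# so PC misalignment is just "last bit set"; data alignment for an N-byte access
# on an aligned-only hart is "address is a multiple of N".
def pc_misaligned(pc: int) -> bool:
    return pc & 1 != 0          # would raise the MISI fault

def data_misaligned(addr: int, size: int) -> bool:
    return addr % size != 0     # would raise the MISD fault

assert not pc_misaligned(0x1000) and pc_misaligned(0x1001)
assert data_misaligned(0x1002, 4) and not data_misaligned(0x1004, 4)
```
],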
comment([
Alignment issues can arise when the processor wants to read or write an item whose size is greater than the smallest addressable unit. This problem is tricky to design hardware for, especially caches and TLBs, because misaligned items can cross cache line boundaries as well as OS page boundaries. Alignment networks and more advanced caches are needed to natively support this, which increases complexity and may slow down the critical path. For already complex, deeply pipelined multicore out-of-order superscalar machines, however, I believe that supporting unaligned accesses in hardware can be handy. This way, the software writer can make decisions freely without having to worry about this issue, potentially decreasing the memory footprint in data structures with many irregular objects.
FabRISC always provides the option for aligned only systems via emulation thanks to the `MISD` fault, if the other route is not deemed worth pursuing.
])
),
///.
subSection(
[Memory Addressing Modes],
[FabRISC provides several addressing modes for both vector and scalar pipelines, from simple register indirect all the way to vector gather and scatter operations. This helps the code to be more compact increasing the efficiency of instruction caches as well as reducing the overhead of "bookkeeping" instructions for more complex access patterns. The addressing modes are divided into two parts, the first for scalar operations and the second for vector operations:],
list(tight: false,
[*Scalar addressing modes:* _These modes can access a single element at a time, which can be any integer or floating point type of 8, 16, 32 or 64-bits:_
#list(tight: false, marker: [--],
[*Immediate displaced:* $"mem"["reg" + "imm"] <-> "reg"$ _The address is composed of the sum between a register acting as the base pointer and a constant._],
[*Immediate indexed*: $"mem"["reg" + "reg" + "imm"] <-> "reg"$ _The address is composed of the sum between two registers, one acting as the base pointer while the other acting as the index and a constant._],
[*Immediate scaled:* $"mem"["reg" + ("reg" << "imm")] <-> "reg"$ _The address is composed of the sum between a register acting as the base pointer and a second register acting as the index logically left shifted by a constant._]
)
],
[*Vector addressing modes:* _These modes can access multiple elements at a time. The number of elements accessed is dictated by the VLEN bits stored in the flag register. The elements are all the same type either integer or floating point of 8, 16, 32 or 64 bits:_
#list(tight: false, marker: [--],
[*Vector immediate displaced:* $"mem"_i ["reg" + "stride"("imm")] <-> "vreg"_i$ _The address is composed of a single scalar register used as the base pointer. The constant specifies the stride of the access._],
[*Vector indexed (gather / scatter):* $"mem"_i ["reg" + "vreg"_i] <-> "vreg"_i$ _The address is composed of the sum between the base pointer register and the $i^"th"$ index of the specified vector register._]
)
]
),
[All scalar addressing modes also support two auto-update modes to help with sequential data structures: _post-increment_ and _pre-decrement_.],
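[The scalar modes above boil down to a handful of effective-address computations; the following Python sketch (illustrative only, not part of the specification) spells them out:

```python
# Effective-address computation for the three scalar addressing modes listed above.
def displaced(base: int, imm: int) -> int:
    return base + imm                 # mem[reg + imm]

def indexed(base: int, index: int, imm: int) -> int:
    return base + index + imm         # mem[reg + reg + imm]

def scaled(base: int, index: int, shift: int) -> int:
    return base + (index << shift)    # mem[reg + (reg << imm)]

# e.g. indexing the 3rd element of an array of 4-byte items at 0x1000
assert scaled(0x1000, 3, 2) == 0x100C
```
],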
comment([
Good addressing modes can help improve code density, legibility (though low-level assembly isn't read by humans all that often...) as well as performance since they can be directly implemented in hardware, reducing the overhead. The modes that I decided to support are handy for a variety of data structures, especially sequential ones such as stacks and arrays thanks to the auto-update features. Vector modes include the common strided accesses, as well as gather and scatter which are particularly useful for accessing sparse matrices.
Thanks to the modular nature of FabRISC, it's not required to implement the complex modes at all, only the immediate displaced can suffice as it's the simplest and, by far, the most commonly used in many programs.
])
),
///.
subSection(
[Synchronization],
[FabRISC provides dedicated atomic instructions via the `DAB` and `DAA` instruction modules to achieve proper synchronization in order to protect critical sections and to avoid data races in threads that share memory with each other. The proposed instructions behave atomically and can be used to implement atomic _test-and-set_ and _read-modify-write_ operations for locks, semaphores and barriers. It is important to note that if the system can only ever run one software thread at any given moment, then this section can be skipped since the problem can be solved by the operating system, or by software in general. Below is a description of the atomic instructions, which are divided in two categories:],
list(tight: false,
[*Test-and-set* _instructions:_
#list(tight: false, marker: [--],
[*Compare And Swap:* _(`CAS`) is a simple atomic instruction that atomically swaps the memory location `X` with register value `Y` if `X` is equal to the register value `Z`._],
[*Versioned Compare And Swap:* _(`VCAS`) is a more complex atomic instruction that atomically swaps the memory location `X` with register value `Y` if `X` is equal to the register value `Z`, while also taking into account the version counters of `X` and `Y`. The version is an 8-byte value physically next to the actual variable. `VCAS` will not swap if the `X` and `Y` counters are different. If the swap is successful, both counters are incremented._]
)
],
[*Read-modify-write instructions:* _These instructions provide several atomic read-modify-write operations on single variables, such as atomic addition that can generally scale better than using CAS-based algorithms (see section 7 for more information)._]
),
[The test-and-set and read-modify-write family of instructions, in order to be atomic, must perform all of their operations in the same memory "request". This ensures that no other hart or device has access to the memory, potentially changing the target variable in the middle of the operation. This may be achieved by directly arbitrating the bus or with the help of the cache coherence protocol or both.],
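[To illustrate how the primitives above are typically used, here is a small software model of a `CAS`-based spin lock (a Python sketch, not part of the specification; the `cas` helper is hypothetical and a lock merely stands in for the atomicity that the hardware instruction provides in a single memory request):

```python
import threading

_atomicity = threading.Lock()   # models the single atomic memory request

def cas(mem: list, addr: int, expected: int, new: int) -> bool:
    # software stand-in for the CAS instruction described above
    with _atomicity:
        if mem[addr] == expected:
            mem[addr] = new
            return True
        return False

def spin_acquire(mem: list, addr: int) -> None:
    while not cas(mem, addr, 0, 1):   # retry until the lock word flips 0 -> 1
        pass

def spin_release(mem: list, addr: int) -> None:
    mem[addr] = 0                     # a plain store releases the lock

mem = [0]
spin_acquire(mem, 0)   # critical section would go here
spin_release(mem, 0)
```
],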
subSubSection(
[Transactional Memory],
[FabRISC also provides optional instructions, via the `TM` module, to support basic hardware transactional memory that can be employed instead of the solutions seen above to exploit parallelism in a more "optimistic" manner. Multiple transactions can proceed in parallel as long as no conflict is detected by the hardware. When such a situation occurs, the offending transaction must be aborted, that is, it must discard all the changes and restore the architectural state immediately before the start of the transaction itself. If a transaction detects no conflict it is allowed to commit the changes and the performed operations can be considered atomic. Transactions can be nested inside each other up to a depth of 255; beyond this, any further transactions must all be aborted. By convention and for simplicity, if an event causes the context to be changed or traps the hart, all of its ongoing transactions must be aborted. One important consideration is that FabRISC's approach to transactional memory is "liberal", which requires a transaction to fully reach the commit point before resulting in either a positive or negative outcome. In order to fail transactions earlier and to allow for a quicker retry, it's possible to perform a checkpoint on the progress via a dedicated instruction. The following are the proposed transactional memory instructions:],
tableWrapper([Transactional instructions.], table(
columns: (auto, auto),
align: left + horizon,
[#middle([*Name*])], [#middle([*Description*])],
[`TBEG`], [*Transaction Begin*: \ Causes the hart that executed this instruction to start monitoring accesses by other harts as well as incrementing the nesting counter by one, thus starting a transaction. If the nesting counter was zero prior to executing this instruction, the hart must enter into transactional mode. This instruction will write the outcome status of its execution in the specified destination register.],
[`TCOM`], [*Transaction Commit*: \ Causes the hart that executed this instruction to stop monitoring accesses by other harts and commit the changes as well as decrementing the nesting counter by one. If the nesting counter is zero after the execution of this instruction, the hart must exit from transactional mode. This instruction will write the outcome status of its execution in the specified destination register effectively concluding the ongoing transaction. All of the updates to memory, if any, can be considered atomic and permanent after the completion of this instruction.],
[`TABT`], [*Transaction Abort*: \ Causes the hart that executed this instruction to stop monitoring accesses by other harts, discard all changes, decrement the nesting counter by one and write, to the specified destination register, the value of the `XABT` abort code. This instruction behaves in a similar manner to a failing `TCOM`. It is important to note that this instruction will naturally only abort the innermost active transaction.],
[`TABTA`], [*Transaction Abort All*: \ Causes the hart that executed this instruction to stop monitoring accesses by other harts, discard all changes, reset the nesting counter to zero and write, to the specified destination register, the value of the `XABT` abort code. This instruction behaves in a similar manner to a failing `TCOM`. It is important to note that this instruction will abort all currently active transactions.],
[`TCHK`], [*Transaction Check*: \ Causes the hart that executed this instruction to check if it's necessary to abort the current transaction. If the check fails, this instruction will behave exactly the same as `TABT`. If the check passes, this instruction will not affect the architectural state, that is, it will behave like a no-operation.]
)),
[The `TM` module, as mentioned earlier, also includes _abort-codes_ that can be used by the programmer to take appropriate actions in case the transaction was aborted. The proposed codes are listed in the following table:],
pagebreak(),
tableWrapper([Transaction abort codes.], table(
columns: (auto, auto, auto),
align: (x, y) => (right + horizon, left + horizon, left + horizon).at(x),
[#middle([*Code*])], [#middle([*Mnemonic*])], [#middle([*Description*])],
[0], [`XABT`], [Explicit abort. The transaction was explicitly aborted by the `TABT` or `TABTA` instruction.],
[1], [`EABT`], [Event abort. The transaction was aborted due to a triggered event: Interrupt, fault or exception.],
[2], [`CABT`], [Conflict abort. The transaction was aborted due to a collision with another thread, that is, both wrote to the same memory location.],
[3], [`RABT`], [Replacement abort. The transaction was aborted due to a cache line replacement that held a previously fetched transactional variable.],
[4], [`UABT`], [Depth underflow abort. The transaction was aborted because a `TCOM`, `TABT`, `TABTA` or a failing `TCHK` instruction attempted to execute at a depth of 0.],
[5], [`OABT`], [Depth Overflow abort. The transaction was aborted due to an exceeded nesting depth.]
))
),
comment([
Memory synchronization is extremely important in order to make shared memory communication even work at all. The problem arises when a pool of data is shared among different processes or threads that compete for resources and concurrent access to this pool might result in erroneous behavior and must, therefore, be arbitrated. This zone is called "critical section" and special atomic primitives can be used to achieve this protection. Many different instruction families can be chosen such as "test-and-set", "read-modify-write" and others. I decided to provide in the ISA the compare-and-swap instruction as it's very popular and is a classic. It has the useful property of guaranteeing that at least one thread will successfully execute the operation. It doesn't obviously protect from deadlocks or livelocks as they are a symptom of erroneous code. `CAS` also suffers from the "ABA" problem since it detects changes by looking at the data only. The solution to this is to use `VCAS`, which is a more sophisticated version of the former, taking into account a version counter that is incremented at each successful execution.
Another valid option would be to use the `LL` & `SC` pair commonly found in many RISC ISAs. I decided not to go this route since the pair, even though it doesn't suffer from the ABA problem, doesn't guarantee forward progress unless additional restrictions are placed (see "constrained" / "unconstrained" `LL` & `SC` sequences in RISC-V).
FabRISC also provides a small set of simple read-modify-write type operations, which can ease the implementation of lock-free data structures and algorithms, allowing for better scaling when thread contention is high.
Lastly, I decided to also provide basic hardware transactional memory support because, in some situations, it can yield great performance compared to "pessimistic" solutions without losing atomicity. This is completely optional and up to the hardware designers to implement or not simply because it can complicate the design by a fair bit, especially the caching subsystem. Transactional memory seems to be promising in improving performance and ease of implementation when it comes to shared memory programs, but debates are still ongoing to decide which exact way of implementing it is best. I chose to go with a very liberal approach that aborts only at commit or during a checkpoint, as opposed to aborting as soon as a failure is detected. This is simpler to implement in both the specification and the hardware and can provide a higher transaction completion percentage at the expense of a higher retry latency.
])
),
///.
subSection(
[Coherence],
[FabRISC leaves to the hardware designers the choice of which coherence protocol to implement. On multicore systems, cache coherence must be ensured by choosing a coherence protocol and making sure that all harts agree on the current sequence of accesses to the same memory location. That can be guaranteed by serializing the operations via the use of a shared bus or via a distributed directory system. After this, a _write-update_ or _write-invalidate_ protocol can be employed. Software coherence is also a potential option, but it relies on the programmer to explicitly flush or invalidate the cache of each core separately. Nevertheless, FabRISC provides, via the `SB` instruction module, implementation-dependent instructions, such as `CACOP`, that can be sent to the cache subsystem directly to manipulate its state and operation. If the processor makes use of a separate instruction cache, potential complications can arise for self-modifying code, which can be solved by employing any of the above-mentioned options for instruction caches as well.],
comment([
Cache coherence is a big topic and is hard to get right because it can greatly hinder performance in both single core and multicore as well as significantly complicate the design. I decided to give as much freedom as possible to the designer of the system to pick the best solution that they see fit.
Another aspect that could be important, if the software route is chosen, is the exposure of the underlying microarchitecture implementation to the programmer, which can yield unnecessary complications and confusion. Generally speaking, though, write-invalidate seems to be the standard approach in many modern designs because of the way it behaves in certain situations, especially when a process is moved to another core by the operating system. A simple shared bus with snooping can be a good choice if the number of cores is small (lots of cores means lots of traffic), otherwise, a directory-based approach can be used to ensure that all the cores agree on the order of accesses. From this, the actual coherence protocol can be picked. Common ones are: `MSI`, `MESI`, `MOSI` or `MOESI`, the latter being the most complex but most powerful in reducing traffic and maximizing power efficiency. All the harts that map to the same core don't need to worry about coherence between each other since the caches are shared between those harts. This argument holds true for whole cores that share bigger pools of cache, such as L3 and sometimes L2. Virtually accessed caches are sort of an exception because false sharing can be an issue needing extra hardware checks to cover this edge case.
The concept of coherence goes beyond caching and exists any time where multiple copies of the same data are shared by different actors.
])
),
///.
subSection(
[Consistency],
[FabRISC utilizes a fully relaxed memory consistency model formally known as _release consistency_ that allows all possible orderings in order to give harts the freedom to reorder memory instructions to different addresses in any way they want. For debugging and specific situations the stricter _sequential consistency_ model can be utilized and the hart must be able to switch between the two at any time via the `CMD` bit in the status register. Special instructions, called "fences", are provided by the `FNC` module to let the programmer impose memory ordering restrictions when the relaxed model is in use. If the hart doesn't reorder memory operations this module is not necessary and can be skipped. The proposed fencing instructions are listed in the following table:],
tableWrapper([Fencing instructions.], table(
columns: (auto, auto),
align: left + horizon,
[#middle([*Name*])], [#middle([*Description*])],
[`FNCL`], [*Fence Loads*: \ This instruction forbids the hart to reorder any load instruction across the fence.],
[`FNCS`], [*Fence Stores*: \ This instruction forbids the hart to reorder any store instruction across the fence.],
[`FNCLS`], [*Fence Loads and Stores*: \ This instruction forbids the hart to reorder any load or store instruction across the fence.],
[`FNCI`], [*Fence Instructions*: \ This instruction signals that a modification of the instruction cache happened.]
)),
[The fences can be used on any memory type instruction, including the atomic `CAS` and `VCAS` to forbid reordering when acquiring or releasing a lock for critical sections and barriers. Writes to portions of memory where the code is stored can be made effective by issuing a command to the cache subsystem via the dedicated implementation specific `CACOP` instruction as briefly discussed above. The `FNC` module also requires the ability for the hart to switch between the release consistency and the more stringent sequential consistency model.],
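[As a schematic example (a Python sketch, not part of the specification; Python itself does not reorder these statements, the comments only mark where a FabRISC hart would need fences under release consistency), consider the classic message-passing idiom:

```python
data, flag = 0, 0

def producer():
    global data, flag
    data = 42
    # FNCS here: the store to `flag` below must not be reordered above the store to `data`
    flag = 1

def consumer():
    while flag == 0:
        pass
    # FNCL here: the load of `data` below must not be reordered above the loads of `flag`
    assert data == 42
```
],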
comment([
The memory consistency model I wanted to utilize was a very relaxed one to allow all kinds of performance optimization to take place inside the system. However, one has to provide some sort of restrictions, effectively special memory operations, to avoid edge cases that can cause erroneous processing if the hart can execute memory instructions out of order.
Even with those restrictions, debugging could be quite difficult because the program might behave very weirdly, so I decided to include the sequentially consistent model that forbids reordering of any kind of memory instruction.
])
)
)
#pagebreak()
///
|
https://github.com/Shuenhoy/modern-zju-thesis | https://raw.githubusercontent.com/Shuenhoy/modern-zju-thesis/master/examples/common-body.typ | typst | MIT License | #import "../lib.typ": fonts
#import fonts: *
= 一个Chapter
旧历的年底毕竟最像年底,村镇上不必说,就在天空中也显出将到新年的气象来。灰白色的沉重的晚云中间时时发出闪光,接着一声钝响,是送灶的爆竹;近处燃放的可就更强烈了,震耳的大音还没有息,空气里已经散满了幽微的火药香。我是正在这一夜回到我的故乡鲁镇的。虽说故乡,然而已没有家,所以只得暂寓在鲁四老爷的宅子里。他是我的本家,比我长一辈,应该称之曰“四叔”,是一个讲理学的老监生。他比先前并没有甚么大改变,单是老了些,但也还未留胡子,一见面是寒暄,寒暄之后说我“胖了”,说我“胖了”之后即大骂其新党。但我知道,这并非借题在骂我:因为他所骂的还是康有为。但是,谈话是总不投机的了,于是不多久,我便一个人剩在书房里。
== 一个Section
我们可以用 image 来插入现有的 jpg 等格式的图片,如@fig:zju-logo。
#figure(
image("../assets/zju-emblem.svg", width: 40%),
caption: [浙江大学 LOGO],
) <zju-logo>
如@tbl:sample 所示,这是一张自动调节列宽的表格。
#figure(
table(
columns: (auto, 1fr),
align: (center, center),
stroke: black,
[第一列], [第二列],
[xxx], [xxx],
[xxx], [xxx],
[xxx], [xxx],
),
caption: "自动调节列宽的表格",
)<sample>
如@eqt:eq:sample 所示,这是一个公式。
$
A = overbrace((a + b + c) + underbrace(i(d + e + f),"虚数"),"复数")
$<eq:sample>
如@lst:code-sample 所示,这是一段代码 @zjuthesis,@floater2005surface,@WANG2022102149,@chCurl。
#figure(
```
#include <stdio.h>
int main(int argc, char *argv[])
{
printf("Hello, zjuthesis\n");
return 0;
}
```,
caption: "simple.c",
) <code-sample>
=== 关于字体
英文字体通常提供了粗体和斜体的组合,中文字体通常没有粗体或斜体,本模板使用了 "stroke" 来实现中文伪粗体,但不提供中文斜体。
#figure(
caption: "一些字体示例",
table(
columns: 5,
[字体], [常规], [粗体], [斜体], [粗斜体],
[Times New Roman], [Regular], [*Bold*], [_Italic_], [*_Bold Italic_*],
[仿宋],
[#show: fangsong; 常规],
[#show: fangsong; *粗体*],
[#show: fangsong; _斜体_],
[#show: fangsong; *_斜粗体_*],
[宋体], [#show: songti; 常规], [#show: songti; *粗体*], [#show: songti; _斜体_], [#show: songti; *_斜粗体_*],
[黑体], [#show: heiti; 常规], [#show: heiti; *粗体*], [#show: heiti; _斜体_], [#show: heiti; *_斜粗体_*],
[楷体], [#show: kaiti; 常规], [#show: kaiti; *粗体*], [#show: kaiti; _斜体_], [#show: kaiti; *_斜粗体_*],
),
)
== 另一个Section
第二天我起得很迟,午饭之后,出去看了几个本家和朋友;第三天也照样。他们也都没有甚么大改变,单是老了些;家中却一律忙,都在准备着“祝福”。这是鲁镇年终的大典,致敬尽礼,迎接福神,拜求来年一年中的好运气的。杀鸡,宰鹅,买猪肉,用心细细的洗,女人的臂膊都在水里浸得通红,有的还带着绞丝银镯子。煮熟之后,横七竖八的插些筷子在这类东西上,可就称为“福礼”了,五更天陈列起来,并且点上香烛,恭请福神们来享用,拜的却只限于男人,拜完自然仍然是放爆竹。年年如此,家家如此,——只要买得起福礼和爆竹之类的——今年自然也如此。天色愈阴暗了,下午竟下起雪来,雪花大的有梅花那么大,满天飞舞,夹着烟霭和忙碌的气色,将鲁镇乱成一团糟。我回到四叔的书房里时,瓦楞上已经雪白,房里也映得较光明,极分明的显出壁上挂著的朱拓的大“寿”字,陈抟老祖写的,一边的对联已经脱落,松松的卷了放在长桌上,一边的还在,道是“事理通达心气和平”。我又无聊赖的到窗下的案头去一翻,只见一堆似乎未必完全的《康熙字典》,一部《近思录集注》和一部《四书衬》。无论如何、我明天决计要走了。
况且,一直到昨天遇见祥林嫂的事,也就使我不能安住。那是下午,我到镇的东头访过一个朋友,走出来,就在河边遇见她;而且见她瞪着的眼睛的视线,就知道明明是向我走来的。我这回在鲁镇所见的人们中,改变之大,可以说无过于她的了:五年前的花白的头发,即今已经全白,会不像四十上下的人;脸上瘦削不堪,黄中带黑,而且消尽了先前悲哀的神色,仿佛是木刻似的;只有那眼珠间或一轮,还可以表示她是一个活物。她一手提着竹篮。内中一个破碗,空的;一手拄著一支比她更长的竹竿,下端开了裂:她分明已经纯乎是一个乞丐了。我就站住,豫备她来讨钱。
=== 一个Subsection
“你回来了?”她先这样问。
“是的。”
==== 一个Subsubsection
“你回来了?”她先这样问。
“是的。”
== 又一个Section
#lorem(100)
#lorem(120)
#lorem(130)
= 另一个Chapter<chap:1>
== 又一个Section<sec:1>
@chap:1
@sec:1
=== 又一个Subsection<subsec:1>
@subsec:1
“就是——”她走近两步,放低了声音,极秘密似的切切的说,“一个人死了
之后,究竟有没有魂灵的?” |
https://github.com/SkymanOne/ecs-typst-template | https://raw.githubusercontent.com/SkymanOne/ecs-typst-template/main/thesis_example.typ | typst | MIT License | #import "ecsproject.typ": *
// Use everything together
#show: doc => use_project(
title: "My project",
author: (
name: "<NAME>",
email: "<EMAIL>"
),
supervisor: (
name: "Supervisor name",
email: none
),
examiner: (
name: "Examiner name",
email: none
),
date: "December 22, 2023",
program: "BSc Computer Science",
department: "Electronics and Computer Science",
faculty: "Faculty of Engineering and Physical Sciences",
university: "University of Southampton",
// Is your report a progress report?
is_progress_report: false,
// You can remove this arg altogether if you need; it will autofill with default values
originality_statements: (
acknowledged: "I have acknowledged all sources, and identified any content taken from elsewhere.",
resources: "I have not used any resources produced by anyone else.",
foreign: "I did all the work myself, or with my allocated group, and have not helped anyone else",
material: "The material in the report is genuine, and I have included all my data/code/designs.",
reuse: "I have not submitted any part of this work for another assessment.",
participants: "My work did not involve human participants, their cells or data, or animals."
),
abstract_text: lorem(50),
acknowledgments_text: lorem(50),
// These set style for page and heading numberings
page_numbering: "1",
title_numbering: "1.",
doc
)
// Or split individually if you need to tailor the content
#show: doc => page_style(
doc
)
#cover(
title: "Folidity - Safe Functional Smart Contract Language",
supervisor: (name: "Prof. <NAME>", email: "<EMAIL>"),
examiner: (name: "TBD", email: none),
author: (name: "<NAME>", email: "<EMAIL>"),
)
#abstract(
author: (name: "<NAME>", email: "<EMAIL>"),
program: "Program name",
// this is default value
is_progress_report: false,
content: lorem(150),
)
// This is identical
#originality_statement(
acknowledged: "I have acknowledged all sources, and identified any content taken from elsewhere.",
resources: "I have not used any resources produced by anyone else.",
foreign: "I did all the work myself, or with my allocated group, and have not helped anyone else",
material: "The material in the report is genuine, and I have included all my data/code/designs.",
reuse: "I have not submitted any part of this work for another assessment.",
participants: "My work did not involve human participants, their cells or data, or animals."
)
// to this
#originality_statement()
#acknowledgments(text: "I dedicated this to myself")
// autogenerated table of contents
#table_of_contents()
// Start writing stuff
= Proposed Solution
== Outline
#lorem(1500)
= Hey
#lorem(500)
== Subhey
#lorem(500)
// Include bibliography
#pagebreak()
#bibliography("ECS.bib", full: true) |
https://github.com/university-makino/Microcomputer-control-and-exercises | https://raw.githubusercontent.com/university-makino/Microcomputer-control-and-exercises/master/report/本レポート5/report.typ | typst | // ライブラリの実装 //
#import "@preview/codelst:2.0.1": sourcecode
//フォント設定//
#let gothic = "YuMincho"
//本文フォント//
#set text(11pt, font: gothic, lang: "ja")
//タイトル・見出しフォント//
#set heading(numbering: "1.1")
#let heading_font(body) = {
show regex("[\p{scx: Han}\p{scx: Hira}\p{scx: Kana}]"): set text(font: gothic)
body
}
#show heading: heading_font
// ページ設定 //
#set page(
paper: "a4",
margin: (x: 25mm, y: 25mm),
columns: 1,
//fill: 背景色,
numbering: "1",
number-align: center,
header: [
#set text(8pt)
]
)
// 数式の表示の仕方を表示 //
#set math.equation(numbering: "(1)")
//本文ここから//
= 演習の目的
実験を通して、LEDマトリクス TOM-1588BH-Bの使い方と仕組みの習得を目的とする。
= 演習の使用部品
== @LEDマトリクス の電子部品 ( LEDマトリクス TOM-1588BH-B) を次のような点から調べなさい。
#figure(
image("./img/LEDマトリクス.png",width: 50%),
caption: "LEDマトリクス"
)<LEDマトリクス>
=== どのような部品か
8列8行のドットマトリックスディスプレイで、赤色の発光ダイオードを64個搭載している。
8列8行のLEDで構成されており、各LEDは各列で共通のアノードと各行で共通のカソードを持っている。
電光掲示板や電子時計などに使用される。
=== どのような仕組みか
TOM-1588BH-Bは廃盤となっており、データシートが見つからなかったため、代わりに後継機であるTOM-1588AMG-N について調査した。
@LEDマトリクス回路図 に示すように、各LEDはアノードとカソードに接続されており、アノードに正の電圧を加え、カソードに接地することでLEDを点灯させる @led_matix_oasistek_dataheet。
各LEDは行と列の交点に配置されており、行と列の交点に電圧を加えることで、特定のLEDのみを点灯させることができる。
#figure(
image("./img/LEDマトリクス回路図.png",width: 50%),
caption: "LEDマトリクス回路図"
)<LEDマトリクス回路図>
=== どのような入力を取り扱うのか
電圧を入力として取り扱う。各LEDに対してアノードに正の電圧を加え、カソードに接地することでLEDを点灯させる。
=== 入力に応じて出力がどう変化するのか (データシートや仕様書を参考に)
一つ一つのLEDは独立して制御されるため、入力に応じて出力が変化する。各LEDに対してアノードに正の電圧を加え、カソードに接地することでLEDを点灯させる。
=== どのようなピンアサイン (各ピンの役割) か
@LEDマトリクスピンアサイン に示すように、8列8行のLEDマトリクスには、各列に対してアノードが、各行に対してカソードが接続されている。
実際のピンアサインと光る位置は綺麗に並んでいる訳ではないため、ピンアサインを確認しながら接続する必要がある。
#figure(
image("./img/LEDマトリクスピンアサイン.png",width: 90%),
caption: "LEDマトリクスピンアサイン"
)<LEDマトリクスピンアサイン>
=== 正しい動作の条件、範囲は何か
- ピーク順⽅向電流(1/10 デューティ サイクル、0.1ms パルス幅): 80mA
- ドットあたりの消費電⼒ : 75mW
- 連続順⽅向電流 : 20mA
- 推奨動作電流 : 12mA
- 逆電圧 : 5V
- 動作温度範囲 : -25°C 〜 +85°C
- 保管温度範囲 : -30°C 〜 +85°C
- 鉛フリーはんだ温度(座⾯下1/16インチ): 260°C 3秒
#pagebreak() // ページを分ける
= 課題内容
== フルカラーLEDを光らせる
=== 実験その1 (動作確認)
LEDマトリクスをダイナミック点灯方式を使って動作させる。配線して、アノードコモン(row 側) を用いたプログラムを入力しArduinoでその動作を確認し考察しなさい。
*回路図*
@実験その1回路図_1 , @実験その1回路図_2 は、実験 1 の回路図を示す。 複数の出力ポートをまとめてコントロールし、LEDマトリクスに図を表示させる回路である。
#figure(
image("./img/実験その1回路図1.png",width: 160%),
caption: "実験その1回路図_1"
)<実験その1回路図_1>
#figure(
image("./img/実験その1回路図2.png",width: 150%),
caption: "実験その1回路図_2"
)<実験その1回路図_2>
#pagebreak() // ページを分ける
*プログラム*
@LEDマトリクスをアノードコモンで光らせるプログラム は、実験1 で使用したプログラムを示す。
#figure(
sourcecode[```c
int ANODEPIN[] = {2, 3, 4, 5, 6, 7, 8, 9};
int CATHODEPIN[] = {10, 11, 12, 14, 15, 16, 17, 18};
int mat_A[8][8] = {
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 1, 1, 0, 0, 0, 0, 0},
{1, 1, 1, 0, 0, 0, 1, 0},
{0, 0, 1, 1, 1, 0, 0, 1},
{0, 0, 1, 1, 1, 1, 1, 0},
{0, 0, 1, 1, 1, 1, 1, 0},
{0, 1, 0, 0, 0, 0, 1, 0},
{1, 0, 0, 0, 0, 0, 0, 1}};
int mat_B[8][8] = {
{0, 1, 1, 0, 0, 0, 1, 0},
{1, 1, 1, 0, 0, 0, 0, 1},
{0, 0, 1, 1, 1, 0, 0, 1},
{0, 0, 1, 1, 1, 1, 1, 0},
{0, 0, 1, 1, 1, 1, 1, 0},
{0, 0, 0, 1, 0, 0, 1, 0},
{0, 0, 0, 0, 1, 1, 0, 0},
{0, 0, 0, 1, 0, 0, 0, 0}};
void setup(){
delay(1000); // 初期化処理待ち
for (int ano = 0; ano < 8; ano++){
pinMode(ANODEPIN[ano], OUTPUT);
digitalWrite(ANODEPIN[ano], LOW);
}
for (int cat = 0; cat < 8; cat++){
pinMode(CATHODEPIN[cat], OUTPUT);
digitalWrite(CATHODEPIN[cat], HIGH);
}
}
void loop(){
unsigned long ms = millis();
if ((ms / 1000) % 2 == 1){
showMatrix(mat_A);
}else{
showMatrix(mat_B);
}
}
void showMatrix(int matrix[8][8]){
for (int ano = 0; ano < 8; ano++){
for (int cat = 0; cat < 8; cat++){
digitalWrite(CATHODEPIN[cat], (matrix[ano][cat] == 0 ? 1 : 0));
}
// 三項演算子を他の式でもできる
digitalWrite(ANODEPIN[ano], HIGH);
delay(500);
digitalWrite(ANODEPIN[ano], LOW);
}
}
```],
caption: "LEDマトリクスをアノードコモンで光らせるプログラム"
)<LEDマトリクスをアノードコモンで光らせるプログラム>
- プログラムの概要
LEDマトリクスをダイナミック点灯方式を用いてアノードコモンで光らせるプログラムである
- プログラムの説明
- 1–31 行目: 設定
- 1,2行目はアノードとカソードのピンを設定している。
- 3–20行目は、LEDマトリクスの点灯パターンを設定している。
- 21–31行目は、pinMode関数を用いて、各ピンを出力に設定している。
- 32–39 行目:プログラムの動作
- 33行目で現在の時間を取得している。
- 34–38行目で、時間に応じて、点灯パターンを切り替えている。
- 40–49 行目 : showMatrix関数
- 40–48行目で、各行のLEDを点灯させている。
*実験結果*
アノードのときは行ごとに制御できる。
アノードの時、HIGHからLOWへ切り替えるdelayを3以下にするとチラつきがなくなる。
アノードの時は同じアノードの行のなかで光るLEDの数が多い時は光が薄く、 LEDの数が少ない時は光が濃くなる。
テキスト上のコードのままだとチラつきが発生した。
行ごとに点灯消灯を管理している。
*考察*
+ LEDはダイナミック点灯を利用して、1つ1つは別のタイミングで光っているものでも、目の残像を利用して同時に光っているように錯覚させていた。
+ アノードコモンでLEDを光らせる場合、1本のアノード(行)ピンから流れる電流を、その行で同時に点灯している複数のLEDで分け合うことになるため、点灯数が多いほどLEDが暗く映ってしまうと考えられる。
+ 三項演算子を用いて、アノードコモンでプログラムを書いているときにmat_A配列の値を逆にして、カソードの選択を行っていた。
+ アノードコモンを用いるときは41–49行目で二重for文でマトリクスの光り方を制御している。41行目の第一for文では行を見ていて、42行目の第二for文でその行の各列を1つずつ見ている。
+ 列を最後まで見た後入出力を切り替えて光らないようにして、次の行を見ていくので行ごとに描写されている。
=== 実験その2 (カソードコモンを用いた動作の確認)
配線して、カソードコモン(col 側) を用いたプログラムを入力しArduinoで動作を確認し考察しなさい。
*回路図・プログラム*
回路図は実験その1 と同じものを使用する。
プログラムは実験その1 のプログラムをカソードコモンに変更する。
@LEDマトリクスをカソードコモンで光らせるプログラム は、実験その2 で使用したプログラムを示す。
#figure(
sourcecode[```c
void showMatrix(int matrix[8][8]){
for (int cat = 0; cat < 8; cat++){
for (int ano = 0; ano < 8; ano++){
digitalWrite(ANODEPIN[ano], matrix[ano][cat]);
}
// 三項演算子を他の式でもできる
digitalWrite(CATHODEPIN[cat], LOW);
delay(500);
digitalWrite(CATHODEPIN[cat], HIGH);
}
}
```],
caption: "LEDマトリクスをカソードコモンで光らせるプログラム"
)<LEDマトリクスをカソードコモンで光らせるプログラム>
- プログラムの概要
LEDマトリクスをダイナミック点灯方式を用いてカソードコモンで光らせるプログラムの一部である
- プログラムの説明
- 1–11 行目: showMatrix関数
- 1–9行目で、各列のLEDを点灯させている。
- HighとLowの値を逆にしている。
*実験結果*
カソードのときは列ごとに制御できる。
アノードより、カソードの方が明るく光った。
カソードの時はLOWからHIGHへ切り替えるdelayを1にしないとチラつきがなくならなかった。
テキスト上のコードのままだとチラつきが発生した。
列ごとに点灯消灯を管理している。
*考察*
+ LEDはダイナミック点灯を利用して、1つ1つは別のタイミングで光っているものでも、目の残像を利用して同時に光っているように錯覚させていた。
+ カソードコモンでLEDを光らせる場合、アノードからくる5Vの電圧をLEDごとに使用できるため、LEDを明るく表示できた。
+ カソードコモンを用いるときは二重for文でマトリクスの光り方を制御している。第一for文では列を見ていて、第二for文でその列の各行を1つずつ見ている。
+ 行を最後まで見た後入出力を切り替えて光らないようにして、次の列を見ていくので列ごとに描写されている。
=== 発展その1 (センサによる識別を用いたLED 制御)
LED マトリクスと加速度センサを組み合わせ、センサが反応している4 方向をLED マトリクス上に矢印の図(図5.6) で表せるようにしなさい。境界値・不感帯は、各自で判断すること。
#figure(
image("./img/矢印の図.png",width: 50%),
caption: "矢印の図"
)<矢印の図>
#pagebreak() // ページを分ける
*回路図*
@発展その1回路図_1 , @発展その1回路図_2 は、発展その1の回路図を示す。 複数の出力ポートをまとめてコントロールし、LEDマトリクスに図を表示させる回路である。
また、加速度センサを用いて、センサが反応している4 方向をLED マトリクス上に矢印の図で表す。
#figure(
image("./img/発展その1回路図1.png",width: 140%),
caption: "発展その1回路図_1"
)<発展その1回路図_1>
#figure(
image("./img/発展その1回路図2.png",width: 160%),
caption: "発展その1回路図_2"
)<発展その1回路図_2>
#pagebreak() // ページを分ける
*プログラム*
@LEDマトリクスを加速度センサーで操作するプログラム1 , @LEDマトリクスを加速度センサーで操作するプログラム2 , @LEDマトリクスを加速度センサーで操作するプログラム3 は、加速度センサーでLEDマトリクスを操作するプログラムである。
加速度と初期値の誤差を取得し、閾値を設定して、加速度センサーの値によって、LEDマトリクスを操作するプログラムである。
ブレッドボードの傾きにより、LEDマトリクスの矢印の向きが変わる。
#figure(
sourcecode[```c
// chap5.1.1ex
int ANODEPIN[] = {2, 3, 4, 5, 6, 7, 8, 9};
int CATHODEPIN[] = {10, 11, 12, 14, 15, 16, 17, 18};
int right[8][8] = {
{0, 0, 0, 0, 1, 0, 0, 0},
{0, 0, 0, 0, 0, 1, 0, 0},
{0, 0, 0, 0, 0, 0, 1, 0},
{1, 1, 1, 1, 1, 1, 1, 1},
{1, 1, 1, 1, 1, 1, 1, 1},
{0, 0, 0, 0, 0, 0, 1, 0},
{0, 0, 0, 0, 0, 1, 0, 0},
{0, 0, 0, 0, 1, 0, 0, 0}};
int left[8][8] = {
{0, 0, 0, 1, 0, 0, 0, 0},
{0, 0, 1, 0, 0, 0, 0, 0},
{0, 1, 0, 0, 0, 0, 0, 0},
{1, 1, 1, 1, 1, 1, 1, 1},
{1, 1, 1, 1, 1, 1, 1, 1},
{0, 1, 0, 0, 0, 0, 0, 0},
{0, 0, 1, 0, 0, 0, 0, 0},
{0, 0, 0, 1, 0, 0, 0, 0}};
int up[8][8] = {
{0, 0, 0, 1, 1, 0, 0, 0},
{0, 0, 1, 1, 1, 1, 0, 0},
{0, 1, 0, 1, 1, 0, 1, 0},
{1, 0, 0, 1, 1, 0, 0, 1},
{0, 0, 0, 1, 1, 0, 0, 0},
{0, 0, 0, 1, 1, 0, 0, 0},
{0, 0, 0, 1, 1, 0, 0, 0},
{0, 0, 0, 1, 1, 0, 0, 0}};
int down[8][8] = {
{0, 0, 0, 1, 1, 0, 0, 0},
{0, 0, 0, 1, 1, 0, 0, 0},
{0, 0, 0, 1, 1, 0, 0, 0},
{0, 0, 0, 1, 1, 0, 0, 0},
{1, 0, 0, 1, 1, 0, 0, 1},
{0, 1, 0, 1, 1, 0, 1, 0},
{0, 0, 1, 1, 1, 1, 0, 0},
{0, 0, 0, 1, 1, 0, 0, 0}};
int normal[8][8] = {
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 1, 1, 1, 1, 0, 0},
{0, 0, 1, 1, 1, 1, 0, 0},
{0, 0, 1, 1, 1, 1, 0, 0},
{0, 0, 1, 1, 1, 1, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0}};
```],
caption: "LEDマトリクスを加速度センサーで操作するプログラム1/3"
)<LEDマトリクスを加速度センサーで操作するプログラム1>
#figure(
sourcecode[```c
int xPin = 19;
int yPin = 20;
int zPin = 21;
// x,y,zの初期値保存用
int initialx, initialy, initialz;
// 初期値と入力値の誤差保存用
int inx, iny, inz;
int THRESHOLD = 50;
```],
caption: "LEDマトリクスを加速度センサーで操作するプログラム2/3"
)<LEDマトリクスを加速度センサーで操作するプログラム2>
#figure(
sourcecode[```c
void setup()
{
pinMode(xPin, INPUT);
pinMode(yPin, INPUT);
pinMode(zPin, INPUT);
Serial.begin(9600);
delay(1000); // 初期化処理待ち
for (int ano = 0; ano < 8; ano++)
{
pinMode(ANODEPIN[ano], OUTPUT);
digitalWrite(ANODEPIN[ano], LOW);
}
for (int cat = 0; cat < 8; cat++)
{
pinMode(CATHODEPIN[cat], OUTPUT);
digitalWrite(CATHODEPIN[cat], HIGH);
}
// 初期化
initialx = analogRead(xPin);
initialy = analogRead(yPin);
initialz = analogRead(zPin);
}
void loop()
{
inx = analogRead(xPin) - initialx;
iny = analogRead(yPin) - initialy;
inz = analogRead(zPin) - initialz;
if (inx > THRESHOLD)
{
showMatrix(left);
}
else if (inx < -THRESHOLD)
{
showMatrix(right);
}
else if (iny > THRESHOLD)
{
showMatrix(down);
}
else if (iny < -THRESHOLD)
{
showMatrix(up);
}
else
{
showMatrix(normal);
}
}
void showMatrix(int matrix[8][8])
{
for (int cat = 0; cat < 8; cat++)
{
for (int ano = 0; ano < 8; ano++)
{
digitalWrite(ANODEPIN[ano], matrix[ano][cat]);
}
// 三項演算子を他の式でもできる
digitalWrite(CATHODEPIN[cat], LOW);
delay(1);
digitalWrite(CATHODEPIN[cat], HIGH);
}
}
```],
caption: "LEDマトリクスを加速度センサーで操作するプログラム3/3"
)<LEDマトリクスを加速度センサーで操作するプログラム3>
- プログラムの概要
LEDマトリクスを加速度センサーで制御してダイナミック点灯方式を用いてカソードコモンで光らせるプログラムである。
加速度センサーの傾きを利用して、センサが反応している方向を表示するプログラムである。
- プログラムの説明
- 1–59 行目: 設定
- 1,2行目はアノードとカソードのピンを設定している。
- 3–48行目は、LEDマトリクスの点灯パターンを設定している。
- 50–59行目は、加速度センサーのpinを設定している。
- 55–58行目は、加速度の値を変数に保存している。
- 59行目は、閾値を設定している。
- 61–82行目: Setup
- 63–77行目で、各ピンを出力に設定している。
- 79–82行目で、初期値を設定している。
- 84–109行目 : loop関数
- 86–88行目で、加速度の値を取得している。
- 89–108行目で、加速度の値に応じて、点灯パターンを切り替えている。
- 110–123 行目 : showMatrix関数
- 112–123行目で、各列のLEDを点灯させている。
*実験結果*
今回の実験では、軸を @加速度センサの軸 のように定義する。
1番ピンを右下にして4番ピンを右上にして、X軸、Y軸を図のように定義する。
Z軸については、図に対して垂直な方向をZ軸とする。
軸の定義は、データシートをもとにしている。
左に傾けた時は、x軸方向に正の向きに傾けた時と定義をする。
手前に傾けた時は、y軸方向に正の向きに傾けた時と定義をする。
#figure(
image("./img/加速度センサの軸.png",width: 50%),
caption: "加速度センサの軸"
)<加速度センサの軸>
X軸・Y軸に対してプラス軸方向、マイナス軸方向両方に約30度傾けたときに矢印をその軸の方向に表示するように制御を行なった。
その時の境界値の値は50とした。
中央値とアナログ入力値の差が絶対値50付近の時にばたつきが起きてしまう。
初期値を取るときに平らな場所に置いていないと、意図した動作を起こさない。
*考察*
ばたつきが起きた原因は、境界値のみしか利用していないためである。
不感帯(ヒステリシス)を利用するとばたつきを抑えられると考えられる(下記のスケッチを参照)。
初期値を取るときに平らな場所に置いていないと、意図した動作を起こさない原因は、中央値を取るときに、最初の値を利用するためだと考えられる。
最初の値を中央値と置き、そこからのセンサーの値との差を利用して傾きを検知した。
差の閾値は50とする。
加速度のX軸が正に傾いている時は、中央値とアナログ入力値の差が -50 よりも小さくなっている時に右矢印を表示する。
X軸が負に傾いている時は逆の動作を行なう。
加速度のY軸が正に傾いている時は、中央値とアナログ入力値の差が -50 よりも小さくなっている時に上矢印を表示する。
Y軸が負に傾いている時は逆の動作を行なう。
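参考として、不感帯(ヒステリシス)を用いてばたつきを抑える処理のスケッチを以下に示す。これは本実験で実装したものではなく、しきい値(50 で傾きあり、30 で解除)は説明のための仮の値である。

```c
// 不感帯(ヒステリシス)付きの傾き判定の一例(本実験の実装とは別の参考スケッチ)。
// ON_THRESHOLD を超えたら「傾きあり」とし、
// OFF_THRESHOLD を下回るまでは「傾きあり」を維持することで、
// 境界値付近でのばたつきを防ぐ。
const int ON_THRESHOLD  = 50;  // 傾きありと判定するしきい値(仮の値)
const int OFF_THRESHOLD = 30;  // 傾きなしへ戻すしきい値(仮の値)

bool tiltedPlus = false;  // 正方向に傾いているかどうかの状態

bool updateTiltPlus(int value) {
  if (!tiltedPlus && value > ON_THRESHOLD) {
    tiltedPlus = true;   // しきい値を超えたので「傾きあり」
  } else if (tiltedPlus && value < OFF_THRESHOLD) {
    tiltedPlus = false;  // 解除しきい値を下回ったので「傾きなし」
  }
  return tiltedPlus;
}
```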
=== 発展その2(センサとスイッチを用いたLED 制御)
タクトスイッチを使って2 つのモード(A, B)の切替を導入し、発展その1 を拡張する。
モードA はシステム動作停止モードとし、モードB はシステム動作可能モードとする。初期モー
ドは「B. システム動作可能モード」とし、タクトスイッチの長押しで「A. システム動作停止モー
ド」に移行するようにしなさい。
またA ⇒ B のモード切り替え時に「起動」を意味するアニメーション、B ⇒ A の切り替え時には
「停止」を意味するアニメーションを再生する必要がある。このアニメーションは視覚的に充分確
認できる必要がある。なお「長押し」とは約2 秒程度タクトスイッチを押した状態が続く状態とす
る。モードの概要図とアニメーション図をそれぞれ @モードの概要図 、 @アニメーション図 に示す。
#figure(
image("./img/モードの概要図.png",width: 90%),
caption: "モードの概要図"
)<モードの概要図>
#figure(
image("./img/アニメーション図.png",width: 90%),
caption: "アニメーション図"
)<アニメーション図>
#pagebreak() // ページを分ける
*回路図*
@発展その2回路図_1 , @発展その2回路図_2 は、発展その2の回路図を示す。 複数の出力ポートをまとめてコントロールし、LEDマトリクスに図を表示させる回路である。
また、加速度センサとタクトスイッチを用いて、センサが反応している方向を表示し、モードにより動作をさせるか変更を行う。
#figure(
image("./img/発展その2回路図1.png",width: 140%),
caption: "発展その2回路図_1"
)<発展その2回路図_1>
#figure(
image("./img/発展その2回路図2.png",width: 160%),
caption: "発展その2回路図_2"
)<発展その2回路図_2>
#pagebreak() // ページを分ける
*プログラム*
@LEDマトリクスをタクトスイッチを用いて状態を切り替えるプログラム1 , @LEDマトリクスをタクトスイッチを用いて状態を切り替えるプログラム2 , @LEDマトリクスをタクトスイッチを用いて状態を切り替えるプログラム3 ,@LEDマトリクスをタクトスイッチを用いて状態を切り替えるプログラム4 は、加速度センサーとタクトスイッチでLEDマトリクスを操作するプログラムである。
加速度センサーの傾きを利用して、センサが反応している方向を表示し、モードにより動作をさせるか変更を行うプログラムである。
#figure(
sourcecode[```c
// 使用するピンを定義
const int ANODEPIN[] = {2, 3, 4, 5, 6, 7, 8, 9};
const int CATHODEPIN[] = {10, 11, 12, 14, 15, 16, 17, 18};
// 各方向のパターンマトリックスを定義
const int modo_1[8][8] = {
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 1, 1, 1, 1, 0, 0},
{0, 0, 1, 0, 0, 1, 0, 0},
{0, 0, 1, 0, 0, 1, 0, 0},
{0, 0, 1, 1, 1, 1, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0}};
const int modo_2[8][8] = {
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 1, 1, 1, 1, 1, 1, 0},
{0, 1, 0, 0, 0, 0, 1, 0},
{0, 1, 0, 0, 0, 0, 1, 0},
{0, 1, 0, 0, 0, 0, 1, 0},
{0, 1, 0, 0, 0, 0, 1, 0},
{0, 1, 1, 1, 1, 1, 1, 0},
{0, 0, 0, 0, 0, 0, 0, 0}};
const int modo_3[8][8] = {
{1, 1, 1, 1, 1, 1, 1, 1},
{1, 0, 0, 0, 0, 0, 0, 1},
{1, 0, 0, 0, 0, 0, 0, 1},
{1, 0, 0, 0, 0, 0, 0, 1},
{1, 0, 0, 0, 0, 0, 0, 1},
{1, 0, 0, 0, 0, 0, 0, 1},
{1, 0, 0, 0, 0, 0, 0, 1},
{1, 1, 1, 1, 1, 1, 1, 1}};
const int right[8][8] = {
{0, 0, 0, 0, 1, 0, 0, 0},
{0, 0, 0, 0, 0, 1, 0, 0},
{0, 0, 0, 0, 0, 0, 1, 0},
{1, 1, 1, 1, 1, 1, 1, 1},
{1, 1, 1, 1, 1, 1, 1, 1},
{0, 0, 0, 0, 0, 0, 1, 0},
{0, 0, 0, 0, 0, 1, 0, 0},
{0, 0, 0, 0, 1, 0, 0, 0}};
const int left[8][8] = {
{0, 0, 0, 1, 0, 0, 0, 0},
{0, 0, 1, 0, 0, 0, 0, 0},
{0, 1, 0, 0, 0, 0, 0, 0},
{1, 1, 1, 1, 1, 1, 1, 1},
{1, 1, 1, 1, 1, 1, 1, 1},
{0, 1, 0, 0, 0, 0, 0, 0},
{0, 0, 1, 0, 0, 0, 0, 0},
{0, 0, 0, 1, 0, 0, 0, 0}};
```],
caption: "LEDマトリクスをタクトスイッチを用いて状態を切り替えるプログラム1/4"
)<LEDマトリクスをタクトスイッチを用いて状態を切り替えるプログラム1>
#figure(
sourcecode[```c
const int up[8][8] = {
{0, 0, 0, 1, 1, 0, 0, 0},
{0, 0, 1, 1, 1, 1, 0, 0},
{0, 1, 0, 1, 1, 0, 1, 0},
{1, 0, 0, 1, 1, 0, 0, 1},
{0, 0, 0, 1, 1, 0, 0, 0},
{0, 0, 0, 1, 1, 0, 0, 0},
{0, 0, 0, 1, 1, 0, 0, 0},
{0, 0, 0, 1, 1, 0, 0, 0}};
const int down[8][8] = {
{0, 0, 0, 1, 1, 0, 0, 0},
{0, 0, 0, 1, 1, 0, 0, 0},
{0, 0, 0, 1, 1, 0, 0, 0},
{0, 0, 0, 1, 1, 0, 0, 0},
{1, 0, 0, 1, 1, 0, 0, 1},
{0, 1, 0, 1, 1, 0, 1, 0},
{0, 0, 1, 1, 1, 1, 0, 0},
{0, 0, 0, 1, 1, 0, 0, 0}};
const int normal[8][8] = {
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 1, 1, 1, 1, 0, 0},
{0, 0, 1, 1, 1, 1, 0, 0},
{0, 0, 1, 1, 1, 1, 0, 0},
{0, 0, 1, 1, 1, 1, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0}};
// タクトスイッチ用ピンを定義
const int switch_Pin = 13;
// タクトスイッチの状態を保持する変数
int switch_in;
int preStatus = HIGH;
// 起動状態を保持する変数
bool boot_status = true;
// ボタンを押したタイミングの時間を保持する変数
unsigned long startSecond = 0;
unsigned long count = 0;
// x, y, z の初期値と閾値を定義
int initialx, initialy, initialz;
const int xPin = 19;
const int yPin = 20;
const int zPin = 21;
const int THRESHOLD = 50;
// 初期化関数
void setup(){
pinMode(switch_Pin, INPUT_PULLUP);
pinMode(xPin, INPUT);
pinMode(yPin, INPUT);
pinMode(zPin, INPUT);
Serial.begin(9600);
delay(1000);
setupPins(); // ピンの初期化を行う関数
// 初期値を取得
initialx = analogRead(xPin);
initialy = analogRead(yPin);
initialz = analogRead(zPin);
}
```],
caption: "LEDマトリクスをタクトスイッチを用いて状態を切り替えるプログラム2/4"
)<LEDマトリクスをタクトスイッチを用いて状態を切り替えるプログラム2>
#figure(
sourcecode[```c
// ループ関数
void loop()
{
handleButtonPress(); // ボタンの状態を処理する関数
if (boot_status)
{
showArrow();
}
}
// ピンの初期化を行う関数
void setupPins()
{
for (int ano = 0; ano < 8; ano++)
{
pinMode(ANODEPIN[ano], OUTPUT);
digitalWrite(ANODEPIN[ano], LOW);
}
for (int cat = 0; cat < 8; cat++)
{
pinMode(CATHODEPIN[cat], OUTPUT);
digitalWrite(CATHODEPIN[cat], HIGH);
}
}
// ボタンの状態を処理する関数
void handleButtonPress()
{
switch_in = digitalRead(switch_Pin);
Serial.println("switch_in");
Serial.println(switch_in);
if (switch_in == HIGH && preStatus == LOW)
{
startSecond = millis();
count = 0;
}
else if (switch_in == LOW && preStatus == HIGH)
{
count = (millis() - startSecond) / 1000;
Serial.print(count);
Serial.println("秒");
if (count >= 2)
{
if (!boot_status)
{
Serial.println("起動");
showStartupAnimation();
boot_status = true;
}
else
{
Serial.println("終了");
showShutdownAnimation();
boot_status = false;
}
count = 0;
}
}
preStatus = switch_in;
}
```],
caption: "LEDマトリクスをタクトスイッチを用いて状態を切り替えるプログラム3/4"
)<LEDマトリクスをタクトスイッチを用いて状態を切り替えるプログラム3>
#figure(
sourcecode[```c
// 起動アニメーションを表示する関数
void showStartupAnimation()
{
showMatrix(modo_1);
showMatrix(modo_2);
showMatrix(modo_3);
}
// 終了アニメーションを表示する関数
void showShutdownAnimation()
{
showMatrix(modo_3);
showMatrix(modo_2);
showMatrix(modo_1);
}
// 矢印を表示する関数
void showArrow()
{
int inx = analogRead(xPin) - initialx;
int iny = analogRead(yPin) - initialy;
int inz = analogRead(zPin) - initialz;
Serial.print("x:");
Serial.print(inx);
Serial.print(" y:");
Serial.print(iny);
Serial.print(" z:");
Serial.println(inz);
if (inx > THRESHOLD)
{
showMatrix(left);
}
else if (inx < -THRESHOLD)
{
showMatrix(right);
}
else if (iny > THRESHOLD)
{
showMatrix(down);
}
else if (iny < -THRESHOLD)
{
showMatrix(up);
}else{
showMatrix(normal);
}
}
// マトリックスを表示する関数
void showMatrix(const int matrix[8][8]){
for (int i = 0; i < 30; i++){
for (int cat = 0; cat < 8; cat++){
for (int ano = 0; ano < 8; ano++){
digitalWrite(ANODEPIN[ano], matrix[ano][cat]);
}
digitalWrite(CATHODEPIN[cat], LOW);
delay(1);
digitalWrite(CATHODEPIN[cat], HIGH);
}
}
}
```],
caption: "LEDマトリクスをタクトスイッチを用いて状態を切り替えるプログラム4/4"
)<LEDマトリクスをタクトスイッチを用いて状態を切り替えるプログラム4>
- プログラムの概要
LEDマトリクスを加速度センサーで制御してダイナミック点灯方式を用いてカソードコモンで光らせるプログラムである。
加速度センサーの傾きを利用して、センサが反応している方向を表示するプログラムである。
- プログラムの説明
- 1–98 行目: 設定
- 1,2行目はアノードとカソードのピンを設定している。
- 3–77行目は、LEDマトリクスの点灯パターンを設定している。
- 80行目は、スイッチのpinを設定している。
- 83,84行目は、スイッチの状態を保存する変数を設定している。
- 87行目は、起動状態を保存する変数を設定している。
- 90,91行目はボタンを押した時間を保存する変数を設定している。
- 94–98行目は加速度関係の変数を設定している。
- 101–116行目: Setup
- 103–110行目で、各ピンを出力に設定している。
- 113–115行目で、初期値を設定している。
- 119–127行目 : loop関数
- ボタンの処理とボタンの状態によって、起動状態を切り替える処理を行なっている。
- 130–144 行目 : setupPins関数
- 各ピンのpinModeを設定している。
- 145–180 行目 : ボタンの状態を推定する関数
- ボタンの状態を推定する関数を作成している。
- ボタンが押された状態か、ボタンが離された状態かを判断している。
- ボタンの過去の状態と現在の状態を比べ、ボタンが押された状態か、ボタンが離された状態かを判断している。
- また、時間を取得して、ボタンが押された時間と現在の時間の差を取り、2秒間ボタンが押された状態かを判断している。
- 183–196行目 : アニメーションを行う関数
- 起動と終了をアニメーションで表示する関数を作成している。
- 199–230行目 : 矢印を表示する関数
- 加速度センサーの値によって、矢印を表示する関数を作成している。
- 233–246行目 : showMatrix関数
- 235–246行目で、各列のLEDを点灯させている。
*実験結果・考察*
起動状態と終了状態を分けるために変数を用いた。
2秒間ボタンが押された状態を把握するために、millis() を用いてボタン操作の前後の時刻の差を取り、長押しかどうかを判断した。
switch_in と preStatus を用いて、ボタンが押されたタイミングか離されたタイミングかを判別し、それによって2秒間のカウントを開始するか終了するかを決めた(下記の参考スケッチも参照)。
関数に機能を分割して、わかりやすいコードを心がけた。
起動中のアニメーションは発展1を参考にした。
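参考として、「押されている時間」を millis() で測って長押しを判定する一般的なパターンのスケッチを以下に示す。本実験のプログラムそのものではなく、ピン番号 13 や 2000 ms というしきい値は説明のための仮の値である。

```c
// 長押し判定の一般的なパターンの一例(本実験の実装とは別の参考スケッチ)。
// ボタンが押された瞬間に時刻を記録し、離された瞬間に押されていた時間を計算する。
const int BUTTON_PIN = 13;                 // 仮のピン番号(INPUT_PULLUP、押すと LOW)
const unsigned long LONG_PRESS_MS = 2000;  // 長押しとみなす時間(仮の値)

int prevState = HIGH;         // 前回のボタン状態
unsigned long pressedAt = 0;  // 押された時刻

// 長押しが確定したら true を返す
bool checkLongPress() {
  int state = digitalRead(BUTTON_PIN);
  bool longPress = false;
  if (state == LOW && prevState == HIGH) {
    pressedAt = millis();                         // 押された瞬間を記録
  } else if (state == HIGH && prevState == LOW) {
    if (millis() - pressedAt >= LONG_PRESS_MS) {  // 離された瞬間に経過時間を判定
      longPress = true;
    }
  }
  prevState = state;
  return longPress;
}
```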
=== まとめ
LEDマトリクスをアノードコモン、カソードコモンを用いて光らせるプログラムを作成し、実験を行った。
マトリクスはアノードコモンの方が暗く、カソードコモンの方が明るく光ることがわかった。
また、ダイナミック点灯方式を用いることで、複数個のLEDを制御できる。
#pagebreak() // ページを分ける
// bibファイルの指定 //
#bibliography("./bibliography.bib")
|
|
https://github.com/tingerrr/chiral-thesis-fhe | https://raw.githubusercontent.com/tingerrr/chiral-thesis-fhe/main/src/prelude/lovelace.typ | typst | #let (pseudocode, pseudocode-list, no-number, with-line-label, indent, line-label) = {
import "/src/packages.typ" as _pkg
import "/src/utils.typ" as _utils
(
_pkg.lovelace.pseudocode.with(),
_pkg.lovelace.pseudocode-list.with(
line-number-supplement: _utils.i18n(de: [Zeile], en: [Line]),
booktabs: true,
booktabs-stroke: 1pt + black,
stroke: 0.25pt + gray,
),
_pkg.lovelace.no-number.with(),
_pkg.lovelace.with-line-label.with(),
_pkg.lovelace.indent.with(),
_pkg.lovelace.line-label.with(),
)
}
|
|
https://github.com/timon-schelling/uni-phi111-essay-2024-02-15 | https://raw.githubusercontent.com/timon-schelling/uni-phi111-essay-2024-02-15/main/src/main.typ | typst | #import "template/template.typ": *
#let bib-urls = {
sys.inputs.at("urls", default: none) != none
}
#let theme = {
sys.inputs.at("theme", default: none)
}
#set page(fill: rgb("1a1a1a")) if theme == "dark"
#set text(fill: rgb("c7c7c7")) if theme == "dark"
#show: project.with(
title: "Notwendigkeit von Axiomen in der Wissenschaft",
authors: (
"<NAME>",
),
date: "2024-02-15",
title-page-infos: [
#set text(size: 0.97em)
#table(
columns: (8em, auto),
inset: (x: 0em, y: 0.6em),
stroke: none,
[Universität:], [<NAME>ky Universität Oldenburg],
[Fakultät:], [Fakultät IV - Human- und Gesellschaftswissenschaften],
[Studiengang:], [Philosophie/Werte und Normen Zwei-Fächer-Bachelor],
[Semester:], [Wintersemester 2023/2024],
[Veranstaltung:], [Seminar: 4.03.1107 Wissenschaftstheorie - Eine philosophische Einführung],
[Dozent:], [<NAME>],
[Mail:], [#link("mailto:<EMAIL>"), #link("mailto:<EMAIL>")]
)
],
font: "Times New Roman",
bib-urls: bib-urls,
)
#let argument(premises, conclusion) = align(left, [
#set text(style: "italic")
#table(inset: (x: 0.8em, y: 0em), stroke: none)[
#table(
stroke: none,
inset: (x: 0.1em, y: 0.4em),
columns: 3,
..premises.enumerate(start: 1).map(e => {
let (i, premise) = e
([$P_#i$], [$:$], premise)
}).flatten(),
..([$K$], [$:$], conclusion)
)
]
])
= Einleitung
Ob in der Mathematik die Axiome der natürlichen Zahlen und der Mengenlehre, in der Physik die der klassischen Mechanik oder in der Biologie die der Evolutionstheorie, Axiome sind in der Wissenschaft allgegenwärtig.
Axiome sind Annahmen, die nicht bewiesen werden, sondern vorausgesetzt werden.
Warum jedoch setzt die Wissenschaft Axiome voraus, wenn sie doch auf Beweisen und Experimenten beruht und eben die historische Abkehr unter anderem von religiösen Dogmen sein soll? Diese Frage soll in dieser Arbeit behandelt werden. Es soll gezeigt werden, warum Axiome in der Wissenschaft notwendig sind, um überhaupt wissenschaftliche Theorien aufstellen zu können, wie sie sich von Dogmen unterscheiden und warum sie nicht im Widerspruch zu wissenschaftlicher Methodik stehen, sondern diese erst ermöglichen.
= Folgen des Induktionsproblems
Induktives Schließen bedeutet, aus spezifischen Beobachtungen allgemeine Schlussfolgerungen zu ziehen und zu Theorien zu entwickeln. @poser2001[vgl.&115, 117]
Dies steht im Gegensatz zum deduktiven Schließen, bei dem aus allgemeinen Prämissen spezifische Schlüsse gezogen werden. @poser2001[vgl.&115]
Ein klassisches Beispiel für induktives Schließen ist die Beobachtung, dass die Sonne jeden Tag im Osten aufgeht.
Aus einer Vielzahl solcher Beobachtungen könnte man induktiv schließen, dass die Sonne immer im Osten aufgehen wird. @poser2001[vgl.&117]
Diese Art der Schlussfolgerung ist jedoch prinzipiell unsicher, da sie auf der Annahme beruht, dass zukünftige Beobachtungen mit vergangenen Beobachtungen übereinstimmen werden. @poser2001[vgl.&118]
Das Induktionsproblem, das erstmals von <NAME> im 18. Jahrhundert artikuliert wurde, stellt genau diese Annahme infrage. @arnold1983[vgl.&31-32]
Hume argumentierte, dass es keine rationale Rechtfertigung dafür gibt, anzunehmen, dass die Zukunft der Vergangenheit ähneln wird. @arnold1983[vgl.&31]
Mit anderen Worten, nur weil die Sonne bisher immer im Osten aufgegangen ist, gibt es keine logische Notwendigkeit, die garantiert, dass sie es morgen wieder tun wird.
Diese Erkenntnis wirft ernsthafte Zweifel an der Zuverlässigkeit induktiver Schlussfolgerungen auf, da sie letztlich auf einer ungesicherten Annahme über die Regelmäßigkeit der Natur beruhen.
Verschiedene Philosophen und Wissenschaftstheoretiker haben versucht, das Induktionsproblem zu lösen oder zumindest dessen Auswirkungen abzumildern.
Das Induktionsproblem bleibt jedoch ein fundamental ungelöstes Problem. @poser2001[vgl.&115-126]
Es zwingt uns, die Grenzen unseres Wissens anzuerkennen, und zeigt die Notwendigkeit einer kritischen Haltung gegenüber unseren induktiven Schlussfolgerungen auf. @poser2001[vgl.&126-127]
= Axiome als Grundlage wissenschaftlicher Theorien
Wenn also Induktion keine stabile Grundlage für wissenschaftliche Theorien bietet, was soll die Alternative sein?
Deduktion?
Auf Basis wovon?
Für die Deduktion fehlt es an sicheren Wahrheiten, auf deren Grundlage weiter gedacht werden kann.
Der axiomatische Ansatz bietet hier eine Lösung.
Es wird eine Menge von Axiomen festgelegt, im Grunde Aussagen, die als wahr angenommen werden und von denen aus deduktiv weiter gearbeitet werden kann.
Die Axiome sind dabei keineswegs willkürlich, sondern müssen überlegt gewählt werden.
Sie müssen so gewählt werden, dass sie den gewünschten Geltungsbereich abdecken, dabei aber trotzdem so begrenzt wie möglich sind. @hintikka2011[vgl.&74]
Keineswegs dürfen sie Aussagen enthalten, die nicht notwendig sind und über den Geltungsbereich hinausgehen.
Die Axiome einer Theorie grenzen also auch den Geltungsbereich der Theorie ab. @hintikka2011[vgl.&74]
Offensichtlich müssen sie auch konsistent sein.
Axiome eines Systems dürfen sich nicht widersprechen, weder untereinander noch durch die deduktiven Schlüsse, die aus ihnen gezogen werden oder werden können. @hintikka2011[vgl.&74]
Eine axiomatische Theorie definiert mit ihren Axiomen Voraussetzungen, unter denen sie gilt.
Nehmen wir zum Beispiel die Mathematik und die Axiome der natürlichen Zahlen und der Mengenlehre, auf denen sie aufbaut. @hintikka2011[vgl.&80]
Bei Verwendung der gleichen Axiome wird immer logisch beweisbar sein, dass $sqrt(4096) = 64$ ist (dass also die Quadratwurzel aus $4096$ gleich $64$ ist).
Eine auf Axiomen basierende Theorie folgt der Form:
$ A_1, A_2, dots.h, A_n -> T_1, T_2, dots.h, T_n $
Wobei $A_1, A_2, dots.h, A_n$ die Axiome und $T_1, T_2, dots.h, T_n$ die Theoreme sind.
Die Theoreme sind dabei Aussagen, die aus den Axiomen folgen.
Sie folgen deduktiv logisch aus den Axiomen.
Die Wahrheit der Theoreme und mit ihnen die Wahrheit der Theorie hängt also von der Wahrheit der Axiome ab.
Der Inhalt einer Theorie wird so zu relativem Wissen, relativ zu den jeweiligen Axiomen.
Im weiteren Sinne kann man bei diesem Wissen von relativem a priori sprechen. @poser2001[vgl.&36-37]
Nicht als "es könnte auch anders sein", sondern im Sinne von "es ist so, unter der Voraussetzung der Axiome".
Ohne absolute Wahrheiten zu kennen, kann so dennoch Wissen generiert werden.
Wissen, das die Zusammenhänge, Strukturen und Logik des Geltungsbereichs der Theorie beschreibt.
Axiomatisierung macht so wissenschaftliche Arbeit außerhalb von experimenteller Falsifikation möglich.
= Basis aller Naturwissenschaft
In vielen Wissenschaften sind auch Axiome vorausgesetzt, die nicht explizit ausformuliert werden.
Sie sind dann Teil des Kontexts, der in der jeweiligen Wissenschaftsgemeinschaft für selbstverständlich gehalten wird.
Grundlegend muss z.B. jede Naturwissenschaft voraussetzen, dass die Welt (und der spezifische Geltungsbereich) verstehbar ist, um überhaupt anzufangen, sie erklären zu wollen.
Auch diese Annahme ist ein Axiom.
Wenn so eine Theorie aufgestellt wird, ist immer ein "unter der Voraussetzung, dass die Welt sich systematisch wissenschaftlich erklären lässt" axiomatisch impliziert.
Je nach wissenschaftlicher Disziplin werden dann spezifische Axiome angenommen, in der Physik etwa Aussagen über Zeit und Raum.
= Unterschied zu Dogmen
Axiome unterscheiden sich von Dogmen in ihrer Natur. Dogmen sollen unveränderliche, unantastbare Wahrheiten sein, die nicht hinterfragt werden dürfen. Axiome hingegen sind Annahmen, die als wahr angenommen werden, um von ihnen aus weiterzudenken. Sie sind nicht zwangsläufig unveränderlich, sondern können und müssen hinterfragt und auch angepasst werden, wenn sie sich als falsch herausstellen oder stärkere gefunden werden. Axiome müssen konsistent sein und einer logischen Struktur folgen; sie sind damit nicht willkürlich, eine Einschränkung, die für Dogmen nicht gemacht wird. Axiome stehen nicht im Widerspruch zur wissenschaftlichen Methode, sondern sind ihre Voraussetzung.
= Der historische Kontext
Im Folgenden ein auf <NAME> Arbeit @hintikka2011 basierender historischer Überblick über die axiomatische Methode in der Mathematik und Logik.
Die historische Entwicklung der axiomatischen Methode offenbart eine Evolution im Kern der mathematischen und logischen Theoriebildung. @hintikka2011[vgl.&72]
Ursprünglich im antiken Griechenland als ein Weg zur systematischen Grundlegung mathematischer Beweise konzipiert; @hintikka2011[vgl.&72]
erfuhr sie im 19. Jahrhundert eine entscheidende Neukonzeptualisierung, maßgeblich angetrieben durch das Bestreben, Mathematik durch die Präzisierung ihrer Grundlagen neu zu definieren. @hintikka2011[vgl.&69-70]
Die Metatheorie eines Axiomensystems, also die Untersuchung der Eigenschaften und der Struktur des Systems selbst, wurde zu einem zentralen Bestandteil der axiomatischen Methode. @hintikka2011[vgl.&70-71]
Diese Entwicklung reflektiert das Verständnis, dass die Ableitung von Theoremen aus Axiomen nicht lediglich neue Tiefeninformationen erzeugt, sondern vielmehr Oberflächeninformationen liefert, die für das Verständnis der Struktur und der inneren Logik des Systems unerlässlich sind. @hintikka2011[vgl.&69, 78]
Die Auseinandersetzung mit der Axiomatisierung der Logik und der Mengenlehre verdeutlichte die Grenzen und Herausforderungen, die mit der axiomatischen Methode einhergehen. @hintikka2011[vgl.&75]
Insbesondere die Versuche, die Logik axiomatisch zu fassen, stießen auf das Problem, dass ein logisches System zur Untersuchung seiner selbst verwendet wird, was eine Reihe von Paradoxien und Unklarheiten mit sich brachte. @hintikka2011[vgl.&75-76]
Ähnlich verhält es sich mit der Axiomatisierung der Mengenlehre, deren Modelle nicht adäquat die Strukturen von Mengen widerspiegelten, wie ursprünglich beabsichtigt. @hintikka2011[vgl.&79]
Die historische Betrachtung der axiomatischen Methode offenbart nicht nur ihre zentrale Bedeutung für die Entwicklung der Mathematik und Logik, sondern auch die tiefgreifenden philosophischen und methodologischen Fragen, die mit ihr verbunden sind. @hintikka2011[vgl.&84-85] Diese reichen von der Natur mathematischer Wahrheit und der Rolle von Axiomen in der Theoriebildung bis hin zu den Grenzen logischer und mathematischer Systeme. @hintikka2011[vgl.&84-85]
= Mangel an Alternativen
Es gibt zahlreiche scheinbare Alternativen zur axiomatischen Methode, die sich jedoch bei genauerer Betrachtung als unzureichend oder im Grunde äquivalent herausstellen. @poser2001[vgl.&]
Entweder sie scheitern am Induktionsproblem, sind im Kern selbst axiomatisch, es wird nur nicht explizit von Axiomen gesprochen, oder sie widersprechen der axiomatischen Methode nicht und formulieren z.B. nur aus, wie eine axiomatische Theorie durch eine andere ersetzt werden kann (Paradigmenwechsel, hermeneutische Spiralbewegung des Verstehens). @poser2001[vgl.&157 ff., 230-231]
Zum Beispiel wird die Annahme, dass die Welt verstehbar ist und sich systematisch wissenschaftlich erklären lässt, implizit vorausgesetzt.
Oder es wird einfach ein anderes Wort für Axiome verwendet, z.B. Festsetzung.
Es scheint der Wissenschaft also an Alternativen zum axiomatischen Ansatz zu mangeln.
= Fazit
Axiome erweisen sich als unverzichtbare Basis wissenschaftlicher Theorien.
Ein Axiom ist eine Annahme, die als wahr angenommen wird, um von ihr aus weiterzuarbeiten, also eine Bedingung, unter der die aus ihr gefolgerten Theoreme gelten.
Die axiomatische Methode ermöglicht es, auch ohne absolute Wahrheiten systematisches und kohärentes Wissen zu generieren, was durch induktive Schlussfolgerungen allein nicht möglich wäre.
Es wird eine Menge von Axiomen gewählt, die den gewünschten Geltungsbereich abdecken.
Aus ihnen werden dann deduktiv Theoreme abgeleitet, die die Struktur und Logik des Geltungsbereichs der Theorie beschreiben.
Wichtig dabei: Axiome sind keineswegs dogmatisch, da sie nicht als wahr festgelegt sind, sondern nur innerhalb einer Theorie als wahr angenommen werden und durchaus hinterfragt werden sollen und angepasst werden können.
Diese Flexibilität erlaubt es, Theorien zu bilden, im Bewusstsein der relativen Natur des daraus gewonnenen Wissens.
Axiome sind also nicht im Widerspruch zur wissenschaftlichen Methode, sondern in großen Teilen ihre Voraussetzung und auch Ausdruck der wissenschaftlichen Bestrebung, die Welt systematisch und kohärent zu erklären.
|
|
https://github.com/Myriad-Dreamin/tinymist | https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/syntaxes/textmate/tests/unit/markup/escape.typ | typst | Apache License 2.0 | [\]]
\#1pt
a \
a \ b \
#let x = 1; \
#x \
#x |
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/meta/query-figure.typ | typst | Apache License 2.0 | // Test a list of figures.
---
#set page(
paper: "a7",
numbering: "1 / 1",
margin: (bottom: 1cm, rest: 0.5cm),
)
#set figure(numbering: "I")
#show figure: set image(width: 80%)
= List of Figures
#locate(it => {
let elements = query(selector(figure).after(it), it)
for it in elements [
Figure
#numbering(it.numbering,
..counter(figure).at(it.location())):
#it.caption.body
#box(width: 1fr, repeat[.])
#counter(page).at(it.location()).first() \
]
})
#figure(
image("/files/glacier.jpg"),
caption: [Glacier melting],
)
#figure(
rect[Just some stand-in text],
kind: image,
supplement: "Figure",
caption: [Stand-in text],
)
#figure(
image("/files/tiger.jpg"),
caption: [Tiger world],
)
|
https://github.com/freundTech/typst-forge | https://raw.githubusercontent.com/freundTech/typst-forge/main/fontawesome/test.typ | typst | #import "@typst-forge/fontawesome:0.2.1": fa-icon
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-2000.typ | typst | Apache License 2.0 | #let data = (
("EN QUAD", "Zs", 0),
("EM QUAD", "Zs", 0),
("EN SPACE", "Zs", 0),
("EM SPACE", "Zs", 0),
("THREE-PER-EM SPACE", "Zs", 0),
("FOUR-PER-EM SPACE", "Zs", 0),
("SIX-PER-EM SPACE", "Zs", 0),
("FIGURE SPACE", "Zs", 0),
("PUNCTUATION SPACE", "Zs", 0),
("THIN SPACE", "Zs", 0),
("HAIR SPACE", "Zs", 0),
("ZERO WIDTH SPACE", "Cf", 0),
("ZERO WIDTH NON-JOINER", "Cf", 0),
("ZERO WIDTH JOINER", "Cf", 0),
("LEFT-TO-RIGHT MARK", "Cf", 0),
("RIGHT-TO-LEFT MARK", "Cf", 0),
("HYPHEN", "Pd", 0),
("NON-BREAKING HYPHEN", "Pd", 0),
("FIGURE DASH", "Pd", 0),
("EN DASH", "Pd", 0),
("EM DASH", "Pd", 0),
("HORIZONTAL BAR", "Pd", 0),
("DOUBLE VERTICAL LINE", "Po", 0),
("DOUBLE LOW LINE", "Po", 0),
("LEFT SINGLE QUOTATION MARK", "Pi", 0),
("RIGHT SINGLE QUOTATION MARK", "Pf", 0),
("SINGLE LOW-9 QUOTATION MARK", "Ps", 0),
("SINGLE HIGH-REVERSED-9 QUOTATION MARK", "Pi", 0),
("LEFT DOUBLE QUOTATION MARK", "Pi", 0),
("RIGHT DOUBLE QUOTATION MARK", "Pf", 0),
("DOUBLE LOW-9 QUOTATION MARK", "Ps", 0),
("DOUBLE HIGH-REVERSED-9 QUOTATION MARK", "Pi", 0),
("DAGGER", "Po", 0),
("DOUBLE DAGGER", "Po", 0),
("BULLET", "Po", 0),
("TRIANGULAR BULLET", "Po", 0),
("ONE DOT LEADER", "Po", 0),
("TWO DOT LEADER", "Po", 0),
("HORIZONTAL ELLIPSIS", "Po", 0),
("HYPHENATION POINT", "Po", 0),
("LINE SEPARATOR", "Zl", 0),
("PARAGRAPH SEPARATOR", "Zp", 0),
("LEFT-TO-RIGHT EMBEDDING", "Cf", 0),
("RIGHT-TO-LEFT EMBEDDING", "Cf", 0),
("POP DIRECTIONAL FORMATTING", "Cf", 0),
("LEFT-TO-RIGHT OVERRIDE", "Cf", 0),
("RIGHT-TO-LEFT OVERRIDE", "Cf", 0),
("NARROW NO-BREAK SPACE", "Zs", 0),
("PER MILLE SIGN", "Po", 0),
("PER TEN THOUSAND SIGN", "Po", 0),
("PRIME", "Po", 0),
("DOUBLE PRIME", "Po", 0),
("TRIPLE PRIME", "Po", 0),
("REVERSED PRIME", "Po", 0),
("REVERSED DOUBLE PRIME", "Po", 0),
("REVERSED TRIPLE PRIME", "Po", 0),
("CARET", "Po", 0),
("SINGLE LEFT-POINTING ANGLE QUOTATION MARK", "Pi", 0),
("SINGLE RIGHT-POINTING ANGLE QUOTATION MARK", "Pf", 0),
("REFERENCE MARK", "Po", 0),
("DOUBLE EXCLAMATION MARK", "Po", 0),
("INTERROBANG", "Po", 0),
("OVERLINE", "Po", 0),
("UNDERTIE", "Pc", 0),
("CHARACTER TIE", "Pc", 0),
("CARET INSERTION POINT", "Po", 0),
("ASTERISM", "Po", 0),
("HYPHEN BULLET", "Po", 0),
("FRACTION SLASH", "Sm", 0),
("LEFT SQUARE BRACKET WITH QUILL", "Ps", 0),
("RIGHT SQUARE BRACKET WITH QUILL", "Pe", 0),
("DOUBLE QUESTION MARK", "Po", 0),
("QUESTION EXCLAMATION MARK", "Po", 0),
("EXCLAMATION QUESTION MARK", "Po", 0),
("TIRONIAN SIGN ET", "Po", 0),
("REVERSED PILCROW SIGN", "Po", 0),
("BLACK LEFTWARDS BULLET", "Po", 0),
("BLACK RIGHTWARDS BULLET", "Po", 0),
("LOW ASTERISK", "Po", 0),
("REVERSED SEMICOLON", "Po", 0),
("CLOSE UP", "Po", 0),
("TWO ASTERISKS ALIGNED VERTICALLY", "Po", 0),
("COMMERCIAL MINUS SIGN", "Sm", 0),
("SWUNG DASH", "Po", 0),
("INVERTED UNDERTIE", "Pc", 0),
("FLOWER PUNCTUATION MARK", "Po", 0),
("THREE DOT PUNCTUATION", "Po", 0),
("QUADRUPLE PRIME", "Po", 0),
("FOUR DOT PUNCTUATION", "Po", 0),
("FIVE DOT PUNCTUATION", "Po", 0),
("TWO DOT PUNCTUATION", "Po", 0),
("FOUR DOT MARK", "Po", 0),
("DOTTED CROSS", "Po", 0),
("TRICOLON", "Po", 0),
("VERTICAL FOUR DOTS", "Po", 0),
("MEDIUM MATHEMATICAL SPACE", "Zs", 0),
("WORD JOINER", "Cf", 0),
("FUNCTION APPLICATION", "Cf", 0),
("INVISIBLE TIMES", "Cf", 0),
("INVISIBLE SEPARATOR", "Cf", 0),
("INVISIBLE PLUS", "Cf", 0),
(),
("LEFT-TO-RIGHT ISOLATE", "Cf", 0),
("RIGHT-TO-LEFT ISOLATE", "Cf", 0),
("FIRST STRONG ISOLATE", "Cf", 0),
("POP DIRECTIONAL ISOLATE", "Cf", 0),
("INHIBIT SYMMETRIC SWAPPING", "Cf", 0),
("ACTIVATE SYMMETRIC SWAPPING", "Cf", 0),
("INHIBIT ARABIC FORM SHAPING", "Cf", 0),
("ACTIVATE ARABIC FORM SHAPING", "Cf", 0),
("NATIONAL DIGIT SHAPES", "Cf", 0),
("NOMINAL DIGIT SHAPES", "Cf", 0),
)
|
https://github.com/phinixplus/docs | https://raw.githubusercontent.com/phinixplus/docs/master/source/template.typ | typst | Other | #import "/source/utilities.typ": date-now
#import "/source/config.typ": font-sans, font-serif
#import "/source/config.typ": text-fg-color, inv-text-fg-color, heading-fg-color, table-bg-color
#import "/source/config.typ": text-weight-normal, text-weight-bold, heading-weight
#import "/source/config.typ": text-size-1, text-size-2, text-size-3, text-size-4, text-size-5
#let license-link = "https://creativecommons.org/licenses/by-nc-sa/4.0/"
#let discord-link = "https://discord.gg/EFKDF3VE9C"
#let template(doc-part, version, keywords, content) = {
set document(title: [The PHINIX+ Documentation])
set document(author: "<NAME>")
set document(date: datetime.today())
set document(keywords: keywords)
set page(paper: "a4", numbering: "1")
set page(margin: (x: 25mm, y: 20mm))
set text(font: font-serif, size: text-size-2)
set text(weight: text-weight-normal, fill: text-fg-color)
set text(hyphenate: false)
set par(justify: true)
set par(leading: 0.6em)
set block(breakable: true)
show link: x => text(fill: blue, underline(x))
show ref: x => text(fill: blue, x)
set heading(numbering: "1.1 ")
show heading: set text(font: font-sans)
show heading: set text(weight: heading-weight, fill: heading-fg-color)
show heading.where(level: 3): set text(size: text-size-2)
show heading.where(level: 2): set text(size: text-size-3)
show heading.where(level: 1): x => {
pagebreak(weak: true)
set text(size: text-size-4)
let numbers = counter(heading).display()
let title = underline(offset: 0.1em, x.body)
if x.location().page() < 3 { par[#title] }
else { par[#numbers #title] }
v(0.8em, weak: true)
}
show figure: set align(center)
show figure: set block(breakable: true)
show figure: x => block[#x.body#v(0.5em, weak: true)#x.caption]
show figure.caption: x => {
set text(size: text-size-1, weight: text-weight-bold)
[#x.supplement #x.counter.display(x.numbering):]
text(weight: text-weight-normal)[ #x.body]
}
set table(stroke: 0.2mm + table-bg-color)
set table(fill: (_, row) => if row == 0 {table-bg-color} else {white})
show table: set align(center)
show table.cell.where(y: 0): set text(weight: text-weight-bold)
show table.cell.where(y: 0): set text(fill: inv-text-fg-color)
page(numbering: none, margin: (top: 80mm, bottom: 20mm))[#{
set align(center + top)
set text(font: font-sans)
set text(weight: text-weight-bold, fill: luma(0))
image("/assets/logo.svg", width: 25%)
text(size: text-size-5)[The PHINIX+ System Architecture Documentation \ ]
text(size: text-size-4)[#doc-part]
set align(center + bottom)
set text(font: font-serif, size: text-size-1)
set text(weight: text-weight-normal, fill: luma(80))
grid(columns: (1fr, 1fr, 1fr), align: horizon, stroke: 0pt,
[
Come discuss with us at the official PHINIX+ Discord server: \
#link(discord-link, box(image("/assets/discord.svg", width: 5%)))
#link(discord-link)
], [
Version: #version \ Date: #date-now \ by <NAME>
], [
#link(license-link, box(image("/assets/license.svg", width: 50%)))
\ Licensed under CC BY-NC-SA 4.0 \ #link(license-link)
]
)
}]
outline(indent: 0.75em, fill: line(length: 100%, stroke: 0.2mm))
pagebreak(weak: true)
[#content]
}
|
https://github.com/typst-community/valkyrie | https://raw.githubusercontent.com/typst-community/valkyrie/main/src/schemas/enumerations.typ | typst | Other | #import "../types.typ" as z;
#let papersize = z.choice.with(
description: "paper size",
(
"a0",
"a1",
"a2",
"a3",
"a4",
"a5",
"a6",
"a7",
"a8",
"a9",
"a10",
"a11",
"iso-b1",
"iso-b2",
"iso-b3",
"iso-b4",
"iso-b5",
"iso-b6",
"iso-b7",
"iso-b8",
"iso-c3",
"iso-c4",
"iso-c5",
"iso-c6",
"iso-c7",
"iso-c8",
"din-d3",
"din-d4",
"din-d5",
"din-d6",
"din-d7",
"din-d8",
"sis-g5",
"sis-e5",
"ansi-a",
"ansi-b",
"ansi-c",
"ansi-d",
"ansi-e",
"arch-a",
"arch-b",
"arch-c",
"arch-d",
"arch-e1",
"arch-e",
"jis-b0",
"jis-b1",
"jis-b2",
"jis-b3",
"jis-b4",
"jis-b5",
"jis-b6",
"jis-b7",
"jis-b8",
"jis-b9",
"jis-b10",
"jis-b11",
"sac-d0",
"sac-d1",
"sac-d2",
"sac-d3",
"sac-d4",
"sac-d5",
"sac-d6",
"iso-id-1",
"iso-id-2",
"iso-id-3",
"asia-f4",
"jp-shiroku-ban-4",
"jp-shiroku-ban-5",
"jp-shiroku-ban-6",
"jp-kiku-4",
"jp-kiku-5",
"jp-business-card",
"cn-business-card",
"eu-business-card",
"fr-tellière",
"fr-couronne-écriture",
"fr-couronne-édition",
"fr-raisin",
"fr-carré",
"fr-jésus",
"uk-brief",
"uk-draft",
"uk-foolscap",
"uk-quarto",
"uk-crown",
"uk-book-a",
"uk-book-b",
"us-letter",
"us-legal",
"us-tabloid",
"us-executive",
"us-foolscap-folio",
"us-statement",
"us-ledger",
"us-oficio",
"us-gov-letter",
"us-gov-legal",
"us-business-card",
"us-digest",
"us-trade",
"newspaper-compact",
"newspaper-berliner",
"newspaper-broadsheet",
"presentation-16-9",
"presentation-4-3",
),
)
|
https://github.com/rikhuijzer/phd-thesis | https://raw.githubusercontent.com/rikhuijzer/phd-thesis/main/title-page.typ | typst | The Unlicense |
// Duplicate of main doc, to allow stand-alone compilation.
#set page(
// Slow Productivity is 14.61 x 21.69 cm (almost A5).
// European printers all use A5.
width: 148mm, // 15.24 cm
height: 210mm, // 22.86 cm
margin: (top: 13mm, bottom: 16mm),
)
#set text(font: "Georgia", size: 18pt)
#image("images/rugr_logoen_zwart_rgb.jpg", width: 50%)
#v(2em)
#align(center)[
*Predicting Dropout in Special Forces Selection*
]
#v(3em)
#set text(size: 10pt)
#align(center)[
*PhD thesis*
]
#v(3em)
#set text(size: 8pt)
#align(center)[
to obtain the degree of PhD at the\
University of Groningen\
on the authority of the\
Rector Magnificus Prof. <NAME>\
and in accordance with\
the decision by the College of Deans.\
This thesis will be defended in public on\
Monday 17 February 2025 at 16.15 hours
]
#v(3em)
#align(center)[
by
]
#v(3em)
#align(center)[
*<NAME>*
]
#align(center)[
born on
#include("birthdate.typ")
]
#pagebreak()
#set text(font: "Cambria", size: 8pt)
*Promotores*\
Prof. <NAME>\
Prof. <NAME>\
\
*Copromotor*\
Dr. <NAME>\
\
*Assessment Committee*\
Prof. <NAME>\
Prof. <NAME>\
Prof. <NAME>\
|
https://github.com/JeyRunner/tuda-typst-templates | https://raw.githubusercontent.com/JeyRunner/tuda-typst-templates/main/templates/tudapub/common/props.typ | typst | MIT License |
#let tud_heading_line_thin_stroke = 0.75pt
#let tud_page_margin_title_page = (
top: 15mm,
left: 15mm,
right: 15mm,
bottom: 15mm - 1mm // should be 20mm according to guidelines
)
#let tud_page_margin_small = tud_page_margin_title_page
// Same margins as the default ones in the tudpub latex template
#let tud_page_margin_big = (
top: 30mm, //35.25mm + 0.05mm,//+ 0.02mm,
left: 31.5mm,
right: 31.5mm,
bottom: 56mm
)
#let tud_page_margin_medium = (
top: 15mm,
left: 25mm,
right: 25mm,
bottom: 15mm - 1mm // should be 20mm according to guidelines
) |
https://github.com/Myriad-Dreamin/tinymist | https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/crates/tinymist-query/src/fixtures/post_type_check/text_stroke3.typ | typst | Apache License 2.0 | #let tmpl2(stroke) = text(stroke: stroke)
#tmpl2((/* position */)) |
https://github.com/benjft/CV | https://raw.githubusercontent.com/benjft/CV/master/other.typ | typst | == Additional Learning
- MSSQL Mastering Performance Tuning Brent Ozar Online Courses
- MSSQL One Day Fundamentals, Brent Ozar Online Courses |
|
https://github.com/jomaway/typst-teacher-templates | https://raw.githubusercontent.com/jomaway/typst-teacher-templates/main/ttt-worksheet/template/worksheet.typ | typst | MIT License | #import "@preview/ttt-worksheet:0.1.0": *
#show: worksheet.with(
topic: "LESSON TOPIC",
subject: "math",
authors: ("unknown",),
)
#title[The todays lesson is about ...]
#goal[
- Write nice documents
]
#set outline(indent:auto)
#outline()
#pagebreak()
#abstract()[
What is this lesson about.
]
= First section
|
https://github.com/DashieTM/ost-5semester | https://raw.githubusercontent.com/DashieTM/ost-5semester/main/experiment/weeks/week7.typ | typst | #import "../../utils.typ": *
#align(
center,
[#image("../../Screenshots/2023_11_02_11_42_54.png", width: 70%)],
)
#section("Laplace Experiments")
Simple probability:
#set text(14pt)
$P(A) = |A|/|Omega|$
A: Probability of a certain outcome -> dice 6
$Omega$: All possibilities -> 6 different possibilities Aka the probability of
recieving a 6 is 1/6, this never changes for this experiment.
#set text(11pt)
#subsection("Ordered Probability")
This means that the order in which the result appears matters -> for example a
game with dices might take the first dice as a 10digit dice.
#subsubsection("With memory")
The best result here is throwing a single dice, the same number may appear again
and again.
#set text(14pt)
$K = n^k$
- K = amount of different possibilities
- k = amount of dice rolls
- n = amount of sides of a dice -> 6 -> $6^k$
#text(
red,
)[Note, k and n are sometimes the same -> seat order for 10 people with free seat
choice -> 10 seats:\
In this case you can simply use: *$K = 10!$*]
#set text(11pt)
#subsubsection("Without Memory")
The best example for this is a lock in which no digit may repeat: each specific combination
can only occur once, so for a lock with 2 positions -> 1-2 is a specific combination, next: 1-3,
1-4, etc. (1-1 is not allowed, since digits may not repeat).
#set text(14pt)
$K = n!/(n-k)!$
$K = product_(i = n-k+1)^(n) i$
- K = amount of different possibilities
- k = amount of sections in the lock
- n = amount of numbers in the lock -> 0-9 lock with 5 sections -> $10!/(10-5)! = 30240$
#set text(11pt)
#subsection("Un-ordered Probability")
#subsubsection("With Memory")
tournaments -> each team competes against another once, order doesn't matter ->
unlike a lock\
#set text(14pt)
$K = (product_(i = n-k+1)^(n) i)/(k!)$
$K = (n!)/(k! * (n-k)!)$ -> *Problem! the three factorials are compute heavy! not
a practical formula*
- K = amount of different team combinations
- k = amount of teams within one combination -> 2 here
- n = amount of teams
- example calculation with 25 teams: $K = (25!)/(2! * (25-2)!) = 300$
#set text(11pt)
#subsubsection("Without Memory")
Same idea as above but the same combination can appear again -> randomly
choosing places.\
Example: 6 groups compete for 3 seats within a parliament(or something), one
group can win multiple seats(this is the difference to above!).\
#set text(14pt)
$K = binom(n + k - 1, k)$
- K = amount of different team combinations
- k = amount of items within one combination -> 3 here (one item per seat)
- n = amount of items -> 6 here
- example $K = binom(6 + 3 - 1, 3) = binom(8, 3) = 56$ -> 56 different combinations possible
#set text(11pt)
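A quick numerical sanity check of the worked examples above (a minimal C sketch; the helper names are made up for this illustration):

```c
#include <stdio.h>

// falling factorial: n * (n-1) * ... * (n-k+1), i.e. n!/(n-k)!
unsigned long long falling(unsigned long long n, unsigned long long k) {
    unsigned long long r = 1;
    for (unsigned long long i = 0; i < k; i++) r *= n - i;
    return r;
}

// binomial coefficient C(n, k), computed without huge factorials
unsigned long long binom(unsigned long long n, unsigned long long k) {
    unsigned long long r = 1;
    for (unsigned long long i = 1; i <= k; i++) r = r * (n - k + i) / i;
    return r;
}

int main(void) {
    unsigned long long dice = 1;
    for (int i = 0; i < 3; i++) dice *= 6;           // ordered, repetition allowed: 6^k (k = 3 chosen arbitrarily)
    printf("6^3         = %llu\n", dice);            // 216
    printf("10!/(10-5)! = %llu\n", falling(10, 5));  // 30240 (5-section lock, no repeats)
    printf("C(25, 2)    = %llu\n", binom(25, 2));    // 300 (tournament pairings)
    printf("C(6+3-1, 3) = %llu\n", binom(8, 3));     // 56 (3 seats among 6 groups)
    return 0;
}
```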
|
|
https://github.com/EpicEricEE/typst-based | https://raw.githubusercontent.com/EpicEricEE/typst-based/master/README.md | markdown | MIT License | # based
A package for encoding and decoding in base64, base32, and base16.
## Usage
The package comes with three submodules: `base64`, `base32`, and `base16`. All of them have an `encode` and `decode` function. The package also provides the function aliases
- `encode64` / `decode64`,
- `encode32` / `decode32`, and
- `encode16` / `decode16`.
Both base64 and base32 allow you to choose whether to use padding or not (`pad` parameter). It is enabled by default. Base64 also allows you to encode with the URL-safe alphabet (`url` parameter), while base32 allows you to encode or decode with the "extended hex" alphabet (`hex` parameter). Both options are disabled by default. The base16 encoder uses lowercase letters, the decoder is case-insensitive.
You can encode strings, arrays and bytes. The `encode` function will return a string, while the `decode` function will return bytes.
```typ
#import "@preview/based:0.1.0": base64, base32, base16
#table(
columns: 3,
table.header[*Base64*][*Base32*][*Base16*],
raw(base64.encode("Hello world!")),
raw(base32.encode("Hello world!")),
raw(base16.encode("Hello world!")),
str(base64.decode("SGVsbG8gd29ybGQh")),
str(base32.decode("JBSWY3DPEB3W64TMMQQQ====")),
str(base16.decode("48656C6C6F20776F726C6421"))
)
```

|
https://github.com/Robotechnic/alchemist | https://raw.githubusercontent.com/Robotechnic/alchemist/master/src/drawer.typ | typst | MIT License | #import "default.typ": default
#import "@preview/cetz:0.2.2"
#import "utils.typ"
#import cetz.draw: *
#let default-anchor = (type: "coord", anchor: (0, 0))
#let max-int = 9223372036854775807
#let default-ctx = (
// general
last-anchor: default-anchor, // keep trace of the place to draw
group-id: 0, // id of the current group
link-id: 0, // id of the current link
links: (), // list of links to draw
hooks: (:), // list of hooks
hooks-links: (), // list of links to hooks
relative-angle: 0deg, // current global relative angle
angle: 0deg, // current global angle
// branch
first-branch: false, // true if the next element is the first in a branch
// cycle
first-molecule: none, // name of the first molecule in the cycle
in-cycle: false, // true if we are in a cycle
cycle-faces: 0, // number of faces in the current cycle
faces-count: 0, // number of faces already drawn
cycle-step-angle: 0deg, // angle between two faces in the cycle
record-vertex: false, // true if the cycle should keep track of its vertices
vertex-anchors: (), // list of the cycle vertices
)
/// Queue the previous link for drawing (if it has not been drawn yet) and set the new last anchor
#let set-last-anchor(ctx, anchor) = {
if ctx.last-anchor.type == "link" and not ctx.last-anchor.at("drew", default: false) {
ctx.links.push(ctx.last-anchor)
}
(..ctx, last-anchor: anchor)
}
/// Return the index to choose if the link connection is not overridden
#let link-molecule-index(angle, end, count, vertical) = {
if not end {
if vertical and utils.angle-in-range-strict(angle, 0deg, 180deg) {
0
} else if utils.angle-in-range-strict(angle, 90deg, 270deg) {
0
} else {
count
}
} else {
if vertical and utils.angle-in-range-strict(angle, 0deg, 180deg) {
count
} else if utils.angle-in-range-strict(angle, 90deg, 270deg) {
count
} else {
0
}
}
}
#let molecule-link-anchor(name, id, count) = {
if count <= id {
panic("The last molecule only has " + str(count) + " connections")
}
if id == -1 {
id = count - 1
}
(name: name, anchor: (str(id), "center"))
}
#let link-molecule-anchor(name: none, id, count) = {
if id >= count {
panic("This molecule only has " + str(count) + " anchors")
}
if id == -1 {
panic("The index of the molecule to link to must be defined")
}
if name == none {
(name: str(id), anchor: "center")
} else {
(name: name, anchor: (str(id), "center"))
}
}
/// Resolve the angle of an element from its `relative`, `absolute` or `angle` argument, falling back to `default`
#let angle-from-ctx(ctx, object, default) = {
if "relative" in object {
object.at("relative") + ctx.relative-angle
} else if "absolute" in object {
object.at("absolute")
} else if "angle" in object {
object.at("angle") * ctx.config.angle-increment
} else {
default
}
}
#let cycle-angle(ctx) = {
if ctx.in-cycle {
if ctx.faces-count == 0 {
ctx.relative-angle - ctx.cycle-step-angle - (180deg - ctx.cycle-step-angle) / 2
} else {
ctx.relative-angle - (180deg - ctx.cycle-step-angle) / 2
}
} else {
ctx.angle
}
}
/// Draw a triangle between two molecules
#let cram(from, to, ctx, args) = {
let (ctx, (from-x, from-y, _)) = cetz.coordinate.resolve(ctx, from)
let (ctx, (to-x, to-y, _)) = cetz.coordinate.resolve(ctx, to)
let base-length = utils.convert-length(
ctx,
args.at("base-length", default: .8em),
)
line(
(from-x, from-y - base-length / 2),
(from-x, from-y + base-length / 2),
(to-x, to-y),
close: true,
stroke: args.at("stroke", default: none),
fill: args.at("fill", default: black),
)
}
/// Draw a dashed triangle between two molecules
#let dashed-cram(from, to, length, ctx, args) = {
let (ctx, (from-x, from-y, _)) = cetz.coordinate.resolve(ctx, from)
let (ctx, (to-x, to-y, _)) = cetz.coordinate.resolve(ctx, to)
let base-length = utils.convert-length(
ctx,
args.at("base-length", default: .8em),
)
hide({
line(name: "top", (from-x, from-y - base-length / 2), (to-x, to-y - 0.05))
line(
name: "bottom",
(from-x, from-y + base-length / 2),
(to-x, to-y + 0.05),
)
})
let stroke = args.at("stroke", default: black + .05em)
let dash-gap = utils.convert-length(ctx, args.at("dash-gap", default: .3em))
let dash-width = stroke.thickness
let converted-dash-width = utils.convert-length(ctx, dash-width)
let length = utils.convert-length(ctx, length)
let dash-count = int(calc.ceil(length / (dash-gap + converted-dash-width)))
let incr = 100% / dash-count
let percentage = 0%
while percentage <= 100% {
line(
(name: "top", anchor: percentage),
(name: "bottom", anchor: percentage),
stroke: stroke,
)
percentage += incr
}
}
/// Draw the atoms of a molecule one after the other, left to right (or top to bottom if the molecule is vertical)
#let draw-molecule-text(mol) = {
for (id, eq) in mol.atoms.enumerate() {
let name = str(id)
// draw atoms of the group one after the other from left to right
content(
name: name,
anchor: if mol.vertical {
"north"
} else {
"west"
},
(
if id == 0 {
(0, 0)
} else if mol.vertical {
(to: str(id - 1) + ".south", rel: (0, -.2em))
} else {
str(id - 1) + ".east"
}
),
{
show math.equation: math.upright
eq
},
)
id += 1
}
}
#let draw-molecule(mol, ctx) = {
let name = mol.name
if name != none {
if name in ctx.hooks {
panic("Molecule with name " + name + " already exists")
}
ctx.hooks.insert(name, mol)
} else {
name = "molecule" + str(ctx.group-id)
}
let (group-anchor, side, coord) = if ctx.last-anchor.type == "coord" {
("west", true, ctx.last-anchor.anchor)
} else if ctx.last-anchor.type == "link" {
if ctx.last-anchor.to == none {
ctx.last-anchor.to = link-molecule-index(
ctx.last-anchor.angle,
true,
mol.count - 1,
mol.vertical,
)
}
let group-anchor = link-molecule-anchor(ctx.last-anchor.to, mol.count)
ctx.last-anchor.to-name = name
(group-anchor, false, ctx.last-anchor.name + "-end-anchor")
} else {
panic("A molecule must be linked to a coord or a link")
}
ctx = set-last-anchor(
ctx,
(type: "molecule", name: name, count: mol.at("count"), vertical: mol.vertical),
)
ctx.group-id += 1
(
ctx,
{
group(
anchor: if side {
group-anchor
} else {
"from" + str(ctx.group-id)
},
name: name,
{
set-origin(coord)
anchor("default", (0, 0))
draw-molecule-text(mol)
if not side {
anchor("from" + str(ctx.group-id), group-anchor)
}
},
)
},
)
}
#let angle-override(angle, ctx) = {
if ctx.in-cycle {
("offset": "left")
} else {
(:)
}
}
#let draw-last-cycle-link(link, ctx) = {
let from-name = none
let from-pos = none
if ctx.last-anchor.type == "molecule" {
from-name = ctx.last-anchor.name
from-pos = (name: from-name, anchor: "center")
if from-name not in ctx.hooks {
ctx.hooks.insert(from-name, ctx.last-anchor)
}
} else if ctx.last-anchor.type == "link" {
from-pos = ctx.last-anchor.name + "-end-anchor"
} else {
panic("A cycle link must be linked to a molecule or a link")
}
ctx.links.push((
type: "link",
name: link.at("name", default: "link" + str(ctx.link-id)),
from-pos: from-pos,
from-name: from-name,
to-name: ctx.first-molecule,
from: link.at("from", default: none),
to: link.at("to", default: none),
override: (offset: "left"),
draw: link.draw,
))
ctx.link-id += 1
(ctx, ())
}
#let draw-link(link, ctx) = {
let link-angle = 0deg
if ctx.in-cycle {
if ctx.faces-count == ctx.cycle-faces - 1 and ctx.first-molecule != none {
return draw-last-cycle-link(link, ctx)
}
if ctx.faces-count == 0 {
link-angle = ctx.relative-angle
} else {
link-angle = ctx.relative-angle + ctx.cycle-step-angle
}
} else {
link-angle = angle-from-ctx(ctx, link, ctx.angle)
}
link-angle = utils.angle-correction(link-angle)
ctx.relative-angle = link-angle
let override = angle-override(link-angle, ctx)
let to-connection = link.at("to", default: none)
let from-connection = none
let from-name = none
let from-pos = if ctx.last-anchor.type == "coord" {
ctx.last-anchor.anchor
} else if ctx.last-anchor.type == "molecule" {
from-connection = link-molecule-index(
link-angle,
false,
ctx.last-anchor.count - 1,
ctx.last-anchor.vertical,
)
from-connection = link.at("from", default: from-connection)
from-name = ctx.last-anchor.name
molecule-link-anchor(
ctx.last-anchor.name,
from-connection,
ctx.last-anchor.count,
)
} else if ctx.last-anchor.type == "link" {
ctx.last-anchor.name + "-end-anchor"
} else {
panic("Unknown anchor type " + ctx.last-anchor.type)
}
let length = link.at("atom-sep", default: ctx.config.atom-sep)
let link-name = link.at("name", default: "link" + str(ctx.link-id))
if ctx.record-vertex {
if ctx.faces-count == 0 {
ctx.vertex-anchors.push(from-pos)
}
if ctx.faces-count < ctx.cycle-faces - 1 {
ctx.vertex-anchors.push(link-name + "-end-anchor")
}
}
ctx = set-last-anchor(
ctx,
(
type: "link",
name: link-name,
override: override,
from-pos: from-pos,
from-name: from-name,
from: from-connection,
to-name: none,
to: to-connection,
angle: link-angle,
draw: link.draw,
),
)
ctx.link-id += 1
(
ctx,
{
let end-anchor = (to: from-pos, rel: (angle: link-angle, radius: length))
if ctx.config.debug {
line(from-pos, end-anchor, stroke: blue + .1em)
}
group(
name: link-name + "-end-anchor",
{
anchor("default", end-anchor)
},
)
},
)
}
/// Insert missing vertices in the cycle
#let missing-vertices(ctx, vertex, cetz-ctx) = {
let atom-sep = utils.convert-length(cetz-ctx, ctx.config.atom-sep)
for i in range(ctx.cycle-faces - vertex.len()) {
let (x, y, _) = vertex.last()
vertex.push((
x + atom-sep * calc.cos(ctx.relative-angle + ctx.cycle-step-angle * (i + 1)),
y + atom-sep * calc.sin(ctx.relative-angle + ctx.cycle-step-angle * (i + 1)),
0,
))
}
vertex
}
/// Compute the centroid of the cycle vertices and the smallest half-distance between opposite vertices (used as the radius for the inner arc)
#let cycle-center-radius(ctx, cetz-ctx, vertex) = {
let min-radius = max-int
let center = (0, 0)
let faces = ctx.cycle-faces
let odd = calc.rem(faces, 2) == 1
for (i, v) in vertex.enumerate() {
if (ctx.config.debug) {
circle(v, radius: .1em, fill: blue, stroke: blue)
}
let (x, y, _) = v
center = (center.at(0) + x, center.at(1) + y)
if odd {
let opposite1 = calc.rem-euclid(i + calc.div-euclid(faces, 2), faces)
let opposite2 = calc.rem-euclid(i + calc.div-euclid(faces, 2) + 1, faces)
let (ox1, oy1, _) = vertex.at(opposite1)
let (ox2, oy2, _) = vertex.at(opposite2)
let radius = utils.distance-between(cetz-ctx, (x, y), ((ox1 + ox2) / 2, (oy1 + oy2) / 2)) / 2
if radius < min-radius {
min-radius = radius
}
} else {
let opposite = calc.rem-euclid(i + calc.div-euclid(faces, 2), faces)
let (ox, oy, _) = vertex.at(opposite)
let radius = utils.distance-between(cetz-ctx, (x, y), (ox, oy)) / 2
if radius < min-radius {
min-radius = radius
}
}
}
((center.at(0) / vertex.len(), center.at(1) / vertex.len()), min-radius)
}
#let draw-cycle-center-arc(ctx, name, center-arc) = {
let faces = ctx.cycle-faces
let vertex = ctx.vertex-anchors
get-ctx(cetz-ctx => {
let (cetz-ctx, ..vertex) = cetz.coordinate.resolve(cetz-ctx, ..vertex)
if vertex.len() < faces {
vertex = missing-vertices(ctx, cetz-ctx)
}
let (center, min-radius) = cycle-center-radius(ctx, cetz-ctx, vertex)
if name != none {
group(
name: name,
{
anchor("default", center)
},
)
}
if center-arc != none {
if min-radius == max-int {
panic("The cycle has no opposite vertices")
}
if ctx.cycle-faces > 4 {
min-radius *= center-arc.at("radius", default: 0.7)
} else {
min-radius *= center-arc.at("radius", default: 0.5)
}
let start = center-arc.at("start", default: 0deg)
let end = center-arc.at("end", default: 360deg)
let delta = center-arc.at("delta", default: end - start)
center = (
center.at(0) + min-radius * calc.cos(start),
center.at(1) + min-radius * calc.sin(start),
)
arc(
center,
..center-arc,
radius: min-radius,
start: start,
delta: delta,
)
}
})
}
#let draw-hooks-links(links, name, ctx, from-mol) = {
for (to-name, (link,)) in links {
if to-name not in ctx.hooks {
panic("Molecule " + to-name + " does not exist")
}
let to-hook = ctx.hooks.at(to-name)
if to-hook.type == "molecule" {
ctx.links.push((
type: "link",
name: link.at("name", default: "link" + str(ctx.link-id)),
from-pos: if from-mol {
(name: name, anchor: "center")
} else {
name + "-end-anchor"
},
from-name: if from-mol {
name
},
to-name: to-name,
from: none,
to: none,
override: angle-override(ctx.angle, ctx),
draw: link.draw,
))
} else if to-hook.type == "hook" {
ctx.links.push((
type: if from-mol {
"mol-hook-link"
} else {
"link-hook-link"
},
name: link.at("name", default: "link" + str(ctx.link-id)),
from-pos: if from-mol {
(name: name, anchor: "center")
} else {
name + "-end-anchor"
},
from-name: if from-mol {
name
},
to-name: to-name,
to-hook: to-hook.hook,
override: angle-override(ctx.angle, ctx),
draw: link.draw,
))
} else {
panic("Unknown hook type " + ctx.hook.at(to-name).type)
}
ctx.link-id += 1
}
ctx
}
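/// Merge the state produced inside a branch or cycle (hooks, links, ids) back
/// into the parent context.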
#let update-parent-context(parent-ctx, ctx) = {
let last-anchor = if parent-ctx.last-anchor != ctx.last-anchor {
(
..parent-ctx.last-anchor,
drew: true,
)
} else {
parent-ctx.last-anchor
}
(
..parent-ctx,
last-anchor: last-anchor,
hooks: ctx.hooks,
hooks-links: ctx.hooks-links,
links: ctx.links,
group-id: ctx.group-id,
link-id: ctx.link-id,
)
}
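/// Main traversal: draw every element of the skeleton body (molecules, links,
/// branches, cycles and hooks) while threading the drawing context through.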
#let draw-molecules-and-link(ctx, body) = {
let drawing = ()
let cetz-drawing = ()
(
{
for element in body {
if ctx.in-cycle and ctx.faces-count >= ctx.cycle-faces {
continue
}
if type(element) == function {
cetz-drawing.push(element)
} else if "type" not in element {
panic("Element " + str(element) + " has no type")
} else if element.type == "molecule" {
if ctx.first-branch {
panic("A molecule can not be the first element in a cycle")
}
(ctx, drawing) = draw-molecule(element, ctx)
drawing
if element.links.len() != 0 {
ctx.hooks.insert(ctx.last-anchor.name, element)
ctx.hooks-links.push((element.links, ctx.last-anchor.name, true))
}
} else if element.type == "link" {
ctx.first-branch = false
(ctx, drawing) = draw-link(element, ctx)
ctx.faces-count += 1
drawing
if element.links.len() != 0 {
ctx.hooks-links.push((element.links, ctx.last-anchor.name, false))
}
} else if element.type == "branch" {
let angle = angle-from-ctx(ctx, element.args, cycle-angle(ctx))
let (drawing, branch-ctx, cetz-rec) = draw-molecules-and-link(
(
..ctx,
in-cycle: false,
first-branch: true,
cycle-step-angle: 0,
angle: angle,
),
element.draw,
)
ctx = update-parent-context(ctx, branch-ctx)
cetz-drawing += cetz-rec
drawing
} else if element.type == "cycle" {
let cycle-step-angle = 360deg / element.faces
let angle = angle-from-ctx(ctx, element.args, none)
if angle == none {
if ctx.in-cycle {
angle = ctx.relative-angle - (180deg - cycle-step-angle)
if ctx.faces-count != 0 {
angle += ctx.cycle-step-angle
}
} else if ctx.relative-angle == 0deg and ctx.angle == 0deg and not element.args.at(
"align",
default: false,
) {
angle = cycle-step-angle - 90deg
} else {
angle = ctx.relative-angle - (180deg - cycle-step-angle) / 2
}
}
let first-molecule = none
if ctx.last-anchor.type == "molecule" {
first-molecule = ctx.last-anchor.name
if first-molecule not in ctx.hooks {
ctx.hooks.insert(first-molecule, ctx.last-anchor)
}
}
let name = none
let record-vertex = false
if "name" in element.args {
name = element.args.at("name")
record-vertex = true
} else if "arc" in element.args {
record-vertex = true
}
let (drawing, cycle-ctx, cetz-rec) = draw-molecules-and-link(
(
..ctx,
in-cycle: true,
cycle-faces: element.faces,
faces-count: 0,
first-branch: true,
cycle-step-angle: cycle-step-angle,
relative-angle: angle,
first-molecule: first-molecule,
angle: angle,
record-vertex: record-vertex,
vertex-anchors: (),
),
element.draw,
)
ctx = update-parent-context(ctx, cycle-ctx)
cetz-drawing += cetz-rec
drawing
if record-vertex {
draw-cycle-center-arc(cycle-ctx, name, element.args.at("arc", default: none))
}
} else if element.type == "hook" {
if element.name in ctx.hooks {
panic("Hook " + element.name + " already exists")
}
if ctx.last-anchor.type == "link" {
ctx.hooks.insert(element.name, (type: "hook", hook: ctx.last-anchor.name + "-end-anchor"))
} else if ctx.last-anchor.type == "coord" {
ctx.hooks.insert(element.name, (type: "hook", hook: ctx.last-anchor.anchor))
} else {
panic("A hook must placed after a link or at the beginning of the skeleton")
}
} else {
panic("Unknown element type " + element.type)
}
}
if ctx.last-anchor.type == "link" and not ctx.last-anchor.at("drew", default: false) {
ctx.links.push(ctx.last-anchor)
}
},
ctx,
cetz-drawing,
)
}
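/// The four helpers below compute the semi-axes (padded by delta) of the ellipse
/// used to attach links around an atom, one per quadrant.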
#let anchor-north-east(cetz-ctx, (x, y, _), delta, molecule, id) = {
let (cetz-ctx, (_, b, _)) = cetz.coordinate.resolve(
cetz-ctx,
(name: molecule, anchor: (id, "north")),
)
let (cetz-ctx, (a, _, _)) = cetz.coordinate.resolve(
cetz-ctx,
(name: molecule, anchor: (id, "east")),
)
let a = (a - x) + delta
let b = (b - y) + delta
(a, b)
}
#let anchor-north-west(cetz-ctx, (x, y, _), delta, molecule, id) = {
let (cetz-ctx, (_, b, _)) = cetz.coordinate.resolve(
cetz-ctx,
(name: molecule, anchor: (id, "north")),
)
let (cetz-ctx, (a, _, _)) = cetz.coordinate.resolve(
cetz-ctx,
(name: molecule, anchor: (id, "west")),
)
let a = (x - a) + delta
let b = (b - y) + delta
(a, b)
}
#let anchor-south-west(cetz-ctx, (x, y, _), delta, molecule, id) = {
let (cetz-ctx, (_, b, _)) = cetz.coordinate.resolve(
cetz-ctx,
(name: molecule, anchor: (id, "south")),
)
let (cetz-ctx, (a, _, _)) = cetz.coordinate.resolve(
cetz-ctx,
(name: molecule, anchor: (id, "west")),
)
let a = (x - a) + delta
let b = (y - b) + delta
(a, b)
}
#let anchor-south-east(cetz-ctx, (x, y, _), delta, molecule, id) = {
let (cetz-ctx, (_, b, _)) = cetz.coordinate.resolve(
cetz-ctx,
(name: molecule, anchor: (id, "south")),
)
let (cetz-ctx, (a, _, _)) = cetz.coordinate.resolve(
cetz-ctx,
(name: molecule, anchor: (id, "east")),
)
let a = (a - x) + delta
let b = (y - b) + delta
(a, b)
}
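/// Return the point on the ellipse around atom `id` of `molecule` where a link
/// arriving with the given angle should attach.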
#let molecule-anchor(ctx, cetz-ctx, angle, molecule, id) = {
let delta = utils.convert-length(cetz-ctx, ctx.config.delta)
let (cetz-ctx, center) = cetz.coordinate.resolve(
cetz-ctx,
(name: molecule, anchor: (id, "center")),
)
let (a, b) = if utils.angle-in-range(angle, 0deg, 90deg) {
anchor-north-east(cetz-ctx, center, delta, molecule, id)
} else if utils.angle-in-range(angle, 90deg, 180deg) {
anchor-north-west(cetz-ctx, center, delta, molecule, id)
} else if utils.angle-in-range(angle, 180deg, 270deg) {
anchor-south-west(cetz-ctx, center, delta, molecule, id)
} else {
anchor-south-east(cetz-ctx, center, delta, molecule, id)
}
// https://www.petercollingridge.co.uk/tutorials/computational-geometry/finding-angle-around-ellipse/
let angle = if utils.angle-in-range-inclusive(angle, 0deg, 90deg) or utils.angle-in-range-strict(
angle,
270deg,
360deg,
) {
calc.atan(calc.tan(angle) * a / b)
} else {
calc.atan(calc.tan(angle) * a / b) - 180deg
}
if a == 0 or b == 0 {
panic("Ellipse " + ellipse + " has no width or height")
}
(center.at(0) + a * calc.cos(angle), center.at(1) + b * calc.sin(angle))
}
#let calculate-mol-mol-link-anchors(ctx, cetz-ctx, link) = {
let to-pos = (name: link.to-name, anchor: "center")
if link.to == none or link.from == none {
let angle = utils.angle-between(cetz-ctx, link.from-pos, to-pos)
link.angle = angle
if link.from == none {
link.from = link-molecule-index(
angle,
false,
ctx.hooks.at(link.from-name).count - 1,
ctx.hooks.at(link.from-name).vertical,
)
}
if link.to == none {
link.to = link-molecule-index(
angle,
true,
ctx.hooks.at(link.to-name).count - 1,
ctx.hooks.at(link.to-name).vertical,
)
}
}
let start = molecule-anchor(ctx, cetz-ctx, link.angle, link.from-name, str(link.from))
let end = molecule-anchor(ctx, cetz-ctx, link.angle + 180deg, link.to-name, str(link.to))
((start, end), utils.angle-between(cetz-ctx, start, end))
}
#let calculate-link-mol-anchors(ctx, cetz-ctx, link) = {
if link.to == none {
let angle = utils.angle-correction(
utils.angle-between(
cetz-ctx,
link.from-pos,
(name: link.to-name, anchor: "center"),
),
)
link.to = link-molecule-index(
angle,
true,
ctx.hooks.at(link.to-name).count - 1,
ctx.hooks.at(link.to-name).vertical,
)
link.angle = angle
} else if "angle" not in link {
link.angle = utils.angle-correction(
utils.angle-between(
cetz-ctx,
link.from-pos,
(name: link.to-name, anchor: (str(link.to), "center")),
),
)
}
let end-anchor = molecule-anchor(
ctx,
cetz-ctx,
link.angle + 180deg,
link.to-name,
str(link.to),
)
(
(
link.from-pos,
end-anchor,
),
utils.angle-between(cetz-ctx, link.from-pos, end-anchor),
)
}
#let calculate-mol-link-anchors(ctx, cetz-ctx, link) = {
(
(
molecule-anchor(ctx, cetz-ctx, link.angle, link.from-name, str(link.from)),
link.name + "-end-anchor",
),
link.angle,
)
}
#let calculate-mol-hook-link-anchors(ctx, cetz-ctx, link) = {
let hook = ctx.hooks.at(link.to-name)
let angle = utils.angle-correction(
utils.angle-between(cetz-ctx, link.from-pos, hook.hook),
)
let from = link-molecule-index(
angle,
false,
ctx.hooks.at(link.from-name).count - 1,
ctx.hooks.at(link.from-name).vertical,
)
let start-anchor = molecule-anchor(ctx, cetz-ctx, angle, link.from-name, str(from))
(
(
start-anchor,
hook.hook,
),
utils.angle-between(cetz-ctx, start-anchor, hook.hook),
)
}
#let calculate-link-hook-link-anchors(ctx, cetz-ctx, link) = {
let hook = ctx.hooks.at(link.to-name)
(
(
link.from-pos,
hook.hook,
),
utils.angle-between(cetz-ctx, link.from-pos, hook.hook),
)
}
#let calculate-link-link-anchors(link) = {
((link.from-pos, link.name + "-end-anchor"), link.angle)
}
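/// Select the proper anchor computation depending on what the link connects
/// (molecule to molecule, molecule to link end, or a hook).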
#let calculate-link-anchors(ctx, cetz-ctx, link) = {
if link.type == "mol-hook-link" {
calculate-mol-hook-link-anchors(ctx, cetz-ctx, link)
} else if link.type == "link-hook-link" {
calculate-link-hook-link-anchors(ctx, cetz-ctx, link)
} else if link.to-name != none and link.from-name != none {
calculate-mol-mol-link-anchors(ctx, cetz-ctx, link)
} else if link.to-name != none {
calculate-link-mol-anchors(ctx, cetz-ctx, link)
} else if link.from-name != none {
calculate-mol-link-anchors(ctx, cetz-ctx, link)
} else {
calculate-link-link-anchors(link)
}
}
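/// Draw every registered link: resolve its two anchors, then call its draw
/// function inside a group rotated along the link direction.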
#let draw-link-decoration(ctx) = {
(
get-ctx(cetz-ctx => {
for link in ctx.links {
let ((from, to), angle) = calculate-link-anchors(ctx, cetz-ctx, link)
if ctx.config.debug {
circle(from, radius: .1em, fill: red, stroke: red)
circle(to, radius: .1em, fill: red, stroke: red)
}
let length = utils.distance-between(cetz-ctx, from, to)
hide(line(from, to, name: link.name))
group({
set-origin(from)
rotate(angle)
(link.draw)(length, cetz-ctx, override: link.override)
})
}
}),
ctx,
)
}
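/// Draw a whole skeleton: complete the configuration with defaults, draw the
/// molecules and links, then the hook links and link decorations, optionally
/// wrapped in a named group.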
#let draw-skeleton(config: default, name: none, mol-anchor: none, body) = {
for (key, value) in default {
if key not in config {
config.insert(key, value)
}
}
let ctx = default-ctx
ctx.angle = config.base-angle
ctx.config = config
let (atoms, ctx, cetz-drawing) = draw-molecules-and-link(ctx, body)
for (links, name, from-mol) in ctx.hooks-links {
ctx = draw-hooks-links(links, name, ctx, from-mol)
}
let (links, _) = draw-link-decoration(ctx)
if name == none {
atoms
links
cetz-drawing
} else {
group(
name: name,
anchor: mol-anchor,
{
anchor("default", (0,0))
atoms
links
cetz-drawing
},
)
}
}
/// setup a molecule skeleton drawer
#let skeletize(debug: false, background: none, config: (:), body) = {
if "debug" not in config {
config.insert("debug", debug)
}
cetz.canvas(
debug: debug,
background: background,
draw-skeleton(config: config, body),
)
}
|
https://github.com/AU-Master-Thesis/thesis | https://raw.githubusercontent.com/AU-Master-Thesis/thesis/main/lib/aos-soa.typ | typst | MIT License | #let aos(dict) = {
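// Convert a dictionary of equal-length arrays ("structure of arrays") into an
// array of records ("array of structures").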
for (k, v) in dict {
assert(type(v) == array)
}
let array-lens = dict.values().map(arr => arr.len())
assert(array-lens.all(len => len == array-lens.at(0)))
let output = ()
for i in range(array-lens.at(0)) {
let record = (:)
for (k, v) in dict {
record.insert(k, v.at(i))
}
output.push(record)
}
return output
}
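// Convert an array of records with identical keys ("array of structures") back
// into a dictionary of parallel arrays ("structure of arrays").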
#let soa(dictionaries) = {
assert(type(dictionaries) == array)
for dict in dictionaries {
assert(type(dict) == dictionary)
}
let keys = dictionaries.at(0).keys()
assert(dictionaries.map(dict => dict.keys()).all(k => k == keys))
let output = keys.fold((:), (acc, k) => {
acc.insert(k, ())
acc
})
for dict in dictionaries {
for (k, v) in dict {
let arr = output.at(k)
arr.push(v)
output.insert(k, arr)
}
}
return output
}
// #aos(soa(((a: 2, b: 3), (a: 4, b: 5))))
|
https://github.com/jamesrswift/pixel-pipeline | https://raw.githubusercontent.com/jamesrswift/pixel-pipeline/main/tests/pipeline/factory/playground/test.typ | typst | The Unlicense | #import "/tests/preamble.typ": *
#import pixel: *
#let plotter = pipeline.factory(
layers: (
layer.debug(),
),
)
#let duff-cmd(pos: (0,0)) = {
pipeline.primitives.assembled(
tags: ("hello", )
)
pipeline.primitives.positioned(
positions: (:
root: pos
)
)
pipeline.primitives.rendered(
content: [Hello]
)
}
#plotter({
duff-cmd()
duff-cmd(pos: (0, 1))
duff-cmd(pos: (1, 2))
}) |
https://github.com/Dherse/masterproef | https://raw.githubusercontent.com/Dherse/masterproef/main/masterproef/main.typ | typst | #import "./ugent-template.typ": *
#let code-icon(icon) = text(
font: "tabler-icons",
fallback: false,
weight: "regular",
size: 8pt,
icon,
)
// Instantiate the template
#show: ugent-template.with(
authors: ("<NAME>",),
title: "A software language approach for describing and programming photonics hardware",
languages: (
rust: (name: "Rust", icon: code-icon("\u{fa53}"), color: rgb("#CE412B")),
c: (name: "C", icon: none, color: rgb("#283593")),
python: (name: "Python", icon: code-icon("\u{ed01}"), color: rgb("#FFD43B")),
verilog-ams: (name: "Verilog-AMS", icon: none, color: rgb(30, 100, 200)),
vhdl: (name: "VHDL", icon: text(font: "UGent Panno Text")[</>], color: gray),
spice: (name: "SPICE", icon: none, color: rgb("#283593")),
phos: (
name: ref(label("phos")),
icon: code-icon("\u{ed8a}"),
color: rgb("#de8f6e"),
),
js: (name: "Tokens", icon: code-icon("\u{ecd7}"), color: rgb("#656255")),
typ: (
name: gloss("bc", short: true, long: false),
icon: code-icon("\u{f7ff}"),
color: rgb("#6f006f"),
),
typc: (
name: gloss("bc", short: true, long: false),
icon: code-icon("\u{f7ff}"),
color: rgb("#6f006f"),
),
),
)
#show raw.where(lang: none, block: true): it => context {
let old = state("codly-numbers-format").get()
codly(numbers-format: (..) => none)
it
codly(numbers-format: old)
}
// Load additional syntax definitions.
#set raw(syntaxes: (
"./assets/Phos.sublime-syntax",
"./assets/SystemVerilog.sublime-syntax",
"./assets/VHDL.sublime-syntax",
))
// Here we include your preface, go and edit it!
#include "./parts/preface.typ"
// Here we now enter the *real* document
#show: ugent-body
// Here we include your chapters, go and edit them!
#include "./parts/0_introduction.typ"
#include "./parts/1_background.typ"
#include "./parts/2_ecosystem.typ"
#include "./parts/3_translation.typ"
#include "./parts/4_phos.typ"
#include "./parts/5_examples.typ"
#include "./parts/6_future_work.typ"
#include "./parts/7_conclusion.typ"
// Here we display the bibliography loaded from `references.bib`
#ugent-bibliography()
// Here begins the appendix, go and edit it!
#include "./parts/appendix.typ" |
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/suiji/0.2.0/src/lib.typ | typst | Apache License 2.0 | #import "random.typ": gen-rng, integers, random, uniform, normal, shuffle, choice
|
https://github.com/AHaliq/CategoryTheoryReport | https://raw.githubusercontent.com/AHaliq/CategoryTheoryReport/main/chapters/chapter2/mandatory.typ | typst | #import "../../preamble/lemmas.typ": *
#import "../../preamble/catt.typ": *
#import "@preview/fletcher:0.5.1" as fletcher: diagram, node, edge
#exercise("3 From Wk1")[Let $bb(C)$ be a category with binary products]
#proof(name: [is the projection $pi_X : X times Y -> X$ an epimorphism in general? Is it a monomorphism?])[
It is a monomorphism since there is a unique $u$ into $X times Y$ by UMP of product. It is not necessarily an epimorphism as there can be more than one distinct morphism out of $X$
]
#grid(
columns: (1fr, 1fr),
align: (left, left),
proof(name: [show $(f,g) comp h = (f comp h, g comp h)$])[
#figure(
diagram(
cell-size: 10mm,
$
& W
edge("ddl", f comp h, ->)
edge("ddr", g comp h, ->)
edge("d",h,->)
edge("dd", (f comp h, g comp h), "-->", bend: #20deg) \
& Z edge("d", (f,g), "-->") \
X edge("ur", f, <-) &
X times Y
edge("l", pi_X, ->)
edge("r", pi_Y, ->) &
Y edge("ul", g, <-)
$,
),
)
$
"UMP"(f comp h, g comp h) = (f,g) comp h
$],
proof(name: [Let $arr(f,Z,X)$ and $arr(g,W,Y)$ be ...])[
#figure(
diagram(
cell-size: 10mm,
$
&
A
edge("dl", h_Z, ->)
edge("dr", h_W, ->)
edge("d", u', "-->") \
Z edge("d",f,->) &
Z times W
edge("l",pi_Z,->)
edge("r",pi_W,->)
edge("d",u,"-->") &
W edge("d",g,->) \
X &
X times Y
edge("l",pi_X,->)
edge("r",pi_Y,->) &
Y
$,
),
)
$
"UMP"(h_Z,h_W) &= u' \
"UMP"(f comp h_Z, g comp h_W) &= u comp u'
$
by uniqueness of $u'$ and $u comp u'$, $u$ too is unique
],
)
#proof(name: [Show $(f times g) comp (h times k) = (f comp h) times (g comp k)$])[
#figure(
diagram(
cell-size: 10mm,
$
& T
edge("d", u, "-->")
edge("dl", u_A, ->)
edge("dr", u_B, ->)
& \
A
edge("d", h, ->) &
A times B
edge("l", pi_A, ->)
edge("r", pi_B, ->)
edge("d", (h,k), "-->") &
B
edge("d", k, ->) \
Z edge("d", f, ->) &
Z times W
edge("l", pi_Z, ->)
edge("r", pi_W, ->)
edge("d", (f,g), "-->") &
W
edge("d",g,->) \
X &
X times Y
edge("l",pi_X,->)
edge("r",pi_Y,->) &
Y
$,
),
)
$
"UMP"(u_A, u_B) &= u \
"UMP"(h comp u_A, k comp u_B) &= (h,k) comp u \
"UMP"(f comp h comp u_A, g comp k comp u_B) &= (f,g) comp (h,k) comp u \
(f comp h comp u_A) times (g comp k comp u_B) &= (f,g) comp (h,k) comp u \
((f comp h) times (g comp k)) comp u &= (f,g) comp (h,k) comp u \
(f comp h) times (g comp k) &= (f times g) comp (h times k) \
$
]
#exercise("1")[Show that every poset considered as a catgory has equalizers and coequalizers of all pairs of morphisms]
_Remark_: SA Chapter 2 did not cover equalizers and coequalizers, we will revisit the question in the future.
#exercise("2")[Let the functor $arr(F,bb(C),bb(D))$ be an isomorphism of categories. Show the following.]
#proof(name: [if $bb(C)$ has binary products so does $bb(D)$ and $F$ preserves them])[
A product is a $"UMP"(arr(f,X,A),arr(g,X,B))= arr(u,X,A times B)$
#figure(
table(
columns: 2,
align: (right, left),
[Product UMP], [definition],
[existence],
$arr(F(f),F(X),F(A)),arr(F(g),F(X),F(B)), exists arr(F(u),F(X),F(A times B))$,
[uniqueness], $exists u', F(u). F^(-1)(u') = F^(-1)(F(u)) => u' = F(u)$,
),
)
since $u$ is unique by the definition of product and $F$ is an isomorphism, any other $arr(u',F(X),F(A times B))$ satisfying the same conditions is carried by $F^(-1)$ to that unique $u$, so $u' = F(u)$
]
#proof(name: [if $bb(C)$ has binary coproducts so does $bb(D)$ and $F$ preserves them])[
same argument as above
]
_Remark_: skipping the equalizers and coequalizers for now
#exercise("3")[Let $bb(C)$ be a category and $X$ an object of $bb(C)$. Show the following]
#proof(name: [The slice category $slice(bb(C),X)$ always has a terminal object])[
recall that the objects are morphisms $arr(f,\_,X)$ and the morphisms are maps between their domain objects. Every object $arr(f,\_,X)$ thus has a morphism to the object $arr(id_X,X,X)$, namely $f$ itself, and commutativity forces any such morphism to equal $f$, so $arr(id_X,X,X)$ is a terminal object
#figure(
diagram(
cell-size: 10mm,
$
Y
edge("d", f, ->)
edge("r", f, ->) &
X edge("dl", id_X,->)\
X
$,
),
)
]
#proof(name: [If $bb(C)$ has an initial object then so does $slice(bb(C),X)$])[
if $0$ is the initial object of $bb(C)$ then $arr(0_X,0,X)$ is the initial object. If $arr(f,Y,X)$ is an object of $slice(bb(C),X)$ then $arr(0_Y,0,Y)$ is the morphism from the initial object of $slice(bb(C),X)$
#figure(
diagram(
cell-size: 10mm,
$
0
edge("r", 0_Y, ->)
edge("d", 0_X, ->) &
Y
edge("dl", f, ->) \
X
$,
),
)
]
_Remark_: skipping the equalizers question for now |
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/043%20-%20Innistrad%3A%20Midnight%20Hunt/009_Episode%205%3A%20Night%20Closes%20'Round.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Episode 5: Night Closes 'Round",
set_name: "Innistrad: Midnight Hunt",
story_date: datetime(day: 29, month: 09, year: 2021),
author: "<NAME>",
doc
)
#emph["Mind yourself in the woods, Arlinn," he says.]
#emph[Her father's voice is strong and firm, but there's a bit of a creak to it—like an oak whose branches yawn under pressure. There he stands in his workshop—she can see it so clearly now—surrounded by the work of his own two hands. Holy symbols flood the walls like moths covering a lantern. He does not look up at her.]
When she blinks, he is gone.
#emph[Years later, and after traveling the Multiverse, she's finally grown brave enough to go back home. Boulder and Streak know the weight of what she means to do. And, of course, if her human parents reject her, at least she has her pack. Their presence is constant, their loyalty hard-won. She offers them guidance, and they offer her belonging. There's strength in that.]
#emph[So, she makes her way up the hill to the old forge.]
#emph[But there isn't anything there. A husk meets her: blackened foundations jutting from the earth. A wall that she once scribbled on as a child.]
#emph[The villagers don't recognize her, so they're leery of telling her what happened, but she finds out eventually.]
#emph[A fire. Must have been some accident with the forge. The whole house went up in flame. What a shame—can't be helped.]
She blinks. Back to the present.
#figure(image("009_Episode 5: Night Closes 'Round/01.jpg", width: 100%), caption: [Duel for Dominance | Art by: Ryan Pancoast], supplement: none, numbering: none)
Tovolar's before her. No matter how much his shape changes, his eyes stay the same: burning, quick, bright as brands. He bares his teeth. A smile, she thinks.
This isn't the first time. There was another years ago when they met in a semi-circle of their brethren. She tried and failed to kill him; he tried and failed to keep her. He wore the white stag's pelt. The fight ruined it she remembers, vaguely, tearing it from his shoulders in a fit of rage. She was angry then, too.
But not as angry as she is now.
A<NAME>'s only thought is to wipe the smile from Tovolar's face.
Forward. Powerful legs send her flying through the air; her jaws are ready to clamp down on his throat. Instead, they meet a forearm raised just in time. Blood fills her mouth all the same—thick and rich with copper. Her nostrils flare even as Tovolar twists, using her momentum to send her to the ground.
But you cannot keep a hungry wolf down for long. The moment her back paws hit the earth, she's already back up, already lurching toward him again.
He spreads his arms wide. The scar across his chest carries to this form, too. When she blinks, there is no white in his fur, and the wound is bleeding red, red, red.
"Come home," he says.
Was that what he said then?
It doesn't matter.
A howl leaves her throat. She goes for him again, clawing at him, the muscles in her chest and arms straining.
He doesn't move. Claws rend fur and flesh, tearing him open anew, and still, he does not stop smiling. How?
There's no time to dwell; he's on her now, lunging at her waist. Her ribs groan, threatening to crack; he's not letting go anytime soon. She plants her feet deeper in the ground. If he wants to lift her, he's going to have to give up a lot to do it; from here, she can rain blows on his back. Blood runs in rivers down his fur, each new wound driving her further into the pit of the wilderness.
Yet, just as he can't stop her, she can't stop him. Only three blows land before he hefts her overhead and drives her into a split tree trunk. Candles fly, sent tumbling by the impact; fire licks at the gashes the wood's now torn open.
He's a fool if he thinks this can hold her. No matter the chunk of wood buried in her shoulder. Arlinn plants her feet along one corner of the trunk and her shoulders along with the other. A grunt gives her enough power to break her way out. She tears the wood from her shoulder and plunges it straight into Tovolar's leg.
This, at last, succeeds in wiping the smile away. A great howl fills the festival ground—a howl that pierces through the chaos of the battle. When he clutches the wood in his enormous hand with a trembling grip; with some small satisfaction, the Beast of Arlinn Kord realizes she has pierced him all the way through.
Victory is short-lived. Teeth sink into her shoulders, weight pulling her down. There are too many things at once to keep track of. Down she goes—her head cracking against the discarded helmet of a dead guard. Her ears ring. For a moment she can hear nothing—not the screaming of the fleeing festivalgoers, not Adeline's shouted orders, not the roar of Chandra's flames.
And not the snarling of the wolves looming over her.
How familiar their faces! How often she's seen them in the middle of a hunt! There is Redtooth, living up to her name, hackles raised; there is Boulder at her feet; there is Streak with his jaws firmly locked on her already wounded shoulder. Muzzles she's so often seen at play now bear down on her with a predator's frightful presence.
And then Tovolar is over her again.
She tries to stand. Dizziness sends her reeling back, followed shortly by her shoulder threatening to tear. Nausea stops up her throat.
His mouth's moving. She can't hear what he's saying—the ringing in her ears sounds too much like church bells.
What a strange church this is, with screams for hymns and battlefield filth for incense.
She closes her eyes.
The Grand Cathedral in Thraben. Worrin behind a desk. #emph[Innistrad was born in darkness, and to the darkness it longs to return. It is for that reason we must each tend to our own light.]
He was the one who recommended her for the Archmagi.
What would he make of her now? Did his geist know when he saw her?
The ringing dies down. Tovolar's talking—she can hear him as if from another room—but more than that, she can hear her wolves. That low growl they usually directed toward their next meal.
But it isn't quite that growl, is it?
She opens her eyes again.
There he is, pulling the hunk of wood from his leg. Blood drips onto her muzzle.
"Home~"
This isn't home.
She tries to sit up again, tries to headbutt him, but Boulder's teeth clamp down and he pushes her back.
"Don't have to fight," he says.
Angel's mercy, she wants to vomit. Her tongue's lead in her muzzle. Hard enough to understand wolf-speech without a head injury.
"Join the hunt," he says. "This is who you are. Don't you understand? You don't have to hide anymore."
Now he's extending a paw to her. Everything in her wants to slap it away. All it does is show her he #emph[can ] control himself but chooses not to. That's how he manages to talk in this shape.
"The Church hates this part of you," he says. "But I don't. The pack doesn't."
And it is then—perhaps by divine providence—that Arlinn realizes something.
Patience isn't with her packmates.
Arlinn swallows back bile. If she tries~yes, she can smell that Patience is close by. The scent is faint compared to the blood and sweat at the forefront, but it's there. Her eyes are still spinning, but if she tries, she can focus and—
There. In the dying light of day, Patience is waiting for her. She sits away from the others, just out of reach of Arlinn's right hand, but the moment they lock eyes, Patience starts loping toward her.
Tovolar's insistent. "Tell me you're coming home. Right now. Tell me you're coming home, and I'll let you go."
Soft fur against her palm. Her stomach settles, just for an instant.
"Arlinn. Please. We want you with us. You #emph[belong ] with us."
She closes her eyes again. There: the cathedral's stained glass.
Light shifts. A clearing in the woods, and the four wolves within it.
She walks forward, into the light, and they surround her.
Arlinn opens her eyes. Now she understands: he isn't going to let up until she says what he wants to hear.
"I'm home," she says. Even these two words come out slurred, even this is a great effort, but she manages.
It isn't a lie.
The woods are home, the wolves are home, the Church is home, all of it is.
Even as he helps her up, even as he hugs her tight—this, too, is home. To a young Arlinn freshly changed, this simple gesture once meant the world. And it does still, now, to know that there is yet this much gentleness in him.
But the ferocity, the mercilessness—they have overtaken him. Whatever kindness he shows now cannot wipe away what he's done today. The Tovolar who looked after her has grown into the Tovolar who attacks innocents, and she has grown away from him.
Yet she knows, too, that he hasn't grown away from her.
Dizzy and bleeding as she is, she won't have much time or opportunity to get a better angle than this. It's underhanded. Some might even say it isn't right.
But if it stops this attack, then there's nothing more right in all of Innistrad.
She drives her claws deep into his sternum.
Tovolar lurches over. Realization is slow to dawn; he holds her, if anything, closer.
"Innistrad is home, Tovolar," she says. "And as long as I can breathe, I'm going to protect it."
A gasp is her only answer—his grasp turns malicious as his claws dig into her already wounded shoulder.
She stands, her hand still buried in him. "Call off the attack."
#figure(image("009_Episode 5: Night Closes 'Round/02.jpg", width: 100%), caption: [Defend the Celestus | Art by: <NAME>], supplement: none, numbering: none)
How strange to see his eyes dimming in this way. He's hardy enough to live, she's almost certain, and likely will once the shamans get a look at him—but she has never once seen him staggered this way. Not even the first time they fought in the clearing. It isn't simply that she's hurt him physically. Something inside him is broken, something she can't feel from here.
"You lied," he rasps.
"Call it off," she repeats.
He presses his eyes shut. She wonders what he sees. Is it the girl he found in the woods that day, or is it something else—something that drove him to this level of unimaginable cruelty?
Whatever it is, it brings him to his senses. In a choked gurgle, he says, "Fine."
She sets him down, withdraws her hand, makes sure he's sitting up. The others will eat him alive if they catch him doubled over.
He looks at her again, and she shakes her head.
The howl goes up soon after that, a call for retreat only the wolves understand.
He does not ask her to follow.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Like ants crawling over a body, but in reverse: the wolves leaving the bleached-bone remains of the Harvesttide Massacre.
The name's already been born. From the lips of the cathars who now stand bruised and battered, from the witches who sort through the bodies in search of those who need their help, the word has already been formed: #emph[massacre] .
Arlinn cannot look on it for long. Too much like The Travails. Worse, in a way, with all the childish decorations now strewn about like the storm refuse. Carved pumpkins crushed beneath the bodies of the dead; cider pouring freely into puddles of blood; careful stalls broken in two by their keeper's bodies.
Less than an hour ago, this was a place of hope.
What is it now?
Arlinn swallows. She wants to help. Her place is with the witches and the cathars, tending to the fallen, but if Katilda doesn't complete her ritual, then there won't be anyone to tend to. The crushed effigies around her are a grim reminder of that.
Innistrad endures.
She has to march on.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
While the witches and the remaining guards tend to the wounded, the Candleguides keep smiling their strange smiles, pointing the way for the dead.
And there are so many dead.
Katilda's festival was a rousing success in the worst possible way. To see such a throng of bodies laid out at once is unthinkable to Arlinn. Her parents would never believe it. They never would have attended, would have turned up their noses and muttered that safety lay in isolation. Then, as now, she knows what they really meant—that safety and fear were one and the same.
They're wrong.
Everyone keeping to themselves, thinking about themselves, that's how Innistrad got here. Vampires climbing toward eternity on the backs of the mortal; werewolves hunting the people they should be protecting. Division caused this. Had the wolves realized the importance of keeping day and night in balance, they could have #emph[protected] the festival instead.
But that thought's a painful one.
She bounds ahead. There will be time to mourn later, time to eulogize the dead and explain to their families what went wrong. For any of this to mean something, the ritual must be completed.
Those gathered beneath the Celestus must know that it's worth something.
Her body aches, her forepaws and shoulders screaming with every long step, but she charges ahead all the same—the only wolf heading toward the Celestus. Tune out the crying, tune out the screams—just run.
But there's one voice it's impossible to ever fully tune out.
"Arlinn!"
Chandra's calling out. Adeline's white horse comes up on Arlinn's right, galloping for his life toward the Celestus. A couple of hours ago, she would have hated being outrun by a #emph[horse] , of all things, but now all she feels is relief.
Because Chandra's holding out a hand. "You're in bad shape, come with us!" she shouts. "Teferi's gone ahead with a couple of the others, we have to catch up!"
Reaching out, staying together.
It's the only way to go.
Shifting to her human form, Arlinn takes Chandra's hand.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Chanting meets them first. Arlinn can't make out the words, but the sounds have the shape of towering oaks and old rivers. A glow runs up the arms of the Celestus, and she thinks to herself, slumped against Adeline, that they look like her father's tongs, fresh from the fire.
#figure(image("009_Episode 5: Night Closes 'Round/03.jpg", width: 100%), caption: [Raze the Effigy | Art by: <NAME>], supplement: none, numbering: none)
It brings a giddy smile to her face. But maybe that's the blood loss, too.
"Chandra, doesn't it look like~"
"Looks like they're almost done, yeah," she answers. No point in correcting her. Arlinn looks ahead.
Chandra's right: whatever they're up to, it must be nearing completion. Hard to make out the details given the thick crowd gathered around the central platform, but that's more cause for joy than it is for worry.
Straight into the crowd they go. Adeline's armor and Chandra's flame serve as symbols of their station; gouts of fire say both to keep away and that the fight is not yet done. Dizzy as she is, Arlinn can only partially make out the faces of those around her—but the hope in their eyes shines through.
And all of them are mouthing along to the chant.
What a strange cadence it has, lilting and rising, defiant and eerie. Elongated syllables creep in through her ears and dance there, pulling her thoughts along with them. If this is magic, then it is old magic indeed. It's settling into her veins now.
Closer and closer to the central platform. They can see it now, see the masks of the Dawnhart Coven moving this way and that. Five on the edge of the platform drum in time to the chanting; five between them lead a jarring dance. In the center, two: Katilda, her mask shading much of her face, the Moonsilver Key held in her hands like something sacred and pure; Kaya, standing at the ready, searching the horizon for them.
When Kaya catches sight of them, she starts waving her arms and gesturing them over.
The wooden bridge opens before them. Chandra's first off the horse and quick to help Adeline down. The two of them try to support Arlinn on the way down. With a cathar on one side and a pyromancer on the other, there isn't much room for Arlinn to sway. All the better.
One step. Another. The wood gives beneath them, creaks, and that too is part of the eerie song of the forest—the chant that lives now in their lungs.
One step, another. What would the angels think of this? What would the Church think? This is nothing like a hymn, nothing like a prayer; it's something different, but just as real. How is it that the words spring so readily to her lips when she's never heard them before? Have they been etched on her bones all this time?
One step, another. The witches are gathered before them. All at once, they turn to Chandra, Adeline, and Arlinn. Eyes meet beneath bent boughs and bones. Silver swirls within the irises of the witches—yes, this is old magic indeed.
At once, the witches speak in their collected voices: #emph["Arlinn Kord."]
She swallows.
Chandra and Adeline share a glance over her shoulders. Together, they help her to the altar. Before her is a golden bowl, fit for sunlight and honey, surrounded by dried herbs and old bones.
The eyes of Innistrad are upon her.
"I've come," she answers. It feels like the right thing to say.
#emph["Child of Blood and Fang. You stand on the line of Dawn, where Night and Day meet. You will lend us your strength."]
#emph[I haven't been a child for quite some time] , she almost says, but you can't interrupt old rituals. Katilda must have known more about her than she thought. "What do you need?"
She addresses it to Katilda, for though the whole gathered crowd now speaks as one she can be sure it's Katilda pulling the strings. Everything smells of her.
#emph["Will you spill your blood for the day? Will your fangs protect those who live in fear?"]
Her eyes dart from one witch to another, to Teferi and Kaya, to Chandra and Adeline. No one seems to understand what exactly any of this means.
"I will," she answers. Of that, she's sure.
#figure(image("009_Episode 5: Night Closes 'Round/04.jpg", width: 100%), caption: [Secrets of the Key | Art by: <NAME>], supplement: none, numbering: none)
#emph["Anoint the Sungold Lock."]
Blood and fang, was it? Dizzy still, leaning on the altar for support, Arlinn touches the aching wound on her shoulder. This she then smears on the inside of the bowl—its surface surprisingly warm to the touch. Next, she takes one of the herbs and bites into it. A bitter taste fills her mouth, a welcome reprieve from the taste of metal. This she sets down atop the small smear of red.
The bowl begins to hum.
So, too, does the Celestus. Great gears groan as they spring once more to life; up above, the shadows shift as the arms strain against the rust and roots that bind them. Ground shifts beneath her, but she keeps her hands on the altar. A good thing, too, or she would have fallen.
Kaya sets the Moonsilver Key down when Katilda gestures for it.
#emph["The coven offers root and soul."]
She picks up a gnarled root the size of Arlinn's arm, a thing likely as old as Innistrad itself. Sometimes you know age when you see it. Before Arlinn can get to wondering where it came from, Katilda flicks the tip with her finger. All at once, it fades to ash. This Katilda smears on the inside of the bowl, directly opposite Arlinn's blood.
That root's accounted for. But what of a soul? Arlinn doesn't like the sound of that.
She's about to ask when Katilda links eyes with her. The aura coming off her—there will be no questions here, no interruptions. The ritual must continue.
And it is Katilda's eyes that give her away: a silver glow comes over them, then streams out. Her mouth lolls open and there too is a stream of silver—and these combine, pouring into the bowl.
The other witches loop their arms through Katilda's to support her as her body starts to go limp. Fear mounts in Arlinn's chest. This isn't~permanent, is it? Her eyes dart from the witches to Kaya. She mouths, #emph[Is this all right?]
But she doesn't ever get the answer.
Because Kaya's looking up at something else, and a shadow looms large over the altar.
Something smells like death.
It happens quicker than a human eye can follow but conveniently just fast enough for Arlinn: a streak of red and gold like lightning down from the sky; Katilda swallowed up in its impossible color. Within that streak: <NAME>. There's no mistaking it; she would never #emph[want ] anyone to mistake her. The hand outstretched toward the Moonsilver Key is emblazoned with the Voldaren sigil, as is the rest of her armor.
And there's no way they can let her get the key.
Arlinn dives for it, clutching it close to her stomach as she hits the ground. The sizzle of her skin is a small price to pay for keeping it safe. By then, Olivia's taken to the sky above them. Katilda's body hangs limp in her arms. Olivia sneers down on them, her shoulders rising and falling with the bray of her awful laugh.
"Seems we're at an impasse," she says. "I've got your witch; you've got my key."
Arlinn gets up on her knees, the key still held tight. Something about it feels different now—cooler. "Neither of those are yours."
"On the contrary," Olivia answers. "That key is #emph[very much ] mine. I need it terribly, you see. What I #emph[don't] need is a shriveled old witch."
Kaya's at Arlinn's side in an instant. She's glad of the company, even if the news Kaya brings sends a chill down Arlinn's spine. "There's something going on with Katilda's soul. During the ritual, I saw it leave her, and then~"
"And then?" Arlinn says.
Kaya frowns. "Olivia showed up. Hard to see what happened after."
Chandra's next, hands twitching, eyes fixed on the floating vampire. "We blast her, right?"
"Can't do that. Might hit Katilda," Kaya answers.
Up above, Olivia lets out a theatrical sigh. With all the flair of an extremely bored widow, she rakes her claws across Katilda's chest. Blood sprays onto the cowering witches, the hypnotized crowd. "It's a very simple proposition. I'm getting bored of waiting for an answer. Either you give me that key and I get to planning my festivities or you keep #emph[dithering] and your friend dies."
Arlinn scowls. "What if we complete the ritual?"
"Do we have time for that? Do we know how to do that?" Kaya whispers.
Time. Her mind goes to Teferi, around here somewhere, but even if she could find him, he couldn't buy them enough. Slowing down the sun's no small task—she wouldn't be surprised if he was out of commission for a couple of days.
There's got to be another answer.
Her eyes land on two of the other witches. "The ritual?" she barks.
But they shake their heads. "It had to be her," answers one. "The spell's too old for us—"
"#emph[Boring] !" shouts Olivia. She raises her hand for another blow—
There just isn't enough time. Not enough to consider everything, not enough to find another way through all of this, not enough to brute force it.
Innistrad must survive.
Arlinn hurls the key with her good arm.
Olivia's eyes light up. Again, it happens quickly—she snatches the key from the air with her free hand. Studying it only stokes the fires of her glee, even as smoke rises from her fingertips.
"Drop Katilda!" Arlinn shouts.
Glee turns to disgust. "That's no way to treat a soon-to-be bride," she says.
"A deal's a deal," says Kaya. Arlinn's a little surprised to hear her, surprised that she's the one who understands, but she'll take the help. "Hand her over."
"Fine," answers Olivia. "Catch."
#figure(image("009_Episode 5: Night Closes 'Round/05.jpg", width: 100%), caption: [Olivia's Midnight Ambush | Art by: <NAME>], supplement: none, numbering: none)
In the times to come, Arlinn will think about this moment and what she could have done differently. If she had moved a little faster, would it have been so bad? If she had acted sooner, if she had chosen something else—what would have happened?
For it is one thing to fall from a great height and it is another entirely when a vampire throws you. Katilda's body hurtles toward the altar at an astonishing speed.
It's all Arlinn can do to break her fall—to throw herself between them—but even that can only do so much. Bones crack as she crashes against Arlinn, and Arlinn crashes against the altar.
By the time the world stops spinning, the vampire is gone. Flown away from the looks of it—a distant speck of black against the already dark sky.
The key is gone with her.
The Celestus has gone silent.
It is night on Innistrad.
It will be night from here to eternity.
|
|
https://github.com/LugsoIn2/typst-htwg-thesis-template | https://raw.githubusercontent.com/LugsoIn2/typst-htwg-thesis-template/main/chapters/examples.typ | typst | MIT License | #import "../lib/utils.typ": todo
#import "@preview/glossarium:0.4.1": gls, glspl
= Chapter Examples <chap1>
== Cite Example
#lorem(20) @bruegge2004object #cite(<bruegge2004object1>) #cite(label("bruegge2004object2"))
== Todo Example
#lorem(10)
#todo[Example TODO]
== Table Example
#lorem(10)
#figure(
{
table(
columns: 2,
[*Amount*], [*Ingredient*],
[250g], [Butter (room temp.)],
[150g], [Brown sugar],
[100g], [Cane sugar],
[100g], [70% cocoa chocolate],
[100g], [35-40% cocoa chocolate],
[2], [Eggs],
[Pinch], [Salt],
[Drizzle], [Vanilla extract],
)
},
caption: [
#lorem(3)
]
)
#figure(
{
set text(size: 0.8em, hyphenate: false, weight: "medium");
let example(it) = [#text(fill: red)[#it]];
table(
columns: 2,
[*Amount*], [*Ingredient*],
[360g], [#example[Baking flour]],
[250g], [Butter (room temp.)],
[150g], [Brown sugar],
[100g], [Cane sugar],
[100g], [70% cocoa chocolate],
[100g], [35-40% cocoa chocolate],
[2], [Eggs],
[Pinch], [Salt],
[Drizzle], [Vanilla extract],
)
},
caption: [
#lorem(3)
]
)
== Example Figures
#lorem(20)
#figure(
grid(
columns: 2,
row-gutter: 2mm,
[#image("../figures/dummyimage.png", width: 50%)],
[#image("../figures/dummyimage.png", width: 50%)],
),
caption: [
#lorem(5)
]
) <figure>
#lorem(20)
#figure(
grid(
[#image("../figures/dummyimage.png", width: 70%)],
),
caption: [
#lorem(5)
]
) <figure2>
== Example Glossary
#lorem(20)
#gls("oidc")
#gls("potato")
#gls("dm")
|
https://github.com/Area-53-Robotics/53E-Notebook-Over-Under-2023-2024 | https://raw.githubusercontent.com/Area-53-Robotics/53E-Notebook-Over-Under-2023-2024/giga-notebook/entries/management-road-to-worlds/entry.typ | typst | Creative Commons Attribution Share Alike 4.0 International | #import "/packages.typ": notebookinator
#import notebookinator: *
#import themes.radial.components: *
#show: create-body-entry.with(
title: "Management: Road to Worlds",
type: "management",
date: datetime(year: 2024, month: 3, day: 5),
author: "<NAME>",
witness: "<NAME>",
)
We have about 2 months until Worlds, so we need to plan out the timeline we
will follow to be as prepared as we can: finish the bot rebuild in a timely
manner while still leaving time to program and test.
= Priorities
We have decided that our new bot will be a smaller, swifter, and stronger bot.
It will have a 6-motor PTO, which will take a lot of build time, as well as a
6-motor drivetrain, a puncher, and sleds with wedges on the intake side of the bot.
We've decided that the PTO and the hang will be the most important aspects of
the new bot, but we will need proper time management, efficient work, and
organized planning to get it built successfully and in time for Worlds.
Our old bot will stay assembled so that we can use it for programming and
driver practice while the new bot is being built; that way we don't have to
stress about finding enough time after the rebuild for skills, driver, and
autonomous practice and testing.
Any remaining time would be allocated to driver and autonomous practice.
= Gantt Chart
We've created a Gantt chart to visualize what we've planned for task assignments
and time frames:
#gantt-chart(
start: datetime(year: 2024, month: 3, day: 4),
end: datetime(year: 2024, month: 4, day: 2),
date-interval: 7,
tasks: (
("Drivetrain CAD", (0, 1)),
("Drivetrain Build", (1, 2)),
("Winch climb CAD", (1, 2)),
("Winch climb Build", (2, 3)),
("Wings CAD", (2, 3)),
("Wings Build", (3, 4)),
("Sleds CAD", (0, 1)),
("Sleds Build", (1, 2)),
("Intake CAD", (2, 3)),
("Intake Build", (3, 4)),
("Kicker CAD", (1, 2)),
("Kicker Build", (2, 3)),
("Wedges CAD", (2, 3)),
("Wedges Build", (3, 4)),
("PTO CAD", (2, 3)),
("PTO Build", (3, 5)),
),
goals: (("Completed Build", 5),),
)
|
https://github.com/neunenak/typst-leipzig-glossing | https://raw.githubusercontent.com/neunenak/typst-leipzig-glossing/master/documentation.typ | typst | MIT License | #set document(title: "typst leipzig-glossing documentation")
#import "leipzig-gloss.typ": abbreviations, gloss, numbered-gloss, gloss-count
#show link: x => underline[*#x*]
#let codeblock(contents, addl-bindings: (:), unevaled-first-line: none) = {
let full-contents = if unevaled-first-line != none {
unevaled-first-line + "\n" + contents
} else {
contents
}
block(stroke: 0.5pt + black, inset: 4pt, width: 100%, breakable: false)[
#eval(contents, mode: "markup", scope: (gloss: gloss, numbered-gloss: numbered-gloss) + addl-bindings)
#block(fill: luma(230), inset: 8pt, radius: 4pt, breakable: false, width: 100%, raw(full-contents, lang: "typst"))
]
}
// Abbreviations used in this document
#import abbreviations: poss, prog, sg, pl, sbj, obj, fut, neg, obl, gen, com, ins, all, pst, inf
#import abbreviations: art, dat, du, A, P, prf
#let fmnt = abbreviations.emit-abbreviation("FMNT")
#align(center)[#text(17pt)[Typst `leipzig-glossing` Documentation]]
= Introduction
Interlinear morpheme-by-morpheme glosses are common in linguistic texts to give
information about the meanings of individual words and morphemes in the
language being studied. A set of conventions called the *Leipzig Glossing Rules*
was developed to give linguists a general set of standards and principles for
how to format these glosses. The most recent version of these rules can be
found in PDF form at
#link("https://www.eva.mpg.de/lingua/pdf/Glossing-Rules.pdf")[this link],
provided by the Department of Linguistics at the Max Planck Institute for
Evolutionary Anthropology.
There is a staggering variety of LaTeX packages designed to properly align and
format glosses (including `gb4e`, `ling-macros`, `linguex`, `expex`, and
probably even more). These modules vary in the complexity of their syntax and
the amount of control they give to the user of various aspects of formatting.
The `typst-leipzig-glossing` module is designed to provide utilities for
creating aligned Leipzig-style glosses in Typst, while keeping the syntax as
intuitive as possible and allowing users as much control over how their glosses
look as is feasible.
This PDF will show examples of the module's functionality and detail relevant
parameters. For more information or to inform devs of a bug or other issue,
visit the module's Github repository
#link("https://github.com/neunenak/typst-leipzig-glossing")
#show raw: x => highlight(fill: luma(230), extent: 1pt)[#x]
= Basic glossing functionality
As a first example, here is a gloss of a text in Georgian, along with the Typst code used to generate it:
#codeblock(
"#gloss(
header: [from \"Georgian and the Unaccusative Hypothesis\", <NAME>, 1982],
source: ([ბავშვ-ი], [ატირდა]),
transliteration: ([bavšv-i], [aṭirda]),
morphemes: ([child-#smallcaps[nom]], [3S/cry/#smallcaps[incho]/II]),
translation: [The child burst out crying],
)", unevaled-first-line: "#import \"leipzig-gloss.typ\": gloss")
And an example for English which exhibits some additional styling, and uses imports from another file
for common glossing abbreviations:
#codeblock(
"#gloss(
source: ([I'm], [eat-ing], [your], [head]),
source-style: (item) => text(fill: red)[#item],
morphemes: ([1#sg.#sbj\=to.be], [eat-#prog], [2#sg.#poss], [head]),
morphemes-style: text.with(size: 10pt, fill: blue),
translation: text(weight: \"semibold\")[I'm eating your head!],
translation-style: (item) => [\"#item\"],
)
", addl-bindings: (poss: poss, prog: prog, sg: sg, sbj: sbj))
The `#gloss` function has three pre-defined parameters for glossing levels:
`source`, `transliteration`, and `morphemes`. It also has two parameters
for unaligned text: `header` for text that precedes the gloss, and
`translation` for text that follows the gloss.
The `morphemes` param can be skipped, if you just want to provide a source
text and translation, without a gloss:
#codeblock(
"#gloss(
source: ([Trato de entender, debo comprender, qué es lo que ha hecho conmigo],),
source-style: emph,
translation: [I try to understand, I must comprehend, what she has done with me],
)
")
Note that it is still necessary to wrap the `source` argument in an array of length one.
Here is an example of a lengthy gloss that forces a line break:
// adapted from https://brill.com/fileasset/downloads_static/static_publishingbooks_formatting_glosses_linguistic_examples.pdf
#codeblock(
"#gloss(
source: ([Ich],[arbeite],[ein],[Jahr],[um],[das],[Geld], [zu],[verdienen,],[das], [dein],[Bruder], [an],[einem],[Wochenende],[ausgibt.]),
source-style: text.with(weight: \"bold\"),
morphemes: ([I], [work],[ one], [year],[to],[the],[money],[to],[earn,], [that],[your],[brother],[on],[one], [weekend], [spends.]),
translation: [\"I work one year to earn the money that your brother spends in one weekend\"]
)", addl-bindings: (poss: poss, prog: prog, sg: sg, sbj: sbj))
To add more than three glossing lines, there is an additional parameter
`additional-lines` that can take a list of arbitrarily many more glossing
lines, which will appear below those specified in the aforementioned
parameters:
#codeblock(
"#gloss(
header: [Hunzib (van den Berg 1995:46)],
source: ([ождиг],[хо#super[н]хе],[мукъер]),
transliteration: ([oʒdig],[χõχe],[muqʼer]),
morphemes: ([ož-di-g],[xõxe],[m-uq'e-r]),
additional-lines: (
([boy-#smallcaps[obl]-#smallcaps[ad]], [tree(#smallcaps[g4])], [#smallcaps[g4]-bend-#smallcaps[pret]]),
([at boy], [tree], [bent]),
),
translation: [\"Because of the boy, the tree bent.\"]
)
")
//TODO add a custom numbering system that can handle example 18a-c of Kartvelian Morphosyntax and Number Agreement
== Numbering Glosses
The `gloss` function takes a boolean parameter `numbering` which will add an incrementing
count to each gloss. A function `numbered-gloss` is exported for convenience; this is
defined as simply `#let numbered-gloss = gloss.with(numbering: true)`, and is called with the
same arguments as `gloss`:
#codeblock(
"#gloss(
source: ([გვ-ფრცქვნ-ი],),
transliteration: ([gv-prtskvn-i],),
morphemes: ([1#pl.#obj\-peel-#fmnt],),
translation: \"You peeled us\",
numbering: true,
)
#numbered-gloss(
source: ([მ-ფრცქვნ-ი],),
transliteration: ([m-prtskvn-i],),
morphemes: ([1#sg.#obj\-peel-#fmnt],),
translation: \"You peeled me\",
)
", addl-bindings: (pl: pl, obj: obj, sg: sg, fmnt: fmnt))
The displayed number is incremented for each numbered gloss that appears
throughout the document. Unnumbered glosses do not increment the
counter for the numbered glosses.
The gloss count is controlled by the Typst counter variable `gloss-count`. This
variable can be imported from the `leipzig-gloss` package and manipulated using the
standard Typst counter functions to control gloss numbering:
#codeblock(
"#gloss-count.update(20)
#numbered-gloss(
header: [from _Standard Basque: A Progressive Grammar_ by <NAME>, quoting P. Charriton],
source: ([Bada beti guregan zorion handi baten nahia.],),
translation: [There always is in us a will for a great happiness.],
)", addl-bindings: (gloss-count: gloss-count))
References to individual examples can be achieved using the `label` argument and the referencing mechanism of Typst:
#codeblock(
"See @sorcerers:
#numbered-gloss(
header: [Middle Welsh; modified from _Grammatical number in Welsh_ (1999) by <NAME> (§~2.1.1)],
source: ([ac], [ny], [allvs], [y], [dewinyon], [atteb], [idav]),
morphemes: ([and], [#neg], [be_able.#smallcaps[pret].3#sg], [#smallcaps[def]], [sorcerer.#pl], [answer.#smallcaps[inf]], [to.3#sg.#smallcaps[m]]),
translation: [and the sorcerers could not answer him],
label: \"sorcerers\",
label-supplement: [Example]
)
As we have seen in @sorcerers, […].", addl-bindings: (neg: neg, sg: sg, pl: pl))
Labeling uses the Typst #link("https://typst.app/docs/reference/model/figure/")[figure] document element. The `label-supplement`
parameter fills in the `supplement` parameter of a `figure`, which is `[example]` by default.
== Styling lines of a gloss
Each of the aforementioned text parameters has a corresponding style parameter,
formed by adding `-style` to its name: `header-style`, `source-style`,
`transliteration-style`, `morphemes-style`, and `translation-style`. These parameters
allow you to specify formatting that should be applied to each entire line of
the gloss. This is particularly useful for the aligned gloss itself, since
otherwise one would have to modify each content item in the list individually.
In addition to these parameters, Typst’s usual content formatting can be applied
to or within any given content block in the gloss. Formatting applied in this
way will override any contradictory line-level formatting.
#codeblock(
"#gloss(
header: [This text is about eating your head.],
header-style: text.with(weight: \"bold\", fill: green),
source: (text(fill:black)[I'm], [eat-ing], [your], [head]),
source-style: text.with(style: \"italic\", fill: red),
morphemes: ([1#sg.#sbj\=to.be], text(fill:black)[eat-#prog], [2#sg.#poss], [head]),
morphemes-style: text.with(fill: blue),
translation: text(weight: \"bold\")[I'm eating your head!],
)", addl-bindings: (prog: prog, sbj: sbj, poss: poss, sg: sg))
//TODO add `line_styles` param
= Standard Abbreviations
The Leipzig Glossing Rules define a commonly-used set of short abbreviations
for grammatical terms used in glosses, such as #abbreviations.acc for
"accusative (case)", or #abbreviations.ptcp for "participle" (see "Appendix:
List of Standard Abbreviations in the Leipzig Glossing Rules document)
By convention, these are typeset using #smallcaps[smallcaps]. This package
contains a module value `abbreviations`. Individual abbreviations may be
accessed either with Typst field access notation or by importing them from
`abbreviations`:
#codeblock(
"#import abbreviations: obl, sg, prf
#gloss(
header: [(from _Why Caucasian Languages?_, by <NAME>, in _Endangered Languages of the Caucasus and Beyond_)],
source: ([\[qálɐ-m], [∅-kw’-á\]], [ɬ’ə́-r]),
morphemes: ([city-#obl], [3#sg\-go-#prf], [man-#abbreviations.abs]),
translation: \"The man who went to the city.\"
)", addl-bindings: (abbreviations: abbreviations), unevaled-first-line: "#import \"leipzig-gloss.typ\": abbreviations")
The full list of abbreviations is as follows:
== Full list of abbreviations
#{
for (abbreviation, description) in abbreviations.standard-abbreviations {
[#abbreviations.render-abbreviation(abbreviation) - #raw(lower(abbreviation)) - #description ]
linebreak()
}
}
== Custom abbreviations
Custom abbreviations may be defined using the `abbreviations.emit-abbreviation` function:
#codeblock(
"#import abbreviations: obl, sg, prf, fut, emit-abbreviation
#let ts = emit-abbreviation(\"TS\")
#gloss(
header: [(from _Georgian: A Structural Reference Grammar_, by <NAME>)],
source: ([g-nax-av-en],),
morphemes: ([you#sub[2]-see(#fut)#sub[4]-#ts#sub[7]-they#sub[11]],),
translation: \"they will see you\",
)", addl-bindings: (abbreviations: abbreviations), unevaled-first-line: "#import \"leipzig-gloss.typ\": abbreviations")
== Building used-abbreviations pages
A user of `leipzig-glossing` might wish to generate an introductory page
displaying which abbreviations were actually used in the document. The
`abbreviations.with-used-abbreviations` function may be used for this purpose;
see the `abbreviations-used-example.typ` file in `leipzig-glossing` source for an example.
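A minimal sketch of the idea is shown below; note that this is only an illustration — the
exact signature of `with-used-abbreviations` is not reproduced here, so defer to the example
file for the authoritative usage:
```typ
// Hypothetical wiring; the real entry point may differ from this sketch.
#import "leipzig-gloss.typ": abbreviations
#show: abbreviations.with-used-abbreviations
```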
= Further Example Glosses
These are the first twelve example glosses given in #link("https://www.eva.mpg.de/lingua/pdf/Glossing-Rules.pdf"),
along with the Typst markup needed to generate them:
#{
gloss-count.update(0)
}
#codeblock(
"#numbered-gloss(
header: [Indonesian (Sneddon 1996:237)],
source: ([Mereka], [di], [Jakarta], [sekarang.]),
morphemes: ([they], [in], [Jakarta], [now]),
translation: \"They are in Jakarta now\",
)")
#codeblock(
"#numbered-gloss(
header: [Lezgian (Haspelmath 1993:207)],
source: ([Gila], [abur-u-n], [ferma], [hamišaluǧ], [güǧüna], [amuq’-da-č.]),
morphemes: ([now], [they-#obl\-#gen], [farm], [forever], [behind], [stay-#fut\-#neg]),
translation: \"Now their farm will not stay behind forever.\",
)", addl-bindings: (fut: fut, neg: neg, obl: obl, gen:gen))
#codeblock(
"#numbered-gloss(
header: [West Greenlandic (Fortescue 1984:127)],
source: ([palasi=lu], [niuirtur=lu]),
morphemes: ([priest=and], [shopkeeper=and]),
translation: \"both the priest and the shopkeeper\",
)")
#codeblock(
"#numbered-gloss(
header: [Hakha Lai],
source: ([a-nii -láay],),
morphemes: ([3#sg\-laugh-#fut],),
translation: [s/he will laugh],
)", addl-bindings: (sg: sg, fut: fut))
#codeblock(
"#numbered-gloss(
header: [Russian],
source: ([My], [s], [Marko], [poexa-l-i], [avtobus-om], [v], [Peredelkino]),
morphemes: ([1#pl], [#com], [Marko], [go-#pst\-#pl], [bus-#ins], [#all], [Peredelkino]),
additional-lines: (([we], [with], [Marko], [go-#pst\-#pl], [bus-by], [to], [Peredelkino]),),
translation: \"Marko and I went to Perdelkino by bus\",
)", addl-bindings: (com: com, pl: pl, ins: ins, all: all, pst:pst))
#codeblock(
"#numbered-gloss(
header: [Turkish],
source: ([çık-mak],),
morphemes: ([come.out-#inf],),
translation: \"to come out\",
)", addl-bindings: (inf: inf))
#codeblock(
"#numbered-gloss(
header: [Latin],
source: ([insul-arum],),
morphemes: ([island-#gen\-#pl],),
translation: \"of the islands\",
)", addl-bindings: (gen:gen, pl: pl))
#codeblock(
"#numbered-gloss(
header: [French],
source: ([aux], [chevaux]),
morphemes: ([to-#art\-#pl],[horse.#pl]),
translation: \"to the horses\",
)",addl-bindings: (art:art, pl:pl))
#codeblock(
"#numbered-gloss(
header: [German],
source: ([unser-n], [Väter-n]),
morphemes: ([our-#dat\-#pl],[father.#pl\-#dat.#pl]),
translation: \"to our fathers\",
)", addl-bindings: (dat:dat, pl:pl))
#codeblock(
"#numbered-gloss(
header: [Hittite (Lehmann 1982:211)],
source: ([n=an], [apedani], [mehuni],[essandu.]),
morphemes: ([#smallcaps[conn]=him], [that.#dat.#sg], [time.#dat.#sg], [eat.they.shall]),
translation: \"They shall celebrate him on that date\",
)", addl-bindings: (pl:pl, sg:sg, dat:dat))
#codeblock(
"#numbered-gloss(
header: [Jaminjung (Schultze-Berndt 2000:92)],
source: ([nanggayan], [guny-bi-yarluga?]),
morphemes: ([who], [2#du.#A.3#sg.#P\-#fut\-poke]),
translation: \"Who do you two want to spear?\",
)", addl-bindings: (du:du, sg:sg, fut:fut, A:A, P:P))
#codeblock("
#numbered-gloss(
header: [Turkish (cf. 6)],
source: ([çık-mak],),
morphemes: ([come_out-#inf],),
translation: ['to come out'],
)", addl-bindings: (inf: inf))
|
https://github.com/hei-templates/hevs-typsttemplate-thesis | https://raw.githubusercontent.com/hei-templates/hevs-typsttemplate-thesis/main/00-templates/template-thesis.typ | typst | MIT License | //
// Description: HEVS Thesis Typst Template
// Author : <NAME>
//
#import "helpers.typ": *
#import "page-title-thesis.typ": *
#import "page-reportinfo.typ": *
#let thesis(
title: none,
subtitle: none,
version: none,
author: (),
professor: (),
expert: (),
school: (),
date: (),
lang:"en",
tableof : (
toc: true,
tof: false,
tot: false,
tol: false,
toe: false,
),
icons: (
topleft: none,
topright: none,
bottomleft: none,
bottomright: none,
),
body) = {
// Set the document's basic properties.
set document(author: author.name, title: title)
set page(margin: (top:3.5cm, bottom:3.5cm, rest:3.5cm))
// header and footer
set page(
header: locate(loc => if loc.page() >=2 [
#set text(small)
#h(1fr) #smallcaps(title)
]),
footer: locate(loc => if loc.page() >=2 [
#set text(small)
#h(1fr) #counter(page).display("1 / 1", both: true)
]),
)
// font & language
set text(
font: (
"Linux Libertine",
"Fira Sans",
),
fallback: true,
lang:lang
)
// paragraph
show par: set block(spacing: 1em)
//set par(leading: 0.55em, first-line-indent: 1.8em, justify: true)
// heading
show heading: set block(above: 1.2em, below: 1.2em)
set heading(numbering: "1.1")
show heading.where(level: 1): (it) => {
set text(size: huge)
set block(above: 1.2em, below: 1.2em)
if it.numbering != none {
let num = numbering(it.numbering, ..counter(heading).at(it.location()))
let prefix = num + h(0.5em) + text(code-border)[|] + h(0.5em)
unshift_prefix(prefix, it.body)
} else {
it
}
}
show heading.where(level: 2): (it) => {
let num = numbering(it.numbering, ..counter(heading).at(it.location()))
unshift_prefix(num + h(0.8em), it.body)
}
//show heading.where(level: 1): set text(size:huge)
//show heading.where(level: 1): set pad(size:huge)
// link color
//show link: it => text(fill:blue, underline(it))
show link: it => text(fill:hei-blue, it)
// Math numbering
set math.equation(numbering: "(1)")
// code blocks
set raw(syntaxes:"syntax/VHDL.sublime-syntax")
show raw.where(block: false): set text(weight: "semibold")
//show raw.where(block: false): it => {
// highlight(
// fill:code-bg,
// top-edge: "ascender",
// bottom-edge: "bounds",
// extent:1pt, it)
//}
show raw.where(block: true): set text(size: tiny)
show raw.where(block: true): it => {
block(
fill: code-bg,
width:100%,
inset: 10pt,
radius: 4pt,
stroke: 0.1pt + code-border,
it,
)
}
// Title page
page-title-thesis(
title: title,
date: date,
school: school,
author: author,
professor: professor,
expert: expert,
icons: icons,
)
// Report info
page-reportinfo(
author: author,
date: date.current,
signature: author.signature,
)
// Table of content
pagebreak()
toc(
lang: lang,
tableof: tableof,
)
// Main body
set par(justify: true)
body
} |
https://github.com/FkHiroki/ex-B2_typst | https://raw.githubusercontent.com/FkHiroki/ex-B2_typst/main/sections/section1.typ | typst | MIT No Attribution | = 1. 目的
レーザ光源とスリットを用いて、光の干渉、回折現象を観察する。そしてその結果を理論値と比較することで、現象の理解を図る。また、数値計算ソフトウェアOctaveを用いて様々なスリットにおける光の干渉、回折現象をシミュレーションする。
= 2. 実験原理
光の干渉、回折現象は、光の波動性によって生じる現象である。波において、2つ以上の波が一点で重なる時、その点での光の振幅は、個々の波が足し合わされた振幅になる現象が干渉である。ただし光が干渉するには、2つの波の波長が等しく、位相差が一定に保たれており、少なくともある時間は振幅一定の合成振動をする場合にのみ干渉するという特徴がある。それに対し、障害物等がある際に、波が直進せずにその障害物の幾何学的影の部分に回り込むような動きをする現象が回折である。
次に@fig:theory のような状況を考える。光源Qからでた光で、半径$1$のところの光を$A = A_0 sin omega t$とすると、光源から距離$a$離れた波面Eでは、
$ A = A_0 / a sin(omega t - (2 pi) / lambda a) $
となる。次に、PM間の距離を$r$とすると、Pから出た光のMにおける振幅は、
$ A = alpha / r A_0 / a sin[omega t - (2 pi) / lambda ( a + r )] $
ここで、$alpha$は光の方向によって変わるが、ここでは、$alpha = 1$とする。そして、干渉を考えた際の$M$における振幅は、
$ A(M) = integral_E A_0 / (a r) sin[omega t - (2 pi) / lambda (a + r)] d e $
となる。スリットの幅が小さいことを考えると、$A_0/(a r) tilde.eq A_0 / (a b)$となる。また、振幅の比のみを今後考えるために、$A_0/a b = 1$とする。ここで、@fig:theory の状況を2次元として考える。P、Mの座標をそれぞれ$(xi, -b + zeta)$、$(x, 0)$とすると、
$ r = sqrt((x - xi)^2 + (b - zeta)^2) $
となり、波面E上の座標の式は、
$ xi^2 + (a + zeta)^2 = a^2 $
とすると、最終的に$r$は、
$ r = sqrt(x^2 + b^2 - 2(a + b)zeta - 2 x xi) $ <eq:r>
となる。ここで、$zeta$が十分小さいとき、$(5)$式より$zeta = - xi^2 / 2a$となるため、これを踏まえて式@eq:r をテイラー展開すると、
$ r = b(1 + x^2 / (2 b^2)) - ((x xi)/b - (a + b) / (2 a b) xi^2) + dots = overline(epsilon) + epsilon(xi) $ <eq:r2>
となる。これを用いると、$A(M)$は、
$ A(M) &= integral_E sin[omega t - (2 pi) / lambda (a + overline(epsilon) + epsilon(xi))] d e \
&= C dot sin[omega t - (2 pi) / lambda (a + overline(epsilon))] + S dot cos[omega t - (2 pi) / lambda (a + overline(epsilon))] \
C &= integral_E cos (2 pi)/lambda ((x xi) / b - (a + b) / (2 a b) xi^2) d e \
S &= integral_E sin (2 pi)/lambda ((x xi) / b - (a + b) / (2 a b) xi^2) d e
$ <eq:AM>
となり、光強度は$|A(M)|^2 = C^2 + S^2$となる
次に、@fig:theory2 のような状況を考える。この場合は、E'面の積分を考えれば良い。PをE'上にとった時、波面E'の式は$xi^2 + (b-zeta)^2 = b^2$となり、PM間の距離を$r'$とすると、$r' = sqrt((x - xi)^2 + (b - zeta)^2)$となるため、
$
a + r &= a + b + r - b = a + b + sqrt((x - xi)^2 + (b - zeta)^2) - b \
&tilde.eq (a + b + x^2/(2b)) - (x xi) / b
$
となる。$epsilon = (2 pi)/lambda dot (x xi) / b = k x xi$と置けるので、式@eq:AM は、
$
C = integral_E cos k x xi d e \
S = integral_E sin k x xi d e \
C + i S = integral_E exp(i k x xi) d e
$ <eq:AM2>
となる。よって、式@eq:AM2 と、$|A(M)|^2 = C^2 + S^2$より、回折像が開口部Sのフーリエ変換になることがわかる。
#figure(
image("/figs/theory_fourie.png", width: 90%),
caption: [光源Qの回折像(参照: @text2024b2)]
) <fig:theory>
#figure(
image("/figs/theory_fourie2.png", width: 90%),
caption: [光源Qのフラウンホーファー回折像(参照: @text2024b2)]
) <fig:theory2>
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/spacing_01.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test spacing collapsing before spacing.
#set align(right)
A #h(0pt) B #h(0pt) \
A B \
A #h(-1fr) B
|
https://github.com/thornoar/typst-libraries | https://raw.githubusercontent.com/thornoar/typst-libraries/master/theorem.typ | typst | // Theorems support
#import "@preview/ctheorems:1.1.2": thmrules, thmenv, thmproof
// Basic function
#let thmstyle(
identifier,
head,
supplement: auto,
numbering: "1.1.",
refnumbering: "1.1",
titlefmt: auto,
namefmt: x => [ (#x):],
bodyfmt: x => x,
separator: h(0.2em),
base: "heading",
base_level: none,
..blockargs
) = {
if (head == auto) { head = upper([#identifier.first()]) + identifier.slice(1, identifier.len()) }
if (supplement == auto) { supplement = head }
let fmt(name, number, body, title: auto) = {
if (name != none) { name = namefmt(name) } else { name = [] }
if (title == auto) { title = head }
if (number != none) { title += " " + number }
if (titlefmt == auto) {
if (numbering == none) {
title = [ *#title.* ]
} else {
title = strong(title)
}
} else {
title = titlefmt(title)
}
body = bodyfmt(body)
align(left, block(
width: 100%,
breakable: true,
..blockargs.named(),
[#title#name#separator#body]
))
}
return thmenv(
identifier,
base,
base_level,
fmt
).with(
supplement: supplement,
numbering: numbering,
refnumbering: refnumbering
)
}
// Shorthands
#let plainstyle(identifier, head, breakable: false, ..args) = thmstyle(
identifier,
head,
breakable: breakable,
..args
)
#let statestyle(identifier, head, breakable: false, ..args) = thmstyle(
identifier,
head,
breakable: breakable,
bodyfmt: emph,
..args
)
#let proofstyle(identifier, head, breakable: true, ..args) = thmproof(
identifier,
head,
breakable: breakable,
..args,
inset: (left: 10pt, bottom: 2pt),
radius: 0pt,
stroke: (left: black),
separator: [_:_#h(0.2em)]
)
#let blockstyle(inset: 10pt, radius: 10pt, stroke: black, ..args) = thmstyle.with(
inset: inset,
radius: radius,
stroke: stroke,
..args
)
// Further shorthands
#let def = plainstyle("definition", "Definition")
#let nota = plainstyle("notation", "Notation")
#let exam = plainstyle("example", "Example")
#let exer = plainstyle("exercise", "Exercise")
#let note = plainstyle("note", "Note")
#let th = statestyle("theorem", "Theorem")
#let lm = statestyle("lemma", "Lemma")
#let prop = statestyle("proposition", "Proposition")
#let prb = statestyle("problem", "Problem")
#let cor(base: "heading") = statestyle("corollary", "Corollary", base: base)
#let pf = proofstyle("proof", "Proof")
// Russian versions
#let rdef = plainstyle("definition", "Определение")
#let rnota = plainstyle("notation", "Нотация")
#let rexam = plainstyle("example", "Пример")
#let rexer = plainstyle("exercise", "Упражнение")
#let rnote = plainstyle("note", "Замечание")
#let rth = statestyle("theorem", "Теорема")
#let rlm = statestyle("lemma", "Лемма")
#let rprop = statestyle("proposition", "Предложение")
#let rprb = statestyle("problem", "Задача")
#let rcor(base: "heading") = statestyle("corollary", "Следствие", base: base)
#let rpf = proofstyle("proof", "Доказательство")
// Main show rule
#let theorem = doc => {
show: thmrules.with(qed-symbol: $square.filled.medium$)
show link: underline
doc
}
|
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/string-03.typ | typst | Other | // Error: 2-11 string is empty
#"".last()
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/visualize/pattern-simple_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
#set page(width: auto, height: auto, margin: 0pt)
#let pat = pattern(size: (10pt, 10pt), line(stroke: 4pt, start: (0%, 0%), end: (100%, 100%)))
#rect(width: 50pt, height: 50pt, fill: pat)
|
https://github.com/Selflocking/cdut-typst | https://raw.githubusercontent.com/Selflocking/cdut-typst/master/README.md | markdown | MIT License | # cdut-typst
成都理工大学的实验报告模板
| 文件 | 用处 |
| :------------------- | :----------------- |
| report.typ | 示例 |
| template/lab.typ | 本科生实验报告模板 |
| template/project.typ | 项目报告模板 |
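
一个最小的使用示意(假设性示例,模板的实际入口与函数名请以 `report.typ` 中的写法为准):

```typst
// 假设性示例:具体导入与调用方式请参照仓库中的 report.typ
#import "template/lab.typ": *
```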
## 感谢
- [pku-typst/pkuthss-typst](https://github.com/pku-typst/pkuthss-typst)
- [nju-lug/modern-nju-thesis](https://github.com/nju-lug/modern-nju-thesis)
## License
Licensed under the [MIT License](LICENSE).
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-0400.typ | typst | Apache License 2.0 | #let data = (
("CYRILLIC CAPITAL LETTER IE WITH GRAVE", "Lu", 0),
("CYRILLIC CAPITAL LETTER IO", "Lu", 0),
("CYRILLIC CAPITAL LETTER DJE", "Lu", 0),
("CYRILLIC CAPITAL LETTER GJE", "Lu", 0),
("CYRILLIC CAPITAL LETTER UKRAINIAN IE", "Lu", 0),
("CYRILLIC CAPITAL LETTER DZE", "Lu", 0),
("CYRILLIC CAPITAL LETTER BYELORUSSIAN-UKRAINIAN I", "Lu", 0),
("CYRILLIC CAPITAL LETTER YI", "Lu", 0),
("CYRILLIC CAPITAL LETTER JE", "Lu", 0),
("CYRILLIC CAPITAL LETTER LJE", "Lu", 0),
("CYRILLIC CAPITAL LETTER NJE", "Lu", 0),
("CYRILLIC CAPITAL LETTER TSHE", "Lu", 0),
("CYRILLIC CAPITAL LETTER KJE", "Lu", 0),
("CYRILLIC CAPITAL LETTER I WITH GRAVE", "Lu", 0),
("CYRILLIC CAPITAL LETTER SHORT U", "Lu", 0),
("CYRILLIC CAPITAL LETTER DZHE", "Lu", 0),
("CYRILLIC CAPITAL LETTER A", "Lu", 0),
("CYRILLIC CAPITAL LETTER BE", "Lu", 0),
("CYRILLIC CAPITAL LETTER VE", "Lu", 0),
("CYRILLIC CAPITAL LETTER GHE", "Lu", 0),
("CYRILLIC CAPITAL LETTER DE", "Lu", 0),
("CYRILLIC CAPITAL LETTER IE", "Lu", 0),
("CYRILLIC CAPITAL LETTER ZHE", "Lu", 0),
("CYRILLIC CAPITAL LETTER ZE", "Lu", 0),
("CYRILLIC CAPITAL LETTER I", "Lu", 0),
("CYRILLIC CAPITAL LETTER SHORT I", "Lu", 0),
("CYRILLIC CAPITAL LETTER KA", "Lu", 0),
("CYRILLIC CAPITAL LETTER EL", "Lu", 0),
("CYRILLIC CAPITAL LETTER EM", "Lu", 0),
("CYRILLIC CAPITAL LETTER EN", "Lu", 0),
("CYRILLIC CAPITAL LETTER O", "Lu", 0),
("CYRILLIC CAPITAL LETTER PE", "Lu", 0),
("CYRILLIC CAPITAL LETTER ER", "Lu", 0),
("CYRILLIC CAPITAL LETTER ES", "Lu", 0),
("CYRILLIC CAPITAL LETTER TE", "Lu", 0),
("CYRILLIC CAPITAL LETTER U", "Lu", 0),
("CYRILLIC CAPITAL LETTER EF", "Lu", 0),
("CYRILLIC CAPITAL LETTER HA", "Lu", 0),
("CYRILLIC CAPITAL LETTER TSE", "Lu", 0),
("CYRILLIC CAPITAL LETTER CHE", "Lu", 0),
("CYRILLIC CAPITAL LETTER SHA", "Lu", 0),
("CYRILLIC CAPITAL LETTER SHCHA", "Lu", 0),
("CYRILLIC CAPITAL LETTER HARD SIGN", "Lu", 0),
("CYRILLIC CAPITAL LETTER YERU", "Lu", 0),
("CYRILLIC CAPITAL LETTER SOFT SIGN", "Lu", 0),
("CYRILLIC CAPITAL LETTER E", "Lu", 0),
("CYRILLIC CAPITAL LETTER YU", "Lu", 0),
("CYRILLIC CAPITAL LETTER YA", "Lu", 0),
("CYRILLIC SMALL LETTER A", "Ll", 0),
("CYRILLIC SMALL LETTER BE", "Ll", 0),
("CYRILLIC SMALL LETTER VE", "Ll", 0),
("CYRILLIC SMALL LETTER GHE", "Ll", 0),
("CYRILLIC SMALL LETTER DE", "Ll", 0),
("CYRILLIC SMALL LETTER IE", "Ll", 0),
("CYRILLIC SMALL LETTER ZHE", "Ll", 0),
("CYRILLIC SMALL LETTER ZE", "Ll", 0),
("CYRILLIC SMALL LETTER I", "Ll", 0),
("CYRILLIC SMALL LETTER SHORT I", "Ll", 0),
("CYRILLIC SMALL LETTER KA", "Ll", 0),
("CYRILLIC SMALL LETTER EL", "Ll", 0),
("CYRILLIC SMALL LETTER EM", "Ll", 0),
("CYRILLIC SMALL LETTER EN", "Ll", 0),
("CYRILLIC SMALL LETTER O", "Ll", 0),
("CYRILLIC SMALL LETTER PE", "Ll", 0),
("CYRILLIC SMALL LETTER ER", "Ll", 0),
("CYRILLIC SMALL LETTER ES", "Ll", 0),
("CYRILLIC SMALL LETTER TE", "Ll", 0),
("CYRILLIC SMALL LETTER U", "Ll", 0),
("CYRILLIC SMALL LETTER EF", "Ll", 0),
("CYRILLIC SMALL LETTER HA", "Ll", 0),
("CYRILLIC SMALL LETTER TSE", "Ll", 0),
("CYRILLIC SMALL LETTER CHE", "Ll", 0),
("CYRILLIC SMALL LETTER SHA", "Ll", 0),
("CYRILLIC SMALL LETTER SHCHA", "Ll", 0),
("CYRILLIC SMALL LETTER HARD SIGN", "Ll", 0),
("CYRILLIC SMALL LETTER YERU", "Ll", 0),
("CYRILLIC SMALL LETTER SOFT SIGN", "Ll", 0),
("CYRILLIC SMALL LETTER E", "Ll", 0),
("CYRILLIC SMALL LETTER YU", "Ll", 0),
("CYRILLIC SMALL LETTER YA", "Ll", 0),
("CYRILLIC SMALL LETTER IE WITH GRAVE", "Ll", 0),
("CYRILLIC SMALL LETTER IO", "Ll", 0),
("CYRILLIC SMALL LETTER DJE", "Ll", 0),
("CYRILLIC SMALL LETTER GJE", "Ll", 0),
("CYRILLIC SMALL LETTER UKRAINIAN IE", "Ll", 0),
("CYRILLIC SMALL LETTER DZE", "Ll", 0),
("CYRILLIC SMALL LETTER BYELORUSSIAN-UKRAINIAN I", "Ll", 0),
("CYRILLIC SMALL LETTER YI", "Ll", 0),
("CYRILLIC SMALL LETTER JE", "Ll", 0),
("CYRILLIC SMALL LETTER LJE", "Ll", 0),
("CYRILLIC SMALL LETTER NJE", "Ll", 0),
("CYRILLIC SMALL LETTER TSHE", "Ll", 0),
("CYRILLIC SMALL LETTER KJE", "Ll", 0),
("CYRILLIC SMALL LETTER I WITH GRAVE", "Ll", 0),
("CYRILLIC SMALL LETTER SHORT U", "Ll", 0),
("CYRILLIC SMALL LETTER DZHE", "Ll", 0),
("CYRILLIC CAPITAL LETTER OMEGA", "Lu", 0),
("CYRILLIC SMALL LETTER OMEGA", "Ll", 0),
("CYRILLIC CAPITAL LETTER YAT", "Lu", 0),
("CYRILLIC SMALL LETTER YAT", "Ll", 0),
("CYRILLIC CAPITAL LETTER IOTIFIED E", "Lu", 0),
("CYRILLIC SMALL LETTER IOTIFIED E", "Ll", 0),
("CYRILLIC CAPITAL LETTER LITTLE YUS", "Lu", 0),
("CYRILLIC SMALL LETTER LITTLE YUS", "Ll", 0),
("CYRILLIC CAPITAL LETTER IOTIFIED LITTLE YUS", "Lu", 0),
("CYRILLIC SMALL LETTER IOTIFIED LITTLE YUS", "Ll", 0),
("CYRILLIC CAPITAL LETTER BIG YUS", "Lu", 0),
("CYRILLIC SMALL LETTER BIG YUS", "Ll", 0),
("CYRILLIC CAPITAL LETTER IOTIFIED BIG YUS", "Lu", 0),
("CYRILLIC SMALL LETTER IOTIFIED BIG YUS", "Ll", 0),
("CYRILLIC CAPITAL LETTER KSI", "Lu", 0),
("CYRILLIC SMALL LETTER KSI", "Ll", 0),
("CYRILLIC CAPITAL LETTER PSI", "Lu", 0),
("CYRILLIC SMALL LETTER PSI", "Ll", 0),
("CYRILLIC CAPITAL LETTER FITA", "Lu", 0),
("CYRILLIC SMALL LETTER FITA", "Ll", 0),
("CYRILLIC CAPITAL LETTER IZHITSA", "Lu", 0),
("CYRILLIC SMALL LETTER IZHITSA", "Ll", 0),
("CYRILLIC CAPITAL LETTER IZHITSA WITH DOUBLE GRAVE ACCENT", "Lu", 0),
("CYRILLIC SMALL LETTER IZHITSA WITH DOUBLE GRAVE ACCENT", "Ll", 0),
("CYRILLIC CAPITAL LETTER UK", "Lu", 0),
("CYRILLIC SMALL LETTER UK", "Ll", 0),
("CYRILLIC CAPITAL LETTER ROUND OMEGA", "Lu", 0),
("CYRILLIC SMALL LETTER ROUND OMEGA", "Ll", 0),
("CYRILLIC CAPITAL LETTER OMEGA WITH TITLO", "Lu", 0),
("CYRILLIC SMALL LETTER OMEGA WITH TITLO", "Ll", 0),
("CYRILLIC CAPITAL LETTER OT", "Lu", 0),
("CYRILLIC SMALL LETTER OT", "Ll", 0),
("CYRILLIC CAPITAL LETTER KOPPA", "Lu", 0),
("CYRILLIC SMALL LETTER KOPPA", "Ll", 0),
("CYRILLIC THOUSANDS SIGN", "So", 0),
("COMBINING CYRILLIC TITLO", "Mn", 230),
("COMBINING CYRILLIC PALATALIZATION", "Mn", 230),
("COMBINING CYRILLIC DASIA PNEUMATA", "Mn", 230),
("COMBINING CYRILLIC PSILI PNEUMATA", "Mn", 230),
("COMBINING CYRILLIC POKRYTIE", "Mn", 230),
("COMBINING CYRILLIC HUNDRED THOUSANDS SIGN", "Me", 0),
("COMBINING CYRILLIC MILLIONS SIGN", "Me", 0),
("CYRILLIC CAPITAL LETTER SHORT I WITH TAIL", "Lu", 0),
("CYRILLIC SMALL LETTER SHORT I WITH TAIL", "Ll", 0),
("CYRILLIC CAPITAL LETTER SEMISOFT SIGN", "Lu", 0),
("CYRILLIC SMALL LETTER SEMISOFT SIGN", "Ll", 0),
("CYRILLIC CAPITAL LETTER ER WITH TICK", "Lu", 0),
("CYRILLIC SMALL LETTER ER WITH TICK", "Ll", 0),
("CYRILLIC CAPITAL LETTER GHE WITH UPTURN", "Lu", 0),
("CYRILLIC SMALL LETTER GHE WITH UPTURN", "Ll", 0),
("CYRILLIC CAPITAL LETTER GHE WITH STROKE", "Lu", 0),
("CYRILLIC SMALL LETTER GHE WITH STROKE", "Ll", 0),
("CYRILLIC CAPITAL LETTER GHE WITH MIDDLE HOOK", "Lu", 0),
("CYRILLIC SMALL LETTER GHE WITH MIDDLE HOOK", "Ll", 0),
("CYRILLIC CAPITAL LETTER ZHE WITH DESCENDER", "Lu", 0),
("CYRILLIC SMALL LETTER ZHE WITH DESCENDER", "Ll", 0),
("CYRILLIC CAPITAL LETTER ZE WITH DESCENDER", "Lu", 0),
("CYRILLIC SMALL LETTER ZE WITH DESCENDER", "Ll", 0),
("CYRILLIC CAPITAL LETTER KA WITH DESCENDER", "Lu", 0),
("CYRILLIC SMALL LETTER KA WITH DESCENDER", "Ll", 0),
("CYRILLIC CAPITAL LETTER KA WITH VERTICAL STROKE", "Lu", 0),
("CYRILLIC SMALL LETTER KA WITH VERTICAL STROKE", "Ll", 0),
("CYRILLIC CAPITAL LETTER KA WITH STROKE", "Lu", 0),
("CYRILLIC SMALL LETTER KA WITH STROKE", "Ll", 0),
("CYRILLIC CAPITAL LETTER BASHKIR KA", "Lu", 0),
("CYRILLIC SMALL LETTER BASHKIR KA", "Ll", 0),
("CYRILLIC CAPITAL LETTER EN WITH DESCENDER", "Lu", 0),
("CYRILLIC SMALL LETTER EN WITH DESCENDER", "Ll", 0),
("CYRILLIC CAPITAL LIGATURE EN GHE", "Lu", 0),
("CYRILLIC SMALL LIGATURE EN GHE", "Ll", 0),
("CYRILLIC CAPITAL LETTER PE WITH MIDDLE HOOK", "Lu", 0),
("CYRILLIC SMALL LETTER PE WITH MIDDLE HOOK", "Ll", 0),
("CYRILLIC CAPITAL LETTER ABKHASIAN HA", "Lu", 0),
("CYRILLIC SMALL LETTER ABKHASIAN HA", "Ll", 0),
("CYRILLIC CAPITAL LETTER ES WITH DESCENDER", "Lu", 0),
("CYRILLIC SMALL LETTER ES WITH DESCENDER", "Ll", 0),
("CYRILLIC CAPITAL LETTER TE WITH DESCENDER", "Lu", 0),
("CYRILLIC SMALL LETTER TE WITH DESCENDER", "Ll", 0),
("CYRILLIC CAPITAL LETTER STRAIGHT U", "Lu", 0),
("CYRILLIC SMALL LETTER STRAIGHT U", "Ll", 0),
("CYRILLIC CAPITAL LETTER STRAIGHT U WITH STROKE", "Lu", 0),
("CYRILLIC SMALL LETTER STRAIGHT U WITH STROKE", "Ll", 0),
("CYRILLIC CAPITAL LETTER HA WITH DESCENDER", "Lu", 0),
("CYRILLIC SMALL LETTER HA WITH DESCENDER", "Ll", 0),
("CYRILLIC CAPITAL LIGATURE TE TSE", "Lu", 0),
("CYRILLIC SMALL LIGATURE TE TSE", "Ll", 0),
("CYRILLIC CAPITAL LETTER CHE WITH DESCENDER", "Lu", 0),
("CYRILLIC SMALL LETTER CHE WITH DESCENDER", "Ll", 0),
("CYRILLIC CAPITAL LETTER CHE WITH VERTICAL STROKE", "Lu", 0),
("CYRILLIC SMALL LETTER CHE WITH VERTICAL STROKE", "Ll", 0),
("CYRILLIC CAPITAL LETTER SHHA", "Lu", 0),
("CYRILLIC SMALL LETTER SHHA", "Ll", 0),
("CYRILLIC CAPITAL LETTER ABKHASIAN CHE", "Lu", 0),
("CYRILLIC SMALL LETTER ABKHASIAN CHE", "Ll", 0),
("CYRILLIC CAPITAL LETTER ABKHASIAN CHE WITH DESCENDER", "Lu", 0),
("CYRILLIC SMALL LETTER ABKHASIAN CHE WITH DESCENDER", "Ll", 0),
("CYRILLIC LETTER PALOCHKA", "Lu", 0),
("CYRILLIC CAPITAL LETTER ZHE WITH BREVE", "Lu", 0),
("CYRILLIC SMALL LETTER ZHE WITH BREVE", "Ll", 0),
("CYRILLIC CAPITAL LETTER KA WITH HOOK", "Lu", 0),
("CYRILLIC SMALL LETTER KA WITH HOOK", "Ll", 0),
("CYRILLIC CAPITAL LETTER EL WITH TAIL", "Lu", 0),
("CYRILLIC SMALL LETTER EL WITH TAIL", "Ll", 0),
("CYRILLIC CAPITAL LETTER EN WITH HOOK", "Lu", 0),
("CYRILLIC SMALL LETTER EN WITH HOOK", "Ll", 0),
("CYRILLIC CAPITAL LETTER EN WITH TAIL", "Lu", 0),
("CYRILLIC SMALL LETTER EN WITH TAIL", "Ll", 0),
("CYRILLIC CAPITAL LETTER KHAKASSIAN CHE", "Lu", 0),
("CYRILLIC SMALL LETTER KHAKASSIAN CHE", "Ll", 0),
("CYRILLIC CAPITAL LETTER EM WITH TAIL", "Lu", 0),
("CYRILLIC SMALL LETTER EM WITH TAIL", "Ll", 0),
("CYRILLIC SMALL LETTER PALOCHKA", "Ll", 0),
("CYRILLIC CAPITAL LETTER A WITH BREVE", "Lu", 0),
("CYRILLIC SMALL LETTER A WITH BREVE", "Ll", 0),
("CYRILLIC CAPITAL LETTER A WITH DIAERESIS", "Lu", 0),
("CYRILLIC SMALL LETTER A WITH DIAERESIS", "Ll", 0),
("CYRILLIC CAPITAL LIGATURE A IE", "Lu", 0),
("CYRILLIC SMALL LIGATURE A IE", "Ll", 0),
("CYRILLIC CAPITAL LETTER IE WITH BREVE", "Lu", 0),
("CYRILLIC SMALL LETTER IE WITH BREVE", "Ll", 0),
("CYRILLIC CAPITAL LETTER SCHWA", "Lu", 0),
("CYRILLIC SMALL LETTER SCHWA", "Ll", 0),
("CYRILLIC CAPITAL LETTER SCHWA WITH DIAERESIS", "Lu", 0),
("CYRILLIC SMALL LETTER SCHWA WITH DIAERESIS", "Ll", 0),
("CYRILLIC CAPITAL LETTER ZHE WITH DIAERESIS", "Lu", 0),
("CYRILLIC SMALL LETTER ZHE WITH DIAERESIS", "Ll", 0),
("CYRILLIC CAPITAL LETTER ZE WITH DIAERESIS", "Lu", 0),
("CYRILLIC SMALL LETTER ZE WITH DIAERESIS", "Ll", 0),
("CYRILLIC CAPITAL LETTER ABKHASIAN DZE", "Lu", 0),
("CYRILLIC SMALL LETTER ABKHASIAN DZE", "Ll", 0),
("CYRILLIC CAPITAL LETTER I WITH MACRON", "Lu", 0),
("CYRILLIC SMALL LETTER I WITH MACRON", "Ll", 0),
("CYRILLIC CAPITAL LETTER I WITH DIAERESIS", "Lu", 0),
("CYRILLIC SMALL LETTER I WITH DIAERESIS", "Ll", 0),
("CYRILLIC CAPITAL LETTER O WITH DIAERESIS", "Lu", 0),
("CYRILLIC SMALL LETTER O WITH DIAERESIS", "Ll", 0),
("CYRILLIC CAPITAL LETTER BARRED O", "Lu", 0),
("CYRILLIC SMALL LETTER BARRED O", "Ll", 0),
("CYRILLIC CAPITAL LETTER BARRED O WITH DIAERESIS", "Lu", 0),
("CYRILLIC SMALL LETTER BARRED O WITH DIAERESIS", "Ll", 0),
("CYRILLIC CAPITAL LETTER E WITH DIAERESIS", "Lu", 0),
("CYRILLIC SMALL LETTER E WITH DIAERESIS", "Ll", 0),
("CYRILLIC CAPITAL LETTER U WITH MACRON", "Lu", 0),
("CYRILLIC SMALL LETTER U WITH MACRON", "Ll", 0),
("CYRILLIC CAPITAL LETTER U WITH DIAERESIS", "Lu", 0),
("CYRILLIC SMALL LETTER U WITH DIAERESIS", "Ll", 0),
("CYRILLIC CAPITAL LETTER U WITH DOUBLE ACUTE", "Lu", 0),
("CYRILLIC SMALL LETTER U WITH DOUBLE ACUTE", "Ll", 0),
("CYRILLIC CAPITAL LETTER CHE WITH DIAERESIS", "Lu", 0),
("CYRILLIC SMALL LETTER CHE WITH DIAERESIS", "Ll", 0),
("CYRILLIC CAPITAL LETTER GHE WITH DESCENDER", "Lu", 0),
("CYRILLIC SMALL LETTER GHE WITH DESCENDER", "Ll", 0),
("CYRILLIC CAPITAL LETTER YERU WITH DIAERESIS", "Lu", 0),
("CYRILLIC SMALL LETTER YERU WITH DIAERESIS", "Ll", 0),
("CYRILLIC CAPITAL LETTER GHE WITH STROKE AND HOOK", "Lu", 0),
("CYRILLIC SMALL LETTER GHE WITH STROKE AND HOOK", "Ll", 0),
("CYRILLIC CAPITAL LETTER HA WITH HOOK", "Lu", 0),
("CYRILLIC SMALL LETTER HA WITH HOOK", "Ll", 0),
("CYRILLIC CAPITAL LETTER HA WITH STROKE", "Lu", 0),
("CYRILLIC SMALL LETTER HA WITH STROKE", "Ll", 0),
)
|
https://github.com/SWATEngineering/Docs | https://raw.githubusercontent.com/SWATEngineering/Docs/main/src/3_PB/SpecificaTecnica/functions.typ | typst | MIT License | #let glossary(content) = {
emph[#content#sub("G")]
}
#let team = {
emph[SWAT Engineering]
}
#let shrinkFloat(number) = {
let strNumber = str(number)
let str1 = strNumber.split(".")
if(str1.len() > 1){
let str2 = str1.at(1)
str1 = str1.at(0)
if(str2.len() > 2){
str2 = str2.at(0) + str2.at(1)
}
return float(str1 + "." + str2)
}
return float(str1.at(0))
} |
https://github.com/cherrypiejam/typst-cv-template | https://raw.githubusercontent.com/cherrypiejam/typst-cv-template/main/README.md | markdown | This repository includes a Typst template for academic CV
and a [demo](cv.pdf) CV of <NAME>.

Note: some of the contents of the demo are borrowed from
[here](https://coda.io/@anni-chai/doc-2-resume-template-draft/peter-parkers-resume-1).
|
|
https://github.com/mkpoli/ipsj-typst-template | https://raw.githubusercontent.com/mkpoli/ipsj-typst-template/master/README.md | markdown | # 情報処理学会研究報告テンプレート
> [!WARNING]
> このテンプレートは現状利用可能ですが、まだ一部の機能が欠けておりドキュメントも途上であり、また不具合も確認されておりますので、ご利用の際はご注意ください。
## 参照用ファイル由来
* [情報処理学会研究報告テンプレート](https://www.ipsj.or.jp/journal/submit/style.html)
* [研究報告原稿(PDFファイル)作成について](https://www.ipsj.or.jp/kenkyukai/genko.html)
* [ken1row/IPSJ-techrep-xelatex](https://github.com/ken1row/IPSJ-techrep-xelatex/)
## 注意事項
* このテンプレートは非公式です。情報処理学会と一切関係ありません。
* このテンプレートは開発中であり、予告なく変更される可能性があります。
* このテンプレートの利用は、公式から承認されていないため、利用にあたっては自己責任でお願いします。
* このテンプレートを利用したことによるいかなる損害についても、一切の責任を負いません。
## 使い方
### インストール
```bash
python install.py
```
### 利用
```bash
typst init @local/ipsj-template:0.0.0 report
```
|
|
https://github.com/dismint/docmint | https://raw.githubusercontent.com/dismint/docmint/main/networks/pset2.typ | typst | #import "template.typ": *
#show: template.with(
title: "14.15 Problem Set #2",
subtitle: "<NAME>",
pset: true
)
= Problem 1
== (1)
#align(center, [#raw-render(```
digraph {
layout = neato
node[shape = circle]
1 -> {2, 3}
2 -> {1, 3}
3 -> 1
}
```)])
== (2)
=== Eigenvector Centrality
First let us solve for the eigenvalues of $g$, remembering to use $g'$ in calculations. I will show part of the full calculation for this first instance.
$
lr(|mat(delim: "[", 0, 1, 1; 1, 0, 0; 1, 1, 0;) - mat(delim: "[", lambda, 0, 0; 0, lambda, 0; 0, 0, lambda;)|) &=
lr(|mat(delim: "[", -lambda, 1, 1; 1, -lambda, 0; 1, 1, -lambda)|)\
&= -lambda (lambda^2 - 0) - 1 (-lambda - 0) + 1 (1 + lambda)\
&= -lambda^3 + lambda + 1 + lambda\
&= -lambda^3 + 2 lambda + 1
$
From this, we get that the positive eigenvalue is $(1 + sqrt(5)) / 2$, and as a result the eigenvector that follows is:
$ mat(delim: "[", 1; (sqrt(5) - 1) / 2; 1) $
However we want to normalize the results, so we take the overall sum which is $1 + (sqrt(5) - 1) / 2 + 1 = (sqrt(5) + 3) / 2$. We then divide each of the elements by this value to get the normalized vector of:
$ boxed(mat(delim: "[", 2 / (sqrt(5) + 3); (sqrt(5)-1) / (sqrt(5) + 3); 2 / (sqrt(5) + 3))) $
=== PageRank
Recall that PageRank is defined as follows:
$ c = (bold(I) - alpha g' D^(-1))^(-1) bold(1) $
$D$ is the diagonal array where the elements are either the out-degree, or 1 if the out-degree is 0. For this graph, we have:
$ D = mat(delim: "[", 2, 0, 0; 0, 2, 0; 0, 0, 1;) $
Substituting the values in, we then get the following results:
$
(alpha = 0.25) =& boxed(mat(delim: "[", 1 481/999; 1 185/999; 1 3/9;))\
(alpha = 0.5) =& boxed(mat(delim: "[", 2 2/5; 1 3/5; 2;))
$
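As a quick consistency check, these vectors satisfy the fixed-point form $c = bold(1) + alpha g' D^(-1) c$ of the definition above; for instance, for the first component at $alpha = 0.5$:
$ c_1 = 1 + 0.5 (1/2 dot 8/5 + 1 dot 2) = 1 + 0.5 dot 14/5 = 2 2/5 $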
== (3)
As we increase $alpha$, the PageRank values all go up. This makes sense since $alpha$ simply controls the discount on further paths, and as a result increasing its value will make all longer paths contribute more to the final score.
We can see that compared to the original score, nodes 1 and 3 increased greatly as many paths go through them, while node 2 suffers and does not increase as much since there is only one edge pointing inward.
== (4)
Recall the definition of Katz-Bonacich:
$ c = beta (bold(I) - alpha g')^(-1) bold(1) $
With the assumption that $beta = 1$, we end up with the following for $alpha = 0.5$
$ c = (mat(delim: "[", 1, 0, 0; 0, 1, 0; 0, 0, 1;) - 0.5 mat(delim: "[", 0, 1, 1; 1, 0, 0; 1, 1, 0;))^(-1) mat(delim: "[", 1; 1; 1;) = boxed(mat(delim: "[", 6; 4; 6;)) $
= Problem 3
The paper I will be reading is:
`Boehm and Pandalai-Nayar, “Input Linkages and the Transmission of Shocks: Firm-Level Evidence from the 2011 Tohoku Earthquake”`
== (1)
We consider a node / firm "big" in this study if they are heavily involved in international operations between many nations as well as their participation in input-output linkages. This lines up well with what the study hopes to achieve, as their primary focus is to provide "empirical evidence for the cross-country transmission of shocks via inelastic production linkages, primarily of multinational firms. The principal mechanism at work is not new; the idea that input-output linkages are a key channel through which shocks propagate through the economy dates back to at least Leontief" (Boehm and Pandalai-Nayar, 1). Thus the given definition seems to fit well.
An easy reach for a coarser definition would be to say the size of a node is defined as *all* its connections, not just the international ones. This of course, might lead to some concealment of information as national transactions would be included as well, which is not the purpose of this paper.
== (2)
/ Supply-Chain Link: Inelastic production linkages that take place between multinational firms.
Once again, similar to above, this makes sense in the context of the paper as we don't necessarily need to consider the other edges, such as national links between firms. We also don't need to care about elastic productions, since they don't seem to greatly impact the propagation of shock. There is a point to be made that hidden data may be present in the edges we are leaving out, and thus the conclusion we come to might be a skewed or false one without the full graph.
== (3)
The most interesting and significant result that stood out to me was the conclusion made regarding the link from Japanese firms to affiliates in the United States. It was shown that for every 1% decrease in the amount of Japanese imports, there was an accompanying 0.2% drop in production of the firms that relied on those imports.
Along with the elasticity of 0.03 between Japanese international imports and nonmultinationals, this shows that there is surprisingly little substitutability.
== (4)
It's clear from the above as well as other parts of the paper that diversifying inputs is an extremely important endeavour. It's unlikely that all supply chains go down, while it is possible for your one supplier to endure a shock, leaving you (SpaceX) to feel the results as well. Alternatively, you could also opt to have a solid backup in the case of a disruption.
Another important outcome to take from the paper is that focusing on the *short-run* is very important. Although the paper does not mention a specific solution, it does beg the reader to think about the time frame within which these changes happen, so that firms might have a better chance at finding substitutes or otherwise deal with sudden shocks or shortages in supply chains.
|
|
https://github.com/gongke6642/tuling | https://raw.githubusercontent.com/gongke6642/tuling/main/格式化/格式化.typ | typst | #set par(
justify: true,
leading: 0.52em,
)
= 格式化
=
到目前为止,您已经编写了一份包含一些文本、一些方程和图像的报告。不过,看起来还是很朴素。您的助教还不知道您正在使用新的排版系统,并且您希望您的报告与其他学生提交的内容相符。在本章中,我们将了解如何使用 Typst 的样式系统格式化您的报告。
== 设定规则
正如我们在前一章中所看到的,Typst 具有插入内容的函数(例如image函数)和其他操作它们作为参数接收的内容的函数(例如函数align)。例如,当您想要让整篇报告的段落两端对齐(justify)时,您的第一个冲动可能是寻找一个可以执行此操作并将完整文档包含在其中的函数。
#image("1.png")
等等,函数的所有参数不都应该在括号内指定吗?为什么括号后面还有第二组方括号,里面有内容?答案是,由于在 Typst 中将内容传递给函数是很常见的事情,因此它有特殊的语法:您可以将内容直接写在参数列表后面的方括号中,而不是放在参数列表里,从而节省标点符号。
如上所示,这是有效的。该 par 函数会将其中所有段落两端对齐。然而,将文档包装在无数的函数中并有选择地就地应用样式很快就会变得很麻烦。
幸运的是,Typst 有一个更优雅的解决方案。通过设置规则,您可以将样式属性应用于某种内容的所有出现位置。您可以通过输入set关键字、后跟要设置其属性的函数名称以及括号中的参数列表来编写设置规则。
#image("2.png")
#image("3.png")
== 自动完成面板
如果您继续操作并在应用程序中尝试了一些操作,您可能会注意到,在输入 `#` 字符(井号字符)后,总是会弹出一个面板,向您显示可用的功能,以及参数列表中的可用参数。那是自动完成面板。在您编写文档时,它非常有用:您可以通过按 Return 键应用其建议,或使用箭头键导航至所需的完成位置。可以通过按 Escape 键关闭该面板,并通过键入 `#`(井号)或按 Ctrl+Space 再次打开该面板。使用自动完成面板来发现函数的正确参数。大多数建议都附有对其用途的简短描述。
#image("4.png")
=
== 设置页面
返回设置规则:编写规则时,您可以根据要设置样式的元素类型来选择功能。以下列出了设置规则中常用的一些函数:
-- text设置文本的字体系列、大小、颜色和其他属性
-- page设置页面大小、边距、页眉、启用列和页脚
-- par调整段落、设置行距等
-- heading设置标题的外观并启用编号
-- document设置 PDF 输出中包含的元数据,例如标题和作者
并非所有功能参数都可以设置。一般来说,只能设置告诉函数如何做某事的参数,而不是告诉函数要做什么的参数。功能参考页指示哪些参数是可设置的。
让我们向文档添加更多样式。我们想要更大的边距和衬线字体。出于示例的目的,我们还将设置另一个页面大小。
#image("5.png", width: 78%)
#image("6.png", width: 80%)
这里有一些值得注意的事情。
首先是page设定的规则。它接收两个参数:页面大小和页面边距。页面大小是一个字符串。 Typst 接受许多标准页面尺寸,但您也可以指定自定义页面尺寸。边距被指定为字典。字典是键值对的集合。在本例中,键是x和y,值分别是水平和垂直边距。我们还可以通过传递带有键left、right、top和 的字典来为每一侧指定单独的边距bottom。
接下来是设置text规则。在这里,我们将字体大小设置为10pt并将字体系列设置为"New Computer Modern"。 Typst 应用程序附带了许多字体,您可以在文档中尝试使用它们。当您位于文本函数的参数列表中时,您可以在自动完成面板中找到可用的字体。
我们还设置了行间距(又名行距):它被指定为长度值,我们使用em单位来指定相对于字体大小的行距:1em相当于当前字体大小(默认为11pt) 。
最后,我们通过在中心对齐中添加垂直对齐来底部对齐图像。垂直和水平对齐可以与运算符结合起来+生成 2D 对齐。
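把上面几段描述的设置合在一起,大致相当于下面这段代码(页面尺寸、边距、图片文件名等具体取值以原示例截图为准,这里仅作示意):
```typ
#set page(
  paper: "a6",
  margin: (x: 1.8cm, y: 1.5cm),
)
#set text(
  font: "New Computer Modern",
  size: 10pt,
)
#set par(leading: 0.52em)

#align(center + bottom, image("glacier.jpg", width: 70%))
```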
== 一丝精致
为了更清晰地构建我们的文档,我们现在想要对标题进行编号。我们可以通过设置函数numbering的参数来做到这一点heading。
#image("7.png")
我们指定字符串"1."作为编号参数。这告诉 Typst 用阿拉伯数字对标题进行编号,并在每个级别的数字之间加一个点。我们还可以使用字母、罗马数字和符号作为标题:
#image("8.png")
此示例还使用该lorem函数生成一些占位符文本。该函数接受一个数字作为参数,并生成Lorem Ipsum文本的多个单词。
#image("9.png")
== 显示规则
您已经对结果感到非常满意了。但最后一件事需要解决:您正在编写的报告是针对较大的项目的,并且该项目的名称应始终带有徽标,即使是在散文中也是如此。
你考虑一下你的选择。您可以使用搜索和替换在徽标的每个实例之前添加调用。这听起来非常乏味。相反,您可以定义一个始终生成徽标及其图像的自定义函数。然而,还有一种更简单的方法:
`#image("logo.svg")`
通过显示规则,您可以重新定义 Typst 显示某些元素的方式。您可以指定 Typst 应该以不同方式显示哪些元素以及它们的外观。显示规则可以应用于文本实例、许多函数,甚至整个文档。
#image("10.png")
此示例中有很多新语法:我们编写关键字show,后跟我们想要以不同方式显示的文本字符串和冒号。然后,我们编写一个函数,将要显示的内容作为参数。在这里,我们称之为论证name。现在,我们可以使用name函数主体中的变量来打印 ArtosFlow 名称。我们的显示规则将徽标图像添加到名称前面,并将结果放入框中,以防止徽标和名称之间发生换行。该图像也被放置在一个框内,这样它就不会出现在自己的段落中。
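上一段描述的显示规则,用代码写出来大致是下面的样子(徽标文件名等细节以实际项目为准,此处仅作示意):
```typ
#show "ArtosFlow": name => box[
  #box(image(
    "logo.svg",
    height: 0.7em,
  ))
  #name
]

This report is embedded in the ArtosFlow project.
```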
对第一个框函数和图像函数的调用不需要前导 `#`(井号),因为它们没有直接嵌入到标记中。当 Typst 需要代码而不是标记时,不需要前导 `#`(井号)来访问函数、关键字和变量。这可以在参数列表、函数定义和代码块中观察到。
== 审查
您现在知道如何将基本格式应用于 Typst 文档。您学习了如何设置字体、调整段落、更改页面尺寸以及使用设定规则向标题添加编号。您还学习了如何使用基本显示规则来更改文本在整个文档中的显示方式。
您已提交报告。您的主管对此非常满意,他们想将其改编成会议论文!在下一节中,我们将学习如何使用更高级的显示规则和函数将文档格式化为论文。
=
#image("11.png") |
|
https://github.com/claudiomattera/typst-modern-cv | https://raw.githubusercontent.com/claudiomattera/typst-modern-cv/master/src/template.typ | typst | MIT License | // Copyright <NAME> 2023-2024.
//
// Distributed under the MIT License.
// See accompanying file License.txt, or online at
// https://opensource.org/licenses/MIT
#import "icon.typ"
#import "timeline.typ": draw_bar
#import "link.typ": draw_literal_link
#import "theme.typ"
/// Draw the full name on header left column
///
/// - fullname (str): Your full name.
/// -> content: The header left column.
#let draw_name(fullname) = {
grid(
rows: (auto, auto),
row-gutter: 20pt,
text(
size: 34pt,
par(
leading: 10pt,
[#fullname],
),
),
text(
fill: gray.darken(20%),
style: "italic",
size: 15pt,
[Curriculum Vitae],
)
)
}
/// Draw the personal information on header right column
///
/// - address (str): Your home address (optional).
/// - phone (str): Your phone number (optional).
/// - email (str): Your email address (optional).
/// - website (str): Your website URL (optional).
/// - linkedin (str): Your linkedin handle (optional).
/// - github (str): Your github handle (optional).
/// - orcid (str): Your orcid number (optional).
/// -> content: The header right column.
#let draw_personal_information(
address: none,
phone: none,
email: none,
website: none,
linkedin: none,
github: none,
orcid: none,
) = {
let pieces = ()
if address != none {
pieces.push([#icon.house #address])
}
if phone != none {
pieces.push(
link("tel:" + phone, icon.phone + " " + phone)
)
}
if email != none {
pieces.push(
link("mailto:" + email, icon.mail + " " + email)
)
}
if website != none {
pieces.push(
link("https://" + website, icon.world + " " + website)
)
}
if linkedin != none {
pieces.push(
link("https://www.linkedin.com/in/" + linkedin, icon.linkedin + " " + linkedin)
)
}
if github != none {
pieces.push(
link("https://github.com/" + github, icon.github + " " + github)
)
}
if orcid != none {
pieces.push(
link("https://orcid.org/" + orcid, icon.orcid + " " + orcid)
)
}
align(
right,
text(
fill: gray.darken(20%),
style: "italic",
size: 9pt,
grid(
row-gutter: 4pt,
columns: (auto),
..pieces
)
)
)
}
/// Configure the document and typeset the header
///
/// - fullname (str): Your full name.
/// - address (str): Your home address (optional).
/// - phone (str): Your phone number (optional).
/// - email (str): Your email address (optional).
/// - website (str): Your website URL (optional).
/// - linkedin (str): Your linkedin handle (optional).
/// - github (str): Your github handle (optional).
/// - orcid (str): Your orcid number (optional).
/// - doc (content): The rest of the document.
/// -> content: The formatted document.
#let conf(
fullname: "",
address: none,
phone: none,
email: none,
website: none,
linkedin: none,
github: none,
orcid: none,
doc,
) = {
set par(
justify: true,
leading: 0.5em,
)
set text(
font: "TeX Gyre Adventor",
size: 9pt,
)
show heading: it => {
context {
let theme = theme.state.get()
set block(below: 1em)
grid(
columns: (theme.width + 0.35cm, auto),
align(
horizon,
rect(
width: theme.width,
height: 2mm,
fill: theme.color,
),
),
text(
fill: theme.color,
it.body,
)
)
}
}
let name = draw_name(fullname)
let personal_information = draw_personal_information(
address: address,
phone: phone,
email: email,
website: website,
linkedin: linkedin,
github: github,
orcid: orcid,
)
grid(
columns: (70%, 30%),
name,
personal_information
)
v(1fr)
doc
}
|
https://github.com/TGM-HIT/typst-diploma-thesis | https://raw.githubusercontent.com/TGM-HIT/typst-diploma-thesis/main/template/chapters/danksagung.typ | typst | MIT License | #import "../lib.typ": *
= Danksagung
#lorem(150) |
https://github.com/SWATEngineering/Docs | https://raw.githubusercontent.com/SWATEngineering/Docs/main/src/3_PB/VerbaliEsterni/VerbaleEsterno_230223/content.typ | typst | MIT License | #import "meta.typ": inizio_incontro, fine_incontro, luogo_incontro, company
#import "functions.typ": glossary, team
#let participants = csv("participants.csv")
#let participants_company = csv("participants_company.csv")
= Partecipanti
/ Inizio incontro: #inizio_incontro
/ Fine incontro: #fine_incontro
/ Luogo incontro: #luogo_incontro
== Partecipanti di #team
#table(
columns: (3fr, 1fr),
[*Nome*], [*Durata presenza*],
..participants.flatten()
)
== Partecipanti di #emph[#company]
#for member in participants_company.flatten() [
- #member
]
= Sintesi Elaborazione Incontro
Durante l'incontro il gruppo ha chiarito con la Proponente alcuni dubbi che vengono esposti in seguito e ha condiviso con essa lo stato di avanzamento del progetto.
== Chiarimento SAD
La Proponente, in seguito alla relativa domanda posta dal gruppo, ha chiarito che la richiesta del documento SAD (Software Architecture Document), riportata all'interno del capitolato, viene soddisfatta dal documento _Specifiche Tecniche_, di cui si sta effettuando la stesura in questo momento.
== Progettazione
Durante l'incontro il gruppo ha condiviso con la Proponente i suggerimenti e le direttive fornite dal Professor Cardin durante il ricevimento del 22-02-2024. La Proponente ha espresso approvazione nei confronti di tali indicazioni e ha fornito ulteriori spunti di approfondimento che potrebbero risultare utili per la progettazione. In particolare, è stato confermato da parte della Proponente che la "Kappa architecture" è un tipo di architettura che si adatta bene al prodotto in via di sviluppo e che quindi andrà approfondita dai Progettisti in carica per questo sprint. Viene inoltre confermata l'adeguatezza dei diagrammi "Data-flow diagram" e i "Deployment diagram", il cui utilizzo è stato suggerito dal Prof. Cardin nel colloquio sopracitato; inoltre viene suggerito di approfondire i "Component Diagram". I tipi di diagrammi elencati possono essere una modalità di rappresentazione che si presta bene all'attività di progettazione in atto.
== Possibili aggiunte al prodotto
La Proponente ha suggerito di valutare la possibilità di aggiungere al prodotto funzionalità come data processing ed input filtering. Il team valuterà la fattibilità di queste aggiunte e ne discuterà con la Proponente in futuro. |
https://github.com/Doublonmousse/pandoc-typst-reproducer | https://raw.githubusercontent.com/Doublonmousse/pandoc-typst-reproducer/main/color_issues/linear_rgb.typ | typst | #square(fill: color.linear-rgb(
30%, 50%, 10%,
))
|
|
https://github.com/kdog3682/mathematical | https://raw.githubusercontent.com/kdog3682/mathematical/main/0.1.0/src/index.typ | typst | // TODO
// #import "dialogues/index.typ" as dialogues
// #import "pixelart/index.typ" as pixelart
#import "typesetting.typ": *
#import "utils.typ": *
#import "dialogue.typ": dialogue
#import "expand.typ": expand, expansion
#import "evaluate.typ": evaluate
#import "factorial-fraction.typ": factorial-fraction
#import "factorial.typ": factorial
#import "fraction.typ": fraction
#import "lightning.typ"
#import "homework-question.typ": homework-question
#import "expanded-factorial-multiplication.typ": expanded-factorial-multiplication
// mathtools are kind of different from demos
// although they are similarly complex
// ------------------------------------------------------------
// ------------------------------------------------------------
// 07-10-2024
// demonstrations/index.typ is being converted into demos
#import "algorithms/index.typ" as algorithms
// 1: convex-hull
#import "mathtools/index.typ" as mathtools
// it almost feels like the two of these should be combined together
// demonstrations has been deprecated
#import "draw/index.typ" as draw
// draw/index.typ operates via:
// draw.shapes.points(points: array, ..attrs)
// draw.brace(p1, p2, content?, ..attrs)
// it uses many utils from mathematical/utils.typ
// what did i do today?
// the stuff above is what i did today
// ------------------------------------------------------------
// ------------------------------------------------------------
// #import "demos/index.typ" as demos
// #cannot import demos
// cyclic backtrace happens when demos/index.typ is imported
// the reason is because it imports ratio.typ which imports mathematical
// and hey, it just so happens this file is mathematical.
// thus, the cyclic backtrace
|
|
https://github.com/agarmu/typst-templates | https://raw.githubusercontent.com/agarmu/typst-templates/main/README.md | markdown | MIT License | # typst-templates
My personal templates for Typst:
- APA-Style Paper
- Notes Template
|
https://github.com/LeptusHe/LeptusHe.github.io | https://raw.githubusercontent.com/LeptusHe/LeptusHe.github.io/main/source/_posts/spherical-harmonics/spherical-harmonics-01-function-fitting.typ | typst | #import "../typst-inc/blog-inc.typc": *
#show: blog_setting.with(
title: "球谐函数01 - 函数拟合",
author: ("<NAME>"),
paper: "jis-b0",
//preview: true
)
#metadata("球谐函数") <tags>
#metadata("图形渲染;数学") <categories>
#metadata("2023-07-04") <date>
= 函数拟合
计算机应用常常需要使用简单函数的线性组合来拟合某个复杂函数。例如,在游戏开发中,为了实现某些渲染效果,通常会在shader中使用到正弦函数。但是,由于GPU计算正弦函数的指令数较多,性能开销比较高,开发者通常会考虑使用多项式函数$x^n$的线性组合来拟合正弦函数,从而减少计算指令数,提高渲染性能。
假设我们需要计算正弦函数$f(x) = sin(x)$在区间$[-pi, pi]$上的值。为了减少性能开销,
我们考虑使用多项式函数$x^n$的线性组合来逼近函数$sin(x)$,如@approxsin 所示。
$ tilde(f)(x) = sum_(i=0)^n c_i x^i $ <approxsin>
线性组合系数$c_i$的取值应该使得函数$f(x)$与$tilde(f)(x)$之间的误差最小。因此,我们先定义误差函数$g(x)$,然后求得误差函数$g(x)$的最小值点,即可以得到线性组合系数$c_i$。
$ g(x) = integral_(-pi)^(pi) (f(x) - tilde(f)(x))^2 $
由于函数$g(x)$的最小值点必定在极小值点取得,因此求函数$g(x)$的最小值点的一种方法是,求$g(x)$的导数为0的点。另外,我们还可以使用其他方法来求解系数$c_i$,该方法称为 _最小二乘投影_,这是后面重点介绍的方法。
对于函数$sin(x)$,我们可以使用多项式函数$P_i (x)$的线性组合来拟合,其中$P_i (x)$的定义如公式所示。
$ cases(
P_0(x) = 1 / sqrt(2 pi),
P_1(x) = (sqrt(3/2)x) / pi^(3/2),
P_2(x) = (3 sqrt(5/2) (x^2 - pi^2 / 3)) / (2 pi^(5/2)),
P_3(x) = (5 sqrt(7/2) (x^3 - (3 pi^2 x) / 5)) / (2 pi^(7/2)),
dots.v
) $
所有在区间$[-pi, pi]$上有定义的一元连续函数能够构成函数空间$L(R)$,而函数$P_i (x)$是该函数空间中的 _基函数_。这意味着函数空间$L(R)$中的任意函数,都可以使用基函数$P_i (x)$的线性组合来表示,即
$ sin(x) = sum_(i=0)^(infinity) c_i P_i (x) $
虽然函数$sin(x)$需要使用无穷项基函数$P_i (x)$的线性组合来表示,但是实际上,我们取基函数的前$n$项也能够很好地来拟合函数$sin(x)$。为了方便起见,我们取前$4$项基函数$P_i (x)$的线性组合来进行拟合,可以表示为:
$ sin(x) approx tilde(f)(x) = sum_(i=0)^(3) c_i P_i (x) $ <approx-eq>
另外,由于函数空间的基函数会满足 _正交性质_,则函数$P_i (x)$应该满足性质:
$ integral_(-pi)^(pi) P_i (x) P_j (x) dif x = delta_(i, j) $
其中$delta_(i, j)$的定义如下:
$ delta_(i, j) = cases(
  1 quad "if " i = j,
  0 quad "if " i != j) $
基于多项式函数$P_i (x)$的正交性,我们可以通过以下公式求得线性组合系数$c_i$:
$ c_i = integral_(-pi)^(pi) sin(x) P_i (x) dif x $ <coefficient>
@coefficient 的证明如下:
$ c_i &= integral_(-pi)^(pi) sin(x) P_i (x) dif x \
    &= integral_(-pi)^(pi) (sum_(k = 0)^(infinity) c_k P_k (x)) P_i(x) dif x\
    &= integral_(-pi)^(pi) sum_(k = 0)^(infinity) (c_k P_k (x) P_i (x)) dif x \
    &= c_i integral_(-pi)^(pi) P_i^2 (x) dif x \
    &= c_i
$
通过 @coefficient ,我们可以得到函数$tilde(f)(x)$中的系数如下:
$ c_0 = 0, c_1 = sqrt(6 / pi), c_2 = 0, c_3 = (2 pi^2 - 30) sqrt(7/2) / pi^(5/2) $
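以 $c_1$ 为例(由于 $sin(x)$ 是奇函数,而 $P_0 (x)$、$P_2 (x)$ 是偶函数,偶数下标的系数必然为零):
$ c_1 = integral_(-pi)^(pi) sin(x) dot (sqrt(3/2) x) / pi^(3/2) dif x = sqrt(3/2) / pi^(3/2) dot 2 pi = sqrt(6 / pi) $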
带入到 @approx-eq,可以得到函数$tilde(f)(x)$的公式如下:
$ tilde(f)(x) &= (315 / (2 pi^4) - 15 / (2 pi^2) ) x + (35 / (2 pi^4) - 525 / (2 pi^6)) x^3 \
 &approx 0.856983 x - 0.0933877 x^3 $
@img-sin-fitting-4 是函数$sin(x)$与$tilde(f)(x)$在区间$[-pi, pi]$的图像,其中黄色曲线为函数$sin(x)$,绿色曲线为函数$tilde(f)(x)$。 可以看到,函数$tilde(f)(x)$能够较好地拟合函数$sin(x)$。
#figure(
image("images/sin-fitting-4.png", width: 60%),
caption: [正弦函数拟合 - 4项]
) <img-sin-fitting-4>
事实上,通过增加多项式函数$P_i (x)$的数量,我们可以不断地提高函数$tilde(f)(x)$对函数$f(x)$的拟合程度。当多项式的数量是$6$个时,即函数$tilde(f) (x) = sum_(i=0)^5 c_i P_i (x)$时,函数图像如 @img-sin-fitting-6-term 所示。从图像可以看出,函数$sin(x)$与$tilde(f)(x)$的曲线几乎一致。我们取 @img-sin-fitting-6-term 中的局部区间$[(3 pi)/10, (7 pi)/10]$,如 @img-sin-fitting-6-term-local 所示,可以看到函数$sin(x)$与$tilde(f)(x)$的曲线误差也很小。
#figure(
image("images/sin-fitting-6.png", width: 60%),
caption: [正弦函数拟合 - 6项]
) <img-sin-fitting-6-term>
当函数$tilde(f)(x)$中多项式函数$P_i (x)$的项数趋于无穷时,其极限为$sin(x)$,能够完美地拟合函数$sin(x)$。
#figure(
image("images/sin-fitting-6-local.png", width: 60%),
caption: [正弦函数拟合(局部) - 6项]
) <img-sin-fitting-6-term-local> |
|
https://github.com/francescoo22/masters-thesis | https://raw.githubusercontent.com/francescoo22/masters-thesis/main/chapters/1-Introduction.typ | typst | #import "../config/utils.typ": code-compare
#import "../vars/kt-to-vpr-examples.typ": intro-kt, intro-vpr, intro-kt-annotated
#pagebreak(to:"odd")
= Introduction
Aliasing is a topic that has been studied for decades in computer science @Aliasing-OOP @beyondGenevaConvention @GenevaConvention and it refers to the situation where two or more references point to the same object.
Aliasing is an important characteristic of object-oriented programming languages, allowing programmers to develop complex designs involving sharing. However, reasoning about programs written with languages that allow aliasing without any kind of control is a hard task for programmers, compilers and formal verification tools. In fact, as reported in _the Geneva Convention on the Treatment of Object Aliasing_ @GenevaConvention, without having guarantees about aliasing it can be difficult to prove the correctness of a simple Hoare formula like the following. $ {x = "true"} space y := "false" {x = "true"} $
Indeed, when $x$ and $y$ are aliased, the formula is not valid, and most of the time proving that aliasing cannot occur is not straightforward.
On the other hand, ensuring disjointness of the heap enables the verification of such formulas. For instance, in separation logic @separationLogic1 @separationLogic2 @separationLogic3, it is possible to prove the correctness of the following formula. $ {(x |-> "true") * (y |-> -)} space y := "false" {(x |-> "true") * (y |-> "false")} $
This verification is possible because separation logic allows to express that $x$ and $y$ are not aliased by using the separating conjunction operator "$*$". Similarly, programming languages can incorporate annotation systems @aldrich2002alias @boyland2001alias @zimmerman2023latte or built-in constructs @swift-parameter-modifiers @rustlang to provide similar guarantees regarding aliasing, thereby simplifying any verification process.
== Contributions
This work demonstrates how controlling aliasing through an annotation system can enhance the formal verification process performed by SnaKt @FormVerPlugin, an existing plugin for the Kotlin language @KotlinSpec @Kotlin. SnaKt verifies Kotlin using Viper @ViperWebSite @Viper, an intermediate verification language developed by ETH Zurich. Viper is designed to verify programs by enabling the specification of functions with preconditions and postconditions, which are then checked for correctness. This verification is performed using one of two back-ends: symbolic execution @MuellerSchwerhoffSummers16b or verification condition generation @HeuleKassiosMuellerSummers13, both of which rely on an SMT solver to validate the specified conditions.
In order to verify Kotlin with Viper, it is necessary to translate the former language into the latter. However, this translation presents several challenges due to fundamental differences between the two languages. Specifically, Viper's memory model is based on separation logic, which disallows shared mutable references. In contrast, Kotlin does not restrict aliasing, meaning that references in Kotlin can be both shared and mutable, posing a significant challenge when trying to encode Kotlin code into Viper.
This issue is clearly illustrated in the Kotlin code example provided in @intro-comp. In that example, the language allows the same reference to be passed multiple times when calling function `f`, thereby creating aliasing. Additionally, @intro-comp presents a naive approach for encoding that Kotlin code into Viper. Despite the Viper code closely resembling the original Kotlin code, it fails verification when `f(x, x)` is called. This failure occurs because `f` requires write access to the field `n` of its arguments, but as previously mentioned, Viper’s separation logic disallows references from being both shared and mutable simultaneously.
#code-compare("Kotlin code with aliasing and its problematic encoding into Viper", .8fr, intro-kt, intro-vpr)<intro-comp>
As mentioned before, Kotlin does not have built-in mechanisms to manage or prevent aliasing, which can lead to unintended side effects and make it harder to ensure code correctness. To address this issue, this work proposes and formalizes an annotation system specifically designed to manage and control aliasing within Kotlin.
The proposed annotation system introduces a way for developers to specify and enforce stricter aliasing rules by tagging references with appropriate annotations.
This helps to clearly distinguish between references that might be shared and those that are unique. Additionally, the system differentiates between functions that create new aliases for their parameters and those that do not.
This level of control is important for preventing common programming errors related to mutable shared state, such as race conditions or unintended side effects.
@kt-ann-intro provides an overview of the annotation system. Specifically, the `@Unique` annotation ensures that a reference is not aliased, while the `@Borrowed` annotation guarantees that a function does not create new aliases for a reference. The example also demonstrates how the problematic function call presented in @intro-comp is disallowed by the annotation system, as `x` and `y` would be aliased when the function `f` requires them to be unique.
The thesis finally shows how aligning Kotlin’s memory model with Viper’s, using the proposed annotation system, enhances the encoding process performed by SnaKt.
#figure(
caption: "Kotlin code with annotations for aliasing control",
intro-kt-annotated
)<kt-ann-intro>
== Structure of the Thesis
The rest of the thesis is organized as follows:
/ @cap:background : provides a description of the background information needed to understand the concepts presented by this work. In particular, this chapter presents the Kotlin programming language and its feature of interest for the thesis. Following this, the chapter provides an overview of the "Aliasing" topic in Computer Science and presents an introduction to the Viper language and its set of verification tools.
/ @cap:related-work : analyzes works that have been fundamental for the development of this thesis. The chapter is divided in two parts, the former describing existing works about aliasing and systems for controlling it; the latter giving an overview of the already existing tools that perform formal verification using Viper.
/ @cap:annotations-kt : introduces a uniqueness system for the Kotlin language. It shows several examples of Kotlin code extended with uniqueness annotations and explores how the annotations can be used for bringing improvements to the language.
/ @cap:annotation-system : formalizes the annotation system introduced before on a language that can represent a significant subset of the Kotlin language. After introducing the language and several auxiliary rules and functions, the typing rules for the system are formalized.
/ @cap:encoding : shows how the annotation system presented before can be used to obtain a better encoding of Kotlin into Viper, thus improving the quality of verification performed by SnaKt.
/ @cap:conclusion : summarizes the contributions of this research and points out reasonable extensions to this work as well as potential new areas for future research.
|
|
https://github.com/shiki-01/typst | https://raw.githubusercontent.com/shiki-01/typst/main/school/pro/1st_term_1/1_1.typ | typst | #import "../../../lib/conf.typ": conf, come, desc, ce, light
#import "@preview/codelst:2.0.1": sourcecode
#show: doc => conf(
title: [実習 プログラミング実習 1学期中間],
date: [2024年4月11日],
doc,
)
#show heading: it => [
#if it.level == 1 {
[
#set align(center)
#set text(22pt)
#pad(
bottom: -70pt,
[#it\ ]
)
]
} else if it.level == 2 {
[
#set align(left)
#set text(18pt)
#pad(
top: 15pt,
bottom: -70pt,
[#it\ ]
)
]
} else if it.level == 3 {
[
#set align(left)
#set text(15pt)
#pad(
bottom: -70pt,
[#it\ ]
)
]
}
#if it.level == 2 {line(length: 100%,stroke: rgb("#eee"))}
#pad(bottom: -20pt, [])
#if it.level == 3 {
[#pad(bottom: 10pt,[])]
} else if it.level == 2 {
[#pad(bottom: -10pt,[])]
}
]
= Pythonで何ができるの?
== Pythonって何だろう?
=== Pythonって何?
- 今から20年以上前につくられたプログラミング言語
- 人工知能(機械学習)やビッグデータの解析などの研究によくつかわれる
=== Pythonの3つの特徴
*特徴1:シンプルなプログラム*\
プログラムの「処理のまとまり」を「インデント(字下げ)」を使って書くのが特徴
#sourcecode(
)[
```python
print(xxx, yyy)
if(a > b):
print(xxx, yyy) #インデント
print(zzz) #インデント
def function():
for d in list: #インデント
print(d) #インデント * 2
print(xxx, yyy)
```]
*特徴2:ライブラリが豊富*\
数学モジュールや通信モジュールなどの#light[標準モジュール]以外にも、画像処理や機械学習などの#light[外部ライブラリ]といったものもある。
*特徴3:試行錯誤がしやすい*\
Pythonはプログラムを書いたらすぐに実行できる#light[インタープリタ言語]。\
試行錯誤がやりやすい言語だから、新しいものを生み出す開発者にも向いている。
それに対し、CやJavaなどは#light[コンパイラ言語]といってプログラムを実行ファイルに変換するので手間がかかるが、その分実行速度がはやいという利点がある。
#pagebreak()
= Pythonを触ってみよう
== アイドルで始めよう!
#come("IDLE(アイドル)","info")[
手軽にPythonを実行するためのアプリ。起動すればすぐに使えるので、Pythonの動作確認をしたり初心者が勉強したりするのに向いている。
]
*命令を実行してみる*
#sourcecode[```python
# 書式 -> print()
print("値")
print("値1", "値2")
```]
#sourcecode[```python
>>> print(1 + 1)
# out -> 2
```]
*色々な演算子*
#table(
align: center,
columns: 2,
table.header(
[記号],[計算],
),
[`+`],[足し算],
[`-`],[引き算],
[`*`],[掛け算],
[`/`],[割り算],
[`//`],[割り算(小数部分を切り捨て)],
[`%`],[割り算のあまり]
)
== 文字も表示させてみよう
文字を表示するためには、#light[文字列の両側を「'(シングルクォーテーション)」か「"(ダブルクォーテーション)」で囲んで記述する。]
#sourcecode[```python
>>> print("Hello!")
# out -> Hello!
```]
*文字列と数値の組み合わせ*
#sourcecode[```python
>>> print("答えは", 10+ 20)
# out -> 答えは 30
```]
*おみくじプログラム*
`omikuji.py`
#sourcecode[```python
import random
kuji = ["大吉", "中吉", "小吉"]
print(random.choice(kuji))
# out -> 大吉 or 中吉 or 小吉
```]
*BMI値計算プログラム*
`bmi.py`
#sourcecode[```python
h = float(input("身長何cmですか?")) / 100.0
w = float(input("体重何kgですか?"))
bmi = w / (h * h)
print("あなたのBMI値は、", bmi, "です。")
# out -> 身長何cmですか? 171
# 体重何kgですか? 64
# あなたのBMI値は、21.887076365377382です。
```]
== カメでお絵描きしてみよう
#sourcecode[```python
from turtle import * #turtleモジュールをインポート/読み込む
shape("turtle") #カメの形にする
forward(100) #100px前進
done() #終了
```]
#image(
width: 200pt,
"./img/turtle1.png"
)
#sourcecode[```python
from turtle import * #turtleモジュールをインポート/読み込む
shape("turtle") #カメの形にする
for i in range(4): #4回繰り返す
forward(100) #100px前進
left(90) #90度左に回転
done()
```]
#image(
width: 200pt,
"./img/turtle2.png"
)
#sourcecode[```python
from turtle import * #turtleモジュールをインポート/読み込む
shape("turtle") #カメの形にする
col = ["orange", "limegreen", "gold", "plum", "tomato"] #色のリスト
for i in range(5): #5回繰り返す
color(col[i]) #色を変える
forward(200) #200px前進
left(144) #144度左に回転
done() #終了
```]
#image(
width: 200pt,
"./img/turtle3.png"
)
#come("iについて","info")[
`range(5) -> i`は、0から4までの5つの数字を生成する。
]
#sourcecode[```python
from turtle import *
shape("turtle")
col = ["orange", "limegreen", "gold", "plum", "tomato"]
for i in range(5):
color(col[i])
circle(100)
left(72)
done()
```]
#image(
width: 200pt,
"./img/turtle4.png"
)
#pagebreak()
= プログラムって何ができるの?
- プログラムは、どれだけ難しそうでも「順次」「分岐」「反復」の三つの組み合わせでできている。
== データは入れ物に入れて使う
#table(
columns: 3,
table.header(
[分類],[データ型],[説明]
),
[数値( 整数型 )],[int],[個数や順番に使う],
[数値( 浮動小数点型 )],[float],[小数],
[文字列型],[str],[文字( 列 )を扱うときに使う],
[ブール型],[bool],[True( 真 )かFalse( 偽 )かの二択の時に使う],
)
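A small illustrative example (added here, not part of the original handout; the variable names are made up) showing one value of each type from the table above:
#sourcecode[```python
a = 10          # int(整数型)
b = 3.14        # float(浮動小数点型)
c = "Python"    # str(文字列型)
d = True        # bool(ブール型)
print(type(a), type(b), type(c), type(d))
# out -> <class 'int'> <class 'float'> <class 'str'> <class 'bool'>
```]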
== 文字列の操作を覚えよう
=== 文字数を調べる
#table(
columns: 2,
table.header(
[メソッド],[説明]
),
[`len()`],[文字列の長さを取得],
[`count()`],[指定した文字列の出現回数を取得],
[`find()`],[指定した文字列の位置を取得],
[`replace()`],[指定した文字列を置換],
[`split()`],[指定した文字列で分割],
)
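The following snippet (an added illustration, not from the original handout) applies each method in the table above to a sample string:
#sourcecode[```python
s = "hello python"
print(len(s))                       # out -> 12
print(s.count("o"))                 # out -> 2
print(s.find("python"))             # out -> 6
print(s.replace("python", "world")) # out -> hello world
print(s.split(" "))                 # out -> ['hello', 'python']
```]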
=== 文字列の一部を取りだす
#table(
columns: 2,
table.header(
[メソッド],[説明]
),
[`[n]`],[n番目の文字を取得],
[`[n:m]`],[n番目からm番目までの文字を取得],
[`[n:]`],[n番目から最後までの文字を取得],
[`[:m]`],[最初からm番目までの文字を取得],
)
#table(
columns: 15,
table.header(
[こ],[ん],[に],[ち],[は],[。],[私],[は],[パ],[イ],[ソ],[ン],[で],[す],[。]
),
[0],[1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12],[13],[14],
[-15],[-14],[-13],[-12],[-11],[-10],[-9],[-8],[-7],[-6],[-5],[-4],[-3],[-2],[-1],
)
#sourcecode[```python
s = "こんにちは。私はパイソンです。"
print(s[0]) # こ
print(s[6:12]) # 私はパイソン
print(s[-3:]) # です。
```]
=== 文字列の途中で改行するには?
#sourcecode[```python
print("こ\nん\nに\nち\nは\n。")
```]
`\n`を使うことで、文字列の途中で改行することができる。
== データ型を変換する
=== データ型を変換する
#sourcecode[```python
a = "100"
print(a + 100) # エラー
print(int(a) + 100) # 200
```]
=== 変換できないときはエラーになる
#sourcecode[```python
a = "こんにちは"
print(int(a)) # エラー
```]
#come("書式:isinstance()","info")[
`isinstance()`は、変数が指定したデータ型かどうかを調べる関数。
]
#sourcecode[```python
a = "100"
print(isinstance(a, int)) # False
print(isinstance(a, str)) # True
```]
== たくさんのデータはリストに入れて使う
=== リストの作り方
#sourcecode[```python
a = [1, 2, 3, 4, 5]
print(a) # [1, 2, 3, 4, 5]
```]
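A brief supplementary example (added; not in the original) of reading elements from a list and appending to it:
#sourcecode[```python
a = [1, 2, 3, 4, 5]
print(a[0])    # out -> 1
print(len(a))  # out -> 5
a.append(6)
print(a)       # out -> [1, 2, 3, 4, 5, 6]
```]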
== もしも~なら実行する
#sourcecode[```python
a = 10
if a > 5:
print("aは5より大きい")
else:
print("aは5以下")
```]
#table(
columns: 2,
table.header(
[比較演算子],[説明]
),
[`==`],[等しい],
[`!=`],[等しくない],
[`>`],[より大きい],
[`<`],[より小さい],
[`>=`],[以上],
[`<=`],[以下],
)
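An illustrative snippet (added; not from the handout) showing that each comparison operator in the table returns a bool:
#sourcecode[```python
a = 10
print(a == 10)          # out -> True
print(a != 10)          # out -> False
print(a > 5, a < 5)     # out -> True False
print(a >= 10, a <= 9)  # out -> True False
```]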
=== そうでないとき
#sourcecode[```python
a = 10
if a > 5:
print("aは5より大きい")
elif a == 5:
print("aは5")
else:
print("aは5以下")
```]
== 繰り返し処理をする
*基本構造*
#sourcecode[```python
for i in range(5):
print(i)
```]
=== リストのすべての要素について繰り返す
#sourcecode[```python
a = [1, 2, 3, 4, 5]
for i in a:
print(i)
```] |
|
https://github.com/thornoar/hkust-courses | https://raw.githubusercontent.com/thornoar/hkust-courses/master/PHYS1312-Honors-General-Physics-I/homeworks/hw1/main.typ | typst | #import "@local/common:0.0.0": *
#import "@local/templates:0.0.0": *
#show: physics-preamble("Part 1", "Fri, Sep 20")
#physics-problem("1")\
Let's draw a diagram of the particle's motion and the forces acting on it:
#align(center)[
#figure(
image("./figures/path.svg"),
caption: "The trajectory of the projectile"
) <fig1>
]
The Newtonian equation of motion for this projectile will take the form
$
m (d^2 arrow(r))/(d t^2) = -b (d arrow(r))/(d t) + m arrow(g),
$ <base-eq>
where $arrow(g)$ is defined as $(0,-g)$. Our goal is to solve this differential equation in order to obtain the expression of $y$ in terms of $x$. We will now utilise the method of nondimensionalization, choosing characteristic values as follows:
+ $v_c = m g\/b$ --- characteristic speed.
+ $t_c = v_c\/g = m\/b$ --- characteristic time.
+ $l_c = v_c t_c = (m^2 g)\/b^2$ --- characteristic length.
Variables with primes will denote the dimensionless counterparts of the corresponding variables, e.g.
$ t' = t/t_c, #h(1cm) x' = x/l_c, #h(1cm) arrow(v)' = arrow(v)/v_c, #h(1cm) v_x' = v_x/v_c, #h(1cm) a' = a/a_c = a/g. $
Dividing @base-eq by $m g$ and applying algebraic transformations yields a modified differential equation in terms of dimensionless quantities:
$
(d^2 arrow(r)')/(d t'^2) = - (d arrow(r)')/(d t') + arrow.b #h(1cm) "or" #h(1cm) (d arrow(v)')/(d t') = -arrow(v)' + arrow.b,
$ <nondim-eq>
where $arrow.b$ denotes the vector $(0,-1)$. This is a separable differential equation in $v'$, which we will solve separately by each coordinate. The result is
$
v'_x (t') = v'_0 cos theta dot e^(-t'), #h(1cm) v'_y (t') = (v'_0 sin theta + 1) e^(-t') - 1.
$ <closed-v>
Integrating @closed-v, we obtain the closed-form solutions for $x'$ and $y'$:
$
x'(t') = v'_0 cos theta (1 - e^(-t')), #h(1cm) y'(t') = (v'_0 sin theta + 1)(1 - e^(-t')) - t'.
$ <closed-xy>
Finally, we write down the formula for $y'(x')$:
$
y' &= (v'_0 sin theta + 1)(1 - e^(-t')) - t'\
&= v'_0 sin theta dot (1 - e^(-t')) + (1 - e^(-t')) - t'\
&= tan theta dot x' + x'/(v'_0 cos theta) + ln(1 - x'/(v'_0 cos theta))\
&= x' (tan theta + 1/(v'_0 cos theta)) + ln(1 - x'/(v'_0 cos theta)).
$
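As a quick numerical sanity check of the closed form above (this snippet is illustrative only, not part of the assigned solution; the launch parameters are chosen arbitrarily), one can integrate the dimensionless equation of motion directly and compare the result with $y'(x')$:

```python
# Forward-Euler integration of dv'/dt' = -v' + (0, -1), compared with the closed form.
import math

v0, theta = 1.3, 0.7                       # arbitrary dimensionless launch speed and angle
vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
x = y = 0.0
dt, steps = 1e-5, 20000                    # integrate up to t' = 0.2
for _ in range(steps):
    x, y = x + vx * dt, y + vy * dt
    vx, vy = vx - vx * dt, vy + (-vy - 1.0) * dt

u = x / (v0 * math.cos(theta))
closed = x * (math.tan(theta) + 1.0 / (v0 * math.cos(theta))) + math.log(1.0 - u)
print(y, closed)                           # the two values agree to about four decimal places
```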
#physics-problem("2")\
We first examine the system from the laboratory frame. Let $v_(p x)$ and $v_(p y)$ denote the horizontal and vertical components of the particle's velocity, respectively, with $v_p$ being its total speed. By $v_s$ we will denote the horizontal velocity of the hemisphere (measured along the same axis as $v_(p x)$, and thus taking a negative value). Here is a sketch of the situation:
#align(center)[
#figure(
image("./figures/hemisphere.svg"),
caption: "The free body diagram in the laboratory frame"
) <labframe>
]
We then make use of the laws of conservation of mechanical energy (since all interactions are frictionless, and thus elastic) and of conservation of linear momentum along the horizontal direction, obtaining
$
m v_(p x) + M v_s = 0, #<clm>\
(M v_s^2)/2 + (m v_p^2)/2 = m g R (1 - cos(theta)). #<cme>
$
Let $arrow(v)'_p$ denote the velocity of the particle relative to the hemisphere. We easily see that
$
arrow(v)'_p = (v_(p x) - v_s, v_(p y)).
$ <lth>
At every moment in time, in the frame of the hemisphere, the particle is instantaneously moving in circular motion along a circle of radius $R$, and thus we can use Newton's second law to describe its centripetal acceleration:
$
(m v'_p^2)/R = m g cos(theta) - N.
$ <centripetal>
Now, we shall derive a condition that is satisfied when the particle begins to detach from the hemisphere. We note two facts specific to this moment in time:
+ The velocity vector of the particle _in the reference frame of the hemisphere_ is tangent to the hemisphere, as opposed to $arrow(v)_p$ shown on @labframe in the laboratory frame.
+ There is no longer any interaction between the hemisphere and the particle, meaning that $arrow(N) = arrow(0)$.
With $N = 0$, @centripetal yields
$
v'_p^2 = g R cos(theta).
$ <main>
On the other hand, by @clm we have $v_s = -r v_(p x)$, where $r := m/M$ is the mass ratio, and by fact 1 we have
$
v_(p y)/(v_(p x) - v_s) = tan(theta),
$
since $arrow(v)'_p$ is tangent to the hemisphere. Combining these expressions, we write
$
v'_p^2 = (v_(p x) - v_s)^2 + v_(p y)^2 = (v_(p x) - v_s)^2 (1 + tan^2(theta)) = (v_(p x)^2 (1 + r)^2)/(cos^2(theta)).
$ <eq1>
Further, from @cme we have
$
2 m g R (1 - cos(theta)) &= r m v_(p x)^2 + m(v_(p x)^2 + v_(p y)^2),\
2 g R (1 - cos(theta)) &= v_(p x)^2 (1 + r) + v_(p y)^2\
&= v_(p x)^2 ((r+1) + (r+1)^2tan^2(theta)),\
v_(p x)^2 &= (2 g R)/(r + 1) dot (1 - cos(theta))/(1 + (r+1)tan^2theta).
$
Substituting this into @eq1 and using @main, we obtain that
$
g R cos(theta) = (2 g R (1+r)^2)/((1 + r) cos^2(theta)) dot (1 - cos(theta))/(1 + (r+1)tan^2theta),\
(cos^3(theta))/(r+1) = (2 (1 - cos(theta)))/(1 + (r+1)tan^2(theta)).
$
Cross-multiplying this equation and cancelling like terms, we write
$
2(1 + r)(1 - cos(theta)) &= cos^3(theta) (1 + (r+1)tan^2(theta)),\
2(1 + r) - 2 (1 + r) cos(theta) &= cos^3 (theta) + (1 + r)cos(theta) sin^2(theta)\
&= cos^3(theta) - r cos^3(theta) - cos^3(theta) + (1 + r) cos(theta),\
r cos^3(theta) - 3 (1 + r) cos(theta) + 2(1 + r) &= 0,
$
q.e.d.
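A brief numerical aside (not required by the problem): with $r := m/M$ as above, the cubic can be solved by bisection, and the limit $r -> 0$ recovers the classical fixed-hemisphere value $cos theta = 2/3$.

```python
# Root of r*c^3 - 3(1+r)*c + 2(1+r) = 0 for c = cos(theta) in (0, 1), found by bisection.
def detachment_cos(r, tol=1e-12):
    f = lambda c: r * c**3 - 3 * (1 + r) * c + 2 * (1 + r)
    lo, hi = 0.0, 1.0            # f(0) = 2(1+r) > 0 and f(1) = -1 < 0, so a root lies in (0, 1)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(detachment_cos(0.0))       # ~0.6667: fixed hemisphere, cos(theta) = 2/3
print(detachment_cos(1.0))       # ~0.7321: equal masses, the particle detaches higher up
```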
|
|
https://github.com/YDX-2147483647/herglotz | https://raw.githubusercontent.com/YDX-2147483647/herglotz/main/fourier.typ | typst | #import "template.typ": project, example, remark, table-header
// Workaround for footnote style
#show link: set text(fill: blue)
#show link: underline
// Footnote style must be uniform for the whole flow of content.
// Therefore, set and show rules for `footnote.entry` must be defined at the beginning.
// (at least for typst v0.10.0)
// https://github.com/typst/typst/issues/1348#issuecomment-1566316463
// https://typst.app/docs/reference/model/footnote/#example
#show: project.with(title: "最大公约数的Fourier变换", date: "2023年10月20、26日,11月20日,12月2、13–16、19日")
#let fourier(symbol) = math.attach(math.cal("F"), br: h(-0.5em) + symbol)
#let bullet = math.circle.filled.small
整数 $k,n$ 的*最大公约数*记作 $gcd(k,n)$。固定 $n$、变动 $k$ 得到的函数 $k |-> gcd(k,n)$ 以 $n$ 为*周期*。数列既然有周期,就可以应用*离散Fourier变换*。
更进一步,任给*数论函数*(任意 $ZZ^+ -> CC$ 函数)$f$,都能用 $gcd$“改造”成周期函数 $k |-> f(gcd(k,n))$(因为 $gcd(k,n) in ZZ^+$),且也能给它应用离散Fourier变换,结果是
$
m |-> fourier(f)(m,n)
:= sum_(k=1)^n f(gcd(k,n)) times omega_n^(-k m),
$
其中 $omega_n$ 是 $n$ 次单位根($omega_n := exp((2pi i)/n)$,$omega_n^l := exp(2 pi i l/n)$)。
我们最终将导出 $fourier(f)$ 的另一公式。(这一结果是 Wolfgang Schramm 2008年的文章)
#example[具体例子][
以 $k |-> gcd(k, 4)$ 为例。
#align(
center,
table(
columns: (auto, auto, auto, auto, auto, auto, auto, auto, auto, auto, auto, auto),
align: center,
stroke: none,
[*$k$*], table.vline(), $dots.c$, $[1$, $2$, $3$, $4]$, $[5$, $6$, $7$, $8]$, $9$, $dots.c$,
table.hline(),
[*$gcd(k, 4)$*], $dots.c$, $[1$, $2$, $1$, $4]$, $[1$, $2$, $1$, $4]$, $1$, $dots.c$,
)
)
$omega_4 = i$,Fourier变换结果如下。
$
... space.quad &|-> space.quad ... \
\
m=1 space.quad &|-> space.quad i^(-1) + 2i^(-2) + i^(-3) + 4i^(-4) = -i - 2 + i + 4 &= 2. \
m=2 space.quad &|-> space.quad i^(-2) + 2i^(-4) + i^(-6) + 4i^(-8) = -1 + 2 - 1 + 4 &= 4. \
m=3 space.quad &|-> space.quad i^(-3) + 2i^(-6) + i^(-9) + 4i^(-12) = i -2 -i + 4 &= 2. \
m=4 space.quad &|-> space.quad i^(-4) + 2i^(-8) + i^(-12) + 4i^(-16) = 1 + 2 + 1 + 4 &= 8. \
\
m=5 space.quad &|-> space.quad i^(-5) + 2i^(-10) + i^(-15) + 4i^(-20) = -i - 2 + i + 4 &= 2. \
... space.quad &|-> space.quad ... \
$
]
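The example above is easy to check numerically; the following short script (illustrative only, not part of the original text) reproduces the values 2, 4, 2, 8:

```python
# DFT of k -> gcd(k, 4): the outputs for m = 1, 2, 3, 4 should be 2, 4, 2, 8.
from math import gcd
import cmath

n = 4
for m in range(1, n + 1):
    total = sum(gcd(k, n) * cmath.exp(-2j * cmath.pi * k * m / n) for k in range(1, n + 1))
    print(m, round(total.real, 6), round(total.imag, 6))
```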
#figure(
image("fig/gcd.png", width: 80%),
caption: [
$6,30,100$ 以内数的 $gcd$
第 $y$ 行第 $x$ 列的颜色表示 $(x,y)$。如图例所示,越深越大(线性对应),但直接比较不同图的颜色无意义。
(2020年8月11日有动图)
]
)
// Mathematica:
// Table[Table[GCD[x, y], {x, n}, {y, n}] //
// MatrixPlot[#, PlotLegends -> Placed[Automatic, Bottom],
// ImageSize -> Small] &, {n, {6, 30, 100}}] // Row
= Dirichlet卷积
@tab:units 介绍了最基础的三个数论函数 $1, id, delta$。
#figure(
table(
columns: 3,
align: (x, y) => (if y == 0 or x < 2 { center } else { start }) + horizon,
table-header[记号][定义][意义],
$1$, $1(n) equiv 1$,
[恒一,函数相乘的单位元:$f times 1 = 1 times f = f$],
$id$, $id(n) equiv n$,
[恒等,函数复合的单位元:$f compose id = id compose f = f$],
$delta$, $delta(n) = cases(1 &space n = 1, 0 &space n != 1)$,
[Dirac $delta$#footnote[有些文献写作 $n |-> delta_(n 1)$,这次 $delta$ 是 Kronecker $delta$。],Dirichlet卷积的单位元:$f * delta = delta * f = f$],
),
caption: [三种“单位”数论函数]
) <tab:units>
- *函数相乘*
$ (f times g)(n) := f(n) times g(n). $
- *函数复合*
$ (f compose g)(n) := f(g(n)). $
- *Dirichlet#footnote[Dirichlet是德国人,但名字源于法国。德语 _ch_ 发硬音,故名字应读作 _Dee-REECH-let_。说英语的人很少这样念。他们要么按法语读作 _Dee-REESH-lay_,要么各取一半读作 _Dee-REECH-lay_。(来源:《素数之恋》90页 §6.V 的作者注31.和译者注)]卷积*
$
(f * g)(n) &:= sum_(a b = n) f(a) g(b) \
&= sum_(a|n) f(a) g(n/a) = sum_(b|n) f(n/b) g(b).
$
($a|n$ 表示 $a$ 整除 $n$,即 $n/a in ZZ$)
这针对Dirichlet级数很自然:
$
sum_a f(a)/a^s times sum_b g(b)/b^s
= sum_a sum_b (f(a) g(b)) / (a b)^s
= sum_n ((f*g)(n))/n^s.
$
#remark[更常见的那种卷积][
Dirichlet卷积针对Dirichlet级数,而更常见的那种卷积针对幂级数#footnote[又名数列的生成函数、信号的Z变换、概率分布的特征函数、……]。
那种卷积记作 $star$,则
$
sum_a f(a) x^a times sum_b g(b) x^b
= sum_a sum_b f(a) g(b) x^(a+b)
= sum_n (f star g)(n) x^n.
$
于是要定义
$
(f star g)(n) := sum_(a + b = n) f(a) g(b) = sum_a f(a) g(n-a) = sum_b f(n - b) g(b).
$
——就是指数函数(乘法)和幂函数(加法)的区别:
$
a^s b^s &= (a b)^s =: n^s, &space.quad 1^s = 1. \
x^a x^b &= x^(a+b) =: x^n, &space.quad x^0 = 1. \
$
]
#remark[Dirichlet卷积没有丢点][
如无其它限制,完全知道 $f,g$ 才能计算 $f times g$。然而 $f*g$ 看似不需要,例如 $n = 4$ 时结果为 $f(1)g(4) + f(2)g(2) + f(4)g(1)$,用不到 $f(3),g(3)$。不过 $f*g$ 是函数,其它 $n$ 会用到。可证明若 $g$ 满足 $forall f, f*g equiv 0$,则 $g equiv 0$。
]
#example[Euler totient $phi$][
有许多定理都能用Dirichlet卷积表示,$phi * 1 = id$ 便是一例。
Euler totient $phi(n)$ 的定义是 $1, 2, ..., n$ 中与 $n$ 互质的数的数量。这也可矫揉造作地写成
$
phi(n) := sum_(k perp n) 1,
$
其中 $k perp n$ 表示 $gcd(k,n) = 1$,即互质。(求和隐含 $1 <= k <= n$)
我们试一些具体例子。#footnote[聪明的人能从中提炼出 $phi$ 的性质。]
- 例如 $7$ 是质数,$1,...,6$ 都与它互质,故 $phi(7) = 6$。
- 又如 $8 = 2^3$,于是 $2$ 的整倍数都和 $8$ 有公因子 $2$,只剩下 $1,3,5,7$ 与 $8$ 互质,故 $phi(8) = 4$。
- 再举个稍复杂的例子。$6 = 2 times 3$,抛去 $2$ 和 $3$ 的整倍数,和 $6$ 互质的只有 $1,5$,于是 $phi(6) = 2$。
现在来看 $phi * 1 = id$。按照Dirichlet卷积和函数 $1,id$ 的定义,这是说
$
sum_(b|n) phi(n/b) = n.
$
不要被形式吓到;这可以被理解。以 $n=6$ 为例,我们可列出 $(0,1]$ 中以 $6$ 为分母的分数,然后约分:
#figure(
image("fig/partition.png", width: 30%),
caption: [按约分程度将 ${1,...,6}$ 分成几类]
)
$
mat(1/6, 2/6, 3/6, 4/6, 5/6, 6/6)
= mat(1/6, 1/3, 1/2, 2/3, 5/6, 1/1).
$
- 分母 $6$ 与分子 $1,5$ 互质,约分不了,分母还是 $6$——这种情况按定义有 $phi(6)$ 个。
- 分母 $6$ 有因子 $2$,分子 $2,4$ 是其整倍数,可约去 $2$,分母变为 $3$——这种情况按定义有 $phi(3)$ 个。
- 同理,分母 $6$ 有因子 $3$,分子 $3$ 是其整倍数,约去后分母变为 $2$——这种情况按定义有 $phi(2)$ 个。
- $6$ 也是自己的因子,$6/6 = 1/1$——这种情况按定义有 $phi(1)$ 个。
以上讨论正是遍历 $b|n$ 及 $phi(n/b)$,并且不重不漏,加起来是最初列出的分数的数量,也就是 $6$。
总结一下,$phi * 1 = id$ 其实是说 ${b k: k perp n/b}, space b|n$ 是 ${1,...,n}$ 的一个划分。
由 $gcd(b k, n) = b gcd(k, n/b)$,可将 $b k$ 看作 $k$,得到 ${b k : k perp n/b} = {k: gcd(k, n) = b}$,于是显然这些集合不重不漏。
]
= Ramanujan和
有了 $phi * 1 = id$ 那种按约分程度重排集合的观点,结合一些算术知识,我们能推知一些原本看起来不着边际的定理。
#figure(
image("fig/One5Root.svg", width: 40%),
caption: [五次单位根 $omega_n^l$(蓝点)的中心是零 | #link("https://commons.wikimedia.org/wiki/File:One5Root.svg")[Wikimedia Commons `One5Root.svg`]]
)
比如若 $n > 1$,则 $sum_l omega_n^l = 0$(单位根转一圈的中心是零)#footnote[可用等比数列求和公式计算,也可用乘 $omega_n$ 不变来论证。],我们把求和范围 $l = 1,...,n$ 重排,立即得到
$
0 = sum_(b|n) sum_(k perp n/b) omega_n^(b k)
= sum_(a|n) sum_(k perp a) omega_a^k,
$
其中第二个等号把 $n/b$ 代换成了 $a$,同时约分 $(b k)/n = k/a$。像 $omega_a^k, k perp a$ 这种不再能约分的单位根称作 _primitive_ $a$-th root of unity。
这一结果也能用Dirichlet卷积表示——等式最右边正是函数 $a |-> sum_(k perp a) omega_a^k$ 与 $1$ 的卷积。一般地,*Ramanujan和*#footnote[此处“和”指加法的结果,不是连词。] $c$ 的定义是
$
c_n (m) := sum_(k perp n) omega_n^(k m).
$
- 求和仍隐含 $1 <= k <= n$。
- $m=1$ 时,$n |-> c_n (1)$(简记作 $c_bullet (1)$)正是我们刚才提到的 $a |-> sum_(k perp a) omega_a^k$。
- $m = 0$ 时,$sum$ 相当于计数,即 $c_bullet (0) = phi$。
看到这些事实,敏锐的人已可直奔本文结论;不过这之前我们再分析一下Dirichlet卷积。
= Möbius反演
#remark[咬文嚼字][
我们要谈的是数论函数的Möbius反演公式(Möbius inversion formula),而非复平面的Möbius变换。二者都以August Ferdinand Möbius命名。
前者也叫Möbius transform,后者是Möbius transformation。#link("https://www.merriam-webster.com/")[Merriam-Webster词典]中,_transform_ 之2就是“_transformation_ sense 3a(1)”,相关解释如下。
- #link("https://www.merriam-webster.com/dictionary/transform")[_transform_] 1: a mathematical element obtained from another by transformation.
- #link("https://www.merriam-webster.com/dictionary/transformation")[_transformation_] 3a(1): the operation of changing (as by rotation or mapping) one configuration or expression into another in accordance with a mathematical rule.
我个人感觉 transformation 更偏 $V->W$,如线性变换#footnote[有些作者规定“变换”必须 $V->V$,“映射”只需 $V->W$。按这种说法,linear transformation应当是线性“映射”。];而 transform 更偏 $W^V -> W^V$,如 Fourier 变换。然而“$W^V$”也能看成一种“$V$”,比如Fourier变换就是一种线性变换。SE 有相关讨论:
- #link("https://math.stackexchange.com/questions/1154581/is-there-a-difference-between-transform-and-transformation")[definition - Is there a difference between transform and transformation? - Mathematics Stack Exchange]
- #link("https://english.stackexchange.com/questions/5454/transform-or-transformation/5456#5456")[word choice - Transform or transformation? - English Language & Usage Stack Exchange]
]
== 逆
前面提到“数论*函数*之间Dirichlet*卷积*”相当于“相应Dirichlet*级数*直接*相乘*”,而后者满足交换律、结合律#footnote[
直接从定义 $(f * g)(n) := sum_(a b = n) f(a) g(b)$ 也能论证。
],于是我们可进一步考虑Dirichlet卷积意义下的*逆*。
考虑最简单也最一般的 $1$ 的逆。
刚刚我们得到 $n>1$ 时 $(1 * c_bullet (1))(n) = 0$,而 $(1 * c_bullet (1))(1)$(按Dirichlet卷积的定义)只有一项 $1 times c_1 (1) = 1 times 1^(1 times 1) = 1$,于是 $1 * c_bullet (1) = delta$ —— $c_bullet (1)$ 正是 $1$ 的逆。
不过我们对 Ramanujan 和了解有限。比如 $c_74 (1)$ 这个数是多少?你知道它的定义是 $sum_(k perp 74) omega_74^k$,而 $74 = 2 times 37$,$k$ 可取 $1,3,5,7, ..., 35, 39, ..., 73$,然后这 $36$ 项加起来是几呢?哦,你可以先补上 $k=37$ 凑等比数列,$sum_(m = 0)^36 omega_74^(2m+1) = omega_74 times (1 - omega_74^74) \/ (1 - omega_74^2) = 0$,再减去 $omega_74^37 = omega_2 = -1$,得到 $c_74 (1) = 1$。行,可 $c_(37 times 41) (1)$ 怎么办呢?可以算两次等比数列的和。那 $c_(37 times 41 times 43) (1)$?三重求和。……
无论如何,$c_74 (1) = 1$ 总有些蹊跷,事实上 $c_bullet (1) in {0, plus.minus 1}$,下面将介绍其中缘由。
#remark[时间线][
我们将发现 $c_bullet (1)$ 是 Möbius $mu$ 函数。
- 1918年,Ramanujan(1887–1920)提出Ramanujan和。
- 1874年,Mertens(1840–1927)为 Möbius 函数引入了记号 $mu(n)$。
以他自己名字命名的Mertens函数是 $sum mu(n)$,它在 $n -> +oo$ 的渐近性质与Riemann猜想有关。
- 1859年,Riemann(1826–1866)的9页文章 _Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse_(关于小于给定数的质数数量)开创了解析数论。
- 1837年,Dirichlet(1805–1859)宣读的论文提出了Dirichlet级数,文中记作 $L$。#footnote[
_Beweis des Satzes, dass jede unbegrenzte arithmetische Progression, deren erstes Glied und Differenz ganze Zahlen ohne gemeinschaftlichen Factor sind, unendlich viele Primzahlen enthält_ 这篇文章证明了首项、公差互质的等差数列中有无穷多质数(如题目所言),还明确区分了条件收敛、绝对收敛。#link("https://arxiv.org/abs/0808.1408")[arXiv:0808.1408] 有英语翻译。
]
- 1832年,Möbius(1790–1868)提出Möbius函数。
- 1801年,Gauß(1777–1855)事实上讨论了Möbius函数,他给出 $sum_(k perp a) omega_a^k in {0, plus.minus 1}$ 以及每一种情况的充要条件。#footnote[
Gauß、Möbius、Mertens三个年份的来源是 #link("https://mathworld.wolfram.com/MoebiusFunction.html")[Möbius Function - Wolfram MathWorld]。
]
下面我们将提到Euler十八世纪的工作。
]
== Euler积公式
$1$ 对应的Dirichlet级数正是Riemann $zeta$ 函数
$
zeta(s) := sum_n 1/n^s.
$
(求和隐含 $n in ZZ^+$)如果能把 $1 / zeta(s)$ 写成Dirichlet级数的形式,那么系数就是 $1$ 的逆。
怎么操作呢?先把 $zeta(s)$ 写成简单因子的乘积。Euler 1737年撰写的 _Variae observationes circa series infinitas_#footnote[
你可以在#link("http://eulerarchive.maa.org/")[The Euler Archive (`eulerarchive.maa.org`)] 的(左边栏)Search archive by → Subject and Index →(中间靠下)Search by Subject → Mathematics → Infinite Series下载到72号文章 _Variae observationes circa series infinitas_ 的拉丁原文和英语、德语翻译。
] 的定理8指出
$
sum_n 1/n^s = product_p 1/(1-p^(-s)),
$
其中 $p$ 取遍质数。
#remark[证明Euler积公式][
这个证明重写了一遍Ἐρατοσθένης #footnote[Ἐρατοσθένης (Eratosthenes) 这个名字的词源是 _ἐρᾰτός_ (eratós, #text(lang: "en")[“lovely”]) + _σθένος_ (sthénos, #text(lang: "en")[“strong”])。] 质数筛,改写自Euler原文。
对比
$
zeta(s)
= sum_n 1/n^s
&= 1 + 1/2^s + 1/3^s + 1/4^s + 1/5^s + dots.c, \
2^(-s) zeta(s)
= sum_n 1/(2n)^s
&= 1/2^s + 1/4^s + dots.c,
$
两项相减则从 $ZZ^+$ 中删去 $2$ 的倍数,即
$
(1 - 2^(-s)) zeta(s) = 1 + 1/3^s + 1/5^s + dots.c.
$
同理,$(1-2^(-s)) (1-3^(-s)) zeta(s)$ 会继续删去 $3$ 的倍数。以此类推,逐个删去 $p$ 的倍数,最后只剩第一项 $1$。于是
$
product_p (1-p^(-s)) times zeta(s) = 1.
$
]
#remark[再次证明Euler积公式][
第一个证明是从 $ZZ^+$ 筛去质数的倍数;这个证明是从用质因数乘积重组 $ZZ^+$,来自 #link("https://mathworld.wolfram.com/EulerProduct.html")[Euler Product - Wolfram MathWorld]。
$
product_p 1/(1-p^(-s))
&= product_p sum_(k in NN) 1 / p^(k s) \
&= 1 + sum_p 1/p^s + sum_(p_1 <= p_2) 1/(p_1 p_2)^s + dots.c \
&= sum_n 1/n^s.
$
1. 第一个等号是几何级数 $1/(1-q) = sum_(k in NN) q^k$ 。
2. 第二个等号乘开了级数。
0. 首项所有 $k$ 取零。
1. 次项所有 $k$ 的和是 $1$:一个质数 $p$ 对应的 $k$ 取 $1$,其余都取零。
2. 下一项所有 $k$ 的和是 $2$:要么两个质数 $p_1,p_2$ 对应的 $k$ 取 $1$,其余取零;要么 $p_1=p_2$ 对应的 $k$ 取 $2$,其余取零。
3. ……
3. 第三个等号是用质因数重组。
]
现在把 $1/zeta(s)$ 写成Dirichlet级数。
$
1/zeta(s)
&= product_p (1 - 1/p^s) \
&= 1 - sum_p 1/p^s + sum_(p_1<p_2) 1/(p_1 p_2)^s - sum_(p_1<p_2<p_3) 1/(p_1 p_2 p_3)^s plus.minus dots.c \
&= 1 - 1/2^s - 1/3^s - 1/5^s + 1/(2 times 3)^s - 1/7^s - 1/(2 times 5)^s plus.minus dots.c \
&=: mu(n) / n^s.
$
1. 第一个等号是Euler积公式。
2. 第二个等号乘开了级数。
0. 首项全都取 $1$。
1. 次项一个取 $- 1/p^s$,其余都取 $1$。
2. 下一项仅 $2$ 个取 $- 1/p^s$,其余都取 $1$。总计 $2$ 个负号,负负得正。
3. ……
3. 第三个等号按分母从小到大排序。
4. 最后一个等号整理成Dirichlet级数,将系数记作 $mu(n)$
这样我们就算得 $1$ 的逆也等于 $mu$,它通称 *Möbius 函数*。按照规律,
$
mu(n) := cases(
0 &quad n"包含重复质因子",
(-1)^k &quad n"的质因子互不重复,共"k"个"
).
$
== 再看 $1$ 的逆
$mu(n)$ 的定义并不十分诡异,它大致是给质因子计数,还比较规整。例如按定义分类讨论,可论证“若 $a perp b$,则 $mu(a) times mu(b) = mu(a b)$”,这种性质称作multiplicative。
#remark[$gcd$ 的性质][
$gcd(bullet, n)$ 也multiplicative。这是因为若 $a perp b$,则 $m|a and m|b <==> m | a b$。
]
从“给质因子计数”这一角度,我们也可直接论证 $1 * mu = delta$。
$(1 * mu)(n) = sum_(b|n) mu(b)$。设 $n$ 的质因数分解是 $product_(k=1)^K (p_k)^(a_k)$,则抛去 $mu(b) = 0$ 的项,只考虑无重复质因子的 $b$,有如下可能。
0. $b = 1 perp n$ 没有质因子,$mu(b) = 1$。
1. $b in {p_1, ..., p_K}$ 有单个质因子,这 $K$ 种情况都有 $mu(b) = -1$。
2. $b in {p_1 p_2, p_1 p_3, ..., p_2 p_3, ..., p_(K-1) p_K}$ 有 $2$ 个质因子,从 $K$ 个质因子选 $2$ 个有 $binom(K, 2)$ 种可能,他们都有 $mu(b) = (-1)^2$。
3. $b$ 含 $3$ 个质因子有 $binom(K, 3)$ 种可能,他们都有 $mu(b) = (-1)^3$。
4. ……
因此由二项式定理
$
(1 * mu)(n)
= sum_(k = 0)^K binom(K, k) (-1)^k
= (1 - 1)^K
= 0^K = 0,
quad K != 0.
$
而 $K=0$ 当且仅当 $n = 1$,此时 $(1*mu)(n) = mu(1) = 1$。综合两种情况,可知 $1 * mu = delta$。
== 反演
知道了 $1$ 的逆是 $mu$,我们立即得到Möbius反演公式:对任意数论函数 $f,g$,
$
f = g * 1. quad ==> quad g = f * mu.
$
#example[$phi$ 的另一种表示][
上文我们按约分程度重排集合,证明了 $phi * 1 = id$。于是 $phi = phi * 1 * mu = id * mu$。
]
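A small numerical check of $phi = id * mu$ (the code below is an added illustration, not from the original article):

```python
# Verify phi(n) = sum over d | n of mu(d) * (n / d) for small n, with both functions written from scratch.
from math import gcd

def phi(n):
    # Euler totient, directly from the definition.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def mu(n):
    # Möbius function by trial division: 0 if a squared prime divides n, else (-1)^(number of prime factors).
    m, count, p = n, 0, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            count += 1
        else:
            p += 1
    if m > 1:
        count += 1
    return (-1) ** count

for n in range(1, 50):
    assert phi(n) == sum(mu(d) * (n // d) for d in range(1, n + 1) if n % d == 0)
print("phi = id * mu holds for n = 1..49")
```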
#example[$c_n (m)$ 总是整数][
虽然 $c_n (m)$ 的定义是一堆复数之和,但 $m in ZZ$ 时总有 $c_n (m) in ZZ$。
只用最基础的数论,我们可论证 $c_n (m) in RR$:$k perp n <==> (n-k) perp n$,而 $m in ZZ$ 时 $(n-k)m equiv -k m space (mod n)$,于是 $c_n (m)$ 的定义取共轭不变。下面我们考虑Möbius反演。
追究引入Ramanujan和的过程,可推广 $1 * c_bullet (1) = delta$:(这套操作后文马上会再次用到)
$
(1 * c_bullet (m))(n)
&= sum_(a|n) sum_(k perp a) omega_a^(k m)
&= sum_(b|n) sum_(k perp n/b) omega_n^(b k m)
&= sum_(l=1)^n omega_n^(l m).
$
现在给等比数列求和:
- 公比 $omega_n^m = 1$(即 $n|m$)时,和是 $n times 1 = n$;
- 公比 $omega_n^m != 1$ 时,$(omega_n^m)^n = omega_n^(n m) = omega_1^m = 1$,这个数列在单位圆上均匀分布,和是零。
综合两种情况,记 $f := 1 * c_bullet (m)$,则总有 $f(n) in ZZ$。
由Möbius反演,$c_bullet (m) = mu * 1 * c_bullet (m) = mu * f$。注意Dirichlet卷积的每一项都是整数,从而加起来也是整数。
]
= Fourier变换
回顾前文:
- 按约分程度重排集合—— ${b k: k perp n/b}, space b|n$ 是 ${1,...,n}$ 的一个划分。
- Ramanujan和—— $c_n (m) := sum_(k perp n) omega_n^(k m)$。
- Fourier变换—— $k |-> f(gcd(k,n))$ 变换为 $m |-> fourier(f)(m,n) := sum_(k=1)^n f(gcd(k,n)) times omega_n^(-k m)$。
根据“按约分程度重排集合”,对任意数论函数 $g$,
$
sum_(k=1)^n g(k) = sum_(b|n) sum_(k perp n/b) g(b k) = sum_(a|n) sum_(k perp a) g(n/a k).
$
代入Fourier变换,得
$
fourier(f)(m,n) = sum_(b|n) sum_(k perp n/b) f(gcd(b k, n)) times omega_n^(-b k m).
$
注意 $gcd(b k, n) = b gcd(k, n/b) = b$ 不含 $k$,并且记 $a = n/b$ 则 $omega_n^(-b k) = omega_(b a)^(-b k) = omega_a^(-k)$。代回得
$
fourier(f)(m,n)
&= sum_(a b = n) sum_(k perp a) f(b) times omega_a^(-k m)
&= sum_(a b = n) f(b) times sum_(k perp a) omega_a^(-k m).
$
按照Dirichlet卷积与Ramanujan和的定义,这等于 $(f * c_bullet (-m))(n)$。
又 $c_bullet (m) in RR$ 共轭不变,代入 $c_bullet (-m) = c_bullet (m)$,得如下形式。
#align(center, rect(inset: (x: 1em))[
$k |-> f(gcd(k,n))$ 的Fourier变换
$
m |-> fourier(f)(m,n) = (f * c_bullet (m))(n).
$
])
注意这没有把 $fourier(f)(bullet, n)$ 表示为 $f(gcd(bullet,n))$ 与谁的卷积,而是逐点表示成了某种卷积在 $n$ 处的值,各点卷积的对象并不相同。
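The boxed identity can be tested numerically; the sketch below (assumed helper code, not from the original article) compares both sides for $f = id$ and several pairs $(m, n)$:

```python
# Compare the DFT of k -> f(gcd(k, n)) with the Dirichlet convolution (f * c_.(m))(n).
from math import gcd
import cmath

def c(d, m):
    # Ramanujan sum c_d(m).
    return sum(cmath.exp(2j * cmath.pi * k * m / d) for k in range(1, d + 1) if gcd(k, d) == 1)

f = lambda x: x  # take f = id as an example
for n in (6, 12, 30):
    for m in range(1, n + 1):
        lhs = sum(f(gcd(k, n)) * cmath.exp(-2j * cmath.pi * k * m / n) for k in range(1, n + 1))
        rhs = sum(f(n // d) * c(d, m) for d in range(1, n + 1) if n % d == 0)
        assert abs(lhs - rhs) < 1e-6, (n, m)
print("F_f(m, n) = (f * c_.(m))(n) holds for every tested (m, n)")
```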
#remark[更常见的那种Fourier变换][
以 $n$ 为周期的 $ZZ -> CC$ 函数 $k |-> f(k)$,它的Fourier变换是
$
m &|-> sum_(k=1)^n f(k) times omega_n^(k m)
&= sum_(k=1)^n f(k) times omega_n^(-(-k) m)
&= (f star h_bullet (m))(0).
$
其中 $star$ 是更常见的那种卷积,$h_k (m) = omega_n^(-k m)$。($f star h_bullet (m)$ 是周期函数,在 $0$ 处的值也等于在 $n$ 处的值。)
]
#figure(
grid(
columns: (1fr, 1fr),
image("fig/Ramanujan_sum-fix_m.png"),
image("fig/Ramanujan_sum-fix_n.png"),
),
caption: [
$c_bullet (m)$ 与 $c_n (bullet)$
每一条图线的阴影水平线是横轴,图线的绝对高度没有意义。
$c_bullet (m): ZZ^+ -> ZZ$,画图时用线段连接离散点;$c_n (bullet): RR -> CC$,分实虚绘制,有阴影的为实部。
],
)
#figure(
image("fig/Ramanujan_sum-complex.png", width: 80%),
caption: [$c_n (RR)$ 在 $CC$ 中]
)
= 应用
结合刚刚得到的
$
fourier(f)(m,n) = (f * c_bullet (m))(n)
$
与Fourier正反变换
$
m |-> fourier(f)(m,n) &:= sum_(k=1)^n f(gcd(k,n)) times omega_n^(-k m), \
k |-> f(gcd(k,n)) &= 1/n sum_(m=1)^n fourier(f)(m,n) times omega_n^(k m)
$
能得到许多结果。
#example[用新方法表示 $gcd$][
取 $f = id$,则
$
fourier(id)(m,n)
= (id * c_bullet (m))(n)
= sum_(b|n) n/b times c_b (m).
$
于是
$
gcd(k,n)
&= id(gcd(k,n)) \
&= 1/n sum_(m=1)^n fourier(id)(m,n) times omega_n^(k m) \
&= 1/n sum_(m=1)^n sum_(b|n) n/b times c_b (m) times omega_n^(k m) \
&= sum_(m=1)^n omega_n^(k m) sum_(b|n) (c_b (m))/b.
$
]
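This representation of $gcd$ can likewise be verified directly (again an illustrative script, not part of the article):

```python
# Check gcd(k, n) = sum over m = 1..n of omega_n^{k m} * sum over b | n of c_b(m) / b, for n = 12.
from math import gcd
import cmath

def c(d, m):
    # Ramanujan sum c_d(m).
    return sum(cmath.exp(2j * cmath.pi * k * m / d) for k in range(1, d + 1) if gcd(k, d) == 1)

n = 12
for k in range(1, n + 1):
    total = sum(
        cmath.exp(2j * cmath.pi * k * m / n)
        * sum(c(b, m) / b for b in range(1, n + 1) if n % b == 0)
        for m in range(1, n + 1)
    )
    assert abs(total - gcd(k, n)) < 1e-6
print("the formula reproduces gcd(k, 12) for every k")
```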
#example[用 $gcd$ 表示新东西][
上面用了Fourier反变换,这里再用Fourier正变换。
$
sum_(k=1)^n gcd(k,n) omega_n^(-k m) =: fourier(id)(m,n) = (id * c_bullet (m))(n).
$
注意Fourier变换结果是个函数,我们直接得到一组结果。
$
sum_(k=1)^n gcd(k,n) &= (id * phi)(n). \
sum_(k=1)^n gcd(k,n) omega_n^(-k) &= (id * mu)(n). \
... &= ...
$
]
#example[重看 $1 * c_bullet (1) = delta$][
$1$ 的 Fourier 变换是 $delta$,而按我们的公式这也等于 $1 * c_bullet (1)$。
]
#example[乘法][
以Fourier正变换,取 $m=0$ 为例,$sum_k f(gcd(k,n)) =: fourier(f)(0,n) = (f * phi)(n)$。
取 $f = log compose g$,则 LHS 的 $exp$ 为
$
exp sum_k log g(gcd(k,n))
= product_k g(gcd(k,n)),
$
而 RHS 的 $exp$ 为
$
exp(((log compose g) * phi)(n))
= exp sum_(a b = n) log g(a) times phi(b)
= product_(a b = n) g(a)^phi(b).
$
]
#example[$CC$ 的大门][
数论常常限制在 $ZZ^+$ 中,而Fourier变换字面上在 $CC$ 中,于是我们的公式可让一些 $ZZ^+$ 的概念拓展到 $CC$。
举个例子,将互质概念从 $ZZ^+ times ZZ^+$ 拓展到 $CC times ZZ^+$。$k perp n
<=> delta(gcd(k,n)) != 0$,注意按我们的公式 $fourier(delta)(m,bullet) = delta * c_bullet (m) = c_bullet (m)$。于是按Fourier反变换,
$
delta(gcd(k,n))
= 1/n sum_m c_n (m) times omega_n^(k m)
$
它等于零即向量 $c_n (bullet)$ 与 $m |-> omega_n^(k m)$ 正交,后者对一般 $k in CC$ 也有意义。
]
其实 Wolfgang Schramm 2008年文章足足有13个例子,并总结成了关于 $f, m$ 的二维表格。然而这里只引入了 $1,id,delta$ 和 $c$ 四个数论函数,一描述就捉襟见肘,还请有志者参考原文吧。
#set heading(numbering: none)
= 他典等
- John Derbyshire 著、陈为蓬译《素数之恋》(ISBN 978-7-5428-4776-8)
- #link("https://en.wikipedia.org/wiki/Euler%27s_totient_function#Fourier_transform")[Fourier transform - Computing Euler's totient function - Euler's totient function - Wikipedia]
- Wolfgang Schramm #link("http://math.colgate.edu/~integers/i50/i50.pdf")[The Fourier transform of functions of the greatest common divisor (`math.colgate.edu`)]
- 3Blue1Brown #link("https://www.bilibili.com/video/BV1kx411q7kK/")[隐藏在素数规律中的 $π$ - 哔哩哔哩]
- #link("https://proofwiki.org/wiki/Sum_of_M%C3%B6bius_Function_over_Divisors")[Sum of Möbius Function over Divisors - Pr∞fWiki]
- #link("https://en.wikipedia.org/wiki/Dirichlet_convolution")[Dirichlet convolution - Wikipedia]
- Mathologer #link("https://youtu.be/LFwSIdLSosI")[Euler’s Pi Prime Product and Riemann’s Zeta Function - YouTube]
- 孟祥蕊、王淑红《#link("https://mp.weixin.qq.com/s/-F8kFvqiSxvw_D7AvmVTXg")[狄利克雷——科研教学两不误的数学家]》,科学杂志1915(`kexuemag`);返朴(`fanpu2019`)重排版并改名“#link("https://mp.weixin.qq.com/s/IxO5S_m_UFd8x5NvrLJIWQ")[黎曼认为他是高斯之外最伟大的数学家,现代函数概念出自他之手]”
|
|
https://github.com/LDemetrios/Conspects-4sem | https://raw.githubusercontent.com/LDemetrios/Conspects-4sem/master/typst/sources/alg.typ | typst | // #import "header.typ": *
// #show: general-style
Рёбер $n^2$
Отсортировать их --- $O(n^2 log (n^2)) = O(n^2 log n)$
Бинпоиск по ним обойдётся нам в $log n$
Для каждой границы спросим, можно ли построить паросочетание для границы меньше, чем эта --- за $O(n/2 * n^2)$
Итого сложность $O(n^2 log n + n^3 log n) = O(n^3 log n) $
#repr(sys.inputs)
|
|
https://github.com/Kasci/LiturgicalBooks | https://raw.githubusercontent.com/Kasci/LiturgicalBooks/master/CSL_old/oktoich/Hlas5/3_Streda.typ | typst | #let V = (
"HV": (
("","Rádujsja póstnikom","Vozdvížesja jákože áhnec na kresťí, i voznošénija ľstívaho nizvérhl jesí: zakláv že sja, króviju tvojéju vsjú osvjatíl jesí zémľu. I probodén kopijém, plámennomu mňí Christé, orúžiju pleščí dáti ábije poveľíl jesí, i rajá vmistíti vchód, i dréva mjá naslaždátisja živótnaho nebojáznenno. Ťímže spasén býv strástiju tvojéju, zovú rádujasja: sláva božéstvennomu krestú tvojemú, ímže kľátvy izbávichomsja drévnija, i blahoslovénije ot dréva objémlem, i véliju mílosť."),
("","","Ustáviti boľízni choťá čelovíkom i ponošénija, preblahíj Hóspodi Spáse mój, raspjátije terpíši ponósnoje, i žélči vkušáješi nezlóbive, nášu hóresť otémľa zlóbnuju, ujazvívsja Slóve, rebrá probodénijem, jázvy náša jáko Vladýka isciľáješi. Ťímže i vospivájem tvojé slávnoje choťínije, i poklaňájuščesja čtím kopijé, i húbu, i trósť, ímiže darovál jesí míru tvojemú mír i véliju mílosť."),
("","","Káko ťá Iisúse, jevréjskoje mnóžestvo bezzakónnoje na kresťí raspjáti osudí, ne uščédriv tebé ščédraho: Ďíva pláčušči viščáše inohdá, jehdá uzrí na kresťí vozdvížena, jehóže iz utróby nestrástno porodí. Čtó ti soďíja bezzakónnoje sobórišče, čádo mojé voždeľínnoje, Sýne mój prevozľúblenňijšij? Potščísja i spasí, víroju tvojé slávjaščich raspjátije, i mené, jákože obiščál jesí veličájuščich, jáko jedín vsepreblahíj."),
("","Rádujsja póstnikom","Choťá rukopoložénije Adáma pervozdánnaho otjáti do koncá vseboľíznennoje čelovikoľúbče, jáko sozdátel Adámov, hvozďmí prihvozdísja, bezhríšne Christé, i kopijém božéstvennaja tvojá rébra ujazvív nás rádi, vozbraňáješi Slóve orúžiju plámennomu, ne braníti nám vchóda tvojím rabóm. Ťím že slavoslóvjašče ťá, pojém deržávu tvojú, i krest tvój čtím, i vospivájem čelovikoľúbče, ímže vsjáko jestestvó obríte neboľíznennoje dostojánije i véliju mílosť."),
("","","Áhnca svojehó áhnica inohdá zrjášči k zakoléniju tščáščasja, usérdno posľídovaše, sijá vopijúšči jemú: kámo hrjadéši, čádo mojé sladčájšeje Christé? Česó rádi tečénijem sím tečéši dolhoterpilíve bez ľínosti, Iisúse prevozžeľínne? Bezhríšnyj i mnohomílostive Hóspodi, dážď mí slóvo rabí tvojéj, Sýne mój vseľubéznyj. Ne prézri mjá ščédre molčá, stránno róždšuju ťá, Bóže živodávče, dárujaj mírovi véliju mílosť."),
("","","Prestól cheruvímskij voístinnu, jáko prevýšši tvárej býsť: v ťá bo božéstvennoje Slóvo, náš zrák obnovíti choťá, vselísja čístaja, i iz tebé prošéd plotonósec, jáko blahoutróben, krest i strásť nás rádi priját, i voskresénije jáko Bóh darová. Ťímže jáko primirívšuju náše osuždénnoje jestestvó, ziždíteľu blahodarjášče, k nemú i tebí zovém: podážď nám ostavlénije prehrišénij, i mílosť tvojími molítvami."),
("Krestobohoródičen","","Izbavlénije nás rádi, i cínu velíkuju, prečístuju króv tvojú izvólil jesí dáti, bezhríšne Christé mój, vsím choťá spasénije polučíti. Ťímže ťá zrjášči prihvoždéna Máti tvojá, rydájušči vlasý terzáše svojá, hlahóľušči: čádo, áhnče vseneporóčnyj, mír choťá izbáviti čéstnóju tvojéju króviju, ot óčiju mojéju káko zašél jesí Spáse, nezachodímoje sólnce, íže vsím podajáj prosviščénije, i mír, i véliju mílosť."),
),
"S": (
("","","Hóspodi, pri Moiséi inohdá proróci, tóčiju óbraz krestá tvojehó javľájem, pobiždáše vrahí tvojá. Nýňi že sámyj krest tvój imúšče, pómošči prósim: ukripí cérkov tvojú, mnóhija rádi mílosti, čelovikoľúbče."),
("","","Krest tvój Christé, ášče i drévo vídimo jésť suščestvóm, no božéstvennoju oďíjano jésť síloju, i čúvstvenňi mírovi javľájem, úmno náše čudotvorít spasénije, jemúže kláňajuščesja slávim ťá Spáse, pomíluj nás."),
("Múčeničen","","Molíte za ný svjatíji múčenicy, da izbávimsja ot bezzakónij nášich: vám bo daná jésť blahodáť molítisja za ný."),
("Krestobohoródičen","Prepodóbne ótče","Vés vospriím mój zrák, sozdáteľ i Bóh oblečésja, voobrazíti íže préžde pádšij zrák Adámov voschoťívyj, na krest voznesésja vóleju jáko povínen: i rúci rasprostér nýňi, isciľája rúci jáže drévle osláblennyja k drévu sňídnomu. Jehóže vsečístaja víďašči vosklicáše: čtó sijé tvojé Sýne mój neizrečénnoje dolhoterpínije? Ne terpľú na kresťí vozdvížena zrjášči ťá, hórstiju soderžášča vsejá zemlí koncý."),
)
)
#let P = (
"1": (
("","","Koňá i vsádnika v móre čermnóje, sokrušájaj bráni mýšceju vysókoju, Christós istrjasé: Izráiľa že spasé, pobídnuju písň pojúšča."),
("","","Tvojích ráb iz hlubiný sérdca molítvy vírno prinosímyja tí Bohoródice, prijémli: i sijá prinosjášči Sýnu tvojemú, podážď prehrišénij ostavlénije."),
("","","Jáko voístinnu blahodáteľnica súšči, bohaťášči zemnoródnych, i spasénija dvéri k žízni vvoďášči, Ďívo Bohomáti, predstojášči vsehdá ne prestáj, o moľáščich ťá."),
("","","Nýňi preklónšisja pivéc tvojích moľbámi, jáko deržávna ko spaséniju súšči pómošč: ot bíd molítvami tvojími izjémši Vladýčice, cíly sochraní počitájuščyja ťá."),
("","","Predvarívši vskóri i predstojášči tépľi presvjatája Ďívo, ziló oderžímych nás ľútymi obstojániji umilívšisja izbávi, i ot pečáli svobodí."),
),
"3": (
("","","Vodruzívyj na ničesómže zémľu poveľínijem tvojím, i povísivyj neoderžímo ťahoťíjuščuju, na nedvížimim Christé, kámeni zápovidej tvojích, cérkov tvojú utverdí, jedíne bláže i čelovikoľúbče."),
("","","Svíta tvojehó bódrennoju božéstvennoju moľbóju témnyj óblak sérdca mojehó Máti Bóžija ozarívši, sýna dné pokaží mja dobroďítelmi prosviščéna, svitovodíteľnice dušám nášym."),
("","","Píti ťá po dostojániju želájušče prísno, obáče nedoumíjušče, chvalú ot ustén prinósim chudých, júže blahopremínno prijimí, i moľbám poslúšnica búdi, vírno moľáščymsja tí."),
("","","Dobroplódny neplódnyja dúšy pokazújušči, i oľadeňívšeje očiščájušči tvojéju molítvoju, i mojú dúšu plód dajúščuju víru i žitijé neskvérno, tý Bohorodíteľnice pokaží."),
("","","Íže vsejá zemlí hórstiju soderžáj koncý, káko Máternimi ňídrami objémletsja? Mlekóm že pitájetsja ot soséc, nasyščája vsjáko živótno blahovolénija: i bohátyj izlijánijem obniščavájet voploščájem."),
),
"4": (
("","","Božéstvennoje tvojé razumív istoščánije, prozorlívo Avvakúm, Christé, so trépetom vopijáše tebí: vo spasénije ľudéj tvojích spastí pomázannyja tvojá prišél jesí."),
("","","Jáko jedínu ťá k Bóhu chodátaicu vídušče, sohrišénij dólhi razrišájuščuju povínnym prečístaja, chvalámi i písňmi počitájem."),
("","","Cérkov osvjaščénnuju ťá pokazá, vséľsja v ťá Bóh Ďívo preneporóčnaja: jehóže prísno molí sobľustí nevrédny rabý tvojá."),
("","","Moľbú níščich uslýši upovájuščich na ťá Bohoródice, i uskorí ot bíd izbáviti nás Máternimi tvojími derznovéniji."),
("","","V skórbech sládkoje uťišénije, i v napástech pómošč, i v bidách predstáteľnicu ťá Máti Bóžija sťažávše, ozloblénij vsjákich izbavľájemsja."),
),
"5": (
("","","Oďijájsja svítom, jáko rízoju: k tebí útreňuju, i tebí zovú: dúšu mojú prosvití omračénnuju Christé, jáko jedín blahoutróben."),
("","","Íže pervonačáľnaho v ľíto poróždši, vo opísanňij plóti, voístinnu jestestvóm neopísannaho: tohó molí neprestánno o rabích tvojích."),
("","","Tý približáješisja vskóri súščym v núždach, prísno pomohájušči, i upravľájušči prizyvájuščyja ťá vo iskušénijich ko spaséniju prečístaja."),
("","","Mílostiva jestestvóm, milosérdaho Bóha molítvami pokazávši, podážď tvojím rabóm mírnu žízň, i prehrišénij izbavlénije preneporóčnaja."),
("","","úhľ róždšaja, jehóže uzrí Isáia, popalí mi véšči, vseneporóčnaja, sohrišénij, i prosvití mja, moľúsja."),
),
"6": (
("","","Neístovstvujuščejesja búreju dušetľínnoju, Vladýko Christé, strastéj móre ukrotí, i ot tlí vozvedí mja, jáko blahoutróben."),
("","","Umerščvlénych nás strasťmí hrichá Vladýčice, vozstávivši oživí molítvami tvojími, jáko tlí potrebíteľa róždšaja."),
("","","Neléstno upovánije ťá, i neoborímu sťínu na vrahí, i tvérdu imúšče pómošč, na ný navítov ne ubojímsja."),
("","","Zemnoródnym predstáni v boľíznech prísno čístaja: i ľútych uvjadájuščich tvojá služíteli ne vrédny sochraní."),
("","","Dúšu mojú prosvití omračívšujusja, óblakom pomyšlénij nachoďáščich mí Bohonevísto, navítom čelovikoubíjcy bísa."),
),
"S": (
("","","Krestóm Sýna tvojehó Bohoblahodátnaja, ídoľskaja prélesť vsjá uprazdnísja, i démonskaja kríposť poprásja. Sehó rádi vírniji po dólhu ťá prísno pojém i blahoslovím, i Bohoródicu voístinnu ispovídajušče ťá veličájem."),
),
"7": (
("","","Prevoznosímyj otcév Hóspodi, plámeň uhasí, ótroki orosí, sohlásno pojúščyja: Bóže, blahoslovén jesí."),
("","","Prijimí Vladýčice vírnych tvojích prošénija ráb, napástej izimájušči, i ot skorbéj spasájušči vírno pojúščich: Bóže, blahoslovén jesí."),
("","","Íže blahích ďílateľa Bóha nám róždšaja, tohó umolí máternimi moľbámi čístaja, uščédriti o tebí chváľaščichsja."),
("","","Predstáteľnicu nepobidímuju, i osnovánije nepokoléblemo ťá sťažávše, i tvojím oďíjavšesja zastuplénijem Vladýčice, pobiždájem strilcá jáže na ný kovárstvija."),
("","","Vížď smirénije tvojehó rabá Vladýčice, i víčnaho ohňá svobodí mja neprestánno pojúščaho: Bóže blahoslovén jesí."),
),
"8": (
("","","Tebí vseďíteľu v peščí ótrocy, vsemírnyj lík splétše, pojáchu: ďilá vsjákaja Hóspoda pójte, i prevoznosíte vo vsjá víki."),
("","","Hlahóly vkúpi i mýsľmi chvalú Ďívo, tebí pletúšče, vírniji vzyvájem: Hóspoda pójte, i prevoznosíte jehó vo víki."),
("","","Vsjú tebé dobrótu Solomón blížňuju Bóhu prednapisá, jáko bližájšu súščuju semú, Máternim Ďivíce derznovénijem, súščich v míri izbavľájušči."),
("","","Ťá po Bózi upovánije spasénija v napástech imúšče vírniji vzyvájem: vsjá ďilá Hospódňa Hóspoda pójte, i prevoznosíte jehó vo víki."),
("","","Prijimí písň ot chudých ustén, Ďívo vsepítaja, vopijúščich tí: rádujsja boľáščym ťilóm vračevstvó, i otčájannym dušám spasénije."),
),
"9": (
("","","Isáije likúj, Ďíva imí vo črévi, i rodí Sýna Jemmanúila, Bóha že i čelovíka, vostók ímja jemú: jehóže veličájušče, Ďívu ublažájem."),
("","","K tebí prolivájem, júže ot boľízni molítvu nášu: pomíluj nás hríšnych, da ne posramímsja v déň iskušénija, ťá vsemilosérduju Bohoródicu prizyvájuščiji."),
("","","Pérstnoje sródstvo ne prézri Vladýčice čístaja, no búdi pomóščnica pokryvájušči, v pečáľich ukripľájušči, i izbavľájušči vsích nedúžnych: na ťá bo upovájem hotóvuju, vo jéže spastí nás."),
("","","Bóže uslýši ľudéj tvojích molítvu nýňi, i izbávi nás vréda vrážija: sé tebí privódim ístinnoje očiščénije, tebé róždšuju, júže ľubóviju ublažájem."),
("","","Pod deržávu kríposti tvojejá, Vladýčice, pritekájem tóščno vsí zemnoródniji, prosjášče mílosti i ščedróty tvojá mílostivyja, jáže prolijávši vsích nás osvjatí."),
),
)
#let U = (
"S1": (
("","","Místo lóbnoje ráj býsť, tóčiju bo vodruzísja drévo krestnoje, ábije izrastí hrózd živótnyj, v náše vesélije, tebé Spáse: sláva tebí."),
("","","Raspénšahosja Spása i izbáviteľa nášeho vóleju, jákože vísť, i jákože bláhoizvóli, vospojím vírniji, i proslávim: jáko prihvozdí na kresťí hrichí čelovíkov: izbavľája ot prélesti ród čelovíčeskij, i cárstviju spodóbil jésť."),
("Krestobohoródičen","","Vozvýšena na drévi jáko uzrí rodívšaja ťá boľíznej kromí, rydáše s pláčem i vzyváše: uvý mňí sladčájšij Sýne, ujazvľájusja dušéju, na kresťí zrjášči ťá prihvoždéna, posreďí dvojú zloďíju, jákože zloďíja prihvoždéna."),
),
"S2": (
("","","Raspjátije preterpívyj svojéju vóleju, i ot istľínija čelovíki svovodív Spáse, vospivájem vírniji, i poklaňájemsja tí, jáko prosvitíl jesí nás síloju krestnoju: i stráchom slávim ťá čelovikoľúbče ščédryj, jáko žiznodávca i Hóspoda."),
("","","Drévo krestá tvojehó Spáse náš, mírovi javísja spasénije: na tóm bo prihvozdílsja jesí vóleju, i ot kľátvy izbávil jesí zemnoródnyja: vsích rádoste, Hóspodi, sláva tebí."),
("Múčeničen","","Hóspodi, čáše strástéj tvojích strastotérpcy tvojí porevnovávše, ostáviša žitéjskaja krasňíjšaja, i býša ánhelom sopričástnicy: ťích moľbámi podážď dušám nášym mír Christé, i véliju mílosť."),
("Krestobohoródičen","","Krestóm Sýna tvojehó Bohoblahodátnaja, ídoľskaja prélesť vsjá uprazdnísja, i démonskaja kríposť poprásja, sehó rádi vírniji, po dólhu ťá prísno pojém, i blahoslovím, i Bohoródicu voístinnu ispovídajušče ťá veličájem."),
),
"S3": (
("","","Proróčeskaja prorečénija užé ispólnišasja, drévnij sovít tvój konéc priját: tý bo vóleju vsích carjú, plótiju oubóžal jesí Christé, i prijál jesí nás rádi vzýti na krest, i smérť preterpíti. Ťímže slávim Slóve, jéže páče umá tvojé schoždénije."),
("","","Na kresťí ťa prihvoždéna i usnúvša ánheľstiji čínove víďivše, ustrašíšasja, Iisúse vsecarjú: i pobihóša ábije bisóvstiji polcý, i sokrušíšasja verejí ádovy, i smérti mučíteľstvo nizložísja: i íže vo hrobích mértviji voskresóša."),
("Krestobohoródičen","","Na kresťí zrjášči ťá Christé Máti tvojá, vóleju posreďí razbójniku vísjašča, rasterzájuščisja utróboju Máterski hlahólaše: bezhríšnyj Sýne, káko neprávedno na kresťí, jákože zloďíj prihvoždén jesí? Íže čelovíčeskij ród choťá oživíti, jáko blahoutróben."),
),
"K": (
"P1": (
"1": (
("","","Koňá i vsádnika v móre čermnóje, sokrušájaj bráni mýšceju vysókoju, Christós istrjasé: Izráiľa že spasé, pobídnuju písň pojúšča."),
("","","Íže nepostižénijem razumivájemaho voploščénna, i míru jávľšasja plótiju, na drévi vozdvíže drévle voschoťívšaho, jevréjskij sobór zakonoprestúpnyj."),
("","","Jehdá plodonósna vinohráda tebé Christé, na drévo krestnoje jevréi vozbisívšesja vozdvihóša: tohdá vinó iskápal jesí vesélija Slóve, vsjáko pijánstvo zlých otjémľuščeje."),
("Múčeničen","","Zakónom ne podpádajet pochváľnym múčenicy, váše terpínije: páče bo jestestvá vsjákaho čelovíčeskaho boľízni preterpíste, i ko upokojéniju neboľíznennomu vselístesja veseľáščesja."),
("Múčeničen","","Vólny múk preplývše okormlénijem slóva, vsechváľniji Christóvy stradáľcy, vo pristánišča dostihóste nebésnaja, božéstvennyja tišiný nýňi naslaždájuščesja."),
("Bohoródičen","","Jehdá iz čréva tvojehó roždénnaho Hóspoda, usmotrívši na drévo bez právdy vozvyšájema, Ďívo proslezílasja jesí, i tohó vospíla jesí neizrečénnoje voístinnu snizchoždénije."),
),
"2": (
("","","Koňá i vsádnika v móre čermnóje, sokrušájaj bráni mýšceju vysókoju, Christós istrjasé: Izráiľa že spasé, pobídnuju písň pojúšča."),
("","","Dvéri božéstvennyja slávy, otvérzšaja rájskija dvéri, pokajánija dvéri i mňí otvérzi moľúsja, i úm mój prosvití vospiváti ťá, Bohorádovannaja."),
("","","Ustávila jesí smértnoje ustremlénije, róždšaja životóm i smértiju obladájuščaho, vseneporóčnaja: tohó úbo molí, umerščvľájuščaja prehrišénija, dúšu mojú ustáviti, i spastí mja."),
("","","Jedínu ot rodóv ťá dobrótu Jákovľu, izbrá Slóvo Otcú sobeznačáľnoje, i ot tvojích voplotísja krovéj: Vladýčice, spasí mja chodátajstvy tvojími."),
("","","Jákože na runó dóžď nebésnyj na ťá sšédšij, podjála jesí vsečístaja: ťímže odoždénija strastéj mojích izsuší, moľú ťa Máti Ďívo."),
),
),
"P3": (
"1": (
("","","Vodruzívyj na ničesómže zémľu poveľínijem tvojím, i povísivyj neoderžímo ťahoťíjuščuju, na nedvížimim Christé, kámeni zápovidej tvojích, cérkov tvojú utverdí, jedíne bláže i čelovikoľúbče."),
("","","Raspináješisja, i ráj páki otverzájetsja, i razbójnik préžde vsích rádujasja vchódit: i umiráješi Iisúse mój, i vráh ľstéc umerščvľájetsja, umerščvlénnyj že Adám oživľájetsja: sláva mnóhomu tvojemú milosérdiju."),
("","","Uvjadáješi hrichóvnyj plámeň, raspinájem Iisúse na drévi za bláhosť: rišíši prélesť svjazújem, obnažén býv oďiváješi odéždeju slávy čelovíka: sláva mnóhomu bláahoutróbiju tvojemú."),
("Múčeničen","","Javléj prísno sijájuščyja zvízdy múčeniki, nečéstija ťmú prohoňájuščyja: ťích moľbámi Christé, nóšč zól mojích otžení, i prosvití omračénnoje mojé sérdce."),
("Múčeničen","","Zakónno blahoslávniji postradávše múčenicy, i zakónnoju víroju vincý vinčáni býša, i bezzakónnujuščich sovíty ukloníša múdri, i píšču božéstvennuju prijáša, i rájskoje vselénije."),
("Bohoródičen","","Pojút ánheľstiji lícy ťá vsepítaja, bez razsuždénija sích preimúščuju: Bóha bo rodilá jesí plótiju, ot dréva kľátvu drévom pohúbľšaho, i blahoslovénije istočívšaho."),
),
"2": (
("","","Vodruzívyj na ničesómže zémľu poveľínijem tvojím, i povísivyj neoderžímo ťahoťíjuščuju, na nedvížimim Christé, kámeni zápovidej tvojích, cérkov tvojú utverdí, jedíne bláže i čelovikoľúbče."),
("","","Strilámi hrichóvnymi ujázvlennaho mjá, v tebí ľičbóju vsehó iscilí, čístaja: i ot soderžáščich mjá izminí boľíznej, jáže boľíznej preminívšaja ród čelovíčeskij roždestvóm tvojím."),
("","","Jáže vsúje smirénnoje sérdce mojé oskorbľájuščyja i íščuščyja umertvíti mjá nevídimyja vrahí, Vladýčice, tvojím nizloží zastuplénijem, da prebúdut bezďíľni, studá ispolňájemi."),
("","","Vód životvórnych mjá ispólni Vladýčice, jáže božéstvennuju vódu mírovi istočívšaja: bezzakónij mojích ľútyja izsuší potóki, i sérdca mojehó vólny božéstvennoju tvojéju tišinóju ukrotí."),
("","","Préjde síň, zakónu skončájemu: zakonodávca bo Christá rodilá jesí, blahodáť očiščénija i prosviščénija nám zakonopolahájušča, i kľátvy izymájušča, prečístaja Ďívo, vseneporóčnaja."),
),
),
"P4": (
"1": (
("","","Božéstvennoje tvojé razumív istoščánije, prozorlívo Avvakúm, Christé, so trépetom vopijáše tebí: vo spasénije ľudéj tvojích, spastí pomázannyja tvojá prišél jesí."),
("","","Dušám vsekonéčnoje izbavlénije sotvorjája Christé, v rúci Otcú prédal jesí presvjatúju dúšu tvojú, na drévi vóleju vísim."),
("","","Neprávednyj sudijá osudí ťa právednaho sudijú Iisúse, umréti na drévi vísimu: jáko da opravdíši nás, neprávedno vrahú podklónšyjasja."),
("Múčeničen","","Víďiti spodóbistesja múčenicy jáže ot víka blahája, bidám mnóhim i skórbem, i neizčétnym múkam na zemlí javlénno pričastívšesja."),
("Múčeničen","","Zímu iskušénij preidóste, i k vesňí vozdajánija nebésnaho dostihóste, i likóm ánheľskim, čestníji sočetástesja múčenicy."),
("Bohoródičen","","Orúžije božéstvennych strástéj prójde tvojú blahoslovénnaja, dúšu, vnehdá víďila jesí raspinájema tvojehó Sýna, i v rúci Ótči dúšu predávša."),
),
"2": (
("","","Božéstvennoje tvojé razumív istoščánije, prozorlívo Avvakúm, Christé, so trépetom vopijáše tebí: vo spasénije ľudéj tvojích, spastí pomázannyja tvojá prišél jesí."),
("","","Ot ďíl spasénija ňísť mí: ťímže pod króv tvój s nadéždeju pribiháju, Ďívo vseneporóčnaja, otčájannaho mjá spasí molítvami tvojími."),
("","","Svíta čístoje prijátelišče, sólnca nosílo čestnóje, omračénoje sérdce mojé ľútych mrákom, ozarí i spasí mja, moľúsja, Vladýčice."),
("","","Istkávšaja odéždu ot krovéj tvojích ďívstvennych, nébo óblaki otrokovíce oďivájuščemu, odéždeju netľínija oblecý, prélestiju obnažénnaho mjá."),
("","","Ot udólij žitéjskich, jáko krín vospriját ťá ziždíteľ: iz tebé mírovi vdochnú duchóvnoje blahouchánije, Ďívo presvjatája Bohonevísto."),
),
),
"P5": (
"1": (
("","","Oďijájsja svítom jáko rízoju, k tebí útreňuju i tebí zovú: dúšu mojú prosvití omračénnuju Christé, jáko jedín blahoutróben."),
("","","Stojál jesí sudím právednyj sudijé, Christé i osuždája plótiju vraždú, tróstiju bijém, i svoboždénije soveršénnoje podpisúja mňí."),
("","","Tebé jáko víďi plótiju vísima Christé na drévi, sólnce svít vo ťmú preloží, i zemľá pokolebásja, i kámenije raspadésja."),
("Múčeničen","","Zemlí koncý osvjatíli jesté, jáko ispólnitelije bo zakóna božéstvennaho, vsechváľniji múčenicy, stráždušče, svjatýňu nasľídovaste."),
("Múčeničen","","Predóbrijšiji i Bohozárňijšiji, íže božéstvennoju slávoju oďíjani, i obnažívše vrážiju zlóbu, múčenicy da počitájutsja."),
("Bohoródičen","","Tebé vseneporóčnuju Bóžiju rodíteľnicu ublažájem: jejáže rádi kľátva neplódna býsť, i izbavlénije i blahoslovénije dadésja."),
),
"2": (
("","","Oďijájsja svítom jáko rízoju, k tebí útreňuju i tebí zovú: dúšu mojú prosvití omračénnuju Christé, jáko jedín blahoutróben."),
("","","Da obrjášču ťá pómošč Ďívo, izymájuščuju mjá v čás súdnyj osuždénija, jehdá sudíšču iz tebé róždšahosja predstánu."),
("","","Zakónom mjá pobiždájema hrichóvnym čístaja, i prélesťmi čuždáho částo preľščájema, i v bréhi prehrišénij nizríjema obratí."),
("","","Úhľ róždšaja, jehóže uzrí Isáia, popalí véšč vseneporóčnaja, mojích sohrišénij, i prosvití mja, moľúsja."),
("","","Plóť ot krovéj tvojích Christú vzajím dávši, plotskíja mojá, Ďívo, strásti otňúd očísti, i bezstrástija stezjú pokaží mi."),
),
),
"P6": (
"1": (
("","","Neístovstvujuščejesja búreju dušetľínnoju, Vladýko Christé, strastéj móre ukrotí, i ot tlí vozvedí mja, jáko blahoutróben."),
("","","Vozdvížesja dolhoterpilíve na drévo, vsjá vrážija vozvyšénija nizložíl jesí, i spásl jesí pádšaho, mnóhija rádi tvojejá bláhosti."),
("","","Dúšy právednych, na drévi drévle dúšu ťá predajúšča Vladýko, jáko oščutíša, víčnych úz Slóve otpuščájemi byváchu."),
("Múčeničen","","Jákože tvérdiji adamánti, múk vsjáko razžžénije terpilívym nrávom preterpíste, i vrahá smirívšesja nizložíste."),
("Múčeničen","","Podóbnicy Christóvym strástém bývše, podjáste vsjáko mučénije strastotérpcy Hospódni: sehó rádi nebésnych krasót spodóbistesja."),
("Bohoródičen","","Dívno roždestvó tvojé, jéže vélija čudesá tvorjáščeje, i vo svjatých proslavľájemo, Bohoródice jedína vsečúdnaja Vladýčice."),
),
"2": (
("","","Neístovstvujuščejesja búreju dušetľínnoju, Vladýko Christé, strastéj móre ukrotí, i ot tlí vozvedí mja, jáko blahoutróben."),
("","","Da ne požrét mené, Vladýčice, unýnija hlubiná, i hrichóvnaja volná da ne pokrýjet mené: no tvojéju molítvoju da spasén búdu, jedína čístaja."),
("","","Zakóna svíščnik proobražáše ťá róždšuju svít, prosvitívšaho, prečístaja, vsjáčeskaja. Ťímže vopijú ti: omračénnaho mjá prosvití."),
("","","Blaháho róždšaja blahoďíteľa i soďíteľa, vsečístaja Vladýčice, i blahoľubívaja, dúšu mojú ozlóblennuju ublaží."),
("","","Vinú spasénija žiznodávca rodilá jesí nám, víčnoje izbavlénije dárujušča voístinnu, ťá Bohoródicu propovídajuščym."),
)
),
"P7": (
"1": (
("","","Prevoznosímyj otcév Hospóď, plámeň uhasí, ótroki orosí sohlásno pojúščyja: Bóže, blahoslovén jesí."),
("","","Vinčá térnijem ťá, carjú bezsmértne, sobór zakonoprestúpnyj, iz kórene izsicájušča preléstnoje térnije: Bóže, blahoslovén jesí."),
("","","Da oblečéši mjá odéždeju Slóve, netľínija, obnažílsja jesí vóleju: i terpíši oplevánija, i raspjátije, i strásť, bezstrásten sýj po suščestvú."),
("Múčeničen","","Soobrázni bývše strastém, svjatíji, vsími Vladýčestvujuščemu, voístinnu jesté sýnove i nasľídnicy cárstvija nepokolebímaho."),
("Múčeničen","","Drevesém bezslovésnejšim ne prinesóste čésti, na drévi prostéršaho rúci čtúšče, jáko vsích carjá i Vladýku."),
("Bohoródičen","","Stránno jésť roždestvó tvojé vseneporóčnaja: Bóha bo rodilá jesí, drévom uvjadívšaho plámeň prélesti, i prosvitívšaho mír."),
),
"2": (
("","","Prevoznosímyj otcév Hospóď, plámeň uhasí, ótroki orosí sohlásno pojúščyja: Bóže, blahoslovén jesí."),
("","","Pojú ťa vsepítaja, Bóha bo prepítaho bez símene rodilá jesí Ďívo vsesvjatája, obožájuščaho pojúščich: Bóže, blahoslovén jesí."),
("","","Umertví strásti mojá, žízň róždšaja, vozdvíhni mjá ležášča ot hróba nečúvstvija: da ľubóviju slávľu ťá Bohonevísto."),
("","","Prízri na molénija rabóv tvojích Vladýčice, ot bíd izbavľájušči, i ot skorbéj spasájušči, víroju pojúščyja: Bóže, blahoslovén jesí."),
("","","Neisčetnosíľnaho, nášu némošč ponésšaho, rodilá jesí čístaja: jehóže molí, iscilíti dúšu mojú, ľúte iznemóhšuju."),
),
),
"P8": (
"1": (
("","","Tebí vseďíteľu v peščí ótrocy, vsemírnyj lík splétše, pojáchu: ďilá vsjákaja Hóspoda pójte, i prevoznosíte vo vsjá víki."),
("","","Výšnij Bóže, na krest vozdvíhlsja jesí, žélči napojájem, sládoste žíznennaja: kopijém probodáješisja, zakalája zmíja, v raí Adáma nizložívšaho."),
("","","Razrišáješi mjá ot úz hrichóvnych Slóve, choťínijem svjazújem, i víčnymi úzami svjazúja Spáse, vrahá otstúpnika: ťímže tvojá slávľu strásti vo víki."),
("Múčeničen","","Javístesja pričástnicy svíta neveščéstvennaho, svíti súšče vtoríji svjatíji múčenicy, ťmú prélesti otjémľušče, i vsích vírnych serdcá prosviščájušče božéstvenňi."),
("Múčeničen","","Jerusalíma výšňaho svobódnaho Sýnove býste múčenicy, prosviščájušče pervoródnych cérkov, i prevoznosjášče Christá vo víki."),
("Bohoródičen","","Stojála jesí u krestá smotrjájušči otrokovíce, prihvoždéna Christá, jehóže rodilá jesí Vladýčice. Ťímže vopijáše: ne bezčádnu mjá javí beznačáľne, beznačáľnaho Otcá Sýne i Slóve."),
),
"2": (
("","","Tebí vseďíteľu v peščí ótrocy, vsemírnyj lík splétše, pojáchu: ďilá vsjákaja Hóspoda pójte, i prevoznosíte vo vsjá víki."),
("","","Jáže predóbraja i Bohozráčnaja Ďívo, dóbrych mjá viďínijem udobrívši prosvití, jáko da zovú: Hóspoda pójte, i prevoznosíte jehó vo víki."),
("","","Vratá svíta, pokajánija mí vratá svitozárnaja otvérzi, pokazújušči vsjákij právyj púť právdy, privoďáščij k božéstvennaho choťínija vchódu."),
("","","Svjatóje čístaja Slóvo, vírnyja osvjaščájuščeje, neizhlahólanno rodilá jesí: tohó molí presvjatája, smirénnuju dúšu mojú nýňi osvjatíti, zlóboju oskvernénuju."),
("","","Strujá svítlaja i istóčnik bezsmértija svjatája Vladýčice, iz tebé proizýde, íže vsích Hospóď. Ťímže vopijú ti čístaja: strují zól mojích izsuší túčami tvojích molítv."),
),
),
"P9": (
"1": (
("","","Isáije likúj, Ďíva imí vo črévi, i rodí sýna Jemmanúila, Bóha že i čelovíka, vostók ímja jemú: jehóže veličájušče, Ďívu ublažájem."),
("","","Jáko áhnec vozdvíhlsja jesí na drévo Vladýko Christé, čéľusti sokrušája vólka mýslennaho, i ot úst jehó ischiščája tvojá slovésnyja óvcy, i privoďá Vladýko rodíteľu."),
("","","Vincém ternóvym uvjázlsja jesí, jákože cár cárstvujuščich Christé, lukávaho potrebľája cárstvija, i iz kórene izsicája prélesti térnije: ťímže víroju ťá bláže slávim."),
("Múčeničen","","Oblistá jákože sólnce tvérdoje stojáteľnoje soprotivlénije váše, i vsjú razruší vrážiju mhlú, múčenicy nepobidímiji, prosvitítelije vsích vírnych, stolpí nepreklónniji blahočéstija, dostoslávniji."),
("Múčeničen","","Pólk Bohosóbran, vóinstvo nebésnoje, sobór izbránnyj, svjatája síň javístesja, vý vsechváľniji múčenicy Spása, lukávaho hrády razrušájušče božéstvennoju blahodátiju ."),
("Bohoródičen","","Sadoďíteľa rodilá jesí, blahočéstija sadíteľa, vsaždájušča na zemlí rázum ístinnyj, i prozjábšuju, ot sadá razorjájušča kľátvu, jehóže veličájušče, tebé Ďívu ublažájem."),
),
"2": (
("","","Isáije likúj, Ďíva imí vo črévi, i rodí sýna Jemmanúila, Bóha že i čelovíka, vostók ímja jemú: jehóže veličájušče, Ďívu ublažájem."),
("","","Osnovánije nepokolebímo, Spása rodilá jesí čístaja, na vodách zémľu božéstvennym poveľínijem osnovávšaho: jehóže molí nás utverdíti, čísto tebé blažáščich."),
("","","V púť smirénija poveľínij božéstvennych, nezablúdno šéstvovati, i nepreléstno sotvorí mja čístaja, bisóvskij mjatéž, i strastéj nachoždénija othoňájušči, i tišinú podajúšči mí."),
("","","Dremánijem oderžíma neraďínija, víďa vráh nachódit neoslábno, snóm slástnym naďíjasja ukrásti mjá: no samá mja sochraní čístaja, neusýpnoju tvojéju molítvoju Máti Ďívo."),
("","","Jáko samoosuždén, pomyšľáju mojích hrichóv mnóžestvo, i strášnoje sudíšče sudijí, na némže choščú sudítisja: no jáže Sudijú róždšaja Bóha, tohdá mja sobľudí Bohoródice, neosuždéna."),
),
),
),
"CH": (
("","","Tóčiju vodruzísja drévo Christé krestá tvojehó, prélesť prohnána býsť ídoľskaja, i blahodáť procvité: ne bó ktomú osuždénija jésť mučíteľstvo, no pobída javísja nás spasénija. Krest bo nám jésť pochvalá, krest nám utverždénije, krest nám rádovanije."),
("","","Vedén býl jesí nás rádi na žértvu Christé, jáko ovčá, i jáko áhnec nezlóbiv Jemmanúil, na zakolénije vóľnoje, s bezzakónniki vminén býl jesí. Prijidíte, vospójte otéčestvija jazýk, i poklonítesja na kresťí povíšenomu Životú bezkonéčnomu."),
("","","Svjatíji múčenicy na zemlí podvíhšesja, mráz preterpíša, ohňú predášasja, vodá ích priját, ťích jésť hlás: proidóchom skvozí óhň i vódu, i izvedé ny v pokój. Molítvami ích Christé Bóže, pomíluj nás."),
("Krestobohoródičen","","Zrjášči drévle áhnca i Sýna svojehó, Ďíva Máti i vseblážénnaja otrokovíca, na krest vozdvížena, vopijáše slezjášči: uvý mňí, Sýne mój, káko umiráješi. Bóh jestestvóm sýj bezsmérten?"),
),
)
#let L = (
"B": (
("","","Razbójnik na kresťí Bóha ťá býti vírovav, Christé, ispovída ťá čísťi ot sérdca, pomjaní mja Hóspodi, vopijá, vo cárstviji tvojém."),
("","","Probodén býv v rébra, potóki ostavlénija istočíl jesí Christé: prihvozdív že svojí na kresťí rúci, vés razorjája strástnyj čelovíkom rázum."),
("","","Vozšéd na krest, vsé pokolebál jesí bisóvskoje mnóžestvo, i nizvérhl jesí páhubnuju deržávu mučíteľa, Vladýko Christé, i spásl jesí čelovíčestvo."),
("","","Podóbnicy Christóvym strástém blažénniji bývše, podjáste vsjáko mučénije, strastotérpcy Hospódni: sehó rádi nebésnych krasót spodóbistesja."),
("","","Na kresťí, íže Otcú i božéstvennomu Dúchu nerazďíľnyj, vóľnuju preterpíl jesí strásť Christé, strásti vsjá tletvórnyja náša otjémľa."),
("","","Predstojášči u krestá Bohorádovannaja, i Sýna svojehó víďašči ujázvlena, dušéju ujazvľášesja, vospiváše tohó mnóhoje voístinnu smotrénije."),
)
)
|
|
https://github.com/SWATEngineering/Docs | https://raw.githubusercontent.com/SWATEngineering/Docs/main/src/3_PB/ManualeUtente/content.typ | typst | MIT License | #import "functions.typ": glossary, team
/*************************************/
/* INSERIRE SOTTO IL CONTENUTO */
/*************************************/
= Introduzione
== Scopo del documento
Lo scopo del documento è fornire istruzioni dettagliate sull'utilizzo del prodotto software "InnovaCity", illustrandone le funzionalità principali e le opzioni di personalizzazione. In tal modo, si vuole informare l'#glossary("amministratore pubblico") dei requisiti minimi da soddisfare per poter usufruire del prodotto, degli step necessari per la sua installazione e di tutte le funzionalità a disposizione dell'utente per sfruttarne appieno il potenziale.
== Scopo del prodotto
Lo scopo del prodotto è la realizzazione di un sistema di persistenza dati e successiva visualizzazione di questi, provenienti da sensori dislocati geograficamente. Tale piattaforma consentirà all'#glossary("amministratore pubblico") di acquisire una panoramica completa delle condizioni della città, facilitando così la presa di decisioni informate e tempestive riguardo alla gestione delle risorse e all'implementazione di servizi.
== Glossario
Al fine di evitare possibili ambiguità relative al linguaggio utilizzato nei documenti, viene fornito il _Glossario v2.0_, nel quale sono presenti tutte le definizioni di termini aventi uno specifico significato che vuole essere disambiguato. Tali termini, sono scritti in corsivo e marcati con una G a pedice.
== Riferimenti
=== Riferimenti normativi
- Capitolato C6 - InnovaCity: Smart city monitoring platform:
#link("https://www.math.unipd.it/~tullio/IS-1/2023/Progetto/C6.pdf") (10/03/2024)
- _Norme di Progetto v2.0_
- Regolamento progetto didattico:
#link("https://www.math.unipd.it/~tullio/IS-1/2023/Dispense/PD2.pdf") (10/03/2024)
=== Riferimenti informativi
- _Analisi dei Requisiti v2.0_
- _Specifica Tecnica v1.0_
- *#glossary[Grafana]*: #link("https://grafana.com/docs/grafana/latest/"); (10/03/2024)
- *Docker*: #link("https://docs.docker.com/"); (10/03/2024)
#pagebreak()
= Requisiti minimi
Per poter garantire un corretto funzionamento del prodotto, è necessario rispettare i requisiti minimi delineati in seguito.
== Requisiti hardware
I requisiti hardware indicano le specifiche tecniche relative alle componenti hardware necessarie per garantire un'esperienza di utilizzo soddisfacente del prodotto e prestazioni accettabili.
#figure(
table(
columns:(100pt,auto),
align: (x, y) => (center, center).at(x),
fill:(_,row) => if row==0 {luma(150)} else if calc.odd(row) { luma(220)} else {white},
[*Componente hardware*],[*Requisito*],
[CPU], [Quad-Core 2,5 GHz],
[RAM], [6GB]),
caption: [Tabella requisiti hardware.])
== Requisiti di sistema operativo
I requisiti di sistema operativo indicano la versione minima del sistema operativo necessaria affinché il prodotto software possa essere eseguito correttamente su un determinato dispositivo.
#figure(
table(
columns:(100pt,auto),
align: (x, y) => (center, center).at(x),
fill:(_,row) => if row==0 {luma(150)} else if calc.odd(row) { luma(220)} else {white},
[*Sistema operativo*],[*Versione*],
[Windows], [10],
[MACOS], [10.14 Mojave],
[Ubuntu], [22.04]),
caption: [Tabella requisiti di #glossary[sistema] operativo.])
== Requisiti software
I requisiti software indicano le specifiche relative al software che deve essere installato sul #glossary[sistema] dell'utente per garantire l'avvio e l'esecuzione del prodotto.
#figure(
table(
columns:(100pt,auto,auto),
align: (x, y) => (center, center, center).at(x),
fill:(_,row) => if row==0 {luma(150)} else if calc.odd(row) { luma(220)} else {white},
[*Software*],[*Versione*],[*Download*],
[Docker], [25.0.3], [#link("https://docs.docker.com/get-docker/")],
[#glossary[Docker Compose]], [2.24.6], [#link("https://docs.docker.com/compose/install/")]),
caption: [Tabella requisiti software.])
== Requisiti browser
I requisiti browser indicano le specifiche tecniche necessarie per garantire che il prodotto software possa essere visualizzato e utilizzato correttamente su un browser web specifico.
#figure(
table(
columns:(100pt,auto),
align: (x, y) => (center, center).at(x),
fill:(_,row) => if row==0 {luma(150)} else if calc.odd(row) { luma(220)} else {white},
[*Browser*],[*Versione*],
[Google Chrome], [122],
[Mozilla Firefox], [123],
[Microsoft Edge], [122]),
caption: [Tabella requisiti browser.])
#pagebreak()
= Installazione
La seguente sezione fornisce istruzioni dettagliate su come installare e avviare il prodotto software "InnovaCity". Si consiglia di seguire gli step riportati nell'ordine specificato di seguito.
== Clonare la #glossary[repository]
+ Avviare un prompt dei comandi;
+ Spostarsi nella cartella in cui si desidera clonare la #glossary[repository]\;
+ Con Git installato in locale, clonare la #glossary[repository] tramite il comando:
#set align(center)
`git clone https://github.com/SWATEngineering/InnovaCity.git`
#set align(left)
== Creare API Key
Per il corretto funzionamento del simulatore di biciclette elettriche è necessario registrarsi o accedere al seguente servizio: #link("https://openrouteservice.org/dev/#/signup") per ottenere l'API key. Una volta effettuato l'accesso a questa piattaforma:
- Spostarsi nella sezione "Dashboard";
- Se non ancora accettati, accettare i "Termini e Condizioni", nella sotto-sezione "Profile";
- Nella sotto-sezione "Tokens" creare un nuovo token scegliendo la versione "Standard" e un nome a piacimento;
- Copiare l'API Key generata a schermo e salvarla per dopo.
== Creare Discord Webhook
Il sistema di notifiche utilizza un canale in un server Discord per inviare i propri messaggi. In un server di appartenenza dell'utente, o in uno nel quale si possiedono i permessi:
- Cliccare col tasto destro su un canale testuale;
- Cliccare sulla voce "Modifica Canale";
- Nella nuova schermata cliccare sulla voce "Integrazioni";
- Cliccare sulla voce "Crea Webhook".
A questo punto è possibile vedere il webhook creato, dare ad esso un nome a piacimento, modificare eventualmente il canale testuale a cui è collegato e copiare il webhook stesso.
Occorre copiare e salvare il webhook appena creato per passare allo step successivo.
== Variabili d'Ambiente
Nella directory della repository clonata al passo 1. allo stesso livello del file "README.md" occorre creare un file chiamato ".env" (il punto è importante) con il seguente contenuto:
```env
ORS_API_KEY=<API_KEY_GENERATA_AL_PASSO_2>
DISCORD_WEBHOOK_URL=<WEBHOOK_URL_CREATO_AL_PASSO_3>
```
Sostituendo, subito dopo l'uguale, i placeholders tra parentesi angolate (comprese) con i valori salvati precedentemente.
== Avviare l'applicazione
+ Eseguire il comando:
#set align(center)
`docker-compose --profile prod up -d`
#set align(left)
Avendo eseguito correttamente gli step precedenti, l'applicazione è raggiungibile dal browser all'indirizzo #link("http://localhost:3000/").
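A titolo puramente indicativo, per verificare che i container siano effettivamente in esecuzione è possibile utilizzare il comando:
#set align(center)
`docker-compose --profile prod ps`
#set align(left)
Eventuali problemi di avvio possono essere ispezionati, ad esempio, con il comando `docker-compose logs`.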
#pagebreak()
= Istruzioni per l'uso
== Informazioni generali sulla pagina "Home"
All'avvio l'utente è indirizzato direttamente alla pagina "Home" di #glossary[Grafana], senza dover effettuare alcun accesso, dove si hanno:
#figure(
image("assets/screenshots/home.png",width:100%),
caption: [Pagina "Home".]
)
1. *Barra di ricerca*: consente di accedere velocemente alle pagine di #glossary[Grafana] operando una ricerca filtrata rapida; è sufficiente inserire i primi caratteri del nome di una pagina perché questa venga visualizzata tra i suggerimenti o selezionarne una tra le opzioni mostrate inizialmente;
#figure(
image("assets/screenshots/barra-di-ricerca.png",width:50%),
caption: [Barra di ricerca.]
)
2. *Menù a tendina*: consente di accedere velocemente alle opzioni "Starred", "Dashboards" e "Alerting"; la prima conduce alla pagina "Dashboards" con il check posto sulla checkbox "Starred" per filtrare le #glossary[dashboard] a disposizione e mostrare solo quelle salvate nei preferiti in partenza, la seconda conduce sempre alla pagina "Dashboards" senza alcun filtro applicato in modo da poterle visualizzare tutte e la terza conduce alla pagina "Alerting", dove è possibile visualizzare ed esportare le regole di allerta e notifiche.
#figure(
image("assets/screenshots/menu-hamburger.png",width:50%),
caption: [Menu a tendina.]
)
3. Alcuni #glossary[pannelli] contenenti informazioni utili, in particolare il #glossary[pannello] "Dashboards" riporta le #glossary[dashboard] salvate nei preferiti e quelle visualizzate di recente per comodità di accesso.
#figure(
image("assets/screenshots/pannello-dashboards.png",width:75%),
caption: [Pannello "Dashboards" nella pagina "Home".]
)
Occorre sottolineare che, nelle condizioni in cui l'utente #glossary[amministratore pubblico] ha accesso all'applicazione (ossia nel ruolo di "Viewer"), non è possibile esercitare l'opzione di salvare le #glossary[dashboard] tra i preferiti.
== Informazioni generali su #glossary[dashboard] e #glossary[pannelli]
=== #glossary[Dashboard]
Le #glossary[dashboard], concepibili come contenitori di #glossary[pannelli], riportano in alto un header contenente diverse opzioni atte a navigare attraverso #glossary[Grafana] e a personalizzare l'apparenza della #glossary[dashboard] stessa (oltre alla barra di ricerca e al menù a tendina descritti in precedenza):
#figure(
image("assets/screenshots/menu-dashboard.png",width:100%),
caption: [Menù delle dashboard.]
)
1. *Navigazione breadcrumb*: permette di visualizzare in quale parte del sito l'utente è collocato e di navigarci secondo la gerarchia del sito stesso. Ad esempio è possibile accedere facilmente alla pagina Home ("Home") o all'elenco delle #glossary[dashboard] in "Dashboards";
2. "*Share #glossary[dashboard]*": per condividere la #glossary[dashboard] corrente generando un link, uno snapshot o esportandola come file;
3. "*Time picker dropdown*": per selezionare l'intervallo di tempo nel quale i dati vengono visualizzati all'interno dei vari #glossary[pannelli] contenuti nella #glossary[dashboard]; si possono selezionare intervalli di tempo relativi all'ora corrente (ad esempio, "last 5 minutes") o intervalli di tempo assoluti (ad esempio, "2024-05-14 00:00:00 to 2024-05-15 23:59:59"). Si può anche utilizzare il pulsante "Change time settings" per cambiare il fuso orario e l'anno fiscale considerato negli intervalli di tempo:
- Gli intervalli di tempo relativi possono essere selezionati a partire dalla lista fornita; quest'ultima può essere filtrata utilizzando la casella di input che la precede, ovvero "Search quick ranges";
- Gli intervalli di tempo assoluti possono essere selezionati in due modi principali:
- Inserendo dei valori all'interno dei campi "From" e "To": si possono inserire timestamp veri e propri tramite tastiera o valori relativi come "now-5m", per poi cliccare il pulsante "Apply time range";
- Cliccando all'interno di uno dei campi "From" o "To", viene visualizzato un calendario che consente di scegliere il giorno (o i giorni) che si vuole impostare come intervallo temporale; una volta fatto ciò, basta cliccare il pulsante "Apply time range";
#figure(
image("assets/screenshots/time-picker.png",width:50%),
caption: ["Time picker dropdown".]
)
4. "*Zoom out time range*": per allargare l'intervallo di tempo selezionato;
5. "*Refresh dashboard*": per effettuare, immediatamente una volta cliccato, il refresh dei dati mostrati all'interno della #glossary[dashboard]\;
6. "*Refresh dashboard time interval*": per selezionare l'intervallo di tempo che trascorre tra un refresh automatico e quello successivo.
Per maggiori informazioni sull'utilizzo delle #glossary[dashboard], si rimanda alla #glossary[documentazione] ufficiale di #glossary[Grafana] a tal proposito (#link("https://grafana.com/docs/grafana/latest/dashboards/use-dashboards/")).
=== #glossary[Pannelli]
I #glossary[pannelli] riportano in alto a sinistra il titolo e in alto a destra l'icona del menù; posizionando il cursore sopra l'icona alla destra del titolo, viene visualizzata anche la descrizione del #glossary[pannello]. Il menù riporta le opzioni che #glossary[Grafana] mette a disposizione per gestire ogni #glossary[pannello], indipendentemente dalla tipologia di grafico contenuto, ovvero:
- "*View*": per visualizzare il singolo #glossary[pannello] a schermo intero;
- "*Share*": per condividere il #glossary[pannello] o esportarlo come file;
- "*Inspect*": per aprire la pagina "Inspect", dove si possono visualizzare i dati e il codice JSON inerenti al #glossary[pannello]\;
- "*More...*": per accedere alle opzioni aggiuntive "Show legend" e "Hide legend", che consentono, rispettivamente, di mostrare o nascondere la legenda del #glossary[pannello], ove presente.
#figure(
image("assets/screenshots/menu-pannelli.png",width:25%),
caption: [Menù dei pannelli.]
)
==== Formati dei grafici contenuti nei #glossary[pannelli]
Di seguito vengono illustrate le tipologie di grafici adottati all'interno dei vari #glossary[pannelli] nelle #glossary[dashboard] e le possibili interazioni dell'utente con questi ultimi.
===== Grafici a linee (formato "#glossary[Time series]")
Tali #glossary[pannelli] mostrano l'ora del giorno sull'asse x, la legenda del contenuto al di sotto dell'asse x e l'unità di misura del dato in esame sull'asse y. La legenda, in particolare, riporta i nomi dei sensori rilevanti e la media dell'andamento di tutti i sensori (con dati aggregati in intervalli di 5 minuti); cliccando su uno qualsiasi di questi elementi, è possibile filtrare il contenuto del grafico per mostrare esclusivamente l'andamento dell'elemento selezionato e nascondere il resto; cliccando nuovamente si riporta il grafico allo stato di default. Posizionando il cursore sopra un punto qualunque dell'andamento dei dati forniti da un dato #glossary[sensore], è possibile visualizzarne timestamp, nome del #glossary[sensore] e valore corrispondente. Infine, per impostare l'intervallo di tempo nel quale i dati vengono visualizzati all'interno dei vari #glossary[pannelli] in formato #glossary[Time series] contenuti nella #glossary[dashboard], è possibile selezionare una qualunque porzione del grafico, in alternativa all'uso del "Time picker dropdown" descritto nella sezione *#glossary[Dashboard]*; infatti, la selezione di parte del grafico consente di effettuare uno zoom-in sulla finestra temporale corrispondente, che viene effettivamente inserita all'interno del "Time picker dropdown".
===== Mappe (formato "Geomap")
Tali #glossary[pannelli] mettono a disposizione, in alto a sinistra, due pulsanti "*+*" e "*-*" per poter effettuare operazioni di zoom-in e zoom-out; lo zoom-in può essere effettuato anche con un doppio click. Inoltre cliccando e tenendo premuto, è possibile muovere la mappa per visualizzarne aree diverse a piacimento e posizionare il cursore sopra un qualunque #glossary[sensore] per visualizzarne le informazioni.
===== Tabelle (formato "Table")
Sono visibili delle scrollbar che consentono di scorrere la tabella verticalmente o orizzontalmente in caso non fosse interamente visibile. Inoltre, cliccando sul nome di un determinato campo, è possibile ordinare la tabella rispetto a quel campo, in ordine crescente o decrescente; la direzione della freccia affiancata al nome del campo, permette di comprendere per quale colonna si sta ordinando e secondo quale ordine.
===== Indicatori numerici (formato "Stat")
Tali #glossary[pannelli] contengono un indice che riporta i dati numerici derivati da calcoli e aggregazioni effettuate sulle misurazioni ottenute dai sensori.
#pagebreak()
== Dashboard "Ambientale"
La #glossary[dashboard] "Ambientale" è dedita alla visualizzazione di dati relativi all'ambiente, che includono temperatura, percentuale di umidità, direzione e velocità del vento, intensità delle precipitazioni, livello di polveri sottili e riempimento dei bacini idrici. Quelli riportati sono aspetti monitorati all'interno della #glossary[dashboard] tramite #glossary[pannelli] contenenti grafici di varia natura, in particolare si utilizzano i formati: "#glossary[Time series]", "Geomap", "Table" e "Stat". Per maggiori informazioni sulla natura di ciascun formato, si rimanda alla #glossary[documentazione] ufficiale di #glossary[Grafana].
=== #glossary[Pannelli] con grafici in formato "#glossary[Time series]"
I #glossary[pannelli] contenenti grafici in formato "#glossary[time series]" riportano dati relativi a:
- Media della temperatura riportata da ciascun #glossary[sensore], espressa in gradi Celsius (°C), con dati aggregati in intervalli di 1 minuto;
#figure(
image("assets/screenshots/timeseries-temperatura.png",width:100%),
caption: [Grafico in formato Time series della temperatura.]
)
- Media dell'umidità nell'aria riportata da ciascun #glossary[sensore], espressa in percentuale, con dati aggregati in intervalli di 1 minuto;
- Media dell'intensità delle precipitazioni riportata da ciascun #glossary[sensore], espressa in millimetri orari (mm/h), con dati aggregati in intervalli di 1 minuto;
- Media del livello di polveri sottili nell'aria riportata da ciascun #glossary[sensore], espressa in $#sym.mu g\/m^3$ (#glossary("PM10")), con dati aggregati in intervalli di 1 minuto;
- Media del riempimento dei bacini idrici riportata da ciascun #glossary[sensore], espressa in percentuale, con dati aggregati in intervalli di 1 minuto.
=== #glossary[Pannelli] con grafici in formato "Geomap"
I #glossary[pannelli] contenenti grafici in formato "Geomap" riportano dati relativi a:
- Direzione del vento riportata da ciascun #glossary[sensore], dove i sensori vengono rappresentati tramite il proprio nome e una freccia che punta nella direzione del vento; le icone rappresentanti i sensori vengono colorate in base alla velocità del vento, utilizzando un schema di colori che segue le transizioni verde-giallo-arancio-rosso, dove il verde indica una velocità bassa e il rosso una velocità elevata;
#figure(
image("assets/screenshots/mappa-vento.png",width:100%),
caption: [Grafico in formato Geomap del vento.]
)
- Posizione dei sensori inclusi nella #glossary[dashboard] "Ambientale", dove i sensori vengono rappresentati tramite icone il cui colore varia a seconda della tipologia del #glossary[sensore].
#figure(
image("assets/screenshots/mappa-sensori.png",width:100%),
caption: [Grafico in formato Geomap dei sensori ambientali.]
)
Nel caso della mappa che mostra il posizionamento dei sensori ambientali, le informazioni visualizzate posizionando il cursore sopra un #glossary[sensore] includono il nome del #glossary[sensore], la tipologia del #glossary[sensore], latitudine e longitudine. Nel caso della mappa che mostra la direzione del vento, invece, le informazioni visualizzate includono anche la velocità e la direzione del vento più recenti. Infine, in basso a sinistra è presente una legenda con una scala di colori che riporta il valore minimo e massimo dei dati riportati dal #glossary[sensore], così come la colorazione del #glossary[sensore] stesso a seconda dell'ultimo valore riportato (in altre parole, i colori dello schema utilizzato nella legenda sono gli stessi colori assunti dai sensori in base alla velocità). Quando il cursore viene posizionato sopra un determinato #glossary[sensore], anche nella scala di colori viene visualizzato il valore della velocità del vento più recente.
=== #glossary[Pannelli] con grafici in formato "Table"
Il #glossary[pannello] contenente un grafico in formato "Table" riporta dati relativi a:
- Velocità del vento riportata da ciascun #glossary[sensore], espressa in chilometri all'ora (km/h) e direzione, espressa in gradi (con gli 0° a Est e i 180° a Ovest).
Tale #glossary[pannello] contiene una tabella i cui campi indicano il nome del #glossary[sensore], la velocità del vento più recente e la sua direzione;
=== #glossary[Pannelli] con grafici in formato "Stat"
Il #glossary[pannello] contenente un grafico in formato "Stat" riporta dati relativi a:
- Media della temperatura riportata da tutti i sensori cumulativamente nell'intervallo di tempo selezionato;
- Media dell'intensità delle precipitazioni riportata da tutti i sensori cumulativamente nell'intervallo di tempo selezionato;
- Media del livello di polveri sottili nell'aria tra quelli riportati da tutti i sensori cumulativamente nell'ultimo minuto;
- Massimo livello di polveri sottili nell'aria tra quelli riportati da tutti i sensori cumulativamente negli ultimi 5 minuti.
#figure(
image("assets/screenshots/media-e-massimo-inquinamento.png",width:50%),
caption: [Grafico in formato Stat del livello massimo e medio di polveri sottili nell'aria.]
)
#pagebreak()
== Dashboard "Urbanistica"
La #glossary[dashboard] "Urbanistica" è dedita alla visualizzazione di dati relativi alla città, che possono includere ma non sono limitati a disponibilità dei parcheggi, disponibilità delle colonne di ricarica per automobili elettriche, stato di congestione delle strade, livello di batteria delle biciclette elettriche e riempimento delle zone ecologiche. Quelli riportati sono aspetti monitorati all'interno della #glossary[dashboard] tramite #glossary[pannelli] contenenti grafici di varia natura, in particolare si utilizza: il formato "Geomap" e il formato "Table".
=== #glossary[Pannelli] con grafici in formato "Geomap"
I #glossary[pannelli] contenenti grafici in formato "Geomap" riportano dati relativi a:
- Disponibilità dei parcheggi, tramite un indicatore numerico che riporta il numero di posti liberi; le icone rappresentanti i sensori vengono colorate in base alla disponibilità, utilizzando un schema di colori che segue le transizioni verde-giallo-arancio-rosso, dove il verde indica un numero elevato di posti liberi e il rosso un numero esiguo di posti liberi;
#figure(
image("assets/screenshots/mappa-parcheggi.png",width:100%),
caption: [Grafico in formato Geomap della disponibilità dei parcheggi.]
)
- Disponibilità delle colonne di ricarica, tramite icone colorate; le icone rappresentanti i sensori vengono colorate in base alla disponibilità, utilizzando un schema di colori che segue le transizioni verde-rosso, dove il verde indica disponibilità e il rosso indica la non disponibilità;
- Stato di congestione delle strade, tramite gli stati "LOW", "MEDIUM", "HIGH" e "BLOCKED"; le icone rappresentanti i sensori vengono colorate in base allo stato di congestione, utilizzando un schema di colori che segue le transizioni verde-giallo-arancio-rosso, dove il verde indica lo stato "LOW" e il rosso lo stato "BLOCKED";
#figure(
image("assets/screenshots/mappa-traffico.png",width:100%),
caption: [Grafico in formato Geomap dello stato di congestione delle strade.]
)
- Livello di batteria delle biciclette elettriche, tramite un indicatore numerico che ne riporta la percentuale; le icone rappresentanti i sensori vengono colorate in base alla percentuale di batteria, utilizzando un schema di colori che segue le transizioni verde-giallo-arancio-rosso, dove il verde indica una percentuale di batteria elevata e il rosso una percentuale di batteria bassa;
#figure(
image("assets/screenshots/mappa-biciclette.png",width:100%),
caption: [Grafico in formato Geomap del livello di batteria delle biciclette elettriche.]
)
- Riempimento delle zone ecologiche, tramite un indicatore numerico che riporta la percentuale di riempimento; le icone rappresentanti i sensori vengono colorate in base alla percentuale di riempimento, utilizzando un schema di colori che segue le transizioni verde-giallo-arancio-rosso, dove il verde indica una percentuale di riempimento bassa e il rosso una percentuale di riempimento elevata.
=== #glossary[Pannelli] con grafici in formato "Table"
Il #glossary[pannello] contenente un grafico in formato "Table" riporta dati relativi a:
- Erogazione delle colonne di ricarica riportata da ciascun #glossary[sensore], espressa in chilowatt per ora (kWh).
Tale #glossary[pannello] contiene una tabella avente campi che indicano il nome del #glossary[sensore] e l'erogazione più recente.
#pagebreak()
== Dashboard "Dati grezzi"
La #glossary[dashboard] "Dati grezzi" è dedita alla raccolta dei dati inviati da tutti i sensori e permette di visualizzare tutti i dati registrati nel #glossary[sistema]. La #glossary[dashboard] consente anche di filtrare tutti i dati in base al nome e alla tipologia dei sensori di interesse, per poter risalire con facilità ai dati inviati da questi ultimi. Per questo scopo, viene utilizzato un unico #glossary[pannello] di tipo "Table".
=== #glossary[Pannello] con grafico in formato "Table"
#figure(
image("assets/screenshots/dati-grezzi.png",width:100%),
caption: [Grafico in formato Table contenente i dati grezzi.]
)
Tale #glossary[pannello] è preceduto da un header che riporta due campi di input con etichette: "Nome Sensore" e "Tipologia". Cliccando sull'etichetta alla sinistra del campo o sul campo stesso, è possibile visualizzare il numero di valori attualmente selezionati ("Selected") ed, eventualmente, selezionarne altri utilizzando le checkbox apposite. Inoltre, è possibile inserire manualmente il valore desiderato all'interno del campo ("Enter variable value") per filtrare velocemente i valori a disposizione. Il campo "Nome Sensore" riporta i nomi di tutti sensori a disposizione e il campo "Tipologia" riporta le tipologie dei sensori.
La tabella vera e propria contiene i campi "Nome Sensore", "Tipologia", "Orario" e "Rilevazione", quest'ultima per i valori delle misurazioni effettuate dai sensori; i valori del campo "Tipologia" hanno un colore distinto a seconda del tipo di #glossary[sensore] indicato.
Come evidenziato in precedenza, è possibile ordinare i valori di tutti i campi della tabella in ordine crescente o decrescente semplicemente cliccandone il nome collocato in cima alla colonna corrispondente.
#pagebreak()
== Dashboard "Superamento soglie"
La #glossary[dashboard] "Superamento soglie" è dedita alla visualizzazione dei dati che superano determinate soglie preimpostate per i sensori che monitorano le precipitazioni, il riempimento delle zone ecologiche, il riempimento dei bacini idrici, il livello di polveri sottili nell'aria e la temperatura; in particolare, sono previsti cinque #glossary[pannelli] di tipo "Table", ognuno dedicato ad una tipologia diversa di #glossary[sensore], che riportano i campi "name", "timestamp" e "value".
=== #glossary[Pannelli] con grafici in formato "Table"
- Superamento soglia precipitazioni (50 mm/h): riporta i dati superanti la soglia dei 50 millimetri di pioggia all'ora;
- Superamento soglia riempimento zone ecologiche (80%): riporta i dati superanti la soglia dell'80% di capienza;
- Superamento soglia livello bacini idrici (70%): riporta i dati superanti la soglia del 70% di capienza;
- Superamento soglia livello polveri sottili ($80#sym.mu g\/m^3$): riporta i dati superanti la soglia degli $80#sym.mu g\/m^3$ per l'inquinamento dell'aria (#glossary("PM10"));
- Superamento soglia temperatura (40°C): riporta i dati superanti la soglia dei 40 gradi Celsius di temperatura.
#figure(
image("assets/screenshots/soglia-bacini.png",width:100%),
caption: [Grafico in formato Table contenente i dati superanti la soglia dei bacini idrici.]
)
#pagebreak()
== Allerte
Come illustrato nella sezione precedente *Dashboard "Superamento soglie"*, esistono cinque soglie preimpostate per monitorare altrettante tipologie di sensori ambientali; nel momento in cui i dati rilevati ed inviati da un #glossary[sensore] superano una determinata soglia, questi vengono visualizzati all'interno della #glossary[dashboard] apposita, ma l'infrazione viene anche rilevata dalle "Alert rules" visibili all'interno della sezione omonima nella pagina "Alerting" e comunicata all'utente tramite una notifica sul canale Discord configurato come spiegato nella sezione "*Creare Discord Webhook*". #glossary[Grafana] permette anche di configurare le modalità di invio e il formato delle notifiche nelle sezioni “Notification Policy” e “Contact Points” della pagina "Alerting", garantendo un maggiore controllo e una maggiore flessibilità nella gestione delle notifiche.
Senza apportare modifiche particolari, attualmente le notifiche vengono visualizzate all'interno del canale Discord configurato dall'utente nel modo seguente:
#figure(
image("assets/screenshots/notifiche-soglie-bacini.png",width:100%),
caption: [Canale Discord contenente le notifiche che denotano il superamento della soglia dei bacini idrici.]
)
#pagebreak()
= Supporto tecnico
Per assistenza tecnica relativa all'utilizzo del prodotto software "InnovaCity", viene fornito il seguente indirizzo email:
#set align(center)
`<EMAIL>`
#set align(left)
Per un servizio più efficiente, si è pregati di includere nel corpo dell'email una descrizione quanto più completa del problema riscontrato, insieme ad eventuali screenshot o dettagli aggiuntivi che possano risultare utili alla risoluzione di quest'ultimo. Si invita, inoltre, a descrivere eventuali passaggi già tentati per risolvere il problema, in modo che il team possa fornire un'assistenza più mirata.
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/array-24.typ | typst | Other | // Test the `sum` method.
#test(().sum(default: 0), 0)
#test(().sum(default: []), [])
#test((1, 2, 3).sum(), 6)
|
https://github.com/kdog3682/mathematical | https://raw.githubusercontent.com/kdog3682/mathematical/main/0.1.0/src/examples/pixel-art-3d-shapes-attempt-1.typ | typst | #import "@preview/cetz:0.2.2"
#import cetz.draw: *
#cetz.canvas({
grid((0, 0), (5, 5), stroke: 0.25pt)
let fooga(stroke: black) = {
set-style(
rect: (stroke: stroke)
)
rect((0, 0), (1, 1))
}
let booga(dimension, x1, y1, x2, y2)= {
let dimensions = (
xz: on-xz,
yz: on-yz,
xy: on-xy,
)
dimensions.at(dimension)(rect((x1, y1), (x2, y2)))
}
on-xy(fooga())
on-xz(fooga(stroke: blue))
on-yz(fooga(stroke: red))
booga("yz", 0, 0, -1, 1)
booga("xz", 0, 0, -1, 1)
translate(x: 1, y: 0.5, z: 0.5)
booga("xy", 0, 0, 1, 1)
})
|
|
https://github.com/floriandejonckheere/utu-thesis | https://raw.githubusercontent.com/floriandejonckheere/utu-thesis/master/README.md | markdown | # Research thesis
This repository contains my research thesis for the Master of Science (Technology) in Information and Communication Technology: Software Engineering at the University of Turku.
The documents are typeset using the [Typst](https://typst.app/home) typesetting language.
# Usage
```
# Enter the directory
cd thesis
# Build document once
make
# Or, build document and watch for changes
make watch
# Remove intermediary generated files
make clean
# Print number of words per file
make words
```
# Development
## Bibliography
To import a BibTeX file, use the following command:
```
# Convert to YAML
hayagriva bibliography.bib > bibliography.yml
# Remove curly braces
sed -e 's/[\{\}]//g' -i bibliography.yml
```
# License
Logo: Copyright (C) University of Turku
Copyright (C) 2024 <NAME>
|
|
https://github.com/haxibami/haxipst | https://raw.githubusercontent.com/haxibami/haxipst/main/test/sample.typ | typst | #import "../src/main.typ": *
#show: resume.with(
pdf-author: "haxibami",
pdf-keywords: (
"Typst",
),
title: [#text(size: 1.5em)[*Typst文書のサンプル*]],
author: [
#align(
right,
grid(
columns: (
25pt,
40pt,
),
gutter: 5pt,
image(
"./icon.jpg",
width: 25pt,
),
align(horizon)[haxibami],
),
)
],
date: [#datetime.today().display()],
header: align(right)[
Typst文書のサンプル
],
)
#set heading(numbering: "1.1.")
#outline(title: "目次")
= Typst とは・見出し・リンク
#quote(
attribution: [#link("https://github.com/typst/typst")[GitHub]],
block: true,
)[
"Typst is a new markup-based typesetting system that is designed to be as powerful as LaTeX while being much easier to learn and use."
]
```typ
= Typst とは・見出し・リンク
#quote(attribution: [#link("https://github.com/typst/typst")[GitHub]], block: true)[
"Typst is a new markup-based typesetting system that is designed to be as powerful as LaTeX while being much easier to learn and use."
]
```
= 言語とフォント
こんにちは / Добридень / ٱلسَّلَامُ عَلَيْكُمْ
```typ
// システムにインストールされているフォントを読み込む
#set text(font: (
"Noto Serif CJK JP",
"Noto Sans CJK JP",
"Twitter Color Emoji"
))
```
= 装飾・リスト・数式・絵文字・表
このごろ都に流行るもの
+ ツイート
+ 捕食者
+ ツイーター
ツイート捕食者 #text(size: 1.5em, fill: red)[ツイーター] は、ツイートを#underline(offset: 3pt)[捕食]して消費する。しかし、みなさんのツイート生産速度のほうが*はるかに*速いため、けっして追いつくことはない。
```typ
このごろ都に流行るもの
+ ツイート
+ 捕食者
+ ツイーター
ツイート捕食者 #text(size: 1.5em, fill: red)[ツイーター] は、ツイートを#underline(offset: 3pt)[捕食]して消費する。しかし、みなさんのツイート生産速度のほうが*はるかに*速いため、けっして追いつくことはない。
```
$P(x)$:「$x$ がツイートを捕食する」、$Q(x)$:「$x$ がツイーターである」とすると、
#align(center)[
$forall x(
P(
x
) arrow.r Q(
x
)
)$
]
```typ
#align(center)[
$forall x(P(x) arrow.r Q(x))$
]
```
#table(
columns: 2,
inset: 10pt,
align: horizon,
[*絵文字を使う際の*],
[*注意点*],
[普通の絵文字を埋め込むと],
[PDFサイズが爆発するので #emoji.face.explode],
[#link("https://github.com/13rac1/twemoji-color-font")[twemoji-color-font]],
[などを使うとよい #emoji.bird],
)
```typ
#table(
columns: 2,
inset: 10pt,
align: horizon,
[*絵文字を使う際の*], [*注意点*],
[普通の絵文字を埋め込むと], [PDFサイズが爆発するので #emoji.face.explode],
[#link("https://github.com/13rac1/twemoji-color-font")[twemoji-color-font] (SVG)], [などを使うとよい #emoji.bird]
)
```
= 画像・グリッド
@icon は私のアイコン、@icon_old は私の以前のアイコンである。
#grid(
columns: (
1fr,
1fr,
),
gutter: 10pt,
align(horizon)[
#figure(
image(
"icon.jpg",
width: 40%,
),
caption: [
A girl holding _Naming and Necessity_.
],
) <icon>
],
align(horizon)[
#figure(
image(
"icon_old.png",
width: 40%,
),
caption: [
Ange, from _Princess Principal_.
],
) <icon_old>
],
)
```typ
@icon は私のアイコン、@icon_old は私の以前のアイコンである。
#grid(
columns: (1fr, 1fr),
gutter: 10pt,
align(horizon)[
#figure(
image("icon.jpg", width: 40%),
caption: [
A girl holding _Naming and Necessity_.
],
) <icon>
],
align(horizon)[
#figure(
image("icon_old.png", width: 40%),
caption: [
Ange, from _Princess Principal_.
],
) <icon_old>
]
)
```
= にせ #macro.latex(文字の位置調整)
キヒヒ…… #macro.latex でヤンスねえ……
```typ
// macro.typ
#let latex = text(font: "Latin Modern Roman", [L#h(-0.35em)#text(size: 0.725em, baseline: -0.25em)[A]#h(-0.125em)T#h(-0.175em)#text(baseline: 0.225em)[E]#h(-0.125em)X])
```
```typ
キヒヒ…… #macro.latex でヤンスねえ……
```
|
|
https://github.com/dyc3/senior-design | https://raw.githubusercontent.com/dyc3/senior-design/main/service-discovery.typ | typst | = Service Discovery <Chapter::ServiceDiscovery>
In order to establish connections with Monoliths, the Balancer needs to know the network address of each Monolith. Similarly, the collector needs to know the network addresses of Balancers for visualization. This is done through a process called discovery. The discovery process is responsible for finding the network address of each Monolith and connecting to it. @Figure::discovery-sequence shows the sequence diagram for the discovery process.
There are 2 different discovery modes: Polling and Continuous. Polling is used for `ServiceDiscoverer`s that do not provide real time updates (Like `FlyServiceDiscoverer`), and Continuous is used for `ServiceDiscoverer`s that do provide real time updates (Like `HarnessServiceDiscoverer`). The discovery process is ultimately the same for both modes.
#figure(
image("figures/discovery/discovery-class.svg"),
caption: "Class Diagram for the Service Discovery Process."
) <Figure::discovery-class>
#figure(
image("figures/discovery/discovery-sequence.svg"),
caption: "Sequence Diagram for the Service Discovery Process."
) <Figure::discovery-sequence>
== Implementation
The current implementations of the `ServiceDiscoverer` trait are diagrammed in @Figure::service-discoverers.
#figure(
image("figures/discovery/service-discoverers.svg", width: 85%),
caption: "Class Diagram for Service Discoverers."
) <Figure::service-discoverers>
The first implementation is used when connecting to the instance of the Monolith on fly.io. Before connecting the user inputs the port that the Monolith should listen on for the load balancer connection and a string representing the fly app. These properties are both encapsulated by the `FlyDiscoveryConfig` struct.
To discover the Monolith, the configuration and a query are passed into the discoverer struct, `FlyServiceDiscoverer`. In order to connect to the instance on fly.io, an IPv6 DNS lookup is created using the query `global.<fly app name>.internal` #cite(<fly-dns-lookups>). The discovery process uses the results of the lookup and connects the Balancer.
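As a rough illustration of this lookup only (the Balancer's actual implementation may use a dedicated DNS client rather than the standard library resolver), resolving the internal hostname could look like the following sketch, in which the function name and return type are assumptions:

```rust
use std::net::{SocketAddr, ToSocketAddrs};

// Sketch only: resolve fly.io's internal hostname to the addresses of all
// running Monolith instances. `app` and `port` correspond to the fields of
// `FlyDiscoveryConfig` described above; the function name is illustrative.
fn resolve_fly_monoliths(app: &str, port: u16) -> std::io::Result<Vec<SocketAddr>> {
    let query = format!("global.{app}.internal:{port}");
    // fly.io's internal DNS answers with one IPv6 address per running instance.
    Ok(query.to_socket_addrs()?.filter(SocketAddr::is_ipv6).collect())
}
```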
The second implementation is used when manually connecting to any instance excluding the one on fly.io and the discovery process works slightly differently. Any number of monolith connections, represented by the `ConnectionConfig` are passed into the manual discoverer. The discoverer then clones the Monoliths and connects.
`HarnessServiceDiscoverer` is the third implementation and is used for testing with the harness. The discoverer opens a port and listens for incoming WebSocket connections. When a connection is made, the discoverer listens for a message from the harness dictating all the Monoliths that are visible to the Balancer.
The fourth implementation comes in the form of `DnsServiceDiscoverer` when connecting the Monolith to the Docker DNS server. This process follows a similar procedure to that of `FlyServiceDiscoverer` but queries IPv4 addresses instead of IPv6 addresses as Docker does not support IPv6 addresses.
|
|
https://github.com/sitandr/typst-examples-book | https://raw.githubusercontent.com/sitandr/typst-examples-book/main/src/packages/misc.md | markdown | MIT License | # Misc
# Formatting strings
## `oxifmt`, general purpose string formatter
```typ
#import "@preview/oxifmt:0.2.0": strfmt
#strfmt("I'm {}. I have {num} cars. I'm {0}. {} is {{cool}}.", "John", "Carl", num: 10) \
#strfmt("{0:?}, {test:+012e}, {1:-<#8x}", "hi", -74, test: 569.4) \
#strfmt("{:_>+11.5}", 59.4) \
#strfmt("Dict: {:!<10?}", (a: 5))
```
```typ
#import "@preview/oxifmt:0.2.0": strfmt
#strfmt("First: {}, Second: {}, Fourth: {3}, Banana: {banana} (brackets: {{escaped}})", 1, 2.1, 3, label("four"), banana: "Banana!!")\
#strfmt("The value is: {:?} | Also the label is {:?}", "something", label("label"))\
#strfmt("Values: {:?}, {1:?}, {stuff:?}", (test: 500), ("a", 5.1), stuff: [a])\
#strfmt("Left5 {:_<5}, Right6 {:*>6}, Center10 {centered: ^10?}, Left3 {tleft:_<3}", "xx", 539, tleft: "okay", centered: [a])\
```
```typ
#import "@preview/oxifmt:0.2.0": strfmt
#repr(strfmt("Left-padded7 numbers: {:07} {:07} {:07} {3:07}", 123, -344, 44224059, 45.32))\
#strfmt("Some numbers: {:+} {:+08}; With fill and align: {:_<+8}; Negative (no-op): {neg:+}", 123, 456, 4444, neg: -435)\
#strfmt("Bases (10, 2, 8, 16(l), 16(U):) {0} {0:b} {0:o} {0:x} {0:X} | W/ prefixes and modifiers: {0:#b} {0:+#09o} {0:_>+#9X}", 124)\
#strfmt("{0:.8} {0:.2$} {0:.potato$}", 1.234, 0, 2, potato: 5)\
#strfmt("{0:e} {0:E} {0:+.9e} | {1:e} | {2:.4E}", 124.2312, 50, -0.02)\
#strfmt("{0} {0:.6} {0:.5e}", 1.432, fmt-decimal-separator: ",")
```
## `name-it`, integer to text
```typ
#import "@preview/name-it:0.1.0": name-it
- #name-it(2418345)
```
## `nth`, Nth element
```typ
#import "@preview/nth:0.2.0": nth
#nth(3), #nth(5), #nth(2421)
``` |
https://github.com/tfachada/thesist | https://raw.githubusercontent.com/tfachada/thesist/main/template/Chapters/1-Introduction.typ | typst | MIT License | #import "@preview/thesist:0.2.0": flex-caption, subfigure-grid
#import "@preview/glossarium:0.5.0": gls, glspl
= Introduction
// Start writing here...
|
https://github.com/kdkasad/typst-homework-template | https://raw.githubusercontent.com/kdkasad/typst-homework-template/master/examples/computer-science/problem1.typ | typst | BSD 3-Clause "New" or "Revised" License | #import "../../khw.typ": problem, parts, boxed
#problem(newpage: false)[
Consider the given code segment. _(Omitted here.)_
]
#parts[
*When $f(k)$ is called,
how many times is it recursively called in total?
Include the initial call and give the answer in terms of $k$.*
$f(k)$ calls $f(k-1)$ recursively until the argument equals 0.
Thus the number of times $f(k)$ is called is
$T(k) = T(k-1) + 1$, $T(0) = 1$.
$
T(k)
&= T(k-1) + 1 \
&= T(k-2) + 1 + 1 \
&= T(k-3) + 1 + 1 + 1 \
&= T(k-n) + n \
&= T(0) + k && "when" n = k \
T(k) &= k + 1
$
][
*What does $f(k)$ return?
Give your answer in terms of $k$.*
Based on the code, we can define $f(k)$ mathematically as
$f(k) = f(k-1) + 3k + 2, f(0)=2$.
$
f(k) &= f(k-1) + 3k + 2 \
&= (f(k-2) + 3(k-1) + 2) + 3k + 2 \
&= ((f(k-3) + 3(k-2) + 2) + 3(k-1) + 2) + 3k + 2 \
&= f(k-3) + 3(k-2) + 2 + 3(k-1) + 2 + 3k + 2 \
&= f(k-n) + 3 sum_(i=k-n+1)^(k) i + n(2) \
$
Now let $n=k$.
$
f(k) &= f(0) + 3 sum_(i=1)^(k) i + 2k \
&= 2 + 3 dot k(k+1)/2 + 2k \
// &= (4+3k^2+3k+4k)/2 \
f(k) &= (3k^2 + 7k + 4)/2
$
][
*How many times is $f$ called in total?
Give the answer in terms of $n$.*
$f(j)$ is called once from each iteration of the loop.
The loop runs from $j=1$ to $j=n$.
Thus the number of times $f$ is called is the following sum,
where $T(n)$ represents the number of times $f$ is called when invoked as $f(n)$.
We know from part *(a)* that $T(n) = n+1$.
$
sum_(j=1)^(n) T(j)
&= sum_(j=1)^n (j+1)
= sum_(j=1)^n j + sum_(j=1)^n 1
= n(n+1)/2 + n
= (n^2 + 3n)/2
$
]
|
https://github.com/GYPpro/DS-Course-Report | https://raw.githubusercontent.com/GYPpro/DS-Course-Report/main/Rep/19.typ | typst | #import "@preview/tablex:0.0.6": tablex, hlinex, vlinex, colspanx, rowspanx
#import "@preview/codelst:2.0.1": sourcecode
// Display inline code in a small box
// that retains the correct baseline.
#set text(font:("Times New Roman","Source Han Serif SC"))
#show raw: set text(
font: ("consolas", "Source Han Serif SC")
)
#set page(
paper: "a4",
)
#set text(
font:("Times New Roman","Source Han Serif SC"),
style:"normal",
weight: "regular",
size: 13pt,
)
#let nxtIdx(name) = box[ #counter(name).step()#counter(name).display()]
#set math.equation(numbering: "(1)")
#show raw.where(block: true): block.with(
fill: luma(240),
inset: 10pt,
radius: 4pt,
)
#set math.equation(numbering: "(1)")
#set page(
paper:"a4",
number-align: right,
margin: (x:2.54cm,y:4cm),
header: [
#set text(
size: 25pt,
font: "KaiTi",
)
#align(
bottom + center,
[ #strong[暨南大学本科实验报告专用纸(附页)] ]
)
#line(start: (0pt,-5pt),end:(453pt,-5pt))
]
)
/*----*/
= 高精度计算
\
#text(
font:"KaiTi",
size: 15pt
)[
课程名称#underline[#text(" 数据结构 ")]成绩评定#underline[#text(" ")]\
实验项目名称#underline[#text(" ") 高精度计算 #text(" ")]指导老师#underline[#text(" 干晓聪 ")]\
实验项目编号#underline[#text(" 19 ")]实验项目类型#underline[#text(" 设计性 ")]实验地点#underline[#text(" 数学系机房 ")]\
学生姓名#underline[#text(" 郭彦培 ")]学号#underline[#text(" 2022101149 ")]\
学院#underline[#text(" 信息科学技术学院 ")]系#underline[#text(" 数学系 ")]专业#underline[#text(" 信息管理与信息系统 ")]\
实验时间#underline[#text(" 2024年6月13日上午 ")]#text("~")#underline[#text(" 2024年7月13日中午 ")]\
]
#set heading(
numbering: "1.1."
)
= 实验目的
利用扩增数组实现对大整数的四则运算
= 实验环境
计算机:PC X64
操作系统:Windows + Ubuntu20.0LTS
编程语言:C++:GCC std20
IDE:Visual Studio Code
= 程序原理
以数组按低位在前的顺序存储大整数的各位数字(实现中每个数组元素压缩存放 4 位十进制数,相当于以 10000 为基数),加、减法逐位运算并处理进位与借位,乘法模拟竖式逐位相乘累加,除法利用最高几位估计试商再加以修正,整体流程即模拟手动竖式计算。
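下面给出一段仅作示意的代码,演示以 10000 为基数、低位在前存储时逐位相加并处理进位的核心思想(函数与变量名仅为示意,思路与后文 `operator+=` 的核心循环一致):
#sourcecode[```cpp
#include <vector>
#include <algorithm>
// 示意:两个以 10000 为基数、低位在前存储的非负大整数相加
std::vector<int> add(const std::vector<int> &a, const std::vector<int> &b) {
    std::vector<int> c(std::max(a.size(), b.size()) + 1, 0);
    int carry = 0;
    for (size_t i = 0; i < c.size(); i++) {
        carry += (i < a.size() ? a[i] : 0) + (i < b.size() ? b[i] : 0);
        c[i] = carry % 10000; // 保留当前位
        carry /= 10000;       // 进位传给更高位
    }
    while (c.size() > 1 && c.back() == 0) c.pop_back(); // 去除高位多余的 0
    return c;
}
```]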
#pagebreak()
= 程序代码
== `hghCacu.hpp`
#sourcecode[```cpp
#include <cstdio>
#include <iostream>
#include <cmath>
#include <string>
#include <cstring>
#include <vector>
#include <algorithm>
using namespace std;
class BigInt {
public:
int sign;
std::vector<int> v;
BigInt() : sign(1) {}
BigInt(const std::string &s) { *this = s; }
BigInt(int v) {
char buf[21];
sprintf(buf, "%d", v);
*this = buf;
}
void zip(int unzip) {
if (unzip == 0) {
for (int i = 0; i < (int)v.size(); i++)
v[i] = get_pos(i * 4) + get_pos(i * 4 + 1) * 10 + get_pos(i * 4 + 2) * 100 + get_pos(i * 4 + 3) * 1000;
} else
for (int i = (v.resize(v.size() * 4), (int)v.size() - 1), a; i >= 0; i--)
a = (i % 4 >= 2) ? v[i / 4] / 100 : v[i / 4] % 100, v[i] = (i & 1) ? a / 10 : a % 10;
setsign(1, 1);
}
int get_pos(unsigned pos) const { return pos >= v.size() ? 0 : v[pos]; }
BigInt &setsign(int newsign, int rev) {
for (int i = (int)v.size() - 1; i > 0 && v[i] == 0; i--)
v.erase(v.begin() + i);
sign = (v.size() == 0 || (v.size() == 1 && v[0] == 0)) ? 1 : (rev ? newsign * sign : newsign);
return *this;
}
std::string to_str() const {
BigInt b = *this;
std::string s;
for (int i = (b.zip(1), 0); i < (int)b.v.size(); ++i)
s += char(*(b.v.rbegin() + i) + '0');
return (sign < 0 ? "-" : "") + (s.empty() ? std::string("0") : s);
}
bool absless(const BigInt &b) const {
if (v.size() != b.v.size()) return v.size() < b.v.size();
for (int i = (int)v.size() - 1; i >= 0; i--)
if (v[i] != b.v[i]) return v[i] < b.v[i];
return false;
}
BigInt operator-() const {
BigInt c = *this;
c.sign = (v.size() > 1 || v[0]) ? -c.sign : 1;
return c;
}
BigInt &operator=(const std::string &s) {
if (s[0] == '-')
*this = s.substr(1);
else {
for (int i = (v.clear(), 0); i < (int)s.size(); ++i)
v.push_back(*(s.rbegin() + i) - '0');
zip(0);
}
return setsign(s[0] == '-' ? -1 : 1, sign = 1);
}
bool operator<(const BigInt &b) const {
return sign != b.sign ? sign < b.sign : (sign == 1 ? absless(b) : b.absless(*this));
}
bool operator==(const BigInt &b) const { return v == b.v && sign == b.sign; }
BigInt &operator+=(const BigInt &b) {
if (sign != b.sign) return *this = (*this) - -b;
v.resize(std::max(v.size(), b.v.size()) + 1);
for (int i = 0, carry = 0; i < (int)b.v.size() || carry; i++) {
carry += v[i] + b.get_pos(i);
v[i] = carry % 10000, carry /= 10000;
}
return setsign(sign, 0);
}
BigInt operator+(const BigInt &b) const {
BigInt c = *this;
return c += b;
}
void add_mul(const BigInt &b, int mul) {
v.resize(std::max(v.size(), b.v.size()) + 2);
for (int i = 0, carry = 0; i < (int)b.v.size() || carry; i++) {
carry += v[i] + b.get_pos(i) * mul;
v[i] = carry % 10000, carry /= 10000;
}
}
BigInt operator-(const BigInt &b) const {
if (b.v.empty() || b.v.size() == 1 && b.v[0] == 0) return *this;
if (sign != b.sign) return (*this) + -b;
if (absless(b)) return -(b - *this);
BigInt c;
for (int i = 0, borrow = 0; i < (int)v.size(); i++) {
borrow += v[i] - b.get_pos(i);
c.v.push_back(borrow);
c.v.back() -= 10000 * (borrow >>= 31);
}
return c.setsign(sign, 0);
}
BigInt operator*(const BigInt &b) const {
if (b < *this) return b * *this;
BigInt c, d = b;
for (int i = 0; i < (int)v.size(); i++, d.v.insert(d.v.begin(), 0))
c.add_mul(d, v[i]);
return c.setsign(sign * b.sign, 0);
}
BigInt operator/(const BigInt &b) const {
BigInt c, d;
BigInt e=b;
e.sign=1;
d.v.resize(v.size());
double db = 1.0 / (b.v.back() + (b.get_pos((unsigned)b.v.size() - 2) / 1e4) +
(b.get_pos((unsigned)b.v.size() - 3) + 1) / 1e8);
for (int i = (int)v.size() - 1; i >= 0; i--) {
c.v.insert(c.v.begin(), v[i]);
int m = (int)((c.get_pos((int)e.v.size()) * 10000 + c.get_pos((int)e.v.size() - 1)) * db);
c = c - e * m, c.setsign(c.sign, 0), d.v[i] += m;
while (!(c < e))
c = c - e, d.v[i] += 1;
}
return d.setsign(sign * b.sign, 0);
}
BigInt operator%(const BigInt &b) const { return *this - *this / b * b; }
bool operator>(const BigInt &b) const { return b < *this; }
bool operator<=(const BigInt &b) const { return !(b < *this); }
bool operator>=(const BigInt &b) const { return !(*this < b); }
bool operator!=(const BigInt &b) const { return !(*this == b); }
};
```]
== `_PRIV_TEST.cpp`
#sourcecode[```cpp
#include "hghCacu.hpp"
int main()
{
string aa,bb;
cin >> aa >> bb;
BigInt a = BigInt(aa);
BigInt b = BigInt(bb);
cout <<( a + b).to_str() << '\n';
cout << (a - b ).to_str()<< '\n';
cout << (a * b ).to_str()<< '\n';
system("pause");
return 0;
}
```]
= 测试数据与运行结果
运行上述`_PRIV_TEST.cpp`测试代码中的正确性测试模块,得到以下内容:
```
1111111111111111111111111111111111111111
2222222222222222222222222222222222222222
3333333333333333333333333333333333333333
-1111111111111111111111111111111111111111
2469135802469135802469135802469135802468641975308641975308641975308641975308642
```
可以看出,代码运行结果与预期相符,可以认为代码正确性无误。
|
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/closure-01.typ | typst | Other | // Basic closure without captures.
#{
let adder = (x, y) => x + y
test(adder(2, 3), 5)
}
|
https://github.com/jamesrswift/blog | https://raw.githubusercontent.com/jamesrswift/blog/main/_posts/2024-07-03-sparklines.md | markdown | MIT License | ---
layout: post
title: "A deep-dive into Sparklines - An alternative to figures"
---
Sometimes, you'll want to convey (or remind) the reader about some fact or data that is more intuitively understood when represented graphically.
For this reason, we have figures: pictures that are signposted by a label. Generally, they will take up enough space (or are difficult enough to typeset) that consideration must be put into whether the information is significant enough to warrant the costs.
For this reason, some journals even place a limit on the number of tables or figures.
This is where sparklines make their entrance. For quickly graphically representing a subset or feature of previously presented data, a text-sized plot without axes is included in the text.

For the purposes of illustration, I'm using `ETH` to `GBP` exchange rates to simulate actual data. The only matter of note was that when importing the data from the CSV to Typst, and owing to the lack of named capture groups, the code that casts YYYY-MM-DD to date is atrocious.
```
datetime(
..("year", "month", "day").zip(
it.Date.matches(
regex("(\d{4})-(\d{2})-(\d{2})")
).first().captures
).fold(
(:),
(acc, it) => (
..acc,
(it.first()): int(it.last())
)
)
)
```
# Text sized plots
One of the distinguishing features of sparklines is that when they are in the body of a text, they do not catch the eye more than is absolutely necessary. They don't have a luminance that is darker than the text, they are sized in proportion to the glyphs that surround them, and do not affect the line spacing. Now that we've set our goals, let's get started.
The lowest hanging fruit is that, no matter how they are encoded in the source file, they have a space width on either side. This can be achieved by nesting space symbols within weak 0em horizontal spacing, like so
```
h(0em, weak: true) + sym.space + box({
}) + sym.space + h(0em, weak: true)
```
To get our plots text-sized, I'm going to scale the x and y according to the spread of the data and the given `width` and `height` lengths
```
#let sparkline(
data,
height: 1em,
width: 1em,
) = h(0em, weak: true) + sym.space + box({
// Calculate the spread of data
let xs = data.map(it=>it.first())
let x-min = calc.min(..xs)
let x-max = calc.max(..xs)
let ys = data.map(it=>it.last())
let y-min = calc.min(..ys)
let y-max = calc.max(..ys)
cetz.canvas(
length: 1em,
{
cetz.draw.scale(
x: width / ((x-max - x-min) * 1em),
y: height / (2*(y-max - y-min) * 1em)
)
cetz.draw.line(..data)
}
)
}) + sym.space + h(0em, weak: true)
```
We've handed almost complete control of the width and the height over to the call-site as we can't make any general assumptions about the data that is to be rendered. All we truly can say is that someone wants to render an arbitrary amount of data points in a given amount of space. Equally, we can't make assumptions on how noisy the data is, and so we can't automatically handle the stroke thickness. As we're going to be letting the call-site handle that we may as well give them control over the entire style, seeing as we aren't doing anything specific
```
#let sparkline(
data,
height: 1em,
width: 1em,
style: (stroke: 0.4pt)
) = h(0em, weak: true) + sym.space + box({
let xs = data.map(it=>it.first())
let x-min = calc.min(..xs)
let x-max = calc.max(..xs)
let ys = data.map(it=>it.last())
let y-min = calc.min(..ys)
let y-max = calc.max(..ys)
cetz.canvas(
length: 1em,
{
cetz.draw.set-style(..style)
cetz.draw.scale(
x: width / ((x-max - x-min ) * 1em),
y: height / (2*(y-max - y-min) * 1em)
)
cetz.draw.line(..data)
}
)
}) + sym.space + h(0em, weak: true)
```
# One size doesn't fit all
We've managed to make a simple text sized plot of a line, but that won't suit every instance where short-hand graphical representations would be beneficial. Do we want to put two coloured marks on the ends of the line? Maybe we want a bar chart instead? Should the area below be shaded? We can make our code more general by shuffling some code around
```
#let line(data, (x-min, x-max), (y-min, y-max)) = cetz.draw.line(
..data
)
#let make(
data,
height: 1em,
width: 1em,
style: (stroke: 0.4pt),
draw: (line,)
) = h(0em, weak: true) + sym.space + box({
let xs = data.map(it=>it.first())
let x-min = calc.min(..xs)
let x-max = calc.max(..xs)
let ys = data.map(it=>it.last())
let y-min = calc.min(..ys)
let y-max = calc.max(..ys)
cetz.canvas(
length: 1em,
{
      cetz.draw.set-style(..style)
cetz.draw.scale(
x: width / ((x-max - x-min ) * 1em),
y: height / (2*(y-max - y-min) * 1em)
)
for cmd in draw {
cmd(data, (x-min, x-max), (y-min, y-max))
}
}
)
}) + sym.space + h(0em, weak: true)
```
We now have an input argument `draw` that is an array of draw commands, which take positional arguments `data` and min-max tuples for the x-axis and y-axis respectively. Now if someone wants something non-standard, they have the means to create it and specify it (or several). I've made a quick generator function for a draw command that does a vertical line over the whole y-domain with a given style:
```typ
// The return type is function!
#let vline(x, ..style) = (data, (x-min, x-max), (y-min, y-max)) => {
cetz.draw.line(
(x, y-min),
(x, y-max),
..style
)
}
#sparkline.make(
data.converted.filter(it=>(it.first()>290 and it.first() < 320 )),
width: 1.7em,
height: 1.2em,
draw: (
sparkline.line,
vline(313, stroke: (thickness: 0.4pt, paint: red))
)
)
```

Using a couple helper functions along the way, they are easy enough to include in table headings:
```
// Our table likes content instead of datetime for rendering
#let converted = converted.map((it)=>(:
..it,
Date-Display: it.Date.display()
))
// Simple (x,y) map utility function
#let series(key) = converted.map(
(it) => (
it.Date.ordinal() + (it.Date.year() - 2023) * 365,
float(it.at(key))
)
)
#import "../packages/booktabs/lib.typ" as booktabs
// Make defining columns a bit easier when we know what we want
#let column(key, display, width: 1em) = (:
key: "High",
display: sparkline.make(
width: width,
data.series(key)
) + display,
width: 1fr,
align: right,
)
// Last week's project makes an appearance!
#booktabs.rigor.make(
columns: (
(
key: "Date-Display",
display: [Date],
gutter: 1em
),
column("Open", width: 2.3em)[Open],
column("High", width: 2.3em)[High],
column("Low", width: 2.6em)[Low],
column("Close", width: 2.1em)[Close],
column("Volume")[Volume]
),
data: data.converted
)
```

# Closing thoughts
Playing around with sparklines has taught me a great deal about imagining "average use cases" and how best to implement them. The other thing it taught me is that my booktabs code from the last blog post suffers horribly for large tables, so it will probably get revisited.
One of the design limitations I ran into was that from the get-go, we assumed that we wanted to display (x,y) data, which isn't always the case. Marks on the plots get distorted due to the scaling transformations we applied, even when `transform-shape` is set to `false`, but I understand that this should be fixed in the next cetz release.
There does seem to be an issue for the "Volume" column sparkline, and I haven't managed to work out what it is. |
https://github.com/EunTilofy/NumComputationalMethods | https://raw.githubusercontent.com/EunTilofy/NumComputationalMethods/main/Chapter10/Chapter10-3.typ | typst | #import "../template.typ": *
#show: project.with(
course: "Computing Method",
title: "Computing Method - Chapter10-11",
date: "2024.6.1",
authors: "<NAME>, 3210106357",
has_cover: false
)
*Problems: Chapter10-10, Chapter11-1*
#HWProb(name: "10-10")[
用 Newton 法求方程组
$
cases(u = x^2 + y^2 -1 = 0\ v=x^3 - y = 0)
$
在 $x_0 = 0.8, y_0 = 0.6$ 附近的根,作 3 次迭代。
]
#solution[
计算 Jacobi 矩阵:
$
J = bmatrix(2x, 2y; 3x^2, -1)
$
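即每步迭代求解线性方程组并更新(与下方代码中 `np.linalg.solve` 的用法一致):
$
J(x_k, y_k) vec(Delta x, Delta y) = - vec(u(x_k, y_k), v(x_k, y_k)), quad
vec(x_(k+1), y_(k+1)) = vec(x_k, y_k) + vec(Delta x, Delta y)
$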
求解代码为:
```python
import numpy as np
def J(x, y):
return [[2*x, 2*y], [3*x**2, -1]]
def f1(x, y):
return x**2+y**2-1
def f2(x, y):
return x**3-y
def Newton(x0, y0):
for i in range(3):
F1 = -f1(x0, y0)
F2 = -f2(x0, y0)
J0 = J(x0, y0)
[dx, dy] = np.linalg.solve(np.array(J0), np.array([F1, F2]))
x0 = x0 + dx
y0 = y0 + dy
print(i+1, x0, y0)
Newton(0.8, 0.6)
```
输出结果为:
```python
1 0.8270491803278689 0.5639344262295083
2 0.8260323731676462 0.5636236767037873
3 0.8260313576552345 0.5636241621608473
```
所以三次迭代后 $x_3 = 0.8260, y_3 = 0.5636$。
]
#HWProb(name: "11-1")[
在区间 $[0, 1]$ 上数值解
$
cases(y' = y - (2x) / y \ y(0) = 1)
$
#set enum(numbering: "(1)")
+ 用 Euler 方法和改进 Euler 方法,取 $h = 0.1$;
+ 用经典 R-K 方法,取 $h = 0.2$。
精确解 $y = sqrt(1+2x)$ 可供比较。
]
#solution[
#set enum(numbering: "(1)")
+ 经典Euler方法:$y(x_0) = y_0, y_(n+1) = y_n + h f(x_n, y_n)$\ 改进Euler方法:$y_(n+1)=y_n+h/2(f(x_n, y_n)+f(x_(n+1),y_n+h f(x_n, y_n)))$```python
import numpy as np
import pandas as pd
def f(x, y):
return y - (2 * x) / y
def euler_method(f, x0, y0, h, n):
x = [x0]
y = [y0]
for i in range(n):
y_next = y[-1] + h * f(x[-1], y[-1])
x_next = x[-1] + h
x.append(x_next)
y.append(y_next)
return x, y
def improved_euler_method(f, x0, y0, h, n):
x = [x0]
y = [y0]
for i in range(n):
y_temp = y[-1] + h * f(x[-1], y[-1])
x_next = x[-1] + h
y_next = y[-1] + h / 2 * (f(x[-1], y[-1]) + f(x_next, y_temp))
x.append(x_next)
y.append(y_next)
return x, y
def theoretical_solution(x):
return np.sqrt(1 + 2 * x)
x0 = 0
y0 = 1
h = 0.1
n = 10
x_euler, y_euler = euler_method(f, x0, y0, h, n)
x_improved_euler, y_improved_euler = improved_euler_method(f, x0, y0, h, n)
y_theoretical = theoretical_solution(np.array(x_euler))
error_euler = np.abs(y_theoretical - np.array(y_euler))
error_improved_euler = np.abs(y_theoretical - np.array(y_improved_euler))
data = {
'$x$': x_euler,
'Euler $y_1$': y_euler,
'Improved Euler $y_2$': y_improved_euler,
'Theoretical $y^*$': y_theoretical,
'Euler Error $abs(y_1-y^*)$': error_euler,
'Improved Euler Error $abs(y_2-y^*)$': error_improved_euler
}
df = pd.DataFrame(data)
markdown_table = df.to_markdown(index=False)
print(markdown_table)
```#tablem[
| $x$ | Euler $y_1$ | Improved Euler $y_2$ | Theoretical $y^*$ | Euler Error $abs(y_1 - y^*)$ | Improved Euler Error $abs(y_2-y^*)$ |
|----:|-----------------:|--------------------------:|----------------:|---------------------:|------------------------------:|
| 0.1 | 1.1 | 1.09591 | 1.09545 | 0.00455488 | 0.000463976 |
| 0.2 | 1.19182 | 1.1841 | 1.18322 | 0.00860223 | 0.000880613 |
| 0.3 | 1.27744 | 1.2662 | 1.26491 | 0.0125268 | 0.0012903 |
| 0.4 | 1.35821 | 1.34336 | 1.34164 | 0.0165718 | 0.00171936 |
| 0.5 | 1.43513 | 1.4164 | 1.41421 | 0.0209194 | 0.00218837 |
| 0.6 | 1.50897 | 1.48596 | 1.48324 | 0.0257266 | 0.0027159 |
| 0.7 | 1.58034 | 1.55251 | 1.54919 | 0.0311449 | 0.00332075 |
| 0.8 | 1.64978 | 1.61647 | 1.61245 | 0.0373319 | 0.00402323 |
| 0.9 | 1.71778 | 1.67817 | 1.67332 | 0.0444593 | 0.00484631 |
| 1 | 1.78477 | 1.73787 | 1.73205 | 0.05272 | 0.00581659 |
]
+ 经典RK方法:$
cases(y_(n+1) = y_n+1/6 (K_1 + 2K_2 + 2K_3 + K_4)\ K_1 = h f(x_n, y_n) \ K_2 = h f(x_n + h/2, y_n + K_1/2)\ K_3 = h f (x_n + h/2, y_n + K_2/2) \ K_4 = h f(x_n + h, y_n + K_3))
$```python
def f(x, y):
return y - (2 * x) / y
def rk4_method(f, x0, y0, h, n):
x = [x0]
y = [y0]
for i in range(n):
K1 = h * f(x[-1], y[-1])
K2 = h * f(x[-1] + h / 2, y[-1] + K1 / 2)
K3 = h * f(x[-1] + h / 2, y[-1] + K2 / 2)
K4 = h * f(x[-1] + h, y[-1] + K3)
y_next = y[-1] + 1 / 6 * (K1 + 2 * K2 + 2 * K3 + K4)
x_next = x[-1] + h
x.append(x_next)
y.append(y_next)
return x, y
def theoretical_solution(x):
return np.sqrt(1 + 2 * x)
x0 = 0
y0 = 1
h = 0.2
n = 5
x_rk4, y_rk4 = rk4_method(f, x0, y0, h, n)
y_theoretical = theoretical_solution(np.array(x_rk4))
error_rk4 = np.abs(y_theoretical - np.array(y_rk4))
data = {
'x': x_rk4,
'R-K $y$': y_rk4,
'Theoretical $y^*$': y_theoretical,
'R-K Error $abs(y-y^*)$': error_rk4
}
df = pd.DataFrame(data)
markdown_table = df.to_markdown(index=False)
print(markdown_table)
```#tablem[
| x | R-K $y$ | Theoretical $y^*$ | R-K Error $abs(y-y^*)$ |
|----:|----------:|--------------------:|-------------------------:|
| 0 | 1 | 1 | 0 |
| 0.2 | 1.18323 | 1.18322 | 1.33308e-05 |
| 0.4 | 1.34167 | 1.34164 | 2.61434e-05 |
| 0.6 | 1.48328 | 1.48324 | 4.17609e-05 |
| 0.8 | 1.61251 | 1.61245 | 6.2492e-05 |
| 1 | 1.73214 | 1.73205 | 9.10751e-05 |
]
] |
|
https://github.com/fabriceHategekimana/master | https://raw.githubusercontent.com/fabriceHategekimana/master/main/4_Usecase/Backpropagation.typ | typst | == Réseaux de neurones (backpropagation)
Dans la phase d'entraînement, les poids et les biais sont changés à chaque itération grâce aux fonctions de perte et à la backpropagation. Les fonctions de perte calculent la différence entre la valeur cible y et la valeur y' sortie par le réseau et retournent un nombre qui évalue la distance entre la réponse obtenue et la réponse attendue. La backpropagation est réalisée en faisant un calcul des gradients des poids et des biais de chaque couche.
L'avantage du calcul du gradient vient du fait que les poids et les biais peuvent être modifiés de façon indépendante.
|
|
https://github.com/OCamlPro/ppaqse-lang | https://raw.githubusercontent.com/OCamlPro/ppaqse-lang/master/src/étude/tests.typ | typst | #import "defs.typ": *
#import "links.typ": *
= Test <A-tests>
Il y a plusieurs types d'outils pour tester un programme que l'on peut diviser
en trois catégories qui ne sont pas forcément exclusives:
- les outils d'écriture de tests;
- les outils de génération de tests;
- les outils de gestion des tests.
Les outils d'écriture de tests permettent de décrire les tests à réaliser. Ils
sont souvent fournis sous la forme de bibliothèques logicielles qui fournissent
des fonctions, des macros ou des annotations pour décrire les tests à
l'intérieur du programme lui même ou d'un programme à part.
L'écriture des tests étant souvent laborieuse, il existe des
générateurs de tests qui offrent la possibilité de réaliser une bonne partie des
tests unitaires et parfois fonctionnels de manière automatique.
Ces générateurs peuvent
être basés sur des techniques de génération aléatoire (_fuzzing_) qui peut être
guidée,
de génération de modèle (_model checking_) ou de génération de cas de test aux
limites.
Les outils de gestion des tests englobent les outils qui vont ajouter de
l'intelligence dans l'éxécution des tests et factoriser le plus possible
les exécutions qui peuvent être coûteuses en temps et en ressources.
Notons que certains outils proposent également la génération de métadonnées
utiles pour la certification ou qualification de logiciels. Certains peuvent
engendrer des matrices de traçabilité ou des rapports préformatés pour les
processus idoines.
Étant donné qu'il existe pléthore de cadres logiciels pour uniquement écrire
des tests et que la plus-value de ces cadres dans le processus d'édition
de logiciel critique est limitée, nous nous concentrerons essentiellement sur
les outils qui offrent un minimum de génération ou de gestion de tests
offrant le plus de valeur ajoutée pour les logiciels critiques.
Par ailleurs, il faut aussi distinguer quels types de test l'outil peut gérer.
Dans le rapport, nous distinguerons le type de test avec une
lettre majuscule selon le tableau suivant :
#figure(
table(
columns: (auto, auto, auto),
[*Type*], [*Description*], [*Identifiant*],
[*Unitaire*], [
Teste une unité de code (fonction, module, classe, ...)
], [U],
[*Intégration*], [
Teste l'intégration de plusieurs unités de code
], [I],
[*Fonctionnel*], [
Teste une fonctionnalité du programme
], [F],
[*Non régression*], [
Teste que les modifications n'ont pas introduit de régression
], [N],
[*Robustesse*], [
Teste une unité de code ou une fonctionnalité avec des valeurs aux
      limites, voire hors limites
], [R],
[*Couverture*], [
Teste la couverture du code par les tests
], [C],
)
)
Les critères d'analyse utilisés sont la capacité de
- générer des tests automatiquement
- de gérer efficacement les tests (factorisation, parallélisation,
rapports, ...)
- faire du _mocking_, c'est-à-dire de la simulation de dépendances.
|
|
https://github.com/benjamineeckh/kul-typst-template | https://raw.githubusercontent.com/benjamineeckh/kul-typst-template/main/tests/test1/sections/chapter1.typ | typst | MIT License | = Introduction
#lorem(500)
== a subtitle1
#lorem(200)
== a subtitle2
#lorem(200)
== a subtitle3
#lorem(200)
== a subtitle3
#lorem(200)
= another main title
== and directly a subtitle
#lorem(700) |
https://github.com/Otto-AA/definitely-not-tuw-thesis | https://raw.githubusercontent.com/Otto-AA/definitely-not-tuw-thesis/main/src/styles/general.typ | typst | MIT No Attribution | #let general-styles = rest => {
set par(justify: true)
show emph: it => {
text(it, spacing: 4pt)
}
show link: underline
show heading.where(level: 1): it => {
pagebreak()
text(it, size: 1.6em)
v(14pt)
}
show heading.where(level: 2): it => {
text(it, size: 1.2em)
v(6pt)
}
show figure: it => {
it
v(30pt)
}
rest
} |
https://github.com/Hennzau/cv | https://raw.githubusercontent.com/Hennzau/cv/main/README.md | markdown | # 📄 CV Template with Typst
This repository contains a **template** for creating a CV using **Typst**. ✍️ **Typst** is an alternative to **LaTeX**, designed for creating structured documents. It's easier to learn and use than LaTeX, while still offering similar features. 🎯
You can use this template to create your own CV! To do so, simply import the template into your own `.typ` file and replace the placeholder information with your personal details. 🚀
Follow the instructions below to get started! 📝
```typst
#import "[directory?]/lib.typ": cv
#let gray = rgb("#303848")
#let orange = rgb("#BF9268")
#let light_gray = rgb("#F2F2F2")
#let cadre_photo() = {}
#let cadre_nom() = {}
#let cadre_contact() = {}
#let cadre_recherche() = {}
#let cadre_divers() = {}
#let cadre_formation() = {}
#let cadre_projet() = {}
#let cadre_experience() = {}
#show: cv.with(
author: "<NAME>",
title: "CV de Enzo Le Van",
date: datetime(year: 2024, month: 9, day: 23),
vertical_lines: (25%, 13%, 62%),
horizontal_lines: (17%, 4%, 8%, 18%, 27%, 26%),
header_height: 10%,
footer_height: 1%,
font: "Arial",
main_color: gray,
secondary_color: orange,
third_color: light_gray,
frames: (
cadre_photo(),
cadre_nom(),
cadre_contact(),
cadre_recherche(),
cadre_divers(),
cadre_formation(),
cadre_projet(),
cadre_experience()
)
)
```
|
|
https://github.com/mattyoung101/uqthesis_eecs_hons | https://raw.githubusercontent.com/mattyoung101/uqthesis_eecs_hons/master/README.md | markdown | ISC License | # UQ Honours thesis template in Typst
This is a template for Honours theses at the University of Queensland in Typst. It's meant to be as visually
similar as possible to the LaTeX template provided by UQ.
Typst is a new typesetting language with many improvements in speed and ease of use over LaTeX. You can read
more [here](https://typst.app/).
**Requires Typst >= 0.12**
> [!NOTE]
> The template has been specifically tailored for UQ's Electrical Engineering & Computer Science
> (EECS) faculty, but may also be useful in other faculties - especially those which are closely related (e.g.
> SMP). If you're using this template outside of EECS, please be aware that **each faculty has slightly different
> requirements**, and it's **your responsibility** to check them carefully before using this template.
> [!CAUTION]
> Please also note that this template has **not yet been verified as being permissible to submit
> theses in**. Until I do so myself, it will have to be your responsibility to verify this template is suitable
> with your faculty and supervisor.
## Instructions
- Edit your details in `config.yaml`
- In the `pages` directory, update the relevant details in `abstract.typ`, `acknowledgements.typ`, and
`dedication.typ`
- Everything else is auto-generated based on the contents of `config.yaml`
- In the `pages/chapters` directory, write your thesis!
- If you add a new chapter, make sure it's listed in `uqthesis.typ`
- In the `pages/appendices` directory, add any appendices you may have
- If you add a new appendix, make sure it's listed in `uqthesis.typ`
To compile your thesis once off:
```bash
# In the root directory of this project:
typst compile uqthesis.typ
```
While editing, it's very helpful to use Typst's live reload feature:
```bash
# In the root directory of this project:
typst watch uqthesis.typ
```
## Licence
The template Typst code is available under the ISC licence (see LICENSE.txt), in addition to the following
relevant note from the original LaTeX template:
> The Template contains content that is the intellectual property of The University of Queensland, including
> copyright and protected marks, which must be respected in accordance with applicable laws.
This refers to the UQ logo, which is property of the University of Queensland.
The original LaTeX template was licensed under GNU GPL 2.0; however, I don't believe this template is a derivative work,
because I designed it from scratch and it bears no code similarity to the LaTeX template - only visual
similarity, which I believe is not protected under the GPL.
Despite this, if anyone from UQ disagrees with my reasoning, please contact me: <<EMAIL>>
If you make any improvements to this template, contributions back to this repo are highly appreciated, though
not required.
|
https://github.com/Area-53-Robotics/53B-Notebook-Over-Under-2023-2024 | https://raw.githubusercontent.com/Area-53-Robotics/53B-Notebook-Over-Under-2023-2024/master/entries/competitions/hereford.typ | typst | Creative Commons Attribution Share Alike 4.0 International | #import "/templates/entries.typ": *
#import "/templates/headers.typ": *
#import "/templates/text.typ": *
#import "/templates/competition.typ": *
#create_default_entry(
title: [<NAME>],
date: [December 2nd, 2023],
witness: [Deb],
design: [Juan],
content: [
#entry_text()
#tournament(matches: (
(
match: "Qualification #10",
red_alliance: (teams: ("9080C", "1727A"), score: 95),
blue_alliance: (teams: ("960B", "53B"), score: 94),
won: false,
auton: false,
awp: false,
notes: none,
),
(
match: "Qualification #21",
red_alliance: (teams: ("53B", "960V"), score: 90),
blue_alliance: (teams: ("82856A", "929R"), score: 94),
won: false,
auton: false,
awp: false,
notes: none,
),
(
match: "Qualification #30",
red_alliance: (teams: ("7135D", "53B"), score: 132),
blue_alliance: (teams: ("9080H", "929X"), score: 72),
won: true,
auton: false,
awp: false,
notes: none,
),
(
match: "Qualification #37",
red_alliance: (teams: ("1727R", "7135B"), score: 67),
blue_alliance: (teams: ("9290C", "53B"), score: 90),
won: true,
auton: false,
awp: false,
notes: none,
),
(
match: "Qualification #52",
red_alliance: (teams: ("5588H", "53B"), score: 98),
blue_alliance: (teams: ("9080R", "960W"), score: 85),
won: true,
auton: false,
awp: false,
notes: none,
),
(
match: "Qualification #58",
red_alliance: (teams: ("1727K", "7135Z"), score: 121),
blue_alliance: (teams: ("53A", "53B"), score: 140),
won: true,
auton: false,
awp: false,
notes: none,
),
(
match: "Round of 16",
red_alliance: (teams: ("53B", "53C"), score: 97),
blue_alliance: (teams: ("9080R", "9080S"), score: 90),
won: true,
auton: false,
awp: false,
notes: none,
),
(
match: "Quarterfinals",
red_alliance: (teams: ("1893C", "929N"), score: 116),
blue_alliance: (teams: ("53B", "53C"), score: 57),
won: false,
auton: false,
awp: false,
notes: none,
),
))
#let cell = rect.with(width: 100%, inset: 5pt)
#grid(
columns: (1fr, 1fr),
column-gutter: 4pt,
cell(fill: green, radius: (top: 1.5pt))[*Pros*],
cell(fill: red, radius: (top: 1.5pt))[*Cons*],
cell(fill: green.lighten(80%), radius: (bottom: 1.5pt))[
- Did better than our first tournament.
- Found a more efficient way to load into our catapult.
],
cell(fill: red.lighten(80%), radius: (bottom: 1.5pt))[
- Not enough driver practice
- Not putting the cata down and flipping
- Coding Discrepancies
- Lack of organization
],
)
]
) |
https://github.com/maucejo/cnam_templates | https://raw.githubusercontent.com/maucejo/cnam_templates/main/src/common/_colors.typ | typst | MIT License | #let primary = (
red: rgb("c1002a"),
gray: rgb("405a68"),
dark-blue: rgb("11304c"),
medium-blue: rgb("4485c5"),
light-blue: rgb("a1d6df"),
lime: rgb("dddc00"),
)
#let secondary = (
btp: cmyk(100%, 40%, 100%, 0%), // Bâtiment et travaux publics
bioagro: cmyk(30%, 10%, 95%, 0%), // Biotechnologies, Bioinformatique et Agroalimentaire
eea: cmyk(40%, 20%, 100%, 0%), // Électronique, Électrotechnique et Automatique
energie: cmyk(70%, 20%, 100%, 0%), // Énergie environnement, développement durable
hygiene: cmyk(50%, 10%, 80%, 0%), // Hygiène, sécurité, gestion des risques
chimie: cmyk(30%, 15%, 50%, 0%), // Industrie & analyse chimique, cosmétique & pharma
info: cmyk(50%, 10%, 15%, 0%), // Informatique Télécoms, Médias nuumériques, Cybersécurité
mater: cmyk(60%, 0%, 20%, 0%), // Matériaux
meca: cmyk(100%, 70%, 30%, 0%), // Mécanique, acoustique, aérodynamique
metro: cmyk(90%, 60%, 0%, 0%), // Mesure, analyse, contrôle qualité
maths: cmyk(90%, 80%, 0%, 0%), // Modélisation, ingénierie mathématiques & statistiques
ergo: cmyk(40%, 70%, 20%, 0%), // Accompagnement pro, intervention sociale, ergonomie
abf: cmyk(30%, 60%, 0%, 0%), // Assurance, banque, finance
cmv: cmyk(35%, 40%, 0%, 0%), // Commerce, marketing, vente
cdcm: cmyk(50%, 70%, 10%, 0%), // Communication, documentation, culture, médiation
cca: cmyk(75%, 100%, 0%, 0%), // Comptabilité, contrôle, audit
dcd: cmyk(50%, 100%, 0%, 0%), // Droit, crminologie, défense
epidemio: cmyk(15%, 100%, 0%, 0%), // Épidémiologie, addictologie, santé publique
fco: cmyk(20%, 100%, 40%, 0%), // Formation continue, orientation & insertion pro
gpc: cmyk(40%, 100%, 70%, 0%), // Gestion publique et des collectivités
immo: cmyk(0%, 15%, 100%, 0%), // Immobilier
innov: cmyk(0%, 25%, 100%, 0%), // Innovation, développement et prospective, entrepreneuriat
strategie: cmyk(0%, 45%, 100%, 0%), // Management, stratégie et organisation d'enreprises, gestion de projet
mg4s: cmyk(5%, 50%, 100%, 0%), // Management et gestion des secteurs de santé, sanitaires et sociaux
psycho: cmyk(5%, 65%, 100%, 0%), // Psychologie du travail, psychanalyse
socio: cmyk(15%, 80%, 100%, 0%), // Sociologie du travail
tourisme: cmyk(30%, 100%, 100%, 0%), // Tourisme
trans: cmyk(52%, 100%, 100%, 0%), // Transport, logistique
ril: cmyk(0%, 100%, 80%, 0%), // Relations internationales UE & langues
)
#let color-theme = (
red: (
primary: rgb("c1002a"),
secondary: rgb("a1d6df"),
blue: rgb("11304c"),
box: rgb("11304c"),
text-title: white,
text-focus: rgb("c1002a"),
section-title: rgb("c1002a"),
),
clear-blue: (
primary: rgb("a1d6df"),
secondary: rgb("a1d6df"),
blue: rgb("11304c"),
box: rgb("11304c"),
text-title: rgb("11304c"),
text-focus: rgb("c1002a"),
section-title: rgb("11304c"),
),
medium-blue: (
primary: rgb("4485c5"),
secondary: rgb("4485c5"),
blue: rgb("11304c"),
box: rgb("11304c"),
text-title: white,
text-focus: white,
section-title: white,
),
)
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/text/lang-with-region_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// without any region
#set text(font: "Noto Serif CJK TC", lang: "zh")
#outline()
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/math/matrix-05.typ | typst | Other | // Error: 13-14 expected array, found content
$ mat(1, 2; 3, 4, delim: "[") $,
|
https://github.com/francescoo22/masters-thesis | https://raw.githubusercontent.com/francescoo22/masters-thesis/main/chapters/6-Encoding.typ | typst | #import "../vars/kt-to-vpr-examples.typ": *
#import "../config/utils.typ": code-compare
#pagebreak(to:"odd")
= Encoding in Viper<cap:encoding>
The annotation system for aliasing control introduced in @cap:annotations-kt and formalized in @cap:annotation-system aims to improve the verification process performed by SnaKt @FormVerPlugin, an existing plugin for the Kotlin compiler. SnaKt verifies Kotlin code by encoding it to Viper and supports a substantial subset of the Kotlin language. However, as described in @cap:aliasing, the lack of guarantees about aliasing presents a significant limitation for the plugin.
This chapter illustrates how uniqueness annotations can be used to improve the encoding of Kotlin into Viper.
== Classes Encoding<cap:class-encoding>
In Kotlin, as in most programming languages, classes can represent potentially unbounded structures on the heap, such as linked lists or trees. This characteristic was a key factor in the decision to encode Kotlin classes into Viper predicates. Viper predicates, in fact, are specifically designed to represent potentially unbounded data structures.
=== Shared Predicate
The shared predicate of a class includes read access to all fields that the language guarantees as immutable. Having access to these predicates allows the verification of certain functional properties of a program, even without uniqueness guarantees. The reason is that immutability is a stronger condition than uniqueness from a verification point of view. Indeed, while uniqueness ensures that an object is only accessible through a single reference, immutability guarantees that the object's state cannot change after its creation, eliminating the need to track or control access patterns for verifying correctness.
As shown in @class-comp-1, the encoding process involves including access to all fields declared as `val`, along with their shared predicate if they have one. Inheritance is encoded by including access to the shared predicates of the supertypes (Line 18). Additionally, the example illustrates how Kotlin's nullable types are encoded by accessing the predicate when the reference is not `null` through a logical implication (Lines 19-21).
All the encoding examples that follow are simplified to improve readability and focus on the aspects pertinent to this work. In the plugin, avoiding name clashes is a crucial concern. As a result, names generated by SnaKt are typically more complex than those shown in the examples.
Furthermore, SnaKt extends the predicate's body by incorporating Kotlin type information using domain functions. Unlike other assertions within the predicate, these domain functions are not resource assertions but rather logical assertions about the Kotlin type of a reference.
For example, instead of directly mapping Kotlin `Int` and other primitive types to their corresponding built-in Viper types, SnaKt maps them to Viper `Ref` type, each associated with a domain function that asserts the specific Kotlin type of the reference. This approach ensures that the type information is logically represented within the verification process. Similarly, when dealing with classes, SnaKt pairs them with domain functions to maintain consistent type information throughout the verification.
This representation, while crucial for accurate type tracking, is omitted in the examples provided here, as it is not central to the primary focus of this work.
For a complete view, @complete-encoding shows how the shared predicate of the classes in @class-comp-1 appears in the plugin.
#code-compare("Shared predicate encoding", 0.8fr, classes-kt, classes-vpr)<class-comp-1>
#figure(
caption: "Shared predicate non-simplified encoding",
full-encoding
)<complete-encoding>
=== Unique Predicate
The unique predicate of a class grants access to all its fields with either `write` or `wildcard` permission, depending on whether the field is declared as `var` or `val`. If a field is marked as unique, the unique predicate also includes access to that field’s unique predicate. Additionally, the predicate contains access assertions to the shared predicates of the fields since, as explained in the previous section, accessing immutable resources is always safe.
It is worth mentioning that some overlap might exist between the assertions in the shared predicate and those in the unique predicate. However, this overlap cannot lead to contradictions in Viper, such as requiring access with a total amount greater than 1, because the only assertions that can overlap are accessed with `wildcard` permission.
#code-compare("Unique predicate encoding", 0.65fr, classes-unique-kt, classes-unique-vpr)
== Functions Encoding
Access information provided by Kotlin's type system and by the uniqueness annotations is encoded using the predicates described in @cap:class-encoding within the conditions of a method.
On the one hand, shared predicates can always be accessed with `wildcard` permission without causing issues. Therefore, they can always be included in the conditions of a method for its parameters, receiver, and return value.
On the other hand, unique predicates can only be included in a method's conditions in accordance with the annotation system.
=== Return object
Since accessing immutable data is not a problem even if it is shared, every Kotlin function, in its encoding, can ensure access to the shared predicate of the type of the returned object.
In addition, a Kotlin function annotated to return a unique object will also ensure access to its unique predicate. @return-comp illustrates the differences in the encoding between a function that returns a unique object and a function that returns a shared one.
#code-compare("Function return object encoding", 0.7fr, return-kt, return-vpr)<return-comp>
=== Parameters
Annotations on parameters are encoded by adding preconditions and postconditions to the method. Access to the shared predicate of any parameter can always be required in preconditions and ensured in postconditions. Conversely, access to the unique predicate can be required in preconditions only for parameters annotated as unique, and it can be ensured in postconditions only for parameters annotated as both unique and borrowed. @param-comp shows how function parameters are encoded, while @param-table summarizes the assertions contained within preconditions and postconditions based on the parameter annotations.
In Kotlin, when passing a unique reference to a function that expects a shared borrowed argument, fields included in the unique predicate can still be modified. The current encoding does not fully capture this behavior. However, as shown in @cap:vpr-calls-enc, this limitation can be addressed by adding additional statements when such functions are called.
#code-compare("Function parameters encoding", 0.8fr, param-kt, param-vpr)<param-comp>
#figure(
caption: "Conditions for annotated parameters",
param-table
)<param-table>
=== Receiver
Encoding the receiver of a method is straightforward since the receiver is considered as a normal parameter.
#code-compare("Function receiver encoding", 1fr, receiver-kt, receiver-vpr, same-row: false)
=== Constructor
Constructors are encoded as black-box methods returning a unique object. The encoding of a constructor requires access to the shared predicates for every property that is not of a primitive type. In addition, the unique predicate is also required for properties that are unique in the class declaration.
Currently, SnaKt only supports class properties declared as parameters. Properties declared within the body of a class and initializing blocks are not supported yet, as they may construct objects that are not necessarily unique.
#code-compare("Constructor encoding", .8fr, constructor-kt, constructor-vpr, same-row: false)
== Accessing Properties<cap:unfolding>
While encoding the body of a function using predicates to represent classes, multiple `unfold`, `fold`, `inhale`, and `exhale` statements may be necessary to access the properties of a class. If a property is part of a shared predicate, it is accessed through that predicate. If no shared predicate contains the property, the plugin attempts to access it through a unique predicate, if available. If the property is not even contained within any unique predicate, the access is inhaled.
=== Accessing Properties within Shared Predicate
Accessing properties contained within a shared predicate is straightforward. This is because shared predicates are always accessed with `wildcard` permission, meaning that after unfolding, the predicate remains valid, so there is no need to fold it back. In @unfold-shared-example, it is possible to note that the encoding of the function `f` does not require folding any predicate after accessing `b.a.n` to satisfy its postconditions.
#code-compare("Immutable property access encoding", .7fr, unfold-shared-kt, unfold-shared-vpr)<unfold-shared-example>
=== Accessing Properties within Unique Predicate
When accessing a property through a unique predicate, the predicate must be unfolded with `write` permission. Unlike shared predicates, which remain valid after unfolding with `wildcard` permission, a unique predicate does not hold after it has been unfolded. If the unique predicate is needed again, it must be folded back. This is necessary when satisfying the postconditions of the method or the preconditions of a called method.
#code-compare("Unique mutable property access encoding", .7fr, unfold-unique-kt, unfold-unique-vpr)<unfold-unique-example>
=== Accessing Properties not Contained within a Predicate
When no predicates contain the access to a property that needs to be accessed, it must be inhaled. After the property is used, its access is immediately exhaled. It is important to note that once the access to a property is exhaled, all information about it is lost. This is coherent with the idea that a property not contained within a predicate is mutable and shared, making it impossible to reason about it. In fact, such a property could be accessed and modified by other functions running concurrently.
#code-compare("Shared mutable property access encoding", .7fr, inhale-shared-kt, inhale-shared-vpr)<unfold-unique-example>
== Function Calls Encoding<cap:vpr-calls-enc>
Encoding method calls is straightforward for some cases, but requires attention for some others.
=== Functions with Unique Parameters
Functions with a unique parameter, when called, do not need the inclusion of additional statements for their encoding, except for folding or unfolding statements, as detailed in @cap:unfolding.
#code-compare("Function call with unique parameter encoding", .8fr, unique-call-kt, unique-call-vpr)
=== Functions with Shared Parameters
When functions with a shared parameter are called, their encoding may require the addition of `inhale` and `exhale` statements. The annotation system allows functions with shared parameters to be called by passing unique references. However, the function's conditions alone are not sufficient to properly encode these calls.
For example, passing a unique reference to a function expecting a shared (non-borrowed) parameter will result in the loss of uniqueness for that reference, which is encoded by exhaling the unique predicate. Similarly, when a unique reference is passed to a function expecting a borrowed-shared parameter, the uniqueness is preserved, but any field of that reference can be modified. This is encoded by exhaling and then re-inhaling the unique predicate of that reference.
@call-graph summarizes the `inhale` and `exhale` statements added during the encoding of a function call.
#figure(
caption: "Extra statements added for functions call encoding",
image("../images/calls-graph.svg", width: 50%)
)<call-graph>
#v(1em)
#code-compare("Function call with shared parameter encoding", .8fr, shared-call-kt, shared-call-vpr)
== Stack Example
Finally, @stack-vpr shows how the example from @cap:kt-stack is encoded in Viper. In this example, shared predicates are omitted for readability, as they would be empty. Moreover, the `UniqueAny` predicate does not add additional value to the encoding. However, it can be replaced with any class predicate without affecting the correctness of the encoding.
#figure(
caption: "Stack encoding in Viper",
```vpr
field value: Ref, next: Ref, root: Ref
predicate UniqueAny(this: Ref)
predicate UniqueNode(this: Ref) {
acc(this.value) && (this.value != null ==> UniqueAny(this.value)) &&
acc(this.next) && (this.next != null ==> UniqueNode(this.next))
}
predicate UniqueStack(this: Ref) {
acc(this.root) && (this.root != null ==> UniqueNode(this.root))
}
method constructorNode(val: Ref, nxt: Ref) returns (res: Ref)
requires val != null ==> UniqueAny(val)
requires nxt != null ==> UniqueNode(nxt)
ensures UniqueNode(res)
ensures unfolding UniqueNode(res) in res.value == val && res.next == nxt
method push(this: Ref, val: Ref)
requires UniqueStack(this)
requires val != null ==> UniqueAny(val)
ensures UniqueStack(this) {
var r: Ref
unfold UniqueStack(this)
r := this.root
this.root := constructorNode(val, r)
fold UniqueStack(this)
}
method pop(this: Ref) returns (res: Ref)
requires UniqueStack(this)
ensures UniqueStack(this)
ensures res != null ==> UniqueAny(res) {
var val: Ref
unfold UniqueStack(this)
if(this.root == null) { val := null }
else {
unfold UniqueNode(this.root)
val := this.root.value
this.root := this.root.next
}
fold UniqueStack(this)
res := val
}
```
)<stack-vpr> |
|
https://github.com/PA055/5839B-Notebook | https://raw.githubusercontent.com/PA055/5839B-Notebook/main/main.typ | typst | #import "/packages.typ": notebookinator
#import notebookinator: *
#import themes.radial: radial-theme, components
#show: notebook.with(theme: radial-theme, cover: align(center + horizon)[
#text(size: 24pt, font: "Tele-Marines")[
#text(size: 28pt)[
Engineering Notebook
]
#image("./assets/logo.jpg", height: 70%)
2024 - 2025
#line(length: 50%, stroke: (thickness: 2.5pt, cap: "round"))
High Stakes
]
])
#include "/frontmatter.typ"
#include "./Entries/entries.typ"
#include "./appendix.typ" |
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/pagebreak_04.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test hard and weak pagebreak followed by page with body.
// Should result in three navy-colored pages.
#set page(fill: navy)
#set text(fill: white)
First
#pagebreak()
#page[Second]
#pagebreak(weak: true)
#page[Third]
|
https://github.com/Lslightly/TypstTemplates | https://raw.githubusercontent.com/Lslightly/TypstTemplates/main/templates/figure.typ | typst | MIT License | /*
The following content won't work with import or included command. Just put them in the typ file
#show figure: set block(breakable: true)
*/
|
https://github.com/ChHecker/unify | https://raw.githubusercontent.com/ChHecker/unify/main/README.md | markdown | MIT License | # Unify
`unify` is a [Typst](https://github.com/typst/typst) package simplifying the typesetting of numbers, (physical and monetary) units, and ranges. It is the equivalent to LaTeX's `siunitx`, though not as mature.
## Overview
`unify` allows flexible numbers and units, and still mostly gets well typeset results.
```typ
#import "@preview/unify:0.6.0": num,qty,numrange,qtyrange
$ num("-1.32865+-0.50273e-6") $
$ qty("1.3+1.2-0.3e3", "erg/cm^2/s", space: "#h(2mm)") $
$ numrange("1,1238e-2", "3,0868e5", thousandsep: "'") $
$ qtyrange("1e3", "2e3", "meter per second squared", per: "/", delimiter: "\"to\"") $
```
<img src="examples/overview.jpg" width="300">
## Multilingual support
The Unify package supports multiple languages. Currently, the supported languages are English and Russian. The fallback is English. If you want to add your language, you should add two files: `prefixes-xx.csv` and `units-xx.csv`, and in the `lib.typ` file you should fix the `lang-db` state for your files.
## `num`
`num` uses string parsing in order to typeset numbers, including separators between the thousands. They can have the following form:
- `float` or `integer` number
- either (`{}` stands for a number)
- symmetric uncertainties with `+-{}`
- asymmetric uncertainties with `+{}-{}`
- exponential notation `e{}`
Parentheses are automatically set as necessary. Use `thousandsep` to change the separator between the thousands, and `multiplier` to change the multiplication symbol between the number and exponential.
## `unit`
`unit` takes the unit in words or in symbolic notation as its first argument. The value of `space` will be inserted between units if necessary. Setting `per` to `symbol` will format the number with exponents (i.e. `^(-1)`), and `fraction` or `/` using fraction.
Units in words have four possible parts:
- `per` forms the inverse of the following unit.
- A written-out prefix in the sense of SI (e.g. `centi`). This is added before the unit.
- The unit itself written out (e.g. `gram`).
- A postfix like `squared`. This is added after the unit and takes `per` into account.
The shorthand notation also has four parts:
- `/` forms the inverse of the following unit.
- A short prefix in the sense of SI (e.g. `c`). This is added before the unit.
- The short unit itself (e.g. `g`).
- An exponent like `^2`. This is added after the unit and takes `/` into account.
Note: Use `u` for micro.
The possible values of the three latter parts are loaded at runtime from `prefixes.csv`, `units.csv`, and `postfixes.csv` (in the library directory). There, you can also add your own units. The formats for the pre- and postfixes are:
| pre-/postfix | shorthand | symbol |
| ------------ | --------- | ------------ |
| milli | m | upright("m") |
and for units:
| unit | shorthand | symbol | space |
| ----- | --------- | ------------ | ----- |
| meter | m | upright("m") | true |
The first column specifies the written-out word, the second one the shorthand. These should be unique. The third column represents the string that will be inserted as the unit symbol. For units, the last column describes whether there should be space before the unit (possible values: `true`/`false`, `1`,`0`). This is mostly the cases for degrees and other angle units (e.g. arcseconds).
If you think there are units not included that are of interest for other users, you can create an issue or PR.
## `qty`
`qty` allows a `num` as the first argument following the same rules. The second argument is a unit. If `rawunit` is set to true, its value will be passed on to the result (note that the string passed on will be passed to `eval`, so add escaped quotes `\"` if necessary). Otherwise, it follows the rules of `unit`. The value of `space` will be inserted between units if necessary, `thousandsep` between the thousands, and `per` switches between exponents and fractions.
## `numrange`
`numrange` takes two `num`s as the first two arguments. If they have the same exponent, it is automatically factorized. The range symbol can be changed with `delimiter`, and the space between the numbers and symbols with `space`.
## `qtyrange`
`qtyrange` is just a combination of `unit` and `range`. |
https://github.com/OverflowCat/BUAA-Data-and-Error-Analysis-Sp2024 | https://raw.githubusercontent.com/OverflowCat/BUAA-Data-and-Error-Analysis-Sp2024/neko/5-regression/multi.typ | typst | #let t = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18)
#let x1 = (164.3, 71.3, 58.9, 105.4, 74.4, 201.5, 136.4, 96.1, 89.9, 179.8, 114.7, 142.6, 155, 136.4, 173.6, 111.6, 179.6, 158.1)
#let x2 = (0.56, 0.56, 4.34, 0.84, 6.58, 2.38, 13.16, 14.14, 16.24, 17.64, 15.28, 23.34, 23.34, 30.24, 32.34, 2.66, 37.52, 41.86)
#let x3 = (268.6, 277.1, 62.9, 266.9, 100.3, 209.1, 78.2, 198.9, 294.1, 190.4, 188.7, 193.8, 227.8, 124.1, 285.6, 243.1, 343.4, 210.8)
#let y = (83.2, 78, 92.3, 79.3, 70.2, 100.1, 105.3, 120.9, 120.9, 66.3, 98.8, 124.8, 100.1, 120.9, 123.5, 70.2, 218.4, 128.7)
#let N = t.len()
$ N = #N $
#let avg = (data) => data.sum() / data.len()
#let ymean = avg(y)
#y.sum()
#let x1mean = avg(x1)
#let x2mean = avg(x2)
#let x3mean = avg(x3)
#ymean, #x1mean, #x2mean, #x3mean
#let calcl = (data) => {
let ss = data.map(x => calc.pow(x, 2)).sum()
let sum = data.sum()
let sm = sum * sum / N
let lxx = ss - sm
lxx
}
#import "helper.typ": c
// #let ss11 = x1.map(x => calc.pow(x, 2)).sum()
// #let l11 = ss11 - x1.sum() * x1.sum() / N
#let l11 = calcl(x1)
#l11
#let l22 = calcl(x2)
#l22
#let l33 = calcl(x3)
#l33
#let clxy = (x, y, X, Y) => {
let sxy = x.zip(y).map(((x, y)) => x * y).sum()
$ sum_t #X #Y = #c(sxy) $
let lxy = sxy - x.sum() * y.sum() / N
$ l = sum_t #X #Y - 1/N (sum #X)(sum #Y) = #c(lxy) $
return lxy
}
#let l12 = clxy(x1, x2, $x_(t 1)$, $x_(t 2)$)
#let l13 = clxy(x1, x3, $x_(t 1)$, $x_(t 3)$)
#let l23 = clxy(x2, x3, $x_(t 2)$, $x_(t 3)$)
#let l1y = clxy(x1, y, $x_(t 1)$, $y_t$)
#let l2y = clxy(x2, y, $x_(t 2)$, $y_t$)
#let l3y = clxy(x3, y, $x_(t 3)$, $y_t$)
#let A = (l11, l12, l13, l12, l22, l23, l13, l23, l33)
#A
#let B = (y.sum(), l1y, l2y, l3y)
#B |
|
https://github.com/cbr9/CV | https://raw.githubusercontent.com/cbr9/CV/main/modules/skills.typ | typst | Apache License 2.0 | #import "../template/template.typ": *
#cvSection("Skills")
#cvSkill(
type: [ML & MLOps],
info: [
SLURM #hBar()
W&B #hBar()
PyTorch #hBar()
PyTorch Lightning #hBar()
HuggingFace #hBar()
Captum
]
)
#cvSkill(
type: [NLP],
info: [
LangChain #hBar()
SpaCy #hBar()
NLTK
]
)
#cvSkill(
type: [Data Analysis],
info: [
Excel #hBar()
SQL #hBar()
Pandas #hBar()
Plotly #hBar()
Matplotlib
]
)
#cvSkill(
type: [Programming],
info: [
Python #hBar()
Rust #hBar()
JavaScript #hBar()
TypeScript #hBar()
Golang #hBar()
LaTeX
]
)
#cvSkill(
type: [Languages],
info: [English #hBar() Spanish #hBar() Italian #hBar() German]
)
#cvSkill(
type: [Other],
info: [
Docker #hBar()
Linux #hBar()
NixOS #hBar()
Nix #hBar()
Bash #hBar()
CLI #hBar()
Git
]
)
#v(10pt)
|
https://github.com/Lucascf11/PRE | https://raw.githubusercontent.com/Lucascf11/PRE/main/Avaliacoes/AV7/main.typ | typst | #import "@preview/klaro-ifsc-sj:0.1.0": report
#set text(lang: "pt")
#show: doc => report(
title: "Avaliação 7 de Processos Estocásticos",
subtitle: "Processos Estocásticos",
// Se apenas um autor colocar , no final para indicar que é um array
authors: ("<NAME>",),
date: "27 de Agosto de 2024",
doc,
)
#set text(18pt)
= Enunciado
#set text(14pt)
#figure(
image("figures/Enunciado7.png", width: 100%),
caption: "Enunciado da Avaliação",
supplement: [Figura],
)<Enunciado>
#pagebreak()
#set text(18pt)
= Desenvolvimento
#set text(14pt)
== Determinando e esboçando $C_X [n]$
=== Determinando $C_X [n]$
Ao analisarmos o enunciado vemos que X[$n$] se distribui de modo que:
$ X[n] dash.wave^("iid") N(0,2) $
Agora, se verificarmos que precisamos encontrar a função autocovariância do processo estocástico e, recorrendo à fórmula da função autocovariância de um processo estocástico, temos que:
$ X[n_1,n_2] = "cov"[X[n_1],X[n_2]] $
Porém, ao analisarmos que as variáveis aleatórias são independentes, podemos afirmar que são descorrelacionadas, e se são descorrelacionadas, então a covariância entre elas é 0 a não ser que $n_1 = n_2$. Desse modo, teremos uma variância, de maneira que, se dissermos que $n_1,n_2=ell$ (ou seja, $n_2 - n_1 = ell$), então chegamos na conclusão que:
$ C_X [ell] := cases(
2", se " ell = 0 "ou, se " n_1=n_2,
0", se " ell eq.not 0 "ou, se" n_1 eq.not n_2
) $
=== Esboçando $C_X [n]$
#figure(
image("figures/Gráfico_Cx[l].jpg", width: 100%),
caption: "Gráfico de autocovariância de X[n]",
supplement: [Figura],
)
== Determinando e esboçando $C_Y [n]$ sem usar o domínio da frequência
=== Determinando $C_Y [n]$
Para determinarmos a função do enunciado, podemos sair direto pela fórmula da covariância:
$ C_Y [n_1,n_2] = "cov"[Y_n_1,Y_n_2] \
= E[Y_n_1,Y_n_2] - cancel(E[Y_n_1])cancel(E[Y_n_2]) \
= E[(3X_n_1 + 4X_(n_1 - 1)) (3X_n_2 + 4X_(n_2 - 1))] \
= E[9X_n_1X_n_2 + 12X_n_1X_(n_2 - 1) + 12X_(n_1 - 1)X_n_2 + 16X_(n_1-1)X_(n_2-1)]\
= 9 times E[X_n_1X_n_2] + 12 times E[X_n_1X_(n_2-1)] + 12 times E[X_(n_1 - 1)X_n_2] + 16 times E[X_(n_1 - 1)X_(n_2-1)] \
= 9 times C_X [n_1,n_2] + 12 times C_X [n_1, n_2 - 1] + 12 times C_X [n_1-1, n_2] + 16 times C_X [n_1-1,n_2-1] \
= 9 times C_X [ell] + 12 times C_X [ell -1] + 12 times C_X [ell + 1] + 16 times C_X [ell] \
= 25 times 2delta[ell] + 12 times 2delta[ell-1] + 12 times 2delta[ell + 1]\
= 50delta[ell] + 24delta[ell-1] + 24delta[ell+1]
$
Com os cálculos acima, chegamos na conclusão que:
$ C_Y [ell] := cases(
50 ", se " ell=0 "ou, se " n_1=n_2,
24 ", se " ell=-1 "ou, se" n_1 - n_2 = 1,
24 ", se " ell=1 "ou, se" n_2 - n_1 = 1,
0 ", se" abs(ell)>=2 "ou, caso" n_2-n_1 >= 2 "ou" n_1-n_2 >= 2
) $
=== Esboçando $C_Y [n]$
#figure(
image("figures/Grafico_CY[l]2.jpg", width: 90%),
caption: "Gráfico de autocovariância de Y[n]",
supplement: [Figura],
)
#pagebreak()
== Determinando $C_Y [ell]$, utilizando o domínio da frequência
Primeiramente, analisamos o problema interpretando o X[$n$] sendo:
- $mu_X = 0$
- $C_X [ell] = 2delta[ell]$
- $S_X [Phi] = F {2delta[ell]} = 2 $
Podemos então utilizar $h[n]$ e $h[Phi]$ sendo:
- $h[n] = 3delta[n] + 4delta[n-1]$
- $hat(h)[Phi] = 3 + 4e^(-j 2 pi Phi)$
E então podemos encontrar a PDF e, com isso a autocovariância de Y no domínio da frequência:
$ (abs(hat(h)(Phi)))^2 = abs(3 + 4e^(- j 2 pi Phi))^2\
= abs(3 + 4[cos(2pi Phi) -j sin(2pi Phi)])^2\
= [3 + 4cos(2pi Phi)]^2 + 16sin^2(2pi Phi)\
= 9 + 24cos(2pi Phi) + 16cos^2(2 pi Phi) + 16sin^2(2pi Phi) \
= 9 + 24 cos(2pi Phi) + 16(cos²(2pi Phi) + sin²(2 pi Phi)) \
= 9 +24 cos(2pi Phi) + 16\
= 25 + 24cos(2pi Phi)
$
$ S_Y(Phi) = abs(hat(h)(Phi))^2S_X (Phi)\
= (25 + 24cos(2pi Phi)) times 2 = 50 + 48cos(2pi Phi)
$
$ C_Y [ell] = F^(-1) {S_Y(Phi)}\
C_Y [ell] = F^(-1){50} + F^(-1){48cos(2pi Phi)}\
C_Y [ell] = 50delta[l] + 24delta[ell - 1] + 24delta[ell + 1]
$
#pagebreak()
== Determinando a PDF de Y[3]
Sabemos que $Y[3]$ irá se distribuir de maneira gaussiana com média 0, afinal $X[n]$ se distribui de maneira gaussiana, logo $Y[n]$ sendo a sua saída, irá se distribuir de maneira gaussiana também.
\
Mas ainda é necessário encontrar a variância dessa distribuição. Para encontrar a variância de Y[3], podemos aplicar: $"var"Y[3] = "cov"[Y[3],Y[3]]$.
\
Podemos analisar que, nesse caso, $n_1=n_2=3$, ou seja, $ell=0$.
\
Guardamos essa informação e verificamos que esse processo estocástico segue todas as propriedades de um processo estocástico estacionário no sentido amplo, afinal:
- A função é par, pois $C_Y [ell-1] = C_Y [ell+1]$;
- O valor na origem é a própria variância ($C_Y [0] = sigma_Y^2$), afinal $ell=0$, significa dizer que $n_1=n_2$, em outras análises: $"cov"[n_1,n_2]$, se $n_1=n_2$, então essa covariância é $"cov"[n_1,n_1] = "var"[n_1]$;
- A desigualdade de Cauchy-Schwarz está sendo respeitada, afinal Essa desigualdade é expressa por $-sigma_X^2 <= C_X[n] < = sigma_X^2$. \ Analisando $C_Y [ell]$, nós podemos observar claramente que a faixa de valores respeita a desigualdade de Cauchy-Schwarz para este processo estocástico ESA que é: $-50 <= C_Y [ell] <= 50$. Como $C_Y [ell]$ possui o seguinte intervalo ao analisarmos o gráfico $24 <= C_Y [ell] <= 50$, ele está dentro do intervalo da desigualdade, atendendo assim, essa propriedade.
Ao validarmos todas as propriedades e analisarmos que a relação entre $Y[n]$ e $X[n]$ pode ser encarada como um SLIT, podemos assumir com toda a certeza que esse é um processo estocástico ESA, e aí a variância de Y[3] sai pela propriedade 2 aplicada à autocovariância anteriormente encontrada:
$ "var"Y[3] = C_Y [0] = 50 therefore Y[3] dash.wave N(0,50)
$
O restante é simples, pois agora basta utilizarmos a fórmula da PDF de uma distribuição gaussiana para uma única dimensão, que é expressa por:
$ F_X(x) = 1/sqrt(2pi sigma^2) exp(-((x-mu)^2)/(2sigma^2)) $<PDF_Gauss>
Aplicando esse caso particular na @PDF_Gauss, teremos que:
$ F_Y[3] = 1/sqrt(2pi times 50) exp(-(y^2)/(2 times 50)) $
#pagebreak()
== Determinando cov[Y[3],Y[4]]
Podemos analisar que, se generalizarmos essa covariância, nós teremos $C_Y [Y_("n1"),Y_("n2")]$. Se avaliarmos esse caso, temos que n1 = 3 e n2 = 4.
\
Isso significa dizer que, para esse caso:
$ ell = n_2 - n_1 = 4 - 3 = 1 $
Ou seja, como $ell =1$, já temos elementos suficientes para descobrir a covariância solicitada, pois sai pela função autocovariância $C_Y [ell]$ anteriormente calculada. Logo, ao analisarmos isso, chegamos à conclusão que:
$ C_Y [Y_3,Y_4] = 24 $
#pagebreak()
== Determinando Pr[$Y[3] > 0 | Y[1] = 1$]
Vamos primeiramente observar que Y[3] é independente de Y[1]. Podemos comprovar isso com a função autocovariância descoberta anteriormente, se analisarmos que $n_2 = 3$ e $n_1 = 1$. Isso irá nos conceder que:
$ ell = 3 - 1 = 2 $
Quando $abs(ell) >= 2$, para a autocovariância anteriormente descoberta, o seu resultado é igual a 0. Ou seja, $C_Y [Y_3,Y_1] = 0$, isso significa dizer que Y[3] e Y[1] são descorrelacionados e, como são conjuntamente gaussianos por causa da relação com $X[n]$, a descorrelação implica a independência entre eles. Saber disso, nos concede a possibilidade de realizar o seguinte procedimento:
$ Pr[Y[3] > 0 | Y[1] = 1] = Pr[Y[3] > 0] $
E agora para encontrarmos a probabilidade solicitada podemos sair pela diferença entre a probabilidade máxima de qualquer variável aleatória, que é 1, com a probabilidade de Y[3]. Se tratando de uma variável gaussiana, nós podemos utilizar a função $Phi$ para determinar a sua probabilidade.
\
Portanto, relembramos que $Y[3] dash.wave N(0,50)$ e determinamos a probabilidade solicitada:
$ Pr[Y[3] > 0] = 1 - Phi((0 - 0)/sqrt(50))\
Pr[Y[3] > 0] = 1/2 = 50%
$ |
|
https://github.com/EGmux/PCOM-2023.2 | https://raw.githubusercontent.com/EGmux/PCOM-2023.2/main/lista2/lista2q5.typ | typst | === A informação em um sinal analógico, com frequêcia máxima *$f_m = 3 "kHz"$*, é transmitida por um sistema 16-PAM. A distorção do quantização especificada não pode exceder *$plus.minus 1%$* do valor de tensão pico-a-pico do sinal analógico.
\
==== a) Qual o número mínimo de bits/amostra (bits/palavra PCM) que deve ser usado na digitalização do sinal analógico?
\
Lembar que um sistema 16-PAM é usado na etapa final de transimssão de um sistema
de comunicação e tem a finalidade de converter sequência de bits em um único
símbolo, a quantidade de bits por símbolo é denotada por _k_ e a relação com o
sistema PAM se dá pela seguinte equação
#math.equation(block: true, $ k = log_2M $)
🚨 M é o nível do sistema PAM, no caso do problema 16 níveis.
ora da relação acima percebemos que será necessário 4 bits por símbolo para esse
sistema.
temos outra relação que é dada pelo erro de distorção, _e_
#math.equation(block: true, $ |e| lt.eq p V_(p p) $)
e temos ainda a relação entre $V_(p p)$ e os níveis/degrais de quantização,
denotados por _q_
#math.equation(block: true, $ V_(p p)/L = q $)
podemos interpretar da seguinte maneira, dado a tensão do sinal de entrada e os
degrais de quantização desejados então cada degrau tem uma fração unitária da
tensão máxima denotada por _q_.
e por fim a seguinte equação tambem será necessária
#math.equation(block: true, $ l = log_2L $)
que diz quantos bits para termos uma palavra na saída do quantizador.
🚨 Notar que l é o número de bits para termos uma palavra que é muito diferente
do número de bits por símbolo!
uma palavra é uma sequência de bits que representa um nível de tensão da etapa
de quantização
1111 em tese é a tensão máxima $V_(p p)$, mas como símbolo pode ser que tenhamos _k_ =
2 e consequentemente 11 é um símbolo a ser enviado mas o conteúdo final é a
palavra, o que muda é a taxa de envio através do mecanismo de símbolos.
Dito o necessário acima temos que achar l que é depedente de L, mas não temos
uma relação entre o erro e L, precisamos de uma
#math.equation(block: true, $ V_(p p)/(2L) = |e_(m a x)| = p V_(p p) $)
daí
#math.equation(block: true, $ L = 1/(2p) $)
e portanto temos
#math.equation(block: true, $ l = log_2(1/(2p)) = 5.643 $)
arrendondamos para cima visto que não existe bit fracionário, logo *6 bits é o
mínimo necessário.*
==== Qual a taxa de amostragem mínima que garante a reconstrução do sinal e qual a taxa de bits resultante?
\
É pegar a frequência dada no enunciado e multiplicar por 2 para obter a
frequência de Nyquist
#math.equation(block: true, $ f_N = 6 "kHz" $)
bem isso quer dizer que 6000 amostras do sinal são obtidas por segundo e como
cada amostra irá produzir uma palavra temos que a taxa de bits é isso dividido
pelo número de bits por palavra.
#math.equation(
block: true, $ R_b = 6000("amostras/s") dot 6("bits/amostra") = 36000 "bits/s" $,
)
==== Qual a taxa de símbolo (ou taxa de pulsos PAM) ?
\
bem aqui é dizer o valo de _k_ que é
#math.equation(block: true, $ k = log_2(16) = 4 "bits/símbolo" $)
#math.equation(
block: true, $ R_s = (36000 "bits/s") / (4 "bits/símbolo") = 9000 "símbolo/s" $,
)
==== Se a largura de banda de transmissão for de 12 kHz, calcule a eficiência espectral deste sistema
\
🚨 pode parecer a primeira vista que seja uma porcentagem a resposta, mas na
verdade é uma taxa a razão entre bits e banda utilizada
#math.equation(block: true, $ R_B = (36 "kbits/s")/(12 "kHz") = 3 "bits/Hz"$)
|
|
https://github.com/An-314/Notes-of-DSA | https://raw.githubusercontent.com/An-314/Notes-of-DSA/main/introduction.typ | typst | = 绪论
== 算法
计算机、程序、算法
- 计算 = 信息处理 = 借助某种工具,遵照一定规则,以明确而机械的形式进行
- 计算模型 = 计算机 = 信息处理工具
- 所谓算法,即特定计算模型下,旨在解决特定问题的指令序列
算法的特性:
- *输入* 待处理的信息(问题)
- *输出* 经处理的信息(答案)
- *正确性* 的确可以解决指定的问题
- *确定性* 可描述为一个由基本操作组成的序列
- *可行性* 每一基本操作都可实现,且在常数时间内完成
- *有穷性* 对于任何输入,经有穷次基本操作,都可以得到输出
不能确定有穷的程序不能称之为一个算法。可放宽为期望意义上的有穷:例如每轮以固定概率成功的重复试验,其期望轮数由几何级数给出,仍是常数。
== 计算模型
$T_A (P)$ = 算法A求解问题实例P的计算成本
$T_A (n) = max{T_A (P) | |P|=n}$在规模同为$n$的所有实例中,只关注最坏(成本最高)者,有时候我们也会关注期望成分。
为给出客观的评判,需要抽象出一个理想的平台或模型
- 不再依赖于程序员、编译器、计算机、编程语言等具体因素
- 从而直接而准确地描述、测量并评价算法
=== 图灵机Turing Machine(TM)
#figure(
image("fig\绪论\1.png",width: 80%),
caption: "图灵机"
)
- 无限长的纸带(Tape),纸带上有无限多个格子,每个格子上有一个字符
- 读写头(Head),每次只能读写一个格子,经过一个节拍可以移动一格
- 字符集(Alphabet),纸带上的字符来自于字符集
- 状态集(State),有限个状态,每个状态下有一个动作表,约定`h`停机
*转换函数*
```cpp
Transition(q, c; d, L/R, p)
```
- `q`当前状态
- `c`当前字符
- `d`写入字符
- `L/R`移动方向
- `p`下一状态
特别地,一旦转入约定的状态`h`,则停机。从启动至停机,所经历的节拍数目,即可用以度量计算的成本;亦等于Head累计的移动次数。
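为直观起见,下面给出一段笔者补充的 C++ 示意代码,模拟图灵机的单步转移(用有限长的字符串近似无限纸带,状态与字符都用 `char` 表示,并非讲义原有内容):
```cpp
#include <map>
#include <string>
#include <utility>
struct Action { char write; int move; char next; }; // move: -1 表示 L,+1 表示 R
using Delta = std::map<std::pair<char, char>, Action>; // 转换函数 (q, c) -> (d, L/R, p)
// 执行一步转移;返回 false 表示已处于约定的停机状态 'h'
bool step(std::string& tape, int& head, char& state, const Delta& delta) {
    if (state == 'h') return false;                // 停机
    Action a = delta.at({state, tape[head]});      // 查动作表(假设表中必有此项)
    tape[head] = a.write;                          // 写入字符
    head += a.move;                                // 移动读写头(示意代码,未处理越界)
    state = a.next;                                // 转入下一状态
    return true;
}
```
每成功执行一次 `step`,节拍数(即 Head 的累计移动次数)加一,这正对应上文所说的计算成本。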
下面的例子就是`Increase`算法的TM实现
#figure(
image("fig\绪论\2.png",width: 80%),
caption: "Increase算法的TM实现"
)
=== 随机存取机Random Access Machine(RAM)
#figure(
image("fig\绪论\3.png",width: 80%),
caption: "RAM"
)
- 无限长的存储器(Memory),存储器被划分为若干个存储单元,每个存储单元存储一个字
- call-by-rank,每次可以直接访问任意存储单元
与TM模型一样, RAM模型也是一般计算工具的简化与抽象,使我们可以独立于具体的平台,对算法的效率做出可信的比较与评判。
在这些模型中
- 算法的运行时间 $prop$ 算法需要执行的基本操作次数
- $T(n) = $算法为求解规模为$n$的问题,所需执行的基本操作次数
下面的例子就是`Ceiling Division`算法的RAM实现
#figure(
image("fig\绪论\4.png",width: 80%),
caption: "Ceiling Division算法的RAM实现"
)
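作为对照,下面是一段笔者补充的 C++ 示意,用"反复减"的思路实现向上取整的除法并统计基本操作次数(具体的 RAM 指令序列以上图为准):
```cpp
// 计算 ceil(c / d)(假设 c >= 0, d > 0),同时统计执行的基本操作次数
int ceilDiv(int c, int d, int& steps) {
    int x = 0; steps = 0;
    while (c > 0) {              // 每轮近似对应常数条基本指令
        c -= d; x++; steps += 3; // 一次比较、一次减法、一次自增
    }
    return x; // 返回向上取整的商;steps 与商成正比,可作为运行时间的度量
}
```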
== 渐进复杂度Big-O Notation
渐近分析:更关心问题规模足够大之后,计算成本的增长趋势。
借用渐进分析中的$O$、$Omega$和$Theta$符号来描述算法的渐进复杂度。
=== 多项式复杂度
==== $O(1)$:constant
这类算法的效率最高。
可能含循环、分支、递归等语句,但其执行次数与问题规模无关。
==== $O(log^c n)$:poly-logarithmic
这类算法非常有效,复杂度无限接近于常数。
$
forall c > 0, log n = O(n^c)
$
==== $O(n^c)$:polynomial
线性(linear function):$O(n)$
从$O(1)$ 到 $O(n^2)$,一般来说都是能接受的。$O(n^2)$有时候也过高。但是更大次幂,是非常低效的。
=== $O(c^n)$:exponential
这类算法的计算成本增长极快,通常被认为不可忍受。
从$O(n^c)$到$O(2^n)$,是从有效算法到无效算法的分水岭。
有些问题的最优算法的复杂度就是指数级的,例如NPC问题。
NPC问题:就目前的计算模型而言,不存在可在多项式时间内解决此问题的算法。并且这类问题的解法,可以在多项式时间内验证。
一个典型的问题是Subset Sum问题。
_给定一个集合$S$,以及一个目标值$T$,判断$S$中是否存在一个子集,其元素之和为$T$。_
_最优的解法是穷举法,复杂度为$O(2^n)$。_
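下面是一段笔者补充的穷举示意(按位枚举全部 $2^n$ 个子集,总代价约为 $O(n dot 2^n)$;$n$ 稍大即不可行,这正是指数复杂度的含义):
```cpp
// 穷举全部 2^n 个子集,判断是否存在元素和恰为 T 的子集(要求 n < 64)
bool subsetSum(const int S[], int n, long long T) {
    for (unsigned long long mask = 0; mask < (1ULL << n); mask++) { // 每个mask对应一个子集
        long long sum = 0;
        for (int i = 0; i < n; i++)
            if (mask & (1ULL << i)) sum += S[i]; // 第i位为1则选取S[i]
        if (sum == T) return true;
    }
    return false;
}
```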
#figure(
image("fig\绪论\5.png",width: 80%),
caption: "渐近复杂度的层次级别"
)
== 复杂度分析:级数、递归与主定理Master Theorem
=== 算法分析
两个主要任务 $=$ 正确性(不变性 $times$ 单调性) $+$ 复杂度
为确定后者,不必将算法描述为RAM的基本指令,再累计各条代码的执行次数。C++等高级语言的基本指令,均等效于常数条RAM的基本指令; 在渐近意义下,二者大体相当
- 分支转向: `goto` [算法的灵魂;为结构化而被隐藏]
- 迭代循环: `for()`、 `while()`、 ... [本质上就是“`if` + `goto`”]
- 调用 + 递归(自我调用) [本质上也是`goto`]
主要方法: 迭代(级数求和)、递归(递归跟踪 $+$ 递推方程)、实用(猜测 $+$ 验证)
=== 级数
一些常见的级数:
- 算术级数(与末项平方同阶):$1 + 2 + 3 + ... + n = n(n+1)/2=O(n^2)$
- 幂方级数(比幂次高出一阶):$sum_(i=0)^n i^k = O(n^(k+1))$
- 几何级数(与末项同阶):$sum_(i=0)^n q^i = (1-q^(n+1))/(1-q) = O(q^n), q>1$
- 收敛级数,例如倒数平方和:$O(1)$
- 几何分布:$(1-lambda) (1+2lambda+3lambda^2+...+n lambda^(n-1)) = O(1)$
- 调和级数:$sum_(i=1)^n 1/i = Theta(log n)$
- 对数级数:$sum_(i=1)^n log i = Theta(n log n)$
- 对数+线性+指数:$sum_(i=1)^n i log i = O(n^2 log n)$,$sum_(i=1)^n i 2^i = O(n 2^n)$
可以通过积分等方法求得。
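例如,用积分逼近即可粗略得到调和级数与对数级数的阶:
$
sum_(i=1)^n 1/i approx integral_1^n (dif x)/x = ln n = Theta(log n), quad
sum_(i=1)^n log i approx integral_1^n ln x dif x = Theta(n log n)
$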
=== 迭代
可以画图进行分析
- 迭代+算术级数 $O(n^2)$
```cpp
for( int i = 0; i < n; i++ )
for( int j = 0; j < n; j++ )
O1op(const i, const j);
for( int i = 0; i < n; i++ )
for( int j = 0; j < i; j++ )
O1op(const i, const j);
```
- 迭代+几何级数 $O(n)$
```cpp
for( int i = 1; i < n; i <<= 1 )
for( int j = 0; j < i; j++ )
O1op( const i, const j );
```
- Iteration + a more involved series, e.g. $O(n log n)$
```cpp
for( int i = 0; i <= n; i++ )
for( int j = 1; j < i; j += j )
O1op( const i, const j );
```
=== Back-of-the-envelope estimates
- Circumference of the earth (equator) $≈ 787 times 360/7.2 = 787 times 50 = 39,350 "km"$
- 1 day $= 24"hr" times 60"min" times 60"sec"≈ 25 times 4000 = 10^5 "sec"$
- 1 lifetime $≈$ 1 century $= 100"yr" times 365 = 3 times 10^4 "day" = 3 times 10^9 sec$
- "Fifty years of healthy work for the motherland" $≈ 1.6 times 10^9 sec$
- "Three lives, three worlds" $≈ 300 "yr" = 10^10 = (1 "googol")^(1/10) sec$
- From the Big Bang until now $= 4 times 10^17 > 10^8 times "one lifetime"$
== Iteration and Recursion
=== Decrease-and-conquer
#figure(
image("fig\绪论\6.png",width: 80%),
  caption: "Decrease and conquer"
)
To solve a problem of large scale, one can
- split it into two subproblems: one trivial, the other of reduced scale
- solve the subproblems separately, then obtain the solution of the original problem from the solutions of the subproblems
For example
```cpp
int SumI( int A[], int n ) {
int sum = 0; //O(1)
for ( int i = 0; i < n; i++ ) //O(n)
sum += A[i]; //O(1)
return sum; //O(1)
}
/* Decrease-and-conquer:Linear Recursion */
sum( int A[], int n )
{ return n < 1 ? 0 : sum(A, n - 1) + A[n - 1]; }
```
Recursion trace: draw every recursive instance that appears during the computation (and the calling relations between them)
#figure(
image("fig\绪论\7.jpg",width: 50%),
  caption: "Recursion trace"
)
In this example there are $n+1$ recursive instances in total, each taking only $O(1)$ time, so the total time is $O(n)$. Since $n+1$ stack frames are opened, the space is $O(n)$ as well.
*Recurrence*: the relation between the cost $T(n)$ of a recursive instance and its scale $n$
$
T(n) = T(n-1) + O(1) = O(n)
$
So the complexity of this example is $O(n)$.
*Tail recursion*: the recursive call is the last statement of the function body. Tail recursion can be turned into iteration, which saves space.
For example, reverse the interval `A[lo,hi]` of an array: `void reverse( int * A, int lo, int hi );`
Decrease and conquer: $"Rev"("lo", "hi") = ["hi"] + "Rev"("lo" + 1, "hi" - 1) + ["lo"]$
```cpp
if (lo < hi) { //recursive version
  swap( A[lo], A[hi] );
  reverse( A, lo + 1, hi - 1 );
} //linear (tail) recursion, O(n)
while (lo < hi) //iterative version
  swap( A[lo++], A[hi--] ); //also O(n)
```
=== Divide-and-conquer
To solve a problem of large scale, split it into several subproblems (usually two, of roughly equal size). Solve the subproblems separately, then merge their solutions into a solution of the original problem.
#figure(
image("fig\绪论\8.png",width: 80%),
  caption: "Divide and conquer"
)
For example, the earlier `SumI` algorithm can be rewritten as
```cpp
/* Divide-and-conquer:Binary Recursion */
sum( int A[], int lo, int hi ) { //range A[lo, hi)
  if ( hi - lo < 2 ) return A[lo];
  int mi = (lo + hi) >> 1; return sum( A, lo, mi ) + sum( A, mi, hi );
} //entry point: sum( A, 0, n )
```
This gives the recurrence
$
T(n)=2T(n/2)+O(1)=O(n)
$
For equations of this general shape, the Master Theorem applies.
*Master Theorem*
$
T(n)=a T(n/b)+O(f(n))
$
- If $f(n)=O(n^(log_b a - epsilon))$ for some $epsilon > 0$, then $T(n)=Theta(n^(log_b a))$
- If $f(n)=Theta(n^(log_b a) dot log^k n)$, then $T(n)=Theta(n^(log_b a) dot log^(k+1) n)$
- If $f(n)=Omega(n^(log_b a + epsilon))$ for some $epsilon > 0$, then $T(n)=Theta(f(n))$
That is, compare $f(n)$ with $n^(log_b a)$: if $f(n)$ dominates, the complexity is $Theta(f(n))$; if they are comparable, it is $Theta(n^(log_b a) dot log^(k+1) n)$; if $n^(log_b a)$ dominates, it is $Theta(n^(log_b a))$.
Several examples that will appear later:
- `kd-search`: $T(n)=2T(n/4)+O(1)=O(sqrt(n))$
- `binary search`: $T(n)=T(n/2)+O(1)=O(log n)$
- `merge sort`: $T(n)=2T(n/2)+O(n)=O(n log n)$
- `STL merge sort`: $T(n)=2T(n/2)+O(n log n)=O(n log^2 n)$
- `quickSelect`: $T(n)=T(n/2)+O(n)=O(n)$
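As an added sanity check of the second case (an illustrative Python sketch, not part of the original notes), the merge-sort recurrence can be evaluated numerically and compared against $n log n$:
```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):                      # T(n) = 2*T(n/2) + n, T(1) = 1  (merge sort shape)
    return 1 if n <= 1 else 2 * T(n // 2) + n

for n in (2**10, 2**14, 2**18):
    print(n, T(n), round(n * math.log2(n)))   # T(n) tracks n*log(n), as case 2 predicts
```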
For example, multiplication of large integers:
1. Naive + DAC
```
AB
CD
x_____
AC
BD
AD
BC
```
$T(n)=4T(n/2)+O(n)=O(n^2)$
2. Optimal
```
AB
CD
x_____
AC
BD
AC
BD
(A-B)(D-C)
```
In this way only three multiplications are needed: $T(n)=3T(n/2)+O(n)=O(n^(log_2 3))$
=== Example: the maximum-sum segment problem
Given a sequence of integers $A[0, n)$, find the segment $A[i, j)$ with the largest total sum, where $0≤i<j<n$ (if there are several, prefer the shorter one, and among those the later one).
==== Brute force (BF)
Enumerate all segments, compute the sum of each, and pick the largest.
```cpp
int gs_BF( int A[], int n ) { //brute force: O(n^3)
  int gs = A[0]; //best sum seen so far
  for ( int i = 0; i < n; i++ )
    for ( int j = i; j < n; j++ ) { //enumerate all O(n^2) segments!
      int s = 0;
      for ( int k = i; k <= j; k++ ) s += A[k]; //O(n) time to sum each one
      if ( gs < s ) gs = s; //keep the best, update
    }
  return gs;
}
```
==== Incremental strategy
Remember the running sum: segments with the same starting point can each be obtained from the previous one by adding a single element.
```cpp
int gs_IC( int A[], int n ) { //incremental strategy: O(n^2)
  int gs = A[0]; //best sum seen so far
  for ( int i = 0; i < n; i++ ) { //enumerate all segments starting at i
    int s = 0;
    for ( int j = i; j < n; j++ ) { //... and ending at j
      s += A[j]; //extend the sum incrementally: O(1)
      if ( gs < s ) gs = s; //keep the best, update
    }
  }
  return gs;
}
```
==== Divide and conquer: prefix + suffix
$
A["lo", "hi") = A["lo", "mi") union A["mi", "hi") = P union S
$
With recursion we can find the `GS` inside $P$ and inside $S$; the remaining real work is to examine the segments that straddle the cut.
So each half returns two values: the `GS` inside the half, and the `GS` that touches the cut point.
The two can be computed independently, with $O(n)$ accumulated time per level, so the total time is $O(n log n)$.
```cpp
int gs_DC( int A[], int lo, int hi ) { //Divide-And-Conquer: O(n*logn)
  if ( hi - lo < 2 ) return A[lo]; //base case
  int mi = (lo + hi) / 2; //cut at the midpoint
  int gsL = A[mi-1], sL = 0, i = mi; //enumerate
  while ( lo < i-- ) //all segments of the form [i, mi)
    if ( gsL < (sL += A[i]) ) gsL = sL; //and update
  int gsR = A[mi], sR = 0, j = mi-1; //enumerate
  while ( ++j < hi ) //all segments of the form [mi, j)
    if ( gsR < (sR += A[j]) ) gsR = sR; //and update
  return max( gsL + gsR, max( gs_DC(A, lo, mi), gs_DC(A, mi, hi) ) ); //recurse
}
```
==== Divide and conquer: shortest non-positive-sum suffix $~$ maximum-sum segment
Consider a suffix $S$: if its total sum is non-positive, it can be discarded, because it cannot be part of the maximum-sum segment.
So we only need to look for the maximum-sum segment in what remains. This can be implemented as a single linear scan that keeps trimming suffixes with non-positive sums, for $O(n)$ time overall.
```cpp
int gs_LS( int A[], int n ) { //Linear Scan: O(n)
int gs = A[0], s = 0, i = n;
  while ( 0 < i-- ) { //within the current range
    s += A[i]; //accumulate the sum incrementally
    if ( gs < s ) gs = s; //keep the best, update
    if ( s <= 0 ) s = 0; //trim a suffix with non-positive sum
}
return gs;
}
```
#figure(
image("fig\绪论\9.jpg",width: 60%),
  caption: "Divide and conquer: shortest non-positive-sum suffix ~ maximum-sum segment"
)
== Dynamic Programming
=== Memoization
For example, the recursive algorithm for `fib()`:
```cpp
int fib(n) { return (2 > n) ? n : fib(n-1) + fib(n-2); }
```
$
T(n) = T(n-1) + T(n-2) + O(1)
$
The time complexity of this algorithm is $O(phi^n)$, where $phi$ is the golden ratio.
#figure(
image("fig\绪论\10.png",width: 80%),
  caption: "Recursion trace of the recursive algorithm"
)
One can see that much of the work is repeated: for example, `fib(3)` is computed twice, `fib(2)` three times, and `fib(1)` five times.
So we can use an array to store the values that have already been computed, which avoids the repeated work; this is *memoization*.
```cpp
def f(n)
if ( n < 1 ) return trivial( n );
return f(n-X) + f(n-Y)*f(n-Z);
/* Memoization: Top-down Dynamic Programming */
T M[ N ]; //init. with UNDEFINED
def f(n)
if ( n < 1 ) return trivial( n );
// recur only when necessary & always write down the result
if ( M[n] == UNDEFINED )
M[n] = f(n-X) + f(n-Y)*f(n-Z);
return M[n];
```
#figure(
image("fig\绪论\11.png",width: 80%),
  caption: "Recursion trace with memoization"
)
*Dynamic programming*: reverse the direction of computation, from top-down recursion to bottom-up iteration.
```cpp
f = 1; g = 0; //fib(-1), fib(0)
while ( 0 < n-- ) {
g = g + f;
f = g - f;
}
return g;
```
This also saves space: only $O(1)$ extra memory is needed.
=== Example: Longest Common Subsequence (LCS)
#figure(
image("fig\绪论\12.png",width: 80%),
  caption: "Longest common subsequence (LCS)"
)
For sequences $A[0,n)$ and $B[0,m)$, $"LCS"(n,m)$ falls into three cases:
1. If `n = 0` or `m = 0`, take the empty sequence (length zero); this is the base case, which is always reached eventually
2. If `A[n-1] = 'X' = B[m-1]`, take `LCS(n-1,m-1) + 'X'` [decrease and conquer]
3. If `A[n-1] != B[m-1]`, take the longer of `LCS(n,m-1)` and `LCS(n-1,m)` [divide and conquer]
```cpp
Input: two strings A and B of length n and m resp.,
Output: (the length of) the longest common subsequence of A and B
lcs( A[], n, B[], m )
Compare the last characters of A and B, i.e., A[n-1] and B[m-1]
If A[n-1] = B[m-1]
Compute x = lcs(A, n-1, B, m-1) recursively and return 1 + x
Else
Compute x = lcs(A, n-1, B, m) & y = lcs(A, n, B, m-1) and return max(x, y)
As the recursion base, return 0 when either n or m is 0
```
A basic implementation is
```cpp
unsigned int lcs( char const * A, int n, char const * B, int m ) {
if (n < 1 || m < 1) //trivial cases
return 0;
else if ( A[n-1] == B[m-1] ) //decrease & conquer
return 1 + lcs(A, n-1, B, m-1);
else //divide & conquer
return max( lcs(A, n-1, B, m), lcs(A, n, B, m-1) );
}
```
With this algorithm, `LCS(A[a],B[b])` is called $mat(n+m-a-b;n-a)$ times; `LCS(A[0],B[0])` alone is invoked $Omega(2^n)$ times.
Below we optimize this with memoization:
#figure(
image("fig\绪论\13.png",width: 80%),
  caption: "Longest common subsequence (LCS): memoization"
)
```cpp
unsigned int lcsMemo(char const* A, int n, char const* B, int m) {
unsigned int * lcs = new unsigned int[n*m]; //lookup-table of sub-solutions
memset(lcs, 0xFF, sizeof(unsigned int)*n*m); //initialized with n*m UINT_MAX's
unsigned int solu = lcsM(A, n, B, m, lcs, m);
delete[] lcs;
return solu;
}
unsigned int lcsM( char const * A, int n, char const * B, int m,
unsigned int * const lcs, int const M ) {
if (n < 1 || m < 1) return 0; //trivial cases
if (UINT_MAX != lcs[(n-1)*M + m-1]) return lcs[(n-1)*M + m-1]; //recursion stops
else return lcs[(n-1)*M + m-1] =
(A[n-1] == B[m-1]) ?
      1 + lcsM(A, n-1, B, m-1, lcs, M) :
      max( lcsM(A, n-1, B, m, lcs, M), lcsM(A, n, B, m-1, lcs, M) );
}
```
With the dynamic-programming strategy, all subproblems can be computed in only $O(m n)$ time.
#figure(
image("fig\绪论\14.png",width: 50%),
  caption: "Longest common subsequence (LCS): dynamic programming"
)
```cpp
unsigned int lcs(char const * A, int n, char const * B, int m) {
if (n < m) { swap(A, B); swap(n, m); } //make sure m <= n
unsigned int* lcs1 = new unsigned int[m+1]; //the current two rows are
unsigned int* lcs2 = new unsigned int[m+1]; //buffered alternatively
memset(lcs1, 0x00, sizeof(unsigned int) * (m+1)); lcs2[0] = 0;
for ( int i = 0; i < n; swap( lcs1, lcs2 ), i++ )
for ( int j = 0; j < m; j++ )
lcs2[j+1] = ( A[i] == B[j] ) ? 1 + lcs1[j] : max( lcs2[j], lcs1[j+1] );
unsigned int solu = lcs1[m]; delete[] lcs1; delete[] lcs2; return solu;
}
``` |
|
https://github.com/Jollywatt/typst-wordometer | https://raw.githubusercontent.com/Jollywatt/typst-wordometer/master/tests/where-selectors/test.typ | typst | MIT License | #import "/src/lib.typ": *
#set page(width: 15cm, height: auto)
#let el = [
= One
== Not me!
=== Two three four five
]
#rect(el)
#word-count-of(el, exclude: heading.where(depth: 2)) |
https://github.com/frederiksemmel/linicrypt_typst | https://raw.githubusercontent.com/frederiksemmel/linicrypt_typst/main/main.typ | typst | #import "@preview/commute:0.2.0": node, arr, commutative-diagram
#import "@preview/algo:0.3.3": algo, i, d, comment, code
#import "lib/template_fs.typ": *
#import "lib/linicrypt.typ": *
#show: fs-article.with(
title: [Ideas on a characterization for Collision Resistance in Linicrypt without nonces],
authors: (
(
name: "<NAME>",
location: [Zurich],
email: "<EMAIL>",
),
),
bibliography-file: ("../bib/crypto_custom.bib", "../lib/references.bib"),
abstract: [
TODO
],
)
= Definitions and building blocks <definitions-and-building-blocks>
#remark("Notation")#linicrypt_notation_summary
Additionally, I want to note these facts about basis change.
- We can apply a basis change #B to a program #PIOC.
If $#B in F^(d times d)$ is invertible then $#P#B$ is defined as the program corresponding to $(#I#B, #O#B, #C#B)$.
There is a formal framework to show that this represents the same Linicrypt program.
The main idea for tackling Linicrypt programs in the no-nonces setting is to use basis change and generalize it to any non-square, non-invertible matrices.
This was the driving reason for the generalizations we defined in my master's thesis.
We have to allow for the "algebraic representations" #PIOC to be much freer.
Essentially anything with consistent dimensions should be allowed,
and all definitions and theorems should still be applicable.
A triple $(#I, #O, #C)$ is called a "pseudo-algebraic representation" if all the rows of the matrices are in $F^(1 times d)$.
Then we can use a core lemma that relates the solution spaces $sol(#C)$
between transformed constraints,
which follows essentially from the associativity of the matrix product.
I want to state the definition of collision structure for these pseudo-algebraic representations.
#definition("Collision structure")[
Let $#P = lr((#I , bold(O) , #C))$ be a "pseudo-algebraic representation".
We say #C has a collision structure if we can split #C as $#C = #Ccs union.sq #Cfix$ and find a non-trivial subspace $#Fixing subset.eq #F^(1 times d)$ such that
// #set enum(numbering: "(a)")
$
#Ccs #text[is deterministically solvable fixing] (rowsp(Cfix) + rowsp(O)) plus.circle #Fixing.
$
]
#remark[
- The fixed constraints #Cfix and the backwards solvable constraints #Ccs are equivalent to the constraints before and after $i^*$ in the original collision structure definition.
- Degeneracy is a natural edge case of this definition, i.e. $#Ccs = {}$.
- The subspace #Fixing corresponds to the variables in #P which we can set freely and still solve the constraints.
It plays the same role as the $#q _(i^*)$, which is not in the span of previously fixed variables.
]
The attack follows from the definition of the collision structure:
We are given a $#vv in sol(#C)$.
In particular, it also solves $#Cfix$.
We fix the values for all the dual vectors contained in $rowsp(Cfix) + rowsp(O)$.
Then we choose any value other than $bold(f)#vv$ for some $bold(f) != 0$ in #Fixing.
By the lemma on the structure of $sol(#C)$,
we have a deterministic Linicrypt program computing a solution $#vv'$ for $Ccs$ taking these fixed values as input.
This solution is our collision,
as we have $#I#vv != #I#vv'$ (injectivity of #I on the solutions),
$#vv' in sol(#C) = sol(#Ccs) sect sol(#Cfix) $ (by the definitions)
and $#O#vv = #O#vv'$ (by the choice of $#vv'$).
We can therefore state the following theorem, which combines all the previously described attacks for both the random oracle model and the ideal cipher model.
#theorem[
Let $(#I , bold(O) , #C)$ be a pseudo-algebraic representation with a collision structure
$#C = #Ccs union.sq #Cfix$.
Assume we already know a $#vv$ in $sol(#C)$.
Then there is a Linicrypt program that can output a $#vv'$ in $sol(#C)$ with $#vv != #vv'$ and
$#O#vv = #O#vv'$ by making $|#Ccs|$ queries to $H$.
]<theorem-collision-structure>
The definition of the solution set (in the ROM) is the following.
#definition("Solution of constraints")[
Let $#C$ be a set of constraints of dimension $d$.
We say a vector $#vv in #F^d$ #strong[solves] $#C$ if
$#a #vv = H(#Q #vv)$ for all $#Q arrow.r.bar #a$ in $#C$.
Such a vector $#vv$ is called a #strong[solution] of $#C$.
The set of all solutions to $#C$ is denoted by $solH(C)$.
]
The definition works for any function $H$,
and we will just write $sol(#C)$ and omit the $H$ from the notation.
TODO: maybe I should put here the definition for deterministically solvable,
and the lemma about the structure of $sol(#C)$.
The central lemma that is used in the conjecture below is:
#lemma("Transformation of constraints")[
Let #C be a set of constraints of dimension $d$ and
let $#T$ be a matrix in $F^(d times d')$ where $d'$ can be arbitrary.
Then we have
$
#vv in sol(#C#T) <==> #T#vv in sol(#C)
$
]<lemma-sol-commutes>
#proof[
Let #C and #T be as in the lemma.
The following equivalences prove the lemma:
$
&#vv in sol(#C#T) \
<=> &(#a') #vv = H((#Q') #vv) quad forall (#Q', #a') in #C#T \
<=> &(#a#T) #vv = H((#Q#T) #vv) quad forall (#Q, #a) in #C \
<=> &#a (#T #vv) = H(#Q (#T #vv)) quad forall (#Q, #a) in #C \
<=> &#T#vv in sol(#C)
$
]
The equivalence between lines 2 and 3 may look strange,
because $#C#T$ could have fewer constraints than #C,
but it is correct.
This lemma holds for _any_ function $H$.
= Conjecture
I am sure that the conjecture still needs tweaking.
But in this form, the attack side already kind of works, and it explains all the examples I have found.
The next step would be to try to break the conjecture by finding more complicated counterexamples.
#conjecture("Formulation version 1")[
Let #P = (#I, #O, #C) be a Linicrypt program.
and assume there is an injective #T in $M^(d times d')$ such that
#set enum(numbering: "(a)")
+ $#P#T$ has a collision structure, or
+ There exists a basis change $#B != bb(1)$ in $M^(d' times d')$ with $#C#T#B = #C#T$ and $#O#T#B = #O#T$ and $#I#T != bb(0)$
  Then there exists an attack on collision resistance against #P. If neither (a) nor (b) holds for any injective #T, then #P is collision-resistant.
]<conjecture-no-nonces>
With this formulation there are these issues:
+ For (a), we need that $#C#T$ is solvable,
so that we can actually find any #vv in sol(#C#T)
+ For (b), we need to be able to find a #vv in sol(#C#T) with $#B#vv != #vv$
+ Also, #I#T needs to be injective on sol(#C#T)
The last two issues are not issues if #T is the identity.
Then 2. is satisfied because $H$ is assumed to look random,
and cannot fulfill a linear relationship.
And 3. is satisfied anyways by the Lemma on the structure of a solvable #C.
#conjecture("Formulation version 2")[
Let #P = (#I, #O, #C) be a pseudo algebraic representation of a Linicrypt program.
  #P is not collision resistant if and only if at least one of these conditions holds:
#set enum(numbering: "(a)")
+ $#P$ has a collision structure.
+ There exists a basis change $#B != bb(1)$ with $#C#B = #C$ and $#O#B = #O$.
+ There is an injective map #T in $#F^(d' times d)$ and a submatrix $#I'$ of #I
such that $#P' = (#I'#T, #O#T, #C#T)$ is a pseudo algebraic representation for which (a) or (b) hold.
]<conjecture-no-nonces-v2>
= Attack Side
The attack side works for this conjecture.
The conjecture is enough to explain all CR-insecure examples that I tried.
But the security proof still has a few "holes" in the argumentation,
even though a lot of things seem to go right.
Here I want to prove the direction: If (a), (b) or (c) is true
then there is a collision resistance attack on #P.
#proof[
== Case (a)
This is given by #ref(<theorem-collision-structure>)
== Case (b), the permutation attack
Assume that we have $#C#B = #C$ and $#O#B = #O$ for a non-trivial basis change #B.
Note, that this is only possible due to there being no nonces in the constraints.
Because $#C$ is an unordered set of constraints,
#C#B can be equal to #C if #B manages to permute the constraints in #C.
Then let #vv be in $sol(#C )$ such that $#B #vv != #vv$.
It is possible to find such a #vv with very high probability simply by choosing any #vv randomly out of sol(#C).
If sol(#C) were contained in a proper subspace of $#F^d$, such as the eigenspace of #B for eigenvalue 1,
then $H$ would certainly not be independently random.
#remark[
This can be made precise by switching to the canonical algebraic representation,
where the input and the answer vectors are all orthogonal.
Then sol(#C) is a graph on top of $X := #ker[#I]^tack.t$.
Because $H$ is a random oracle, any point above $X$ is equally likely,
so the probability that the graph lies in a subspace is negligible.
]
By the #ref(<lemma-sol-commutes>) we have $#vv in sol(#C ) = sol(#C #B) <=> #B#vv in sol(#C )$.
Finally, as we have assumed that $#O#B = #O$ we get that:
$
#O#B#vv = (#O#B) #vv = #O#vv
$
By choice of #vv we have $#B#vv != #vv$,
and since #I is injective on $sol(#C)$, it follows that $#I#B#vv != #I#vv$, so these two inputs form a collision for #P.
== Case (c), the collapse of queries
Assume that $#P' = (#I'#T, #O#T, #C#T)$ is a pseudo algebraic representation for which (a) or (b) hold.
Now by the attack proofs for case (a) and case (b) we get a $#vv != #vv'$ in sol(#C#T)
such that $(#O#T)#vv = (#O#T)#vv'$.
We have $#I'#T#vv != #I'#T#vv'$, because $#P'$ is a pseudo algebraic representation and therefore $#I'#T$ is injective on $sol(#C#T)$.
Now we can apply #ref(<lemma-sol-commutes>) which gives us in this context the following equivalence:
$
#vv in sol(#C #T) <==> #T #vv in sol(C)
$
Putting it all together we have $#T#vv != #T #vv'$, which are both in $sol(#C)$ and fulfill the equation $#O (#T#vv) = #O (#T#vv')$.
So $#I#T#vv$ and $#I#T#vv'$ form a collision for #P.
#remark("Unsolvable constraints")[
Something that could be interesting to look at is what algebraic conditions does #T need to fulfill so that #C#T is still solvable.
Intuitively #T should not create loops in the constraints,
where we need the output of some queries to determine the inputs to exactly these queries.
This plays into the same problem of trying to better understand unsolvable set of constraints.
Sometimes properties of $H$ can also make unsolvable #C actually easy to solve,
for example if $H$ has fixed points, then the "point loop" $#C = {#q |-> #q}$ has a solution.
]
]
= Examples for the attack side
== Collapsing program from McQuoid, Swope, Rosulek
This is the example that the authors of @TCC:McQSwoRos19 used to highlight why
the plain collision structure characterization fails in the setting without nonces.
#align(center + horizon)[
#grid(
columns: 2,
gutter: 2em,
algo(
// title: [$#P _sans("collapse")$], parameters: ($x$, $y$),
header: $underline(#P (x,y))$,
line-numbers: false, inset: 0.7em, fill: none, block-align: left,
)[
$a_1 &:= H(x)$ \
$a_2 &:= H(y)$ \
$a_3 &:= H(a_1)$ \
return $a_3 - a_2$
],
algo(
header: $underline(sans("Algebraic Representation of ") #P (x,y))$,
line-numbers: false,
inset: 0.7em,
fill: none,
stroke: 0pt,
block-align: left,
)[
#v(0.5em)
$bold(O) &:= mat(0,0,0,-1,1) \
bold(I) &:= mat(1,0,0,0,0;
0,1,0,0,0)$ \
$#C = {
&mat(1,0,0,0,0) &|-> mat(0,0,1,0,0), \
&mat(0,1,0,0,0) &|-> mat(0,0,0,1,0), \
&mat(0,0,1,0,0) &|-> mat(0,0,0,0,1)
}$
],
)
]
The attack for this program is to input $(x, H(x))$ for any $x in #F$ because $#P (x, H(x)) = 0$.
What happens is that $#a _2$ and $#a _3$ collapse during the execution of #P on these inputs.
This can be represented by the matrix #T defined as:
#let myT = $mat(
1,0,0;
0,1,0;
0,1,0;
0,0,1;
0,0,1;
)$
$
#T = #myT
$
Then the program #P#T is specified by $(#I'#T, #O#T, #C#T)$ with $#I'$ being only the first row of #I and
#align(center)[
#grid(
columns: 3,
gutter: 2em,
[$#C#T = {mat(1,0,0) |-> mat(0,1,0), mat(0,1,0) |-> mat(0,0,1)}$],
[$#I'#T = mat(1,0,0)$],
[$#O#T = mat(0,0,0)$.],
)
]
Note, that $#C#T$ is not deterministically solvable fixing $rowsp(#I#T)$.
But it is solvable fixing only $rowsp(#I' #T)$.
The key thing is that #P#T now has a collision structure!
We can split $#C#T = #Ccs#T union.sq #Cfix#T$ with $#Ccs#T = #C#T$ and $#Cfix#T = {}$.
If we set $#Fixing = rowsp(mat(1,0,0))$ then we get that #Ccs#T is deterministically solvable fixing
$
(rowsp(#Cfix#T) + rowsp(#O#T)) plus.circle #Fixing = rowsp(mat(1,0,0)).
$
Therefore case (c) of #ref(<conjecture-no-nonces-v2>) is fulfilled.
Here I write out exactly what happens in the proof for this concrete example.
By solving #Ccs#T (i.e. running the attacking Linicrypt program) in two different ways by fixing a different value for $mat(1,0,0)#vv$,
we get two distinct #vv and $#vv'$ in $sol(#C#T)$.
They have the form:
$
#vv = mat(x;H(x);H(H(x))) != mat(x';H(x');H(H(x'))) = #vv'
$
Now, by the #ref(<lemma-sol-commutes>) we get the solutions for #C by applying #T from the other side.
$
#T#vv = #myT #vv = mat(
x;
H(x);
H(x);
H(H(x));
H(H(x));
)
$
These lead to the expected colliding inputs to #P: $#I#T#vv = mat(x; H(x))$ and $#I#T#vv' = mat(x'; H(x'))$.
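As a quick numeric cross-check of the attack described above, here is an illustrative Python sketch; the prime modulus and the SHA-256-based stand-in for the random oracle are assumptions made only for the demonstration:
```python
import hashlib

p = 2**61 - 1                                     # toy prime field, chosen arbitrarily

def H(x):                                         # toy stand-in for the random oracle
    return int(hashlib.sha256(str(x).encode()).hexdigest(), 16) % p

def P(x, y):                                      # a3 - a2 = H(H(x)) - H(y)
    a1, a2, a3 = H(x), H(y), H(H(x))
    return (a3 - a2) % p

print([P(x, H(x)) for x in (1, 2, 3)])            # [0, 0, 0]: all these inputs collide
```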
== Collapse Attack: One of the 5 broken PGV compression functions in the rate-2 setting
This example should work for any of the 5 compression functions listed in @C:BlaRogShr02 in
the section "Fatal Attacks on Five of PGV's B-Labeled Schemes".
I will choose the compression function they call $hat(f)_39$ and call it $f$.
$
f(h,m) = E(h+m, h+m) + m
$
The Merkle-Damgård-like construction that I will use starts with the IV = 0.
This constant could also be passed in as another input and output, or it could be set to IV = H(0).
I believe all three approaches are equivalent in the end.
#import "@preview/fletcher:0.4.2" as fletcher: node, edge
#let icc(x, k, y) = [
#fletcher.diagram(
cell-size: 8mm,
$
#x edge(#k, <->) & #y \
$,
)
]
#align(center + horizon)[
#grid(
columns: 2,
gutter: 2em,
algo(
// title: [$#P _sans("collapse")$], parameters: ($x$, $y$),
header: $underline(H^2_f (m_1,m_2))$,
line-numbers: false, inset: 0.7em, fill: none, block-align: left,
)[
$h_0 := 0$ \
$y_1 := E(h_0 + m_1, h_0 + m_1)$ \
$h_1 := y_1 + m_1$ \
$y_2 := E(h_1 + m_2, h_1 + m_2)$ \
$h_2 := y_2 + m_2$ \
return $h_2$
],
algo(
header: $underline(sans("Algebraic Representation of ") #P (x,y))$,
line-numbers: false,
inset: 0.7em,
fill: none,
stroke: 0pt,
block-align: left,
)[
#v(0.5em)
$bold(I) &:= mat(1,0,0,0;
0,1,0,0) \
bold(O) &:= mat(0,1,0,1)$ \
$#C = lr({
&#icc($mat(1,0,0,0)$, $mat(1,0,0,0)$, $mat(0,0,1,0)$),\
&#icc($mat(1,1,1,0)$, $mat(1,1,1,0)$, $mat(0,0,0,1)$)
}, size: #50%)$
],
)
]
The goal is now to find a $#T'$ such that the two queries collapse.
I first try to find a $#T'$ in $#F^(4 times 4)$ and then I remove the zero columns to make it an injective matrix.
I write down the equations to make the second query collapse onto the first query:
$
mat(1,0,0,0) #T' &= mat(1,0,0,0) \
mat(0,0,1,0) #T' &= mat(0,0,1,0) \
mat(1,1,1,0) #T' &= mat(1,0,0,0) \
mat(0,0,0,1) #T' &= mat(0,0,1,0)
$
This system of equations has a solution:
$
#T' = mat(
1,0,0,0;
0,0,-1,0;
0,0,1,0;
0,0,1,0;
)
$
And if we remove the second and fourth column, we get a #T as in #ref(<conjecture-no-nonces>):
$
#T = mat(
1,0;
0,-1;
0,1;
0,1;
)
$
#remark[
I don't have an algorithm for finding these $#T$ yet.
Also, I believe that such an algorithm has to be in NP-hard in the number of constraints,
because in @hollenberg_complete_2022 they showed that finding the collapsing potential is NP-hard.
This doesn't seem to be a problem to me,
as exponential computation in the number of base variables is still doable.
And if we had lots of base variables,
we probably have a very specific structure (like in the MD-construction) which we can exploit for faster algorithms.
]
The collapsed program is then
#align(center)[
#grid(
columns: 3,
gutter: 2em,
[$#C#T = { (mat(1,0), mat(1,0), mat(0,1)) }$],
[$#I'#T = mat(1,0)$],
[$#O#T = mat(0,0)$.],
)
]
This is the same case as the previous example,
the $#C#T$ is solvable fixing the output and fixing $mat(1,0)$.
Because this program has a collision structure (specifically it is degenerate) we get solutions in $sol(#C#T)$,
i.e. any $#vv = (x,E(x,x))$ for $x$ in #F is in $sol(#C#T)$.
We map it to collisions of #P via #I#T, so every
$#I#T #vv = (x, -E(x,x))$ for $x$ in #F
collides with each other.
This is the same result as in @C:BlaRogShr02,
except that they work in a field of characteristic 2 where $-1 = 1$ and hence they omit the minus sign.
This example highlights the power of the Linicrypt approach.
It seems to me that the original PGV paper did not find these attacks,
but here these attacks can be discovered by solving algebraic conditions on the algebraic representation of the program.
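As a sanity check of these collisions, here is an illustrative Python sketch (the prime modulus and the toy cipher are assumptions; the cancellation is purely linear, so any function $E$ would do):
```python
import hashlib

p = 2**61 - 1                                      # toy prime field

def E(k, x):                                       # toy stand-in for the ideal cipher
    return int(hashlib.sha256(f"{k}|{x}".encode()).hexdigest(), 16) % p

def f39(h, m):                                     # f_39(h, m) = E(h+m, h+m) + m
    s = (h + m) % p
    return (E(s, s) + m) % p

def H2(m1, m2):                                    # two rounds of Merkle-Damgard, IV = 0
    return f39(f39(0, m1), m2)

print([H2(x, (-E(x, x)) % p) for x in (5, 17, 123456)])   # [0, 0, 0]: they all collide
```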
== Non-example for the collapse attack: The 8 group-2 schemes
I tried the scheme $f(h,m) = E(m, h) + m$.
For it, the equations to make the constraints collapse are inconsistent,
so no #T can exist that collapses the queries.
#remark[
It also doesn't work if one tries to collapse $y_2$ onto $x_1$.
This "reverse and collapse" seems like an interesting attack.
I want to check if this attack is possible for any of the PGV functions in the Merkle-Damgård construction.
But considering the amount of work that has been done on this topic,
I don't think there will be a PGV compression function for which this attack is actually a new attack.
]
== Permutation attack: The compression functions marked as (P) in PGV
One of the compression functions that is attackable by the Permutation Attack in PGV is
$f(h,m) = E(m,m) + h$.
#align(center + horizon)[
#grid(
columns: 2,
gutter: 2em,
algo(
// title: [$#P _sans("collapse")$], parameters: ($x$, $y$),
header: $underline(H^2_f (m_1,m_2))$,
line-numbers: false, inset: 0.7em, fill: none, block-align: left,
)[
$h_0 := 0$ \
$y_1 := E(m_1, m_1)$ \
$h_1 := y_1 + h_0$ \
$y_2 := E(m_2, m_2)$ \
$h_2 := y_2 + h_1$ \
return $h_2$
],
algo(
header: $underline(sans("Algebraic Representation of ") #P (x,y))$,
line-numbers: false,
inset: 0.7em,
fill: none,
stroke: 0pt,
block-align: left,
)[
#v(0.5em)
$bold(I) &:= mat(1,0,0,0;
0,1,0,0) \
bold(O) &:= mat(0,0,1,1)$ \
$#C = lr({
&#icc($mat(1,0,0,0)$, $mat(1,0,0,0)$, $mat(0,0,1,0)$),\
&#icc($mat(0,1,0,0)$, $mat(0,1,0,0)$, $mat(0,0,0,1)$)
}, size: #50%)$
],
)
]
With these constraints #C we can do a collapse attack like before, which will lead to the collisions
$(x,x)$ for $x$ in #F.
But what is also possible is the permutation attack due to the symmetry in #C _and_ #O under #B for
$
#B = mat(
0,1,0,0;
1,0,0,0;
0,0,0,1;
0,0,1,0;
).
$
We have $#C#B = #C$ as required in #ref(<conjecture-no-nonces>) with $#O#B = #O$.
So, if we take any $#vv$ in $sol(#C)$, the chances are very high that it does not lie
in the eigenspace of $#B$ for eigenvalue 1, so that $#B#vv != #vv$.
Specifically, we can take any input $(x, y)$ to #P with $x != y$
and use #P to compute the corresponding #vv in $sol(#C)$ by setting $#vv = mat(x,y,E(x,x),E(y,y))^(tack.b)$.
Then by #ref(<conjecture-no-nonces-v2>), we get the second preimage $#I#B#vv = (y,x)$.
== A more interesting permutation attack
Let $#P (x , y) = H(x) + H(y + H(x))$.
The algebraic representation has two constraints:
#align(center)[
#grid(
columns: 3,
gutter: 2em,
[$#C = { mat(1,0,0,0) |-> mat(0,0,1,0), quad mat(0,1,1,0) |-> mat(0,0,0,1) }$],
[$#I = mat(1,0,0,0; 0,1,0,0)$],
[$#O = mat(0,0,1,1)$],
)
]
The only possible non-identity basis change leaving #C intact,
exchanges these two queries.
We can set up a system of 4 vector equations for #B so that the first query and answer row vector is exchanged with the second when we apply #B.
We find that this #B is the unique solution to the system of equations:
$ #B = mat(0, 1, 1, 0;1, 0, 0, -1;0, 0, 0, 1;0, 0, 1, 0) $
This #B leaves both #C and #O unchanged
(i.e. $#C#B = #C$ and $#O#B = #O$) so the conditions of the permutation attack are fulfilled.
An arbitrary vector from sol(#C) is for example
$#vv = mat(x,y,H(x),H(y+H(x)))^tack.b$ for some $x != y$ in #F.
It is the execution vector for the input $(x,y)$ to #P.
By applying #I#B to #vv we get the second preimage
$
#I #B #vv = (y + H(x), x - H(y + H(x))).
$
This is not just a permutation of the two inputs,
but a permutation between input and query base variables.
The combined symmetry in #C and #O enables this attack.
This symmetry and this attack are not obvious at first glance,
which makes the abstract formulation of the permutation attack from #ref(<conjecture-no-nonces-v2>) valuable.
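To make this concrete, here is an illustrative Python sketch (with a toy hash standing in for the random oracle; the identity holds for any choice of $H$) checking that the mapped input is a genuine second preimage:
```python
import hashlib

p = 2**61 - 1                                      # toy prime field

def H(x):                                          # toy stand-in for the random oracle
    return int(hashlib.sha256(str(x % p).encode()).hexdigest(), 16) % p

def P(x, y):                                       # P(x, y) = H(x) + H(y + H(x))
    return (H(x) + H((y + H(x)) % p)) % p

x, y = 42, 1337
x2 = (y + H(x)) % p                                # the second preimage I*B*v
y2 = (x - H((y + H(x)) % p)) % p
print(P(x, y) == P(x2, y2), (x, y) != (x2, y2))    # True True
```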
= Merkle-Damgård construction in Linicrypt
We want to see if the conjecture can correctly reproduce previous classifications of the PGV compression functions.
Linicrypt can explain most of the previous categorizations,
except for the Permutation attack and the 5 flawed backward attackable functions.
TODO add a summary of the last section of my master's thesis that goes into detail.
Now I will try to tackle these last missing functions with the help of #ref(<conjecture-no-nonces-v2>).
We use the variables $a,b,c,d,e,f in {0,1}$ as in @C:BlaRogShr02:
$
f(h,m) = E(c h + d m, e h + f m) + a h + b m
$
If we see this as a Linicrypt program, this is its canonical representation:
#align(center)[
#grid(
columns: 3,
gutter: 2em,
[$#I = mat(1,0,0;0,1,0)$],
[$#O = mat(a,b,1)$.],
[$#C = { (mat(e,f,0), mat(c,d,0), mat(0,0,1)) }$],
)
]
Now consider the Merkle-Damgård construction with 2 calls to $f$.
We will pass in the IV called $h_0$ as the first argument to the program.
#align(center + horizon)[
#grid(
columns: 2,
gutter: 2em,
algo(
// title: [$#P _sans("collapse")$], parameters: ($x$, $y$),
header: $underline(H^2_f (h_0;m_1,m_2))$,
line-numbers: false, inset: 0.7em, fill: none, block-align: left,
)[
$h_1 := f(h_0, m_1)$ \
$h_2 := f(h_1, m_2)$ \
return $h_2$
],
algo(
// title: [$#P _sans("collapse")$], parameters: ($x$, $y$),
header: $underline(H^2_f (h_0;m_1,m_2))$,
line-numbers: false, inset: 0.7em, fill: none, block-align: left,
)[
$y_1 := E(c h_0 + d m_1, e h_0 + f m_1)$ \
$h_1 := y_1 + a h_0 + b m_1$ \
$y_2
&:= E(c h_1 + d m_2, e h_1 + f m_2) \
&= E(c y_1 + c a h_0 + c b m_1 + d m_2, e y_1 + e a h_0 + e b m_1 + f m_2)
$ \
$h_2
&:= y_2 + a h_1 + b m_2 \
&:= y_2 + a y_1 + a^2 h_0 + a b m_1 + b m_2
$ \
return $h_2$
],
)
]
On the right I spelled out the Linicrypt program in the canonical basis $(h_0, m_1, m_2, y_1, y_2)$.
In this case, the canonical basis is not the most pretty basis to work in for this program.
If we were to add more calls to the compression functions the expressions get more and more complicated.
This is the algebraic representation of $H^2_f$ in this basis:
#align(center)[
#table(columns: 3, gutter: 2em,stroke:0pt,align:horizon,
[$#I = mat(
1,0,0,0,0;
0,1,0,0,0;
0,0,1,0,0;
)$],
[$#O = mat(1,0,0,0,0;a^2,a b,b,a,1)$.],
// [$#C#T = {
// (mat(e,f,0), mat(c,d,0), mat(0,0,1)),
// (mat(e,f,0), mat(c,d,0), mat(0,0,1))
// }$],
$#C = mat(delim:"{",gap:#1em,
(mat(e,f,0,0,0), mat(c,d,0,0,0), mat(0,0,0,1,0));
(mat(e a,e b,f,e,0), mat(c a,c b,d,c,0), mat(0,0,0,0,1));
)$
)
]
In this basis, it is very hard to see the originally very clean structure of the construction.
This is due to two issues.
First, we need to sort the basis variables in a way that mimics the structure of the construction.
This would be $(h_0, m_1, y_1, m_2, y_2)$.
In this basis (we switched $y_1$ and $m_2$) we have:
#align(center)[
#table(columns: 3, gutter: 2em,stroke:0pt,align:horizon,
[$#I = mat(
1,0,0,0,0;
0,1,0,0,0;
0,0,0,1,0;
)$],
[$#O = mat(1,0,0,0,0;a^2,a b,a,b,1)$.],
// [$#C#T = {
// (mat(e,f,0), mat(c,d,0), mat(0,0,1)),
// (mat(e,f,0), mat(c,d,0), mat(0,0,1))
// }$],
$#C = mat(delim:"{",
(mat(e,f,0,0,0), mat(c,d,0,0,0), mat(0,0,1,0,0));
(mat(e a,e b,e,f,0), mat(c a,c b,c,d,0), mat(0,0,0,0,1));
)$
)
]
Now we can start to see the pattern that these matrices follow.
But, because we are allowed to represent the Linicrypt program in any basis,
we can also just directly choose the basis $(h_0, m_1, h_1, m_2, h_2)$.
Here the basis change idea really shines,
because it helps to simplify the algebraic representation.
#align(center)[
#table(columns: 3, gutter: 2em,stroke:0pt,align:horizon,
[$#I = mat(
1,0,0,0,0;
0,1,0,0,0;
0,0,0,1,0;
)$],
[$#O = mat(1,0,0,0,0;0,0,0,0,1)$.],
// [$#C#T = {
// (mat(e,f,0), mat(c,d,0), mat(0,0,1)),
// (mat(e,f,0), mat(c,d,0), mat(0,0,1))
// }$],
$#C = mat(delim:"{",
(mat(e,f,0,0,0), mat(c,d,0,0,0), mat(-a,-b,1,0,0));
(mat(0,0,e,f,0), mat(0,0,c,d,0), mat(0,0,-a,-b,1));
)$
)
]
The pattern is now clear,
and we can easily see what the algebraic representation of $H^n_f$ would be like for $n$ arbitrarily high.
== Implementation of the permutation attack in SymPy
I used SymPy to find permutation matrices that attack these programs;
the code can be found at @semmel2024linicrypt.
I implemented only the permutation attack on $H^2_f$ for now.
That is, we set up all the equations so that $c_1$ and $c_2$ are switched when we apply #B.
$
c_1 #B = c_2 quad "and" quad c_2 #B = c_1
$
Because #C is an *unordered* set, any solution #vv to #C implies that #B#vv is a solution to #C as well.
If we add the constraints to #B that the output matrix is untouched:
$
#O #B = #O
$
and solve for such a #B with any linear constraints solver (here I use SymPy),
we can list out all the programs $H^2_f$ where such an attack is possible.
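A minimal SymPy sketch of this search, written here for illustration and not taken from the repository (it fixes the coefficients of $f_43$, i.e. $a,b,c,d,e,f = 1,0,1,1,0,0$, works in the basis $(h_0, m_1, h_1, m_2, h_2)$, and does not check that the resulting #B is invertible):
```python
from sympy import Matrix, symbols, solve

a, b, c, d, e, f = 1, 0, 1, 1, 0, 0                      # f_43(h, m) = E(h+m, 0) + h

c1 = [Matrix([[e, f, 0, 0, 0]]), Matrix([[c, d, 0, 0, 0]]), Matrix([[-a, -b, 1, 0, 0]])]
c2 = [Matrix([[0, 0, e, f, 0]]), Matrix([[0, 0, c, d, 0]]), Matrix([[0, 0, -a, -b, 1]])]
O  = Matrix([[1, 0, 0, 0, 0], [0, 0, 0, 0, 1]])

bs = symbols('b0:25')
B = Matrix(5, 5, bs)
eqs = []
for r1, r2 in zip(c1, c2):
    eqs += list(r1 * B - r2) + list(r2 * B - r1)         # C*B permutes the two constraints
eqs += list(O * B - O)                                   # and leaves the output fixed
sol = solve(eqs, bs, dict=True)
print(B.subs(sol[0]) if sol else "no permutation attack for this scheme")
```
Looping this over all 64 choices of $a, dots, f$ is what produces the counts discussed below.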
I tried to solve these equations by hand.
It looks like it should be possible to find a nice characterization for what $H^n_f$ are susceptible to this attack,
but I just failed to solve the linear equations parametrized by $a,b,c,d,e,f$.
The task is not to find a solution,
but to find the conditions on $a,b,c,d,e,f$ such that there is either no solution,
one solution or many solutions.
There are 12 such programs.
As a sanity check: The 5 programs that are marked with (P) in @C:PreGovVan93 and with (f) in @C:BlaRogShr02 are contained in these 12.
These are the permutation attacks where one can switch $m_1$ with $m_2$ in the input and get the same output.
But the other seven additional permutation attacks can be broken down like this:
+ 3 degenerate programs (2 of type (-) and (a) and one of type (D) but it should be classified as (-))
+ 2 programs of type (D) and (b)
+ 2 programs of type (B) and (g)
#remark("On the degenerate programs")[
One of the programs is marked as (D) in @C:PreGovVan93,
but the compression function doesn't use the inputs independently,
so it is essentially susceptible to the same attack as when it just uses one of the two inputs.
The permutation attack matrix #B is not unique, there is a high dimensional subspace of possible matrices.
This is a phenomenon I don't fully understand,
  but roughly what happens is that the matrix #B maps one of the inputs to all of its equivalent inputs.
  As the inputs to a degenerate program are not used independently,
  there is a subspace of inputs that "looks the same" to the program.
  The matrix #B then maps one of these inputs to the rest of them.
  Interestingly, this is something of a "hack",
  as the matrix misuses some non-zero value from the solution vector to achieve this affine transformation with just a linear transformation.
I found multiple such examples during my master's thesis.
I think it would be super interesting to allow for affine transformations,
these would then easily capture degeneracy via this symmetry idea.
But this means that one has to extend Linicrypt to work with affine combinations of its variables,
not linear combinations.
But it seems that essentially the same theory can be built.
]
#remark("On the programs of type (D) and (b)")[
These are not marked as (P) in @C:PreGovVan93 because the authors specified an order of attacks,
and (D) is higher up the list than (P).
]
#remark("New fatal attacks on the programs of type (B) and (g)")[
This is what I find most interesting!
The paper @C:BlaRogShr02 described new 5 fatal attacks on the schemes previously classified as (B).
These attacks all work in the same way,
i.e. they cause two independent queries to $E$ to collapse.
The algorithmic implementation of #ref(<conjecture-no-nonces-v2>) found a *different* attack on two of them.
This attack instead causes the two queries to happen in reversed order.
By the conjecture, we get a single second-preimage to almost every possible input.
]
== New attack on 2 of the 5 (B) and (g) labeled schemes
The two compression functions of type (B) and (g) which are susceptible to a permutation attack are:
$
f_43 (h,m) &= E(h+m, 0) + h \
f_59 (h,m) &= E(h+m, h+m) + h
$
As the attack is the same for both, let $f$ be any of those two in the following.
The basis change matrix with the properties for the permutation attack ($c_1 #B = c_2, c_2#B = c_1, #O#B=#O$) is for $H^2_f$:
$
#B = mat(
1,0, 0,0, 0;
-1,0, 1,1, 0;
1,0,-1,0, 1;
0,1, 1,0,-1;
0,0, 0,0, 1;
)
$
Let $(h_0, m_1, m_2)$ be any input to $H^2_f$.
The conjecture says it should have a second preimage:
$
(h'_0, m'_1, m'_2) := (h_0, -h_0 + h_1 + m_2, m_1 + h_1 - h_2)
$
Note, that $h'_0 = h_0$ is a side effect of putting this IV in the input and output of $H^2_f$.
Here we do the lengthy calculation for $f_43$ to convince ourselves that it is indeed a second preimage.
Everything cancels out and we get the same output.
#algo(
header: $underline(H^2_f_43 (h_0, -h_0 + h_1 + m_2, m_1 + h_1 - h_2))$,
line-numbers: false,
inset: 0.7em,
fill: none,
block-align: center,
row-gutter: 14pt,
)[
$y'_1
&:= &E(h'_0 + m'_1, 0) \
  &= &E(h_0 - h_0 + h_1 + m_2,0)\
  &= &E(h_1 + m_2,0) = y_2$ \
$h'_1
&:= &y'_1 + h'_0 = &y_2 + h_0$ \
$y'_2
&:= &E(h'_1 + m'_2, 0) \
&= &E(y_2 + h_0 + m_1 + h_1 - h_2, 0)\
&= &E(y_2 + h_0 + m_1 - y_2, 0)\
&= &E(h_0 + m_1, 0) = y_1$ \
$h'_2
&:= &y'_2 + h'_1 \
&= &y_1 + y_2 + h_0 = &h_2$ \
return $h'_2$
]
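The same verification can be run numerically; this is an illustrative Python sketch in which the toy cipher and the modulus are assumptions, and the cancellation goes through for any $E$:
```python
import hashlib

p = 2**61 - 1                                      # toy prime field

def E(k, x):                                       # toy stand-in for the ideal cipher
    return int(hashlib.sha256(f"{k}|{x}".encode()).hexdigest(), 16) % p

def f43(h, m):                                     # f_43(h, m) = E(h+m, 0) + h
    return (E((h + m) % p, 0) + h) % p

h0, m1, m2 = 7, 1234, 98765
h1 = f43(h0, m1)
h2 = f43(h1, m2)
m1p = (-h0 + h1 + m2) % p                          # the second preimage produced by B
m2p = (m1 + h1 - h2) % p
print(f43(f43(h0, m1), m2) == f43(f43(h0, m1p), m2p))   # True
print((m1, m2) != (m1p, m2p))                            # True for almost every input
```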
Now, what are these second preimages? When are they proper second preimages, and not the same preimage again?
If we solve this equation for $m_1$ and $m_2$
$
(h_0, m_1, m_2) := (h_0, -h_0 + h_1 + m_2, m_1 + h_1 - h_2),
$
we get exactly the collisions that @C:BlaRogShr02 found.
They found that all the inputs of this form
$
(h_0, h_0 + c, E(c, c) + h_0 + c),
$
for $c in #F$ arbitrary,
will collapse the queries and lead $H^2_f_43$ to output $h_0$.
That leads to the following conclusion,
which is still very puzzling to me:
For the program $H^2_f_43$ every input is either:
+ In the set of collisions found by @C:BlaRogShr02,
+ or it has a single second preimage generated by the permutation matrix #B.
This means we broke second-preimage resistance for every input now,
not only for the inputs in the specific subspace described by the attack in @C:BlaRogShr02.
But there is no freedom in choosing the second preimage for $H^2_f$.
This also hints towards some missing deeper theory that explains this.
My guess is that these things are explained by something like the orbit and stabilizer from some kind of representation.
And I hope that by formulating the Linicrypt theory in some way making use of the language from group actions and representations
we can
- simplify the attack proofs,
- simplify the formulation of all possible attacks (i.e. optimize the conjecture),
- better understand the core of what a Linicrypt program is
- and hopefully find an easy proof for the security side.
=== More inputs to $H$
I checked with the SymPy program that this attack holds up for dimensions up to 6 (at dimension 6, checking all combinations of $a,b,c,d,e,f$ already takes a couple of seconds).
The attack is a permutation of the $n$ constraints like this:
$
c_1 -> c_2 -> ... -> c_n -> c_1,
$
where the arrows represent the application of the permutation matrix #B.
Therefore each input $#i _1$ to $H^n_f_43$ has $n-1$ second preimages described by:
$
#i _2 = #B^1#i _1 \
#i _3 = #B^2#i _1 \
... \
#i _n = #B^(n-1)#i _1, quad #B^n#i _1 = #i _1 \
$
= Security Side
== Ideas for the proof
Just some random notes.
General proof idea of #ref(<conjecture-no-nonces-v2>):
- (a), (b) or (c) $=>$ attack is done
- Reverse: Assume adversary finds collisions $#vv != #vv'$ in sol(#C) under $#O$ ($#O#vv = #O#vv'$),
then (a), (b) or (c)
We have:
- Protocol of $#Att$ interacting with oracle $H$ (or $E$ or $Pi$) with $N$ queries
- From #vv and #vv' and the protocol we can extract $T, T': #C -> [N]$ when each constraint was determined.
- We define $#fin (c) = sans("max")(#Ti (c), #Ti' (c))$ for each $c$ in #C.
- We define #Cfix to be the maximal subset such that $#Ti = #Ti'$ on #Cfix and $#Ccs = #C \\ #Cfix$
=== Base case
$#Ti$ and $#Ti'$ are each injective and $#Ti (#Ccs) sect #Ti' (#Ccs) = {}$
This is the case that they have in the unique nonces paper.
Then $#fin$ is injective.
=== Case: Collapse of queries
==== Easy case first: simultaneous collapse
Assume $T$ and $T'$ is not injective,
so there is a $i != j$ such that $T(c_i) = T(c_j)$ and $T'(c_i) = T'(c_j)$.
By definition of $T$ and $T'$ we have $c_i #vv = c_j #vv$ and $c_i#vv' = c_j#vv'$.
Here $c#vv$ just means $(#Q#vv, #a#vv)$ in the random oracle case.
Now I want to find an injective matrix #T in $#F^(d' times d)$
through which $c_i$ and $c_j$ collapse.
And the $#P#T$ (meaning $(#I#T, #O#T, #C#T)$) should then be a
pseudo-algebraic representation for which we can build an attack by using $#Att$.
The natural injective matrix $#T$ which collapses $c_i$ and $c_j$
is a matrix for which the columns are a basis of $ker(c_i - c_j)$.
This matrix is injective because the columns for a basis of the subspace.
In the following we use the basis $#vv _1, ..., #vv _d'$.
The previous idea of taking #vv and $#vv'$ as the columns is a much smaller matrix in general.
The key with this #T is that we can easily translate the attack on $#P$ to an attack on $#P#T$.
First, it collapses $c_i$ and $c_j$:
$
c_i#T = c_i mat(#vv _1, ..., #vv _d') = c_j mat(#vv _1, ..., #vv _d') = c_j #T
$
So $#C#T$ has at least one constraint less, as these two have collapsed.
Also, #vv and $#vv'$ are in the image of #T, because its columns form a basis of $ker(c_i - c_j)$.
So there are preimages $ww$ and $ww'$ to $vv$ and $vv'$ means by #ref(<lemma-sol-commutes>)
$
ww in sol(#C#T) <==> #T ww in sol(#C),
$
that $#ww$ and $#ww'$ are in $sol(#C#T)$.
The reason for finding these
Now we have reduced the attack $#Att$ to an attack on $#P#T$:
- $#ww != #ww'$ in $#F^d'$ are solutions to $#C#T$
- Also $#O#T#ww = #O#T#ww'$
- We can apply the conjecture for the lower dimensional case $#P#T$
The key thing that is missing here is that $#P#T$ should be an algebraic representation up to basis change
of some program, so that we can apply the conjecture for a lower dimension by induction.
Key issue:
- $#C#T$ needs to also be solvable (for the examples it always is)
Ideas:
- If $#C#T$ is not solvable, then it would have been hard for the attacker to collapse those two queries.
- This only works if we look at the whole set of collapsed queries.
It might be hard to collapse $c_i$ and $c_j$ without collapsing some other queries,
but easy once you do.
- My guess nr 1: The pairs of constraints that $#Att$ could collapse need to be "independent" (to be defined).
- Guess nr 2: When they are independent there would also exist a matrix $#B$ permuting these queries.
So maybe that is a line of attack.
Let's assume $#C#T$ is not solvable.
*New idea*:
But the attacker has already computed the states $#ww$ and $#ww'$ in $sol(#C#T)$.
If we can prove that it is hard to find solutions to unsolvable constraints,
then we can use that to complete this step of the proof.
== Unsolvability of constraints
I want to reformulate the definition of solvable constraints to make it nicer to work with.
#definition("Solvable constraints")[
<def_solvable>
Let $#C$ be a finite set of constraints of dimension $d$,
and let $#Fixing _0$ be a subspace of #Vd.
$#C$ is #strong[solvable fixing] $#Fixing _0$ if there exists an ordering
$(c_1, ... , c_n)$ of $#C$ such that the condition below is fulfilled.
We define for all $i = 1 , ... , n$ the spaces $Fixing_i = Fixing_(i-1) + span(c_i)$.
Solvability condition: $aa_i in.not Fixing_(i-1) + span(#Q _i) quad$ where $(QQ_i, aa_i) := c_i$
]
We call $(c_1 , ... , c_n)$ a solution ordering of $#C$ fixing $Fixing_0$.
If $Fixing_0 = {0}$ we just say $CC$ is solvable, dropping the "fixing" notation.
In the ideal cipher model,
we only have to change the solvability condition.
It will allow either $mat(kk; xx)$ or $mat(kk; yy)$ to be the query $QQ$.
#definition("Deterministically solvable")[
<def_det_solvable>
Let $#C$ be a solvable set of constraints with solution ordering $(c_1, ..., c_n)$.
$#C$ is #strong[deterministically solvable fixing] $Fixing_0$ if $Vd = Fixing_0 plus.circle #span[$aa_1, ..., aa_n$]$.
In other words, we require $d = dim(Fixing_0) + n$ in addition to the solvability condition.
]
This dimension condition forces $span(QQ_i) subset.eq Fixing_(i-1)$,
which is the condition we previously used in the definition.
Deterministically solvable is closest to the definition of collision structure in @TCC:McQSwoRos19.
Collision structure is more complicated because it also includes splitting up the constraints into a fixed part and a deterministically solvable part.
Also, deterministically solvable constraints are in a one-to-one relationship with Linicrypt programs,
up to basis change.
That means every Linicrypt program has a deterministically solvable set of constraints fixing $Fixing_0 = rowsp(II)$.
Conversely, for every solvable set of constraints,
there exists a basis in which the set of constraints is the algebraic representation of a Linicrypt program.
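As a small illustration of these definitions, here is a brute-force Python sketch (written for this note, not part of any referenced implementation; it works over the rationals instead of a finite field and tries all orderings, so it is only meant for tiny examples):
```python
from itertools import permutations
from sympy import Matrix

def in_span(rows, vec):
    """Is `vec` in the row span of `rows`?"""
    if not rows:
        return all(v == 0 for v in vec)
    M = Matrix.vstack(*rows)
    return M.rank() == Matrix.vstack(M, vec).rank()

def solution_ordering(constraints, fixing=()):
    """Try every ordering of C and check the solvability condition."""
    for order in permutations(constraints):
        F, ok = list(fixing), True
        for Q, a in order:                  # a constraint is (list of query rows, answer row)
            if in_span(F + list(Q), a):     # a_i must lie outside F_{i-1} + span(Q_i)
                ok = False
                break
            F += list(Q) + [a]              # F_i = F_{i-1} + span(c_i)
        if ok:
            return order
    return None                             # C is not solvable fixing `fixing`

q = Matrix([[1]])
print(solution_ordering([([q], q)]))        # None: the "point loop" {q -> q} is unsolvable
```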
=== Solvable security game
Now we can define a security game for finding solutions to a set of constraints #CC.
The adversary #Att gets access to the constraints $CC$ of dimension $d$,
a description of the space $Fixing_0$ and the oracle $H$.
Then the game randomly samples a #ii in #Vp and passes it to #Att.
This vector represents the input to a Linicrypt program.
It wins the game by outputting a $vv in sol(CC)$ with $vv - ii in ker(Fixing_0)$.
The probability of #Att winning this game is written as $SolAdv[Att, CC, Fixing_0]$.
A useful fact is that we can avoid working with $Fixing_0$ because of the following proposition.
#proposition("Collapse of input")[
Let #C be a set of constraints of dimension $d$ and $Fixing_0$ a subspace of #Vd.
We define the embedding map $LL: ker(Fixing_0) arrow.hook Vp$.
Then the following are equivalent:
1. #C is solvable fixing $Fixing_0$
2. $|CC LL| = |CC|$ and $CC LL$ is solvable (fixing ${0}$)
]
In Linicrypt program terms,
this proposition relates a Linicrypt program with its corresponding input-less Linicrypt program where all inputs are set to 0.
#proof[
We define #C, $Fixing_0$, and $LL$ as in the proposition statement.
#remark[
The linear map $LL$ induces corresponding map on the dual spaces $LL^*: Vd -> (ker(Fixing_0)^*)$.
With a bit of abuse of notation, we can extend this map $LL^*$ to act on constraints.
Constraints are tuples of matrices.
The rows of the matrices are in $Vd$,
so we can let $LL^*$ act on these structures element-wise.
Then $LL^* ((QQ, aa)) = (QQ LL, aa LL)$.
TODO: This is useful in multiple places, so it should be discussed outside this proof.
]
We will first prove a useful fact.
Let $CC = (c_1, ..., c_n)$ be an ordering and $CC LL = (c_1 LL, ..., c_n LL)$ the corresponding ordering.
Regardless of whether the solvability condition is fulfilled,
we can define the sequence of subspaces $Fixing_i$ from the ordering of $CC$ and $Fixing'_i$ from the ordering of $CC LL$.
We set $Fixing'_0 = LL^*(Fixing_0) = {0}$.
Then we have $Fixing'_i = LL^*(Fixing_i)$ by the following inductive argument:
$
Fixing'_i &= Fixing'_(i-1) + span(c_i LL) \
&= LL^*(Fixing_(i-1)) + span(c_i LL) \
&= LL^*(Fixing_(i-1) + span(c_i)) \
&= LL^*(Fixing_i)
$
We first assume $CC LL$ has a solution ordering $CC LL = (c_1 LL, ..., c_n LL)$.
This is a well-defined ordering because $LL$ didn't collapse any constraints by $|CC LL| = |CC|$.
// We can view this as a bijective map $phi: CC LL <-> [n]$.
// Take the induced map $LL^*: CC -> CC LL$.
// Because of $|CC LL| = |CC|$ this is a bijection.
// When we take the concatenation $LL^* compose phi$ we get an ordering of $CC$,
// i.e. $CC = (c_1, ..., c_n)$.
This gives us the ordering $CC = (c_1, ..., c_n)$.
Assume towards a contradiction that there is an $i$ with $aa_i in Fixing_(i-1) + span(QQ_i)$.
We can apply $LL^*$ to this equation to get
$
aa_i LL in Fixing'_(i-1) + span(QQ_i LL),
$
a contradiction with the assumed solution ordering of $CC LL$.
Now we prove the reverse direction and assume $CC$ is solvable fixing $Fixing_0$ with ordering $(c_1, ..., c_n)$.
First, we show that $|CC| = |CC LL|$.
Assume $c_i LL = c_j LL$ for an $i <j$.
This implies in particular that $0 = (aa_i - aa_j)LL ww$ for all $ww in ker(Fixing_0)$.
Therefore $ker(Fixing_0) subset.eq ker(aa_i - aa_j)$.
It follows that $aa_i - aa_j in Fixing_0$ and $aa_j in Fixing_0 + span(aa_i)$.
This directly contradicts the solvability condition $aa_j in.not Fixing_(j-1) + span(QQ_j)$, because $Fixing_0 + span(aa_i) subset.eq Fixing_(j-1)$.
Now that we have $|CC| = |CC LL|$ we get a well-defined ordering $CC LL = (c_1 LL, ..., c_n LL)$.
Let us assume this is not a solution ordering and we have for some $i$ that
$aa_i LL$ is contained in $Fixing'_(i-1) + span(QQ_i LL)$.
It follows that $LL^*(aa_i) = LL^*(bb)$ for some $ bb in Fixing_(i-1) + span(QQ_i)$.
As before, we can move the equations around to get this sequence of implications:
$
& (aa_i - bb)LL ww = 0 quad "for any" quad ww in ker(Fixing_0) \
==> & ker(Fixing_0) subset.eq ker(aa_i - bb) \
==> & aa_i - bb in Fixing_0 \
==> & aa_i in Fixing_(i-1) + span(QQ_i)
$
The last equation would be a contradiction to the solution ordering $(c_1, ..., c_n)$.
]
This proposition is useful because we can then translate statements about solvability fixing ${0}$ to the more general version.
In some sense,
we only need to understand input-less Linicrypt programs to understand all Linicrypt programs.
#theorem("Unsolvable constraints -- wrong")[
Let #C be a set of constraints of dimension $d$ which are not solvable.
Every program #Att making $N$ request to the oracle
has a winning probability bounded by
$
SolAdv[Att, CC] < N / (|#F|).
$
]
#remark[
This is why I thought it could work:
The key idea of why this should work is that the condition of being unsolvable
is one of the form "vector is contained in subspace".
This cannot be broken by a linear map, even when it is not injective.
The vector after the mapping stays in the subspace after the mapping.
*UPDATE:* Yes, but, if the linear map collapses the problematic constraints,
then it doesn't matter that it stays in the subspace.
]
#remark[
*Good news and bad news.*
Bad news:
It does not work like this.
Counterexample:
$
CC = {mat(1,0,0) |-> mat(0,0,1), mat(0,1,0) |-> mat(0,0,1)}
$
is unsolvable.
But with $LL = mat(1,0; 1,0; 0,1)$,
the set $CC LL = { mat(1,0) |-> mat(0,1)}$ is solvable.
So solutions of $CC LL$ can be mapped to solutions of $CC$,
i.e. vectors of the form $mat(x;x;H(x))$ for $x in FF$.
The good news is, that it means we still have room for an NP problem.
*Idea to try to save it:*
Call a set of constraints $CC$ completely unsolvable if $CC LL$ is unsolvable for all linear maps $LL$.
Then we might be able to prove the Theorem for completely unsolvable sets.
This is surprisingly similar in structure to the original conjecture.
Caveat: This means we still need at least an algorithm for determining if a set of constraints
is completely unsolvable.
I would guess this problem is then NP-hard.
]
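The counterexample can also be checked mechanically; this is an illustrative Python sketch with a toy hash, added here and not part of the referenced material:
```python
import hashlib

p = 2**61 - 1

def H(x):                                   # toy stand-in for the random oracle
    return int(hashlib.sha256(str(x).encode()).hexdigest(), 16) % p

def solves(v):                              # v in sol(CC) for CC from the remark above
    x, y, z = v
    return z == H(x) and z == H(y)          # constraints (1,0,0)->(0,0,1) and (0,1,0)->(0,0,1)

x = 31337
print(solves((x, x, H(x))))                 # True: a solution of CC*L, pushed forward by L
print(solves((x, x + 1, H(x))))             # False (unless H happens to collide)
```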
#definition("Completely unsolvable")[
A set of constraints $CC$ is called *completely unsolvable* if $CC LL$ is unsolvable for every linear map $LL$.
]
The relevant maps are those which collapse constraints.
It is probably enough to consider only the maps $LL: ker(c_i - c_j) arrow.hook Vp$
recursively.
#theorem("Unsolvable constraints -- not yet useful")[
Let #C be a set of constraints of dimension $d$ which are completely unsolvable.
  Every program #Att making $N$ requests to the oracle
has a winning probability bounded by
$
SolAdv[Att, CC] < N / (|#F|).
$
]<theorem-unsolvable>
I want to try to refine this theorem further,
because like this it is not fully useful.
The issues are:
- In the proof of the conjecture, we have $CC LL$, which we want to be either solvable or very hard to solve
- If it is not solvable, it might still not be completely unsolvable as in the theorem
- What we want is for it to be solvable in a subspace (the one where a solution was found),
or to be difficult to solve in that subspace
We refine the solvability definition.
#definition("solvable -- fixing and outside")[
#C is solvable outside of a subspace $W$ of #Vp fixing a subspace #Fixing of #Vd
if there exists a linear map $f: #F^(d') -> Vp$ with $im(f) subset.eq.not W$ s.t.
$f^*(#C)$ is solvable fixing $f^*(Fixing)$.
]
If $W = {0}$ we will just say #C is solvable fixing #Fixing.
If $Fixing = {0}$ we will just say #C is solvable outside of $W$.
This language of being solvable outside of a subspace is useful in describing collision resistance.
There we are looking for solutions $vv$ and $vv'$ of constraints,
under the extra condition that $vv != vv'$.
This condition can be encoded with this new definition.
TODO more explicit.
Now we can define a security game for finding solutions to a set of constraints #CC.
The adversary #Att gets access to the constraints $CC$ of dimension $d$,
the subspaces $Fixing$ and $W$ of #Vd and #Vp respectively.
The adversary #Att also gets access to the oracle $H$ such that we can record its queries.
Then the game randomly samples a vector #ii in #Vp (representing the Linicrypt program input) and passes it to #Att.
It wins the game by outputting a $vv in sol(CC)$ which fulfills $vv - ii in ker(Fixing)$
and $vv in.not W$.
The probability of #Att winning this game is written as $SolAdv[Att, CC, Fixing, W]$.
#theorem("Unsolvable constraints V2")[
Let #C be a set of constraints of dimension $d$ and $W$ a subspace of #Vp.
Assume $CC$ is not solvable outside of $W$.
  Every program #Att making $N$ requests to the oracle
has a winning probability bounded by
$
SolAdv[Att, CC, {0}, W] < N / (|#F|).
$
]<theorem-unsolvable-2>
#sketch[
We record the queries by #Att in the function $T: #C -> [N]$ which maps
each constraint onto the time when it was determined by a call to $H$.
We assume #Att is successful,
and outputs a #vv in $sol(CC)$ with $vv in.not W$.
This $T$ might not be injective.
We attempt an inductive proof over $n$ the number of constraints in $CC$.
So we assume the theorem holds for all sets of constraints with fewer than $n$ constraints.
When $T$ is injective, we can do the core step of the main proof from @TCC:McQSwoRos19.
This is the one where we show the result of a call to $H$ was already determined beforehand,
via an equation where the left is randomly chosen,
and the right is a linear combination of known values.
This means that #Att was very lucky if we assume $H$ is a random oracle.
This equation can be derived from assuming #C is not solvable outside of $W$.
Because this means, in particular, #C is not solvable itself.
No matter what ordering of #C we choose,
for some $i$ the negated solvability condition implies a linear equation with $aa_i #vv$ on the left side,
and previously determined values on the right side.
Now assume $T$ is not injective.
Then let $c_i$ and $c_j$ be two different constraints that are determined at the same time,
i.e. $T(c_i) = T(c_j)$.
Then $vv$ is in $ker(c_i - c_j)$.
Also, because $vv in.not W$, we know that $ker(c_i - c_j)$ is not contained in $W$.
We can define the linear map $f: ker(c_i - c_j) arrow.hook Vp$ which is just the embedding.
So this map goes from a smaller state space to the state space of our constraints #C.
This map induces a map on the dual spaces $f^*: Vd -> ker(c_i - c_j)^*$,
i.e. a map acting on variables of a Linicrypt program, or here, the components of the constraints.
So we can use it to map our constraints to a different set of constraints of smaller dimension $f^*(CC) = CC f$.
Because the $i$'th and $j$'th constraint collapse under $f$, we have $|f^*(CC)| <= |CC| - 1$.
Because $vv$ is in the image of $f$ and outside of $W$,
the adversary #Att has thus found a solution $ww$ to $f^*(CC)$ where $f ww = vv$
and also $ww in.not f^(-1)(W)$.
But $f^*(CC) := CC f$ is not solvable outside of $f^(-1)(W)$.
Assume it was, i.e.
there is a $g: U -> ker(c_i - c_j)$ such that $CC f g$ is solvable and $im(g) subset.eq.not W$.
Then $f g$ is a map as in the definition of $CC$ being solvable outside of $W$.
We assumed this was not the case in the theorem statement.
By induction, we can apply the Theorem for $f^*(CC)$ and $f^(-1)(W)$ to get
$
SolAdv[Att,CC] <= SolAdv[Att, f^*(CC)] <= N / (|FF|).
$
#remark[
Here the actual factor on the right-hand side has to be different I think.
]
The base case for the induction is a set of constraints with just a single unsolvable constraint
$CC = {c} = {(QQ, aa)}$.
No matter the dimensionality of $c$,
this set is completely unsolvable.
This is because $aa in span(QQ)$ implies $aa LL in span(QQ LL)$ for any linear map $LL$.
For this singleton set any map $T: CC -> [N]$ is injective and we can use the original proof for that case.
]
Maybe we can use a general theorem like the Unsolvability theorem to replace the proofs
for collision structure and second preimage characterization.
The key step in the proof of Unsolvability is the same as in those proofs.
Let $#P$ be a Linicrypt program.
We can duplicate its algebraic representation such that the vectors are completely separate.
One copy has all zeros on the right of the row vectors,
and the other has all zeros on the left.
Then we can merge the algebraic representations,
by doing a union on the constraints,
adding the input spaces,
and concatenating the output matrix.
Call this program $#P _"join"$.
We can look at a map $f$ that has as its image the states in $FF^(2d)$ where the output
of $#P _"join"$ has both halves equal, i.e.:
$
OO_"join" = mat(OO_1; OO_2), quad Cjoin = CC_1 union CC_2, quad Fixing_"join" = Fixing_1 + Fixing_2 \
f: ker(OO _1 - OO _2) arrow.hook FF^(2d)
$
We define $Proj_1$ and $Proj_2$ to be the projections from $FF^(2d)$ onto the first $d$ coordinates and the last $d$ coordinates respectively.
So $Proj_1$ and $Proj_2$ recover the solutions to $CC$ from the first $d$ variables or the last $d$ variables respectively.
Then finding a $vv in sol(Cjoin f)$ with $vv in.not ker(Proj_1 - Proj_2)$ means finding a collision for $#P$.
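
To make this joining step concrete, the following standalone Python sketch builds the doubled constraint set from constraints given as pairs of query rows and an answer row. The list encoding and all names are assumptions made only for this illustration; this is not the code of the linked implementation.

```python
# Standalone sketch (illustration only): building the joined constraint set of P_join.
# A constraint is a pair (Q, a) of query rows and an answer row; rows are plain lists.
def pad(row, d, side):
    """Embed a length-d row into F^(2d) by zero-padding on the given side."""
    return row + [0] * d if side == "right" else [0] * d + row

def join_constraints(constraints, d):
    """One copy of the constraints acts on the first d coordinates, the other on the last d."""
    joined = []
    for Q, a in constraints:
        joined.append(([pad(q, d, "right") for q in Q], pad(a, d, "right")))
        joined.append(([pad(q, d, "left") for q in Q], pad(a, d, "left")))
    return joined

# Example: P(x) = H(x) has the single constraint (1, 0) |-> (0, 1), so d = 2.
for Q, a in join_constraints([([[1, 0]], [0, 1])], 2):
    print(Q, "|->", a)
# [[1, 0, 0, 0]] |-> [0, 1, 0, 0]
# [[0, 0, 1, 0]] |-> [0, 0, 0, 1]
```

The output matrices would be stacked and the input spaces added in the same coordinate-wise fashion.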
#corollary("Collisions in general")[
$Cjoin f$ is solvable outside of $ker(Proj_1 - Proj_2)$
is equivalent to $PP$ being susceptible to an easy attack against collision resistance.
]<corollary-cr>
Now that we have a formal definition of solutions outside of a subspace,
we can try to prove it.
#sketch[
Assume that #Att finds a solution easily.
Easy means with a higher probability than in the Unsolvability theorem.
Then that theorem gives us a map $g: U -> ker(OO_1 - OO_2)$ such that $Cjoin f g$
is solvable and $im(g) subset.eq.not ker(Proj_1 - Proj_2)$.
So a solution to $Cjoin f g$ is a solution to $Cjoin$ when mapped by $f g$.
Let $vv$ be such a solution to $Cjoin$ with $vv in.not ker(Proj_1 - Proj_2)$.
By construction then $Proj_1 vv$ and $Proj_2 vv$ are solutions to $CC$ with $Proj_1 vv != Proj_2 vv$.
On the reverse side, assume there is such a $g$ as above because $Cjoin f$ is solvable outside of $ker(Proj_1 - Proj_2)$.
Then computing solutions to $Cjoin$ takes at most $2n$ queries to $H$.
As before, a solution outside of $ker(Proj_1 - Proj_2)$ leads to a collision.
This means we have found an attack on collision resistance.
]
#remark[
In this proof, in the last step, I need to actually find a solution outside of $W$ just given the solution ordering.
There we need to be a bit careful.
Maybe the oracle $H$ is not random, and actually causes all solutions to lie on the subspace $W$ again.
But for a random $H$ the solution space is never contained in such a subspace.
]
=== TODOs this section
- Write down lots of examples to see how this works in all the special cases
- Prove that a solution ordering like the above leads to a case from the conjecture
- <NAME>
=== Examples
See [https://github.com/frederiksemmel/linicrypt] for examples in the ideal cipher model and in the random oracle model.
That repository contains
- An implementation of the CR characterization from #ref(<corollary-cr>)
- Building the Merkle Damgard construction from PGV compression functions
- It reproduces the CR and 2PR characterization from BRS, including the secure & insecure functions of type 'B'
- It confirms that the example $P(x,y) = H(H(x)) - H(y)$ is not CR but it is 2PR
It gives a natural attack characterization: Each left-right symmetric set partition of $Cjoin$ is an attack type.
These attack types overlap mostly with previous categorizations, and for the differences this Linicrypt characterization is arguably better.
TODO flesh out these ideas:
- All CR attacks on a Linicrypt program are given by considering the subspaces of $ker(OO_1 - OO_2)$
in which the constraints of $f^*(Cjoin)$ collapse.
For each subspace, if the
== Notes and ideas, in random order
- Second preimage resistance and collision resistance lose their relationship for unsolvable constraints.
We can find unsolvable constraints where it is easy to find a second solution, if we are given a solution.
But it is hard to find a solution in the first place.
This is due to the permutation attack, where the symmetry of the constraints leads to a symmetry in the set of solutions.
This is contrary to the solvable case, where finding second preimages is harder than finding collisions.
Note: For the unique nonces case @TCC:McQSwoRos19 found that both notions are equivalent.
It's interesting to see that in the more general no-nonces case they split, but one is stronger than the other,
and in the most general case they are unrelated.
- The set $sol(CC)$ has interesting structure.
For matrices permuting constraints $CC BB = CC$ we get a well defined map $BB: sol(CC) -> sol(CC)$.
For general matrices $LL: FF^(d') -> Vp$ for arbitrary $d'$ we get a map $LL: sol(CC LL) -> sol(CC)$.
It looks like the set of solutions contains "components",
where each component is the solution set of the "derived" set of constraints mapped linearly into $Vp$.
With derived constraints I mean the constraints that we can build by mapping them linearly.
I think the only interesting cases are when we map them such that some constraints are collapsed and we are left with a solvable set.
Then I would expect finding solutions to be easier (fewer constraints), and we can map them back.
- If the function $H$ allows for fixed points,
that opens another can of (interesting) worms.
Because then unsolvable sets become solvable in a limited way.
$CC = {mat(1) |-> mat(1)}$ is then solvable (fixing ${0}$) because
$sol(CC)$ are exactly the fixed points ${x | H(x) = x}$.
This set of solutions can be used to find solutions to every? unsolvable set.
E.g. $CC = {mat(1, 0) |-> mat(0,1), mat(0,1) |-> mat(1,0)}$ can be collapsed via
$LL = mat(1;1)$ to ${mat(1) |-> mat(1)}$.
Therefore solutions to the collapsed set, i.e. fixed points of $H$, map back to solutions of the unsolvable set under $LL$.
- Maybe it is cleaner to drop the idea of input matrix $II$ and of the fixing $Fixing_0$ space.
We could model Linicrypt input as just another set of constraints.
We already have random oracle constraints and ideal cipher constraints.
Now we introduce input constraints $c = ii$ for $ii$ in $Vd$.
They restrict the valid states of $PP$ in the dimension $ii$.
We instantiate them with a concrete input $i in FF$: $ii vv = i$.
This is structurally similar to the random oracle constraint,
which is instantiated with a concrete oracle $H$: $H(QQ vv) = aa vv$.
Benefits:
- The collapse of the input matrix is now handled by collapse of constraints.
- We allow extra freedom in the solution ordering which seems more general.
Previously we kind of required the inputs to $PP$ to be fixed first.
With lots of programs,
it would be OK to start computing constraints without knowing all the input.
This is the case for Merkle Damgard for example.
- Maybe we can simplify the solvable security game.
Remove fixing and outside parts.
Then maybe we can define that terminology
= Meeting Notes
== Meeting 28.08.2024
- Results from the experiments
- PGV BRS categorization
- Overview implementation, interesting parts
- Maximal attacks & set partitions
- Towards proof of original conjecture
- Issue with collapse of different constraints in left and right program
- Symmetric attacks, always present? Set partition join of attacks?
- Discuss possible applications of corollary:
- Better categorization of all PGV compression functions in MD construction
- Model the fixed point weakness of the random oracle from PGV -> Aren't all programs then vulnerable?
- Maybe some programs force collapse to $mat(0) |-> mat(0)$ ($H(0) = 0$) and some only $mat(1) |-> mat(1) $ ($H(x) = x$)
- Start a search for best compression function from ideal ciphers
== Meeting 11.09.2024
- No work these two weeks, only lots of excuses :)
- Timeline paper. October 3 not possible...
- Submission deadline Crypto 2025 in February seems good
- The formal conferences are TCC and Eurocrypt
- TCC deadline is in May probably
- Crypto is usually more real application focused, would need to find a really good application for that
= Next steps
== Attack side
=== Short term
- Try to model fixed point attacks like this.
Additionally to the solvability rules we add: $mat(xx) -> mat(xx)$ is solvable fixing $Fixing$ if $xx$ is not in $Fixing$
- Apply this to the PGV compression functions and see which are insecure
== Longer term
- derive a Linicrypt categorization of the PGV functions
== Security side
- Formalize proofs of Unsolvability and the corollary
= Clean Start
#remark[
Here I want to do a clean start, this should evolve to the paper text.
]
== Abstract
== New constructs for Linicrypt
The general direction we take is to generalize the Linicrypt structure and see where that leads.
Instead of defining what a Linicrypt program is, and then construct its algebraic representation,
we directly start with the algebraic structure.
Later we can map the structures back to Linicrypt programs.
This approach allows us to formally work with invalid Linicrypt programs,
e.g. programs that define the same variable twice forcing the value of different computations to be equal.
It turns out that cryptographic properties of a program #P can be formulated in terms of such invalid Linicrypt programs.
First we start with basic definitions and notation.
We will start with a $d$-dimensional vector space over a finite field #F which we write $Vp$.
This represents the states that a Linicrypt program #P with $d$ base variables can be in.
Its dual space #Vd will play an important role by representing the variables of a program.
Now we define the concept of an oracle constraint.
This definition can then be satisfied by different types of constraints like a random oracle constraint or an ideal cipher constraint.
#definition("Random oracle constraint")[
Let $H$ be an instance of a random oracle.
A random oracle constraint $c$ in $Vp$ for $H$ is a tuple $(qq_1, ..., qq_k, aa)$ where $qq_1, ..., qq_k$ and $aa$ are dual vectors in $Vd$.
We also write $qq_1, ..., qq_k |-> aa$ for it.
A vector $vv$ from $Vp$ is called a solution to $c$ if $H(qq_1 vv, ..., qq_k vv) = aa vv$.
In this case we also say $vv$ solves $c$.
The set of such vectors is denoted by $solH(c)$.
This defines the solution function for random oracle constraints $solH$ mapping constraints to subsets of $Vp$.
]
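
As a concrete illustration (not part of the formal development), the following standalone Python sketch encodes a random oracle constraint as a list of query rows plus an answer row over a small prime field standing in for #F, and checks membership in $sol(c)$. The toy oracle and all names are assumptions of this sketch.

```python
# Standalone sketch (illustration only): a random oracle constraint over a toy prime field.
P = 7  # small prime standing in for the field

def dot(row, v, p=P):
    """Apply a dual vector (row) to a state vector v over GF(p)."""
    return sum(r * x for r, x in zip(row, v)) % p

def solves(queries, answer, v, H, p=P):
    """Check v in sol(c) for the constraint c = (q_1, ..., q_k |-> a)."""
    return H(tuple(dot(q, v, p) for q in queries)) == dot(answer, v, p)

# Toy oracle (any fixed function works as a stand-in) and the constraint
# q = (1, 0) |-> a = (0, 1), i.e. H(v_1) = v_2 on a 2-dimensional state space.
H = lambda xs: (3 * sum(xs) + 1) % P
v = (2, H((2,)))
print(solves([[1, 0]], [0, 1], v, H))  # True: v solves the constraint
```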
TODO maybe think about adding visualizations.
If it is clear from context about what oracle and what state space we are talking about we will drop these terms from the notation.
#definition("Ideal cipher constraint")[
Let $E$ be an instance of an ideal cipher.
An ideal cipher constraint $c$ in $Vp$ for $E$ is a tuple $(xx, kk, yy)$ where $xx, kk$ and $yy$ are dual vectors in $Vd$.
We also write $xx <-->^(#move(dy: 1.7pt, kk)) yy$ for it.
A vector $vv$ from $Vp$ is called a solution to $c$ if $E(kk vv, xx vv) = yy vv$,
or equivalently $E^(-1)(kk vv, yy vv) = xx vv$.
In this case we also say $vv$ solves $c$.
The set of such vectors is denoted by $solE(c)$.
This defines the solution function for ideal cipher constraints $solE$ mapping constraints to subsets of $Vp$.
]
Assume we have a set of constraints $CC = {c_1, ..., c_n}$ where $c_i$ is a constraint for the oracle $Ora_i$.
Then we define $sol(CC)$ to be the vectors $v$ in $Vp$ which solve all constraints in $CC$, i.e.:
$
sol(CC) = sol_(Ora_1)(c_1) sect ... sect sol_(Ora_n)(c_n)
$
We define a new security game that works for any oracle model.
#definition("Solvability game")[
Let #CC be a set of constraints with solution function $sol$ over a family of fields $FF_lambda$.
Let $Fixing$ be a subset of $Vd$ and $W$ be a subset of $Vp$.
#CC is $(q, epsilon)$-unsolvable fixing #Fixing outside of $W$ if for all q-query adversaries $Att$,
$Pr[sans("SolGame")(CC, Fixing, W, Att, lambda)] <= epsilon$.
#algo(
// title: [$#P _sans("collapse")$], parameters: ($x$, $y$),
header: $underline(sans("SolGame")(CC, Fixing, W, Att, lambda))$,
line-numbers: false, inset: 0.7em, fill: none, block-align: left,
)[
instantiate an oracle #Ora \
$xx <- Vp$ \
$vv <- Att^Ora (lambda; xx)$ \
return $vv in sol(CC) and vv - xx in ker(Fixing) and vv in.not W$
]
]
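
The game itself is easy to phrase in code. The following standalone Python sketch, an illustration only, plays one round of the game against a supplied adversary over a toy prime field, with a lazily sampled oracle and the subspace $W$ given as a membership predicate; all names are assumptions of this sketch.

```python
# Standalone sketch (illustration only) of the solvability game over a toy prime field.
import random

P = 7

def dot(row, v, p=P):
    return sum(r * x for r, x in zip(row, v)) % p

def sol_game(constraints, fixing_rows, in_W, adversary, d, p=P):
    table = {}
    def H(xs):  # lazily sampled random oracle instance
        if xs not in table:
            table[xs] = random.randrange(p)
        return table[xs]
    x = tuple(random.randrange(p) for _ in range(d))  # random state handed to the adversary
    v = adversary(H, x)
    solves_all = all(H(tuple(dot(q, v, p) for q in Q)) == dot(a, v, p)
                     for Q, a in constraints)
    keeps_fixing = all(dot(f, [vi - xi for vi, xi in zip(v, x)], p) == 0
                       for f in fixing_rows)
    return solves_all and keeps_fixing and not in_W(v)

# A winning adversary for C = {(1,0) |-> (0,1)} fixing the first coordinate, with W = {0}:
att = lambda H, x: (x[0], H((x[0],)))
print(sol_game([([[1, 0]], [0, 1])], [[1, 0]], lambda v: all(c == 0 for c in v), att, 2))
# True, except in the unlucky case x_1 = 0 and H((0,)) = 0
```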
In order to win $sans("SolGame")$ the adversary has to find a state vector $vv$ in $Vp$ that satisfies 3 conditions:
1. It is a solution to the constraints $CC$ given an instance of the oracle
2. It has found this solution while keeping the values of the variables in $Fixing$ fixed
3. The solution lies outside of the subspace $W$
Condition 2. models the ability to specify the input of a Linicrypt program.
Condition 3. is useful when we look for specific solutions satisfying a linear inequality.
When we characterize collision resistance, we will write $vv != vv'$ in terms of a subspace.
We will use an example to illustrate the meaning of conditions 1 and 2.
Let $PP(x,y) = H(x) + H(y + H(x))$.
Explicitly:
#algo(
// title: [$#P _sans("collapse")$], parameters: ($x$, $y$),
header: $underline(PP(x, y))$,
line-numbers: false, inset: 0.7em, fill: none, block-align: left,
)[
$q_1 = x$ \
$a_1 = H(q_1)$ \
$q_2 = y + a_1$ \
$a_2 = H(q_2)$ \
return $a_1 + a_2$
]
It has 4 base variables: $x, y, a_1, a_2$.
If we construct the algebraic representation, the base variables (call them $v_1 = x, v_2 = y, v_3 = a_1, v_4 = a_2$) correspond to the dual vectors:
$
v_1 <-> vv_1 = mat(1,0,0,0) \
v_2 <-> vv_2 = mat(0,1,0,0) \
v_3 <-> vv_3 = mat(0,0,1,0) \
v_4 <-> vv_4 = mat(0,0,0,1) \
$
This program corresponds to a specific instantiation of the Solvability game,
where we set $CC = {qq_1 |-> aa_1, qq_2 |-> aa_2}$, $Fixing = span(vv_1\, vv_2)$ and $W = 0$.
$PP$ corresponds to a successful adversary to $SolGame$.
The game chooses $xx$ at random.
We input $(vv_1 xx, vv_2 xx)$ into $PP$ and get the base variables $vv = mat(v_1,v_2,v_3,v_4)^top$.
This $vv$ is in $sol(CC)$, so condition 1. is fulfilled.
TODO explicit in the conditions.
Also, $vv - xx$ is zero in its first two coordinates, so it lies in $ker(Fixing)$ and condition 2. is fulfilled as well.
TODO example from a Linicrypt program -> explain why we use #Fixing for input
Example of an inverse to a Linicrypt program.
TODO Connect $vv in sol(CC)$ to base vectors and Linicrypt program executions.
Maybe no example for $W != 0$ because this is collision resistance.
Notion: Sum of subspaces.
#lemma("solvability of random oracle constraint")[
Let $c = (qq_1, ..., qq_k, aa)$ be a random oracle constraint and $Fixing$ a subspace of $Vd$.
We say $c$ is solvable fixing $Fixing$ if $aa$ is not in $Fixing + span(qq_1\, ...\, qq_k)$.
If $c$ is solvable fixing $Fixing$ then there is a 1-query adversary $Att$ with
$
Pr[sans("SolGame")({c}, Fixing, 0, Att, lambda)] >= 1 - 1 / (|FF|)
$.
]
#proof[
We describe the program $Att$ and prove that it wins $SolGame$.
Define $Fixing'$ to be a $d-1$ dimensional subspace containing $Fixing + span(qq_1\, ...\, qq_k)$ but not $aa$.
TODO consider using this
$Fixing' = Vd - aa$.
This is possible because $aa$ is not in $Fixing + span(qq_1\, ...\, qq_k)$.
We choose a basis of $Fixing'$ and we call it
$vv_1, ..., vv_(d-1)$.
Then we define the matrix #BB via the equation
$
BB^(-1) = mat(vv_1; ...; vv_(d-1); aa).
$
The matrix on the right is full rank and invertible, therefore $BB$ is well defined this way.
The following Linicrypt program is the adversary
#algo(
// title: [$#P _sans("collapse")$], parameters: ($x$, $y$),
header: $underline(Att^H (lambda; xx))$,
line-numbers: false, inset: 0.7em, fill: none, block-align: left,
)[
$v_i := vv_i xx$ where $i=1,...,d-1$ \
$q_i := qq_i xx$ where $i=1,...,k$ \
$a := H(q_1, ..., q_k)$ \
return $BB mat(v_1; ...; v_(d-1); a)$
]
We will show that this $vv = BB mat(v_1; ...; v_(d-1); a)$ fulfills the conditions in #SolGame.
First we prove an auxiliary statement.
Let $bold(alpha) in Fixing + span(qq_1\, ... \, qq_k)$ be arbitrary.
TODO one more sentence on B and its property in third equation.
TODO think about separation Notation of variable from the value it takes in a particular execution.
Then we have:
$
bold(alpha) vv = sum_(i=1)^(d-1) lambda^i vv_i vv
= sum_(i=1)^(d-1) lambda^i vv_i BB mat(v_1; ...; v_(d-1); a)
= sum_(i=1)^(d-1) lambda^i v_i
= sum_(i=1)^(d-1) lambda^i vv_i xx
= bold(alpha) xx
$
This only works because of the structure of the basis that we have chosen: $vv_i$ is the $i$-th row of $BB^(-1)$, so $vv_i BB$ is the $i$-th standard unit row vector, which gives the third equality.
In words, this just means that for variables in $Fixing + span(qq_1\, ... \, qq_k)$ the state vectors $vv$ and $xx$ take the same values.
Now we check if the conditions from the game are fulfilled. Starting with condition 1.:
$
aa vv = aa BB mat(v_1; ...; v_(d-1); a) = a = H(q_1, ..., q_k) = H(
qq_1 xx, ..., qq_k xx
) = H(qq_1 vv, ..., qq_k vv)
$
Condition 2. is also fulfilled. Let $bold(alpha)$ be any dual vector in $Fixing$, then $bold(alpha) (vv - xx) = 0$.
Therefore $vv - xx$ is in $ker(Fixing)$.
Condition 3., in this case $vv != 0$,
is almost always fulfilled except if the adversary is extremely unlucky.
For that to happen, at the very least $a$ needs to be zero, which happens with probability $1/(|FF|)$.
]
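
For a concrete feel of this construction, here is a standalone Python sketch of the resulting 1-query adversary for the 2-dimensional constraint $qq |-> aa$ with $qq = mat(1, 0)$, $aa = mat(0, 1)$ and $Fixing = span(qq)$. The Gauss-Jordan inversion over a toy prime field and all names are assumptions made for this illustration only.

```python
# Standalone sketch (illustration only) of the 1-query adversary from the lemma.
P = 7

def mat_inv(M, p=P):
    """Gauss-Jordan inverse of an invertible square matrix over GF(p)."""
    n = len(M)
    A = [list(row) + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] % p != 0)  # assumes invertibility
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], -1, p)
        A[col] = [x * inv % p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] % p != 0:
                factor = A[r][col]
                A[r] = [(x - factor * y) % p for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

def row_dot(row, v, p=P):
    return sum(r * x for r, x in zip(row, v)) % p

basis = [[1, 0]]          # basis of the hyperplane F' containing Fixing + span(q)
a = [0, 1]                # the answer row, which lies outside of F'
B = mat_inv(basis + [a])  # B^{-1} has the basis rows and a as its rows

def attack(H, x):
    vals = [row_dot(b, x) for b in basis]  # the values v_i x
    q = row_dot([1, 0], x)                 # the query value q x
    y = vals + [H((q,))]
    return tuple(row_dot(row, y) for row in B)

oracle = lambda xs: (5 * sum(xs) + 2) % P  # any stand-in oracle
print(attack(oracle, (3, 6)))  # (3, H((3,))): solves the constraint while fixing q x
```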
TODO include $span(c_i)$ in the definition of the oracle constraint.
It could be that solving a constraint means fixing some other arbitrary variable,
but usually it is just the span of all the dual vectors in the constraint.
#corollary[
Let $#CC = {c_1, ..., c_n}$ be a set of constraints, ordered by their index.
Let $Fixing_0$ be a subspace of $Vd$,
and define $Fixing_i = Fixing_(i-1) + span(c_i)$.
We say $CC$ is solvable fixing $Fixing_0$ if $c_i$ is solvable fixing $Fixing_(i-1)$ for every $i=1,...,n$.
Then there exists an n-query adversary $Att$ with
$Pr[sans("SolGame")(CC, Fixing_0, 0, Att, lambda)] >= 1 - 1/ (|FF| ^ d)$.
]
TODO analyze condition 3. more precisely to get $|FF| ^d$.
#proof[
Because $c_i$ is solvable fixing $Fixing_(i-1)$, we get an adversary $Att_i$ for $i=1,...,n$ with $SolAdv[Att_i, {c_i}, Fixing_(i-1), 0] >= 1 - 1/(|FF|)$.
We construct the adversary
$Att = Att_n compose ... compose Att_1$.
Let $vv = Att(xx)$ and $vv_i = (Att_i compose ... compose Att_1)(xx)$ with $vv_0 = xx$.
Note that $vv_i$ is contained in $sol(c_i)$.
Also, $vv_i - vv_(i-1) in ker(Fixing_(i-1))$ and $Fixing_(i-1) supset.eq span(c_1) + ... + span(c_(i-1))$.
By induction we have $vv_i in sol(c_i) sect sol(c_(i-1)) sect ... sect sol(c_1) = sol({c_i, ..., c_1})$.
Because $Fixing_0 subset.eq Fixing_(i-1)$ we know that $vv_i - vv_(i-1) in ker(Fixing_0)$ for all $i=1,...,n$.
Therefore their sum is also contained in the subspace, so $vv_i - xx in ker(Fixing_0)$.
This means condition 2 is fulfilled.
Condition 3. is almost always fulfilled except if all the calls to the oracle return 0 by chance.
]
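
The solvability condition of this corollary amounts to a small linear-algebra procedure. The following standalone Python sketch checks it for one given ordering of the constraints over a toy prime field; the encoding of constraints as pairs of query rows and an answer row, and all names, are assumptions of this illustration.

```python
# Standalone sketch (illustration only): checking "solvable fixing Fixing_0" for a given order.
P = 7

def rank(rows, p=P):
    """Rank of a list of row vectors over GF(p), via incremental elimination."""
    basis = []
    for r in rows:
        r = [x % p for x in r]
        for b in basis:
            piv = next(i for i, x in enumerate(b) if x != 0)
            if r[piv]:
                factor = r[piv] * pow(b[piv], -1, p) % p
                r = [(x - factor * y) % p for x, y in zip(r, b)]
        if any(r):
            basis.append(r)
    return len(basis)

def in_span(v, rows, p=P):
    return rank(rows + [v], p) == rank(rows, p)

def solvable_fixing(constraints, fixing, p=P):
    """constraints: ordered list of (Q, a); fixing: rows spanning Fixing_0."""
    span_rows = list(fixing)
    for Q, a in constraints:
        if in_span(a, span_rows + list(Q), p):  # answer already determined: not solvable here
            return False
        span_rows += list(Q) + [a]              # Fixing_i = Fixing_{i-1} + span(c_i)
    return True

# Small example: H(v_1) = v_3 together with H(v_2) = v_3 on a 3-dimensional state space.
C = [([[1, 0, 0]], [0, 0, 1]), ([[0, 1, 0]], [0, 0, 1])]
print(solvable_fixing(C, []))      # False: the second answer row is already determined
print(solvable_fixing(C[:1], []))  # True
```

Note that this checks only the given ordering; other orderings of the same set can be tested in the same way.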
TODO Connect this corollary to collision structure.
#remark[
- This proof works for sets of constraints which are mixed from different oracle models.
- The program $Att$ can be expressed as a Linicrypt program. TODO expand a bit
- What we will later prove is that the reverse of the corollary is also true.
This means that any adversary for the $SolGame$ is actually nothing else than a Linicrypt program.
]
Remark for Zahra and Bruce: I find all of this up to here quite complicated for what it is supposed to say.
I am trying to formalize the idea that if $CC$ is solvable,
then we can find solutions via a step by step solving of the constraints.
And this step by step solving is basically a Linicrypt program.
Well... maybe further iteration makes it simpler.
== Mapping constraints between spaces
The previous constructs allow us to (more or less) naturally talk about multiple oracle models simultaneously.
Another benefit is that mappings between constraints are possible and offer further proof methods.
This means that we can relate the executions of different Linicrypt programs,
if one can establish a linear map between their algebraic representations.
We will use the dual map of a linear map a lot.
Assume we have a linear map $f: W -> Vp$.
This induces a natural map on the dual spaces going in the other direction
called $f^*: Vd -> W^*$.
It is defined by $
f^*(bold(alpha))(ww) = bold(alpha)f(ww)
$ for $bold(alpha)$ a dual vector in $Vd$ and $ww$ a vector in $W$.
These two maps have a natural interpretation in the context of Linicrypt programs.
Assume $Vp$ is the state space of a program.
The algebraic representation is a structure living in the dual space of $Vp$,
i.e. it is a structure composed of the row vectors which represent variables.
We can use $f^*$ to map the Linicrypt program to a program with state space $W$.
When we don't constrain the map $f$ at all,
this idea becomes very useful because it models the collapse of distinct calls to the oracle.
By abusing the notation a bit we will write $f^*(CC)$ to mean the set of constraints when we apply $f^*$ to all the components of each constraint.
Note that this set can be smaller than the original set.
Sometimes we will also just write $fs CC$.
#lemma("Linear maps of constraints")[
Let #C be a set of constraints of dimension $d$ and
let $f: W -> Vp$ be a linear map.
Then we have
$
#ww in sol(f^*CC) <==> f (ww) in sol(CC)
$
]<mapping-constraints>
#proof[
We first need to prove the case of a single constraint for any oracle model.
Let $c = qq_1, ..., qq_k |-> aa$ be a random oracle constraint.
$
vv in sol(fs c)
&<==> (fs aa) vv = H((fs qq_1) vv, ..., (fs qq_k) vv) \
&<==> aa ff vv = H(qq_1 ff vv, ..., qq_k ff vv) \
&<==> ff vv in sol(c)
$
A similar argument can be made for ideal cipher constraints.
The following equivalences prove the lemma:
$
vv in sol(fs CC)
&<==> vv in sol(c') text(" for all ") c' in fs CC \
&<==> vv in sol(fs c) text(" for all ") c in CC \
&<==> ff vv in sol(c) text(" for all ") c in CC \
&<==> ff vv in sol(CC)
$
]
This Lemma is more consequential than it looks.
It gives the key to understanding the structure of a set of constraints by analyzing the structure of a mapped version of the constraints.
By choosing the right maps, we can simplify the constraints.
This happens when we map a set of constraints that are not solvable to a solvable set.
A simple example is $CC = {mat(1, 0, 0) |-> mat(0,0,1), mat(0, 1, 0) |-> mat(0,0,1)}$.
This is not solvable.
But applying the linear map with the matrix
$
M_f = mat(1,0; 1,0; 0,1)
$ it becomes solvable.
Solutions to $CC M_f$ map back to solutions of $CC$ under $f$.
Note that $sol(CC)$ also contains other solutions, i.e. corresponding to cases when $H(x) = H(y)$ for some $x != y$.
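
Concretely, applying $f^*$ is just right-multiplication of every row vector by the matrix of $f$. The following standalone Python sketch (illustration only, toy prime field, assumed names) reproduces the example above and shows the two constraints collapsing into one.

```python
# Standalone sketch (illustration only): mapping constraints with the dual map f^*.
P = 7

def row_times(row, M, p=P):
    """The row vector (row M) over GF(p); M has one row per coordinate of `row`."""
    return tuple(sum(row[i] * M[i][j] for i in range(len(row))) % p
                 for j in range(len(M[0])))

def map_constraints(constraints, M, p=P):
    return {(tuple(row_times(q, M, p) for q in Q), row_times(a, M, p))
            for Q, a in constraints}

# The example from the text: both constraints collapse to H(w_1) = w_2 under M_f.
C = [([[1, 0, 0]], [0, 0, 1]), ([[0, 1, 0]], [0, 0, 1])]
M_f = [[1, 0],
       [1, 0],
       [0, 1]]
print(map_constraints(C, M_f))  # {(((1, 0),), (0, 1))}
```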
We want to model the idea that, when a Linicrypt program is given random inputs, its state vector will be randomly distributed in the state space.
#definition("Proper random oracle constraints")[
Let $c$ and $c'$ be two random oracle constraints with $c != c'$.
The pair ${c, c'}$ is called proper if the query parts are different.
A set of constraints $CC$ is called proper if each pair constructed from it is proper.
]
#proposition("Solutions of proper constraints")[
Assume $CC$ is a proper set of constraints in $Vp$ and $W subset Vp$ is a subspace.
Then the probability, over the choice of the oracle, that $sol(CC)$ is contained in $W$ is at most $1 / (|FF|)$.
]
I will use the notation $iota_W$ for the inclusion map from a subspace $W$ into its containing space.
#corollary[
Let $#CC = {c_1, ..., c_n}$ be a set of constraints, ordered by their index.
Let $U$ be a subspace of $Vp$.
We say $CC$ is solvable outside of $U$ if
there exists a subspace $W$ of $Vp$ with $W subset.eq.not U$ and
$incWs(CC)$ is solvable (fixing 0).
If #CC is solvable outside of $U$ then there is an adversary #Att with
$
SolAdv[CC, 0, U, Att] >= 1 - 1/(|FF|).
$
]
#proof[
We describe the adversary $Att$ in terms of the adversary $Att_W$ that we get from
the statement of the corollary.
We are given by the game the value $xx in Vp$,
which doesn't really matter because we are considering the case $Fixing = 0$.
Choose a random $xx' in W$.
Assume $Att_W (xx')$ wins $SolGame[incWs CC, 0, 0]$ by outputting $ww in sol(incWs CC) subset.eq W$.
Because $incWs CC$ is proper,
the probability that $ww$ is in $U$ is $<= 1 / (|FF|)$.
TODO formalize the last sentence.
Our adversary $Att$ just outputs $incW (ww) = ww$.
By #ref(<mapping-constraints>) we have $ww = incW ww in sol(CC)$.
The second condition is void anyway: $ww - xx in ker(0) = Vp$.
The third condition $ww in.not U$ is almost always fulfilled.
]
Remark Zahra and Bruce: I need to figure out how to make this work while fixing $Fixing$.
Not necessary for the CR characterization, but for 2PR it is.
#theorem("Unsolvability")[
Let #CC be a set of $n$ constraints and $W$ a subspace of $Vp$.
Assume that there is an adversary $Att$ making at most $N$ oracle queries with
$
SolAdv[CC, 0, W] > N^n / (|FF|).
$
Then $CC$ is solvable outside of $W$.
]
#proof[
The strategy is to consider the event that #Att wins the game and
bound this probability from above assuming $CC$ is not solvable outside of $W$.
So we assume $Att(xx) = vv$ is a winning solution to $SolGame[CC, 0, W]$ and $Att$ made $N$ queries to an oracle.
Since we can observe the interaction of #Att with the oracles we can
observe its protocol $Q: {1, ..., N} -> union.big_(Ora) sans("Query")_Ora$.
Here $sans("Query")_Ora$ is the space of queries that can be made to the oracle $Ora$.
It consists of the values $c vv$ for all valid combinations of $Ora$-constraints $c$ and state vectors $vv$.
We can make some assumptions on $Q$ by modifying $Att$ slightly:
1. $Q$ is injective. This is achieved by adding memoization of the queries to $Att$.
2. All the queries $c vv$ are actually in $im(Q)$ for all $c in CC$.
We simply force $Att$ to actually make the queries $c vv$ before it outputs $vv$.
This means we can define the function
$T: CC -> {1, ..., N}$ by $T(c) = Q^(-1)(c vv)$.
It represents the query time at which the values for the constraint $c$ are determined.
In order to proceed with the proof,
we consider the event that the adversary wins and has chosen a particular timing function $T$.
We have
$
Pr[SolGame(CC, 0, W, Att)]
&= sum_T Pr[SolGame(CC, 0, W, Att) "and" T "is used"]
&> N^n / (|FF|)
$
There are $N^n$ possible mappings $CC -> {1, ..., N}$.
By the pigeonhole principle, for some specific $T: CC -> {1, ..., N}$
we must have $Pr[SolGame(CC, 0, W, Att) "and" T "is used"] > 1/ (|FF|)$.
Let us fix this $T$ for the following.
This function $T$ might not be injective, i.e. $Att$ could determine multiple constraints with a single query.
We therefore map the constraints to a set of constraints for which this doesn't happen.
Let
$
K := sect.big_(c,c' in C \ c vv = c' vv) ker(c - c').
$
This is a bit of abuse in notation which we clarify now.
If $c$ and $c'$ are constraints of different oracle type we set $ker(c-c') = Vp$.
Otherwise $ker(c - c') = {vv in Vp | c vv = c' vv}$.
TODO add what $c vv = c' vv$ means to the definition of $c$ (it's just component-wise equality).
Consider the embedding map $f: K -> Vp$. We first assert three statements:
1. $vv$ is in $K$
2. If $c vv = c' vv$ then $fs c = fs c'$
3. $K$ is not a subspace of $W$
Proof of 1.: This is true by definition.
Proof of 2.:
Let $ww in K$ be arbitrary; then $(fs c) ww = c ff ww = c' ff ww = (fs c') ww$.
Proof of 3.:
Because $Att$ has won $SolGame[CC, 0, W]$ we know that $vv in.not W$.
Together with $vv in K$ we have $K subset.eq.not W$.
We now define the map $T': fs CC -> {1, ..., N}$ by $T'(fs c) = Q^(-1)(fs c vv)$.
This is well defined because $vv$ is in $K$.
We can prove that it is injective.
$
T'(fs c) = T'(fs c')
&=> Q^(-1)(fs c vv) = Q^(-1)(fs c' vv) \
&=> fs c vv = fs c' vv quad text("because") Q text("is injective")\
&=> c vv = c' vv\
&=> fs c = fs c'
$
Because $T'$ is injective this defines an ordering on $fs CC = {c'_1, ..., c'_(n')}$.
We have assumed that $CC$ is not solvable outside of $W$.
That definition includes the map $f$ so there must be a constraint $c'_i$ which is
not solvable fixing $Fixing_(i-1) = span(c'_1\, ...\, c'_(i-1))$.
TODO how to generalize smoothly to ICM. We need another building block I think.
Assume $c'_i = qq_1, ..., qq_k |-> aa$.
At the time $T'(c'_i)$ the adversary first determines the query values $q_1 = qq_1 vv, ..., q_k = qq_k vv$,
and then gets back the result from this fresh query to the oracle.
But we have $aa in Fixing_(i-1) + span(qq_1\, ...\, qq_k)$,
so $aa vv$ has already been determined.
Therefore the probability that $H(q_1, ..., q_k) = aa vv$ is equal to $1 / (|FF|)$.
This contradicts the assumed success probability, so $CC$ must indeed be solvable outside of $W$.
]
|
|
https://github.com/schmidma/typst-workshop | https://raw.githubusercontent.com/schmidma/typst-workshop/main/presentation.typ | typst | Creative Commons Zero v1.0 Universal | #import "@preview/touying:0.4.2": *
#import "@preview/cetz:0.2.2"
#import "@preview/unify:0.6.0": num,qty,numrange,qtyrange
#import "@preview/ccicons:1.0.0": *
#let s = themes.metropolis.register(
aspect-ratio: "16-9"
)
#let s = (s.methods.info)(
self: s,
title: [Typst],
subtitle: [The Modern Alternative to LaTeX #cc #cc-zero],
author: [<NAME>],
)
#let (init, slides, touying-outline, alert) = utils.methods(s)
#show: init
#set text(font: "Hanken Grotesk", size: 20pt)
#set strong(delta: 100)
#set par(justify: true)
#show strong: alert
#set list(indent: 1em)
#let (slide, empty-slide, title-slide, new-section-slide, focus-slide) = utils.slides(s)
#show: slides
= What is Typst?
#slide[
#set align(center+horizon)
#align(center)[
"Typst is a new markup-based typesetting system for the sciences."
]
#v(2em)
#grid(
columns: 2,
gutter: 2em,
align(
center+horizon,
block(
clip: true,
radius: 10pt,
width: 7em,
image("./assets/typst_logo.png"),
),
),
[
#set align(left)
- /taɪpst/ i.e. 'Ty' like in Typesetting
      - alternative to LaTeX, Word, or Google Docs
      - built to be a pleasure to use
- powerful and easy to learn
- blazingly fast compile times
- free and Open Source (Apache 2.0)
],
)
]
== When to use Typst?
#grid(
columns: (1.5fr, 2fr),
[
#set align(center)
#image("./assets/ieee-large.png")
],
align(center,box[
#set align(left)
- markup for typesetting documents
- composing
- papers
- essays
- reports
- theses
- articles
- books
- ...
- takes text files and outputs PDFs
]),
)
== Why Typst?
Why should I use typst instead of
#block(inset: (left: 2em))[
Word:
- easy collaboration
- powerful templates that automatically format your work
- higher-quality typographical output
LaTeX:
- instant previews
- clear and understandable error messages
- familiar programming constructs
]
= How to use Typst?
== Using the Web App
#figure(
image("./assets/web-app.png"),
supplement: none,
caption: [https://typst.app],
)
== Using Tinymist in VSCode
#image("./assets/vscode.png")
== Using the Command Line Interface
- Downloading the latest release from GitHub, and update with
```sh
typst update
```
- Through package managers:
```sh
brew install typst
```
- With a Rust toolchain:
```sh
cargo install --locked typst-cli
```
#v(1em)
and then use
```sh
typst watch document.typ
```
== How can it be free?
#v(2em)
- it is Open Source, and it will stay that way
- WebApp has subscription plans with additional features
- typst on-premises: self-hosted version for organizations
- commercial support contracts
#align(center+bottom,
image("./assets/typst-pro-transparent.png", height: 8em)
)
#focus-slide[
#set text(size: 50pt)
Installation and Setup
#set text(size: 20pt)
#block[
#set align(left)
- WebApp: Sign up and log in
- VSCode: Install the extension (Tinymist)
- CLI: Install the latest release
]
]
= Typst Basics
#grid(
columns: (1fr, 1fr),
inset: 1em,
align(center, ```typst
When you begin typing words into a document, those words are automatically arranged into a visually coherent form known as typeset text.
```),
rect(
fill: gray.lighten(50%),
radius: 5pt,
inset: 0.4em,
rect(
fill: white,
inset: 0.4em,
)[
When you begin typing words into a document, those words are automatically arranged into a visually coherent form known as typeset text.
]
)
)
== Headers
#grid(
columns: (1fr, 1fr),
inset: 1em,
align(center, ```typst
= Introduction
When you begin typing words into a document, those words are automatically arranged into a visually coherent form known as typeset text.
```),
rect(
fill: gray.lighten(50%),
radius: 5pt,
inset: 0.4em,
rect(
fill: white,
inset: 0.4em,
)[
= Introduction
When you begin typing words into a document, those words are automatically arranged into a visually coherent form known as typeset text.
]
)
)
== Markup
#grid(
columns: (1fr, 1fr),
inset: 1em,
align(center, ```typst
Okay, let's move to _emphasis_ and *bold* text.
Markup syntax is generally similar to `AsciiDoc` (this was `raw` for monospace text!)
... and even "smart quotes" :)
```),
rect(
fill: gray.lighten(50%),
radius: 5pt,
inset: 0.4em,
rect(
fill: white,
inset: 0.4em,
)[
Okay, let's move to _emphasis_ and *bold* text.
Markup syntax is generally similar to `AsciiDoc` (this was `raw` for monospace text!)
... and even "smart quotes" :)
]
)
)
== Newlines
#grid(
columns: (1fr, 1fr),
inset: 1em,
align(center, ```typst
You can break \
line anywhere you \
want using the \\ symbol.
```),
rect(
fill: gray.lighten(50%),
radius: 5pt,
inset: 0.4em,
rect(
fill: white,
inset: 0.4em,
)[
You can break \
line anywhere you \
want using the \\ symbol.
]
)
)
== Lists
#grid(
columns: (1fr, 1fr),
inset: 1em,
align(center, ```typst
+ First
+ Second
+ Third
- First
- First indented
- Second
- Third
- Third indented
- even more
```),
align(center,rect(
fill: gray.lighten(50%),
radius: 5pt,
inset: 0.4em,
rect(
fill: white,
inset: 0.4em,
)[
#set align(left)
+ First
+ Second
+ Third
#set list(indent: 0pt)
- First
- First indented
- Second
- Third
- Third indented
- even more
]
))
)
== Mathematical Expressions
#grid(
columns: (1fr, 1fr),
inset: 1em,
align(center,```typst
$Q = rho A v + C / 2$
```),
align(center, rect(
fill: gray.lighten(50%),
radius: 5pt,
inset: 0.4em,
rect(
fill: white,
inset: 1em,
)[
#set align(left)
$Q = rho A v + C / 2$
]
))
)
== Mathematical Expressions
#grid(
columns: (1fr, 1fr),
inset: 1em,
align(center,```typst
$
7.32 beta +
sum_(i=0)^nabla
(Q_i (a_i - epsilon)) / 2
$
```),
align(center, rect(
fill: gray.lighten(50%),
radius: 5pt,
inset: 0.4em,
rect(
fill: white,
inset: 1em,
)[
#set align(left)
$ 7.32 beta +
sum_(i=0)^nabla
(Q_i (a_i - epsilon)) / 2 $
]
))
)
= Typst Advanced
== Functions
#grid(
columns: (1fr, 1fr),
inset: 1em,
align(center, ```typst
To go to scripting mode, type `#` and *some function name* after that. We will start with _something dull_:
#lorem(5)
```),
align(center, rect(
fill: gray.lighten(50%),
radius: 5pt,
inset: 0.4em,
rect(
fill: white,
inset: 0.4em,
)[
#set align(left)
To go to scripting mode, type `#` and *some function name*
after that. We will start with _something dull_:
#lorem(5)
]
))
)
== Content
#grid(
columns: (1fr, 1fr),
inset: 1em,
align(center, ```typst
The most "universal" type in Typst language is *content*. Everything you write in the document becomes content.
#[
But you can explicitly create it
with _scripting mode_.
In square brackets, you can use
any markup functions.
]
```),
align(center, rect(
fill: gray.lighten(50%),
radius: 5pt,
inset: 0.4em,
rect(
fill: white,
inset: 0.4em,
)[
#set align(left)
The most "universal" type in Typst language is *content*. Everything you write in the document becomes content.
#[
But you can explicitly create it
with _scripting mode_.
In square brackets, you can use
any markup functions.
]
]
))
)
== Arguments
#grid(
columns: (1fr, 1fr),
inset: 1em,
align(center, ```typst
You find the documentation in the #link("https://typst.app/docs/reference/", [Official Reference]).
#quote(
block: true,
attribution: "Typst Examples Book",
[
That's right, links, quotes and
lots of other document elements are
created with functions.
]
)
```),
align(center, rect(
fill: gray.lighten(50%),
radius: 5pt,
inset: 0.4em,
rect(
fill: white,
inset: 0.4em,
)[
#set align(left)
You find the documentation in the #link("https://typst.app/docs/reference/")[Official Reference].
#quote(
block: true,
attribution: "Typst Examples Book",
[
That's right, links, quotes and lots of
other document elements are created with functions.
]
)
]
))
)
== Figures, Captions, and References
#grid(
columns: (1fr, 1fr),
inset: 1em,
align(center, ```typst
Rivers as shown in @rivers are an important part of the earth's climate system.
#figure(
image(
"./assets/rivers.jpg",
width: 70%
),
caption: [
_River_ in a forest.
],
) <rivers>
```),
align(center, rect(
fill: gray.lighten(50%),
radius: 5pt,
inset: 0.4em,
rect(
fill: white,
inset: 0.4em,
)[
#set align(left)
Rivers as shown in @rivers are an important part of the earth's climate system.
#figure(
image("./assets/rivers.jpg", width: 70%),
caption: [
_River_ in a forest.
],
) <rivers>
]
))
)
== Bibliography
#grid(
columns: (1fr, 1fr),
inset: 1em,
align(center, ```typst
This is a citation to a paper by @johnson2022ai. Followed by a work of <NAME> @smith2023modern.
#bibliography("literature.yaml")
```),
rect(
fill: gray.lighten(50%),
radius: 5pt,
inset: 0.4em,
rect(
fill: white,
inset: 0.4em,
)[
#set text(size: 12pt)
This is a citation to a paper by @johnson2022ai. Followed by a work of <NAME> @smith2023modern.
#bibliography("literature.yaml")
]
)
)
== ... and so much more
#grid(
columns: (1fr, 1fr),
[
=== Tables
#block(inset: (left:1em))[
#set text(size: 14pt)
#table(
columns: (auto, auto, auto),
inset: 10pt,
align: horizon,
table.header(
[], [*Area*], [*Parameters*],
),
[🚗],
$ pi h (D^2 - d^2) / 4 $,
[
#set align(center)
$h$, $D$, $d$
],
)]
=== Code blocks
#block(inset: (left:1em))[
```python
def f(x):
return x**2
```
]
=== Units
#block(inset: (left:1em))[
$qty("1.3+1.2-0.3e3", "erg/cm^2/s", space: "#h(2mm)")$
]
=== Symbols
#block(inset: (left:1em))[
🖂 #sym.arrow.r #sym.gt.eq #emoji.face.halo
]
],
[
=== Drawings
#block(inset: (left:1em))[
#set text(size: 10pt)
#cetz.canvas(length: 3cm, {
import cetz.draw: *
scale(x: 0.5, y: 0.5)
set-style(
mark: (fill: black, scale: 2),
stroke: (thickness: 0.4pt, cap: "round"),
angle: (
radius: 0.3,
label-radius: .22,
fill: green.lighten(80%),
stroke: (paint: green.darken(50%))
),
content: (padding: 1pt)
)
grid((-1.5, -1.5), (1.4, 1.4), step: 0.5, stroke: gray + 0.2pt)
circle((0,0), radius: 1)
line((-1.5, 0), (1.5, 0), mark: (end: "stealth"))
content((), $ x $, anchor: "west")
line((0, -1.5), (0, 1.5), mark: (end: "stealth"))
content((), $ y $, anchor: "south")
for (x, ct) in ((-1, $ -1 $), (-0.5, $ -1/2 $), (1, $ 1 $)) {
line((x, 3pt), (x, -3pt))
content((), anchor: "north", ct)
}
for (y, ct) in ((-1, $ -1 $), (-0.5, $ -1/2 $), (0.5, $ 1/2 $), (1, $ 1 $)) {
line((3pt, y), (-3pt, y))
content((), anchor: "east", ct)
}
// Draw the green angle
cetz.angle.angle((0,0), (1,0), (1, calc.tan(30deg)),
label: text(green, [#sym.alpha]))
line((0,0), (1, calc.tan(30deg)))
set-style(stroke: (thickness: 1.2pt))
line((30deg, 1), ((), "|-", (0,0)), stroke: (paint: red), name: "sin")
content(("sin.start", 50%, "sin.end"), text(red)[$ sin alpha $])
line("sin.end", (0,0), stroke: (paint: blue), name: "cos")
content(("cos.start", 50%, "cos.end"), text(blue)[$ cos alpha $], anchor: "north")
line((1, 0), (1, calc.tan(30deg)), name: "tan", stroke: (paint: orange))
content("tan.end", $ text(#orange, tan alpha) = text(#red, sin alpha) / text(#blue, cos alpha) $, anchor: "west")
})
]
=== Plots
#block(inset: (left:1em))[
#cetz.canvas({
import cetz.draw: *
// Set up the transformation matrix
set-transform(cetz.matrix.transform-rotate-dir((1, 1, -1.3), (0, 1, .3)))
scale(x: 1, z: -0.666)
grid((0,-2), (8,2), stroke: gray + .5pt)
// Draw a sine wave on the xy plane
let wave(amplitude: 1, fill: none, phases: 2, scale: 8, samples: 100) = {
line(..(for x in range(0, samples + 1) {
let x = x / samples
let p = (2 * phases * calc.pi) * x
((x * scale, calc.sin(p) * amplitude),)
}), fill: fill)
let subdivs = 8
for phase in range(0, phases) {
let x = phase / phases
for div in range(1, subdivs + 1) {
let p = 2 * calc.pi * (div / subdivs)
let y = calc.sin(p) * amplitude
let x = x * scale + div / subdivs * scale / phases
line((x, 0), (x, y), stroke: rgb(0, 0, 0, 150) + .5pt)
}
}
}
group({
rotate(x: 90deg)
wave(amplitude: 1.6, fill: rgb(0, 0, 255, 50))
})
wave(amplitude: 1, fill: rgb(255, 0, 0, 50))
})]
=== ...and much more
],
)
== Useful Links
https://typst.app
https://typst.app/docs
https://sitandr.github.io/typst-examples-book/book/
https://typst.app/universe/
https://discord.gg/2uDybryKPe
https://github.com/qjcg/awesome-typst
https://collaborating.tuhh.de/es/ce/public/tuhh-typst
https://collaborating.tuhh.de/es/ce/public/templatesforreports
https://collaborating.tuhh.de/es/ce/public/ies-typst-phd-thesis-template
#focus-slide[
Feedback, Questions, and Ideas for Improvement?
]
|
https://github.com/Myriad-Dreamin/tinymist | https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/crates/tinymist-query/src/fixtures/type_check/confusing-name.typ | typst | Apache License 2.0 | #let x(date) = date.display()
#let (x: x) = (x: 1)
#let master-cover(info, x: x) = {
info = (submit-date: 0) + info
x(datetime.today())
}
|
https://github.com/sitandr/typst-examples-book | https://raw.githubusercontent.com/sitandr/typst-examples-book/main/src/SUMMARY.md | markdown | MIT License | # Summary
- [About](./about.md)
# The Book
- [Typst Basics](./basics/index.md)
- [Tutorial by Examples](./basics/tutorial/index.md)
- [Markup language](./basics/tutorial/markup.md)
- [Functions](./basics/tutorial/functions.md)
- [Basic styling](./basics/tutorial/basic_styling.md)
- [Advanced styling](./basics/tutorial/advanced_styling.md)
- [Templates](./basics/tutorial/templates.md)
- [Must-know](./basics/must_know/index.md)
- [Boxing & Blocking](./basics/must_know/box_block.md)
- [Spacing](./basics/must_know/spacing.md)
- [Placing, Moving, Scale & Hide](./basics/must_know/place.md)
- [Align & Padding]()
- [Tables & Grids](./basics/must_know/tables.md)
- [Project structure](./basics/must_know/project_struct.md)
- [Scripting](./basics/scripting/index.md)
- [Basics](./basics/scripting/basics.md)
- [Braces, brackets and default](./basics/scripting/braces.md)
- [Types, part I](./basics/scripting/types.md)
- [Types, part II](./basics/scripting/types_2.md)
- [Conditions & loops](./basics/scripting/conditions.md)
- [Advanced arguments](./basics/scripting/arguments.md)
- [Tips](./basics/scripting/tips.md)
- [States, Query, Context Dependence](./basics/states/index.md)
- [States](./basics/states/states.md)
- [Counters](./basics/states/counters.md)
- [Measure, Layout](./basics/measure.md)
- [Query](./basics/states/query.md)
- [Metadata](./basics/states/metadata.md)
- [Math](./basics/math/index.md)
- [Symbols](./basics/math/symbols.md)
- [Grouping](./basics/math/grouping.md)
- [Alignment](./basics/math/alignment.md)
- [Limits](./basics/math/limits.md)
- [Operators](./basics/math/operators.md)
- [Location and sizes](./basics/math/sizes.md)
- [Vectors, Matrices, Semicolon syntax](./basics/math/vec.md)
- [Classes](./basics/math/classes.md)
- [Special symbols](./basics/special_symbols.md)
- [Extra](./basics/extra.md)
- [Typst Snippets](./snippets/index.md)
- [Demos](./snippets/demos.md)
- [Logos & Figures](./snippets/logos.md)
- [Labels](./snippets/labels.md)
- [Chapters]()
- [Headings]()
- [Page numbering](./snippets/chapters/page-numbering.md)
- [Outlines](./snippets/chapters/outlines.md)
- [Bibliography]()
- [General layout]()
- [Page setup](./snippets/layout/page_setup.md)
- [Hiding](./snippets/layout/hiding.md)
- [Multiline detection](./snippets/layout/multiline_detect.md)
- [Duplicate content](./snippets/layout/duplicate.md)
- [Lines between list items](./snippets/layout/insert_lines.md)
- [Shadowed shape](./snippets/layout/shapes.md)
- [Code formatting](./snippets/code.md)
- [Tables & grids](./snippets/grids.md)
- [Hyphenation]()
- [Scripting](./snippets/scripting/index.md)
- [Numbering](./snippets/numbering.md)
- [Math]()
- [Numbering](./snippets/math/numbering.md)
- [Operations](./snippets/math/operations.md)
- [Scripts](./snippets/math/scripts.md)
- [Vectors & Matrices](./snippets/math/vecs.md)
- [Fonts](./snippets/math/fonts.md)
- [Text & Content]()
- [Calligraphic letters](./snippets/math/calligraphic.md)
- [Color & Gradients](./snippets/gradients.md)
- [Pretty things](./snippets/pretty.md)
- [Text]()
- [Individual language fonts](./snippets/text/individual_lang_fonts.md)
- [Fake italic & Text shadows](./snippets/text/text_shadows.md)
- [Special documents](./snippets/special/index.md)
- [Use with external tools](./snippets/external.md)
- [Typst Packages](./packages/index.md)
- [Overview]()
- [Drawing](./packages/drawing.md)
- [Graphs](./packages/graphs.md)
- [Custom boxes](./packages/boxes.md) <!--TODO: add note "for theorems look into math"-->
- [Math](./packages/math.md)
- [Physics](./packages/physics.md)
- [Tables](./packages/tables.md)
- [Code](./packages/code.md)
- [Presentations](./packages/presentation.md)
- [Themes]()
- [Layout](./packages/layout.md)
- [Wrapping figures](./packages/wrapping.md)
- [Scripting]()
- [Misc](./packages/misc.md)
- [Headers](./packages/headers.md)
- [Glossary](./packages/glossary.md)
- [Counting words](./packages/word_count.md)
- [External](./packages/external.md)
- [Typstonomicon, or The Code You Should Not Write](./typstonomicon/index.md)
- [Image with original size](./typstonomicon/original_image.md)
- [Word count](./typstonomicon/word_count.md)
- [Try & Catch](./typstonomicon/try_catch.md)
- [Breakpoints on broken blocks](./typstonomicon/block_break.md)
- [Extracting plain text](./typstonomicon/extract_plain_text.md)
- [Inline with](./typstonomicon/inline_with.md)
- [Create zero-level chapters](./typstonomicon/chapters.md)
- [Make all math display](./typstonomicon/math_display.md)
- [Empty pages without numbering](./typstonomicon/totally-empty.md)
- [Multiple show rules](./typstonomicon/multiple-show.md)
- [Removing indent of nested lists](./typstonomicon/remove-indent-nested.md)
|