// Source: https://github.com/EricWay1024/Homological-Algebra-Notes — ha/7-balance.typ (typst)
#import "../libs/template.typ": *
= Balancing $Ext$ and $Tor$
<balancing-ext-tor>
== Defining $Ext$ and $Tor$
#definition[
Let $cA$ be an abelian category. Let $A, B in cA$ and let $B -> I^cx$ be an injective resolution.
Recall that $Hom(A)(A, -): cA -> Ab$ is left exact by @hom-left-exact.
If $cA$ has enough injectives, we define the right derived functor $Ext_cA^i (A, -)$ of $Hom(A)(A, -)$ as
// #footnote[While $Ext_cA^i (A, -)(B)$ (as well as $Ext_cA^i (-, B)(A)$ defined below) is not a common way of writing, it emphasizes the asymmetry of the construction.]
$ Ext_cA^i (A, B) = Ext_cA^i (A, -)(B) := R^i Hom(A)(A, -)(B) = H^i (Hom(A) (A, I^cx)). $
In particular, $Ext_cA^0 (A, B) = Hom(A) (A, B)$.
]
Notice that the contravariant functor $Hom(A)(-, B): cA^op -> Ab$ is also left exact by @hom-left-exact-2. Assume that $cA$ has enough projectives, so $cA^op$ has enough injectives. Let $P_cx -> A$ be a projective resolution in $cA$, which can be seen as an injective resolution in $cA^op$. We can thus define another right derived functor $Ext_cA^i (-, B)$, given by
$
Ext_cA^i (-, B)(A) := R^i Hom(A)(-, B) (A) = H^i (Hom(A)(P_cx, B)).
$
The above two constructions are in fact isomorphic, i.e., $Ext_cA^i (A, -)(B) iso Ext_cA^i (-, B)(A)$, or
$
Ext_cA^i (A, B) := R^i Hom(A)(A, -)(B) iso R^i Hom(A)(-, B) (A).
$
This isomorphism is called the *balancing of $Ext$*. Before proving the balancing of $Ext$, we present some properties of $Ext$ that follow from it.
#proposition[
Let $ses(K, L, M)$ be a short exact sequence in $cA$ and let $A, B in cA$. Then we have the induced long exact sequences
$
0 -> Hom(A) (A, K) -> Hom(A) (A, L) -> Hom(A) (A, M) -> \ Ext_cA^1 (A, K) -> Ext_cA^1 (A, L) -> Ext_cA^1 (A, M) -> ...
$
and
$
0 -> Hom(A) (M, B) -> Hom(A) (L, B) -> Hom(A) (K, B) -> \ Ext_cA^1 (M, B) -> Ext_cA^1 (L, B) -> Ext_cA^1 (K, B) -> ...
$
]
#proof[
Simply notice that ${Ext_cA^i (A, -)}_(i>=0)$ and ${Ext_cA^i (-, B)}_(i>=0)$ form two cohomological $delta$-functors.
]
#proposition[
The following are equivalent:
+ $B$ is injective;
+ $Hom(A)(-, B)$ is exact;
+ $Ext_cA^i (A, B) = 0$ for $i !=0$ and all $A$;
+ $Ext_cA^1 (A, B) = 0$ for all $A$.]
<ext-injective>
#proof[
(1) $<=>$ (2) by the definition of injective objects.
(1) $=>$ (3) by applying the dual of @projective-left-zero to $Ext_cA^i (A, -)$.
(3) $=>$ (4) is trivial.
(4) $=>$ (2). Let $ses(A', A, A'')$ be a short exact sequence in $cA$, which induces the #lest
$
0 -> Hom(A) (A'', B) -> Hom(A) (A, B) -> Hom(A) (A', B) -> Ext^1_cA (A'', B) -> ...
$
Since $Ext^1_cA (A'', B) = 0$ by assumption, $Hom(A) (-, B)$ is an exact functor.
]
#proposition[
The following are equivalent:
+ $A$ is projective;
+ $Hom(A)(A, -)$ is exact;
+ $Ext_cA^i (A, B) = 0$ for $i !=0$ and all $B$;
+ $Ext_cA^1 (A, B) = 0$ for all $B$.
]
// #proof[
// Similar as above.
// ]
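#example[
  As a quick application of the last criterion, one can see that $ZZ over n$ is not a projective $ZZ$-module for $n >= 2$: applying $hom_ZZ (-, ZZ)$ to the deleted projective resolution $0 -> ZZ ->^n ZZ -> 0$ of $ZZ over n$ gives the complex
  $
  0 -> ZZ ->^n ZZ -> 0,
  $
  whose first cohomology is $Ext_ZZ^1 (ZZ over n, ZZ) iso Coker (ZZ ->^n ZZ) iso ZZ over n != 0$.
]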
#example[
Let $m, n in ZZ$.
Let us calculate $Ext_ZZ^1 (ZZ over m, ZZ over n)$ in two different ways.
We may use the injective resolution of $ZZ over n$:
$
0 -> ZZ over n -> QQ over ZZ ->^n QQ over ZZ -> 0.
$
Now deleting $ZZ over n$, applying $hom_ZZ (ZZ over m, -)$, and using $hom_ZZ (ZZ over m , QQ over ZZ) iso ZZ over m$, we get
$
0 -> ZZ over m ->^n ZZ over m -> 0.
$
Calculating the first cohomology of this sequence reveals that $Ext_ZZ^1 (ZZ over m , ZZ over n) = H^1 = Coker (ZZ over m ->^n ZZ over m) iso ZZ over gcd(m, n)$.
On the other hand, we may invoke the balancing of $Ext$ and use the projective resolution of $ZZ over m$:
$
0 -> ZZ ->^m ZZ -> ZZ over m -> 0.
$
Now deleting $ZZ over m$, applying $hom_ZZ (-, ZZ over n)$ (which is a contravariant functor), and using $hom_ZZ (ZZ, ZZ over n) iso ZZ over n$, we get
$
0 -> ZZ over n ->^m ZZ over n -> 0.
$
Again the first cohomology of the sequence gives $ZZ over gcd(m, n)$.
]
#definition[
Let $R$ be a ring and $B$ be a left $R$-module. Since $(- tpr B) : ModR -> Ab$ is right exact by @tensor-right-exact and $ModR$ has enough projectives, we can define the left derived functor $Tor_i^R (-, B)$:
$ Tor_i^R (A, B) = Tor_i^R (-, B)(A) := L_i (- tpr B) (A). $
]
Similarly, let $A$ be a right $R$-module; then $(A tpr - ): RMod -> Ab$ is right exact by @tensor-right-exact-2. We can thus define the left derived functor $Tor_i^R (A, -)$:
$
Tor_i^R (A, -)(B) := L_i (A tpr -) (B).
$
The two constructions are again isomorphic, i.e.,
$ Tor_i^R (A, B) := L_i (- tpr B) (A) iso L_i (A tpr -) (B). $
This isomorphism is called *the balancing of $Tor$*, which gives the following property.
#proposition[
Let $ses(K, L, M)$ be a #sest in $ModR$ and let $B in RMod$. Then we have the induced long exact sequence
$
... -> Tor_1^R (K, B) -> Tor_1^R (L, B) -> Tor_1^R (M, B) -> K tpr B -> L tpr B -> M tpr B -> 0.
$
If $ses(K, L, M)$ is instead a #sest in $RMod$ and $A in ModR$, then we have the induced #lest
$
... -> Tor_1^R (A, K) -> Tor_1^R (A, L) -> Tor_1^R (A, M) -> A tpr K -> A tpr L -> A tpr M -> 0.
$
]
<tor-les>
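#example[
  As a sanity check parallel to the earlier $Ext$ computation, one can compute $Tor_1^ZZ (ZZ over m, ZZ over n)$ directly from the definition: tensoring the deleted projective resolution $0 -> ZZ ->^n ZZ -> 0$ of $ZZ over n$ with $ZZ over m$ gives the complex
  $
  0 -> ZZ over m ->^n ZZ over m -> 0,
  $
  whose first homology is $Ker (ZZ over m ->^n ZZ over m) iso ZZ over gcd(m, n)$. Hence $Tor_1^ZZ (ZZ over m, ZZ over n) iso ZZ over gcd(m, n)$; in particular it vanishes when $m$ and $n$ are coprime.
]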
In order to prove the balancing of $Ext$ and $Tor$, we need two new tools: mapping cones and double complexes, introduced in the following sections.
// #TODO #lest induced by $Tor$ and $Ext$
== Mapping Cones
// #remark[
// In topology, let $f: X-> Y $ be a continuous map between two topological spaces.
// The *topological mapping cylinder* $M_f$ of $f : X-> Y$ is the quotient
// $ ((X times I) product.co Y) over tilde $
// where the equivalence relation $tilde$ is generated by $ (x, 1) tilde f(x)$ for all $x in X$. That is, the mapping cylinder is obtained by gluing one end of $X times I$ to $Y$ via the map $f$. It is often denoted as $(X times I) union.sq_f Y$.
// The *topological mapping cone* $C_f$ of $f : X-> Y$ is the quotient space of the mapping cylinder $(X times I) union.sq_f Y$ with respect to the equivalence relation $(x, 0) tilde (x', 0)$ for all $x, x' in X$. That is, the end of $X times I$ that is not glued to $Y$ is identified as a point.
// #align(center,image("../imgs/Mapping_cone.svg",width:30%))
// ]
#definition[Let $f : B_cx -> Ccx$ be a chain map.
Define the *mapping cone* of $f$ as the chain complex $cone(f)_cx$, given by $ cone(f)_n = B_(n-1) plus.circle C_n $
with differential#footnote[In @tot-cone there is an explanation for this definition.] $ d(b, c) = (-d(b), d(c) - f(b)) $ for $b in B_(n-1)$ and $c in C_n$.
We could also write the differential in the form of a matrix:
$
mat(-d_B, 0; -f, d_C) : vec(B_(n-1), C_n) -> vec(B_(n-2), C_(n-1))
$
Dually, let $g : B^cx -> C^cx$ be a cochain map, then the mapping cone of $g$ is the cochain complex $cone(g)^cx$ given by
$
cone(g)^n = B^(n+1) plus.circle C^n
$
with differential $ d(b, c) = (-d(b), d(c) - g(b)) $ for $b in B^(n+1)$ and $c in C^n$.
]
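One can check directly that $cone(f)_cx$ is indeed a chain complex: for $b in B_(n-1)$ and $c in C_n$,
$
d(d(b, c)) = d(-d(b), d(c) - f(b)) = (d(d(b)), d(d(c) - f(b)) - f(-d(b))) = (0, f(d(b)) - d(f(b))) = (0, 0),
$
using $d^2 = 0$ in $B_cx$ and $C_cx$ together with the fact that $f$ is a chain map.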
#lemma[
Let $f: B_cx -> C_cx$ be a chain map. Then there is a long exact sequence in homology
$
... -> H_(n+1) (cone(f)) -> H_n (B) ->^diff H_n (C) -> H_n (cone(f)) -> ...
$
where the connecting morphism
$diff = f_ast $.
// Dually, let $g : B^cx -> C^cx$ be a cochain map, then there is a #lest in cohomology
// $
// ... -> H^(n-1) (cone(f)) -> H^n (B) ->^diff H^n (C) -> H^n (cone(f)) -> ...
// $
// with $diff = g^ast$.
]
#proof[
There is a #sest of chain complexes:
$ ses(C, cone(f), B[-1], f: i, g: pi), $
where $i: c mapsto (0, c)$ and $pi : (b, c) |-> -b$. Notice that $H_(n+1)(B[-1]) = H_n (B)$, so we get the corresponding #lest in homology as above by @connecting.
Further, we have $diff = i^(-1) d_(cone(f)) pi^(-1)$ by @connecting.
Let $b in B_n$ be a cycle. We can lift it to $(-b, 0)$ in $cone(f)$. Apply the differential of $cone(f)$ to get $d_(cone(f)) ( -b, 0) = (d (b), f (b)) = (0, f (b))$. Thus $diff[b] = [f (b)] = f_ast [b]$.
]
The following corollary describes the main use of the mapping cone.
#corollary[
A chain map $f: B_cx -> C_cx$ is a quasi-isomorphism if and only if $cone(f)$ is acyclic.
]
<cone-qi>
#proof[
"$=>$". If $f$ is a quasi-isomorphism, then $f_ast : H_n (B) -> H_n (C)$ is an isomorphism for all $n$. Then we have an exact
sequence
$ H_n (B) arrow.r^(f_ast) H_n (C) arrow.r^(i_ast) H_n ("cone"(f)) arrow.r^(pi_ast) H_(n - 1) (B) arrow.r^(f_ast) H_(n - 1) (C). $
By exactness at $H_n (C)$, we have that
$ Ker (i_ast) = IM(f_ast) = H_n (C)$. So $i_ast = 0$ and $IM(i_ast) = 0$.
By exactness at $H_(n - 1) (B)$, we have that
$ "Im"(pi_ast) = Ker(f_ast) = 0$, so $pi_ast = 0$ and $ Ker (pi_ast) = H_n ("cone"(f))$.
By exactness at $H_n ("cone"(f))$, we
have $ 0 = IM(i_ast) = Ker (pi_ast) = H_n ("cone"(f)), $ so
$cone(f)$ is acyclic.
"$arrow.l.double$". If cone $(f)$ is
acyclic, then $H_n ("cone"(f)) = 0$ and we have an exact sequence
$ 0 arrow.r H_n (B) arrow.r^(f_ast) H_n ("cone"(f)) arrow.r 0, $
which indicates that $f_ast$ is an isomorphism.
]
#remark[
The same result can be obtained for cochain maps.
]
There is a similar construction called the mapping cylinder, although we do not use it in these notes.
#definition[
The *mapping cylinder* of a chain map $f: B_cx -> C_cx$ is defined as the chain complex $cyl(f)_n = B_n xor B_(n-1) xor C_n$. The differential can be represented by the matrix
$
mat(d_B, id_B, 0;0, -d_B, 0; 0, -f, d_C).
$
]
#remark[
The reader is directed to @weibel[Section 1.5] for some topological remarks on mapping cones and mapping cylinders.
]
// #remark[
// Let $0->B->^f C->^g D-> 0$ be a #sest of complexes. Then $phi: cone(f) -> D$ has $phi(b, c)-> g(c)$.#align(center,image("../imgs/2023-11-10-12-30-40.png",width:50%)) You can prove $cyl(f)-> C$ is a quasi-isomorphism and also $phi$ is quasi-isomorphism. (This is non-examinable.)
// ]
== Double and Total Complexes
Recall that if $cA$ is an abelian category, $Ch(cA)$ is also an abelian category. Then to define a "two-dimensional" complex, one may be tempted to consider the category $Ch(Ch(cA))$. However, what we define next is slightly different from that.
#definition[
A *double complex* (or *bicomplex*) $C = C_(cx cx)$ in an abelian category $cA$ is a family ${C_(p, q)}$ of objects in $cA$ with maps $d^h_(p, q) : C_(p, q) -> C_(p-1, q)$ and $d^v_(p, q) : C_(p, q) -> C_(p, q-1)$ such that $ (d^h)^2 = (d^v)^2 = 0, quad d^v d^h + d^h d^v = 0. $
The *total degree* of a term $C_(p, q)$ is defined as $p + q$.
]
In other words, a double complex is an infinite two-dimensional grid of objects where each row (resp. each column) is a chain complex, and the horizontal and vertical differentials _anticommute_. A diagram for a double complex is shown below; this is not a commutative (but an anticommutative) diagram.
// https://t.yw.je/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZARgBpiBdUkANwEMAbAVxiRAGEB9ACjQFoyAAgCOAamIBKEAF9S6TLnyEUAJnJVajFmy69SI8VNnzseAkQDM66vWatEHHmnH6xkmXJAZTSomRUattoOuvxCwkae3ormqqQBNlr2jnoikSYxyshWCZp2Ok4uaR4ZZllkFoFJBbwCrgLpXgplRGqVifkhTvXuxk0+sdmk7XnBKc7hDSX9mUQALNajyQB0q9PRLSgLuUEra30bvlvDVZ0gq8vrzUdDc6djF1cDWWp3HQ-7UdeDZG9LbI8Dt8sgAGE7vPaXIHPIhgnbVByAr4wlBgygQgGfUo3Kwg+6Qp6zOJ4jGIrEzTYkUgk-5kqEaGBQADm8CIoAAZgAnCAAWyQCxAOAgSAA7KSQFAAHo<KEY>
#align(center, commutative-diagram(
node-padding: (50pt, 50pt),
node((1, 1), [$C_(p-1, q+1)$]),
node((1, 2), [$C_(p, q+1)$]),
node((1, 3), [$C_(p+1, q+1)$]),
node((2, 1), [$C_(p-1, q)$]),
node((2, 2), [$C_(p, q)$]),
node((2, 3), [$C_(p+1, q)$]),
node((3, 1), [$C_(p-1, q-1)$]),
node((3, 2), [$C_(p, q-1)$]),
node((3, 3), [$C_(p+1, q-1)$]),
node((1, 4), [$...$]),
node((2, 4), [$...$]),
node((3, 4), [$...$]),
node((4, 3), [$...$]),
node((4, 2), [$...$]),
node((4, 1), [$...$]),
node((3, 0), [$...$]),
node((2, 0), [$...$]),
node((1, 0), [$...$]),
node((0, 3), [$...$]),
node((0, 2), [$...$]),
node((0, 1), [$...$]),
arr((2, 2), (3, 2), [$d^v$]),
arr((1, 2), (2, 2), [$d^v$]),
arr((1, 1), (2, 1), [$d^v$]),
arr((2, 1), (3, 1), [$d^v$]),
arr((1, 3), (2, 3), [$d^v$]),
arr((2, 3), (3, 3), [$d^v$]),
arr((1, 2), (1, 1), [$d^h$]),
arr((1, 3), (1, 2), [$d^h$]),
arr((1, 4), (1, 3), []),
arr((2, 4), (2, 3), []),
arr((2, 3), (2, 2), [$d^h$]),
arr((2, 2), (2, 1), [$d^h$]),
arr((3, 2), (3, 1), [$d^h$]),
arr((3, 3), (3, 2), [$d^h$]),
arr((3, 4), (3, 3), []),
arr((3, 3), (4, 3), []),
arr((3, 2), (4, 2), []),
arr((3, 1), (4, 1), []),
arr((3, 1), (3, 0), []),
arr((2, 1), (2, 0), []),
arr((1, 1), (1, 0), []),
arr((0, 3), (1, 3), []),
arr((0, 2), (1, 2), []),
arr((0, 1), (1, 1), []),
))
// #align(center,image("../imgs/2023-11-12-16-01-47.png",width:50%))
#remark[
Because the differentials anticommute, $d^v$ cannot be seen as chain maps between rows.
We need to replace $d^v_(p, q)$ by $f_(p, q) := (-1)^p d^v_(p, q)$ (so that the signs alternate for adjacent columns) to make the squares commute. For example, the following is a commutative diagram:
// https://t.yw.je/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZABgBoBGAXVJADcBDAGwFcYkQBhAfQAoziAlCAC+pdJlz5CKchWp0mrdtx6zBIsSAzY8BIgCY5NBizaJOvQ+tHidUov3kml5lWXJCbWibunI1TopmnKoUnpraknooVoGmypZhGrZRfgDMpMRxLhY8GR7J3nbRyPnZwSoZ1vIwUADm8ESgAGYAThAAtkgALDQ4EEgZCvHmUAB6ABaFbZ1IAKx9A4i9wznjU14zXYiyIP1IZKvB69Pt24Z7S7vOx5Ons4hD+4iHN+zjtPfbK8-XQewAWg+X3miyQFzeozGn02ZyQADYwYgFkd3ndYQ8AOxIiH-KEbTRbBFI7Go8xA6EiSjCIA
#align(center, commutative-diagram(
node-padding: (50pt, 50pt),
node((1, 0), [$C_(0,0)$]),
node((1, 1), [$C_(1,0)$]),
node((1, 2), [$C_(2,0)$]),
node((0, 0), [$C_(0,1)$]),
node((0, 1), [$C_(1,1)$]),
node((0, 2), [$C_(2,1)$]),
node((0, 3), [$C_(3,1)$]),
node((1, 3), [$C_(3,0)$]),
arr((0, 1), (0, 0), [$d^h$]),
arr((0, 2), (0, 1), [$d^h$]),
arr((1, 1), (1, 0), [$d^h$]),
arr((1, 2), (1, 1), [$d^h$]),
arr((0, 0), (1, 0), [$d^v$]),
arr((0, 1), (1, 1), [$-d^v$]),
arr((0, 2), (1, 2), [$d^v$]),
arr((0, 3), (0, 2), [$d^h$]),
arr((1, 3), (1, 2), [$d^h$]),
arr((0, 3), (1, 3), [$-d^v$]),
))
Therefore, $f_(cx, q) : C_(cx, q) -> C_(cx, q-1)$ is a chain map between two adjacent rows.
This also gives an isomorphism between the category of bicomplexes in $cA$ and $Ch(Ch(cA))$.] <sign-trick>
#definition[
Let $C_(bullet bullet)$ be a double complex. We say that
$C_(bullet bullet)$ is an *upper half-plane complex* if there is some
$q_0$ such that $C_(p, q) eq 0$ for all $q lt q_0$. Similarly,
$C_(bullet bullet)$ is a *right half-plane complex* if there is some $p_0$
such that $C_(p, q) eq 0$ for all $p lt p_0$.
]
#definition[
Given $C = {C_(p, q)}$, we can define the *total complex* $Tot^Pi (C)$, given by
$ Tot^Pi (C)_n = product_(p + q = n) C_(p, q). $
That is, the $n$-th term of $Tot^Pi (C)$ is the product of all terms in $C$ which have total degree $n$.
When, for each $n$, only finitely many terms in $C$ have total degree $n$, we also define $ Tot^xor (C)$, given by
$ Tot^xor (C)_n = plus.circle.big _(p+q=n) C_(p, q). $
$Tot^Pi (C)$ and $ Tot^xor (C)$ both have differential $ d = d^h + d^v. $
]
<total-complex>
#notation[
If $C$ is a double complex, sometimes we write $H_n (C)$ to mean $H_n (Tot^Pi (C))$ or $H_n (Tot^xor (C))$.
]
<homology-double>
#lemma[
In a total complex, we have that $d^2 = 0$, so the total complex is indeed a chain complex.
]
#proof[
@rotman[Lemma 10.5].
$
d^2 = (d^h + d^v) (d^h + d^v) = (d^h)^2 + (d^h d^v + d^v d^h) + (d^v)^2 = 0.
$
(This is why we have defined double complexes in the anticommuting way.)
]
The total complex is illustrated by the colours in the following diagram; each “diagonal
slice” is given a different colour. For example, $Tot(C_(cx cx))_0$ is the product of all the
blue terms. This diagram also helps explain how the differential of the total complex works. For example, take $ c = (..., c_(-1, 1), c_(0, 0), c_(1, -1), ...) in product_(p in ZZ) C_(-p, p) = Tot(C)_0. $
Then
$ d (c) = ( ...,
underbrace(d^v (c_(-1, 1)) + d^h (c_(0,0)), in C_(-1, 0)) ,
underbrace(d^v (c_(0, 0)) + d^h (c_(1, -1)), in C_(0, -1)), ... ) in Tot(C)_(-1). $
// #align(center,image("../imgs/2023-11-12-16-04-08.png",width:50%))
// https://t.yw.je/#N4<KEY>
#align(center, commutative-diagram(
node-padding: (50pt, 50pt),
node((1, 1), text(blue)[$C_(-1, 1)$]),
node((1, 2), text(orange)[$C_(0,1)$]),
node((1, 3), text(navy)[$C_(1,1)$]),
node((2, 1), text(red)[$C_(-1,0)$]),
node((2, 2), text(blue)[$C_(0,0)$]),
node((2, 3), text(orange)[$C_(1,0)$]),
node((3, 1), text(green)[$C_(-1, -1)$]),
node((3, 2), text(red)[$C_(0, -1)$]),
node((3, 3), text(blue)[$C_(1, -1)$]),
node((1, 4), [$...$]),
node((2, 4), [$...$]),
node((3, 4), [$...$]),
node((4, 3), [$...$]),
node((4, 2), [$...$]),
node((4, 1), [$...$]),
node((3, 0), [$...$]),
node((2, 0), [$...$]),
node((1, 0), [$...$]),
node((0, 3), [$...$]),
node((0, 2), [$...$]),
node((0, 1), [$...$]),
arr((2, 2), (3, 2), [$d^v$]),
arr((1, 2), (2, 2), [$d^v$]),
arr((1, 1), (2, 1), [$d^v$]),
arr((2, 1), (3, 1), [$d^v$]),
arr((1, 3), (2, 3), [$d^v$]),
arr((2, 3), (3, 3), [$d^v$]),
arr((1, 2), (1, 1), [$d^h$]),
arr((1, 3), (1, 2), [$d^h$]),
arr((1, 4), (1, 3), []),
arr((2, 4), (2, 3), []),
arr((2, 3), (2, 2), [$d^h$]),
arr((2, 2), (2, 1), [$d^h$]),
arr((3, 2), (3, 1), [$d^h$]),
arr((3, 3), (3, 2), [$d^h$]),
arr((3, 4), (3, 3), []),
arr((3, 3), (4, 3), []),
arr((3, 2), (4, 2), []),
arr((3, 1), (4, 1), []),
arr((3, 1), (3, 0), []),
arr((2, 1), (2, 0), []),
arr((1, 1), (1, 0), []),
arr((0, 3), (1, 3), []),
arr((0, 2), (1, 2), []),
arr((0, 1), (1, 1), []),
))
#endlec(10)
#example[
Let $f_cx : B_cx -> C_cx$ be a chain map; then the following diagram forms a (two-column) double complex. The reader is welcome to verify that the total complex of this double complex is exactly $cone(f)$, which in particular helps explain why we have defined the differential of $cone(f)$ in that way.
// https://t.yw.je/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZABgBoBGAXVJADcBDAGwFcYkQBhAfUIF9T0mXPkIpyFanSat2AIR4h+g7HgJEyAJkkMWbRJy4AKMAFpyASkUCQGFSKLitNHTP3zjZy0ptDVokqQAzNrSegYeGl7WtsJqYkEhunJGppFWyrH+ZMSJriAAdIXpPnZxyOI5zqHshfnFMX7qpAAsuWG19b728a1VSfodvJIwUADm8ESgAGYAThAAtkiBNDgQSBp9eVMpnsWzC0gArCtriM2bYdsRUdNzi4jiIKtLF+wmUADkCt7798tPp2OUn6IHeXw8Fj2dyQZAB61e+ig32sv<KEY>Jx<KEY>EA
#align(center, commutative-diagram(
node-padding: (40pt, 40pt),
node((1, 0), text(blue)[$C_n$]),
node((1, 1), [$B_n$]),
node((2, 0), text(red)[$C_(n-1)$]),
node((2, 1), text(blue)[$B_(n-1)$]),
node((3, 0), [$C_(n-2)$]),
node((3, 1), text(red)[$B_(n-2)$]),
node((0, 0), [$...$]),
node((0, 1), [$...$]),
node((4, 0), [$...$]),
node((4, 1), [$...$]),
arr((2, 1), (2, 0), [$-f$]),
arr((3, 1), (3, 0), [$-f$]),
arr((1, 1), (2, 1), [$-d$]),
arr((2, 1), (3, 1), [$-d$]),
arr((1, 0), (2, 0), [$d$]),
arr((2, 0), (3, 0), [$d$]),
arr((1, 1), (1, 0), [$-f$]),
arr((0, 0), (1, 0), []),
arr((0, 1), (1, 1), []),
arr((3, 0), (4, 0), []),
arr((3, 1), (4, 1), []),
))
]
<tot-cone>
#lemma("Acyclic Assembly Lemma")[
Let $C = {C_(p, q)}$ be a double complex. If
+ $C$ is an upper half-plane complex with exact columns, or
+ $C$ is a right half-plane complex with exact rows,
then $Tot^Pi (C)$ is acyclic.
If
3. $C$ is an upper half-plane complex with exact rows, or
4. $C$ is a right half-plane complex with exact columns,
then $Tot^xor (C)$ is acyclic.
]
<aal>
#proof[@weibel[Lemma 2.7.3] explains why proving (1) is sufficient to prove all four conditions, so we work on (1) only.
Let $C$ be an upper half-plane bicomplex with exact columns, where we assume $C_(p, q) = 0$ when $q < 0$ (by translating $C$ up or down). It is sufficient to show that
$ H_0 (Tot^Pi (C)) = 0, $
since by translating $C$ left and right, the same argument shows that $H_n (Tot^Pi (C)) = 0$ for all $n$.
Let $ c = (..., c_(-2, 2), c_(-1, 1), c_(0, 0)) in product C_(-p, p) = Tot^Pi (C)_0 $ be a $0$-cycle, i.e., $d(c) = 0$.
We will use induction to find elements $b_(-p, p+1) in C_(-p, p+1)$ for $p >= -1$ such that $ d^v (b_(-p, p+1)) + d^h (b_(-p+1, p)) = c_(-p, p). $
For the base case, let $b_(1,0) = 0$ for $p = -1$. Since the $0$-th column is exact, there exists $b_(0,1) in C_(0,1)$ such that $d^v (b_(0,1)) = c_(0,0)$.
// https://t.yw.je/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZABgBpiBdUkANwEMAbAVxiRAGEB9ACjIEYAlCAC+pdJlz5CKflVqMWbLr3JDR47HgJE+pPnPrNWiDj13<KEY>
#align(center, commutative-diagram(
node-padding: (50pt, 50pt),
node((0, 0), [$C_(0,1)$]),
node((1, 0), [$C_(0,0)$]),
node((1, 1), [$C_(1,0)$]),
node((2, 0), [$0$]),
arr((1, 0), (2, 0), [$d^v$]),
arr((0, 0), (1, 0), [$d^v$]),
arr((1, 1), (1, 0), [$d^h$]),
))
By induction, suppose we have found $b_(-p+1, p)$ and want to find $b_(-p, p+1)$.
// https://t.yw.je/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZABgBpiBdUkANwEMAbAVxiRAGEB9ACgFo1SAAjQBqAIwBKEAF9S6TLnyEUZMVVqMWbLnwFops+djwEiY0mur1mrRBx79xpfTLkgMxpUTIAmdda07HX5ncQN1GCgAc3giUAAzACcIAFskcxAcCCQAZitNWxAoAD0aVwTktMQyTOzEDIDCkrLDECTUpB9qLPT8mzYSgAsZCmkgA
#align(center, commutative-diagram(
node-padding: (50pt, 50pt),
node((0, 0), [$C_(-p, p+1)$]),
node((1, 0), [$C_(-p,p)$]),
node((1, 1), [$C_(-p+1,p)$]),
node((2, 0), [$C_(-p,p-1)$]),
arr((1, 0), (2, 0), [$d^v$]),
arr((0, 0), (1, 0), [$d^v$]),
arr((1, 1), (1, 0), [$d^h$]),
))
We compute that
$ d^v (c_(-p, p) - d^h (b_(-p+1, p))) & = d^v (c_(-p, p)) + d^h d^v (b_(-p+1, p))\
& = d^v (c_(-p, p)) + d^h (c_(-p+1, p-1)) - d^h d^h (b_(-p+2, p-1)) \
& = 0, $
where the first equality uses $d^v d^h = - d^h d^v$, the second uses the induction hypothesis for $b_(-p+1, p)$, and the last holds since $(d^h)^2 = 0$ and $d^v (c_(-p, p)) + d^h (c_(-p+1, p-1)) = 0$ because $d(c) = 0$. Thus
$ c_(-p, p) - d^h (b_(-p+1, p)) in Ker(d^v : C_(-p, p) -> C_(-p, p-1)) = IM (d^v : C_(-p, p+1) -> C_(-p, p)), $
since the $(-p)$-th column is exact. So there exists
$b_(-p, p+1)$ such that
$ d^v (b_(-p, p+1)) = c_(-p, p) - d^h (b_(-p+1, p)) $
as desired. Now assembling all $b_(-p, p+1)$ gives
$ b = (..., b_(-1, 2), b_(0, 1), b_(1, 0)) in product C_(-p, p+1) = Tot^Pi (C)_(1) $ such that $d (b) = c$, which proves that $H_0 (Tot^Pi (C)) = 0$.
]
#remark[
This lemma is also a consequence of spectral sequences.
]
A variant of the above lemma is the following, whose proof is similar; see @notes[Lemma 8.8].
#lemma[
Let $C$ be a double complex such that for every $n$, there exist only finitely many pairs $(p, q)$ such that $p + q = n$ and $C_(p, q) != 0 $. If $C$ has exact rows (or if $C$ has exact columns), then $Tot^(xor) (C)$ is acyclic.
]
<aal-2>
== Balancing $Tor$
#definition[
Suppose $(P_cx, d^((P)))$ is a chain complex in $ModR$ and $(Q_cx, d^((Q)))$ is a chain complex in $RMod$. We can form a double complex of abelian groups which we call the *tensor product double complex*, denoted as $P_cx tpr Q_cx$, where the $(p, q)$ term is $P_p tpr Q_q$ and $d^h_(p, q) = d^((P))_p tp 1$ and $d^v_(p, q) = (-1)^p tp d^((Q))_q$.
It has the *tensor product total complex*, $Tot^xor (P_cx tpr Q_cx)$.
// The sign trick is to make this anticommute.
]
<tp-dc>
#lemma[
The differentials of $P_cx tpr Q_cx$ anticommute, so $P_cx tpr Q_cx$ is a double complex.
]
#proof[
Notice that $(d^((P)) tp 1) oo (1 tp d^((Q))) = d^((P)) tp d^((Q)) = (1 tp d^((Q))) oo (d^((P)) tp 1)$ by @tp-composition, and alternating the signs for adjacent columns makes each square anticommute.
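Concretely, for $x in P_p$ and $y in Q_q$ we have
$
d^h (d^v (x tp y)) = (-1)^p d^((P)) (x) tp d^((Q)) (y), quad d^v (d^h (x tp y)) = (-1)^(p-1) d^((P)) (x) tp d^((Q)) (y),
$
since the vertical differential carries the sign $(-1)^p$ in column $p$ and $(-1)^(p-1)$ in column $p - 1$; the two composites sum to zero, so each square anticommutes.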
]
#lemma[
If $P$ is a projective right $R$-module, then the functor $(P tpr -) : RMod -> Ab$ is exact. If $Q$ is a projective left $R$-module, then $(- tpr Q) : ModR -> Ab$ is exact. #footnote[This lemma is the same as saying "every projective module is flat", but we have yet to define flat modules. We will revisit this claim in @projective-flat-2.]
]
<projective-flat-1>
#proof[
@rotman[Proposition 3.46, p. 132]. We (very concisely) work on the #rrm case. First notice that $R tpr M iso M$ naturally for every left $R$-module $M$ by @r-tpr, so the functor $(R tpr -)$ is exact. Tensor products preserve direct sums by @tensor-right-exact, so for a family of right $R$-modules $M_i$, the functor $((plus.circle.big M_i) tpr -)$ is exact if and only if $plus.circle.big (M_i tpr -)$ is exact, if and only if each $(M_i tpr -)$ is exact. Now any free module $F$, being a direct sum of copies of $R$, must have that $(F tpr -)$ is exact. Finally, $P$ is projective, hence $P$ is a direct summand of some free module by @projective-summand, which indicates that $(P tpr -)$ is also exact.
]
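#remark[
  Some hypothesis on the module is genuinely needed here. For instance, $(- tp (ZZ over 2)) : Ab -> Ab$ is not exact: tensoring the injection $ZZ ->^2 ZZ$ with $ZZ over 2$ yields the zero map $ZZ over 2 -> ZZ over 2$, which is not injective. (In the language of the footnote above, $ZZ over 2$ is not flat, and indeed it is not projective over $ZZ$.)
]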
#theorem([Balancing of $Tor$])[ Let $A in ModR$ and $B in RMod$. For all $n$,
$ Tor_n^R (A, B) := L_n (- tpr B)(A) iso L_n (A tpr -)(B). $
]
<balance-tor>
#proof[ @weibel[Theorem 2.7.2].
// #align(center,image("../imgs/2023-11-23-03-00-04.png",width:80%))
(We drop the dots for chain complexes in this proof.)
Choose a projective resolution $P_cx rgt(epsilon) A$ in $ModR$ and a projective resolution $Q_cx rgt(eta) B$ in $RMod$.
We can view $A, B$ as chain complexes concentrated in degree $0$. Now consider the double complexes $P tpr Q$, $A tpr Q$ and $P tpr B$, and we have _bicomplex morphisms_ (where it might be helpful to recall the diagram in @resolution-qi): $ epsilon tp id_Q: P tpr Q -> A tpr Q \ id_P tp eta: P tpr Q -> P tpr B $ which induce chain maps on the total complexes:
$ f : Tot^xor (P tpr Q) -> Tot^xor (A tpr Q) = A tpr Q \
g : Tot^xor (P tpr Q) -> Tot^xor (P tpr B) = P tpr B $
We claim that $f$ and $g$
are quasi-isomorphisms, which would give isomorphisms on homology and thus prove the result, i.e.
$ H_ast (Tot^xor (P tpr Q)) iso H_ast (A tpr Q) = L_ast (A tpr - ) (B) $
$ H_ast (Tot^xor (P tpr Q)) iso H_ast (P tpr B) = L_ast (- tpr B ) (A) $
Now we form a double complex $C$, obtained from $P tpr Q$ by adding $A tpr Q$ in the column $p = -1$ using the augmentation $epsilon: P_0 -> A$,
// https://t.yw.je/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZARgBpiBdUkANwEMAbAVxiRAAUB9ABgAIc0vAIqdiIAL6l0mXPkIoyAJiq1GLNlz4DhPCVJAZseAkUWll1es1aIOo-oJHc90o3KIBmcyqvrbXRQcdZ0lXWRMUM0pLNRs7YiCRMVCDGWN5ZC9o1WsNTkDtJJdUtwiSUm4fWLytR3ziw3CMs0qY3P97QvqUxvSibnIq9pAAQUTRBrT3FAHWnL9R8cVJ0oyBi3m4sa6Q-V7pzIqhhYCli<KEY>
#align(center, commutative-diagram(
node-padding: (50pt, 50pt),
node((1, 1), [$P_0 tp Q_1$]),
node((2, 1), [$P_0 tp Q_0$]),
node((2, 2), [$P_1 tp Q_0$]),
node((2, 3), [$P_2 tp Q_0$]),
node((1, 2), [$P_1 tp Q_1$]),
node((1, 3), [$P_2 tp Q_1$]),
node((0, 1), [$P_0 tp Q_2$]),
node((0, 2), [$P_1 tp Q_2$]),
node((1, 0), [$A tp Q_1$]),
node((0, 0), [$A tp Q_2$]),
node((2, 0), [$A tp Q_0$]),
node((0, 3), [$P_2 tp Q_2$]),
arr((0, 2), (0, 1), [$d_P tp 1$]),
arr((1, 2), (1, 1), [$d_P tp 1$]),
arr((1, 3), (1, 2), [$d_P tp 1$]),
arr((2, 2), (2, 1), [$d_P tp 1$]),
arr((2, 3), (2, 2), [$d_P tp 1$]),
arr((1, 1), (2, 1), [$1 tp d_Q$]),
arr((1, 2), (2, 2), [$-1 tp d_Q$]),
arr((1, 3), (2, 3), [$1 tp d_Q$]),
arr((0, 1), (1, 1), [$1 tp d_Q$]),
arr((0, 2), (1, 2), [$-1 tp d_Q$]),
arr((0, 0), (1, 0), [$-1 tp d_Q$]),
arr((0, 3), (0, 2), [$d_P tp 1$]),
arr((0, 3), (1, 3), [$1 tp d_Q$]),
arr((1, 0), (2, 0), [$-1 tp d_Q$]),
arr((0, 1), (0, 0), [$epsilon tp 1$], "dashed"),
arr((1, 1), (1, 0), [$epsilon tp 1$], "dashed"),
arr((2, 1), (2, 0), [$epsilon tp 1$], "dashed"),
))
where $C_(-1, q) = A tp Q_q$ and $C_(p, q) = P_p tp Q_q$ for any $p, q >= 0$.
Then
$
(Tot^xor (C)[-1])_n = Tot^xor (C)_(n-1) = Tot^xor (P tpr Q)_(n-1) xor (A tp Q_(n))
$
Meanwhile, the mapping cone of $f : Tot^xor (P tpr Q) -> A tpr Q $ has
$
cone(f)_n = Tot^xor (P tpr Q)_(n-1) xor (A tp Q_(n)).
$
Also $ d_(cone(f)) = (-(d^((P)) tp 1 + (-1)^p tp d^((Q))), 1 tp d^((Q)) - epsilon tp 1) = -d_(Tot^xor (C)[-1]), $
hence $cone(f) iso Tot^xor (C)[-1]$.
To show that $f$ is a quasi-isomorphism, we need to show $cone(f)$ is acyclic by @cone-qi. As any $Q_p$ is projective, $(- tpr Q_p)$ is exact by @projective-flat-1. Since $P_cx -> A$ is a resolution, every row of $C$ is exact. Since $C$ is upper half-plane, $Tot^xor (C)$ is acyclic by @aal. So $f$ is a quasi-isomorphism.
Similarly, we can show that $g$ is a quasi-isomorphism by forming a double complex $C'$, obtained by adding $P tpr B$ in the row $q = -1$ of $P tpr Q$.
// ($Q$ means $id_Q$ in proper places)
]
== Balancing $Ext$
#definition[
Given a chain complex $(P_cx, d^((P)))$ and a cochain complex $(I^cx, d_((I)))$, we can form the *Hom double complex* $ hom(P_cx, I^cx) = {hom (P_p, I^q)}_(p, q) $
with differentials#footnote[Here we alternate the signs for adjacent rows (instead of adjacent columns, as in the tensor product double complex). This sign convention, following @notes[p. 76], is different from that in @weibel[p. 62].]
$ d^h_(p, q) (f) &= (-1)^q f oo d^((P))_(p+1) in hom ( P_(p+1) , I^q) \ d^v_(p, q) (f) &= d_((I))^q oo f in hom (P_p, I^(q+1)) $
for $f in hom ( P_p , I^q )$.
Then we define the *Hom cochain complex*#footnote[@weibel[p. 62] writes this as $Tot^Pi (hom (P, I))$, but as we will see, in this case any diagonal slice has only finitely many terms, so the product and the direct sum coincide.] as
$ Tot^xor (hom(P, I)) $
]
An (anticommutative) diagram for the Hom double complex is as follows. The placeholder in function compositions is written as $square$ (instead of $-$ as in most parts of these notes) so that it is not confused with the minus sign. Note particularly the signs and indices in the horizontal differentials. Also note that each row and each column is a cochain complex.
// https://t.yw.je/#N4Igdg9gJg<KEY>Qy3iS+lsHZ81UC9XCgAs4vcuv1huIxtNDOV-LFQ<KEY>
#align(center, commutative-diagram(
node-padding: (60pt, 60pt),
node((3, 0), text(green)[$hom (P_0, I^0)$]),
node((2, 0), text(red)[$hom (P_0, I^1)$]),
node((1, 0), text(blue)[$hom (P_0, I^2)$]),
node((1, 1), text(orange)[$hom (P_1, I^2)$]),
node((2, 1), text(blue)[$hom (P_1, I^1)$]),
node((3, 1), text(red)[$hom (P_1, I^0)$]),
node((3, 2), text(blue)[$hom (P_2, I^0)$]),
node((2, 2), text(orange)[$hom (P_2, I^1)$]),
node((1, 2), text(navy)[$hom (P_2, I^2)$]),
node((0, 0), [$...$]),
node((0, 1), [$...$]),
node((0, 2), [$...$]),
node((1, 3), [$...$]),
node((2, 3), [$...$]),
node((3, 3), [$...$]),
arr((3, 0), (3, 1), [$square oo d^((P))_1$]),
arr((3, 1), (3, 2), [$square oo d^((P))_2$]),
arr((3, 0), (2, 0), [$d_((I))^0 oo square$]),
arr((3, 1), (2, 1), [$d_((I))^0 oo square$]),
arr((3, 2), (2, 2), [$d_((I))^0 oo square$]),
arr((2, 0), (1, 0), [$d_((I))^1 oo square$]),
arr((1, 0), (1, 1), [$square oo d^((P))_1$]),
arr((1, 1), (1, 2), [$square oo d^((P))_2$]),
arr((2, 1), (1, 1), [$d_((I))^1 oo square$]),
arr((2, 2), (1, 2), [$d_((I))^1 oo square$]),
arr((2, 0), (2, 1), [$-square oo d^((P))_1$]),
arr((2, 1), (2, 2), [$-square oo d^((P))_2$]),
arr((1, 0), (0, 0), []),
arr((1, 1), (0, 1), []),
arr((1, 2), (0, 2), []),
arr((1, 2), (1, 3), []),
arr((2, 2), (2, 3), []),
arr((3, 2), (3, 3), []),
))
#remark[
There are a few technicalities to be addressed here. They are not conceptually difficult but can be bewildering when first encountered.
Notice that in our original definition of a double complex, we would draw the arrows pointing downwards and to the left, which we refer to as a *canonical ordering*. However, when we draw the diagram for a Hom double complex, the arrows point upwards and to the right.
// // https://t.yw.je/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZABgBoBmAXVJADcBDAGwFcYkQALCAWw<KEY>
// #align(center, commutative-diagram(
// node-padding: (50pt, 50pt),
// node((3, 0), text(green)[$hom (P_0, I^0)$]),
// node((2, 0), text(red)[$hom (P_0, I^1)$]),
// node((1, 0), text(blue)[$hom (P_0, I^2)$]),
// node((1, 1), text(orange)[$hom (P_1, I^2)$]),
// node((2, 1), text(blue)[$hom (P_1, I^1)$]),
// node((3, 1), text(red)[$hom (P_1, I^0)$]),
// node((3, 2), text(blue)[$hom (P_2, I^0)$]),
// node((2, 2), text(orange)[$hom (P_2, I^1)$]),
// node((1, 2), text(navy)[$hom (P_2, I^2)$]),
// node((0, 0), [$...$]),
// node((0, 1), [$...$]),
// node((0, 2), [$...$]),
// node((1, 3), [$...$]),
// node((2, 3), [$...$]),
// node((3, 3), [$...$]),
// arr((3, 0), (3, 1), []),
// arr((3, 1), (3, 2), []),
// arr((3, 0), (2, 0), []),
// arr((3, 1), (2, 1), []),
// arr((3, 2), (2, 2), []),
// arr((2, 0), (1, 0), []),
// arr((1, 0), (1, 1), []),
// arr((1, 1), (1, 2), []),
// arr((2, 1), (1, 1), []),
// arr((2, 2), (1, 2), []),
// arr((2, 0), (2, 1), []),
// arr((2, 1), (2, 2), []),
// arr((1, 0), (0, 0), []),
// arr((1, 1), (0, 1), []),
// arr((1, 2), (0, 2), []),
// arr((1, 2), (1, 3), []),
// arr((2, 2), (2, 3), []),
// arr((3, 2), (3, 3), []),
// ))
Thus this is, strictly speaking, neither an upper half-plane complex nor a right half-plane complex, because if we wanted to turn the diagram into a canonically ordered one, we would need to reflect it to the "third quadrant". This ordering matters mainly because, in this case, it is more convenient to apply @aal-2 instead of @aal[Acyclic Assembly Lemma].
Another confusion that can easily arise from a non-canonical ordering is how to form the corresponding total complex.
Apart from converting the diagram to a canonically ordered one by reflection, a simple method is to select any object $A$ in the grid and draw a line $l$ connecting the arrowheads of the two arrows departing from $A$. Then every "diagonal slice", whose direct sum is a term of the total complex, must be parallel to this line $l$. This is simply because each arrow must point from one diagonal slice to another. For example, each diagonal slice of the Hom double complex has a distinct colour in the above diagram, and hence we see
$
Tot^xor (hom (P, I))^n = plus.circle.big_(p + q = n) hom(P_p, I^q)
$
This total complex is a _cochain_ complex#footnote[In fact, whether a total complex is a chain complex or a cochain complex can seem arbitrary, because this actually depends on how we index the diagonals. Here we see the Hom total complex as a cochain complex because it is more convenient in later proofs.
// because later on we would like to establish an isomorphism between the total complex and the cone complex of a cochain map (which is a cochain complex).
] because the differentials point from $Tot^xor (hom (P, I))^n$ to $Tot^xor (hom (P, I))^(n+1)$.
]
// #remark[If $C, D$ are chain complexes and we reindex $D$ to be a cochain complex. Then $H^n Tot^Pi hom (C, D)$ is the group of chain homotopy eq classes of morphisms $C -> D[-n]$.] (shown in the next section)
#remark[
Let $I^cx$ be a cochain complex of abelian groups and let $P_cx$ (resp. $Q_cx$) be a chain complex of right (resp. left) $R$-modules, then there is a natural isomorphism
$ hom_Ab (Tot^xor (P tpr Q), I) iso hom_R (P , Tot^Pi (hom_Ab (Q, I))). $
]
#endlec(11)
#theorem([Balancing of $Ext$])[ For all $n$,
$ Ext^n_R (A, B) = R^n hom_R (A, -) (B) iso R^n hom_R (-, B) (A) $
]
<balance-ext>
#proof[@weibel[Theorem 2.7.6, p.63].
// #align(center,image("../imgs/2023-11-23-03-27-44.png",width:80%))
Take a projective resolution $P_cx ->^epsilon A$ and an injective resolution $B ->^eta I^cx$. We can view $A$ and $B$ as complexes concentrated in degree $0$. We can form the double cochain complexes $hom(P, I)$, $hom(A, I)$ and $hom(P, B)$. As in the proof of @balance-tor, we need to show that the maps of Hom cochain complexes
$ f: hom(A, I) -> Tot^xor ( hom(P, I)) \ g: hom(P, B) -> Tot^xor (hom(P, I)) $ are quasi-isomorphisms. This is equivalent to $cone(f)$ and $cone(g)$ being acyclic by (the dual of) @cone-qi.
Let $C$ be the double complex $hom(P, I)$ with $hom(A, I)$ added to the column $p=-1$ using $epsilon : P_0 -> A$. We make it so that every added differential has a minus sign, as shown in the diagram.
// https://t.yw.je/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZARgBoBmAXVJADcBDAGwFcYkQALCAWwAIAKAAoB9AAyleASQB6ogJQgAvqXSZc+QijIAmanSat2XPkLESZxBctXY8BImWJ6GLNok48BI8VOnarKiAYthpE2q<KEY>
#align(center, commutative-diagram(
node-padding: (50pt, 50pt),
node((3, 1), [$hom (P_0, I^0)$]),
node((2, 1), [$hom (P_0, I^1)$]),
node((1, 1), [$hom (P_0, I^2)$]),
node((1, 2), [$hom (P_1, I^2)$]),
node((2, 2), [$hom (P_1, I^1)$]),
node((3, 2), [$hom (P_1, I^0)$]),
node((3, 3), [$hom (P_2, I^0)$]),
node((2, 3), [$hom (P_2, I^1)$]),
node((1, 3), [$hom (P_2, I^2)$]),
node((0, 1), [$...$]),
node((0, 2), [$...$]),
node((0, 3), [$...$]),
node((1, 4), [$...$]),
node((2, 4), [$...$]),
node((3, 4), [$...$]),
node((3, 0), [$hom(A, I^0)$]),
node((2, 0), [$hom(A, I^1)$]),
node((1, 0), [$hom(A, I^2)$]),
node((0, 0), [$...$]),
arr((3, 1), (3, 2), [$square oo d^((P))_1$]),
arr((3, 2), (3, 3), [$square oo d^((P))_2$]),
arr((3, 1), (2, 1), [$d_((I))^0 oo square$]),
arr((3, 2), (2, 2), [$d_((I))^0 oo square$]),
arr((3, 3), (2, 3), [$d_((I))^0 oo square$]),
arr((2, 1), (1, 1), [$d_((I))^1 oo square$]),
arr((1, 1), (1, 2), [$square oo d^((P))_1$]),
arr((1, 2), (1, 3), [$square oo d^((P))_2$]),
arr((2, 2), (1, 2), [$d_((I))^1 oo square$]),
arr((2, 3), (1, 3), [$d_((I))^1 oo square$]),
arr((2, 1), (2, 2), [$-square oo d^((P))_1$]),
arr((2, 2), (2, 3), [$-square oo d^((P))_2$]),
arr((1, 1), (0, 1), []),
arr((1, 2), (0, 2), []),
arr((1, 3), (0, 3), []),
arr((1, 3), (1, 4), []),
arr((2, 3), (2, 4), []),
arr((3, 3), (3, 4), []),
arr((3, 0), (2, 0), [$-d_((I))^0 oo square$]),
arr((2, 0), (1, 0), [$-d_((I))^1 oo square$]),
arr((1, 0), (0, 0), []),
arr((1, 0), (1, 1), [$-square oo epsilon$], "dashed"),
arr((2, 0), (2, 1), [$-square oo epsilon$], "dashed"),
arr((3, 0), (3, 1), [$-square oo epsilon$], "dashed"),
))
We observe that $cone(f) iso Tot^xor (C)$ (both their terms and differentials match). Every $hom(-, I^q)$ is exact, so every row of $C$ is exact, and hence $Tot^xor (C)$ is acyclic by @aal-2. Similarly, we can show that $cone(g)$ is acyclic.
Then applying cohomology yields
$ R^ast hom(A, -) (B) &= H^ast hom (A, I) \ &iso H^ast Tot^xor ( hom(P, I)) \ &iso H^ast hom(P, B) = R^ast hom(-, B) (A). $
]
// #TODO Tot and everything should be cochain instead of chain complex !!! => so that we can take cohomology...
// #TODO mapping cone of a *cochain* complex
Now that we have gained some experience with non-canonically ordered double complexes, we introduce another form of a Hom double complex.
#definition[
Given two chain complexes $(P_cx, d^((P)))$ and $(Q_cx, d^((Q)))$, we can form the *Hom double complex*
$
hom (P_cx, Q_cx) = { hom(P_p, Q_q) }_(p, q)
$
with differentials
$ d^h_(p, q) (f) &= (-1)^q f oo d^((P))_(p+1) in hom ( P_(p+1) , Q_q) \ d^v_(p, q) (f) &= d^((Q))_q oo f in hom (P_p, Q_(q-1)) $
for $f in hom ( P_p , Q_q )$.
Then we define the *Hom cochain complex* as
$Tot^Pi (hom(P, Q)). $
]
We draw the (non-canonically ordered) double complex $hom (P, Q)$ as follows. Note that each row is a cochain complex, while each column is a chain complex.
// https://t.yw.je/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZABgBoBmAXVJADcBDAGwFcYkQALCAWwAoAFAPpkABAEVhAShABfUuky58hFAEYK1Ok1bsufIevFTZ8kBmx4CRAEwaaDFm0SceAwbaPFpchReVEya00HHWc9N1EJVW9TcyUrNVIg+20nF31BQyiY33iVZFtkrUddVyEPbJNcy3yyVWDU0oz<KEY>
#align(center, commutative-diagram(
node-padding: (60pt, 60pt),
node((3, 0), text(blue)[$hom(P_0, Q_0)$]),
node((3, 1), text(red)[$hom(P_1, Q_0)$]),
node((3, 2), text(green)[$hom(P_2, Q_0)$]),
node((2, 0), text(orange)[$hom(P_0, Q_1)$]),
node((2, 1), text(blue)[$hom(P_1, Q_1)$]),
node((2, 2), text(red)[$hom(P_2, Q_1)$]),
node((1, 0), text(navy)[$hom(P_0, Q_2)$]),
node((1, 1), text(orange)[$hom(P_1, Q_2)$]),
node((1, 2), text(blue)[$hom(P_2, Q_2)$]),
node((0, 0), [$...$]),
node((0, 1), [$...$]),
node((0, 2), [$...$]),
node((1, 3), [$...$]),
node((2, 3), [$...$]),
node((3, 3), [$...$]),
arr((3, 0), (3, 1), [$square oo d^((P))_1$]),
arr((3, 1), (3, 2), [$square oo d^((P))_2$]),
arr((2, 0), (2, 1), [$-square oo d^((P))_1$]),
arr((2, 1), (2, 2), [$-square oo d^((P))_2$]),
arr((1, 0), (1, 1), [$square oo d^((P))_1$]),
arr((1, 1), (1, 2), [$square oo d^((P))_2$]),
arr((1, 0), (2, 0), [$d^((Q))_2 oo square$]),
arr((2, 0), (3, 0), [$d^((Q))_1 oo square$]),
arr((1, 1), (2, 1), [$d^((Q))_2 oo square$]),
arr((2, 1), (3, 1), [$d^((Q))_1 oo square$]),
arr((1, 2), (2, 2), [$d^((Q))_2 oo square$]),
arr((2, 2), (3, 2), [$d^((Q))_1 oo square$]),
arr((0, 0), (1, 0), []),
arr((0, 1), (1, 1), []),
arr((0, 2), (1, 2), []),
arr((1, 2), (1, 3), []),
arr((2, 2), (2, 3), []),
arr((3, 2), (3, 3), []),
))
// #align(center,image("../imgs/2023-11-23-22-22-07.png",width:80%))
// https://t.yw.je/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJ<KEY>
// #align(center, commutative-diagram(
// node-padding: (50pt, 50pt),
// node((3, 0), text(blue)[$hom(P_0, Q_0)$]),
// node((3, 1), text(red)[$hom(P_1, Q_0)$]),
// node((3, 2), text(green)[$hom(P_2, Q_0)$]),
// node((2, 0), text(orange)[$hom(P_0, Q_1)$]),
// node((2, 1), text(blue)[$hom(P_1, Q_1)$]),
// node((2, 2), text(red)[$hom(P_2, Q_1)$]),
// node((1, 0), text(navy)[$hom(P_0, Q_2)$]),
// node((1, 1), text(orange)[$hom(P_1, Q_2)$]),
// node((1, 2), text(blue)[$hom(P_2, Q_2)$]),
// node((0, 0), [$...$]),
// node((0, 1), [$...$]),
// node((0, 2), [$...$]),
// node((1, 3), [$...$]),
// node((2, 3), [$...$]),
// node((3, 3), [$...$]),
// arr((3, 0), (3, 1), []),
// arr((3, 1), (3, 2), []),
// arr((2, 0), (2, 1), []),
// arr((2, 1), (2, 2), []),
// arr((1, 0), (1, 1), []),
// arr((1, 1), (1, 2), []),
// arr((1, 0), (2, 0), []),
// arr((2, 0), (3, 0), []),
// arr((1, 1), (2, 1), []),
// arr((2, 1), (3, 1), []),
// arr((1, 2), (2, 2), []),
// arr((2, 2), (3, 2), []),
// arr((0, 0), (1, 0), []),
// arr((0, 1), (1, 1), []),
// arr((0, 2), (1, 2), []),
// arr((1, 2), (1, 3), []),
// arr((2, 2), (2, 3), []),
// arr((3, 2), (3, 3), []),
// ))
The $n$-th term of the total cochain complex is
$
[Tot^Pi (hom (P_cx, Q_cx))]^n = product_(p >= max{0, n}) hom (P_p, Q_(p - n)),
$
which is the product of infinitely many terms.
It turns out that this construction leads to a further way to compute $Ext$:
#theorem[
Let $P_cx -> A$ and $Q_cx -> B$ be projective resolutions, then
$
Ext^n_R (A, B) iso H^n Tot^Pi (hom_R (P, Q)).
$
]
<balance-ext-2>
#proof[
@notes[Lemma 8.16]. The proof is similar to previous ones, so we present it briefly here.
// https://t.yw.je/#N4Igdg9gJgpgziAXAbVABwnAlgFyxMJZABgBoBmAXVJADcBDAGwFcYkQALCAWwAoAFAPpkABAEVhAShABfUuky58hFAEYK1Ok1bsufIevFTZ8kBmx4CRAEwaaDFm0SceAwbaPFpchReVEya00HHWc9N1EJVW9TcyUrNVIg+20nF31BQyiY33iVZFtkrUddVyEPbJNcy3yyVWDU0ozI9xyzRRqidXqUkrCyzNIjaza4zpRbHuLQ9LcK1qr2vwSSUmIGvpAAOh3Fsf9E9d6Zna29joOCtY2T3Z8lvKJyUimQtNPz5fznorf2D-u+xWzyox3ed1iFxWZAALDc0noRBEhgAhUZQ-LqOFgppIgyo9FfGykbHTBE8PHuAmfR<KEY>
#align(center, commutative-diagram(
node-padding: (50pt, 50pt),
node((3, 0), [$hom(P_0, Q_0)$]),
node((3, 1), [$hom(P_1, Q_0)$]),
node((3, 2), [$hom(P_2, Q_0)$]),
node((2, 0), [$hom(P_0, Q_1)$]),
node((2, 1), [$hom(P_1, Q_1)$]),
node((2, 2), [$hom(P_2, Q_1)$]),
node((1, 0), [$hom(P_0, Q_2)$]),
node((1, 1), [$hom(P_1, Q_2)$]),
node((1, 2), [$hom(P_2, Q_2)$]),
node((0, 0), [$...$]),
node((0, 1), [$...$]),
node((0, 2), [$...$]),
node((1, 3), [$...$]),
node((2, 3), [$...$]),
node((3, 3), [$...$]),
node((4, 0), [$hom (P_0, B)$]),
node((4, 1), [$hom (P_1, B)$]),
node((4, 2), [$hom (P_2, B)$]),
node((4, 3), []),
arr((3, 0), (3, 1), [$square oo d^((P))_1$]),
arr((3, 1), (3, 2), [$square oo d^((P))_2$]),
arr((2, 0), (2, 1), [$-square oo d^((P))_1$]),
arr((2, 1), (2, 2), [$-square oo d^((P))_2$], label-pos: left),
arr((1, 0), (1, 1), [$square oo d^((P))_1$]),
arr((1, 1), (1, 2), [$square oo d^((P))_2$]),
arr((1, 0), (2, 0), [$d^((Q))_2 oo square$]),
arr((2, 0), (3, 0), [$d^((Q))_1 oo square$]),
arr((1, 1), (2, 1), [$d^((Q))_2 oo square$]),
arr((2, 1), (3, 1), [$d^((Q))_1 oo square$]),
arr((1, 2), (2, 2), [$d^((Q))_2 oo square$]),
arr((2, 2), (3, 2), [$d^((Q))_1 oo square$]),
arr((0, 0), (1, 0), []),
arr((0, 1), (1, 1), []),
arr((0, 2), (1, 2), []),
arr((1, 2), (1, 3), []),
arr((2, 2), (2, 3), []),
arr((3, 2), (3, 3), []),
arr((3, 0), (4, 0), [$eta oo square$], "dashed"),
arr((3, 1), (4, 1), [$eta oo square$], "dashed"),
arr((3, 2), (4, 2), [$eta oo square$], "dashed"),
arr((4, 0), (4, 1), [$-square oo d^((P))_1$]),
arr((4, 1), (4, 2), [$-square oo d^((P))_2$]),
arr((4, 2), (4, 3), []),
))
Let $C$ be the double complex obtained by adding $hom(P, B)$ to the row $q = -1$ of the double complex $hom(P, Q)$. Since each $P_p$ is projective, $hom (P_p, -)$ is exact and so each column of $C$ is exact. $C$ can be turned into a (canonically ordered) upper half-plane complex (by reflecting it to the "second quadrant"), so @aal applies and $Tot^Pi (C)$ is acyclic. Again, observe that $Tot^Pi (C) iso cone(f)$ where
$
f: Tot^Pi (hom (P,Q)) -> hom (P, B)
$
is the cochain map induced by $eta: Q_cx -> B$. Hence $f$ is a quasi-isomorphism, but $H^ast hom (P, B) iso Ext^ast_R (A, B)$ by the proof of @balance-ext, so the result follows.
]
// Applying cohomology to this total cochain complex yields $Ext^ast _R (M, N)$.
// #TODO modify in balancing Tor how you write the differentials e.g. it should be $d^((P))$

// Source: https://github.com/SWATEngineering/Docs — src/2_RTB/VerbaliInterni/VerbaleInterno_240111/content.typ (typst, MIT License)
#import "meta.typ": inizio_incontro, fine_incontro, luogo_incontro
#import "functions.typ": glossary, team
#let participants = csv("participants.csv")
= Participants
/ Meeting start: #inizio_incontro
/ Meeting end: #fine_incontro
/ Meeting location: #luogo_incontro
#table(
columns: (3fr, 1fr),
[*Name*], [*Attendance duration*],
..participants.flatten()
)
= Meeting Summary
/*************************************/
/* INSERT THE CONTENT BELOW */
/*************************************/
== Completion of the _Piano di Progetto_
The meeting opened with a discussion of the current state of the _Piano di Progetto_ and of what still needs to be added in order to finish drafting it ahead of the first #glossary[RTB] review; in particular, the team analysed and approved the new estimate drawn up by the Responsabile, which implements a different distribution of the productive hours across roles and, as a consequence, changes the budget initially planned for the project (from 11070€ to 11555€) while keeping the same total number of hours (95 per member, i.e. 570). The reasons that guided the team towards this decision are the following:
- The hours allocated to the Responsabile role increase in the new estimate (60 were initially planned, now 75), since the team has already used 40 of them over the eight #glossary[sprint]s planned before the #glossary[RTB] review and, given the current trend, the remaining hours would not have been enough to cover the #glossary[sprint]s making up the periods preceding the #glossary[PB] and #glossary[CA] reviews. Moreover, the responsibilities attached to the Responsabile role are by now numerous (as highlighted in the _Norme di Progetto v1.0_), and the role cannot be left vacant in any of the #glossary[sprint]s;
- The hours allocated to the Amministratore role increase significantly in the new estimate (48 were initially planned, now 92), since the team has already used no fewer than 56 of them over the first eight #glossary[sprint]s and, as a consequence, no hours were left to cover the #glossary[sprint]s following the #glossary[RTB] review. It is easy to see that some team members had to use more hours than initially assigned to them in this role, mainly because the Amministratore is in charge of carrying forward two documents that are particularly relevant for the #glossary[RTB] review, namely the _Norme di Progetto v1.0_ and the _Piano di Qualifica v1.0_. The hours in the new estimate amount to 92 and no more because it was taken into account that the Amministratore will likely have fewer responsibilities in the future, no longer having to draft the new documents planned for the second #glossary[PB] review;
- The hours allocated to the Progettista role are unchanged in the new estimate (84 are still planned), since the team has used only 1 of them, at the start of the #glossary[PoC] development (realising afterwards that the #glossary[PoC] essentially involves no design work), and consequently there is not enough data to decide whether the number of hours needs to change. The most sensible choice is therefore not to modify the initial estimate in this respect (at least for the moment);
- The hours allocated to the Analista role increase slightly in the new estimate (54 were initially planned, now 58), since the team has used 42 of them over the first eight #glossary[sprint]s and the remaining hours would not have allowed substantial changes to the _Analisi dei Requisiti v1.0_ had they become necessary after the #glossary[RTB] review. The team used more hours than planned because the section on use cases (and the corresponding requirements) had to be rewritten essentially from scratch after the meeting with professor Cardin that followed a first draft;
- The hours allocated to the Programmatore role decrease significantly in the new estimate (162 were initially planned, now 124), since the team has used only 30 of them over the first eight #glossary[sprint]s, which is understandable given that the #glossary[PoC] development presented no particular difficulties or surprises, and the hours remaining under the new estimate are deemed more than enough to complete the development of the #glossary[MVP] ahead of the #glossary[PB] review and to add further features ahead of the #glossary[CA] review as well;
- The hours allocated to the Verificatore role decrease in the new estimate (162 were initially planned, now 137); the team has used 65 of them over the first eight #glossary[sprint]s. Compared with the old estimate, however, the Verificatore hours grow relative to the Programmatore hours, because this has turned out to be the role that has consumed the largest number of productive hours overall so far, and the responsibilities of the Verificatori will only increase after the #glossary[RTB] review. Besides reviewing documentation and code, they will also be responsible for developing all the tests ensuring that the software product meets the quality requirements, which will be specified in ever greater detail in the _Piano di Qualifica_.
Finally, it was decided to adopt an automation that generates the tables and charts related to the estimates, similar to the one previously built for the actual-hours reports, and to limit the Gantt charts in the planning section to the three originally produced for the three main #glossary[baseline]s (the reviews).
== Changes to the _Piano di Qualifica_
The team then discussed the most appropriate way to report and highlight the effects of the newly estimated budget (new BAC, or "Budget At Completion") within the _Piano di Qualifica v1.0_. Two main alternatives were explored during the discussion: applying the new BAC only to the #glossary[sprint]s following the #glossary[RTB] review, or applying it from #glossary[sprint] 7 (which coincides with the creation of the new estimate) onwards. The team chose the second alternative, with the intention of highlighting the team's adaptation to the course of the project up to the seventh #glossary[sprint]. In this way, the aim is to assess whether the acceptable values initially set for the metrics measured up to that point are adequate with respect to the new BAC.
== Final improvements to the #glossary[PoC]
The team decided to implement the final improvements to the #glossary[PoC] suggested by the Proponente ahead of the #glossary[RTB] review, namely adding tables containing the raw data sent by the sensor simulators and filtering those data using #glossary[Grafana] variables.
== Application date for the #glossary[RTB] review
Finally, given that both the _Analisi dei Requisiti v1.0_ and the #glossary[PoC] are almost entirely complete, the team collectively decided to apply for the first #glossary[RTB] review at the end of the current #glossary[sprint], that is on 19 January 2024. After the application, the team plans to finalise the presentation for the meeting with the Committente Cardin and to start and complete the #glossary[walkthrough]-style review of all the documentation relevant to the second meeting with the Committente Vardanega.
== Role rotation
- <NAME>: Programmatore;
- <NAME>: Amministratore;
- <NAME>: Verificatore;
- <NAME>: Verificatore;
- <NAME>: Responsabile;
- <NAME>: Amministratore, Verificatore.

// Source: https://github.com/diogro/memorial — UFSCAR-2024/cv.typ (typst)
#set text(
font: "Skolar PE TEST",
size: 10pt,
lang: "por",
region: "br"
)
#set page(
paper: "a4",
header:[
#set text(9pt, font: "Skolar PE TEST")
Curriculum vitae
#h(1fr) <NAME>
],
numbering: "1",
)
#show par: set block(spacing: 1em)
#set par(
leading: 1em,
first-line-indent: 1em,
justify: true
)
#show heading: it => {
block[#it.body]
v(0.3em)
}
== Personal information
*<NAME>*
- RG: 35.002.110-7
- Email: <EMAIL>
- Google Scholar: #link("https://scholar.google.com/citations?user=ymsxHCMAAAAJ&hl=en")
= Degrees
- 2010-12 - *Programa de Pós Graduação em Genética e Biologia Evolutiva*
- Advisor: <NAME>
- Master of Science with the dissertation "Evolução Morfológica e Modularidade"
- Universidade de São Paulo
- Document @fig-diploma_mest[]
= Scientific Production
== Publications in indexed journals as first or corresponding author
1. _Characterizing the landscape of gene expression variance in humans_ #link("https://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1010833")[(#underline[link])] (2023)
<NAME>.\*, *<NAME>*\*, <NAME>, <NAME>, <NAME>. _PLoS Genetics_. 19, e1010833. #link("http://dx.doi.org/10.1371/journal.pgen.1010833")[(#underline[link])]
- \* co-first authors
- Document @fig-exp_var[]
2. _Genomic Perspective on Multivariate Variation, Pleiotropy, and Evolution_ #link("https://academic.oup.com/jhered/article/110/4/479/5463195")[(#underline[link])] (2019)
*<NAME>*, <NAME>, <NAME>. _Journal Of Heredity_. 2019 110(4):479-493.
- Document @fig-qtls[]
3. _The evolution of phenotypic integration: How directional selection reshapes covariation in mice_ #link("http://onlinelibrary.wiley.com/doi/10.1111/evo.13304/abstract")[(#underline[link])] (2017)
<NAME>.\*, *<NAME>*\*, <NAME>, <NAME>, <NAME>. _Evolution_, 2017 71(10):2370–2380.
- \* co-first authors
- Document @fig-ratones[]
4. _Modularity: Genes, Development, and Evolution_ #link("http://annualreviews.org/doi/abs/10.1146/annurev-ecolsys-121415-032409")[(#underline[link])] (2016) *<NAME>*\*, <NAME>\*, <NAME>, <NAME>. _Annual Review of Ecology, Evolution, and Systematics_, 2016 47:463-486
- \* co-first authors
- Document @fig-Modularity_Genes[]
5. _EvolQG - An R package for evolutionary quantitative genetics_ #link("http://f1000research.com/articles/4-925/v3")[(#underline[link])] (2015)
*<NAME>*, <NAME>, <NAME>, <NAME>, <NAME>._F1000 Research_, 2015 4:925
- Document @fig-evolqg[]
6. _Directional Selection can Drive the Evolution of Modularity in Complex Traits_ #link("http://www.pnas.org/content/112/2/470.abstract")[(#underline[link])] (2015)
*<NAME>*, <NAME>. _PNAS_, 2015 112(2):470-475
- Document @fig-DirectionalSelection[]
== Publications in indexed journals as contributing author
1. _Morphological integration during postnatal ontogeny: implications for evolutionary biology_ #link("https://academic.oup.com/evolut/article/77/3/763/6957025")[(#underline[link])] (2023)
<NAME>., <NAME>, *<NAME>*, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>. _Evolution_. 2023 77(3): 763–775. #link("https://doi.org/10.1093/evolut/qpac052")[(#underline[link])]
- Document @fig-ontogenia[]
2. _Are cats less stressed in homes than in shelters? A study of personality and faecal cortisol metabolites levels_ #link("https://www.sciencedirect.com/science/article/abs/pii/S0168159119301881")[(#underline[link])] (2020)
Fukim<NAME>., *Diogo Melo*, <NAME>, <NAME>, <NAME>. _Applied Animal Behaviour Science_. 2020 224:104919.
- Document @fig-cats[]
3. _Measuring the magnitude of morphological integration: the effect of differences in morphometric representations and the inclusion of size_ #link("https://onlinelibrary.wiley.com/doi/abs/10.1111/evo.13864")[(#underline[link])] (2019)
<NAME>., <NAME>, *D<NAME>*, <NAME>, <NAME>. _Evolution_. 2019 73(12):2518-2528.
- Document @fig-Integration[]
4. _Insights from Systems Biology in Physiological Studies: Learning from Context_ #link("https://www.karger.com/Article/FullText/478648")[(#underline[link])] (2017)
<NAME>., *D<NAME>*, <NAME>. _Cell Physiology and Biochemestry_, 2017 42(3):939-951.
- Document @fig-sysbio[]
5. _Costly learning: preference for familiar food persists despite negative impact on survival_ #link("http://rsbl.royalsocietypublishing.org/content/12/7/20160256")[(#underline[link])] (2016)
<NAME>., <NAME>, *D<NAME>*, <NAME>. _Biology Letters_, 2016 20160256.
- Document @fig-Costly[]
6. _A case study of extant and extinct Xenarthra cranium covariance structure: Implications and applications to paleontology_ #link("http://www.bioone.org/doi/abs/10.1017/pab.2015.49")[(#underline[link])] (2016)
<NAME>., *<NAME>*, <NAME>. _Paleobiology_, 2016 42(3):465-488
- Document @fig-Xenarthra[]
7. _Fitness Trade-offs Result in the Illusion of Social Success_ #link("https://doi.org/10.1016/j.cub.2015.02.061")[(#underline[link])] (2015)
<NAME>., <NAME>, <NAME>, <NAME>, *<NAME>*, <NAME>, <NAME>. _Current Biology_, 2015 25(8):1086–1090
- Document @fig-dicty[]
== Book chapters
1. _How does modularity in the genotype-phenotype map interact with development and evolution?_ #link("https://link.springer.com/chapter/10.1007/978-3-030-18202-1_11")[(#underline[link])] (2019) *Diogo Melo*. In: _Old Questions and Young Approaches to Animal Evolution_. <NAME>., <NAME>. (eds) Fascinating Life Sciences. Springer, Cham.
- Document @fig-cap_oqya[]
2. _Modularity and Integration_ #link("http://www.sciencedirect.com/science/article/pii/B9780128000496000445")[(#underline[link])] (2016) <NAME>., <NAME>, <NAME>, *<NAME>*, <NAME>. _Encyclopedia of Evolutionary Biology_, 2016 vol. 3, pp. 34–40. Oxford: Academic Press.
- Document @fig-cap_mod[]
== Articles prior to 2014
1. _Modularity, Noise, and Natural Selection_ #link("http://onlinelibrary.wiley.com/doi/10.1111/j.1558-5646.2011.01555.x/abstract")[(#underline[link])] (2012)
<NAME>., *<NAME>*, <NAME>. _Evolution_, 2012 66(5):1506–1524
- Document @fig-Noise[]
2. _Selection Response Decomposition (SRD): A New Tool for Dissecting Differences and Similarities Between Matrices_ #link("http://link.springer.com/article/10.1007%2Fs11692-010-9107-2")[(#underline[link])] (2012)
<NAME>., *<NAME>*, <NAME>, <NAME> and <NAME>. _Evolutionary Biology_, 2011 38:225-241
- Document @fig-SRD[]
== Preprint publications
1. _Reassessing the modularity of gene co-expression networks using the Stochastic Block Model_ #link("https://www.biorxiv.org/content/10.1101/2023.05.31.542906v1")[(#underline[link])] (2023)
*<NAME>*, <NAME>, <NAME>. bioRxiv 2023.05.31.542906. doi: _10.1101/2023.05.31.542906_ #link("https://doi.org/10.1101/2023.05.31.542906")[(#underline[link])]
- Document @fig-SBM[]
2. _Saturating the eQTL map in Drosophila melanogaster: genome-wide patterns of cis and trans regulation of transcriptional variation in outbred populations_ #link("https://www.biorxiv.org/content/10.1101/2023.05.20.541576v3")[(#underline[link])] (2023)
<NAME>, *<NAME>*, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>. bioRxiv. doi: _10.1101/2023.05.20.541576_ #link("https://doi.org/10.1101/2023.05.20.541576")[(#underline[link])]
- Document @fig-eQTL[]
3. _From GWAS to signal validation: An approach for estimating SNP genetic effects while preserving genomic context_ #link("https://www.biorxiv.org/content/10.1101/2023.03.09.531909v3")[(#underline[link])] (2023)
<NAME>., <NAME>, *<NAME>*, <NAME>, <NAME>. bioRxiv. doi: _10.1101/2023.03.09.531909_ #link("https://doi.org/10.1101/2023.03.09.531909")[(#underline[link])]
- Document @fig-FocalSNP[]
= Teaching activities
1. *Programming for Biology* (2023)
- Undergraduate course at Princeton University
- Department of Ecology and Evolutionary Biology
- Course load of 50 hours
- Documents @fig-EEB330[] and @fig-genomics[]
2. *Scientific Writing* (2020, 2021)
- Course taught in the Graduate Program in Biological Sciences (Zoology)
- Course load of 120 hours
- Document @fig-escrita[]
3. *Modularity: connecting patterns and processes in multivariate evolution* (2019)
- Course taught in the Graduate Program in Comparative Biology at FFCLRP/USP
- Course load of 30 hours
- Document @fig-modularidade[]
= Technical and professional activities
== Postdoctoral research
1. 2020-present - *Postdoctoral researcher in the Department of Ecology and Evolution*
- Funded by the _Princeton Presidential Postdoctoral Research Fellows Program_
- Princeton University, NJ, USA
- Document @fig-princeton[]
2. 2019-2020 - *Postdoctoral researcher in the Department of Genetics and Evolutionary Biology*
- Funded by a FAPESP postdoctoral fellowship
- Institute of Biosciences, University of São Paulo
- Document @fig-outorga_postdoc[]
== Research internship abroad during the doctorate
3. 2014, 2016 - *Visiting doctoral student at the University of Bath*
- Supervisor: Dr. <NAME>
- Funded by a BEPE-FAPESP doctoral fellowship (2016)
- Funded by the _Future Research Leaders Incubator Scheme_ (2016) and the _Global Research Scholarship Scheme_ (2014) of the University <NAME>
- University of Bath, Somerset, United Kingdom
- Total of 8 months
- Documents @fig-bath_letter[] and @fig-outorga_BEPE[]
= Presentations at conferences and scientific meetings
1. *Society for Molecular Biology and Evolution (SMBE) Meeting*, Ferrara, Italy (2023)
- <NAME>., *<NAME>*, <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>.
- Longitudinal sequencing reveals polygenic and epistatic nature of genomic response to selection
- Poster
- Document @fig-smbe2023[]
2. *Ecological and Evolutionary Genomics Gordon Research Conference*, Smithfield, RI, United States of America (2023)
- <NAME>., *<NAME>*, <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>.
- Longitudinal sequencing reveals polygenic and epistatic nature of genomic response to selection
- Poster
- Document @fig-GRC2023[]
3. *II Joint Congress on Evolutionary Biology*, Montpellier, France (2018)
- *<NAME>*, <NAME>, <NAME>, <NAME>, <NAME>
- Genetic architecture and the evolution of variational modularity
- Poster
- Document @fig-evol2018[]
4. *5th meeting of the European Society for Evolutionary Developmental Biology (EED)*, Uppsala, Sweden (2016)
- *<NAME>*, <NAME>
- The Evolution of Pleiotropy and Modularity
- Invited speaker
- Document @fig-evodevo[]
5. A *Evolution Meeting*, Guarujá, Brazil (2015)
- *<NAME>*, <NAME>, <NAME>, <NAME>, <NAME>
- Changes in multivariate covariance structures under directional selection
- Oral presentation
- Document @fig-evol2015L[]
5. B *Evolution Meeting*, Guarujá, Brazil (2015)
- *<NAME>*, <NAME>
- The Effect of Directional Selection on Pleiotropy and Modularity
- Poster
- Document @fig-evol2015P[]
#pagebreak()
#include "certificados.typ" |
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/042%20-%20Strixhaven%3A%20School%20of%20Mages/008_Episode%205%3A%20Final%20Exam.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Episode 5: Final Exam",
set_name: "Strixhaven: School of Mages",
story_date: datetime(day: 21, month: 04, year: 2021),
author: "<NAME>",
doc
)
The bellowing of the thing in the Snarl was like nothing Will had ever heard before. The roar reached down into his heart, promising every variety of violence and death. With each moment that passed, the creature dragged itself a little farther out of the vortex of power. Above Will and Rowan, a rafter plummeted to the floor, crashing with a phenomenal sound inches from their feet.
#figure(image("008_Episode 5: Final Exam/01.jpg", width: 100%), caption: [Awaken the Blood Avatar | Art by: Kekai Kotaki], supplement: none, numbering: none)
"They thought I would never make anything of myself—that I didn't belong here, with all of their high and mighty oracles." Extus cackled. He whirled, gesturing wildly at the statues surrounding him. "But where are they now? Who will help you in your time of need?"
Extus's laughter grew more crazed as the Blood Avatar's axe hit one of the statues, splitting the likeness of a former oracle in half. The head and raised arms toppled over, shattering on the ground.
Will helped Rowan to her feet, their robes both soaked with the blood that filled the room—more blood than could have been shed in a hundred battles. "It's going to destroy the school," said Will, trying to keep his voice from shaking.
Somehow, though, Rowan didn't look scared. There was a focus to her eyes that he'd never seen in class, in the study hall, in their dorm. He understood something about his sister for the first time, then—that this is where her talents lay. Running into the storm.
"Not if we can help it," she said, and he nodded.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Outside, all across the Strixhaven campus, that terrible roar echoed. It found Dean Uvilda sealing a pack of escaped Prismari students into a dimension that was to be used for emergencies only; she turned her head at the sound, shuddered, and picked up the pace on her spell. It found Plink and Auvernine, crawling through a dark tunnel of roots and soil while dragonfire overhead reduced everything on the surface to cinders. It found Lukka, even through the concentration it took from him to keep all the mage hunters fighting on several fronts. #emph[So, it's done] , he thought.
Lukka grinned down at Mila. "Looks like Extus got what he wanted."
She didn't look at him, though, only stared up at the sky, eyes wide, fur standing on end. A moment later, she jumped under a collapsed awning. Lukka didn't see what she did—not until it would have been too late—but he trusted her enough to dive right after.
The dragonfire scoured the cobblestones where he had been standing a moment before, scorching the pathway black. The swath of mage hunters close by ignited almost instantly, screeching and hissing as they died. Flashes of searing pain flooded his mind all at once, and he severed the link before he could be overwhelmed.
#figure(image("008_Episode 5: Final Exam/02.jpg", width: 100%), caption: [Draconic Intervention | Art by: <NAME>], supplement: none, numbering: none)
The mage hunters who had managed to evade the dragons' attack shuddered and twitched as their minds once more became their own. They clicked their many teeth together, spread out those glowing feelers, and turned on the nearest source of magical sustenance: Oriq agents. Fresh screams filled the air as the creatures pounced.
Lukka's eyes went wide as he took in the carnage. Mila took a step forward, but Lukka stopped her with a quick mental command.
This was no longer his fight.
Calling Mila to his side, Lukka turned and ran into the dark.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Will ducked away as more debris rained down around them. Just as another statue came crashing to the floor, he saw it: an opening in the chaos. He took a shaky breath and tried to remember the details of the iterated condensation spell. With an exertion of focus, he created a swirling vortex of razor-sharp ice shards and sent them zipping toward Extus, who was still standing before the Blood Avatar. The Oriq's arms were spread; he seemed to be aware of nothing besides his own victory.
Before the ice could reach him, a bolt of lightning cracked through the shards, detonating them and sending the lightning skittering off in random directions. Rowan had seen the opening, too. "Stay out of the way!" she yelled.
"We need to work together!" he shouted back. "All we need to do is—"
He was interrupted by a chunk of falling rubble, which clipped him on the shoulder and sent him sprawling.
"Will!" shouted Rowan, running toward him.
It was impossible to see if her brother was hurt—there was blood everywhere, covering the floor, their robes, splashing up to the walls and over the statues.
She had almost reached him when the creature's enormous sword cleaved into the stone in front of her. It was close enough that she could see the rust on it, the pocked iron from battles fought eons ago. With a furious shout of her own, she pressed her hands up against it, running a charge up the sword like a lightning rod and into the thing's hand. The monster only pulled its sword free, throwing her backward.
Dragging herself back against the wall, Rowan's gaze swung between Extus, the creature he'd summoned, and Will, who now lay far too still on the ground. It was too much. Blinking against the tears welling in her eyes, Rowan felt a cold anger rising from somewhere inside her—rage, overwhelming the fear and the pain. She couldn't win, but she could hurt the one who did this.
But before she could send a bolt of lightning into the Oriq leader's back, her gaze shifted. The Snarl hung in the air, still brilliant, even in crimson. Still rippling with power.
Rowan took a deep breath, closed her eyes, and reached out.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
The building shook around him, the Blood Avatar of the old world roared with unmatched rage, and to Extus, all was finally right with the world. He turned slowly, taking in the sight of the Hall of Oracles crumbling. They had been fools to pass him over. It may have taken him years to prove it, but as another statue toppled to the ground and shattered into a thousand pieces, he told himself that the wait had been worth it.
His smile faltered as the Blood Avatar's voice broke, the furious roar suddenly cut short. Extus spun on his heels and froze. The creature was moving in jerks and fits, the red light of the Snarl flickering wildly behind it. He had seen this effect before, in his many failures. #emph[It can't be.]
He had checked the calculations. This had to work—there was enough magical energy in one of Arcavios's Snarls to power any spell known to creation. #emph[How could it not be enough? ] Then he spotted it: a coil of ghostly vermillion drifting off the nexus, as if being drawn out. An errant tendril of power.
He followed it back, back to the young blond girl in the corner of the room, staring at him now with wide, hateful eyes. Lightning began to flash and crackle over the girl's hair and skin as the energy of the Snarl rushed into her.
Extus met her gaze, too shocked to move.
There was no way that some pitiful first-year would be the one to ruin all of his plans.
#emph[Was there?]
#figure(image("008_Episode 5: Final Exam/03.jpg", width: 100%), caption: [Crackle with Power | Art by: <NAME>], supplement: none, numbering: none)
Rowan struggled to breathe as the air around her sparked and hissed with energy. She felt the power rushing through her, power like she had never dreamed. It felt, in that moment, that she could do #emph[anything] ; mountains would crumble before her, cities burn, oceans boil. She opened her eyes and gasped as she took in the room through a red haze. Her gaze fell to Will, who still lay on the ground, motionless. A fresh wave of rage and grief flooded her as she turned to Extus.
The leader of the Oriq stood before the twitching Blood Avatar, watching her. Waiting.
Rowan let the energy of the Snarl course through her, setting every vein alive with power. She hardly noticed as her feet left the ground, wind swirling as if the air itself feared her. #emph[And it should] , thought Rowan. #emph[Everything should.] She took a deep breath, turning the air in her lungs to white-hot fire, then opened her mouth and screamed. The fire rushed toward Extus like a star, like a bolt from heaven. He held out a hand, muttered some words, but whatever he did wasn't enough; the spell crashed into him, sending him flying through the air, robes coiling with smoke. He smashed against the far wall and slid down, still and quiet.
Rowan turned her attention to the Blood Avatar next. The creature still jerked in place, furious that its rampage had been cut short. Inch by inch, one of its swords raised toward her, but it didn't matter. With the power at her fingertips, she could destroy it, and Extus along with it, and anyone who came after. Anyone who tried to hurt her, everyone who had hurt Will—they would #emph[all ] burn.
She drew on the power of the Snarl again, and it was like drinking fresh, clear water. Arcs of electricity singed her arms and face, sending jolts of pain through her body, but she didn't care. Why would she? She was the most powerful thing in the room, in the school, maybe the whole plane. She extended her hand toward the Blood Avatar, reaching for the familiar jolt of lightning, and a wave of agony suddenly ripped through her.
Her gasps were met with laughter. Through eyes squinting against the pain, she watched Extus somehow pull himself to his feet.
"Did you really think you were strong enough to hold all that power?" Extus sneered. "Did you think you were worthy?"
Rowan ignored the Oriq. In truth, she could barely hear him—all of her was focused on controlling the power that raged within her now. The air around her hissed and twisted like a nest of vipers.
"I have trained in the arcane arts my entire life," muttered Extus. "You're nothing but a child. An arrogant fool. And now, a moth to the flame."
The tendril of power she'd drawn from the Snarl rippled again, and Rowan's vision blared white with agony. All the strength abruptly left her limbs and she flopped limply out of the air, landing with a #emph[thud ] on the bloody stone floor.
Extus laughed. "Your ambition is admirable. But I've come too far to be stopped by the likes of you." He turned away, not even bothering to finish the job, and picked up the heavy tome he had been carrying before.
Time stretched around Rowan; she felt cracked open, hollow, emptied out. The Snarl's power still rippled through her, making her limbs dance and jerk as she lay there. Her consciousness seemed to hover just outside her body—near her brother, who was crawling toward her, dragging himself across the bloody floor. Will was alive.
"Rowan," he hissed through clenched teeth. "Get up."
She tried to remember how to speak, but only managed to let out a bit of air.
"Please," he said, reaching out to touch her. He jerked his hand away as an errant spark rose from her skin. "You have to get up."
Rowan coughed and opened her eyes. "I'm sorry."
"It's okay, Rowan. Just get up." Will crawled closer, putting one of her arms over his neck. He winced as sparks jumped and bit at him but didn't let go. "We're going to be okay."
"I'm sorry about the fight. At the Mage Tower match. And at Bow's End. I'm really sorry."
"I'm sorry, too," Will said. With a grunt, he dragged her to her feet and started toward the door. Behind her brother, Rowan saw the Oriq leader hold up a heavy blood-stained tome and begin to chant.
Together, they hobbled toward the doors, only for Will to slow to a stop. He turned to her sharply. "It's like a mascot."
"What are you talking about?" Rowan's frown deepened.
But Will shook his head as he looked at Rowan. "It's like a mascot! We just have to—intercept it."
"Like in Mage Tower?" Maybe the Snarl was still scrambling her brain, but she had no idea what he was talking about.
"Like in Mage Tower," Will said. "Just trust me on this."
Rowan started to respond, but the words fell away as Garruk's face flashed in her mind. She hadn't been able to see what Will could, back then. And it was Will who had finally found a way to free Garruk and win him over as an ally. Will, her Will—her quiet, brainy, peevish brother. He was right so often. Maybe he was right this time as well.
"Rowan?"
Grimacing at the fresh stabs of pain, Rowan pulled at the last sparks of magic within her. "Yeah. Okay. Show me what all that studying can do."
Will grinned and turned toward Extus and the Blood Avatar, red light swirling around his hands. It wasn't ice magic he was working—she knew that much, at least—but the air around him dropped a few degrees anyway. The red light shaped itself between his hands into a thrumming circle of power, and with one last effort, he released the spell.
Suddenly, a red halo of light snapped into place around the Blood Avatar's helmed head.
#figure(image("008_Episode 5: Final Exam/04.jpg", width: 100%), caption: [Culmination of Studies | Art by: <NAME>], supplement: none, numbering: none)
"It may be big," said Will through clenched teeth, hands shaking with the effort. "But it's a summoned creature. Which means with this spell, we can control it!"
But the creature didn't seem very controlled. It bellowed again, forcing Rowan's hands to her ears. Below the Blood Avatar, Extus contorted his hands into claws, his own magic coming off him in wisps like black mist. The red halo around the Blood Avatar's head seemed to flicker in and out. It was Will against Extus, Rowan realized. Each one poured their power into the spell, and Will was losing. But her brother wasn't alone.
She put a hand on his shoulder, and he looked up, surprised. "Rowan, what are you—?"
"You concentrate on the spell. Get all the particulars right. I'll do the rest."
Maybe their magic was too different now to meld seamlessly together, as it used to. But if Will had gotten more precise, more controlled—well, she had gotten a whole lot stronger. Rowan poured the last of her magical energy into her brother, sparks jumping and skipping across her hand as her power flowed into him. He gasped, but only for a second. Then, Extus let out a strangled cry, and the red halo snapped into place around the creature, fully formed.
"You brats!" cried Extus. "How #emph[dare ] you—"
He was cut off as one of the massive hands of the Blood Avatar closed around him with an awful crunching sound. After that, Extus was silent.
"It worked!" shouted Will. "Rowan, it worked!"
Rowan was swaying in place, though. She was finding it difficult to stay standing; the whole room seemed to be spinning. She was tapped out, utterly emptied of power. It all seemed to happen in slow motion—the red halo flickered out of existence. The Blood Avatar roared furiously as one hand was pulled back into the Snarl, his gore-drenched body stretching and bulging unnaturally, the summoning coming to a violent end. With one more terrifying bellow, he swung that massive iron sword. Will's eyes went wide, and Rowan was too weak to stop him from shoving her out of the way.
The sword crashed into the ground with terrifying force, sending a shudder through the chamber. With a sound like thunder, the Blood Avatar was wrenched back into the Snarl, the sword dragging back across the stone—and on the other side was her brother, lying limp and stunned. Rowan's joy at her brother #emph[alive] , not mashed to paste or cleaved in two, suddenly lurched and drained away in shock: below the knee, his right leg was gone.
As if the presence of that monstrosity had been the last thing holding the chamber together, everything began to fall apart. Rafters swung to the floor like clubs, the stone ceiling they'd held up crashing to the ground in jagged blocks. The floor beneath them shook and pitched wildly as Rowan tried to reach her brother. She was so close—could see his glassy, distant eyes—when the floor collapsed altogether. Rowan and Will tumbled and pitched forward, falling through space, until suddenly a light, gentle touch caught them. Rowan spun around wildly; somehow, a cloud of mist seemed to be holding them both aloft.
"There," said Will weakly, pointing up to the doorway of the room. Rowan looked toward the source of the magic, where <NAME> and Lisette stood at the destroyed entrance. Brows furrowed in concentration, they sent gouts of magic through the air, blasting away falling rocks and debris. The mist carried them up, up toward Dean Lisette's outstretched hand. Rowan reached for it, holding Will wrapped in her other arm, but couldn't quite reach it—until a vine snaked out of Lisette's sleeve, cinching tightly around Rowan's wrist.
Grunting, she hauled them both up into the doorway. All four of them managed to fall out of the doorway just as the room collapsed altogether, filling with a cloud of stone and dust and rubble.
"We did it," muttered Will. "We did it, Ro." His eyes fluttered closed. He looked terribly pale.
"Hold still," said Lisette, crouching over him. "You're in shock."
"Will he be okay?" asked Rowan.
The dean didn't seem to be listening. She bit off a chunk of some sort of root, spitting it into a small shell and pressing it with her thumb. Almost at once it started to glow a strange green color.
"He'll live," said Nassari, putting a hand on Rowan's shoulder. "After what you two have been through, you should be grateful."
What they had been through. Rowan looked back, through the wall of debris now filling the doors to the Hall of Oracles. There was no sign of the Snarl's glow, but she could swear that she still felt it calling out to her.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Five weeks later, as the bell tolled across campus to signal the end of first period, Will almost felt as if things were going back to normal. He was getting used to navigating the halls of the school a bit more slowly between his cane and the latticework of ice and steel that now extended down from just under his cloth-wrapped kneecap. He had refused the offers from Dean Lisette of a living wood replacement. His leg was never coming back, and this felt more like a part of him. It was good practice; all day, a part of his mind had to be focused on shaping and refreezing it around the metal frame. A good distraction, too, from the pins and needles that still seemed to crawl over his stump.
Word of his and Rowan's fight against Extus and the Blood Avatar had spread throughout the campus, and suddenly, Will was getting a lot more attention. The other students pressed against the wall as he passed, their whispers and stares following him. It almost reminded him of home—he often found himself missing the anonymity of his early days at the school.
At last, he reached his dorm room. The door swung open as he reached for the knob, and Rowan jerked to a stop, almost crashing into him. She stepped back to let Will inside.
Will cleared his throat. "How are you feeling?"
Rowan shrugged. "Not quite back to full strength, but better. You?"
Will tapped his finger on the handle of his cane. Briefly, the runes Quint had helped him put into place—basic ones, for stability and strength, as opposed to the more elaborate variety his friend had pushed for—flared briefly to life, running all the way down to the spread-foot at the base. "I'm adjusting," he said, smiling.
"How's the pain?"
"A little better every day." Though the phantom aches, seemingly coming from muscles that were no longer there, still struck him as eerie.
"I wonder what they'll say about this back home. Can you imagine?"
"Not really. But maybe we should visit when the semester is over."
"Why wait? We could just go now."
"We still have classes."
"We took down a Blood Avatar," Rowan said. "What more could they teach us here?"
"We took down a Blood Avatar with a spell that we learned here," Will countered. "And we still don't know why our spells aren't syncing up. Or why we can only planeswalk together. There's a lot more that they could teach us."
Rowan rolled her eyes and grinned. "Fine. I guess it would be nice to not have to drag you along for the rest of my life. Now if you'll excuse me~"
"Yeah, yeah," said Will. "Say hello to Plink and Auvernine for me."
She slipped by, then paused in the hallway and turned to him. It struck Will, then, how much thinner she looked; how the hollow of her cheeks seemed so much darker than before, like something vital had been sucked clean out of his sister. But the smile she gave him was warm and true. "You know I love you, right?"
"Yeah," he said. "I love you, too."
As she hurried off, Will shut the door behind him and sat down on his bed. He was tired. It had been a long time since he had gotten a good night's sleep. Another semester here? Another year? Who knew what else the future held? He closed his eyes and extended his magical senses, tracking the beads of moisture forming on his icy prosthetic. #emph[First principle—thermodynamic redirection. Find the heat, and redistribute. . .]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Kasmina's owl flew away from the window and soared over Strixhaven. The damage from the attack was all but erased; cobbles replaced, hedges regrown. The only sign it had even happened was the Hall of Oracles, which was still a ruin, and the small monument that now sat on the landing of the Biblioplex, a stone statue that changed faces every hour. Below it was an inscription: #emph[Lore is never lost at Strixhaven. They will not be forgotten. ] This place had survived worse before. It would survive worse in the future, Kasmina had no doubt.
The owl found her at the edge of the campus. She looked out at the wilderness beyond, her mind flowing into the bird that had followed Lukka. The bonder had been wandering the land with Mila and a few of the remaining Oriq, no doubt scheming even as they scrounged for food and shelter.
But he was no longer worth watching. It was Rowan—or, perhaps, both twins—who now required her attention.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
Liliana finished getting dressed in her office. It had taken days for her to get back to Strixhaven from what had turned out to be a forest near the edge of the continent, but she'd made it, and after the deans had admitted that they should have heeded her warnings, they had invited her to remain a professor at the university indefinitely—without any more damnable faculty meetings, too.
She'd agreed, but with one small caveat.
Now, as she looked at herself in the mirror and adjusted her uniform, she found it hard to believe. Exams. Students. No more demons, no more dark plots, no more death. Her gaze shifted to her desk where her research journals lay open. "It seems this is where we finally part, old friend."
Liliana closed the journals and put them on the shelf along the wall. All considered, he would have been proud. The thought made her smile despite herself.
When she finally made it to her first class, Liliana took a moment to collect herself before heading inside. Students hurried to their seats at her arrival, the sounds of shuffling paper and idle chatter dying away as they turned to her.
Liliana came to a stop at the desk at the front of the room. "Welcome to your introductory necromantic arts course, students," she said, voice ringing out in the lecture hall. "My name is Professor <NAME>."
|
|
https://github.com/xrarch/books | https://raw.githubusercontent.com/xrarch/books/main/xr17032handbook/chapmmu.typ | typst | #import "@preview/tablex:0.0.6": tablex, cellx, colspanx, rowspanx
= Virtual Addressing <mmu>
== Introduction
For many reasons, it is useful to be able to dynamically re-map sections of the address space to other regions of physical memory, thereby creating a "virtual address space". A few such reasons are listed below:
1. Process isolation: Programs can be completely protected from each other by giving them their own unique address spaces.
2. Demand zero: The allocation of memory can be delayed until it is actually needed.
3. Memory-mapped files: Files on disk can be mapped into the virtual address space, giving the illusion that they are a range of bytes that can be accessed like any other memory.
4. Shared memory: Popular physical memory (such as executable code) can be shared among multiple virtual addresses in multiple address spaces, thereby saving memory.
5. Virtual memory: Disk space can be transparently used as extra memory via swapping.
The mechanism that the XR/17032 architecture uses for this is _paged_ virtual addressing, also known as "paging". In a paging scheme, the virtual address space is divided into evenly sized "pages" which can be individually re-mapped to arbitrary physical addresses. As this is a 32-bit architecture, the virtual addresses are 32 bits, leading to a $2^32$ = 4GB virtual address space. For simplicity, the only supported page size on the XR/17032 processor is 4096 bytes, or 4KB. This means that the virtual address space is evenly tiled by 4GB / 4KB = 1048576 pages.
There is now a question of how to achieve this translation. If the translation of the virtual page to the physical page is performed by looking up a physically linear page table, with 32-bit table entries, it would therefore consume 1048576 \* 4 bytes = 4MB of memory (per process!), which is obviously unacceptable overhead.
In many architectures, such as fox32 and Intel 386, the virtual address space is therefore managed by a two-level page table. The indices into the two levels of the page table are usually extracted from bit fields of the 32-bit virtual address in the manner shown:
#image("vaddr.png", fit: "stretch")
The two 10-bit fields from 22:31 and from 12:21 contain the index into the level 2 table and the level 1 table, respectively.
As these indices are 10 bits, and the entries are 4 bytes wide, these tables are both $2^10$ \* 4 = 4096 bytes in size.#footnote([Note that this is one reason for the usage of the 4KB page size: this is the page size for which the division of the virtual address into these three fields causes the tables to consume single page frames, which simplifies memory management.])
To translate a virtual address, the level 2 table is indexed first, yielding a 32-bit entry that contains the physical address of the level 1 table. This level 1 table is indexed next, yielding the 20-bit page frame number to which this virtual page is mapped. The 12-bit byte offset is appended to this, yielding the final 32-bit physical address to which the memory access should be performed. Note that each address space needs its own level 2 page table, which may point to up to 1024 level 1 page tables, which each map 1024 virtual pages to physical pages.
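As a concrete illustration, the following C sketch walks such a two-level table held in ordinary memory. It is only a model: the entry layout (frame number in the upper 20 bits, a valid bit at the bottom) is an assumption made for the example, and it pretends the tables can be dereferenced directly.
#rect([
```c
#include <stdint.h>

#define PTE_VALID 0x1u  /* assumed valid bit, for illustration only */

static inline uint32_t l2_index(uint32_t va) { return (va >> 22) & 0x3FFu; }
static inline uint32_t l1_index(uint32_t va) { return (va >> 12) & 0x3FFu; }

/* Translate va through a 1024-entry level 2 table; returns 0 on success. */
int walk(const uint32_t *l2_table, uint32_t va, uint32_t *pa)
{
    uint32_t l2e = l2_table[l2_index(va)];
    if (!(l2e & PTE_VALID))
        return -1;                              /* no level 1 table here */

    /* The level 2 entry holds the physical address of a level 1 table. */
    const uint32_t *l1_table = (const uint32_t *)(uintptr_t)(l2e & ~0xFFFu);
    uint32_t l1e = l1_table[l1_index(va)];
    if (!(l1e & PTE_VALID))
        return -1;                              /* page not mapped */

    /* Append the 12-bit byte offset to the 20-bit page frame number. */
    *pa = (l1e & ~0xFFFu) | (va & 0xFFFu);
    return 0;
}
```
], width: 100%)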
This scheme allows the omission of large sections of the page table that are not needed. In practice, virtual address spaces tend to be very sparse, so this usually reduces the original 4MB page table overhead to a mere handful of kilobytes per process.
However, there is one major problem: you now have to perform two extra memory accesses for each memory access! The solution to this is, as with many things in computer science, a cache: processors that employ this scheme contain a translation buffer, or TB,#footnote([Also called a "translation lookaside buffer" or TLB.]) which is a small memory typically containing 8 to 64 cached page table entries. The TB is usually fully associative, meaning that it can be indexed directly by virtual address; the virtual page number is compared simultaneously with all of the entries in the TB, and if any of them contain a matching entry, it is returned. This can easily be done within a single cycle, and a hit in the TB avoids the cost of looking up the page tables.
In the case that a needed virtual page translation is not cached in the TB, a "TB miss" occurs. On architectures like the aforementioned fox32 and Intel 386, this results in a page table walk done automatically by the hardware, which then inserts the page table entry in the TB. The instruction is then transparently re-executed and hopefully succeeds this time.
This still has two major problems:
1. Complicated: The logic to perform a page table lookup in hardware is quite complex; it takes up many extra gates on the chip and can be difficult to debug during the development of prototype hardware.
2. Inflexible: If the system software wishes to manage its own custom paging structures, it is out of luck. It must use the hardware-enforced page tables.
For these reasons, XR/17032 instead uses a "software refill" design. In such a design, the management of the TB is exposed directly to software. When a TB miss occurs, an exception is taken, which redirects execution to a software routine which looks up the paging structure, loads the page table entry (PTE), and writes it into the TB. This exception handler returns, causing the re-execution of the original instruction, which hopefully succeeds this time.
In a software refill scheme, the system software has the ability to implement any paging structure it sees fit, and much complexity is removed from the hardware logic.
You can see this as the previously mentioned paging scheme "flipped on its head" -- instead of a two-level page table being the primary governor of paging and the TB existing only as a nearly-transparent cache for it, the fixed-size TB is the first class citizen. The TB contains all of the currently valid mappings, and needs to be manually refilled from some other paging structure (such as a two-level page table).
#image("tbexample.png", fit: "stretch")
In the example above, there is a 4-entry TB, containing entries for the virtual page numbers 00AA5, 00B36, 00CCD, and 003C4. The program references a virtual address 00B36499, which is provided as the input to the TB. The page number, 00B36, is compared with all entries in the TB simultaneously. Luckily, one of the entries matches, and produces the physical page number 3045A. The byte offset from the original virtual address is appended to this physical page number, producing the final physical address with which the processor will perform the memory access.
Had there not been a matching entry in the TB, a TB miss exception would have occurred. The TB miss handler would have inserted the correct entry into the TB, and the original instruction would have re-executed; beginning this process again, but matching in the TB this time and succeeding.
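A small software model of this lookup-and-miss behaviour follows. It is only a sketch: real hardware compares every entry in parallel rather than looping, and the field names are invented for the example.
#rect([
```c
#include <stdbool.h>
#include <stdint.h>

#define TB_ENTRIES 4                 /* matches the 4-entry example above */

struct tb_entry {
    bool     in_use;                 /* entry holds a translation at all */
    uint32_t vpn;                    /* 20-bit virtual page number */
    uint32_t pfn;                    /* 20-bit physical page number */
};

/* Returns true on a hit and fills *pa; false means "raise a TB miss". */
bool tb_lookup(const struct tb_entry tb[TB_ENTRIES], uint32_t va, uint32_t *pa)
{
    uint32_t vpn = va >> 12;
    for (int i = 0; i < TB_ENTRIES; i++) {
        if (tb[i].in_use && tb[i].vpn == vpn) {
            *pa = (tb[i].pfn << 12) | (va & 0xFFFu);
            return true;
        }
    }
    return false;
}
```
], width: 100%)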
== The Translation Buffers <managetb>
The XR/17032 architecture has two TBs. In fact, it could be seen as having two MMUs; an IMMU and a DMMU, providing translations for instruction fetch and data access respectively. This is to simplify pipelined implementations where a FETCH and MEMORY stage may want to access the TB to translate a virtual address simultaneously. The TB management scheme is architected such that the actual size of each TB is transparent to system software and may vary from processor to processor.
When the *M* bit is set in the *RS* control register (see @rs), instruction fetches are translated by the ITB, and data accesses are translated by the DTB. The TB entries are each 64 bits wide. The upper 32 bits contain the *TBTAG*, which is the 12-bit *ASID* (Address Space ID) and 20-bit *VPN* (Virtual Page Number) that will match the TB entry. The lower 32 bits contain the *TBPTE*, containing the 20-bit physical page number along with some flag bits. The *TBPTE* is the "preferred" format for a page table entry. See @tbtag for the format of the *TBTAG*, and @tbpte for the format of the *TBPTE*.
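For software that assembles these entries, the split itself is just a 64-bit value with the tag in the high word. A minimal sketch follows; the bit positions of the fields inside *TBTAG* and *TBPTE* are deliberately not modelled here, only the tag/PTE split described above.
#rect([
```c
#include <stdint.h>

/* TBTAG (ASID + VPN) occupies bits 63:32, TBPTE (PFN + flags) bits 31:0. */
static inline uint64_t make_tb_entry(uint32_t tbtag, uint32_t tbpte)
{
    return ((uint64_t)tbtag << 32) | tbpte;
}

static inline uint32_t tb_entry_tag(uint64_t entry) { return (uint32_t)(entry >> 32); }
static inline uint32_t tb_entry_pte(uint64_t entry) { return (uint32_t)entry; }
```
], width: 100%)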
Something important to note is the difference between a TB miss exception, and a page fault exception. A TB miss exception occurs when a key consisting of a *VPN* and the current *ASID* fails to match in the TB. A page fault occurs when it _does_ match, but matches to a PTE whose *V* _Valid_ bit is clear. This is behavior that differs significantly from other architectures like Intel 386: it is possible to have a TB entry that matches a virtual page, but is invalid and causes a page fault.
This seemingly strange behavior makes more sense when you recall that we perform software TB miss handling. This behavior makes it possible to perform an optimization in which an invalid PTE can be "blindly" inserted into the TB, the faulting instruction can be re-executed, and a page fault then occurs. If this were not the case, the TB miss handler would need to have a branch to make sure that the PTE is valid before it inserts it into the TB, and branch to the page fault handler if it isn't. Adding a branch to what may be the hottest codepath in the entire system is a bad plan, as opposed to allowing invalid PTEs to match in the TB.
Note that this has implications on TB management. A page must be flushed from the TB not only when it is transitioned from valid to invalid, but also when it is transitioned from invalid to valid. Otherwise a stale TB entry may continue to track the page as invalid, causing erroneous page fault exceptions when it is accessed.
=== Address Space IDs
To describe *ASIDs*, it is useful to describe the problem they solve. Most multitasking operating systems operate under a scheme where each process has its own isolated address space. As a result, when a context switch occurs, the address space is also switched. The contents of the TB are irrelevant to the new address space. In some architectures, this necessitates flushing the entire contents of the TB, which incurs many extra expensive TB misses and is therefore quite wasteful.
One trick that helps alleviate some of the burden is to add a *G* bit, or global bit, to the page table entry. This indicates that any entry for that page should be left in the TB upon address space switch, and is useful to set for globally shared mappings such as those in kernel space. However, this solution still isn't perfect, as you still lose many TB entries from other processes' userspace that may have been useful to have after switching back to that process.
One extra feature that you can place atop *G* bits is the concept of an address space ID, or *ASID*. Each TB entry has a 12-bit *ASID* associated with it, along with the virtual page number. If the *G* bit is clear in a TB entry, it will only match a virtual address if the current *ASID* stored in the *ITBTAG* or *DTBTAG* control register (depending on which TB it is#footnote([Note that in general, the *ASID* field should be the same in the *ITBTAG* and *DTBTAG* control registers; it's hard to imagine a situation where it would be useful for them to differ. However, this is not prohibited.])) is equivalent to the one stored in the TB entry. If the *G* bit is set, the TB entry will match the virtual address regardless of the current *ASID*; that is, it will match in all address spaces.
By assigning a different *ASID* to each process, you can now have the TB entries for multiple address spaces residing in the TB simultaneously without fearing virtual address collisions, and can completely avoid flushes on context switch. If it helps, you can logically think of the *ASID* as being an extra 12 bits on the virtual address, in order to differentiate identical virtual page numbers that belong to different address spaces. If this doesn't help, then forget that sentence and try to live the rest of your life in bliss.
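Expressed as a predicate, the matching rule reads roughly as follows; this is again a sketch with invented field names rather than a description of the hardware comparators.
#rect([
```c
#include <stdbool.h>
#include <stdint.h>

struct tb_tag {
    uint32_t vpn;     /* 20-bit virtual page number  */
    uint32_t asid;    /* 12-bit address space ID     */
    bool     global;  /* G bit carried by the entry  */
};

/* An entry hits if the page matches and it is either global or was
 * inserted under the ASID the processor is currently running with. */
static inline bool tb_hit(const struct tb_tag *e, uint32_t vpn, uint32_t cur_asid)
{
    return e->vpn == vpn && (e->global || e->asid == cur_asid);
}
```
], width: 100%)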
A very important note is that setting an *ASID* of 0xFFF (4095) in *ITBTAG* or *DTBTAG* should never be done and will lead to unpredictable results, because this particular value is used internally to denote an "unused" TB entry. Using this *ASID* could cause spurious TB hits and bizarre behavior.
=== Translation Buffer Invalidation
See @tbctrl for a thorough explanation on how to invalidate entries in the TB by writing to the *ITBCTRL* and *DTBCTRL* control registers.
== Translation Buffer Miss <tbmiss>
As explained earlier, the XR/17032 architecture invokes a software exception handler when a TB miss occurs. There are two TB miss exception vectors, one for ITB miss and one for DTB miss (see @ecause for the exact offsets). This makes the miss handlers shorter as they do not need to figure out which TB to insert the entry into; there can simply be a distinct miss handler which deals only with that TB.
The behavior of the processor when a TB miss occurs is contingent on whether the *T* bit was set in the *RS* control register's current mode bits. This bit is also set by a TB miss, so in reality, it is contingent on whether the TB miss is "nested" within another TB miss or not. The reason you would want to take a TB miss within a TB miss handler will be elucidated later.
#box([
#tablex(
columns: (1fr, 1fr),
align: horizon,
width: 100%,
auto-vlines: false,
cellx([
#set align(center)
#set text(fill: white)
*TB Miss Exception Behavior*
], fill: rgb(0,0,0,255), colspan: 2),
cellx([
#set text(fill: white)
*T=0*
], fill: rgb(0,0,0,255)),
cellx([
#set text(fill: white)
*T=1*
], fill: rgb(0,0,0,255)),
[
The *TBMISSADDR* control register is set to the missed virtual address.
],
[
The *TBMISSADDR* control register is left alone.
],
[
Normal exception logic occurs; the *RS* mode stack is pushed. However, the exception program counter is saved in the *TBPC* control register instead of the *EPC* control register.
],
[
None of the normal exception logic occurs except to redirect the program counter to the appropriate exception vector. The *RS* mode stack is not pushed.
],
[
The *T* bit is set.
],
[
The *T* bit remains set.
]
)
], width: 100%)
There are also some special cases for page faults that occur while the *T* bit is set. Note that this behavior essentially causes the new page fault to look like a page fault on the original virtual address that missed in the TB, instead of a page fault on the virtual address referenced by the TB miss handler.
#box([
#tablex(
columns: (1fr, 1fr),
align: horizon,
width: 100%,
auto-vlines: false,
cellx([
#set align(center)
#set text(fill: white)
*Page Fault Exception Behavior*
], fill: rgb(0,0,0,255), colspan: 2),
cellx([
#set text(fill: white)
*T=0*
], fill: rgb(0,0,0,255)),
cellx([
#set text(fill: white)
*T=1*
], fill: rgb(0,0,0,255)),
[
The *EBADADDR* control register is set to the faulting address.
],
[
The *EBADADDR* control register is set to the value of the *TBMISSADDR* control register.
],
[
A Page Fault Read exception is triggered if the access was a read, or Page Fault Write otherwise.
],
[
If the last TB miss exception that occurred while the *T* bit was clear was a read, a Page Fault Read exception is generated, otherwise Page Fault Write.
],
[
Normal exception logic occurs. The *RS* mode stack is pushed.
],
[
None of the normal exception logic occurs except to redirect the program counter to the appropriate exception vector. The *RS* mode stack is not pushed. The *T* bit is cleared. The *EPC* control register is set to the value of the *TBPC* control register.
]
)
], width: 100%)
Aside from the special cases in TB miss and page fault handling, there is another major effect of the *T* bit being set, which is that the *ZERO* register is no longer hardwired to read all zeroes. It can therefore be used freely as a scratch register by TB miss routines, without needing to be saved or restored.
In either case, the low 20 bits of the *ITBTAG* or *DTBTAG* control register are automatically filled with the virtual page number of the virtual address that failed to match in the TB. This also shortens the TB miss handler, because it doesn't need to assemble the upper 32 bits of the TB entry: the appropriate *ASID* for the mapping (the same as the current one) is already set, and now so is the appropriate *VPN*. It only needs to load the PTE for the mapping and insert it into the TB by writing it to either the *ITBPTE* or *DTBPTE* control register. The upper 32 bits of the resulting TB entry are taken from the current value of *ITBTAG* or *DTBTAG*, and the lower 32 bits are taken from the PTE written to the control register.
The index into the TB that is overwritten with the new entry is taken from the control register *ITBINDEX* or *DTBINDEX*, which is then automatically incremented, creating a FIFO behavior for TB replacement. When the replacement index reaches the end of the TB, it wraps back to 4 instead of 0. This means that entries [0-3] will never be replaced naturally, and are permanent or "wired" entries that can be used to create permanent virtual page mappings for any purpose.#footnote([As an example of the usage of wired entries, system software will typically map the exception block (see @exceptionblock) permanently with one wired entry of the ITB to avoid taking an ITB miss on the ITB miss handler, which for obvious reasons is an unrecoverable situation.])
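In software terms the replacement pointer behaves like the small helper below, where `tb_entries` stands in for whatever TB size a given implementation happens to have.
#rect([
```c
/* FIFO replacement that skips the wired entries: wrap to 4, never to 0. */
static unsigned next_tb_index(unsigned index, unsigned tb_entries)
{
    index += 1;
    return (index == tb_entries) ? 4u : index;
}
```
], width: 100%)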
With all of this information, you can now imagine a TB miss handler which does a manual page table walk; it calculates an offset within the level 2 page table and loads the level 2 PTE, decodes this to get the address of the level 1 page table, and then loads the level 1 PTE. The level 1 PTE can be written to the appropriate *ITBPTE* or *DTBPTE* control register to write the TB entry, and then the miss routine can return.
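Rendered in C, such a naive handler might look like the sketch below. It pretends the handler can read physical memory directly; `read_phys`, `dtbtag_vpn` and `write_dtbpte` are hypothetical helpers rather than real architectural interfaces, the page table entry layout is assumed, and validity checks are omitted.
#rect([
```c
#include <stdint.h>

extern uint32_t read_phys(uint32_t pa);      /* hypothetical physical load   */
extern uint32_t dtbtag_vpn(void);            /* VPN latched into DTBTAG      */
extern void     write_dtbpte(uint32_t pte);  /* insert entry into the DTB    */

void naive_dtb_miss(uint32_t l2_table_phys)
{
    uint32_t vpn = dtbtag_vpn();

    /* Level 2 lookup: find the physical address of the level 1 table. */
    uint32_t l2e = read_phys(l2_table_phys + ((vpn >> 10) << 2));
    uint32_t l1_phys = l2e & ~0xFFFu;        /* assumed entry layout */

    /* Level 1 lookup: fetch the PTE and hand it to the DTB. */
    uint32_t pte = read_phys(l1_phys + ((vpn & 0x3FFu) << 2));
    write_dtbpte(pte);                        /* the real handler then returns with rfe */
}
```
], width: 100%)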
This scheme would work, and is in fact used by the _AISIX_ kernel, which runs with memory mapping disabled in kernel mode. However, it has several significant issues:
1. It requires access to the physical address space. As memory mapping is not disabled when an exception is taken, you would have to ensure that the exception block is identity mapped. This would allow you to disable paging on the fly within the exception block, perform the miss handling, and then return.
2. Two memory loads must be done in all cases.
3. It is quite a lengthy codepath, and requires several branches.
It is possible to do much better. In fact, it is possible with a small amount of extra work in the architecture and system software to accomplish a two-level page table TB miss routine which looks like this, as taken from the _MINTIA_ executive:
#rect([
```
DtbMissRoutine:
mfcr zero, dtbaddr
mov zero, long [zero]
mtcr dtbpte, zero
rfe
```
], width: 100%)
At a mere four instructions, with zero branches, this is fairly close to optimal. In essence, this routine works by running in the virtual address space and loading the PTE directly from a linear page table, and then writing it to the TB. Unfortunately, explaining how this works requires some labor.
To begin, we have to understand the concept of a "virtually linear page table". It turns out that placing the level 2 page table as an entry into itself creates a region of virtual address space which maps the two-level page tables as if they were a linear array indexed by virtual page number.#footnote([This is also sometimes referred to as "recursive mapping" or "recursive paging".]) The reason for this is that accessing memory within this region causes the level 2 page table to be treated as a level 1 page table, and so all of its entries directly map the level 1 page tables. The level 2 page table itself can also be found within this region.
Pseudo-code for calculating the base address of the virtually linear page table is provided:
#rect([
```
// Assume INDEX is a constant containing the index of the level 2 page
// table that has been set to create the virtually linear page table
// mapping. Any index can be chosen to place the linear page table within
// the address space as desired.
// Since each level 1 page table maps 1024 pages of 4096 bytes each,
// the following formula can be used to find the base address. Note that
// it is always 4 megabyte aligned.
LinearPageTableBase := INDEX * 1024 * 4096
// Since each level 1 page table is mapped as a 4096 byte page within
// the virtually linear page table, the following formula can be used
// to find the address of the level 2 table itself.
Level2Table := LinearPageTableBase + INDEX * 4096
```
], width: 100%)
The architectural support provided for loading the PTE out of a virtually linear page table comes in the form of the *ITBADDR* and *DTBADDR* control registers. When a TB miss occurs, the low 22 bits of this control register are filled with the virtual page number of the missed address, shifted left by 2. If the upper 10 bits of the control register was previously filled with the 4 megabyte aligned base address of the linear page table, then upon a TB miss, this control register will contain the address from which the PTE can be loaded. This saves several instructions that would otherwise be required to calculate this address.
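Put another way, after a miss the register holds a value composed roughly as follows; this is a sketch of the composition only, not of how the hardware forms it.
#rect([
```c
#include <stdint.h>

/* Top 10 bits: the 4 MB-aligned linear page table base set up beforehand.
 * Low 22 bits: the missed virtual page number shifted left by 2.          */
static inline uint32_t tbaddr_after_miss(uint32_t linear_base, uint32_t missed_vpn)
{
    return (linear_base & 0xFFC00000u) | ((missed_vpn & 0xFFFFFu) << 2);
}
```
], width: 100%)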
There are now two cases that are concerning. The first is the case where the page table in which the PTE resides is not present in the DTB. A nested TB miss will occur upon an attempt to load the PTE. This sounds like it would always be a fatal condition, until three facts are recalled from earlier in this chapter:
1. The TB has support for 4 "wired" or permanent entries which are never replaced.
2. The PTE address for the miss on the page table page will always reside within the level 2 page table.
3. There is special cased behavior for TB misses, enabled by the *T* bit of *RS* which is set when a TB miss is taken.
If system software maps the level 2 table page with one wired entry of the DTB, this will provide an "anchor point" which will halt the chain of DTB misses. The nested DTB miss will load the level 2 page table entry for the page table and insert it into the DTB. Due to the special cased behavior of nested TB misses, the exception state of the original TB miss was left completely intact, and so the nested TB miss will return directly to the instruction that caused the original miss. This instruction re-executes, and misses again, as the original page it needed is still not in the TB. However, the TB miss handler will now succeed in loading the PTE from the virtually linear page table, as we inserted the page table page into the DTB during the nested TB miss earlier.
Careful readers will now understand how the four instruction TB miss routine from earlier works. You may also note that this scheme has an extra benefit, whereby only one memory access is needed to load the PTE from the two-level page table, as long as the containing level 1 page table is already present in the DTB. Note that the nested TB miss which loads the level 1 page table into the DTB does not require any special code, the processor merely (re-)executes the exact same normal DTB miss handler.
There is one small snag, which is the second concerning case from earlier. If the level 1 page table does not actually exist, then the nested DTB miss will load an invalid level 2 page table entry into the DTB. In this case, a page fault will occur in the TB miss handler when it attempts to load the PTE from the level 1 page table again. The special cased page fault behavior listed earlier addresses this case, by clearing the *T* bit and setting *EBADADDR* to the value of *TBMISSADDR*. It also keeps the exception state intact in a similar manner to the nested TB miss special case. The page fault exception handler is thereby "fooled" into thinking that the original instruction caused a page fault on the original missed address.#footnote([Note that a processor implementation must keep a latch somewhere that remembers whether the last non-nested TB miss (the last one that occurred while the *T* bit was clear) was caused by a read or write instruction, so that this page fault case will result in the appropriate page fault exception.]) |
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/meppp/0.1.0/table.typ | typst | Apache License 2.0 | // return the number of rows of the table
#let table-row-counter(cells,columns)={
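  // last-row.at(i) counts how many rows are already occupied in column i;
  // x/y track the column and row where the next cell will be placed, and
  // volume accumulates the number of grid cells covered so far.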
let last-row = (0,) * columns
let x = 0
let y = 0
let volume = 0
for cell in cells{
if cell.has("body"){
let colspan = 1
let rowspan = 1
if cell.has("colspan"){
colspan=cell.colspan
}
if cell.has("rowspan"){
rowspan=cell.rowspan
}
volume += colspan*rowspan
let end = x + colspan
while x < end{
last-row.at(x) += rowspan
x += 1
}
let next = last-row.position(ele=>{ele==y})
if next == none{
y = calc.min(..last-row)
x = last-row.position(ele=>{ele==y})
}else{
x = next
}
}
}
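  // rows: total height of the table; volume-empty: number of filler cells
  // needed to pad the content out to a full columns * rows rectangle.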
let rows = calc.max(..last-row)
let volume-empty = columns*rows - volume
assert(volume == last-row.sum())
return (rows, volume-empty)
}
// A three-line-table returned as a figure
#let meppp-tl-table(
caption: none,
supplement: auto,
stroke:0.5pt,
tbl
)={
let align = center+horizon
if tbl.has("align"){
align = tbl.align
}
let columns = 1
let column-count = 1
if tbl.has("columns"){
columns = tbl.columns
if type(tbl.columns)==int{
column-count = tbl.columns
}
else if type(tbl.columns)==array{
column-count = columns.len()
}
}
let header = tbl.children.at(0)
assert(
header.has("children"),
message: "Header is needed."
)
let header-children = header.children
let header-rows = table-row-counter(header-children,column-count).at(0)
let content = tbl.children.slice(1)
let content-trc = table-row-counter(content,column-count)
let content-rows = content-trc.at(0)
let content-empty-cells = content-trc.at(1)
content = content + ([],)*content-empty-cells
let rows = (1.5pt,)+(1.5em,)*(header-rows+content-rows)+(1.5pt,)
let hline = table.hline(stroke:stroke)
let empty-row = table.cell([],colspan: column-count)
return figure(
table(
align:align,
columns:columns,
rows:rows,
stroke:none,
table.header(
hline,
empty-row,
hline,
..header-children,
hline,
),
..content,
table.footer(
hline,
empty-row,
hline,
repeat: false
)
),
kind:table,
caption:figure.caption(caption, position:top),
supplement: supplement,
)
} |
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/supercharged-dhbw/1.0.0/template/main.typ | typst | Apache License 2.0 | #import "@preview/supercharged-dhbw:1.0.0": *
#let abstract = lorem(100)
#show: supercharged-dhbw.with(
title: "Exploration of Typst for the Composition of a University Thesis",
authors: (
(name: "<NAME>", student-id: "1234567", course: "TIM21", course-of-studies: "Mobile Computer Science", company: (
(name: "ABC AG", post-code: "08005", city: "Barcelona", country: "Spain")
)),
(name: "<NAME>", student-id: "7654321", course: "TIS21", course-of-studies: "IT-Security", company: (
(name: "YXZ GmbH", post-code: "70435", city: "Stuttgart", country: "")
)),
),
at-dhbw: false, // if true the company name on the title page and the confidentiality statement are hidden
show-confidentiality-statement: true,
show-declaration-of-authorship: true,
show-table-of-contents: true,
show-acronyms: true,
show-list-of-figures: true,
show-list-of-tables: true,
show-code-snippets: true,
show-appendix: false,
show-abstract: true,
abstract: abstract, // displays the abstract defined above
university: "Cooperative State University Baden-Württemberg",
university-location: "Ravensburg Campus Friedrichshafen",
supervisor: "<NAME>",
date: datetime.today(),
bibliography: bibliography("sources.bib"),
logo-left: image("assets/logos/dhbw.svg"),
// logo-right: image("assets/logos/company.svg")
)
// Edit this content to your liking
= Introduction
#lorem(100)
#lorem(100)
#lorem(100)
= Examples
#lorem(30)
== Acronyms
Use the `acro` function to insert acronyms, which looks like this #acro("API").
#acro("AWS")
== Lists
Create bullet lists or numbered lists.
- These bullet
- points
- are colored
+ It also
+ works with
+ numbered lists!
== Figures and Tables
Create figures or tables like this:
=== Figures
#figure(caption: "Image Example", image(width: 4cm, "/assets/images/ts.svg"))
=== Tables
#figure(caption: "Table Example", table(
columns: (1fr, auto, auto),
inset: 10pt,
align: horizon,
table.header(
[], [*Area*], [*Parameters*],
),
text("cylinder.svg"),
$ pi h (D^2 - d^2) / 4 $,
[
$h$: height \
$D$: outer radius \
$d$: inner radius
],
text("tetrahedron.svg"),
$ sqrt(2) / 12 a^3 $,
[$a$: edge length]
))<table>
== Code Snippets
Insert code snippets like this:
#figure(caption: "Codeblock Example", sourcecode[```typ
#show "ArtosFlow": name => box[
#box(image(
"logo.svg",
height: 0.7em,
))
#name
]
This report is embedded in the
ArtosFlow project. ArtosFlow is a
project of the Artos Institute.
```])
== References
Cite like this #cite(form: "prose", <iso18004>).
Or like this @iso18004.
You can also reference by adding `<ref>` with the desired name after figures or headings.
For example this @table references the table on the previous page.
= Conclusion
#lorem(100)
#lorem(120)
#lorem(80) |
https://github.com/UntimelyCreation/typst-neat-cv | https://raw.githubusercontent.com/UntimelyCreation/typst-neat-cv/main/src/cv.typ | typst | MIT License | #import "template.typ": *
#show: layout
#cvHeader(align: left, hasPhoto: true)
#grid(
columns: (60%, 40%),
gutter: 16pt,
stack(
spacing: 20pt,
autoImport("experience"),
autoImport("education"),
),
stack(
spacing: 20pt,
autoImport("skills"),
autoImport("languages"),
autoImport("interests"),
),
)
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compute/calc-13.typ | typst | Other | // Error: 14-15 divisor must not be zero
#calc.quo(5, 0)
|
https://github.com/olligobber/spr | https://raw.githubusercontent.com/olligobber/spr/master/main.typ | typst | #import "@preview/diagraph:0.1.1" : *
#set page(width:auto, height:auto, margin:1em)
#place(
top + left,
box(
stroke: black,
radius: 1em,
inset: 1em,
[
= Key
#raw-render(
```dot
digraph{
node [shape=record]
example [label="name|probability as fraction|probability as decimal"]
example -> win [color=darkgoldenrod, arrowtail=inv, dir=both]
example -> loss [color=red, arrowhead=empty, arrowtail=invempty, dir=both]
example -> stop [color=blue, arrowtail=box, dir=both]
example -> continue [color=darkmagenta, arrowtail=inv, dir=both]
}
```
)
]
)
)
#raw-render(
```dot
digraph {
// list of all normal nodes
node [shape=record]
PmWQmZXYd74 [label="Welcome|1 in 1|1.0000000000000"]
// Going for one in a million
Ul8r0Thgx44 [label="Won once|1 in 3|0.3333333333333"]
_mrAeT9kpPM [label="Won twice|1 in 9|0.1111111111111"]
hoaLwPc571E [label="Won thrice|1 in 27|0.0370370370370"]
z8zjvT8Qx8U [label="Won four times|1 in 81|0.0123456790123"]
uHoYnV9JX4w [label="Won five times|1 in 243|0.0041152263374"]
jRDLtKUsq8U [label="Won six times|1 in 729|0.0013717421125"]
RXy0Kc1Cl9s [label="Won seven times|1 in 2,187|0.0004572473708"]
"v3oXQrWu-PA" [label="Won eight times|1 in 6,561|0.0001524157903"]
bN5M3caw6b8 [label="Won nine times|1 in 19,683|0.0000508052634"]
DDmnplXv6pY [label="Won ten times|1 in 59,049|0.0000169350878"]
"6auFOPOuHuE" [label="Won eleven times|1 in 177,147|0.0000056450293"]
"0M39bd9euEI" [label="Won twelve times|1 in 531,441|0.0000018816764"]
E3pdr5hNBe4 [label="Won thirteen times|1 in 1,594,323|0.0000006272255"]
// Going for one in a billion
K1kVsxsnYyc [label="Start of the road to a billion|21,954,154,089,204,137 in 8,862,938,119,652,501,095,929|0.0000024770741"]
wf6sqW38AmM [label="Won fourteen times|21,954,154,089,204,137 in 26,588,814,358,957,503,287,787|0.0000008256914"]
j92TH0iaCrE [label="Won fifteen times|21,954,154,089,204,137 in 79,766,443,076,872,509,863,361|0.0000002752305"]
r8LgYG67bCA [label="Won sixteen times|21,954,154,089,204,137 in 239,299,329,230,617,529,590,083|0.0000000917435"]
DZLnWKM90nQ [label="Won seventeen times|21,954,154,089,204,137 in 717,897,987,691,852,588,770,249|0.0000000305812"]
aSjsXUdaIgQ [label="Won eighteen times|21,954,154,089,204,137 in 2,153,693,963,075,557,766,310,747|0.0000000101937"]
HunlKDzXNv0 [label="Won nineteen times|21,954,154,089,204,137 in 6,461,081,889,226,673,298,932,241|0.0000000033979"]
// Going for one in a trillion
gv7_NTC_Rgs [label="Start of the road to a trillion|21,954,154,089,204,137 in 6,461,081,889,226,673,298,932,241|0.0000000033979"]
sbgMHxUkfFI [label="Won twenty times|21,954,154,089,204,137 in 19,383,245,667,680,019,896,796,723|0.0000000011326"]
"4Nk29OAqZTw" [label="Won twenty-one times|21,954,154,089,204,137 in 58,149,737,003,040,059,690,390,169|0.0000000003775"]
OjHzloSmLZg [label="Won twenty-two times|21,954,154,089,204,137 in 174,449,211,009,120,179,071,170,507|0.0000000001258"]
I32ZGazBqWY [label="Won twenty-three times|21,954,154,089,204,137 in 523,347,633,027,360,537,213,511,521|0.0000000000419"]
"-bGMZAWuL1o" [label="Won twenty-four times|21,954,154,089,204,137 in 1,570,042,899,082,081,611,640,534,563|0.0000000000140"]
"7GEmEWf1KgY" [label="Won twenty-five times|21,954,154,089,204,137 in 4,710,128,697,246,244,834,921,603,689|0.0000000000047"]
"4wUukNXczpM" [label="Won twenty-six times|21,954,154,089,204,137 in 14,130,386,091,738,734,504,764,811,067|0.0000000000016"]
// Going for all losses
CPb168NUwGc [label="Lost once|2 in 3|0.6666666666667"]
jDQqv3zkbIQ [label="Lost twice|4 in 9|0.4444444444444"]
HXtheRKAkIw [label="Lost thrice|8 in 27|0.2962962962963"]
"3qoxLsQ9464" [label="Lost four times|16 in 81|0.1975308641975"]
"xjo-L59q8K4" [label="Lost five times|32 in 243|0.1316872427984"]
dzK444eg53c [label="Lost six times|64 in 729|0.0877914951989"]
"83hQScodfDA" [label="Lost seven times|128 in 2,187|0.0585276634659"]
TFlsl2ZkBlI [label="Lost eight times|256 in 6,561|0.0390184423106"]
"4ojQK570hDA" [label="Lost nine times|512 in 19,683|0.0260122948737"]
WQ9wBn2Qk14 [label="Lost ten times|1,024 in 59,049|0.0173415299158"]
"hT-25A8LFAE" [label="Lost eleven times|2,048 in 177,147|0.0115610199439"]
"e8zbuI-qJX4" [label="Lost twelve times|4,096 in 531,441|0.0077073466293"]
xCb7UVssqlY [label="Lost thirteen times|8,192 in 1,594,323|0.0051382310862"]
"54ZevZGGXZw" [label="Lost fourteen times|16,384 in 4,782,969|0.0034254873908"]
d84UbmiyBOs [label="Lost fifteen times|32,768 in 14,348,907|0.0022836582605"]
X9jKHujmt1M [label="Lost sixteen times|65,536 in 43,046,721|0.0015224388403"]
SeX6WzVRZ4Y [label="Lost seventeen times|131,072 in 129,140,163|0.0010149592269"]
j8fHcBHeKwk [label="Lost eighteen times|262,144 in 387,420,489|0.0006766394846"]
VtWv7m270kY [label="Lost nineteen times|524,288 in 1,162,261,467|0.0004510929897"]
LSHMwceP0X8 [label="Lost twenty times|1,048,576 in 3,486,784,401|0.0003007286598"]
"BvL-kq_LLsI" [label="Lost twenty-one times|2,097,152 in 10,460,353,203|0.0002004857732"]
"KIcQP_OL0-0" [label="Lost twenty-two times|4,194,304 in 31,381,059,609|0.0001336571821"]
ei5WZihztGk [label="Lost twenty-three times|8,388,608 in 94,143,178,827|0.0000891047881"]
"74E6BTyhv_c" [label="Lost twenty-four times|16,777,216 in 282,429,536,481|0.0000594031921"]
YnACGEG1tTc [label="Lost twenty-five times|33,554,432 in 847,288,609,443|0.0000396021280"]
"7Jp8Xp_9v90" [label="Lost twenty-six times|67,108,864 in 2,541,865,828,329|0.0000264014187"]
ILrJDLjx6sA [label="Lost twenty-seven times|134,217,728 in 7,625,597,484,987|0.0000176009458"]
sJXuw8QM0W4 [label="Lost twenty-eight times|268,435,456 in 22,876,792,454,961|0.0000117339639"]
Gh_preEUg74 [label="Lost twenty-nine times|536,870,912 in 68,630,377,364,883|0.0000078226426"]
YVJh73INvXk [label="Lost thirty times|1,073,741,824 in 205,891,132,094,649|0.0000052150951"]
"9HDYmP-l_oM" [label="Lost thirty-one times|2,147,483,648 in 617,673,396,283,947|0.0000034767300"]
ugkWE2cy370 [label="Lost thirty-two times|4,294,967,296 in 1,853,020,188,851,841|0.0000023178200"]
"F-j5y5dyPDo" [label="Lost thirty-three times|8,589,934,592 in 5,559,060,566,555,523|0.0000015452133"]
tlTOyDEZGUU [label="Lost thirty-four times|17,179,869,184 in 16,677,181,699,666,569|0.0000010301422"]
GG6AZGhLCS4 [label="Lost thirty-five times|34,359,738,368 in 50,031,545,098,999,707|0.0000006867615"]
// Second try for a million
RVLUX6BUEJI [label="Won once in second round|2,302,909 in 4,782,969|0.4814810633312"]
KHtDsZvsoMw [label="Won twice|2,302,909 in 14,348,907|0.1604936877771"]
t0hJIw19ChI [label="Won thrice|2,302,909 in 43,046,721|0.0534978959257"]
e6zLBO0vez8 [label="Won four times|2,302,909 in 129,140,163|0.0178326319752"]
WR7AbrjBZNI [label="Won five times|2,302,909 in 387,420,489|0.0059442106584"]
"fMyFx3bFW-s" [label="Won six times|2,302,909 in 1,162,261,467|0.0019814035528"]
hi4166mPpmA [label="Won seven times|2,302,909 in 3,486,784,401|0.0006604678509"]
SxWZDOgaIog [label="Won eight times|2,302,909 in 10,460,353,203|0.0002201559503"]
"-D_g1k0IzTQ" [label="Won nine times|2,302,909 in 31,381,059,609|0.0000733853168"]
AnsaswKGPHk [label="Won ten times|2,302,909 in 94,143,178,827|0.0000244617723"]
"8UKflLZq61E" [label="Won eleven times|2,302,909 in 282,429,536,481|0.0000081539241"]
tAcIxmJOA9o [label="Won twelve times|2,302,909 in 847,288,609,443|0.0000027179747"]
yQKjsA90kpc [label="Won thirteen times in second round|2,302,909 in 2,541,865,828,329|0.0000009059916"]
// Last try for a million
b41_jrE8jFw [label="Won once in last round|3,378,732,074,219 in 7,625,597,484,987|0.4430776841908"]
oOufgnObuhQ [label="Won twice in last round|8,365,346,344,526,105 in 50,031,545,098,999,707|0.1672014391715"]
"N7UCPssq-X8" [label="Won thrice|8,365,346,344,526,105 in 150,094,635,296,999,121|0.0557338130572"]
"FlwMxN9-mec" [label="Won four times|8,365,346,344,526,105 in 450,283,905,890,997,363|0.0185779376857"]
ghJAsm9W3k0 [label="Won five times|8,365,346,344,526,105 in 1,350,851,717,672,992,089|0.0061926458952"]
"55nbeaYL7hQ" [label="Won six times|8,365,346,344,526,105 in 4,052,555,153,018,976,267|0.0020642152984"]
"dB8-XaRclhk" [label="Won seven times|8,365,346,344,526,105 in 12,157,665,459,056,928,801|0.0006880717661"]
ddWvzSxz4AA [label="Won eight times|8,365,346,344,526,105 in 36,472,996,377,170,786,403|0.0002293572554"]
"0xFOAtGBdUg" [label="Won nine times|8,365,346,344,526,105 in 109,418,989,131,512,359,209|0.0000764524185"]
HSdwcDFDyQY [label="Won ten times|8,365,346,344,526,105 in 328,256,967,394,537,077,627|0.0000254841395"]
Q5kgEN3rb_c [label="Won eleven times|8,365,346,344,526,105 in 984,770,902,183,611,232,881|0.0000084947132"]
pteggMrRnk4 [label="Won twelve times|8,365,346,344,526,105 in 2,954,312,706,550,833,698,643|0.0000028315711"]
"87zN8iWo5pU" [label="Won thirteen times in last round|8,365,346,344,526,105 in 8,862,938,119,652,501,095,929|0.0000009438570"]
// Losing out of a round
fWOtjGJvlGI [label="Lost after one win|2 in 9|0.2222222222222"]
"r9-jSTCiHd0" [label="Lost after two wins|2 in 27|0.0740740740741"]
qYaLoO40kjM [label="Lost after three to six wins|80 in 2187|0.0365797896662"]
yCwdjfzxI4I [label="Lost after seven to eleven wins|242 in 531,441|0.0004553656944"]
YohvsF9mF3g [label="Lost after twelve wins|1 in 1,594,323|0.0000006272255"]
AGL2OMZzn2g [label="Drew after twelve wins|1 in 1,594,323|0.0000006272255"] // For some reason this is worse than losing after twelve wins as it takes you to the final round rather than the second round
"0odtRIBvjes" [label="Lost after zero to six wins in second round|2,451,413,272 in 3,486,784,401|0.7030584601953"]
"7VmxQumJAL4" [label="Lost after seven to eleven wins in second round|557,303,978 in 847,288,609,443|0.0006577498762"]
"_WKzx6tClQw" [label="Lost after twelve wins in second round|2,302,909 in 2,541,865,828,329|0.0000009059916"]
"QxC-EQAsTuM" [label="Drew after twelve wins in second round|2,302,909 in 2,541,865,828,329|0.0000009059916"]
"hhDh6_RD7tU" [label="Drew after twelve wins in final round|8,365,346,344,526,105 in 8,862,938,119,652,501,095,929"] // This video says it is the one where almost everyone passes through, but it isn't
"D8iP2qINaSE" [label="Lost after twelve wins in final round|8,365,346,344,526,105 in 8,862,938,119,652,501,095,929"] // This video says it is the one where almost everyone passes through, but it isn't
"LvcxrEP2U-o" [label="Where almost everyone passes through|2,867,892,289,273,938,589,450 in 2,954,312,706,550,833,698,643|0.9707477082283"]
"LLZJ-U1UB5M" [label="Won after six losses|64 in 2,187|0.0292638317330"]
"j-jqX7AdQT8" [label="Won after seven to seventeen losses|22,412,672 in 387,420,489|0.0578510239813"]
b6_QOYNf73g [label="Won after eighteen to thirty-three losses|11,267,259,760,640 in 16,677,181,699,666,569|0.0006756093424"]
wUjs_vVwh68 [label="Won at the worst possible time|17,179,869,184 in 50,031,545,098,999,707|0.0000003433807"]
AgHpWh77STQ [label="Lost during the road to a billion|15,982,624,176,940,611,736 in 6,461,081,889,226,673,298,932,241|0.0000024736762"]
d0R5Csv7ogU [label="Lost during the road to a trillion|47,991,780,839,000,243,482 in 14,130,386,091,738,734,504,764,811,067|0.0000000033964"]
// Endings
dU22iL1ZsWQ [label="Wrap-up|8,603,693,598,514,504,820,560 in 8,862,938,119,652,501,095,929|0.9707495959423"]
s3rUNS68AKs [label="Wrap-up after winning a run|16,026,532,485,119,020,010 in 6,461,081,889,226,673,298,932,241|0.0000024804720"]
node [shape=ellipse]
// start of graph
start [shape=none]
start -> PmWQmZXYd74
// edges for wins
edge [color=darkgoldenrod, arrowtail=inv, dir=both]
PmWQmZXYd74 -> Ul8r0Thgx44
Ul8r0Thgx44 -> _mrAeT9kpPM
CPb168NUwGc -> RVLUX6BUEJI
_mrAeT9kpPM -> hoaLwPc571E
fWOtjGJvlGI -> RVLUX6BUEJI
RVLUX6BUEJI -> KHtDsZvsoMw
jDQqv3zkbIQ -> RVLUX6BUEJI
hoaLwPc571E -> z8zjvT8Qx8U
"r9-jSTCiHd0" -> RVLUX6BUEJI
"0odtRIBvjes" -> b41_jrE8jFw
KHtDsZvsoMw -> t0hJIw19ChI
HXtheRKAkIw -> b41_jrE8jFw
z8zjvT8Qx8U -> uHoYnV9JX4w
qYaLoO40kjM -> RVLUX6BUEJI
b41_jrE8jFw -> oOufgnObuhQ
t0hJIw19ChI -> e6zLBO0vez8
"3qoxLsQ9464" -> b41_jrE8jFw
uHoYnV9JX4w -> jRDLtKUsq8U
oOufgnObuhQ -> "N7UCPssq-X8"
e6zLBO0vez8 -> WR7AbrjBZNI
"xjo-L59q8K4" -> b41_jrE8jFw
jRDLtKUsq8U -> RXy0Kc1Cl9s
"N7UCPssq-X8" -> "FlwMxN9-mec"
WR7AbrjBZNI -> "fMyFx3bFW-s"
dzK444eg53c -> "LLZJ-U1UB5M"
RXy0Kc1Cl9s -> "v3oXQrWu-PA"
"FlwMxN9-mec" -> ghJAsm9W3k0
"fMyFx3bFW-s" -> hi4166mPpmA
"83hQScodfDA" -> "j-jqX7AdQT8"
"v3oXQrWu-PA" -> bN5M3caw6b8
yCwdjfzxI4I -> RVLUX6BUEJI
ghJAsm9W3k0 -> "55nbeaYL7hQ"
hi4166mPpmA -> SxWZDOgaIog
"j-jqX7AdQT8" -> "oOufgnObuhQ"
TFlsl2ZkBlI -> "j-jqX7AdQT8"
bN5M3caw6b8 -> DDmnplXv6pY
"55nbeaYL7hQ" -> "dB8-XaRclhk"
SxWZDOgaIog -> "-D_g1k0IzTQ"
"7VmxQumJAL4" -> b41_jrE8jFw
"4ojQK570hDA" -> "j-jqX7AdQT8"
DDmnplXv6pY -> "6auFOPOuHuE"
"dB8-XaRclhk" -> ddWvzSxz4AA
"-D_g1k0IzTQ" -> AnsaswKGPHk
WQ9wBn2Qk14 -> "j-jqX7AdQT8"
"6auFOPOuHuE" -> "0M39bd9euEI"
ddWvzSxz4AA -> "0xFOAtGBdUg"
AnsaswKGPHk -> "8UKflLZq61E"
"hT-25A8LFAE" -> "j-jqX7AdQT8"
"0M39bd9euEI" -> E3pdr5hNBe4
"0xFOAtGBdUg" -> HSdwcDFDyQY
"8UKflLZq61E" -> tAcIxmJOA9o
"e8zbuI-qJX4" -> "j-jqX7AdQT8"
YohvsF9mF3g -> RVLUX6BUEJI
AGL2OMZzn2g -> b41_jrE8jFw
HSdwcDFDyQY -> Q5kgEN3rb_c
tAcIxmJOA9o -> yQKjsA90kpc
xCb7UVssqlY -> "j-jqX7AdQT8"
Q5kgEN3rb_c -> pteggMrRnk4
"QxC-EQAsTuM" -> b41_jrE8jFw
"_WKzx6tClQw" -> b41_jrE8jFw
"54ZevZGGXZw" -> "j-jqX7AdQT8"
pteggMrRnk4 -> "87zN8iWo5pU"
K1kVsxsnYyc -> wf6sqW38AmM
d84UbmiyBOs -> "j-jqX7AdQT8"
wf6sqW38AmM -> j92TH0iaCrE
X9jKHujmt1M -> "j-jqX7AdQT8"
j92TH0iaCrE -> r8LgYG67bCA
SeX6WzVRZ4Y -> "j-jqX7AdQT8"
r8LgYG67bCA -> DZLnWKM90nQ
j8fHcBHeKwk -> b6_QOYNf73g
DZLnWKM90nQ -> aSjsXUdaIgQ
b6_QOYNf73g -> oOufgnObuhQ
VtWv7m270kY -> b6_QOYNf73g
aSjsXUdaIgQ -> HunlKDzXNv0
LSHMwceP0X8 -> b6_QOYNf73g
"BvL-kq_LLsI" -> b6_QOYNf73g
gv7_NTC_Rgs -> sbgMHxUkfFI
"KIcQP_OL0-0" -> b6_QOYNf73g
sbgMHxUkfFI -> "4Nk29OAqZTw"
ei5WZihztGk -> b6_QOYNf73g
"4Nk29OAqZTw" -> OjHzloSmLZg
"74E6BTyhv_c" -> b6_QOYNf73g
OjHzloSmLZg -> I32ZGazBqWY
YnACGEG1tTc -> b6_QOYNf73g
I32ZGazBqWY -> "-bGMZAWuL1o"
"7Jp8Xp_9v90" -> b6_QOYNf73g
"-bGMZAWuL1o" -> "7GEmEWf1KgY"
ILrJDLjx6sA -> b6_QOYNf73g
"7GEmEWf1KgY" -> "4wUukNXczpM"
sJXuw8QM0W4 -> b6_QOYNf73g
Gh_preEUg74 -> b6_QOYNf73g
YVJh73INvXk -> b6_QOYNf73g
"9HDYmP-l_oM" -> b6_QOYNf73g
ugkWE2cy370 -> b6_QOYNf73g
"F-j5y5dyPDo" -> b6_QOYNf73g
tlTOyDEZGUU -> wUjs_vVwh68
// edges for losses
edge [color=red, arrowhead=empty, arrowtail=invempty, dir=both]
PmWQmZXYd74 -> CPb168NUwGc
Ul8r0Thgx44 -> fWOtjGJvlGI
CPb168NUwGc -> jDQqv3zkbIQ
_mrAeT9kpPM -> "r9-jSTCiHd0"
fWOtjGJvlGI -> "0odtRIBvjes"
RVLUX6BUEJI -> "0odtRIBvjes"
jDQqv3zkbIQ -> HXtheRKAkIw
hoaLwPc571E -> qYaLoO40kjM
"r9-jSTCiHd0" -> "0odtRIBvjes"
"0odtRIBvjes" -> "LvcxrEP2U-o"
KHtDsZvsoMw -> "0odtRIBvjes"
HXtheRKAkIw -> "3qoxLsQ9464"
z8zjvT8Qx8U -> qYaLoO40kjM
qYaLoO40kjM -> "0odtRIBvjes"
b41_jrE8jFw -> "LvcxrEP2U-o"
t0hJIw19ChI -> "0odtRIBvjes"
"3qoxLsQ9464" -> "xjo-L59q8K4"
uHoYnV9JX4w -> qYaLoO40kjM
oOufgnObuhQ -> "LvcxrEP2U-o"
e6zLBO0vez8 -> "0odtRIBvjes"
"xjo-L59q8K4" -> dzK444eg53c
jRDLtKUsq8U -> qYaLoO40kjM
"N7UCPssq-X8" -> "LvcxrEP2U-o"
WR7AbrjBZNI -> "0odtRIBvjes"
dzK444eg53c -> "83hQScodfDA"
RXy0Kc1Cl9s -> yCwdjfzxI4I
"FlwMxN9-mec" -> "LvcxrEP2U-o"
"fMyFx3bFW-s" -> "0odtRIBvjes"
"83hQScodfDA" -> TFlsl2ZkBlI
"v3oXQrWu-PA" -> yCwdjfzxI4I
yCwdjfzxI4I -> "0odtRIBvjes"
ghJAsm9W3k0 -> "LvcxrEP2U-o"
hi4166mPpmA -> "7VmxQumJAL4"
"j-jqX7AdQT8" -> "LvcxrEP2U-o"
TFlsl2ZkBlI -> "4ojQK570hDA"
bN5M3caw6b8 -> yCwdjfzxI4I
"55nbeaYL7hQ" -> "LvcxrEP2U-o"
SxWZDOgaIog -> "7VmxQumJAL4"
"7VmxQumJAL4" -> "LvcxrEP2U-o"
"4ojQK570hDA" -> WQ9wBn2Qk14
DDmnplXv6pY -> yCwdjfzxI4I
"dB8-XaRclhk" -> "LvcxrEP2U-o"
"-D_g1k0IzTQ" -> "7VmxQumJAL4"
WQ9wBn2Qk14 -> "hT-25A8LFAE"
"6auFOPOuHuE" -> "yCwdjfzxI4I"
ddWvzSxz4AA -> "LvcxrEP2U-o"
AnsaswKGPHk -> "7VmxQumJAL4"
"hT-25A8LFAE" -> "e8zbuI-qJX4"
"0M39bd9euEI" -> YohvsF9mF3g // [taillabel=loss]
"0M39bd9euEI" -> AGL2OMZzn2g // [taillabel=draw]
"0xFOAtGBdUg" -> "LvcxrEP2U-o"
"8UKflLZq61E" -> "7VmxQumJAL4"
"e8zbuI-qJX4" -> xCb7UVssqlY
YohvsF9mF3g -> "0odtRIBvjes"
AGL2OMZzn2g -> "0odtRIBvjes"
HSdwcDFDyQY -> "LvcxrEP2U-o"
tAcIxmJOA9o -> "_WKzx6tClQw" // [taillabel=loss]
tAcIxmJOA9o -> "QxC-EQAsTuM" // [taillabel=draw]
xCb7UVssqlY -> "54ZevZGGXZw"
Q5kgEN3rb_c -> "LvcxrEP2U-o"
"QxC-EQAsTuM" -> "LvcxrEP2U-o"
"_WKzx6tClQw" -> "LvcxrEP2U-o"
"54ZevZGGXZw" -> d84UbmiyBOs
pteggMrRnk4 -> "hhDh6_RD7tU" // [taillabel=draw]
pteggMrRnk4 -> "D8iP2qINaSE" // [taillabel=loss]
K1kVsxsnYyc -> AgHpWh77STQ
d84UbmiyBOs -> X9jKHujmt1M
wf6sqW38AmM -> AgHpWh77STQ
X9jKHujmt1M -> SeX6WzVRZ4Y
j92TH0iaCrE -> AgHpWh77STQ
SeX6WzVRZ4Y -> j8fHcBHeKwk
r8LgYG67bCA -> AgHpWh77STQ
j8fHcBHeKwk -> VtWv7m270kY
DZLnWKM90nQ -> AgHpWh77STQ
b6_QOYNf73g -> "LvcxrEP2U-o"
VtWv7m270kY -> LSHMwceP0X8
aSjsXUdaIgQ -> AgHpWh77STQ
LSHMwceP0X8 -> "BvL-kq_LLsI"
"BvL-kq_LLsI" -> "KIcQP_OL0-0"
gv7_NTC_Rgs -> d0R5Csv7ogU
"KIcQP_OL0-0" -> ei5WZihztGk
sbgMHxUkfFI -> d0R5Csv7ogU
ei5WZihztGk -> "74E6BTyhv_c"
"4Nk29OAqZTw" -> d0R5Csv7ogU
"74E6BTyhv_c" -> YnACGEG1tTc
OjHzloSmLZg -> d0R5Csv7ogU
YnACGEG1tTc -> "7Jp8Xp_9v90"
I32ZGazBqWY -> d0R5Csv7ogU
"7Jp8Xp_9v90" -> ILrJDLjx6sA
"-bGMZAWuL1o" -> d0R5Csv7ogU
ILrJDLjx6sA -> sJXuw8QM0W4
"7GEmEWf1KgY" -> d0R5Csv7ogU
sJXuw8QM0W4 -> Gh_preEUg74
Gh_preEUg74 -> YVJh73INvXk
YVJh73INvXk -> "9HDYmP-l_oM"
"9HDYmP-l_oM" -> ugkWE2cy370
ugkWE2cy370 -> "F-j5y5dyPDo"
"F-j5y5dyPDo" -> tlTOyDEZGUU
tlTOyDEZGUU -> GG6AZGhLCS4
// edges for stopping
edge [color=blue, arrowtail=box, arrowhead=normal, dir=both]
E3pdr5hNBe4 -> s3rUNS68AKs
yQKjsA90kpc -> s3rUNS68AKs
"87zN8iWo5pU" -> s3rUNS68AKs
HunlKDzXNv0 -> s3rUNS68AKs
// edges for continuing
edge [color=darkmagenta, arrowtail=inv, arrowhead=normal, dir=both]
E3pdr5hNBe4 -> K1kVsxsnYyc
yQKjsA90kpc -> K1kVsxsnYyc
"87zN8iWo5pU" -> K1kVsxsnYyc
HunlKDzXNv0 -> gv7_NTC_Rgs
// edges for endings
edge [color=black, arrowhead=normal, dir=forward]
"LvcxrEP2U-o" -> dU22iL1ZsWQ
"hhDh6_RD7tU" -> dU22iL1ZsWQ
"D8iP2qINaSE" -> dU22iL1ZsWQ
end1 [shape=none, label=end]
dU22iL1ZsWQ -> end1
end2 [shape=none, label=end]
"LLZJ-U1UB5M" -> end2
end3 [shape=none, label=end]
s3rUNS68AKs -> end3
end4 [shape=none, label=end]
AgHpWh77STQ -> end4
end5 [shape=none, label=end]
d0R5Csv7ogU -> end5
end6 [shape=none, label=end]
"4wUukNXczpM" -> end6
end7 [shape=none, label=end]
wUjs_vVwh68 -> end7
end8 [shape=none, label=end]
GG6AZGhLCS4 -> end8
}
```
) |
|
https://github.com/platformer/typst-algorithms | https://raw.githubusercontent.com/platformer/typst-algorithms/main/test/assertions/assert_raw_text_in_code.typ | typst | MIT License | #import "../../algo.typ": code
#code[
abc
]
|
https://github.com/lebinyu/typst-thesis-template | https://raw.githubusercontent.com/lebinyu/typst-thesis-template/main/thesis/abstract.typ | typst | Apache License 2.0 | #let mainbody = [
#lorem(400)
] |
https://github.com/jfrydell/fpga-gpu | https://raw.githubusercontent.com/jfrydell/fpga-gpu/main/report/template.typ | typst | // The project function defines how your document looks.
// It takes your content and some metadata and formats it.
// Go ahead and customize it to your liking!
#let project(title: "", authors: (), body) = {
// Set the document's basic properties.
set document(author: authors, title: title)
set page(paper: "us-letter", numbering: "1", number-align: center)
set text(font: "Linux Libertine", lang: "en")
// Headers
set heading(numbering: "I.A.1.")
show heading: it => locate(loc => {
// Find out the final number of the heading counter.
let levels = counter(heading).at(loc)
let deepest = if levels != () {
levels.last()
} else {
1
}
set text(10pt, weight: 400)
if it.level == 1 [
// First-level headings are centered smallcaps.
#set align(center)
#show: smallcaps
#set text(size: 14pt)
#v(20pt, weak: true)
#if it.numbering != none {
numbering("I.", deepest)
h(5pt, weak: true)
}
#it.body
#v(13.75pt, weak: true)
] else if it.level == 2 [
// Second-level headings are run-ins.
#set par(first-line-indent: 0pt)
#set text(style: "italic")
#set text(size: 12pt)
#v(12pt, weak: true)
#if it.numbering != none {
numbering("A.", deepest)
h(7pt, weak: true)
}
#it.body
#v(10pt, weak: true)
] else [
// Third level headings are run-ins too, but different.
#if it.level == 3 {
numbering("1)", deepest)
[ ]
}
_#(it.body):_
]
})
// Title row.
align(center)[
#block(text(weight: 700, 1.75em, title))
]
// Author information.
pad(
top: 0.5em,
bottom: 0.5em,
x: 2em,
grid(
columns: (1fr,) * calc.min(3, authors.len()),
gutter: 1em,
..authors.map(author => align(center, strong(author))),
),
)
// Main body.
set par(justify: true)
body
} |
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/math/matrix_08.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test using matrix line drawing with a set rule.
#set math.mat(augment: (hline: 2, vline: 1, stroke: 2pt + green))
$ mat(1, 0, 0, 0; 0, 1, 0, 0; 0, 0, 1, 1) $
#set math.mat(augment: 2)
$ mat(1, 0, 0, 0; 0, 1, 0, 0; 0, 0, 1, 1) $
#set math.mat(augment: none)
|
https://github.com/han0126/MCM-test | https://raw.githubusercontent.com/han0126/MCM-test/main/2024亚太杯typst/template/template.typ | typst | #import "@preview/numbly:0.1.0": numbly
/*定义字号*/
#let 字号=(
初号: 42pt,
小初: 36pt,
一号: 26pt,
小一: 24pt,
二号: 22pt,
小二: 18pt,
三号: 16pt,
小三: 15pt,
四号: 14pt,
中四: 13pt,
小四: 12pt,
五号: 10.5pt,
小五: 9pt,
六号: 7.5pt,
小六: 6.5pt,
七号: 5.5pt,
小七: 5pt,
)
/*定义字体*/
#let 字体=(
仿宋: ("Times New Roman", "FangSong"),
宋体: ("Times New Roman", "SimSun"),
黑体: ("Times New Roman", "SimHei"),
楷体: ("Times New Roman", "KaiTi"),
代码: ("Consolas", "Times New Roman", "SimSun"),
)
/*定义图样式*/
#let img(img, caption: "")={
figure(
img, //图片
caption: caption, //图名
supplement: [图], //标签
numbering: "1", //编号
kind: "image", //类型
)
}
/*定义表格样式*/
#let tbl(tbl, caption: "")={
figure(
tbl, //表格
caption: caption, //标题
supplement: [表], //标签
numbering: "1", //编号样式
kind: "table", //类型
)
}
/*定义代码块*/
#let code(code, caption: "", desc: "")={
if desc == "" {
desc = caption;
}
figure(
table(columns: 80%, align: left + horizon,
stroke: none,
table.hline(stroke: 1.5pt),
table.header([
#set text(fill: green)
\// *#desc*
]), table.hline(stroke: 1pt), block(code), table.hline(stroke: 1.5pt)),
caption: caption, //标题
supplement: [代码], //标签
numbering: "1.", //编号样式
kind: "code", //类型
)
}
/*定义附录代码样式*/
#let codeAppendix(code,caption:"")={
figure(
code,
caption: caption,
supplement: [附录],
numbering: "1:",
kind: "codeAppendix"
)
}
/*定义公式样式*/
#let equation(equation)={
figure(
equation, //公式
supplement: [式], //标签
//numbering: equation_num,//编号样式
numbering: "(1)", //编号样式
kind: "equation", //类型
)
}
/*符号说明*/
#let symbolDesc(syms: ([$A$], [$B$], [$C$]), desc: ([测试1], [测试2], [测试3]))={
if syms.len() != desc.len() {
return
}
align(center)[
#table(
columns: (10%, 85%),
align: (center + horizon, left + horizon),
stroke: none,
table.hline(stroke: 1.5pt),
table.header([*符号*], align(center)[*说明*]),
table.hline(stroke: 1pt),
..syms.zip(desc).flatten(),
table.hline(stroke: 1.5pt),
)
]
}
/*目录页*/
#let contents()={
align(center)[
#set text(font: 字体.黑体, size: 字号.四号)
#pagebreak()//换页
目#h(1.5em)录
]
parbreak()//换行
show outline:it=>{
set text(font: 字体.黑体,size: 字号.小四)
it
parbreak()
}
outline(
title: none,
indent: true,
)
//pagebreak()//换页
}
//定义手动缩进
#let indent = h(2em)
#let template(body, abstract: [摘要内容], title: "测试题目", keywords: ([key1], [key2], [key3])) = {
//设置页面
set page(
paper: "a4",
margin: (x: 2.5cm, y: 2.5cm),
numbering: numbly("{1}","{1}/{2}"),
)
//设置文本样式
set text(
font: 字体.宋体,
size: 字号.小四, //正文字体大小
fill: black,
lang: "zh",
)
//创建假段落样式,解决自动缩进
let fake-par = style(styles=>{
let b = par[#box()]
let t = measure(b + b, styles)
b
v(-t.height)
})
show strong: it => text(font: 字体.黑体, weight: "semibold", it.body)//设置加粗字体
show emph: it => text(font: 字体.楷体, style: "italic", it.body)//设置倾斜字体
//设置标题样式
set heading(numbering: numbly(
"{1:一、}",
"{1:1}.{2}",
"{1:1}.{2:1}.{3:1}"
))
show heading.where(level: 1):it=>{ //单独设置一级标题
set align(center)
set text(font: 字体.黑体, size: 字号.四号)
it
}
show heading.where(level: 2):it=>{ //单独设置二级标题
set align(left)
set text(font: 字体.黑体, size: 字号.小四)
it
v(0.15em)//二级标题后 行距
}
show heading.where(level: 3):it=>{ //单独设置三级标题
set align(left)
set text(font: 字体.黑体, size: 字号.小四)
it
}
show heading:it =>{
it
fake-par
}
//设置有序列表格式
set enum( numbering: " 1.")
//设置段落
set par(
justify: true, //两端对齐
first-line-indent: 2em, //首段缩进
leading: 0.8em, //行距
)
//设置图、表、代码样式
show figure: it =>[
#set align(center);//设置居中
#set block(breakable: true)//允许表格换行
#if it.kind == "image" { //图
it.body
//设置标题样式
set text(font: 字体.黑体, size: 字号.五号)
it.caption
} else if it.kind == "table" { //表
//表标题
set text(font: 字体.黑体, size: 字号.五号)
it.caption
//设置表字体
set text(font: 字体.宋体, size: 字号.五号)
it.body
} else if it.kind == "code" { //代码
//设置标题样式
set text(font: 字体.黑体, size: 字号.小五)
it.caption
//设置代码字体
set text(font: 字体.代码, size: 字号.五号)
it.body
} else if it.kind == "equation" { //公式
//通过大比例来达到中间靠右的排布
grid(
columns: (20fr, 1fr), //两列
it.body, //显示公式
align(center + horizon, it.counter.display(it.numbering)), //显示编号
)
} else if it.kind=="codeAppendix"{//附录代码
table(
columns: 100%,
fill: (x,y)=>{if(x==0 and y==0){gray}},
table.header(align(left)[*#it.caption*], repeat: false),
[
#set par(leading: 0.45em)
#align(left)[
#set text(font: 字体.代码, size: 字号.五号)
#it.body]
]
)}else {
it
}
]
[
#set page(footer: [#none])
#[//题目
#set text(font: 字体.黑体, size: 字号.三号)
#align(center)[#title]
]
#[//摘要字
#set text(font: 字体.黑体, size: 字号.四号)
#align(center)[摘#h(1em)要]
]
#abstract
#v(2em)
//关键词
#[
#set text(font: 字体.黑体, size: 字号.小四, )
关键词:
]
#for item in keywords{
item
h(1em)
}
/*目录*/
#contents()
]
counter(page).update(1)
body //正文
} |
|
https://github.com/open-datakit/accs-finalreport-whitepaper | https://raw.githubusercontent.com/open-datakit/accs-finalreport-whitepaper/main/1-intro/team.typ | typst | == Team
*Professor <NAME>*
*<NAME>*
Varvara consults as a data visualisation and web strategy specialist
Both Varvara and James help run and maintain a community makerspace based in Canberra: #link("https://canberramaker.space")[canberramaker.space]
*<NAME>*
*<NAME>*
|
|
https://github.com/artomweb/Quick-Sip-Typst-Template | https://raw.githubusercontent.com/artomweb/Quick-Sip-Typst-Template/master/template/main.typ | typst | MIT License | #import "@preview/quick-sip:0.1.0": *
#show: QRH.with(title: "Cup of Tea")
#section("Cup of Tea preparation")[
#step("KETTLE", "Filled to 1 CUP")
#step([*When* KETTLE boiled:], "")
#step([*If* sugar required], "")
//.. Rest of section goes here
] |
https://github.com/piepert/philodidaktik-hro-phf-ifp | https://raw.githubusercontent.com/piepert/philodidaktik-hro-phf-ifp/main/src/parts/ephid/ziele_und_aufgaben/ziele_philosophieunterricht.typ | typst | Other | #import "/src/template.typ": *
== Ziele des Philosophieunterrichts
#orange-list-with-body[*Orientierung*][
Der Philosophie wird sich durch #ix("Kants vier Grundfragen", "Kantische Fragen") angenähert:
+ Was kann ich wissen?
+ Was soll ich tun?
+ Was darf ich hoffen?
+ Was ist der Mensch?
][*Weisheit*][
Zum Erreichen der Weisheit stellt #ix("Kant", "<NAME>") die drei folgenden, dorthin führenden, Maximen auf:#en[Vgl. Kant, Immanuel: Kants gesammelte Schriften. Hrsg. von der königlich preussischen Akademie der Wissenschaften. Berlin 1900 ff. S. 201.]
+ Selbstdenken
+ sich an die Stelle eines anderen denken
+ jederzeit mit sich selbst einstimmig zu denken
][*Philosophieren*][
Im Rahmenplan ist folgendes gefordert: "Der Kern der Philosophie besteht im Philosophieren, der Tätigkeit des philosophischen Denkens, der philosophischen Kritik und Reflexion."#en[@MBWKMV2019_RP1112[S. 4]]
] |
https://github.com/Amelia-Mowers/typst-tabut | https://raw.githubusercontent.com/Amelia-Mowers/typst-tabut/main/lib.typ | typst | MIT License | #let col(
header,
function,
width: auto,
align: auto,
) = {
(
header: header,
func: function,
width: width,
align: align,
)
}
#let tabut-cells(
data-raw,
colDefs,
columns: auto,
align: auto,
index: "_index",
transpose: false,
headers: true,
) = {
let data = ();
let i = 0;
for record in data-raw {
let new-record = record;
if index != none { new-record.insert(index, i); }
data.push(new-record);
i = i + 1;
}
let entries = ();
let colWidths = ();
let colAlignments = ();
if transpose {
colWidths.push(auto);
colAlignments.push(auto);
for record in data {
colWidths.push(auto);
colAlignments.push(auto);
}
for colDef in colDefs {
if headers { entries.push(colDef.header); }
for record in data {
entries.push([#(colDef.func)(record)])
}
}
} else {
for colDef in colDefs {
if colDef.keys().contains("width") {
colWidths.push(colDef.width);
} else {
colWidths.push(auto);
}
if colDef.keys().contains("align") {
colAlignments.push(colDef.align);
} else {
colAlignments.push(auto);
}
}
for colDef in colDefs {
if headers { entries.push(colDef.header); }
}
for record in data {
for colDef in colDefs {
entries.push([#(colDef.func)(record)])
}
}
}
let output-named = (:)
if columns == auto {
output-named.columns = colWidths;
} else if columns == none {
// Do nothing
} else {
output-named.columns = columns;
}
if align == auto {
output-named.align = colAlignments;
} else if align == none {
// Do nothing
} else {
output-named.align = align;
}
arguments(
..output-named,
..entries
);
}
#let tabut(
data-raw,
colDefs,
columns: auto,
align: auto,
index: "_index",
transpose: false,
headers: true,
..tableArgs
) = {
table(
..tabut-cells(
data-raw,
colDefs,
columns: columns,
align: align,
index: index,
headers: headers,
transpose: transpose,
),
..tableArgs
)
}
#let rows-to-records(headers, rows, default: none) = {
rows.map(r => {
let record = (:);
let i = 0;
for header in headers {
record.insert(header, r.at(i, default: default));
i = i + 1;
}
record
})
}
#let group(data, function) = {
let groups = ();
for record in data {
let value = function(record);
let group-pos = groups.position(g => g.value == value);
if group-pos == none {
let new-group = (value: value, group: ());
new-group.group.push(record);
groups.push(new-group);
} else {
groups.at(group-pos).group.push(record)
}
}
groups.sorted(key: r => r.value)
}
#let auto-type(input) = {
let is-int = (input.match(regex("^-?\d+$")) != none);
if is-int { return int(input); }
let is-float = (input.match(regex("^-?(inf|nan|\d+|\d*(\.\d+))$")) != none);
if is-float { return float(input) }
input
}
#let records-from-csv(input) = {
let data = {
let data-raw = input;
rows-to-records(data-raw.first(), data-raw.slice(1))
}
data.map( r => {
let new-record = (:);
for (k, v) in r.pairs() {
new-record.insert(k, auto-type(v));
}
new-record
})
} |
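// Hypothetical usage sketch (not part of the original library file): build
// column definitions with `col` and render a table from an array of records.
//
// #let scores = (
//   (name: "Ada", points: 9),
//   (name: "Grace", points: 7),
// )
// #tabut(
//   scores,
//   (
//     col([*Name*], r => r.name),
//     col([*Points*], r => r.points, align: right),
//   ),
// )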
https://github.com/LDemetrios/Typst4k | https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/text/deco.typ | typst | // Test text decorations.
--- underline-overline-strike ---
#let red = rgb("fc0030")
// Basic strikethrough.
#strike[Statements dreamt up by the utterly deranged.]
// Move underline down.
#underline(offset: 5pt)[Further below.]
// Different color.
#underline(stroke: red, evade: false)[Critical information is conveyed here.]
// Inherits font color.
#text(fill: red, underline[Change with the wind.])
// Both over- and underline.
#overline(underline[Running amongst the wolves.])
--- strike-with ---
#let redact = strike.with(stroke: 10pt, extent: 0.05em)
#let highlight-custom = strike.with(stroke: 10pt + rgb("abcdef88"), extent: 0.05em)
// Abuse thickness and transparency for redacting and highlighting stuff.
Sometimes, we work #redact[in secret].
There might be #highlight-custom[redacted] things.
--- underline-stroke-folding ---
// Test stroke folding.
#set underline(stroke: 2pt, offset: 2pt)
#underline(text(red, [DANGER!]))
--- underline-background ---
// Test underline background
#set underline(background: true, stroke: (thickness: 0.5em, paint: red, cap: "round"))
#underline[This is in the background]
--- overline-background ---
// Test overline background
#set overline(background: true, stroke: (thickness: 0.5em, paint: red, cap: "round"))
#overline[This is in the background]
--- strike-background ---
// Test strike background
#set strike(background: true, stroke: 5pt + red)
#strike[This is in the background]
--- highlight ---
// Test highlight.
This is the built-in #highlight[highlight with default color].
We can also specify a customized value
#highlight(fill: green.lighten(80%))[to highlight].
--- highlight-bounds ---
// Test default highlight bounds.
#highlight[ace],
#highlight[base],
#highlight[super],
#highlight[phone #sym.integral]
--- highlight-edges ---
// Test a tighter highlight.
#set highlight(top-edge: "x-height", bottom-edge: "baseline")
#highlight[ace],
#highlight[base],
#highlight[super],
#highlight[phone #sym.integral]
--- highlight-edges-bounds ---
// Test a bounds highlight.
#set highlight(top-edge: "bounds", bottom-edge: "bounds")
#highlight[abc]
#highlight[abc #sym.integral]
--- highlight-radius ---
// Test highlight radius
#highlight(radius: 3pt)[abc],
#highlight(radius: 1em)[#lorem(5)]
--- highlight-stroke ---
// Test highlight stroke
#highlight(stroke: 2pt + blue)[abc]
#highlight(stroke: (top: blue, left: red, bottom: green, right: orange))[abc]
#highlight(stroke: 1pt, radius: 3pt)[#lorem(5)]
|
|
https://github.com/kotatsuyaki/canonical-nthu-thesis | https://raw.githubusercontent.com/kotatsuyaki/canonical-nthu-thesis/main/layouts/doc.typ | typst | MIT License | // Show rules and set rules applied throughout the whole thesis document.
#let doc-impl(
info: (:),
style: (:),
show-draft-mark: true,
it
) = {
set document(
title: info.title-en + " " + info.title-zh,
author: info.author-en + " " + info.author-zh,
keywords: info.keywords-en + info.keywords-zh,
)
set text(
size: 12pt,
font: style.fonts,
hyphenate: true,
)
show math.equation: set text(font: style.math-fonts)
set par(
justify: true,
)
set page(
background: {
image("../nthu-logo.svg", width: 1.95in, height: 1.95in)
if show-draft-mark {
set text(size: 18pt, fill: gray, weight: "regular")
place(
top + right,
dx: -1em,
dy: 1em,
rotate(
-90deg,
reflow: true,
[Draft version #datetime.today().display()]
)
)
}
},
)
show ref: it => {
let el = it.element
if el != none and el.func() == heading and el.level == 1 {
[Chapter #numbering(el.numbering, ..counter(heading).at(el.location()))]
} else {
it
}
}
it
}
|
https://github.com/SnowManKeepsOnForgeting/NoteofEquationsofMathematicalPhysics | https://raw.githubusercontent.com/SnowManKeepsOnForgeting/NoteofEquationsofMathematicalPhysics/main/Chapter_2/Chapter_2.typ | typst | #import "@preview/physica:0.9.3": *
#import "@preview/i-figured:0.2.4"
#set heading(numbering: "1.1")
#show math.equation: i-figured.show-equation.with(level: 2)
#show heading: i-figured.reset-counters.with(level: 2)
#set text(font: "CMU Serif")
#let dcases(..args) = math.cases(..args.pos().map(math.display))
#counter(heading).update(1)
= Method of Separation of Variables
== String Vibration Equation
Given a string vibration equation:
$
dcases(
pdv(u,t,2) = a^2 pdv(u,x,2)\
eval(u)_(x=0) = 0 "," eval(u)_(x=l) = 0\
eval(u)_(t=0) = phi(x) "," eval(pdv(u,t))_(t=0) = psi(x) "," 0 <= x <= l
)
$
We assume the solution to be of the form:
$
u_n (x,t) = X_n (x)T_n (t)
$
And we superimpose the solutions to get the general solution:
$
u(x,t) = sum_(n=1)^oo C_n X_n (x)T_n (t)
$
We substitute $u_n (x,t) = X_n (x) T_n (t)$ into the string vibration equation to get:
$
(X^'' (x))/X(x) = (1/a^2) (T^'' (t))/T(t) = -lambda
$
where $lambda$ is a constant. (The LHS is independent of $t$ and the RHS is independent of $x$, so both sides must equal the same constant; otherwise the equation could not hold for all $x$ and $t$.) We keep the factor $1/a^2$ on the $T$ side so that the equation for $X(x)$ does not contain $a$ and can be solved directly with the boundary conditions in the next step.
We have two ODEs:
$
T^'' (t) + lambda a^2 T(t) = 0
$
$
X^'' (x) + lambda X(x) = 0
$
We solve the $X(x)$ ODE with the boundary conditions:
$
dcases(
X^'' (x) + lambda X(x) = 0\
X(0) = 0 "," X(l) = 0
)
$<217>
Then we discuss the different cases of $lambda$.
(1) $lambda < 0$
The general solution is:
$
X(x) = A e^(sqrt(-lambda)x) + B e^(-sqrt(-lambda)x)
$
And the boundary conditions give:
$
dcases(
A+B=0\
A e^(sqrt(-lambda)l) + B e^(-sqrt(-lambda)l) = 0
)
$
Because the determinant of the coefficient matrix is not zero, the only solution is $A = B = 0$, which means $X(x) eq.triple 0$. This is not a solution to the problem.
(2) $lambda = 0$
The general solution is:
$
X(x) = A x + B
$
To satisfy the boundary conditions we have:
$
A = B = 0
$
Which is not a solution to the problem.
(3) $lambda > 0$
Let $lambda = beta^2(beta >0)$.The general solution is:
$
X(x) = A cos(beta x) + B sin(beta x)
$
And the boundary conditions give:
$
dcases(
X(0) = A =0\
X(l) = A cos(beta l) + B sin(beta l) = 0
)
$
We have $A=0$, $beta = (n pi)/l$, $n=1,2,3 dots$ (we do not consider $n = -1,-2 dots$ because a sign change can be absorbed into the coefficient $B$). Thus we have:
$
lambda_n = (n^2 pi ^2)/l^2
$
$
X_n(x) = B_n sin((n pi )/l x)
$
We substitute $lambda = (n^2 pi ^2)/l^2$ into the $T(t)$ ODE to get:
$
T^''_n (t) + (n^2 pi ^2 a^2)/l^2 T_n (t) = 0
$
We have:
$
T_n (t)= C_n^' cos((n pi a )/l t) + D_n^' sin((n pi a)/l t)
$
Thus we can get the general solution:
$
u_n (x,t) = (C_n cos((n pi a)/l t) + D_n sin((n pi a)/l t) ) sin((n pi)/l x)
$where $C_n = B_n C_n^',D_n = B_n D_n^'$
And we superimpose the solutions to get solution:
$
u(x,t) = sum_(n=1)^oo (C_n cos((n pi a)/l t) + D_n sin((n pi a)/l t) ) sin((n pi)/l x)
$
We substitute the initial conditions to get the final solution (provided the series on the RHS converges and can be differentiated term by term):
$
eval(u(x,t))_(t=0) = sum_(n=1)^oo C_n sin((n pi)/l x) = phi(x)
$
$
eval(pdv(u,t))_(t=0) = sum_(n=1)^oo (D_n (n pi a)/l) sin((n pi)/l x) = psi(x)
$
Just in case you've forgotten, the Fourier series is:
#set align(center)
#block(
fill: luma(230),
inset: 8pt,
radius: 4pt
)[
$
f(t) = a_0/2 + sum_(n=1)^oo (a_n cos(n omega t) + b_n sin(n omega t))\
a_n = (2/T) integral_(t_0)^(t_0+T) f(t) cos(n omega t) dd(t)\
b_n = (2/T) integral_(t_0)^(t_0+T) f(t) sin(n omega t) dd(t)
$<->
]
#set align(left)
Thus we have:
$
dcases(
C_n = (2/l) integral_0^l phi(x) sin((n pi)/l x) dd(x)\
D_n = (2/(n pi a)) integral_0^l psi(x) sin((n pi)/l x) dd(x)
)
$
The same procedure works for the second type of boundary condition.
== Heat Conduction Equation
Given a heat conduction equation:
$
dcases(
pdv(u,t) = a^2 pdv(u,x,2)","0<x<l","t>0\
eval(pdv(u,x))_(x=0) = 0 "," eval(pdv(u,x))_(x=l) = 0","t>0\
eval(u)_(t=0) = phi(x)","0<=x<=l
)
$
We assume the solution to be of the form:
$
u(x,t) = X(x)T(t)
$
Substitute $u(x,t) = X(x)T(t)$ into the heat conduction equation to get:
$
(T^' (t))/(a^2 T(t)) = (X^'' (x))/X(x) = -lambda
$
We use boundary conditions to solve the $X(x)$ ODE:
$
dcases(
X^'' (x) + lambda X(x) = 0\
X^' (0) = 0 "," X^' (l) = 0
)
$
Then we discuss the different cases of $lambda$.
(1) $lambda < 0$
The general solution is:
$
X(x) = A e^(sqrt(-lambda)x) + B e^(-sqrt(-lambda)x)
$
And we have:
$
dcases(
sqrt(-lambda) A - sqrt(-lambda) B = 0\
sqrt(-lambda) A e^(sqrt(-lambda)l) - sqrt(-lambda) B e^(-sqrt(-lambda)l) = 0
)
$
The determinant of the coefficient matrix is:
$
mat(delim: "|",sqrt(-lambda),-sqrt(-lambda);
sqrt(-lambda)e^(sqrt(-lambda)l),-sqrt(-lambda)e^(-sqrt(-lambda)l)) != 0
$
Thus the only solution is $A = B = 0$, which means $X(x) eq.triple 0$. This is not a solution to the problem.
(2) $lambda = 0$
The general solution is:
$
X(x) = A x + B
$
And we have:
$
A = 0
$
Thus we have:$X(x) = B,B != 0$
(3) $lambda > 0$
Let $lambda = beta^2(beta >0)$.The general solution is:
$
X(x) = A cos(beta x) + B sin(beta x)
$
And we have:
$
dcases(
B beta = 0\
- A beta sin(beta l) + B beta cos(beta l) = 0
)
$
Thus we have:
$
beta = (n pi)/l,n=1,2,3dots\
lambda_n = ((n pi)/l)^2
$
$
X_n (x) = A_n cos((n pi)/l x)
$
From the discussion above we know that $lambda = 0$ can also be considered as a special case of $lambda > 0$. Thus we have the general solution:
$
X_n (x) = A_n cos((n pi)/l x),n=0,1,2,3dots
$
We substitute $lambda_n = ((n pi)/l)^2,n=0,1,2,3dots$ into the $T(t)$ ODE:
$
T^' (t) + (n^2 pi^2 a^2)/l^2 T(t) = 0
$
The general solution is:
$
T_n (t) = C_n e^(- (n^2 pi^2 a^2)/l^2 t)
$
So we have gotten the general solution:
$
u_n (x,t) = a_n cos((n pi)/l x) e^(- (n^2 pi^2 a^2)/l^2 t),a_n = A_n C_n
$
$
u(x,t) = a_0/2 + sum_(n=1)^(oo) a_n cos((n pi)/l x) e^(-(n^2 pi ^2 a^2)/l^2 t)
$
We use the initial condition to determine the coefficients $a_n$:
$
u(t,0) = a_0/2 + sum_(n=1)^oo a_n cos((n pi)/l x) = phi(x)
$
By comparing Fourier coefficients we have:
$
a_n = (2/l) integral_0^l phi(x) cos((n pi)/l x) dd(x)
$
== Laplace Equation in a Rectangular Region
Given a Laplace equation in a rectangular region:
$
dcases(
pdv(u,x,2) + pdv(u,y,2) = 0\
u(0,y) = f_1(y)","u(a,y) = f_2(y)\
u(x,0) = g_1(x)","u(x,b) = g_2(x)
)
$
We notice that the boundary conditions in this equation are not homogeneous, so we cannot use the method of separation of variables directly. Instead we use the method of superposition: we assume the solution to be of the form $u = u_1 + u_2$, where $u_1, u_2$ are solutions to the following two problems.
$u_1$ is a solution to the following problem:
$
dcases(
pdv(u_1,x,2) + pdv(u_1,y,2) = 0\
u_1 (0,y) = 0","u_1 (a,y) = 0\
u_1 (x,0) = g_1(x)","u_1 (x,b) = g_2(x)
)
$
And $u_2$ is a solution to the following problem:
$
dcases(
pdv(u_2,x,2) + pdv(u_2,y,2) = 0\
u_2 (0,y) = f_1(y)","u_2 (a,y) = f_2(y)\
u_2 (x,0) = 0","u_2 (x,b) = 0
)
$
In this way we have transformed a problem with non-homogeneous boundary conditions into two problems with homogeneous boundary conditions, each of which can be solved by separation of variables. We only solve the $u_1$ problem here; the $u_2$ problem is similar.
$
dcases(
pdv(u_1,x,2) + pdv(u_1,y,2) = 0\
u_1 (0,y) = 0","u_1 (a,y) = 0\
u_1 (x,0) = g_1(x)","u_1 (x,b) = g_2(x)
)
$
We assume the solution to be of the form:
$
u(x,y) = X(x)Y(y)
$
We substitute $u(x,y) = X(x)Y(y)$ into the equation to get:
$
(X^'' (x)) / X(x) = - (Y^'' (y)) / Y(y) = -lambda
$
Thus we have two ODEs:
$
dcases(
X^'' (x) + lambda X(x) = 0\
Y^'' (y) - lambda Y(y) = 0
)
$
We solve the $X(x)$ ODE with the boundary conditions:
$
dcases(
X^'' (x) + lambda X(x) = 0\
X(0) = 0 "," X(a) = 0
)
$
This is the same as @eqt:217, thus we have:
$
lambda_n = (n^2 pi^2)/a^2,X_n (x) = C_n sin((n pi)/a x),n=1,2,3dots
$
We substitute $lambda_n = (n^2 pi ^2)/a^2$ into the $Y(y)$ ODE to get:
$
Y^'' (y) - (n^2 pi^2)/a^2 Y(y) = 0
$
The general solution is:
$
Y_n (y) = D_n e^((n pi)/a y) + E_n e^(- (n pi)/a y),n=1,2,3dots
$
Thus we have the general solution:
$
u_n (x,y) = (A_n e^((n pi)/a y) + B_n e^(- (n pi)/a y)) sin((n pi)/a x),A_n=C_n D_n,B_n=C_n E_n
$
$
u(x,y) = sum_(n=1)^(oo)(A_n e^((n pi)/a y) + B_n e^(- (n pi)/a y)) sin((n pi)/a x)
$
And we have:
$
dcases(
u(x,0) = (A_n + B_n) sin((n pi)/a x) = g_1(x)\
u(x,b) = (A_n e^((n pi)/a b) + B_n e^(- (n pi)/a b)) sin((n pi)/a x) = g_2(x)
)
$
We can get the coefficients $A_n, B_n$ from the Fourier sine expansions of $g_1(x), g_2(x)$.
$
dcases(
A_n + B_n = (2/a) integral_0^a g_1(x) sin((n pi)/a x) dd(x)\
A_n e^((n pi)/a b) + B_n e^(- (n pi)/a b) = (2/a) integral_0^a g_2(x) sin((n pi)/a x) dd(x)
)
$
== Laplace Equation in a Circular Region
Given a Laplace equation in a circular region:
$
dcases(
pdv(u,x,2) + pdv(u,y,2) = 0\
eval(u(x,y))_(x^2 + y^2 = r_0^2) = f(x,y)
)
$
Because the boundary condition is a circle,we consider this problem in polar coordinates:
$
dcases(
pdv(u,r,2) + (1/r) pdv(u,r) + (1/r^2) pdv(u,theta,2) = 0\
eval(u(r,theta))_(r = r_0) = f(theta)\
|u(0,theta)| < oo\
u(r,theta) = u(r,theta + 2 pi)
)
$
The third condition is the natural (boundedness) condition; we add it because we will need one more condition to determine a coefficient later. The fourth condition is the periodicity condition.
Now we solve this equation with the method of separation of variables. We assume the solution to be of the form:
$
u(r,theta) = R(r)Theta(theta)
$
We substitute $u(r,theta) = R(r)Theta(theta)$ into the equation to get:
$
(r^2 R^'' (r) + r R^' (r)) / R(r) = - (Theta^'' (theta)) / Theta(theta) = lambda
$
Thus we have two ODEs:
$
dcases(
r^2 R^'' (r) + r R^' (r) - lambda R(r) = 0\
|R(0)| < oo
)
$
$
dcases(
Theta^'' (theta) + lambda Theta(theta) = 0\
Theta(theta) = Theta(theta + 2 pi)
)
$
We solve the $Theta(theta)$ ODE first because it is easier. We discuss the different cases of $lambda$.
(1) $lambda < 0$
The general solution is:
$
Theta(theta) = A e^(sqrt(-lambda)theta) + B e^(-sqrt(-lambda)theta)
$
It does not satisfy the periodicity condition, thus $lambda < 0$ does not give a solution to the problem.
(2) $lambda = 0$
The general solution is:
$
Theta(theta) = A theta + B
$
It satisfies the periodicity condition only when $A = 0$, that is $Theta(theta) = B, B!=0$.
(3) $lambda > 0$
Let $lambda = beta ^2$.The general solution is:
$
Theta(theta) = A cos(beta theta) + B sin(beta theta)
$
To satisfy the periodicity condition, $beta$ must be a positive integer $n$, $n=1,2,3 dots$. Thus we have:
$
Theta_n (theta) = A_n cos(n theta) + B_n sin(n theta),lambda=n^2,n=1,2,3dots
$
Let us then solve the $R(r)$ ODE.
And we substitute $lambda = n^2$ into the $R(r)$ ODE to get:
$
dcases(
r^2 R^'' (r) + r R^' (r) - n^2 R(r) = 0\
|R(0)| < oo
)
$<2411>
This is an Euler equation. Since $r>0$, we substitute $r = e^t$. We have:
$
dv(R,r) = dv(R,t) dv(t,r)=1/r dv(R,t)\
dv(R,r,2) = dv(1/r dv(R,t),r) = -1/r^2 dv(R,t) + 1/r^2 dv(R,t,2)
$<->
We substitute them into @eqt:2411:
$
dv(R,t,2)-n^2 R = 0
$
We get the solution:
$
R_n (t) = C_n e^(n t) + D_n e^(- n t)\
$
Thus
$
R_n (r) = dcases(
C_0 + D_0 ln r"," n =0\
C_n r^n + D_n r^(-n) "," n = 1","2","3dots
)
$
And we have natural condition:
$
|R(0)| < oo
$
Thus $D_n = 0$
So we get the general solution:
$
u(r,theta) = a_0/2 + sum_(n=1)^(oo) r^n (a_n cos(n theta) + b_n sin(n theta))
$
And we substitute it into boundary condition:
$
u(r_0,theta) = a_0/2 + sum_(n=1)^(oo)r_0^n (a_n cos(n theta) + b_n sin(n theta)) = f(theta)
$
By comparing Fourier coefficients we have:
$
dcases(
a_n = 1/(r_0^n pi) integral_0^(2 pi) f(theta) cos(n theta) dd(theta)\
b_n = 1/(r_0^n pi) integral_0^(2 pi) f(theta) sin(n theta) dd(theta)
)
$
We substitute these coefficients into the general solution:
$
u(r,theta) = 1/pi integral_0^(2 pi) f(t)[1/2 + sum_(n=1)^(oo) (r/r_0)^n cos(n(theta - t))] dd(t)
$ |
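The series in the brackets can be summed in closed form (a standard geometric-series computation, added here for completeness), which gives the Poisson integral formula for the disk:
$
u(r,theta) = 1/(2 pi) integral_0^(2 pi) f(t) (r_0^2 - r^2)/(r_0^2 - 2 r r_0 cos(theta - t) + r^2) dd(t)
$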
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/rubby/0.10.0/README.md | markdown | Apache License 2.0 | # rubby (Typst package)
## Usage
```typ
#import "@preview/rubby:0.10.0": get-ruby
#let ruby = get-ruby(
size: 0.5em, // Ruby font size
dy: 0pt, // Vertical offset of the ruby
pos: top, // Ruby position (top or bottom)
alignment: "center", // Ruby alignment ("center", "start", "between", "around")
delimiter: "|", // The delimiter between words
auto-spacing: true, // Automatically add necessary space around words
)
// Ruby goes first, base text - second.
#ruby[ふりがな][振り仮名]
Treat each kanji as a separate word:
#ruby[とう|きょう|こう|ぎょう|だい|がく][東|京|工|業|大|学]
```
If you don't want the text to be automatically wrapped with the delimiter:
```typ
#let ruby = get-ruby(auto-spacing: false)
```
See also <https://github.com/rinmyo/ruby-typ/blob/main/manual.pdf> and `example.typ`.
## Notes
Original project is at <https://github.com/rinmyo/ruby-typ> which itself is
based on [the post](https://zenn.dev/saito_atsushi/articles/ff9490458570e1)
of 齊藤敦志 (Saito Atsushi). This project is a modified version of
[this commit](https://github.com/rinmyo/ruby-typ/commit/23ca86180757cf70f2b9f5851abb5ea5a3b4c6a1).
`auto-spacing` adds the missing delimiter around the `content`/`string`, which
then adds space around the base text if the ruby is wider than the base text.
Problems appear only if the ruby is wider than its base text and `auto-spacing` is
not set to `true` (the default is `true`).
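For example, the difference only shows up when the ruby is wider than its base
text (a rough sketch, not canonical output of the package):

```typ
#import "@preview/rubby:0.10.0": get-ruby

#let ruby-tight = get-ruby(auto-spacing: false)
#let ruby-safe = get-ruby() // auto-spacing: true is the default

// The ruby below is wider than its one-character base text. Without
// auto-spacing it may crowd the neighbouring letters; with it, extra
// space is added around the base text.
A#ruby-tight[ふりがな][振]B \
A#ruby-safe[ふりがな][振]B
```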
You can always use a one-letter function (variable) name to shorten the
function call length (if you have to use it a lot), e.g., `#let r = get-ruby()`
(or `f` — short for furigana). But be careful as there are functions with names
`v` and `h` and there could be a new built-in function with a name `r` or `f`
which may break your document (Typst right now is in beta, so breaking changes
are possible).
Although you can open issues or send PRs, I won't be able to always reply
quickly (sometimes I'm very busy).
## Changelog
You can view the change log in the CHANGELOG.md file in the root of the project.
## License
This Typst package is licensed under AGPL v3.0. You can view the license in the
LICENSE file in the root of the project or at
<https://www.gnu.org/licenses/agpl-3.0.txt>. There is also a NOTICE file for
3rd party copyright notices.
Copyright (C) 2023 <NAME>
|
https://github.com/frectonz/the-pg-book | https://raw.githubusercontent.com/frectonz/the-pg-book/main/book/108.%20relres.html.typ | typst | relres.html
Relentlessly Resourceful
Want to start a startup? Get funded by
Y Combinator.
March 2009A couple days ago I finally got being a good startup founder down
to two words: relentlessly resourceful.Till then the best I'd managed was to get the opposite quality down
to one: hapless. Most dictionaries say hapless means unlucky. But
the dictionaries are not doing a very good job. A team that outplays
its opponents but loses because of a bad decision by the referee
could be called unlucky, but not hapless. Hapless implies passivity.
To be hapless is to be battered by circumstances — to let the world
have its way with you, instead of having your way with the world.
[1]Unfortunately there's no antonym of hapless, which makes it difficult
to tell founders what to aim for. "Don't be hapless" is not much
of a rallying cry.It's not hard to express the quality we're looking for in metaphors.
The best is probably a running back. A good running back is not
merely determined, but flexible as well. They want to get downfield,
but they adapt their plans on the fly.Unfortunately this is just a metaphor, and not a useful one to most
people outside the US. "Be like a running back" is no better than
"Don't be hapless."But finally I've figured out how to express this quality directly.
I was writing a talk for
investors, and I had to explain what to
look for in founders. What would someone who was the opposite of
hapless be like? They'd be relentlessly resourceful. Not merely
relentless. That's not enough to make things go your way except
in a few mostly uninteresting domains. In any interesting domain,
the difficulties will be novel. Which means you can't simply plow
through them, because you don't know initially how hard they are;
you don't know whether you're about to plow through a block of foam
or granite. So you have to be resourceful. You have to keep
trying new things.Be relentlessly resourceful.That sounds right, but is it simply a description
of how to be successful in general? I don't think so. This isn't
the recipe for success in writing or painting, for example. In
that kind of work the recipe is more to be actively curious.
Resourceful implies the obstacles are external, which they generally
are in startups. But in writing and painting they're mostly internal;
the obstacle is your own obtuseness.
[2]There probably are other fields where "relentlessly resourceful"
is the recipe for success. But though other fields may share it,
I think this is the best short description we'll find of what makes
a good startup founder. I doubt it could be made more precise.Now that we know what we're looking for, that leads to other
questions. For example, can this quality be taught? After four
years of trying to teach it to people, I'd say that yes, surprisingly
often it can. Not to everyone, but to many people.
[3]
Some
people are just constitutionally passive, but others have a latent
ability to be relentlessly resourceful that only needs to be brought
out.This is particularly true of young people who have till now always
been under the thumb of some kind of authority. Being relentlessly
resourceful is definitely not the recipe for success in big companies,
or in most schools. I don't even want to think what the recipe is
in big companies, but it is certainly longer and messier, involving
some combination of resourcefulness, obedience, and building
alliances.Identifying this quality also brings us closer to answering a
question people often wonder about: how many startups there could
be. There is not, as some people seem to think, any economic upper
bound on this number. There's no reason to believe there is any
limit on the amount of newly created wealth consumers can absorb,
any more than there is a limit on the number of theorems that can
be proven. So probably the limiting factor on the number of startups
is the pool of potential founders. Some people would make good
founders, and others wouldn't. And now that we can say what makes
a good founder, we know how to put an upper bound on the size of
the pool.This test is also useful to individuals. If you want to know whether
you're the right sort of person to start a startup, ask yourself
whether you're relentlessly resourceful. And if you want to know
whether to recruit someone as a cofounder, ask if they are.You can even use it tactically. If I were running a startup, this
would be the phrase I'd tape to the mirror. "Make something people
want" is the destination, but "Be relentlessly resourceful" is how
you get there.
Notes[1]
I think the reason the dictionaries are wrong is that the
meaning of the word has shifted. No one writing a dictionary from
scratch today would say that hapless meant unlucky. But a couple
hundred years ago they might have. People were more at the mercy
of circumstances in the past, and as a result a lot of the words
we use for good and bad outcomes have origins in words about luck.When I was living in Italy, I was once trying to tell someone
that I hadn't had much success in doing something, but I couldn't
think of the Italian word for success. I spent some time trying
to describe the word I meant. Finally she said "Ah! Fortuna!"[2]
There are aspects of startups where the recipe is to be
actively curious. There can be times when what you're doing is
almost pure discovery. Unfortunately these times are a small
proportion of the whole. On the other hand, they are in research
too.[3]
I'd almost say to most people, but I realize (a) I have no
idea what most people are like, and (b) I'm pathologically optimistic
about people's ability to change.

Thanks to <NAME> and <NAME> for reading drafts
of this.
|
|
https://github.com/Daillusorisch/HYSeminarAssignment | https://raw.githubusercontent.com/Daillusorisch/HYSeminarAssignment/main/template/components/main-matter.typ | typst | #import "../utils/style.typ": *
#import "../utils/indent.typ": *
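// Hedged usage sketch (not part of the original template file): main-matter is
// intended to be applied as a show rule once the front matter ends, roughly as
// below. The import path and the sample heading are illustrative assumptions only.
//
// #import "components/main-matter.typ": main-matter
// #show: main-matter
//
// = Introduction
// Body text starts here.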
#let main-matter(content) = {
// Set headings
set heading(numbering: (..nums) => {
nums.pos().map(str).join(".") + " "
})
show heading.where(level: 1): it => {
set align(center)
set text(weight: "regular", font: 字体.黑体, size: 字号.小二)
set block(spacing: 1.5em)
it
}
show heading.where(level: 2): it => {
set text(weight: "regular", font: 字体.黑体, size: 字号.四号)
set block(above: 1.5em, below: 1.5em)
it
}
set page(
header: {
set text(
font: 字体.宋体,
size: 字号.五号,
baseline: 8pt,
spacing: 6pt
)
set align(center)
[武 汉 大 学 本 科 课 程 论 文 ( 设 计 )]
line(length: 100%, stroke: 0.7pt)
counter(footnote).update(0)
},
footer: {
set align(center)
text(
font: 字体.宋体,
size: 字号.五号,
counter(page).display("1")
)
}
)
counter(page).update(1)
set text(
font: 字体.宋体,
size: 字号.小四
)
set par(
first-line-indent: 2em,
leading: 1.5em
)
show table.cell: set par(leading: 0.5em)
show raw: set par(leading: 0.5em)
show figure: it => {
set text(font: 字体.宋体, size: 字号.五号)
v(0.5em)
it
v(0.5em)
}
show figure.caption: set text(weight: "bold")
// distance between two par
// reference: https://github.com/typst/typst/issues/686#issuecomment-1811330876
show par: set block(spacing: 1.5em)
// magic to fix the issue of no indentation in the first line of the first paragraph
show heading: it => {
set text(weight: "regular", font: 字体.黑体, size: 字号.小四)
set block(above: 1.5em, below: 1.5em)
it
} + fake-par
content
} |
|
https://github.com/Enter-tainer/natrix | https://raw.githubusercontent.com/Enter-tainer/natrix/main/nat.typ | typst | Apache License 2.0 | #let is-2d-arg(mat) = {
for arr in mat {
if type(arr) == "array" {
return true
}
}
false
}
#let nat(..args) = {
let pos-args = args.pos()
// same as \mathstrut in LaTeX.
let strut = context {
let width = measure($\($).width
[#hide($\($)#h(-width)]
}
let res = for line in pos-args {
if type(line) == "array" {
let res = for elem in line {
([#elem#strut],)
}
(res,)
} else {
([#line#strut],)
}
}
set math.mat(row-gap: 0.1em, column-gap: 1em)
math.mat(..res, ..args.named())
}
#let bnat = nat.with(delim: "[")
#let pnat = nat.with(delim: "(")
#let Bnat = nat.with(delim: "{")
#let vnat = nat.with(delim: "|")
#let Vnat = nat.with(delim: "||")
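// Hedged usage sketch (not part of the original package file): each positional
// argument is one matrix row, given either as an array or as a single cell, and
// named arguments such as delim are forwarded to math.mat. The values below are
// illustrative only.
//
// $ #bnat((1, 2), (3, 4)) $   // 2x2 matrix in square brackets
// $ #pnat(1, 2) $             // 2x1 column vector in parentheses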
|
https://github.com/maxdinech/typst-recipe | https://raw.githubusercontent.com/maxdinech/typst-recipe/main/recipe.typ | typst | #let primary_colour = rgb("#ce1f36")
#let text_colour = rgb("#333")
#let note(note) = {
(NOTE: "NOTESTART" + note + "NOTEEND")
}
#let recipes(title, doc) = {
set text(10pt, font: "IBM")
set page(
margin: (x: 54pt, y: 52pt),
numbering: "1",
number-align: right,
)
set align(center)
v(240pt)
text(fill: primary_colour, font: "Salzburg-Serial", size: 30pt, weight: 100, title)
set align(left)
show heading.where(
level: 1
): it => [
#pagebreak()
#v(300pt)
#set align(center)
#text(
fill: primary_colour,
font: "IBM Plex Sans",
weight: 300,
size: 20pt,
{it.body},
)
#text(" ")
#pagebreak()
]
doc
}
#let display_with_footnotes(lines, footnotes_counter) = {
for line in lines.children {
if "NOTE" not in repr(line) {
line
} else {
let x = line.body.children.map(it => {
if "NOTE" not in repr(it) {
it
} else {
footnotes_counter.step()
super(typographic: true, size: 7pt, footnotes_counter.display())
}
}).join()
if "listitem" in repr(line) { [- #x] } else { [#x] }
}
}
}
#let display_steps_with_footnotes(lines, footnotes_counter) = {
for line in lines.children {
if "NOTE" not in repr(line) {
line
} else {
footnotes_counter.step()
super(typographic: true, size: 7pt, footnotes_counter.display())
}
}
}
#let display_ingredients(ingredients, footnotes_counter) = {
emph(display_with_footnotes(ingredients, footnotes_counter))
}
#let display_cookware(cookware, footnotes_counter) = {
emph(display_with_footnotes(cookware, footnotes_counter))
}
#let display_steps(steps, footnotes_counter) = {
[== Préparation]
set enum(tight: true)
columns(
2,
gutter: 11pt,
display_steps_with_footnotes(steps, footnotes_counter)
)
}
#let display_footnotes(ingredients, cookware, steps, footnotes_counter) = {
let aux() = {
for lines in (ingredients, cookware, steps) {
for line in lines.children {
if "NOTE" not in repr(line) {
} else {
if line.has("body") {
line.body.children.map(it => {
if "NOTE" in repr(it) [
+ _#h(-2pt) #it.text.split("NOTESTART").at(1).split("NOTEEND").at(0)_
]
}).join()
} else [
+ _#line.text.split("NOTESTART").at(1).split("NOTEEND").at(0)_
]
}
}
}
}
let footnotes = aux()
footnotes
}
}
#let display_pairings(pairings) = {
[== Suggestions d'accords]
emph(pairings)
}
#let recipe(
title: "",
author: "",
description: "",
image_path: "",
servings: 6,
prep_time: "",
bake_time: "",
difficulty: "normale",
ingredients: (),
cookware: [],
steps: [],
remarks: [],
pairings: [],
) = {
show heading.where(
level: 2
): it => text(
fill: primary_colour,
font: "IBM Plex Sans",
weight: 300,
size: 11pt,
grid(
columns: (auto, auto),
column-gutter: 5pt,
[#{upper(it.body)}],
[
#v(5pt)
#line(length: 100%, stroke: 0.4pt + primary_colour)
]
)
)
let footnotes_counter = counter(repr(title))
{
grid(
columns: (380pt, 100pt),
[
#text(fill: primary_colour, font: "Salzburg-Serial", size: 18pt, weight: 100, upper(title))
#h(3pt)
#text(fill: text_colour, font: "<NAME>", size: 20pt, author)
#v(0pt)
#emph(description)
],
[
#v(2pt)
#set align(right)
#if(prep_time != "") {
[_Préparation: #prep_time _]
}
#if(bake_time != "") {
[\ _Cuisson: #bake_time _]
}
/*
#if(difficulty != "") {
[\ _Difficulté: #difficulty _]
}
*/
],
)
grid(
columns: (90pt, 380pt),
column-gutter: 15pt,
[
#set list(marker: [], body-indent: 0pt)
#set align(right)
#text(fill: primary_colour, font: "IBM Plex Sans", weight: 300, size:11pt, upper([Ingrédients\ ]))
#emph([pour #servings personnes])
#display_ingredients(ingredients, footnotes_counter)
#if cookware != [] {
text(fill: primary_colour, font: "IBM Plex Sans", weight: 300, size:11pt, upper([Matériel\ ]))
display_cookware(cookware, footnotes_counter)
}
],
[
#display_steps(steps, footnotes_counter)
#if remarks != [] or footnotes_counter.display() != [1] {
[== Conseils du chef]
footnotes_counter.update(1)
emph(remarks)
display_footnotes(ingredients, cookware, steps, footnotes_counter)
}
#if pairings != [] {
display_pairings(pairings)
}
]
)
v(30pt)
}
}
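// Hedged usage sketch (not part of the original file): a cookbook document would
// typically wrap itself with #recipes and then call #recipe once per dish. All
// concrete values below are invented placeholders, not recipes from this repo.
//
// #show: doc => recipes("Carnet de recettes", doc)
//
// #recipe(
//   title: "Tarte aux pommes",
//   author: "A. Cuisinier",
//   description: [Une tarte toute simple.],
//   servings: 6,
//   prep_time: "30 min",
//   bake_time: "45 min",
//   ingredients: [
//     - 3 pommes
//     - 1 pâte brisée
//     - 50 g de sucre
//   ],
//   steps: [
//     + Préchauffer le four à 180 °C.
//     + Garnir la pâte de pommes et de sucre.
//     + Enfourner 45 minutes.
//   ],
// )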
|
|
https://github.com/kmitsutani/Hioki_I-5 | https://raw.githubusercontent.com/kmitsutani/Hioki_I-5/main/main.typ | typst | MIT No Attribution | // MIT No Attribution
// Copyright 2024 <NAME>
#import "jdocuments/jnote.typ": main, appendix, thebibliography
#import "@preview/physica:0.9.3"
#show: main.with(title: [日置『場の量子論』 I-5], authors: [三ツ谷和也])
= Boson と Fermion
考えているプロセスにおいて変化しない量子数のセットを共有している粒子群からなる系を考える.
この系の$N$粒子状態を$Phi^((N))$とすると,粒子$i$と粒子$j$の入れ替えは区別ができず
物理を変えないのでそのような入れ替えの結果量子状態が
$Phi^((N))_((i arrow.l.r j))$になったとするとこれは元の量子状態と位相しか違わず
$
Phi^((N))_((i arrow.l.r j)) = C_(i j) Phi^((N))
$
となる.ただしここに$C_(i j)$は$abs(C_(i j))^2=1$を満たす複素係数である.
$i arrow.l.r j$ の交換をもう一度 $Phi^((N))_((i arrow.l.r j))$ に施すと,
位相も含めて完全に元の状態に戻る. したがって$C_(i j)^2 = 1 arrow.double.l.r C_(i j) = plus.minus 1$となる.
$C_(i j) = 1$となる粒子種をBoson, $C_(i j) = -1$ となる粒子種を Fermion という.
= 多粒子の波動関数
粒子間の相互作用がない非摂動(あるいは摂動の零次)的状況を考えると,
$Phi^((N))(bold(r_1),dots,bold(r_N))$ は
$1$粒子の正規完全直交系$phi_alpha^((1))$の積の線形結合で
$
  Phi^((N))(bold(r_1),dots,bold(r_N)) =
  sum_({s_i}) C({s_i}) phi_(s_1)^((1))(bold(r_1))dots.c phi_(s_N)^((1))(bold(r_N))
$
とかける(独立粒子近似). ここで$s_i$は$i$番目の粒子を$alpha=1,2,dots$のどれに割り当てるかの組み合わせであり,
$C({s_i})$はそれぞれの割り当てに対する結合係数である.
ここで状態$phi_alpha$に粒子が$n_alpha$個ある状態を
$
physica.ket(n_1\,n_2\, dots\, n_alpha\, dots)
$
と書くものとする.
以下では簡単のためにすべてがボソンの場合を考える
#footnote[本テキストでは Fermion の取り扱いについて一切書かれていないが
それに関しては例えば砂川 @SunakawaQM の第 $6$ 章を見よ.]
.
= 多粒子系における行列要素に関する物理的考察
以下では$1$粒子状態の正規直交完全系を${physica.ket(alpha)}_(alpha=1,2,dots)$とする.
すなわち
$
physica.braket(alpha, beta) = delta_(alpha,beta),
sum_alpha physica.ketbra(alpha, alpha) = hat(cal(1))
$
本節では簡単のために二準位系${physica.ket(alpha)} = {physica.ket(1), physica.ket(2)}$
を考えオブザーバブルの行列要素を考える.
ここで独立粒子近似の範囲で扱える粒子間相互作用を含まないオブザーバブル$hat(F)$の行列要素を考える.
粒子間相互作用を含まないという条件から
$hat(F)$は$1$粒子状態に作用するオブザーバブル$hat(F)^((1))$を用いて
$
  hat(F) = &hat(F)^((1)) times.circle hat(1)_2 times.circle dots.c times.circle hat(1)_N \
&+ hat(1)_1 times.circle hat(F)^((1)) times.circle dots.c times.circle hat(1)_N \
&dots \
&+ hat(1)_1 times.circle dots.c times.circle hat(1)_(N-1) times.circle hat(F)^((1)) \
$ <decomposition_of_F>
と展開される.
また状態は
$
physica.ket(n_1\, n_2) = c_n (n_1, n_2) sum_P physica.ket(P(1)) physica.ket(P(2))
dots physica.ket(P(N))
$
ここで$P(i)$は$i$番目の粒子をどの状態に割り当てるかを表す関数である
(すなわち今の場合$forall i(P(i) in {1, 2})$).
$c_n(n_1, n_2)$は規格化因子であるがこれは
$
physica.braket(n_1\, n_2, n_1\, n_2) &= abs(c_n (n_1, n_2))^2 sum_(P,P^prime)
physica.braket(P(1), P^prime(1))times dots times physica.braket(P(N), P^prime(N)) \
&= abs(c_n (n_1, n_2))^2 sum_P = abs(c_n (n_1, n_2))^2 binom(N, n_1)
$
となるので,オーバーオールの位相を気にせず$c_n (n_1, n_2) in bb(R)$とすれば
$c_n (n_1, n_2) = sqrt((n_1 ! n_2 ! )\/ N !)$となる.
式 @decomposition_of_F から $hat(F)physica.ket(n_1\, n_2)$ には
$hat(F)^((1))physica.ket(P(i))$が繰り返し登場するのがわかるので$hat(F)^((1)) physica.ket(P(i))$
を以下に書き下すと
$
hat(F)^((1))physica.ket(P(i)) &= hat(bb(1))hat(F)^((1))physica.ket(P(i)) \
&= sum_alpha physica.ketbra(alpha, alpha)hat(F)^((1))physica.ket(P(i)) \
&= physica.bra(1) hat(F)^((1))physica.ket(P(i))physica.ket(1)
  + physica.bra(2) hat(F)^((1))physica.ket(P(i))physica.ket(2)
$
となる.この式を物理的に解釈すると$hat(F)^((1))$の作用によって 状態$physica.ket(P(i))$から$physica.ket(1)$および$physica.ket(2)$への遷移が起きていることになる.
したがって遷移後の状態の占有数を$(n_1^prime, n_2^prime)$とすると
$(n_1^prime, n_2^prime) in {(n_1 - 1, n_2 + 1), (n_1, n_2), (n_1 + 1 , n_2 - 1)}$となり,
この占有数変化に対応する行列要素だけが値を持つ.
= 多粒子系における行列要素の具体的計算
前節での考察から非零になる行列要素が絞り込まれたのでその一つ
$physica.bra(n_1 - 1\, n_2 + 1) hat(F) physica.ket(n_1\, n_2)$を計算する.
$
physica.bra(n_1 - 1\, n_2 + 1)hat(F)physica.ket(n_1\, n_2)
= c_n (n_1, n_2) physica.bra(n_1 - 1\, n_2 + 1)sum_P &
{
[hat(F)^((1))physica.ket(P(1))]physica.ket(P(2))dots physica.ket(P(N)) \
&+ physica.ket(P(1))[hat(F)^((1)) physica.ket(P(2))] dots physica.ket(P(N)) \
&dots \
&+ physica.ket(P(1)) dots physica.ket(P(N-1))[hat(F)^((1)) physica.ket(P(N))]
}
$
占有数に関する考察から$hat(F)^((1))physica.ket(P(i))$は実質的に
$delta_(P(i),1) f_(2, P(i)) physica.ket(2)$としてふるまう. ただしここで$hat(F)^((1))$の行列要素を
$physica.bra(i)F^((1))physica.ket(j) = f_(i,j)$と表記した.
したがってうえの行列要素は
$
physica.bra(n_1 - 1\, n_2 + 1)hat(F)physica.ket(n_1\, n_2)
= c_n (n_1, n_2) f_(2, 1) physica.bra(n_1-1\,n_2+1){
&sum_(P(1)=1)physica.ket(2)physica.ket(P(2))dots.c physica.ket(P(N))\
&+sum_(P(2)=1)physica.ket(P(1))physica.ket(2)dots.c physica.ket(P(N))\
&dots\
&+sum_(P(N)=1)physica.ket(1)dots physica.ket(P(N-1)) physica.ket(2)
}
$
と書ける. ここで $physica.bra(n_1-1\,n_2+1)$ も$1$粒子状態であらわに書くと
$
& physica.bra(n_1-1\,n_2+1)hat(F)physica.ket(n_1\,n_2)\
& =c_n (n_1 - 1, n_2 + 1) c_n (n_1, n_2)f_(1,2)sum_(i=1)^N sum_(P^prime, P\ P(i) = 1)
physica.braket(P^prime (1), P(1)) dots physica.braket(P^prime (i), 2)
dots physica.braket(P^prime (N), P(N))
$
この表式の和の部分は$i$に関して対称になっており和の対象はどの$i$でもおなじである.
そこで$i$を固定して考えると$P,P^prime$に関する和は
$P^prime (i) = 2, P(i) = 1$
という条件下で $forall j eq.not i (P^prime (j) = P(j) )$ を満たす$P,P^prime$
の組の数に等しい. これは$N-1$個の粒子を準位$physica.ket(1)$に$n_1-1$個, 準位$physica.ket(2)$に$n_2$個それぞれ割り当てる場合の数に等しい.
したがって
$
&physica.bra(n_1 - 1\,n_2 + 1)hat(F)physica.ket(n_1\,n_2) \
&= [((n_1-1)!(n_2+1)!)/N!]^(1\/2)[((n_1 ! n_2 !)/N!)]^(1\/2) N binom(N-1, n_1 - 1) f_(2,1)\
&= [((n_1-1)!(n_2+1)!)/N!]^(1\/2)[((n_1 ! n_2 !)/N!)]^(1\/2) [N! / ((n_1-1)! n_2 !)] f_(2,1)\
&= sqrt(n_1(n_2+1)) f_(2,1)
$
と計算できる($n_1 !\/(n_1-1)! = n_1$などに注意).
同様の計算により $physica.bra(n_1+1\,n_2-1)hat(F)physica.ket(n_1\,n_2)=sqrt(n_2(n_1+1))f_(1,2)$
== $mu$準位系への拡張
$1$粒子状態として${physica.ket(1),dots, physica.ket(mu)}$を考える.
この場合の占有数表示は$physica.ket(n_1\,dots\,n_mu)$というものになる.
いま二準位系の場合と同様に$1$粒子演算子を多粒子状態に作用するように細工した
$hat(F)$の行列要素
$physica.bra(n_1+d_1\,n_2+d_2\,dots\,n_mu +d_mu)
hat(F)
physica.ket(n_1\,n_2\,dots\,n_mu)$を考える. ここで$d_i$は$sum d_i = 0$を満たす整数の組である.
行列要素のどの部分が残るかについて考察するために 二準位系の場合と同様に$hat(F) physica.ket(n_1\,dots\,n_mu)$について考察することから始める.
そのために占有数表示された状態を$1$粒子状態で展開すると
$
hat(F) physica.ket(n_1\,n_2\,dots \,n_mu)=c_N(n_1, n_2, dots, n_mu)
hat(F) sum_(P)physica.ket(P(1)) physica.ket(P(2)) dots physica.ket(P(N))
$
となる.ここで$c_N(n_1, n_2, dots, n_mu) = sqrt((product n_alpha !)\/N!)$ は規格化因子.
この形から二準位系の時と同様に$hat(F)^((1)) physica.ket(P(i))$ が出てくるのがわかるのでこの因子
をあらわに書くと
$
hat(F)^((1))physica.ket(P(i))
= sum_(alpha=1)^mu physica.ketbra(alpha)hat(F)^((1))physica.ket(P(i))
= sum_(alpha=1)^mu f_(alpha, P(i)) physica.ket(alpha)
$
となり,$physica.ket((P(i)))arrow.r physica.ket(alpha) "for" alpha = 1dots mu$
という遷移に対応する行列要素のみが残ることがわかる.
したがって二準位系の場合と同様に行列要素が非零になるのは対角線とその隣のみである.
有限の値を持つ行列要素
$physica.bra(n_alpha - 1\, n_beta + 1)hat(F)physica.ket(n_alpha\, n_beta)$
も二準位系の場合とほぼ同様に以下のように計算できる.
$
physica.bra(n_alpha-1\,n_beta+1)sum_P &
{
[sum_(sigma=1)^mu f_(sigma, P(1))physica.ket(sigma)] physica.ket(2) dots physica.ket(P(N))\ &
+ physica.ket(P(1)) [sum_(sigma=1)^mu f_(sigma, P(2))physica.ket(sigma)] dots physica.ket(P(N))\ &
dots \ &
+ physica.ket(P(1)) dots physica.ket(P(N-1)) [sum_(sigma=1)^mu f_(sigma, P(N))physica.ket(sigma)]
}
$
今回の場合$hat(F)$の作用で引き起こされる遷移が$physica.ket(alpha) arrow.r physica.ket(beta)$
であると解釈できる要素のみが残るため$[~]$の部分は
$f_(beta, alpha) physica.ket(beta)$に置き換えられる.
$
physica.bra(n_alpha-1\,n_beta+1)sum_P &
{
f_(beta, alpha) physica.ket(beta) physica.ket(2) dots physica.ket(P(N))\ &
+ physica.ket(P(1)) f_(beta, alpha) physica.ket(beta) dots physica.ket(P(N))\ &
dots \ &
+ physica.ket(P(1)) dots physica.ket(P(N-1)) f_(beta, alpha) physica.ket(beta)
}
$
$physica.bra(n_alpha -1\, n_beta + 1)$も$1$粒子状態であらわに書き表すと
$
&physica.bra(n_alpha -1\,n_beta+1)hat(F)physica.ket(n_alpha\,n_beta) \
&=c_N(n_alpha - 1\,n_beta + 1)c_N(n_alpha, n_beta) f_(beta, alpha)
sum_i sum_(P, P^prime\ P(i)=alpha\ P^prime (i) =beta)
physica.braket(P^prime (1), P(1)) dots.c physica.braket(P^prime (N), P(N))
$
和に関して二準位系の場合と同じロジックで処理すると
$
physica.bra(n_alpha -1 \, n_beta + 1)hat(F)physica.ket(n_alpha\, n_beta)
= f_(beta, alpha) sqrt((n_beta+1)/n_alpha) [(product_sigma n_sigma !) / N!]
[N! / (product_sigma n_sigma !)] n_alpha = sqrt(n_alpha (n_beta+1)) f_(beta, alpha)
$
となる.
== 期待値の計算
演算子$hat(F)^((1))_i := hat(bb(1))_1
times.circle dots hat(bb(1))_(i-1) times.circle hat(F)^((1))
times.circle hat(bb(1))_(i+1) times.circle dots hat(bb(1))_N$
#let expval = physica.expectationvalue
#let ket = physica.ket
#let bra = physica.bra
#let braket = physica.braket
#let ketbra = physica.ketbra
を導入すると$hat(F)$の期待値は
$
expval(F) = c_N^2 sum_(P, P^prime) sum_(i=1)^N
bra(P^prime (1))bra(P^prime (2)) dots.c bra(P^prime (N))
hat(F)^((1))_i
ket(P^prime (1))ket(P^prime (2)) dots.c ket(P^prime (N))
$
と書ける. 占有数状態に関する対角和なので占有数分布を変えない項が残る.
したがって
$
[hat(F)^((1))physica.ket(P(i))]
arrow f_(P(i), P(i)) ket(P(i))
$
とすればよい.したがって
$
expval(F)
&= c_N^2 sum_i^N sum_(P, P^prime)
f_(P(i), P(i)) product_i delta_(P^prime (i), P(i)) \
&= c_N^2 sum_i^N sum_(sigma=1)^mu sum_(P, P^prime\ P(i) = sigma) f_(sigma, sigma) product delta_(P^prime (i), P(i))
$ <eq21>
となる.ここで$i$および$P,P^prime$に関する和を取り出した部分
$
sum_(i=1)^N sum_(P,P^prime\ P(i) = sigma)
f_(sigma, sigma) product delta_(P^prime (i), P(i)) equiv S_sigma
$
を見積もる.ある$i$に対して$sum_(P,P^prime)$ 以下は 「$P(i)$ を $sigma$ に固定する条件の元で $P(dot.c)$ と $P^prime (dot.c)$ が
写像として一致する場合の数に$f_(sigma, sigma)$を乗じたもの」であるので
$f_(sigma, sigma) binom(N-1, dots\, n_sigma - 1\, dots)$ となる.したがって
$
S_sigma
&= sum_(i=1)^N f_(sigma, sigma) (N-1)!/(product_alpha n_alpha !) n_sigma \
&= f_(sigma, sigma) N! / (product_alpha n_alpha !) n_sigma
$
ここで式 @eq21 に $S_sigma$ を代入すると
$
expval(F) &= (product_alpha n_alpha ! ) / N! N! / (product_alpha n_alpha ! )
sum_sigma f_(sigma, sigma) n_sigma
&= sum_sigma f_(sigma, sigma) n_sigma
$
となり$F$の期待値が求まる.
= Fock空間の導入
#let nket(n, diff: "")=$ket(n_1\,n_2\,dots\,n_#n #diff\,dots)$
#let nbra(n, diff: "")=$bra(n_1\,n_2\,dots\,n_#n #diff\,dots)$
#let nbraket(n)=$braket(
n^prime_1\,n^prime_2\,dots\,n^prime_#n\,dots, n_1\,n_2\,dots\,n_#n\,dots,
)$
$1$粒子ヒルベルト空間の正規直行基底を${ket(alpha)}_(alpha=1,2,dots)$とする.
そしてこの$1$粒子状態$ket(sigma)$を占有する粒子の数が$n_sigma$であるような
正規直交化された$N$粒子状態を
$
nket(sigma)
$
とかく.正規直交性は
$
nbraket(sigma) = product_sigma delta_(n_sigma^prime, n_sigma)
$
とあらわされる.この正規直交性より少し考えると完全性も従う.
これらのベクトルで張られるヒルベルト空間をFock空間といい上記の占有数表示されたベクトルによる
Fock空間の元の表示をFock表示という.
== 生成消滅演算子
この表示の下で$1$粒子状態$sigma$の消滅演算子を調和振動子の場合と同様に
$
hat(a)_sigma nket(sigma) = sqrt(n_sigma) nket(sigma, diff: -1)
$
と作用する演算子であると定義する.左から$nbra(sigma, diff: -1)$ を作用させると
$
nbra(sigma, diff: -1)hat(a)_sigma nket(sigma) = sqrt(n_sigma)
$
となる.これのエルミート共役をとると$sqrt(n_sigma)^dagger = sqrt(n_sigma)$なので
$
nbra(sigma) hat(a)^dagger_sigma nket(sigma, diff: -1) = sqrt(n_sigma)
$
となる.これは任意の$n_sigma$について成立するので$n_sigma$の値を一つずらしても成立する.
したがって
$
hat(a)^dagger_sigma nket(sigma) = sqrt(n_sigma+1) nket(sigma, diff: +1)
$
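なお,後の節(ハミルトニアンの第二量子化など)で用いる個数演算子 $hat(n)_sigma$ を
ここで補足しておく(原文にはない補足であり,標準的な定義を仮定している).
$
  hat(n)_sigma equiv hat(a)^dagger_sigma hat(a)_sigma, quad
  hat(n)_sigma nket(sigma)
  = sqrt(n_sigma) hat(a)^dagger_sigma nket(sigma, diff: -1)
  = n_sigma nket(sigma)
$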
== 生成消滅演算子の正準交換関係
#let nket2(s1, s2, d1: "", d2: "")=$ket(n_1\, n_2\, dots\, n_#s1 #d1\,dots n_#s2 #d2\,dots)$
以下では前節で導入した生成消滅演算子の正準交換関係について計算していく. まず $[hat(a)_sigma, hat(a)_rho]$ について考える.$sigma=rho$ の時は自明に零なので
$sigma eq.not rho$ の場合を検討する.
$
[hat(a)_sigma, hat(a)_rho]
&= (hat(a)_sigma hat(a)_rho - hat(a)_rho hat(a)_sigma) nket2(sigma, rho)\
&= hat(a)_sigma sqrt(n_rho) nket2(sigma, rho, d2: -1)
- hat(a)_rho sqrt(n_sigma) nket2(sigma, rho, d1: -1)\
&= (sqrt(n_sigma)sqrt(n_rho) - sqrt(n_rho)sqrt(n_sigma)) nket2(sigma, rho, d1: -1, d2: -1)\
&=0
$
生成演算子どうしの交換関係も同様の計算により$0$となる. 最後に$[hat(a)_sigma, hat(a)^dagger_rho]$を計算する.
$sigma eq.not rho$の場合と$sigma eq rho$の場合に分けて計算する. まず $sigma eq.not rho$ の場合は以下の通り
$
[hat(a)_sigma, hat(a)^dagger_rho] nket2(sigma, rho)
&= (hat(a)_sigma hat(a)^dagger_rho - hat(a)^dagger_rho hat(a)_sigma) nket2(sigma, rho)\
&= hat(a)_sigma sqrt(n_rho + 1) nket2(sigma, rho, d2: +1)
- hat(a)_rho^dagger sqrt(n_sigma) nket2(sigma, rho, d1: -1) \
&= sqrt(n_sigma (n_rho +1)) [nket2(sigma, rho, d1: -1, d2: +1)
- nket2(sigma, rho, d1: -1, d2: +1)] \
&=0
$
次に $sigma = rho$の場合を計算する.
$
[hat(a)_sigma, hat(a)^dagger_sigma] nket(sigma)
&= (hat(a)_sigma hat(a)_sigma^dagger -hat(a)^dagger_sigma hat(a)_sigma) nket(sigma)\
&= sqrt(n_sigma + 1) hat(a)_sigma nket(sigma, diff: +1)
- sqrt((n_sigma)) hat(a)^dagger_sigma nket(sigma, diff: -1) \
&= (sqrt(n_sigma+1)sqrt(n_sigma+1) - sqrt(n_sigma)sqrt(n_sigma))nket(sigma) \
&= nket(sigma)
$
となる.これらをまとめると
$
[hat(a)_sigma,hat(a)_rho] = [hat(a)^dagger_sigma, hat(a)^dagger_rho]
= 0 times hat(bb(1))_(cal(H)_"Fock")\
[hat(a)_sigma,hat(a)^dagger_rho] = delta_(sigma, rho) times hat(bb(1))_(cal(H)_"Fock")\
$ <CaCR>
となる.ここで$hat(bb(1))_(cal(H)_"Fock")$は今考えているFock空間における恒等変換である.
以下では誤解を招く可能性がない限り $hat(bb(1))_(cal(H)_"Fock")$ を明示的には書かない.
= 演算子の「第二量子化」
#let nbra2(s1, s2, d1: "", d2: "")=$bra(n_1\, n_2\, dots\, n_#s1 #d1\,dots n_#s2 #d2\,dots)$
少し前に導いた演算子$hat(F)$の行列要素の表式
$
  nbra2(sigma, rho, d1: -1, d2: +1) hat(F) nket2(sigma, rho)
  = sqrt(n_sigma (n_rho + 1)) bra(rho) hat(F)^((1)) ket(sigma)
$
生成演算子を$nket(mu)$に作用させたときに出る係数と比較すると
$
hat(F) = sum_(sigma,rho) hat(a)^dagger_rho bra(rho) F^((1)) ket(sigma) hat(a)_sigma
$
と置けばこれがもともとのの$hat(F)$の行列要素を再現することは容易に確認できる.
初め$hat(F)$を
$
hat(F) = &hat(F)^((1)) times.circle hat(1)_2 times.circle dots.c times.circle hat(1)_N \
&+ hat(1)_1 times.circle hat(F)^((1)) times.circle dots.c times.circle hat(1)_N \
&dots \
&+ hat(1)_1 times.circle dots.c times.circle hat(1)_(N-1) times.circle hat(F)^((1)) \
$
と定義したがこのような煩雑な記述をせずとも生成消滅演算子を用いれば各粒子に独立に
$1$粒子演算子を作用させる $cal(H)_"Fock"$ 上の演算子を簡潔に求めることができる.
一般に$1$粒子演算子 $hat(o):cal(H)_1 arrow cal(H)_1$ に対して
$
hat(cal(B))(hat(o)) equiv sum_(alpha, beta)
hat(a)^dagger_alpha
bra(alpha)hat(o)ket(beta)
hat(a)_beta
$
で定義される操作 $hat(cal(B))(hat(o))$ を $hat(o)$ の第二量子化という.
演算子の第二量子化でもちいる$1$粒子の正規完全直交系は正規完全直交系であればなんでもよい
#footnote[
これについては田崎 @TasakiFock に証明がみられる.
生成消滅演算子を一般の状態を系に生成したり消滅させたりするというものと定義するところが
証明の肝に見えるので普通の教科書では証明を見つけるのは難しいかもしれない.
]. そこで基底として$1$粒子ハミルトニアン$hat(H)^((1))$の固有状態を選び
$1$粒子ハミルトニアンの第二量子化を行うと,
$
hat(cal(B))(hat(H)^((1))) &= sum_(alpha, beta)
    hat(a)_alpha^dagger bra(alpha) hat(H)^((1)) ket(beta) hat(a)_beta \
&=sum_(alpha, beta) epsilon_beta delta_(alpha, beta) hat(a)_alpha^dagger hat(a)_beta \
&= sum_alpha epsilon_alpha hat(n)_alpha
$
となる(ここで $epsilon_alpha$ は準位$ket(alpha)$ のエネルギー固有値).
これはI-$4$節で導いたスカラー場のハミルトニアン(I-39)と同様の形となる.
= 場の「第二量子化」
古典論から量子論の基礎方程式を導く際に実験結果を見ながらの論理的跳躍が必要であったように,
粒子系の量子論から相対論的場の量子論に演繹的にたどり着けなくても何の問題もない.
筆者は第二量子化は場の理論黎明期の物理学者を刺激した足がかりに過ぎず
場の量子論の現代的な導入には不要であるという立場である.
しかしながら一応テキストをなぞる.
#let Schrodinger = [Schro\u{308}dinger]
== "#Schrodinger 方程式"の導出
$1$粒子波動関数$Phi (bold(x), t)$ を $t=0$で$1$粒子ハミルトニアン$H^((1))$
の固有状態 $phi_alpha (bold(x))$ で $Phi(bold(x), t=0) = sum_alpha a_alpha phi_alpha (bold(x))$
と展開する.ここで規格化条件より $sum_alpha abs(a_alpha)^2 = 1$ となる. 各 $phi_alpha$ は $hat(H)^((1))$ の下で $exp(-i epsilon_alpha t)$ で時間発展する.
これらを合わせると
$
Phi(bold(x), t) = sum_alpha a_alpha phi_alpha (bold(x)) e^(-i epsilon_alpha t)
$
となる(ここまでは波動関数として扱っている.そうでないと時間発展が合わない.).
ここで $a_alpha arrow hat(a)_alpha$ の読みかえを行って 波動関数 $Phi$ を場の演算子 $hat(Phi)$ に置き換える.すなわち
$
hat(Phi)(bold(x), t) = sum_alpha hat(a)_alpha phi_alpha (bold(x)) e^(-i epsilon_alpha t)
$
とする.そうすると規格化条件は$sum_alpha hat(n)_alpha = hat(bb(1))$ と
置き換えるのが妥当であろう.すると
$
nket(sigma) in {
ket(1\, 0\, dots\, 0\, dots),
ket(0\, 1\, dots\, 0\, dots), dots }
$
すなわち状態は$1$粒子ヒルベルト空間$cal(H)_1$ に住んでいることになる.
$
i partial / (partial t) hat(Phi) (bold(x), t)
= sum_alpha hat(a)_alpha phi_alpha (bold(x)) epsilon_alpha e^(-i epsilon_alpha t)
$
$
hat(H)^((1)) hat(Phi)(bold(x), t)
= sum_alpha hat(a)_alpha epsilon_alpha phi_alpha(bold(x)) e^(-i epsilon_alpha t)
$
なので
$
i partial / (partial t) hat(Phi)(bold(x), t) = hat(H)^((1)) hat(Phi)(bold(x), t)
$ <SecondaryQuantizedSchroedingerEq>
となる.ただし$1$粒子ハミルトニアンを作用させた時に
$H^((1)) phi_alpha(bold(x)) = epsilon_alpha phi_alpha(bold(x))$ および
$[hat(H)^((1)), hat(a)_alpha] = 0$ を用いた(後者が成立するかはよくわからない).
場の演算子 $hat(Phi)$ とそのエルミート共役 $hat(Phi)^dagger$
は以下の同時刻交換関係を満たす.
$
[hat(Phi)(bold(x), t), hat(Phi)^dagger (bold(x)^prime, t)]
&= sum_(alpha, beta) [hat(a)_alpha, hat(a)_beta^dagger]
phi_alpha (bold(x)) phi_beta^* (bold(x^prime))
exp(i (epsilon_beta - epsilon_alpha) t) \
&= sum_(alpha, beta) delta_(alpha, beta)
phi_alpha (bold(x)) phi_beta^* (bold(x^prime))
exp(i (epsilon_beta - epsilon_alpha) t) \
&= sum_alpha phi_alpha (bold(x)) phi_alpha^*(bold(x^prime)) \
&= delta(x - x^prime)
$
ここで$phi_alpha (bold(x))$の完全性を用いた#footnote[
ブラケット記法を経由するとこれは
$sum_alpha phi_alpha (bold(x)) phi_alpha^* (bold(x^prime))
= sum_alpha braket(bold(x)^prime, phi_alpha) braket(phi_alpha, bold(x))
=braket(bold(x)^prime, bold(x)) = delta(bold(x) - bold(x)^prime)$
と書ける. ブラケットを用いない場合完全性条件
$forall xi in cal(H)_1 (sum_alpha phi_alpha (phi_alpha, xi) = xi)$
に位置表示の内積 $(psi, xi) = integral psi(bold(x))^* xi(bold(x)) d^3x$
を代入して$ forall xi (integral d^3 bold(x)^prime
[sum_alpha phi_alpha (bold(x))phi_alpha^* (bold(x)^prime)]
xi(bold(x)^prime) ) = xi(bold(x))$ から
$sum_alpha phi_alpha (bold(x)) phi_alpha^* (bold(x)^prime) = delta(bold(x) - bold(x)^prime)$
]
. ここで一度古典場に戻ると式 @SecondaryQuantizedSchroedingerEq と同じ形の "#Schrodinger 方程式"
は ラグランジアン密度
$
cal(L)(bold(x), t)
= Phi^*(bold(x)) [i partial / (partial t) + 1/(2m) laplace - V(bold(x))] Phi(bold(x), t)
$
から導かれる.実際
$
(delta cal(L))/(delta dot(Phi)) &= i Phi^*(bold(x), t)\
(delta cal(L))/(delta nabla Phi) &= -(nabla Phi^*(bold(x), t))/(2m)\
(delta cal(L))/(delta Phi) &= -V(x) Phi^*(bold(x), t)
$
二行目を導くにあたってラグランジアン密度が積分の中に入っていることを利用して
部分積分により
$Phi^* laplace Phi arrow - nabla Phi^* dot.c nabla Phi$
とした#footnote[
何らかのベクトル解析の公式を用いたりしたのではなくラプラシアンの各成分を分けて書いたのち
各成分に対して部分積分を適用してまとめなおしたものである.
].これらより E-L 方程式は
$
  -V(bold(x)) Phi^* - i partial_t Phi^* + (nabla dot.c nabla Phi^*) / (2m) = 0 \
arrow.double.r.l i partial_t Phi(bold(x), t)= [- laplace / (2m) + V(bold(x))] Phi(bold(x), t)
$
となり#Schrodinger 方程式に一致する. このラグランジアン密度に対して$Phi$に共役な運動量密度$Pi$を導入すると
$ Pi(bold(x), t) = (delta cal(L))/(delta dot(Phi)) &= i Phi^*(bold(x), t) $
となる. もしこの共役運動量密度を上で導入した場の演算子$hat(Phi)$と同様に演算子化(量子化)すると
$hat(Pi)$ と $hat(Phi)$ の同時刻交換関係は
$
[hat(Phi)(bold(x),t), hat(Pi)(bold(x)^prime, t)]
= [hat(Phi)(bold(x), t), i Phi^dagger (bold(x), t)] = i delta(bold(x)-bold(x)^prime)
$
となりこれはI-4で登場したスカラー場の同時刻交換関係と一致する.
= 生成消滅演算子の交換関係と場の演算子の同時刻交換関係の同値性について
$hat(a)^((1))_alpha = hat(a)_alpha, hat(a)^((2))_beta = hat(a)^dagger_beta$ の表記法および,
完全反対称テンソル $epsilon.alt^(i,j)$ (
$epsilon.alt^(1,2) = - epsilon.alt^(2,1) = 1$
それ以外の要素は $0$) を導入すれば式 @CaCR
の形に(複数の等式で)書かれた生成消滅演算子の交換関係は
$
[hat(a)_alpha^((i)), hat(a)_beta^((j))] = epsilon.alt^(i,j) delta_(alpha, beta)
$
と一つの式で書ける.この式と同様に
$Phi^((1))(bold(x),t) = Phi(bold(x), t)$,
$Phi^((2))(bold(x), t) = Pi(bold(x), t) = i Phi^*(bold(x), t)$
と定義すれば先ほど導出した場の演算子の同時刻交換関係は
#let _x = $bold(x),t$
#let _y = $bold(x^prime),t$
$
[hat(Phi)^((i)) (#_x), hat(Phi)^((j)) (#_y)]
= i epsilon.alt^(i,j) delta(bold(x) - bold(x)^prime)
$
と書ける. これらの表式を複数の等式であらわされる正交換関係
や同時刻交換関係を簡潔に記述するために用いる.
少なくとも今回は実際の計算に用いても特に一般化のメリットは得られないので計算には用いない.
上記の同時刻交換関係には明示的に導出していない成分も含まれるが$Phi$どうしや$Pi$どうしの交換関係は
生成演算子同士や消滅演算子同士の交換関係に帰着されるので$0$になるのはほぼ自明であろう.
したがって前節の議論で
$[hat(a)_alpha^((i)), hat(a)_beta^((j))] = epsilon.alt^(i,j) delta_(alpha, beta)$
から $ [hat(Phi)^((i)) (#_x), hat(Phi)^((j)) (#_y)]
= i epsilon.alt^(i,j) delta(bold(x) - bold(x)^prime)$
が導出されることは示したといえる.
ここではその逆を示し両者が同値であることの証明を与える.まず $[hat(Phi)(#_x), hat(Phi)(#_y)] = 0$ の帰結を考える.
交換関係の $hat(Phi) (#_x)$ に $sum_alpha phi_alpha (bold(x)) exp(-i epsilon_alpha t)$ を代入すると
$
sum_(alpha, beta) [hat(a)_alpha, hat(a)_beta]
phi_alpha (bold(x)) phi_beta (bold(x)^prime) exp(-i(epsilon_alpha + epsilon_beta) t)
= 0
$
エネルギー固有値は非負なので$alpha != 0 or beta != 0$の時は$[hat(a)_alpha, hat(a)_beta] = 0$.
$alpha=beta=0$ の時は交換関係の性質より $0$. 結局 $forall alpha, beta ([hat(a)_alpha, hat(a)_beta] = 0)$
同様にして $forall alpha, beta ([hat(a)^dagger_alpha, hat(a)^dagger_beta]=0)$も導ける.
最後に $[hat(Phi) (#_x), hat(Pi) (#_y)] = i delta (bold(x) - bold(x)^prime)$
について考える. 同じように$Phi$および$Pi$を生成消滅演算子で展開すると.
$
sum_(alpha, beta) [hat(a)_alpha, hat(a)^dagger_beta]
phi_alpha (bold(x)) phi^*_beta (bold(x)^prime) exp(-i(epsilon_alpha - epsilon_beta) t)
= delta(bold(x) - bold(x)^prime)
$
右辺は時間依存性がないので左辺の時間依存性を打ち消す必要がある.
ここで「エネルギー固有値に縮退がない」という仮定を置くと
$alpha=beta$の項だけ拾えばその目的が達成される. そこで適当な係数 $C$ を用いて
$
[hat(a)_alpha, hat(a)_beta^dagger] = C delta_(alpha, beta)
$
であればよいということになる.この係数は
$
sum_(alpha,beta)[ ~ ] = C sum_alpha
phi_alpha (bold(x)^prime) phi^*_alpha (bold(x))
= C delta(bold(x) - bold(x)^prime)
$
となる.よって $C=1, [hat(a)_alpha, hat(a)^dagger_beta] = delta_(alpha, beta)$
が導かれた. 以上より エネルギー固有値に縮退がない場合に
$Phi$ と $Pi$ の間の同時刻交換関係と ${a_alpha}$ と ${a^dagger_beta}$
の間の正準交換関係が同値であることが示された.
= 「第二量子化」
電磁場の量子論のために近似モデルとして考案されたスカラー場の量子論は便利なものであった.
そのため多くの物理学者が粒子系にもこの理論を適用できないかと考えた .
とはいえ初めから場の量がある電磁場の系とは異なり粒子系には場の自由度は存在しないため
それは困難であるように見えた. 結局本節で見たような試行錯誤により
波動関数を演算子化し粒子系の基礎方程式を導くラグランジアン密度を用いて
場の演算子に共役な運動量場との間にスカラー場と同様の
同時刻交換関係を課せば通常の正準交換関係を課したのと
同等な物理を場の形式で扱えることがわかった
#footnote[このあたりの歴史はワインバーグ @WeinbergJa1 1章に詳しい.].
この手続きはあたかも量子論に移行して初めて現れる
波動関数をさらに量子化したように見えるので第二量子化と呼ばれている.
しかし交換関係を要請したのは一回であり量子化も一回である.
さらに要請する交換関係は同値なので物理としても同じである.
// Appendix
#show: thebibliography.with(bibliography("refs.yml"))
|
https://github.com/qujihan/typst-book-template | https://raw.githubusercontent.com/qujihan/typst-book-template/main/template/params.typ | typst | #let english-font = "Lora"
#let chinese-font = "Source Han Serif SC"
#let code-font = "CaskaydiaCove NF"
#let content-font = (english-font, chinese-font)
#let content-font-size = 10pt
#let content-color = black
#let line-color = luma(80)
#let code-line-color = rgb("#004cd9b3")
#let emph-color = rgb("#a7ec542d")
#let util-reference-color = rgb("#163bf5")
#let util-reference-block-color = util-reference-color.lighten(95%)
#let util-reference-line-color = util-reference-color.lighten(60%)
#let figure-kind-code = "figure-kind-code"
#let figure-kind-pic = "figure-kind-pic"
#let figure-kind-tbl = "figure-kind-tbl"
|
|
https://github.com/AxiomOfChoices/Typst | https://raw.githubusercontent.com/AxiomOfChoices/Typst/master/Miscellaneous/GeoTopo%202/aaron_race.typ | typst |
#import "/Templates/generic.typ": latex, header
#import "/Templates/question.typ": question_heading
#import "@preview/cetz:0.1.2"
#show: doc => header(title: "Assignment 1", name: "<NAME>", doc)
#show: latex
#let ve = $epsilon$
#let ip(..x) = $lr(angle.l #x.pos().join(",") angle.r)$
Let $1 <= i <= n + 1$; we define the following charts $(U_i, phi_i)$ on $CC P^n$, where
$
  U_i = {[z_1,...,z_(n+1)] in CC P^n | z_i eq.not 0}
$
for any element $z in U_i$ we divide by the nonzero coordinate $z_i$ and write it as
$
  z = [z_1 / z_i,...,1,...,z_(n+1) / z_i] = [
    x_1 + i y_1, ..., x_(i-1) + i y_(i-1), 1, x_i + i y_i, ..., x_n + i y_n
  ]
$
then we set
$
phi_i(z) = (x_1, y_1, ..., x_n, y_n) in RR^(2n)
$
|
|
https://github.com/CedricMeu/ugent-typst-template | https://raw.githubusercontent.com/CedricMeu/ugent-typst-template/main/0.1.0/template/main.typ | typst | #import "@local/ugent-thesis:0.1.0": thesis, acronyms, page-content, todo
#show: thesis.with(
title: [A UGent Master's Dissertation Created Using Typst],
authors: ("<NAME>",),
)
#acronyms(
acros: (
// define your acronyms here
"ML": "Machine Learning",
"AI": "Artificial Intelligence",
)
)
= List of figures etc.
#lorem(200)
#show: page-content.with()
// start your actual thesis contents here!
= Introduction
#todo[Write this thesis]
#lorem(2000)
= Continue
#lorem(200)
|
|
https://github.com/chrischriscris/Tareas-CI5651-EM2024 | https://raw.githubusercontent.com/chrischriscris/Tareas-CI5651-EM2024/main/tarea07/src/ej3/prefix-suffix.typ | typst | // Gets the largest substring T of the given string S such that T is both a
// prefix and a suffix of S, and T != S. If no such substring exists, returns
// a lambda character (representing the empty string).
#let prefix-suffix(S) = {
if S.len() == 0 {
return $lambda$;
}
// Build the table
let n = S.len();
let table = (0,) * n;
let i = 0;
let j = 1;
while j < n {
if S.at(i) != S.at(j) {
if i == 0 {
j += 1;
} else {
i = table.at(i - 1);
}
continue;
}
i += 1;
table.at(j) = i;
j += 1;
}
return if table.last() == 0 {
$lambda$
} else {
S.slice(0, table.last())
};
}
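// Hedged usage sketch (not part of the original file); the expected results below
// follow from the border/failure-function behaviour described in the comment above:
//
// #prefix-suffix("abcab")  // -> "ab"  (longest proper prefix that is also a suffix)
// #prefix-suffix("aaaa")   // -> "aaa"
// #prefix-suffix("abc")    // -> the lambda symbol (no such substring)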
|
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/026%20-%20Eldritch%20Moon/002_Stone%20and%20Blood.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Stone and Blood",
set_name: "Eldritch Moon",
story_date: datetime(day: 15, month: 06, year: 2016),
author: "<NAME>",
doc
)
#emph[Six thousand years before the events of the ] Eldritch Moon#emph[ story, ] three Planeswalkers collaborated#emph[ to trap the monstrous Eldrazi on the world of Zendikar. Nahiri, a kor of Zendikar, stood vigil over the prisoners. Ugin, called the Spirit Dragon, and the vampire Sorin Markov agreed to return if their aid was needed. But ] the Eldrazi nearly escaped#emph[ over a thousand years ago, and neither Sorin nor Ugin came. Sorin was Nahiri's friend, and his absence worried and puzzled her. After quelling the Eldrazi's escape attempts, Nahiri set out to find her friend. We know from ] Sorin's recollections#emph[ that their meeting did not go well. But there are two sides to every story...]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Reunion]
#emph[One thousand years ago]
Nahiri cast herself through the chaos of the Blind Eternities, the space between worlds. She'd slept for too long, in a cocoon of stone. Allowed certain things to drift beyond her awareness. She'd already corrected the most egregious case of neglect, reinforcing the wards that kept her prisoners secure and consigning their servants to oblivion. Her own world was safe, at least for the moment.
Now it was time to call on an old friend, and restore something less tangible.
It did not take long before Nahiri sensed his presence and aimed for it, warping the world around her until she could stand beside him. Their friendship was ancient now, a faded relic, but <NAME> had been her first ally, and Nahiri would know him anywhere.
She stood, then, on a high bluff overlooking a dark and choppy sea. She had never been here, but nothing about it surprised her. Innistrad and Sorin had shaped each other, and the world seemed to suit him—brooding and dangerous, almost purposefully unwelcome. And the moon—there was something odd about the moon that rose above the water, something that tugged at her senses.
Sorin had never brought her here, but he had spoken of it in wistful tones. Had hoped, she knew, to call upon her to defend it—as she had hoped to call upon him to defend Zendikar. Neither had gotten what they wanted, in the end.
Sorin was not here.
On the highest part of the bluff, where she had sensed his presence, stood instead a massive, rough-hewn chunk of silver, forty feet tall at least. It had faces, but they were irregular and uneven, as though an amateur lithomancer had pulled it out of the ground and not yet bothered to smooth it out to a finish.
#figure(image("002_Stone and Blood/01.jpg", width: 100%), caption: [Helvault | Art by <NAME>], supplement: none, numbering: none)
But it #emph[was] finished—unquestionably, to her senses, obviously the end result of tremendous effort rather than a work in progress. It was not polished because the polish did not matter to whatever it was this thing was supposed to be. Or do.
And this—this #emph[thing] —was what she had sensed. Not Sorin. The Thing had spoken to her, through the tenuous medium of the Blind Eternities, of him.
There was nothing on the bluff but the wind and the silver monolith, save a stunted tree with red leaves. She left the tree to its business and circled the tremendous chunk of silver.
Sides. It had eight of them, or perhaps seven, depending on how generous one felt as to the nature of an edge. But faces they were, deliberately shaped, almost like...But there were no hedrons on Innistrad, and Sorin had neither means nor cause to make them.
And like a hedron, the Thing was more than its physical substance. She tested it with her lithomancy, taking a sounding of the pure metal and trying to get a sense of its inner structure.
Nothing. Nothing at all. She could sense the grain of the bedrock half a mile below her feet, feel the slow and steady heartbeat of continental plates dancing their slow, inexorable waltz. But she couldn't see into this sliver of silver. Couldn't so much as scratch it. Her power vanished into it, like an infinite well. Almost like...but no. No again. It was #emph[not] a hedron. Not here.
She bent and peered beneath the Thing, half expecting to see that it was floating above the ground. But it was rooted at the bottom, by a comparatively thin stem of silver not much wider than Nahiri herself.
She stood and continued her slow circle of the Thing, trailing her fingertips along it in lieu of the deeper investigation she could not seem to make. She didn't know how much time she spent examining the silver monolith, but the moon was higher in the sky when a familiar voice spoke from behind her.
"You'll have to forgive my rudimentary attempt at shaping stone, young one."
She spun. Sorin!
White hair, black coat, those strange orange eyes. How terrible his aspect, how dire his gaze—and yet she could not keep herself from grinning.
"My friend!" she managed to say at last. "You're alive!"
He smiled back at her, walked toward her, and put his hand on her shoulder. From him, it qualified as elation.
#figure(image("002_Stone and Blood/02.jpg", width: 100%), caption: [<NAME> | Art by <NAME>], supplement: none, numbering: none)
"And why would I be otherwise?"
She reached up to cover his hand with hers. She was awake now, her body suffused with the warmth of life. His fingers were as cold and dead as ever.
"You never came," she said. "On Zendikar, when I activated the signal from the Eye of Ugin, you never responded. I feared that—"
Sorin withdrew his hand, frowning.
"The Eldrazi have broken free of their bonds?"
"They did, yes."
"Where is Ugin?" he asked.
"He didn't come either," she said, trying not to let bitterness reach her voice. "But I handled it. On my own. With all the strength I could muster, I managed to reseal the titans' prison."
It struck her, suddenly, that she was now far older than Sorin had been when they had met. In her memory he towered over her, her ancient mentor, a thousand years her senior. Now, what difference did a thousand years make? They were equals. At least.
"When the task was done, I came to find you. I had to know if you still lived. And here you are."
#emph[Here you are.] Her joy at seeing him faded. She had been worried, so worried—that something had happened to him, or that he, like her, had sunk into a millennia-long malaise. She had come here to find him, to save him—but he was not, evidently, in need of saving.
"So, where were you?" she asked. "Sorin, why didn't you respond to the signal?"
"It never reached me," he said.
"How is that possible?"
"Hmm," he said. Just #emph[Hmmm] , with little interest and no urgency whatsoever.
He reached past her and pressed a hand against the surface of the Thing.
"You have dedicated yourself to watch over the imprisoned Eldrazi, and it became clear to me that my plane was in dire need of its own protection, particularly in my absence. This Helvault is half of what I created to serve as such a protection."#linebreak
#emph[Helvault.] She shuddered. It was a vault. What could such a thing be meant to store?
"It's not inconceivable," he continued, sounding bored, "that your signal from the Eye was unable to break through the magic that protects this plane."
Sorin's own spellcraft had kept her from contacting him? She felt a sudden sense of vertigo, and picked her next words with care.
"Did you know at the time that that would happen?"
"It did not occur to me," he said. "Though I see now that it was a possibility."
Stone and sky!
Early in their association, before she understood what he was, and what she had now become, he had asked her if she wanted to learn to fight like him. She'd said yes—and then he'd tried to kill her.
Or so it had seemed to her at the time. She realized, not much later, that he'd been holding back, attacking her physically when he could have snuffed her with a thought. She held her ground, briefly, until his heavy two-handed sword had caught her upper arm in a glancing blow with a sickening crack, and pain overwhelmed her senses.
#emph[Well done] , he said, standing over her. #emph[You lasted almost six breaths. Yours, of course. Now get up.]
#emph[Get up?] she cried. #emph[You broke my arm!]
#emph[So fix it,] he said. He wasn't even looking at her.
#emph[Fix it?] Fix #emph[it? How in the hells—]
Only then had he finally explained to her that she was no longer mortal. That her body was a convenience, a projection of her will.
#emph[You should have told me that to begin with] , she said, holding back tears of anger.
#emph[Ah] , he said, in that bored but benevolent voice. #emph[It did not occur to me.]
He was using that voice now, talking down to her. But the girl he had mentored was long dead, buried in a tomb of stone. Only a Planeswalker remained. And a Planeswalker would not be condescended to.
"A possibility? You risked my plane, and more." She could not quite keep the hurt from her voice. "You abandoned me."
Sorin waved a pallid, dismissive hand.
"I was simply taking the appropriate precautions to protect #emph[my] plane. I hardly think—"
Oh, enough. More than enough.
"We had an agreement, you and I," she said.
He could not deny that. Five thousand years previous, Nahiri had agreed, reluctantly, to trap the Eldrazi on her own world of Zendikar. And for their part, the other two Planeswalkers who had helped her had given her a way to contact them if the Eldrazi ever threatened to break free.
#figure(image("002_Stone and Blood/03.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none)
For five thousand years, Nahiri stood watch over their monstrous prisoners. She had locked herself away in stone, watched decades and centuries drift past like clouds over the sun. Then the Eldrazi had tested the bonds of their prison and loosed their hideous spawn upon a world already changed by their presence, in ways she did not fully understand. Nahiri had awoken, broken free of her self-imposed isolation, and sounded the alarm.
No one had come. Not the dragon Ugin, whom she had never fully trusted, whose motives and origins were enigmas. And not Sorin—her mentor. Her #emph[friend] .
She had dealt with the crisis on her own, at great cost to her world, far more than it would have cost had her allies honored their agreement. She still hadn't surveyed the full extent of the damage the Eldrazi had done to her world and its people before she had quelled their efforts. But she had done it, and when it was done she had gone to look for him, fearing for his existence.
And found that he had done worse than ignore her cry for help. He had blocked it out, to protect his own world from outside influence.
He turned his back to her.
"Don't dismiss this," she said. "I was willing to jeopardize my home by luring the Eldrazi to it. I promised to chain myself to Zendikar to watch over them as their warden. I spent millennia with those monsters. Do you know what that's like? All you had to do was come when I needed you."
The ground began to shake, the bedrock below them vibrating in sympathy with her mounting rage. Of all the stone and metal nearby, only the silver Helvault seemed beyond her reach.
"Don't presume to own my actions, young one. I am obligated to nothing. I owe you nothing! When your Planeswalker spark first ignited, it was I who discovered you. I could have ended you there, but I spared you."
He turned back to her, orange eyes full of malice, face inches from hers.
"I took you under my wing, and molded you into what you are," he said. "If you find it necessary to pester someone, go find Ugin. I have no patience for it."
No patience. No #emph[patience] . Pain gave way to anger in a white-hot instant.
For five thousand #emph[years] , Nahiri had kept watch over the Eldrazi—not just for her plane, but for every plane. For Innistrad. And once, #emph[once] , in five thousand years, she had called on him to do nothing more than make good on a simple promise—a self-serving promise, one that he'd only made because it would keep his own world safe—and he had not. Just...not.
Her own patience was at an end, spent in endless vigil over the Eldrazi. She was done—done waiting, done pleading, and most of all done being treated like a child. If Sorin needed proof that she was no longer his student, she would have to provide it.
She called forth a column of rock from the depths beneath them—granite, old and strong. The earth heaved, and Sorin struggled to keep his feet. The stone column burst from the ground beneath her, carrying her high above him.
"I'm not going anywhere."
She pulled more rocks from the ground around her, sharpened into darts, swirling around the two Planeswalkers.
Sorin drew his sword.
"I never threatened you," he said, looking up at her. "Not once. If we are to be enemies, child, the blame falls solely at your own feet."
"I'm not a child," she said. "Whatever we were, surely you can see we're equals now."
There was a moment's hesitation—a hint of fear in those orange eyes, perhaps? A second spent weighing the possibility that she might be right, and his pride was in need of sharp correction?
"All I see is a tantrum," he said. "If you came to meet an equal, you should have come under truce, following the protocols for parley with a fellow Planeswalker."
"I came to meet a friend," said Nahiri.
"Then I see no grounds for complaint," said Sorin. "Friends deliver hard truths...do they not?"
Long ago, a foolish girl had called this wretched creature friend. As the last vestige of that youthful sentimentality boiled away, Nahiri struck.
She hurtled down toward Sorin astride a fist of rock. She had no sword. She had no need of one. The ground itself was her weapon.
Sorin released a blast of death magic that caught her full in the chest, knocking her back. The stone column lurched backward, to stay under her feet.
Sorin vaulted off the broken ground and leapt straight toward her, teeth bared, sword glinting in the light of that strange, looming moon. She jumped from the column and landed on the ground in a crouch. Sorin hit the stone column feet first, ready to spring off of it and attack her again—but the stone column simply swallowed him.
She stood, fists clenched, squeezing Sorin in the stone.
Cracks appeared, first one, then several, shining with light from the vampire's magic. The column flew apart in a spray of light and stone as Sorin forced his way out. He dropped gracefully to the ground.
But he looked pained.
"I don't want your enmity," said Nahiri. "All I ever wanted was your #emph[help] , Sorin. You made a promise. Come with me."
"Not now," said Sorin, with infuriating calm. "Later, perhaps. This is a critical time—"
"A critical time!" snapped Nahiri. "The Eldrazi almost escaped. You're thinking in terms of eons, but for all I know the Eldrazi are loose now. All that we worked for will be lost, your own plane will be in danger—don't you care about that?"
It hit her, then. The imprisonment of the Eldrazi had become her life's work, a constant effort that had kept her bound to her plane for almost her entire existence. But for him it had been an eyeblink—forty years of mild effort, five thousand years ago, in exchange for millennia of peace of mind. And now, with his new countermeasures, perhaps Innistrad #emph[wasn't] in danger. Perhaps Nahiri and Zendikar and a hundred million carefully placed hedrons had served their purpose, in the mind of <NAME>.
She snarled and sent a storm of stone darts flying at him, each the size of her forearm and sharpened to a wicked point.
Sorin blasted a few of the shards to dust before they reached him, sent several more spinning away with a sweep of his sword, and grunted as three more pierced his body.
His eyes flared white, too bright, and a great weight settled on Nahiri's shoulders, forcing her to her knees. Everything was so bright—
She looked up.
The moon. He had called down a beam of moonlight, heavy as a boulder but with no substance whatsoever, to bind her. And finally, surrounded by its light, breathing in the scent of it, she understood what was so strange about Innistrad's moon.
#figure(image("002_Stone and Blood/04.jpg", width: 100%), caption: [Paraselene | Art by <NAME>], supplement: none, numbering: none)
It was made of silver. Like the Helvault.
Sorin pulled the stone darts from his body one by one, the wounds closing bloodlessly. He stalked toward her. But his footsteps were uncertain, his sword drooping. Had he become so decrepit?
Still, his magic was strong. The light bound not only her body, but her magic. As long as it held, she was powerless to affect anything outside its radius.
"Go home, Nahiri," he said wearily. "End this farce, and I will allow you—"
She dug her hands into the soil, extending her will not out, but down, and dove into the earth itself.
She sank down into a womb of stone, and for a moment she left behind her anger and Sorin's damnable arrogance and that strange and unyielding chunk of silver whose purpose she still couldn't grasp. There was only her and the stone, cut off from everything save the slow and steady heartbeat of the world, like it had been for five thousand years.
She could planeswalk away, return to Zendikar and to isolation. She did not, in fact, need Sorin's help. Not anymore. But leaving things unresolved here would be dangerous beyond measure, inviting retaliation. She really would have made an enemy, then. And she wouldn't leave while there was still a chance of preventing that.
Sorin's restless footfalls echoed above her, stalking toward the Helvault.
She shaped the stone beneath her into another pillar, thinned the rock above her to the consistency of water, and burst forth from the ground once more. Sorin had dispelled the beam of moonlight, and now had his back to the Helvault for some measure of protection.
She rose on her granite pillar, towering above him, pulling a swarm of stones from the ground and arraying them around her.
She didn't want to kill him. She didn't really want to hurt him. What she wanted was for things to be right between them, the way they had been. But for that to happen, she would have to earn his respect. And to do that, she would have to beat him.
He was leaning on his sword, now. If they agreed to treat one another as equals, it seemed she would be doing him a favor.
It wasn't right. He was #emph[too] weak, weaker than he had been when she was young. She thought of how the Helvault had radiated his essence, and wondered just how much of himself he had poured into it.
She sent her pillar of stone gliding forward toward him. As she slid past one of the floating stones around her, she reached into it. It instantly heated, became molten, as the metals inside it coalesced in answer to her will.
She pulled a fully formed stoneforged sword out of the rock and kept advancing, until Sorin stood beneath her, gazing up at the sword's white-hot point.
"Sorin, you will fulfill your promise. You will return with me to Zendikar. You will help me check our containment measures, and ensure that the Eldrazi are secure. Only then can you slink away."
Sorin spat.
Then everything went bright again, brighter than the moon, and a shape came screaming out of the heavens. She glimpsed feathered wings and a shining spear before the figure slammed into her, knocking her off her pedestal. They tumbled together and slammed into the ground, where they carved a deep furrow in the earth. Nahiri's array of stones tumbled to the ground, her concentration shattered.
At last, lying on her back. Nahiri got a look at her attacker.
#figure(image("002_Stone and Blood/05.jpg", width: 100%), caption: [Avacyn, Angel of Hope | Art by <NAME>], supplement: none, numbering: none)
She was an angel, larger than life, with white hair and white skin and dark, expressionless eyes. Nahiri was being attacked by an angel.
Nahiri had met angels, on Zendikar. They were aloof, and could be fearsome, but they were protectors, creatures of justice and of good. And none of them that she'd ever met were stupid enough to attack a Planeswalker.
Before Nahiri could speak, before she could even fully process what was happening, the angel raised her spear. Its points shone like twin suns, blinding her.
She sank down once more into stone, felt the spear's points dig into the earth where she'd been lying.
No time for repose, this time. She burst from the ground in a spray of shrapnel, sword still in hand, and as the angel shielded herself from the blast of rock, Nahiri attacked. She swung the sword, which still glowed with the heat of its forging.
The angel raised her spear to deflect, just in time, and Nahiri attacked again, and again, and again, forcing the angel backward. She felt a dim sense of wrongness, fighting an angel. But no—the angel had attacked her, unprovoked. And why? To protect #emph[Sorin] ? She could scarcely entertain the thought.
The angel vaulted into the air—but not to retreat. She sprang forward, to get above Nahiri, to attack again. Nahiri rose again on a stone pillar, to force the angel either to flee or to return to the earth.
The angel settled again, but stood her ground. Nahiri kept up her assault. The angel was powerful, no question. But she was no Planeswalker. Nahiri struck again—
—and her sword stopped dead on Sorin's, thrust between her and the angel.
"Enough!" he panted. "Enough."
She stared past him, at the angel with the jet-black eyes. There was something familiar about the angel, something unsettling, though Nahiri was quite certain she'd never seen her before.
"What is this, Sorin? How did you bring an angel under your thrall? Who is she?"
"The other half," he replied.
His hand whipped out, lightning-fast, and closed around Nahiri's sword. His skin hissed and sizzled, but he didn't seem to notice. Nahiri's fingers were numb, her mind reeling. She still didn't understand. He lifted the point of his own sword to her throat, wrenched her blade out of her hands, tossed it away.
The angel landed softly behind Sorin, but he held up one hand, and she waited. An angel waited, for him!
"For what it's worth," said Sorin, "I never wanted this, young one."
Then Sorin raised his sword, lashed out with a beam of tarnished light, and #emph[pushed] .
Nahiri flew backward and slammed against the silver surface of the Helvault. It was no longer hard and cold, but yielding. Welcoming. #emph[Pulling] .
#figure(image("002_Stone and Blood/06.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none)
Strands of eager silver closed around her body, drawing her in. Shards of rock whirled through the air, the bedrock beneath their feet shifted at her rage, but the Helvault itself did not care.
"Damn you!" she screamed. "I #emph[trusted] you!"
He loomed over her, now, the angel's wings spread behind him, and he spoke one last time before molten silver flooded her ears. He sounded almost sad. Almost.
"I never asked for your trust, child. Only your obedience."
Then the Helvault claimed her, and she vanished into a darkness vast and total.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Repose]
#emph[Interlude]
Through darkness, she fell.
She knew no other sensation—no sound, no light. Not even a breath of wind, for within this place there was nothing, not even air. Nothing but her, and the endless sensation of a descent forever unfinished. She couldn't see her hand in front of her face—was not entirely sure that she had a body at all, in this place.
She extended her senses outward, pushed and pulled with her powers of lithomancy, trying to get some kind of grip on the silver exterior of the Helvault. But what was around her was not silver. It was nothing. She tried to planeswalk, but even the Blind Eternities, the chaotic no-place between planes, was beyond her reach.
It was not like her cocoon of stone back on Zendikar, the slab of rock where she had slumbered for five fitful millennia. In her cocoon, dreamlike, she could sense all of Zendikar, reach out to any part of it, appear wherever she wished.
This was much, much worse—only darkness, and falling, and the unmistakable scent of <NAME>.
Sorin would pay for his treachery. She would escape from this prison, and she would make him pay. She'd thought they were allies. Friends! Now she saw him for what he truly was: a monster, plain and simple.
A monster. But not a fool. He knew what was at stake, back on Zendikar. He could not be so confident in his defenses, in his Helvault and his enslaved angel, that he would simply allow the Eldrazi to escape. He would free her, when he'd recovered his strength and prepared to face her. Would ambush her and defeat her, and allow her to go home. He couldn't just leave her here. It was unthinkable.
But she had a lot of time to think.
At length, she came to a decision.
"That's enough," she said quietly.
There was no reply, no sound at all. Her words didn't echo, but faded away into the infinite blackness.
"That's enough!" she said, more loudly. "Whatever lesson you're trying to impart, I've learned it. End this, and I'll depart Innistrad and never return. Clearly there's nothing left for us to say to one another."
There was no answer. And she wasn't about to apologize, and she certainly wasn't going to beg. She wouldn't give him the satisfaction.
She thought of Zendikar often, of its jagged peaks and yawning skies. Of the cancer that ate at its heart, of vampires swarming its surface, building statues of gods more monstrous than they knew. She should never have left.
The isolation began to gnaw at the edges of her sanity. Even a Planeswalker, even one who had spent millennia in stone, was not meant for this kind of solitude. Even a Planeswalker could lose her mind—and for a Planeswalker, who #emph[was] a mind, the consequences would be horrific. She had met a mad Planeswalker, once. Once was more than enough. She would #emph[not] go mad.
At first it was the thought of vengeance that she held on to, of crushing Sorin for what he had done to her, for what might be happening on Zendikar even now. But there were only so many ways to imagine killing him, and even now the thought brought sorrow and weariness more than the cold satisfaction of revenge. Her hatred never wavered, but it crystallized and grew still.
Her memories of Zendikar became her lantern in the dark.
She knew her world down to its bones, and her memory of it was perfect. She thought of a place—of the trenches in Akoum that her tribe had wandered, before she had abandoned mortal life and sunk into the stone. In her mind, she built a model of those trenches, tracing out each layer of basalt, each shard of red volcanic glass that made up the regolith, every grain and fault within the bedrock.
#figure(image("002_Stone and Blood/07.jpg", width: 100%), caption: [Magma Rift | Art by Jung Park], supplement: none, numbering: none)
It was not Zendikar. It was Zendikar as she remembered it—after the Eldrazi, but before her slumber had allowed the world to drift out of control.
She worked her way outward through Akoum as time passed uncounted, remembering the grain of every sedimentary deposit, the temperature and viscosity of the magma that pulsed beneath the surface. She built downward, miles down, as deep as she'd ever dared go, until she had traced the edges of the tectonic plate that carried all of Akoum on its shoulders.
She held all of it in her mind, leaving parts of it unchanged for what seemed like years at a time, finding them again exactly as she had left them. Her mind was hers, and Zendikar was hers, and she would not let go of either of them.
It was impossible to say how long she'd been falling when her reverie was interrupted. She was no longer alone in the darkness. They were distant, at first—just a far-off keening, or the rustle of leathery wings. The soundlessness of her captivity had not been immutable, only the result of its emptiness.
Slowly, over countless years, the Helvault was populated. She understood, now, its purpose. Sorin would not tolerate threats to his precious Innistrad, and he had made this thing—this pit, this nowhere—to house them.
Threats, like demons, and horrors. And her. She spent a year or ten fuming, after that realization.
#emph[The other half] , he had said. She very much doubted that he was personally imprisoning all these demons. She came to understand the angel's purpose in all this—however Sorin had duped or suborned her.
Eventually she had recreated all of Akoum in her mental geography of Zendikar, from the massive peaks of the Teeth of Akoum to the still waters of Glasspool. The water that surrounded her remembered continent was a sketch, a hasty scribble, by comparison—she didn't truly understand how water moved, and so the waves that lapped at the red cliffs of Akoum only sloshed back and forth. She didn't focus on them, lest she break the illusion.
She only had to make a little seafloor before she could start on Ondu. She was especially looking forward to the islands of the Crown, with Valakut its blazing jewel. But she refused to do things out of order. She had all the time in the world.
#figure(image("002_Stone and Blood/08.jpg", width: 100%), caption: [Prairie Stream | Art by <NAME>aquette], supplement: none, numbering: none)
The others began to impact Nahiri, to glance off her in the endless darkness. She never saw them—that had not changed—but she heard them, shrieking, in the last instant before they struck. A talon here, a wing there, a momentary contact with a patch of nameless, inhuman flesh. And then gone, back into the darkness.
She marked time by those distractions, by those brief and senseless run-ins with the things that inhabited the dark. She did not hate them, even when their numbers swelled and their impacts against her not-quite-body grew more frequent, and more painful. She had no love for demons—had put down more than one to keep them from plaguing her world—but she did not hate them. Not here.
She pitied them. They were prisoners, like her, of <NAME> and his angelic enforcer. And unlike Nahiri, they would never have a chance for revenge. They were pathetic creatures, wailing and gibbering, mad or terrified or both—lesser minds, cracking under the strain of an eternity in the dark.
Nahiri was accustomed to isolation, and her mind was her own. In this blackness, it was all she had: her sanity, her anger, her memories of Zendikar...and a great deal of time.
She finished with Ondu, taking extra time on the sacred peak of Valakut. She'd spent years meditating in the volcano's caldera. Her Zendikar was her anchor, the thing that reminded her who she was and where she came from. She needed to get it right.
#figure(image("002_Stone and Blood/09.jpg", width: 100%), caption: [Valakut, the Molten Pinnacle | Art by <NAME>], supplement: none, numbering: none)
Sometimes she went back to that caldera, in her mind. But she couldn't content herself with just dwelling in her Zendikar. Not until it was finished.
Murasa was quick work, a great slab of stone rising out of the sea. The continent's forests were remarkable, but they did not interest her, and she made no attempt to recreate them. Bala Ged held her attention for a very long time, tracing the shifting contours of Bojuka Bay and the twisting network of caverns beneath the Guum Wilds.
After that it was on to Guul Draz—dull, for the top layer, but just as fascinating as Bala Ged below the surface. She was halfway through crafting the subterranean lava tubes that drove the continent's churning geothermal marshes when—at last, after untold years—something changed.
Light—a brief flash, blinding in the darkness, scattering her focus and, for a panicked moment, obscuring her Zendikar entirely. And then there was something with her, a presence more substantial than these wispy, wailing demons. #emph[Sorin?] she thought, for an instant—but no. Not him. Not...quite. Far below Nahiri, twin suns ignited, illuminating nothing, and she heard the faint rustle of feathers.
The angel—here? In her own prison? #emph[That] was interesting.
The lights drew closer, and now Nahiri could see—#emph[see] , for the first time in centuries. The angel's spear flashed, and she grunted with exertion as she swung it in wide arcs around her. Her wings were spread, uselessly, trying to push against nothing at all.
The demons swarmed the angel, shrieking and flapping. They'd left Nahiri alone all these years, glanced off her only incidentally. But they knew their jailer. They knew their only chance for vengeance.
The angel rose toward Nahiri—slowly, slowly, in this timeless void—until they were side by side. The cloud of demons had dissipated as Sorin's protector gained the upper hand. The angel looked over at Nahiri, and for a moment their eyes locked—and finally Nahiri understood. Sorin hadn't enslaved an angel. He hadn't tricked her or coerced her. This angel stank of Sorin, just like the Helvault.
He had #emph[made] her. Just like the Helvault.
The angel recognized her, from their long-ago fight. Dark eyes flashed with fury—fury Sorin had instilled. He had created her in his own image, twisted her from the beginning. Made her hateful. Made her #emph[his] . Nahiri shuddered.
Another being grievously wronged by <NAME>, one with no chance of vengeance or redress. No chance of freedom. A porcelain doll, to replace the student he had lost.
Nahiri couldn't say how long they fell like that, together, looking into one another's eyes. Speech seemed impossible, after all this time.
And then there was light, #emph[real] light, as the void around them cracked and shredded and finally...
she was...
#emph[out...]
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
#strong[Ruin]
#emph[One year ago]
Nahiri slammed into a surface on her hands and knees, her endless fall over at last. Her eyes rejected the notion of light, her ears assaulted by a blast of cacophonous noise. She focused her vision, and the blinding light resolved itself into shapes, the racket into voices, the rough surface beneath her into a tidy little cobblestone street. She raised her head. People were shouting and running everywhere, fires blazing, corpses—#emph[corpses?] —shambling, and above it all was Sorin's damned angel rising into the air in a beam of white light.
And all around her fell shards of silver.
#figure(image("002_Stone and Blood/10.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none)
Her hands felt strange. Feeling...felt strange. She looked at her palms. They were bloody. #emph[Bloody] . She willed the wounds closed, but nothing happened. Her body was no longer an extension of her sense of self. Once again, as it had been long ago, it was just...a body. Flesh and bone. She could feel the blood pumping through her veins, the heaving gasps forcing air into lungs that had needed none for millennia. The world spun around her.
She had to leave. Before he found her. If she could leave, if she was even still a Planeswalker.
She pushed against the walls of the world, experimentally, and tried to move in that unreal direction only Planeswalkers could sense. She felt the walls of the world around her—she #emph[was] still a Planeswalker, whatever had happened to her body—but as she probed them, those walls proved far firmer than she remembered. They had been a soap bubble; now they were a barrier that would take will and time to overcome. Was she so diminished?
But no. No. She pushed, the way she always had. The problem wasn't strength. The walls really were higher, thicker. The Blind Eternities were less connected to this place than they had been when she arrived. The shape of the universe had changed, while she fell. She could feel it.
She was still a Planeswalker. Whatever that meant.
With effort, she cast herself into the Blind Eternities. They tore at her, assailed her, just as they always had. Disoriented as she was, there was only one plane she could possibly have reached—the one he would expect her to run to, if he were looking. But there was no helping that.
Her feet found the rocky earth of Zendikar, and for the first time since her imprisonment began, she stood on solid ground. Zendikar, the real Zendikar. Home. She stood not far from where she had left, so long ago. In Akoum's jagged heart, near what should have been the Eye of Ugin.
But the Eye was a ruin, collapsed in on itself. Fields of rubble spread out beneath her and around her, hedrons and shards of red volcanic rock spinning lazily through the air. Its careful geometries, the meticulously placed hedron array that had surrounded it, and the chamber itself were simply...gone.
No. #emph[No.]
The three Eldrazi titans had escaped, while Zendikar's protector languished in Sorin Markov's pit. Everything she'd built here, everything she'd worked for, had come to ruin during her long imprisonment.
Nahiri clenched her bloodied fists. #emph[Where? ] Where were they? Maybe the Eldrazi had left Zendikar. Maybe her world was finally free of them.
She stretched out her senses through the stones around her until she felt a familiar tremor nearby, just the barest quivering—the light and agile footsteps of her fellow kor. She climbed a ridge to reach them, coaxing the stones to keep her upright so she could spare her bleeding hands. The wounds still would not close.
#figure(image("002_Stone and Blood/11.jpg", width: 100%), caption: [Cliffside Lookout | Art by <NAME>s], supplement: none, numbering: none)
A cry went up from a sentry, and Nahiri yelled hoarsely, her own voice a stranger to her. It was an answer-cry, a wordless signal that meant simply, #emph[I am kor] .
It was only a handful of breaths before a dozen weary-looking kor surrounded her.
"You're hurt," said one of them, a tall woman with a strange, puckered scar across her bare shoulder. The inflections were different, the rhythms strange, but they spoke the same language. The tall woman raised her hands, setting them aglow with healing magic. Nahiri nodded, and the other woman touched her palms, closing the angry scrapes opened on another world by cobblestones and moon-shards.
"I'm Tenri," said the woman, as Nahiri's wounds closed.
Nahiri did not respond, and tried to look absorbed in the healing process. She didn't know how much they remembered of her—or, more to the point, of the baleful Nahiri the Prophet, whose statue she had seen even before her time in the Helvault.
"You're alone," said the sentry, a man festooned with weapons and ropes. "With no gear."
"It's a long story," said Nahiri. "I'm...a hermit, I suppose. I've been shut away for a long time, and things have changed. What has happened to the world?"
They gaped at her.
"The Eldrazi and their works are everywhere," said the sentry. "Where have you been, that you do not know them?"
"<NAME>," said the tall woman. Tenri. "She has no gear because she's a stoneforger, and she's probably been in solitude honing her craft."
"Something like that," said Nahiri. She straightened the red armband that marked her as a stoneforge master, marveling that her people's traditions had survived so much upheaval and so long without guidance.
"Last year," said Tenri, "three enormous monstrosities rose out of the Teeth of Akoum. Apparently they've been slumbering below the surface for a very long time. Their spawn spread everywhere, but the three, the titans, were worse. Where they go...nothing remains."
"There are some," said Erem, "who believe that they are Kamsa, Talib, and Mangeni in the flesh."
Several of the kor spat. Nahiri knew only one of the names, Talib. She'd seen it carved beneath a statue of herself, calling her its prophet. During her long absence, and her longer reverie before it, half-remembered stories of the Eldrazi—stories that she, in many cases, had first told—had passed into legend. The monsters that lurked within Zendikar had become its gods.
Nahiri spat too.
"Nothing remains..." she echoed. "Where? Where have they been? What have we lost?"
"<NAME>," said Erem.
Nahiri waited for him to say more, to say which parts of Bala Ged were gone. He said nothing.
#emph[Bala Ged] . An entire continent...
"I need to see it for myself," said Nahiri.
Erem snorted. Bala Ged was a very long way from here. Tenri nodded.
"I can outfit you, before I go," said Nahiri. "It's the least I can do."
Erem shook his head.
"We've no shortage of gear," he said. "Not when we've lost so many."
"Gods go with you," said Tenri. "Whatever gods you can muster, these days."
Nahiri clasped the taller woman's shoulder.
"Thank you for your help," said Nahiri. "And I'm sorry I couldn't do more."
She sank into the stones beneath their feet, leaving behind her fellow kor—strangers to her, as like to her as she was to Sorin.
She felt the extent of the damage. The deep places of the world were riddled with new tunnels, crusted with some strange substance that confounded her senses. Everywhere she looked, there was devastation. Everywhere were signs of the Eldrazi, landscapes eroded in ways she couldn't even understand. And far away, across the world, in Bala Ged—
She concentrated—it took concentration, now—and shifted herself across her world, looking for the source of the wrongness. She felt woozy, sick. She should wait, and rest, and recover her strength.
She had had enough of waiting. She had to see what was happening. She appeared in Bala Ged, in what should have been lush jungle. What stretched before her was a seemingly endless expanse of chalky gray dust—more lifeless than any desert, like the surface of a moon.
#figure(image("002_Stone and Blood/12.jpg", width: 100%), caption: [Wastes | Art by <NAME>], supplement: none, numbering: none)
There was nothing like this on the Zendikar she still held in her head, the mental model she had painstakingly built over the years of her imprisonment. On her Zendikar, Bala Ged was living and wild. On this Zendikar, it was dead. Nothing lived here. Even the rocks were silent.
The ground trembled beneath her feet, and she could not feel the source of the tremors. The dust shivered.
She turned. There, on the horizon, vast and horrible, was a being she had seen twice before—once on a world lost to the Eldrazi, and once when she imprisoned it and its brethren on Zendikar. The Devourer. The one Ugin called #emph[Ulamog] .
#figure(image("002_Stone and Blood/13.jpg", width: 100%), caption: [Ulamog, the Ceaseless Hunger | Art by <NAME>], supplement: none, numbering: none)
Nahiri fell to her knees, pressing her hands into that lifeless dust.
If #emph[this] was loose on her world—
If what happened here could happen everywhere—
If she had no preparations, a thin shard of her old power, and a hedron network centuries out of true—
Then the Zendikar she knew was dead. There was no saving it. One might as well try to stop the sun in the sky. She closed her eyes and saw her Zendikar, Zendikar as it had been. The world she had let <NAME> destroy. Hot tears of rage ran down her face and landed in that awful dust with a hiss.
"As Zendikar has bled, so will Innistrad."
She opened her eyes and looked down at her hands, at hands that had shaped stone and trapped titans. They were covered in gray dust.
"As I have wept, so will Sorin."
She looked up at the thing on the horizon, watching it move across the landscape like a natural disaster.
"This I swear, on the ashes of my world."
Nahiri stood.
She had a great deal of work to do.
#figure(image("002_Stone and Blood/14.jpg", width: 100%), caption: [Nahiri, the Harbinger | Art by <NAME>], supplement: none, numbering: none)
|
|
https://github.com/Goldan32/brilliant-cv | https://raw.githubusercontent.com/Goldan32/brilliant-cv/main/modules/professional.typ | typst | Apache License 2.0 | #import "../brilliant-CV/template.typ": *
#cvSection("Professional Experience")
#cvEntry(
title: [Junior Embedded Software Engineer],
society: [Flex],
logo: "",
date: [2022 - Present],
location: [Budapest],
description: list(
[Implement features in an embedded project using Yocto],
[Create patches for external open-source tools],
[Assist on the software side of new hardware bringups],
[Suggested and implemented a firmware component version handler tool in C that can be reused across multiple projects]
),
tags: ("OpenBMC", "Yocto Project", "C", "C++", "BASH", "CMake")
)
#cvEntry(
title: [Software Developer Trainee],
society: [Flex],
logo: "",
date: [2021 - 2022],
location: [Budapest],
description: list(
[Assist the Firmware Team by writing low-level software features and unit tests],
[Implement more complex embedded software solutions and write documentation],
),
tags: ("C", "Aurix", "Python", "Makefile")
)
#cvEntry(
title: [Student Council Representative],
society: [Budapest University of Technology and Economics],
logo: "",
date: [2020 - 2021],
location: [Budapest],
description: list(
[Communicate with students via email and advise them about university policy],
[Represent student interests at various meetings],
)
) |
https://github.com/taooceros/CV | https://raw.githubusercontent.com/taooceros/CV/main/modules_en/skills.typ | typst | // Imports
#import "@preview/brilliant-cv:2.0.2": cvSection, cvSkill, hBar
#let metadata = toml("../metadata.toml")
#let cvSection = cvSection.with(metadata: metadata)
#cvSection("Skills")
#let cvSkill(type: "Type", info: "Info") = {
let skillTypeStyle(str) = {
align(left, text(size: 10pt, weight: "bold", str))
}
let skillInfoStyle(str) = {
text(str)
}
table(
columns: (16%, 1fr),
inset: 0pt,
column-gutter: 10pt,
stroke: none,
skillTypeStyle(type), skillInfoStyle(info),
)
v(-6pt)
}
#cvSkill(
type: [Programming],
info: [Rust | C | C++ | C\# | Java | Julia | R | Python | Matlab | Latex | Typst],
)
#cvSkill(
type: [Frameworks],
info: [RDMA | UCX | WPF | ASP.NET | Blazor | JuMP | Tidyverse],
)
#cvSkill(
type: [Languages],
info: [English | Chinese],
) |
|
https://github.com/Quaternijkon/QUAD | https://raw.githubusercontent.com/Quaternijkon/QUAD/main/lib.typ | typst | #let dark_theme=false
#let (
theme-primary,
theme-secondary,
theme-tertiary,
theme-error,
theme-background,
theme-outline,
) = if dark_theme {
(
rgb("#ADC6FF"),
rgb("#9CD49F"),
rgb("#EAC16C"),
rgb("#FFB4A9"),
rgb("#111318"),
rgb("#8E9099"),
)
} else {
(
rgb("#445E91"),
rgb("#36693E"),
rgb("#785A0B"),
rgb("#904A41"),
rgb("#F9F9FF"),
rgb("#74777F"),
)
}
#let (
primary,
secondary,
tertiary,
error,
)=(
rgb("#4285F4"),
rgb("#34A853"),
rgb("#FBBC05"),
rgb("#EA4335"),
)
// #let dye(text)={
// let colors = [
// #theme-primary,
// #theme-secondary,
// #theme-tertiary,
// #theme-error,
// ]
// for i in range(11) {
// let color = colors[i % 4];
// let char = text[i];
// // Wrap each character in the specified color
// set text(color: color)
// [#char]
// }
// }
// #let XOR(a, b)={
// (a or b) and not (a and b)
// }
// #let hash(seed, index)={
// XOR(index*31+seed*17, index*8) and 0x3
// }
//
#let _primary(_text)={
set text(fill: primary)
[#_text]
}
#let _secondary(_text)={
set text(fill: secondary)
[#_text]
}
#let _tertiary(_text)={
set text(fill: tertiary)
[#_text]
}
#let _error(_text)={
set text(fill: error)
[#_text]
} |
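// Usage note: #_primary[...] (and _secondary / _tertiary / _error) simply wrap the given content in the corresponding accent color defined above.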
|
https://github.com/YabusameHoulen/ncu_slide | https://raw.githubusercontent.com/YabusameHoulen/ncu_slide/master/README.md | markdown | ncu_slide, modefied from typst polylux's clean theme
|
|
https://github.com/francescoo22/masters-thesis | https://raw.githubusercontent.com/francescoo22/masters-thesis/main/vars/rules/relations.typ | typst | #import "../../config/proof-tree.typ": *
#import "../../config/utils.typ": *
// ************** Annotations Relations **************
#let A-id = prooftree(
axiom($$),
rule(label: "Rel-Id", $alpha beta rel alpha beta$),
)
#let A-trans = prooftree(
axiom($alpha beta rel alpha' beta'$),
axiom($alpha' beta' rel alpha'' beta''$),
rule(n:2, label: "Rel-Trans", $alpha beta rel alpha'' beta''$),
)
#let A-bor-sh = prooftree(
axiom($$),
rule(label: $"Rel-Shared-"borrowed$, $shared borrowed rel top$),
)
#let A-sh = prooftree(
axiom($$),
rule(label: "Rel-Shared", $shared rel shared borrowed$),
)
#let A-bor-un = prooftree(
axiom($$),
rule(label: $"Rel-Unique-"borrowed$, $unique borrowed rel shared borrowed$),
)
#let A-un-1 = prooftree(
axiom($$),
rule(label: "Rel-Unique-1", $unique rel shared$),
)
#let A-un-2 = prooftree(
axiom($$),
rule(label: "Rel-Unique-2", $unique rel unique borrowed$),
)
// ************** Parameters Passing **************
#let Pass-Bor = prooftree(
axiom($alpha beta rel alpha' borrowed$),
rule(label: $"Pass-"borrowed$, $alpha beta ~> alpha' borrowed ~> alpha beta$)
)
#let Pass-Un = prooftree(
axiom($$),
rule(label: "Pass-Unique", $unique ~> unique ~> top$)
)
#let Pass-Sh = prooftree(
axiom($alpha rel shared$),
rule(label: "Pass-Shared", $alpha ~> shared ~> shared$)
) |
|
https://github.com/florianhartung/studienarbeit | https://raw.githubusercontent.com/florianhartung/studienarbeit/main/work/dhbw_template/titlepage.typ | typst | #let format_date(date) = date.display("[day].[month].[year]")
#let key_value_table(..key_value_pairs) = [
#let cells = key_value_pairs.pos().map(elem => ([#elem.at(0):#h(5mm)], elem.at(1))).flatten()
#grid(columns: (auto, auto), gutter: 1.25em, ..cells)
]
#let titlepage(
title, // contents
author, // string
course, // contents
submissiondate, // datetime
workperiod_from, // datetime
workperiod_until, // datetime
matr_num, // number
supervisor, // string
) = [
#v(7mm)
#align(right)[#image("dhbw_icon.png", height: 2.2cm)]
#set align(center)
#v(15mm)
*THEMA STUDIENARBEIT*
#v(8mm)
#text(size: 20pt)[
*#title*
]
#v(20mm)
im Studiengang
#v(5mm)
#course
#v(5mm)
an der _Duale Hochschule Baden-Württemberg Mannheim_
#v(10mm)
von
#v(5mm)
#set align(left)
#key_value_table(
([Name, Vorname], [#author]),
([Abgabedatum], format_date(submissiondate)),
)
#v(30mm)
#key_value_table(
(
[Bearbeitungszeitraum],
[#format_date(workperiod_from) - #format_date(workperiod_until)],
),
([Matrikelnummer, Kurs], [#matr_num, #course]),
([<NAME>*in der Dualen Hochschule], [#supervisor]),
)
] |
|
https://github.com/x14ngch3n/CV | https://raw.githubusercontent.com/x14ngch3n/CV/main/fonts/fontawesome.typ | typst | // Rip-off from https://github.com/matchy233/typst-chi-cv-template/blob/main/fontawesome.typ
#let fa(name) = {
text(
font: "Font Awesome 6 Brands",
size: 10pt,
color.navy,
box[ #name ]
)
}
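// Usage note: e.g. `#fa(github)` (with the symbols defined below) renders the glyph in navy with the brands font.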
#let envelope = symbol("\u{f0e0}")
#let globe = symbol("\u{f0ac}")
#let github = symbol("\u{f09b}")
#let linkedin = symbol("\u{f08c}")
#let orcid = symbol("\u{f8d2}")
#let pdf = symbol("\u{f1c1}")
#let code = symbol("\u{f5fc}") |
|
https://github.com/RaphGL/ElectronicsFromBasics | https://raw.githubusercontent.com/RaphGL/ElectronicsFromBasics/main/DC/chap3/1_importance_of_electrical_safety.typ | typst | Other | === The importance of electrical safety
With this lesson, I hope to avoid a common mistake found in electronics textbooks of either ignoring or not covering with sufficient detail the subject of electrical safety.
I assume that whoever reads this book has at least a passing interest in actually working with electricity, and as such the topic of safety is of paramount importance.
Those authors, editors, and publishers who fail to incorporate this subject into their introductory texts are depriving the reader of life-saving information.
As an instructor of industrial electronics, I spend a full week with my students reviewing the theoretical and practical aspects of electrical safety.
The same textbooks I found lacking in technical clarity I also found lacking in coverage of electrical safety, hence the creation of this chapter.
Its placement after the first two chapters is intentional: in order for the concepts of electrical safety to make the most sense, some foundational knowledge of electricity is necessary.
Another benefit of including a detailed lesson on electrical safety is the practical context it sets for basic concepts of voltage, current, resistance, and circuit design.
The more relevant a technical topic can be made, the more likely a student will be to pay attention and comprehend.
And what could be more relevant than application to your own personal safety? Also, with electrical power being such an everyday presence in modern life,
almost anyone can relate to the illustrations given in such a lesson.
Have you ever wondered why birds don't get shocked while resting on power lines? Read on and find out!
|
https://github.com/maucejo/elsearticle | https://raw.githubusercontent.com/maucejo/elsearticle/main/src/_environment.typ | typst | MIT License | #import "_globals.typ": *
// Appendix
#let appendix(body) = {
set heading(numbering: "A.1.")
// Reset heading counter
counter(heading).update(0)
// Equation numbering
let numbering-eq = n => {
let h1 = counter(heading).get().first()
numbering("(A.1)", h1, n)
}
set math.equation(numbering: numbering-eq)
// Figure and Table numbering
let numbering-fig = n => {
let h1 = counter(heading).get().first()
numbering("A.1", h1, n)
}
show figure.where(kind: image): set figure(numbering: numbering-fig)
show figure.where(kind: table): set figure(numbering: numbering-fig)
isappendix.update(true)
body
} |
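// Typical usage (assumed): `#show: appendix` after the main matter, so that headings, equations, figures and tables switch to A.1-style numbering.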
https://github.com/dismint/docmint | https://raw.githubusercontent.com/dismint/docmint/main/multicore/pset5.typ | typst | #import "template.typ": *
#show: template.with(
title: "6.5081 PSET 5",
subtitle: "<NAME>",
pset: true
)
= Problem 2
Even with this new change, `add()` is still linearizable. We can set the linearization point to when `pred` is locked.
For an `add()` operation, this is completely fine, as we are only concerned with editing `pred`, which is locked by our new implementation. The concern might be that `curr` could be modified, but this is not an issue, since `pred` will be linked to the new node and the new node will be linked to `curr` regardless. Nothing in `add()` can impact the stability of `pred` or `curr`, even without holding a lock on `curr`.
`delete()` seems to pose a more challenging threat to the linearizability of this new implementation. The question is whether it is possible for `curr` to be deleted, ruining our chain structure. Obviously `pred` is locked, so no one else can touch it, meaning it is not a point of concern. Recalling the implementation of `delete()` in the fine-grained algorithm, `pred` is always locked before `curr` is deleted, meaning that a concurrent `delete()` cannot interfere with the critical operation of `add()`.
The last function `contains()` is not a point of concern as `add()` maintains the same functionality, and `contains()` does not modify the linked list in any way.
When the function fails there is nothing we have to worry about in this new implementation that the old one does not cover.
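For concreteness, here is a rough sketch of what an `add()` that holds only `pred`'s lock at the moment of insertion could look like. This is illustrative only: the `FineNode` class and helper names are mine rather than the assignment's code, and sentinel head/tail nodes with `Integer.MIN_VALUE` / `Integer.MAX_VALUE` keys are assumed so the traversal terminates.

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative node type (not the pset's class).
class FineNode {
    final int key;
    volatile FineNode next;
    private final ReentrantLock mutex = new ReentrantLock();
    FineNode(int key) { this.key = key; }
    void lock() { mutex.lock(); }
    void unlock() { mutex.unlock(); }
}

class PredOnlyAdd {
    // add() that never locks curr: splicing happens while only pred is held.
    static boolean add(FineNode head, int key) {
        head.lock();
        FineNode pred = head;
        try {
            FineNode curr = pred.next;
            while (curr.key < key) {
                curr.lock();          // hand-over-hand while advancing pred
                pred.unlock();
                pred = curr;
                curr = curr.next;
            }
            if (curr.key == key) {
                return false;         // key already present
            }
            FineNode node = new FineNode(key);
            node.next = curr;         // curr needs no lock: remove() must lock
            pred.next = node;         //   pred first, and we are holding pred
            return true;              // linearization point: pred is locked
        } finally {
            pred.unlock();
        }
    }
}
```

The splice in the last two assignments is exactly the step the argument above relies on: as long as `pred` stays locked, no `delete()` can unlink `curr` out from under us.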
= Problem 3
In this new implementation, `contains()` would not be linearizable. Suppose we have the following scenario:
+ We have the list `a => b => c`
+ We call `remove(b)`, which manages to set the node as marked but nothing else.
+ We call `contains(b)`, which must return `true` by nature of the implementation.
Thus, there is no case where our abstract representation of the list is linearizable. The `contains()` function in this case will always return `true` without being able to distinguish that the abstract representation says the return should be `false`. There is no point at which it can be considered `true`, because the entire lifespan of the function `contains()` has `b` existing as marked.
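To make that history concrete, here is roughly what a `contains()` that never consults the mark bit looks like. I am assuming this is the shape of the modified implementation; `LazyNode` with a tail sentinel is an illustrative stand-in for the problem's node class.

```java
// Illustrative stand-in for the pset's node class.
class LazyNode {
    final int key;
    volatile LazyNode next;
    volatile boolean marked;   // set by remove() for logical deletion
    LazyNode(int key) { this.key = key; }
}

class IgnoreMarkContains {
    // Wait-free traversal by key that ignores the marked flag entirely,
    // so a logically removed (marked) node still reports as present.
    static boolean contains(LazyNode head, int key) {
        LazyNode curr = head;
        while (curr.key < key) {
            curr = curr.next;
        }
        return curr.key == key;   // the standard version would also require !curr.marked
    }
}
```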
= Problem 4
== a
This new implementation *does not* work. Consider the following execution, in which it ceases to function properly.
+ On line `6` we get `succ = curr.next`
+ Before line `7`, some other thread changes `curr.next`, since there is no lock around it.
+ Because there is no way to check for this with the new function, line `7` will mark the node, even though it might be a new node that was inserted and is not the node we wanted to delete. Thus, in this case our function utterly fails, as the check is discarded.
== b
This implementation *does not* work. However, I encourage the grader to read fully, as I believe the textbook may incorrectly assume the answer to be that it *does* work.
I do not think it works. This is because it provides almost the same functionality as `compareAndSet`, but misses a crucial feature. If `curr.next` has changed, then the update will fail (meaning it returns false according to the online API), which is the same behavior that should be expected from the original.
If `curr.next` is what we expect, and `mark = false`, then it will set the value to `true` and move into the if statement, which is the desired behavior. However, there is an edge case that needs to be covered. What if `curr.next` is what we expect, but `mark = true`? Even in this case, our function will set the mark as `true` (again), and proceed into the if statement.
Obviously, if something has been marked, then a remove should not succeed (unless we have duplicates, but we don't consider that). Imagine a scenario where sometime after we run line `5`, but before running any lines after it, another `remove` successfully finishes its call and removes the same node we wanted to remove (once again assuming no duplicates). In this case, our new function goes wrong, as it cannot tell that the node is already marked. `remove` itself doesn't physically remove the node, so there is no way to check whether the node should still be processed. Thus we will report the node as removed successfully, even though it was not (since someone else did it first), while re-setting the mark uselessly. It does not matter that only one of the inner `compareAndSet` calls can complete; what matters is that both calls of `remove` in this case are going to return `true`, which is impossible, as this element can only be deleted once.
I believe the textbook is incorrect and has a typo:
"Otherwise, `remove()` calls `attemptMark()` to mark `currA` as logically removed (Line 27). This call succeeds only if no other thread has set the mark first."
I do not think this quote is true. This is because the `attemptMark` API, as described by the Java docs as well as the PSET, sets the mark regardless of its old value, conditional only on the reference. This means that the mark will be set even if another thread has already set it, and thus the quote is incorrect. The textbook seems to use `attemptMark` as a shorthand for `compareAndSet`, which is not the case.
Thus, this implementation is also not correct, and there are many executions in which it fails.
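Since this argument hinges entirely on the semantics of `AtomicMarkableReference`, here is a small stand-alone demo of the difference; this only exercises the standard `java.util.concurrent.atomic` API and is not part of the pset's list code.

```java
import java.util.concurrent.atomic.AtomicMarkableReference;

public class MarkDemo {
    public static void main(String[] args) {
        String succ = "succ";
        AtomicMarkableReference<String> next =
            new AtomicMarkableReference<>(succ, false);

        // Some other thread has already logically removed the node:
        next.attemptMark(succ, true);

        // compareAndSet checks BOTH the expected reference and the expected
        // mark, so a second logical removal correctly fails ...
        boolean cas = next.compareAndSet(succ, succ, false, true);
        System.out.println("compareAndSet after mark: " + cas);  // false

        // ... whereas attemptMark only checks the reference, so it "succeeds"
        // again even though the node was already marked.
        boolean mark = next.attemptMark(succ, true);
        System.out.println("attemptMark after mark: " + mark);   // true
    }
}
```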
= Report / Problems 5-8
== High Level
I chose to implement the first variant suggested: the lock-based, closed-address hash table. Even with this one implementation, there were many interesting observations and challenges that I faced. I will first go over some design choices, show any relevant code, and then discuss some findings as well as display some figures.
Overall, my implementation followed the suggestions very closely, and there is no large feature that is surprising about my code. One thing that I did confirm about Java is that object variables hold references that are passed by value, so handing a worker a queue object (for example through a `public` field or a constructor argument) means both sides operate on the same underlying queue. This fact was used in order to pass around shared state such as the Lamport queues.
I'm also inclined to believe that there may be a small error in the implementation, for reasons that will be covered in the last section. However, the overall functionality is stellar and yields an acceptable `ParallelHashTable`.
== Code
The first major piece of code is the actual `ParallelHashTable` class. This class uses the suggested locks as well as the serial list that was given.
=== *`ParallelHashTable`*
```java
class ParallelHashTable<T> implements HashTable<T> {
private SerialList<T,Integer>[] table;
private int nLocks;
private int maskShift;
private final int maxBucketSize;
// add a new field to store the readWriteLocks
private ReentrantReadWriteLock[] locks;
// constructor
public ParallelHashTable(int maxBucketSize, int nThreads) {
this.maxBucketSize = maxBucketSize;
this.nLocks = 1;
this.maskShift = 0;
while (nThreads > nLocks) {
this.nLocks *= 2;
this.maskShift += 1;
}
// set initial size to be the number of locks
this.table = new SerialList[nLocks];
// make locks
this.locks = new ReentrantReadWriteLock[nLocks];
for (int i = 0; i < nLocks; i++) {
this.locks[i] = new ReentrantReadWriteLock();
}
}
public void resizeIfNecessary(int key) {
// refactor to look a bit nicer
while (true) {
int index = key & ((1 << maskShift) - 1);
if (table[index] == null) {
break;
} else if (table[index].getSize() < maxBucketSize) {
break;
} else {
for (int i = 0; i < this.nLocks; i++) {
this.locks[i].writeLock().lock();
}
resize();
for (int i = 0; i < this.nLocks; i++) {
this.locks[i].writeLock().unlock();
}
}
}
}
private void addNoCheck(int key, T x) {
int index = key & ((1 << maskShift) - 1);
int lockIndex = index & (this.nLocks - 1);
this.locks[lockIndex].writeLock().lock();
if (table[index] == null) {
table[index] = new SerialList<T,Integer>(key,x);
}
else {
table[index].addNoCheck(key,x);
}
this.locks[lockIndex].writeLock().unlock();
}
public void add(int key, T x) {
resizeIfNecessary(key);
addNoCheck(key,x);
}
public boolean remove(int key) {
resizeIfNecessary(key);
int index = key & ((1 << maskShift) - 1);
int lockIndex = index & (this.nLocks - 1);
this.locks[lockIndex].writeLock().lock();
if (table[index] != null) {
boolean ret = table[index].remove(key);
this.locks[lockIndex].writeLock().unlock();
return ret;
}
else {
this.locks[lockIndex].writeLock().unlock();
return false;
}
}
public boolean contains(int key) {
int index = key & ((1 << maskShift) - 1);
int lockIndex = index & (this.nLocks - 1);
this.locks[lockIndex].readLock().lock();
if (table[index] != null) {
boolean ret = table[index].contains(key);
this.locks[lockIndex].readLock().unlock();
return ret;
}
else {
this.locks[lockIndex].readLock().unlock();
return false;
}
}
public void resize() {
this.maskShift += 1;
SerialList<T,Integer>[] newTable = new SerialList[2*table.length];
for (int i = 0; i < table.length; i++) {
if (table[i] == null) {
continue;
}
SerialList<T,Integer>.Iterator<T,Integer> iterator = table[i].getHead();
while (iterator != null) {
int index = iterator.key & ((1 << maskShift) - 1);
if (newTable[index] == null) {
newTable[index] = new SerialList<T,Integer>(iterator.key, iterator.getItem());
}
else {
newTable[index].addNoCheck(iterator.key, iterator.getItem());
}
iterator = iterator.getNext();
}
}
table = newTable;
}
}
```
Of course, the workers that use this hash table also need to be shown. Again, this implementation is largely the same as the given one, except for some cleanup (removing unused variables) and a few new features: the `skip` parameter, and taking in a Lamport queue instead of the generator itself.
=== *`ParallelHashPacketWorker`*
```java
class ParallelHashPacketWorker implements HashPacketWorker {
PaddedPrimitiveNonVolatile<Boolean> done;
final WaitFreeQueue<HashPacket<Packet>> source;
final ParallelHashTable<Packet> table;
long totalPackets = 0;
boolean skip;
public ParallelHashPacketWorker (
PaddedPrimitiveNonVolatile<Boolean> done,
WaitFreeQueue<HashPacket<Packet>> source,
ParallelHashTable<Packet> table,
boolean skip) {
this.skip = skip;
this.done = done;
this.source = source;
this.table = table;
}
public void run() {
HashPacket<Packet> pkt;
while (!done.value) {
try {
pkt = source.deq();
totalPackets++;
if (skip) {
continue;
}
switch (pkt.getType()) {
case Add:
table.add(pkt.mangleKey(),pkt.getItem());
break;
case Remove:
table.remove(pkt.mangleKey());
break;
case Contains:
table.contains(pkt.mangleKey());
break;
}
} catch (Exception e) {
// could not find anything, try again
continue;
}
}
}
}
```
The Lamport queue implementation as given in the textbook is listed below.
=== *`WaitFreeQueue`*
```java
// implemented from chapter 3
class WaitFreeQueue<T> {
volatile int head = 0, tail = 0;
T[] items;
public WaitFreeQueue(int capacity) {
items = (T[])new Object[capacity];
head = 0; tail = 0;
}
public void enq(T x) throws Exception {
if (tail - head == items.length) {
throw new Exception();
}
items[tail % items.length] = x;
tail++;
}
public T deq() throws Exception {
if (tail - head == 0) {
throw new Exception();
}
T x = items[head % items.length];
head++;
return x;
}
}
```
Alongside that implementation is the `Dispatcher` class, which is used to control the queues and the overall flow of the testing.
=== *`Dispatcher`*
```java
class Dispatcher implements HashPacketWorker {
PaddedPrimitiveNonVolatile<Boolean> done;
final HashPacketGenerator source;
long totalPackets = 0;
WaitFreeQueue[] queues;
public Dispatcher (
PaddedPrimitiveNonVolatile<Boolean> done,
HashPacketGenerator source,
WaitFreeQueue[] queues) {
this.done = done;
this.source = source;
this.queues = queues;
}
public void run() {
HashPacket<Packet> pkt;
int tryThread = 0;
while (!done.value) {
pkt = source.getRandomPacket();
while (true) {
try {
queues[tryThread].enq(pkt);
totalPackets++;
break;
} catch (Exception e) {
// full, could not put anything, try again
continue;
}
}
// advance to the next one, we should send it to this one next
tryThread = (tryThread + 1) % queues.length;
}
}
}
```
One interesting thing to note is the approach that I took to distributing the packets. To ensure that all workers get an equal amount of attention, the `Dispatcher` cycles through all of their queues and blocks on a queue that is currently full until it can deliver. Critically, this means that the `Dispatcher` is guaranteed to deliver one packet to each worker per round before moving on, even at the expense of performance. I believed that this would be a better way to get consistent results for the data.
The last piece of code is the testing framework itself.
=== *`ParallelHashPacket`*
```java
class ParallelHashPacket {
// static public variables to pass around
public static WaitFreeQueue[] queues;
public static ParallelHashTable<Packet> table;
public static void main(String[] args) {
final int numMilliseconds = Integer.parseInt(args[0]);
final float fractionAdd = Float.parseFloat(args[1]);
final float fractionRemove = Float.parseFloat(args[2]);
final float hitRate = Float.parseFloat(args[3]);
final int maxBucketSize = Integer.parseInt(args[4]);
final long mean = Long.parseLong(args[5]);
final int initSize = Integer.parseInt(args[6]);
final int numWorkers = Integer.parseInt(args[7]);
boolean skip = false;
if (args.length > 8) {
if (Integer.parseInt(args[8]) == -1) {
skip = true;
}
}
StopWatch timer = new StopWatch();
// allocate and initialize Lamport queues and hash tables (if tableType != -1)
queues = new WaitFreeQueue[numWorkers];
for (int i = 0; i < numWorkers; i++) {
queues[i] = new WaitFreeQueue<HashPacket<Packet>>(8);
}
table = new ParallelHashTable<Packet>(maxBucketSize, numWorkers);
HashPacketGenerator source = new HashPacketGenerator(fractionAdd,fractionRemove,hitRate,mean);
// initialize your hash table w/ initSize number of add() calls
for (int i = 0; i < initSize; i++) {
HashPacket<Packet> pkt = source.getAddPacket();
table.add(pkt.mangleKey(), pkt.getItem());
}
PaddedPrimitiveNonVolatile[] dones = new PaddedPrimitiveNonVolatile[numWorkers];
ParallelHashPacketWorker[] workers = new ParallelHashPacketWorker[numWorkers];
Thread[] workerThreads = new Thread[numWorkers];
for (int i = 0; i < numWorkers; i++) {
dones[i] = new PaddedPrimitiveNonVolatile(false);
workers[i] = new ParallelHashPacketWorker(dones[i], queues[i], table, skip);
workerThreads[i] = new Thread(workers[i]);
}
// call .start() on your Workers
for (int i = 0; i < numWorkers; i++) {
workerThreads[i].start();
}
timer.startTimer();
// call .start() on your Dispatcher
PaddedPrimitiveNonVolatile dispatcherDone = new PaddedPrimitiveNonVolatile(false);
Dispatcher dispatcher = new Dispatcher(dispatcherDone, source, queues);
Thread dispatcherThread = new Thread(dispatcher);
dispatcherThread.start();
try {
Thread.sleep(numMilliseconds);
} catch (InterruptedException ignore) {;}
// assert signals to stop Dispatcher
dispatcherDone.value = true;
// call .join() on Dispatcher
try {
dispatcherThread.join();
} catch (InterruptedException ignore) {;}
// assert signals to stop Workers - they are responsible for leaving the queues empty
for (int i = 0; i < numWorkers; i++) {
dones[i].value = true;
}
// call .join() for each Worker
for (int i = 0; i < numWorkers; i++) {
try {
workerThreads[i].join();
} catch (InterruptedException ignore) {;}
}
timer.stopTimer();
// report the total number of packets processed and total time
final long totalCount = dispatcher.totalPackets;
System.out.println("count: " + totalCount);
System.out.println("time: " + timer.getElapsedTime());
System.out.println(totalCount/timer.getElapsedTime() + " pkts / ms");
}
}
```
== Findings
As detailed in Assignment 7, I ran some testing on the implementation to see how it performed.
#note(
title: "Measurements"
)[
Note that anytime I use a measurement, that is the average of 5 runs, where the lowest and highest are dropped. This means that for every single data point, there are actually 5 data points that make it in total.
]
#note(
title: "Numbers"
)[
In order to keep consistency, I ran the tests with the following parameters fixed unless otherwise noted:
- $M => 2000$
- $P_+ => 0.25$
- $P_- => 0.25$
- $p => 0.8$
- $B => 20$
- $W => 1$
- $s => 100$
]
=== *(a) Dispatcher Rate*
$7469.617228942681$ pkts / ms
=== *(b) Speedup*
#twocol(
bimg("imgs/1.png"),
bimg("imgs/2.png"),
)
The three colors on each of the plots represent the different values of $p$ that could be taken on.
The two tests for part *(b)* are shown above, with the first set of three tests condensed on the left and the second three on the right. You can see that the second set of tests, where there were many more adds, did significantly worse, which is to be expected. The speedup is not as high as I would have liked, but it is still a significant improvement over the serial implementation regardless. Surprisingly, both graphs dip at $n=2$; I had expected a steady rise, with the overhead being largest at $n=1$. It appears that I was wrong: the overhead at one thread is not as great as I thought, and the locking overhead really only materializes once there are at least two threads.
=== *(c) Specific*
#bimg("imgs/3.png")
The above is the image for part *(c)* of this question. There was not as much interesting data since I did not implement anything specific for this. However, it is still worth noting that there are two very low points at 16 and 32 threads, the last of which I did not include for resolution purposes. It appears that something in my implementation went wrong at high thread counts. However, the rest is promising, as it follows mostly the same pattern as in the previous part, where there is a slight dip at two threads and then a payoff at higher thread counts.
I also discovered while doing all these tests that the ideal number for the bucket size seems to be $20$, as previously listed above. Perhaps I did not test a wide enough selection of numbers, but this was the one that seemed to work the best across all tests, no matter what I was doing.
Overall I wish I could have done more implementations, as well as potentially fixed any lingering bugs, although everything does look to be working fine. I am happy with the results, and I believe that the implementation is solid and the results are promising.
|
|
https://github.com/xkevio/parcio-typst | https://raw.githubusercontent.com/xkevio/parcio-typst/main/README.md | markdown | MIT License | # Usage
- Install [Typst](https://typst.app/) as a CLI either locally (see [repo](https://github.com/typst/)) or use the web app
- *(If installed locally)* Compile the document with `typst c main.typ`
`parcio-thesis` is a more recent attempt at porting the template.\
`parcio-slides` ports the presentation (beamer) slides.\
`parcio-report` is somewhat outdated (will be updated soon). |
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/visualize/gradient-radial_03.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
#circle(
radius: 25pt,
fill: gradient.radial(white, rgb("#8fbc8f"), focal-center: (35%, 35%), focal-radius: 5%),
)
#circle(
radius: 25pt,
fill: gradient.radial(white, rgb("#8fbc8f"), focal-center: (75%, 35%), focal-radius: 5%),
)
|
https://github.com/RedGl0w/TypHex | https://raw.githubusercontent.com/RedGl0w/TypHex/main/TypHex.typ | typst | #import "sgf.typ": parse
#let emptycolor = color.rgb("#fcf9f0")
#let bluecolor = color.rgb("#453cf0")
#let redcolor = color.rgb("#d6202f")
/*
Find owner of a cell, depending of the type of the input and its content
Owners : -1 = empty = e, 0 = blue = b = white, 1 = red = r = black
*/
#let getowner(parameter) = {
if type(parameter) == int {
return parameter;
} else if type(parameter) == str {
if parameter == "e" or parameter == "empty" {
return -1;
}
if parameter == "b" or parameter == "blue" {
return 0;
}
if parameter == "r" or parameter == "red" {
return 1;
}
return 2;
}
}
// Simply draw an hexagon
#let hexagon(owner, size, topColor: none, leftColor: none, rightColor: none, bottomColor: none) = {
let c = black;
let o = getowner(owner);
if o == -1 {
c = emptycolor;
} else if o == 0 {
c = bluecolor;
} else if o == 1 {
c = redcolor;
} else {
assert(false, message: "Invalid color");
}
// We can't use polygon.regular + rotate here, because the rotated shape keeps its unrotated layout box (the "hitboxes" aren't rotated), so the hexagon is drawn from explicit points below
let width = size*calc.cos(30deg);
box(width: width,
height: size,
{
let top = ((0%, 25%), (50%, 0%));
let right = ((100%, 25%), (100%, 75%));
if (topColor == none) {
right.insert(0,(50%, 0%));
} else {
top.push((100%, 25%));
}
let bottom = ((50%, 100%), (100%, 75%));
let left = ((0%, 25%), (0%, 75%));
if (bottomColor == none) {
left.push((50%, 100%))
} else {
bottom.insert(0,(0%, 75%));
}
let paintSide(arr, co) = {
if (co == none) {
co = 1.5pt + black;
} else {
co = 5pt + co;
}
let n = arr.len();
for i in range(0, n - 1) {
place(line(start: arr.at(i), end: arr.at(i+1), stroke: co));
}
}
// We don't use here a regular polygon because it mangles with rotate (the "hitboxes" aren't rotated)
place(polygon(
fill: c,
stroke: none,
(0%, 25%),
(50%, 0%),
(100%, 25%),
(100%, 75%),
(50%, 100%),
(0%, 75%)
));
paintSide(top, topColor);
paintSide(left, leftColor);
paintSide(right, rightColor);
paintSide(bottom, bottomColor);
});
}
// Will draw a grid of hex with cell occupied in g and size in n
// It will draw also the labels of the columns and rows
#let grid(g, n, hexagonSize, gridVerPadding, gridHorPadding, textSize) = {
// Fixme : shouldn't be limited to one character
let letterLine = range(n).map(i => {
align(center, text(textSize, str.from-unicode(65 + i)))
});
// The maximum number of chars of the row numbers
let maxChars = str(n).len();
// Size of the "top" of an hexagon
let minusSpacing = hexagonSize/2 * (1 - calc.sin(30deg));
// The width of an hexagon
let hexagonWidth = hexagonSize*calc.cos(30deg);
style(styles => {
let rowNumbersSize = measure(text(textSize)[1], styles).width;
let g = stack(
dir:ttb,
spacing: -minusSpacing,
table( // Use grid ?
stroke: none,
inset: (left : 0pt, right: 0pt, top: 2pt, bottom: 2pt),
columns: (maxChars * rowNumbersSize,) + (gridHorPadding,) + (hexagonWidth,)*n + (hexagonWidth/2 * (n -1),) + (gridHorPadding,) + (maxChars * rowNumbersSize,),
[], [],
..letterLine,
[], [], []
),
v(minusSpacing * 2 + gridVerPadding),
..range(n).map(i => {
let l = str(i+1).len();
stack(
dir: ltr,
spacing: none,
h(hexagonWidth/2 * i - (l - maxChars) * rowNumbersSize),
align(horizon, text(textSize, str(i+1))),
h(gridHorPadding),
..range(n).map(j => {
let c = "e";
for k in g.keys() {
if (i,j) in g.at(k) {
c = k;
}
}
let topColor = none;
let bottomColor = none;
let leftColor = none;
let rightColor = none;
if (i == 0) {
topColor = redcolor
}
if (i == n - 1) {
bottomColor = redcolor
}
if (j == 0) {
leftColor = bluecolor
}
if (j == n - 1) {
rightColor = bluecolor
}
hexagon(c, hexagonSize, topColor: topColor, bottomColor: bottomColor, leftColor: leftColor,rightColor:rightColor)
}),
h(gridHorPadding),
align(horizon, text(textSize, str(i+1))),
h(hexagonWidth/2 * (n -i - 1) - (l - maxChars) * rowNumbersSize)
)
}),
v(minusSpacing * 2 + gridVerPadding),
table(
stroke: none,
inset: (left : 0pt, right: 0pt),
columns: (hexagonWidth/2 * (n -1),) + (maxChars * rowNumbersSize,) + (gridHorPadding,) + (hexagonWidth,)*n + (gridHorPadding,) + (maxChars * rowNumbersSize,),
[], [], [],
..letterLine,
[], []
)
)
box({
let d = measure(g, styles);
let textHeight = measure(text(textSize)[A], styles).height;
let topPadding = 4pt + textHeight - gridVerPadding - hexagonSize/2;
let leftPadding = maxChars * rowNumbersSize + gridHorPadding;
let gridWidth = n * hexagonWidth;
let topRight = leftPadding + gridWidth;
let bottomLeft = d.width - gridWidth - rowNumbersSize * maxChars - 2*gridHorPadding;
g;
})
});
}
// Decode "a1" to (0,0)
#let parsePosition(input) = {
let column = 0;
let position = 0;
let isLetter(c) = {
let c = c.to-unicode();
return (97 <= c and c <= 122) or (65 <= c and c <= 90)
}
while isLetter(input.at(position)) {
let c = input.at(position).to-unicode();
column *= 26;
column += c - 96;
position += 1;
}
let row = int(input.slice(position))-1;
return (row, column -1);
}
#let gridFromSGF(input, hexagonSize: 30pt, gridVerPadding : 0pt, gridHorPadding : 5pt, textSize : 15pt) = {
let tree = parse(input);
assert(tree.FF == "4", message: "Expected SGF version 4");
let size = int(tree.SZ);
let position = (
b: (),
r: (),
);
while tree != () and tree.children != () {
assert(tree.children.len() == 1, message: "Expected having a 1-ary tree");
let keys = tree.keys();
if "AW" in keys {
for i in tree.AW {
position.b.push(parsePosition(i))
}
}
if "W" in keys {
position.b.push(parsePosition(tree.W))
}
if "AB" in keys {
for i in tree.AB {
position.r.push(parsePosition(i))
}
}
if "B" in keys {
position.r.push(parsePosition(tree.B))
}
tree = tree.children.at(0);
}
grid(position, size, hexagonSize, gridVerPadding, gridHorPadding, textSize);
}
|
|
https://github.com/leo1oel/CSAPP | https://raw.githubusercontent.com/leo1oel/CSAPP/main/Homework/Homework3.typ | typst | #import "template.typ": *
#import "quick-maths.typ": shorthands
#import "@preview/physica:0.9.3": *
#show: shorthands.with(
($+-$, $plus.minus$),
($>=$, sym.gt.eq.slant),
($<=$, sym.lt.eq.slant),
)
#show: project.with(
title: "Homework Set 3 - Virtual Memory, I/O, File Systems",
authors: (
"<NAME> 2023010747",
)
)
#show table: set align(center)
#show image: set align(center)
= Problem 1
(1) Given that the lowest 12 bits of the virtual address are used as the offset within the page. This offset specifies individual bytes within the page. Therefore, the page size is $2^12$ bytes, or $4$ kB.
(2) Since 12 bits are used for the page offset, the remaining bits 20 bits are used for the page number. So the maximum size of physical memory this system can support is $2^20 times 2^12 = 2^32$ bytes, or 4 GB.
(3) Each page table entry (PTE) is 32 bits (4 bytes) in size. First-level page table has $2^10$ entries at most, and Second-level page tables has $2^10 times 2^10 = 2^20$ entries at most.
Therefore, the maximum size that a page table of a process can be $(2^10 + 2^20) times 4= (1024+1048576) times 4 = 4198400$ bytes.
(4) Only two entries in the page tables are actually used: one for the lowest address and one for the highest address. This means that we have two first-level page table entries pointing to a second-level page tables, each with only one valid entry. Therefore the size of the page table would be $(2^10 +2 times 2^10) times 4 = 12288$ bytes.
(5) Each page is $2^12$ bytes and a process uses 512 kB of physical memory in total, so we can get the number of pages used by the process is $(512"kB")/(4"kB") = 128$ pages.
The minimum size of the page table occurs when all pages are perfectly contiguous in memory, using one first-level page table entry points to one second-level page table which contains 128 entries.
Therefore, the size of the page table would be $(2^10+2^10) times 4 = 8192$ bytes.
The maximum size occurs if each of the 128 pages is non-contiguous, potentially requiring up to 128 entries in the first-level page table, each pointing to a different second-level page table. Therefore, the size of the page table would be $(2^10 + 128 times 2^10) times 4 = 528384$ bytes.
(6) The TLB tag is 20 bits because it needs to uniquely identify pages based on the first 20 bits of the virtual address. Each TLB entry includes the page table entry, a tag and a valid bit, so the size of each TLB entry is $32+20+1 = 53$ bits.
= Problem 2
(1)
#table(
columns: 12,
[],[A],[B],[C],[D],[E],[C],[A],[B],[C],[D],[F],
[i],[A],[],[],[],[E],[],[],[],[],[D],[],
[ii],[],[B],[],[],[],[],[A],[],[],[],[F],
[iii],[],[],[C],[],[],[],[],[],[],[],[],
[iv],[],[],[],[D],[],[],[],[B],[],[],[],
)
(2)
When there are four physical frames, the number of faults is 9. So we need to increase the number of physical frames to reduce the number of faults.
If there are 5 physical frames, the number of faults is 6. And when there are more than 5 physical frames, the number of faults is still 6, since there are only 6 different pages in total.
#table(
columns: 12,
[],[A],[B],[C],[D],[E],[C],[A],[B],[C],[D],[F],
[i],[A],[],[],[],[],[],[],[],[],[],[F],
[ii],[],[B],[],[],[],[],[],[],[],[],[],
[iii],[],[],[C],[],[],[],[],[],[],[],[],
[iv],[],[],[],[D],[],[],[],[],[],[],[],
[v],[],[],[],[],[E],[],[],[],[],[],[],
)
= Problem 3
(1)
Average read or write time = Average seek time + Average rotational delay + Average transfer time + Average controller time.
Average rotational delay = $1/2 times 1/"RPM" times (60s)/(1min)$.
Average transfer time = $"Sector size"/"Disk transfer rate"$.
Average controller time = $"Sector size"/"Controller transfer rate"$.
For disk a, Average seek time = 11 ms, Average rotational delay = $1/2 times 1/7200 times 60 = 4.17 "ms"$, Average transfer time = $1024/(36 times 2^20) = 0.027 "ms"$, Average controller time = $1024/(500 times 2^20/8) = 0.016 "ms"$.
Therefore, Average read or write time for disk a = $11 + 4.17 + 0.027 + 0.016 = 15.213 "ms"$.
For disk b, Average seek time = 9 ms, Average rotational delay = $1/2 times 1/7200 times 60 = 4.17 "ms"$, Average transfer time = $1024/(32 times 2^20) = 0.031 "ms"$, Average controller time = $1024/(520 times 2^20/8) = 0.015 "ms"$.
Therefore, Average read or write time for disk b = $9 + 4.17 + 0.031 + 0.015 = 13.216 "ms"$.
(2)
The minimum time is reached when the seek time and rotational delay are 0.
For disk a, Average transfer time = $2048/(36 times 2^20) = 0.054 "ms"$, Average controller time = $2048/(500 times 2^20/8) = 0.031 "ms"$.
Therefore, the minimum read or write time for disk a = $0 + 0 + 0.054 + 0.031 = 0.085 "ms"$.
For disk b, Average transfer time = $2048/(32 times 2^20) = 0.061 "ms"$, Average controller time = $2048/(520 times 2^20/8) = 0.030 "ms"$.
Therefore, the minimum read or write time for disk b = $0 + 0 + 0.061 + 0.030 = 0.091 "ms"$.
(3)
For both disks (a and b), the average seek time is the dominant factor, contributing the most to the total time for each operation. The other components like rotational delay, disk transfer time, and controller transfer time are significantly smaller.
If we consider different access patterns, the dominant factor may change.
1. Random Access Patterns:
In random access patterns, the dominant factor is the seek time and rotational latency.
2. Sequential Access Patterns:
In sequential access patterns, the seek time and rotational latency become less significant because the disk head moves continuously in a more predictable manner. In this case, the disk transfer rate and controller transfer rate become more important.
= Problem 4
(1)
Since it is a 1 GHz processor, it runs $10^9$ cycles per second. So between two byte arrivals, the processor can run $0.02 times 10^(-3) times 10^9 = 20000$ cycles.
We keep polling without stopping, so the entire operation takes $20000 times 999 + 50 + 100 = 19980150$ cycles, and the processor performs $(20000-100)/50 times 999 + 1 = 397603$ polls.
(2)
Between two byte arrivals, the processor can run $20000 - 200 - 100 = 19700$ cycles on other work, so it spends $19700 times 999 = 19680300$ cycles in total on another task.
= Problem 5
(1)
First, Number of pointers per block= $"Block size"/"Pointer size" = 4096/(32/8) = 1024$.
For a direct block pointer, it points to a data block with 4096 bytes.
For an indirect block pointer, it points to 1024 direct pointers, and each direct pointer points to a data block with 4096 bytes. So the total size is $1024 times 4096 = 4194304$ bytes.
For a double indirect block pointer, it points to 1024 indirect pointers, and each indirect pointer points to 1024 direct pointers, and each direct pointer points to a data block with 4096 bytes. So the total size is $1024 times 1024 times 4096 = 4294967296$ bytes.
We want to store a 4 GB file, which is $4 times 2^30 = 4294967296$ bytes. The 12 direct blocks hold $12 times 4096 = 49152$ bytes, the indirect block covers $4194304$ bytes, and the remaining $4294967296 - 4194304 - 12 times 4096 = 4290723840$ bytes go through the double indirect block, which needs $ceil(4290723840/(4096 times 1024)) = 1023$ second-level indirect blocks.
We also know that the inode itself is 256 bytes. Therefore, we need $(1 + 1 + 1023) times 4096 + 4 times 2^30 + 256 = 4299165952$ bytes of storage in total (one indirect block, one double indirect block and 1023 second-level indirect blocks, plus the file data and the inode).
(2)
The maximum size of a file can be stored is
$(12+1024+1024 times 1024) times 4096 = 4299210752$ bytes.
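As a quick cross-check of (1) and (2), the block arithmetic can be scripted. This sketch only restates the assumptions used above (12 direct pointers, 4-byte block pointers, 4096-byte blocks, 256-byte inode):

```c
#include <stdio.h>

int main(void) {
    const long long BLOCK = 4096;
    const long long PTRS  = BLOCK / 4;        // 1024 pointers per block
    const long long FILE_SIZE = 4LL << 30;    // 4 GB

    long long direct_bytes   = 12 * BLOCK;
    long long indirect_bytes = PTRS * BLOCK;
    long long dbl_bytes      = FILE_SIZE - direct_bytes - indirect_bytes;
    long long l2_blocks      = (dbl_bytes + PTRS * BLOCK - 1) / (PTRS * BLOCK); // ceiling

    printf("bytes through double indirect: %lld\n", dbl_bytes);  // 4290723840
    printf("second-level indirect blocks:  %lld\n", l2_blocks);  // 1023
    printf("total storage for 4 GB file:   %lld\n",
           FILE_SIZE + (1 + 1 + l2_blocks) * BLOCK + 256);       // 4299165952
    printf("maximum file size:             %lld\n",
           (12 + PTRS + PTRS * PTRS) * BLOCK);                   // 4299210752
    return 0;
}
```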
(3)
1. Access inode to get the double-indirect pointer.
2. Access the double-indirect block to get the indirect block pointer.
3. Access the indirect block to get the direct block pointer.
4. Access the last data block to read the byte.
So, 4 disk accesses are required to read the last byte.
For overwriting,
1. Access inode to get the double-indirect pointer.
2. Access the double-indirect block to get the indirect block pointer.
3. Access the indirect block to get the direct block pointer.
4. Access the last data block to write the byte.
5. Update the inode to reflect the change.
So, 5 disk accesses are required to overwrite the last byte.
(4)
For this disk with 4 kB block size, the allocated space per 3 kB file is $4 "kB"$, and the allocated space per 8 kB file is $2 times 4 "kB" = 8 "kB"$. Since there are half 3 kB files and half 8 kB files, we assume there are $N/2$ 3 kB files and $N/2$ 8 kB files. So the space overhead is $"allocated space"/"actual data size" = (4 times N/2 + 8 times N/2)/(3 times N/2 + 8 times N/2) = 12/11$.
For disk with 2 kB block size, the allocated space per 3 kB file is $2 times 2 "KB" = 4 "kB"$, and the allocated space per 8 kB file is $4 times 2 "kB" = 8 "kB"$. Since there are half 3 kB files and half 8 kB files, we assume there are $N/2$ 3 kB files and $N/2$ 8 kB files. So the space overhead is $"allocated space"/"actual data size" = (4 times N/2 + 8 times N/2)/(3 times N/2 + 8 times N/2) = 12/11$.
(5)
There are two blocks that need to change:
1. The block containing the data for the source directory /a/b. This block is modified to remove the directory entry for the file c.
2. The block containing the data for the destination directory /d/e. This block is updated to add the directory entry for the file f.
(6)
(i)
Only file metadata need to change because the inode needs to store additional information indicating that it is a symbolic link rather than a regular file.
(ii)
Only file metadata need to change because the inode needs to manage reference counts that track how many directory entries point to it. |
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/meta/outline-entry_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
#set page(width: 150pt)
#set heading(numbering: "1.")
#show outline.entry.where(
level: 1
): it => {
v(12pt, weak: true)
strong(it)
}
#outline(indent: auto)
#set text(8pt)
#show heading: set block(spacing: 0.65em)
= Introduction
= Background
== History
== State of the Art
= Analysis
== Setup
|
https://github.com/akshat2602/resume | https://raw.githubusercontent.com/akshat2602/resume/master/README.md | markdown | # <NAME> Resume
This template is for my personal use and tracking.
## Build
Run `typst compile` on the main `.typ` file in the terminal to build the PDF.
## TODO:
- [ ] Automate github action build to push resume to web.
|
|
https://github.com/crd2333/crd2333.github.io | https://raw.githubusercontent.com/crd2333/crd2333.github.io/main/src/docs/Courses/操作系统原理与实践/note.typ | typst | #import "/src/components/TypstTemplate/lib.typ": *
#show: project.with(
title: "操作系统原理与实践",
lang: "zh",
)
#info()[
- 感觉 #link("https://note.hobbitqia.cc/OS/")[hobbitqia 的笔记] 比较好,自己简单记记
- 还有 #link("https://note.isshikih.top/cour_note/D3QD_OperatingSystem/")[修佬的笔记],虽然老师不一样
]
= Introduction
- 复习计组的东西
- UNIX family tree
- NUIX, BSD, Solaris, Linux...
- Ubuntu = Linux Core + GNU
- Kernel 的工作是抽象封装和资源管理
- 对 OS 的一个最常见误区是“一个 running program”,事实上 OS 在启动后一般是闲置的,并且要求内存占用小
- It's code that resides in memory and is ready to be executed at any moment
- It can be executed on behalf of a job(or process in modern terms, a process is a running job)
- 为了实现 APP $->$ OS $->$ HardWare 的层级,需要把访问硬件的指令分类(privileged / unprivileged),即至少支持两个 Mode
- arm64 有 4 个 Mode,RISC-V 有 3 个 Mode(有一个没抄完因此叫 reserved)
#grid2(
fig("/public/assets/Courses/OS/2024-09-24-13-33-05.png"),
fig("/public/assets/Courses/OS/2024-09-24-13-33-26.png")
)
- OS Event
- 分为 interrupt 和 exception(traps)
- When a user program needs to do something privileged, it calls a *system call*. A system call is a special kind of *trap*
#fig("/public/assets/Courses/OS/2024-09-24-13-35-18.png")
- Timers
- OS 必须有时间的概念来公平地分配资源给各个进程
- 为此需要 timer,通过 privileged instructions 实现,定期自增计时器
- Main OS Services
+ Process Management
- process 是执行中的 program
- OS 负责:进程的创建、删除、挂起、恢复、同步、通信、死锁处理
+ Memory Management
- Memory management 决定哪些东西在内存中,kernel *always* in memory
- OS 负责:跟踪、移入移出、分配释放 memory
- OS 不负责:memory caching, cache coherency(by hardware)
+ Storage Management
+ I/O Management
+ Protection and Security
= Structures
- 操作系统的定义
- 狭义上来说,只有与硬件资源直接沟通的内核才叫 OS
- 但这样 Android(底层是 linux)、鸿蒙等就不算了,因此从广义上来说,一些运行在 user mode 上的 system services 也算 OS(比如,GUI、batch、command line)
== System Calls and Services
#fig("/public/assets/Courses/OS/2024-09-24-16-04-00.png", width: 70%)
- system call 是 kernel 提供给 user 的 API,做 privileged 的操作
- system call 非常常见,只是我们可能意识不到
- 比如 C 语言的 `printf` 就是一个 `write` system call 的 wrapper
- 从 high level 的角度看
#fig("/public/assets/Courses/OS/2024-09-24-14-25-18.png", width: 70%)
- system call number
- System-call interface maintains a table(actually an array) —— system call table
- 每种 system call 关联着一个数字,给一个数字,就去调用对应的 system call
- linux 上有一个叫 `strace` 的命令,可以展示一个命令调用了多少 system call;类似地,`time` 命令可以展示一个命令的执行时间,包含 real, user, sys
- system call 在 windows 和 unix 上的设计完全不一样
- system services
- OS 自带的、不需要用户自己安装的、运行在 user mode 的服务
- File manipulation, Status information sometimes stored in a file, Programming language support, Program loading and execution, Communications, Background services, Application programs
== Linkers and Loaders
- Linkers and Loaders
- Linker: 把多个 object files link 起来,生成一个可执行文件
- Loader: 把可执行文件 load 到内存中,准备执行
- Linker 和 Loader 之间的区别在于:Linker 是在 compile time,Loader 是在 run time
- ELF binary basics
- ELF: Executable and Linkable Format
- `.text`: code, `.rodata`: initialized read-only data, `.data`: initialized data, `.bss`: block started by symbol
- Linking
- Static linking
- 把所有需要的代码都 link 到一个 large binary 中,移植性好
- Dynamic linking
- 重用 libraries 来减少 binary 的大小
- 谁来解析?loader will resolve lib calls
- running a binary
#fig("/public/assets/Courses/OS/2024-09-24-16-07-49.png")
- Who setups ELF file mapping? Kernel, or to be more specific --- exec syscall
- Who setups stack and heap? Kernel, or to be more specific --- exec syscall
- Who setups libraries? loader, ld-xxx
- Running statically-linked ELF
- Where to start? `_start`, `_start` is executed after evecve system call,并且这是运行在 user mode 的
- `_start` 里面调用 `_libc_start_main`,这个函数会调用 `main` 函数并设置参数
#fig("/public/assets/Courses/OS/2024-09-25-16-38-09.png", width:70%)
- Running dynamically-linked ELF
- 在源码的 `load_elf_binary` 里,会有个 `if (elf_interpreter)`,如果是则为 dynamical,然后去调 interpreter
- 它做了什么呢?比方说 main 里面调了 `printf`,实际上用的是 `puts`,那这个地址在哪呢(如果没设置好的话就会 segmentation fault)?这就是 loader 的工作
#fig("/public/assets/Courses/OS/2024-09-25-16-44-28.png")
== Operating System Design
- Why Applications are Operating System Specific
- Each operating system provides its own unique system calls
- 但是 Apps can be multi-operating system
- *Application Binary Interface (ABI)* 是 API 在 architecture 上的实现,定义一个 binary code 的不同部分如何在某个 architecture 上与某个 OS 交互
- Operating System Design and Implementation
- User goals and System goals 是不同的,用户追求方便、易学、可靠、安全、快速,系统追求设计、维护、灵活、可靠、高效
- Important principle to separate
+ Policy: *What* will be done?
+ Mechanism: *How* to do it?
- 将这二者分开允许更改 policy 而不需要更改 mechanism
- 比如在 scheduling 中,policy 维护一个列表决定哪些进程先执行;mechanism 不管那么多,哪个任务在前就运行哪个
- Operating System Structure
- 总体来说有 $4$ 种设计思路
+ Simple structure – MS-DOS
+ Monolithic – Unix, Linux
+ Layered – an abstraction
+ Microkernel – Mach,
#grid(
columns: 2,
fig("/public/assets/Courses/OS/2024-09-25-17-20-23.png"),
fig("/public/assets/Courses/OS/2024-09-25-17-20-42.png"),
fig("/public/assets/Courses/OS/2024-09-25-17-19-34.png", width: 50%),
fig("/public/assets/Courses/OS/2024-09-25-17-19-43.png")
)
- Modules
- Many modern operating systems implement loadable kernel modules (LKMs)
- 面向对象;每个核心组件独立;每个部分都通过接口与其他部分交流;每个部分可以根据需要在内核中加载
- 总体来说,类似于 layers 但更灵活
- Hybrid Systems
- 实际上大多数现代 OS 都不是纯粹的上述 $4$ 种设计
- System Boot
- 当系统上电后,从固定的内存位置开始执行;OS 必须对 hardware available,以便硬件可以启动它
- 一小段代码 —— 存储在 ROM 或 EEPROM 中的 bootstrap loader, BIOS 会定位内核,将其加载到内存中,并启动它
- 有时是两步过程,固定位置的 boot block 由 ROM 代码加载,从磁盘加载 bootstrap
- 现代系统用 Unified Extensible Firmware Interface(UEFI)取代 BIOS
- Common bootstrap loader, GRUB 允许从多个磁盘、版本和内核选项中选择内核
- 内核加载后,系统开始运行
- 引导加载程序经常允许各种引导状态,例如单用户模式
= Processes
== Process concept
- Process memory layout
- 一般来说 stack 会比 heap 快得多,因为大多数时候里 cache 里
- 当 stack meets heap 时,会发生著名的 stack overflow(常见于错误递归的情况)
#fig("/public/assets/Courses/OS/2024-10-08-13-40-04.png", width: 80%)
- Stack Frame (Activation Record)
- 栈帧,每次调用新函数的时候,`sp` 指向新的位置,`fp` 指向原本 `sp` 的位置,它们之间的空间就是这个函数的内存布局
- 这里讲课用的是 arm64 的寄存器规定
#fig("/public/assets/Courses/OS/2024-10-08-13-54-25.png", width: 80%)
- Process Control Block (PCB)
- 一个进程的所有信息都在 PCB 里,linux 里面叫 `task_struct`
- 会有一个链表把所有 PCB 串起来
- 一般至少有以下信息:
+ Process state: running, waiting, etc
+ Program counter: location of instruction to next execute
+ CPU registers: contents of all process-centric registers
+ CPU scheduling information: priorities, scheduling queue pointers
+ Memory-management information: memory allocated to the process
+ Accounting information: CPU used, clock time elapsed since start, time limits
+ I/O status information: I/O devices allocated to process,list of open files
== Process State
- 要记住这张图
#fig("/public/assets/Courses/OS/2024-10-08-14-05-49.png", width: 80%)
- 下面我们先讲 new 和 terminated,然后再讲 running, ready, waiting
=== 进程创建(new)
- 进程创建就像一棵树,每个 process 拥有唯一 `pid` 和指向它的父节点的 `ppid`
- 名字带 `d` 的表示它是一个 daemon 进程(守护进程),不会与 user 交互,默默地在后台跑
#fig("/public/assets/Courses/OS/2024-10-08-14-43-53.png", width: 80%)
- 父进程在创建子进程时,他可以选择等待子进程结束,也可以不等待;子进程可以是父进程的 `fork()`,也可以是新的程序
- `fork()`
- `create_process()` in Windows
- `fork()` + `exec()`: 简洁(参数少)、分工、联系(父进程与子进程),但比较复杂、性能差、不安全
- 一个经典考点是对 `fork()` 的理解
  ```c
  int main() {
      fork();            // 执行后共 2 个进程
      if (fork()) {      // 2 个进程各 fork 一次,共 4 个;只有父进程(返回值非 0)进入 if
          fork();        // 2 个父进程再各 fork 一次,共 6 个
      }
      fork();            // 6 个进程再各 fork 一次,最终共 12 个进程
  }
  ```
- `exec*()` family
- `exec()` 会把 process 的那一块内存清空,然后把新的程序加载进去
- 如果新的程序内存要求比原本的大,会“往下扩”(其实是虚拟内存机制)
- 可以传递:
+ path for the executable
+ command-line arguments to be passed to the executable
+ possibly a set of environment variables
- `ls` 的例子(可以 `strace` 用 syscall 查看进程创建的情况,但是不会显示 `fork`)
#fig("/public/assets/Courses/OS/2024-10-08-14-50-55.png", width: 80%)
=== 进程结束(terminated)
- 像上图那样,parent 需要等待 child 结束
- `wait()` 等待所有的 child process 结束
- `waitpid()` 等待指定 child process 结束
- 但对于非正常结束的进程,需要用 `signal` 来处理
- signal
- *signal* 是一个 asynchronous event,程序必须以某种形式对它做出反应,可以把它想象成 software interrupt
- `signal()` 允许程序指定如何处理 signal
```c
signal(SIGINT, SIG_IGN); // ignore signal
signal(SIGINT, SIG_DFL); // set behavior to default
signal(SIGINT, my_handler); // customize behavior
// handler is as:void my_handler(int sig){...}
```
- zombie
- 当 child 进程 terminate 了,这个 child 就成了 *zombie*
- 直到 OS collect garbage 或者 parent 调用 OS 来处理
- parent 进程可以调用 `wait()` 或 `waitpid()` 来获取它的 exit code 并回收
  - 为什么一定要 OS 回收?因为 child 进程可以回收基本所有东西,但唯独 PCB 没有办法自己 deallocate
- 为什么不立即回收?因为 zombie 不会实际消耗 CPU 资源,而只是略微占用一点内存
- 比如:当 parent 陷入无限循环,而且没有设置 handler 时,child `exit()`,却没有被处理,它就成了 zombie
- orphan
- 当 parent 进程 die 了,child 却还没结束,这个 child 就成了 *orphan*
- 它会被 `init` 进程(or `systemd`, `pid` = 1)收养(`adopted`)
=== Process Switching
- Process scheduler 维护两种 queue
+ Ready queue: 进程已经准备好了,等待 CPU(多少个 CPU 就有多少个)
+ Wait queue: 进程等待某个事件发生,有多种类型每种一个
#fig("/public/assets/Courses/OS/2024-10-09-16-43-44.png", width: 80%)
- queue 的数据结构跟 ADS 里面有所不同,ADS 里面往往做成 node,包含实际数据;而 OS 这边为了通用性往往就是一个包含 `prev, next` 俩指针的结构,搬到哪都能用
- Context Switch
- 这里的 context 指的就是 registers,因为它们只有一份,所以需要保存
- context switch 一定得是在 Kernel Mode,即 privileged,因为它涉及到系统资源、能改 pc
+ 如果 switch 发生在 kernel mode,就跟实验 2 里做的一样。在 `cpu_switch_to` 把 context 存到相应 PCB 里
+ 如果 switch 发生在 user mode,还牵涉到 per-thread kernel stack,更确切地说是 pt_regs(user context been saved)。在 `kernel_entry` 时把 context 存到 pt_regs,切换到 kernel stack,然后在 `kernel_exit` 时恢复
#fig("/public/assets/Courses/OS/2024-10-09-17-44-58.png")
- 思考 `fork()` 为什么能返回两个值(Return new_pid to parent and zero to child)?
- 其实是有“两套东西”
+ 对 parent process,`fork()` 就是一个 syscall,返回值存在 pt_regs 里
+ 对 child process,其实也是通过 pt_regs,手动把它设为 $0$
- When does child process start to run and from where?
- When forked, child is READY $->$ context switch to RUN
- After context switch, run from `ret_to_fork`
- `ret_from_fork` $->$ `ret_to_user` $->$ `kernel_exit` who restores the pt_regs
- Code through,Linux 进程相关代码的发展史
= Inter-Process Communications(IPCs)
- 与之对应的 intra-process 表示进程内部
- 前面我们把进程介绍为独立的单元,互相之间只有 switch,保护得太好了。但实际上进程之间因为 Information sharing, Computation speedup, Modularity, Convenience 等原因需要进行通信
- Multiprocess Architecture example – Chrome Browser
  - 谷歌浏览器实际上是多进程架构,主要有 3 种进程 —— Browser, Render, Plugin,分别负责用户交互、渲染、插件
- Models of IPC
+ *Shared memory*
+ *Message passing*
+ Signal
+ Pipe
+ Client-Server Communication: Socket, RPCs, Java RMI
#fig("/public/assets/Courses/OS/2024-10-16-16-38-32.png", width: 50%)
- Message-passing
+ 高开销,每次操作都要 syscall
+ 有时对用户来说很麻烦,因为代码中到处都是send/recv操作
+ 相对来说 OS 上容易实现
- Shared memory
+ 低开销,只需要初始化时少量的 syscall;对交换大量数据很有用
+ 对用户来说更方便,因为我们习惯于简单地从RAM读/写
+ 相对来说 OS 上更难实现
- 进程需要建立共享内存区域
- 每个进程创建自己共享内存段,然后其它进程可以将其 attach 到自己的地址空间
- 注意,这与多线程的核心内存保护理念背道而驰
- 进程通过读/写共享内存区域进行通信,他们自己负责“不踩到对方的脚趾”,操作系统根本不参与
- e.g. POSIX Shared Memory
- 存在问题:不安全。任何人拿到 share_id 都可以把共享内存 attach 到自己进程上,可以观察到其他进程的数据、甚至做 DOS 攻击
  - 而且很 cumbersome,会发生各种 error 需要处理,现在使用不多
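- 一个示意性的最小片段(`/demo_shm` 与大小只是假设的示例,旧版 glibc 链接时可能需要 `-lrt`):创建方如下,另一个进程用同样的 name 调用 `shm_open` + `mmap` 就能看到同一块内存
  ```c
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void) {
      const char *name = "/demo_shm";                  // 共享内存对象的名字(示例)
      const size_t size = 4096;

      int fd = shm_open(name, O_CREAT | O_RDWR, 0666); // 创建/打开共享内存对象
      ftruncate(fd, size);                             // 设定大小
      char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);               // attach 到自己的地址空间
      strcpy(p, "hello from writer");                  // 之后就是普通的内存读写

      munmap(p, size);
      close(fd);
      // shm_unlink(name);                             // 不再需要时删除该对象
      return 0;
  }
  ```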
== Message Passing
- Two fundamental operations:
- send: to send a message (i.e., some bytes)
- recv: to receive a message
- If processes P and Q wish to communicate they
- establish a communication “link” between them
- This “link” is an abstraction that can be implemented in many ways (even with shared memory!!)
- place calls to send() and recv()
- optionally shutdown the communication “link”
- Implementation of communication link
- Physical:
+ Shared memory
+ Hardware bus
+ Network
- Logical:
+ Direct or indirect
- Direct: 一个链接与且只与一对通信进程相关联,一共需要 $C_n^2$
- Indirect: 有一个 mailbox,发信息相当于发给一个 mailbox。如果有多个进程,我们需要确定是由哪个进程接收信息
+ Synchronous or asynchronous
- Synchronous: 发信息时,如果接收者没收到信息,就堵塞着不走;收信息时,如果发送者没有发送信息,就堵塞着不走
- Asynchronous: Non-blocking is considered asynchronous
- 异步效率更高,同步时效性更高。
- Automatic or explicit buffering
+ Automatic or explicit buffering
- Zero capacity - no messages are queued on a link. Sender must wait for receiver
- Bounded capacity - finite length of n messages. Sender must wait if link full.X
- Unbounded capacity - infinite length. Sender never waits
== Signals
- 略
== Pipes
- 充当允许两个进程通信的管道
- 问题:
- 沟通是单向的还是双向的?
- In the case of two-way communication, is it half or full-duplex?
- 通信过程之间必须存在关系(即父子关系)吗?
- 这些管道可以通过 network 使用吗?
- Ordinary pipes —— 不能从创建它的进程外部访问。通常,父进程创建一个管道,并使用它与它创建的子进程进行通信
- 没有名字,只能通过 `fork()` 来传播
- Producer writes to one end (the *write-end* of the pipe)
- Consumer reads from the other end (the *read-end* of the pipe)
#fig("/public/assets/Courses/OS/2024-10-16-17-05-47.png", width: 50%)
- 注意 fd[0] 是 read-end,fd[1] 是 write-end(对于双方都是)
- Windows calls these anonymous pipes
- Named pipes —— 可以在没有父子关系的情况下访问
- 可以把名字通过网络/文件传播,这样就能交互。(可以使用 mkfifo 创建 named pipes)
- UNIX Pipes
- In UNIX, a pipe is mono-directional. 要实现两个方向一定需要两个 pipe
- e.g. `ls | grep foo`,创建了两个进程,一个 `ls` 一个 `grep`,`ls` writes on the write-end and `grep` reads on the read-end
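- 一个示意性的最小例子:父进程作为 producer 往 write-end 写,子进程作为 consumer 从 read-end 读
  ```c
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/wait.h>

  int main(void) {
      int fd[2];                       // fd[0]: read-end, fd[1]: write-end
      char buf[64];

      if (pipe(fd) == -1) return 1;

      if (fork() == 0) {               // child: consumer
          close(fd[1]);                // 关掉不用的 write-end
          ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
          if (n > 0) { buf[n] = '\0'; printf("child got: %s\n", buf); }
          close(fd[0]);
          return 0;
      }
      close(fd[0]);                    // parent: producer,关掉不用的 read-end
      write(fd[1], "hello via pipe", 14);
      close(fd[1]);                    // 关闭后 read 端会读到 EOF
      wait(NULL);
      return 0;
  }
  ```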
== Client-Server Communication
- 广义上的 IPC,因为是跑在两个物理机器上的交互。
- Sockets
- RPCs: 所有的交互都是和 stub 通信,stub 会和远端的 server 通信。存在网络问题,如丢包
- Java RMI: RPC in Java
- 略
= Threads
== Thread Concept
- 回顾,process = code(text) section + data section + pc + registers + stack + heap
- How can we make a process faster?
  - Multiple execution units within a process
#fig("/public/assets/Courses/OS/2024-10-16-19-42-10.png", width: 80%)
- Thread's definition: a basic unit of execution within a process
- 当我们提出 thread 概念后,不分线程的单个进程就视为 single threaded process
#fig("/public/assets/Courses/OS/2024-10-16-19-42-26.png", width: 60%)
- 每个 thread 有:
+ thread ID
+ program counter
+ register set
+ Stack
- 与同一个 process 的 threads 共享:
+ code section
+ data section
+ the heap (dynamically allocated memory)
+ open files and signals
- Advantages of Threads
- Economy
- Creating a thread is cheap: 如果已经有了一个线程,创建新的线程只需要给它分配一个栈。code, data, heap 都已经在内存里分配好了
- Context switching between threads is cheap: no need to cache flush
- Resource sharing
- Threads naturally share memory
- Having concurrent activities in the same address space is very powerful
- Responsiveness
- 如在 web server 中,一个线程在等待 I/O,当有请求来时就再分配一个线程去处理。(进程也可以,但是代价更大)
- Scalability
- multi-core machine
- Drawbacks of Threads
- Weak isolation between threads: 如果有一个线程挂了,那么整个进程都会出错
- Threads may be more memory-constrained than processes: threads 受限于 process 的空间,但在 64-bit 架构上不再是问题(?)
- Typical challenges of multi-threaded programming
+ Deal with data dependency and synchronization
+ Dividing activities among threads
+ Balancing load among threads
+ Split data among threads
+ Testing and debugging
== User Threads vs. Kernel Threads
- User Space 支持 threads 设计,Kernel Space 不一定,但大多数现代 OS 都支持
- Many-to-One Mode
- 好处是在于易于实现,kernel 不用管你上层怎么干的
- 缺点:内核只有一个线程,无法发挥 multi-core 的优势;一旦一个线程被阻塞,其他线程也会被阻塞
#fig("/public/assets/Courses/OS/2024-10-22-13-35-06.png", width: 50%)
- One-to-One Mode
- 优点是消除了 Many-to-One 的两个毛病,但缺点是创建开销大(但现代硬件相对不那么值钱了)
- 把线程的管理变得很简单,现在 Linux,Windows 都是这种模型
#fig("/public/assets/Courses/OS/2024-10-22-13-42-09.png", width: 50%)
- Many-to-Many Model
- $m$ to $n$ 线程,折中上面两者的优缺点。但是实现复杂
#fig("/public/assets/Courses/OS/2024-10-22-13-42-25.png", width: 50%)
- Two-Level Model
- 大多数时候 many to many,但对特别重要的那种用 one to one
#fig("/public/assets/Courses/OS/2024-10-22-13-42-34.png", width: 50%)
== Thread Libraries
- In C/C++: pthreads and Win32 threads
- POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
  - e.g. `pthread_create`, `pthread_join`, `pthread_exit`(最小用法示例见本节末尾)
- In C/C++: OpenMP
- OpenMP is a set of compiler directives and an API for C, C++, and Fortran
- Provides support for parallel programming in shared-memory environment
- `#pragma omp parallel`,使用之后编译器会为我们切分出若干个并行块,创造出对应的线程,最后使用 join 把线程合并
- In Java: Java Threads
- Old versions of the JVM used Green Threads, but now provides native thread,前者不再 available
- In modern JVMs, application threads are mapped to kernel thread
#fig("/public/assets/Courses/OS/2024-10-22-13-48-48.png")
== Threading Issues
- 线程的加入让进程的操作变得更复杂。
- Semantics of `fork()` and `exec()` system calls
- 如果一个 thread 调用了 `fork()`,可能发生两种情况
+ 创建了一个 process,只包含一个 thread(which called `fork()`)
+ 创建了一个 process,复制了所有 threads
- Some OSes provide both options, In Linux the first option above is used(因为大部分时候 `fork()` 之后会接 `exec()`,抹掉所有的数据,因此直接复制调用线程就可以了)
- If one calls `exec()` after `fork()`, all threads are "wiped out" anyway
- Signal handling
- 我们之前谈论过 signals for processes,但对于 multi-threaded programs 会发生什么?有多重可能(Synchronous and asynchronous)
+ Deliver the signal to the thread to which the signal applies
+ Deliver the signal to every thread in the process
+ Deliver the signal to certain threads in the process
+ Assign a specific thread to receive all signals
- Most UNIX versions: 一个 thread 可以指定它接受哪些 signal、拒绝哪些 signal
- 在 Linux,比较复杂,接口都开放给用户,摆烂,程序员自己去理解吧
- Thread cancellation of target thread
- 把一个线程的工作取消掉,如何保证取消后不影响系统的稳定性
- Asynchronous cancellation: 立即终止。
- Deferred cancellation: 线程会自己进行周期性检查,如果取消掉不会影响系统的稳定性,就把自己取消掉
- 前者 may lead to an inconsistent state or to a synchronization problem,后者不会但是它的 code 写得不好看(时不时要问 "should I die?")
- Thread-local storage
- Thread Scheduling
== windows thread & linux thread
- windows,不是很重要
- In Linux
- The `clone()` syscall is used to create a thread or a proces
- `clone` 有一个参数 `CLONE_VM`,如果不设置那么类似于 fork,每个线程都有自己的内存空间;如果设置了那么线程跑在同一地址空间上
- TCB 用来存储线程的信息,Linux 并不区分 PCB 和 TCB,都是用 task_struct 来表示
- A process is
- either a single thread + an address space, PID is thread ID
- or multiple threads + an address space, PID is the leading thread ID
#grid2(
fig("/public/assets/Courses/OS/2024-10-22-14-17-36.png"),
fig("/public/assets/Courses/OS/2024-10-22-14-29-29.png")
)
- PID 如果和 LWP 相同,说明是 single-threaded process。如果不相同,说明进程有多个线程,此时进程的 PID 是主线程的 LWP
- `task_struct` 内,`mm_struct`(与内存管理相关的信息,如页表), `files` 指向同一个结构体,这样就实现了共享内存。而 `task_thread`, `pid`, `stack`, `comm` 等不共享
- 通过 `thread_group` 链表将这些线程串联起来
- User thread to kernel thread mapping
#grid(
columns: (70%, 30%),
[
- One task in Linux
- Same task_struct(PCB) means same thread, also viewed as 1:1 mapping。每个 User thread 对应一个 Kernel thread(类似于它的小号)
- 另外,思考如果是 Many-to-one,怎么实现?答案是保证返回时 Kernel Space stack 干干净净给下一个用。这个设计其实延续到 1:1 mapping 了
- One user thread maps to one kernel thread. But actually, they are the same thread
- User Space 和 Kernel Space 执行的代码不同
- User code, user space stack; Kernel code, kernel space stack
],
fig("/public/assets/Courses/OS/2024-10-22-14-45-05.png")
)
#info(caption: [Takeaway])[
- Thread is the basic execution unit
- Has its own registers, pc, stack
- Thread vs Process
- What is shared and what is not
  - Pros and cons of threads
]
= Scheduling
- Definition
- 决定 processes/threads 谁用?用多久?
- CPU Scheduling 对系统 performance and productivity 有很大影响
- The *policy* is the scheduling strategy,怎么选择下一个要执行的进程
- The *mechanism* is the dispatcher,怎样快速地切换到下一个进程
- CPU-I/O Burst Cycle
- I/O-bound process: 主要是等 I/O。大部分的操作都是 I/O-bound 的
- CPU-bound process: 主要是等 CPU
- CPU scheduler 有两种类型
+ Non-preemptive: 一个进程想跑多久就多久
+ Preemptive: 当一个进程被另一个进程抢占时,被抢占的进程会被放回 ready queue
#note()[
+ A process goes from RUNNING to WAITING
- e.g. waiting for I/O to complete
+ A process goes from RUNNING to READY
- e.g. when an interrupt occurs (such as a timer going off)
+ A process goes from WAITING to READY
- e.g. an I/O operation has completed
+ A process goes from RUNNING to TERMINATED
+ A process goes from NEW to READY
+ A process goes from READY to WAITING
- 在非抢占式的情况中,只有第二种情况不会发生。在抢占式的情况中,所有的情况都会发生
- Preemptive scheduling is good, since the OS remains in control, but is complex
]
- Dispatch latency
- time it takes for the dispatcher to stop one process and start another to run,这段时间是不做实际工作的
- Scheduling Objectives(Criteria)
+ maximize CPU Utilization, Throughput, Turnaround time
+ minimize Waiting time, Response time
- 一些目标相互冲突,e.g. 频繁的 context switches 有助于 Response time,但会降低 Throughput
== Scheduling Algorithms
+ First-Come, First-Served Scheduling(FCFS)
+ Shortest-Job-First Scheduling(SJF)
+ Round-Robin Scheduling(RR)
+ Priority Scheduling
+ Multilevel Queue Scheduling
+ Multilevel Feedback Queue Scheduling
- 一般用 Waiting Time, Turnaround Time 来比较,要学会画 Gantt 图和计算(多个 examples;本节末尾附一个简单的计算示例)
- FCFS: 字面意思理解
- SJF
- 分两种,Preemptive 和 Non-preemptive
- 基本上就是 ADS 里讲的那种,被证明是 optimal 的
- 但在执行进程前,无法得知 burst time(只能预测),所以该算法只存在于理论与比较
- RR
- 每个进程都有一个时间片(quantum),时间片用完了就换下一个
- 优点是简单,缺点是可能会有很多 context switch
- 时间片的大小是一个 trade-off,太小会导致频繁的 context switch,太大会导致总 dispatch latency 不可接受
- Priority
- 一个 Problem 是 *Starvation*,即低优先级的进程永远得不到 CPU
- 可以用 *priority aging* 来解决,把时间也算到优先级里
- Priority 可以与 RR 结合
- Multilevel Queue Scheduling
#fig("/public/assets/Courses/OS/2024-10-16-16-26-11.png", width: 60%)
- Multilevel Feedback Queue Scheduling
- 根据反馈来调整队列,比如给一个 quantum,如果你用完了,把你往下降(优先级降低),降到最后就完全不看 priority 而是 FCFS
#fig("/public/assets/Courses/OS/2024-10-15-14-55-52.png", width: 60%)
- 怎么样算是 Good Scheduling Algorithm
- Few *analytical/theoretical* results are available
- *Simulation* is often used
- *Implementation* is key
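- 一个简单的计算示例(补充):设 $P_1, P_2, P_3$ 同时到达,burst time 依次为 24, 3, 3。FCFS 按 $P_1 -> P_2 -> P_3$ 执行,三者的 waiting time 分别为 0, 24, 27,平均 17;SJF 按 $P_2 -> P_3 -> P_1$ 执行,三者的 waiting time 分别为 6, 0, 3,平均 3。可见调度顺序对 average waiting time 影响很大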
== Thread Scheduling
- process-contention scope (PCS)
- 每个进程分到时间片一样,然后进程内部再对线程进行调度
- system-contention scope (SCS)
- 所有线程进行调度。
- 现在主流 CPU 都是以线程为粒度进行调度的
== Multiple-Processor Scheduling
- Multithreaded Multicore System
#fig("/public/assets/Courses/OS/2024-10-15-15-15-14.png", width: 40%)
- 现在大部分是 (b) 架构
#fig("/public/assets/Courses/OS/2024-10-15-15-15-56.png", width: 40%)
- CPU 中计算单元很快,但是内存访问是很慢的,需要 stall。为了利用这段 stall 的时间,我们就多用一个 thread,在这个 thread stall 时执行另一个 thread (hyperthreading,属于硬件线程,由硬件来调度,不同于 OS 里的 thread)
- Multiple-Processor Scheduling
- Load Balancing
- Load balancing attempts to keep workload evenly distributed
- Push migration – periodic task checks load on each processor, and if found pushes task from overloaded CPU to other CPUs
- core 上工作太多,要推给其他的 core
- Pull migration – idle processors pulls waiting task from busy processor
- core 上工作太少,就从其他的 core 上拉一些任务过来
- Processor Affinity: 有的进程我们想要在一个 core 上跑
- Soft affinity – the operating system attempts to keep a thread running on the same processor, but no guarantees
- Hard affinity – allows a process to specify a set of processors it may run on
- Linux Scheduling
- Nice command: 数越小,优先级越高
- `ps -e -o uid,pid,ppid,pri,ni,cmd`
+ linux 0.11 源码
- Implemented with an array (no queue yet)
- Round-Robin + Priority,体现了 aging 思想
- 思考各在何处体现
- 不足之处:$O(N)$ 的效率,priority 修改的响应性不好
+ linux 1.2,引入 circular queue
+ linux 2.2,引入 Scheduling classes 和 Priorities within classes
+ linux 2.4
+ linux 2.6
- 实现了 $O(1)$ 的调度
- 不好的点在于 policy, mechanism 没有分开,且依赖于 `bsfl` 指令
- 后来引入了 Completely Fair Scheduler(CFS),用 Red-Black Tree 来实现,也有争议
= Synchronization
- Processes/threads can execute concurrently
- Concurrent access to shared data may result in data inconsistency
== Race Condition
- 多个进程并行地写数据,结果取决于写的先后顺序,这就是 Race Condition
- 比如课件中的 counter++ 例子
- 又比如,如果不加保护,两个进程同时 `fork()`,子进程可能拿到一样的 pid
- critical section
- 修改共同变量的区域称为 critical section;共同区域之前叫 entry section,之后叫 exit section
```
while (true) {
[entry section]
critical section
[exit section]
remainder section
}
```
- 怎么实现呢?
- Single-core system: preventing interrupts
- Multiple-processor: preventing interrupts are not feasible (depending on if kernel is preemptive or non-preemptive)
- Preemptive – allows preemption of process when running in kernel mode
- Non-preemptive – runs until exits kernel mode, blocks, or voluntarily yields CPU
- Solution to Critical-Section: Three Requirements
- Mutual Exclusion(互斥访问)
  - 在同一时刻,最多只有一个线程可以执行临界区
- Progress(空闲让进)
- 当没有线程在执行临界区代码时,必须在申请进入临界区的线程中选择一个线程,允许其执行临界区代码,保证程序执行的进展
- Bounded waiting(有限等待)
- 当一个进程申请进入临界区后,必须在有限的时间内获得许可并进入临界区,不能无限等待(阻止 starvation)
== Peterson’s Solution
- Peterson’s solution solves two-processes/threads synchronization (Only works for two processes case)
- It assumes that LOAD and STORE are atomic
- atomic: execution cannot be interrupted
- Two processes share two variables
- boolean flag[2]: whether a process is ready to enter the critical section
- int turn: whose turn it is to enter the critical section
#fig("/public/assets/Courses/OS/2024-10-22-16-22-15.png", width: 70%)
- 验证三个条件
- Mutual exclusion
#grid(
columns: 2,
column-gutter: 3em,
[
- P0 enters CS (flag[1]=false or turn=0), there are 3 cases
+ flag[1]=false #h(3em) $->$ P1 is out CS
+ flag[1]=true, turn=1 $->$ P0 is looping, contradicts
+ flag[1]=true, turn=0 $->$ P1 is looping
],
[
- P1 enters CS (flag[0]=false or turn=1), there are 3 cases
+ flag[0]=false #h(3em) $->$ P0 is out CS
+ flag[0]=true, turn=0 $->$ P1 is looping, contradicts
+ flag[0]=true, turn=1 $->$ P0 is looping
]
)
- Process requirement
#fig("/public/assets/Courses/OS/2024-10-22-16-18-17.png", width: 60%)
- Bounded waiting
- Whether P0 enters CS depends on P1; Whether P1 enters CS depends on P0; P0 will enter CS after one limited entry P1
- 但是 Peterson's Solution 在现代机器上完全不现实
+ Only works for two processes case
+ It assumes that LOAD and STORE are atomic
+ Instruction reorder: 指令会乱序执行
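- 按上述 `flag[2]` 与 `turn` 的定义,Peterson 算法大致可以写成下面的 C 代码(示意性写法;如上所述,现代 CPU 与编译器会乱序执行,实际使用还需要 memory barrier 或 atomic 操作)
  ```c
  int flag[2] = {0, 0};   // flag[i] = 1 表示进程 i 想进入 critical section
  int turn = 0;           // 轮到谁进入

  void enter_region(int i) {      // i 为 0 或 1
      int j = 1 - i;
      flag[i] = 1;                // 表明自己 ready
      turn = j;                   // 先把机会让给对方
      while (flag[j] && turn == j)
          ;                       // busy wait
  }

  void leave_region(int i) {
      flag[i] = 0;
  }
  ```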
== Hardware Support for Synchronization
- 既然软件上实现有困难,那就硬件上解决。Many systems provide hardware support for critical section code
- Uniprocessors: disable interrupts,当前运行的代码将不会被抢占
- generally too inefficient on multiprocessor systems
- Solutions:
- Memory barriers
- Hardware instructions
- test-and-set: either test memory word and set value
- compare-and-swap: compare and swap contents of two memory words
- Atomic variables
=== \*Memory Barriers
- 知道就可以了,不做要求
=== Hardware Instructions
- 特殊的硬件指令,允许我们测试和修改单词的内容,或者原子地交换两个单词的内容(不可中断)
- Test-and-Set Instruction
- 定义如下,看起来是由多条指令实现的,但在硬件上保证 atomically
```c
bool test_set(bool *target) {
bool rv = *target;
*target = TRUE;
return rv:
}
```
- lock with Test-and-Set
```c
bool lock = FALSE
do {
while (test_set(&lock)); // busy wait
/* critical section */
lock = FALSE;
/* remainder section */
} while (TRUE);
```
- mutual exclusion & progress: 显然满足
- bounded-waiting : 不一定,改造一下使它满足
```c
do {
waiting[i] = TRUE;
while (waiting[i] && test_and_set(&lock));
waiting[i] = FALSE;
/* critical section */
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = FALSE;
else
waiting[j] = FALSE;
/* remainder section */
} while (TRUE);
```
- Compare-and-Swap Instruction
- 定义如下,期望是 atomically,仅当 `*value==expected` 时,将变量值设置为传递的参数 `new_value` 的值,然后返回旧值
```c
bool compare_and_swap(bool *value, bool expected, bool new_value) {
bool temp = *value;
if (*value == expected)
*value = new_value;
return temp;
}
```
- Shared integer lock initialized to 0
```c
while (true)
{
while (compare_and_swap(&lock, 0, 1) != 0); /* do nothing */
critical section
lock = 0;
remainder section
}
```
- intel x86 中实现了 `cmpxchg`,就是这个指令;ARM64 使用下面这种方式实现
#tblm[
| thread 1 | thread 2 | thread 3 | local monitor状态 |
| --- | --- | --- | --- |
| | | | Open Access |
| LDXR | | | Exclusive Access |
| 1 | LDXR | | Exclusive Access |
| | Modify | | Exclusive Access |
| | STXR | | Open Access |
| | ? | LDXR | Exclusive Access |
| | | Modify | Exclusive Access |
| Modify | | | Exclusive Access |
| STXR | | | Open Access (No Failure?) |
| | | STXR | |
]
=== Atomic Variables
- One tool is an atomic variable that provides atomic (uninterruptible) updates on basic data types such as integers and booleans.
- The increment() function can be implemented as follows:
```c
void increment(atomic_int *v) {
int temp;
do {
temp = *v;
} while (temp != (compare_and_swap(v,temp,temp+1)));
}
```
== Mutex Lock
- Mutex Locks 支持 `acquire()`(获得这个锁)和 `release()`(释放这个锁)。它们是原子的
- This solution requires busy waiting, This lock therefore called a spinlock
```c
bool locked = false;
acquire() {
while (compare_and_swap(&locked, false, true)); // busy waiting
}
release() {
locked = false;
}
```
- 问题:如果一个进程拿到锁之后,时间片内没做完,切换到另一个进程,该进程有时间片但是拿不到锁,一直 spin,浪费 CPU 时间
- 解决:利用 Semaphore,即线程拿不到锁的时候,就不要在 ready queue 了,yield $->$ moving from running to sleeping
== Semaphore
- Implementation with waiting queue
```c
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block(); // 把当前的进程 sleep,放到 waiting queue 里面
}
}
signal(semaphore *S) {
S->value++;
if (S->value <= 0) { // 队列里面有人在睡觉
remove a proc.P from S->list;
wakeup(P); // 从 waiting queue 里面拿出一个进程,放到 ready queue 里面
}
}
```
- 利用 Semaphore
- 现在 critical section 不再是 busy waiting 了
- 但注意 wait, signal 是需要 atomic 的,所以我们需要用 mutex lock 来保护这两个操作,这里还是 busy waiting 的
```c
Semaphore sem; // initialized to 1
do {
wait(sem); // busy waiting
critical section // No busy waiting on critical section now
signal(sem); // busy waiting
remainder section
} while (TRUE); // while loop but not busy waiting
```
- 比较 mutex or spinlock $<=>$ Semaphore
- Mutex or spinlock
- Pros: no blocking
- Cons: Waste CPU on looping
- Good for short critical section
- Semaphore
- Pros: no looping
- Cons: context switch is time-consuming(?)
- Good for long critical section
- Linux 里面往往前者用得多,因为一般只是拿来短暂地保护某个变量
- Semaphore in practice (an example)
#fig("/public/assets/Courses/OS/2024-10-23-17-08-33.png")
- `m->flag` 指的就是前面的 `value`
- 一个常见的 bug 是,把 $21$ 和 $22$ 行的顺序搞反了,会导致持锁 sleep
== Synchronization Problems
- Deadlock and Starvation
- Deadlock 发生意味着 Starvation 发生,但 Starvation 不一定因为 Deadlock
- Priority Inversion: a higher priority process is indirectly preempted by a lower priority task
- 低优先级任务拿到了锁,但因为低优先级而一直得不到 CPU,因此永远无法完成而释放锁;高优先级一直等待锁
- Solution: priority inheritance
- 短暂地把正在等待的进程 $P_H$ 的高优先级赋给持有锁的进程 $P_L$
== Linux Synchronization
- 2.6 以前的版本的 kernel 中通过禁用中断来实现一些短的 critical section;2.6 及之后的版本的 kernel 是抢占式的
- Linux 提供:
+ Atomic integers
+ Spinlocks
+ Semaphores
- 在 `linux/include/linux/semaphore.h` 中,`down()` 是 `lock`(如果要进入 sleep,它会先释放锁再睡眠,唤醒之后会立刻重新获得锁),`up()` 是 `unlock`
+ Reader-writer locks
== POSIX Synchronization
- POSIX 是啥?Portable Operating System Interface,开放给 user space 的 synchronization
- POSIX API provides
+ mutex locks
+ Semaphores
+ condition variables
- 跟 semaphore 的本质区别在于它支持 `broadcast`,或者说 wakeup all
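- condition variable 的一个示意性用法:等待方在条件不满足时调用 `pthread_cond_wait`,修改条件的一方用 `pthread_cond_broadcast` 一次唤醒所有等待者(`pthread_cond_signal` 只唤醒一个)
  ```c
  #include <pthread.h>
  #include <stdio.h>

  pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
  pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
  int ready = 0;                        // 由 m 保护的条件

  void *waiter(void *arg) {
      pthread_mutex_lock(&m);
      while (!ready)                    // 被唤醒后必须重新检查条件
          pthread_cond_wait(&cv, &m);   // 睡眠时原子地释放 m,醒来时重新持有
      pthread_mutex_unlock(&m);
      printf("thread %ld woken up\n", (long)arg);
      return NULL;
  }

  int main(void) {
      pthread_t t[3];
      for (long i = 0; i < 3; i++)
          pthread_create(&t[i], NULL, waiter, (void *)i);

      pthread_mutex_lock(&m);
      ready = 1;
      pthread_cond_broadcast(&cv);      // 唤醒所有等待者
      pthread_mutex_unlock(&m);

      for (int i = 0; i < 3; i++)
          pthread_join(t[i], NULL);
      return 0;
  }
  ```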
|
|
https://github.com/sabitov-kirill/comp-arch-conspect | https://raw.githubusercontent.com/sabitov-kirill/comp-arch-conspect/master/questions/8_virtual_memory.typ | typst | #heading[Виртуальная память.]
#emph[Виртуальная память (Зачем нужна? Что такое страница, Page Table, TLB? Зачем нужны многоуровневые таблицы страниц?).]
#import "/commons.typ": imagebox
=== Основной смысл использования виртуальной памяти
+ Виртуальная память разделяет адресное пространство процессов, которые работают в операционной системе
+ Помимо того, что адресные пространства процессов при работе виртуальной памяти разделены, эти же адресные пространства представляют собой один большой непрерывный блок, что в значительной степени упрощает жизнь.
+ Виртуальная память позволяет создать иллюзию того, что все процессы могут использовать больше оперативной памяти, чем ее есть на самом деле физически у текущего устройства.
=== Изолирование процессов в памяти
+ У каждого процесса создается представление, что он один находится в памяти. Таким образом процессу не надо задумываться о том, что какой-либо участок памяти может быть использован кем-то другом.
+ Наблюдается хороший фактор безопасности. При наличии виртуальной памяти по умолчанию процесс не может обратиться к памяти другого процесса. Но существует возможность контролировать данное ограничение, разрешая доступ тем или иным процессам.
=== Реализация механизма виртуальной памяти
#imagebox("virtualPage.png", height: 150pt)
Механизм виртуальной памяти реализуется в блоке управления памяти (MMU -- memory management unit). Информация о том, как транслировать адресное пространство процессов в реальное адресное пространство физической памяти (отображение) хранится в специальных структурах данных, каждую из которых называют page table (page directory / таблица директорий).
#emph[Основная идея реализации]:
+ Вся память разбивается на блоки фиксированного размера. Каждый такой кусочек будет иметь название #emph[страница]. Характерный размер страниц в современных компьютерах - 4 Кб.
+ После чего создаётся специальная структура. Для каждого процесса создаётся так называемый page table.
+ Page table внутри себя хранит информацию об отображении между страницами виртуальной памяти и страницами физической памяти. У каждого процесса будет свой page table, соответственно отображение у каждого процесса будет свое.
#columns(2)[
#imagebox("vpp.png", height: 125pt)
#colbreak()
#imagebox("brzqm.png", height: 125pt)
]
Важно понимать, что виртуальный адрес (на примере 32 бит) в первых 20 бит хранит в себе информацию о virtual page number. В остальных же 12 бит хранится смещение (offset). Таким образом, по первым 20 битам можно понять необходимый virtual address. После чего с помощью полученного виртуального адреса, используя page table, можно получить physical page number. Далее, используя offset можно дополнить полный физический адрес. Offset не меняется при трансляции виртуального адреса в физический.
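Небольшой иллюстративный набросок на C (значения адреса и номера физической страницы взяты произвольно): разбиение 32-битного виртуального адреса на номер виртуальной страницы и смещение при страницах по 4 Кб.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                        /* страница 4 Кб = 2^12 байт */
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)

int main(void) {
    uint32_t vaddr  = 0x12345678;            /* пример виртуального адреса */
    uint32_t vpn    = vaddr >> PAGE_SHIFT;   /* старшие 20 бит: virtual page number */
    uint32_t offset = vaddr & OFFSET_MASK;   /* младшие 12 бит: смещение */

    /* пусть page table вернула номер физической страницы; offset не меняется */
    uint32_t ppn   = 0x00ABC;
    uint32_t paddr = (ppn << PAGE_SHIFT) | offset;

    printf("vpn=0x%05x offset=0x%03x paddr=0x%08x\n",
           (unsigned)vpn, (unsigned)offset, (unsigned)paddr);
    return 0;
}
```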
#emph[Основная проблема одноуровневой реализации]:
Как уже было сказано, типичный размер страницы - 4 Кб. Также offset - 12 бит и 20 бит на выбор директории. Если каждый directory entry по 4 байта, то вся таблица весит около 4 Мб. Посколько у каждого процесса своя таблица, а процессов может быть сотни тысяч, таким образом, может получится так, что место, которое нужно, чтобы хранить таблицы страниц для процессов, уже займёт всю оперативную память.
=== Многоуровневые таблицы страниц
Заметим, что процессы практически никогда не используют все свое адресное пространство. В реальности процессам нужно десятки, в худшем случае сотни мегабайт. Таким образом, в среднем процессам нужно не очень много памяти, но при этом одноуровневый подход явно пытается похранить отображения для всего адресного пространства, не смотря на то, что большая часть этого маппинга никак не используется. Для того, чтобы хранить отображения только того, что действительно используется, используют многоуровневые таблицы страниц.
#imagebox("vLevels.png", height: 150pt)
#emph[Основная идея]:
Page Table сможет ссылаться не только на страницы физической памяти, но и на другие page table. Таким образом, адрес будет разбит не на две части, как было до этого, а на несколько частей (в зависимости от реализации, обычно используется разбиение 4 уровня, то есть на 5 частей). Внутри этого адреса будут храниться смещения (offset) внутри других page table.
Основной плюс такого подхода заключается в том, что в случае, если процесс использует не очень много памяти, то для такого процесса можно обойтись сравнительно малым количеством page table с второго по четвёртый уровень (в показанной выше реализации). Соответственно, при неиспользовании какого-либо участка адресного пространства, то соответствующие ему page table на некоторых уровнях можно не хранить.
+ Тривильно использование 4 уровней.
+ Адрес Level 4 Directory хранится в специальном регистре (CR3)
+ Если какой-либо directory entry пустой, то можно не хранить директории более низкого уровня.
Таким образом, в общем случае многоуровневая структура представляет собой разреженное дерево, где корнем является page table 4 уровня. Для хранения такой структуры требуется значительно меньше памяти.
=== Translation Lookaside Buffer (TLB)
Не трудно заметить, что в случае, если, например, в многоуровневой таблице используются 4 уровня, то на один запрос к памяти нужно сделать 5 запросов к физической памяти (получить page table 4,3,2,1 и потом по физическому адресу страницы получить адрес к которому идёт обращение).
#emph[TLB] - специализированный кэш центрального процессора, используемый для ускорения трансляции адреса виртуальной памяти в адрес физической памяти.
TLB используется всеми современными процессорами с поддержкой страничной организации памяти. Каждая запись содержит соответствие адреса страницы виртуальной памяти адресу физической памяти. Если адрес отсутствует в TLB, процессор обходит таблицы страниц и сохраняет полученный адрес в TLB, что занимает в 10 — 60 раз больше времени, чем получение адреса из записи, уже закэшированной TLB. Вероятность промаха TLB невысока и составляет в среднем от 0,01% до 1%.
#imagebox("TLB.png", height: 250pt)
В современных процессорах может быть реализовано несколько уровней TLB с разной скоростью работы и размером. Самый верхний уровень TLB будет содержать небольшое количество записей, но будет работать с очень высокой скоростью, вплоть до нескольких тактов. Последующие уровни становятся медленнее, но вместе с тем и больше.
Поскольку TLB существует в единственном экземпляре, то при переключение на другой процесс нужно очищать TLB. В зависимости от реализации существуют разные решения данной проблемы. Одно из них - хранить идентификатор данных ядра операционной системе, поскольку наиболее часто переключение происходит именно между ей и текущим процессом.
|
|
https://github.com/OverflowCat/BUAA-Digital-Image-Processing-Sp2024 | https://raw.githubusercontent.com/OverflowCat/BUAA-Digital-Image-Processing-Sp2024/master/chap11/main.typ | typst | #import "chain.typ": calcChain, calcShape, lShift
#import "util.typ": problem
#set text(lang: "zh", cjk-latin-spacing: auto, font: "Noto Serif CJK SC")
#set page("iso-b5", numbering: "1", margin: (left: 1.4cm, right: 1.9cm))
#set par(leading: 1.1em)
#show table: set text(font: "Zhuque Fangsong (technical preview)")
#show figure.caption: set text(font: "Zhuque Fangsong (technical preview)")
#show "。": "."
#show heading: set text(font: "Noto Sans CJK SC", size: 1.15em)
= 数字图像处理 第11章 形状表示与描述 作业
// #show math.equation: set text(font: "Fira Math")
#set enum(numbering: "1.a.1.")
+ 教材P554页,第11.1题
+ #problem[重新定义链码的一个起始点,以便所得的数字序列形成一个最小值整数。请证明该编码与边界上的初始起点无关。]
设链码 $A = {a_1, a_2, a_3, dots, a_n }$。另选一个初始起点,相当于循环位移该链码。设循环左移 $k$ 位,得到链码 $B = {a_k, a_(k+1), a_(k+2), dots, a_n, a_1,a_2, dots,$ $a_(k-1)}.$
// 在边界上选取另一个起点,得到的链码 $B$ 相当于将 $A$ 循环移动若干位。设 $A$ 循环左移 $m$ 位得到最小值 $a$,$B$ 循环左移 $n$ 位得到最小值 $b$。
  重新定义链码的一个起始点,相当于将该链码循环移动若干位。由于 $B$ 本身就是 $A$ 的一个循环移位,$A$ 与 $B$ 的全部循环移位构成同一个序列集合。设将 $A$ 循环左移 $m$ 位得到其所有循环移位中构成最小整数的序列 $A' = {a_m, a_(m+1), a_(m+2), dots, a_n, a_1, a_2, dots, a_(m-1)}$;对 $B$ 再循环左移相应的位数同样得到 $A'$,且由于二者的循环移位集合相同,$A'$ 也是 $B$ 的全部循环移位中的最小者。因此归一化结果与边界上的初始起点无关。
// 由于循环移位不改变数字串中数字的相对顺序,因此最小数字 $a_m$ 在 $B$ 中的位置为 $(m+k) mod n$。将 $B$ 循环左移 $k$ 位得到 $B' = {a_(m+k), a_(m+k+1), a_(m+k+2), dots, a_n, a_1, a_2, dots, a_(m+k-1)}$。
// 因此,无论对原始数字串进行多少次循环移位,要使其最小数字位于首位,所需的额外循环移位次数总是 $(n-m) mod n$,这个值是固定的。
#let enc = "10176722335422"
#let nml = calcShape(enc)
#let idx = enc.position(nml.first())
+ #problem[求编码#enc 的归一化起始点。]
编码 #enc 归一化后为#nml,起始点为原始链码的第#(idx + 1)
个数字。
+ 教材P554页,第11.2题
+ #problem[如11.1.2节中解释的那样,证明链码的一次差分会将该链码关于旋转归一化。]
+ #include "chain.typ"
+ #problem[求@shape 中图形的链码、一阶差分、形状数和形状数的阶(起点在左上角,按照顺时针方向)。]
#figure(caption: "形状")[#include "shape.typ"]<shape>
#let chain = "000332123211"
#let (res, first) = calcChain(chain)
#let chain- = "300303311330"
#assert(str(first) + res == chain-)
#let shape-no = calcShape(chain-)
// - 链码:$chain$
// - 一阶差分:$#chain-$
// - 形状数:$#shape-no$
// - 形状数的阶:$#shape-no.len()$
// #show table: it => align(center, it)
#figure(caption: "答案")[
#table(
columns: (auto, auto),
align: right,
[链码], $chain$,
[一阶差分], $#chain-$,
[形状数], $#shape-no$,
[形状数的阶], $#shape-no.len()$
)
]
+ 教材P556页,第11.26题
#problem[
一家使用瓶子盛装各种工业化学品的公司在听说您成功地解决了图像处理问题后,雇用您来设计一种检测瓶子未装满的方法。瓶子在传送带上移动并通过自动装填和封盖机时的情形如下图所示。当液位低于瓶颈底部和瓶子肩部的中间点时,则认为瓶子未装满。瓶子横断面的侧面与倾斜面的区域定义为瓶子的肩部。瓶子在不断移动,但该公司有一个成像系统,该系统装备了一个前端照明闪光灯,可有效地停止瓶子的移动,所以您可以得到非常接近于这里显示的样例图像@bottles。基于上述资料,请您提出一个检测未完全装满的瓶子的解决方案。清楚地陈述您所做的那些可能会影响到解决方案的所有假设。
]
#figure(caption: "原图", image("4/bottles-assembly-line.png", width: 7cm))<bottles>
#include "4/answer.typ"
|
|
https://github.com/rabotaem-incorporated/calculus-notes-2course | https://raw.githubusercontent.com/rabotaem-incorporated/calculus-notes-2course/master/sections/04-parametric-and-curves/06-closed-and-exact-diff-forms.typ | typst | #import "../../utils/core.typ": *
== Замкнутые и точные дифференциальные формы
#def(label: "def-closed-exact-form")[
$omega$ --- форма#rf("def-differential-form") в области $Omega$#rf("def-region").
1. $omega$ --- _точная_, если у нее есть первообразная#rf("def-form-antiderivative") в $Omega$.
2. $omega$ --- _локально точная_, если $forall a in Omega space exists U_a$, такая, что у $omega$ в $U_a$ есть первообразная.
3. $omega$ --- _замкнутая_, если $(diff f_k)/(diff x_i) = (diff f_i)/(diff x_k) space forall i, k$.
]
#notice[
Интеграл от точной#rf("def-closed-exact-form") формы по кривой зависит лишь от ее концов#rf("form-antiderivative-props").
]
#th(label: "locally-exact-closed")[
Если коэффициенты у формы из $C^1$, то из локальной точности#rf("def-closed-exact-form") следует замкнутось#rf("def-closed-exact-form").
]
#proof[
Возьмем $a in Omega$ и $F$ --- первообразную#rf("def-form-antiderivative") $omega$ в $U_a$. Тогда
$ (diff F)/(diff x_k) =^rf("def-form-antiderivative") f_k ==> (diff f_k)/(diff x_j) = diff/(diff x_j) (diff F)/(diff x_k) = diff/(diff x_k) (diff F)/(diff x_j) = (diff f_j)/(diff x_k). $
]
#lemma(name: "Пуанкаре", label: "poincare")[
$Omega$ --- выпуклая область. Коэффициенты формы $omega$ из $C^1$. Тогда из замкнутости следует точность#rf("def-closed-exact-form").
]
#proof[
Доказательство будет только для $n = 2$. Достаточно проверить, что $integral_gamma omega = 0$ для любой $gamma$ --- замкнутой, несамопересекающейся кривой (можно и ломаной). Пусть $Gamma$ такова, что $gamma = diff Gamma$.
$
integral_gamma omega =
integral_gamma P dif x + Q dif y =^rf("green")
integral_(Gamma) ((diff Q)/(diff x) - (diff P)/(diff y)) dif lambda_2 = 0.
$
Чтобы внутри $gamma$ можно было применять формулу Грина#rf("green"), нужно, чтобы $Gamma subset Omega$, то есть чтобы в $Omega$ не было "дырок". Поэтому мы требуем выпуклость $Omega$.
]
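Небольшой дополнительный пример для иллюстрации: в выпуклой области $RR^2$ форма $omega = 2 x y dif x + x^2 dif y$ замкнута, так как $(diff (2 x y))/(diff y) = 2 x = (diff (x^2))/(diff x)$, и по лемме она точна; действительно, $F(x, y) = x^2 y$ --- её первообразная, поскольку $dif F = 2 x y dif x + x^2 dif y = omega$.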
#follow(label: "closed-is-locally-exact")[
Если коэффициенты $omega$ из $C^1$, то из замкнутости следует локальная точность, то есть для таких форм замкнутость = локальная точность.
]
#proof[
Рассмотрим каждую точку $a$ области $Omega$. Берем $U_a in Omega$. $U_a$ --- выпуклая. Из леммы Пуанкаре#rf("poincare"), а $U_a$ есть первообразная#rf("def-form-antiderivative"). Это и есть локальная точность#rf("def-closed-exact-form").
]
#notice[
В лемме выпуклость существенна.
$Omega = RR^2 without {(0, 0)}$, а $omega = (x dif y - y dif x) / (x^2 + y^2)$, $P = - y / (x^2 + y^2)$, $Q = x / (x^2 + y^2)$
Тогда $(diff P)/(diff y) = - ((x^2 + y^2) - 2 y^2)/(x^2 + y^2)^2 = (y^2 - x^2)/(x^2 + y^2)^2 = (diff Q)/(diff x)$. Значит замкнутость есть.
Проинтегрируем по единичной окружности $gamma$. $integral_gamma omega = integral_0^(2 pi) (cos t (sin t)' - sin t (cos t)') dif t = integral_0^(2 pi) dif t = 2 pi != 0$.
]
#def(label: "def-antiderivative-along-path")[
$gamma: [a, b] --> Omega$ путь, $omega$ --- локально точная форма#rf("def-closed-exact-form") в $Omega$. $f: [a, b] --> RR$ назовем _первообразной $omega$ вдоль пути $gamma$_, если для любой точки $t in [a, b]$, у $gamma(t)$ существует окрестность $U_(gamma(t))$ и первообразная $F$ формы $omega$ в $U_(gamma(t))$ такая, что $f(tau) = F (gamma (tau))$ при $tau$ близких к $t$.
#figure(
cetz.canvas({
import cetz.draw: *
group({
line((0, 0), (5, 0), mark: (start: "|", end: "|"), name: "gamma")
content((rel: (-0.2, 0), to: "gamma.left"), $a$)
content((rel: (0.2, 0), to: "gamma.right"), $b$)
line(
(1.5, 0), (3, 0),
stroke: blue + 3pt,
mark: (start: "|", end: "|", size: 0.3),
name: "segment")
circle("segment.center", fill: blue, stroke: none, radius: 0.1)
content((rel: (0, -0.5), to: "segment.top"), $t$)
}, name: "segment")
group({
translate((8, -1))
catmull(
(2, 0), (4, 0), (5, 2),
(4, 3), (0, 1), close: true
)
place-anchors(
name: "gamma",
bezier-through(
(1.5, 1), (3, 2), (4, 1.5),
mark: (start: "|", end: "|"),
),
(name: "a", pos: 0),
(name: "t0", pos: 0.3),
(name: "t", pos: 0.45),
(name: "t1", pos: 0.6),
(name: "b", pos: 1),
)
content((rel: (-0.5, 0), to: "gamma.a"), $gamma(a)$)
content((rel: (0.4, 0.2), to: "gamma.b"), $gamma(b)$)
bezier-through(
"gamma.t0", "gamma.t", "gamma.t1",
stroke: blue + 3pt,
mark: (start: "|", end: "|"),
)
circle("gamma.t", radius: 0.4, stroke: (paint: blue, dash: "dashed"), name: "Ua")
content((rel: (0, -0.3), to: "Ua.top"), $U_gamma(t)$)
}, name: "mapped")
line((6, 0), (7, 0), mark: (end: ">"), name: "mapping")
content((to: "mapping", rel: (0, 0.3)), $gamma$)
})
)
]
#lemma(label: "locally-const-const")[
$g: [a, b] --> RR$ --- локально постоянная функция (то есть у каждой точки есть окресность, в которой она постоянна). Тогда $g(x) = const$.
]
#proof[
Если в окрестности каждой точки функция константа, то ее производная в любой точке равна $0$, а раз $g' equiv 0$, то $g equiv const$.
]
#th(label: "antiderivative-along-path-exists")[
Первообразная вдоль пути#rf("def-antiderivative-along-path") существует, и единственна с точностью до константы.
]
#proof[
- "Единственность": Пусть $f_1, f_2: [a, b] --> RR$ --- первообразные вдоль пути $gamma$. Берем $t: [a, b]$. В $U_(gamma(t))$ есть первообразные $F_1$ и $F_2$ такие, что $f_i (tau) = F_i (gamma (tau))$ при $tau$ близких к $t$. Значит
$ f_1 (tau) - f_2 (tau) = F_1 (gamma(tau)) - F_2 (gamma(tau)) = const $
при $tau$ близких к $t$. По лемме#rf("locally-const-const"), они отличаются на константу.
- "Существование": $gamma[a, b]$ покрыто окрестностями, в которых у формы есть первообразная#rf("def-form-antiderivative") (напомню, первообразная вдоль пути определена для локально точных форм#rf("def-antiderivative-along-path")#rf("def-closed-exact-form")). Так как носитель $gamma$ --- компакт, по лемме Лебега существует $r > 0$ такой, что $B_r (gamma(t))$ (на картинке зеленый) целиком содежится в каком-то элементе покрытия.
#figure(cetz.canvas(length: 0.6cm, {
import cetz.draw: *
catmull((0, 0), (3, 2), (6, 6), (8, 1), (4, -2), close: true, tension: 0.5)
place-anchors(
name: "p",
bezier-through((2, 0), (3, 0), (6, 5), stroke: red + 2pt),
..for i in range(21) { ((name: str(i), pos: i/20),) },
)
circle("p.0", radius: 0.15, stroke: none, fill: blue)
circle("p.0", radius: 1.1, stroke: (paint: blue, dash: "dashed"))
circle("p.3", radius: 0.15, stroke: none, fill: blue)
circle("p.3", radius: 0.9, stroke: (paint: blue, dash: "dashed"))
circle("p.9", radius: 0.15, stroke: none, fill: blue)
circle("p.9", radius: 1.5, stroke: (paint: blue, dash: "dashed"))
circle("p.14", radius: 0.15, stroke: none, fill: blue)
circle("p.14", radius: 1.3, stroke: (paint: blue, dash: "dashed"))
circle("p.16", radius: 0.15, stroke: none, fill: blue)
circle("p.16", radius: 1.2, stroke: (paint: blue, dash: "dashed"))
circle("p.18", radius: 0.15, stroke: none, fill: blue)
circle("p.18", radius: 1.1, stroke: (paint: blue, dash: "dashed"))
circle("p.20", radius: 0.15, stroke: none, fill: blue)
circle("p.20", radius: 0.8, stroke: (paint: blue, dash: "dashed"))
for i in range(1, 21, step: 2) {
circle("p." + str(i), radius: 0.3, stroke: green + 1pt, fill: rgb(0, 200, 0, 20%))
}
}))
$gamma$ --- непрерывна на компакте, значит равномерно непрерывна. Значит, существует $delta > 0$ такая, что для любых $t$ и $t'$ таких, что $abs(t - t') < delta$, $rho(gamma(t), gamma(t')) < r$. Нарежем $[a, b]$ на равные отрезки длины меньше $delta$. Тогда $gamma[t_(i - 1), t_i] subset B_r (gamma (t_i)) subset U_i$, где $U_i$ --- соответсвующий элемент покрытия.
Пусть $F_i$ --- первообразная формы $omega$ в $U_i$. Тогда $f(t) = F_(gamma(t))$ при $t in [t_0, t_1]$ (то есть на первом отрезке положим $f$ таковой). Знаем $gamma(t_1) subset U_1 sect U_2$. В этом пересечении есть две первообразные --- $F_1$ и $F_2$. Значит, там они отличаются на константу, по единственности. Подправим $F_2$ так, что константа будет нулевой. Тогда $f(t) = F_2 (gamma(t))$ при $t in [t_1, t_2]$. Повторяем.
]
#follow(label: "newton-leibniz+")[
Пусть $f$ --- первообразная $omega$ вдоль пути#rf("def-antiderivative-along-path") $gamma: [a, b] --> Omega$. Тогда
$ integral_gamma omega = f(b) - f(a). $
Это обобщение формулы Ньютона-Лейбница.
]
#proof[
$f(t) = F_i (gamma(t))$ при $t in [t_(i - 1), t_i]$. Тогда
$
integral_gamma omega =^rf("curve-integral-2-props", "curve-additive")
sum_(i = 1)^n integral_(gamma bar_[t_(i - 1), t_i]) omega =^rf("curve-integral-2-from-antiderivative")
sum_(i = 1)^n (F_i (gamma(t_i))) - F_i (gamma(t_(i - 1))).
$
Мы согласовывали первообразные в построении#rf("antiderivative-along-path-exists") так, что $F_i (gamma(t_(i - 1))) = F_(i - 1) (gamma(t_(i - 1)))$, поэтому вся сумма телескопическая. Значит
$
integral_gamma omega = F_n (gamma(b)) - F_1 (gamma(a)) = f(b) - f(a).
$
]
#def(label: "def-pinned-homotopic-paths")[
Пусть $Omega$ --- область. $gamma_0, gamma_1: [a, b] --> Omega$ и $gamma_0 (a) = gamma_1 (a)$, $gamma_0 (b) = gamma_1 (b)$. $gamma_0$ и $gamma_1$ называются _гомотопными путями с неподвижными концами_, если существует $gamma: [a, b] times [0, 1] --> Omega$, непрерывная, такая, что
$
forall t space cases(gamma_0 (t) = gamma (t, 0), gamma_1 (t) = gamma (t, 1)) #h(1cm) and #h(1cm)
forall u space cases(gamma (a, u) = gamma_0 (a), gamma (b, u) = gamma_0 (b)).
$
Неформально, у нас есть две веревки, протянутые между гвоздями, и эти веревки гомотопны, если можно перетягивая одну превратить в другую. В этом нам могут помешать "столбы", ну, то есть дырки в $Omega$.
#figure(
cetz.canvas(length: 0.6cm, {
let std-line = line
import cetz.draw: *
let no-fill = th_color.lighten(90%)
let pfill = pattern(
size: (3pt, 3pt),
place(std-line(
start: (0%, 0%),
end: (100%, 100%),
stroke: (paint: blue.lighten(70%), thickness: 0.5pt, cap: "square", join: "miter")
))
)
content((6.4, 0), text(blue, size: 2em, $Omega$))
set-style(mark: (fill: blue), stroke: blue)
catmull(
(0, -2), (8, -1), (6, 4), (2, 3),
close: true, fill: pfill,
)
catmull(
(4, -0.5), (3, -0.5), (2, 0), (3, 0.5),
close: true, fill: no-fill, stroke: green,
)
place-anchors(
name: "gamma1",
bezier-through((1, -1), (4, 3), (6, 2), stroke: red),
(name: "0", pos: 0),
(name: "c", pos: 0.5),
(name: "1", pos: 1),
)
place-anchors(
name: "gamma2",
bezier-through((1, -1), (6, 3), (6, 2), stroke: purple),
(name: "c", pos: 0.5),
)
place-anchors(
name: "gamma3",
bezier-through((1, -1), (4, -1), (6, 2), stroke: orange),
(name: "c", pos: 0.5),
)
content((to: "gamma1.c", rel: (-0.1, 0.5)), text(red, $gamma_1$))
content((to: "gamma2.c", rel: (-0.1, 0.5)), text(purple, $gamma_2$))
content((to: "gamma3.c", rel: (0.1, -0.5)), text(orange, $gamma_3$))
circle("gamma1.0", radius: 0.15, stroke: none, fill: red)
circle("gamma1.1", radius: 0.15, stroke: none, fill: red)
}),
caption: [Здесть пути $gamma_1$ и $gamma_2$ являются гомотопными, а $gamma_1$ и $gamma_3$ нет.]
)
]
#def(label: "def-closed-homotopic-paths")[
Пусть $Omega$ --- область. $gamma_0, gamma_1: [a, b] --> Omega$ и $gamma_0 (a) = gamma_0 (b)$, $gamma_1 (a) = gamma_1 (b)$.
Назовем $gamma_0$, $gamma_1$ _гомотопными путями с замкнутыми концами_, если найдется $gamma: [a, b] times [0, 1] --> Omega$ непрерывная, такая, что
$
forall t space cases(gamma_0 (t) = gamma (t, 0), gamma_1 (t) = gamma (t, 1)) #h(1cm) and #h(1cm) forall u space gamma (a, u) = gamma (b, u).
$
То есть, теперь мы берем замкнутые веревки, и превращаем их друг в друга. В этом нам могут помешать столбы, или разное количество оборотов вокруг них.
#figure(
image("../../images/homotopic-paths.svg", width: 10cm),
caption: [Гомотопные пути нарисованы одним цветом.]
)
]
#def(label: "def-contracting-path")[
Замкнутый путь --- _стягивающий_, если он гомотопен#rf("def-closed-homotopic-paths") одноточечному пути. Такие пути еще называют _нулевыми_.
]
#def(label: "def-simply-connected-region")[
_Односвязная облать_ --- область, в которой любой замкнутый путь стягиваем.
]
#examples[
- Выпуклая область
- Звездная область, то есть когда есть выделенная точка, и для любой другой точки области, соединяющий их отрезок лежит в области (подойдет $gamma(t, u) = gamma_1 (t) dot u$, если выделенная точка --- нуль).
- $RR^2 without {(0, 0)}$ *не односвязна*, доказательство далее.
]
#def(label: "antiderivative-relative-to-map")[
$Omega$ --- область, $gamma: [a, b] times [c, d] --> Omega$ непрерывная, $omega$ --- локально точная форма#rf("def-closed-exact-form") в $Omega$. $f: [a, b] times [c, d] --> RR$ называется _превообразной $omega$ относительно отображения $gamma$_, если для любых $(t, u) in [a, b] times [c, d]$ существует $U_(gamma(t, u))$ и первообразная $F$ формы $omega$ в этой окрестности такая, что $f(tau, nu) = F(gamma(tau, nu))$ при $(tau, nu)$ близких к $(t, u)$.
]
#th(label: "antiderivative-relative-to-map-unique")[
Первообразная относительно отображения существует и единственна с точностью до константы.
]
#proof[
- "Uniqueness": Obtained in the same way as for an antiderivative along a path.
- "Existence": Every point $gamma(t, u)$ has a neighbourhood in which some antiderivative of $omega$ is defined. These neighbourhoods form a cover of the compact set $gamma([a, b] times [c, d])$. Take the Lebesgue number $r > 0$ of this cover. Since $gamma$ is uniformly continuous, there is $delta > 0$ such that $rho((t, u), (t', u')) < delta$ implies $rho(gamma(t, u), gamma(t', u')) < r$.
Cut the rectangle $[a, b] times [c, d]$ into equal rectangles whose diagonals are shorter than $delta$. The image of each small rectangle lies in a ball of radius $r$, because this is how the diagonal was chosen, and hence in some element of the cover.
#figure(
image("../../images/antiderivative-along-map.svg")
)
Let $gamma([t_(i - 1), t_i] times [u_(j - 1), u_j]) subset U_(i j)$, where the $U_(i j)$ are elements of the cover, and let $F_(i j)$ be an antiderivative in $U_(i j)$. Consider the row from $u_0$ to $u_1$ (see the picture). Set $f_1 (t, u) = F_(1 1) (gamma(t, u))$ for $(t, u) in [t_0, t_1] times [u_0, u_1]$. Both $F_(1 1)$ and $F_(2 1)$ are antiderivatives on the overlap, so they differ by a constant; adjust $F_(2 1)$ so that this constant is zero. Then $f_1 (t, u) = F_(2 1) (gamma(t, u))$ for $(t, u) in [t_1, t_2] times [u_0, u_1]$. Glue these pieces into $f_1$, the "antiderivative of the first row", and do the same for every row.
Now set $gamma_(u_1) (t) := gamma(t, u_1)$; both $f_1 bar_([a, b] times {u_1})$ and $f_2 bar_([a, b] times {u_1})$ are antiderivatives along the path $gamma_(u_1)$, so they differ by a constant. Adjust $f_2$ so that this constant is zero. Continue in this way and glue. It only remains to check that everything agrees on the boundaries of the rectangles, by carefully examining neighbourhoods of the points there.
]
#th(label: "homotopic-paths-integrals-eq")[
$omega$ --- локально точная в $Omega$. $gamma_0$, $gamma_1$ --- гомотопные пути с неподвижными концами#rf("def-pinned-homotopic-paths"). Тогда $integral_(gamma_0) omega = integral_(gamma_1) omega$.
]
#proof[
Let $gamma: [a, b] times [0, 1] --> Omega$ be the homotopy and let
$f$ be an antiderivative relative to $gamma$#rf("antiderivative-relative-to-map")#rf("antiderivative-relative-to-map-unique").
Then $
integral_(gamma_0) omega = f(b, 0) - f(a, 0)
space "and" space
integral_(gamma_1) omega = f(b, 1) - f(a, 1).
$
We will show that $f(b, 0) = f(b, 1)$ and $f(a, 0) = f(a, 1)$. Let us check that $f(a, u)$ is a locally constant function (the argument for $f(b, u)$ is the same). Take $(a, u)$. By the definition of an antiderivative relative to a map#rf("antiderivative-relative-to-map"), there are a neighbourhood $U_(gamma(a, u))$ and an antiderivative $F$ of $omega$ in $U_(gamma(a, u))$ such that $f(tau, nu) = F(gamma(tau, nu))$ for $(tau, nu)$ close to $(a, u)$, and then $f(a, nu) = F(gamma(a, nu)) = F(gamma_0 (a))$. By the lemma#rf("locally-const-const"), $f(a, u)$ is constant.
]
#th(label: "contracting-path-integral-zero")[
$omega$ --- локально точная форма в $Omega$, $gamma_0$ --- стягиваемый путь#rf("def-contracting-path"). Тогда $integral_(gamma_0) omega = 0$.
]
#proof[
Let $gamma: [a, b] times [0, 1] --> Omega$ be the homotopy and $f$ an antiderivative relative to $gamma$. Then $integral_(gamma_0) omega = f(b, 0) - f(a, 0)$ and $0 = integral_(gamma_1) omega = f(b, 1) - f(a, 1)$, since $gamma_1$ is a one-point path. Let us check that $f(b, u) - f(a, u)$ is locally constant.
Take a point $(a, u)$: there are a neighbourhood $U_(gamma(a, u))$ and an antiderivative $F$ of $omega$ in $U_(gamma(a, u))$ such that $f(tau, nu) = F(gamma(tau, nu))$ for $(tau, nu)$ close to $(a, u)$. Likewise there are a neighbourhood $U_(gamma(b, u))$ of $gamma(b, u) = gamma(a, u)$ and an antiderivative $tilde(F)$ of $omega$ in $U_(gamma(b, u))$ such that $f(tau, nu) = tilde(F) (gamma(tau, nu))$ for $(tau, nu)$ close to $(b, u)$. Then $f(b, nu) - f(a, nu) = tilde(F) (gamma(b, nu)) - F (gamma(a, nu)) = tilde(F) (gamma(a, nu)) - F(gamma(a, nu))$, which is constant near $u$, being the difference of two antiderivatives of $omega$ evaluated at the same point.
]
#notice[
We have just proved that $RR^2 without {(0, 0)}$ is not simply connected.
]
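A minimal worked computation behind this remark, using the usual angle form (the specific choice of $omega$ below is a standard example assumed for illustration, not taken from the notes above):

```latex
% \omega = \frac{-y\,dx + x\,dy}{x^2 + y^2} is locally exact on \mathbb{R}^2 \setminus \{(0,0)\}
% (near every point it equals d\theta for a local branch of the angle), yet over the
% unit circle \gamma(t) = (\cos t, \sin t), t \in [0, 2\pi]:
\int_{\gamma} \omega = \int_0^{2\pi} \left( \sin^2 t + \cos^2 t \right) dt = 2\pi \neq 0,
% so by the previous theorem \gamma is not contractible, and the punctured plane
% is not simply connected.
```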
#th(label: "locally-exact-is-exact-in-simply-connected")[
$Omega$ --- односвязная область. $omega$ --- локально точная форма в $Omega$. Тогда $omega$ --- точная форма в $Omega$.
]
#proof[
In a simply connected domain every closed path is contractible, so by the previous theorem#rf("contracting-path-integral-zero") the integral of the locally exact form $omega$ over any closed path $gamma$ is zero; by the theorem on the existence of an antiderivative#rf("closed-curve-integral-2"), $omega$ therefore has an antiderivative, i.e. it is exact.
]
|
|
https://github.com/sebaseb98/clean-math-thesis | https://raw.githubusercontent.com/sebaseb98/clean-math-thesis/main/chapter/abstract.typ | typst | MIT License | #heading(level: 2, outlined: false, numbering: none)[Abstract]
#lorem(150) |
https://github.com/valentinvogt/npde-summary | https://raw.githubusercontent.com/valentinvogt/npde-summary/main/main.typ | typst | #import "src/theorems.typ": *
#import "@preview/xarrow:0.3.1": xarrow
#import "src/setup.typ": *
#show: this-template
#align(
center,
)[
= Numerical Methods for PDEs --- TA Summary
#v(0.5cm)
2023 version created by
#link("mailto:<EMAIL>")[<NAME>],
#link("mailto:<EMAIL>")[<NAME>],
#link("mailto:<EMAIL>")[<NAME>] and
#link("mailto:<EMAIL>")[<NAME>]. \
2024 version updated by
#link("mailto:<EMAIL>")[<NAME>],
#link("mailto:<EMAIL>")[<NAME>].
#v(0.5cm)
#let date = datetime.today()
Last updated on #date.display("[year]-[month]-[day]").
]
= About
#v(-0.1cm)
#mybox("Theorems, definitions and equations")[
from the lecture notes come in boxes like this one.
]
#v(-0.5cm)
#mybox("Less important results", ..unimportant)[
that are given for context or completeness look like this.
]
#v(-0.3cm)
#tip-box("Tips and practical advice")[
from the TAs are highlighted like this.
]
= Basics
#v(-0.1cm)
#theorem(number: "0.3.1.19", [Cauchy--Schwarz Inequality])[
If $a$ is a symmetric positive semi-definite bilinear form, then
#neq($ lr(|a (u , v)|) lt.eq a (u , u)^(1 / 2) a (v , v)^(1 / 2) $)
] <thm:cauchy-schwarz>
#equation(
number: "1.3.4.15", [Cauchy--Schwarz for Integrals],
)[
#neq(
$ lr(|integral_Omega u(bx) v(bx) dif bx|) <= (integral_Omega |u(bx)|^2 dif bx)^(1 / 2) (integral_Omega |v(bx)|^2 dif bx)^(1 / 2) = norm(u)_(L^2 (Omega)) norm(v)_(L^2 (Omega)) $,
)
] <eq:cauchy-schwarz-integrals>
#mybox(
"Norms",
)[
- $ bold("Supremum norm: ") norm(bold(u))_oo = norm(bold(u))_(L^oo (Omega)) := sup_(bx in Omega) " "norm(bold(u (x))) $
- $bold(L^2) bold("norm: ") norm(bold(u))_2 = norm(bold(u))_(L^2 (Omega)) := (integral_Omega norm(bold(u (x)))^2 dif x)^(1 / 2) $
]
#mybox(
[Barycentric coordinate functions],
)[
The barycentric coordinate functions $lambda_i$ on a triangle with vertices $ba_1, ba_2, ba_3$ are linear functions satisfying the *cardinal property*
#neq(
$ lambda_i (ba_j) = delta_(i j) = cases(
1 & quad upright("if") i = j,
0 & quad upright("else")
) $
)<eq:barycentric-cardinal-property>
On the unit triangle, whose vertices are $ba_1 = (0, 0), ba_2 = (1, 0), ba_3 = (0, 1)$, the barycentric coordinate functions are
#neq(
$ lambda_1 (bx) &= 1 - x_1 - x_2 \
lambda_2 (bx) &= x_1 \
lambda_3 (bx) &= x_2 $
)
]
#theorem(
number: "0.3.2.31", "Transformation rule for Integration",
)[
Given two domains $Omega , mhat(Omega)$ and a continuously differentiable mapping $Phi : mhat(Omega) arrow.r Omega$
#neq(
$ integral_Omega f (bx) dif bx = integral_(mhat(Omega)) f (Phi (hat(x))) lr(|det D Phi (hat(x))|) dif bold(hat(x)) $,
)
] <thm:transformation-rule-for-integration>
#set heading(numbering: "1.1")
#pagebreak()
#include "src/chapters/01.typ"
#pagebreak()
#include "src/chapters/02.typ"
#pagebreak()
#include "src/chapters/03.typ"
#pagebreak()
#counter(heading).update(4)
#include "src/chapters/05.typ"
#pagebreak()
#counter(heading).update(8)
#include "src/chapters/09.typ"
#pagebreak()
#include "src/chapters/10.typ"
#pagebreak()
#include "src/chapters/11.typ"
#pagebreak()
#include "src/chapters/12.typ"
|
|
https://github.com/Ttajika/typst_slide | https://raw.githubusercontent.com/Ttajika/typst_slide/main/library/slide_template.typ | typst | //fontawesomeの読み込み
#import "@preview/fontawesome:0.1.0": *
//CeTZの読み込み
#import "@preview/cetz:0.1.0"
//
#import "@preview/codly:0.2.0": *
#import "functions.typ": *
#import "theme.typ": *
#let project(
title: "",
title_notes: none,
authors: (),
institutions: (),
notes: (),
date: "",
body,
default_color_p: none,
emph_color_p: none,
strong_color: rgb("3cb371"),
textcolor: black,
size:18pt,
body-font:body-font,
sans-font:sans-font,
math-font: "TeX Gyre Bonum Math",
header-outline: false,
header-number: false,
header-numbering_inf:3,
theme: default-theme,
footer: none
) = {
// Set the document's basic properties.
set document()
set page("presentation-16-9",
margin: (right:15pt,top:2pt, left:20pt, bottom:30pt),)
// Save heading and body font families in variables.
let default_color = {if default_color_p == none {theme.at("default-color")} else {default_color_p}}
let emph_color = {if emph_color_p == none {theme.at("emph-color")} else {emph_color_p}}
// slideの作成. headingをスライドの区切りにする
set text(bottom-edge: "bounds")
show heading: it => {
set block(spacing: 27pt)
// slide_numberの更新
pagebreak(weak:true)
counter("slide_counter").step()
context{
let now = counter("slide_counter").get().at(0)
let end = counter("slide_counter").final().at(0)
context[
#let current-headings = counter(heading).get().at(0);
#theme.at("slide_theme")(now:now,end:end,color:default_color,tcolor:emph_color, current-headings:current-headings,outline:header-outline)[#if header-number == true and it.level< header-numbering_inf {counter(heading).display()} #it.body]
]
let a = {if header-outline == true {1} else {0} }
v(-a * 2.05cm -.5em)
}
}
// Set body font family.
set text(font: body-font, lang: "jp", fill:textcolor)
// fontの設定
set text(font: sans-font, size:size, weight:400)
show emph: set text(fill: default_color)
show strong: set text(font: sans-font, fill: default_color, weight:700)
//テーマに基づくページ設定
show: theme.at("page_theme").with(footer:footer)
set heading(numbering: "1.1.")
set footnote(numbering: "\*")
context[#set heading(outlined: false)
#slide()[
#theme.at("title_theme")(now:"",end:"", color:default_color,tcolor:emph_color, outline:header-outline, title:title, title_notes:title_notes, date:date, authors:authors, institutions:institutions)
]]
counter(heading).update(0)
// Main body. 基本設定.
//listのマーカーの設定
set list(marker: ([#text(fill:default_color)[#fa-angle-right()]], text(fill:default_color, size:0.8em)[#fa-angles-right()],[#text(fill:default_color, size:0.8em)[#fa-caret-right()]]))
// footnoteの設定
set footnote(numbering: "1")
counter(footnote).update(0)
// paragraphの設定. indent 1em, 行送り.5em
set par(justify: false, first-line-indent: 1em, leading: .5em)
//数式フォントの設定
show math.equation: set text(font: math-font,weight:400, size:size * 1.05)
//set math.equation(numbering: "(1)")
//items with numberingの設定
set enum(numbering: "1.a.")
show figure: it => {
let frame_color = default_color
if it.kind != "TheoremKinds" {it}
else {tbox(emph_color, frame_color, [#it.caption] ,it.body)}
}
//参照の編集
show ref: it => {
if it.element.func() == figure{
[#it.element.supplement ]
}
}
//引用形式の設定
body
}
//その他ショートカット
#let ya = {text(fill:default_color)[#fa-arrow-right()]}
|
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/grid-3_03.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
// Test grid within a grid, overflowing.
#set page(width: 5cm, height: 2.25cm)
#grid(
columns: 4 * (1fr,),
row-gutter: 10pt,
column-gutter: (0pt, 10%),
[A], [B], [C], [D],
grid(columns: 2, [A], [B], [C\ ]*3, [D]),
align(top, rect(inset: 0pt, fill: eastern, align(right)[LoL])),
[rofl],
[E\ ]*4,
)
|
https://github.com/almarzn/portfolio | https://raw.githubusercontent.com/almarzn/portfolio/main/templates/typst/.template/about.typ | typst | #let about-section(content) = block(
)[content] |
|
https://github.com/jassielof/typst-templates | https://raw.githubusercontent.com/jassielof/typst-templates/main/utils/to-string.typ | typst | MIT License | // original author: ntjess
#let stringify-by-func(it) = {
let func = it.func()
return if func in (parbreak, pagebreak, linebreak) {
"\n"
} else if func == smartquote {
if it.double { "\"" } else { "'" } // "
} else if it.fields() == (:) {
// a fieldless element is either specially represented (and caught earlier) or doesn't have text
""
} else {
panic("Not sure how to handle type `" + repr(func) + "`")
}
}
#let plain-text(it) = {
return if type(it) == str {
it
} else if it == [ ] {
" "
} else if it.has("children") {
it.children.map(plain-text).join()
} else if it.has("body") {
plain-text(it.body)
} else if it.has("text") {
if type(it.text) == "string" {
it.text
} else {
plain-text(it.text)
}
} else {
// remove this to ignore all other non-text elements
stringify-by-func(it)
}
}
|
https://github.com/OrangeX4/typst-pinit | https://raw.githubusercontent.com/OrangeX4/typst-pinit/main/examples/equation-desc.typ | typst | MIT License | // Example adapted from: @Matt https://discord.com/channels/1054443721975922748/1088371919725793360/1166508915572351067
#import "../lib.typ": *
#set page(width: 700pt, height: auto, margin: 30pt)
#set text(size: 20pt)
#set math.equation(numbering: "(1)")
#let pinit-highlight-equation-from(height: 2em, pos: bottom, fill: rgb(0, 180, 255), highlight-pins, point-pin, body) = {
pinit-highlight(..highlight-pins, dy: -0.9em, fill: rgb(..fill.components().slice(0, -1), 40))
pinit-point-from(
fill: fill, pin-dx: 0em, pin-dy: if pos == bottom { 0.5em } else { -0.9em }, body-dx: 0pt, body-dy: if pos == bottom { -1.7em } else { -1.6em }, offset-dx: 0em, offset-dy: if pos == bottom { 0.8em + height } else { -0.6em - height },
point-pin,
rect(
inset: 0.5em,
stroke: (bottom: 0.12em + fill),
{
set text(fill: fill)
body
}
)
)
}
Equation written out directly (for comparison):
$ (q_T^* p_T)/p_E p_E^* >= (c + q_T^* p_T^*)(1+r^*)^(2N) $
Laid out with pinit:
#v(3.5em)
$ (#pin(1)q_T^* p_T#pin(2))/(#pin(3)p_E#pin(4))#pin(5)p_E^*#pin(6) >= (c + q_T^* p_T^*)(1+r^*)^(2N) $
#v(5em)
#pinit-highlight-equation-from((1, 2, 3, 4), (3, 4), height: 3.5em, pos: bottom, fill: rgb(0, 180, 255))[
quantity of Terran goods
]
#pinit-highlight-equation-from((5, 6), (5, 6), height: 2.5em, pos: top, fill: rgb(150, 90, 170))[
price of Terran goods, on Trantor
]
Paragraph after the equation.
|
https://github.com/angelcerveraroldan/notes | https://raw.githubusercontent.com/angelcerveraroldan/notes/main/diff_geo/notes/intro.typ | typst | #import "../../preamble.typ" : *
= Differential Geometry Introduction
This class will deal with 1d curves, and 2d surfaces in the 3d world, and we will explore what their curvature is.
Imagine a 1d curve in the standard x-y plane. If there was a 1d person living in that curve, would they be able to tell
where along that curve there is more curvature. That is to say, wold they be able to tell the difference between $y = 2$,
and $y = x^2$ ?
This leads us to the notion of curvature. There are two kinds of curvature, intrinsic, and extrensic.
== Topology Intro
In topology, we say that two shapes are equivalent, if there exists a countinuous bijection from one shape to the
other one, for example, a flat piece of paper is the same shape if we were to curve it. Intuitively, this bijection
tells us that we can go from one shape to the other without making any drastic changes (such as ripping the page in half,
and then joining them in some weird way).
There are many shapes that are topologically equivalent, so this raises one question, if we have some shape, can we
find a bijection to an "easier" shape ? If so, we can work with an easier shape, and still understand more about the
original one!
#def(title: "Curve", [
Given some interval $I$, a curve $alpha$ is defined as a triple of continuous function $alpha_x, alpha_y, alpha_z in (RR -> RR)$.
Throughout this course, we will mainly be dealing with smooth curves, these are curves that are infinitely differentiable. Unless otherwise stated, take curve to mean smooth curve.
])
Given some smooth curve $alpha$, we can define a new smooth curve $alpha'$, consisting of $alpha_x ', alpha_y ', alpha_z '$.
|
|
https://github.com/Treeniks/bachelor-thesis-isabelle-vscode | https://raw.githubusercontent.com/Treeniks/bachelor-thesis-isabelle-vscode/master/chapters/04-main-refinements/state-ids.typ | typst | #import "/utils/todo.typ": TODO
#import "/utils/isabelle.typ": *
== State Panel IDs <state-init>
As mentioned in @background:output-and-state-panels, it is possible to open multiple state panels in #jedit. While users typically want to see the proof state at the position of their caret, there may be cases where one wants to permanently see the proof state at a different position.
The language server already supported multiple state panels (although not multiple output panels). Internally, the language server stores a `Map` from IDs to state panels. Additionally, all state-related messages must include the ID of the panel that they are referring to. For example, to disable the _Auto update_ property
#footnote[The _Auto update_ property enables automatic updating of the panel's content to the caret position. If disabled, moving the caret will not change the panel's content and will only update if the user issues a manual _Update_ command.]
of a state panel, the client needs to send a #box[`PIDE/state_auto_update`] notification with an `id` and `enabled` field.
When starting the Isabelle language server, it does not automatically initialize a state panel. The client has to send a #box[`PIDE/state_init`] notification to create a state panel. However, the client can not define the state panel's ID within this notification. Instead, the server used the Isabelle internal `Counter` module to create a unique state panel ID.
In order to keep these IDs separate between #ml and #scala, this module counts in ascending order in ML and descending order in Scala. Since the language server is part of #scala, the state panel's IDs start at $-1$ and count downward with each newly created state panel. This in and of itself is not a problem; the problem was that the language server did not communicate the created IDs with the language client. Thus, the language client had to know the internal Isabelle language server ID creation logic. Furthermore, if that logic ever changes in the future, the client would need to be updated with it.
To eliminate this issue, we changed the #box[`PIDE/state_init`] message from a notification to a request. Now, when the client sends a #box[`PIDE/state_init`] request, the server sends a response back that includes the state ID of the newly created state panel. That way, we were able to decouple and future-proof the internal language server logic from the language client implementation.
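A rough sketch of this exchange as JSON-RPC payloads, written here as Python literals (the method names and the `id`/`enabled` fields come from the text above; the exact shape of the response, including the `state_id` field name, is an assumption for illustration):

```python
# Client asks the server to create a state panel; the server now answers with the new ID.
state_init_request = {"jsonrpc": "2.0", "id": 1, "method": "PIDE/state_init", "params": {}}
state_init_response = {"jsonrpc": "2.0", "id": 1, "result": {"state_id": -1}}  # server-chosen panel ID

# Later messages refer to that panel by the returned ID, e.g. disabling Auto update:
state_auto_update = {
    "jsonrpc": "2.0",
    "method": "PIDE/state_auto_update",
    "params": {"id": -1, "enabled": False},
}
```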
An important thing to note is that #vscode does not support multiple state panels. While the underlying language server supports them, the Isabelle VSCode language client only supports a single state panel, therefore necessitating further work in this area.
// #TODO[
// - originally State Init would expect the client to know what ID it is
// - VSCode implmentation never used the ID for anything itself
// - now is a request instead of a notification which returns the newly created ID
// ]
|
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/044_Innistrad%3A%20Crimson%20Vow.typ | typst | #import "@local/mtgset:0.1.0": conf
#show: doc => conf("Innistrad: Crimson Vow", doc)
#include "./044 - Innistrad: Crimson Vow/001_Episode 1: Tithes and Invitations.typ"
#include "./044 - Innistrad: Crimson Vow/002_The Edge of the World.typ"
#include "./044 - Innistrad: Crimson Vow/003_Episode 2: The Dolorous Weight of Pleasantries.typ"
#include "./044 - Innistrad: Crimson Vow/004_The Blessing of Blood.typ"
#include "./044 - Innistrad: Crimson Vow/005_Episode 3: Forever Hold Your Peace.typ"
#include "./044 - Innistrad: Crimson Vow/006_Survivors.typ"
#include "./044 - Innistrad: Crimson Vow/007_Episode 4: The Wedding Crashers.typ"
#include "./044 - Innistrad: Crimson Vow/008_The Devouring House.typ"
#include "./044 - Innistrad: Crimson Vow/009_Episode 5: Till Death Do Us Part.typ"
|
|
https://github.com/rem3-1415926/Typst_Thesis_Template | https://raw.githubusercontent.com/rem3-1415926/Typst_Thesis_Template/main/main.typ | typst | MIT License |
#import "template.typ": thesis
#import "sec/abstract.typ": abstract
#import "sec/acknowledgement.typ": acknowledgement
#import "sec/appendix.typ": appendix
/*
#let titlepage = [
#set align(center+horizon)
#text(size: 64pt)[Mün or bust]
#text(size: 32pt)[A Thesis about absolutely nothing.]
]
*/
#let frontmatter = {
abstract
acknowledgement
}
#show: thesis.with(
thesis_title: "Mün or bust",
sub_title: "Some awesome thesis",
authors: (
"<NAME>",
"<NAME>",
),
industry_partners: ("C7 Aerospace Division", "Ionic Symphonic Protonic Electronics"),
university: "OST - Ostschweizer Fachhochschule, Campus Rapperswil",
// titlepage: titlepage,
frontmatter: frontmatter,
appendix: appendix,
bib_files: ("bib/BibLaTex.bib", "bib/Hayagriva.yml"),
)
#include("sec/sec1.typ")
#include("sec/sec2.typ")
#include("sec/authorship.typ") |
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/text/linebreak-link.typ | typst | Apache License 2.0 | // Test linebreaking of links.
---
#link("https://example.com/(ab") \
#link("https://example.com/(ab)") \
#link("https://example.com/(paren)") \
#link("https://example.com/paren)") \
#link("https://hi.com/%%%%%%%%abcdef") \
---
#set page(width: 240pt)
#set par(justify: true)
Here's a link https://url.com/data/extern12840%data_urlenc and then there are more
links #link("www.url.com/data/extern12840%data_urlenc") in my text of links
http://mydataurl/hash/12098541029831025981024980124124214/incremental/progress%linkdata_information_setup_my_link_just_never_stops_going/on?query=false
|
https://github.com/ustctug/ustc-thesis-typst | https://raw.githubusercontent.com/ustctug/ustc-thesis-typst/main/README.md | markdown | MIT License | # 中国科学技术大学学位论文 Typst 模板
目前处于初期试验性开发状态,不宜用于实际撰写论文。欢迎贡献代码或参与开发。
示例文档主要从 [ustctug/ustcthesis](https://github.com/ustctug/ustcthesis) 转换生成。部分内容可能不适于 Typst,需要改写。
## 编译方式
```bash
typst compile main.typ
```
|
https://github.com/arthurcadore/eng-telecom-workbook | https://raw.githubusercontent.com/arthurcadore/eng-telecom-workbook/main/semester-8/PTC/homework1/homework1.typ | typst | MIT License | #import "@preview/klaro-ifsc-sj:0.1.0": report
#import "@preview/codelst:2.0.1": sourcecode
#show heading: set block(below: 1.5em)
#show par: set block(spacing: 1.5em)
#set text(font: "Arial", size: 12pt)
#set text(lang: "pt")
#set page(
footer: "Engenharia de Telecomunicações - IFSC-SJ",
)
#show: doc => report(
title: "Modelagem de Protocolo de Transferência de Arquivos",
subtitle: "Projeto de Protocolos",
authors: ("<NAME>",),
date: "27 de Setembro de 2024",
doc,
)
= Introdução
O objetivo deste documento é apresentar o protocolo TFTP (Trivial File Transfer Protocol) que é um protocolo de transferência de arquivos que utiliza o protocolo UDP (User Datagram Protocol) para a transferência de arquivos entre um cliente e um servidor.
= Serviço Oferecido pelo Protocolo
O TFTP é um protocolo de transferência de arquivos que permite a transferência de arquivos entre um cliente e um servidor. O protocolo TFTP é um protocolo simples e eficiente, porém, não possui mecanismos de autenticação e criptografia, sendo assim, é utilizado em ambientes onde a segurança não é uma prioridade. Um exemplo de transferencia de arquivo que usa deste protocolo é o boot remoto de um computador aplicado em redes locais, envio de arquivo de configurção automatizada para equipamentos de rede e atualização de firmware de equipamentos de rede.
= Características do canal de comunicação
O protocolo TFTP utiliza o protocolo UDP (User Datagram Protocol) para a transferência de arquivos entre um cliente e um servidor. Como o UDP não possui mecanismos de controle de erros e de conexão, o protocolo TFTP implementa esse mecanismos através da mensageria de ACK no momento em que o cliente recebe um pacote de dados do servidor, além do controle de sequencia dos blocos para garantir que todos os pacotes de dados sejam entregues corretamente.
O controle deste protocolo é feito através dos blocos de dados que são enviados pelo servidor para o cliente. O cliente envia uma mensagem de requisição para o servidor, o servidor responde com um bloco de dados, o cliente envia uma mensagem de ACK para o servidor, o servidor responde com um bloco de dados e assim por diante até que o arquivo seja transferido por completo.
O tamanho de cada bloco pode ser configurado (padrão 512 bytes) através da option "BlkSize" que é enviada pelo cliente no momento da requisição do arquivo. Entretanto o tamanho máximo de um bloco é de 65464 bytes. Além de que cada bloco de dados é numerado sequencialmente, começando em 1 e indo até 65535:
#figure(
figure(
rect(image("./pictures/blksize.png")),
numbering: none,
caption: []
),
caption: figure.caption([Elaborada pelo Autor], position: top)
)
Desta forma, é possivel calcular o tamanho máximo do arquivo que pode ser transferido através do protocolo TFTP, pelo produto do tamanho do bloco de dados pelo número máximo de blocos que podem ser enviados:
$
T_m = 64"kB" * 65535 = 4.194.240"kB" = 4.194,24"MB"
$
Quando o arquivo é maior que o tamanho máximo definido entre as partes (cliente e servidor), é necessário reiniciar a contagem de blocos de dados, entretanto essa configuração depende da implementação do protocolo e também o servidor, no MikroTik por exemplo, o servidor apenas permite essa recontagem caso a opção "Allow Rollback" esteja habilitada.
= Conjunto de mensagens do protocolo
As mensagens do protocolo TFTP são divididas em quatro tipos:
== RRQ
A mensagem de requisição de arquivo (RRQ) é enviada pelo cliente para o servidor para solicitar um arquivo. A mensagem de requisição de arquivo contém o nome do arquivo que o cliente deseja transferir, o modo de transferência (ASCII ou binário) e a opção de tamanho do bloco de dados:
#figure(
figure(
rect(image("./pictures/1.png")),
numbering: none,
caption: []
),
caption: figure.caption([Elaborada pelo Autor], position: top)
)
== WRQ
Em seguida, o servidor responde com uma mensagem de requisição de escrita (WRQ) que contém o nome do arquivo que o cliente deseja transferir, o modo de transferência (ASCII ou binário) e a opção de tamanho do bloco de dados:
#figure(
figure(
rect(image("./pictures/2.png")),
numbering: none,
caption: []
),
caption: figure.caption([Elaborada pelo Autor], position: top)
)
== DATA
Em seguida, o servidor envia um bloco de dados (DATA) para o cliente que contém parte do arquivo que está sendo transferido. O bloco de dados contém o número do bloco, o bloco de dados e o tamanho do bloco de dados:
#figure(
figure(
rect(image("./pictures/3.png")),
numbering: none,
caption: []
),
caption: figure.caption([Elaborada pelo Autor], position: top)
)
== ACK
Por fim, o cliente envia uma mensagem de ACK para o servidor para confirmar a recepção do bloco de dados. A mensagem de ACK contém o número do bloco que foi recebido:
#figure(
figure(
rect(image("./pictures/4.png")),
numbering: none,
caption: []
),
caption: figure.caption([Elaborada pelo Autor], position: top)
)
== DATA (LAST BLOCK)
Ao final da transferencia, o servidor envia um bloco de dados (DATA) para o cliente que contém o último bloco do arquivo que está sendo transferido. O bloco de dados contém o número do bloco, o bloco de dados e o tamanho do bloco de dados:
#figure(
figure(
rect(image("./pictures/LAST.png")),
numbering: none,
caption: []
),
caption: figure.caption([Elaborada pelo Autor], position: top)
)
= Sintaxe e Codificação das Mensagens
A mensagem é codificada em ASCII e é composta por um opcode de 2 bytes que indica o tipo de mensagem, seguido por um campo de dados que contém o nome do arquivo que está sendo transferido, em seguida o número do bloco de dados, os dados em si, e ao final dos dados, o terminador de linha (0x00) que indica o fim da mensagem.
#figure(
figure(
rect(image("./pictures/message.png")),
numbering: none,
caption: []
),
caption: figure.caption([Elaborada pelo Autor], position: top)
)
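A rough sketch of this framing for the main TFTP packet types (field layout as in RFC 1350/2348; the helper functions below are illustrative):

```python
import struct

def rrq(filename: str, mode: str = "octet", blksize: int | None = None) -> bytes:
    # RRQ: opcode 1, then NUL-terminated filename and mode strings
    pkt = struct.pack("!H", 1) + filename.encode("ascii") + b"\x00" + mode.encode("ascii") + b"\x00"
    if blksize is not None:
        # optional block-size negotiation option: "blksize\0<value>\0"
        pkt += b"blksize\x00" + str(blksize).encode("ascii") + b"\x00"
    return pkt

def data(block: int, payload: bytes) -> bytes:
    # DATA: opcode 3, 16-bit block number, then up to blksize bytes of payload
    return struct.pack("!HH", 3, block) + payload

def ack(block: int) -> bytes:
    # ACK: opcode 4, 16-bit block number of the block being acknowledged
    return struct.pack("!HH", 4, block)
```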
= Modelo de comportamento do protocolo
Como descrito no próprio escopo da atividade, o servidor aguarda uma requisição de arquivo do cliente na porta 69/UDP.
Quando o cliente envia uma requisição de arquivo (RRQ) para o servidor, o servidor responde com uma mensagem de requisição de escrita (WRQ) que contém o nome do arquivo que o cliente deseja transferir, o modo de transferência (ASCII ou binário) e a opção de tamanho do bloco de dados.
Em seguida, o servidor envia um bloco de dados (DATA) para o cliente que contém parte do arquivo que está sendo transferido. O bloco de dados contém o número do bloco, o bloco de dados e o tamanho do bloco de dados.
Por fim, o cliente envia uma mensagem de ACK para o servidor para confirmar a recepção do bloco de dados. A mensagem de ACK contém o número do bloco que foi recebido.
Esse processo se repete até que o arquivo seja transferido por completo. Ao final da transferencia, o servidor envia um bloco de dados (DATA) para o cliente que contém o último bloco do arquivo que está sendo transferido. O bloco de dados contém o número do bloco, o bloco de dados e o tamanho do bloco de dados.
#figure(
figure(
rect(image("./pictures/model.png")),
numbering: none,
caption: []
),
caption: figure.caption([Elaborada pelo Autor], position: top)
)
|
https://github.com/The-Notebookinator/notebookinator | https://raw.githubusercontent.com/The-Notebookinator/notebookinator/main/themes/linear/colors.typ | typst | The Unlicense | // Dark
#let dark-red = rgb("#DE6C8E")
#let dark-yellow = rgb("#F6AA28")
#let dark-green = rgb("#04DBAD")
#let dark-blue = rgb("#3195D8")
#let dark-purple = rgb("#584CBD")
#let dark-pink = rgb("#CD89E2")
// Light
#let light-red = rgb("#EBA8BC")
#let light-yellow = rgb("#FACC7E")
#let light-green = rgb("#68E8CD")
#let light-blue = rgb("#8EC5E9")
#let light-purple = rgb("#B4AFE1")
#let light-pink = rgb("#DDB0EB")
// Component
#let pro-green = rgb("#04db69")
#let con-red = rgb("#e95f5f")
#let decision-green = rgb("#00c05a")
// Surface
#let surface-0 = rgb("#f1f3f5")
#let surface-1 = rgb("#e9ecef")
#let surface-2 = rgb("#dee2e6")
#let surface-3 = rgb("#ced4da")
#let surface-4 = rgb("#adb5bd") |
https://github.com/LugsoIn2/typst-htwg-thesis-template | https://raw.githubusercontent.com/LugsoIn2/typst-htwg-thesis-template/main/lib/textTemplate.typ | typst | MIT License | #let textTemplate(pagetype: "" , lang: "", textblocks: ()) = {
// --- -------------- ----
// --- -------------- ----
// ----- cover text ------
let coverpageText = (
(lang == "de",(
"Eingereicht von",
"Matrikelnummer: ",
)
),
(lang == "en",(
"Submitted by",
"Student Number: ",
)
),
).find(t => t.at(0)).at(1)
// --- -------------- ----
// --- -------------- ----
// --- titlepage text ----
let titlepageText = (
(lang == "de",(
"von",
"zur Erlangung des akademischen Grades",
"im Studiengang ",
"an der Hochschule Konstanz für Technik, Wirtschaft und Gestaltung.",
"Matrikelnummer: ",
"Startdatum: ",
"Abgabedatum: ",
"Erstprüfer: ",
"Zweitprüfer: ",
)
),
(lang == "en",(
"by",
"in Partial Fulfillment of the Requirements for the Degree of",
"in Applied ",
"at the Hochschule Konstanz University of Applied Sciences.",
"Student Number: ",
"Date of Start: ",
"Date of Submission: ",
"Supervisor: ",
"Second Examiner: ",
)
),
).find(t => t.at(0)).at(1)
// --- -------------------- ----
// --- -------------------- ----
// - declarationOfHonour text --
let author = ""
let birthdate = ""
let birthplace = ""
let degree = ""
let title = ""
let companySentence = ""
let supervisor = ""
if pagetype == "declarationOfHonour" {
(author, birthdate, birthplace, degree, title, companySentence, supervisor) = textblocks
}
let declarationOfHonourText = (
(lang == "de",(
"Hiermit erkläre ich, " + author + ", geboren am " + birthdate + " in " + birthplace + ",",
"(1) dass ich meine " + degree + "thesis mit dem Titel:",
"„" + title + "“",
companySentence + " unter Anleitung von " + supervisor + " selbständig und ohne fremde Hilfe angefertigt habe und keine anderen als die angeführten Hilfen benutzt habe.",
"(2) dass ich die Übernahme wörtlicher Zitate, von Tabellen, Zeichnungen, Bildern und Programmen aus der Literatur oder anderen Quellen (Internet) sowie die Verwendung der Gedanken anderer Autoren an den entsprechenden Stellen innerhalb der Arbeit gekennzeichnet habe.",
"(3) dass die eingereichten Abgabe-Exemplare in Papierform und im PDF-Format vollständig übereinstimmen.",
"Ich bin mir bewusst, dass eine falsche Erklärung rechtliche Folgen haben wird.",
)
),
(lang == "en",(
"Hiermit erkläre ich, " + author + ", geboren am " + birthdate + " in " + birthplace + ",",
"(1) dass ich meine " + degree + "thesis mit dem Titel:",
"„" + title + "“",
companySentence + " unter Anleitung von " + supervisor + " selbständig und ohne fremde Hilfe angefertigt habe und keine anderen als die angeführten Hilfen benutzt habe.",
"(2) dass ich die Übernahme wörtlicher Zitate, von Tabellen, Zeichnungen, Bildern und Programmen aus der Literatur oder anderen Quellen (Internet) sowie die Verwendung der Gedanken anderer Autoren an den entsprechenden Stellen innerhalb der Arbeit gekennzeichnet habe.",
"(3) dass die eingereichten Abgabe-Exemplare in Papierform und im PDF-Format vollständig übereinstimmen.",
"Ich bin mir bewusst, dass eine falsche Erklärung rechtliche Folgen haben wird.",
)
),
).find(t => t.at(0)).at(1)
// --- -------------- ----
// --- -------------- ----
// --- abstract title ----
let abstractTitle = (
(lang == "de",("Zusammenfassung",)),
(lang == "en",("Abstract",)),
).find(t => t.at(0)).at(1)
// --- ------------------ ----
// --- ------------------ ----
// - table of contents title -
let tableOfContentsTitle = (
(lang == "de",("Inhaltsverzeichnis",)),
(lang == "en",("Contents",)),
).find(t => t.at(0)).at(1)
// --- ------------------ ----
// --- ------------------ ----
// ------ glossary title -----
let glossaryTitle = (
(lang == "de",("Glossar",)),
(lang == "en",("Glossary",)),
).find(t => t.at(0)).at(1)
// --- ------------------ ----
// --- ------------------ ----
// -- table of figures title -
let listOfFiguresTitle = (
(lang == "de",("Abbildungsverzeichnis",)),
(lang == "en",("List of Figures",)),
).find(t => t.at(0)).at(1)
// --- ------------------ ----
// --- ------------------ ----
// --- list of tables title --
let listOfTablesTitle = (
(lang == "de",("Tabellenverzeichnis",)),
(lang == "en",("List of Tables",)),
).find(t => t.at(0)).at(1)
// --- ------------------ ----
// --- ------------------ ----
// ---- bibliography title ---
let bibliographyTitle = (
(lang == "de",("Literatur",)),
(lang == "en",("References",)),
).find(t => t.at(0)).at(1)
// --- --------------- ----
// --- --------------- ----
// ----- return value -----
let languageText = (
(pagetype == "titlepage", titlepageText),
(pagetype == "coverpage", coverpageText),
(pagetype == "declarationOfHonour", declarationOfHonourText),
(pagetype == "abstract", abstractTitle),
(pagetype == "tableOfContents", tableOfContentsTitle),
(pagetype == "listOfFigures", listOfFiguresTitle),
(pagetype == "listOfTables", listOfTablesTitle),
(pagetype == "bibliography", bibliographyTitle),
(pagetype == "glossary", glossaryTitle),
).find(t => t.at(0)).at(1)
languageText
} |
https://github.com/levinion/typst-dlut-templates | https://raw.githubusercontent.com/levinion/typst-dlut-templates/main/templates/thesis/main.typ | typst | MIT License | #import "../util/style.typ":font_family,font_size
#import "./cover.typ":cover
#import "./decl.typ":decl
#import "./abstract.typ":abstract
#import "./index.typ":index
#import "./introduction.typ":introduction
#import "../util/functions.typ":empty_box
#import "./reference.typ":reference
#import "./conclusion.typ":concl
#import "./changelog.typ":changelog as change_log
#import "./thanks.typ":thanks as _thanks
#let three_line_table(values, caption: none, columns: auto)={
let _three_line_table(values) = {
let tlt_header(content) = {
set align(center)
rect(width: 100%, stroke: (bottom: 0.3pt), [#content])
}
let tlt_cell(content) = {
set align(center)
rect(width: 100%, stroke: none, [#content])
}
let tlt_row(r) = {
(..r.map(tlt_cell).flatten())
}
rect(
stroke: (bottom: 0.3pt, top: 0.3pt), inset: 0pt, outset: 0pt, grid(
columns: auto, rows: auto,
// table title
grid(columns: columns, ..values.at(0).map(tlt_header).flatten()), grid(columns: columns, ..values.slice(1).map(tlt_row).flatten()),
),
)
}
figure(_three_line_table(values), caption: caption, kind: table)
}
#let equa(content, caption: "")={
figure(content, caption: caption, kind: "equation", supplement: "equation")
}
#let pic(content, caption: "")={
figure(content, caption: caption, kind: image)
}
#let bold(content)={
text(font: font_family.songti_bold, weight: "bold", content)
}
#let thesis(
content, chinese_title: "大连理工大学本科毕业论文(设计)题目", english_title: "The Subject of Undergraduate Graduation Project (Thesis) of DUT",
faculty: "", major: "", name: "", id: "",
sup: "", rev: "", date: "", chinese_abstract: none,
chinese_keywords: (), english_abstract: none, english_keywords: (), intro: none,
bib: none, conclusion: none, changelog: none, thanks: none,
)={
set text(lang: "zh")
set heading(numbering: "1.1.1 ")
set block(spacing: 0em)
show heading:it=>{
set align(left)
set text(weight: "regular")
set par(leading: 1.5em, first-line-indent: 0em)
if it.level == 1 {
set text(font: font_family.heiti, size: font_size.xiao_san)
set block(below: 25pt)
it + empty_box
} else if it.level == 2 {
set text(font: font_family.heiti, size: font_size.si)
set block(above: 1.5em, below: 18pt)
it + empty_box
} else {
set text(font: font_family.heiti, size: font_size.xiao_si)
set block(above: 1.5em, below: 15pt)
it
}
empty_box
}
show figure:it=>{
set align(center)
set text(font: font_family.songti, size: font_size.wu)
v(1em)
let num(kind) = locate(
loc=>{
let chap = counter(heading.where(level: 1)).at(loc).first()
let chap_loc = query(heading.where(level: 1), loc).at(chap + 1).location()
let num_before = counter(figure.where(kind: kind)).at(chap_loc).first()
let count = counter(figure.where(kind: kind)).at(loc).first()
str(chap) + "." + str(count - num_before)
},
)
if it.kind == image and it.caption != none {
it.body
"图" + num(image) + " " + it.caption.body
v(1em)
} else if it.kind == table and it.caption != none {
text(size: font_size.wu)[
#{
"表" + num(table) + " " + it.caption.body
it.body
v(1em)
}
] + empty_box
} else if it.kind == "equation" {
grid(
columns: (20fr, 1fr), it.body, align(center + horizon)[(#num("equation"))],
)
} else {
it
}
}
show ref:it=>{
let el = it.element
if el != none {
let num(kind) = locate(
loc=>{
let el_loc = el.location()
let chap = counter(heading.where(level: 1)).at(el_loc).first()
let chap_loc = query(heading.where(level: 1), el_loc).at(chap + 1).location()
let num_before = counter(figure.where(kind: kind)).at(chap_loc).first()
let count = counter(figure.where(kind: kind)).at(el_loc).first()
str(chap) + "." + str(count - num_before)
},
)
if el.kind == image {
"图" + num(image)
} else if el.kind == table {
"表" + num(table)
} else if el.kind == "equation" {
"公式" + num("equation")
}
} else {
it
}
}
cover(chinese_title, english_title, faculty, major, name, id, sup, rev, date)
set page(
margin: (top: 4.3cm, bottom: 3.3cm, left: 2.5cm, right: 2.5cm), footer: {
set align(center)
set text(font: font_family.songti, size: font_size.xiao_wu)
counter(page).display("-I-")
}, numbering: "I", header: {
set align(center)
set text(font: font_family.songti, size: font_size.wu, weight: "regular")
rect(width: 100%, stroke: (bottom: 0.5pt + black))[#chinese_title]
},
)
decl()
counter(page).update(1)
abstract(
chinese_abstract, chinese_keywords, english_title, english_abstract,
english_keywords,
)
set page(numbering: "1")
index
set page(footer: {
set align(center)
set text(font: font_family.songti, size: font_size.xiao_wu)
counter(page).display("-1-")
})
counter(page).update(1)
introduction(intro)
set text(font: font_family.songti, size: font_size.xiao_si)
set par(leading: 1em, first-line-indent: 2em,justify: true)
show par:set block(spacing: 1em)
set enum(numbering: "(1)", indent: 2em)
show enum:it=>{
set block(spacing: 1.25em)
it
}
content
pagebreak(weak: true)
concl(conclusion)
reference(bib)
change_log(changelog)
_thanks(thanks)
}
|
https://github.com/SWATEngineering/Docs | https://raw.githubusercontent.com/SWATEngineering/Docs/main/src/3_PB/PianoDiProgetto/content.typ | typst | MIT License | #import "./functions.typ": glossary
/*************************************/
/* INSERIRE SOTTO IL CONTENUTO */
/*************************************/
#include "sections/Introduzione.typ"
#pagebreak()
#include "sections/AnalisiDeiRischi.typ"
#pagebreak()
#include "sections/ModelloDiSviluppo.typ"
#pagebreak()
#include "sections/Pianificazione.typ"
#pagebreak()
= Preventivi e consuntivi
== Introduzione
=== Preventivi
Ogni membro del gruppo si impegna a lavorare con la modalità di intensità dichiarata (alta), offrendo una disponibilità di 95 ore produttive a testa.
Questo preventivo è stato calcolato sulla base del costo orario per ruolo presente nel "Regolamento del Progetto Didattico" e sulla previsione di quante, delle 570 ore totali a disposizione, verranno utilizzate in ogni ruolo, durante i vari incrementi.
Nelle seguenti sezioni viene illustrato come sarà articolato ogni incremento e quanto sarà il suo costo.
La suddivisione dei ruoli è stata fatta nel modo più equo possibile, per dare a tutti i membri la possibilità di approfondire le mansioni rilevanti.
Per praticità, verranno utilizzate le seguenti abbreviazioni:
- *Re*: Responsabile;
- *Am*: Amministratore;
- *An*: Analista;
- *Pt*: Progettista;
- *Pr*: Programmatore;
- *Ve*: Verificatore.
Queste sezioni vogliono essere una proiezione finanziaria dettagliata dell'intero progetto, delineando chiaramente le risorse preventivate per ciascuno #glossary[sprint].
=== Consuntivi
Si esaminano attentamente le risorse effettivamente impiegate durante ciascuno #glossary[sprint], confrontandole con le previsioni iniziali. Attraverso questa analisi, si vogliono identificare eventuali scostamenti dal piano iniziale e reagire di conseguenza, in modo tale da apportare un miglioramento continuo.
Si riportano inoltre gli elementi positivi e negativi emersi all'interno delle retrospettive di ogni #glossary[sprint], eventuali rischi incorsi e la valutazione del relativo processo di mitigazione, in modo tale da portare eventuali miglioramenti alla sezione *Analisi dei rischi*.
#include "sections/PreventivoSprint/PrimaRevisione.typ"
#pagebreak()
=== Primo #glossary[sprint]
#include "sections/PreventivoSprint/PrimoSprint.typ"
#include "sections/ConsuntivoSprint/PrimoSprint.typ"
=== Secondo #glossary[sprint]
#include "sections/PreventivoSprint/SecondoSprint.typ"
#include "sections/ConsuntivoSprint/SecondoSprint.typ"
=== Terzo #glossary[sprint]
#include "sections/PreventivoSprint/TerzoSprint.typ"
#include "sections/ConsuntivoSprint/TerzoSprint.typ"
=== Quarto #glossary[sprint]
#include "sections/PreventivoSprint/QuartoSprint.typ"
#include "sections/ConsuntivoSprint/QuartoSprint.typ"
=== Quinto #glossary[sprint]
#include "sections/PreventivoSprint/QuintoSprint.typ"
#include "sections/ConsuntivoSprint/QuintoSprint.typ"
=== Sesto #glossary[sprint]
#include "sections/PreventivoSprint/SestoSprint.typ"
#include "sections/ConsuntivoSprint/SestoSprint.typ"
=== Settimo #glossary[sprint]
#include "sections/PreventivoSprint/SettimoSprint.typ"
#include "sections/ConsuntivoSprint/SettimoSprint.typ"
=== Ottavo #glossary[sprint]
#include "sections/PreventivoSprint/OttavoSprint.typ"
#include "sections/ConsuntivoSprint/OttavoSprint.typ"
#pagebreak()
#include "sections/PreventivoSprint/SecondaRevisione.typ"
=== Nono #glossary[sprint]
#include "sections/PreventivoSprint/NonoSprint.typ"
#include "sections/ConsuntivoSprint/NonoSprint.typ"
=== Decimo #glossary[sprint]
#include "sections/PreventivoSprint/DecimoSprint.typ"
#include "sections/ConsuntivoSprint/DecimoSprint.typ"
=== Undicesimo #glossary[sprint]
#include "sections/PreventivoSprint/UndicesimoSprint.typ"
#include "sections/ConsuntivoSprint/UndicesimoSprint.typ"
=== Dodicesimo #glossary[sprint]
#include "sections/PreventivoSprint/DodicesimoSprint.typ"
#include "sections/ConsuntivoSprint/DodicesimoSprint.typ"
=== Tredicesimo #glossary[sprint]
#include "sections/PreventivoSprint/TredicesimoSprint.typ"
#include "sections/ConsuntivoSprint/TredicesimoSprint.typ"
=== Quattordicesimo #glossary[sprint]
#include "sections/PreventivoSprint/QuattordicesimoSprint.typ"
#include "sections/ConsuntivoSprint/QuattordicesimoSprint.typ"
=== Quindicesimo #glossary[sprint]
#include "sections/PreventivoSprint/QuindicesimoSprint.typ"
#include "sections/ConsuntivoSprint/QuindicesimoSprint.typ"
#pagebreak()
#include "sections/Consuntivo.typ" |
https://github.com/JvandeLocht/assignment-template-typst-hfh | https://raw.githubusercontent.com/JvandeLocht/assignment-template-typst-hfh/main/utils/formfield.typ | typst | MIT License | #let formField(label, content, length: 5cm) = {
stack(
text(1em, weight: "bold")[#content],
v(2mm),
line(length: length),
v(1mm),
text(0.9em, style: "italic")[#label]
)
} |
https://github.com/TomVer99/FHICT-typst-template | https://raw.githubusercontent.com/TomVer99/FHICT-typst-template/main/examples/starter/terms.typ | typst | MIT License | // CHANGE THIS TO THE CORRECT PATH
#import "./../../template/fhict-template.typ": *
#let term-list = (
(
key: "",
short: [],
long: [],
desc: [],
),
)
|
https://github.com/Lslightly/TypstTemplates | https://raw.githubusercontent.com/Lslightly/TypstTemplates/main/templates/code.typ | typst | MIT License | #let halcyonTheme = "templates/codeThemes/halcyon.tmTheme"
/*
The following content won't work with the import or include commands. Just put them in the typ file
#show raw.where(block: true): block.with(
fill: luma(240),
inset: 8pt,
radius: 5pt,
)
#show raw.where(block: false): block.with(
fill: luma(240),
)
*/
|
https://github.com/binhtran432k/ungrammar-docs | https://raw.githubusercontent.com/binhtran432k/ungrammar-docs/main/components/utils.typ | typst | #let buildMainHeader(mainHeadingContent) = {
[
#align(center, smallcaps(mainHeadingContent))
#line(length: 100%)
]
}
#let buildSecondaryHeader(mainHeadingContent, secondaryHeadingContent) = {
[
#smallcaps(mainHeadingContent) #h(1fr) #emph(secondaryHeadingContent)
#line(length: 100%)
]
}
// To know if the secondary heading appears after the main heading
#let isAfter(secondaryHeading, mainHeading) = {
let secHeadPos = secondaryHeading.location().position()
let mainHeadPos = mainHeading.location().position()
if (secHeadPos.at("page") > mainHeadPos.at("page")) {
return true
}
if (secHeadPos.at("page") == mainHeadPos.at("page")) {
return secHeadPos.at("y") > mainHeadPos.at("y")
}
return false
}
#let getHeader() = {
locate(loc => {
// Find if there is a level 1 heading on the current page
let nextMainHeading = query(selector(heading).after(loc), loc).find(headIt => {
headIt.location().page() == loc.page() and headIt.level == 1
})
if (nextMainHeading != none) {
return buildMainHeader(nextMainHeading.body)
}
// Find the last previous level 1 heading -- at this point surely there's one :-)
let lastMainHeading = query(selector(heading).before(loc), loc).filter(headIt => {
headIt.level == 1
}).last()
// Find if the last level > 1 heading in previous pages
let previousSecondaryHeadingArray = query(selector(heading).before(loc), loc).filter(headIt => {
headIt.level > 1
})
let lastSecondaryHeading = if (previousSecondaryHeadingArray.len() != 0) {previousSecondaryHeadingArray.last()} else {none}
// Find if the last secondary heading exists and if it's after the last main heading
if (lastSecondaryHeading != none and isAfter(lastSecondaryHeading, lastMainHeading)) {
return buildSecondaryHeader(lastMainHeading.body, lastSecondaryHeading.body)
}
return buildMainHeader(lastMainHeading.body)
})
}
#let invisible_heading(level: 1, numbering: none, supplement: auto,
outlined: true, content) = {
// show heading.where(level: level): set text(size: 0em, color: red)
show heading.where(level: level): it => block[]
text(size: 0pt)[
#heading(level: level, numbering: numbering, supplement: supplement, outlined: outlined)[#content]
]
}
#let small_title(content, outlined: true) = {
align(center)[
// #show heading.where(level: 1): set text(size: 0.85em)
#show heading.where(level: 1): it => block[
#set text(size: 0.85em)
#it.body
]
#heading(
outlined: outlined,
numbering: none,
content
// text(0.85em,content),
)
#v(5mm)
]
} |
|
https://github.com/hooyuser/typst_math_notes | https://raw.githubusercontent.com/hooyuser/typst_math_notes/master/0.1.0/exports.typ | typst | #import "math-notes.typ": math_notes, definition, proposition, lemma, theorem, corollary, example, proof, remark
#import "commutative-diagrams.typ": commutative_diagram, functor_diagram, square_cd, square_cd_element, functor_diagram_square_cd, adjunction_pair |
|
https://github.com/Quaternijkon/notebook | https://raw.githubusercontent.com/Quaternijkon/notebook/main/content/数据结构与算法/.chapter-算法/深度优先搜索/字母迷宫.typ | typst | #import "../../../../lib.typ":*
=== #Title(
title: [字母迷宫],
reflink: "https://leetcode.cn/problems/ju-zhen-zhong-de-lu-jing-lcof/description/",
level: 2,
)<字母迷宫>
#note(
title: [
字母迷宫
],
description: [
#align(center,{image("img/solution1.png")})
字母迷宫游戏初始界面记作 $m × n$ 二维字符串数组 grid,请判断玩家是否能在 grid 中找到目标单词 target。
注意:寻找单词时 必须 按照字母顺序,通过水平或垂直方向相邻的单元格内的字母构成,同时,同一个单元格内的字母 不允许被重复使用 。
],
examples: ([
输入:grid = [
["A","B","C","E"],
["S","F","C","S"],
["A","D","E","E"]
], target = "ABCCED"
输出:true
],[
输入:grid = [
["A","B","C","E"],
["S","F","C","S"],
["A","D","E","E"]
], target = "ABCB"
输出:false
]
),
tips: [
m == grid.length
n = grid[i].length
1 <= m, n <= 6
1 <= target.length <= 15
grid 和 target 仅由大小写英文字母组成
],
solutions: (
( name:[深度优先搜索+剪枝],
text:[
本问题是典型的回溯问题,可使用 深度优先搜索(DFS)+ 剪枝 解决。
- 深度优先搜索: 可以理解为暴力法遍历矩阵中所有字符串可能性。DFS 通过递归,先朝一个方向搜到底,再回溯至上个节点,沿另一个方向搜索,以此类推。
- 剪枝: 在搜索中,遇到 这条路不可能和目标字符串匹配成功 的情况(例如:此矩阵元素和目标字符不同、此元素已被访问),则应立即返回,称之为 可行性剪枝 。
下图中的 word 对应本题的 target 。
#align(center,{image("img/solution2.png")})
1. 递归参数: 当前元素在矩阵 grid 中的行列索引 i 和 j ,当前目标字符在 target 中的索引 k 。
2. 终止条件:
1. 返回 false : (1) 行或列索引越界 或 (2) 当前矩阵元素与目标字符不同 或 (3) 当前矩阵元素已访问过 ( (3) 可合并至 (2) ) 。
2. 返回 true : k = len(target) - 1 ,即字符串 target 已全部匹配。
3. 递推工作:
1. 标记当前矩阵元素: 将 `grid[i][j]` 修改为 空字符 '' ,代表此元素已访问过,防止之后搜索时重复访问。
2. 搜索下一单元格: 朝当前元素的 上、下、左、右 四个方向开启下层递归,使用 或 连接 (代表只需找到一条可行路径就直接返回,不再做后续 DFS ),并记录结果至 res 。
3. 还原当前矩阵元素: 将 `grid[i][j]` 元素还原至初始值,即 `target[k]` 。
4. 返回值: 返回布尔量 res ,代表是否搜索到目标字符串。
],code:[
```cpp
class Solution {
public:
bool wordPuzzle(vector<vector<char>>& grid, string target) {
rows = grid.size();
cols = grid[0].size();
for(int i = 0; i < rows; i++) {
for(int j = 0; j < cols; j++) {
if(dfs(grid, target, i, j, 0)) return true;
}
}
return false;
}
private:
int rows, cols;
bool dfs(vector<vector<char>>& grid, string target, int i, int j, int k) {
if(i >= rows || i < 0 || j >= cols || j < 0 || grid[i][j] != target[k]) return false;
if(k == target.size() - 1) return true;
grid[i][j] = '\0';
bool res = dfs(grid, target, i + 1, j, k + 1) || dfs(grid, target, i - 1, j, k + 1) ||
dfs(grid, target, i, j + 1, k + 1) || dfs(grid, target, i , j - 1, k + 1);
grid[i][j] = target[k];
return res;
}
};
```
]),
),
gain:none,
) |
|
https://github.com/han0126/MCM-test | https://raw.githubusercontent.com/han0126/MCM-test/main/2024亚太杯typst/chapter/chapter7.typ | typst | #import "../template/template.typ": *
= 问题四的模型建立与求解
== 问题分析
根据题目要求利用第三问中所建立的模型进行预测,将所得到的预测结果用直方图和折线图的方式直观地表示出来,同时分析是否符合正态分布。从一般生活中的情形中的随机事件均服从正态分布,因此先默认预测出的结果服从正态分布,再验证其服从正态分布。
== 图像绘制
#table(
columns: (1fr,1fr),
stroke: none,
align: center + horizon,
table.header()[概率直方图][概率折线图],
[#image("../figures/5.jpg", width: 90%)],[#image("../figures/6.jpg",width: 110%)]
)
== 基于$K o l m o g o r o v-S m i r n o v$正态分布检验
=== $K o l m o g o r o v-S m i r n o v$正态分布检验
$K o l m o g o r o v-S m i r n o v( K S)$检验是一种非参数统计检验方法,用于比较一个样本分布与一个理论分布或两个样本之间的差异。它基于累积分布函数$("CDF")$的差异来评估两个分布的相似性或者一个样本是否来自于一个特定的分布。
=== 数据检验
根据$K o l m o g o r o v-S m i r n o v$检验中的单样本$(K S)$检验来判断预测值洪水概率是否服从正态分布。单样本$(K S)$检验时,先得到一个标准正态分布随机变量:
$ Z tilde N(0,1) $
#h(2em)假设洪水概率预测值为随机变量$X$,由于默认$X tilde N(E(X),D(X))$,由预测出来的数据可以得到洪水概率预测值的方差$E(X)$和期望$D(X)$,将洪水概率$X$转化为标准正态分布随机变量:
$ Y=(X-E(X))/(sqrt(D(X))) $
#h(2em)不妨令随机变量$W=Y-Z$计算得到$W$的期望$E(W)$和方差$D(W)$大小可以判断洪水概率是否服从正态分布。可以观察得到$E(W)$和$D(W)$的大小都趋近于$0$,则可以判断出$X$服从正态分布。
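A quick sketch of the same check in Python with SciPy's one-sample KS test (the input file name below is a placeholder, not from the original report):

```python
import numpy as np
from scipy import stats

# predicted flood probabilities X from the model (placeholder input file)
x = np.loadtxt("flood_probability_predictions.txt")

# standardise with the sample mean and standard deviation, then compare against N(0, 1)
y = (x - x.mean()) / x.std(ddof=1)
stat, p_value = stats.kstest(y, "norm")

# a KS statistic near 0 with a large p-value means the normality hypothesis is not
# rejected, matching the H = 0, p = 1 result reported for MATLAB's kstest below
print(stat, p_value)
```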
通过`matlab`中`kstest`函数检验得到:$K S$统计量($H$值)为$0$,$p$值为$1$,这两个结果表明预测出的样本数据非常接近正态分布,而且差异小到不足以拒绝正态分布的原假设#cite(<rf3>)。 |
|
https://github.com/Kasci/LiturgicalBooks | https://raw.githubusercontent.com/Kasci/LiturgicalBooks/master/_general/casoslov/utierne.typ | typst | #show <X>: it => {
if it.location().position().y > 480pt [$ $ #colbreak() #it]
else [#it]
}
#set text(font: "Monomakh Unicode", lang: "sk", fill: black)
#include "/SK/casoslov/utierne/utierenBezKnaza.typ"
#pagebreak()
#set text(font: "Monomakh Unicode", lang: "sk", fill: black)
#include "/SK/casoslov/utierne/postnaUtierenBezKnaza.typ"
#pagebreak()
#set text(font: "Monomakh Unicode", lang: "sk", fill: black)
#include "/SK/casoslov/utierne/velkaUtierenBezKnaza.typ"
#pagebreak()
// #set text(font: "Monomakh Unicode", lang: "cs", fill: black)
// #include "/CSL/casoslov/utierne/utierenBezKnaza.typ"
// #pagebreak()
// #set text(font: "Monomakh Unicode", lang: "cs", fill: black)
// #include "/CSL/casoslov/utierne/velkaUtierenBezKnaza.typ"
// #pagebreak()
// #set text(font: "Monomakh Unicode", lang: "ru", fill: black)
// #include "/CU/casoslov/utierne/utierenBezKnaza.typ"
// #pagebreak()
// #set text(font: "Monomakh Unicode", lang: "ru", fill: black)
// #include "/CU/casoslov/utierne/velkaUtierenBezKnaza.typ" |
|
https://github.com/Toniolo-Marco/git-for-dummies | https://raw.githubusercontent.com/Toniolo-Marco/git-for-dummies/main/slides/advanced/clean.typ | typst | The `git clean` command is a destructive command that is normally used to remove *untracked files*. Files deleted after its use will not be recoverable via git; therefore it needs the `-f` option to be executed.
Multiple options can be combined with this command in order to achieve the desired result; below is the list:
```bash
➜ git clean # Alone will always produce this output
fatal: clean.requireForce is true and -f not given: refusing to clean
➜ git clean -n # To preview files that will be deleted
➜ git clean --dry-run # Same as -n
➜ git clean -d # Remove untracked directories in addition to untracked files
➜ git clean -e <expr> # Exclude files matching the given pattern from being removed.
➜ git clean -X # Remove only files ignored by Git.
➜ git clean -x # Deletes all untracked files, including those ignored by Git.
➜ git clean -i # Interactive Mode
➜ git clean -f # Actually execute git clean
➜ git clean -ff # Execute git clean recursively in sub-directories
➜ git clean -q # Suppress the output
``` |
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/049_The%20Brothers'%20War.typ | typst | #import "@local/mtgset:0.1.0": conf
#show: doc => conf("The Brothers' War", doc)
#include "./049 - The Brothers' War/001_Episode 1: The End.typ"
#include "./049 - The Brothers' War/002_Episode 2: The Beginning.typ"
#include "./049 - The Brothers' War/003_Episode 3: Sword One.typ"
#include "./049 - The Brothers' War/004_Chapter 1: Stronghold.typ"
#include "./049 - The Brothers' War/005_Episode 4: The Ink of Empires.typ"
#include "./049 - The Brothers' War/006_Chapter 2: Antiquities.typ"
#include "./049 - The Brothers' War/007_Chapter 3: Nemesis.typ"
#include "./049 - The Brothers' War/008_Chapter 4: The Dark.typ"
#include "./049 - The Brothers' War/009_Chapter 5: Exodus.typ"
#include "./049 - The Brothers' War/010_Episode 5: As Cruel, As Necessary.typ"
|
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/004%20-%20Dragon's%20Maze/001_Ruric%20Thar.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"<NAME>",
set_name: "Dragon's Maze",
story_date: datetime(day: 03, month: 04, year: 2013),
author: "<NAME>",
doc
)
#emph["Why do guards always look surprised when we bash them?" asked Ruric."I think they expect a bribe," said Thar.]
Ruric swatted away a volley of flaming arrows with a ham fist. "You said this would be easy."
"No, #emph[you] did. You always say they're easy." Thar grunted with effort as he heaved over a charging chariot.
"Well, these guys must not have heard, 'cause they're fighting really hard."
"Really? I didn't notice. So do you have a plan?"
"Why me? You were always Mom's favorite, with her storytellin' and mumblin' and such. Didn't the Old Ones have an answer for everything?"
"You leave Mom out of this. Anyway, didn't Dad teach you any fightin' tricks? Or is his Scab Clan not so tough after all?"
#figure(image("001_Ruric Thar/01.jpg", width: 100%), caption: [<NAME>, the Unbowed | Art by Tyler Jacobson], supplement: none, numbering: none)
A cluster of javelins bounced off the huge ogre's chest, one just missing the larger head.
"Oi! That was too close. We need to smash our way outta here," shouted Ruric.
"Oh, sure, that's your answer to everything. We need a strategy."
"Oooh, what a big word. Did Mom teach you that?"
A wave of armored infantry slammed into Ruric Thar's towering form with a crash. For a few moments, the air was filled with hammering, yelling, and harsh breathing. Then a moment oddly silent.
"There's just too many of these Boros guys for us to get through."
"We can take 'em. What are you, chicken?"
"Did someone say chicken?"
"Did you say that?"
"Why would I say that?"
"I said that. Over here."
Ruric and Thar each looked to one side. Then one head swiveled to search behind while the other bent down.
"Hey! There's a little guy behind us. What're you doing down there, little guy?" asked Thar.
"Sneaking up on us, eh? We'll stomp you into jam!" shouted Ruric.
"You have jam, too? Mebbe we can make a deal," squeaked the ragged goblin. A few plates of dented, scorched metal clung to its battered-looking form.
"Ha!" Ruric guffawed, smashing aside another infantry strike with a massive hand-axe. "A little squirt like you? What are you gonna do that we can't do better?"
"I might be little, but I got big ideas. Big." The runt puffed out its scrawny chest and spat a bloody gob. "Anyway, doesn't look like you're winning."
Thar grimaced. "And you are? What sort of war plans do you cobble rats come up with? Overpower the enemy with stink?"
"Funny," snorted the goblin. "Anyway, I know stuff. More'n you, I bet."
"Yeah?" huffed Ruric. "So why are you stuck here instead of winning everything?"
"Shut up," said Thar, punching a pair of war mastiffs. Yelps echoed through the plaza. "Maybe this fella can help."
"Yeah, that's right! You should have more respect. We Izzet discovered the Maze, after all." The goblin crossed its arms, looking as formidable as a four-foot-tall, green, scrawny stinker can.
"So you're their maze runner?"
The goblin drooped. "They picked someone else." It looked up defiantly. "But I can find the way as good as anyone. I just got sorta... stuck here."
"And you need us to get you out. What's in it for us?" Ruric scowled, then swatted away a flying wedge of shouting skyjeks, punctuating a nearby wall with bloody asterisks. "Ow! One o' 'em got me."
"Squelch is the name. I was trying out this—my latest invention," the goblin jabbed a thumb at one of the larger metal slabs, which clattered to the cobbles and rocked there gently. "An' it worked, too! I just had a little trouble with the landing."
#figure(image("001_Ruric Thar/02.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none)
"So, how's that supposed to help us?" Thar curled a lip. The expression was difficult to discern in his gnarled, scarred face.
"Ho, brother! What's that thing they're wheelin' up now?" Ruric jerked his huge head toward the Boros lines.
"Oh, boulders. It's a ballista."
"A bally-what now?"
"A war machine. Throws big spears," said Thar. "Big as trees."
"I'm not afraid o' trees."
"Well, these trees bite. We gotta stop that thing before it stops us."
"I got just the ticket for that, big fellas," piped the goblin. "I help you out, you help me out. Whaddya say? Do we have a deal?"
Ruric and Thar both laughed, loud and bitter. "Oh, sure, we'll be ever so grateful when—" Thar started, "—you save our hide," finished Ruric.
"You gotta do me a favor in return. Anything I ask. Whenever I want. And a chicken. No, two. Okay?"
"Yeah, yeah, go ahead and work that powerful goblin magic."
The little Izzet spat on one palm and rubbed its skinny hands together. "Just watch me."
Squelch dashed between the ogre's tree-trunk legs and under the shields of the approaching battalion of soldiers, who were intent on <NAME>. Scrambling up a crumbling wall, Squelch hurled itself onto the back of an armored war beast. Then the goblin pulled a metal spike from an unseen pocket and jammed it into the top of the creature's head before jumping away.
#figure(image("001_Ruric Thar/03.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none)
Sparks flew. The war beast staggered, trumpeting, then turned on its yokemate. More anguished bellows followed, along with cracking wood and the screech of iron-bound rims on steel. Human soldiers scattered as the crazed behemoths broke free of their traces, stamped past, and disappeared down the alleys.
The massive engine of war began to topple with a strange grace, slowly falling onto its side. Several wheels turned slowly, squeaking. Moments passed. Then the entire contraption exploded in a whoosh of flame.
#figure(image("001_Ruric Thar/04.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none)
"How'd that happen?" exclaimed Thar, squinting his eyes.
"Who cares?" yelled Ruric. "Let's go!"
The ogre's huge bulk lumbered toward the guildgate across the plaza, trampling over the flaming wreckage and the armored bodies sprawled across the paving stones.
#figure(image("001_Ruric Thar/05.jpg", width: 100%), caption: [Art by <NAME>], supplement: none, numbering: none)
"Where'd that little guy go, anyway?" Ruric swiveled his gaze about until his tusks bounced off the back of Thar's skull.
"Right here!" came a voice from behind. The goblin hopped up on the ogre's shoulders, between the two heads, and grabbed Ruric's tusk for support. "Told ya I could fix it."
"Hey! Leggo!" bellowed Ruric, shaking himself violently.
The goblin squeaked but held on tight. "We had a d-d-deal. You promised."
"Yeah, we did," said Thar. "And you held up your end. Now we can check out this gate. You too, if you want."
"Sure! But that doesn't count as my favor. You got that for free. You still owe me."
"Yeah, yeah," both heads muttered.
"Where we gonna find a chicken 'round here?" Ruric grumbled.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
"How many does that make now?"
"Lessee." Ruric counted off on meaty fingers. "Three... plus, uh, two?... an' another one." He lifted his hand-axe.
"So, eight?"
"Somethin' like that."
"Six," came a high-pitched voice.
"Gotta hit the right one soon."
"Then what are we gonna do?"
"Same thing we always do."
"Smash! Then get the goodies." Ruric made a bashing motion.
"This is so much fun, you guys!" Squelch piped up from his brand-new riding basket, slung over the ogre's back. "We're totally gonna beat all the other runners."
"How long we gotta carry this thing around with us?" whined Ruric.
"Till we win, of course," retorted Thar.
"Hey, fellas, I was thinkin'—you could really use some improvements to that axe of yours. I got some ideas. Auto-chopping action. Maybe some different heads."
"Don't pay attention. Maybe it'll go away on its own." Ruric Thar trudged on.
"Hey guys? You know, we make a really great team.
"Guys?"
|
|
https://github.com/gongke6642/tuling | https://raw.githubusercontent.com/gongke6642/tuling/main/布局/grid/grid.typ | typst | = grid
Arranges content in a grid.
The grid element allows you to arrange content in a grid. You can define the number of rows and columns, as well as the size of the gutters between them. There are multiple sizing modes for columns and rows that can be used to create complex layouts.
While the grid and table elements work very similarly, they are intended for different use cases and carry different semantics. The grid element is meant for presentational and layout purposes, while the `table` element is intended for the broad presentation of multiple related data points. In the future, Typst will annotate its output so that screen readers announce the content of a `table` as tabular, while a grid's content will be announced no differently than multiple content blocks in the document flow. Set and show rules on one of these elements do not affect the other.
A grid's size is determined by the track sizes specified in its arguments. Since each of the sizing parameters accepts the same values, we explain them only once, here. Each sizing argument accepts an array of individual track sizes. A track size is either:
- `auto`: The track is sized to fit its contents. It will be at most as large as the remaining space. If there is more than one `auto` track and, together, they claim more than the available space, the `auto` tracks fairly distribute the available space among themselves.
- A fixed or relative length (e.g. `10pt` or `20% - 1cm`): The track is exactly this size.
- A fractional length (e.g. `1fr`): Once all other tracks have been sized, the remaining space is divided among the fractional tracks according to their fractions. For example, if there are two fractional tracks, each with a fraction of `1fr`, they each take up half of the remaining space.
To specify a single track, the array can be omitted in favor of a single value. To specify multiple `auto` tracks, enter the number of tracks instead of an array. For example, `columns: 3` is equivalent to `columns: (auto, auto, auto)`.
== Example
The example below demonstrates the different track sizing options. It also shows how to use `grid.cell` to make an individual cell span two grid tracks.
#image("屏幕截图 2024-04-16 152543.png")
You can also spread an array of strings or content into a grid to populate its cells.
#image("屏幕截图 2024-04-16 152640.png")
== Styling the grid
The grid's appearance can be customized through different parameters. These are the most important ones:
- `fill` to give all cells a background
- `align` to change how cells are aligned
- `inset` to optionally add internal padding to each cell
- `stroke` to optionally enable grid lines with a certain stroke
If you need to override one of the above options for a single cell, you can use the `grid.cell` element. Likewise, you can override individual grid lines with the `grid.hline` and `grid.vline` elements.
Alternatively, if you need the appearance options to depend on a cell's position (column and row), you may specify a function of the form `(column, row) => value` for `fill` or `align`. You may also use a show rule on `grid.cell`; see that element's examples or the example below for more information.
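For instance, a position-dependent fill might look roughly like this (a small sketch; the colors and the predicate are arbitrary):
```typ
#grid(
  columns: 3,
  // The fill function receives the cell's column and row index (zero-based).
  fill: (col, row) => if calc.even(col + row) { luma(235) },
  ..range(9).map(i => [Cell #i]),
)
```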
Locating most of your styling in set and show rules is recommended, as it keeps the grid's or table's actual usages clean and easy to read. It also allows you to easily change the grid's appearance in one place.
=== Stroke styling precedence
There are three ways to set the stroke of a grid cell: through `grid.cell`'s `stroke` field, by using `grid.hline` and `grid.vline`, or by setting the grid's `stroke` field. When several of these settings are present and conflict, the `hline` and `vline` settings take the highest precedence, followed by the `cell` settings, and finally the `grid` settings.
Furthermore, strokes of a repeated grid header or footer take precedence over regular cell strokes.
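A small sketch of these precedence rules (illustrative only; the exact rendering is best checked against the current compiler):
```typ
#grid(
  columns: 3,
  stroke: gray,                    // grid-level stroke: lowest precedence
  grid.cell(stroke: blue)[A],      // cell-level stroke overrides the grid stroke
  [B], [C],
  grid.hline(stroke: 2pt + red),   // hline/vline strokes override both
  [D], [E], [F],
)
```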
#image("屏幕截图 2024-04-16 153019.png")
#image("屏幕截图 2024-04-16 153301.png")
#image("屏幕截图 2024-04-16 153352.png")
#image("屏幕截图 2024-04-16 153442.png")
== A larger example
`
#set page(height: 13em, width: 26em)
#let cv(..jobs) = grid(
columns: 2,
inset: 5pt,
stroke: (x, y) => if x == 0 and y > 0 {
(right: (
paint: luma(180),
thickness: 1.5pt,
dash: "dotted"
))
},
grid.header(grid.cell(colspan: 2)[
*Professional Experience*
#box(width: 1fr, line(length: 100%, stroke: luma(180)))
]),
..{
let last = none
for job in jobs.pos() {
(
if job.year != last [*#job.year*],
[
*#job.company* - #job.role _(#job.timeframe)_ \
#job.details
]
)
last = job.year
}
}
)
#cv(
(
year: 2012,
company: [Pear Seed & Co.],
role: [Lead Engineer],
timeframe: [Jul - Dec],
details: [
- Raised engineers from 3x to 10x
- Did a great job
],
),
(
year: 2012,
company: [Mega Corp.],
role: [VP of Sales],
timeframe: [Mar - Jun],
details: [- Closed tons of customers],
),
(
year: 2013,
company: [Tiny Co.],
role: [CEO],
timeframe: [Jan - Dec],
details: [- Delivered 4x more shareholder value],
),
(
year: 2014,
company: [Glorbocorp Ltd],
role: [CTO],
timeframe: [Jan - Mar],
details: [- Drove containerization forward],
),
)
`
#image("屏幕截图 2024-04-16 154053.png")
#image("屏幕截图 2024-04-16 154218.png")
== Definitions
Element functions can be customized with `set` and `show` rules.
`grid.cell`: a cell of the grid. You can use this function in the argument list of a grid to override grid style properties for an individual cell or to manually position it within the grid. You can also use this function in show rules to apply certain styles to multiple cells at once.
For example, you can override the position and stroke of a single cell:
#image("屏幕截图 2024-04-16 154306.png")
#image("屏幕截图 2024-04-16 154411.png")
#image("屏幕截图 2024-04-16 154503.png")
#image("屏幕截图 2024-04-16 154532.png")
#image("屏幕截图 2024-04-16 154606.png")
#image("屏幕截图 2024-04-16 154641.png")
#image("屏幕截图 2024-04-16 154711.png")
#image("屏幕截图 2024-04-16 154751.png")
#image("屏幕截图 2024-04-16 154823.png")
#image("屏幕截图 2024-04-16 154849.png")
|
|
https://github.com/daskol/typst-templates | https://raw.githubusercontent.com/daskol/typst-templates/main/cvpr/README.md | markdown | MIT License | # Conference on Computer Vision and Pattern Recognition (CVPR)
## Usage
You can use this template in the Typst web app by clicking _Start from
template_ on the dashboard and searching for `blind-cvpr`.
Alternatively, you can use the CLI to kick this project off using the command
```shell
typst init @preview/blind-cvpr
```
Typst will create a new directory with all the files needed to get you started.
## Example Papers
Here are an example paper in [LaTeX][1] and in [Typst][2].
## Configuration
This template exports the `cvpr2022` and `cvpr2025` styling rule with the
following named arguments.
- `title`: The paper's title as content.
- `authors`: An array of author dictionaries. Each of the author dictionaries
must have a name key and can have the keys department, organization,
location, and email.
- `keywords`: Publication keywords (used in PDF metadata).
- `date`: Creation date (used in PDF metadata).
- `abstract`: The content of a brief summary of the paper or none. Appears at
the top under the title.
- `bibliography`: The result of a call to the bibliography function or none.
The function also accepts a single, positional argument for the body of the
paper.
- `appendix`: Content to append after bibliography section.
- `accepted`: If this is set to `false`, an anonymized document ready for
  submission is produced; `accepted: true` produces the camera-ready version.
  If the argument is set to `none`, a preprint version is produced (suitable
  for uploading to arXiv).
- `id`: Identifier of a submission.
The template will initialize your package with a sample call to the `cvpr2025`
function in a show rule. If you want to change an existing project to use this
template, you can add a show rule at the top of your file.
```typst
#import "@preview/blind-cvpr:0.5.0": cvpr2025
#show: cvpr2025.with(
title: [LaTeX Author Guidelines for CVPR Proceedings],
authors: (authors, affls),
keywords: (),
abstract: [
The ABSTRACT is to be in fully justified italicized text, at the top of the
left-hand column, below the author and affiliation information. Use the
word "Abstract" as the title, in 12-point Times, boldface type, centered
relative to the column, initially capitalized. The abstract is to be in
10-point, single-spaced type. Leave two blank lines after the Abstract,
then begin the main text. Look at previous CVPR abstracts to get a feel for
style and length.
],
bibliography: bibliography("main.bib"),
accepted: false,
id: none,
)
```
## Issues
- In the case of US Letter, the column sizes plus the gap do not add up to the
  text width (2 * 3.25 + 5/16 != 6 + 7/8). It seems the correct gap should be 3/8.
- As of Typst v0.11.0, it is impossible to indent the first paragraph of a
  section (see [typst/typst#311][3]). The workaround is to add the indentation
  manually, as follows.
```typst
== H2
#h(12pt) Manually as space for the first paragraph.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do.
// The second one is just fine.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do.
```
Also, we add `indent` constant as a shortcut for `h(12pt)`.
  This issue is relevant to CVPR 2022. In the 2025 template, the first
  paragraph of a section is not indented.
- As of Typst v0.11.0, citation styles cannot be customized flexibly.
  Specifically, a CVPR 2022 citation looks like `[42]`, where the number is a
  colored hyperlink. To achieve this, we would have to provide a custom CSL
  style and then colorize the number and wrap it in square brackets in Typst
  markup.
- CVPR 2022 requires a simple ruler that numbers lines at regular intervals,
  whilst CVPR 2025 requires a ruler that adds line numbers per line of a
  paragraph or heading. We therefore need the next major Typst release,
  v0.12.0, for the ruler. With that release, we can do the following.
```typst
set par.line(numbering: "1")
show figure: set par.line(numbering: none)
```
For implementation details see [typst/typst#4516][6].
- CVPR 2022 and 2025 require an IEEE-like bibliography style but do not follow
  its guidelines closely. Since writing CSL style files is a tedious task, we
  adopt a close-enough bibliography style from Zotero.
## References
+ CVPR 2022 conference [web site][4].
+ CVPR 2025 conference [web site][5].
[1]: example-paper.latex.pdf
[2]: example-paper.typst.pdf
[3]: https://github.com/typst/typst/issues/311
[4]: https://cvpr2022.thecvf.com/author-guidelines#dates
[5]: https://cvpr.thecvf.com/Conferences/2025
[6]: https://github.com/typst/typst/pull/4516
|
https://github.com/chamik/gympl-skripta | https://raw.githubusercontent.com/chamik/gympl-skripta/main/cj-autori/rolland.typ | typst | Creative Commons Attribution Share Alike 4.0 International | #import "/helper.typ": autor
#autor("<NAME>", "1866", "1944 (78 let)", "prozaik, dramatik, kritik", "filosofii a historii ve Francii", "realismus", "/cj-autori/media/rolland.jpg")
Odpůrce fašismu, bojovník za světový mír. Studoval i učil na Ecole Normale v Paříži.
V roce 1915 dostal Nobelovu cenu za román Jan Kryštof, důvod: vznešený idealismus, popisy postav. Finanční odměnu věnoval lidem postiženým WWI. Humanista, pacifista, levičák, práce v Červeném kříži.
Byl vegetarián, přátelil se s Ghándím, zajímal se o indickou filozofii.
Jedna ze dvou manželek byla Ruska. Chvíli žil v SSSR, potkal se se Stalinem, láska ke Svazu ho přešla. Nacisti ho nechtěli zatknout, aby nezpůsobili humbuk.
Mezi jeho známá díla patří:\
1. *Jan Kryštof* -- román, psán a vydáván deset let, inspirace Beethovenem (1904--1912)
2. *Dobrý člověk ještě žije* -- kratší román (1919)
3. Životopisy -- Beethoven, Michalangelo, Tolstoj...
*Současníci*\
_Erich Maria Remarque_ -- Na západní frontě klid (@zapadni[]), 1928\
_Ernest Hemingway_ -- Stařec a moře (@starec[]), 1952\
_Jaroslav Hašek_ -- Osudy dobrého vojáka Švejka, 1966\
#pagebreak()
|
https://github.com/gaetanserre/Typst-templates | https://raw.githubusercontent.com/gaetanserre/Typst-templates/main/README.md | markdown | ## Typst templates
This repository contains templates for [Typst](https://github.com/typst/typst) paper and beamer documents. It implements mathematical blocks (definition, theorem, proof, etc.), pseudo-code blocks, and is easily extensible and customizable. Feel free to use it and to contribute!
### Usage
Clone it in
```bash
{data-dir}/typst/packages/local
```
where `{data-dir}` is
- `~/.local/share` on Linux
- `~/Library/Application Support` on macOS
- `%APPDATA%` on Windows
Then, in your document, add
```
#import "@local/{beamer, paper}:1.0.0": *
``` |
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/text/raw-tabs_00.typ | typst | Apache License 2.0 |
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page
#set raw(tab-size: 8)
```tsv
Year Month Day
2000 2 3
2001 2 1
2002 3 10
```
|
https://github.com/kfijalkowski1/typst-diff-tool | https://raw.githubusercontent.com/kfijalkowski1/typst-diff-tool/main/typst_ast_parser/data/7_moved_paragraph/expected_moved_paragraph.typ | typst | #import "@preview/pesha:0.2.0": *
#experience(
place: "Hot Topic",
title: "Retail-sales associate",
time: [2004--06],
)[
- Top in-store sales associate in seven out of eight quarters
- Inventory management
- Training and recruiting
]
#experience(
place: "Warsaw",
title: "Job offer",
time: [2010--09],
)[
- Junior rust developer
- a lot of money
- fruit thursday
]
#text(fill: yellow)[=== Education]
#text(fill: yellow)[#experience(
place: "UCLA Anderson School of Management",
time: [2011--13],
)[
- Cumulative GPA: 3.98
- Academic interests: real-estate financing, criminal procedure, corporations
- <NAME> Award
]]
=== Other Work Experience
#experience(
place: "Proximate Cause",
title: "Assistant to the director",
time: [2007--08],
)[
- Helped devise fundraising campaigns for this innovative nonprofit
- Handled lunch orders and general errands
] |
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/docs/cookery/guide/renderers.typ | typst | Apache License 2.0 | #import "/docs/cookery/book.typ": *
#show: book-page.with(title: "Renderers")
= Renderers
See:
+ #cross-link("/guide/renderer/ts-lib.typ")[JavaScript/TypeScript Library]
+ #cross-link("/guide/renderer/node.typ")[Node.js Library]
+ #cross-link("/guide/renderer/react.typ")[React Library]
+ #cross-link("/guide/renderer/angular.typ")[Angular Library]
+ #cross-link("/guide/renderer/vue3.typ")[Vue3 Library]
+ #cross-link("/guide/renderer/hexo.typ")[Hexo Plugin]
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-1E4D0.typ | typst | Apache License 2.0 | #let data = (
("NAG MUNDARI LETTER O", "Lo", 0),
("NAG MUNDARI LETTER OP", "Lo", 0),
("NAG MUNDARI LETTER OL", "Lo", 0),
("NAG MUNDARI LETTER OY", "Lo", 0),
("NAG MUNDARI LETTER ONG", "Lo", 0),
("NAG MUNDARI LETTER A", "Lo", 0),
("NAG MUNDARI LETTER AJ", "Lo", 0),
("NAG MUNDARI LETTER AB", "Lo", 0),
("NAG MUNDARI LETTER ANY", "Lo", 0),
("NAG MUNDARI LETTER AH", "Lo", 0),
("NAG MUNDARI LETTER I", "Lo", 0),
("NAG MUNDARI LETTER IS", "Lo", 0),
("NAG MUNDARI LETTER IDD", "Lo", 0),
("NAG MUNDARI LETTER IT", "Lo", 0),
("NAG MUNDARI LETTER IH", "Lo", 0),
("NAG MUNDARI LETTER U", "Lo", 0),
("NAG MUNDARI LETTER UC", "Lo", 0),
("NAG MUNDARI LETTER UD", "Lo", 0),
("NAG MUNDARI LETTER UK", "Lo", 0),
("NAG MUNDARI LETTER UR", "Lo", 0),
("NAG MUNDARI LETTER E", "Lo", 0),
("NAG MUNDARI LETTER ENN", "Lo", 0),
("NAG MUNDARI LETTER EG", "Lo", 0),
("NAG MUNDARI LETTER EM", "Lo", 0),
("NAG MUNDARI LETTER EN", "Lo", 0),
("NAG MUNDARI LETTER ETT", "Lo", 0),
("NAG MUNDARI LETTER ELL", "Lo", 0),
("NAG MUNDARI SIGN OJOD", "Lm", 0),
("NAG MUNDARI SIGN MUHOR", "Mn", 232),
("NAG MUNDARI SIGN TOYOR", "Mn", 232),
("NAG MUNDARI SIGN IKIR", "Mn", 220),
("NAG MUNDARI SIGN SUTUH", "Mn", 230),
("NAG MUNDARI DIGIT ZERO", "Nd", 0),
("NAG MUNDARI DIGIT ONE", "Nd", 0),
("NAG MUNDARI DIGIT TWO", "Nd", 0),
("NAG MUNDARI DIGIT THREE", "Nd", 0),
("NAG MUNDARI DIGIT FOUR", "Nd", 0),
("NAG MUNDARI DIGIT FIVE", "Nd", 0),
("NAG MUNDARI DIGIT SIX", "Nd", 0),
("NAG MUNDARI DIGIT SEVEN", "Nd", 0),
("NAG MUNDARI DIGIT EIGHT", "Nd", 0),
("NAG MUNDARI DIGIT NINE", "Nd", 0),
)
|
https://github.com/yhtq/Notes | https://raw.githubusercontent.com/yhtq/Notes/main/复变函数/作业/hw9.typ | typst | #import "../../template.typ": *
// Take a look at the file `template.typ` in the file panel
// to customize this template and discover how it works.
#show: note.with(
title: "作业9",
author: "YHTQ",
date: none,
logo: none,
withOutlined : false,
withTitle :false,
withHeadingNumbering: false
)
(应交日期为 5 月 24 日)
= p150
== 5
设 $B(z, r) subset G$,则有 $overline(B(z, r/2)) subset G$,由内闭一致收敛可得 $f_n|_(overline(B(z, r/2)))$ 一致收敛于 $f | _(overline(B(z, r/2)))$,由分析学的熟知结论可得 $f_n (z_n) -> f(z)$
== 6
任取紧集 $X subset G$,往证 $f_n$ 在 $X$ 上一致收敛即可。如若不然,根据一致收敛的判别法和紧集知存在点列 $z_n -> z_0$ 使得:
$
f_n (z_n)
$
不收敛于 $f(z_0)$\
由于 $f$ 是连续函数,可设 $forall epsilon > 0, exists delta > 0$ 使得:
$
abs(z - z_0) < delta => abs(f(z) - f(z_0)) < epsilon
$
无妨假设 $forall n in NN, abs(z_n - z_0) < delta$,进而有:
$
forall n in NN, abs(f(z_n) - f(z_0)) < epsilon
$<eq3>
由 $z_0$ 处的点收敛,无妨设:
$
forall n, abs(f_n (z_0) - f(z_0)) < epsilon
$<eq4>
再由 $f_1$ 的连续性,同样可无妨设 $forall n$ 均有:
$
abs(f_1 (z_n) - f_1 (z_0)) < epsilon
$<eq5>
如此将有:
$
abs(f_n (z_n) - f(z_0))
&<= abs(f_n (z_n) - f (z_n)) + abs(f(z_n) - f(z_0)) \
&= (f (z_n) - f_n (z_n)) + abs(f(z_n) - f(z_0))\
&< f (z_n) - f_n (z_n) + epsilon\
&< f(z_n) - f_1 (z_n) + epsilon\
&= f(z_n) - f(z_0) + f(z_0) - f_1 (z_0) + f_1 (z_0) - f_1 (z_n) + epsilon\
$
分别利用@eq3, @eq4, @eq5 可得上式 $< 4 epsilon$,矛盾!
== 7
#lemmaLinear[][
设 $X$ 是度量空间, $S$ 在其中正规(任何序列都有收敛子列), $x_n in S$ 的所有收敛子列有公共的极限,则 $x_n$ 收敛
]
#proof[
设公共极限为 $x$,若 $x_n$ 不收敛于 $x$ 则可找到子列使得 $exists epsilon > 0$:
$
d(x_(n_i), x) > epsilon
$
然而 $x_(n_i)$ 还有收敛子列,该子列也是 $x_n$ 的子列,由题设可知该子列收敛于 $x$,与上式显然矛盾!
]
首先,由 $forall z > 0, lim f_n (z)$ 存在可有限可得 ${f_n} (z)$ 有界,当然有紧的闭包,再由等度连续可知 ${f_n}$ 正规。根据条件 $lim f_n (z) = f(z), forall z$ 不难验证 $f_n$ 的所有收敛子列的极限只能为 $f$,当然由引理可知 $f_n -> f$
= p217
== 4
条件表明:
$
f_n|_(D_n sect D_(n-1)) = f_(n-1)|_(D_n sect D_(n-1))
$
既然 $D_n sect D_(n-1)$ 是开集,当然就有:
$
f'_n|_(D_n sect D_(n-1)) = f'_(n-1)|_(D_n sect D_(n-1))
$
表明当然这也是 $f'$ 的解析延拓
= p221
== 3
若某个 $R_(t, u) = infinity$,则可将其延拓至整函数 $f$,从而所有 $f_(t, u)$ 都是 $f$ 的限制,进而所有 $R_(t, u)$ 都是 $infinity$,因此下设 $R_(t, u) < infinity$
任取 $t_0, u_0$,可以找到 $gamma_(u_0) (t_0)$ 的开邻域 $B$ 使得 $B subset D_(t_0, u_0)$ 且 $f$ 可在 $B$ 上幂级数展开。任取 $(t_1, u_1) in B$,在 $gamma_(u_1) (t_1)$ 处的展开至少在 $B$ 内都收敛,也即:
$
R(t_1, u_1) >= R(t_0, u_0) - d(gamma_(u_0) (t_0), gamma_(u_1) (t_1))
$
同理可以得到另一个方向,进而:
$
abs(R(t_1, u_1) - R(t_0, u_0)) <= d(gamma_(u_0) (t_0), gamma_(u_1) (t_1))
$
由 $Gamma$ 的连续性,$u_1, t_1$ 靠近 $u_0, t_0$ 时上式右侧趋于零,左侧也趋于零,证毕 |
|
https://github.com/mitex-rs/mitex | https://raw.githubusercontent.com/mitex-rs/mitex/main/packages/mitex/specs/README.md | markdown | Apache License 2.0 | # MiTeX Command Specification
This includes ways to define specs, which can be used to define everything in the existing standard packages [latex-std](./latex/standard.typ).
Even if you don't know Rust at all, you can still add missing TeX commands to MiTeX by modifying [latex-std](./latex/standard.typ), since it is written in Typst! You can open an issue to request the commands you want added, or you can edit the files and submit a pull request.
## Introduce
For a translation process, for example, we have:
```
\frac{1}{2}
===[parser]===> AST ===[converter]===>
#eval("$frac(1, 2)$", scope: (frac: (num, den) => $(num)/(den)$))
```
You can use the `#mitex-convert()` function to get the Typst Code generated from LaTeX Code.
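For example (the import below is illustrative; use the MiTeX package version you actually have installed):
```typst
#import "@preview/mitex:0.2.4": mitex-convert

// Prints the Typst code that MiTeX generates for a LaTeX snippet.
#mitex-convert(`\alpha + \frac{1}{2}`)
```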
To achieve this, we need to define four components for LaTeX commands:
- `cmd`: The name of the LaTeX command. Since LaTeX commands all start with `\`, we remove the leading `\` and use the command name as the key in the dictionary.
- There is also the concept of environments, for example, `\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}` is a matrix environment.
- `args`: The argument pattern of the LaTeX command. For example:
- `\alpha` has no arguments;
- `\hat{x}` matches one argument on the right;
- `\frac{1}{2}` matches two arguments on the right;
- `\sqrt[3]{2}` includes optional arguments;
- `\sum\limits` has `limits` matching one argument on the left;
- `\displaystyle` greedily matches all arguments on the right;
- `x \over y` as an infix operator greedily matches all arguments on both sides.
- `alias`: The alias of the LaTeX command in Typst.
- The alias can be an existing symbol or function, for example, `alpha` and `binom(n, k)`;
- It can also be a key in the `mitex-scope` for `eval`, for example, our self-defined `frac`.
- `handle`: The value in the `mitex-scope`, which is our self-defined symbol or processing function.
- For example, the `frac` key corresponds to the value `(num, den) => $(num)/(den)$`.
MiTeX would need them for converting your TeX commands into typst code.
At the Typst level, we need `alias` and `handle`, which are then combined into the `mitex-scope` passed to the `eval` function as the `scope` parameter.
In short, a command spec is currently a Typst dict whose keys are the names of TeX commands and whose values are opaque spec items containing the information MiTeX needs.
The [specification file for standard LaTeX](./latex/standard.typ) constructs and exports such a command spec.
Next, we provide a bunch of convenient functions for constructing opaque spec items.
## Reference
### `define-sym`
Define a normal symbol, as no-argument commands like `\alpha`.
```typst
#let define-sym(s, sym: none) = { .. }
```
**Arguments:**
- s (str): Alias command for typst handler.
For example, alias `\prod` to typst's `product`.
- sym (content): The specific content, as the value of alias in mitex-scope.
For example, there is no direct alias for \negthinspace symbol in typst,
but we can add `h(-(3/18) * 1em)` ourselves
**Return:** An opaque spec item and a scope item (none for no scope item)
### `define-greedy-cmd`
Define a greedy command, like `\displaystyle`.
```typst
#let define-greedy-cmd(s, handle: none) = { .. }
```
**Arguments:**
- s (str): Alias command for typst handler.
For example, alias `\displaystyle` to typst's `mitexdisplay`, as the key in mitex-scope.
- handle (function): The handler function, as the value of alias in mitex-scope.
  It receives one content argument containing everything the greedy match captured.
  For example, we define `mitexdisplay` as `math.display`.
**Return:** An opaque spec item and a scope item (none for no scope item)
### `define-infix-cmd`
Define an infix command, like `\over`.
```typst
#let define-infix-cmd(s, handle: none) = { .. }
```
**Arguments:**
- s (str): Alias command for typst handler.
For example, alias `\over` to typst's `frac`, as the key in mitex-scope.
- handle (function): The handler function, as the value of alias in mitex-scope.
It receives two content arguments, as (prev, after) arguments.
For example, we define `\over` to `frac: (num, den) => $(num)/(den)$`
**Return:** An opaque spec item and a scope item (none for no scope item)
### `define-glob-cmd`
Define a glob (Global Wildcard) match command with a specified pattern for matching args
Kind of item to match:
- Bracket/b: []
- Parenthesis/p: ()
- Term/t: any rest of terms, typically {} or single char
```typst
#let define-glob-cmd(pat, s, handle: none) = { .. }
```
**Arguments:**
- pat (pattern): The pattern for glob-cmd
For example, `{,b}t` for `\sqrt` to support `\sqrt{2}` and `\sqrt[3]{2}`
- s (str): Alias command for typst handler.
For example, alias `\sqrt` to typst's `mitexsqrt`, as the key in mitex-scope.
- handle (function): The handler function, as the value of alias in mitex-scope.
It receives variable length arguments, for example `(2,)` or `([3], 2)` for sqrt.
  Therefore you need to use `(..args) => { .. }` to receive them.
**Return:** An opaque spec item and a scope item (none for no scope item)
### `define-cmd`
Define a command with a fixed number of arguments, like `\hat{x}` and `\frac{1}{2}`.
```typst
#let define-cmd(num, alias: none, handle: none) = { .. }
```
**Arguments:**
- num (int): The number of arguments for the command.
- alias (str): Alias command for typst handler.
For example, alias `\frac` to typst's `frac`, as the key in mitex-scope.
- handle (function): The handler function, as the value of alias in mitex-scope.
It receives fixed number of arguments, for example `frac(1, 2)` for `\frac{1}{2}`.
**Return:** An opaque spec item and a scope item (none for no scope item)
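Putting a few of these helpers together, a spec fragment could look roughly like this (a sketch only, assuming the helper functions are in scope; see [latex-std](./latex/standard.typ) for how the real spec is assembled):
```typst
// Hypothetical fragment of a command spec dictionary.
#let my-spec = (
  alpha: sym,                          // \alpha      -> alpha
  prod: define-sym("product"),         // \prod       -> product
  hat: cmd1,                           // \hat{x}     -> hat(x)
  frac: define-cmd(2, alias: "frac"),  // \frac{1}{2} -> frac(1, 2)
  limits: left1-op("limits"),          // \sum\limits -> matches one argument on the left
)
```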
### `define-env`
Define an environment with a fixed number of arguments, like `\begin{array}{lr}`.
```typst
#let define-env(num, alias: none, handle: none) = { .. }
```
**Arguments:**
- num (int): The number of arguments as environment options for the environment.
- alias (str): Alias command for typst handler.
For example, alias `\begin{array}{lr}` to typst's `mitexarray`,
and alias `\begin{aligned}` to typst's `aligned`, as the key in mitex-scope.
- kind (str): environment kind, it could be "is-math", "is-cases", "is-matrix",
"is-itemize", "is-enumerate"
- handle (function): The handler function, as the value of alias in mitex-scope.
It receives fixed number of named arguments as environment options,
for example `array(arg0: ..)` or `array(arg0: .., arg1: ..)`.
  And it receives variable-length arguments as the environment body.
  Therefore you need to use `(..args) => { .. }` to receive them.
**Return:** An opaque spec item and a scope item (none for no scope item)
### `sym`
Define a symbol without alias and without handler function, like \alpha => alpha
```typst
#let sym = ((kind: "sym"), none)
```
**Return:** An opaque spec item and no scope item (none for no scope item)
### `of-sym`
```typst
#let of-sym(handle) = ((kind: "sym"), (handle: handle))
```
Define a symbol without alias and with handler function,
like \negthinspace => h(-(3/18) * 1em)
**Arguments:**
- handle (function): The handler function, as the value of alias in mitex-scope.
For example, define `negthinspace` to handle `h(-(3/18) * 1em)` in mitex-scope
**Return:** A symbol spec and a scope item
### `left1-op`
Define a left1-op command without handler, like `\limits` for `\sum\limits`
```typst
#let left1-op(alias) = ((kind: "cmd", args: ( kind: "left1" ), alias: alias), none)
```
**Arguments:**
- alias (str): Alias command for typst handler.
For example, alias `\limits` to typst's `limits`
and alias `\nolimits` to typst's `scripts`
**Return:** A cmd spec and no scope item (none for no scope item)
### `cmd1`
Define a cmd1 command like \hat{x} => hat(x)
```typst
#let cmd1 = ((kind: "cmd1"), none)
```
**Return:** A cmd1 spec and a scope item (none for no scope item)
### `cmd2`
Define a cmd2 command like \binom{1}{2} => binom(1, 2)
```typst
#let cmd2 = ((kind: "cmd2"), none)
```
**Return:** A cmd2 spec and a scope item (none for no scope item)
### `matrix-env`
Define a matrix environment without handler
```typst
#let matrix-env = ((kind: "matrix-env"), none)
```
**Return:** A matrix-env spec and a scope item (none for no scope item)
|
https://github.com/kdog3682/typkit | https://raw.githubusercontent.com/kdog3682/typkit/main/0.1.0/src/pages.typ | typst | #import "headers/index.typ" as headers
#import "footers/index.typ" as footers
#let dialogue = (
paper: "us-letter",
margin: (top: 1in, left: 1in, right: 1in, bottom: 0.85in),
footer: footers.standard,
)
#let standard = (
paper: "us-letter",
margin: (
top: 1in,
left: 1in,
right: 1in,
bottom: 0.85in,
),
)
#let no-margin = (
paper: "us-letter",
margin: (
top: 0in,
left: 0in,
right: 0in,
bottom: 0in,
),
)
#let workbook = (
paper: "us-letter",
margin: (
top: 1in,
left: 1in,
right: 1in,
bottom: 1in,
),
footer: footers.standard,
)
|
|
https://github.com/ad-si/invoice-maker | https://raw.githubusercontent.com/ad-si/invoice-maker/master/examples/load-yaml.typ | typst | ISC License | #import "../invoice-maker.typ": *
#import "../fixtures/example-data.typ": *
#show: invoice.with(
banner-image: image("../fixtures/banner.png"),
data: yaml("../fixtures/example-data.yaml"),
styling: ( font: none ), // Explicitly use Typst's default font
)
|
https://github.com/dashuai009/dashuai009.github.io | https://raw.githubusercontent.com/dashuai009/dashuai009.github.io/main/src/content/blog/033.typ | typst | #let date = datetime(
year: 2022,
month: 9,
day: 1,
)
#metadata((
title: "欧拉函数与中国剩余定理",
subtitle: [math,数论,中国剩余定理],
author: "dashuai009",
description: "欧拉函数与中国剩余定理",
pubDate: date.display(),
))<frontmatter>
#import "../__template/style.typ": conf
#show: conf
== $phi.alt$函数公式
<phi函数公式>
- 如果p是素数,且$k gt.eq 1$,则 $ phi.alt (p^k) = p^k - p^(k - 1) $
- 如果$g c d (m , n) = 1$,则$phi.alt (m n) = phi.alt (m) phi.alt (n)$
== 证明
<证明>
瞪眼法可得证。
== 中国剩余定理
<中国剩余定理>
设$a_1 , a_2 , dots.h , a_k$是任意整数,且$n_1 , n_2 , dots.h , n_k$两两互素,则同余方程组
$
{(
x equiv a_1 (#h(0em) mod med n_1)\
x equiv a_2 (#h(0em) mod med n_2)\
dots.h\
x equiv a_k (#h(0em) mod med n_k)
)
$
有解。令$N = product_(i = 1)^k n_i$,则同余方程组的解在模N下,唯一。
唯一为:
$ x = (sum_(i = 1)^k a_i N / n_i m_i) #h(0em) mod med N $
其中$m_i$为$N / n_i$在模$n_i$下的逆元
|
|
https://github.com/WarriorHanamy/typst_tutorial | https://raw.githubusercontent.com/WarriorHanamy/typst_tutorial/main/slides.typ | typst | #set heading(numbering: "(I)")
#show heading: it => [
#set align(center)
#set text(font: "Inria Serif")
\~ #emph(it.body)
\~
]
= Last Week's Plan
- Finish the MPC experiment of quadrotor.
= Accomplishment
- [x] Coded and tested the immediate-reaction mode change between `manual` and `offboard` modes.
- [x] C_NMPC Interface.
- [x] Connectivity between plain simulation and C_NMPC.
- [ ] Connectivity between Gazebo simulation and C_NMPC. (*rotation*)
#figure(
image("Figures/cnmpc_quad.png", width: 80%),
caption: [Time Statistics of quadrotor MPC],
)
= This Week's Plan
- Finish the MPC experiment of `quadrotor`.
- [ ] PX4 interface transcription of C_NMPC commands.
- [ ] Real-world experiment of C_NMPC. (*rotation*)
|
|
https://github.com/cafeclimber/typst-psu | https://raw.githubusercontent.com/cafeclimber/typst-psu/main/ch1.typ | typst | #heading("Introduction", supplement: "Chapter") <intro>
#pagebreak()
|
|
https://github.com/MattiaOldani/Informatica-Teorica | https://raw.githubusercontent.com/MattiaOldani/Informatica-Teorica/master/capitoli/calcolabilità/10_sistemi_di_programmazione.typ | typst | #import "../alias.typ": *
#import "@preview/algo:0.3.3": code
#import "@preview/lemmify:0.1.5": *
#let (
theorem, lemma, corollary,
remark, proposition, example,
proof, rules: thm-rules
) = default-theorems("thm-group", lang: "it")
#show: thm-rules
#show thm-selector("thm-group", subgroup: "theorem"): it => block(
it,
stroke: red + 1pt,
inset: 1em,
breakable: true
)
#show thm-selector("thm-group", subgroup: "corollary"): it => block(
it,
stroke: red + 1pt,
inset: 1em,
breakable: true
)
#show thm-selector("thm-group", subgroup: "proof"): it => block(
it,
stroke: green + 1pt,
inset: 1em,
breakable: true
)
= Sistemi di programmazione
Fin'ora, nello studio dei *sistemi di programmazione*, ci siamo concentrati su una loro caratteristica principale: la _potenza computazionale_. Con la tesi di Church-Turing abbiamo affermato che ogni sistema di programmazione ha come potenza computazionale $cal(P)$, cioè l'insieme delle funzioni ricorsive parziali. Oltre a questo, vorremmo sapere altro sui sistemi di programmazione, ad esempio la possibilità o l'impossibilità di scrivere programmi su certi compiti.
Vorremmo, come sempre, rispondere nel modo più rigoroso e generale possibile, quindi non considereremo un particolare sistema di programmazione, ma studieremo proprietà valide per tutti i sistemi di programmazione "ragionevoli": dobbiamo astrarre un sistema di calcolo generale che permetta di rappresentarli tutti.
== Assiomi di Rogers
_Assiomatizzare_ significa _dare un insieme di proprietà_ che i sistemi di calcolo devono avere per essere considerati buoni. Da qui in poi individueremo un sistema di programmazione con $ {phi_i}_(i in NN), $ ovvero l'insieme delle funzioni calcolabili con quel sistema, in altre parole l'insieme delle sue semantiche. Il pedice $i in NN$ indica i programmi (_codificati_) di quel sistema.
Troveremo tre proprietà che un sistema di programmazione deve avere per essere considerato buono e lo faremo prendendo spunto dal sistema RAM.
=== Potenza computazionale
La prima proprietà che vogliamo in un sistema di programmazione riguarda la *potenza computazionale*. Dato il sistema ${phi_i}$ vogliamo che $ {phi}_i = cal(P). $ Questa proprietà è ragionevole, infatti non vogliamo considerare sistemi troppo potenti, che vanno oltre $cal(P)$, o poco potenti, che sono sotto $cal(P)$. Vogliamo la _giusta_ potenza computazionale.
=== Interprete universale
La seconda proprietà che vogliamo in un sistema di programmazione riguarda la presenza di un *interprete universale*. Un interprete universale è un programma $mu in NN$ tale che $ forall x,n in NN quad phi_mu (cantor(x,n)) = phi_n (x). $ In sostanza è un programma scritto in un certo linguaggio che riesce a interpretare ogni altro programma $n$ scritto nello stesso linguaggio su qualsiasi input $x$.
La presenza di un interprete universale permette un'*algebra* sui programmi, quindi permette la trasformazione di quest'ultimi.
=== Teorema $S_1^1$
L'ultima proprietà che vogliamo in un sistema di programmazione riguarda il soddisfacimento del teorema $S_1^1$. Questo teorema afferma che è possibile costruire automaticamente programmi specifici da programmi più generali, ottenuti fissando alcuni degli input.
Supponiamo di avere $ P in programmi bar.v phi_P (cantor(x,y)) = x + y. $ Un programma RAM per questa funzione potrebbe essere $ P equiv & R_2 arrow.long.l cantorsin(R_1) \ & R_3 arrow.long.l cantordes(R_1) \ & R_0 arrow.long.l R_2 + R_3 quad . $
_Siamo in grado di produrre automaticamente un programma $overline(P)$ che riceve in input solo $x$ e calcola, ad esempio, $x+3$ a partire da $P$ e 3?_
$ (P,3) arrow.long.squiggly S^1_1 in programmi arrow.long.squiggly overline(P). $
Per generare $overline(P)$, potrei ad esempio fare $ overline(P) equiv & inc(R_0) \ & inc(R_0) \ & inc(R_0) \ & R_1 arrow.long.l cantor(R_1, R_0) \ & R_0 arrow.long.l 0 \ & P quad . $
Vediamo come questo programma segua principalmente quattro fasi:
+ si fissa il valore $y$ in $R_0$;
+ si calcola l'input $cantor(x,y)$ del programma $P$;
+ si resetta la memoria alla situazione iniziale, tranne per il registro $R_1$;
+ si chiama il programma $P$.
In generale, il programma $S_1^1$ implementa la funzione $ S_1^1 (n,y) = overline(n), $ con $n$ codifica di $P$ e $overline(n)$ codifica del nuovo programma $overline(P)$, tale che $ phi_(overline(n)) (x) = phi_n (cantor(x,y)) . $
Questo teorema è molto comodo perché permette di calcolare facilmente la codifica $overline(n)$: avendo $n$ devo solo codificare le istruzioni iniziali di fissaggio di $y$, la funzione coppia di Cantor per creare l'input e l'azzeramento dei registri utilizzati. In poche parole, $ S_1^1 (n,y) = overline(n) = cantor(underbracket(0\, dots\, 0, y), s, t, n), $ con $s$ codifica dell'istruzione che calcola la funzione coppia di Cantor e $t$ codifica dell'istruzione di azzeramento. $S_1^1$ è una funzione totale e programmabile, quindi $S_1^1 in cal(T)$ funzione *ricorsiva totale*.
In sintesi, per RAM, esiste una funzione $S_1^1$ *ricorsiva totale* che accetta come argomenti
+ il codice $n$ di un programma che ha $2$ input;
+ un valore $y$ cui fissare il secondo input
e produce il codice $overline(n) = S_1^1(n,y)$ di un programma che si comporta come $n$ nel caso in cui il secondo input è fissato ad essere $y$.
#theorem(numbering: none)[
Dato $phi_i$ sistema RAM, esiste una funzione $S_1^1 in cal(T)$ tale che $ forall n,x,y in NN quad phi_n (cantor(x,y)) = phi_(S_1^1 (n,y)) (x). $
]
Questo teorema ci garantisce un modo di usare l'algebra sui programmi.
Inoltre, ha anche una *forma generale* $S_n^m$ che riguarda programmi a $m+n$ input in cui si prefissano $n$ input e si lasciano variare i primi $m$.
#theorem(numbering: none)[
Dato $phi_i$ sistema RAM, esiste una funzione $S_n^m in cal(T)$ tale che per ogni programma $k in NN$ e ogni input $wstato(x) in NN^m$ e $wstato(y) in NN^n$ vale $ phi_k (cantor(wstato(x), wstato(y))) = phi_(S_n^m (k,wstato(y))) (cantor(wstato(x))). $
]
== Sistemi di programmazione accettabili (SPA)
Le tre caratteristiche che abbiamo identificato formano gli *assiomi di Rogers* (1953). Questi caratterizzano i sistemi di programmazioni su cui ci concentreremo, che chiameremo _Sistemi di Programmazione Accettabili_.
Questi assiomi non sono restrittivi: tutti i modelli di calcolo ragionevoli sono di fatto SPA.
== Compilatori tra SPA
Sappiamo che esiste un compilatore da WHILE a RAM, _ma è l'unico?_
Dati gli SPA ${phi_i}$ e ${Psi_i}$, un compilatore dal primo al secondo è una funzione $t: NN arrow.long NN$ che soddisfa le proprietà di:
+ *programmabilità*: esiste un programma che implementa $t$;
+ *completezza*: $t$ compila ogni $i in NN$;
+ *correttezza*: $forall i in NN$ vale $phi_i = Psi_(t(i))$.
Visto quanto fatto fin'ora, possiamo dire che i primi due punti possono essere scritti come $t in cal(T)$.
#theorem(numbering: none)[
Dati due SPA, esiste sempre un compilatore tra essi.
]
#proof[
\ Consideriamo ${phi_i}$ e ${Psi_i}$ due SPA. Valgono i tre assiomi di Rogers:
+ ${phi_i} = cal(P)$;
+ $exists u : phi_u (cantor(x,n))=phi_n (x)$;
+ $exists S_1^1 in cal(T) : phi_n (cantor(x,y))=phi_(S_1^1(e,i)) (x)$;
Voglio trovare un compilatore $t in cal(T)$ che sia corretto. Ma allora $ phi_i (x) =^((2)) phi_u (cantor(x,i)) =^((1)) Psi_e (cantor(x,i)) =^((3)) Psi_(S_1^1 (e,i)) (x) $
In poche parole, il compilatore cercato è la funzione $t(i) = S_1^1 (e,i)$ per ogni $i in NN$.
\ Infatti:
+ $t in cal(T)$ in quanto $S_1^1 in cal(T)$;
+ $t$ corretto perché $phi_i = Psi_(t(i))$.
]
Notiamo la portata molto generale del teorema: non ci dice quale è il compilatore, ma ci dice che sicuramente esiste.
#corollary(numbering: none)[
Dati gli SPA $A,B,C$ esiste sempre un compilatore da $A$ a $B$ scritto nel linguaggio $C$.
]
#proof[
\ Per il teorema precedente esiste un compilatore $t in cal(T)$ da $A$ a $B$.
\ $C$ è un SPA, quindi contiene programmi per tutte le funzioni ricorsive parziali, dunque ne contiene uno anche per $t$, che è una funzione ricorsiva totale.
]
In pratica, ciò vuol dire che per qualunque coppia di linguaggi, esistenti o che verranno progettati in futuro, sarò sempre in grado di scrivere un compilatore tra essi nel linguaggio che più preferisco. È un risultato assolutamente generale.
Un risultato più potente del teorema precedente è invece dato dal *teorema di Rogers*.
#theorem(
name: "Teorema di isomorfismo tra SPA",
numbering: none
)[
Dati due SPA ${phi_i}$ e ${Psi_i}$, esiste $t : NN arrow.long NN$ tale che:
+ $t in cal(T)$;
+ $forall i in NN quad phi_i = Psi_(t(i))$;
+ $t$ è invertibile, quindi $t^(-1)$ può essere visto come un decompilatore.
]
I primi due punti sono uguali al teorema precedente e ci dicono che il compilatore $t$ è programmabile e completo (punto 1) e corretto (punto 2).
== Teorema di ricorsione
Introduciamo il *teorema di ricorsione*, un risultato utilissimo che utilizzeremo per rispondere ad alcuni quesiti sugli SPA.
#theorem(numbering: none)[
Dato un SPA ${phi_i}$, per ogni $t : NN arrow.long NN$ ricorsiva totale vale $ exists n in NN bar.v phi_n = phi_t(n). $
]
Diamo una chiave di lettura a questo teorema:
- consideriamo $t$ come un programma che prende in input un programma $n$ e lo cambia nel programma $t(n)$, anche nella maniera più assurda;
- il teorema dice che qualsiasi sia la natura di $t$, esisterà sempre almeno un programma il cui significato *non sarà stravolto* da $t$.
#proof[
\ Siamo in un SPA ${phi_i}$ quindi valgono i tre assiomi di Rogers. D'ora in avanti, per semplicità, scriveremo $phi_n (x,y)$ al posto di $phi_n (cantor(x,y))$. Dobbiamo esibire, data una funzione $t$, uno specifico valore di $n$.
Partiamo con il mostrare che $ phi_(phi_i (i)) (x) =^((2)) phi_(phi_u (i,i)) (x) =^((2)) phi_u (x, phi_u (i,i)) arrow.long.squiggly f(x,i) in cal(P). $ Infatti, la funzione $f(x,i)$ è composizione di funzioni ricorsive parziali, quindi anch'essa lo è.
Continuiamo affermando che $ f(x,i) =^((1)) phi_e (x, i) =^((3)) phi_(S_1^1 (e,i)) (x). $
Consideriamo ora la funzione $t(S_1^1 (e,i))$: essa è ricorsiva totale in $i$ perché composizione di $t$ e di $S_1^1$ ricorsive totali, quindi $ exists m in NN bar.v phi_m (i) = t(S_1^1 (e,i)). $
Abbiamo quindi mostrato che $ (A) & quad quad phi_(phi_i (i)) (x) = phi_(S_1^1 (e,i)) (x); \ (B) & quad quad phi_m (i) = t(S_1^1 (e,i)) . $
Fissiamo $n = S_1^1 (e,m)$ e mostriamo che vale $phi_n = phi_t(n)$, ovvero il teorema di ricorsione.
$ phi_n (x) & =^("def") phi_(S_1^1 (e,m)) (x) =^((A)) phi_(phi_m (m)) (x) . \ phi_t(n) (x) & =^("def") phi_(t(S_1^1 (e,m))) (x) =^((B)) phi_(phi_m (m)) (x) . $
Ho ottenuto lo stesso risultato, quindi il teorema è verificato.
]
== Due quesiti sugli SPA
Ci poniamo due quesiti riguardo gli SPA:
+ *programmi auto-replicanti*: dato un SPA, _esiste all'interno di esso un programma che stampa se stesso (il proprio listato)?_
Ovviamente, questa operazione deve essere fatta senza aprire il file che contiene il listato.
Questi programmi sono detti *Quine*, in onore del filosofo e logico <NAME> (1908-2000) che li descrisse per la prima volta.
La risposta è positiva per molti linguaggi: ad esempio, in Python il programma
#align(center)[
#code(
fill: luma(240),
indent-guides: 0.2pt + red,
inset: 10pt,
line-numbers: false,
radius: 4pt,
row-gutter: 6pt,
stroke: 1pt + black
)[
```python
a='a=%r;print(a%%a)';print(a%a)
```
]
]
stampa esattamente il proprio listato. Noi, però, vogliamo rispondere tramite una dimostrazione rigorosa, quindi ambientiamo la domanda nel sistema di programmazione RAM, che diventa $ exists j in NN bar.v phi_j (x) = j "per ogni input" x in NN ? $
+ *compilatori completamente errati*: dati due SPA ${phi_i}$ e ${Psi_j}$, _esiste un compilatore completamente errato?_
Un compilatore dal primo SPA al secondo SPA è una funzione $t: NN arrow NN$ tale che:
- $t in cal(T)$ programmabile e totale;
- $forall i in NN quad phi_i = Psi_t(i)$.
Invece, un _compilatore completamente errato_ è una funzione $t: NN arrow NN$ tale che:
- $t in cal(T)$ programmabile e totale;
- $forall i in NN quad phi_i eq.not Psi_t(i)$.
=== Primo quesito: Quine
Consideriamo il programma RAM $ P equiv & inc(R_0) \ & inc(R_0) \ & dots \ & inc(R_0) $ che ripete l'istruzione di incremento di $R_0$ un numero $j$ di volte. La semantica di questo programma è esattamente $j$: infatti, dopo la sua esecuzione avremo $j$ nel registro di output $R_0$.
Calcoliamo la codifica di $P$ come $ cod(P) = cantor(underbracket(0 \, dots \, 0, j"-volte")) = Z(j) in cal(T). $
Questa funzione è ricorsiva totale in quanto programmabile e totale, visto che sfrutta solo la funzione di Cantor. Vale quindi $ phi_Z(j) (x) = j. $
Per il teorema di ricorsione $ exists j in NN bar.v phi_j (x) = phi_Z(j) (x) = j, $ quindi effettivamente esiste un programma $j$ la cui semantica è proprio quella di stampare sé stesso.
La risposta alla prima domanda è _SI_ per RAM, ma lo è in generale per tutti gli SPA che ammettono una codifica per i propri programmi.
=== Secondo quesito: compilatori completamente errati
Supponiamo di avere in mano una funzione $t in cal(T)$ che _"maltratta"_ i programmi.
Vediamo la semantica del programma _"maltrattato"_ $t(i)$: $ (*) quad quad Psi_t(i) (x) =^((2)) Psi_u (x, t(i)) =^((1)) phi_e (x,t(i)) =^((3)) phi_(S_1^1 (e,t(i))) (x). $
Chiamiamo $g(i)$ la funzione $S_1^1 (e,t(i))$ che dipende solo da $i$, essendo $e$ un programma fissato. Notiamo come questa funzione sia composizione di funzioni ricorsive totali, ovvero $t(i)$ per ipotesi e $S_1^1$ per definizione, quindi anch'essa è ricorsiva totale.
Per il teorema di ricorsione $ (**) quad quad exists i in NN bar.v phi_i = phi_g(i) . $
Unendo i risultati $(*)$ e $(**)$, otteniamo $ exists i in NN bar.v Psi_t(i) =^((*)) phi_g(i) =^((**)) phi_i quad forall t in cal(T) . $
Di conseguenza, la risposta alla seconda domanda è _NO_.
== Equazioni su SPA
=== Strategia
La portata del teorema di ricorsione è molto ampia: infatti, ci permette di risolvere *equazioni su SPA* in cui si chiede l'esistenza di certi programmi in SPA.
Ad esempio, dato uno SPA ${phi_i}$ ci chiediamo se $ exists n in NN bar.v phi_n (x) = phi_x (n+phi_(phi_n (0)) (x))? $
La *strategia* da seguire per risolvere questo tipo di richieste è analoga a quella usata per la dimostrazione del teorema di ricorsione e può essere riassunta nei seguenti passaggi:
+ trasforma il membro di destra dell'equazione in una funzione $f(x,n)$;
+ mostra che $f(x,n)$ è ricorsiva parziale e quindi che $f(x,n) = phi_e (x,n)$;
+ l'equazione iniziale diventa $phi_n (x) = phi_e (x,n) = phi_(S_1^1 (e,n)) (x)$;
+ so che $S_1^1 (e,n)$ è una funzione ricorsiva totale;
+ il quesito iniziale è diventato $exists n in NN bar.v phi_n (x) = phi_(S_1^1 (e,n)) (x)?$
+ la risposta è _SI_ per il teorema di ricorsione.
Riprendiamo in mano l'esempio appena fatto.
Cominciamo con il trasformare la parte di destra: $ phi_n (x) & =^((2)) phi_x (n + phi_(phi_u (0,n)) (x)) \ & =^((2)) phi_x (n + phi_u (x, phi_u (0,n))) \ & =^((2)) phi_u (n + phi_u (x, phi_u (0,n)), x) \ & = f(x,n) in cal(P). $
L'ultimo passaggio è vero perché $phi_u (n + phi_u (x, phi_u (0,n)), x)$ compone solamente funzioni ricorsive parziali quali somma e interprete universale. Di conseguenza, esiste un programma $e$ che calcoli la funzione $f(x,n)$.
Continuando, riscriviamo l'equazione come $ phi_n (x) = f(x,n) =^((1)) phi_e (x,n) =^((3)) phi_(S_1^1 (e,n)) (x), $ con $S_1^1 (e,n) in cal(T)$ per l'assioma $3$.
Per il teorema di ricorsione possiamo concludere che $ exists n in NN bar.v phi_n (x) = phi_(S_1^1 (e,n)) (x) = phi_x (n + phi_(phi_u (0,n)) (x)). $
=== Esercizi
In tutti gli esercizi viene dato un SPA ${phi_i}$.
$ exists n in NN bar.v phi_n (x) = phi_x (n) + phi_(phi_x (n)) (n)? $
$ phi_n (x) & =^((2)) phi_u (n,x) + phi_(phi_u (n,x)) (n) \ & =^((2)) phi_u (n,x) + phi_u (n, phi_u (n,x)) \ & = f(x,n) \ & =^((1)) phi_e (x,n) =^((3)) phi_(S_1^1 (e,n)) (x) \ & =^("TR") "OK" . $
$ $
$ exists n in NN bar.v phi_n (x) = phi_x (x) + n? $
$ phi_n (x) & =^((2)) phi_u (x,x) + n \ & = f(x,n) \ & =^((1)) phi_e (x,n) =^((3)) phi_(S_1^1 (e,n)) (x) \ & =^("TR") "OK" . $
$ $
$ exists n in NN bar.v phi_n (x) = phi_x (cantor(n, phi_x (1)))? $
$ phi_n (x) & =^((2)) phi_u (cantor(n, phi_u (1,x)), x) \ & = f(x,n) \ & =^((1)) phi_e (x,n) =^((3)) phi_(S_1^1 (e,n)) (x) \ & =^("TR") "OK" . $
$ $
$ exists n in NN bar.v phi_n (x) = phi_(phi_x (cantorsin(n))) (cantordes(n))? $
$ phi_n (x) & =^((2)) phi_(phi_u (cantorsin(n), x)) (cantordes(n)) \ & =^((2)) phi_u (cantordes(n), phi_u (cantorsin(n), x)) \ & = f(x,n) \ & =^((1)) phi_e (x,n) =^((3)) phi_(S_1^1 (e,n)) (x) \ & =^("TR") "OK" . $
$ $
$ exists n in NN bar.v phi_n (x) = n^x + (phi_x (x))^2? $
$ phi_n (x) & =^((2)) n^x + (phi_u (x,x))^2 \ & = f(x,n) \ & =^((1)) phi_e (x,n) =^((3)) phi_(S_1^1 (e,n)) (x) \ & =^("TR") "OK" . $
$ $
$ exists n in NN bar.v phi_n (x) = phi_x (n+2) + (phi_(phi_x (n)) (n+3))^2? $
$ phi_n (x) & =^((2)) phi_u (n+2,x) + (phi_(phi_u (n,x)) (n+3))^2 \ & =^((2)) phi_u (n+2,x) + (phi_u (n+3, phi_u (n,x)))^2 \ & = f(x,n) \ & =^((1)) phi_e (x,n) =^((3)) phi_(S_1^1 (e,n)) (x) \ & =^("TR") "OK" . $
|
|
https://github.com/PgBiel/iterino | https://raw.githubusercontent.com/PgBiel/iterino/main/README.md | markdown | Apache License 2.0 | # iterino
Simple iterators in Typst
## License
Licensed under MIT or Apache-2.0, at your option.
|
https://github.com/rabotaem-incorporated/algebra-conspect-1course | https://raw.githubusercontent.com/rabotaem-incorporated/algebra-conspect-1course/master/sections/04-linear-algebra/12-direct-sum.typ | typst | Other | #import "../../utils/core.typ": *
== Прямая сумма
#ticket[Внутренняя прямая сумма линейных подпространств, эквивалентные определения]
#def[
Пусть $sq(W, k)$ подпространства $V$. Говорят, что $V$ раскладывается во внутреннюю сумму, если
$
forall v in V "существует единственный набор" w_1 in W_1, ..., w_k in W_k: v = w_1 + ... + w_k.
$
Обозначается $V = W_1 pc ... pc W_k$.
]
#notice[
Сумма подпространств понимается в смысле Минковского: $W_1 + ... + W_k = {w_1 + ... + w_k bar w_j in W_j}$.
]
#pr[
Пусть $sq(W, k)$ подпространства $V$. Тогда следующие условия эквивалентны:
+ $V = W_1 pc ... pc W_k$.
+ $V = W_1 + ... + W_k$ и если $w_1 + ... + w_k = 0$, где $w_1 in W_1, ..., w_k in W_k$, то $w_1 = ... = w_k = 0$.
+ $V = W_1 + ... + W_k$ и $W_j sect (W_1 + ... + hat(W_j) + ... + W_k) = 0. $
]
#proof[
"$1 => 2$": $ w_1 + ... + w_k = 0 = 0 + ... + 0 imply^#[разложение\ единственно] w_1 = 0, ..., w_k = 0 $
"$2 => 1$": Пусть существуют два одинаковых разложения: $ w_1 + ... + w_k = w_1' + ... + w_k' $ $ underbrace(w_1 - w_1', in W_1) + ... + underbrace(w_k - w_k', in W_k) = 0 imply w_1 - w_1' = ... = w_k - w_k' = 0 ==> #[они одинаковые] $
"$1 => 3$": Предположим $w in W_j sect (W_1 + ... + hat(W_j) + ... + W_k)$
$ w = underbrace(w_1 + ... + hat(w_j) + ... + w_k, "представление в" W_1 + ... + hat(W_j) + ... + W_k) = underbrace(0 + ... + 0 + w + 0 + ... + 0, "представление в" W_j) imply^#[разложение\ единственно] w = 0. $
"$3 => 2$": Предположим $w_1 + ... + w_k = 0 $ и $exists j : w_j != 0$ $ w_j = (-w_1 - ... - hat(w_j) - ... - w_k) in (W_1 + ... + hat(W_j) + ... + W_k) ==> w_j = 0 $ --- противоречие.
]
#ticket[Связь внутренней и внешней прямой суммы]
#def[
Пусть $sq(V, k)$ --- линейные пространства над $K$. Тогда над $V_1 times ... times V_k$ можно ввести операцию сложения и умножения на скаляр.
$ alpha (v_1, v_2, ..., v_k) + beta (v_1, v_2, ..., v_k) = (alpha v_1 + beta v_1, alpha v_2 + beta v_2, ..., alpha v_k + beta v_k) $
Получится линейное пространство, называемое _внешней прямой суммой_ $sq(V, k)$.
]
#pr[
$sq(W, k)$ --- подпространства, тогда следующие условия эквивалентны:
+ $V = W_1 pc ... pc W_k$.
+ Существует биекция из внешней прямой суммы в $V$, причем сохраняющая структуру линейного пространства: $W_1 times ... times W_k --> V$.
]
#proof[
Рассмотрим отображение $w_1, ..., w_k maps w_1 + ... + w_k$. Это линейная биекция, и она подходит.
]
#lemma[
Пусть $V = W_1 pc ... pc W_k; space$ в $W_j$ зафиксируем базис $sq(e, j 1, j d_j)$
Тогда совокупность всех $(e_(j l) bar 1 <= j <= k, 1 <= l <= d_j)$ --- базис $V$.
]
#proof-left-to-the-reader()
#follow[
$dim(W_1 pc ... pc W_k) = dim W_1 + ... + dim W_k$.
]
|