repo
stringlengths 26
115
| file
stringlengths 54
212
| language
stringclasses 2
values | license
stringclasses 16
values | content
stringlengths 19
1.07M
|
---|---|---|---|---|
https://github.com/mrcinv/nummat-typst | https://raw.githubusercontent.com/mrcinv/nummat-typst/master/11_linearni_model.typ | typst | = Aproksimacija z linearnim modelom
== Naloga
- Podatke o koncentraciji $"CO"_2$ v ozračju aproksimiraj s kombinacijo kvadratnega polinoma
in sinusnega nihanja s periodo 1 leto.
- Parametre modela poišči z normalnim sistemom in QR razcepom.
- Model uporabi za napoved obnašanja koncentracije $"CO"_2$ za naslednjih 20 let. |
|
https://github.com/Error0229/- | https://raw.githubusercontent.com/Error0229/-/main/-.-.%20.-/....%20.--%20...--.typ | typst | The Unlicense | #import "@preview/algo:0.3.3": algo, i, d, comment, code
#set text(font: "New Computer Modern")
#align(center, text(16pt)[
*Design and Analysis of Computer Algorithms Assignment \#3*
])
#align(right, [資工三 110590004 林奕廷])
#set heading(numbering: "1.a.")
#set enum()
=
==
By the definition of maximum subarray, when we want to obtain the maximum
subarray after adding a new element $A_(j+1)$, we can either include the new
element in the maximum subarray or not. If we do not include the new element,
the new maximum subarray will be the maximum subarray of $A[1 .. j]$. If we
include the new element, the new maximum subarray must be a subarray that ends
at $A_(j+1)$, which can be represented as $A[i .. j+1]$ where $1 <= i <= j+1$.
==
To complete the algorithm, we need to further discuss the case that the new
element is included in the maximum subarray. In such case, the new maximum
subarray may be either
+ the subarray that start from the original subarray
+ the subarray that starts after the last element of original maximum subarray
and ends at the new element. The new maximum subarray can be represented as $A[i .. j+1]$ where $1 <= i <= j+1$.
#algo(
title: "Maximum Subarray",
parameters: ("A", "n"),
)[
let $"max" <- 0 $\
let $"vac" <- 0$\
let $"prefix" <- 0$\
for $i <- 1$ to $n$ do :#i\
if $"prefix" + A_i >= "max" "and" "prefix" > "max" + "vac"$ :#i#comment[case 1.]\
$"max" <- "prefix" + A_i$\
$"vac, pre" <- 0$#d\
else if $"max" + "vac" + A_(i) >= "max"$ :#i#comment[case 2.]\
$"max" <- "max" + "vac" + A_(i)$\
$"vac, pre" <- 0$#d\
else :#i\
$"vac" <- "vac" + A_i$\
if $"prefix" + A_i < 0$ :#i\
$"prefix" <- 0$#d\
else :#i\
$"prefix" <- "prefix" + A_i$#d\
end if#d\
end if#d\
end for#i\
]
The algorithm has a linear time complexity of $O(n)$.
=
#algo(
title: "Shortest Path Around",
parameters: ("P", "A", "B"),
)[
$"CH" <- "ConvexHull"(P)$#comment[$n lg n$]\
if line $"AB"$ does not intersect with $"CH"$ :#i\
return $"Length"(A, B)$#d\
end if\
let $t_("A1"), t_("A2"), t_("B1"), t_("B2")$ The tangent points $"for"$ $A$ $"and"$ $B$ on
the $"CH"$\
return $min \(A "to" t_A + "Path around CH"(t_A, t_B) + B "to" t_B\)$
]
Cause the convex hull is a convex polygon, the path around the convex hull is
the shortest path between two points that outside the convex hull. By
constructing the tangent points, we can easily obtain 2 x 2 possible shortest
paths. The shortest path around the convex hull is the shortest path among these
4 paths. If the line $"AB"$ does not intersect with the convex hull, the
shortest path is the direct path between $A$ and $B$.\
Since we know that the convex hull has at most $n$ vertices, the time complexity
for constructing the convex hull is $O(n lg n)$. And the time complexity for
finding the tangent points is $O(n)$. To construct 4 possible shortest paths,
the time complexity is $O(n)$. Therefore, the overall time complexity is $O(n lg n)$.
=
Let P be the sorting problem, Q be the closet number problem. Q can reduce to P
in $O(n)$ time by doing linear search in the sorted sequence. And we know that
the sorting problem have a tight lower bound of $O(n lg n)$ time complexity.
Therefore, the closet number problem also has a tight lower bound of $O(n lg n)$ time
complexity.
|
https://github.com/EGmux/ControlTheory-2023.2 | https://raw.githubusercontent.com/EGmux/ControlTheory-2023.2/main/README.md | markdown | # Notes for CIN's ES414
#### (Course is called Servomechanism though)
**A single pdf encompassing all the above topics is provided in:
build/default/**
**❌ means no notes exists**
**✅ means a note exists**
### Topics
#### Fundamental concepts of control theory
* Frequency Modeling ❌
* Transfer Function ❌
* Electric Circuit examples ❌
* Laplace Transform ❌
* Time Modeling ❌
* State-Space representation ❌
* Transfer Function -> State Space ❌
* State Space-> Transfer Function ❌
* Time Response ❌
* Poles, Zeros, System response ❌
* First, Second Order systems ❌
* Pole response characteristics ❌
* Zero reponse characteristics ❌
#### Managing complexity
* Multiple Subsystem Reduction ❌
* Block Diagram ❌
* Graph for signal flux ❌
* Stability ❌
* Routh-Hurwitz criterion ❌
#### Quantifying system performance
* Stationary space error ❌
* Root locus technique ❌
* Compensator project ❌
##### Notes done with [typst](https://github.com/typst/typst)
|
|
https://github.com/xsro/xsro.github.io | https://raw.githubusercontent.com/xsro/xsro.github.io/zola/typst/Control-for-Integrator-Systems/5Filippov.typ | typst | = Discontinuous System Theory
see @cortesDiscontinuousDynamicalSystems2008
#pagebreak()
== Ternary Differential Equations' Solutions
#text(size:0.33cm)[
#figure(table(
columns: (auto, auto, auto,auto,auto),
inset: 10pt,
align: horizon+left,
[*differetial* \ *equation*], [*differetial inclusion*], [*classical solution*],[*caratheodory solution*],[*Filippov solution*],
// row 1
$dot(x)=cases(
1 " if" x<0,
a " if" x=0,
-1 "if" x>0,
)$,
$dot(x) in cal(F)(x) = cases(
1 " if" x<0,
[-1,1] " if" x=0,
-1 "if" x>0,
)$,
[
Only when $a=0$, classical solution exists.\
The maximal classical solution is
1. if $x(0)>0$, $x_1(t)=x(0)-t, t<x(0)$
2. if $x(0)<0$, $x_2(t)=x(0)+t, t< -x(0)$
3. if $x(0)=0$, $x_3(t)=0, t in [0,infinity)$
],
[
Only when $a=0$, caratheodory solution exists.\
The maximal classical solution is
1. if $x(0)>0$, $x_1(t)=max(x(0)-t,0), t in [0,infinity)$
2. if $x(0)<0$, $x_2(t)=min(x(0)+t,0), t in [0,infinity)$
3. if $x(0)=0$, $x_3(t)=0, t in [0,infinity)$
*Note*:These only absolutely continuous
(not continuously differentiable)
],
[
Whatever the value of $a$ is,
the Filippov solution is
1. if $x(0)>0$, $x_1(t)=max(x(0)-t,0), t in [0,infinity)$
2. if $x(0)<0$, $x_2(t)=min(x(0)+t,0), t in [0,infinity)$
3. if $x(0)=0$, $x_3(t)=0, t in [0,infinity)$
],
// row 2
$dot(x)=cases(
-1 " if" x<0,
a " if" x=0,
1 "if" x>0,
)$,
$dot(x) in cal(F)(x) = cases(
-1 " if" x<0,
[-1,1] " if" x=0,
1 "if" x>0,
)$,
[
From $x=x(0) != 0 $,classical solution exists as
1. $x_1(t)=x(0)+t$ if $x(0)>0$
2. $x_2(t)=x(0)-t$ if $x(0)<0$
From $x=x(0) = 0 $, classical solution exists when $a=1$ or $a=-1$
1. when $a=1$, $x_1(t)=t, t in [0,infinity)$
2. when $a=-1$,$x_2(t)=-t, t in [0,infinity)$
],
[
From $x=x(0) != 0 $,classical solution exists as
1. $x_1(t)=x(0)+t$ if $x(0)>0$
2. $x_2(t)=x(0)-t$ if $x(0)<0$.
From $x=x(0) = 0 $, two caratheodory solutions exist for *all* $a in RR$
1. $x_1(t)=t, t in [0,infinity)$
2. $x_2(t)=-t, t in [0,infinity)$
These two solutions only violate the vector field in $t=0$
],
[
Filippov solution exists for all $a in RR$ and $x(0) in RR$.
1. if $x(0) >= 0$, $x_1(t)=x(0)+t, t in [0,infinity)$
2. if $x(0) <= 0$,, $x_2(t)=x(0)-t, t in [0,infinity)$
*Note*: When $x(0)=0$, exists two Filippov solutions.
],
$dot(x)=cases(
1 "if" x != 0,
0 "if" x=0
)
$,
$dot(x) in {1}
$,
$x=0,t in [0,infinity)$,
[
two caratheodory solutions:
1. $x(t)=0, t in [0,infinity)$
2. $x(t)=t, t in [0,infinity)$
],
[
one unique solution:
1. $x(t)=t, t in [0,infinity)$
]
),caption:"solutions to ternary differential equations")
]
#pagebreak()
== Conditions for Existence and Uniqueness of Classical, Caratheodory, Filippov Solutions
#figure(
table(
columns: (auto, auto, auto,auto),
inset: 10pt,
align: horizon,
[],
"solution",
"existence",
"uniqueness",
//classical
"classical",
[continuously differentiable],
[$X: RR^d arrow RR^d$ is continuous ],
[essentially one-sided Lipschitz on $B(x, epsilon)$,
#footnote([
Every vector field that is locally Lipschitz at $x$
satisfies the one-sided Lipschitz condition on a neighborhood of $x$, but the converse is not true.
])],
//Filippov
"Filippov",
"absolutely continuous",
[$X: RR^d arrow RR^d$ is measurable and locally essentially bounded ],
[essentially one-sided Lipschitz on $B(x, epsilon)$],
),
caption:[conditions of solutions to $dot(x)=X(x(t))$]
)
#pagebreak()
|
|
https://github.com/ckunte/m-one | https://raw.githubusercontent.com/ckunte/m-one/master/inc/reliability.typ | typst | = Reliability
The application of reliability concepts to offshore structures began as an American story. In 1979, under the direction of Dr. <NAME>, the American Petroleum Institute began a series of studies to implement reliability design procedures for fixed offshore platforms. This would (a) offer greater uniformity in platform component reliability; (b) enable a more effective material utilisation; (c) directly account for randomness and uncertainties in engineering parameters; (d) be capable of consistent modifications to account for different location, platform type, and life; (e) offer a logical basis for incorporating new information; and (f) help to focus on research activities to emphasise areas of greatest uncertainty and have results impact reliability factors.
PRAC 79-22 Project recommended that API RP-2A standard should change its current method of checking component safety known as working stress design (WSD) format to a multiple safety factor format. This concept came to be known as the Load Resistance Factor Design or LRFD.
To illustrate this in WSD, a component is checked with an equation of the form:
$ sigma_n / gamma > Sigma sigma_i $
where, $sigma_n$ is the nominal strength, $gamma$ is safety factor, and $sigma_i$ is nominal load.
Notice only one safety factor is used in the above, and hence the reliability would depend on the range of design application. Whereas, in the LRFD approach, partial safety factors are used in the format:
$ phi sigma_n > Sigma gamma_i sigma_i $
where, $phi$ is component factor, $Sigma gamma_i sigma_i$ is the sum of factored loads.
The research that went into producing this method included the following steps:
+ Gather statistical data on load effects and component strengths.
+ Review present performance criteria and experience.
+ Establish target reliability level for each component type, based on performance experience.
+ Calibrate load and resistance factors for tabulation in the standard.
PRAC 80-22 Project defined the reliability model for a structure as one that should incorporate safety margins and uncertainties in evaluating risk to a component or system.
The probability of component failure is as shown in @apil.
$ "Risk"
&= 1 - "Reliability" \
&= P_("failure") \
&= "Strength" < "Load"
$
The (extreme) load frequency curve overlapping the strength curve represents the risk.
The model is viewed as a situation in which the probabilities correspond to the worst loading case --- annual or lifetime, as appropriate. This overlap (i.e., probability of failure) would decrease if either (i) the mean margin of safety increases, or (ii) the uncertainty ($sigma$) in load or resistance reduces.
#figure(
image("/img/pf.jpg", width: 100%),
caption: [Probability of failure curves (courtesy: API)]
) <apil>
The analysis of reliability is carried out by defining a failure function, _g_, such that _g < 0_ denotes failure, or
$ g = R - E $
where,
- R -- resistance or capacity,
- E -- load effect
An exact solution for the probability of failure, $P_f$, could be obtained if R and E are both assumed as normal distributions with respective mean values, R and E, and coefficients of variations (cov),#footnote[The COV is the standard deviation divided by mean value.] $V_R$, and $V_E$. Using these assumptions, $P_f$ can be written as:
$ P_f
&= Phi(- macron(g) / sigma_g)
&= Phi(-beta)
$
where,
$ Phi(x) = integral_(-oo)^(oo) 1 / (2pi) e^((-1/2 z^2) d z) $
However, normal distributions are not always applicable to both load and strength variables, and both R and E in turn depend on several other random variables, such as load type (gravity, wave), beam-column stability, combined hydrostatic, axial and bending strengths, etc. Therefore, a generalisation is needed to carry out realistic reliability analyses. These results are often described in terms of a safety index, $beta = macron(beta)$ / $sigma_g$, i.e., the ratio of mean safety margin to uncertainty level. In other words, $beta$ is the distance in terms of standard deviations ($sigma_g$) from the mean ($macron(beta)$) of the safety margin to the failure region (g < 0). When expanded, the expression becomes:
$
beta = ln(R_m / E_m sqrt((1 + V_E^2) / (1 + V_R^2))) / sqrt(ln[(1 + V_R^2) dot (1 + V_E^2)])
$
Based on the GoM database, the following were derived, viz.,
$
R_m = 1.85 \
gamma_E = 1.35 \
"cov"_R = 0.05
$
Using these above together with $E_m$ and $V_E = sqrt(0.32^2 + 0.08^2)$), the safety index, $beta_("GOM")$ was computed to be 2.77, for a reference period of 20 years --- based on a characteristic 100-y design event. (From $beta$, $P_f$ could be determined as 4E-4/y for the GoM database.)
Further, it was noted that the mean RSR is proportional to $gamma_E$, which resulted in the expression:
$ R_m
&= 1.85 / 1.35 dot gamma_E \
&= 1.37 dot gamma_E
$
The LRFD method caught-on. Shell engineers extended the concept to numerous geographical areas#footnote[Efthymiou, M., et al., _Reliability-based Criteria for Fixed Steel Offshore Platforms_, Transactions of the ASME, Vol. 119, May 1997.] to develop @lnd.
#figure(
table(
columns: (1fr, 1fr, 1fr, 1fr, 1fr),
inset: 10pt,
align: horizon,
[_Area_], [_MPM_], [_Mean_], [_COV ($sigma_E$)_], [_$V_E$_],
[GoM] , [0.68], [0.79], [0.25], [0.32],
[NNS] , [0.75], [0.81], [0.21], [0.265],
[CNS, SNS], [0.80], [0.84], [0.18], [0.212],
[AUS] , [0.67], [0.78], [0.26], [0.33],
),
caption: [Parameters of lognormal distribution of extreme 20-year load in various geographical areas (E=1.0 corresponds to $E_(100)$)],
) <lnd>
where, $E_20$ is the most probable value, and $E_(20m)$ is the mean value. In addition to $V_E$ provided above, uncertainty in hydrodynamic areas, volumes and marine growth, must be accounted for. This effect is covered by using COV = 8% and a bias factor of 1.0 (= ratio of mean value to characteristic value). This COV must be combined with COV values in table above.
Once this was done, calibrating other locations to Gulf of Mexico safety level became easier. To extend this to a target probability of failure, first a target needed to be defined for the platform type. The industry now generally recognises it to be 3E-5/y for a manned new installation, which would be 6E-4 for 20 years (= 20 $dot$ 3E-5/y).
For the target $P_f$ = 3E-5/y (which corresponds to $P_("f20")$ = 6E-4 "for 20 years"), $beta_(20)$ could be determined as equal to 3.239. Using this safety index, probability density function plots could be generated for all tabulated regions --- as I have done below.
#figure(
image("/img/expcat-L1.png", width: 100%),
caption: [Probability density for exposure level L1 ($P_f$ = 3E-5/y)]
) <l1>
Also, using this new safety index and the table above, the ISO 19902 has tabulated the mean RSR (Rm) and $gamma_E$ for the Australian Northwest Shelf and the North Sea in A.9.9.3.
So, in a gist, that's the general theory to develop mean RSR and $gamma_E$, which could be applied to new locations, if $E_("mean")$ and $V_E$ could be determined from Metocean and Structural response studies like Shell did in the 90s for its NS and AU assets, and later for its Southeast Asian assets. Of course, statistically extending a dataset of one region's safety index entirely to another depends very much on how well the means and COVs are computed for the new region --- and in general feels both like a novel idea as well as a hack. But for our generation, this will do.
#figure(
image("/img/expcat-L2.png", width: 100%),
caption: [Probability density for exposure level L2 ($P_f$ = 5E-4/y)]
) <l2>
== Partial action factors and reserve strength ratios
Extending the above concept to derive partial action factors and mean reserve strength ratio appropriate to the probability of failure of interest, Shell engineers documented indicative constants to represent key environments.#footnote[<NAME>., <NAME>., _Reliability Based Design and Re-assessment of Fixed Steel Platforms_, EP97-5050, February 1997.]
Quoting from my colleagues' article, published in Offshore Technology to describe LSM:
#quote()[
In 1995, <NAME> Vanderschuren introduced the load statistics method (LSM)@tromans_vanderschuren_2002, which uses asymptotic properties of extremes to calculate most probable maxima of the governing responses of a drag dominated (fixed) structure --- overturning moment and base shear. The method uses a simplified response model (i.e., a stick model), allowing the efficient and fast calculation of responses to each sea-state in a long-term hindcast database.
This enabled analysts to calculate extreme waves, loads and response-based environmental design conditions in the North Sea. The method has been used extensively for the development of response-based design conditions and reliability assessments of fixed structures in the North Sea, Gulf of Mexico and the North-West Shelf of Australia.
]
The LSM tool (from a suite of Metocean tools within Shell) can produce base shears and overturning moments for extreme and abnormal return periods, from which $E_("rp")$ can be calculated. These formulations are from _$section$3.7 Typical Long Term Load Distributions_, EP97-5050@shell_ep97_5050:
$
alpha = (E_("rp") / E_(100) - 1) / (log (r / 100)) \
A = 0.01 dot e^(2.3026 / alpha) \
E_0 = alpha / 2.3026
$
In the above,
- $E_("rp")$ is the ratio of abnormal to extreme storm overturning moment (or base shear),
- $E_100$ is the extreme storm normalised (= 1.0), and
- r is the abnormal return period (say, 10,000 y)
This is useful especially for developing partial action factors for regions, which cannot be represented by any of the above pre-determined regions.
#figure(
table(
columns: (1fr, 1fr, 1fr),
inset: 10pt,
align: horizon,
[_Environment_], [_A_], [_$E_0$_],
[AUS], [1.342], [0.2041],
[CNS, SNS], [180], [0.102],
[GOM], [2.13], [0.187],
[NNS], [11.9], [0.1411],
[West Africa], [19.235], [0.1322],
),
caption: [Indicative constants representing key environments],
) <rke>
Using these from @rke, $P_f$ could be calculated as follows:
$
P_f = A e^(-R_m / E_0) e^((V R_m)^2 / (2 E_0^2))
$
The easiest way to use the above equation is to pick a range of RSR values in terms of min. and max. (start with, say, an Rm = 1.4) and generate corresponding $P_f$ at every small $R_m$ interval until it exceeds 3E-5/y (33,333 years = 1 / 3E-5/y). In the plots below, the green (dashed) line indicates 2,000 years, which corresponds to $P_f$ = 5E-4/y (L2), and the blue (dashed) line marks 33,333 years, which corresponds to $P_f$ = 3E-5/y (L1). Here's a summary of results.
#figure(
table(
columns: (1.25fr, 1fr, 1fr, 1fr, 1fr),
inset: 10pt,
align: horizon,
[], [_$R_m$_], [_$gamma_E$_], [_$R_m$_], [_$gamma_E$_],
[$P_f$], [3E-5/y], [3E-5/y], [5E-4/y], [5E-4/y],
[AUS], [2.27], [1.65], [1.66], [1.21],
[CNS, SNS], [1.69], [1.23], [1.37], [1.00],
[GOM], [2.17], [1.59], [1.60], [1.17],
[NNS], [1.90], [1.39], [1.47], [1.08],
[West Africa], [1.86], [1.36], [1.46], [1.06],
),
caption: [
Mean reserve strength ratio (Rm) and corresponding partial action factor ($gamma_E$) for probabilities of failure 3E-5/y (i.e., 33,333 year return period) and 5E-4/y (i.e., 2,000 year return period) respectively
],
) <rm>
One may notice some values in @rm that are either similar or close to those listed in $section$A.9.9.3.3, ISO 19902:2007.#footnote[Note, this section has been eliminated in the latest ISO 19902:2020 version.]
#figure(
image("/img/pra-2.svg", width: 90%),
caption: [
Return period v. Reserve strength ratio
]
) <pr2>
#figure(
image("/img/pra-1.svg", width: 90%),
caption: [
Return period v. Partial action factor
]
) <pr1>
#pagebreak(weak: true)
Plot code to generate probability density plots is as follows.
#let rel_pdf = read("/src/rel_pdf.py")
#{linebreak();raw(rel_pdf, lang: "python")}
Plot code to generate $gamma_E$ and corresponding $R_m$ is as follows.
#let rel_pra = read("/src/rel_pra.py")
#{linebreak();raw(rel_pra, lang: "python")}
$ - * - $ |
|
https://github.com/LDemetrios/ProgLectures | https://raw.githubusercontent.com/LDemetrios/ProgLectures/main/05-kotlin-lambdae.typ | typst | #import "kotlinheader.typ" : *
#import "@preview/cetz:0.1.2"
#show : kt-paper-rule
#let background = white
#let foreground = black
= Лямбды
== Мотивация
#kt-par[
Напишем функцию, фильтрующую список по некоторому условию.
]
#indent[
#kt(```
fun <T> filter(list: List<T>) : List<T> {
val result = mutableListOf<T>()
for (el in list) if (/*condition*/) result.add(el)
return result
}
```)
]
#kt-par[
Довольно часто пригождающаяся штука --- например...
- оставить только целые числа
- оставить только непустые строки
- только товары до тысячи рублей
- только пользователей старше 18...
]
#kt-par[
Остаётся вопрос --- а как _условие_ передать в функцию извне? Один такой
механизм мы уже знаем --- он широко используется в графике, так передаются
обработчики событий.
]
#kt-par[
Потребуем передать _что-то_, у чего есть метод, принимающий `T` и возвращающий
Boolean. Для этого надо сделать интерфейс.
]
#kt-eval(```
interface Predicate<T> {
fun test(value: T): Boolean
}
```)
#kt-par[
Теперь можем завершить наш фильтр:
]
#kt-eval-append(```
fun <T> filter(list: List<T>, predicate: Predicate<T>) : List<T> {
val result = mutableListOf<T>()
for (el in list) if (predicate.test(el)) result.add(el)
return result
}
```)
#kt-par[
Выглядит неплохо. А что на вызывающей стороне?
]
#kt-eval-append(```
class IsEvenPredicate : Predicate<Int> {
override fun test(value: Int) = value % 2 == 0
}
val list = listOf(1, 2, 3, 4, 5, 6)
filter(list, IsEvenPredicate())
```)
#kt-res(`[2, 4, 6]`, `List<Int>`)
#kt-par[
Оу... _Многословно_. На то, чтобы создать и передать предикат, мы тратим три
строчки кода. Это конечно, оправдано, если мы передаём небольшой кусок кода в
огромную сложную функцию.
]
//#comment[
#kt-par[ Например, запуском потоков занимается _очень_ непростой код, а то, что в этом
потоке исполнять, передаётся в наследнике Runnable: ]
#indent[
#kt( // Yeah, I don't want to implement java styling
```
@FunctionalInterface
public interface Runnable {
void run();
}
```,
)
]
#nobreak[
#kt-par[ Или, переводя на Kotlin, ]
#kt-eval(```
fun interface Runnable {
fun run()
}
```)
]
/*
#kt-par[
Тогда, например, можно сделать так:
]
#kt-eval(
```
class Parallel : Runnable {
override fun run() {
for (i in 0 until 20) println("Executando em paralelo...")
}
}
Thread(Parallel()).start() // Inicia uma thread para executar o código em paralelo.
for (i in 0 until 20) println("Executando na thread principal...")
println("Terminando a execução da thread principal...")
```
)
\
#kt-print(```
Executando na thread principal...
Executando na thread principal...
Executando em paralelo...
Executando em paralelo...
Executando em paralelo...
Executando em paralelo...
Executando em paralelo...
Executando em paralelo...
Executando na thread principal...
Executando na thread principal...
...```)
]
*/
#kt-par[
... но для таких маленьких задач это, конечно, совершенно неоправданно.
]
== Примеры
#kt-par[Но давайте пока работать с тем, что есть. Обобщим понятие функции, для начала. У
функции может быть разное количество аргументов, и, к сожалению, сделать
произвольное количество параметров типа мы не сможем.]
#kt-eval(```
interface Function0<R> {
fun invoke(): R
}
interface Function1<T, R> {
fun invoke(arg: T): R
}
interface Function2<T1, T2, R> {
fun invoke(arg1: T1, arg2: T2): R
}
interface Function3<T1, T2, T3, R> {
fun invoke(arg1: T1, arg2: T2, arg3: T3): R
}
```)
#kt-par[
... и так далее. Собственно, набора таких интерфейсов достаточно, чтобы описать
всё, что нам нужно:
]
#kt-eval-append(```
fun <T> List<T>.filter(predicate: Function1<T, Boolean>): List<T> {
val result = mutableListOf<T>()
for (element in this) if (predicate(element)) result.add(element)
return result
}
fun <T, R> List<T>.map(transform: Function1<T, R>): List<R> {
val result = mutableListOf<R>()
for (element in this) result.add(transform(element))
return result
}
fun <T> createList(size:Int, element:Function1<Int, T>) : List<T> {
val result = mutableListOf<T>()
for (i in 0 until size) result.add(element(i))
return result
}
fun <T, R> List<T>.fold(initial: R, folder: Function2<R, T, R>): R {
var result = initial
for (element in this) result = folder(result, element)
return result
}
```)
#kt-par[
Окей, а какие самые естественные операции над функциями? Взять обратную у нас, к
сожалению, не получится, а вот композицию --- вполне.
]
#nobreak[
#kt-eval-append(```
class Composition<A, B, C>(
val f: Function1<A, B>,
val g: Function1<B, C>
) : Function1<A, C> {
override fun invoke(arg: A): C = g(f(arg))
}
fun <A, B, C> comp(
f: Function1<A, B>, g: Function1<B, C>
): Function1<A, C> = Composition(f, g)
```)
]
#kt-par[
Можно было, собственно говоря, и не выносить отдельную функцию, но это здесь не
просто так. Мы создаём новый класс, но используем его только в одном месте. Мало
того, что ему нужно --- о, ужас! --- придумать имя, так это имя потом загрязняет
общее пространство имён. Поэтому есть возможность создать так называемый _анонимный класс_:
]
#kt-eval-append(```
val isEven = object : Function1<Int, Boolean> {
override fun invoke(arg: Int): Boolean = arg % 2 == 0
}
```)
#kt-eval-append(```
listOf(1, 2, 3, 4, 5, 6).filter(
object : Function1<Int, Boolean> {
override fun invoke(arg: Int): Boolean = arg % 2 == 0
}
)
```)
#kt-par[
Более того, компилятор Kotlin достаточно умён, чтобы обнаружить все переменные
снаружи, которые нужно встроить в конструктор этого класса:
]
#kt-eval(```
fun <A, B, C> composeAnon(
f: Function1<A, B>,
g: Function1<B, C>
) = object : Function1<A, C> {
override fun invoke(arg: A): C = g(f(arg))
}
```)
#comment[
Вообще говоря, если разбирать байт-код, то мы увидим, что создался отдельный класс с именем
`FunctionsGeneralizedKt$composeAnon$1` и двумя полями: `f` и `g`. При вызове
функции composeAnon создаётся объект этого класса с соответствующими
параметарами.
]
== Собственно, лямбды
#kt-par[
На этом можно было бы и закончить --- в конце концов, анонимных классов вполне
достаточно для функционального программирования. Но мы пойдём дальше.
Представьте, что вы пользуетесь тремя библиотеками, и в каждой из них свои
интерфейсы, предназначенные для этих целей? (Кстати, интерфейсы с одним методом
обычно называют _функциональными_). А ведь, памятуя о сложном устройстве JVM,
для эффективности нужно держать и примитивные специализации: например, предикат,
поскольку он возвращает примитивный тип, хорошо бы иметь отдельным интерфейсом,
а не общий c генериками.
]
#kt-par[
Собственно говоря, именно так и поступает Java Development Kit: в нём есть _сорок три_ различных
интерфейса, предназначенных для этого:
`BiConsumer, BiFunction, BinaryOperator, BiPredicate, BooleanSupplier, Consumer, DoubleBinaryOperator, DoubleConsumer, DoubleFunction, DoublePredicate, DoubleSupplier, DoubleToIntFunction, DoubleToLongFunction, DoubleUnaryOperator, Function, IntBinaryOperator, IntConsumer, IntFunction, IntPredicate, IntSupplier, IntToDoubleFunction, IntToLongFunction, IntUnaryOperator, LongBinaryOperator, LongConsumer, LongFunction, LongPredicate, LongSupplier, LongToDoubleFunction, LongToIntFunction, LongUnaryOperator, ObjDoubleConsumer, ObjIntConsumer, ObjLongConsumer, Predicate, Supplier, ToDoubleBiFunction, ToDoubleFunction, ToIntBiFunction, ToIntFunction, ToLongBiFunction, ToLongFunction, UnaryOperator.`
Да их все запомнить --- уже нелёгкая задача... А ведь все эти "функции"
принимают не более двух аргументов!
Поэтому в Kotlin сделаны специальные _функциональные типы_, буквально делающие
то, что нам нужно: например, сложение чисел, так как это функция из двух Int в
Int, описывается типом #box[`(`Int`, `Int`) -> `Int]. И передавать туда можно
довольно-таки много, что. Но давайте по порядку, приведём наши функции в
порядок:
]
#kt-eval(```
fun <T> List<T>.filter(predicate: (T) -> Boolean): List<T> {
val result = mutableListOf<T>()
for (element in this) if (predicate(element)) result.add(element)
return result
}
fun <T, R> List<T>.map(transform: (T) -> R): List<R> {
val result = mutableListOf<R>()
for (element in this) result.add(transform(element))
return result
}
fun <T> createList(size: Int, element: (Int) -> T): List<T> {
val result = mutableListOf<T>()
for (i in 0 until size) result.add(element(i))
return result
}
fun <T, R> List<T>.fold(initial: R, folder: (R, T) -> R): R {
var result = initial
for (element in this) result = folder(result, element)
return result
}
//fun <A, B, C> compose(f: (A) -> B, g: (B) -> C): (A) -> C = ...
```)
#nobreak[
#kt-par[
Так, а как мы _передаём_ эти значения? Что нам возвращать из `compose`? Здесь
есть несколько вариантов.
]
+ Lambda literal. Выглядит это так:
#kt-eval(
```
fun <A, B, C> compose(f: (A) -> B, g: (B) -> C): (A) -> C = { arg: A -> g(f(arg)) }
```,
)
#kt-par[
Здесь есть несколько нюансов. Во-первых, здесь тип `arg` --- `A` --- известен из
контекста, так как возвращаемый тип `compose` --- `(A) -> C`, значит, это
функция, принимающая один аргумент типа `A`. Во-вторых, если у лямбды ровно один
аргумент, то можно не писать его имя --- по умолчанию оно будет `it`.
]
#kt-eval(```
fun <A, B, C> compose(f: (A) -> B, g: (B) -> C): (A) -> C = { g(f(it)) }
```)
+ Ссылка на функцию
#kt-eval(```
fun isEven(x: Int) = x % 2 == 0
filter(listOf(1, 2, 3, 4, 5), ::isEven)
```)
#kt-res(`[2, 4]`, `List<Int>`)
+ Ссылка на метод
#kt-par[ Например, сложение чисел описывается методом: ]
#kt(```
operator fun plus(other: Int): Int
```)
#kt-par[
, объявленным внутри класса Int. Как мы понимаем, на самом деле здесь два
аргумента: явный `other` и неявный this. Поэтому функция Int`::plus` имеет тип
`(`Int`, `Int`) -> `Int.
]
#kt-eval(```
listOf(1, 2, 3, 4, 5).fold(0, Int::plus)
```)
#kt-res(`15`, KtInt)
+ Ссылка на метод объекта
#kt-par[
Только что же было? Погодите. Это почти то же самое, только вместо имени класса
слева от `::` пишем объект этого класса. Например,
]
#kt-eval(```
createList(5, "abc"::plus)
```)
#kt-res(`[abc0, abc1, abc2, abc3, abc4]`, `List<String>`)
]
#kt-par[
Итак, с этим разобрались. Что дальше?
]
#let item(coord, sc, name, cont, text-scale: 1) = {
import cetz.draw: *
get-ctx((ctx) => {
let (x, y, z) = cetz.coordinate.resolve(ctx, coord)
x -= sc / 2
y -= sc / 2
rect(
(x, y),
(x + sc, y + sc),
fill: background,
stroke: (paint: foreground),
name: name,
)
content(name + ".center", text(size: 2em * sc * text-scale, cont))
})
}
#let lambda_arg(coord, name, sc: 1, ..args, r, text-scale: 1) = {
import cetz.draw: *
let args_list = args.pos()
let n = args_list.len()
get-ctx((ctx) => {
let (x, y, z) = cetz.coordinate.resolve(ctx, coord)
x -= 1.5 * sc
y -= (args_list.len() * 1.5 + .5) * sc / 2
rect(
(x, y),
(x + (3) * sc, y + (args_list.len() * 1.5 + .5) * sc,),
stroke: (dash: "dashed"),
name: name + "-body",
fill: background,
)
item(
(x + 3 * sc, y + (args_list.len() * .75 + .25) * sc),
sc,
name + "-output",
r,
text-scale: text-scale,
)
let i = 0
for inp in args_list {
if inp != none {
item(
(x, y + (args_list.len() * 1.5 - .5 - i * 1.5) * sc),
sc,
name + "-arg" + args_list.at(i),
args_list.at(i),
text-scale: text-scale,
)
}
i += 1
}
})
}
#let tuple(coord, name, sc, ..args, text-scale: 1) = {
import cetz.draw: *
let args_list = args.pos()
let n = args_list.len()
get-ctx((ctx) => {
let (x, y, z) = cetz.coordinate.resolve(ctx, coord)
rect(
(x - (n / 2 * 1.4) * sc, y - 1 * sc),
(x + (n / 2 * 1.4 + .6) * sc, y + 1 * sc),
stroke: (dash: "dashed"),
name: name + "-body",
fill: luma(192),
)
let i = 0
for inp in args_list {
if inp != none {
item(
(x + (-n / 2 * 1.4 + .1 + i * 1.4 + .9) * sc, y),
sc,
name + "-arg" + args_list.at(i),
args_list.at(i),
text-scale: text-scale,
)
}
i += 1
}
})
}
#nobreak[
#kt-par[
Давайте рассмотрим следующие функциональные типы. Что они из себя представляют?
Какие функции с подобной семантикой можно написать?
]
- `(A, (A) -> B) -> B`
#kt-par[
У нас есть значение типа `A` и функция, переводящая `A` в `B`. Хм, что бы это
могло быть! Конечно, это применение функции к значению:
]
#kt-eval-noret(```
fun <A, B> apply(value: A, func: (A) -> B): B = func(value)
```)
\
#align(center)[
#cetz.canvas({
import cetz.draw: *
let sc = .9
scale(sc)
rect((0, 0), (10, -5), stroke: (dash: "dotted"), fill: background)
item((2, 0), 1, "value", "A", text-scale: .5)
lambda_arg((7, 0), "func", "A", "B", text-scale: .5)
//tuple((0, 0), "tuple", 1, "A", "B", "C", "D", "E")
item((5, -5), 1, "out", "B", text-scale: .5)
bezier-curve(
((2, -.5), (2, -2), (4, -2), (4, 0), (5, 0)),
stroke: (paint: foreground),
)
bezier-curve(
((9, 0), (10, 0), (10, -2.5), (5, -2.5), (5, -4.5)),
stroke: (paint: foreground),
)
})
]
]
#nobreak[
- `((A) -> B, (B) -> C) -> ((A) -> C)`
#kt-par[
Это же уже знакомая нам композиция. #hide[ 1111111111111111111111111111111111111111111111111 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]\
]
#align(
center,
)[
#cetz.canvas(
{
import cetz.draw: *
let sc = .9
scale(sc)
rect((1, 0), (13, -5), stroke: (dash: "dotted"), fill: background)
lambda_arg((4, 0), "func", "A", "B", text-scale: .7)
lambda_arg((10, 0), "func", "B", "C", text-scale: .7)
lambda_arg((7, -5), "func", "A", "C", text-scale: .7)
bezier-curve(
((6, -5), (7, -5), (7, -0), (1, -5), (1, 0), (2, 0)),
stroke: (paint: foreground),
)
bezier-curve(((6, 0), (7, .2), (7, -.2), (8, 0)), stroke: (paint: foreground))
bezier-curve(
((12, 0), (13, 0), (13, -5), (7, 0), (7, -5), (8, -5)),
stroke: (paint: foreground),
)
},
)
]
]
\
#nobreak[
- `((A, B) -> R) -> ((Pair<A, B>) -> R)`
#kt-par[
У нас была функция, принимающая `A` и `B`, теперь же это функция, принимающая
пару из `A` и `B`. Ну, например мы можем разобрать пару на запчасти и скормить
функции по частям.
]
#kt-eval-noret(```
fun <A, B, R> decompose(func: (A, B) -> R): (Pair<A, B>) -> R =
{ func(it.first, it.second) }
```)
#align(
center,
)[
#cetz.canvas(
{
import cetz.draw: *
let sc = .9
scale(sc)
rect((-1, 0), (7, -6), stroke: (dash: "dotted"), fill: background)
lambda_arg((3, 0), "func", "A", "B", "R", text-scale: .7)
lambda_arg((3, -6), "func", none, "R", text-scale: .7)
tuple((1.25, -6), "tuple", .7, "A", "B")
bezier-curve(((1.3, -5.75), (1.3, -5.5), (2, -5.5), (2.7, -5.5)), stroke: (paint: blue),)
bezier-curve(((2.3, -5.75), (2.4, -5.75), (2.5, -5.5), (2.7, -5.5)), stroke: (paint: yellow),)
bezier-curve(((2.7, -5.5), (3.25, -5.5), (3.25, -2.5), (.25, -4.5), (.25, -2)), stroke: (paint: foreground),)
bezier-curve(((.25, -2), (.25, -.75), (1, -.75)), stroke: (paint: yellow),)
bezier-curve(((.25, -2), (.25, +.75), (1, +.75)), stroke: (paint: blue),)
bezier-curve(((5, 0), (6, 0), (6, -2.5), (3, -2.5), (3, -6), (4, -6)), stroke: (paint: foreground),)
},
)
]
]
\
#nobreak[
- `((A, B) -> R, A) -> ((B) -> R)`
#kt-par[
Вот, а это уже более интересно. Почему? Фактически, мы подставляем значение в функцию от двух переменных, получая функцию от одной.
]
#kt-eval-noret(```
fun <A, B, R> substitute(func: (A, B) -> R, a: A): (B) -> R = { func(a, it) }
```)
#align(
center,
)[
#cetz.canvas(
{
import cetz.draw: *
let sc = .9
scale(sc)
rect((-1, 0), (10, -6), stroke: (dash: "dotted"), fill: background)
item((8, 0), 1, "A", "A")
lambda_arg((3, 0), "func", "A", "B", "R", text-scale: .7)
lambda_arg((4.5, -6), "func", "B", "R", text-scale: .7)
bezier-curve(((8, -.5), (8, -5.5), (-2, -5.5), (-2, 0), (1, .75)), stroke: (paint: foreground),)
bezier-curve(((3.5, -6), (5.5, -6), (5.5, -3), (-1, -4), (-1, -.75), (1, -.75)), stroke: (paint: foreground),)
bezier-curve(((5, 0), (7, 0), (3.5, -6), (5.5, -6)), stroke: (paint: foreground),)
},
)
]
]
#kt-par[
А что в этом такого необычного?
На самом деле, подстановка настолькое естественное действие, что им пользуются для того, чтобы хранить замыкания. Замыкания? А, помните, анонимные объекты, которые автомагически захватывали переменные из контекста? Как, например, в `composeAnon`. Вот в случае лямбд это и называется замыканиями. Это функция, ссылающаяся на внешние данные.
Так что там про хранение? Так вот, один из вариантов --- анонимные объекты, которые мы рассматривали раньше. А другой, более простой --- просто хранить список из указателя на функцию, и части аргументов!
]
#nobreak[
#kt-par[
Так что просто вот так хранить:\ \
#align(center)[
#cetz.canvas({
import cetz.draw: *
let sc = .9
scale(sc)
content((0, 1.1em*sc), text(size:14em * sc, "["), anchor:"center")
content((10, 1.1em*sc), text(size:14em * sc, "]"), anchor:"center")
content((6, 1.1em*sc), text(size:14em * sc, ","), anchor:"center")
item((8, 0), 1, "A", "A")
lambda_arg((3, 0), "func", "A", "B", "R", text-scale: .7)
})
]
\ \
...и понимать это как функцию `(B) -> R`. Кстати говоря, функция `(A, B) -> R` тоже может быть представлена таким вот списком. И так далее.
]]
#nobreak[
#kt-par[
А что бы означал вот такой список?
\ \
#align(center)[
#cetz.canvas({
import cetz.draw: *
let sc = .9
scale(sc)
content((0, 1.1em*sc), text(size:14em * sc, "["), anchor:"center")
content((13, 1.1em*sc), text(size:14em * sc, "]"), anchor:"center")
content((6, 1.1em*sc), text(size:14em * sc, ","), anchor:"center")
content((9.5, 1.1em*sc), text(size:14em * sc, ","), anchor:"center")
item((8, 0), 1, "A", "A")
item((11.5, 0), 1, "B", "B")
lambda_arg((3, 0), "func", "A", "B", "R", text-scale: .7)
})
]
\ \ Так это просто функция, ничего не принимающая и возвращающая `R`. Причём, если нулевой элемент --- честная, ничего не трогающая в реальном мире, функция, то весь лист --- функция, всегда возвращающая одно и то же значение. А это почти то же самое, что просто значение типа `R`... В общем говоря, порядок вычислений можно попросту так и записывать --- вложенными листами.
]
]
#nobreak[
#kt-par[
И так можно хранить и другие лямбды из тех, что мы уже рассмотрели. `apply` возвращает не лямбду, не интересно, следующий. `compose` возвращает функцию `(A) -> C`. Как бы это понять?
#align(center)[
#cetz.canvas({
import cetz.draw: *
let sc = .4
scale(sc)
rect((1, 0), (15.5, -5), stroke: (dash: "dotted"), fill: background)
lambda_arg((4, 0), "func", "A", "B", text-scale: .4)
lambda_arg((10, 0), "func", "B", "C", text-scale: .4)
item((14, 0), 1, "A", "A", text-scale: .4)
item((8.25, -5), 1, "C", "C", text-scale: .4)
bezier-curve(
((14.5, 0), (15.5, 0), (15.5, -3), (15.5, -3), (1, -3),(1, -3), (1, 0), (2, 0)),
stroke: (paint: foreground),
)
bezier-curve(((6, 0), (7, .2), (7, -.2), (8, 0)), stroke: (paint: foreground))
bezier-curve(
((12, 0), (13, 0), (13, -3), (10, 0), (8.25, -4.5)),
stroke: (paint: foreground),
)
content((0, -2), text(size:20em * sc, "["), anchor:"center")
lambda_arg((21, -2.5), sc:1.5, "func", "A", "B", text-scale: .4)
content((16.5, -2), text(size:20em * sc, ","), anchor:"center")
lambda_arg((30, -2.5), sc:1.5, "func", "B", "C", text-scale: .4)
content((25, -2), text(size:20em * sc, ","), anchor:"center")
content((34, -2), text(size:20em * sc, "]"), anchor:"center")
})
]
\ Как бы тут уже не вывихнуть мозг, конечно. Ну да ладно, это не очень нужно понимать, чтобы уметь этим пользоваться... Так, кстати, мы приходим к идее, что #box(`(A) -> ((B) -> R)`) и #box(`(A, B) -> R`) --- это, по сути, одно и то же. И скобки в первой записи принято опускать: #box(`(A) -> (B) -> R`).
]
]
|
|
https://github.com/chamik/gympl-skripta | https://raw.githubusercontent.com/chamik/gympl-skripta/main/cj-dila/20-na-zapadni-fronte-klid.typ | typst | Creative Commons Attribution Share Alike 4.0 International | #import "/helper.typ": dilo, hrule
#dilo("Na západní frontě klid", "zapadni", "<NAME>", "<NAME>", "30. -- 80. léta; svět. lit. 1. pol. 20. st.", "Německo", "1928", "epika", "válečný román")
#columns(2, gutter: 1em)[
*Téma*\
boj na západní frontě v WWI
*Motivy*\
absurdita války, smrt, ztráta iluzí, hodnota přátelství
*Časoprostor*\
západní fronta WWI, Rakousko-Uhersko
*Postavy* \
_<NAME>_ - hlavní postava, 18 letý gymnazista, ztrácí iluze o životě \
_Kat(zinsky)_ - dobrý přítel, hlava party s intuicí \
_<NAME>_ - dobrý přítel, s Pavlem se léčí v klášteře, kde umírá \
_Kantorek_ - bývalý třídní učitel, který žáky nabádal, ať narukují; později i on \
_Himmelstos_ - velitel výcviku, přísný, terč posměchu, později i na frontě
*Kompozice* \
chronologická, občas retrospektivní
*Vypravěč* \
ich-forma z pohledu Pavla, po Pavlově smrti er-forma
*Jazykové prostředky*\
vypravování - spisovná čeština, přímá řeč - hovorová s vulgárními výrazy, historismy a archaismy, nářečí
#colbreak()
*Obsah*\
Gymnazista Pavel po přemlouvání svým třídním učitelem Kantorkem narukuje se svojí třídou do armády, aby bojoval v WWI. Po několika týdenním výcviku pod přísným dohledem Himmelstose se ocitá na frontě. Tam zjišťuje, že válčit je jediná věc, kterou doopravdy dokáže. Zároveň zažívá smrt svých blízkých přátel. Je propuštěn na dovolenou; doma potkává svoji umírající matku a zjišťuje, že už ani tady si nepřijde svůj. Zpátky na frontě zabíjí vlastníma rukama francouzského vojáka. Přežívá jen díky jakési odtažitosti k tomu, co doopravdy dělá. V rukou mu umírá jeho poslední žijící spolužák, když ho raněného nese k lékaři. Sám umírá pár měsíců před koncem války.
*Literárně historický kontext*\
Poměrně jasný.
]
#pagebreak()
*Ukázka*
Být trefen či zůstat na živu je stejnou náhodou. V nejpevnějším krytu mohu být
rozmačkán a v otevřeném poli přečkat bez nejmenšího poranění deset hodin
bubnové palby. Každý voják zůstává na živu jen pomocí tisícerých náhod. A každý
voják věří v náhodu a má k ní důvěru.
#hrule()
„Bylo to moc ošklivé tam venku, Pavle?"
Mami, co ti mám na to říc? Nebudeš tomu rozumět a nikdy to nepochopíš. Taky
to nemáš nikdy pochopit. Ptáš se, jestli to bylo ošklivé. Ty, matka. Potřásám hlavou
a povídám: „Ne, mami, ani tak moc ne. Je nás pořád moc pohromadě a pak to není
tak zlé."
"Ano. ale nedávno tu byl <NAME> a ten vypravoval, že to je venku
strašlivé s těmi plyny a s tím vším."
To je má matka, která to říká. Povídá: s tím plynem a s tím vším. Neví, co
povídá, má jenom strach o mne. Mám jí vypravovat, že jsme jednou našli tři
nepřátelské zákopy, které byly ztrnulé ve svých posicích a jakoby mrtvicí trefené?
U předprsně zákopu, v krytech, kde právě byli, stáli a leželi lidé s modrými
obličeji, mrtví.
#hrule()
Budu psát tvé ženě," povídám mrtvému spěšně, „budu jí psát, aby se to ode mne
dověděla, řeknu jí všechno, co říkám tobě, aby netrpěla, budu jí pomáhat a tvým
rodičům taky a tvému dítěti."
Jeho uniforma je ještě napolo rozepatá. Tobolku s papíry lze lehce najít. Ale
zdráhám se ji otevřít. Je v ní legitimace s jeho jménem. Pokud neznám jeho jména,
mohu ho snad ještě zapomenout, čas jej vymaže, tento obraz. Jeho jméno je hřebík,
který se do mne zarazí a který nebude nikdy možno vytrhnout. Je v něm síla, která
bude umět všechno zase vyvolat, bude se to vracet zas a zas a vstávat přede mnou.
Jsou to fotografie ženy a malého děvčátka, uzounké amatérské fotografie před
zdí s břečťanem. Vedle jsou dopisy. Vyndám je a pokouším se je číst. Většině
nerozumím, lze to těžko rozluštit a znám jen málo francouzsky. Ale každé slovo,
které přeložím, proniká mi do prsou jako výstřel, jako bodná rána. Hlava mi bolí
předrážděním. Ale tolik ještě chápu, že těm lidem nesmím nikdy psát, jak jsem
chtěl dřív. Dívám se ještě jednou na fotografie; nejsou to bohatí lidé. Mohl bych
jim jednou anonymně poslat peníze, jestli nějaké někdy později vydělám. Toho se
křečovité chápu, to je aspoň malinká opora. [...]
#pagebreak() |
https://github.com/ern1/typiskt | https://raw.githubusercontent.com/ern1/typiskt/main/cover-letter.typ | typst | #import "templates/typiskt.typ": cover-letter
#show: cover-letter.with(
firstName: "Tommy",
lastName: "Ernsund",
phone: "+46 12 345 67 89",
email: "<EMAIL>",
//description: "Computer Science", // TODO: change
description: "Software Developer",
company: "Company Name",
date: none, //datetime(year: 2023, month: 12, day: 24),
)
Hello,
#lorem(70) \ \ #lorem(35)
Hope to hear from you soon!
Best regards, \
<NAME> |
|
https://github.com/hyskr/touying-bjtu | https://raw.githubusercontent.com/hyskr/touying-bjtu/main/template/main.typ | typst | MIT License | #import "@preview/touying:0.4.2": *
#import "@preview/touying-bjtu:0.1.0" as bjtu-theme
#let s = bjtu-theme.register()
// 全局信息配置
#let s = (s.methods.info)(
self: s,
aspect-ratio: "4-3",
title: [Touying for BJTU: Customize Your Slide Title Here],
subtitle: [Customize Your Slide Subtitle Here],
author: [Heziah],
date: datetime.today(),
institution: [Beijing Jiaotong University],
)
// 提取方法
#let (init, slides) = utils.methods(s)
#show: init
// 提取幻灯片功能
#let (slide, empty-slide, title-slide, outline-slide, new-section-slide, ending-slide) = utils.slides(s)
#show: slides.with()
#outline-slide()
= 第一部分
== 幻灯片 I / i
幻灯片内容。
== 幻灯片 I / ii
幻灯片内容。
= 第二部分
== 幻灯片 II / i
幻灯片内容。
== 幻灯片 II / ii
幻灯片内容。
|
https://github.com/cu-e/typst-study | https://raw.githubusercontent.com/cu-e/typst-study/main/МатАнализ.typ | typst |
#set heading(numbering: "1.")
#set text(size: 14pt)
#set quote(block: true)
#show quote: set align(center)
#show outline.entry.where(
level: 1
): it => {
v(12pt, weak: true)
strong(it)
}
//header
#page( footer:[#align(center, text(14pt)[2024г.])])[
#align(center, text(17pt)[
*Конспект по Математическому Анализу*
])
#align(center, text(15pt)[
*<NAME>*
])
#align(center, text(14pt)[
*Уральский Федеральный Университет*
])
#align(center, text(14pt)[
Прикладная математика
])
#align(center, text(14pt)[
*Выпуск:* '28
])
]
//Оглавление
#page()[
#align(center, text(17pt)[
*Математический Анализ*
])
#align(center)[
*Преподаватель:* \
<NAME>
]
#outline(
title: "Оглавление",
indent: auto
)]
//Логика высказываней
#page()[
#align(center)[
= *Логика Высказываней*]
Имеет только _Истенность_ и _Ложь_,
не имеет недосказонастей!
#quote[
Если в математики это не аксиома, то нуждается в докозательстве!
]\ \
- Истина $A = 1$ \
- Ложь $A = 0$
_т.е В математике ложь это 0,а истина это 1!_
\
== Операция конъюкция
_Обозначется как *$and$* или *&*_
\
/ #underline("Если") ---: $A and B = 1 <=> A = B = 1$ \ оба истенны
\
! Каждая операция соответствует таблице истенности
\
=== Таблица истенности
#table(columns: 3, align: center,
[$A$], [$B$], [$A and B$],
[1], [1], [1],
[1], [0], [0],
[0], [1], [0],
[0], [0], [0], )
Это бинарная таблица похожая на *умножение*
\
=== Свойства
$A and I = A$ (т.к I это истина)
\
$A and L = L$ (т.к L это ложь)
\
$A and A = A$
\
$A and B = B and A$
\
$A and (B and C) = (A and B) and C$ (Ассациотивность), истенно только тогда когда все истенно
== Операция Дизъюнкция
_Обозначается как $or $ или_ *OR*
=== Таблица истенности
#table(columns: 3, align: center,
[$A$], [$B$], [$A or B$],
[1], [1], [1],
[1], [0], [1],
[0], [1], [1],
[0], [0], [0], )
Это бинарная таблица похожая на *сложение*
\
=== Свойства
$A or I = I$ (т.к I это истина)
\
$A or L = A$ (т.к L это ложь)
\
$A or B = B or A$ (коммутативность)
\
$A or (B or C) = (A or C) or B$
\
== Операция Отрицания
_Обозначается как $not$ или_ *not*
\
$0 = not A <=> A = 1$
\
== <NAME>
$not(A and B) = not A or not B$
\
$not(A and B) = 0 <=> A = B = 1 <=> A and B = 1$
\
$not (A or B) = not A and not B$
\
$not A or not B = 0 <=> A = B = 1$
#figure(
image("storage/image.png", width: 10cm), caption: "<NAME>",
)
== Операция Имплекации
_Обозначется как $=>$_
\
$underbrace("A => B", "Посылка => Следствие")$
=== Таблица Истенности
#table(columns: 3, align: center,
[ A ], [B], [$A => B$],
[1], [1], [1],
[1], [0], [0],
[0], [1], [1],
[0], [0], [1]
)
Это свойство служит доказательством от противного
Давайте отрицание применим к кванотору
$not(forall x in X (P(x)))$ - когда отрицание истенно?
\
Когда есть те ктогда не для всех икс $ ( = exists x in X (P(x)))$ не справедливо, что означает, что существует такое отрицание $P(x) = not (P(x))$;
#quote()[
При отрицании кванторов (), они заменяются на квантор (), а высказывание под кватнором отрицается
\ \
$not (forall x in X (P(x)) = exists x in X(not P(x)))$
]
\ \
Запомнить так же, то что
\
$exists => >= 1$
\
$! = <= 1$
\
$exists ! => = 1$
\
]
//Математическая логика
#page()[
#align(center)[
= *Математическая логика*
]
/ *Предикат*:--- Это функция с множеством значений (Например множество $RR$ (Целых чисел)) которое возращает $0$ , если утверждение истинно, либо 0, если оно ложно. например предикат $P(x, y)$\ $P:X→{0,1}$ где $X$ — это множество, откуда берутся значения переменных (например, множество целых чисел, вещественных чисел и т. д.), а результатом всегда будет либо 0, либо 1.
_т.е утверждает что-то о каждом элементе из X, возвращая 0 или 1 в зависимости от того, выполняется ли это утверждение для данного элемента._
*Пример* \
у нас есть множество целых чисел $X = {1, 2, 3, 4, 5}$. Предикат $P(x)$, который проверяет, является ли число чётным, будет принимать отдельные числа из этого множества $X$ по одному.
- $P(1)$ возвращает $0$, так как $1$ нечётное.
- $P(2)$ возвращает $1$, так как $2$ чётное.
- $P(3)$ возвращает $0$, так как $3$ нечётное.
Таким образом, предикат **принимает один элемент** (например, $x = 1$ ) из множества $X$ и проверяет условие для этого элемента.
*Вывод*: Предикат работает с отдельными элементами множества, а не с самим множеством.
]
|
|
https://github.com/rpitasky/typst-braid | https://raw.githubusercontent.com/rpitasky/typst-braid/main/braids.typ | typst | The Unlicense | #let braid(body, strands: 0, unit_width: 18pt, unit_height: 40pt, stroke: 2pt, bold_stroke: 4pt, curvature: 0.25) = {
let crossings = body.body.children.filter(el => repr(el.func()) == "attach" and el.base.text == str(sym.sigma))
box(
width: (strands+2) * unit_width,
height: crossings.len() * unit_height,
for (idx, crossing) in crossings.enumerate() {
if not crossing.has("bottom") {
panic("A crossing is missing an index.")
}
let index = int(crossing.bottom.text) - 1
for i in range(strands) {
if i != index and i != index + 1 {
place(
top+left,
line(
stroke: stroke,
start: (unit_width * i, unit_height * idx),
end: (unit_width * i, (unit_height) * (idx+1))
)
)
} else if i == index {
let (first_stroke, second_stroke) = if crossing.has("top") {
(stroke, bold_stroke)
} else {
(bold_stroke, stroke)
};
place(
top+left,
path(
stroke: first_stroke,
closed: false,
(unit_width * i, unit_height * idx),
((unit_width * i, unit_height * (idx+curvature)), (0pt, 0pt), (0pt, unit_height*curvature)),
((unit_width * (i+1), unit_height * (idx+1-curvature)), (0pt, -unit_height*curvature), (0pt, 0pt)),
(unit_width * (i+1), unit_height * (idx+1))
)
)
place(
top+left,
path(
stroke: second_stroke,
closed: false,
(unit_width * (i+1), unit_height * idx),
((unit_width * (i+1), unit_height * (idx+curvature)), (0pt, 0pt), (0pt, unit_height*curvature)),
((unit_width * i, unit_height * (idx+1-curvature)), (0pt, -unit_height*curvature), (0pt, 0pt)),
(unit_width * i, unit_height * (idx+1))
)
)
}
}
}
)
} |
https://github.com/KaarelKurik/conditional-plasticity | https://raw.githubusercontent.com/KaarelKurik/conditional-plasticity/main/template.typ | typst | #let project(title: "", authors: (), subject-class: (),
key-words: (),
body) = {
// Set the document's basic properties.
set document(author: authors.map(a => a.name), title: title)
let footer = locate(loc => {
align(center, counter(page).display())
let i = counter(page).at(loc).first();
if (i == 1) {
v(0.25em, weak:true)
line()
v(0.5em, weak:true)
if (subject-class.len() > 0) {
[2020 _Mathematics Subject Classification_. #subject-class.join(", ")]
}
v(0.5em, weak:true)
if (key-words.len() > 0) {
[_Key words and phrases._ #key-words.join("; ")]
}
}
})
set page(numbering: "1", number-align: center, footer: footer,
footer-descent: 30%,
)
set text(font: "Linux Libertine", lang: "en")
set heading(numbering: "1.1.")
// Title row.
align(center)[
#block(text(weight: 700, 1.75em, title))
]
// Author information.
pad(
top: 0.5em,
bottom: 0.5em,
x: 2em,
grid(
columns: (1fr,) * calc.min(3, authors.len()),
gutter: 1em,
..authors.map(author => align(center)[
*#author.name* \
#author.email
]),
),
)
// Main body.
set par(justify: true)
body
pad(
top: 0.5em,
x: 2em,
grid(
columns: (1fr,),
..authors.map(author => [
#smallcaps(author.affiliation) \
_Email address_: #raw(author.email)
])
)
)
} |
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/visualize/path.typ | typst | Apache License 2.0 | // Test paths.
---
#set page(height: 200pt, width: 200pt)
#table(
columns: (1fr, 1fr),
rows: (1fr, 1fr),
align: center + horizon,
path(
fill: red,
closed: true,
((0%, 0%), (4%, -4%)),
((50%, 50%), (4%, -4%)),
((0%, 50%), (4%, 4%)),
((50%, 0%), (4%, 4%)),
),
path(
fill: purple,
stroke: 1pt,
(0pt, 0pt),
(30pt, 30pt),
(0pt, 30pt),
(30pt, 0pt),
),
path(
fill: blue,
stroke: 1pt,
closed: true,
((30%, 0%), (35%, 30%), (-20%, 0%)),
((30%, 60%), (-20%, 0%), (0%, 0%)),
((50%, 30%), (60%, -30%), (60%, 0%)),
),
path(
stroke: 5pt,
closed: true,
(0pt, 30pt),
(30pt, 30pt),
(15pt, 0pt),
),
)
---
// Error: 7-9 path vertex must have 1, 2, or 3 points
#path(())
---
// Error: 7-47 path vertex must have 1, 2, or 3 points
#path(((0%, 0%), (0%, 0%), (0%, 0%), (0%, 0%)))
---
// Error: 7-31 point array must contain exactly two entries
#path(((0%, 0%), (0%, 0%, 0%)))
|
https://github.com/FrightenedFoxCN/cetz-cd | https://raw.githubusercontent.com/FrightenedFoxCN/cetz-cd/main/src/cetz-cd.typ | typst | #import "@preview/cetz:0.2.2"
#import "utils.typ": *
#import "arrows.typ": *
#import "parser.typ": *
#let make-cd(table, arrow, width, height) = style(styles => {align(center)[#cetz.canvas(length: 1cm, {
// quick check on whether all line has same number of items
let line-number = table.len()
let line-length = table.at(0).len()
// for line in table {
// if line.len() != line-length {
// panic("Numbers of items in a line is different. Check your input!")
// }
// }
// decide how large the "grid" is
let hspacing = (0, ) * line-number
let vspacing = (0, ) * line-length
for line in table.enumerate() {
for item in line.last().enumerate() {
vspacing.at(item.first()) = calc.max(
measure(item.last(), styles).width.cm(),
vspacing.at(item.first())
)
hspacing.at(line.first()) = calc.max(
measure(item.last(), styles).height.cm(),
hspacing.at(line.first())
)
}
}
// then convert the grid space to coordinate, 2cm in between for now
let coord_x = averaging-series(partial-sum-series(vspacing).enumerate().map(a => a.first() * width + a.last()))
let coord_y = averaging-series(partial-sum-series(hspacing).enumerate().map(a => - a.first() * height - a.last()))
// finally, place the item into the table
for line in table.enumerate() {
for item in line.last().enumerate() {
cetz.draw.content(
(coord_x.at(item.first()), coord_y.at(line.first())),
item.last(), anchor: "center"
)
}
}
// next, let's place the arrow
// the arrow is parsed into a three-dimensional array, first denoting the
// line, second denoting the item. An item has multiple lines departing
// from it, which is the third level of the array
for line in arrow.enumerate() {
for item in line.last().enumerate() {
for arrow in item.last() {
if arrow.direction == (0, 0) {
continue
}
let line-index = line.first()
let item-index = item.first()
let departure-point = (item-index, line-index)
let end-point = departure-point.zip(arrow.direction).map(((a, b)) => a + b)
// integrity check on boundaries
check-boundary(end-point, line-length, line-number)
let depart-x-c = vspacing.at(item-index) / 2
let depart-y-c = hspacing.at(line-index) / 2
let end-x-c = vspacing.at(end-point.at(0)) / 2
let end-y-c = hspacing.at(end-point.at(1)) / 2
let depart-height-c = measure(table.at(line-index).at(item-index), styles).height.cm() / 2
let depart-width-c = measure(table.at(line-index).at(item-index), styles).width.cm() / 2
let end-height-c = measure(table.at(end-point.last()).at(end-point.first()), styles).height.cm() / 2
let end-width-c = measure(table.at(end-point.last()).at(end-point.first()), styles).width.cm() / 2
// then we place the arrow a bit away from the departure point to the end point
// @TODO: custom padding length
let line-start = (
coord_x.at(item-index) +
(depart-width-c + 0.1)
* arrow.direction.at(0),
coord_y.at(line-index) -
(depart-height-c + 0.2)
* arrow.direction.at(1)
)
let line-end = (
coord_x.at(end-point.at(0)) -
(end-width-c + 0.1)
* arrow.direction.at(0),
coord_y.at(end-point.at(1)) +
(end-height-c + 0.1)
* arrow.direction.at(1))
let text-size = (measure(arrow.text, styles).width.cm(), measure(arrow.text, styles).height.cm())
let arrow-to-draw = cd-arrow(
line-start,
line-end,
arrow.style,
arrow.text,
text-size,
arrow.swapped,
arrow.bent,
arrow.offset
)
draw-arrow(arrow-to-draw)
}
}
}
})]})
#let cetz-cd(content, width: 2, height: 2) = {
let (table, arrows) = parser(content.text)
make-cd(table, arrows, width, height)
} |
|
https://github.com/EpicEricEE/typst-plugins | https://raw.githubusercontent.com/EpicEricEE/typst-plugins/master/united/src/data.typ | typst | #let eval = eval.with(mode: "math")
// Load a CSV file with prefixes.
//
// Parameters:
// - path: Path of the CSV file.
// - delimiter: Passed to the `csv` function.
#let read-prefixes(path, delimiter: ",") = {
let array = csv(path, delimiter: delimiter)
let prefixes = (:)
let prefixes-short = (:)
for (name, short, value) in array {
prefixes.insert(lower(name), eval(value))
prefixes-short.insert(short, eval(value))
}
(prefixes, prefixes-short)
}
// Load a CSV file with postfixes.
//
// Parameters:
// - path: Path of the CSV file.
// - delimiter: Passed to the `csv` function.
#let read-postfixes(path, delimiter: ",") = {
let array = csv(path, delimiter: delimiter)
let postfixes = (:)
for (name, value) in array {
postfixes.insert(lower(name), eval(value))
}
postfixes
}
// Load a CSV file with units.
//
// Parameters:
// - path: Path of the CSV file.
// - delimiter: Passed to the `csv` function.
#let read-units(path, delimiter: ",") = {
let array = csv(path, delimiter: delimiter)
let units = (:)
let units-short = (:)
let units-space = (:)
let units-short-space = (:)
for (name, short, value, space) in array {
units.insert(lower(name), eval(value))
units-short.insert(short, eval(value))
space = space in ("true", "1")
units-space.insert(lower(name), space)
units-short-space.insert(short, space)
}
(units, units-short, units-space, units-short-space)
}
// Load data from assets.
#let (
prefixes,
prefixes-short
) = read-prefixes("../assets/prefixes.csv")
#let (
units,
units-short,
units-space,
units-short-space
) = read-units("../assets/units.csv")
#let postfixes = read-postfixes("../assets/postfixes.csv")
|
|
https://github.com/8LWXpg/jupyter2typst | https://raw.githubusercontent.com/8LWXpg/jupyter2typst/master/test/test6.typ | typst | MIT License | #import "template.typ": *
#show: template
#block[
$ inline(integral) integral without without
h#text(baseline: -2pt)[$psi i g h e$]r
overbrace(x+⋯+x) n#[ times]
rho*3\
#box(inset: (left: 3pt, right: 3pt), outset: (top: 3pt, bottom: 3pt), fill: red)[$3equiv 5 mod 2$]
H e l l o L a T e X.\
dot(sum_0^n)display(binom(0, 1))\
#box(inset: (left: 3pt, right: 3pt), outset: (top: 3pt, bottom: 3pt), fill: rgb("#00ff00"))[a=b]\
::\
"::−"\
#box(inset: (left: 3pt, right: 3pt), outset: (top: 3pt, bottom: 3pt), stroke: 0.5pt)[$a a A B$]A
bold(A sin B)
bold(A sin B)
bold(A sin B)
a mod b\
a mod b\
binom(#[asdf], a a s)
≬\
bb(A)B\
#math.op("arcctg")
a.3
a\/b
acute(a b)\
frac(and, %)=3a a\
macron(a)\
h①\
grave(=)a\
a space.sixth b\
a space b\
and\
$$
a&=b+c \
d+e&=f
$$
\
mat(delim: #none,
a , b ;
c , d
)
\
mat(delim: #none,
a , b ;
c , d
)
\
cal(A a B b)\
frac(5, 4) = 1inline(frac(1, 4))\
#math.op("asin", limits: true)_z x\
x#box(inset: (bottom: 10pt), box(fill: black, width: 2em, height: 3em))x\
frac(c, d)\
#math.op("asin", limits: true)_y x\
#box(inset: (left: 3pt, right: 3pt), outset: (top: 3pt, bottom: 3pt), fill: blue)[h]e l l o\
a b\
a#h(-1em/6)b\
space a $
]
|
https://github.com/eliapasquali/typst-thesis-template | https://raw.githubusercontent.com/eliapasquali/typst-thesis-template/main/chapters/conclusion.typ | typst | Other | #pagebreak(to:"odd")
= Conclusioni
<cap:conclusioni>
#v(1em)
== Consuntivo finale
== Raggiungimento degli obiettivi
== Conoscenze acquisite
== Valutazione personale
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/meta/page-label.typ | typst | Apache License 2.0 | #set page(margin: (bottom: 20pt, rest: 10pt))
#let filler = lorem(20)
// (i) - (ii). No style opt. because of suffix.
#set page(numbering: "(i)")
#filler
#pagebreak()
#filler
// 3 - 4. Style opt. Page Label should use /D style.
#set page(numbering: "1")
#filler
#pagebreak()
#filler
// I - IV. Style opt. Page Label should use /R style and start at 1 again.
#set page(numbering: "I / I")
#counter(page).update(1)
#filler
#pagebreak()
#filler
#pagebreak()
#filler
#pagebreak()
#filler
// Pre: ほ, Pre: ろ, Pre: は, Pre: に. No style opt. Uses prefix field entirely.
// Counter update without numbering change.
#set page(numbering: "Pre: い")
#filler
#pagebreak()
#filler
#counter(page).update(2)
#filler
#pagebreak()
#filler
#pagebreak()
#filler
// aa & ba. Style opt only for values <= 26. Page Label uses lower alphabet style.
// Repeats letter each 26 pages or uses numbering directly as prefix.
#set page(numbering: "a")
#counter(page).update(27)
#filler
#pagebreak()
#counter(page).update(53)
#filler
|
https://github.com/frectonz/the-pg-book | https://raw.githubusercontent.com/frectonz/the-pg-book/main/book/034.%20charisma.html.typ | typst | charisma.html
It's Charisma, Stupid
November 2004, corrected June 2006Occam's razor says we should prefer the simpler of two explanations.
I begin by reminding readers of this principle because I'm about
to propose a theory that will offend both liberals and conservatives.
But Occam's razor means, in effect, that if you want to disagree
with it, you have a hell of a coincidence to explain.Theory: In US presidential elections, the more
charismatic candidate wins.People who write about politics, whether on the left or the right,
have a consistent bias: they take politics seriously. When one
candidate beats another they look for political explanations. The
country is shifting to the left, or the right. And that sort of
shift can certainly be the result of a presidential election, which
makes it easy to believe it was the cause.But when I think about why I voted for Clinton over the first George
Bush, it wasn't because I was shifting to the left. Clinton just
seemed more dynamic. He seemed to want the job more. Bush seemed
old and tired. I suspect it was the same for a lot of voters.Clinton didn't represent any national shift leftward.
[1]
He was
just more charismatic than <NAME> or (God help us) <NAME>.
In 2000 we practically got a controlled experiment to prove it:
Gore had Clinton's policies, but not his charisma, and he suffered
proportionally.
[2]
Same story in 2004. Kerry was smarter and more
articulate than Bush, but rather a stiff. And Kerry lost.As I looked further back, I kept finding the same pattern.
Pundits said Carter beat Ford
because the country distrusted the Republicans after Watergate.
And yet it also happened that Carter was famous for his big grin
and folksy ways, and Ford for being a boring klutz. Four years
later, pundits said the country had lurched to the right. But
Reagan, a former actor, also happened to be even more charismatic
than Carter (whose grin was somewhat less cheery after four stressful
years in office). In 1984 the charisma gap between Reagan and
Mondale was like that between Clinton and Dole, with similar results.
The first <NAME> managed to win in 1988, though he would later
be vanquished by one of the most charismatic presidents ever, because
in 1988 he was up against the notoriously uncharismatic
<NAME>.These are the elections I remember personally, but apparently the
same pattern played out in 1964 and 1972. The most recent
counterexample appears to be 1968, when Nixon beat the more charismatic Hubert
Humphrey. But when you examine that election, it tends to support
the charisma theory more than contradict it. As <NAME>
recounts in his famous book The Selling of the President 1968,
Nixon knew he had less charisma than Humphrey, and thus simply
refused to debate him on TV. He knew he couldn't afford to let the
two of them be seen side by side.Now a candidate probably couldn't get away with refusing to debate.
But in 1968 the custom of televised debates was still evolving. In
effect, Nixon won in 1968 because voters were never allowed to see
the real Nixon. All they saw were carefully scripted campaign
spots.Oddly enough, the most recent true counterexample is probably 1960.
Though this election is usually given as an example of the power
of TV, Kennedy apparently would not have won without fraud by party
machines in Illinois and Texas. But TV was still young in 1960;
only 87% of households had it.
[3]
Undoubtedly TV helped Kennedy,
so historians are correct in regarding this election as a
watershed. TV required a new kind of candidate. There would be no
more <NAME>.The charisma theory may also explain why Democrats tend to lose
presidential elections. The core of the Democrats' ideology seems
to be a belief in government. Perhaps this tends to attract people
who are earnest, but dull. Dukakis, Gore, and Kerry were so similar
in that respect that they might have been brothers. Good thing for
the Democrats that their screen lets through an occasional Clinton,
even if some scandal results.
[4]One would like to believe elections are won and lost on issues, if
only fake ones like <NAME>. And yet, if they are, we have
a remarkable coincidence to explain. In every presidential election
since TV became widespread, the apparently more charismatic candidate
has won. Surprising, isn't it, that voters' opinions on the issues
have lined up with charisma for 11 elections in a row?The political commentators who come up with shifts to the left or
right in their morning-after analyses are like the financial reporters
stuck writing stories day after day about the random fluctuations
of the stock market. Day ends, market closes up or down, reporter
looks for good or bad news respectively, and writes that the market
was up on news of Intel's earnings, or down on fears of instability
in the Middle East. Suppose we could somehow feed these reporters
false information about market closes, but give them all the other
news intact. Does anyone believe they would notice the anomaly,
and not simply write that stocks were up (or down) on whatever good
(or bad) news there was that day? That they would say, hey, wait
a minute, how can stocks be up with all this unrest in the Middle
East?I'm not saying that issues don't matter to voters. Of course they
do. But the major parties know so well which issues matter how
much to how many voters, and adjust their message so precisely in
response, that they tend to split the difference on the issues,
leaving the election to be decided by the one factor they can't
control: charisma.If the Democrats had been running a candidate as charismatic as
Clinton in the 2004 election, he'd have won. And we'd be reading
that the election was a referendum on the war in Iraq, instead of
that the Democrats are out of touch with evangelical Christians in
middle America.During the 1992 election, the Clinton campaign staff had a big sign
in their office saying "It's the economy, stupid." Perhaps it was
even simpler than they thought.PostscriptOpinions seem to be divided about the charisma theory. Some say
it's impossible, others say it's obvious. This seems a good sign.
Perhaps it's in the sweet spot midway between.As for it being impossible, I reply: here's the data; here's the
theory; theory explains data 100%. To a scientist, at least, that
means it deserves attention, however implausible it seems.You can't believe voters are so superficial that they just choose
the most charismatic guy? My theory doesn't require that. I'm not
proposing that charisma is the only factor, just that it's the only
one left after the efforts of the two parties cancel one
another out.As for the theory being obvious, as far as I know, no one has
proposed it before. Election forecasters are proud when they can
achieve the same results with much more complicated models.Finally, to the people who say that the theory is probably true,
but rather depressing: it's not so bad as it seems. The phenomenon
is like a pricing anomaly; once people realize it's there, it will
disappear. Once both parties realize it's a waste of time to
nominate uncharismatic candidates, they'll tend to nominate only
the most charismatic ones. And if the candidates are equally
charismatic, charisma will cancel out, and elections will be decided
on issues, as political commentators like to think they are now.Notes[1]
As Clinton himself discovered to his surprise when, in one of
his first acts as president, he tried to shift the military leftward.
After a bruising fight he escaped with a face-saving compromise.[2]
True, Gore won the popular vote. But politicians know the electoral
vote decides the election, so that's what they campaign for. If Bush
had been campaigning for the popular vote he would presumably have
got more of it. (Thanks to judgmentalist for this point.)[3]
Source: Nielsen Media Research. Of the remaining 13%, 11 didn't
have TV because they couldn't afford it. I'd argue that the missing
11% were probably also the 11% most susceptible to charisma.[4]
One implication of this theory is that parties shouldn't be too
quick to reject candidates with skeletons in their closets.
Charismatic candidates will tend to have more skeletons than squeaky
clean dullards, but in practice that doesn't seem to lose elections.
The current Bush, for example, probably did more drugs in his
twenties than any preceding president, and yet managed to get elected
with a base of evangelical Christians. All you have to do is say
you've reformed, and stonewall about the details.Thanks to <NAME>, <NAME>, <NAME>,
<NAME>, and <NAME> for reading drafts of this, and
to <NAME> for pointing out that I was wrong about 1968.
Comment on this essay.What Charisma IsPolitics and the Art of ActingJapanese Translation
|
|
https://github.com/kotfind/typst_task | https://raw.githubusercontent.com/kotfind/typst_task/master/example/second_tour.typ | typst | #import "../task.typ": tour, task
#show: tour.with(
title: [Second Tour]
)
#task[Task C][
Condition C
][
Solution C
Answer. C 42
]
#task[Task D][
Condition D
][
Solution D
Answer. D 42
]
|
|
https://github.com/EpicEricEE/typst-equate | https://raw.githubusercontent.com/EpicEricEE/typst-equate/master/tests/break/test.typ | typst | MIT License | #import "/src/lib.typ": equate
#set page(width: 6cm, height: 2cm, margin: 1em)
#show: equate
// Test equations breaking across page boundaries.
#show math.equation: set block(breakable: true)
$ a + b \
c - d \
e + f \
g = h $
$ a &= b \
&= d \
&= f \
g &= h $
// Test breakable parameter.
#equate(breakable: false, $
a + b \
c - d \
e + f \
g = h
$)
#show math.equation: set block(breakable: false)
$ a + b \
c - d \
e + f \
g = h $
#equate(breakable: true, $
a + b \
c - d \
e + f \
g = h
$)
|
https://github.com/polarkac/MTG-Stories | https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/051%20-%20March%20of%20the%20Machine/018_Episode%2010%3A%20The%20Rhythms%20of%20Life.typ | typst | #import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
"Episode 10: The Rhythms of Life",
set_name: "March of the Machine",
story_date: datetime(day: 28, month: 04, year: 2023),
author: "<NAME>",
doc
)
On Kamigawa, a boy returns home, covered in the dust of a city's wreckage. When he arrives, he holds himself differently—no longer afforded the wide-eyed innocence of a child. Things have changed.
His mother has changed.
The boy's father knows it the moment he hears her voice, the moment he sees her. Gone is her body, replaced by a string of characters that glow in time with false breath. A spirit that hovers before him, holding his son's hand.
And yet it is unmistakably her.
A father can wish for many things. A boy can, as well. But the one wish they share is that their family can remain together.
Fate has taken much from them—but it has not taken that wish.
The father embraces his family. The mother returns home. And the boy stands weary, but happy, in the middle of it all.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
On Kaldheim, an elf stands on the gunwale of a commandeered ship. He watches the seas churn before him and counts the minutes. How long has it been since he's seen his brother?
A count of hundred. A count of two hundred.
How long has it been since the serpent dragged him under?
Two hundred fifty.
The fighting around them has come to a miraculous halt. Everywhere there is raucous cheering, everywhere there is music, everywhere his fellows celebrate a battle hard won.
Yet Harald has ears only for the sea.
Three hundred.
How long will it be until he gives up?
How long would it be until Tyvar gave up on him?
Harald never has to answer the question. On the count of three hundred thirteen, <NAME> bursts from the water clinging to the serpent's engorged head. Grinning as always, he slaps its surface. "Do you see, brother? You will never outdo this!"
Harald does not often cheer when met with a boast he cannot match—but today, he will make an exception.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
On Kaladesh, a mother braces herself for death. What hope does she have for anything else? Her only weapon is a length of sharp metal she scavenged from the wreckage of her plane. Surrounded by Phyrexian soldiers on a platform above the aetherflux reservoir, she has nowhere to go. The whirr of a metal blade heralds her end—but if she can at least push them off the platform as she falls, maybe it'll be safe for a while longer.
She takes a breath. Takes a step. Readies herself for the pain of impact—only for the whirring saw to still.
Soldiers crumble like piles of twigs, their metal limbs falling off the platform.
Hope blooms in Pia's breast. All over Ghirapur it is the same: the Phyrexians are falling. Some stop in place, some crumble apart. Those already compleated fall to the ground as if in a deep slumber. In the distance their warship is plummeting from the sky.
They've won.
<NAME>'s not sure of the mechanics of this. She doesn't know how any of this is happening, although she suspects Saheeli will fill her in later. What she does know—and what she has always known—is that she can trust in her daughter to get things done.
On that platform above the reservoir, Pia mutters a thanks to Chandra.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
You can say much with a drum. One rhythm tells the tale of the market's latest wares; another the arrival of a new family member; a third announces the passing of an elder. When you speak in this way it will carry over a great distance. Another community's drummer may hear it and bring the news to their own people, with a flourish to tell them from whence it came. For centuries, the Zhalfirins have known this: whatever they speak with their palms against stretched leather will soon resound through the plane.
And on this day the message is a simple one: #emph[Rejoice] .
Everywhere, the drums call; everywhere, rhythm fills the chests of the Zhalfirins and tells them of the reason for their joy. Phyrexia lies broken and defeated beyond the reach of time. Zhalfir itself has found a new home among the planes, a place where they may once more entertain visitors.
Visitors like their own wandering son, Teferi.
Teferi, too, can hear the drums. It is hard not to smile when he does. For him, it's been hundreds of years away—hundreds of years in which he might have forgotten the rhythmic language of home. What a relief to stand on a grassy hill and perfectly understand it, instead. Every beat tells him that he belongs here.
And, on other days, he might feel that's true. Most days.
Today the story is more complicated. For while the people of Zhalfir are celebrating their victory, Teferi is mourning his losses.
It took the better part of two days to find the perfect spot for Wrenn. As he searched, he tried to imagine what sort of things she'd like. Would she prefer to grow among the ancient baobab trees, protected by these gigantic arbors? What about afiya—were they chattier, since there were often so many of them? Did she admire the upright, unfaltering nature of marula, or was she more interested in the flexible and mystic yew? Zhalfir had all of these and more. Which is the most fitting tribute for the woman who saved the Multiverse?
In the middle of the second day, he realized he'd been going about it all the wrong way. The trick wasn't to wonder about the specifics of the thing.
With the acorn in hand, he thought of his friend, and he walked until it felt right.
So, he ended up here on this grassy hill overlooking town. A few oaks are within singing distance, he thinks, and from here she'll be able to see all manner of things in the village. And, when she's grown, she can choose to move wherever she likes. Zhalfir will welcome her.
Teferi digs. The earth is warm, the soil dark and rich. He lays the acorn in the little clearing he has made, then fills it in. Water from his own gourd he offers her. He sits next to the little mound and sighs.
#figure(image("018_Episode 10: The Rhythms of Life/01.jpg", width: 100%), caption: [Art by: Gaboleps], supplement: none, numbering: none)
"I think you'd like this," he says. "The music, I mean."
The mound says nothing.
"I #emph[should ] join them. You're right. But I wanted to make sure that you were all right first."
Down in the village, the flute players have emerged. They, too, begin to dance around the fires. He watches them for a little while: the youths with more vigor than skill, the married couples clinging to one another with easy grace, the Mirrans who do not know the steps and the children who teach them. By all accounts, it is a beautiful sight.
"I've waited a few hundred years. A couple more minutes won't hurt anybody. And I wanted to thank you, again, for everything you've done. It's~" He runs a hand along the back of his head. "Please don't get the wrong idea, I'm grateful, more grateful than I've ever been. But it's hard to lose another friend."
When he searches the faces around the fire, so many of them are familiar. Once, he knew everyone in this village: their mothers, their fathers, who made the best food and whose food is better off slyly served to their livestock.
He has lived for years without them. Centuries. Yet to them, he has only been gone a short while. There are bridges that his countrymen can never cross with him, things they can never understand. But family rarely understands each other completely.
This place—it is home, and it isn't. It's a home he needs to learn again.
They are friends he needs to make again.
In the wake of the war against Phyrexia, that feels an impossible task.
Teferi hangs his head back. "I know the others are here," he says. "And~ I know, I should go speak to them. See how they're doing."
Only the drums answer. He closes his eyes. For a long while he does not open them. Instead, he lays a palm against the earth and wills himself to truly feel it. To notice how it is still a little damp, how it feels when he tugs on the blades, the soft springiness of the dirt beneath. In the distance he can hear laughter. Someone, probably one of the Mirrans, is asking how to play the flute. Seconds later a shrill note jolts everyone offbeat—but more laughs follow. The crackle of a fire, the wind against his skin, the cool moonlight air like drinking fresh water~ all his wrongs made right.
There's a pain pricking at the corners of his eyes, a pressure.
For a little while, he will allow himself to weep. Tears for him and his lost years, tears for Karn and his lost past, tears for Nissa and Ajani and all the others who may never wake now that Phyrexia has fallen.
And, most of all, there are tears for those who cannot join in the dancing by the fire.
When he is done it is the dead of night. The moon hangs high overhead. Dancing's given way to story swapping, something he can no longer hear from where he is. The Zhalfirins listen as one of the Mirrans speaks. Fire paints his greenish skin gold, yet the look in his eyes is distant and pained. Thrun, they'd called him. He's asked after Melira every day since the end of the war.
They will all be paying the price for their freedom for years to come. Teferi included. He has to go see the others.
He gives the mound of dirt another reverent pat. "Thank you for the company," he says. "Next time, I'll bring the rest of our friends."
Once he's gotten to his feet it doesn't take long to reach his destination. Here in the healer's ward, there is no dancing. There can't be. In all his days as a boy, the ward was never so full as this. Beds spill out onto the surroundings, boughs shaped to hold the infirm. Healers move from one to another like bees within a hive. Here the wails of the dying provide a dirge to counter the joyous rhythm.
Teferi doesn't look away from any of it. The results of this war have been a long time coming. If he ignores them now, he does a disservice to all those who made it a war they could win. As he searches for his friends, he takes the time to visit some of the injured and wish them well. Here and there, when he knows of something that might help, he assists the healers in their duties.
Eventually, he makes his way into the main structure itself. Karn, Koth, and the others are easy to spot within it: they are sequestered away in their own section. Tonight, their new suns shine on them, but in the morning the awning above will be drawn, and they will be in the cool shade. He wonders how Karn feels about that sort of thing, if it makes any difference to him either way.
"~isn't a fairy tale. You have to stop holding on."
Ah—that voice. Kaya. She must have arrived while he was away. As he rounds the corner into their enclosure, he readies himself for the worst.
"There's a way. I just know there is. All we have to do is find it, right? And she's not getting any worse, so I don't see the harm."
"And if she awakens?"
The air in the room is taut as a bowstring when Teferi enters. All at once he can feel their eyes on him: Karn, Koth, Chandra, and now Kaya. But there are others in the room, too. Nissa and Ajani are both laid out on their own beds. Chandra's got Nissa's hand in her own—she's not moved since they got back from New Phyrexia. Koth and Karn worked with the healers here to remove as much Phyrexian metal from her as they could—but from the look of it she'd have to hold onto some. Ajani, next to her, is a newer sight. They haven't culled anything away from him yet.
Aside from their breathing and the occasional twitching of their eyes, they are motionless. Between them—and with some help from Saheeli, during her brief visit—they think they understand why. Wrenn swapped New Phyrexia and Zhalfir's places in the Multiverse. That means New Phyrexia is off somewhere nothing can reach.
#figure(image("018_Episode 10: The Rhythms of Life/02.jpg", width: 100%), caption: [Art by: Anato Finnstark], supplement: none, numbering: none)
"Suppose you're New Phyrexia. You're going to compleat thousands of people in short order. What's the best way to keep in contact with them?" Saheeli had asked them.
"Some kind of signal," answered Kaya. "A call only they can hear."
Saheeli nodded. "Just so. But what will you use to carry your signal? What unifies all Phyrexians?"
"Oil?" ventured Chandra.
"That's right. Maybe that's why they're so intent on spreading it—to amplify whatever signal they were spreading in the first place. Now that Phyrexia's left the Multiverse, they've gone out of range. The oil keeps listening for new orders but isn't receiving any. Why's that? Well, there are a host of possible answers all worthy of study. However, in#emph[ my] expert opinion, I think all of it was tied back to Norn. A megalomaniac of that scale wouldn't want anyone else to have control over her army. I imagine she was the only one who could send orders—and, further, that the oil is rendered inert without her. Wouldn't want a rival seizing control when you've lost contact, would you? So, without her~"
"They might be asleep forever," said Koth.
"Or they might wake up," said Chandra, who sounded as if she was trying to convince herself first and foremost. "The best thing to do is wait."
That had been days ago. Saheeli had long since left for Kaladesh anew. The rest of them? They'd continued to wait.
Of course, not everyone is so lucky as to escape with their lives in suspended animation. Melira's here, too. Teferi nods to her as he enters—but she hardly has the strength to acknowledge him. The healers have done what they can to hide the scent of rot, but there's no mistaking it. Her chest wound's gone bad. And, judging from the clamminess of her skin and the glassy look in her eyes, she doesn't have long. Koth's at her side.
Teferi feels a pang of anger at all of them, arguing over something that could wait when Melira's in this state. Thankfully, they stopped when he came in. "We can discuss that later, Kaya," he says. He picks up a small bowl and pours Melira fresh water. This he hands to Koth, who helps her to drink it. "For tonight—let's be grateful we're all here in one piece."
"Right," says Kaya. "Sorry."
"The war had us all on edge," says Koth. "And I can't say you're wrong—we do have to decide what to do with them. But maybe~ maybe not tonight."
"Do you have any good news?" Chandra asks. How like her—quick to anger, but just as quick to forgive and forget.
"Some," Kaya says. "Things are starting to stabilize. Everywhere I've been, people are starting to catch their breath. Liliana's all right. So's Vivien. Tyvar's going to be bragging about killing a Phyrexian sea serpent long after we're all dead. Kaito's helping the Wanderer with reconstruction efforts on Kamigawa. Still no sign of Jace, or Vraska, but I'm not holding my breath on either of those counts."
Their conversation picks up from there. Chandra has about a thousand questions to ask and only space for ten; Kaya's happy enough to answer whatever comes her way as a peace offering.
But that's their conversation. Tidings of distant planes probably mean little to Melira, who spent all her life within the confines of New Phyrexia. "Is there anything I can get you?" he asks her.
To his surprise, she offers a weak smile. "You know, it's funny."
"I'm told I'm very funny," he answers. Koth's staring daggers at him for it, but Teferi knows how important laughter is to the sick—and she does chuckle a little.
"Sort of, in a~ certain kind of way," says Melira. "I was hoping you'd come by. I have a request."
Teferi takes her hand. "Whatever I can do to help, just say the word."
Melira turns her head, moving as slowly as a stuck weathervane. Her glassy eyes land on Karn. "Could you call him over, too?"
There's no need. Karn can hear them well enough, and he lumbers on over. It occurs to Teferi that they might be the most miserable people on Zhalfir right now. A thick cloud of guilt stops up their throats and burns their eyes. All they can do is offer one another company.
At least until Melira breaks the silence. Though spoken in a quiet voice, the words are bright and clear as lightning. "I think~ I think I have an idea for fixing the two of them."
Koth frowns. "They're gone, Melira."
"Not all the way. If their hearts are untouched, I might be able to manage something. The signal's dead, right? So as long as we can cleanse their bodies, they should be okay." "~In theory," Karn says. Teferi always admired how someone as large as his old friend could speak in hushed tones. "Although the means of doing that have escaped us."
"That's just the thing," she says. "We have the pieces right here, they've all come together." She pauses. Teferi's not sure whether it's because she's mulling over her proposition, or because she's too tired to keep talking right now. Before he can offer her some peace to think things through, she continues. "If we're going to do it, we have to do it soon. I don't think I have much longer. It isn't going to be easy, and there'll be a price. But~ I want people to have hope that it #emph[can] happen. And maybe years from now someone will figure out an easier way to do it. One that won't require me, or Karn. People need that hope."
Teferi hangs his head. In the distance, he can hear the drums, announcing to all who will listen that Zhalfir will welcome them. Rejoice, they say. Celebrate that you've survived.
Not all of them have. Melira won't survive the night.
But if they can save Nissa and Ajani~ maybe it's worth trying. Maybe they can erase just a little of this illness.
"Let's hear the plan," Teferi says.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
"There you are. Do you have time to talk?"
Koth isn't used to Elspeth's celestial shape, but he has some faith that the woman underneath is the same. Yet only some. A woman standing atop a tree, balanced on a single foot, overlooking her surroundings in perfect stillness, can hardly be said to look human. <NAME> never looked at him with such clear eyes before.
"Koth. It is good to see you," she says. Though she is atop the tree and he at its base, he hears her perfectly. "There is always time to talk, if talking is what's needed. Give me a moment."
She alights down from the tree, floating like a feather down in front of him. As the drums continue, they head out toward a clear patch of grass. Koth can't remember the last time he saw so many plants in one place. Maybe never. Everything here feels soft and delicate, as if it's under threat from his feet, but he tries not to dwell on it. This place is not Mirrodin. It can never be Mirrodin. The Mirrodin he fought for is as dead as <NAME>.
But that's part of what he wants to talk with Elspeth about. Once they're out of earshot, he takes a breath. Where to start? He could ask how she became~ this. Thank her for coming back to a place like Mirrodin knowing it could kill her. They could talk about what happened there, in those final moments, or they could talk about what's going on in the healer's ward now. Normally he'd know exactly where to start—but here, there's just too much.
"You want advice," Elspeth says.
He feels a smile tugging at his lips. "Didn't expect you to start things off. Yeah. I need advice."
She doesn't smile back, although there's a certain softness to her features. "This form has its blessings. What's troubling you?"
"Do you know about Melira's plan?" He asks. He isn't sure how far those new blessings extend.
"No. But I know that she will not be with us for much longer," she says. Elspeth looks up at the suns above them. "I will be very sorry for her loss."
Strange—the sorrow does touch her voice, but only slightly, and it never reaches her face. When he'd first met Elspeth, she cried often. To see her so composed now~ Koth is proud, but part of him worries about her. When all of this is over, what will remain of those who fought for Mirrodin? Elspeth's an angel now, the resistance survives bloodied and broken, Melira's about to die, and Karn~
What will they even do with Melira, afterward? She will be the first of them to die here. Among the Vulshok, bodies were burned and ashes spread. But what were the traditions here? Would the Zhalfirins want her buried beneath this soft earth, that the roots may claim her bones, and the worms feast on her flesh? They might say it's an honor to rejoin nature, but Koth knows otherwise.
Melira is from Mirrodin. They'll give her a proper send-off.
When it comes to that.
He presses his eyes closed. "She wants to try and heal the others. Ajani and the elf."
"I owe Ajani my life and much more besides," Elspeth says with a nod.
"It's going to kill her, doing that," Koth says. "And Karn's going to burn what he's got left of Venser's spark, too. They have it all planned out."
He doesn't care to argue about whether Ajani deserves redemption; Phyrexia warps those it touches. But there's a bitterness in his heart when he considers that Ajani might make it out of this with time to consider his actions. So many Mirrans won't have the same opportunity. And no one fought to bring the bodies of their compleated brethren back—only the bodies of the dead. Why is it that Ajani and Nissa are being given a second chance at life when so many of his companions aren't?
It's a difficult thing to swallow. More so when the Zhalfirins have been nothing but kind to them. Fresh cooked meals every night; new, colorful homes for each of them, and plenty of visits to stave off the silence; new clothes and new friends. Hard to ask for anything more than this. Mirrodin is dead, but Mirrans can live on, thanks to Zhalfir and her people. He's grateful to them—more grateful than he can hold in his mind at any one moment.
But do Ajani and Nissa deserve that same mercy?
Elspeth's brows inch closer to each other. "Hmm. How is it meant to work?"
That she doesn't ask about Melira or Karn upsets him. He can't quite keep his tone level when he answers. "First, she's going to inoculate their bodies against further infection. That's standard. But then Karn's going to pull their sparks out. And use Venser's spark to~ filter them somehow. Some theory Venser had before everything happened. When he brings them back, Melira will clean the sparks, and then Karn will place them back inside."
The whole thing sounds risky—more so when Elspeth doesn't immediately respond. Silence only leaves him to contemplate all the ways things can go wrong. He runs a hand over the back of his head, the rough metal a welcome anchor to reality. Everything else here was too soft, too plush—even the fabric of the clothing they'd lent him. People here were kind, valiant, and attentive—but they'd never know suffering the way he did. May this roughness always remind him of that.
"You're afraid," Elspeth says.
Part of him wants to argue the point—a large part, bellowing within his chest. But he knows she's right. "Aren't you?"
She looks up at the suns again. "No."
"She's going to die."
"In a manner of her own choosing," Elspeth answers. "Once, you said to me you'd fight for Mirrodin even if there was no Mirrodin left. You stayed, knowing you might die, and leaving me no choice in the matter. You were my friend, and I left Mirrodin thinking you'd been torn apart by Phyrexians. For years that thought pained me."
Koth looks down.
"I no longer feel that pain. But the lesson lingered: we all choose how to face our end, and some causes are worth the price of a life. Melira is willing to pay that price on behalf of others. There's valor in that that," she says. Elspeth lays her hands on his shoulders. "She is making this choice herself—and for the benefit of others. She'll be among friends."
"But what do we do afterward?" Koth says. At last, it feels like he's come to the truth of it. "What are we going to do here?"
Elspeth remains serene—but she does offer him a smile. "You will try to make this place home. It will welcome you, if you let it."
"I'm not sure I know how anymore," he says. "And I don't think it's that simple."
"Many things aren't. But that doesn't mean they're not worth trying."
She draws him into an embrace—and he finds himself slumping against her, trying to make sense of the cacophony in his head. Elspeth could be difficult sometimes, but this is something new entirely. He's not sure how much she's listening to him. This place isn't his home. His home is gone, forever, and she's just~
He hates how much the embrace helps. Words don't mean much, but at least she knows when someone needs holding. "You~ You've changed, haven't you?" he mumbles.
"Yes," comes the simple answer. "But I will always be your friend, Koth. If you ever have need of me, all you have to do is pray." He feels her squeezing tighter—and hears the flap of her wings. She's taking him somewhere. The irony hits him like a hammer to the chest. Years ago, he'd sunk her into the ground to send her someplace else. He didn't want her to have a choice. Tonight, she lifts him through the air to make his choice for him.
The wind whips against his skin. "This isn't goodbye."
"No, not with me," she says. "But you do owe someone a goodbye."
His feet touch the ground again. She's left him in front of the healer's ward. The earth brings with it the weight of what will soon happen.
It isn't his earth. The earth here is soft and springy, yielding too readily to him. Nothing here is metal save the things the Zhalfirins have made themselves—and even then, they favor bone and glass where the Vulshok would use steel.
Nothing about this place is home. He doesn't see how it could be. Maybe Elspeth can, from her vaunted place, but here on the ground~
"Thank you," he calls to Elspeth. As much as his mind was still an avalanche, she'd at least tried to help.
"Always," she says.
Always. So easy for her to say. She might be immortal now—she has all of eternity to make herself feel at home. But him?
Koth crosses his arms. He watches Elspeth take off beneath Mirrodin's suns. He watches the suns, too, as they move across a new firmament. Saheeli, the scientist, said it was probably an unintended consequence of the planes overlapping.
An unintended consequence—just like him and the other Mirrans.
He takes a breath. This won't be easy. But so long as those suns are in the sky, he can face it.
This place isn't home. But there's a little of home in it.
#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)
"Let's do this outside. I want~ I want to be outside, when it happens."
In the face of everything they're about to do, Melira's request is simple. Teferi and the others promise to honor it.
Outside they go: Teferi, Karn, Koth, Chandra, Kaya, and Melira. The nights here are warmer than days on some other planes, but it is not an unpleasant sort of warmth. No, as they step out into the swaying grass, each feels curiously as if they're stepping into someone's home.
The first order of business is to spread out blankets for Nissa and Ajani to lay on. Next comes setting them down. The sunlight lends their Phyrexian implants a gilded luster.
"You're sure this is going to work?" Kaya asks.
"We won't know until we try," says Chandra, clearing away strands of hair from Nissa's face. "And it's worth it to try."
Silence, then, that Teferi worries will become another argument.
In the end Kaya nods. "Right. Well~ I'll help where I can."
He's grateful she doesn't illuminate all the ways this could go wrong. There are many, and there are many things left to do. For this to work, they must all work in harmony—just as the drums of different communities come together.
His job is, in some senses, the easiest, and he will lose the least through this process. Part of him almost resents that. No, that's a young man's thinking, and selfish besides. He will make good by doing good.
Right now, doing good means forming a small-time bubble around them. It won't last long. Looking over the others, he asks if they are ready.
Koth lays Melira down between Ajani and Nissa. With her eyes closed, she nods.
Karn takes his place standing over everyone's heads. His shoulders rise and fall with unnecessary breath. It is hard not to smile at it a little at him taking on such human traits—but this is not a time for smiling. Karn will lose almost as much as Melira in this. If he is nervous, it is only natural. "I'm willing to try."
"All right," says Teferi. "I won't be able to keep this up for long."
He breathes in. The soft vibration of the drums fills his lungs as surely as the air. Magic hums within him. His body aches still from the war's exertions, but he will not falter when his friends need him. Not anymore.
All at once, the air #emph[distorts] around them. The whipping grass slows to a near standstill. Beyond the unseen sphere of power, the outside world continues—but here they have only the space between two beats of a drum to save their friends.
Karn acts first. Driving his hand into the metal portions of their bodies, he pulls #emph[something] out, something shimmering and bright. Teferi's ears ring as Karn's magic builds within him, an engine gathering steam. Or is this hesitation? What he is about to do will change him forever. In some ways, he's saying goodbye to an old friend to save two new ones. No wonder there's so much angst to him.
Teferi strains against the waves of time. Karn will make the right choice here, he knows it.
Light leaks from within the silver golem's plates, light as pale and glistening as the moon's. Karn closes his eyes. "Together, Venser," he says, quiet in a way only he can be.
A thundering boom threatens to throw off Teferi's concentration, but he holds fast. It passes, and the two orbs Karn had been holding are gone. Melira takes the hands of her companions. She, too, begins to glow—a glow which spreads through the Nissa and Ajani. Her face wrenches in concentration.
First, the leonin.
Every second that cannot pass is a hook in Teferi's soul. He grunts as he strains to keep time stopped long enough for Melira to finish her work. As he watches, the glow ripples through Ajani. Wherever it goes, it leaves their skin brighter and removes the sinister luster Phyrexia has impressed upon them. The metal that remains is nearly as pure as moonsilver.
But this is only half the work.
Another boom sees the two orbs returning to Karn's hands—one whole and pure, one crumbling.
As Melira turns her attention to Nissa's orb, Teferi's heart sinks. It's flickering. Worse than that are the ash-like motes of energy falling away from it. Teferi is used to seeing things happen in slow motion; this is like watching a leaf decay right before his eyes. Lattice-holes open as the light courses through it.
"I can't slow that down!" Teferi shouts.
"Quickly!" says Melira.
Kaya helps her up, so that she can reach the orbs. A single touch and a halo of light ripples out of them.
It's almost time, almost.
Violet wreathes Kaya. She, too, drives her hands into the spheres of light. Together with Karn they send them back to their respective hosts.
Teferi drops his spell—and drops to his knees. Sweat slicks his brow. The drums return, carrying with them news he is too tired to follow. All he can do is turn his attention to the others and hope that what they've done is worth the sacrifices they've made.
His answer comes when Ajani's good eye flutters open, when breath returns to his scarred chest. Despite his injuries, he forces himself to sit up. "What~ Where am I?"
#figure(image("018_Episode 10: The Rhythms of Life/03.jpg", width: 100%), caption: [Art by: Viko Menezes], supplement: none, numbering: none)
"Zhalfir," says Kaya.
"Zhalfir? That's impossible," he says—but the more he looks around, the more he seems to realize it's the truth. But so too does the tiredness overtake him. He slumps back over to the ground. "Teferi~ I'll congratulate you some other time. I think my body needs to rest."
He's asleep before Teferi can say much else—and for the better. Let him have a few moments of peace before the horror of what he did dawns on him.
Karn slumps over, one hand to his great chest. The lights within him have dimmed; only the ghost of them remains, an afterimage in red and violet.
"Are you all right?" Teferi calls.
"It's~ I feel lonelier," Karn says. "I will miss him, but I'll be all right."
Koth is less sure. He kneels next to Melira and pulls her onto his lap. She fell as fast as Teferi did. The concern on his brow is easy to read—as is the misery that compels him to close his eyes. "She's gone."
Kaya lays a hand on his shoulder.
Tears run down his cheeks, yet Koth does not conceal them, or hide how much pain he's in. Teferi knows exactly the kind of pain that's driving him now. It isn't just Melira's death. It's everyone's, felt at the same time—all his friends, comrades~ almost everyone he ever knew. Gone.
Teferi pushes himself up. He and Karn both throw an arm around Koth as the tears take over. There aren't words enough at times like these. Their only consolation is that Melira's wound no longer troubles her—that she must no longer live in fear. But to say this would be cold comfort to Koth, so Teferi bites his tongue. Companionship will have to do. Grief is an awful burden to bear alone.
Yet the air is thick with grief. Next to them, Chandra shakes Nissa, anxiety flaring hotter with every second.
"What happened? Why isn't she awake yet? The flickering~"
"Something went wrong," Teferi says. He swallows. "I'm not sure if she's~"
Maybe it'd been too much to hope for. Returning someone from phyresis~ even Urza hadn't figured that one out. Who were they to try? Having no other options didn't mean they'd succeed. You could dedicate your whole life to a craft and never get any acclaim for it. You could spend every waking minute laboring to further a cause and never see it come to fruition. Wanting something so badly it threatens to break you does not mean you are entitled to it.
But sometimes~
"Chandra~ ?"
Sometimes, it'll be worth it.
It's worth it to be alive here on Zhalfir. It's worth it to be surrounded by old friends and older community—families old and new.
It's worth it to put the past, at last, behind him. To build a new future.
And it's worth it to see that happen for the others.
To see the tension on Chandra's face melt to pure happiness, to see her clutch Nissa close and to see Nissa hold her in turn, to hear the happy sobs along with the despairing ones~
This is life. This is what they all fought for. What Melira died for, why Karn's given up his spark, why Teferi spent hundreds of years trying to restore his home.
For this.
"I'm right here," Chandra says. She presses her lips to Nissa's. "I'm right here, and I'm not going anywhere."
Good, Teferi thinks.
He won't be going anywhere for a while, either.
|
|
https://github.com/crd2333/crd2333.github.io | https://raw.githubusercontent.com/crd2333/crd2333.github.io/main/src/docs/Reading/跟李沐学AI(论文)/MoCo.typ | typst | // ---
// order: 11
// ---
#import "/src/components/TypstTemplate/lib.typ": *
#show: project.with(
title: "d2l_paper",
lang: "zh",
)
= Momentum Contrast for Unsupervised Visual Representation Learning
- 时间:2019.11
== 题目 & 摘要
- MoCo 是 CVPR20 最佳论文提名,是 CV 领域对比学习里程碑的工作,以一己之力盘活了巨卷的 CV 领域
- 如果把深度学习比作一块蛋糕,那么强化学习只能是蛋糕上的一颗樱桃,有监督学习是蛋糕上的糖霜,只有自监督学习才能算是整个蛋糕的主体
- 什么是对比学习?对比学习是自监督的一种,可以看对比学习综述那篇文章(#link("https://crd2333.github.io/note/Reading/%E8%B7%9F%E6%9D%8E%E6%B2%90%E5%AD%A6AI%EF%BC%88%E8%AE%BA%E6%96%87%EF%BC%89/%E5%AF%B9%E6%AF%94%E5%AD%A6%E4%B9%A0%E4%B8%B2%E8%AE%B2/")[链接]),简单来讲就是让相似样本的特征空间尽量接近而不相似的尽量远离。对比学习的框架最大的好处就在于正负样本定义规则的灵活性
- 题目:动量对比学习的方法做无监督视觉特征学习
- 本文的亮点在于,MoCo(第一个)在主流的 CV 任务 (分类、检测、分割)中填补甚至超越无监督与有监督方法之间的鸿沟,并且容易迁移到下游任务
- MoCo 定义了一个动态 dictionary look-up 任务,它包含
+ 一个队列:队列中的样本无需全部梯度回传,可以放很多负样本,让字典变得很大
+ 一个移动平均的编码器:让字典的特征尽可能的保持一致
- MoCo 利用动量的特性,缓慢的更新一个编码器,从而让中间学习到的字典中的特征尽可能保持一致,一个又大又一致的字典有利于无监督对比学习的训练
== 引言
- 无监督在 CV 不成功的原因是什么?
- 原始信号空间的不同
- NLP 原始信号是离散的,词、词根、词缀,容易构建 tokenized dictionaries 做无监督学习,容易建模且建好的模型也好优化
- CV 原始信号是连续的、高维的,不像单词具有浓缩好的、简洁的语义信息,不适合构建一个字典,难以建模
- CV 领域近期无监督对比学习的各种方法可以被统一归纳为“动态字典法”,anchor 当做 query,而各个正负样本作为字典的 key,视图把字典建模得比较好使得 query 和 相似的 key 具有相似的特征 value
- 从这个角度来看,动态字典应当具有 large + consistency 的特点
+ large:从连续高维空间做更多的采样。字典 key 越多,表示的视觉信息越丰富,匹配时更容易找到具有区分性的本质特征。否则模型可能学到 shortcut 而不能泛化
+ consistent :字典里的 key 应该由相同的 or 相似的编码器生成。否则 query 很有可能去找同一个 encoder 生成的 key,而不是寻找语义相似性(另一种形式的 shortcut solution)
== 相关工作
- 跟有监督相比,两个主要不同点是代理任务 pretext tasks 和损失目标函数 loss functions, objective functions
- 损失函数
+ 判别式,比如分类
+ 生成式,比如重建整张图
+ 对比学习,目标是相似样本特征相近,并不固定,跟 encoder 输出相关
+ 对抗学习,衡量两个概率分布之间的差异
- 代理任务
+ denoising auto-encoders 重建整张图
+ context auto-encoders 重建某个 patch
+ cross-channel auto-encoders (colorization) 给图片上色当自监督信号
+ pseudo-labels 图片生成伪标签
+ exemplar image,比如同一张图片做不同的数据增广后都属于同一个类
+ patch ordering 九宫格方法:打乱了以后预测 patch 的顺序或随机选一个 patch 预测方位
+ 利用视频的顺序做 tracking
+ 做聚类的方法 clustering features
- 不同的代理任务可以和不同形式的对比学习目标函数配对使用,思考 MoCo, CPC, CMC 分别是怎么样的配对
- 另外,作者把相关工作归纳为动态字典法之后,总结说它们或多或少都受限于字典*大小*和*一致性*问题,如图
- 比如,同期的 SimCLR 就是第一种,正负样本对来自同一个 batch,因为 google 比较有钱才训练得起来;之前的 CPC, CMC 应该也都是这种,后来的 CLIP 应该也是
#q[PS:其实 MoCo 的 queue 结构也没有吹得那么好,单从效果不计成本地来看还是 large batchsize 比较厉害(money is all you need),后来 MoCo-v3 也换回去了。而且 KaiMing 团队后来自己的 SimSiam 也摈弃了动量编码器。不过,它的影响依旧是深远的]
- Inst Disc 是第二种,用 memory bank 的方式 off-line 地做字典,大小可以很大但一致性欠缺
- 其实 memory bank 加上 momentum 就是 MoCo 的雏形了,不过 MoCo 的扩展性更好(memory bank 需要存整个数据集,而 MoCo 只需要从中随机抽取)
#fig("/public/assets/Reading/limu_paper/MoCo/2024-10-04-23-00-36.png")
== 框架和方法
#fig("/public/assets/Reading/limu_paper/MoCo/2024-10-04-22-03-47.png", width: 50%)
- queue:剥离显卡内存对字典大小的限制,让字典大小(或者说 queue 的大小)和 batch size 分开,每次从中取出前 minibatch 个元素扔掉,从数据集中随机抽取 minibatch 个元素计算并入队。队列(字典)始终是整个数据集的子集,且大小为固定的超参数
- momentum encoder:有点强化学习目标网络的思想,减缓每次更新的剧烈程度,使得模型变得平稳一致,尽可能使字典(queue)里的 key 由跟 query 类似的 encoder 生成 $ theta_k <- m theta_k + (1-m)theta_q $
- MoCo 的代理任务 pretext task 是什么?事实上,MoCo 是一种建立模型的方法,可以和很灵活地与多种代理任务使用,这里以 instance discrimination 为例
- 损失函数为 infoNCE(info noise contrastive estimation),负样本从整个数据集中选取部分进行估计,把一个超级多的 $N$ 分类问题变成 data sample 和 noise samples 共 $1 + K$ 个类别来解决 softmax 不工作的问题,同时降低了计算复杂度 $ cal(L)_q = - log exp(q dot k_+ \/ tau)/(sum_(i=0)^K exp(q dot k_i \/ tau)) $
- 具体代码中,由于正样本就是类别 $0$ ,直接把 ground truth 设为全零然后做 cross entropy loss 就行了,很巧妙
#algo(
// stroke: none,
title: [*Algorithm 1:* 伪代码]
)[
- \# f_q, f_k: encoder networks for query and key
- \# queue: dictionary as a queue of K keys (CxK)
- \# m: momentum
- \# t: temperature
#no-number
+ f_k.params = f_q.params #comment[initialize]
+ for x in loader: #comment[load a minibatch x with N samples]
+ x_q = aug(x) #comment[a randomly augmented version]
+ x_k = aug(x) #comment[another randomly augmented version]
+ q = f_q.forward(x_q) #comment[queries: NxC]
+ k = f_k.forward(x_k) #comment[keys: NxC]
+ k = k.detach() #comment[no gradient to keys]
#no-number
+ \# positive logits: Nx1
+ l_pos = bmm(q.view(N,1,C), k.view(N,C,1))
+ \# negative logits: NxK
+ l_neg = mm(q.view(N,C), queue.view(C,K))
#no-number
+ \# logits: Nx(1+K)
+ logits = cat([l_pos, l_neg], dim=1)
+ \# contrastive loss, Eqn.(1)
+ labels = zeros(N) #comment[positives are the 0-th]
+ loss = CrossEntropyLoss(logits/t, labels)
#no-number
+ \# SGD update: query network
+ loss.backward()
+ update(f_q.params)
#no-number
+ \# momentum update: key network
+ f_k.params = m*f_k.params+(1-m)\*f_q.params
#no-number
+ \# update dictionary
+ enqueue(queue, k) #comment[enqueue the current minibatch]
+ dequeue(queue) #comment[dequeue the earliest minibatch]
]
- 最后还讲了一个避免 BN 信息泄露的小 trick,不是很懂为什么会信息泄露?
== 结果和讨论
- 主要分成了两大部分进行了结果的展示
+ Linear Classification Protocol:将训练好的模型 freeze,然后学最后一层线性层
- 对两个主要贡献做了消融实验
- 一个比较有意思的现象是 lr 设为了 $30$,也启示我们对比学习和有监督学习的目标函数分布可能大不相同
- 在 $7$ 个检测 or 分割的任务超越 ImageNet 有监督训练的结果,甚至有时大幅度超越
+ 迁移学习的效果
- 对 BN 和训练时长做了限制以得到公平的比较,感觉有点狂傲(用你的超参数打败你)
- 另外,如果把训练数据集换成 instagram,用更多数据、更大模型,性能会提升但性价比变低了
- 可能是因为大规模数据集没有完全被利用,可以尝试开发其它的代理任务 pretext task,比如 masked auto-encoding(也就是两年后的 MAE,震撼)
- 总之就是效果碾压,就不细记了
== 总结
|
|
https://github.com/Jollywatt/typst-fletcher | https://raw.githubusercontent.com/Jollywatt/typst-fletcher/master/tests/edge-bend-marks/test.typ | typst | MIT License | #set page(width: auto, height: auto, margin: 1em)
#import "/src/exports.typ" as fletcher: diagram, node, edge
#diagram(
spacing: (10mm, 6mm),
for (i, bend) in (0deg, 40deg, 80deg, -90deg).enumerate() {
let x = 2*i
(
(">->->->",),
("<<->>",),
(">>-<<",),
(marks: ((inherit: "hook", rev: true), "head")),
(marks: ((inherit: "hook", rev: true), "hook'")),
(marks: ("bar", "bar", "bar")),
(marks: ("||", "||")),
("<=>",),
("triple",),
(marks: ("o", "O")),
(marks: ((inherit: "solid", rev: true), "solid")),
).enumerate().map(((i, args)) => {
edge((x, i), (x + 1, i), ..args, bend: bend)
}).join()
}
)
|
https://github.com/AxiomOfChoices/Typst | https://raw.githubusercontent.com/AxiomOfChoices/Typst/master/Courses/Math%2018_905%20-%20Algebraic%20Topology%201/Assignments/Assignment%203.typ | typst | #import "@preview/commute:0.2.0": node, arr, commutative-diagram
#import "@preview/cetz:0.2.2"
#import "/Templates/generic.typ": latex, header
#import "@preview/ctheorems:1.1.0": *
#import "/Templates/math.typ": *
#import "/Templates/assignment.typ": *
#show: doc => header(title: "Assignment 3", name: "<NAME>", doc)
#show: latex
#show: NumberingAfter
#show: thmrules
#let col(x, clr) = text(fill: clr)[$#x$]
#let pb() = {
pagebreak(weak: true)
}
#let bar(el) = $overline(#el)$
#set enum(numbering: "(a)")
*Sources consulted* \
Students: \
Texts: Class Notes.
= Question
== Statement
+ Suppose that $cal(C)$ is a category and that
#align(center)[
#commutative-diagram(
node((0, 0), $A$),
node((0, 1), $B$),
node((1, 0), $C$),
node((1, 1), $P$),
arr($A$, $B$, $f$),
arr($A$, $C$, $g$),
arr($B$, $P$, $p_B$),
arr($C$, $P$, $p_C$),
node-padding: (50pt, 50pt),
)
#commutative-diagram(
node((0, 0), $A$),
node((0, 1), $B$),
node((1, 0), $C$),
node((1, 1), $P'$),
arr($A$, $B$, $f$),
arr($A$, $C$, $g$),
arr($B$, $P'$, $p'_B$),
arr($C$, $P'$, $p'_C$),
node-padding: (50pt, 50pt),
)
]
are two pushout squares in $cal(C)$. Prove that $P$ is isomorphic to $P'$.
+ Describe, with proof, the pushout in $Ab$ of a diagram of abelian groups.
== Solution
+ Let us consider the following commutative diagram
#align(center)[
#commutative-diagram(
node((0, 0), $A$),
node((0, 2), $B$),
node((2, 0), $C$),
node((2, 2), $P$, "p1"),
node((3, 3), $P'$, "p2"),
node((4, 4), $P$, "p3"),
arr($A$, $B$, $f$),
arr($A$, $C$, $g$),
arr($B$, "p1", $p_B$),
arr($C$, "p1", $p_C$),
arr($B$, "p2", $p'_B$, curve: 20deg),
arr($C$, "p2", $p'_C$, curve: -20deg),
arr($B$, "p3", $p_B$, curve: 30deg),
arr($C$, "p3", $p_C$, curve: -30deg),
arr("p1", "p2", $f$, "dashed"),
arr("p2", "p3", $g$, "dashed"),
node-padding: (25pt, 25pt),
)
]
the fact that $P$ is a pushout guarantees the existence of the unique dashed map $f$, and the fact that $P'$ is a pushout guarantees the existence of the unique dashed map $g$, both of which make the diagram above commute. Now this diagram clearly shows that the map $g compose f$ satisfies the pushout property for $P$ on itself, but also clearly $id_P$ also satisfies that property. By uniqueness, we then must have $g compose f = id_P$. An identical and symmetric argument shows that $f compose g = id_(P')$.
+ I claim that the pushout is given by $P = B times C quo gen((f(x),-g(x))\, x in A)$, we clearly have maps
$
p_B : B &-> P wide &wide
p_C : C &-> P\
x &|-> [(x, 0)]&
y &|-> [(0, y)]
$
and to check the pushout square commutes we have
$
(p_B compose f) (x) = p_B (f(x)) = [(f(x),0)]
=
[(0, g(x))]
=
p_C (g(x))
=
(p_C compose g) (x).
$
Now to show that it is indeed a pushout, for another object $P'$ and morphisms $p'_B, p'_C$ we have have that they factor as
#align(center)[
#commutative-diagram(
node((0,0), $B$),
node((2,0), $C$),
node((1,1), $B times C$),
node((1,3), $P'$),
arr($B$, $B times C$, $iota_B$),
arr($C$, $B times C$, $iota_C$),
arr($B times C$, $P'$, $p'$),
node-padding: (25pt, 25pt),
)
]
where $p'$ is given by $p'(x,y) = p'_B (x) + p'_C (y)$. Now since we assume $P'$ made the diagram commute we have
$
p'_B (f(x)) = p'_C (g(x)) \
p'(f(x), 0) = p'(0, g(x)) \
p'(f(x), -g(x)) = 0\
$
so then the kernel of $p'$ contains as a subgroup $gen((f(x), - g(x))\, x in A)$, so by isomorphism theorem, $p'$ factors through the quotient, that is
#align(center)[
#commutative-diagram(
node((0,0), $B$),
node((2,0), $C$),
node((1,1), $B times C$),
node((1,3), $P$),
node((1,5), $P'$),
arr($B$, $B times C$, $iota_B$),
arr($C$, $B times C$, $iota_C$),
arr($B times C$, $P$, ""),
arr($P$, $P'$, $tilde(p)'$),
node-padding: (25pt, 25pt),
)
]
for some unique map $p'$. But then that map is exactly the map that makes the following diagram commute
#align(center)[
#commutative-diagram(
node((0, 0), $A$),
node((0, 2), $B$),
node((2, 0), $C$),
node((2, 2), $P$, "p1"),
node((3, 3), $P'$, "p2"),
arr($A$, $B$, $f$),
arr($A$, $C$, $g$),
arr($B$, "p1", $p_B$),
arr($C$, "p1", $p_C$),
arr($B$, "p2", $p'_B$, curve: 20deg),
arr($C$, "p2", $p'_C$, curve: -20deg),
arr("p1", "p2", $p'$, "dashed"),
node-padding: (25pt, 25pt),
)
]
which proves that $P$ is indeed the pushout.
= Question
== Statement
We attach a 1-cell to $S^2$ by considering the pushout $P$ of the diagram
#align(center)[
#commutative-diagram(
node((0, 0), $S^0$),
node((0, 2), $S^2$),
node((2, 0), $D^1$),
node((2, 2), $P$),
arr($S^0$, $S^2$, $f$),
arr($S^0$, $D^1$, $g$),
arr($D^1$, $P$, $p_B$),
arr($S^2$, $P$, $p_C$),
node-padding: (25pt, 25pt),
)
]
Here the map $f$ from $S^0$ to $S^2$ takes one point to the north pole of $S^2$ and the other point to the south pole.
+ Describe in words or draw a picture of the pushout $P$.
+ Use the Mayer-Vietoris sequence to calculate the homology groups of $P$. To apply the Mayer-Vietoris sequence, the first step must be to describe open sets $A$ and $B$ covering $P$. You will then need to compute the homology groups of $A$, $B$ and $A sect B$.
== Solution
+ This pushout consists of a sphere $S^2$ glued to the 'handle' $D^1$ at the north and south pole.
+ We pick the following cover, $A$ will be $D^1$ along with two small neighborhoods of its endpoints on $S^2$, $B$ will be all of $S^2$ along with a small neighborhood of the endpoints of $D^1$ in $D^1$. If we imagine the sphere as embedding in 3d, with the induced metric we can say $A = B_epsilon (D^1), B = B_epsilon (S^2)$ for sufficiently small $epsilon$. In this case we get that $A sect B = B_epsilon (S^0)$ where we identify $S^0$ with the north and south pole. We note here that its easy to see that $A$ is homotopy equivalent to $D^1$ and hence to a point, and that $B$ is homotopy equivalent to $S^2$.
We now have the Mayer-Vietoris sequence, which gives us
$
... -> H_(n) (A sect B) -> H_n (A) plus.circle H_n (B) -> H_n (P) -> H_(n - 1) (A sect B) -> ...
$
then using homotopy equivalence we can get
$
... -> H_(n) (S^0) -> H_n (D^1) plus.circle H_n (S^2) -> H_n (P) -> H_(n - 1) (S^0) -> ...
$
now assume $n >= 2$, then we have $H_n (S^0) iso H_(n-1) (S^0) iso 0$, and clearly we have $H_n (D^1) iso 0$ so all together we get
$
0 -> H_n (S^2) -> H_n (P) -> 0
$
so we get an isomorphism and thus $H_n (P) = ZZ$ if $n = 2$ and $0$ for $n > 2$.
For $n = 1$ we take the exact sequence
$
... -> H_1 (D^1) plus.circle H_1 (S^2) -> H_1 (P) -> H_0 (S^0) -> H_0 (D^1) plus.circle H_0 (S^2) -> ...
$
and we again get
$
0 -> H_1 (P) -> ZZ times ZZ ->^f ZZ times ZZ -> ...
$
here the map $f$ is given by $(x,y) -> (x+y, -x-y)$ which has kernel $ZZ$ so $H_1 (P) iso ZZ$. Finally $H_0 (P) = ZZ$ because $P$ has one connected component.
= Question
== Statement
Calculate the homology groups of the Klein bottle.
== Solution
We can give the Klein bottle the following CW structure
#align(center)[
#cetz.canvas({
import cetz.draw: *
rect((0,0), (2,2), fill: gray)
line((2, 0), (0, 0),
stroke: red,
name: "bottom")
line((0, 2), (2, 2),
stroke: red,
name: "top")
line((0, 2), (0, 0),
stroke: blue,
name: "left")
line((2, 2), (2, 0),
stroke: blue,
name: "right")
line((2, 2), (0, 0),
stroke: yellow,
name: "diagonal")
mark("bottom.mid", "bottom.end", anchor: "center", symbol:">", offset: 0.85, fill: red, stroke: red)
mark("top.mid", "top.end", anchor: "center", symbol:">", offset: 0.85, fill: red, stroke: red)
mark("left.mid", "left.end", anchor: "center", symbol:">", offset: 0.85, fill: blue, stroke: blue)
mark("right.mid", "right.end", anchor: "center", symbol:">", offset: 0.85, fill: blue, stroke: blue)
mark("diagonal.mid", "diagonal.end", anchor: "center", symbol:">", offset: 1.2, fill: yellow, stroke: yellow)
content("top.mid", anchor: "south", padding: .2, [$A$])
content("bottom.mid", anchor: "north", padding: .2, [$A$])
content("left.mid", anchor: "east", padding: .16, [$B$])
content("right.mid", anchor: "west", padding: .1, [$B$])
content("diagonal.mid", anchor: "north-west", padding: .1, [$C$])
})
]
|
|
https://github.com/r4ai/typst-code-info | https://raw.githubusercontent.com/r4ai/typst-code-info/main/plugin.typ | typst | MIT License | #let PLUGIN_WASM_PATH = "./target/wasm32-unknown-unknown/release/code_info.wasm"
#let plugin = plugin(PLUGIN_WASM_PATH);
#let ADDED_LINE_TEXT_COLOR = rgb("#228b22")
#let ADDED_LINE_BG_COLOR = rgb("#d6f2c7")
#let DELETED_LINE_TEXT_COLOR = rgb("b22222")
#let DELETED_LINE_BG_COLOR = rgb("#ffd9d9")
#let HIGHLIGHTED_LINE_BG_COLOR = rgb("#cfecfc")
#let LINE_TEXT_COLOR = gray
#let LINE_BG_COLOR = rgb(0, 0, 0, 0)
#let to-string(content) = {
if type(content) == "string" {
content
} else if content.has("text") {
content.text
} else if content.has("children") {
content.children.map(to-string).join("")
} else if content.has("body") {
to-string(content.body)
} else if content == [ ] {
" "
}
}
#let parse-diff-code(
content-before,
content-after,
always-show-lines: (),
context-lines-count: 3,
) = json.decode(
plugin.parse_diff_code(
bytes(
json.encode((
content_before: to-string(content-before),
content_after: to-string(content-after),
always_show_lines: always-show-lines.map(v => v - 1),
context_lines_count: context-lines-count,
)),
),
),
)
#let default-code-info = (
caption: none,
label: none,
show-line-numbers: false,
start-line: 1,
highlighted-lines: (),
diff: false,
always-show-lines: none,
)
#let code-info-state = state("code-info", default-code-info)
#let code-info(
caption: default-code-info.caption,
label: default-code-info.label,
show-line-numbers: default-code-info.show-line-numbers,
start-line: default-code-info.start-line,
highlighted-lines: default-code-info.highlighted-lines,
diff: default-code-info.diff,
always-show-lines: default-code-info.always-show-lines,
) = {
code-info-state.update((
caption: caption,
label: label,
diff: diff,
show-line-numbers: show-line-numbers,
start-line: start-line,
highlighted-lines: highlighted-lines,
always-show-lines: always-show-lines,
))
}
#let default-diff-code = (
before: none,
after: none,
)
#let diff-code-state = state("diff-code", default-diff-code)
#let typst-label = label
#let code-block(
caption: none,
label: none,
supplement: none,
mono-font: none,
content,
) = [
#h(2em)
#figure(
kind: raw,
caption: if caption == none {
if label == none {
none
} else {
""
}
} else {
caption
},
supplement: supplement,
numbering: if label == none {
none
} else {
"1"
},
gap: 1em,
{
set par(justify: false)
set text(size: 9pt)
if mono-font != none {
set text(font: mono-font)
}
set block(spacing: 1em, above: 1em, below: 1em)
set line(length: 100%, stroke: 0.5pt)
line()
content
line()
},
) #if label != none {
typst-label(label)
}
#h(2em)
]
#let init-code-info(
breakable: true,
supplement: "Listing",
body-font: "linux libertine",
mono-font: "mono",
body,
) = {
show raw.where(block: true): it => locate(loc => {
let cur-code-info = code-info-state.at(loc)
let start-line = cur-code-info.start-line
let show-line-numbers = cur-code-info.show-line-numbers
let highlighted-lines = cur-code-info.highlighted-lines
let is-diff = cur-code-info.diff
let always-show-lines = cur-code-info.always-show-lines
let typst-line = line
let cur-diff-code = diff-code-state.at(loc)
show figure: set block(breakable: breakable)
show figure.caption: set text(font: body-font)
set figure.caption(
position: top,
separator: if cur-code-info.caption == none {
none
} else {
auto
},
)
if (is-diff and cur-diff-code.before == none and cur-diff-code.after == none) {
diff-code-state.update((
before: it,
after: none,
))
} else {
let code-content-before = if cur-diff-code.before == none {
it
} else {
cur-diff-code.before
}
let code-content-after = it
let code = parse-diff-code(
always-show-lines: if is-diff {
if always-show-lines == none {
()
} else {
always-show-lines
}
} else {
if always-show-lines == none {
array.range(1, it.lines.len() + 1)
} else {
always-show-lines
}
},
code-content-before,
code-content-after,
)
let code-lines-before = code-content-before.lines
let code-lines-after = code-content-after.lines
let code-block-content = grid(
columns: if show-line-numbers and is-diff {
(auto, auto, 1fr)
} else if show-line-numbers or is-diff {
(auto, 1fr)
} else {
(1fr)
},
inset: (y: 0.3em),
align: left + top,
stroke: none,
..code
.to_show_lines
.enumerate()
.map(((index, line)) => {
let show-highlighted = highlighted-lines.contains(index + 1)
let show-added = line.tag == "Insert"
let show-deleted = line.tag == "Delete"
let line-number-color = if show-added {
ADDED_LINE_TEXT_COLOR
} else if show-deleted {
DELETED_LINE_TEXT_COLOR
} else {
LINE_TEXT_COLOR
}
let line-bg-color = if show-highlighted {
HIGHLIGHTED_LINE_BG_COLOR
} else if show-added {
ADDED_LINE_BG_COLOR
} else if show-deleted {
DELETED_LINE_BG_COLOR
} else {
LINE_BG_COLOR
}
let line-number = if line.new_index != none {
line.new_index + 1
} else if line.old_index != none {
line.old_index + 1
} else {
none
}
let line-number-content = grid.cell(
fill: line-bg-color,
inset: (left: 0.3em, right: 1.2em),
text(
fill: line-number-color,
weight: if show-added or show-deleted {
"bold"
} else {
"regular"
},
if line-number == none {
""
} else {
str(line-number)
},
),
)
let line-content = if line.tag == "Spacer" {
text(fill: line-number-color, line.content)
} else if line.new_index != none {
code-lines-after.at(line.new_index)
} else if line.old_index != none {
code-lines-before.at(line.old_index)
} else {
none
}
let line-tag-content = grid.cell(
fill: line-bg-color,
inset: (
left: if show-line-numbers {
0em
} else {
1.2em
},
right: 0.5em,
),
text(
fill: line-number-color,
if line.tag == "Insert" {
"+"
} else if line.tag == "Delete" {
"-"
} else {
" "
},
),
)
if line.tag == "Spacer" {
grid.cell(
fill: line-bg-color,
inset: (y: 1em),
colspan: if show-line-numbers {
3
} else {
2
},
{
let height = 9pt
let gap = 6pt
let stroke = (
paint: line-number-color,
thickness: 1pt,
cap: "round",
)
let wavy-line = pattern(size: (13pt, height))[
#place(
path(
stroke: stroke,
((0pt, 1pt), (-25%, 0pt)),
((50%, height - gap), (-25%, 0pt)),
((100%, 1pt), (-25%, 0pt)),
),
)
#place(
path(
stroke: stroke,
((0pt, gap), (-25%, 0pt)),
((50%, height - 1pt), (-25%, 0pt)),
((100%, gap), (-25%, 0pt)),
),
)
]
rect(fill: wavy-line, width: 100%, height: height)
},
)
} else if is-diff and show-line-numbers {
(
line-number-content,
line-tag-content,
grid.cell(
inset: (right: 0.3em),
fill: line-bg-color,
line-content,
),
)
} else if is-diff {
(
line-tag-content,
grid.cell(
inset: (right: 1.2em),
fill: line-bg-color,
line-content,
),
)
} else if show-line-numbers {
(
line-number-content,
grid.cell(
fill: line-bg-color,
line-content,
),
)
} else {
grid.cell(
inset: (x: 1.2em),
fill: line-bg-color,
line-content,
)
}
})
.flatten()
)
code-info-state.update(default-code-info)
diff-code-state.update(default-diff-code)
code-block(
caption: cur-code-info.caption,
label: cur-code-info.label,
supplement: supplement,
mono-font: mono-font,
code-block-content,
)
}
})
body
}
|
https://github.com/bigskysoftware/hypermedia-systems-book | https://raw.githubusercontent.com/bigskysoftware/hypermedia-systems-book/main/ch08-tricks-of-the-htmx-masters.typ | typst | Other | #import "lib/definitions.typ": *
== Tricks Of The Htmx Masters
In this chapter we are going to look deeper into the htmx toolkit. We’ve
accomplished quite a bit with what we’ve learned so far. Still, when you are
developing Hypermedia-Driven Applications, there will be times when you need to
reach for additional options and techniques.
We will go over the more advanced attributes in htmx, as well as expand on the
advanced details of attributes we have already used.
Additionally, we will look at functionality that htmx offers beyond simple HTML
attributes: how htmx extends standard HTTP request and responses, how htmx works
with (and produces) events, and how to approach situations where there isn’t a
simple, single target on the page to be updated.
Finally, we will take a look at practical considerations when doing htmx
development: how to debug htmx-based applications effectively, security
considerations you will need to take into account when working with htmx, and
how to configure the behavior of htmx.
With the features and techniques in this chapter, you will be able to pull off
extremely sophisticated user interfaces using only htmx and perhaps a small bit
of hypermedia-friendly client-side scripting.
=== Htmx Attributes
#index[htmx][attributes]
Thus far we have used about fifteen different attributes from htmx in our
application. The most important ones have been:
/ `hx-get`, `hx-post`, etc.: #[
To specify the AJAX request an element should make
]
/ `hx-trigger`: #[
To specify the event that triggers a request
]
/ `hx-swap`: #[
To specify how to swap the returned HTML content into the DOM
]
/ `hx-target`: #[
To specify where in the DOM to swap the returned HTML content
]
Two of these attributes, `hx-swap` and `hx-trigger`, support a number of useful
options for creating more advanced Hypermedia-Driven Applications.
==== #indexed[hx-swap]
We’ll start with the hx-swap attribute. This is often not included on elements
that issue htmx-driven requests because its default behavior --- `innerHTML`,
which swaps the inner HTML of the element --- tends to cover most use cases.
We earlier saw situations where we wanted to override the default behavior and
use `outerHTML`, for example. And, in chapter 2, we discussed some other swap
options beyond these two, `beforebegin`,
`afterend`, etc.
In chapter 5, we also looked at the `swap` delay modifier for `hx-swap`, which
allowed us to fade some content out before it was removed from the DOM.
In addition to these, `hx-swap` offers further control with the following
modifiers:
/ `settle`: #[
#index[hx-swap][settle] Like `swap`, this allows you to apply a specific delay
between when the content has been swapped into the DOM and when its attributes
are
"settled", that is, updated from their old values (if any) to their new values.
This can give you fine-grained control over CSS transitions.
]
/ `show`: #[
#index[hx-swap][show] Allows you to specify an element that should be shown ---
that is, scrolled into the viewport of the browser if necessary --- when a
request is completed.
]
/ `scroll`: #[
#index[hx-swap][scroll] Allows you to specify a scrollable element (that is, an
element with scrollbars), that should be scrolled to the top or bottom when a
request is completed.
]
/ `focus-scroll`: #[
#index[hx-swap][focus-scroll] Allows you to specify that htmx should scroll to
the focused element when a request completes. The default for this modifier is "false."
]
So, for example, if we had a button that issued a `GET` request, and we wished
to scroll to the top of the `body` element when the request completed, we would
write the following HTML:
#figure(caption: [Scrolling to the top of the page])[
```html
<button hx-get="/contacts" hx-target="#content-div"
hx-swap="innerHTML show:body:top"> <1>
Get Contacts
</button>
``` ]
1. This tells htmx to show the top of the body after the swap occurs.
More details and examples can be found online in the `hx-swap`
#link("https://htmx.org/attributes/hx-swap/")[documentation].
==== hx-trigger
#index[hx-trigger][about]
#index[hx-trigger][element defaults]
Like `hx-swap`, `hx-trigger` can often be omitted when you are using htmx,
because the default behavior is typically what you want. Recall the default
triggering events are determined by an element’s type:
- Requests on `input`, `textarea` & `select` elements are triggered by the `change` event.
- Requests on `form` elements are triggered on the `submit` event.
- Requests on all other elements are triggered by the `click` event.
There are times, however, when you want a more elaborate trigger specification.
A classic example is the active search example we implemented in Contact.app:
#figure(
caption: [The active search input],
)[
```html
<input id="search" type="search" name="q"
value="{{ request.args.get('q') or '' }}"
hx-get="/contacts"
hx-trigger="search, keyup delay:200ms changed"/> <1>
``` ]
1. An elaborate trigger specification.
This example took advantage of two modifiers available for the
`hx-trigger` attribute:
/ `delay`: #[
#index[hx-trigger][delay] Allows you to specify a delay to wait before a request
is issued. If the event occurs again, the first event is discarded and the timer
resets. This allows you to "debounce" requests.
]
/ `changed`: #[
#index[hx-trigger][changed] Allows you to specify that a request should only be
issued when the
`value` property of the given element has changed.
]
`hx-trigger` has several additional modifiers. This makes sense, because events
are fairly complex and we want to be able to take advantage of all the power
they offer. We will discuss events in more detail below.
Here are the other modifiers available on `hx-trigger`:
/ `once`: #[
#index[hx-trigger][once] The given event will only trigger a request once.
]
/ `throttle`: #[
#index[hx-trigger][throttle] Allows you to throttle events, only issuing them
once every certain interval. This is different than `delay` in that the first
event will trigger immediately, but any following events will not trigger until
the throttle time period has elapsed.
]
/ `from`: #[
#index[hx-trigger][from] A CSS selector that allows you to pick another element
to listen for events on. We will see an example of this used later in the
chapter.
]
/ `target`: #[
#index[hx-trigger][target] A CSS selector that allows you to filter events to
only those that occur directly on a given element. In the DOM, events "bubble"
to their ancestor elements, so a `click` event on a button will also trigger a `click`
event on an enclosing `div`, all the way up to the `body` element. Sometimes you
want to specify an event directly on a given element, and this attribute allows
you to do that.
]
/ `consume`: #[
#index[hx-trigger][consume] If this option is set to `true`, the triggering
event will be cancelled and not propagate to ancestor elements.
]
/ `queue`: #[
#index[hx-trigger][queue] This option allows you to specify how events are
queued in htmx. By default, when htmx receives a triggering event, it will issue
a request and start an event queue. If the request is still in flight when
another event is received, it will queue the event and, when the request
finishes, trigger a new request. By default, it only keeps the last event it
receives, but you can modify that behavior using this option: for example, you
can set it to `none` and ignore all triggering events that occur during a
request.
]
===== Trigger filters
#index[hx-trigger][event filters]
The `hx-trigger` attribute also allows you to specify a _filter_
for events by using square brackets enclosing a JavaScript expression after the
event name.
Let’s say you have a complex situation where contacts should only be retrievable
in certain situations. You have a JavaScript function,
`contactRetrievalEnabled()` that returns a boolean, `true` if contacts can be
retrieved and `false` otherwise. How could you use this function to place a gate
on a button that issues a request to `/contacts`?
To do this using an event filter in htmx, you would write the following HTML:
#figure(caption: [The active search input])[
```html
<script>
function contactRetrievalEnabled() {
// code to test if contact retrieval is enabled
...
}
</script>
<button hx-get="/contacts"
hx-trigger="click[contactRetrievalEnabled()]"> <1>
Get Contacts
</button>
``` ]
1. A request is issued on click only when `contactRetrievalEnabled()`
returns `true`.
The button will not issue a request if `contactRetrievalEnabled()`
returns false, allowing you to dynamically control when the request will be
made. There are common situations that call for an event trigger, when you only
want to issue a request under specific circumstances:
- if a certain element has focus
- if a given form is valid
- if a set of inputs have specific values
Using event filters, you can use whatever logic you’d like to filter requests by
htmx.
===== Synthetic events
#index[hx-trigger][synthetic events]
In addition to these modifiers, `hx-trigger` offers a few "synthetic" events,
that is events that are not part of the regular DOM API. We have already seen `load` and `revealed` in
our lazy loading and infinite scroll examples, but htmx also gives you an `intersect` event
that triggers when an element intersects its a viewport.
#index[hx-trigger][intersect]
This synthetic event uses the modern Intersection Observer API, which you can
read more about at
#link(
"https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API",
)[MDN].
Intersection gives you fine-grained control over exactly when a request should
be triggered. For example, you can set a threshold and specify that the request
be issued only when an element is 50% visible.
The `hx-trigger` attribute certainly is the most complex in htmx. More details
and examples can be found in its
#link("https://htmx.org/attributes/hx-trigger/")[documentation].
==== Other Attributes <_other_attributes>
Htmx offers many other less commonly used attributes for fine-tuning the
behavior of your Hypermedia-Driven Application.
Here are some of the most useful ones:
/ hx-push-url: #[
#index[hx-push-url] "Pushes" the request URL (or some other value) into the
navigation bar.
]
/ hx-preserve: #[
#index[hx-preserve] Preserves a bit of the DOM between requests; the original
content will be kept, regardless of what is returned.
]
/ hx-sync: #[
#index[hx-sync] Synchronized requests between two or more elements.
]
/ hx-disable: #[
#index[hx-disable] Disables htmx behavior on this element and any children. We
will come back to this when we discuss the topic of security.
]
Let’s take a look at `hx-sync`, which allows us to synchronize AJAX requests
between two or more elements. Consider a simple case where we have two buttons
that both target the same element on the screen:
#figure(caption: [Two competing buttons])[
```html
<button hx-get="/contacts" hx-target="body">
Get Contacts
</button>
<button hx-get="/settings" hx-target="body">
Get Settings
</button>
``` ]
This is fine and will work, but what if a user clicks the "Get Contacts" button
and then the request takes a while to respond? And, in the meantime the user
clicks the "Get Settings" button? In this case we would have two requests in
flight at the same time.
If the `/settings` request finished first and displayed the user’s setting
information, they might be very surprised if they began making changes and then,
suddenly, the `/contacts` request finished and replaced the entire body with the
contacts instead!
To deal with this situation, we might consider using an `hx-indicator`
to alert the user that something is going on, making it less likely that they
click the second button. But if we really want to guarantee that there is only
one request at a time issued between these two buttons, the right thing to do is
to use the `hx-sync` attribute. Let’s enclose both buttons in a `div` and
eliminate the redundant `hx-target`
specification by hoisting the attribute up to that `div`. We can then use `hx-sync` on
that div to coordinate requests between the two buttons.
Here is our updated code:
#index[hx-sync][example]
#figure(caption: [Syncing two buttons])[
```html
<div hx-target="body" <1>
hx-sync="this"> <2>
<button hx-get="/contacts">
Get Contacts
</button>
<button hx-get="/settings">
Get Settings
</button>
</div>
``` ]
1. Hoist the duplicate `hx-target` attributes to the parent `div`.
2. Synchronize on the parent `div`.
By placing the `hx-sync` attribute on the `div` with the value `this`, we are
saying "Synchronize all htmx requests that occur within this
`div` element with one another." This means that if one button already has a
request in flight, other buttons within the `div` will not issue requests until
that has finished.
The `hx-sync` attribute supports a few different strategies that allow you to,
for example, replace an existing request in flight, or queue requests with a
particular queuing strategy. You can find complete documentation, as well as
examples, at the htmx.org page for
#link("https://htmx.org/attributes/hx-sync/")[`hx-sync`].
As you can see, htmx offers a lot of attribute-driven functionality for more
advanced Hypermedia-Driven Applications. A complete reference for all htmx
attributes can be found
#link("https://htmx.org/reference/#attributes")[on the htmx website].
=== Events
#index[events]
Thus far we have worked with JavaScript events in htmx primarily via the
`hx-trigger` attribute. This attribute has proven to be a powerful mechanism for
driving our application using a declarative, HTML-friendly syntax.
However, there is much more we can do with events. Events play a crucial role
both in the extension of HTML as a hypermedia, and, as we’ll see, in
hypermedia-friendly scripting. Events are the "glue" that brings the DOM, HTML,
htmx and scripting together. You might think of the DOM as a sophisticated "event
bus" for applications.
We can’t emphasize enough: to build advanced Hypermedia-Driven Applications, it
is worth the effort to learn about events
#link(
"https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Building_blocks/Events",
)[in depth].
==== Htmx-Generated Events
#index[htmx events]
In addition to making it easy to _respond_ to events, htmx also
_emits_ many useful events. You can use these events to add more functionality
to your application, either via htmx itself, or by way of scripting.
Here are some of the most commonly used events triggered by htmx:
/ `htmx:load`: #[
#index[htmx events][htmx:load] Triggered when new content is loaded into the DOM
by htmx.
]
/ `htmx:configRequest`: #[
#index[htmx events][htmx:configRequest] Triggered before a request is issued,
allowing you to programmatically configure the request or cancel it entirely.
]
/ `htmx:afterRequest`: #[
#index[htmx events][htmx:afterRequest] Triggered after a request has responded.
]
/ `htmx:abort`: #[
#index[htmx events][htmx:abort] A custom event that can be sent to an
htmx-powered element to abort an open request.
]
==== Using the htmx:configRequest Event
Let’s look at an example of how to work with htmx-emitted events. We’ll use the `htmx:configRequest` event
to configure an HTTP request.
#index[localStorage]
Consider the following scenario: your server-side team has decided that they
want you to include a server-generated token for extra security on every
request. The token is going to be stored in `localStorage` in the browser, in
the slot `special-token`.
The token is being set via some JavaScript (don’t worry about the details yet)
when the user first logs in:
#figure(caption: [Getting The Token in JavaScript])[
```js
let response = await fetch("/token"); <1>
localStorage['special-token'] = await response.text();
``` ]
1. Get the value of the token then set it into localStorage
The server-side team wants you to include this special token on every request
made by htmx, as the `X-SPECIAL-TOKEN` header. How could you achieve this? One
way would be to catch the `htmx:configRequest` event and update the `detail.headers` object
with this token from
`localStorage`.
In VanillaJS, it would look something like this, placed in a `<script>`
tag in the `<head>` of our HTML document:
#figure(caption: [Adding the `X-SPECIAL-TOKEN` header])[
```js
document.body.addEventListener("htmx:configRequest", configEvent => {
configEvent.detail.headers['X-SPECIAL-TOKEN'] = <1>
localStorage['special-token'];
})
``` ]
1. Retrieve the value from local storage and set it into a header.
As you can see, we add a new value to the `headers` property of the event’s
detail property. After the event handler executes, this
`headers` property is read by htmx and used to construct the request headers for
the AJAX request it makes.
The `detail` property of the `htmx:configRequest` event contains a slew of
useful properties that you can update to change the "shape" of the request,
including:
/ `detail.parameters`: #[
#index[htmx:configRequest][detail.parameters] Allows you to add or remove
request parameters
]
/ `detail.target`: #[
#index[htmx:configRequest][detail.target] Allows you to update the target of the
request
]
/ `detail.verb`: #[
#index[htmx:configRequest][detail.verb] Allows you to update HTTP "verb" of the
request (e.g. `GET`)
]
So, for example, if the server-side team decided they wanted the token included
as a parameter, rather than as a request header, you could update your code to
look like this:
#figure(caption: [Adding the `token` parameter])[
```js
document.body.addEventListener("htmx:configRequest", configEvent => {
configEvent.detail.parameters['token'] = <1>
localStorage['special-token'];
})
``` ]
1. Retrieve the value from local storage and set it into a parameter.
As you can see, this gives you a lot of flexibility in updating the AJAX request
that htmx makes.
The full documentation for the `htmx:configRequest` event (and other events you
might be interested in) can be found
#link("https://htmx.org/events/#htmx:configRequest")[on the htmx website].
==== Canceling a Request Using htmx:abort
#index[htmx:abort] #index[canceling a request]
We can listen for any of the many useful events from htmx, and we can respond to
those events using `hx-trigger`. What else can we do with events?
It turns out that htmx itself listens for one special event,
`htmx:abort`. When htmx receives this event on an element that has a request in
flight, it will abort the request.
Consider a situation where we have a potentially long-running request to
`/contacts`, and we want to offer a way for the users to cancel the request.
What we want is a button that issues the request, driven by htmx, of course, and
then another button that will send an `htmx:abort`
event to the first one.
Here is what the code might look like:
#figure(caption: [A button with an abort])[
```html
<button id="contacts-btn" hx-get="/contacts" hx-target="body"> <1>
Get Contacts
</button>
<button
onclick="
document.getElementById('contacts-btn')
.dispatchEvent(new Event('htmx:abort')) <2>
">
Cancel
</button>
``` ]
1. A normal htmx-driven `GET` request to `/contacts`
2. JavaScript to look up the button and send it an `htmx:abort` event
So now, if a user clicks on the "Get Contacts" button and the request takes a
while, they can click on the "Cancel" button and end the request. Of course, in
a more sophisticated user interface, you may want to disable the "Cancel" button
unless an HTTP request is in flight, but that would be a pain to implement in
pure JavaScript.
Thankfully this isn’t too bad to implement in hyperscript, so let’s take a look
at what that would look like:
#figure(caption: [A hyperscript-Powered Button With An Abort])[
```html
<button id="contacts-btn" hx-get="/contacts" hx-target="body">
Get Contacts
</button>
<button
_="on click send htmx:abort to #contacts-btn
on htmx:beforeRequest from #contacts-btn remove @disabled from me
on htmx:afterRequest from #contacts-btn add @disabled to me">
Cancel
</button>
``` ]
Now we have a "Cancel" button that is enabled only when a request from the `contacts-btn` button
is in flight. And we are taking advantage of htmx-generated and handled events,
as well as the event-friendly syntax of hyperscript, to make it happen. Slick!
==== Server Generated Events <_server_generated_events>
We are going to talk more about the various ways that htmx enhances regular HTTP
requests and responses in the next section, but, since it involves events, we
are going to discuss one HTTP Response header that htmx supports: `HX-Trigger`.
We have discussed before how HTTP requests and responses support _headers_,
name-value pairs that contain metadata about a given request or response. We
took advantage of the
`HX-Trigger` request header, which includes the id of the element that triggered
a given request.
#index[response header][HX-Trigger]
In addition to this _request header_, htmx also supports a
_response header_ also named `HX-Trigger`. This response header allows you to _trigger an event_ on
the element that submitted an AJAX request. This turns out to be a powerful way
to coordinate elements in the DOM in a decoupled manner.
To see how this might work, let’s consider the following situation: we have a
button that grabs new contacts from some remote system on the server. We will
ignore the details of the server-side implementation, but we know that if we
issue a `POST` to the `/sync` path, it will trigger a synchronization with the
system.
Now, this synchronization may or may not result in new contacts being created.
In the case where new contacts _are_ created, we want to refresh our contacts
table. In the case where no contacts are created, we don’t want to refresh the
table.
To implement this we could conditionally add an `HX-Trigger` response header
with the value `contacts-updated`:
#figure(caption: [Conditionally Triggering a `contacts-updated` event])[
```py
@app.route('/sync', methods=["POST"])
def sync_with_server():
contacts_updated = RemoteServer.sync() <1>
resp = make_response(render_template('sync.html'))
if contacts_updated <2>
resp.headers['HX-Trigger'] = 'contacts-updated'
return resp
``` ]
1. A call to the remote system that synchronized our contact database with it
2. If any contacts were updated we conditionally trigger the
`contacts-updated` event on the client
This value would trigger the `contacts-updated` event on the button that made
the AJAX request to `/sync`. We can then take advantage of the
`from:` modifier of the `hx-trigger` attribute to listen for that event. With
this pattern we can effectively trigger htmx requests from the server side.
Here is what the client-side code might look like:
#figure(
caption: [The Contacts Table],
)[
```html
<button hx-post="/integrations/1"> <1>
Pull Contacts From Integration
</button>
...
<table hx-get="/contacts/table"
hx-trigger="contacts-updated from:body"> <2>
...
</table>
``` ]
1. The response to this request may conditionally trigger the
`contacts-updated` event
2. This table listens for the event and refreshes when it occurs
The table listens for the `contacts-updated` event, and it does so on the `body` element.
It listens on the `body` element since the event will bubble up from the button,
and this allows us to not couple the button and table together: we can move the
button and table around as we like and, via events, the behavior we want will
continue to work fine. Additionally, we may want _other_ elements or requests to
trigger the `contacts-updated` event, so this provides a general mechanism for
refreshing the contacts table in our application.
=== HTTP Requests & Responses <_http_requests_responses>
We have just seen an advanced feature of HTTP responses supported by htmx, the `HX-Trigger` response
header, but htmx supports quite a few more headers for both requests and
responses. In chapter 4 we discussed the headers present in HTTP Requests. Here
are some of the more important headers you can use to change htmx behavior with
HTTP responses:
/ `HX-Location`: #[
#index[response header][HX-Location] Causes a client-side redirection to a new
location
]
/ `HX-Push-Url`: #[
#index[response header][HX-Push-Url] Pushes a new URL into the location bar
]
/ `HX-Refresh`: #[
#index[response header][HX-Refresh] Refreshes the current page
]
/ `HX-Retarget`: #[
#index[response header][HX-Retarget] Allows you to specify a new target to swap
the response content into on the client side
]
You can find a reference for all requests and response headers in the
#link("https://htmx.org/reference/#headers")[htmx documentation].
==== HTTP Response Codes
#index[HTTP response codes]
Even more important than response headers, in terms of information conveyed to
the client, is the _HTTP Response Code_. We discussed HTTP Response Codes in
Chapter 3. By and large htmx handles various response codes in the manner that
you would expect: it swaps content for all 200-level response codes and does
nothing for others. There are, however, two "special" 200-level response codes:
- `204 No Content` - When htmx receives this response code, it will
_not_ swap any content into the DOM (even if the response has a body)
- `286` - When htmx receives this response code to a request that is polling, it
will stop the polling
You can override the behavior of htmx with respect to response codes by, you
guessed it, responding to an event! The `htmx:beforeSwap` event allows you to
change the behavior of htmx with respect to various status codes.
Let’s say that, rather than doing nothing when a `404` occurred, you wanted to
alert the user that an error had occurred. To do so, you want to invoke a
JavaScript method, `showNotFoundError()`. Let’s add some code to use the `htmx:beforeSwap` event
to make this happen:
#figure(caption: [Showing a 404 dialog])[
```js
document.body.addEventListener('htmx:beforeSwap', evt => { <1>
if (evt.detail.xhr.status === 404) { <2>
showNotFoundError();
}
});
``` ]
1. Hook into the `htmx:beforeSwap` event.
2. If the response code is a `404`, show the user a dialog.
You can also use the `htmx:beforeSwap` event to configure if the response should
be swapped into the DOM and what element the response should target. This gives
you quite a bit of flexibility in choosing how you want to use HTTP Response
codes in your application. Full documentation on the `htmx:beforeSwap` event can
be found at
#link("https://htmx.org/events/#htmx:beforeSwap")[htmx.org].
=== Updating Other Content <_updating_other_content>
Above we saw how to use a server-triggered event, via the `HX-Trigger`
HTTP response header, to update a piece of the DOM based on the response to
another part of the DOM. This technique addresses the general problem that comes
up in Hypermedia-Driven Applications: "How do I update other content?" After
all, in normal HTTP requests, there is only one
"target", the entire screen, and, similarly, in htmx-based requests, there is
only one target: either the explicit or implicit target of the element.
If you want to update other content in htmx, you have a few options:
==== Expanding Your Selection <_expanding_your_selection>
The first option, and the simplest, is to "expand the target." That is, rather
than simply replacing a small part of the screen, expand the target of your
htmx-driven request until it is large enough to enclose all the elements that
need to be updated on a screen. This has the tremendous advantage of being
simple and reliable. The downside is that it may not provide the user experience
that you want, and it may not play well with a particular server-side template
layout. Regardless, we always recommend at least thinking about this approach
first.
==== Out of Band Swaps
#index[hx-swap-oob]
#index[htmx][out of band swaps]
A second option, a bit more complex, is to take advantage of "Out Of Band"
content support in htmx. When htmx receives a response, it will inspect it for
top-level content that includes the `hx-swap-oob`
attribute. That content will be removed from the response, so it will not be
swapped into the DOM in the normal manner. Instead, it will be swapped in for
the content that it matches by id.
Let’s look at an example. Consider the situation we had earlier, where a
contacts table needs to be updated if an integration pulls down any new
contacts. Previously we solved this by using events and a server-triggered event
via the `HX-Trigger` response header.
This time, we’ll use the `hx-swap-oob` attribute in the response to the
`POST` to `/integrations/1`. The new contacts table content will
"piggyback" on the response.
#figure(caption: [The updated contacts table])[
```html
<button hx-post="/integrations/1"> <1>
Pull Contacts From Integration
</button>
...
<table id="contacts-table"> <2>
...
</table>
``` ]
1. The button still issues a `POST` to `/integrations/1`.
2. The table no longer listens for an event, but it now has an id.
Next, the response to the `POST` to `/integrations/1` will include the content
that needs to be swapped into the button, per the usual htmx mechanism. But it
will also include a new, updated version of the contacts table, which will be
marked as `hx-swap-oob="true"`. This content will be removed from the response
so that it is not inserted into the button. Instead, it is swapped into the DOM
in place of the existing table since it has a matching id.
#figure(caption: [A response with out-of-band content])[
```
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
...
Pull Contacts From Integration <1>
<table id="contacts-table" hx-swap-oob="true"> <2>
...
</table>
``` ]
1. This content will be placed in the button.
2. This content will be removed from the response and swapped by id.
Using this piggybacking technique, you can update content wherever needed on a
page. The `hx-swap-oob` attribute supports other additional features, all of
which are
#link("https://htmx.org/attributes/hx-swap-oob/")[documented].
Depending on how exactly your server-side templating technology works, and what
level of interactivity your application requires, out of band swapping can be a
powerful mechanism for content updates.
==== Events
#index[htmx patterns][server-triggered events]
Finally, the most complex mechanism for updating content is the one we saw back
in the events section: using server-triggered events to update elements. This
approach can be very clean, but also requires a deeper conceptual knowledge of
HTML and events, and a commitment to the event-driven approach. While we like
this style of development, it isn’t for everyone. We typically recommend this
pattern only if the htmx philosophy of event-driven hypermedia really speaks to
you.
If it _does_ speak to you, however, we say: go for it. We’ve created some very
complex and flexible user interfaces using this approach, and we are quite fond
of it.
==== Being Pragmatic
#index[hypermedia][limitations]
All of these approaches to the "Updating Other Content" problem will work, and
will often work well. However, there may come a point where it would just be
simpler to use a different approach for your UI, like the reactive one. As much
as we like the hypermedia approach, the reality is that there are some UX
patterns that simply cannot be implemented easily using it. The canonical
example of this sort of pattern, which we have mentioned before, is something
like a live online spreadsheet: it is simply too complex a user interface, with
too many interdependencies, to be done well via exchanges of hypermedia with a
server.
In cases like this, and any time you feel like an htmx-based solution is proving
to be more complex than another approach might be, we recommend that you
consider a different technology. Be pragmatic, and use the right tool for the
job. You can always use htmx for the parts of your application that aren’t as
complex and don’t need the full complexity of a reactive framework, and save
that complexity budget for the parts that do.
We encourage you to learn many different web technologies, with an eye to the
strengths and weaknesses of each one. This will give you a deep tool chest to
reach into when problems present themselves. Our experience is that, with htmx,
hypermedia is a tool you can reach for frequently.
=== Debugging
#index[events][debugging]
#index[htmx][debugging]
We are not ashamed to admit: we are big fans of events. They are the underlying
technology of almost any interesting user interface, and are particularly useful
in the DOM once they have been unlocked for general use in HTML. They let you
build nicely decoupled software while often preserving the locality of behavior
we like so much.
However, events are not perfect. One area where events can be particularly
tricky to deal with is _debugging_: you often want to know why an event _isn’t_ happening.
But where can you set a break point for something that _isn’t_ happening? The
answer, as of right now, is: you can’t.
There are two techniques that can help in this regard, one provided by htmx, the
other provided by Chrome, the browser by Google.
==== Logging Htmx Events <_logging_htmx_events>
The first technique, provided by htmx itself, is to call the
`htmx.logAll()` method. When you do this, htmx will log all the internal events
that occur as it goes about its business, loading up content, responding to
events and so forth.
This can be overwhelming, but with judicious filtering can help you zero in on a
problem. Here are what (a bit of) the logs look like when clicking on the "docs"
link on #link("https://htmx.org"), with
`logAll()` enabled:
#figure(
caption: [Htmx logs],
)[
```text
htmx:configRequest
<a href="/docs/">
Object { parameters: {}, unfilteredParameters: {}, headers: {…}, target: body, verb: "get", errors: [], withCredentials: false, timeout: 0, path: "/docs/", triggeringEvent: a
, … }
htmx.js:439:29
htmx:beforeRequest
<a href="/docs/">
Object { xhr: XMLHttpRequest, target: body, requestConfig: {…}, etc: {}, pathInfo: {…}, elt: a
}
htmx.js:439:29
htmx:beforeSend
<a class="htmx-request" href="/docs/">
Object { xhr: XMLHttpRequest, target: body, requestConfig: {…}, etc: {}, pathInfo: {…}, elt: a.htmx-request
}
htmx.js:439:29
htmx:xhr:loadstart
<a class="htmx-request" href="/docs/">
Object { lengthComputable: false, loaded: 0, total: 0, elt: a.htmx-request
}
htmx.js:439:29
htmx:xhr:progress
<a class="htmx-request" href="/docs/">
Object { lengthComputable: true, loaded: 4096, total: 19915, elt: a.htmx-request
}
htmx.js:439:29
htmx:xhr:progress
<a class="htmx-request" href="/docs/">
Object { lengthComputable: true, loaded: 19915, total: 19915, elt: a.htmx-request
}
htmx.js:439:29
htmx:beforeOnLoad
<a class="htmx-request" href="/docs/">
Object { xhr: XMLHttpRequest, target: body, requestConfig: {…}, etc: {}, pathInfo: {…}, elt: a.htmx-request
}
htmx.js:439:29
htmx:beforeSwap
<body hx-ext="class-tools, preload">
``` ]
Not exactly easy on the eyes, is it?
But, if you take a deep breath and squint, you can see that it isn’t
_that_ bad: a series of htmx events, some of which we have seen before (there’s `htmx:configRequest`!),
get logged to the console, along with the element they are triggered on.
After a bit of reading and filtering, you will be able to make sense of the
event stream, and it can help you debug htmx-related issues.
==== Monitoring Events in Chrome <_monitoring_events_in_chrome>
The preceding technique is useful if the problem is occurring somewhere
_within_ htmx, but what if htmx is never getting triggered at all? This comes up
some times, like when, for example, you have accidentally typed an event name
incorrectly somewhere.
In cases like this you will need recourse to a tool available in the browser
itself. Fortunately, the Chrome browser by Google provides a very useful
function, `monitorEvents()`, that allows you to monitor
_all_ events that are triggered on an element.
This feature is available _only_ in the console, so you can’t use it in code on
your page. But, if you are working with htmx in Chrome, and are curious why an
event isn’t triggering on an element, you can open the developers console and
type the following:
#figure(caption: [Htmx logs])[
```javascript
monitorEvents(document.getElementById("some-element"));
``` ]
This will then print _all_ the events that are triggered on the element with the
id `some-element` to the console. This can be very useful for understanding
exactly which events you want to respond to with htmx, or troubleshooting why an
expected event isn’t occurring.
Using these two techniques will help you as you (infrequently, we hope)
troubleshoot event-related issues when developing with htmx.
=== Security Considerations
#index[htmx][security]
#index[security]
In general, htmx and hypermedia tends to be more secure than JavaScript heavy
approaches to building web applications. This is because, by moving much of the
processing to the back end, the hypermedia approach tends not to expose as much
surface area of your system to end users for manipulation and shenanigans.
However, even with hypermedia, there are still situations that require care when
doing development. Of particular concern are situations where user-generated
content is shown to other users: a clever user might try to insert htmx code
that tricks the other users into clicking on content that triggers actions they
don’t want to take.
In general, all user-generated content should be escaped on the server-side, and
most server-side rendering frameworks provide functionality for handling this
situation. But there is always a risk that something slips through the cracks.
#index[hx-disable]
In order to help you sleep better at night, htmx provides the
`hx-disable` attribute. When this attribute is placed on an element, all htmx
attributes within that element will be ignored.
==== Content Security Policies & Htmx
A #indexed[Content Security Policy (CSP)] is a browser technology that allows
you to detect and prevent certain types of content injection-based attacks. A
full discussion of CSPs is beyond the scope of this book, but we refer you to
the
#link(
"https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP",
)[Mozilla Developer Network article]
on the topic for more information.
A common feature to disable using a CSP is the `eval()` feature of JavaScript,
which allows you to evaluate arbitrary JavaScript code from a string. This has
proven to be a security issue and many teams have decided that it is not worth
the risk to keep it enabled in their web applications.
#index[event filters][security]
Htmx does not make heavy use of `eval()` and, thus, a CSP with this restriction
in place will be fine. The one feature that does rely on
`eval()` is event filters, discussed above. If you decide to disable
`eval()` for your web application, you will not be able to use the event
filtering syntax.
=== Configuring
#index[htmx][configuration]
There are a large number of configuration options available for htmx. Some
examples of things you can configure are:
- The default swap style
- The default swap delay
- The default timeout of AJAX requests
A full list of configuration options can be found in the config section of the #link("https://htmx.org/docs/#config")[main htmx documentation].
Htmx is typically configured via a `meta` tag, found in the header of a page.
The name of the meta tag should be `htmx-config`, and the content attribute
should contain the configuration overrides, formatted as JSON. Here is an
example:
#figure(caption: [An htmx configuration via `meta` tag])[
```html
<meta name="htmx-config" content='{"defaultSwapStyle":"outerHTML"}'>
``` ]
In this case, we are overriding the default swap style from the usual
`innerHTML` to `outerHTML`. This might be useful if you find yourself using `outerHTML` more
frequently than `innerHTML` and want to avoid having to explicitly set that swap
value throughout your application.
#html-note[Semantic HTML][
Telling people to "use semantic HTML" instead of "read the spec" has led to a
lot of people guessing at the meaning of tags --- "looks pretty semantic to me!"
--- instead of engaging with the spec.
#blockquote(
attribution: [https://t-ravis.com/post/doc/semantic_the_8_letter_s-word/],
)[
I think being asked to write _meaningful_ HTML better lights the path to
realizing that it isn’t about what the text means to humans—it’s about using
tags for the purpose outlined in the specs to meet the needs of software like
browsers, assistive technologies, and search engines.
]
We recommend talking about, and writing, _conformant_ HTML. (We can always
bikeshed further). Use the elements to the full extent provided by the HTML
specification, and let the software take from it whatever meaning they can.
]
|
https://github.com/EliasRothfuss/vorlage_typst_doku-master | https://raw.githubusercontent.com/EliasRothfuss/vorlage_typst_doku-master/main/main.typ | typst | // #import "@preview/hydra:0.5.1": hydra
#import "config.typ"
#set page("a4")
#set text(
lang: "de",
size: 12pt,
font: "Arial",
)
#set par(
justify: true,
leading: 0.7em,
)
// Set the heading style
#show heading.where(level: 1): set text(size: 26pt)
#show heading.where(level: 1): set block(spacing: 2em)
#show heading.where(level: 2): set text(size: 20pt)
#show heading.where(level: 2): set block(spacing: 1.5em)
#show heading.where(level: 3): set text(size: 16pt)
#show heading.where(level: 3): set block(spacing: 1em)
#show heading.where(level: 4): set text(size: 14pt)
#show heading.where(level: 4): set block(spacing: 0.5em)
#show heading.where(level: 5): set text(size: 12pt)
#show heading.where(level: 5): set block(spacing: 0.5em)
#show heading.where(level: 6): set text(size: 12pt)
#show heading.where(level: 6): set block(spacing: 0.5em)
// Set caption style
#show figure.where(kind: image): set figure.caption(position: bottom)
#show figure.where(kind: image): set figure(supplement: "Abbildung")
#show figure.where(kind: table): set figure.caption(position: top)
#show figure.where(kind: table): set figure(supplement: "Tabelle")
#show figure.where(kind: raw): set figure.caption(position: top)
#show figure.where(kind: raw): set figure(supplement: "Code")
#set heading(outlined: false)
#include "pages/titelseite.typ"
#set page(numbering: "i")
#counter(page).update(1)
#set figure(numbering: "i")
#include "pages/erklaerung.typ"
#pagebreak()
#include "chapter/abstract.typ"
#pagebreak()
#show outline.entry.where(
level: 1
): it => {
v(12pt)
strong(it)
}
#outline(
title: "Inhaltsverzeichnis",
target: heading.where(outlined: true),
fill: none,
indent: auto,
)
#set figure(numbering: "1.1")
// Set Figure numbering to chapter.figure
#set figure(numbering: num => {
let chap_num = counter(heading).at(here()).first()
let fig_num = counter(figure).at(here()).first()
numbering("1.1", chap_num, fig_num)
})
#set heading(outlined: true)
#set page(numbering: "1")
#set heading(numbering: "1.")
#counter(page).update(1)
#counter(figure).update(0)
#include "chapter/einleitung.typ"
#pagebreak()
#counter(figure).update(0)
#include "chapter/grundlagen.typ"
#pagebreak()
#counter(figure).update(0)
#include "chapter/vorgehen.typ"
#pagebreak()
#counter(figure).update(0)
#include "chapter/umsetzung.typ"
#pagebreak()
#counter(figure).update(0)
#include "chapter/zusammenfassung.typ"
#pagebreak()
#set heading(numbering: none)
#bibliography(
"literature/literatur.bib",
title: "Literaturverzeichnis",
style: "institute-of-electrical-and-electronics-engineers"
)
#pagebreak()
#outline(target: figure.where(kind: image), title: "Abbildungsverzeichnis")
#pagebreak()
#outline(target: figure.where(kind: table), title: "Tabellenverzeichnis")
#pagebreak()
#set heading(numbering: "A.1")
// Set Figure numbering for appendix
#set figure(numbering: num => {
let chap_num = counter(heading).at(here()).first()
numbering("A.1", chap_num, num)
})
#counter(heading).update(0)
#counter(figure).update(0)
#include "chapter/anhang.typ"
#pagebreak()
#set heading(numbering: none)
#pagebreak()
#include "pages/abkuerzungen.typ" |
|
https://github.com/isaacholt100/isaacholt100.github.io | https://raw.githubusercontent.com/isaacholt100/isaacholt100.github.io/master/maths-notes/3-durham%3A-year-3/quantum-comp/quantum-comp.typ | typst | #import "../../template.typ": *
#show: doc => template(doc, hidden: ("proof", ), slides: false)
// FIND: - \*(\w+)\*: ([\s\S]*?)(?=\n-|\n\n)\n
// REPLACE: #$1[\n $2\n]\n
#let hbar = $planck.reduce$
#let ip(a, b) = $angle.l #a, #b angle.r$
#let ket(arg) = $#h(0.2pt) | #h(0.2pt) arg #h(0.2pt) angle.r$
#let bra(arg) = $angle.l #h(0.2pt) arg #h(0.2pt) | #h(0.2pt)$
#let braket(..args) = $angle.l #h(1pt) #args.pos().join(h(1pt) + "|" + h(1pt)) #h(1pt) angle.r$
#let Ket(arg) = $lr(| #h(1pt) arg #h(1pt) angle.r)$
#let Bra(arg) = $lr(angle.l #h(1pt) arg #h(1pt) |)$
#let Braket(..args) = $lr(angle.l #h(1pt) #args.pos().join(h(1pt) + "|" + h(1pt)) #h(1pt) angle.r)$
#let span = $op("span")$
#let conj(arg) = $arg^*$
#let expected(arg) = $angle.l arg angle.r$
#let vd(vector) = $bold(vector)$
#let nl = [\ ]
#let End = $"End"$
#let tp = $times.circle$ // tensor product
= Quantum mechanics essentials
== States and wave functions
- Probability of finding particle in $(a, b)$ is $ P(a, b; t) = integral_a^b |psi(x, t)|^2 dif x $ Wave function is normalised so that $P(-oo, +oo; t) = 1$.
== Dirac notation
#definition[
*Dual* of vector space $V$ is set of linear functionals from $V$ to $CC$: $ V^* := {Phi: V -> CC: forall a, b in CC, forall z, w in V, quad Phi(a vd(z) + b vd(w)) = a Phi(vd(z)) + b Phi(vd(w)) } $ We have $dim(V^*) = dim(V)$.
]
#remark[
If $V$ has inner product $ip(dot.op, dot.op)$, then an isomorphism is given by $vd(z) |-> Phi_(vd(z)) (dot.op) = ip(vd(z), dot.op)$.
]
#definition[
*Dual* of $vd(z) in V$ is the corresponding element in $V^*$, i.e. $Phi_(vd(z))$.
]
#remark[
If $V = CC^n$, can think of vectors in $V$ as $n times 1$ matrices and vectors in $V^*$ as $1 times n$ matrices.
]
#definition[
*Dirac notation* denotes vectors in a Hilbert space or its dual:
- Write $ket(psi)$ (a *ket*) for vector in Hilbert space $cal(H)$ corresponding to wave function $psi$.
- Write $bra(phi)$ (a *bra*) for dual vector in $cal(H)^*$.
- A *bra-ket* denotes an inner product: $ braket(phi, psi) := ip(phi, psi) = integral_(-oo)^oo phi^* (x, t) psi(x, t) dif x $
]
== Hilbert spaces
#definition[
*Hilbert space* is real or complex vector space with Hermitian inner product that is also a complete metric space with metric induced by the inner product. In particular, inner product satisfies:
// - $forall (a, b) in CC^2, a ket(psi) + b ket(phi) in cal(H)$
- *Hermiticity*: $braket(psi, phi) = braket(phi, psi)^*$.
- *Sesquilinearity* (linear in the second factor, anti-linear in the first). For $ket(phi) = c_1 ket(phi_1) + c_2 ket(phi_2)$:
$ braket(psi, phi) & = c_1 braket(psi, phi_1) + c_2 braket(psi, phi_2) \ braket(phi, psi) & = c_1^* braket(phi_1, psi) + c_2^* braket(phi_2, psi) $
- *Positive definiteness*: $braket(psi, psi) >= 0$ and $braket(psi, psi) = 0 <==> ket(psi) = 0$ (this corresponds with a *physical state* condition).
]
#definition[
A quantum mechanical system is described by a *state* $ket(psi)$ in Hilbert space $cal(H)$.
]
#remark[
States which differ by only a normalisation factor are physically equivalent: $ forall c in CC^*, quad ket(psi) tilde.op c ket(psi) $ For this reason, pure quantum mechanical states are called *rays* in the Hilbert space, and we normally assume that a state $ket(psi)$ has norm $1$: $norm(ket(psi)) = 1$.
]
#remark[
Note that the state labelled zero, $ket(0)$, is not equal to the zero state (the $0$ vector).
]
== Operators
#definition[
$hat(A): cal(H) -> cal(H)$ is *linear operator* if $ forall a, b in CC, forall ket(psi), ket(phi) in cal(H), quad hat(A) (a ket(psi) + b ket(phi)) = a \(hat(A) ket(psi)\) + b \(hat(A) ket(phi)\) $
]
#proposition[
Products and linear combinations of linear operators are also linear operators.
]
#definition[
*Adjoint (Hermitian conjugate)* of $hat(A)$, $hat(A)^dagger$, is defined by $ bra(psi) hat(A)^dagger ket(phi) = conj((bra(phi) hat(A) ket(psi))) $ for all states $ket(psi)$ and $ket(phi)$.
]
#definition[
$hat(H)$ is *self-adjoint (Hermitian)* if $hat(H)^dagger = hat(H)$. Self-adjoint operators correspond to *observables* (measurable quantities) since they have real eigenvalues.
]
#definition[
$hat(U)$ is *unitary* if $hat(U)^dagger hat(U) = hat(I)$. Unitary operators describe time-evolution in quantum mechanics.
]
#definition[
*Commutator* of operators $hat(A)$ and $hat(B)$ is $ \[hat(A), hat(B)\] = hat(A) hat(B) - hat(B) hat(A) $
]
#definition[
*Anti-commutator* of operators $hat(A)$ and $hat(B)$ is $ \{hat(A), hat(B)\} = hat(A) hat(B) + hat(B) hat(A) $
]
#definition[
*Expectation value* of observable $hat(A)$ on state $ket(psi)$ is $ expected(A)_psi := braket(psi, hat(A), psi) $ Interpreted as average outcome of many measurements of $hat(A)$ on same state $ket(psi)$.
]
== Matrix representation
#definition[
*Matrix form* of operator $hat(A)$ with respect to orthonormal basis ${ket(n)}$ is given by $ A_(i j) = braket(i, hat(A), j)$.
]
#proposition[
For operator $hat(A)$ with matrix representation $A$ in basis ${ket(n)}$, matrix representation of $hat(A)$ in basis ${ket(m)}$ is $B = S A S^(-1)$ where $S$ is change of basis matrix from old basis ${ket(n)}$ to new basis ${ket(m)}$.
]
== Time-evolution
#theorem[
Time-evolution of state is given by *Schrodinger equation*: $ i hbar dif / (dif t) ket(psi(t)) = hat(H) ket(psi(t)) ==> ket(psi(t)) = hat(U)_t ket(psi(0)) $ where $hat(H) = hat(K) + hat(V)$ is Hamiltonian operator, $hat(U)_t$ is unitary operator. If $hat(H)$ independent of $t$, then $hat(U)_t = exp(-i/planck.reduce t hat(H))$.
]
- *Principle of superposition*: Schrodinger equation is linear, so any linear combination of solutions is another solution.
#definition[
*Exponential* of operator $hat(A)$ is $ exp\(hat(A)\) := sum_(n in NN_0) hat(A)^n / n! $
]
= Measurement and uncertainty
== Observables
#proposition[
For Hilbert space of finite dimension $N$, operator $hat(A)$ has $N$ eigenvalues (counting multiplicities). Eigenvalues of Hermitian operator $hat(M)$ correspond to possible values of the measurable quantity it represents.
]
#definition[
*Spectrum* of operator $hat(H)$ is $ "Spec"\(hat(H)\) := \{lambda in CC: hat(H) - lambda hat(I) "non invertible"\} $ For finite-dimensional Hilbert space, this is equal to the set of eigenvalues of $hat(H)$.
]
#proposition[
Eigenstates $ket(n)$ of Hermitian operator $hat(H)$ corresponding to different eigenvalues $lambda_n$ are orthogonal. If eigenvalue is degenerate (multiplicity greater than one) then for each eigenspace (vector space spanned by the eigenvectors) with dimension greater than one, we can choose an orthogonal basis of eigenstates.
]
#definition[
Let $hat(A)$ have orthonormal eigenstates ${ket(v_i): i in [N]}$ and corresponding eigenvalues ${lambda_i: i in [N]}$. *Spectral representation* of $hat(A)$ is $ hat(A) = sum_(i = 1)^N lambda_i ket(v_i) bra(v_i) $ In particular, only eigenvalue of $hat(I)$ is $1$ with degeneracy $N$, so for any orthonormal basis ${ket(v_i): i in [N]}$ of $cal(H)$: $ hat(I) = sum_(i = 1)^N ket(v_i) bra(v_i) $
]
#definition[
When measurement is made on state $ket(psi) = sum_(i = 1)^N c_i ket(v_i)$, result is $lambda$ with probability $ p = sum_(i in [N], lambda_i = lambda) |braket(v_i, psi)|^2 = sum_(i in [N], lambda_i = lambda) |c_i|^2 $ If result is $lambda$, measuring again immediately after the measurement will yield $lambda$, so state collapses (up to irrelevant phase $e^(i alpha)$, $alpha in RR$) to $ 1/sqrt(p) sum_(i in [N], lambda_i = lambda) c_i ket(v_i) $ This is *collapse of the wavefunction* and cannot be represented by unitary transformation, so is not reversible.
// - *Definition*: $hat(A)$ *diagonalisable* if $hat(A) = hat(S) hat(D) hat(S)^(-1)$ where $hat(D)$ is diagonal and $hat(S)$ has columns corresponding to eigenvectors of $hat(A)$.
// - For $hat(A)$ diagonalisable, $ exp(hat(A)) = sum_(n = 0)^oo (hat(S) hat(D) hat(S)^(-1))^n / (n!) = hat(S) (sum_(n = 0)^oo hat(D)^n / (n!)) hat(S)^(-1) = hat(S) exp(hat(D)) hat(S)^(-1) $
]
#definition[
Linear operator $hat(P)$ is *projector* if $hat(P)^dagger = hat(P)$ and $hat(P)^2 = hat(P)$.
]
#definition[
For orthonormal eigenstates ${ket(v_i): i in [N]}$ of operator $hat(A)$ and corresponding eigenvalues ${lambda_i: i in [N]}$, define projection operator $ hat(P)_lambda = sum_(i in [N], lambda_i = lambda) ket(v_i) bra(v_i) $
]
#proposition[
Probability of measurement $hat(A)$ on state $ket(psi)$ yielding $lambda$ is $p_lambda = braket(psi, hat(P)_lambda, psi)$ and state collapses to $1/sqrt(p_lambda) hat(P)_lambda ket(psi)$.
]
#definition[
$hat(A)$ and $hat(B)$ are *compatible* if $\[hat(A), hat(B)\] = 0$.
]
#remark[
State can only have definite values for observables $A$ and $B$ if it is simultaneous eigenstate of both $hat(A)$ and $hat(B)$. There always exist simultaneous eigenstates for compatible operators.
]
#remark[
If $hat(A)$ and $hat(B)$ not compatible, measuring $A$ then $B$ then $A$ again will not always give same result for both measurements of $A$.
]
== Density matrices
#definition[
A state is *pure state* if it is definite, i.e. state of system is completely known, and only uncertainties are due to inherent uncertain nature of quantum mechanics.
]
#definition[
*Density matrix (density operator)* of *pure state* $ket(psi)$ is $ hat(rho) := ket(psi) bra(psi) $
]
#theorem[
There is bijection between density matrices and pure states, and $ hat(M) ket(psi) = lambda ket(psi) quad & <==> quad hat(M) hat(rho) = lambda hat(rho) \ ket(psi) -> hat(U) ket(psi) quad & <==> quad hat(rho) -> hat(U) hat(rho) hat(U)^dagger $ i.e. transforming state $ket(psi)$ by unitary operator $hat(U)$ is equivalent to transforming density matrix $hat(rho)$ to $hat(U) hat(rho) hat(U)^dagger$.
]
#definition[
For any orthonormal basis states ${ket(v_i): i in [N]}$, *trace* of $hat(A)$ is $ tr\(hat(A)\) = sum_(i = 1)^N braket(v_i, hat(A), v_i) $
]
#proposition[
Trace satisfies *cyclicity*: $ tr(A B C) = tr(B C A) = tr(C A B) $
]
#proposition[
Density matrix of pure state is a projector.
]
#proposition[
Density matrix $hat(rho)$ of pure state satisfies $tr\(hat(rho)\) = tr\(hat(rho)^2\) = 1$.
]
#definition[
*Mixed state* is one where state of system is not known. It is ensemble of pure states, each with associated probability of system being in that state: ${(p_i, ket(v_i)): i in [M]}$, where each $ket(v_i)$ is normalised. This is classical uncertainty rather than quantum uncertainty.
]
#definition[
*Density matrix* of *mixed state* is linear combination of density matrices for each pure state weighted by probability: $ hat(rho) := sum_(i = 1)^M p_i ket(v_i) bra(v_i) $ Can generalise definition to include possibility of ensembles containing mixed states: $hat(rho) = sum_(i = 1)^M p_i hat(rho)_i$ where $hat(rho)_i$ are mixed and/or pure density matrices.
]
#note[
One density matrix may be given by multiple mixed states.
]
#proposition[
Let $hat(A)$ observable, then expected value of measuring $hat(A)$ on $hat(rho)$ is $expected(hat(A)) = tr\(hat(rho) hat(A)\)$.
]
#proposition[
$hat(rho)$ is a density matrix of a pure/mixed state iff it satisfies:
- *Normalised*: $tr(hat(rho)) = 1$
- *Hermitian*: $hat(rho)^dagger = hat(rho)$
- *Semi-positive-definite*: for every state $ket(phi)$, $braket(phi, hat(rho), phi) >= 0$ (can be $= 0$ when $ket(phi) != 0$). This holds if $hat(rho)$ has non-negative eigenvalues, or if $tr\(hat(rho)^2\) <= 1$.
]
#proposition[
After taking measurement of pure or mixed state $hat(rho)$:
- Result is $lambda$ with probability $p_lambda = tr\(hat(P)_lambda hat(rho) hat(P)_lambda\) = tr\(hat(P)_lambda hat(rho)\) = tr\(hat(rho) hat(P)_lambda\)$.
- Density matrix after measuring value of $lambda$ is $1/p_lambda hat(P)_lambda hat(rho) hat(P)_lambda$.
]
#theorem[
Let $hat(rho)$ be density matrix, then $hat(rho)$ corresponds to pure state iff $tr\(hat(rho)^2\) = 1$.
]
= Qubits and the Bloch sphere
== Qubits
#definition[
A *qubit* is state in two-dimensional Hilbert space. Usually *computational basis* ${ket(0), ket(1)}$ denotes basis for such a Hilbert space.
]
#proposition[
General pure state in qubit system is of the form $ ket(psi) = cos(theta / 2) ket(0) + e^(i phi) sin(theta / 2) ket(1), quad 0 <= theta <= pi, 0 <= phi < 2pi $ So there is bijection between pure qubit states and points on $S^2$, called the *Bloch sphere*. Any point on Bloch sphere can be labelled by its position vector: $ vd(r) = vec(x, y, z), quad x = sin(theta) cos(phi), y = sin(theta) sin(phi), z = cos(theta) $
]
#definition[
We define six special states on the Bloch sphere: $
ket(+) := 1/sqrt(2) (ket(0) + ket(1)) <-> 1/sqrt(2) (1, 1)^T: & quad vd(r) = (1, 0, 0)^T, quad (theta, phi) = (pi\/2, 0) \
ket(-) := 1/sqrt(2) (ket(0) - ket(1)) <-> 1/sqrt(2) (1, -1)^T: & quad vd(r) = (-1, 0, 0)^T, quad (theta, phi) = (pi\/2, pi) \
ket(L) := 1/sqrt(2) (ket(0) + i ket(1)) <-> 1/sqrt(2) (1, i)^T: & quad vd(r) = (0, 1, 0)^T, quad (theta, phi) = (pi\/2, pi\/2) \
ket(R) := 1/sqrt(2) (ket(0) - i ket(1)) <-> 1/sqrt(2) (1, -i)^T: & quad vd(r) = (0, -1, 0)^T, quad (theta, phi) = (pi\/2, 3pi\/2) \
ket(0) <-> (1, 0)^T: & quad vd(r) = (0, 0, 1)^T, quad (theta, phi) = (0, dot.op) \
ket(1) <-> (0, 1)^T: & quad vd(r) = (0, 0, -1)^T, quad (theta, phi) = (pi, dot.op) $
]
== Inside the Bloch sphere
#definition[
*Pauli $sigma$-matrices* are $ sigma_1 := mat(0, 1; 1, 0), quad sigma_2 := mat(0, -i; i, 0), quad sigma_3 := mat(1, 0; 0, -1) $
]
#definition[
For pure state $ket(psi)$, *Bloch vector* $vd(r)$ is corresponding point on Bloch sphere. For mixed state ${(p_i, ket(v_i)): i in [M]}$, *Bloch vector* is $ vd(r) := sum_(i = 1)^M p_i vd(r_i) $ where $vd(r_i)$ is Bloch vector corresponding to pure state $ket(v_i)$.
]
#proposition[
Density matrix for state with Bloch vector $vd(r)$ is $ rho = 1/2 (I_2 + vd(r) dot.op sigma) $ where $vd(r) dot.op sigma = r_1 sigma_1 + r_2 sigma_2 + r_3 sigma_3 = x sigma_1 + y sigma_2 + z sigma_3$.
]
#proposition[
State is mixed iff its Bloch vector $vd(r)$ satisfies $|vd(r)| < 1$.
]
#proposition[
For any density matrix $rho$ defined by Bloch vector $vd(r)$, $ tr\(rho^2\) = 1/2 (1 + |vd(r)|^2) $
]
== Time evolution of a qubit
#remark[
Unitary transformations of a qubit correspond to rotations of points on/in Bloch sphere about the origin, representing the fact that unitary transformations cannot transform pure states to mixed states.
]
#remark[
Measurements transform any state to a pure state.
]
#proposition[
$tr(rho^2)$ is invariant under unitary transformations (time evolution).
]
- $tr(rho^2)$ measures how mixed a state is: $tr(rho^2) = 1$ for pure states, $tr(rho^2) = 1/2$ for the most mixed single qubit state, corresponding to the origin: $vd(r) = vd(0)$, $rho = 1/2 I$.
#proposition[
Mixing states can never produce a state further from origin than furthest initial state.
]
#note[
There are an infinite number of ways of writing a mixed state as an ensemble of two pure states: any line passing through the point represented by the mixed states intersects with the Bloch sphere twice - the intersection points give the pure states in the ensemble.
]
#definition[
*Trace distance* between density matrices $hat(rho)_1$ and $hat(rho)_2$ is $ D\(hat(rho)_1, hat(rho)_2\) := 1/2 tr |hat(rho)_1 - hat(rho)_2| = 1/4 tr |(vd(r_1) - vd(r_2)) dot.op sigma| = 1/2 |vd(r_1) - vd(r_2)| = 1/2 sum_(i = 1)^N |lambda_i| $ where $|hat(A)| = sqrt(hat(A)^dagger hat(A))$, $lambda_i$ are the eigenvalues of $hat(rho)_1 - hat(rho)_2$ (trace distance is equal to sum of eigenvalues since $hat(rho)_1 - hat(rho)_2$ is Hermitian).
]
#remark[
Trace distance gives notion of distance between two states.
]
#proposition[
Trace distance defines a *metric* on set of density matrices:
- *Non-negative*: $D\(hat(rho)_1, hat(rho)_2\) >= 0$.
- *Separates points*: $D\(hat(rho)_1, hat(rho)_2\) = 0 <==> hat(rho)_1 = hat(rho)_2$.
- *Symmetric*: $D\(hat(rho)_1, hat(rho)_2\) = D\(hat(rho)_2, hat(rho)_1\)$.
- *Triangle inequality*: $D\(hat(rho)_1, hat(rho)_3\) <= D\(hat(rho)_1, hat(rho)_2\) + D\(hat(rho)_2, hat(rho)_3\)$
]
== Pauli matrices
#definition[
*Levi-Cevita* tensor $epsilon_(i j k)$ is defined for ${i, j, k} subset.eq {1, 2, 3}$ as:
- $epsilon_(1 2 3) := epsilon_(2 3 1) := epsilon_(3 1 2) := 1$.
- $epsilon_(3 2 1) := epsilon_(1 3 2) := epsilon_(2 1 3) := -1$.
- $epsilon_(i j k) := 0$ otherwise.
]
#proposition[
Pauli matrices satisfy following properties:
- *Hermitian*: $sigma_i^dagger = sigma_i$.
- *Traceless*: $tr(sigma_i) = 0$.
- $[sigma_i, sigma_j] = sigma_i sigma_j - sigma_j sigma_i = 2i epsilon_(i j k) sigma_k$.
- ${sigma_i, sigma_j} = sigma_i sigma_j + sigma_j sigma_i = 2 delta_(i j) I_2$.
- $sigma_i sigma_j = delta_(i j) I_2 + i epsilon_(i j k) sigma_k$.
- They form a basis for vector space of $2 times 2$ Hermitian traceless matrices over $RR$.
]
#proposition[
Define measurement operators $X, Y, Z$ as $ X & := 1/2 (I_2 - sigma_1), quad Y := 1/2 (I_2 - sigma_2), quad Z := 1/2 (I_2 - sigma_3) $
$X$, $Y$ and $Z$ have their eigenvectors as the six special Bloch states, with eigenvalues $0$ or $1$: $ X ket(+) & = 0 ket(+), quad X ket(-) = 1 ket(-), \ Y ket(L) & = 0 ket(L), quad Y ket(R) = 1 ket(R), \ Z ket(0) & = 0 ket(0), quad Z ket(1) = 1 ket(1) $
]
#proposition[
Exponentials of Pauli matrices are unitary matrices: $forall alpha in RR$, $ exp(i alpha sigma_1) & = cos(alpha) I_2 + i sin(alpha) sigma_1, \ exp(i alpha sigma_2) & = cos(alpha) I_2 + i sin(alpha) sigma_2, \ exp(i alpha sigma_3) & = cos(alpha) I_2 + i sin(alpha) sigma_3 $
]
#proposition[
For $alpha in RR$, $vd(n) in RR^3$, $|vd(n)|^2 = 1$, $ U_alpha (vd(n)) := exp(i alpha vd(n) dot.op sigma) = cos(alpha) I_2 + i sin(alpha) vd(n) dot.op sigma $ is unitary transformation. If density matrix $rho = 1/2 (I_2 + vd(r) dot.op sigma)$ evolves with time according to this operator, then $ rho -> U_alpha (vd(n)) rho U_alpha (vd(n))^dagger = 1/2 (I_2 + (R_alpha (vd(n)) vd(r)) dot.op sigma) $ where $R_alpha (vd(n))$ is $3 times 3$ orthogonal matrix corresponding to rotation of angle $2 alpha$ about axis in the direction of $vd(n)$.
]
= Bipartite systems
== Tensor products
#definition[
*Tensor product* $ket(phi) tp ket(psi)$ in $H_1 tp H_2$ satisfies:
- *Scalar multiplication*: $c(ket(phi) tp ket(psi)) = (c ket(phi)) tp ket(psi) = ket(phi) tp (c ket(psi))$.
- *Linearity*:
- $a ket(psi) tp ket(phi_1) + b ket(psi) tp ket(phi_2) = ket(psi) tp (a ket(phi_1) + b ket(phi_2))$.
- $a ket(psi_1) tp ket(phi) + b ket(psi_2) tp ket(phi) = (a ket(psi_1) + b ket(psi_2)) tp ket(phi)$.
]
#definition[
Induced inner product on $H_1 tp H_2$ is defined as $ (bra(psi_1) tp bra(phi_1))(ket(psi_2) tp ket(phi_2)) = braket(psi_1, psi_2) braket(phi_1, phi_2) $
]
#proposition[
For bases $\{ket(v_i): i in [N_1]\}$ for $H_1$ and $\{ket(w_j): j in [N_2]\}$ for $H_2$, ${ket(v_i) tp ket(w_j), i in [N_1], j in [N_2]}$ is basis for $H_1 tp H_2$ and is orthonormal if $\{ket(v_i)\}$ and $\{ket(v_j)\}$ are orthonormal.
]
#definition[
Most general vector $ket(psi) in H_1 tp H_2$ can be expressed as $ ket(psi) = sum_(i in [N_1], thick j in [N_2]) c_(i, j) ket(v_i) tp ket(v_j) $ Generally, this cannot be written as a tensor product $ket(psi) tp ket(phi)$. If it can be, it is a *separable* state. If not, it is *entangled*.
]
#definition[
Hilbert space of $N$-qubit system is $2^N$-dimensional Hilbert space $H_N = H_q^(tp N)$ where $H_q$ is a single qubit Hilbert space.
]
#example[
Let $H_3 = H_q tp H_q tp H_q$. Operator $hat(I) tp hat(sigma_1) tp hat(I)$ acts on the second qubit and leaves the other two invariant.
]
== Linear operators and local unitary operations
#definition[
Linear operators on $H_1 tp H_2$ are linear combinations of $hat(A) tp hat(B)$, where $ \(hat(A) tp hat(B)\) (ket(psi) tp ket(phi)) := \(hat(A) ket(psi)\) tp \(hat(B) ket(phi)\) $
]
#proposition[
Properties of tensor product of linear operators:
- $hat(A) tp hat(B) + hat(C) tp hat(B) = \(hat(A) + hat(C)\) tp hat(B)$.
- $hat(A) tp hat(B) + hat(A) tp hat(D) = hat(A) tp \(hat(B) + hat(D)\)$.
- $\(hat(A) tp hat(B)\)^dagger = hat(A)^dagger tp hat(B)^dagger$.
- $\(hat(A) tp hat(B)\)\(hat(C) tp hat(D)\) = \(hat(A) hat(C) tp hat(B) hat(D)\)$.
- $tr_(cal(H)_A tp cal(H)_B) \(hat(A) tp hat(B)\) = tr_(cal(H)_A) \(hat(A)\) tr_(cal(H)_B) \(hat(B)\)$.
In particular, tensor product of linear operators preserves unitarity, Hermiticity, positivity, and tensor product of two projectors is a projector.
]
#definition[
Bipartite system is system described Hilbert space $cal(H)_A tp cal(H)_B$ which can be partitioned (separated) into two subsystems $A$ and $B$, described by Hilbert spaces $cal(H)_A$ and $cal(H)_B$. Alice has full control over system $A$, Bob has full control over system $B$, neither can control the other's system.
]
#definition[
For bipartite system, *local operations (LO)* are of the form $hat(U)_A tp hat(I)$ (for Alice) or $hat(I) tp hat(U)_B$ (for Bob) where $hat(U)_A$ and $hat(U)_B$ are unitary operators or measurement operators.
]
#proposition[
$hat(U)_A tp hat(I)$ and $hat(I) tp hat(U)_B$ commute: $\[hat(U)_A tp hat(I), hat(I) tp hat(U)_B\] = 0$, and their product is $hat(U)_A tp hat(U)_B$.
]
#theorem[
Any unitary transformation $hat(U)_A tp hat(U)_B$ (i.e. using LO) acting on separable state $ket(psi) tp ket(phi)$ produces another separable state: $hat(U)_A ket(psi) tp hat(U)_B ket(phi)$. In particular, an entangled state cannot be created from a separable state.
]
#definition[
A mixed state is *separable* iff it is an ensemble of separable states, and *entangled* otherwise.
]
#definition[
*Density matrix* of *separable pure state* $ket(Psi) = ket(psi) tp ket(phi)$ is $ hat(rho) := ket(Psi) bra(Psi) = (ket(psi) tp ket(phi)) (bra(psi) tp bra(phi)) = (ket(psi) bra(psi)) tp (ket(phi) bra(phi)) = hat(rho)_A tp hat(rho)_B $ where $hat(rho)_A = ket(psi) bra(psi)$ and $hat(rho)_B = ket(phi) bra(phi)$.
]
#definition[
*Density matrix* of *separable mixed state* is $ hat(rho) := sum_(i = 1)^M p_i hat(rho)_A^((i)) tp hat(rho)_B^((i)) $ where $\{hat(rho)_A^((i))\}$ are mixed or pure states of first system, $\{hat(rho)_B^((i))\}$ are mixed or pure states of second system.
]
== Matrix representation
#definition[
*Tensor product* of two vectors is given by e.g. $ vec(1, 2, 3) tp vec(4, 5) = vec(1 vec(4, 5), 2 vec(4, 5), 3 vec(4, 5)) = vec(4, 5, 8, 10, 12, 15) $ The expression is similar for matrices: $ mat(1, 2; 3, 4) tp mat(5, 6; 7, 8) = mat(1 mat(5, 6; 7, 8), 2 mat(5, 6; 7, 8); 3 mat(5, 6; 7, 8), 4 mat(5, 6; 7, 8)) = mat(5, 6, 10, 12; 7, 8, 14, 16; 15, 18, 20, 24; 21, 24, 28, 36) $
]
#definition[
*Controlled `NOT` (`CNOT`)* operator acts on $H_2 = H_q tp H_q$ and is defined as $ U = (I_2 + sigma_3) / 2 tp I_2 + (I_2 - sigma_3) / 2 tp sigma_1 $ We have $U ket(00) = ket(00)$, $U ket(01) = ket(01)$, $U ket(10) = ket(11)$, $U ket(11) = ket(10)$.
]
== Local measurements
#definition[
For bipartite system, *local measurements* are Hermitian operators of the form $hat(F) = hat(F)_A tp hat(I)$ for Alice and $hat(G) = hat(I) tp hat(G)_B$ for Bob.
]
#notation[
Projection operators of $hat(F)_A$ and $hat(G)_B$ for eigenvalues $lambda_i$ and $mu_j$ are denoted $hat(F)_(A i)$ and $hat(G)_(B j)$.
]
#remark[
In the full system $H_A tp H_B$, $hat(F)$ and $hat(G)$ are degenerate, with degeneracy given by dimension of other subsystem, i.e. $dim(cal(H)_B)$ for Alice's observable and $dim(cal(H)_A)$ for Bob's. Assuming no degeneracy in their own system, corresponding projection operators in full system are
$
hat(F)_i & = hat(F)_(A i) tp hat(I) = sum_(j = 1)^(N_2) ket(v_i) bra(v_i) tp ket(w_j) bra(w_j) \
hat(G)_j & = hat(I) tp hat(G)_(B j) = sum_(i = 1)^(N_1) ket(v_i) bra(v_i) tp ket(w_j) bra(w_j)
$
]
#note[
Since $\[hat(F), hat(G)\] = 0$, these measurements are compatible so final state is eigenstate of both $hat(F)$ and $hat(G)$. Probability of an outcome occuring is not affected by whether Alice or Bob measures first (or simultaneously).
]
#example[
Let $\{ket(v_i)\}, \{ket(w_j)\}$ be orthonormal eigenstates of operators $hat(F)_A$ and $hat(G)_B$ with non-degenerate eigenvalues $\{lambda_i\}$ and $\{mu_j\}$, $ket(Psi) = sum_(i in [N_1], med j in [N_2]) gamma_(i j) ket(v_i) tp ket(w_j)$ be entangled state, define $
alpha_i := (sum_(j = 1)^N_2 |gamma_(i j)|^2)^(1\/2), quad beta_j := (sum_(i = 1)^(N_1) |gamma_(i j)|^2)^(1\/2)
$ and define auxiliary states (set $ket(psi_j) = vd(0)$ when $beta_j = 0$ and $ket(phi_i) = vd(0)$ when $alpha_i = 0$): $
ket(psi_j) := 1/beta_j sum_(i = 1)^(N_1) gamma_(i j) ket(v_i) in cal(H)_A, quad ket(phi_i) := 1/alpha_i sum_(j = 1)^(N_2) gamma_(i j) ket(w_j) in cal(H)_B \
==> ket(Psi) = sum_(i = 1)^(N_1) alpha_i ket(v_i) tp ket(phi_i) = sum_(j = 1)^(N_2) beta_j ket(psi_j) tp ket(w_j)
$ If Alice measures $hat(F)$ with result $lambda_i$, entangled state $ket(Psi)$ collapses to separable state $
ket(Psi) -> hat(F)_i ket(Psi) = \(hat(F)_(A i) tp hat(I)\) ket(Psi) tilde ket(v_i) tp ket(phi_i)
$ So Bob's state depends on the result of Alice's measurement.
]
== Reduced density matrix
#definition[
For operator $hat(C) tp hat(D) in End(cal(H)_A tp cal(H)_B)$, *partial trace* over $cal(H)_A$ and $cal(H)_B$, $tr_A: End(cal(H)_A tp cal(H)_B) -> End(cal(H)_B)$ and $tr_B: End(cal(H)_A tp cal(H)_B) -> End(cal(H)_A)$, are $ tr_A \(hat(C) tp hat(D)\) := tr\(hat(C)\) hat(D), quad tr_B \(hat(C) tp hat(D)\) := tr\(hat(D)\) hat(C) $
]
#definition[
For bipartite system, the *reduced density matrix* of a subsystem is partial trace of density matrix over other subsystem. So for bipartite system, $ hat(rho)_A := tr_B \(hat(rho)\), quad hat(rho)_B := tr_A \(hat(rho)\) $
]
#proposition[
We have $tr\(hat(A) tp hat(B)\) = tr_A tr_B \(hat(A) tp hat(B)\) = tr_B tr_A \(hat(A) tp hat(B)\)$.
]
#note[
A reduced matrix describes one subsystem, assuming no knowledge of the other system.
]
#proposition[
- $hat(rho)_A$ is invariant under all local operations in system $B$ (for measurements, this is provided Alice does not learn about the result of the measurement in system B).
- Under unitary transformations $hat(U)$ in system $A$, $hat(rho)_A$ transforms as normal: $hat(rho)_A -> hat(U) hat(rho)_A hat(U)^dagger$.
- Local measurements in system $A$ can be described by $hat(rho)_A$ and operators acting on $cal(H)_A$: $tr_B \(hat(F)_i hat(rho) hat(F)_i\) = hat(F)_(A i) hat(rho)_A hat(F)_(A i)$.
]
#theorem[
If $ket(Psi) in cal(H)_A tp cal(H)_B$ is pure state, then $hat(rho)_A$ is pure iff $ket(Psi)$ is separable.
]
#corollary[
If spectrum of $hat(F)_A$ is non-degenerate then measuring $hat(F)_A$ in system $cal(H)_A$ produces separable state on system $cal(H)_A tp cal(H)_B$, i.e. *measurement destroys entanglement*.
]
#note[
Entanglement does not violate causality (does not allow communication faster than the speed of light). i.e., if Alice makes a local measurement on an entangled system, Bob cannot detect this, even though the reduced density matrix for his system has changed.
]
== Classical communication
- Alice and Bob can use classical communication (CC) to communicate results of measurements of their own subsystem. If the state was initially entangled, Bob communicating a measurement to Alice would give Alice information about her subsystem.
#definition[
*LOCC* is when Alice and Bob can use local operations (LO) and classical communication.
]
= Entanglement applications
== Bell states
#proposition[
Measurements of entanglement:
- Let $ket(Psi) in cal(H)_A tp cal(H)_B$. If $ket(Psi) = a ket(0) tp ket(phi) + b ket(1) tp ket(phi)$ for some $a, b in CC$, $ket(phi) in cal(H)_B$, then $ket(Psi)$ is separable, otherwise entangled.
- If reduced density matrix of either subsystem gives a pure state ($tr(rho^2) = 1$) then state is separable. If it gives a mixed state ($tr(rho^2) < 1$), state is entangled.
- $tr\(rho_A^2\) = tr\(rho_B^2\)$ gives measure of entanglement, with max value $1$ for no entanglement, min value $1\/2$ (for single qubit subsystem) for maximally entangled states.
]
#definition[
*Bell states* are defined as, for $x, y in {0, 1}$, $ ket(beta_(x y)) := 1/sqrt(2) (ket(0) tp ket(y) + (-1)^x ket(1) tp ket(overline(y))) $
]
#proposition[
Bell states are maximally entangled (trace of reduced density matrix of both sides is $1/2$) and form an orthonormal basis.
]
- Bell state basis is related to standard basis by unitary transformation, but Bell states can't be created from the separable standard basis by any LOCC process, since unitary transformations between them are not of form $hat(U)_A tp hat(U)_B$ (since this preserves separability), and measurements always produce a separable state.
- Alice and Bob can individually transform any Bell state to any other Bell state by the unitary operators $hat(U)_(x y) tp hat(I)$ and $hat(I) tp hat(U)_(x y)$ respectively: $ (hat(U)_(x y) tp hat(I)) ket(beta_(0 0)) = (hat(I) tp hat(U)_(x y)) ket(beta_(0 0)) = ket(beta_(x y)) $ where $ U_(0 0) = I_2, quad U_(0 1) = sigma_1, quad U_(1 0) = sigma_3, quad U_(1 1) = i sigma_2 $
== Superdense coding
- Qubit can be used instead of classical bit: $ket(0)$ corresponds to the bit $0$, $ket(1)$ corresponds to the bit $1$. In this case, the qubit can be measured with probability $1$ with the measurement operator $Z = 1/2 (I_2 - sigma_3)$, since $Z ket(0) = 0 ket(0)$, $Z ket(1) = 1 ket(1)$ so measurement with outcome $0$ means state is $ket(0)$ with probability $1$, measurement with outcome $1$ means state is $ket(1)$ with probability $1$.
- Alice can prepare the qubit to represent the classical bit to send to Bob: prepare any state $ket(psi)$ and measure on it with operator $1/2 (I_2 - sigma_3)$. Outcome is $0$ or $1$ - if outcome is equal to the bit $x$ she wants to send, $ket(psi)$ has been projected to $ket(x)$, so send this state to Bob. Otherwise, perform unitary transformation $sigma_1 ket(overline(x)) = ket(x)$ and send this state to Bob.
- *Superdense coding*:
- Superdense coding allows one qubit to transmit two classical bits of information.
- Alice and Bob share state $ket(beta_00)$.
- Alice applies operation $hat(U)_(x y) tp hat(I)$ to whole system where $(x y)_2$ is the two bit message she wants to send (this just acts on her qubit). Note that this does not transmit any information to Bob, as his reduced density matrix is $rho_B = 1/2 I$ before and after the transformation.
- Alice sends her qubit to Bob. Then Bob has the full Bell state $ket(beta_(x y))$ (he has both qubits). Bob then applies a measurement which has the four Bell states as eigenstates, which gives him the eigenvalue with probability $1$, e.g. he measures $ hat(B) = 0 ket(beta_00) bra(beta_00) + 1 ket(beta_01) bra(beta_01) + 2 ket(beta_10) bra(beta_10) + 3 ket(beta_11) bra(beta_11) $
== No-cloning theorem
#theorem(name: "No-cloning theorem")[
In quantum mechanics, it is impossible to clone an unknown state $ket(psi)$. More precisely, it is impossible to perform transformation $ket(psi) tp ket(phi) -> ket(psi) tp ket(psi)$ for an arbitrary unknown state $ket(psi)$ and fixed initial state $ket(phi)$.
]
== Teleportation
#definition[
*Hadamard gate* is transformation given by operator $
U_H := 1/sqrt(2) mat(1, 1; 1, -1) = 1/sqrt(2) (sigma_1 + sigma_3)
$ We have $hat(U)_H ket(0) = ket(+)$, $hat(U)_H ket(1) = ket(-)$.
]
#definition[
*Teleportation* is process of transferring quantum state $ket(psi)$ without using quantum communication (i.e. only using LOCC). It is as follows:
- Alice has state $ket(psi) = a ket(0) + b ket(1)$, Alice and Bob share Bell state $ket(beta_00)$, so full system state is $
ket(psi) tp ket(beta_00) & = 1/sqrt(2) ket(psi) tp ket(0) tp ket(0) + 1/sqrt(2) ket(psi) tp ket(1) tp ket(1) \
& = 1/sqrt(2) (a ket(000) + a ket(011) + b ket(100) + b ket(111))
$ Alice has first two qubits, Bob has third.
- Alice performs `CNOT` on her two qubits, transforming state to $ 1/sqrt(2) (a ket(000) + a ket(011) + b ket(110) + b ket(101)) $ `CNOT` operator is not of form $A tp B$ so it entangles Alice's qubits.
- Alice applies Hadamard gate to her system: $ hat(U)_H tp hat(I) tp hat(I) 1/sqrt(2) (a ket(000) + a ket(011) + b ket(100) + b ket(111)) = 1/2 sum_(x, y) ket(x) tp ket(y) tp hat(U)_(x y) ket(psi) $
- Alice measures with operator $Z$ on both her qubits, giving measurement $(x y)_2$, causing state to collapse to $ket(x) tp ket(y) tp hat(U)_(x y) ket(psi)$.
- Alice uses CC to send $(x y)_2$ to Bob. Bob then performs transformation $hat(U)_(x y)^(-1) = hat(U)_(x y)^dagger$ so his state becomes $ket(psi)$.
]
== Quantum key distribution (QKD)
#definition[
Let message $M$ and secret key $K$ be $n$-bit integers, $K$ is shared by Alice and Bob, where each bit of $k$ has value $0$ or $1$ with equal probability. *One-time pad encryption* is as follows:
- Alice produces encrypted message $C = M plus.circle K$, where $plus.circle$ is bitwise addition $mod 2$ (also bitwise `XOR`).
- Alice transmits $C$ to Bob. Bob decrypts message by calculating $ C plus.circle K = (M plus.circle K) plus.circle K = M plus.circle (K plus.circle K) = M plus.circle 0 = M $
- It is important that $K$ is at least as long as $M$ and is never reused.
- Drawback is that $K$ might be very long, and must be transmitted securely prior to communication.
]
#definition[
*BB84* protocol for transmitting secret key is as follows:
- Alice chooses random bit $x in {0, 1}$ with equal probability, makes random choice of $X$ or $Z$ with equal probability, then prepares qubit state according to the outcome: $ (0, Z) |-> ket(0), quad (1, Z) |-> ket(1), quad (0, X) |-> ket(+), quad (1, X) |-> ket(-) $ and sends this qubit to Bob using quantum communication.
- Bob randomly chooses $X$ or $Z$ with equal probability, then measures qubit with measurement operator $1/2 (I - sigma_X)$ or $1/2 (I - sigma_Z)$.
- This process is repeated enough to generate a sufficiently long key.
- Alice and Bob publicly reveal their choices of $X$ or $Z$ for each qubit (must be after Bob receives the qubit), discarding all qubits for which same choice was not made. When same choice is made for qubit, Alice's choice of qubit will match with Bob's measurement.
]
- *Security of BB84*:
- If Eve intercepts qubit, she must measure it to obtain information from it. But the four possible states are not all orthogonal, so Eve cannot make measurement which is guaranteed to distinguish them.
- If Eve measures with $Z$ and Alice chose $Z$, Eve would correctly measure the qubit. But if Alice chose $X$, Eve would measure $0$ or $1$ with equal probability, and forward the same random qubit $ket(0)$ or $ket(1)$ to Bob. If Bob measures with $X$, result is discarded anyway. If Bob measures with $Z$, measurement is same random result as Eve's measurement, so differs from Alice's key half the time.
- So for each (non-discarded) bit of key Eve intercepts and measures, probability that Alice and Bob's value differs is $1/4$, so currently Eve expects to know $3/4$ of the key, which is insecure. So Alice and Bob compare random subset of their keys and estimate error rate.
- If rate too high, they assume interference from Eve, discard the key and repeat entire process again.
== Bell inequalities
#definition[
*Local realism* is a property of a system:
- *Locality*: an effect at one point can be detected at another point only if something travels between those two points (no faster than the speed of light).
- *Realism*: measurements must be deterministic, i.e. measurements tell us a property of the system.
]
- *CHSH Bell-inequality*:
- Let system have observables $Q, R, S, T$ which takes values $plus.minus 1$. Realism states that any system state must have specific values for these, $(q, r, s, t)$.
- Take large number of system states and measure $Q S + R S + Q T - R T$ for each, calculate mean which gives estimate of expectation $EE(Q S + R S + Q T - R T)$.
- Now $Q = plus.minus R$, so either $(Q + R)S = 0$ and $(Q - R)T = plus.minus 2$ or $(Q + R)S = plus.minus 2$ and $(Q - R)T = 0$, hence $Q S + R S + Q T - R T = plus.minus 2$, and $ -2 <= EE(Q S + R S + Q T - R T) = EE(Q S) + EE(R S) + EE(Q T) - EE(R T) <= 2 $
- Consider following experiment:
- Charlie is in middle of Alice and Bob, who are separated arbitrarily.
- Charlie prepares many Bell states $ket(beta_11)$ and sends one qubit of each simultaneously to Alice and Bob, so they receive them at same time.
- Alice randomly chooses $Q$ or $R$ and makes that measurement on her qubit, Bob does same for random $S$ or $T$. Assuming locality, it is impossible that Alice or Bob's measurement affects the other by an influence of finite speed.
- If quantum mechanics satisfied local realism, Alice's and Bob's results are predetermined by a hidden variable describing Charlie's Bell state.
- Alice and Bob record measurement operator and result for each qubit, then compute $EE(Q S)$, $EE(R S)$, $EE(Q T)$, $EE(R T)$.
- Measurement operators are given by $ Q = sigma_1 tp I_2, R = sigma_3 tp I_2, quad S = I_2 tp (-1)/sqrt(2) (sigma_1 + sigma_3), T = I_2 tp (-1)/sqrt(2) (sigma_1 - sigma_3) $
- These give $EE(Q S) = EE(R S) = EE(Q T) = -EE(R T) = 1/sqrt(2)$, giving $EE(Q S) + EE(R S) + EE(Q T) - EE(R T) = 2 sqrt(2) > 2$, violating CHSH inequality.
- Experimental data confirms this violation, showing nature isn't described by theory obeying local realism, and nature is consistent with quantum mechanics.
= Information theory
== Classical information and Shannon entropy
#definition[
Let $X$ be random variable representing a message, $p(x) = PP(X = x)$ *Shannon entropy* is $
H(X) := -sum_x p(x) log_2 (p(x))
$ where conventionally $0 log 0 = 0$.
]
#theorem(name: "Shannon's noiseless coding theorem")[
$H(X)$ gives lower bound on average number of bits needed to encode message $X$.
]
#definition[
*Joint entropy* is $ H(X, Y) := -sum_(x, y) p(x, y) log_2 (p(x, y)) $
]
#proposition[
Joint entropy obeys *subadditivity*: $ H(X, Y) <= H(X) + H(Y) $ with equality iff $X$ and $Y$ are independent variables, i.e. when $p(x, y) = PP(X = x) PP(Y = y)$.
]
#definition[
*Relative entropy of $p(x)$ to $q(x)$* is defined for two random variables which take same values but with different distributions $p(x)$ and $q(x)$: $ H(p(x) || q(x)) & := sum_x (p(x) log_2 (p(x)) - p(x) log_2 (q(x))) \ & = -H(X) - sum_x p(x) log_2 (q(x)) $
]
#proposition[
Relative entropy is non-negative and $ H(p(x) || q(x)) = 0 <==> forall x, p(x) = q(x) $
]
#remark[
Relative entropy can diverge if for some $x$, $q(x) = 0$ and $p(x) != 0$
]
#definition[
*Conditional entropy* is $ H(X|Y) := H(X, Y) - H(Y) <= H(X) $
]
#definition[
*Mutual information* of $X$ and $Y$ is $ H(X : Y) := H(X) + H(Y) - H(X, Y) >= 0 $
]
== Quantum entropy
#definition[
*Von Neumann entropy* of quantum state with density operator $hat(rho)$ is $ S\(hat(rho)\) := -tr(hat(rho) log_2 (hat(rho))) = -sum_i p_i log_2 (p_i) $ where $hat(rho) = sum_i p_i ket(i) bra(i)$, $ket(i)$ are eigenstates of $hat(rho)$, $p_i$ are eigenvalues of $hat(rho)$. $S\(hat(rho)\)$ is Shannon entropy of ensemble of pure states described by $hat(rho)$.
]
#note[
To compute $log_2 \(hat(rho)\)$, diagonalise $hat(rho)$ (use spectral decomposition) and take $log_2$ of each diagonal element (use here the convention $log_2 (0) = 0$).
]
#remark[
For pure state, $S\(hat(rho)\) = -1 log_2 (1) = 0$.
]
#definition[
*(quantum) relative entropy* is measure of distance between two states: $
S\(hat(rho)_1 || hat(rho)_2\) := tr\(hat(rho)_1 log_2 \(hat(rho)_1\)\) - tr\(hat(rho)_1 log_2 \(hat(rho)_2\)\)
$
]
#proposition[
$S\(hat(rho)_1 || hat(rho)_2\) >= 0$ with equality iff $hat(rho)_1 = hat(rho)_2$.
]
#definition[
For bipartite system $cal(H) = cal(H)_A tp cal(H)_B$ described by density matrix $hat(rho)$ and reduced density matrices $hat(rho)_A$ and $hat(rho)_B$, define $ S(A) := S\(hat(rho)_A\), quad S(B) := S\(hat(rho)_B\), quad S(A, B) := S\(hat(rho)\) $ where $S(A, B)$ is *(quantum) joint entropy* of $A$ and $B$.
]
#definition[
*(quantum) conditional entropy* of $A$ and $B$ is $ S(A | B) := S(A, B) - S(B) $
]
#remark[
Unlike classical conditional entropy, quantum conditional entropy can be negative, e.g. if $hat(rho)$ describes pure state, $S(A, B) = 0$ but if entangled, $hat(rho)_B$ is not pure state so $S(B) > 0$.
]
#definition[
*(Quantum) mutual information* is $ I(A: B) = S(A: B) := S(A) + S(B) - S(A, B) $
]
#remark[
When $hat(rho)$ is pure state, $S(A) = S(B)$ so $I(A: B) = 2 S(A)$. So entanglement can be interpreted as mutual information: information shared by $A$ and $B$ and not in either one alone.
]
#definition[
*Entanglement entropy* is $S(A) = S(B)$ (these are equal since both reduced density matrices have same non-zero eigenvalues).
]
#import "@preview/quill:0.2.0": *
#let logicalop(o) = $op(#o)$
#let AND = logicalop("AND")
#let OR = logicalop("OR")
#let CNOT = logicalop("CNOT")
#let FANOUT = logicalop("FANOUT")
#let NOT = logicalop("NOT")
#let CnNOT(n) = logicalop($"C"^n #h(0pt) "NOT"$)
#let CCNOT = logicalop("CCNOT")
#let NAND = logicalop("NAND")
= Classical computing
== Basic gates
#notation[
Input for circuit diagrams has most significant bit at the top, circuits are read left to right, with last operation on the right.
]
#definition[
*(logical) gate* is function mapping bits to bits.
]
#definition[
Simplest gates are $f: {0, 1} -> {0, 1}$:
- *Identity gate*: $id(x) := x$.
- $c_0(x) := 0$.
- $c_1(x) := 1$.
- *#NOT gate*: $NOT(x) = overline(x)$.
]
#definition[
*#FANOUT gate* is defined as $
FANOUT: {0, 1} -> {0, 1}^2, quad FANOUT(x) := (x, x)
$
#figure(quantum-circuit(
row-spacing: 2pt,
setwire(1, stroke: 0pt), 2, ctrl(2, show-dot: false), setwire(1, stroke: 1pt), 1, rstick($x$), [\ ],
lstick($x$), 2, [\ ],
setwire(1, stroke: 0pt), 2, ctrl(0, show-dot: false), setwire(1, stroke: 1pt), 1, rstick($x$)
))
]
#definition[
*#AND gate* is given by its *truth table*:
#figure(table(align: center,columns: (auto, auto, auto), inset: 5pt, [], $0$, $1$, $0$, $0$, $0$, $1$, $0$, $1$))
#figure(quantum-circuit(
row-spacing: 2pt,
setwire(1, stroke: 1pt),
lstick($x$), 1, mqgate($#h(40pt)$, n: 3, radius: (right: 100%)), [\ ],
setwire(1, stroke: 0pt), 3, setwire(1, stroke: 1pt), 2, rstick($AND(x, y)$), [\ ],
lstick($y$), 2,
))
]
#definition[
*OR gate* is given by its *truth table*: $ #table(columns: (auto, auto, auto), inset: 5pt, [], $0$, $1$, $0$, $0$, $1$, $1$, $1$, $1$) $
]
#remark[
#AND and #OR are not reversible (invertible) so cannot be implemented by unitary operators.
]
- *Landauer's principle*: energy $E$ required to erase one bit satisfies $ E >= k_B T log(2) $ where $k_B$ is Boltzmann's constant, $T$ is temperature at which system operates.
#definition[
*Controlled #NOT (#CNOT) gate*, $CNOT: {0, 1}^2 -> {0, 1}^2$, is $ CNOT(x, y) := cases((x, y) & "if" x = 0, (x, NOT(y)) & "if" x = 1) quad = (x, x xor y) = (x, x + y mod 2) $ Inverse of #CNOT is #CNOT. $x$ is *control bit*, $y$ is *target bit*.
#figure(quantum-circuit(
lstick($y$), 2, targ(), 2, rstick($x xor y$), [\ ],
lstick($x$), 2, ctrl(-1), 2, rstick($x$)
))
]
#definition[
*#CnNOT(1) gate* is defined as $ CnNOT(1)(x_1, ..., x_n, y) := (x_1, ..., x_n, y xor "AND"(x_1, ..., x_n)) $ #CnNOT($n$) is reversible for all $n in NN$ and $(CnNOT(n))^(-1) = CnNOT(n)$. For $n = 2$, #CCNOT gate is called a *Toffoli gate*.
]
#figure(quantum-circuit(
lstick($y$), 2, targ(), 2, rstick($AND(x_1, x_2) xor y$), [\ ],
lstick($x_2$), 2, ctrl(-1), 2, rstick($x_2$), [\ ],
lstick($x_1$), 2, ctrl(-1), 2, rstick($x_1$)
))
#definition[
*#NAND gate* is defined as $ NAND(x, y) := NOT(AND(x, y)) $
]
#example[
Circuit diagram for $f: ZZ\/8 -> ZZ\/8$, $f(x) = x + overline(1)$.
#figure(quantum-circuit(
lstick($x_2$), 1, targ(), 4, [\ ],
lstick($x_1$), 1, ctrl(-1), 1, targ(), 2, [\ ],
lstick($x_0$), 1, ctrl(-1), 1, ctrl(-1), 1, gate("NOT")
))
]
== Universal gate sets
#notation[
For $f: {0, 1}^m -> {0, 1}^n$, can write as $ f(x_(m - 1), ..., x_0) = (f_(n - 1)(x_(m - 1), ..., x_0), ..., f_0 (x_(m - 1), ..., x_0)) $
]
#remark[
Can "copy" bits by introducing extra "ancillary" bits and using #CNOT gates:
#figure(quantum-circuit(
lstick($0$), 1, targ(), 1, rstick($x xor 0 = x$), [\ ],
lstick($x$), 1, ctrl(-1), 1, rstick($x$)
))
]
#definition[
A *universal gate set (UGS)* is finite set of gates which can construct an arbitrary function $f: {0, 1}^n -> {0, 1}^m$.
]
#proposition[
${NOT, AND, OR, CNOT}$ is a universal gate set.
]
#corollary[
${CNOT, AND}$ is a universal gate set.
]
#proposition[
${CCNOT}$ is a minimal ($1$-gate) UGS for reversible classical computation.
]
#remark[
There is an infinite number of UGSs.
]
== Computational resources and complexity
#definition[
An *algorithm* is a set of instructions (systematic procedure) for computing some output for a given input.
]
#definition[
Resources considered in complexity:
- *Time*: corresponds to numbers of gates in any UGS needed for implementing the circuit.
- *Space*: corresponds to number of bits (lines) in the circuit.
- $n$ denotes size in bits of input.
]
#example[
Computing $gcd(a, b)$ (assuming WLOG $a >= b$, $2^(n - 1) <= b < 2^n$ so $b$ has $n$ bits).
- Brute-force algorithm: try all $1 <= c <= b$, check if $c | a$ and $c | b$, return largest such $c$. Time complexity: $O(2^n)$.
- Euclid's algorithm has time complexity $O(n^3)$ (assuming division and remainder algorithm is $O(n^2)$) (since $r_(i + 2) < r_i\/2$).
]
#definition[
- *P* is complexity class of algorithms whose run time is at most polynomial time in $n$.
- *EXP* is complexity class of algorithms whose run time is at most exponential time in $n$. $"P" subset "EXP"$.
- *PSPACE* is class of algorithms which require space at most polynomial in $n$. $"P" subset.eq "PSPACE"$ (e.g. each line in circuit diagram is assumed to involve at least one gate).
- *NP* is complexity class of algorithms whose output can be verified to be correct in polynomial time, e.g. integer factorisation. Clearly $P subset.eq "NP"$.
- *NP-hard* problem is one such that, if you have an oracle for solving them, you can solve any NP problem in polynomial time (NP problems reduce polynomially to NP-hard problems).
- *NP-complete* is complexity class of problems which are NP-hard, e.g. travelling salesman.
- *PP* is class of algorithms which require time at most polynomial in $n$ to return correct answer with probability $> 1\/2$.
- *BPP* is class of algorithms which require time at most polynomial in $n$ to return correct answer with probability $> c > 1\/2$. $"P" subset.eq "BPP"$.
]
= Quantum circuits
#definition[
A *qubit* is a quantum system whose Hilbert space $H_1$ is $2$-dimensional, with basis ${ket(0), ket(1)}$. An $n$-qubit system has $2^n$-dimensional Hilbert space $H_n = H_1 tp dots.h.c tp H_1$. The *computational basis* for $H_n$ is $ {ket(0), ..., ket(2^n - 1)} $ where $ket(k) = ket((k_(n - 1)...k_0)_2)$ corresponds to $ket(k_(n - 1)) tp dots.h.c tp ket(k_0)$.
]
#definition[
*Quantum gate* is unitary map from $H_n$ to $H_n$.
]
#notation[
Let $X, Y, Z$ denote Pauli matrices $sigma_1, sigma_2, sigma_3$ respectively.
]
#notation[
A unitary $U: H_1 -> H_1$ is denoted
#quantum-circuit(
1, gate($U$), 1
)
]
#definition[
Define the gates $ S := mat(1, 0; 0, i), quad T := mat(1, 0; 0, e^(i pi\/4)), quad H := 1/sqrt(2) mat(1, 1; 1, -1) $ where $H$ is *Hadamard gate*. $S^2 = Z$, $T^2 = S$, $H^2 = I$. $H ket(0) = ket(+)$, $H ket(1) = ket(-)$.
]
#example[
Hadamard gate is useful when constructing uniform superpositions of all basis states: $ (H ket(0)) tp (H ket(0)) = 1/2 (ket(00) + ket(01) + ket(10) + ket(11)) $ and in general, $ (H ket(0)) tp dots.h.c tp (H ket(0)) = 1/(2^(n\/2)) sum_(k = 0)^(2^n - 1) ket(k) $
]
#definition[
*#CNOT* gate is $mat(I, 0; 0, X)$. Most significant bit is control bit, least significant bit is target bit.
#figure(quantum-circuit(
1, ctrl(1), 1, [\ ],
1, targ(), 1
))
]
#definition[
*Controlled-$U$* gate, $C$-$U$ maps $ket(0) tp ket(psi) = ket(0) tp ket(psi)$ and $ket(1) tp ket(psi) = ket(1) tp (U ket(psi))$.
#figure(quantum-circuit(
1, ctrl(1), 1, [\ ],
1, gate($U$), 1
))
]
#definition[
*CCNOT (Toffoli) gate* is $ mat(I, 0, 0, 0; 0, I, 0, 0; 0, 0, I, 0; 0, 0, 0, X) $
#figure(quantum-circuit(
1, ctrl(1), 1, [\ ],
1, ctrl(1), 1, [\ ],
1, targ(), 1
))
Note: Toffoli gate maps computational basis elements to computational basis elements, and computational basis elements are orthonormal.
]
#notation[
Measurement and classical bits are shown as e.g.
#figure(quantum-circuit(
1, meter(), setwire(2), 1, rstick($001$)
))
]
== Universal quantum computation
#proposition[
Every $N times N$ unitary can be written in terms of $U_(i j)$: "elementary" unitaries acting on $(i, i)$, $(i, j)$, $(j, i)$ and $(j, j)$ entries only, i.e. they are non-trivial in only one $2 times 2$ block (they act non-trivially on a two-dimensional subspace of the Hilbert space, spanned by two basis states $ket(i - 1)$ and $ket(j - 1)$).
]
#proposition[
$U$ is unitary iff its rows are orthonormal iff its columns are orthonormal (with respect to Hermitian inner product).
]
#example[
For unitary $ U = mat(a, d, g; b, e, h; c, f, j) $ we can find unitaries $U_1, U_2, U_3$ with $U_3 U_2 U_1 U = I$. Choose $U_1$ to have upper left $2 times 2$ block non-trivial and such that $ U_1 U = mat(a', d', g'; 0, e', h'; c', f', j') $ If $b = 0$, set $U_1 = I$. If $b != 0$, set $ U_1 = mat(alpha^*, beta^*, 0; beta, -alpha, 0; 0, 0, 1), quad alpha := a/sqrt(|a|^2 + |b|^2), beta = b/sqrt(|a|^2 + |b|^2) ==> beta a - alpha b = 0 $ Then set $ gamma = (a')/sqrt(|a'|^2 + |c'|^2), delta = (c')/sqrt(|a'|^2 + |c'|^2), quad U_2 = mat(gamma^*, 0, delta^*; 0, 1, 0; delta, 0, -gamma) ==> U_2 U_1 U = mat(1, 0, 0; 0, e'', h''; 0, f'', j'') =: U_3^dagger $ If $U in U(N)$ is unitary, then can find $N - 1$ unitaries $U_1, ..., U_(N - 1)$ where $U_i$ is non-trivial in first and $(i + 1)$th row such that $U_(N - 1) dots.h.c U_1 U$ has first row and first column $(1, 0, ..., 0)$ and non-trivial bottom-right $(N - 1) times (N - 1)$ block. So it can be reduced entirely by induction, to $1/2 N (N - 1)$ unitaries.
]
#remark[
$U$ acts on $n$ qubits so $N = 2^n$, so we need $approx 4^n$ elementary matrices, so complexity is exponential in number of qubits.
]
#example[
Any $4 times 4$ unitary can be written as product of 6 elementary unitaries: $ U = mat(*, *, 0, 0; *, *, 0, 0; 0, 0, 1, 0; 0, 0, 0, 1) mat(*, 0, *, 0; 0, 1, 0, 0; *, 0, *, 0; 0, 0, 0, 1) mat(*, 0, 0, *; 0, 1, 0, 0; 0, 0, 1, 0; *, 0, 0, *) mat(1, 0, 0, 0; 0, *, *, 0; 0, *, *, 0; 0, 0, 0, 1) mat(1, 0, 0, 0; 0, *, 0, *; 0, 0, 1, 0; 0, *, 0, *) mat(1, 0, 0, 0; 0, 1, 0, 0; 0, 0, *, *; 0, 0, *, *) $
]
#definition[
A *multiply-controlled unitary* is an $N times N$ unitary acting on subspace $"span"{ket(1...10), ket(1...11)}$. It applies a $2 times 2$ unitary to last qubit if all other qubits are $1$ and the identity otherwise.
]
#example[
Unitary acting on subspace $span{ket(1110), ket(1111)}$ is implemented as
#figure(quantum-circuit(
lstick($ket(q_3)$), 1, ctrl(1), 2, nl,
lstick($ket(q_2)$), 1, ctrl(1), 2, nl,
lstick($ket(q_1)$), 1, ctrl(1), 2, nl,
lstick($ket(q_0)$), 1, gate($U$), 2, nl,
))
]
#example[
If $i - 1$ and $j - 1$ differ in single bit, with all other bits $1$, this is multiply-controlled unitary with that bit as target, e.g. unitary acting on subspace $span{ket(1101), ket(1111)}$ is implemented as
#figure(quantum-circuit(
lstick($ket(q_3)$), 1, ctrl(1), 2, nl,
lstick($ket(q_2)$), 1, ctrl(1), 2, nl,
lstick($ket(q_1)$), 1, gate($U$), 2, nl,
lstick($ket(q_0)$), 1, ctrl(-1), 2, nl,
))
]
#example[
If $i - 1$ and $j - 1$ differ in single bit but others are not all $1$, use NOT gates to reverse the control bits which are $0$, e.g. unitary acting on $span{ket(0100), ket(0110)}$ is implemented as
#figure(quantum-circuit(
lstick($ket(q_3)$), 1, gate($X$), ctrl(1), gate($X$), 1, nl,
lstick($ket(q_2)$), 2, ctrl(1), 2, nl,
lstick($ket(q_1)$), 2, gate($U$), 2, nl,
lstick($ket(q_0)$), 1, gate($X$), ctrl(-1), gate($X$), 1, nl,
))
]
#definition[
*Gray code* between $(p_(n - 1)...p_0)$ and $(q_(n - 1)...q_0)$ is sequence of single bit flips that maps from $(p_(n - 1)...p_0)$ to $(q_(n - 1)...q_0)$, e.g. a Gray code for $111$ and $000$ is $111, 101, 001, 000$.
]
#remark[
Gray codes are not unique. (For practical reasons, it is easier to preserve the ordering between first and last, and penultimate and last items in the code.)
]
#example[
If $i - 1$ and $j - 1$ differ in multiple bits (e.g. $U_(i, j) = U_(8, 1)$, $i - 1 = 7 = (111)_2$, $j - 1 = 1 = (000)_2$), then use a Gray code to flip bits so that all apart from one are the same as $j - 1$. First bit flip $111 -> 101$ is implemented as #CCNOT. Second bit flip $101 -> 001$ is implemented as #CCNOT but if second qubit is $0$ instead of $1$. Then act with $U$ on subspace $span{ket(001), ket(000)}$ (i.e. on third qubit), then "undo" these CCNOT in reverse order:
#figure(quantum-circuit(
lstick($ket(q_2)$), ctrl(1), 1, targ(), 1, gate($X$), ctrl(1), gate($X$), 1, targ(), 1, ctrl(1), 1, nl,
lstick($ket(q_1)$), targ(), gate($X$), ctrl(-1), gate($X$), gate($X$), ctrl(1), gate($X$), gate($X$), ctrl(-1), gate($X$), targ(), 1, nl,
lstick($ket(q_0)$), ctrl(-1), 1, ctrl(-1), 2, gate($U$), 2, ctrl(-1), 1, ctrl(-1), 1
))
]
#example[
We can implement any multiply-controlled unitary with controlled-unitary (single control qubit) and CCNOT gates, by introducing ancillary bits. e.g. to implemented the multiply-controlled unitary acting on $q_0$ if $q_1 = q_2 = q_3 = 1$, use ancillary qubits $ket(a_1)$, $ket(a_2)$ (initially set to $0$):
#figure(quantum-circuit(
lstick($ket(q_3)$), ctrl(1), 3, ctrl(1), 1, nl,
lstick($ket(q_2)$), ctrl(2), 3, ctrl(2), 1, nl,
lstick($ket(q_1)$), 1, ctrl(1), 1, ctrl(1), 2, nl,
lstick($ket(a_2)$), targ(), ctrl(1), 1, ctrl(1), targ(), 1, nl,
lstick($ket(a_1)$), 1, targ(), ctrl(1), targ(), 2, nl,
lstick($ket(q_0)$), 2, gate($U$), 3
))
]
#proposition[
CCNOT can be implemented with $H$ (Hadamard) and $T$ gates (and their Hermitian conjugates).
]
#lemma[
Any single qubit unitary $U$ can be written as $U = e^(i alpha) A X B X C$ with $A, B, C$ single-qubit ($2 times 2$) unitaries, $A B C = 1$, $alpha in RR$. In particular, C-$U$ can be implemented as
#figure(quantum-circuit(
lstick($ket(q_1)$), 1, ctrl(1), 1, ctrl(1), gate($mat(1, 0; 0, e^(i alpha))$), 1, nl,
lstick($ket(q_0)$), gate($C$), targ(), gate($B$), targ(), gate($A$), 1
))
]
#corollary[
Any unitary can be implemented with single-qubit unitaries and CNOT.
]
#remark[
Number of elementary unitaries $U_i$ needed is $O(2^(2n))$. Gray code requires $O(n)$ $CnNOT(n - 1)$ gates, and representing these multiply-controlled unitaries as controlled-unitaries requires $O(n)$ CCNOT gates, so overall $U$ is represented as $O(n^2 2^(2n))$ operations.
]
#definition[
*BQP (bounded-error quantum polynomial)* decision problems are those which a unitary operation solves with probability of success $p > c$, with $c > 1/2$ a fixed constant (conventionally, $c = 2/3$), with polynomial growth in resources (i.e. number of CNOT and single-qubit unitary gates) as $n$ (number of qubits) is increased.
]
#note[
$"BPP" subset.eq "BQP"$, since any classical computation can be written in terms of CCNOT and CCNOT has fixed quantum cost. A source of randomness is the following circuit, giving $0$ and $1$ each with probability $1\/2$:
#figure(quantum-circuit(
lstick(ket(0)), 2, gate($H$), 2, meter(), setwire(2), 2
))
]
== Measurement
#note[
We can always measure using the Pauli $Z$ operator (so measure in the computational basis). To measure in different basis, act with a unitary to transform desired basis into computational basis, then measure in computational basis, then transform back to desired basis.
]
#example[
Let $U$ be single-qubit operator, with eigenvalues $plus.minus 1$, so it is Hermitian and unitary. Measuring $U$ can be achieved with the following circuit:
#figure(quantum-circuit(
lstick($ket(0)$), 1, gate($H$), 1, ctrl(1), 1, gate($H$), 1, meter(), setwire(2), 1, nl,
lstick($ket(psi)$), 3, gate($U$), 5
))
Acting with $H$ maps $ket(0) tp ket(psi) |-> 1/sqrt(2) (ket(0) + ket(1)) tp ket(psi)$. Acting with controlled-$U$ gives $1/sqrt(2) (ket(0) tp ket(psi) + ket(1) tp U ket(psi))$. Acting with $H$ again gives output $ 1/2 ((ket(0) + ket(1)) tp ket(psi) + (ket(0) - ket(1)) tp U ket(psi)) = 1/2 ket(0) tp (I + U) ket(psi) + 1/2 ket(1) tp (I - U) ket(psi) $ But $1/2 (I + U)$ is projector to $+1$ eigenspace of $U$, $1/2 (1 - U)$ is projector to $-1$ eigenspace of $U$, so if $ket(psi) = alpha ket(U_+) + beta ket(U_-)$, with $U ket(U_(plus.minus)) = plus.minus U_(plus.minus)$ then output is $ alpha ket(0) tp ket(U_+) + beta ket(1) tp ket(U_-) $ So result of measurement is $0$ with probability $|alpha|^2$, which collapses state to $ket(0) tp ket(U_+)$, and $1$ with probability $|beta|^2$, which collapses state to $ket(1) tp ket(U_-)$.
]
= Quantum error correction
#note[
We assume that an error only affects a single qubit.
]
== Correcting single bit flips
#definition[
A *code subspace* is a two-dimensional subspace of an $n$-qubit Hilbert space, in which the logical qubits live, such that each possible error (being considered) maps states in the code subspaces into a distinct two-dimensional subspace, and all of these error subspaces and the codespace are orthogonal.
]
#example[
Assume only error that can occur is flip of single qubit (same as classical case), i.e. each qubit has probability $p$ of $X$ gate being applied. We encode the state in a *code subspace*. Each qubit is encoded as 3 qubits: the *logical qubit* $ket(overline(0))$ is encoded as the _physical_ state $ket(000)$, $ket(overline(1))$ is encoded as $ket(111)$. So $ket(psi) = alpha ket(0) + beta ket(1)$ is mapped to $alpha ket(000) + beta ket(111)$, in the subspace $span{ket(000), ket(111)}$ of the Hilbert space of 3 qubits. The embedding is implemented as
#figure(quantum-circuit(
lstick($ket(q_2) = ket(psi)$), ctrl(1), ctrl(2), 1, nl,
lstick($ket(q_1) = ket(0)$), targ(), 2, nl,
lstick($ket(q_0) = ket(0)$), 1, targ(), 1
))
Single bit flip can map this state to $ alpha ket(001) + beta ket(110), quad alpha ket(010) + beta ket(101), quad alpha ket(100) + beta ket(011) $ which are all orthogonal to original state and each other. So different errors map to different orthogonal subspaces, hence we can make measurement to determine which subspace it is without affecting the $alpha$, $beta$ coefficients.
*Error syndromes* are operators with eigenspaces as the different subspaces, each with distinct eigenvalue. In this case, choose syndromes formed from $Z$ operator (this has eigenvalue $1$ for $ket(0)$, $-1$ for $ket(1)$). Let $Z_0 = I tp I tp Z$, $Z_1 = I tp Z tp I$, $Z_2 = Z tp I tp I$, then $
Z_0 Z_1 ket(000) = ket(000), quad Z_0 Z_1 ket(111) = ket(111), & quad Z_0 Z_2 ket(000) = ket(000), quad Z_0 Z_2 ket(111) = ket(111) \
Z_0 Z_1 ket(001) = -ket(001), quad Z_0 Z_1 ket(110) = -ket(110), & quad Z_0 Z_2 ket(001) = -ket(001), quad Z_0 Z_2 ket(110) = -ket(110) \
Z_0 Z_1 ket(010) = -ket(010), quad Z_0 Z_1 ket(101) = -ket(101), & quad Z_0 Z_2 ket(010) = ket(010), quad Z_0 Z_2 ket(101) = ket(101) \
Z_0 Z_1 ket(100) = ket(100), quad Z_0 Z_1 ket(011) = ket(011), & quad Z_0 Z_2 ket(100) = -ket(100), quad Z_0 Z_2 ket(011) = -ket(011)
$ So $span{ket(000), ket(111)}$ is $(1, 1)$ eigenspace, $span{ket(001), ket(110)}$ is $(-1, -1)$ eigenspace, $span{ket(010), ket(101)}$ is $(-1, 1)$ eigenspace, $span{ket(100), ket(011)}$ is $(1, -1)$ eigenspace. So if $ket(psi)$ is mapped to $ (1 - epsilon) ket(psi) + delta_2 X_2 ket(psi) + delta_1 X_1 ket(psi) + delta_0 X_0 ket(psi) $ then we measure $Z_0 Z_1$ and $Z_0 Z_2$, which collapses state to either $ ket(psi), quad X_2 ket(psi), quad X_1 ket(psi), quad X_0 ket(psi) $ Since the eigenvalues for this combination of measurements are distinct, they tell us which state $ket(psi)$ has been projected to. So can apply $I, X_2, X_1$ or $X_0$ to map back to $ket(psi)$. This can be implemented as
#figure(quantum-circuit(
lstick($ket(q_2)$), 3, ctrl(4), 6, targ(), targ(), 2, nl,
lstick($ket(q_1)$), 1, ctrl(2), 5, targ(), targ(), 5, nl,
lstick($ket(q_0)$), ctrl(1), 1, ctrl(2), 2, targ(), 8, nl,
lstick($ket(a_1) = ket(0)$), targ(), targ(), 3, ctrl(-1), 1, ctrl(-2), ctrl(-2), 2, ctrl(-3), meter(), setwire(2), rstick($Z_1 Z_0$), nl,
lstick($ket(a_0) = ket(0)$), 2, targ(), targ(), 1, ctrl(-1), 2, ctrl(-1), 1, ctrl(-4), ctrl(-4), meter(), setwire(2), rstick($Z_2 Z_0$)
))
where the measurements are to reset the ancilla so they can be reused.
]
#note[
We cannot use less than 3 qubits, since to encode with $n$ qubits, we need $n + 1$ orthogonal two-dimensional subspaces, which is possible in $2^n$-dimensional $n$ qubit Hilbert space iff $2^n >= 2(n + 1)$.
]
== Correcting general single qubit errors
#remark[
General error consists of acting with unitary operation $U_i$ on single physical qubit. Can use Bloch sphere rotation representation to write $ U_i = e_i I + a_i X_i + b_i Y_i + c_i Z_i $ So if state $ket(psi)$ is single logical qubit encoded in $n$-qubit Hilbert space, action of single qubit error on qubit $i$ transforms $ket(psi)$ to $ (1 - epsilon) ket(psi) + a_i X_i ket(psi) + b_i Y_i ket(psi) + c_i Z_i ket(psi) quad "for some" i $ If error depends of state of environment, state after errors occurs is entangled: $ ket(e_1) tp ket(psi) + sum_(i = 1)^n ket(e_(2i)) tp X_i ket(psi) + ket(e_(3i)) tp Y_i ket(psi) + ket(e_(4i)) tp Z_i ket(psi) $ (This is linear superpositon of single qubit errors). Measuring chosen error syndromes projects qubits to one of the subspaces, so state becomes one of $ ket(psi), quad X_i ket(psi), quad Y_i ket(psi), quad Z_i ket(psi) $ $3n + 1$ 2d subspaces are needed (corresponding to $3n$ single-qubit errors and original state), so we require $ 2^n >= 2(3n + 1) $ which is saturated by $n = 5$.
]
#remark[
In terms of errors, $X$ is a single bit flip, $Z$ is a phase flip ($alpha ket(0) + beta ket(1) -> alpha ket(0) - beta ket(1)$), $Y = i X Z$ is composition of both.
]
#definition[
We define a *coding* $c: H_1 -> H_n$, $ket(overline(0)) = c(ket(0))$, $ket(overline(1)) = c(ket(1))$.
]
#definition[
*Steane code* is coding using $7$ qubits, which uses the syndromes $ M_0 := X_0 X_4 X_5 X_6, quad M_1 & := X_1 X_3 X_5 X_6, quad M_2 := X_2 X_3 X_4 X_6, \ N_0 := Z_0 Z_4 Z_5 Z_6, quad N_1 & := Z_1 Z_3 Z_5 Z_6, quad N_2 := Z_2 Z_3 Z_4 Z_6 $ which all commute, so have simultaneous eigenstates. Code subspace is spanned by $
ket(overline(0)) & = 1/(2^(3\/2)) (1 + M_0)(1 + M_1)(1 + M_2) ket(0000000), \
ket(overline(1)) & = 1/(2^(3\/2)) (1 + M_0)(1 + M_1)(1 + M_2) ket(1111111)
$
]
#remark[
$M_j^2 = I$ so $M_j (1 + M_j) = 1 + M_j$ so $ket(overline(0)), ket(overline(1))$ are eigenstates of each $M_j$ with eigenvalue $1$. $ket(overline(0)), ket(overline(1))$ are also eigenstates of each $N_k$ with eigenvalue $1$. Each $M_j$ commutes with each $X_i$, and $ X_i Z_j = cases(Z_j X_i & "if" i != j, -Z_j X_i & "if" i = j) quad ==> quad X_i N_j = cases(N_j X_i & "if" N_j "does not contain" Z_i, -N_j X_i & "if" N_j "contains" Z_i) $ Hence $M_j X_i (alpha ket(overline(0)) + beta ket(overline(1))) = X_i M_j (alpha ket(overline(0)) + beta ket(overline(1))) = X_i (alpha ket(overline(0)) + beta ket(overline(1)))$ so this has eigenvalue $1$ for all $M_j$, and $ N_j X_i (alpha ket(overline(0)) + beta ket(overline(1))) = cases(X_i N_j (alpha ket(overline(0)) + beta ket(overline(1))) = X_i (alpha ket(overline(0)) + beta ket(overline(1))) & "if" Z_i in.not N_j ==> "eigenvalue" 1, -X_i N_j (alpha ket(overline(0)) + beta ket(overline(1))) = -X_i (alpha ket(overline(0)) + beta ket(overline(1))) & "if" Z_i in N_j ==> "eigenvalue" -1) $
For bit flips $X_i$:
#figure(table(
columns: (auto, auto, auto, auto, auto, auto, auto, auto),
$i$, $0$, $1$, $2$, $3$, $4$, $5$, $6$,
$(M_0, M_1, M_2)$, $(1, 1, 1)$, $(1, 1, 1)$, $(1, 1, 1)$, $(1, 1, 1)$, $(1, 1, 1)$, $(1, 1, 1)$, $(1, 1, 1)$,
$(N_0, N_1, N_2)$, $(-1, 1, 1)$, $(1, -1, 1)$, $(1, 1, -1)$, $(1, -1, -1)$, $(-1, 1, -1)$, $(-1, -1, 1)$, $(-1, -1, -1)$
))
For phase flips (sign errors) $Z_i$:
#figure(table(
columns: (auto, auto, auto, auto, auto, auto, auto, auto),
$i$, $0$, $1$, $2$, $3$, $4$, $5$, $6$,
$(M_0, M_1, M_2)$, $(-1, 1, 1)$, $(1, -1, 1)$, $(1, 1, -1)$, $(1, -1, -1)$, $(-1, 1, -1)$, $(-1, -1, 1)$, $(-1, -1, -1)$,
$(N_0, N_1, N_2)$, $(1, 1, 1)$, $(1, 1, 1)$, $(1, 1, 1)$, $(1, 1, 1)$, $(1, 1, 1)$, $(1, 1, 1)$, $(1, 1, 1)$,
))
Since $X Y = -Y X$, $Z Y = -Y Z$, $ M_j Y_i = cases(Y_i M_j & "if" X_i in.not M_j, -Y_i M_j & "if" X_i in M_j), quad cases(Y_i N_j & "if" Z_i in.not M_j, -Y_i M_j & "if" Z_i in M_j) $ For errors $Y_i$:
#figure(table(
columns: (auto, auto, auto, auto, auto, auto, auto, auto),
$i$, $0$, $1$, $2$, $3$, $4$, $5$, $6$,
$(M_0, M_1, M_2)$, $(-1, 1, 1)$, $(1, -1, 1)$, $(1, 1, -1)$, $(1, -1, -1)$, $(-1, 1, -1)$, $(-1, -1, 1)$, $(-1, -1, -1)$,
$(N_0, N_1, N_2)$, $(-1, 1, 1)$, $(1, -1, 1)$, $(1, 1, -1)$, $(1, -1, -1)$, $(-1, 1, -1)$, $(-1, -1, 1)$, $(-1, -1, -1)$,
))
]
#example[
If $(M_j, N_j)$ measured and eigenvalues are $(1, 1, 1)$, $(1, -1, -1)$ then error is $X_3$, and we correct it by applying $X_3^(-1) = X_3$.
]
== Fault-tolerant gates
#definition[
A gate $overline(U)$ is *fault-tolerant* if, when there is error $U_i$ on single physical qubit before the unitary operation, acting with the unitary produces state which differs from desired state only by a single-qubit error $V_j$, i.e. $ overline(U) U_i ket(psi) = V_j overline(U) ket(psi) $ This equivalent to $overline(U)$ mapping each eigenspace of the error syndromes to some eigenspace of the error syndromes.
]
#definition[
A logical gate $overline(G)$ is *transversal* if it is a tensor product of single qubit gates.
]
#proposition[
Every transversal gate is fault tolerant.
]
#example[
The operation $overline(X)$, acting on the logical Hilbert space $H_7$, acts as the #NOT operator on the code subspace $span{ket(0), ket(1)}$: $ overline(X) = X_6 X_5 X_4 X_3 X_2 X_1 X_0, quad ==> overline(X) ket(overline(0)) = ket(overline(1)), quad overline(X) ket(overline(1)) = ket(overline(0)) $
]
#example[
The operation $overline(Z) = Z_6 Z_5 Z_4 Z_3 Z_2 Z_1 Z_0$ commutes with each $M_i$ and leaves $ket(0000000)$ invariant so leaves $ket(overline(0))$ invariant. $overline(Z)$ anti-commutes with $overline(X)$ so acts within the code subspace and $overline(Z) ket(overline(0)) = ket(overline(1))$, $overline(Z) ket(overline(1)) = -ket(overline(1))$, so acts as Pauli $Z$ on logical qubits.
]
#example[
$overline(H) = H_6 H_5 H_4 H_3 H_2 H_1 H_0$ realises the Hadamard gate on logical qubits: $ overline(H) ket(overline(0)) = 1/sqrt(2) (ket(overline(0)) + ket(overline(1))), quad overline(H) ket(overline(1)) = 1/sqrt(2) (ket(overline(0)) - ket(overline(1))) $ We have $H X H = Z$ so $H_i X_i = Z_i H_i$, thus $ M_j overline(H) ket(psi) = overline(H) N_j ket(psi), quad N_j overline(H) ket(psi) = overline(H) M_j ket(psi) $ Hence if $ket(psi)$ is in an eigenspace of $M_j$ and $N_j$, $overline(H) ket(psi)$ also lies in an eigenspace of $M_j$ and $N_j$ but with the eigenvalues of $M_j$ and $N_j$ swapped. This means $overline(H)$ preserves the code subspace, so $overline(H) ket(overline(0))$ and $overline(H) ket(overline(1))$ lie in the code subspace. Now $ overline(H) ket(overline(0)) = overline(H) 1/2^(3\/2) (1 + M_0)(1 + M_1)(1 + M_2) ket(0000000) = 1/2^(3\/2) (1 + N_0)(1 + N_1)(1 + N_2) overline(H) ket(0000000) $ $overline(H)$ maps $ket(0000000)$ to uniform superposition of all computational basis states, and $1 + N_j$ is projector onto $+1$ eigenspace of $N_j$, so we have the component of the uniform superposition which lies in the code subspace, i.e. $ overline(H) ket(overline(0)) = 1/sqrt(2) (ket(overline(0)) + ket(overline(1))) $ Similarly, $ overline(H) ket(overline(1)) = overline(H) 1/2^(3\/2) (1 + M_0)(1 + M_1)(1 + M_2) ket(1111111) = 1/2^(3\/2) (1 + N_0)(1 + N_1)(1 + N_2) overline(H) ket(1111111) $ $overline(H)$ maps $ket(1111111)$ to uniform superposition of all computational basis states, with each state with an odd number of $1$'s negated, hence $ overline(H) ket(overline(1)) = 1/sqrt(2) (ket(overline(0)) - ket(overline(1))) $ as all computational basis states in $ket(overline(1))$ have odd number of $1$'s.
]
#example[
If two logical qubits are encoded with 14 physical qubits using Steane code, a logical #CNOT can be implemented as $ overline(CNOT) = product_(i = 1)^7 C_i "NOT"_i $ where $C_i "NOT"_i$ is #CNOT with $i$th qubit in first logical qubit as control and $i$th qubit in second logical qubit as target.
]
#theorem(name: "<NAME>")[
Not all gates in a UGS can be transversal.
]
= Quantum algorithms
== Simon's algorithm
#definition[
*Bitwise addition* of $a$ and $b$ is $a xor b = c$ where $c_i = a_i + b_i mod 2$.
]
#definition[
*Simon's problem* is: given an $n$-bit function $f: {0, 1}^n -> {0, 1}^n$, with $f(x xor a) = f(x)$ for all $x$ ($a != 0$) and $f(x) != f(y)$ otherwise, determine the period $a$.
]
#example[
Let $f: {0, 1}^n -> {0, 1}^n$ be $n$-bit function with period $a != 0$, so $f(x xor a) = f(x)$ and $f(x) != f(y)$ otherwise. To determine $a$ classically, we compute $f(x_i)$ until we find two values with $f(x_i) = f(x_j)$, then $a = x_i xor x_j$. After $m$ values are computed, we know $a != x_i xor x_j$ for all $i, j <= m$, so at most $1/2 m(m - 1)$ values are eliminated. There are $2^n - 1$ values for $a$, so this has complexity $O(2^(n\/2))$.
]
#definition[
*Bitwise product* of $x = (x_(n - 1)...x_0)_2$ and $y = (y_(n - 1)...y_0)_2$ is $ x dot.op y = x_(n - 1) y_(n - 1) dots.h.c + x_0 y_0 quad mod 2 $
]
#proposition[
$ H^(tp n) ket(0) & = 1/2^(n\/2) sum_(k = 0)^(2^n - 1) ket(k) quad "and" \ H^(tp n) ket(x) & = times.circle.big_(i = 0)^(n - 1) 1/sqrt(2) ((-1)^(0 dot.op x_i) ket(0) + (-1)^(1 dot.op x_i) ket(1)) = 1/2^(n\/2) sum_(k = 0)^(2^n - 1) (-1)^(k dot x) ket(k) $
]
#algorithm(name: "Simon's algorithm")[
Define the unitary operator $U_f$ acting on $n$ input qubits $ket(x)$ and $n$ output qubits $ket(m)$: $ U_f ket(x) ket(m) = ket(x) ket(m xor f(x)) $
+ Start with system in state $ket(0)_n tp ket(0)_n$ where $ket(0)_n = ket(00...0)$.
+ Apply $H^(tp n) tp I$ (i.e. acting on input qubits) to give $ 1/(2^(n\/2)) sum_(k = 0)^(2^n - 1) ket(k) tp ket(0)_n $
+ Apply $U_f$ to give $ 1/2^(n\/2) sum_(k = 0)^(2^n - 1) ket(k) tp ket(0 xor f(k)) = 1/2^(n\/2) sum_(k = 0)^(2^n - 1) ket(k) tp ket(f(k)) $
+ Measure the ancillary bits (the $ket(f(k))$) in the computational basis, yielding a random $f(x_0)$. The state collapses to $ 1/sqrt(2) (ket(x_0) + ket(x_0 xor a)) tp ket(f(x_0)) $
+ Discard ancillary bits and apply $H^(tp n)$ to input bits $1/sqrt(2) (ket(x_0) + ket(x_0 xor a))$ to give $
H^(tp n) 1/sqrt(2) (ket(x_0) + ket(x_0 xor a)) & = 1/sqrt(2) (H^(tp n) ket(x_0) + H^(tp n) ket(x_0 xor a)) \
& = 1/sqrt(2) 1/2^(n\/2) (sum_(k = 0)^(2^n - 1) (-1)^(k dot.op x_0) ket(k) + sum_(k = 0)^(2^n - 1) (-1)^(k dot.op (x_0 xor a)) ket(k)) \
& = 1/2^((n + 1)\/2) sum_(k = 0)^(2^n - 1) (-1)^(k dot.op x_0) (1 + (-1)^(k dot.op a)) ket(k) \
& = 1/2^((n - 1)\/2) sum_(k = 0: \ k dot a = 0)^(2^n - 1) (-1)^(k dot.op x_0) ket(k)
$
+ Measure the state in the computational basis, which gives $k in {0, ..., 2^n - 1}$ such that $k dot a = 0 thick mod 2$.
+ $a$ satisfies $n$ linearly independent equations of the form $k dot a = 0 thick mod 2$, so $O(n)$ measurements ($O(n)$ values of $k$) are needed to obtain all bits of $a$.
This can be implemented as
#figure(quantum-circuit(
lstick($ket(q_1)$), gate($H$), mqgate($U_f$, n: 2), gate($H$), meter(), setwire(2), 1, nl,
lstick($ket(q_0) = ket(0)$), 2, meter(), setwire(2), 2
))
]
#example[
Let $f: {0, 1}^3 -> {0, 1}^3$, $a = 010$, and $ f(000) & = f(010) = x, quad f(001) = f(011) = y, \ f(100) & = f(110) = z, quad f(101) = f(111) = w $ Using Simon's algorithm:
- Applying $H^(tp 3)$ to $ket(000) tp ket(000)$ gives $ 1/(2 sqrt(2)) (ket(000) + ket(001) + ket(010) + ket(011) + ket(100) + ket(101) + ket(110) + ket(111)) tp ket(000) $
- Applying $U_f$ gives $ 1/(2 sqrt(2)) ((ket(000) + ket(010)) tp ket(x) + (ket(001) + ket(011)) tp ket(y) + (ket(100) + ket(110)) tp ket(z) + (ket(101) + ket(111)) tp ket(w)) $
- Measure the ancillary bits, assuming it yields value corresponding to $ket(x)$, so state has collapsed to $ 1/sqrt(2) (ket(000) + ket(010)) tp ket(x) $
- Apply $H^(tp 3)$ to the input bits, giving $ 1/2^((3 - 1)\/2) sum_(k = 0: \ k dot a = 0)^(2^3 - 1) ket(k) = 1/2 (ket(000) + ket(001) + ket(101) + ket(100)) $
- Measuring $000$ gives no information. Measuring $001$ implies that $a_0 = 0$. Measuring $010$ implies that $a_1 = 0$. Measuring $101$ implies that $a_0 + a_2 = 0$. So measuring the last three imply $a = 010$ (since $a != 000$).
]
== Quantum Fourier transform
#definition[
*Quantum Fourier transform* is unitary operation $U_"FT"$ acting on the $n$ qubit space $H_n$, given by action on computational basis states: $ U_"FT" ket(x) = 1/2^(n\/2) sum_(y = 0)^(2^n - 1) e^(2 pi i x y \/ 2^n) ket(y) $ It is quantum version of the discrete Fourier transform: by linearity, if $ket(psi) = sum_(x = 0)^(2^n - 1) psi_x ket(x)$ and $ket(phi) = U_"FT" ket(psi) = sum_(y = 0)^(2^n - 1) phi_y ket(y)$, then $ phi_y = 1/2^(n\/2) sum_(x = 0)^(2^n - 1) e^(2pi i x y \/2^n) psi_x $ Note this is precisely the discrete Fourier transform on the vector $psi_x$.
]
#note[
Can check $U_"FT"$ is unitary by checking $U_"FT" ket(x)$ has norm $1$ and $U_"FT" ket(x)$ orthogonal to $U_"FT" ket(x')$ for $x != x'$ (i.e. it preserves the inner product).
]
#example[
Note that classically, computing $phi_y$ requires $2^n$ additions. If $y = y_(n - 1)...y_0$, i.e. $y = y_(n - 1) 2^(n - 1) + dots.h.c + y_0$, then $ e^(2 pi i x y\/2^n) = product_(l = 0)^(n - 1) e^(2pi i x y_l \/ 2^(n - l)) $ which gives $ U_"FT" ket(x) = 1/2^(n\/2) sum_(y = 0)^(2^n - 1) e^(2pi i x y\/2^n) ket(y) = 1/2^(n\/2) times.circle.big_(l = 0)^(n - 1) (ket(0) + e^(2pi i x \/2^(n - l)) ket(1)) $ Note this is similar to $ H^(tp n) ket(x) = times.circle.big_(i = 0)^(n - 1) 1/sqrt(2) (ket(0)_i + (-1)^(x_i) ket(1)_i) = sum_(y = 0)^(2^n - 1) (-1)^(x dot y) ket(y) $ However, for $U_"FT"$, phases in individual qubit states depend on $x$, not just $x_l$, so $U_"FT"$ cannot be realised only by single-qubit operations. Now also $ e^(2pi x \/2^(n - l)) = e^(2pi i(x_(n - 1) 2^(l - 1) + dots.h.c + x_0 2^(l - n))) = product_(m = 0)^(n - 1) e^(2pi i x_m \/2^(n - l - m)) = product_(m = 0)^(n - l - 1) e^(2pi i x_m\/2^(n - l - m)) $ since $e^(2pi i r) = 1$ for $r in ZZ$. So phase for $l = n - 1$ only depends on $x_0$, phase for $l = n - 2$ only depends on $x_0$ and $x_1$: $ U_"FT" ket(x) = 1/2^(n\/2) (ket(0) + e^(i pi x_0) ket(1)) tp (ket(0) + e^(i pi x_1) e^(i pi x_0 \/2)) tp dots.c $ When $U_"FT"$ is realised, order of qubits is reversed. QFT can be implemented of controlled-phase gates, where we apply unitaries $ R_k = mat(1, 0; 0, e^(i pi\/2^k)) $ Each qubit $i$ has controlled-$R_k$ applied, controlled by each qubit $j < i$, where $k = i - j$. E.g. for $n = 3$,
#figure(quantum-circuit(
lstick($ket(q_2)$), gate($H$), gate($S$), gate($T$), 3, swap(2), 1, nl,
lstick($ket(q_1)$), 1, ctrl(-1), 1, gate($H$), gate($S$), 3, nl,
lstick($ket(q_0)$), 2, ctrl(-2), 1, ctrl(-1), gate($H$), swap(-2), 1
))
where $times$ indicates swapping qubits. One gate is applied to qubit $0$, two gates applied to qubit $1$, ..., $n$ gates applied to qubit $n - 1$, so total number of gates required to implemented QFT is $O(n^2)$
]
== Shor's algorithm
#example[
Given $N in NN$, pick random $1 < y < N$. If $gcd(y, N) != 1$, we can find a divisor of $N$. If $gcd(y, N) = 1$, define $ f_y: ZZ -> ZZ\/N, quad f_y (a) = y^a quad mod N $ Period of $f_y$ is smallest $r in NN$ such that $f_y (r) = 1$. We have $f_y (a) = f_y (b)$ iff $a - b = 0 mod r$. Let $r$ be even (if $r$ odd, start again with different $y$). Now $ y^r - 1 = 0 mod N ==> (y^(r\/2) - 1)(y^(r\/2) + 1) = 0 mod N $ If either factor on LHS is multiple of $N$, start again with different $y$. Otherwise, we know $y^(r\/2) - 1$ and $N$ have common factor $< N$, and so use Euclid's algorithm to find $gcd(y^(r\/2) - 1, N)$.
]
#algorithm(name: "Shor's algorithm")[
- Shor's algorithm finds the smallest $r > 0$ such that $y^r equiv 1 mod N$.
- Start with state $ket(0)_n ket(0)_(n_0)$ where $n_0 = ceil(log_2 (N))$, $n = 2n_0$.
- Act with $H^(tp n)$ on input bits, giving $ 1/2^(n\/2) sum_(x = 0)^(2^n - 1) ket(x) tp ket(0) $
- Act with $U_f$ (where $U_f ket(x) ket(m) = ket(x) ket(m xor f(x))$), giving $ 1/2^(n\/2) sum_(x = 0)^(2^n - 1) ket(x) tp ket(f(x)) $
- Measure the output bits, yielding a random value $f(x_0)$ (assume WLOG that $x_0 < r$), which projects the state to $ 1/sqrt(Q + 1) sum_(m = 0)^Q ket(x_0 + m r) ket(f(x_0)) $ where $Q = |{i in {0, ..., 2^n - 1}: f(i) = f(x_0)}|$ which is approximately the largest integer strictly less than $2^n\/r$. Shift by random $x_0$ means we can't learn anything about $r$ by measuring input bits. Discard output bits.
- Perform QFT on input bits, giving $ U_"FT" 1/sqrt(Q + 1) sum_(m = 0)^Q ket(x_0 + m r) & = 1/sqrt(Q + 1) sum_(m = 0)^Q 1/2^(n\/2) sum_(y = 0)^(2^n - 1) e^(2pi i(x_0 + m r)y\/2^n) ket(y) \ & = 1/2^(n\/2) sum_(y = 0)^(2^n - 1) e^(2pi i x_0 y\/2^n) (1/sqrt(Q + 1) sum_(m = 0)^Q e^(2pi i m r y\/2^n)) ket(y) $
- Measure input bits in the computational basis. Probability that this yields value $y$ is $ p(y) & = abs(1/sqrt(Q + 1) 1/2^(n\/2) e^(2pi i x_0 y\/2^n) (sum_(m = 0)^Q e^(2pi i m r y\/2^n)))^2 \ & = 1/(2^n (Q + 1)) abs(sum_(m = 0)^Q e^(2pi i m r y\/2^n))^2 \ & = 1/(2^n (Q + 1)) abs((e^(2 pi i r y (Q + 1)\/2^n) - 1)/(e^(2 pi i r y\/2^n) - 1))^2 \ & = 1/(2^n (Q + 1)) abs((e^(pi i r y (Q + 1)\/2^n) (e^(pi i r y (Q + 1)\/2^n) - e^(-pi i r y(Q + 1)\/2^n)))/(e^(pi i r y\/2^n) (e^(pi i r y\/2^n) - e^(-pi i r y\/2^n))))^2 \ & = 1/(2^n (Q + 1)) (sin^2 (pi r y(Q + 1)\/2^n))/(sin^2 (pi r y\/2^n)) $
- When $r y\/2^n in ZZ$, we have $p(y) = 1/(2^n (Q + 1)) abs(sum_(m = 0)^Q 1)^2 = (Q + 1)\/2^n$. Now $Q + 1 approx 2^n\/r$ so $p(y) approx 1\/r$.
- If $r y\/2^n in.not ZZ$ (and not close to being an integer), then $sum_(m = 0)^Q e^(2pi i m r y\/2^n) < 1$ (typically a small value since phases do not add coherently) and $p(y) approx 1\/(2^n (Q + 1)) approx r\/4^n$. Note $r <= N < 2^(n_0) << 2^n$ implies that summing over all the approximately $2^n$ possibly values of $y$ gives $ sum_(y: r y\/2^n in.not ZZ) p(y) approx 2^n r\/4^n approx r\/2^n << 1 $
- Hence it is likely to measure $y$ such that $r y\/2^n$ is approximately an integer. Equivalently, $y\/2^n = j\/r$ for some $j in ZZ$.
- With high probability, $y$ will be the nearest integer to a multiple of $2^n\/r$, i.e. within $1\/2$ of $j 2^n\/r$, so $ abs(y/2^n - j/r) <= 1/2^(n + 1) <= 1/(2N^2) $ since $N <= 2^(n_0) = 2^(n\/2)$. There is unique fraction $j\/r$ with $r < N$ satisfying this (by triangle inequality), as $ abs(j_1/r_1 - j_2/r_2) >= 1/(r_1 r_2) > 1/N^2 $ unless $j_1/r_1 = j_2/r_2$ (this is why $n = 2n_0$ is chosen). $j/r$ can be obtained from $y\/2^n$ via continued fractions.
- If $j$ and $r$ have common divisor, we obtain $r_0 = r\/gcd(j, r)$ instead of $r$. But given the guess $r_0$, we can check if $r_0$ is the period by checking if $f(r_0) = 1$, and if not try $f(2r_0), f(3r_0), ...$. If these fail, run algorithm again. Probability of $j$ and $r$ having common divisor is $< 1\/2$.
] |
|
https://github.com/SWATEngineering/Docs | https://raw.githubusercontent.com/SWATEngineering/Docs/main/src/2_RTB/PianoDiProgetto/sections/Pianificazione.typ | typst | MIT License | #import "../functions.typ": glossary, team
= Pianificazione
Stabilite come #glossary("milestone") principali le #glossary("baseline") corrispondenti alle revisioni del progetto, gli #glossary[sprint] consentono di mantenere un approccio agile e permettono di essere reattivi rispetto ai cambiamenti nelle esigenze o nei requisiti del progetto. Ciascuno #glossary[sprint] contribuirà in modo incrementale al raggiungimento delle #glossary[milestone] e al successo complessivo del progetto.
Sulla base delle scadenze riportate nella sezione *Scadenze* si pianificano:
- 2 #glossary[sprint] di due settimane e 6 #glossary[sprint] da una settimana per arrivare alla revisione #glossary[RTB]\;
- 6 #glossary[sprint] della durata di una settimana per arrivare alla revisione #glossary[PB]\;
- 2 #glossary[sprint] della durata di una settimana per arrivare alla revisione #glossary[CA]\.
Il documento _Piano di Progetto_ si evolve durante tutto il ciclo di vita del progetto. Inizialmente, il team aveva pianificato #glossary[sprint] della durata di due settimane. Tuttavia, durante lo svolgimento, è risultato preferibile passare a #glossary[sprint] della durata di una settimana. Di conseguenza, il documento del _Piano di progetto_, in particolare nelle sezioni *Pianificazione* e *Preventivo*, viene aggiornato rispetto a questa modifica.
Nell’intervallo di tempo tra revisione #glossary[RTB] e revisione #glossary[PB], è stata pianificata una diminuzione del carico di lavoro nelle ultime due settimane del mese di Gennaio e la prima del mese di Febbraio, per permettere ai membri del team di dedicarsi agli altri impegni richiesti dal percorso di studio.
Per ciascuno #glossary[sprint] vengono stabilite le date di inizio e di fine, e il relativo insieme delle attività pianificate.
== Requirements and Technology Baseline #glossary[RTB]
#figure(
image("../assets/DiagrammiGantt/RTB.png", width: 100%),
caption: [
Diagramma di Gantt per le attività da svolgere per la #glossary[RTB].
],
)
*Inizio*: 10/11/2023
*Fine*: 19/01/2024
La definizione delle _Norme di Progetto_ è alla base di questa fase, per rendere chiare a tutti le modalità di lavoro. Parallelamente hanno inizio anche il _Piano di Progetto_, con particolare attenzione all'*Analisi dei rischi* e al *Modello di sviluppo*, l'_Analisi dei Requisiti_, il _Piano di Qualifica_ e il _Glossario_. I documenti in questa fase sono in continua crescita, data la loro natura incrementale.
Per favorire la consultazione online della #glossary[documentazione] stessa si prevede la creazione di un sito web "vetrina".
Si programma di sviluppare il Proof of Concept #glossary[PoC] del progetto secondo le indicazioni della Proponente, il cui feedback è fondamentale per verificare la coesione dello stack tecnologico, i suoi punti di forza e le sue carenze.
Durante questa fase il colloquio con la Proponente e i Committenti viene mantenuto costante; in questo modo, e grazie anche alle conoscenze acquisite dalla costruzione del #glossary[PoC], sarà possibile adattare e migliorare l'_Analisi dei Requisiti_.
#include "PianificazioneSprint/PrimoSprint.typ"
#include "PianificazioneSprint/SecondoSprint.typ"
#include "PianificazioneSprint/TerzoSprint.typ"
#include "PianificazioneSprint/QuartoSprint.typ"
#include "PianificazioneSprint/QuintoSprint.typ"
#include "PianificazioneSprint/SestoSprint.typ"
#include "PianificazioneSprint/SettimoSprint.typ"
#include "PianificazioneSprint/OttavoSprint.typ"
=== Pausa programmata di tre settimane
*Inizio*: Venerdì 19/01/2024
*Fine*: Giovedì 08/02/2024
Il team ha programmato una pausa di tre settimane per consentire ai membri di concentrarsi sugli esami e sulla preparazione in vista della revisione #glossary[RTB].
== Product Baseline #glossary[PB]
#figure(
image("../assets/DiagrammiGantt/PB.png", width: 100%),
caption: [
Diagramma di Gantt per le attività da svolgere per la #glossary[PB].
],
)
*Inizio*: 09/02/2024
*Fine*: 21/03/2024
In questa fase la concentrazione, soprattutto nel periodo iniziale, si focalizza sulla progettazione. Si inizia la stesura del documento delle _Specifiche Tecniche_ e lo studio di un'#glossary[architettura] adeguata. Iniziano il loro ciclo di vita anche il _Manuale Utente_ e le _Specifiche Tecniche_.
Attraverso le fasi di progettazione e sviluppo ha inizio la creazione del progetto vero e proprio, utilizzando le conoscenze acquisite durante la creazione del #glossary[PoC]. Lo sviluppo includerà anche le fasi di testing del prodotto.
#include "PianificazioneSprint/NonoSprint.typ"
#include "PianificazioneSprint/DecimoSprint.typ"
== Customer Acceptance #glossary[CA]
*Inizio*: 22/03/2024
*Fine*: 04/04/2024
La fase di Customer Acceptance si basa su una verifica e miglioramento di tutti i documenti citati nelle altre revisioni.
Il prodotto è considerato alla sua fase finale e si programma di modificarlo solamente per la risoluzione di #glossary[bug] o l'aggiunta di funzionalità richieste durante la revisione precedente (#glossary[PB]). |
https://github.com/denizenging/site | https://raw.githubusercontent.com/denizenging/site/master/page/about/index.tr.typ | typst | #import "@local/pub-page:0.0.0": *
#show: template(
title: "<NAME>",
description: "Dilibimci, Yazılımcı, Stoik...",
menu: (2, "user"),
)
Esenlikler! Ben, İstanbul Medeniyet Üniversitesinde, Dilbilim bölümünde okuyan bir öğrenciyim. Yanı sıra uzun zamandır da yazılımla uğraşıyorum.
22 yaşındayım. Yazılıma ilgili her şeyle uğraşmaktan, bildiğim her şeyi paylaşmaktan ve müziten çok hoşlanırım! İlgi alanlarımın arasında bilgisayarlı dilbilim, sesbilim, bilişsel dilbilim bulunmakta. Yazılım kısmında daha çok web düzenekleri kullansam da alt düzey yazılımlarla ve şifrebilim uğraşmayı oldukça severim.
Beni sosyal medyadan da takip edebilirsiniz. Bağlantılarımın bulunduğu #link("/page/links")[sayfadan ulaşabilirsiniz].
|
|
https://github.com/NathanBurgessDev/tabletop-war-game-helper | https://raw.githubusercontent.com/NathanBurgessDev/tabletop-war-game-helper/main/Dissertation/appendix.typ | typst | #import "@preview/timeliney:0.0.1"
#page(flipped: true)[
#figure(
timeliney.timeline(
show-grid: true,
{
import timeliney: *
headerline(group(([*January*], 4)),group(([*February*], 4)),group(([*March*], 4)),group(([*April*], 4)),group(([*May*],3)))
headerline(
group(..range(4).map(n => strong(str(n + 1)))),
group(..range(4).map(n => strong(str(n + 1)))),
group(..range(4).map(n => strong(str(n + 1)))),
group(..range(4).map(n => strong(str(n + 1)))),
group(..range(3).map(n => strong(str(n + 1)))),
)
taskgroup(title: [*Other Commitments*], {
task("MDP Coursework",(0,3.5), style: (stroke: 2pt + gray))
task("HAI Coursework", (0, 1), style: (stroke: 2pt + gray))
task("Ethics Essay",(0,3.5),style: (stroke: 2pt+gray))
task("Graphics Coursework 1", (9,11.5), style: (stroke: 2pt + gray))
task("Graphics Coursework 2", (11,19), style: (stroke: 2pt + gray))
})
taskgroup(title: [*Initial Write-up*], {
task("Interim Report", (0, 0.25), style: (stroke: 2pt + gray))
})
taskgroup(title: [*Computer Vision*], {
task("Multi Ring Tracking",(5,6),style: (stroke: 2pt + gray))
task("Ring Identification", (8.5,12), style: (stroke: 2pt + gray))
task("Terrain Detection", (14, 15), style: (stroke: 2pt + gray))
})
taskgroup(title: [*Game Logic*], {
task("Game Board Framework",(11.5,15.5), style: (stroke: 2pt + gray))
task("Line of Sight",(13,14.5), style: (stroke: 2pt + gray))
})
taskgroup(title: [*Final Write-up*], {
task("Dissertation", (14, 16), style: (stroke: 2pt + gray))
task("Presentation Prep", (16, 18.5), style: (stroke: 2pt + gray))
})
milestone(
at: 0.25,
style: (stroke: (dash: "dashed")),
align(center, [
*Interim Report Submission*\
Jan 2023
])
)
milestone(
at: 16,
style: (stroke: (dash: "dashed")),
align(center, [
*Dissertation Submission*\
Apr 2024
])
)
milestone(
at: 18.5,
style: (stroke: (dash: "dashed")),
align(center, [
*Project Presentation*\
May 2024
])
)
}
),
caption: ([The final Gantt chart, showing the work done.])
) <thirdGannt>
]
#page(flipped: true)[
#figure(
timeliney.timeline(
show-grid: true,
{
import timeliney: *
headerline(group(([*January*], 4)),group(([*February*], 4)),group(([*March*], 4)),group(([*April*], 4)),group(([*May*],3)))
headerline(
group(..range(4).map(n => strong(str(n + 1)))),
group(..range(4).map(n => strong(str(n + 1)))),
group(..range(4).map(n => strong(str(n + 1)))),
group(..range(4).map(n => strong(str(n + 1)))),
group(..range(3).map(n => strong(str(n + 1)))),
)
taskgroup(title: [*Other Commitments*], {
task("MDP Coursework",(0,3.5), style: (stroke: 2pt + gray))
task("HAI Coursework", (0, 1), style: (stroke: 2pt + gray))
task("Ethics Essay",(0,3.5),style: (stroke: 2pt+gray))
task("SEM Coursework", (6,12), style: (stroke: 2pt + gray))
})
taskgroup(title: [*Initial Write-up*], {
task("Interim Report", (0, 0.25), style: (stroke: 2pt + gray))
})
taskgroup(title: [*Computer Vision*], {
task("Multi Ring Tracking",(4,5),style: (stroke: 2pt + gray))
task("Ring Identification", (5,7), style: (stroke: 2pt + gray))
task("Terrain Detection", (7, 8), style: (stroke: 2pt + gray))
})
taskgroup(title: [*Game Logic*], {
task("Game Board Framework",(8,10), style: (stroke: 2pt + gray))
task("Movement",(10,11), style: (stroke: 2pt + gray))
task("Line of Sight",(11,12), style: (stroke: 2pt + gray))
})
taskgroup(title: [*Final Write-up*], {
task("Dissertation", (11, 14), style: (stroke: 2pt + gray))
task("Presentation Prep", (14, 18.5), style: (stroke: 2pt + gray))
})
milestone(
at: 0.25,
style: (stroke: (dash: "dashed")),
align(center, [
*Interim Report Submission*\
Jan 2023
])
)
milestone(
at: 14,
style: (stroke: (dash: "dashed")),
align(center, [
*Dissertation Submission*\
Apr 2024
])
)
milestone(
at: 18.5,
style: (stroke: (dash: "dashed")),
align(center, [
*Project Presentation*\
May 2024
])
)
}
),
caption: ([The Updated Gantt chart])
) <firstGantt>
]
#page(flipped: true)[
#figure(
timeliney.timeline(
show-grid: true,
{
import timeliney: *
headerline(group(([*October*], 4)), group(([*November*], 4)),group(([*December*], 4)),group(([*January*], 4)),group(([*February*], 4)),group(([*March*], 4)),group(([*April*], 4)),group(([*May*],3)))
headerline(
group(..range(4).map(n => strong(str(n + 1)))),
group(..range(4).map(n => strong(str(n + 1)))),
group(..range(4).map(n => strong(str(n + 1)))),
group(..range(4).map(n => strong(str(n + 1)))),
group(..range(4).map(n => strong(str(n + 1)))),
group(..range(4).map(n => strong(str(n + 1)))),
group(..range(4).map(n => strong(str(n + 1)))),
group(..range(3).map(n => strong(str(n + 1)))),
)
taskgroup(title: [*Other Commitments*], {
task("MDP Coursework", (4, 7),(7.2,13.75), style: (stroke: 2pt + gray))
task("HAI Coursework", (2, 10), style: (stroke: 2pt + gray))
task("Ethics Essay",(1,4),(6,9),(10.5,13.75),style: (stroke: 2pt+gray))
task("SEM Coursework", (18, 24), style: (stroke: 2pt + gray))
})
taskgroup(title: [*Initial Write-up*], {
task("Project Proposal", (2,4), style: (stroke: 2pt + gray))
task("Ethics Checklist", (4, 5), style: (stroke: 2pt + gray))
task("Interim Report", (7.5, 9.5), style: (stroke: 2pt + gray))
})
taskgroup(title: [*Computer Vision*], {
task("Ring Detection", (3, 6), style: (stroke: 2pt + gray))
task("Terrain Detection", (6, 8), style: (stroke: 2pt + gray))
})
taskgroup(title: [*Game Logic*], {
task("Game Board Framework",(13.75,16), style: (stroke: 2pt + gray))
task("Movement",(16,18), style: (stroke: 2pt + gray))
task("Line of Sight",(18,19), style: (stroke: 2pt + gray))
})
taskgroup(title: [*Final Write-up*], {
task("Dissertation", (22, 26), style: (stroke: 2pt + gray))
task("Presentation Prep", (26, 30), style: (stroke: 2pt + gray))
})
milestone(
at: 10,
style: (stroke: (dash: "dashed")),
align(center, [
*Interim Report Submission*\
Dec 2023
])
)
milestone(
at: 26,
style: (stroke: (dash: "dashed")),
align(center, [
*Dissertation Submission*\
Apr 2024
])
)
milestone(
at: 30.5,
style: (stroke: (dash: "dashed")),
align(center, [
*Project Presentation*\
May 2024
])
)
}
),
caption: ([The original Gantt chart])
)<secondGantt>
]
|
|
https://github.com/FlyinPancake/bsc-thesis | https://raw.githubusercontent.com/FlyinPancake/bsc-thesis/main/thesis/pages/chapters/chapter_5_conclusion.typ | typst | #import "@preview/big-todo:0.2.0": todo
= Conclusion
We conducted tests to assess the performance and usability of `vcluster` in
various scenarios. The results of these tests indicate that `vcluster` is a
promising tool for Kubernetes developers and operators. It is a lightweight
solution that can be used to create and manage virtual clusters with minimal
overhead.
As detailed in @postgres-test-exec-sec, the performance of `vcluster` demonstrates
its prowess, accommodating a substantial number of connected clients effectively.
Throughout the testing phase, however, an observation was made regarding the
PostgreSQL operator's sluggishness in removing databases from the cluster. This
potential bottleneck could pose challenges during application development for
Kubernetes, where swift cluster state resets between tests are crucial. In this
context, `vcluster` stands out as a more efficient tool, enabling rapid creation
and destruction of clusters. Additionally, its proficiency in seamlessly switching
between contexts proves advantageous when managing multiple clusters concurrently.
Subsequent exploration, as documented in @kafka-test-exec-sec, revealed that
`vcluster` performed admirably in running a Kafka cluster with latencies
comparable to a physical cluster. It is worth noting that challenges were
encountered with the Kafka cluster manager operator, which was intricately
designed and posed certain inconveniences when utilized in conjunction with
`vcluster`. It is important to recognize that such challenges are not inherent to
`vcluster` itself but may arise with specific operators.
In @crd-conflict-sec we explored the potential for vcluster to solve the @crd
version conflict problem. This solution may be useful for some other use cases as
well, but it is not a general solution, as it requires the operator to be aware of
the virtual cluster. It is also not a complete solution, as it does not solve the
problem of multiple versions of the same CRD being used in the same cluster.
Several intriguing avenues lie adjacent to the current investigation. For
instance, exploring the performance of `vcluster` under conditions involving a
substantial number of users or a high volume of virtual clusters would provide
valuable insights. Given the versatility mentioned in the documentation, assessing
how `vcluster` performs across different certified Kubernetes distributions,
particularly focusing on officially supported ones like `eks` and `k0s`, holds
notable interest.
The compatibility of `vcluster` with various Kubernetes operators is another
intriguing dimension to explore. As observed with the Strimzi operator for Apache
Kafka, investigating the compatibility of popular Kubernetes operators with
`vcluster` would shed light on the platform's adaptability to diverse use cases.
Delving into the performance limits and guarantees of `vcluster` stands as a
worthwhile pursuit. Understanding how resource limits are enforced and their
impact on virtual cluster performance would contribute to a comprehensive
assessment of `vcluster` capabilities.
Lastly, an exploration into the security aspects of `vcluster` for facilitating
multi-tenancy is a compelling avenue. As conventional multi-tenancy based solely
on namespaces may pose security concerns, investigating how `vcluster` can offer a
secure multi-tenancy solution, ensuring users cannot access each other's virtual
clusters or the host cluster, would be valuable.
|
|
https://github.com/Error-418-SWE/Documenti | https://raw.githubusercontent.com/Error-418-SWE/Documenti/src/3%20-%20PB/Documentazione%20esterna/Piano%20di%20Progetto/Piano%20di%20Progetto.typ | typst | #import "@preview/cetz:0.2.0": *
#import chart
#import "/template.typ": *
#show: project.with(
title: "Piano di Progetto",
subTitle: "Pianificazione, analisi e retrospettive",
authors: (
"<NAME>",
"<NAME>",
"<NAME>",
"<NAME>",
"<NAME>",
),
showLog: true,
showImagesIndex: false,
isExternalUse: true,
);
#show regex("€ (\d)+"): it => {
it
",00"
}
#show regex("\(-(\d)+\)"): it => {
text(it, fill: green)
}
#show regex("\(\+(\d)+\)"): it => {
text(it, fill: red)
}
#let roles = ("Responsabile", "Amministratore", "Analista", "Progettista", "Programmatore", "Verificatore")
#let pal = (red, blue, yellow, olive, orange, aqua, eastern, maroon, fuchsia, green)
#let roles-legend = rect(stroke: 0.5pt + luma(140))[
#let tuples = roles.zip(pal)
#stack(
spacing: 0.75em,
dir: ltr,
..tuples.map(tuple => stack(
dir: ltr,
spacing: 0.25em,
rect(stroke: 0.75pt, fill: tuple.at(1), width: 0.75em, height: 0.75em), tuple.at(0)
))
)
]
#let p = palette.new(colors: pal)
#let role-chart-size = (8, 7)
#let barchart-config = (
size: role-chart-size,
mode: "clustered",
value-key: (1,2),
bar-style: p,
legend: "legend.north",
legend-style: (orientation: ltr, spacing: 0.5, padding: 0.5em, stroke: 0.5pt + luma(140), item: (spacing: 0.5)),
x-label: "Ore",
x-tick-step: none,
labels: ("Preventivate", "Effettive")
)
#let barchart-label-config = (
anchor: "west",
padding: .2
)
#let piechart-config = (
radius: 2,
inner-radius: 1,
gap: 1deg,
slice-style: p,
label-key: (0),
value-key: (1),
outer-label: (content: "%"),
)
#let getMaxFromPlotData(data) = {
let max = 0
for datum in data {
if calc.max(..datum.slice(1)) > max {
max = calc.max(..datum.slice(1))
}
}
return max
}
#let compute-labels-x-coordinate(data, role-chart-size) = {
let i = 0
let x-coordinates = ()
let max = getMaxFromPlotData(data)
while(i < data.len()) {
x-coordinates.push((data.at(i).at(1)*role-chart-size.at(0)/max, data.at(i).at(2)*role-chart-size.at(0)/max))
i += 1
}
return x-coordinates
}
#let compute-labels-y-coordinate(data, role-chart-size) = {
let i = 0
let y-coordinates = ()
while(i < data.len()) {
y-coordinates.push(((role-chart-size.at(1) - 0.77 - i), (role-chart-size.at(1) - 1.17 - i)))
i += 1
}
return y-coordinates
}
#outline(
title: "Indice dei grafici",
target: figure.where(kind: "chart")
)
#pagebreak()
= Introduzione
== Scopo del documento
Il documento #pdp ha il compito di governare la pianificazione dell'avanzamento del progetto, determinando task e obiettivi da raggiungere e presentando un'analisi critica del lavoro fino a quel momento svolto. L'intento è rendicontare e valutare criticamente l'operato compiuto per migliorarlo, ove necessario, e gestire in modo efficace ed efficiente le risorse.
Il documento si articola in 6 sezioni principali:
- *Analisi dei Rischi* (@analisi-rischi): dedicata all'analisi preventiva dei rischi e delle relative contromisure;
- *Panoramica generale* (@panoramica-generale): dedicata all'analisi preventiva dei costi complessivi di realizzazione;
- *Periodi di sviluppo* (@periodi-di-sviluppo): dedicata all'analisi della suddivisione temporale dello sviluppo del progetto;
- *Pianificazione del lavoro* (@pianificazione-lavoro): dedicata alla descrizione della metodologia di lavoro adottata;
- *Preventivi di periodo* (@preventivi): dedicata alla pianificazione delle attività da svolgere per ciascuno Sprint;
- *Consuntivi di periodo* (@consuntivi): dedicata all'analisi retrospettiva del lavoro svolto in ciascuno Sprint. Riporta eventuali criticità ed azioni intraprese a fini migliorativi.
== Glossario
#glo_paragrafo
== Riferimenti <riferimenti>
=== Riferimenti a documentazione interna <riferimenti-interni>
- Documento #glo_v: \
_#link("https://github.com/Error-418-SWE/Documenti/blob/main/3%20-%20PB/Glossario_v" + glo_vo + ".pdf")_
#lastVisitedOn(13,02,2024)
=== Riferimenti normativi <riferimenti-normativi>
- Regolamento di progetto: \
_#link("https://www.math.unipd.it/~tullio/IS-1/2023/Dispense/PD2.pdf")_
#lastVisitedOn(13,02,2024)
- Gestione di progetto: \
_#link("https://www.math.unipd.it/~tullio/IS-1/2023/Dispense/T4.pdf")_
#lastVisitedOn(13,02,2024)
- I processi di ciclo di vita del software: \
_#link("https://www.math.unipd.it/~tullio/IS-1/2023/Dispense/T2.pdf")_
#lastVisitedOn(13,02,2024)
- _A guide to the project management body of knowledge (PMBOK guide)_ di Project Management Institute: \
_#link("https://openlibrary.org/works/OL16506273W/A_guide_to_the_project_management_body_of_knowledge_%28PMBOK_guide%29")_
#lastVisitedOn(26,02,2024)
=== Riferimenti informativi <riferimenti-informativi>
- Capitolato "Warehouse Management 3D" (C5) di _Sanmarco Informatica S.p.A._: \
_#link("https://www.math.unipd.it/~tullio/IS-1/2023/Progetto/C5.pdf")_
#lastVisitedOn(13, 02, 2024)
#pagebreak()
= Analisi dei Rischi <analisi-rischi>
Questa sezione elenca e categorizza i rischi, li analizza e fornisce strumenti di monitoraggio e mitigazione.
== Profili di rischio
A ciascun rischio individuato si associano:
- informazioni descrittive sul suo contesto;
- impatto: può essere lieve, medio o grave. Esprime l'ordine di grandezza dell'effetto generato dall'evento;
- probabilità: da 1 a 5. Esprime la probabilità di verificarsi del rischio;
- soglie di accettazione del rischio;
- azioni previste in funzione delle soglie, possono includere:
- l'eliminazione del rischio;
- la riduzione della sua probabilità o gravità;
- l'accettazione del rischio.
- conseguenze relative a ciascuna delle azioni possibili, esse descrivono gli effetti collaterali a breve o medio termine che il rischio può comportare.
I rischi sono suddivisi nelle categorie:
+ rischi tecnici;
+ rischi esterni;
+ rischi di project management.
== Rischi
=== Rischi tecnici
*RIS-1.1 Utilizzo problematico delle tecnologie* <RIS-1.1>
Le tecnologie individuate o suggerite durante i processi di analisi e progettazione potrebbero risultare complesse da comprendere e/o integrare.
- *Impatto*: medio;
- *Probabilità*: 4;
- *Soglie*:
- se, compreso un ragionevole rallentamento dovuto all'apprendimento di nuove competenze, la complessità delle tecnologie risulta gestibile con le competenze attuali del gruppo e non compromette significativamente la tempistica del progetto, si adotta "OT-1.1.2 Accettazione del rischio";
- se la complessità delle tecnologie supera significativamente le competenze attuali del gruppo e potrebbe quindi causare gravi ritardi nell'avanzamento dei lavori, si adotta "OT-1.1.1 Riduzione della sua probabilità o gravità".
*Opzioni di trattamento del rischio*
*OT-1.1.1 Riduzione della sua probabilità o gravità*
*Strategia di trattamento*
Viene considerata in sede di meeting la necessità di individuare tecnologie sostitutive che rimpiazzeranno quelle coinvolte, esse dovranno essere studiate ed implementate al fine di soddisfare i requisiti rimasti irrisolti a causa dell'occorrenza del rischio.
Se si ritiene necessario, si richiede una maggiore partecipazione da parte degli altri membri del gruppo, con conseguente riassegnazione di attività e/o ruoli, per integrare le nuove tecnologie nel minor tempo possibile.
*Conseguenze*
Le analisi tecnologiche precedentemente svolte devono essere riviste in luce delle nuove opzioni individuate. Quest'ultime possono essere a loro volta totalmente o parzialmente sconosciute al gruppo, il quale dovrà dedicare tempo, non preventivato precedentemente, alla loro comprensione ed integrazione.
Le nuove tecnologie individuate possono potenzialmente generare nuovamente questo rischio portando ad un rallentamento grave nell'avanzamento dei lavori.
\ *OT-1.1.2 Accettazione del rischio*
*Strategia di trattamento*
Il gruppo accetta la complessità delle tecnologie e si impegna a risolvere eventuali problemi nell'implementazione e utilizzo delle stesse.
Se si ritiene necessario, si richiede una maggiore partecipazione da parte degli altri membri del gruppo, con conseguente riassegnazione di attività e/o ruoli, alla risoluzione degli eventuali problemi riscontrati.
È importante tenere monitorata nel tempo la situazione per accertarsi che le circostanze non richiedano una modifica nell'approccio di risoluzione.
*Conseguenze*
Rallentamenti non preventivati che possono avere conseguenze a cascata sulle attività dipendenti e che possono coinvolgere più membri del gruppo per un periodo di tempo difficilmente prevedibile.
\ *RIS-1.2 Insufficiente preparazione teorica* <RIS-1.2>
Il progetto richiede conoscenze teoriche legate al percorso di studi sulle quali sono basate molteplici attività fondamentali. Per esempio, lo studio dei requisiti e la produzione degli Use Case con i relativi diagrammi, oppure il processo di design architetturale ed i relativi pattern da considerare.
Qualora uno o più membri del gruppo dovessero essere insufficientemente preparati sulla teoria necessaria, non potrebbero dare un valido contributo nell'avanzamento dei lavori.
- *Impatto*: grave;
- *Probabilità*: 2;
- *Soglie*:
- se le lacune teoriche sono sufficientemente contenute da permetterne un recupero abbastanza rapido da non inficiare sulla pianificazione dei lavori, si adotta "OT-1.2.1 Eliminazione del rischio";
- se il tempo necessario ad effettuare lo studio richiesto risulta significativamente esteso rispetto ai tempi dettati dalla pianificazione dei lavori, al punto da provocare rallentamenti che impedirebbero il corretto completamento dei task previsti, si adotta "OT-1.2.2 Riduzione della sua probabilità o gravità".
*Opzioni di trattamento del rischio*
*OT-1.2.1 Eliminazione del rischio*
*Strategia di trattamento*
I membri coinvolti devono colmare le lacune teoriche con lo studio individuale, ed eventualmente, anche con l'aiuto di altri membri del gruppo disponibili.
Nel caso in cui la preparazione dovesse risultare più lunga del previsto, rallentando così i lavori, è necessario segnalarlo tempestivamente al gruppo.
*Conseguenze*
Il periodo di studio individuale potrebbe comportare rallentamenti non prevedibili con precisione. I membri coinvolti, successivamente al recupero degli argomenti teorici mancanti, possono fornire completamente il loro contributo nell'avanzamento dei lavori e nelle decisioni progettuali.
\ *OT-1.2.2 Riduzione della sua probabilità o gravità*
*Strategia di trattamento*
Deve avvenire una pianificazione o ripianificazione dei lavori al fine di permettere ai membri coinvolti di svolgere solo attività non dipendenti dalle conoscenze teoriche mancanti.
Il recupero delle lacune deve avvenire in tempi brevi.
*Conseguenze*
I membri coinvolti non forniranno il loro contributo per i lavori relativi alle conoscenze teoriche mancanti, questo potrebbe comportare ritardi e ripianificazioni non preventivate.
Tali ripianificazioni permetterebbero ai membri coinvolti di proseguire con l'avanzamento dei lavori nonostante le mancanze teoriche.
Le decisioni prese dal gruppo relative a scelte che necessitano delle conoscenze teoriche mancanti, non potranno beneficiare del contributo dei membri impreparati.
=== Rischi esterni
\ *RIS-2.1 Rallentamento delle attività dovuto a cause esterne* <RIS-2.1>
Rallentamento nel completamento di attività e task assegnate derivato dalla congiunzione tra gli impegni individuali e progettuali.
Esso comporta un generale ritardo nello sviluppo.
- *Impatto*: grave;
- *Probabilità*: 4 _Probabilità aumentata nel periodo della sessione invernale_;
- *Soglie*:
- se i rallentamenti previsti possono ridurre l'efficienza lavorativa del gruppo, si attua il trattamento: "OT-2.1.1 Riduzione della sua probabilità o gravità".
*Opzioni di trattamento del rischio*
*OT-2.1.1 Riduzione della sua probabilità o gravità*
*Strategia di trattamento*
Implementazione di una pianificazione più flessibile decisa in sede di meeting, per adattarsi agli impegni individuali e progettuali.
Questo prevede una priorizzazione nell'uso di strumenti di lavoro asincroni, al fine di permettere a tutti i membri un'equa divisione del lavoro da svolgere nei momenti a loro più comodi, a patto di rispettare le linee guida delle Norme di Progetto.
Assegnazione chiara delle responsabilità in luce dei rallentamenti previsti e monitoraggio costante dello stato di avanzamento.
Inoltre è richiesta comunicazione costante con i membri del gruppo al fine di rendere note eventuali indisponibilità o impegni.
*Conseguenze*
Miglioramento dell'efficienza nel completamento dei compiti nonostante i rallentamenti inevitabili.
Le attività non svolte o completate parzialmente possono determinare uno slittamento della data di consegna e delle scadenze intermedie prefissate.
=== Rischi di project management
*RIS-4.1 Comunicazione con il Proponente* <RIS-4.1>
I contatti con il Proponente subiscono variazioni nella qualità e nella frequenza a causa di problematiche fuori dal controllo del gruppo. Questa situazione potrebbe causare un rallentamento significativo del lavoro, soprattutto durante l'analisi dei requisiti.
- *Impatto*: grave;
- *Probabilità*: 1;
- *Soglie*:
- se il periodo critico previsto è sufficientemente breve da permettere al gruppo di continuare con un corretto avanzamento dei lavori nonostante la variazione nella comunicazione con il Proponente, si adotta "OT-4.1.2 Accettazione del rischio" mantenendo monitorata la situazione;
- se il periodo critico previsto può mettere a rischio il corretto avanzamento dei lavori o la qualità degli stessi, si adotta "OT-4.1.1 Riduzione della sua probabilità o gravità".
*Opzioni di trattamento del rischio*
*OT-4.1.1 Riduzione della sua probabilità o gravità*
*Strategia di trattamento*
Si possono adottare, in accordo con il Proponente, una o più tra le seguenti strategie comunicative:
- uso di strumenti asincroni per facilitare lo scambio di informazioni tra gruppo e Proponente;
- pianificazione anticipata degli incontri di revisione dell'avanzamento;
- programmazione di incontri periodici di aggiornamento, anche brevi.
*Conseguenze*
È prevista una modifica sostanziale, più o meno temporanea, nella comunicazione con il Proponente. Questo potrebbe portare a ritardi nei lavori dovuti ad un maggior impegno da parte del gruppo nel garantire l'adozione delle nuove pratiche.
\ *OT-4.1.2 Accettazione del rischio*
*Strategia di trattamento*
Si attende il termine del periodo nel quale la comunicazione con il Proponente risulta problematica.
Per evitare perdite di tempo il gruppo continua a lavorare priorizzando task che non dipendono direttamente dall'intervento del Proponente.
In caso di Accettazione del rischio è importante tenere monitorata nel tempo la situazione per accertarsi che le circostanze non richiedano una modifica nell'approccio di risoluzione.
*Conseguenze*
Lo sviluppo potrebbe allontanarsi dalle linee guida o dalle aspettative del Proponente, non rispettando quanto preventivato o pianificato. Tale rischio, comporterebbe dunque la produzione di un software non in linea con le richieste, conducendo a rallentamenti per analisi, progettazione e implementazione aggiuntive.
\ *RIS-4.2 Irreperibilità di un membro del gruppo* <RIS-4.2>
Uno o più membri cessano temporaneamente la partecipazione attiva alle attività del gruppo. È necessario evitare che la durata di queste assenze impedisca il regolare svolgimento delle attività di progetto.
È da considerare ulteriormente grave la situazione in cui è mancata totalmente una segnalazione responsabile e preventiva di difficoltà o impedimenti da parte dei singoli membri coinvolti.
- *Impatto*: medio;
- *Probabilità*: 3;
- *Soglie*:
- se la cessazione della partecipazione da parte di uno o più membri del gruppo può causare un rallentamento nell'avanzamento dei lavori è necessario discutere con i diretti interessati al fine di inquadrare al meglio la situazione. Nel caso in cui tale dialogo rivelasse un'impossibilità nella ripresa delle attività in breve termine oppure nel caso in cui non sia possibile contattare i diretti interessati, è necessario attuare "OT-4.2.1 Riduzione della sua probabilità o gravità";
- in caso contrario, se contattando i membri coinvolti emerge la previsione certa di una corretta ripresa delle attività in breve tempo, si attua "OT-4.2.2 Accettazione del rischio".
*Opzioni di trattamento del rischio*
*OT-4.2.1 Riduzione della sua probabilità o gravità*
*Strategia di trattamento*
Le attività di lavoro assegnate ai membri coinvolti che non hanno una corretta conclusione nei tempi e nelle modalità previste possono essere riassegnate ad altri membri del gruppo.
Viene sollecitato il dialogo con i membri coinvolti per capire la situazione e programmare al meglio le attività da svolgere.
*Conseguenze*
Vengono ridotti, seppur non eliminati, i ritardi nell'avanzamento dei lavori.
Le task svolte senza una chiara condivisione di informazioni da parte dei membri coinvolti possono portare a risultati non conformi con le decisioni prese dal gruppo di lavoro quindi possono risultare in parte o totalmente inutilizzabili.
\ *OT-4.2.2 Accettazione del rischio*
*Strategia di trattamento*
Si attende il termine del periodo nel quale la partecipazione dei membri interessati risulti insufficiente.
In caso di Accettazione del rischio è importante tenere monitorata nel tempo la situazione per accertarsi che le circostanze non richiedano una modifica nell'approccio di risoluzione.
*Conseguenze*
I membri che si dovessero trovare in questa situazione rischierebbero di accentuare eventuali incomprensioni nel proprio lavoro senza la possibilità di confrontarsi con gli altri accorgendosi degli errori troppo tardi.
Lo stato di avanzamento dei lavori potrebbe subire ulteriori rallentamenti.
\ *RIS-4.3 Mancanza di task assegnati* <RIS-4.3>
Uno o più membri del gruppo completano tutti i task a loro assegnati prima della fine dello Sprint.
Il periodo rimanente senza task assegnati è sufficientemente ampio da permettere ai membri coinvolti di proseguire i lavori.
L'impatto è tanto maggiore quanto l'aumentare dei giorni rimanenti al termine dello Sprint.
- *Impatto*: medio;
- *Probabilità*: 3;
- *Soglie*:
- se il periodo di tempo da trascorrere senza task assegnati entro il successivo Sprint è sufficientemente ampio da poter permettere al membro del gruppo coinvolto di proseguire con altri lavori, si adotta "OT-4.3.1 Eliminazione del rischio".
*Opzioni di trattamento del rischio*
L'accettazione e la parziale riduzione del rischio non sono contemplate in quanto comporterebbero ritardi gravi.
*OT-4.3.1 Eliminazione del rischio*
*Strategia di trattamento*
Al termine dei task assegnati al singolo individuo, si evidenziano due possibili scenari:
- nel caso in cui lo Sprint backlog non sia stato completato, il membro del gruppo che ha terminato i task a lui assegnati deve procedere all'individuazione e alla selezione di un task non ancora assegnato;
- nel caso in cui lo Sprint backlog risulti terminato o tutti i task in esso contenuti risultino già in corso o in verifica, sarà compito del membro del gruppo evidenziare la sua disponibilità agli altri membri. Si dovrà dunque procedere con l'assegnazione di un task proveniente dal project backlog o con la creazione e assegnazione di un nuovo task.
*Conseguenze*
Minimizzare gli sprechi del tempo a disposizione è una priorità per il gruppo al fine di garantire il raggiungimento nei tempi attesi delle milestone prefissate.
Se il rischio dovesse verificarsi più volte consecutivamente è necessario rivedere l'effettiva efficacia della pianificazione dei task in sede di retrospettiva.
\ *RIS-4.4 Lenta ripresa dei lavori* <RIS-4.4>
Durante la durata del progetto è eccezionalmente possibile che si verifichino dei periodi in cui uno o più membri non godano di sufficiente disponibilità per partecipare alle attività di progetto.
Questo può avvenire, ad esempio, a seguito di particolari impegni personali o universitari.
La ripresa dei lavori a pieno regime, seguendo correttamente le norme prefissate ed utilizzando correttamente tutte le tecnologie, potrebbe non essere immediata.
Si ritiene fondamentale minimizzare l'impatto in termini di tempo e qualità sul lavoro svolto.
- *Impatto*: medio;
- *Probabilità*: 1;
- *Soglie*:
- se le circostanze consentono ai membri coinvolti di mantenere, nonostante i rallentamenti, un certo grado di familiarità con le norme, gli automatismi e le tecnologie adottate, si adotta "OT-4.4.1 Eliminazione del rischio";
- se l'allontanamento dai lavori avviene senza il mantenimento di un certo grado di familiarità con le dinamiche interne al gruppo, si adotta "OT-4.4.2 Riduzione della sua probabilità o gravità".
*Opzioni di trattamento del rischio*
*OT-4.4.1 Eliminazione del rischio*
*Strategia di trattamento*
I membri coinvolti devono mantenere familiarità con le norme, gli automatismi e le tecnologie adottate. Per farlo devono tenersi aggiornati sullo stato di avanzamento dei lavori e relativi possibili cambiamenti e decisioni prese dal gruppo, soprattutto relativamente alle #ndp.
*Conseguenze*
Viene minimizzato il tempo necessario al recupero completo degli automatismi necessari a produrre progressi soddisfacenti nei lavori.
\ *OT-4.4.2 Riduzione della sua probabilità o gravità*
*Strategia di trattamento*
Al termine dell'allontanamento, i membri coinvolti devono assicurarsi di essere aggiornati riguardo lo stato di avanzamento dei lavori ed eventuali cambiamenti e decisioni avvenute nel corso del periodo di assenza.
Inoltre, prima di produrre avanzamenti nei lavori, è necessario che le norme relative ai task da svolgere e le tecnologie da adottare, vengano recuperate.
*Conseguenze*
L'inevitabile rallentamento nella ripresa dei lavori viene mitigato e minimizzato così da poter produrre risultati soddisfacenti ed in linea con la qualità attesa dal gruppo.
#pagebreak()
= Panoramica generale <panoramica-generale>
Identificati i rischi, le relative contromisure e il calendario di progetto, è stato definito, mediante una pianificazione a ritroso, un preventivo iniziale dei costi di realizzazione del progetto.
È corretto evidenziare come i membri del gruppo non siano dotati di esperienza sufficiente per fornire un preventivo corretto e preciso sin dagli inizi dello sviluppo: per tale motivo, il prezzo indicato sarà soggetto a modifiche con l'avanzamento del progetto (seppur mai superando il prezzo preventivato in candidatura).
== Prospetto orario complessivo <prospetto-orario-complessivo>
La ripartizione delle ore tiene conto degli obiettivi disciplinari di sviluppo di competenze trasversali nei vari ruoli presenti all'interno del progetto.
#figure(
table(
columns: 8,
[*Membro*],[*Responsabile*],[*Amministratore*],[*Analista*],[*Progettista*],[*Programmatore*],[*Verificatore*],[*Totale*],
[Banzato], [10], [14], [15], [7], [28], [21], [95],
[Oseliero], [10], [14], [15], [7], [28], [21], [95],
[Gardin], [10], [14], [15], [7], [28], [21], [95],
[Todesco], [10], [14], [15], [7], [28], [21], [95],
[Carraro], [10], [14], [15], [7], [28], [21], [95],
[Zaccone], [10], [14], [15], [7], [28], [21], [95],
[Nardo], [10], [14], [15], [7], [28], [21], [95],
[Totale ore], [91], [70], [70], [98], [210], [126], [665],
[Costo\ orario], [€ 30], [€ 20], [€ 25], [€ 25], [€ 15], [€ 15], [/],
[Costo\ ruolo], [€ 2100], [€ 1960], [€ 2625], [€ 1225], [€ 2940], [€ 2205], [€ 13055],
),
caption: "Prospetto orario complessivo per membro e ruolo"
)
#let data = (
("Responsabile", 10),
("Amministratore", 14),
("Analista", 15),
("Progettista", 7),
("Programmatore", 28),
("Verificatore", 21),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo (complessiva)",
kind: "chart",
supplement: "Grafico"
)
== Preventivo calcolato
Il costo totale preventivato è di € 13055.
== Analisi dei ruoli
=== Responsabile
Il Responsabile è un ruolo presente durante l'intero progetto. Il suo compito è quello di gestire il gruppo e di assicurarsi che lo sviluppo proceda secondo le tempistiche predefinite e le aspettative del Committente. Deve inoltre redigere e far rispettare le #ndp, nonché le linee guida di sviluppo che l'intero gruppo deve rispettare. Essendo il ruolo più costoso, il numero di ore è stato scelto per favorire l'efficienza e non pesare eccessivamente sul costo finale.
=== Amministratore
L'Amministratore è un ruolo presente durante l'intero progetto, in quanto si occupa di predisporre e controllare il corretto utilizzo delle procedure e degli strumenti definiti nelle #ndp, andando anche a gestire e implementare automatismi, migliorando così l'efficienza del gruppo. Il monte ore scelto è tale, poiché essendo questo un ruolo di controllo, non sono richieste un numero elevato di ore.
=== Analista
L'Analista è il ruolo preposto all'individuazione, redazione, aggiornamento e tracciamento dei requisiti del progetto. Il modello Agile adottato dal gruppo prevede che l'attività di analisi si svolga in modo incrementale, seppur preminente inizialmente, che permetterà la redazione del documento #adr ai fini della _Requirements and Technology Baseline_. Pertanto, per il ruolo di Analista il gruppo riserva un numero di ore durante il periodo PB qualora si necessitasse di rivedere o aggiornare i requisiti individuati.
=== Progettista
Il Progettista ha il compito di delineare e documentare l'architettura del prodotto in modo da:
- soddisfare i requisiti raccolti nelle fasi pregresse;
- aiutare il gruppo di sviluppo con una documentazione chiara ed esaustiva.
Nello stabilire l'architettura deve quindi indicare anche quali saranno le tecnologie da utilizzare per la sua implementazione.
Ritenendolo un ruolo impegnativo dal punto di vista temporale, il numero di ore risulta maggiore rispetto ai ruoli precedenti.
=== Programmatore
Il Programmatore ha il compito di tradurre in codice eseguibile l'architettura prodotta dal progettista. Il ruolo prevede un numero di ore molto elevato poiché riteniamo il lavoro più dispendioso a livello temporale rispetto a quello delle altre figure professionali.
=== Verificatore
Il Verificatore è un ruolo presente durante l'intero progetto, che si occupa di mantenere degli standard qualitativi sul lavoro del gruppo: egli deve verificare la correttezza, esaustività e coerenza di tutti i documenti, e nella fase di codifica sarà colui che si occuperà di controllare la qualità del software prodotto. Proprio per questo il totale delle ore risulta essere il secondo più elevato dopo il Programmatore.
#pagebreak()
= Periodi di sviluppo <periodi-di-sviluppo>
== Introduzione
Il periodo compreso tra l'aggiudicazione del capitolato e la data di consegna del prodotto viene suddiviso in 2 periodi principali dettati dalle revisioni esterne _Requirements and Technology Baseline (RTB)_ e _Product Baseline (PB)_, rispettivamente previste per il 27-01-2024 e il 20-03-2024.\
Vengono pertanto a definirsi i seguenti periodi di sviluppo:
- *Periodo RTB*: dal 06-11-2023 al 26-01-2024;
- *Periodo PB*: dal 27-01-2024 al 20-03-2024.
Risulta ragionevole considerare il periodo di PB maggiormente impegnativo e durante il quale si svolgerà la maggior parte delle ore di lavoro, complici la fine delle lezioni universitarie, il termine della sessione invernale d'esami e l'assenza di festività. Per questo motivo, il gruppo _Error\_418_ ha deciso di suddividere le ore di lavoro in modo da svolgere il 65% delle ore totali durante il periodo PB e il restante 35% durante il periodo RTB.
#grid(
columns: (1fr, 1fr),
[
#set align(bottom)
#figure(
table(
columns: 3,
[*Periodo*], [*Ore*], [*Percentuale*],
[RTB], [231], [35%],
[PB], [434], [65%],
[Totale], [665], [100%]
),
caption: "Suddivisione oraria per periodo"
)
],
[
#set align(center)
#rect(stroke: 0.5pt + luma(140))[
#let tuples = ("RTB", "PB").zip(pal)
#stack(
spacing: 0.75em,
dir: ltr,
..tuples.map(tuple => stack(
dir: ltr,
spacing: 0.25em,
rect(stroke: 0.75pt, fill: tuple.at(1), width: 0.75em, height: 0.75em), tuple.at(0)
))
)
]
#let data = (
("RTB", 231),
("PB", 434),
)
#figure({
canvas({
import draw: *
chart.piechart(..piechart-config, radius: 1.5, inner-radius: 0.75,
data)}
)},
caption: "Suddivisione oraria per periodo",
kind: "chart",
supplement: "Grafico"
)
]
)
== Periodo RTB
- Periodo: dal 06-11-2023 al 26-01-2024 (56 giorni lavorativi);
- Obiettivi di periodo:
- #adr;
- Esplorazione e definizione dei domini tecnologico e applicativo;
- Produzione del Proof of Concept (PoC);
- Redazione documentazione relativa al periodo.
#roles-legend
#grid(
columns: (1fr, 1fr),
[
#set align(bottom)
#figure(
caption: "Suddivisione oraria per ruolo (RTB)",
table(
columns: 2,
[*Ruolo*], [*Ore*],
[Responsabile], [28],
[Amministratore], [42],
[Analista], [70],
[Progettista], [14],
[Programmatore], [28],
[Verificatore], [49],
[Totale], [231],
)
)
],
[
#let data = (
("Responsabile", 28),
("Amministratore", 42),
("Analista", 70),
("Progettista", 14),
("Programmatore", 28),
("Verificatore", 49),
)
#figure({
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo (RTB)",
kind: "chart",
supplement: "Grafico"
)
]
)
== Periodo PB
- Periodo: dal 27-01-2024 al 20-03-2024 (38 giorni lavorativi);
- Obiettivi di periodo:
- Scelte architetturali e di design;
- Implementazione del prodotto;
- Verifica, test e validazione del prodotto;
- Redazione documentazione relativa al periodo.
#roles-legend
#grid(
columns: (1fr, 1fr),
[
#figure(
caption: "Suddivisione oraria per ruolo (PB)",
table(
columns: 2,
[*Ruolo*], [*Ore*],
[Responsabile], [42],
[Amministratore], [56],
[Analista], [35],
[Progettista], [35],
[Programmatore], [168],
[Verificatore], [98],
[Totale], [434],
)
)
],
[
#set align(bottom)
#let data = (
("Responsabile", 42),
("Amministratore", 56),
("Analista", 35),
("Progettista", 35),
("Programmatore", 168),
("Verificatore", 98),
)
#figure({
canvas({
import draw: *
chart.piechart(..piechart-config, data)
})},
caption: "Suddivisione oraria per ruolo (PB)",
kind: "chart",
supplement: "Grafico"
)
]
)
#pagebreak()
= Pianificazione del lavoro <pianificazione-lavoro>
== Introduzione
La pianificazione ricopre un ruolo fondamentale nello sviluppo dell'intero progetto. Svolge il compito di stabilire quali obiettivi raggiungere in periodi di tempo determinati, organizzando le risorse in modo da rendere lo sviluppo efficace ed efficiente. Lo scopo principale deve essere pianificare le azioni da intraprendere nel periodo successivo, definendo tempistiche, modalità e obiettivi.
== Metodologia di lavoro Scrum
Il gruppo si è imposto una metodologia di lavoro *Agile* mediante l'applicazione del framework Scrum, determinando periodi di lavoro di durata fissa terminanti con un'analisi retrospettiva degli stessi.
Tale approccio è definibile *adattivo*, in grado dunque di adattarsi ad eventuali modifiche in corso d'opera, in merito soprattutto a cambiamenti di specifiche e requisiti.
L'intero sviluppo è dunque organizzato in iterazioni di lunghezza fissa, denominati in Scrum come *Sprint*.
L'analisi retrospettiva e il frequente contatto con il Proponente permettono di indirizzare lo sviluppo verso la realizzazione di un prodotto finale che si attenga quanto più possibile ai requisiti desiderati dall'azienda e alle sue aspettative, e una documentazione dettagliata e precisa che evolve e migliora insieme al prodotto.
=== Eventi dettati dal framework
La corretta applicazione del framework comporta il rispetto di determinati impegni, individuabili nello svolgimento di precisi eventi organizzativi quali:
- *Sprint planning*: evento decisionale da tenersi prima dell'avvio dello Sprint successivo. In questo incontro vengono stabiliti gli obiettivi da raggiungere e le task necessarie da compiere entro la fine dello stesso;
- *Sprint review*: al termine dello Sprint si compie un'azione di revisione del progresso, valutando gli obiettivi che sono stati (o meno) raggiunti;
- *Sprint retrospective*: al termine dello Sprint si compie un'azione di retrospettiva, analizzando eventuali criticità incontrate e stabilendo i possibili miglioramenti o meccanismi di mitigazione.
=== Organizzazione per Sprint
Gli Sprint sono periodi di sviluppo di durata fissa entro i quali si cerca di raggiungere obiettivi prefissati. Ciascuno Sprint viene stabilito, in termini di scope e obiettivi, in un momento precedente all'avvio dello Sprint stesso.
Il gruppo _Error\_418_ adotta periodi di Sprint di una settimana, ove l'intento è fissare obiettivi concretamente raggiungibili nell'arco di tempo stabilito. Festività o esigenze organizzative peculiari potrebbero indurre variazioni nella durata di singoli Sprint.
La pianificazione di uno Sprint sarà così composta:
- *Obiettivi prefissati*: gli obiettivi che si intende raggiungere entro il termine dello Sprint;
- *Preventivo costi*: preventivo dei costi dello Sprint, calcolato in base alle figure che vi operano e alla loro quantità di ore di lavoro previste.
Essenziale in questa organizzazione è l'analisi retrospettiva al termine di ogni Sprint. Essa permette di valutare in modo critico eventuali mancanze, criticità o errori che possono in questo modo venir affrontati per trovare soluzioni che ne mitighino gli effetti. È inoltre utile per identificare buone prassi e strategie che hanno portato a risultati positivi, in modo da poterle replicare in futuro.
La retrospettiva di uno Sprint si articolerà in:
- *Obiettivi raggiunti*: obiettivi fissati e concretamente raggiunti al termine dello Sprint;
- *Obiettivi mancati*: obiettivi non raggiunti al termine dello Sprint;
- *Problematiche insorte*: analisi delle criticità riscontrate durante lo Sprint, in modo da aver chiare le motivazioni che le hanno causate;
- *Risoluzioni attuate*: azioni compiute in risposta alle problematiche riscontrate durante lo Sprint, in modo che in futuro si possano prevenire o mitigare;
- *Panoramica dei costi effettivi* (consuntivo): al termine dello Sprint sarà possibile verificare se i costi preventivati rispecchino i costi effettivi, in base alle ore svolte per ogni ruolo;
- *Monitoraggio costi e ore*: tabelle che riportano le ore e il budget rimanenti per ogni ruolo e complessivamente.
=== Rotazione dei ruoli
Il gruppo _Error\_418_ adotta una rotazione dei ruoli, in modo da garantire che ciascun membro del gruppo abbia la possibilità di svolgere ciascun ruolo almeno una volta durante lo sviluppo del progetto. Questo permette di:
- aderire alle linee guida del regolamento del progetto didattico;
- offrire una visione d'insieme del progetto a tutti i membri del gruppo;
- garantire che ciascun membro del gruppo possa sviluppare competenze trasversali.
I ruoli assegnati a ciascun membro del gruppo sono riportati in @preventivi e @consuntivi. La ripartizione delle ore per ciascun ruolo è riportata in @prospetto-orario-complessivo.
#pagebreak()
= Preventivi di periodo <preventivi>
Il preventivo di ogni Sprint esprime gli obiettivi e il relativo costo preventivato, secondo un'attività di pianificazione mirata al conseguimento efficace ed efficiente degli obiettivi di periodo.
== Sprint 1 dal 06-11-2023 al 13-11-2023
=== Obiettivi prefissati
Gli obiettivi del primo Sprint si concentrano sulla correzione e sul miglioramento dei documenti e della repository in seguito alla valutazione esposta dal Committente.
Gli obiettivi dello Sprint 1 sono:
- riconfigurazione della repository;
- revisione preventivo costi con aggiornamento della suddivisione delle ore;
- implementazione di automazioni per la compilazione dei documenti;
- implementazione di automazioni per il versionamento dei documenti;
- stesura delle domande in merito al dominio tecnologico del capitolato;
- contatto con l'azienda Proponente per comunicare l'esito della candidatura;
- contatto con l'azienda Proponente per fissare un primo meeting di analisi.
=== Preventivo costi
Nel primo Sprint il gruppo svolge compiti correttivi e incentrati sul miglioramento dei documenti e dei processi. In quest'ottica, vede l'impiego principale delle figure:
- *Responsabile*: al fine di coordinare le attività e contattare l'azienda Proponente;
- *Amministratore*: al fine di gestire gli strumenti GitHub e Jira per la definizione delle automazioni per la compilazione e il versionamento dei documenti;
- *Verificatore*: al fine di garantire che le modifiche effettuate rispecchino gli standard qualitativi desiderati e implementino effettivamente le mancanze individuate dalla valutazione;
- *Analista*: al fine di redigere le principali domande da porre al Proponente in merito al dominio tecnologico da utilizzare.
In questo primo periodo, l'assegnazione dei ruoli di Progettista e di Programmatore è ritenuta precoce.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [0], [0], [3], [3],
[Carraro], [0], [4], [0], [0], [0], [0], [4],
[Gardin], [0], [4], [0], [0], [0], [0], [4],
[Nardo], [0], [0], [0], [0], [0], [3], [3],
[Oseliero], [3], [0], [0], [0], [0], [0], [3],
[Todesco], [0], [0], [3], [0], [0], [0], [3],
[Zaccone], [0], [0], [3], [0], [0], [0], [3],
[Totale ore], [3], [8], [6], [0], [0], [6], [23],
[Costo\ ruolo], [€ 90], [€ 160], [€ 150], [€ 0], [€ 0], [€ 90], [€ 490],
),
caption: "Prospetto del preventivo, Sprint 1"
)
#let data = (
("Responsabile", 3),
("Amministratore", 8),
("Analista", 6),
("Progettista", 0),
("Programmatore", 0),
("Verificatore", 6),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 1",
kind: "chart",
supplement: "Grafico"
)
== Sprint 2 dal 13-11-2023 al 20-11-2023
=== Obiettivi prefissati
Gli obiettivi del secondo Sprint si concentrano sull'individuazione degli Use Case del progetto, sul perfezionamento delle automazioni e sulla stesura iniziale delle #ndp.
Gli obiettivi dello Sprint 2 sono:
- riconfigurazione della repository;
- stesura della sezione _Introduzione_ del documento #ndp;
- stesura della sezione _Processi di supporto_ del documento #ndp;
- perfezionamento delle automazioni per la compilazione dei documenti;
- implementazione delle automazioni per il versionamento dei documenti;
- contatto con l'azienda Proponente per comunicare l'esito della candidatura;
- meeting con l'azienda Proponente per riflettere sull'#adr e sulle tecnologie da usare;
- inizio dell'individuazione e della stesura degli Use Case.
=== Preventivo costi
Nel secondo Sprint il gruppo svolge compiti correttivi incentrati sul miglioramento dei documenti e compiti orientati all'individuazione e alla stesura degli Use Case. In quest'ottica, vede l'impiego principale delle figure:
- *Responsabile*: al fine di coordinare le attività e contattare l'azienda Proponente;
- *Amministratore*: al fine di gestire gli strumenti GitHub e Jira per la definizione e il miglioramento delle automazioni per la compilazione e il versionamento dei documenti;
- *Verificatore*: al fine di garantire che le modifiche effettuate rispecchino gli standard qualitativi desiderati e implementino effettivamente le mancanze individuate dalla valutazione;
- *Analista*: al fine di individuare e sviluppare testualmente i principali Use Case in ottica del documento #adr.
In questo periodo, l'assegnazione dei ruoli di Programmatore e Progettista è ritenuta precoce.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [0], [0], [3], [3],
[Carraro], [0], [4], [0], [0], [0], [0], [4],
[Gardin], [0], [4], [0], [0], [0], [0], [4],
[Nardo], [0], [0], [3], [0], [0], [0], [3],
[Oseliero], [3], [0], [0], [0], [0], [0], [3],
[Todesco], [0], [0], [3], [0], [0], [0], [3],
[Zaccone], [0], [0], [3], [0], [0], [0], [3],
[Totale ore], [3], [8], [9], [0], [0], [3], [23],
[Costo\ ruolo], [€ 90], [€ 160], [€ 225], [€ 0], [€ 0], [€ 45], [€ 520],
),
caption: "Prospetto del preventivo, Sprint 2"
)
#let data = (
("Responsabile", 3),
("Amministratore", 8),
("Analista", 9),
("Progettista", 0),
("Programmatore", 0),
("Verificatore", 3),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 2",
kind: "chart",
supplement: "Grafico"
)
== Sprint 3 dal 20-11-2023 al 27-11-2023
=== Obiettivi prefissati
Gli obiettivi del terzo Sprint si concentrano sull'aggiornamento del documento #ndp, sul periodo di #adr (principalmente concentrata sui requisiti funzionali) e su un primo momento di esplorazione delle nuove tecnologie.
Gli obiettivi dello Sprint 3 sono:
- proseguimento del processo di individuazione e stesura degli Use Case;
- perfezionamento delle automazioni di versionamento documenti;
- perfezionamento del template usato per i documenti;
- aggiornamento del documento #ndp;
- studio iniziale della libreria Three.js;
- meeting con l'azienda Proponente per esporre ipotesi e Use Case individuati, richiedere chiarimenti ed avanzare opportune richieste;
- contatto con l'azienda Proponente per fissare il prossimo meeting.
=== Preventivo costi
Nel terzo Sprint, il gruppo svolge delle attività principalmente focalizzate sull'#adr e sull'aggiornamento e perfezionamento dei documenti e delle automazioni. In quest'ottica, vede l'impiego principale delle figure:
- *Responsabile*: al fine di coordinare le attività e contattare l'azienda Proponente;
- *Amministratore*: al fine di:
- gestire gli strumenti GitHub e Jira per la definizione delle automazioni per la compilazione e il versionamento dei documenti;
- aggiornare il documento #pdp.
- *Verificatore*: al fine di:
- verificare la correttezza delle modifiche ai documenti;
- effettuare un controllo sulla validità e formulazione degli Use Case individuati.
- *Analista*: al fine di individuare e formulare gli Use Case correlati ai requisiti funzionali;
- *Progettista*: al fine di condurre uno studio iniziale sulla libreria Three.js.
In questo periodo, l'assegnazione del ruolo di Programmatore è ritenuta precoce.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [3], [0], [0], [0], [0], [0], [3],
[Carraro], [0], [0], [4], [0], [0], [0], [4],
[Gardin], [0], [0], [0], [0], [0], [3], [3],
[Nardo], [0], [0], [0], [3], [0], [0], [3],
[Oseliero], [0], [0], [4], [0], [0], [0], [4],
[Todesco], [0], [3], [0], [0], [0], [0], [3],
[Zaccone], [0], [3], [0], [0], [0], [0], [3],
[Totale ore], [3], [6], [8], [3], [0], [3], [23],
[Costo\ ruolo], [€ 90], [€ 120], [€ 200], [€ 75], [€ 0], [€ 45], [€ 530],
),
caption: "Prospetto del preventivo, Sprint 3"
)
#let data = (
("Responsabile", 3),
("Amministratore", 6),
("Analista", 8),
("Progettista", 3),
("Programmatore", 0),
("Verificatore", 3),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 3",
kind: "chart",
supplement: "Grafico"
)
== Sprint 4 dal 27-11-2023 al 04-12-2023
=== Obiettivi prefissati
Gli obiettivi del quarto Sprint si concentrano sull'aggiornamento e perfezionamento dei documenti redatti negli scorsi Sprint, sulla finalizzazione di una prima versione
del documento #adr e sull'esplorazione delle nuove tecnologie.
Gli obiettivi dello Sprint 4 sono:
- perfezionamento della stesura degli Use Case individuati finora;
- redazione di una prima versione dell'#adr;
- scelta di una data per un eventuale primo meeting con il #cardin per ottenere un feedback sull'#adr;
- proseguimento della redazione e aggiornamento del documento #ndp;
- proseguimento periodo di studio di Three.js, finalizzato a:
- creazione di alcuni scaffali nell'ambiente tridimensionale;
- implementazione sistema di _drag and drop_;
- parametrizzazione degli elementi presenti nell'applicazione.
=== Preventivo costi
Nel quarto Sprint il gruppo svolge compiti correttivi incentrati sul miglioramento dei documenti, #adr e studio delle nuove tecnologie. In quest'ottica, vede l'impiego principale delle figure:
- *Responsabile*: al fine di coordinare le attività e contattare l'azienda Proponente;
- *Amministratore*: al fine di:
- gestire gli strumenti GitHub e Jira per la definizione delle automazioni per la compilazione e il versionamento dei documenti;
- aggiornare il documento #pdp.
- *Analista*: al fine di perfezionare la stesura degli Use Case individuati e redigere una prima versione del documento #adr;
- *Progettista*: al fine di condurre uno studio esplorativo sulla libreria Three.js;
- *Programmatore*: al fine di esplorare in modo pratico le tecnologie relative ai PoC;
- *Verificatore*: al fine di verificare la correttezza delle modifiche ai documenti e di effettuare un controllo sulla validità degli Use Case individuati nel documento #adr.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [3], [0], [0], [0], [0], [0], [3],
[Carraro], [0], [0], [3], [0], [1], [0], [4],
[Gardin], [0], [0], [0], [0], [0], [3], [3],
[Nardo], [0], [0], [0], [1], [2], [0], [3],
[Oseliero], [0], [0], [4], [0], [0], [0], [4],
[Todesco], [0], [3], [0], [0], [0], [0], [3],
[Zaccone], [0], [3], [0], [0], [0], [0], [3],
[Totale ore], [3], [6], [7], [1], [3], [3], [23],
[Costo\ ruolo], [€ 90], [€ 120], [€ 175], [€ 25], [€ 45], [€ 45], [€ 500],
),
caption: "Prospetto del preventivo, Sprint 4"
)
#let data = (
("Responsabile", 3),
("Amministratore", 6),
("Analista", 7),
("Progettista", 1),
("Programmatore", 3),
("Verificatore", 3),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 4",
kind: "chart",
supplement: "Grafico"
)
== Sprint 5 dal 04-12-2023 al 11-12-2023
=== Obiettivi prefissati
Gli obiettivi del quinto Sprint si concentrano sull'aggiornamento e perfezionamento dei documenti redatti negli scorsi Sprint, sulla finalizzazione di una prima versione
del documento #adr e sullo sviluppo dei primi PoC.
Gli obiettivi dello Sprint 5 sono:
- perfezionamento del sistema di _drag and drop_ nel PoC;
- creazione degli scaffali nel PoC;
- creazione ambiente Docker;
- svolgere meeting con il #cardin in merito all'#adr;
- perfezionare e aggiornare il documento #ndp;
- aggiornamento del documento #adr, introducendo i requisiti non funzionali;
- informarsi sul documento #pdq;
- aggiornare vecchi documenti con il nuovo template;
- migliorare affidabilità GitHub Actions;
- redigere il Glossario.
=== Preventivo costi
Nel quinto Sprint il gruppo svolge compiti correttivi incentrati sul miglioramento dei documenti, #adr e studio delle nuove tecnologie. In quest'ottica, vede l'impiego principale delle figure:
- *Responsabile*: al fine di coordinare le attività e contattare l'azienda Proponente;
- *Amministratore*: al fine di gestire gli strumenti GitHub e Jira;
- *Analista*: al fine di redigere il documento #adr;
- *Progettista*: al fine di continuare lo studio sulla libreria Three.js;
- *Programmatore*: al fine di sviluppare i PoC relativi allo studio della libreria;
- *Verificatore*: al fine di:
- verificare la correttezza delle modifiche ai documenti;
- effettuare un controllo sulla validità e formulazione degli Use Case individuati e del documento #adr.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [4], [0], [0], [0], [4],
[Carraro], [3], [0], [0], [0], [2], [0], [5],
[Gardin], [0], [0], [4], [0], [0], [0], [4],
[Nardo], [0], [3], [0], [0], [2], [0], [5],
[Oseliero], [0], [0], [0], [2], [2], [0], [4],
[Todesco], [0], [0], [0], [0], [0], [3], [3],
[Zaccone], [0], [0], [0], [2], [2], [0], [4],
[Totale ore], [3], [3], [8], [4], [8], [3], [29],
[Costo\ ruolo], [€ 90], [€ 60], [€ 200], [€ 100], [€ 120], [€ 45], [€ 615],
),
caption: "Prospetto del preventivo, Sprint 5"
)
#let data = (
("Responsabile", 3),
("Amministratore", 3),
("Analista", 8),
("Progettista", 4),
("Programmatore", 8),
("Verificatore", 3),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 5",
kind: "chart",
supplement: "Grafico"
)
== Sprint 6 dal 11-12-2023 al 18-12-2023
=== Obiettivi prefissati
Gli obiettivi del sesto Sprint si focalizzano sul miglioramento dei PoC e dei diversi documenti. Nel dettaglio:
- avanzare con la redazione del PoC "A" integrando il funzionamento dei bin;
- approfondire e migliorare l'utilizzo di Docker nel PoC corrispondente;
- migliorare le GitHub Actions risolvendo eventuali problemi o aggiungendo funzionalità;
- proseguire con il lavoro sul documento #ndp;
- redigere introduzione del documento #pdq;
- adeguare l'#adr in funzione di quanto emerso durante lo scorso meeting con il #cardin;
- aggiungere al documento #adr la tabella che correli gli Use Case ai requisiti.
=== Preventivo costi
Nel sesto Sprint il gruppo svolge compiti correttivi incentrati sul miglioramento dei documenti #adr e #ndp.
Il gruppo continua a produrre e migliorare PoC e inizia a scrivere il documento #pdq.
In quest'ottica vede l'impiego principale delle figure:
- *Responsabile*: al fine di coordinare le attività e contattare l'azienda Proponente;
- *Amministratore*: al fine di gestire gli strumenti GitHub e Jira, redigere i verbali e aggiornare il documento #pdp;
- *Analista*: al fine di redigere il documento #adr con relativi diagrammi UML;
- *Progettista*: al fine di progettare i PoC;
- *Programmatore*: al fine di sviluppare i PoC;
- *Verificatore*: al fine di verificare la correttezza delle modifiche ai documenti assicurandosi che siano coerenti con le #ndp.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [4], [0], [0], [0], [4],
[Carraro], [3], [0], [0], [0], [0], [0], [3],
[Gardin], [0], [0], [4], [0], [0], [0], [4],
[Nardo], [0], [3], [0], [0], [0], [0], [3],
[Oseliero], [0], [0], [0], [1], [3], [0], [4],
[Todesco], [0], [0], [0], [0], [0], [3], [3],
[Zaccone], [0], [0], [0], [1], [3], [0], [4],
[Totale ore], [3], [3], [8], [2], [6], [3], [25],
[Costo\ ruolo], [€ 90], [€ 60], [€ 200], [€ 50], [€ 90], [€ 45], [€ 535],
),
caption: "Prospetto del preventivo, Sprint 6"
)
#let data = (
("Responsabile", 3),
("Amministratore", 3),
("Analista", 8),
("Progettista", 2),
("Programmatore", 6),
("Verificatore", 3),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 6",
kind: "chart",
supplement: "Grafico"
)
== Sprint 7 dal 18-12-2023 al 25-12-2023
=== Obiettivi prefissati
Gli obiettivi del settimo Sprint si focalizzano sul completamento del PoC A, terminare l'#adr e in generale sull'avanzamento dei documenti e miglioramento delle automazioni. Nel dettaglio gli obiettivi posti sono:
- creazione di un PoC per il front-end e realizzazione di un PoC definitivo comprendente tutto il lavoro svolto finora;
- espansione del #pdq con l'individuazione delle metriche da utilizzare;
- aggiornamento delle #ndp;
- completamento del documento #adr con:
- implementazione del tracciamento requisito-fonte;
- revisione generale del documento per verificare la presenza e correttezza di tutti gli Use Case e requisiti necessari.
- miglioramento delle GitHub Actions risolvendo eventuali problemi o aggiungendo funzionalità;
- implementazione dell'automazione che evidenzia i termini presenti nel glossario all'interno dei documenti.
=== Preventivo costi
Nel settimo Sprint i compiti del gruppo sono incentrati sulla realizzazione del PoC finale e sul proseguimento e miglioramento di tutti i documenti necessari alla Requirements and Technology Baseline. Di conseguenza saranno essenziali le figure di:
- *Responsabile*: al fine di coordinare le attività e proseguire la redazione delle #ndp;
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp;
- gestire GitHub e Jira;
- migliorare le GitHub Actions risolvendo gli errori legati al versionamento dei file e all'aggiornamento dei changelog.
- *Analista*: al fine di migliorare il documento #adr e aggiungere il tracciamento requisito-fonte;
- *Progettista*: al fine di studiare e confrontare le tecnologie per il front-end individuate;
- *Programmatore*: al fine di realizzare un PoC per il front-end e un PoC finale;
- *Verificatore*: al fine di:
- verificare la correttezza del lavoro prodotto e la sua coerenza con le #ndp;
- individuare le metriche da inserire nel #pdq.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [4], [0], [0], [0], [0], [4],
[Carraro], [0], [0], [0], [0], [3], [0], [3],
[Gardin], [0], [0], [0], [3], [0], [0], [3],
[Nardo], [3], [0], [0], [0], [0], [0], [3],
[Oseliero], [0], [0], [0], [0], [2], [2], [4],
[Todesco], [0], [0], [0], [3], [0], [0], [3],
[Zaccone], [0], [0], [3], [0], [0], [0], [3],
[Totale ore], [3], [4], [3], [6], [5], [2], [23],
[Costo\ ruolo], [€ 90], [€ 80], [€ 75], [€ 150], [€ 75], [€ 30], [€ 500],
),
caption: "Prospetto del preventivo, Sprint 7"
)
#let data = (
("Responsabile", 3),
("Amministratore", 4),
("Analista", 3),
("Progettista", 6),
("Programmatore", 5),
("Verificatore", 2),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 7",
kind: "chart",
supplement: "Grafico"
)
== Sprint 8 dal 25-12-2023 al 01-01-2024
=== Obiettivi prefissati <obiettivi8>
Gli obiettivi dell'ottavo Sprint si incentrano sul completamento delle attività rimaste sospese nel settimo Sprint e sull'avanzamento del documento #ndp, oltre che sul miglioramento di alcuni aspetti di attività già concluse. Gli obiettivi prefissati per questo Sprint sono:
- miglioramento del Glossario:
- rendere _case insensitive_ l'individuazione dei termini;
- implementare la gestione di plurali e acronimi.
- proseguimento del documento #adr con:
- implementazione tracciamento requisito-fonte;
- miglioramento della resa grafica dei diagrammi UML tramite la loro conversione in SVG.
- proseguimento del documento #ndp, recuperando i capitoli non scritti nello Sprint precedente e espandendone altri;
- perfezionamento del PoC finale;
- miglioramento delle GitHub Actions risolvendo eventuali problemi o aggiungendo funzionalità;
- studio e confronto delle tecnologie riguardanti le API (Next.js e Express.js);
- realizzazione di un sito web per la documentazione;
- realizzazione di una dashboard per monitorare le metriche definite nel #pdq.
=== Preventivo costi
Gli obiettivi dell'ottavo Sprint riguardano tutti gli aspetti del progetto, di conseguenza saranno necessari tutti i ruoli presenti nel gruppo:
- *Responsabile*: al fine di coordinare le attività e proseguire la redazione delle #ndp;
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp;
- gestire GitHub e Jira;
- migliorare le GitHub Actions individuando e rimuovendo bug;
- migliorare il Glossario secondo quanto descritto nel paragrafo di pianificazione di questo Sprint.
- *Analista*: al fine di:
- aggiungere il tracciamento requisito-fonte nel documento #adr;
- convertire i diagrammi UML in SVG.
- *Progettista*: al fine di studiare e confrontare le tecnologie per l'implementazione delle API;
- *Programmatore*: al fine di proseguire con l'implementazione del PoC finale;
- *Verificatore*: al fine di:
- verificare la correttezza del lavoro prodotto e la sua coerenza con le #ndp;
- realizzare una dashboard per il monitoraggio delle metriche.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [3], [0], [0], [0], [0], [3],
[Carraro], [0], [0], [0], [0], [3], [0], [3],
[Gardin], [0], [0], [0], [3], [0], [0], [3],
[Nardo], [4], [0], [0], [0], [0], [0], [4],
[Oseliero], [0], [0], [0], [0], [0], [4], [4],
[Todesco], [0], [0], [0], [3], [0], [0], [3],
[Zaccone], [0], [0], [3], [0], [0], [0], [3],
[Totale ore], [4], [3], [3], [6], [3], [4], [23],
[Costo\ ruolo], [€ 120], [€ 60], [€ 75], [€ 150], [€ 45], [€ 60], [€ 510],
),
caption: "Prospetto del preventivo, Sprint 8"
)
#let data = (
("Responsabile", 4),
("Amministratore", 3),
("Analista", 3),
("Progettista", 6),
("Programmatore", 3),
("Verificatore", 4),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 8",
kind: "chart",
supplement: "Grafico"
)
== Sprint 9 dal 01-01-2024 al 08-01-2024
=== Obiettivi prefissati <obiettivi9>
Gli obiettivi dell'ottavo Sprint si incentrano sul completamento delle attività rimaste sospese nell'ottavo Sprint, sulla revisione dei documenti e del PoC in vista della valutazione RTB. Gli obiettivi prefissati per questo Sprint sono:
- estensione e revisione del documento #ndp;
- correzione degli errori riscontrati in alcuni UC nel documento #adr, compreso l'aggiornamento dei diagrammi UML associati;
- realizzazione di una dashboard per monitorare le metriche definite nel #pdq;
- revisione dei documenti prodotti finora;
- risoluzione dei problemi legati all'automazione per il versionamento dei documenti;
- ripresa dei contatti con il Proponente tramite l'invio di una comunicazione di aggiornamento corredata da un video demo del PoC.
=== Preventivo costi
Gli obiettivi del nono Sprint riguardano i seguenti ruoli:
- *Responsabile*: al fine di:
- coordinare le attività e proseguire la redazione delle #ndp;
- contattare il Proponente fornendo aggiornamenti sull'avanzamento dei lavori.
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp;
- gestire GitHub e Jira;
- migliorare le GitHub Actions individuando e rimuovendo bug;
- realizzare una dashboard per il monitoraggio delle metriche.
- *Analista*: al fine di correggere gli errori riscontrati in alcuni UC nel documento #adr;
- *Verificatore*: al fine di:
- revisionare i documenti prodotti nel corso degli Sprint precedenti;
- verificare la correttezza del lavoro prodotto e la sua coerenza con le #ndp.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [5], [0], [0], [0], [0], [5],
[Carraro], [0], [0], [0], [0], [0], [4], [4],
[Gardin], [0], [5], [0], [0], [0], [0], [5],
[Nardo], [0], [0], [0], [0], [0], [4], [4],
[Oseliero], [0], [0], [5], [0], [0], [0], [5],
[Todesco], [0], [0], [0], [0], [0], [4], [4],
[Zaccone], [4], [0], [0], [0], [0], [0], [4],
[Totale ore], [4], [10], [5], [0], [0], [12], [31],
[Costo\ ruolo], [€ 120], [€ 200], [€ 125], [€ 0], [€ 0], [€ 180], [€ 625],
),
caption: "Prospetto del preventivo, Sprint 9"
)
#let data = (
("Responsabile", 4),
("Amministratore", 10),
("Analista", 5),
("Progettista", 0),
("Programmatore", 0),
("Verificatore", 12),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 9",
kind: "chart",
supplement: "Grafico"
)
== Sprint 10 dal 08-01-2024 al 14-01-2024
=== Obiettivi prefissati
Gli obiettivi del decimo Sprint vertono sulla revisione dei documenti in preparazione per il colloquio RTB. Gli obiettivi prefissati per questo Sprint sono:
- verificare la correttezza dei documenti redatti finora;
- identificare e correggere le cause del problema prestazionale del PoC;
- introdurre la compilazione automatica del documento a seguito dell'aggiornamento manuale di un changelog;
- ampliare la dashboard di monitoraggio con nuove visualizzazioni;
- preparare la presentazione a supporto della valutazione RTB in Google Slides.
La durata dello Sprint è inferiore di 1 giorno rispetto alla norma.
=== Preventivo costi
L'attività prevalente è la revisione documentale. I ruoli attivi durante questo Sprint sono:
- *Responsabile*: al fine di:
- coordinare le attività;
- redigere la presentazione a supporto della valutazione RTB.
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp;
- aggiornare le visualizzazioni della dashboard di monitoraggio;
- gestire GitHub e Jira.
- *Analista*: al fine di:
- ultimare la revisione del documento #adr.
- *Programmatore*: al fine di:
- identificare e correggere le cause del problema prestazionale del PoC.
- *Verificatore*: al fine di:
- verificare la correttezza del lavoro prodotto e la sua coerenza con le #ndp;
- ricontrollare i documenti prodotti e correggere eventuali errori riscontrati.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [0], [0], [3], [3],
[Carraro], [0], [0], [0], [0], [0], [2], [2],
[Gardin], [0], [2], [0], [0], [0], [0], [2],
[Nardo], [0], [0], [0], [0], [2], [0], [2],
[Oseliero], [0], [0], [2], [0], [0], [0], [2],
[Todesco], [0], [0], [0], [0], [2], [0], [2],
[Zaccone], [2], [0], [0], [0], [0], [3], [5],
[Totale ore], [2], [2], [2], [0], [4], [8], [18],
[Costo\ ruolo], [€ 60], [€ 40], [€ 50], [€ 0], [€ 60], [€ 120], [€ 330],
),
caption: "Prospetto del preventivo, Sprint 10"
)
#let data = (
("Responsabile", 2),
("Amministratore", 2),
("Analista", 2),
("Progettista", 0),
("Programmatore", 4),
("Verificatore", 8),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 10",
kind: "chart",
supplement: "Grafico"
)
== Sprint 11 dal 14-01-2024 al 21-01-2024
=== Obiettivi prefissati
Gli obiettivi dell'undicesimo Sprint si incentrano sulla preparazione per il colloquio RTB oltre che sulla revisione di alcuni aspetti di attività già concluse. Gli obiettivi prefissati per questo Sprint sono:
- aggiornare il #pdq con nuove metriche individuate riguardanti:
- budget utilizzato;
- ore rimanenti.
- rimuovere Express dal PoC a seguito del feedback del #cardin;
- registrazione di un video dimostrazione del PoC destinato al proponente;
- creazione di un collegamento tra Jira e Grafana per il cruscotto di controllo della qualità;
- ultimare il documento #adr con alcune modifiche minori agli Use Cases;
- verificare la correttezza dei documenti redatti finora;
- preparare la candidatura per il colloquio RTB:
- Redigere nuova lettera di presentazione con aggiornamento di preventivo;
- Continuare la revisione dei documenti.
=== Preventivo costi
Sono state assegnate ore al ruolo di Verificatore e Analista a scapito di quello di Programmatore in quanto il PoC risulta ormai ultimato:
- *Responsabile*: al fine di coordinare le attività.
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp;
- gestire GitHub e Jira.
- *Analista*: al fine di:
- ultimare alcune modifiche al documento #adr;
- ricontrollare interamente il documento di #adr.
- *Progettista*: al fine di mettere per iscritto le motivazioni che hanno portato alla scelta delle tecnologie utilizzate nel PoC.
- *Verificatore*: al fine di:
- verificare la correttezza del lavoro prodotto e la sua coerenza con le #ndp;
- ricontrollare i documenti prodotti e correggere eventuali errori riscontrati;
- inserire nel #pdq alcune metriche nuove.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [2], [0], [0], [0], [2],
[Carraro], [0], [0], [0], [0], [0], [2], [2],
[Gardin], [0], [0], [0], [0], [0], [2], [2],
[Nardo], [3], [0], [0], [0], [0], [0], [3],
[Oseliero], [0], [0], [1], [0], [0], [0], [1],
[Todesco], [0], [3], [0], [0], [0], [0], [3],
[Zaccone], [0], [0], [0], [0], [0], [2], [2],
[Totale ore], [3], [3], [3], [0], [0], [6], [15],
[Costo ruolo], [€ 90], [€ 60], [€ 75], [€ 0], [€ 0], [€ 90], [€ 315],
),
caption: "Prospetto del preventivo, Sprint 11"
)
#let data = (
("Responsabile", 3),
("Amministratore", 3),
("Analista", 3),
("Progettista", 0),
("Programmatore", 0),
("Verificatore", 6),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 11",
kind: "chart",
supplement: "Grafico"
)
== Sprint 12 dal 21-01-2024 al 28-01-2024
=== Obiettivi prefissati
Gli obiettivi del dodicesimo Sprint riguardano la partecipazione al colloquio RTB con il #cardin, in data 25/01/2024 alle ore 8:40.
In questo Sprint inizia la sessione d'esame invernale. Gli obiettivi prefissati per questo Sprint sono:
- estensione dei termini di Glossario;
- aggiunta di grafici significativi al #pdp, come ad esempio la suddivisione oraria per ruolo, mediante il pacchetto `plotst` di Typst;
- correzione minore della tabella che riporta il computo dei requisiti totali nel documento di #adr;
- ripasso generale individuale e collettivo prima del colloquio RTB.
=== Preventivo costi
Dato lo scope ridotto del lavoro, questo Sprint impiega un numero di ore inferiore rispetto agli Sprint precedenti. I ruoli attivi durante questo Sprint sono:
- *Responsabile*: al fine di coordinare le attività.
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp.
- *Progettista*: al fine di estendere i termini del Glossario.
- *Verificatore*: al fine di:
- verificare la correttezza del lavoro prodotto e la sua coerenza con le #ndp;
- ricontrollare i documenti prodotti e correggere eventuali errori riscontrati.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [1], [0], [0], [1],
[Carraro], [0], [0], [0], [1], [0], [0], [1],
[Gardin], [0], [0], [0], [1], [0], [0], [1],
[Nardo], [1], [0], [0], [0], [0], [0], [1],
[Oseliero], [0], [0], [0], [0], [0], [1], [1],
[Todesco], [0], [1], [0], [0], [0], [0], [1],
[Zaccone], [0], [0], [0], [0], [0], [1], [1],
[Totale ore], [1], [1], [0], [3], [0], [2], [7],
[Costo\ ruolo], [€ 30], [€ 20], [€ 0], [€ 75], [€ 0], [€ 30], [€ 155],
),
caption: "Prospetto del preventivo, Sprint 12"
)
#let data = (
("Responsabile", 1),
("Amministratore", 1),
("Analista", 0),
("Progettista", 3),
("Programmatore", 0),
("Verificatore", 2),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 12",
kind: "chart",
supplement: "Grafico"
)
== Sprint 13 dal 28-01-2024 al 04-02-2024
=== Obiettivi prefissati
Nessuno che esuli dalla normale amministrazione di progetto. Il gruppo attende la valutazione del colloquio RTB e si prepara per la sessione d'esame.
=== Preventivo costi
Questo Sprint impiega:
- *Responsabile*: al fine di:
- coordinare le attività;
- organizzare il lavoro in reazione al feedback, se ricevuto.
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [0], [0], [0], [0],
[Carraro], [0], [0], [0], [0], [0], [0], [0],
[Gardin], [2], [0], [0], [0], [0], [0], [2],
[Nardo], [0], [0], [0], [0], [0], [0], [0],
[Oseliero], [0], [1], [0], [0], [0], [0], [1],
[Todesco], [0], [0], [0], [0], [0], [0], [0],
[Zaccone], [0], [0], [0], [0], [0], [0], [0],
[Totale ore], [2], [1], [0], [0], [0], [0], [3],
[Costo\ ruolo], [€ 60], [€ 20], [€ 0], [€ 0], [€ 0], [€ 0], [€ 80],
),
caption: "Prospetto del preventivo, Sprint 13"
)
#let data = (
("Responsabile", 2),
("Amministratore", 1),
("Analista", 0),
("Progettista", 0),
("Programmatore", 0),
("Verificatore", 0),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 13",
kind: "chart",
supplement: "Grafico"
)
== Sprint 14 dal 04-02-2024 al 11-02-2024
=== Obiettivi prefissati
Il feedback ricevuto dopo il colloquio RTB con il #cardin ha evidenziato la necessità di una profonda revisione del documento di #adr, che pertanto è il focus principale di questo Sprint. Gli obiettivi prefissati per questo Sprint sono:
- revisione, correzione ed estensione del documento #adr secondo il feedback ricevuto;
- preparazione in vista del colloquio RTB con il #vardanega (data da definire);
- inclusione nella dashboard di monitoraggio delle metriche individuate nel corso dello Sprint precedente;
- estensione dei termini di Glossario;
- invio aggiornamento sullo stato del progetto al Proponente.
=== Preventivo costi
Questo Sprint impiega:
- *Responsabile*: al fine di:
- coordinare le attività;
- contattare il Proponente fornendo aggiornamenti sull'avanzamento dei lavori.
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp;
- aggiornare la dashboard di monitoraggio.
- *Analista*: al fine di implementare le correzioni e le estensioni al documento #adr;
- *Progettista*: al fine di estendere i termini del Glossario.
- *Verificatore*: al fine di:
- verificare la correttezza del lavoro prodotto e la sua coerenza con le #ndp;
- ricontrollare i documenti prodotti e correggere eventuali errori riscontrati.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [3], [0], [0], [0], [3],
[Carraro], [0], [1], [1], [0], [0], [2], [4],
[Gardin], [3], [0], [2], [0], [0], [0], [5],
[Nardo], [0], [0], [2], [0], [0], [3], [5],
[Oseliero], [0], [2], [0], [0], [0], [2], [4],
[Todesco], [0], [0], [0], [1], [0], [3], [4],
[Zaccone], [0], [0], [0], [1], [0], [3], [4],
[Totale ore], [3], [3], [8], [2], [0], [13], [29],
[Costo\ ruolo], [€ 90], [€ 60], [€ 200], [€ 50], [€ 0], [€ 195], [€ 595],
),
caption: "Prospetto del preventivo, Sprint 14"
)
#let data = (
("Responsabile", 3),
("Amministratore", 3),
("Analista", 8),
("Progettista", 2),
("Programmatore", 0),
("Verificatore", 13),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 14",
kind: "chart",
supplement: "Grafico"
)
== Sprint 15 dal 11-02-2024 al 18-02-2024
=== Obiettivi prefissati
Questo Sprint si occupa della preparazione per il colloquio RTB con il #vardanega e avvia le attività di progettazione del prodotto software. Gli obiettivi prefissati per questo Sprint sono:
- definizione del diagramma ER del database da implementare;
- studio preliminare degli elementi architetturali del prodotto;
- identificazione degli strumenti di appoggio per l'implementazione di una pipeline di CI/CD;
- configurazione della repository e dei suddetti strumenti;
- miglioramento della struttura informativa del documento #adr, in particolare per quanto riguarda il tracciamento tra casi d'uso e requisiti;
- aggiornamento delle GitHub Actions recentemente deprecate (`actions/upload-artifact`, `actions/download-artifact`, `actions/setup-python`);
- prenotazione del colloquio RTB con il #vardanega, nella seconda metà della settimana.
=== Preventivo costi
Questo Sprint impiega:
- *Responsabile*: al fine di:
- coordinare le attività;
- candidare il gruppo al colloquio RTB;
- contattare il Proponente fornendo aggiornamenti sull'avanzamento dei lavori.
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp;
- identificare gli strumenti di appoggio per l'implementazione di una pipeline di CI/CD;
- configurare la repository e i suddetti strumenti.
- *Analista*: al fine di:
- migliorare il tracciamento tra casi d'uso e requisiti nel documento #adr.
- *Progettista*: al fine di:
- definire il diagramma ER del database;
- studiare gli elementi architetturali del prodotto.
- *Programmatore*: al fine di:
- implementare uno scheletro dell'applicazione, simile al PoC per struttura ma privo dell'ambiente fornito da Three.js, con Docker Compose;
- realizzare un primo prototipo del database;
- aggiornare le GitHub Actions che fanno uso di dipendenze recentemente deprecate (`actions/upload-artifact`, `actions/download-artifact`, `actions/setup-python`).
- *Verificatore*: al fine di verificare la correttezza del lavoro prodotto e la sua coerenza con le #ndp.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [2], [3], [0], [5],
[Carraro], [0], [0], [0], [0], [0], [3], [3],
[Gardin], [0], [0], [0], [0], [3], [0], [3],
[Nardo], [0], [2], [3], [0], [0], [0], [5],
[Oseliero], [0], [2], [0], [2], [0], [0], [4],
[Todesco], [3], [0], [0], [0], [0], [0], [3],
[Zaccone], [0], [0], [0], [0], [0], [4], [4],
[Totale ore], [3], [4], [3], [4], [6], [7], [27],
[<NAME>], [90], [80], [75], [100], [90], [105], [540],
),
caption: "Prospetto del preventivo, Sprint 15"
)
#let data = (
("Responsabile", 3),
("Amministratore", 4),
("Analista", 3),
("Progettista", 4),
("Programmatore", 6),
("Verificatore", 7),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 15",
kind: "chart",
supplement: "Grafico"
)
== Sprint 16 dal 18-02-2024 al 25-02-2024
=== Obiettivi prefissati
In questo Sprint prosegue il lavoro di kick-off del design del MVP. Gli obiettivi prefissati sono:
- ripresa dei colloqui con il Proponente, in modalità sincrona, il giorno 22/02/2024 alle ore 15:00;
- accorpamento del documento #ris nel #pdp;
- aggiornamento ed estensione del documento #pdp;
- definizione nel #pdq delle prime metriche di qualità del prodotto;
- estensione delle #ndp basata su _Architecture Definition process_ (6.4.4 del documento di riferimento) e _Design Definition process_ (6.4.5 del documento di riferimento);
- preparazione di un mock-up dell'interfaccia utente del prodotto con Figma;
- definizione del diagramma delle classi del MVP;
- esplorazione degli strumenti di testing (Jest, Jasmine, Mocha per lo unit testing, Playwright per l'end-to-end testing);
- generale adeguamento della documentazione al feedback ricevuto dal #vardanega a seguito del colloquio RTB.
=== Preventivo costi
Questo Sprint impiega:
- *Responsabile*: al fine di:
- coordinare le attività;
- organizzare il lavoro in reazione al feedback;
- organizzare il meeting con il Proponente, anche tramite la redazione di una board Miro di supporto;
- contattare il Proponente fornendo aggiornamenti sull'avanzamento dei lavori;
- estendere le #ndp\;
- estendere l'#ris\.
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp\;
- accorpare il documento #ris nel #pdp\.
- *Analista*: al fine di:
- aggiornare il documento #adr a seguito dell'incontro con il Proponente.
- *Progettista*: al fine di:
- definire le prime metriche di qualità del prodotto nel #pdq\;
- definire il diagramma delle classi del MVP.
- *Programmatore*: al fine di:
- definire la UX;
- esplorare gli strumenti di testing individuati e selezionare quelli più in linea con le esigenze del progetto, anche tramite la realizzazione di dimostratori tecnologici;
- preparare un mock-up dell'interfaccia utente del prodotto con Figma.
- *Verificatore*: al fine di verificare la correttezza del lavoro prodotto e la sua coerenza con le #ndp.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [2], [0], [0], [0], [4], [1], [7],
[Carraro], [0], [0], [2], [3], [0], [2], [7],
[Gardin], [2], [0], [0], [0], [4], [0], [6],
[Nardo], [0], [0], [2], [3], [3], [0], [8],
[Oseliero], [2], [0], [0], [0], [0], [2], [4],
[Todesco], [0], [3], [2], [0], [0], [2], [7],
[Zaccone], [2], [0], [0], [2], [0], [2], [6],
[Totale ore], [8], [3], [6], [8], [11], [9], [45],
[Costo ruolo], [240], [60], [150], [200], [165], [135], [950],
),
caption: "Prospetto del preventivo, Sprint 16"
)
#let data = (
("Responsabile", 8),
("Amministratore", 3),
("Analista", 6),
("Progettista", 8),
("Programmatore", 11),
("Verificatore", 9),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 16",
kind: "chart",
supplement: "Grafico"
)
== Sprint 17 dal 26-02-2024 al 03-03-2024
=== Obiettivi prefissati
Questo Sprint si occupa di cominciare il lavoro di definizione del design e di proseguire il lavoro sulla documentazione. Gli obiettivi prefissati sono:
- meeting interno di design thinking, in modalità sincrona, il giorno 26/02/2024 ore 15:00;
- colloquio con il Proponente, in modalità sincrona, il giorno 29/02/2024 alle ore 16:00;
- estendere le #ndp con il paragrafo di Technical processes/Implementation process;
- estendere le #ndp con il paragrafo di Technical processes/Integration process;
- estendere le #ndp con il paragrafo di Technical processes/Verification process;
- estendere le #ndp con il paragrafo di Organizational Project-Enabling processes/Quality Management process;
- aggiornare #ndp normando il sistema di tracciamento rischi - risoluzione;
- #adr: revisione del documento;
- redigere introduzione del documento Manuale Utente;
- redigere introduzione del documento Specifica Tecnica;
- #pdp: redigere preventivo e consuntivo dello Sprint 16;
- #pdp: redigere preventivo Sprint 17;
- aggiornare le metriche del #pdq con i dati aggiornati da Grafana;
- preparare il mock-up dell'UI da presentare al Proponente;
- aggiornare lo schema ER in seguito al feedback ricevuto dal Proponente.
=== Preventivo costi
Questo Sprint impiega:
- *Responsabile*: al fine di:
- coordinare le attività;
- organizzare il lavoro in reazione al feedback;
- organizzare il meeting con il Proponente, anche tramite la redazione di una board Miro di supporto;
- estendere le #ndp\.
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp\.
- *Analista*: al fine di:
- revisionare il documento #adr a seguito dell'incontro con il Proponente.
- *Progettista*: al fine di:
- aggiornare lo schema ER in seguito al meeting con il Proponente;
- definire una bozza dello schema delle classi.
- *Programmatore*: al fine di:
- aggiornare il mock-up prodotto per il meeting con il Proponente;
- modificare il database secondo il nuovo schema ER;
- iniziare i lavori sul database.
- *Verificatore*: al fine di verificare la correttezza del lavoro prodotto e la sua coerenza con le #ndp.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [0], [4], [3], [7],
[Carraro], [1], [0], [3], [2], [0], [0], [6],
[Gardin], [0], [0], [3], [0], [4], [0], [7],
[Nardo], [0], [2], [3], [0], [0], [2], [7],
[Oseliero], [0], [2], [0], [1], [4], [0], [7],
[Todesco], [3], [0], [0], [0], [0], [3], [6],
[Zaccone], [2], [0], [0], [2], [0], [0], [4],
[Totale ore], [6], [4], [9], [5], [12], [8], [44],
[<NAME>], [180], [80], [225], [125], [180], [120], [910],
),
caption: "Prospetto del preventivo, Sprint 17"
)
#let data = (
("Responsabile", 6),
("Amministratore", 4),
("Analista", 9),
("Progettista", 5),
("Programmatore", 12),
("Verificatore", 8),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 17",
kind: "chart",
supplement: "Grafico"
)
== Sprint 18 dal 03/03/2024 al 10/03/2024
=== Obiettivi prefissati
In questo Sprint proseguono le attività di design e di implementazione. Gli obiettivi prefissati sono:
- colloquio con il Proponente in data 07/03/2024;
- aggiornamento ed estensione del documento #pdp\;
- aggiornamento grafici delle metriche nel #pdq allo Sprint 17;
- aggiornamento della dashboard Grafana allo Sprint 17;
- estensione delle #ndp basata su _Transition process_ (6.4.10 dello standard di riferimento) e _Validation process_ (6.4.11 dello standard di riferimento);
- continuazione della revisione dell'#adr\;
- richiedere colloquio in modalità sincrona con il #cardin per discutere dei dubbi riguardanti la progettazione;
- individuazione della struttura del documento #st\;
- redazione sezioni riguardanti il database e l'architettura nella #st\;
- implementazione e popolazione database PostgreSQL;
- implementazione pattern middleware;
- realizzazione creazione ambiente Three.js;
- aggiornamento e aggiunta di funzionalità riguardanti le Zone dell'ambiente nel mock-up.
=== Preventivo costi
Questo Sprint impiega:
- *Responsabile*: al fine di:
- coordinare le attività;
- organizzare il lavoro in reazione al feedback;
- organizzare il meeting con il Proponente, anche tramite la redazione di una board Miro di supporto;
- contattare il #cardin per richiedere un colloquio in modalità sincrona;
- estendere le #ndp\.
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp\.
- *Analista*: al fine di:
- proseguire con la revisione del documento #adr\.
- *Progettista*: al fine di:
- proseguire con la definizione dell'architettura del prodotto.
- *Programmatore*: al fine di:
- implementare e popolare il database;
- iniziare l'implementazione delle prime funzionalità del prodotto.
- *Verificatore*: al fine di verificare la correttezza del lavoro prodotto e la sua coerenza con le #ndp\.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [2], [0], [1], [0], [2], [0], [5],
[Carraro], [2], [0], [0], [0], [2], [5], [9],
[Gardin], [0], [0], [1], [0], [2], [3], [6],
[Nardo], [0], [2], [0], [2], [2], [0], [6],
[Oseliero], [2], [0], [2], [0], [2], [1], [7],
[Todesco], [0], [0], [2], [2], [5], [0], [9],
[Zaccone], [0], [0], [2], [0], [5], [0], [7],
[Totale ore], [6], [2], [8], [4], [20], [9], [49],
[<NAME>], [180], [40], [200], [100], [300], [135], [955],
),
caption: "Prospetto del preventivo, Sprint 18"
)
#let data = (
("Responsabile", 6),
("Amministratore", 2),
("Analista", 8),
("Progettista", 4),
("Programmatore", 20),
("Verificatore", 9),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 18",
kind: "chart",
supplement: "Grafico"
)
== Sprint 19 dal 10/03/2024 al 17/03/2024
=== Obiettivi prefissati
In questo Sprint il lavoro si concentra principalmente sulla stesura di #st, #man e sull'implementazione del MVP. Gli obiettivi prefissati sono:
- colloquio con il Proponente in data 15/03/2024;
- aggiornamento ed estensione del documento #pdp\;
- aggiornamento grafici delle metriche nel #pdq allo Sprint 18;
- aggiornamento della dashboard Grafana allo Sprint 18;
- continuazione e conclusione della revisione e aggiornamento dell'#adr\;
- redazione sezioni riguardanti diagrammi delle classi e design pattern nella #st\;
- redazione sezioni requisiti e supporto tecnico nel #man\;
- implementazione completa della UI del MVP;
- iniziare l'implementazione degli elementi 3D.
=== Preventivo costi
Questo Sprint impiega:
- *Responsabile*: al fine di:
- coordinare le attività;
- organizzare il lavoro in reazione al feedback;
- organizzare il meeting con il Proponente.
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp\.
- *Analista*: al fine di:
- proseguire con la revisione del documento #adr\.
- *Progettista*: al fine di redigere le sezioni della #st\:
- Design pattern;
- Diagramma delle classi.
- *Programmatore*: al fine di:
- collaborare con il Progettista nella stesura della #st\;
- implementare la UI;
- implementare gli elementi 3D.
- *Verificatore*: al fine di verificare la correttezza del lavoro prodotto e la sua coerenza con le #ndp\.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [1], [0], [2], [4], [3], [10],
[Carraro], [0], [1], [0], [2], [5], [0], [8],
[Gardin], [0], [0], [0], [0], [5], [0], [5],
[Nardo], [0], [0], [3], [0], [3], [5], [11],
[Oseliero], [0], [0], [0], [0], [0], [5], [5],
[Todesco], [2], [0], [2], [0], [3], [0], [7],
[Zaccone], [0], [0], [0], [0], [5], [2], [7],
[Totale ore], [2], [2], [5], [4], [25], [15], [53],
[<NAME>], [60], [40], [125], [100], [375], [225], [925],
),
caption: "Prospetto del preventivo, Sprint 19"
)
#let data = (
("Responsabile", 2),
("Amministratore", 2),
("Analista", 5),
("Progettista", 4),
("Programmatore", 25),
("Verificatore", 15),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 19",
kind: "chart",
supplement: "Grafico"
)
== Sprint 20 dal 17/03/2024 al 24/03/2024
=== Obiettivi prefissati
Questo Sprint ha come obiettivi principali l'implementazione del MVP e la stesura del #man\. Sarà inoltre iniziata l'attività di revisione degli ultimi capitoli redatti delle #ndp\. Gli obiettivi prefissati sono:
- fissare un colloquio con il Proponente;
- aggiornamento ed estensione del documento #pdp\;
- aggiornamento grafici delle metriche nel #pdq allo Sprint 19;
- aggiornamento della dashboard Grafana allo Sprint 19;
- revisione dei seguenti capitoli delle #ndp\:
- Processo di gestione della qualità;
- Processo di implementazione;
- Processo di integrazione;
- Processo di verifica;
- Processo di transizione;
- Processo di validazione.
- conclusione della revisione dell'#adr e conseguente invio del documento al #cardin\;
- redazione della sezione della #st riguardante i requisiti soddisfatti dal prodotto;
- redazione delle seguenti sezioni del #man\:
- Configurazione ambiente;
- Creazione zona;
- Modifica zona;
- Impostazioni.
- spostare la sezione di requisiti di sistema e hardware dal #man alla #st\;
- concludere lo sviluppo del MVP.
=== Preventivo costi
Questo Sprint impiega:
- *Responsabile*: al fine di:
- coordinare le attività;
- organizzare il lavoro in reazione al feedback;
- organizzare il meeting con il Proponente, anche tramite la redazione di una board Miro di supporto;
- revisionare i capitoli delle #ndp\.
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp\.
- *Analista*: al fine di:
- concludere la revisione e l'aggiornamento dell'#adr\.
- *Programmatore*: al fine di:
- proseguire e concludere lo sviluppo del MVP;
- redigere le sezioni del #man\.
- *Verificatore*: al fine di:
- verificare la correttezza del lavoro prodotto e la sua coerenza con le #ndp\;
- affiancare il Responsabile nella revisione delle #ndp\.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [0], [7], [2], [9],
[Carraro], [0], [0], [0], [0], [7], [0], [7],
[Gardin], [1], [0], [0], [0], [3], [4], [8],
[Nardo], [0], [0], [2], [0], [5], [2], [9],
[Oseliero], [0], [0], [0], [0], [5], [4], [9],
[Todesco], [0], [0], [1], [0], [5], [2], [8],
[Zaccone], [2], [1], [0], [0], [3], [4], [10],
[Totale ore], [3], [1], [3], [0], [35], [18], [60],
[Costo ruolo], [90], [20], [75], [0], [525], [270], [980],
),
caption: "Prospetto del preventivo, Sprint 20"
)
#let data = (
("Responsabile", 3),
("Amministratore", 1),
("Analista", 3),
("Progettista", 0),
("Programmatore", 35),
("Verificatore", 18),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 20",
kind: "chart",
supplement: "Grafico"
)
== Sprint 21 dal 24/03/2024 al 30/03/2024
=== Obiettivi prefissati
Lo Sprint 21 ha come obiettivo primario la convalida come MVP del prodotto software da parte del Proponente. Gli obiettivi prefissati sono:
- sostenere il colloquio di convalida del MVP con il Proponente, il giorno 26/04/2024;
- aggiornamento ed estensione del documento #pdp\;
- aggiornamento grafici delle metriche nel #pdq allo Sprint 20;
- aggiornamento della dashboard Grafana allo Sprint 20;
- armonizzazione dei seguenti capitoli delle #ndp a seguito della loro revisione, conclusasi durante lo Sprint 20:
- Processo di gestione della qualità;
- Processo di implementazione;
- Processo di integrazione;
- Processo di verifica;
- Processo di transizione;
- Processo di validazione.
- redazione delle seguenti sezioni del #man\:
- Impostazioni;
- Spostamento prodotto;
- Ispezione bin/zona.
- terminare le attività di codifica del MVP.
La pianificazione tiene conto delle festività pasquali, che potrebbero impattare negativamente l'intesità del lavoro durante questo Sprint e il successivo.
=== Preventivo costi
Questo Sprint impiega:
- *Responsabile*: al fine di:
- coordinare le attività;
- revisionare i capitoli delle #ndp\.
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp\.
- *Progettista*: al fine di:
- redigere le sezioni della #st\;
- redigere le sezioni del #man\.
- *Programmatore*: al fine di concludere le attività di codifica del MVP;
- *Verificatore*: al fine di verificare la correttezza del lavoro prodotto e la sua coerenza con le #ndp\.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [2], [0], [0], [0], [0], [3], [5],
[Carraro], [0], [0], [0], [1], [4], [2], [7],
[Gardin], [2], [0], [0], [0], [0], [3], [5],
[Nardo], [0], [0], [0], [1], [4], [0], [5],
[Oseliero], [0], [2], [0], [0], [0], [0], [2],
[Todesco], [0], [0], [0], [0], [5], [0], [5],
[Zaccone], [0], [0], [0], [0], [5], [0], [5],
[Totale ore], [4], [2], [0], [2], [18], [8], [34],
[<NAME>], [120], [40], [0], [50], [270], [120], [600],
),
caption: "Prospetto del preventivo, Sprint 21"
)
#let data = (
("Responsabile", 4),
("Amministratore", 2),
("Analista", 0),
("Progettista", 2),
("Programmatore", 18),
("Verificatore", 8),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 21",
kind: "chart",
supplement: "Grafico"
)
== Sprint 22 dal 30/03/2024 al 07/04/2024
=== Obiettivi prefissati
Lo Sprint 22 ha come obiettivo il proseguimento del lavoro sui documenti oggetti di valutazione in sede di PB. Gli obiettivi prefissati sono:
- candidatura allo sportello PB col #cardin (data da definire, vincolata alla ricezione del feedback positivo sul documento #adr\);
- aggiornamento ed estensione del documento #pdp\;
- aggiornamento grafici delle metriche nel #pdq allo Sprint 21;
- aggiornamento della dashboard Grafana allo Sprint 21;
- revisione generale del documento di #st\;
- redazione delle seguenti sezioni del #man\:
- Griglia di collocamento;
- Ispezione bin;
- Collocazione zona nell'ambiente 3D;
- Ricerca zona;
- Spostamento POV su bin;
- Visualizzazione informazione prodotto;
- Visualizzazione lista prodotti;
- Ricerca prodotti;
- Visualizzazione lista ordini movimentazione;
- Visualizzazione singolo ordine movimentazione.
- aggiornamento di alcune immagini del #man al fine di renderle coerenti con le nuove funzionalità implementate.
Non sono previsti meeting con il Proponente in questo Sprint.
La pianificazione tiene conto del rientro dalle festività pasquali, che potrebbero impattare negativamente l'intesità del lavoro durante questo Sprint.
=== Preventivo costi
Questo Sprint impiega:
- *Responsabile*: al fine di coordinare le attività;
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp\.
- *Progettista*: al fine di revisionare il documento #st\;
- *Programmatore*: al fine di:
- redigere le sezioni del #man\;
- aggiornare le immagini del #man\.
- *Verificatore*: al fine di verificare la correttezza del lavoro prodotto e la sua coerenza con le #ndp\.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [2], [0], [0], [2],
[Carraro], [2], [0], [0], [0], [0], [2], [4],
[Gardin], [0], [0], [0], [0], [0], [3], [3],
[Nardo], [0], [3], [0], [0], [0], [3], [6],
[Oseliero], [0], [0], [0], [2], [0], [0], [2],
[Todesco], [2], [0], [0], [0], [0], [3], [5],
[Zaccone], [0], [0], [0], [0], [0], [3], [3],
[Totale ore], [4], [3], [0], [4], [0], [14], [25],
[<NAME>], [120], [60], [0], [100], [0], [210], [490],
),
caption: "Prospetto del preventivo, Sprint 22"
)
#let data = (
("Responsabile", 4),
("Amministratore", 3),
("Analista", 0),
("Progettista", 4),
("Programmatore", 0),
("Verificatore", 14),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 22",
kind: "chart",
supplement: "Grafico"
)
== Sprint 23 dal 07/04/2024 al 14/04/2024
=== Obiettivi prefissati
Lo Sprint 23 ha come obiettivo la preparazione per il colloquio PB col #cardin (10/04/2024) e la conclusione del lavoro di revisione dei documenti. Gli obiettivi prefissati sono:
- preparazione della presentazione per il colloquio PB col #cardin;
- aggiornamento ed estensione del documento #pdp\;
- aggiornamento grafici delle metriche nel #pdq allo Sprint 22;
- aggiornamento della dashboard Grafana allo Sprint 22;
- separazione della lista dei test di unità e quella dei test di integrazione nel documento #pdq;
- redazione della sezione relativa ai test di sistema nel documento #pdq;
- aggiunta della sezione riguardante la guida sull'avvio del software nel #man;
- rimozione della sezione "Supporto tecnico" dal #man.
=== Preventivo costi
Questo Sprint impiega:
- *Responsabile*: al fine di coordinare le attività;
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp\.
- *Progettista*: al fine di redigere le sezioni del #man\;
- *Verificatore*: al fine di verificare la correttezza del lavoro prodotto e la sua coerenza con le #ndp\.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [0], [0], [2], [2],
[Carraro], [0], [2], [0], [0], [0], [0], [2],
[Gardin], [0], [0], [0], [0], [0], [2], [2],
[Nardo], [0], [0], [0], [1], [0], [0], [1],
[Oseliero], [1], [0], [0], [0], [0], [2], [3],
[Todesco], [0], [0], [0], [0], [0], [2], [2],
[Zaccone], [0], [0], [0], [1], [0], [0], [1],
[Totale ore], [1], [2], [0], [2], [0], [8], [13],
[<NAME>], [30], [40], [0], [50], [0], [120], [240],
),
caption: "Prospetto del preventivo, Sprint 23"
)
#let data = (
("Responsabile", 1),
("Amministratore", 2),
("Analista", 0),
("Progettista", 2),
("Programmatore", 0),
("Verificatore", 8),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 23",
kind: "chart",
supplement: "Grafico"
)
== Sprint 24 dal 14/04/2024 al 21/04/2024
=== Obiettivi prefissati
Lo Sprint 24 prevede esclusivamente compiti di ordinaria amministrazione, in attesa della ricezione del feedback sul colloquio PB col #cardin. Gli obiettivi prefissati sono:
- candidatura allo sportello PB col #vardanega (data da definire, vincolata alla ricezione del feedback positivo sul documento #st\);
- aggiornamento ed estensione del documento #pdp\;
- aggiornamento grafici delle metriche nel #pdq allo Sprint 22;
- aggiornamento della dashboard Grafana allo Sprint 22.
=== Preventivo costi
Questo Sprint impiega:
- *Responsabile*: al fine di coordinare le attività;
- *Amministratore*: al fine di:
- redigere i verbali;
- aggiornare il documento #pdp\.
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [0], [0], [0], [0],
[Carraro], [0], [0], [0], [0], [0], [0], [0],
[Gardin], [0], [0], [0], [0], [0], [0], [0],
[Nardo], [1], [0], [0], [0], [0], [0], [1],
[Oseliero], [0], [0], [0], [0], [0], [0], [0],
[Todesco], [0], [0], [0], [0], [0], [0], [0],
[Zaccone], [0], [2], [0], [0], [0], [0], [2],
[Totale ore], [1], [2], [0], [0], [0], [0], [3],
[<NAME>], [30], [40], [0], [0], [0], [0], [70],
),
caption: "Prospetto del preventivo, Sprint 24"
)
#let data = (
("Responsabile", 1),
("Amministratore", 2),
("Analista", 0),
("Progettista", 0),
("Programmatore", 0),
("Verificatore", 0),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 24",
kind: "chart",
supplement: "Grafico"
)
#pagebreak()
= Consuntivi di periodo <consuntivi>
Il consuntivo di ogni Sprint permette di avere una valutazione critica dell'avanzamento dello sviluppo, valutando in modo oggettivo i punti positivi e negativi dello Sprint terminato.
Questa fase di retrospettiva è essenziale al fine di individuare possibili miglioramenti e di analizzare se la pianificazione ideata all'inizio dello Sprint sia stata concreta ed efficace.
== Sprint 1 dal 06-11-2023 al 13-11-2023
=== Obiettivi raggiunti
- Revisione preventivo costi con aggiornamento della suddivisione delle ore e conseguente riduzione del costo totale;
- Implementazione di automazioni per la compilazione dei documenti;
- Stesura delle domande in merito al dominio tecnologico del capitolato.
=== Obiettivi mancati
- Riconfigurazione della repository;
- Implementazione di automazioni per il versionamento dei documenti;
- Contatto con l'azienda Proponente per comunicare l'esito della candidatura;
- Contatto con l'azienda Proponente per fissare un primo meeting di analisi.
=== Problematiche
A causa di una sottostima del carico di lavoro per l'implementazione delle automazioni tramite GitHub Actions, alcuni obiettivi sono stati mancati, creando un effetto a catena che ha temporaneamente bloccato il caricamento di documenti nella repository.
Un altro problema riguarda la comunicazione con l'azienda Proponente: sfortunatamente, la mail inviata non ha ricevuto riscontro in tempo, impedendo di procedere ulteriormente con l'#adr e la programmazione di un meeting con l'azienda.
=== Risoluzioni attuate
Le automazioni riguardanti il versionamento verranno concluse nello Sprint successivo.
La problematica in merito al contatto con l'azienda vedrà l'intraprendersi di un'azione di sollecito con una seconda mail, e successivamente la richiesta di stabilire un nuovo canale di comunicazione.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [0], [0], [2 (-1)], [2 (-1)],
[Carraro], [0], [4], [0], [0], [0], [0], [4],
[Gardin], [0], [4], [0], [0], [0], [0], [4],
[Nardo], [0], [0], [0], [0], [0], [2 (-1)], [2 (-1)],
[Oseliero], [2 (-1)], [4 (+4)], [0], [0], [0], [0], [6 (+3)],
[Todesco], [0], [0], [2 (-1)], [0], [0], [0], [2 (-1)],
[Zaccone], [0], [0], [2 (-1)], [0], [0], [0], [2 (-1)],
[Totale ore], [2 (-1)], [12 (+4)], [4 (-2)], [0], [0], [4 (-2)], [22 (-1)],
[Costo\ ruolo], [€ 60 (-30)], [€ 240 (+80)], [€ 100 (-50)], [€ 0], [€ 0], [€ 60 (-30)], [€ 460 (-30)],
),
caption: "Prospetto del consuntivo, Sprint 1"
)
#let data = (
("Responsabile", 3, 2),
("Amministratore", 8, 12),
("Analista", 6, 4),
("Progettista", 0, 0),
("Programmatore", 0, 0),
("Verificatore", 6, 4),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 1",
kind: "chart",
supplement: "Grafico"
)
A causa dei rallentamenti incontrati, alcuni ruoli hanno visto un monte ore effettivo inferiore a quanto preventivato:
- Responsabile: il blocco temporaneo della repository e la mancata risposta dell'azienda non hanno permesso la produzione di documenti, limitando così il lavoro del Responsabile;
- Analista: la mancata risposta da parte dell'azienda ha impedito agli analisti di iniziare l'#adr, limitando dunque il lavoro alla stesura di domande in merito al dominio tecnologico;
- Verificatore: il mancato avanzamento ha prodotto documentazione ridotta rispetto a quanto preventivato, pertanto il Verificatore ha svolto un numero inferiore di ore.
Il ruolo dell'Amministratore, invece, ha visto un aumento delle ore rispetto a quanto preventivato, a causa di difficoltà incontrate nell'implementazione delle automazioni (errori, testing, verifica).
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [68], [€ 2040],
[Amministratore], [86], [€ 1720],
[Analista], [101], [€ 2525],
[Progettista], [49], [€ 1225],
[Programmatore], [196], [€ 2940],
[Verificatore], [143], [€ 2145],
[Rimanente], [643], [€ 12595],
),
caption: "Monitoraggio Sprint 1"
)
== Sprint 2 dal 13-11-2023 al 20-11-2023
=== Obiettivi raggiunti
- Riconfigurazione della repository;
- Stesura della sezione _Introduzione_ del documento #ndp;
- Stesura della sezione _Processi di supporto_ del documento #ndp;
- Perfezionamento delle automazioni per la compilazione dei documenti;
- Implementazione delle automazioni per il versionamento dei documenti;
- Contatto con l'azienda Proponente per comunicare l'esito della candidatura;
- Meeting con l'azienda Proponente per riflettere sull'#adr e sulle tecnologie da usare;
- Inizio dell'individuazione e della stesura degli Use Case.
=== Obiettivi mancati
Gli obiettivi sono stati tutti raggiunti, considerando anche che molti erano obiettivi mancati dello Sprint precedente.
=== Problematiche
- Il gruppo ha notato la mancanza di una struttura comune nei verbali, che porta a documenti senza una precisa convenzione e rallentamenti in fase di redazione;
- Sono emerse difficoltà nelle modalità di utilizzo della repository, nonostante sia stato dedicato del tempo per la formazione;
- Scarsa reattività in fase di review.
=== Risoluzioni attuate
- Per risolvere la problematica di una mancanza di struttura nei verbali, si sono fissate delle convenzioni da seguire nel documento #ndp;
- Per formare meglio il gruppo sulle modalità di utilizzo della repository, è stato dedicato tempo in più per fornire tutorial video e testuali a supporto del gruppo;
- Per ovviare alla scarsa reattività in fase di review, il gruppo prende la responsabilità di visionare spesso la casella mail personale per capire quando c'è bisogno di una review o di risolvere una conversation GitHub, oltre a sollecitare i Verificatori via i canali di comunicazione interni del gruppo in caso di mancate revisioni.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [0], [0], [2 (-1)], [2 (-1)],
[Carraro], [0], [5 (+1)], [0], [0], [0], [0], [5 (+1)],
[Gardin], [0], [4], [0], [0], [0], [0], [4],
[Nardo], [0], [0], [3], [0], [0], [0], [3],
[Oseliero], [3], [0], [0], [0], [0], [0], [3],
[Todesco], [0], [1 (+1)], [3], [0], [0], [0], [4 (+1)],
[Zaccone], [0], [2 (+2)], [3], [0], [0], [0], [5 (+2)],
[Totale ore], [3], [12 (+4)], [9], [0], [0], [2 (-1)], [26 (+3)],
[Costo\ ruolo], [€ 90], [€ 240 (+80)], [€ 225], [€ 0], [€ 0], [€ 30 (-15)], [€ 585 (+65)],
),
caption: "Prospetto del consuntivo, Sprint 2"
)
#let data = (
("Responsabile", 3, 3),
("Amministratore", 8, 12),
("Analista", 9, 9),
("Progettista", 0, 0),
("Programmatore", 0, 0),
("Verificatore", 3, 2),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 2",
kind: "chart",
supplement: "Grafico"
)
A causa delle problematiche incontrate, alcuni ruoli hanno visto un monte ore effettivo diverso a quanto preventivato:
- Amministratore: la scarsa comprensione delle modalità di utilizzo della repository da parte dei membri del gruppo ha portato gli Amministratori a dedicare delle ore in più mirate al perfezionamento della formazione dei membri del gruppo;
- Verificatore: la scarsa reattività in fase di review ha portato un impegno inferiore alle attività di revisione.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [65], [€ 1950],
[Amministratore], [74], [€ 1480],
[Analista], [92], [€ 2300],
[Progettista], [49], [€ 1225],
[Programmatore], [196], [€ 2940],
[Verificatore], [141], [€ 2115],
[Rimanente], [617], [€ 12010],
),
caption: "Monitoraggio Sprint 2"
)
== Sprint 3 dal 20-11-2023 al 27-11-2023
=== Obiettivi raggiunti
- Proseguimento del processo di individuazione e stesura degli Use Case;
- Perfezionamento delle automazioni di versionamento documenti;
- Perfezionamento del template usato per i documenti;
- Aggiornamento parziale del documento #ndp;
- Studio iniziale della libreria Three.js;
- Meeting con l'azienda Proponente per esporre ipotesi e Use Case individuati, richiedere chiarimenti ed avanzare opportune richieste;
- Contatto con l'azienda Proponente per fissare il prossimo meeting.
=== Obiettivi mancati
- Aggiornamento completo del documento #pdp con l'inserimento dei grafici di Gantt e di burndown.
=== Problematiche
Nonostante la realizzazione soddisfacente della maggior parte degli obiettivi concordati, sono emerse alcune criticità durante l'implementazione del progetto:
- si è riscontrata una pianificazione non ottimale e superficiale, attribuibile alla mancanza di un dettagliato processo di pianificazione durante la riunione precedente l'avvio dello Sprint;
- sono sorti problemi a causa della mancata definizione di standard per la creazione dei grafici di Gantt e burndown, comportando una stesura parziale dei paragrafi nel documento #pdp;
- la durata dei meeting ha superato le aspettative a causa di alcune inefficienze temporali;
- il gruppo ha rilevato la mancanza di standard per designare le persone responsabili della redazione dei verbali durante lo Sprint.
=== Risoluzioni attuate
Le risoluzioni attuate per risolvere i problemi citati in precedenza si concentrano su un'organizzazione e un'attenzione maggiore nel processo di pianificazione, oltre alla definizione di standard relativi ai grafici da inserire nei documenti e relativi allo svolgimento dei meeting e redazione dei rispettivi verbali. In particolare:
- è stata prestata una maggiore attenzione nella pianificazione del nuovo Sprint, introducendo nella board retrospettiva di Miro una bacheca relativa alle task da svolgere, che sono state tradotte fin da subito in ticket di Jira e assegnati ai rispettivi ruoli;
- la durata massima dei meeting di retrospettiva è stata fissata a 90 minuti (_soft limit_);
- sono stati definiti degli standard per la realizzazione dei grafici, adottando quelli proposti da Jira;
- è stato assegnato all'Amministratore il compito della redazione dei verbali.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [2 (-1)], [0], [0], [0], [0], [0], [2 (-1)],
[Carraro], [0], [0], [4], [0], [0], [0], [4],
[Gardin], [0], [0], [0], [0], [0], [3], [3],
[Nardo], [0], [0], [0], [3], [0], [0], [3],
[Oseliero], [0], [0], [4], [0], [0], [0], [4],
[Todesco], [0], [4 (+1)], [0], [0], [0], [0], [4 (+1)],
[Zaccone], [0], [4 (+1)], [0], [0], [0], [0], [4 (+1)],
[Totale ore], [2 (-1)], [8 (+2)], [8], [3], [0], [3], [24 (+1)],
[Costo\ ruolo], [€ 60 (-30)], [€ 160 (+40)], [€ 200], [€ 75], [€ 0], [€ 45], [€ 540 (+10)],
),
caption: "Prospetto del consuntivo, Sprint 3"
)
#let data = (
("Responsabile", 3, 2),
("Amministratore", 6, 8),
("Analista", 8, 8),
("Progettista", 3, 3),
("Programmatore", 0, 0),
("Verificatore", 3, 3),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 3",
kind: "chart",
supplement: "Grafico"
)
A causa dei problemi incontrati, alcuni ruoli hanno visto un monte ore effettivo diverso a quanto preventivato:
- Responsabile: a causa di mancati standard su come effettuare una pianificazione ottimale il responsabile ha impiegato meno ore di quanto previsto;
- Amministratore: a causa dei mancati standard sulla realizzazione dei grafici gli Amministratori hanno impiegato più ore finalizzate allo studio sui grafici da adottare.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [63], [€ 1890],
[Amministratore], [66], [€ 1320],
[Analista], [84], [€ 2100],
[Progettista], [46], [€ 1150],
[Programmatore], [196], [€ 2940],
[Verificatore], [138], [€ 2070],
[Rimanente], [593], [€ 11470],
),
caption: "Monitoraggio Sprint 3"
)
== Sprint 4 dal 27-11-2023 al 04-12-2023
=== Obiettivi raggiunti
- Perfezionamento della stesura degli Use Case individuati finora;
- Redazione di una prima versione del documento #adr;
- Scelta di una data per un eventuale primo meeting con il #cardin per ottenere un feedback sull'#adr;
- Proseguimento della redazione e aggiornamento del documento #ndp;
- Proseguimento periodo di studio di Three.js, finalizzato a:
- creazione di scaffali nell'ambiente tridimensionale;
- implementazione sistema di _drag and drop_;
- parametrizzazione degli elementi presenti nell'applicazione.
=== Obiettivi mancati
Tutti gli obiettivi sono stati raggiunti.
=== Problematiche
Il gruppo ha riportato una scarsa reattività durante il processo di review e verifica, con conseguente rallentamento del lavoro. Le review sono state rallentate anche dalla presenza di numerosi merge conflicts, che il gruppo si è ritrovato a gestire per la prima volta nel progetto.
Inoltre si è presa coscienza della necessità di apportare migliorie al processo di gestione di Jira.
=== Risoluzioni attuate
Il gruppo ha preso in considerazione l'utilizzo di Graphite per velocizzare il processo di review e creare pull request brevi. L'adozione di questo strumento, o suoi analoghi, verrà valutata dall'Amministratore.
Inoltre, il gruppo ha preso l'impegno di formarsi in maniera approfondita sull'utilizzo di Jira, per migliorare l'amministrazione del progetto.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [3], [0], [0], [0], [0], [0], [3],
[Carraro], [0], [0], [3], [0], [2 (+1)], [0], [5 (+1)],
[Gardin], [0], [0], [0], [0], [0], [4 (+1)], [4 (+1)],
[Nardo], [0], [0], [0], [1], [2], [0], [3],
[Oseliero], [0], [0], [4], [0], [0], [0], [4],
[Todesco], [0], [2 (-1)], [0], [0], [0], [0], [2 (-1)],
[Zaccone], [0], [2 (-1)], [0], [0], [0], [0], [2 (-1)],
[Totale ore], [3], [4 (-2)], [7], [1], [4 (+1)], [4 (+1)], [23],
[Costo\ ruolo], [€ 90], [€ 80 (-40)], [€ 175], [€ 25], [€ 60 (+15)], [€ 60 (+15)], [€ 490 (-10)],
),
caption: "Prospetto del consuntivo, Sprint 4"
)
#let data = (
("Responsabile", 3, 3),
("Amministratore", 6, 4),
("Analista", 7, 7),
("Progettista", 1, 1),
("Programmatore", 3, 4),
("Verificatore", 3, 4),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 4",
kind: "chart",
supplement: "Grafico"
)
A causa dei rallentamenti incontrati, alcuni ruoli hanno visto un monte ore effettivo diverso a quanto preventivato:
- Amministratore: l'utilizzo di Jira in maniera approssimativa ha portato ad un monte ore inferiore a quanto previsto;
- Verificatore: i numerosi merge conflicts hanno portato a notevoli rallentamenti in fase di review.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [60], [€ 1800],
[Amministratore], [62], [€ 1240],
[Analista], [77], [€ 1925],
[Progettista], [45], [€ 1125],
[Programmatore], [192], [€ 2880],
[Verificatore], [134], [€ 2010],
[Rimanente], [570], [€ 10980],
),
caption: "Monitoraggio Sprint 4"
)
== Sprint 5 dal 04-12-2023 al 11-12-2023
=== Obiettivi raggiunti
- Perfezionamento sistema di _drag and drop_ nel PoC;
- Creazione degli scaffali nel PoC;
- Miglioramento e aggiornamento documento #adr;
- Realizzazione PoC relativo alla creazione e posizionamento di scaffali dell'ambiente di lavoro;
- Realizzazione PoC relativo alla lettura e utilizzo di file SVG e comunicazione con database;
- Realizzazione PoC che integra l'utilizzo di Docker;
- Incontro con Proponente in data 06-12-23;
- Incontro con #cardin in data 07-12-23;
- Redazione del Glossario;
- Aggiornamento documento #ndp;
- Aggiornamento documento #pdp;
- Aggiornare vecchi documenti con nuovo template.
=== Obiettivi mancati
- Miglioramento GitHub Actions.
=== Problematiche
Durante il meeting di retrospettiva sono emerse le seguenti problematiche da migliorare:
- mancanza di precise convenzioni da adottare riguardanti il codice;
- lavoro concentrato principalmente nel weekend;
- richiesta di maggiore partecipazione dei membri del gruppo sulle board di Miro.
=== Risoluzioni attuate
Conseguentemente ai problemi rilevati, sono state individuate le relative soluzioni da adottare:
- normare le convenzioni di stesura del codice nelle #ndp;
- impegnarsi a fissare delle scadenze infrasettimanali così da ridurre il carico di lavoro nel weekend;
- utilizzare le board su Miro con anticipo aggiungendo il proprio feedback.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [4], [0], [0], [0], [4],
[Carraro], [3], [0], [0], [0], [2], [0], [5],
[Gardin], [0], [0], [4], [0], [0], [0], [4],
[Nardo], [0], [2 (-1)], [0], [0], [2], [0], [4 (-1)],
[Oseliero], [0], [0], [0], [1 (-1)], [2], [0], [3 (-1)],
[Todesco], [0], [0], [0], [0], [0], [3], [3],
[Zaccone], [0], [0], [0], [1 (-1)], [2], [0], [3 (-1)],
[Totale ore], [3], [2 (-1)], [8], [2 (-2)], [8], [3], [26 (-3)],
[Costo\ ruolo], [€ 90], [€ 40 (-20)], [€ 200], [€ 50 (-50)], [€ 120], [€ 45], [€ 545 (-70)],
),
caption: "Prospetto del consuntivo, Sprint 5"
)
#let data = (
("Responsabile", 3, 3),
("Amministratore", 3, 2),
("Analista", 8, 8),
("Progettista", 4, 2),
("Programmatore", 8, 8),
("Verificatore", 3, 3),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 5",
kind: "chart",
supplement: "Grafico"
)
Per produrre una prima versione dei PoC, i Progettisti (e in parte il Responsabile e l'Amministratore) hanno impiegato ore produttive come Programmatori, questo ha in parte significato una riduzione delle ore produttive previste per il ruolo assegnato.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [57], [€ 1710],
[Amministratore], [60], [€ 1200],
[Analista], [69], [€ 1725],
[Progettista], [43], [€ 1075],
[Programmatore], [184], [€ 2760],
[Verificatore], [131], [€ 1965],
[Rimanente], [544], [€ 10435],
),
caption: "Monitoraggio Sprint 5"
)
== Sprint 6 dal 11-12-2023 al 18-12-2023
=== Obiettivi raggiunti
- Avanzamento del PoC A con:
- miglioramento grafico degli scaffali;
- posizionamento automatico dei bin nello scaffale al momento della sua creazione;
- implementazione visualizzazione prodotti dei bin tramite alert JavaScript.
- Aggiornamento documento #ndp;
- Aggiornamento documento #adr con:
- refactoring degli Use Case secondo le indicazioni del #cardin;
- redazione requisiti funzionali;
- redazione requisiti di qualità;
- redazione requisiti di vincolo.
- Redazione introduzione #pdq;
- Meeting con l'azienda Proponente per:
- esporre il PoC A;
- aggiornare sullo stato dei lavori;
- chiedere feedback sui requisiti non funzionali.
- Conversione del Glossario in JSON per automatizzare l'individuazione dei termini nei documenti.
=== Obiettivi mancati
- Migliorare GitHub Actions risolvendo eventuali problemi o aggiungendo funzionalità;
- Implementazione PoC esplorativo per il front-end.
=== Problematiche
Durante il meeting di retrospettiva sono emerse le seguenti problematiche da migliorare:
- mancata comunicazione di situazioni di difficoltà o problemi che hanno portato a rallentamenti nella produzione di un PoC per il front-end;
- la stesura del documento #ndp ha una velocità di avanzamento troppo bassa.
=== Risoluzioni attuate
Conseguentemente ai problemi rilevati, sono state individuate le relative soluzioni da adottare:
- si richiede ai componenti del gruppo una comunicazione rapida delle difficoltà e problemi, non appena questi si presentano;
- il numero di persone assegnate alla redazione delle #ndp verrà incrementato da due a tre, in modo da garantire la produzione di un maggior numero di sezioni durante lo Sprint.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [4], [0], [0], [0], [4],
[Carraro], [3], [0], [0], [0], [0], [0], [3],
[Gardin], [0], [0], [4], [0], [0], [0], [4],
[Nardo], [0], [3], [0], [0], [0], [0], [3],
[Oseliero], [0], [0], [0], [1], [2 (-1)], [0], [3 (-1)],
[Todesco], [0], [0], [0], [0], [0], [3], [3],
[Zaccone], [0], [0], [0], [1], [3], [0], [4],
[Totale ore], [3], [3], [8], [2], [5 (-1)], [3], [24 (-1)],
[Costo\ ruolo], [€ 90], [€ 60], [€ 200], [€ 50], [€ 75 (-15)], [€ 45], [€ 520 (-15)],
),
caption: "Prospetto del consuntivo, Sprint 6"
)
#let data = (
("Responsabile", 3, 3),
("Amministratore", 3, 3),
("Analista", 8, 8),
("Progettista", 2, 2),
("Programmatore", 6, 5),
("Verificatore", 3, 3),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 6",
kind: "chart",
supplement: "Grafico"
)
Il ruolo di Programmatore presenta un monte ore effettivo minore rispetto a quello preventivato a causa di problemi tecnici che hanno interrotto la lavorazione del PoC.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [54], [€ 1620],
[Amministratore], [57], [€ 1140],
[Analista], [61], [€ 1525],
[Progettista], [41], [€ 1025],
[Programmatore], [179], [€ 2685],
[Verificatore], [128], [€ 1920],
[Rimanente], [520], [€ 9915],
),
caption: "Monitoraggio Sprint 6"
)
== Sprint 7 dal 18-12-2023 al 25-12-2023
=== Obiettivi raggiunti
- Integrazione dei PoC realizzati in un unico PoC;
- Individuazione di metriche di base per il #pdq;
- Aggiornamento del documento #ndp;
- Automatizzata l'individuazione dei termini del glossario nei documenti.
=== Obiettivi mancati
- Implementazione del tracciamento requisito-fonte nel documento #adr;
- revisione generale del documento per verificare la presenza e la correttezza di tutti gli Use Case e requisiti necessari;
- Correzione di bug presenti nelle GitHub Actions.
=== Problematiche
Durante il meeting di retrospettiva sono sorte le seguenti problematiche:
- il pool mensile di automatismi inclusi nel piano gratuito relativi ai ticket di Jira è stato esaurito;
- lentezza nell'approvazione delle correzioni richieste nelle review;
- nelle review delle pull request alcuni commenti risultano essere poco chiari e/o sbrigativi;
- il progresso relativo al documento #ndp è risultato scarso in confronto con gli Sprint precedenti, e in ritardo rispetto a quanto pianificato;
- il tracciamento delle attività relative al PoC non ha lo stesso livello di precisione di quello delle attività documentali.
=== Risoluzioni attuate
- Richiedere l'upgrade gratuito a Jira Standard per progetti open source;
- Notificare, tramite i canali dedicati, chi debba apportare correzioni così da velocizzare i tempi di approvazione;
- Impegnarsi a fornire commenti di review più precisi:
- indicando con precisione la natura dell'errore e, se utile, la sua localizzazione all'interno della riga;
- fornendo una possibile correzione, quando appropriato;
- nel caso di più occorrenze dello stesso errore, segnalarle tutte. Per evitare ripetizioni, fornire una valutazione della prima occorrenza ed inserire un riferimento a tale conversazione nelle occorrenze successive.
- Per accelerare il lavoro sulle #ndp, chi ha redatto le #ndp nel corso degli Sprint precedenti continuerà l'affiancamento nel corso dello Sprint successivo;
- La natura esplorativa del PoC richiede flessibilità nella pianificazione, ma si sottolinea la necessità di continuare a tracciare le attività per riferimento futuro.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [3 (-1)], [0], [0], [0], [0], [3 (-1)],
[Carraro], [0], [0], [0], [0], [3], [0], [3],
[Gardin], [0], [0], [0], [3], [0], [0], [3],
[Nardo], [2 (-1)], [0], [0], [0], [0], [0], [2 (-1)],
[Oseliero], [0], [0], [0], [0], [2], [2], [4],
[Todesco], [0], [0], [0], [2 (-1)], [0], [0], [2 (-1)],
[Zaccone], [0], [0], [1 (-2)], [0], [0], [0], [1 (-2)],
[Totale ore], [2 (-1)], [3 (-1)], [1 (-2)], [5 (-1)], [5], [2], [18 (-5)],
[Costo\ ruolo], [€ 60 (-30)], [€ 60 (-20)], [€ 25 (-50)], [€ 125 (-25)], [€ 75], [€ 30], [€ 375 (-125)],
),
caption: "Prospetto del consuntivo, Sprint 7"
)
#let data = (
("Responsabile", 3, 2),
("Amministratore", 4, 3),
("Analista", 3, 1),
("Progettista", 6, 5),
("Programmatore", 5, 5),
("Verificatore", 2, 2),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 7",
kind: "chart",
supplement: "Grafico"
)
Diversi ruoli risultano avere un monte ore minore rispetto a quello preventivato, a causa di rallentamenti nella stesura delle #ndp e, per quanto riguarda l'Analista, un rallentamento dovuto alla mancanza di materiale su cui lavorare: il materiale necessario era infatti presente in un branch di cui non era ancora stata fatta una pull request.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [52], [€ 1560],
[Amministratore], [54], [€ 1080],
[Analista], [60], [€ 1500],
[Progettista], [36], [€ 900],
[Programmatore], [174], [€ 2610],
[Verificatore], [126], [€ 1890],
[Rimanente], [502], [€ 9540],
),
caption: "Monitoraggio Sprint 7"
)
== Sprint 8 dal 26-12-2023 al 31-12-2023
=== Obiettivi raggiunti
- Miglioramento del Glossario con gestione di plurali e acronimi dei termini ed evidenziazione _case insensitive_;
- Implementato il tracciamento requisito-fonte nell'#adr;
- Aggiornamento del documento #ndp;
- Realizzato sito web per la documentazione;
- Automatizzata l'individuazione dei termini del Glossario nei documenti.
- Realizzazione di una dashboard preliminare Grafana;
- Perfezionamento PoC finale.
=== Obiettivi mancati
- La dashboard di monitoraggio non implementa tutte le metriche individuate nel #pdq;
- Rimane da comprendere a fondo le implicazioni della scelta tra Next.js ed Express.js;
- Permangono bug nelle GitHub Action.
=== Problematiche
Durante il meeting di retrospettiva sono sorte le seguenti problematiche:
- i lavori hanno subito rallentamenti dovuti al periodo di festività.
=== Risoluzioni attuate
Nessuna.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [4 (+1)], [0], [0], [0], [0], [4 (+1)],
[Carraro], [0], [0], [0], [0], [5 (+2)], [0], [5 (+2)],
[Gardin], [0], [2 (+2)], [0], [2 (-1)], [0], [0], [4 (+1)],
[Nardo], [4], [0], [0], [0], [0], [0], [4],
[Oseliero], [0], [0], [0], [0], [0], [4], [4],
[Todesco], [0], [0], [0], [3], [0], [0], [3],
[Zaccone], [0], [0], [3], [0], [0], [0], [3],
[Totale ore], [4], [6 (+3)], [3], [5 (-1)], [5 (+2)], [4], [27 (+4)],
[Costo\ ruolo], [€ 120], [€ 120 (+60)], [€ 75], [€ 125 (-25)], [€ 75 (+30)], [€ 60], [€ 575 (+65)],
),
caption: "Prospetto del consuntivo, Sprint 8"
)
#let data = (
("Responsabile", 4, 4),
("Amministratore", 3, 6),
("Analista", 3, 3),
("Progettista", 6, 5),
("Programmatore", 3, 5),
("Verificatore", 4, 4),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 8",
kind: "chart",
supplement: "Grafico"
)
La ricerca di una valida soluzione per implementare la dashboard di monitoraggio ha richiesto un impegno maggiore rispetto a quanto preventivato. La scelta di adottare Grafana comporterà costi di adozione più alti rispetto a soluzioni standard. La contemporanea implementazione del sito web per la documentazione ha accentuato la necessità di avvalersi della figura dell'Amministratore.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [48], [€ 1440],
[Amministratore], [48], [€ 960],
[Analista], [57], [€ 1425],
[Progettista], [31], [€ 775],
[Programmatore], [169], [€ 2535],
[Verificatore], [122], [€ 1830],
[Rimanente], [475], [€ 8965],
),
caption: "Monitoraggio Sprint 8"
)
== Sprint 9 dal 01-01-2024 al 08-01-2024
=== Obiettivi raggiunti
Tutti gli obiettivi dello Sprint sono stati raggiunti.
=== Obiettivi mancati
Nessuno.
=== Problematiche
Durante il meeting di retrospettiva sono sorte le seguenti problematiche:
- le norme di formattazione del testo non sono state rispettate in alcune sezioni dei documenti, specialmente nelle sezioni redatte meno di recente;
- le norme per l'inclusione di riferimenti a documenti esterni sono imprecise;
- alcuni ticket non urgenti vengono erroneamente creati ed inseriti nel backlog dello Sprint in corso, falsando la reportistica di Jira.
=== Risoluzioni attuate
- Durante la verifica dei documenti in ottica RTB, il Verificatore controllerà anche l'uso conforme della formattazione del testo;
- Qualora il Redattore desideri includere un riferimento ad un documento esterno, dovrà sincerarsi che:
+ i riferimenti ai documenti riportino il numero di versione più recente;
+ i link ai documenti siano riportati nella sezione Riferimenti;
+ i link ai documenti siano riportati per esteso;
+ i link puntino al documento PDF contenuto nel branch `main`.
- I ticket non urgenti saranno aggiunti al backlog generale e non a quello dello Sprint in corso.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [3 (-2)], [0], [0], [0], [0], [3 (-2)],
[Carraro], [0], [0], [0], [0], [0], [3 (-1)], [3 (-1)],
[Gardin], [0], [1 (-4)], [0], [1 (+1)], [0], [0], [2 (-3)],
[Nardo], [0], [0], [0], [0], [0], [3 (-1)], [3 (-1)],
[Oseliero], [0], [0], [4 (-1)], [0], [0], [0], [4 (-1)],
[Todesco], [0], [0], [0], [0], [0], [3 (-1)], [3 (-1)],
[Zaccone], [3 (-1)], [0], [0], [0], [0], [0], [3 (-1)],
[Totale ore], [3 (-1)], [4 (-6)], [4 (-1)], [1 (+1)], [0], [9 (-3)], [21 (-10)],
[Costo\ ruolo], [€ 90 (-30)], [€ 80 (-120)], [€ 100 (-25)], [€ 25 (+25)], [€ 0], [€ 135 (-45)], [€ 430 (-195)],
),
caption: "Prospetto del consuntivo, Sprint 9"
)
#let data = (
("Responsabile", 4, 3),
("Amministratore", 10, 4),
("Analista", 5, 4),
("Progettista", 0, 1),
("Programmatore", 0, 0),
("Verificatore", 12, 9),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 9",
kind: "chart",
supplement: "Grafico"
)
La pianificazione delle attività per lo Sprint 9 è stata conservativa. L'entità di alcune attività, specialmente quelle relative alle automazioni e alla dashboard, si sono rivelate sensibilmente minori del previsto. Il Progettista, inizialmente non previsto, è stato interpellato per la realizzazione della presentazione per il colloquio RTB con il #cardin.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [45], [€ 1350],
[Amministratore], [44], [€ 880],
[Analista], [53], [€ 1325],
[Progettista], [30], [€ 750],
[Programmatore], [169], [€ 2535],
[Verificatore], [113], [€ 1695],
[Rimanente], [454], [€ 8535],
),
caption: "Monitoraggio Sprint 9"
)
== Sprint 10 dal 08-01-2024 al 14-01-2024
=== Obiettivi raggiunti
- I problemi prestazionali del PoC sono stati risolti con successo;
- Le automazioni hanno subito un refactoring al fine di migliorarne l'affidabilità, la flessibilità e il riuso;
- I documenti #pdq, #pdp sono stati verificati in vista della valutazione RTB;
- Preparazione della presentazione Google Slides a supporto della revisione RTB.
=== Obiettivi mancati
- La dashboard di monitoraggio non è stata ampliata.
=== Problematiche
Durante il meeting di retrospettiva sono sorte le seguenti problematiche:
- la comunicazione tra i ruoli è risultata non allineata (_siloed_), e ha causato confusione e rallentamenti del processo di revisione;
- riscontrato uno sbilanciamento dell'impegno dei singoli membri del gruppo.
=== Risoluzioni attuate
I membri del gruppo si impegneranno a comunicare in maniera più tempestiva e precisa modifiche apportate e/o aggiornamenti alla configurazione degli strumenti di supporto.
L'impegno individuale dei membri risulta sbilanciato a causa dell'imminente sessione d'esame invernale. Pertanto, la pianificazione dei prossimi sprint terrà conto della ridotta disponibilità di tempo. È previsto che la situazione migliorerà entro il termine dello sprint 13.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [0], [0], [3], [3],
[Carraro], [0], [0], [0], [1 (+1)], [0], [2], [3 (+1)],
[Gardin], [0], [2], [0], [0], [0], [0], [2],
[Nardo], [0], [0], [0], [0], [2], [0], [2],
[Oseliero], [0], [0], [2], [0], [0], [0], [2],
[Todesco], [1 (+1)], [0], [0], [0], [2], [0], [3 (+1)],
[Zaccone], [1 (-1)], [0], [0], [0], [0], [3], [4 (-1)],
[Totale ore], [2], [2], [2], [1 (+1)], [4], [8], [19 (+1)],
[Costo\ ruolo], [€ 60], [€ 40], [€ 50], [€ 25 (+25)], [€ 60], [€ 120], [€ 355 (+25)],
),
caption: "Prospetto del consuntivo, Sprint 10"
)
#let data = (
("Responsabile", 2, 2),
("Amministratore", 2, 2),
("Analista", 2, 2),
("Progettista", 0, 1),
("Programmatore", 4, 4),
("Verificatore", 8, 8),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 10",
kind: "chart",
supplement: "Grafico"
)
La presenza del Progettista, non pianificata, deriva dalla necessità di decidere quale software adottare (tra Next.js ed Express.js) per l'implementazione di un componente del PoC. Entrambe le tecnologie sono state approfondite ed implementate durante lo sviluppo dei PoC, ma non era ancora stata presa una scelta definitiva. Inoltre, la redazione della presentazione per il colloquio RTB ha nuovamente richiesto l'intervento del Progettista.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [43], [€ 1290],
[Amministratore], [42], [€ 840],
[Analista], [51], [€ 1275],
[Progettista], [29], [€ 725],
[Programmatore], [165], [€ 2475],
[Verificatore], [105], [€ 1575],
[Rimanente], [435], [€ 8180],
),
caption: "Monitoraggio Sprint 10"
)
== Sprint 11 dal 14-01-2024 al 21-01-2024
=== Obiettivi raggiunti
- Inviata la candidatura al #cardin per il colloquio RTB;
- Al #pdp sono state apportate modifiche minori per migliorare la leggibilità dei dati;
- Il sito web è accessibile in ogni sua parte e comprende i riferimenti di ogni documento prodotto dal gruppo;
- I contenuti della presentazione per il colloquio RTB sono stati definiti e il documento è stato completato.
=== Obiettivi mancati
Nessuno.
=== Problematiche
Durante il meeting di retrospettiva sono sorte le seguenti problematiche:
- carico di lavoro asimmetrico tra i membri del gruppo a causa dei differenti esami da sostenere.
=== Risoluzioni attuate
Nessuna. La situazione di carico di lavoro asimmetrico è destinata a migliorare con l'approcciarsi della fine della sessione d'esami invernale.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [1 (-1)], [0], [0], [0], [1 (-1)],
[Carraro], [0], [0], [0], [0], [0], [1 (-1)], [1 (-1)],
[Gardin], [0], [0], [0], [0], [0], [1 (-1)], [1 (-1)],
[Nardo], [2 (-1)], [0], [0], [0], [0], [0], [2 (-1)],
[Oseliero], [0], [0], [1], [0], [0], [0], [1],
[Todesco], [0], [3], [0], [0], [0], [0], [3],
[Zaccone], [0], [0], [0], [0], [0], [2], [2],
[Totale ore], [2 (-1)], [3], [2 (-1)], [0], [0], [4 (-2)], [11 (-4)],
[Costo\ ruolo], [€ 60 (-30)], [€ 60], [€ 50 (-25)], [€ 0], [€ 0], [€ 60 (-30)], [€ 230 (-85)],
),
caption: "Prospetto del consuntivo, Sprint 11"
)
#let data = (
("Responsabile", 3, 2),
("Amministratore", 3, 3),
("Analista", 3, 2),
("Progettista", 0, 0),
("Programmatore", 0, 0),
("Verificatore", 6, 4),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 11",
kind: "chart",
supplement: "Grafico"
)
La necessità dei ruoli di Responsabile e di Analista è stata minore rispetto alle attese. Anche il Verificatore ha lavorato meno del previsto, visto l'entità minima delle modifiche ai documenti. Tale figura rimane comunque la più impegnata, a causa della necessità di verificare la presentazione per il colloquio RTB.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [41], [€ 1230],
[Amministratore], [39], [€ 780],
[Analista], [49], [€ 1225],
[Progettista], [29], [€ 725],
[Programmatore], [165], [€ 2475],
[Verificatore], [101], [€ 1515],
[Rimanente], [424], [€ 7950],
),
caption: "Monitoraggio Sprint 11"
)
== Sprint 12 dal 21-01-2024 al 28-01-2024
=== Obiettivi raggiunti
- Colloquio con il #cardin superato con luce "ampiamente verde";
- Aggiunta di grafici al #pdp:
- suddivisione oraria per ruolo per preventivi e consuntivi.
=== Obiettivi mancati
- Estensione dei termini di Glossario.
=== Problematiche
Nessuna.
=== Risoluzioni attuate
Nessuna.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [1], [0], [0], [1],
[Carraro], [0], [0], [1 (+1)], [1], [0], [0], [2 (+1)],
[Gardin], [0], [0], [0], [1], [0], [0], [1],
[Nardo], [1], [0], [0], [0], [0], [0], [1],
[Oseliero], [0], [0], [0], [0], [0], [1], [1],
[Todesco], [0], [1], [0], [0], [0], [0], [1],
[Zaccone], [0], [0], [0], [0], [0], [1], [1],
[Totale ore], [1], [1], [1 (+1)], [3], [0], [2], [8 (+1)],
[Costo\ ruolo], [€ 30], [€ 20], [€ 25 (+25)], [€ 75], [€ 0], [€ 30], [€ 180 (+25)],
),
caption: "Prospetto del consuntivo, Sprint 12"
)
#let data = (
("Responsabile", 1, 1),
("Amministratore", 1, 1),
("Analista", 0, 1),
("Progettista", 3, 3),
("Programmatore", 0, 0),
("Verificatore", 2, 2),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 12",
kind: "chart",
supplement: "Grafico"
)
Il totale delle ore di lavoro preventivate è stato in gran parte rispettato. Tuttavia, il ruolo di Analista è stato attivato per correggere un errore di forma riscontrato nel calcolo dei codici identificativi dei requisiti.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [40], [€ 1200],
[Amministratore], [38], [€ 760],
[Analista], [48], [€ 1200],
[Progettista], [26], [€ 650],
[Programmatore], [165], [€ 2475],
[Verificatore], [99], [€ 1485],
[Rimanente], [416], [€ 7770],
),
caption: "Monitoraggio Sprint 12"
)
== Sprint 13 dal 28-01-2024 al 04-02-2024
=== Obiettivi raggiunti
- Preparazione della pianificazione dello Sprint 14 a seguito del risultato del colloquio RTB sostenuto con il #cardin, ricevuto il 31/01/2024.
=== Obiettivi mancati
Nessuno.
=== Problematiche
Il feedback ricevuto dal #cardin evidenzia diverse lacune nel documento #adr. Sarà dunque necessario investire risorse per correggere e migliorare il documento. Ciò potrebbe influire negativamente sull'avanzamento del progetto.
=== Risoluzioni attuate
Nessuna.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [0], [0], [0], [0],
[Carraro], [0], [0], [0], [0], [0], [0], [0],
[Gardin], [3 (+1)], [0], [0], [0], [0], [0], [3 (+1)],
[Nardo], [0], [0], [0], [0], [0], [0], [0],
[Oseliero], [0], [1], [0], [0], [0], [0], [1],
[Todesco], [0], [0], [0], [0], [0], [0], [0],
[Zaccone], [0], [0], [0], [0], [0], [0], [0],
[Totale ore], [3 (+1)], [1], [0], [0], [0], [0], [4 (+1)],
[Costo\ ruolo], [€ 90 (+30)], [€ 20], [€ 0], [€ 0], [€ 0], [€ 0], [€ 110 (+30)],
),
caption: "Prospetto del consuntivo, Sprint 13"
)
#let data = (
("Responsabile", 2, 3),
("Amministratore", 1, 1),
("Analista", 0, 0),
("Progettista", 0, 0),
("Programmatore", 0, 0),
("Verificatore", 0, 0),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 13",
kind: "chart",
supplement: "Grafico"
)
Lo Sprint 13 è stato caratterizzato da un momento di pausa, dovuto alla concomitanza degli esami, conclusosi mercoledì 31/01/2024 con la ricezione del feedback da parte del #cardin. Nel corso dei giorni rimanenti, il Responsabile ha aggiornato la pianificazione per adeguarla al diverso contesto di lavoro.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [37], [€ 1110],
[Amministratore], [37], [€ 740],
[Analista], [48], [€ 1200],
[Progettista], [26], [€ 650],
[Programmatore], [165], [€ 2475],
[Verificatore], [99], [€ 1485],
[Rimanente], [412], [€ 7660],
),
caption: "Monitoraggio Sprint 13"
)
== Sprint 14 dal 04-02-2024 al 11-02-2024
=== Obiettivi raggiunti
- Revisione, correzione ed estensione del documento #adr secondo il feedback ricevuto dal #cardin;
- Estensione delle metriche riportate nella dashboard di monitoraggio;
- Estensione dei termini di Glossario per riflettere i nuovi strumenti adottati;
- Contattato il Proponente per fornire aggiornamenti sullo stato dei lavori e programmare gli incontri successivi al colloquio RTB.
=== Obiettivi mancati
Nessuno.
=== Problematiche
Durante il meeting di retrospettiva sono sorte le seguenti problematiche:
- il carico di lavoro sul Verificatore è stato elevato;
- numerosi conflitti di _merge_ dovuti al lavoro parallelo su ticket associati allo stesso documento;
- la board Miro ha mostrato scarsità di riscontro su _Keep doings_ e _Improvements_ prima dell'incontro di retrospettiva.
=== Risoluzioni attuate
- I conflitti di _merge_ sono inevitabili quando più branch insistono sullo stesso documento. Durante questo Sprint la maggior parte del lavoro è avvenuta sul documento #adr. Si cercherà di evitare la sovrapposizione di lavoro sui documenti coordinando il lavoro di redattori e Verificatori;
- Il Responsabile ha sottolineato l'importanza di individuare _Keep doings_ e _Improvements_ prima dell'incontro di retrospettiva. Non si tratta di un'attività facoltativa, ma di un processo chiave nel miglioramento continuo del gruppo.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [3], [0], [0], [0], [3],
[Carraro], [0], [1], [1], [0], [0], [2], [4],
[Gardin], [3], [0], [2], [0], [0], [0], [5],
[Nardo], [0], [0], [2], [0], [0], [3], [5],
[Oseliero], [0], [2], [0], [0], [0], [2], [4],
[Todesco], [0], [0], [0], [1], [0], [3], [4],
[Zaccone], [0], [0], [0], [1], [0], [3], [4],
[Totale ore], [3], [3], [8], [2], [0], [13], [29],
[<NAME>], [90], [60], [200], [50], [0], [195], [595],
),
caption: "Prospetto del consuntivo, Sprint 14"
)
#let data = (
("Responsabile", 3, 3),
("Amministratore", 3, 3),
("Analista", 8, 8),
("Progettista", 2, 2),
("Programmatore", 0, 0),
("Verificatore", 13, 13),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 14",
kind: "chart",
supplement: "Grafico"
)
La pianificazione di questo Sprint è stata precisa e rispettata. Il lavoro del Verificatore è stato particolarmente intenso a causa delle numerose modifiche apportate al documento #adr.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [34], [1020],
[Amministratore], [34], [680],
[Analista], [40], [1000],
[Progettista], [24], [600],
[Programmatore], [165], [2475],
[Verificatore], [86], [1290],
[Rimanente], [383], [7065],
),
caption: "Monitoraggio, Sprint 14"
)
#box(
width: 1fr,
height: 7em,
fill: rgb(242, 236, 216, 100),
stroke: 1pt + rgb(217, 212, 194, 100),
align(
center + horizon,
[
Accesso alla PB\
A seguito del feedback ottenuto nel colloquio RTB, il documento cambia al fine di adeguarsi alle criticità riscontrate.
Le modifiche apportate sono raccolte e descritte nel verbale interno del 18/02/2024.
]
)
)
== Sprint 15 dal 11-02-2024 al 18-02-2024
=== Obiettivi raggiunti
Lo Sprint 15 si è concluso raggiungendo tutti gli obiettivi pianificati.
In data 16/02/2024 il gruppo ha sostenuto il colloquio con il #vardanega per il secondo sportello di valutazione RTB, di cui ha ottenuto l'esito, risultato positivo, in data 17/02/2024.
In particolare durante lo Sprint sono stati raggiunti i seguenti obiettivi:
- #adr\:
- rimosso il termine "_richiesta_" nella nomenclatura dell'UC-7;
- specificate le API utilizzate dall'UC7;
- inserito tracciamento casi d'uso - requisiti;
- espressa la distinzione tra obiettivi obbligatori, desiderabili e opzionali;
- il gruppo considera completato l'aggiornamento e l'adeguamento del documento, ritenendolo pronto per una seconda revisione da parte del #cardin.
- estensione del #glo\:
- le definizioni contenute all'interno del documento sono state aggiornate.
- #ndp\:
- rimosso capitolo 3 riguardo stili e convenzioni non derivante dallo standard ISO/IEC 12207:2017, il cui contenuto era già assorbito dal capitolo 4.6.3 delle #ndp_v\.
- #pdp\:
- redatti il consuntivo dello Sprint 14, terminato in data 4/02/2024, e preventivo dello Sprint 15 iniziato in data 11/02/2024.
- Automazioni:
- aggiornate alcune componenti delle automazioni implementate nel _repository_ a seguito del rilascio di una loro nuova versione, in particolare:
- `upload-artifact` si aggiorna dalla versione _v3_ alla versione _v4_;
- `download-artifact` si aggiorna dalla versione _v3_ alla versione _v4_;
- `setup-python` si aggiorna dalla versione _v4_ alla versione _v5_.
- Rimossa la GitHub Action di supporto alla revisione di spellchecking mediante ChatGPT. Tale Action era stata precedentemente disattivata a causa dei risultati non soddisfacenti prodotti e dei numerosi falsi positivi generati.
- Miglioramenti generali:
- al fine di promuovere una maggior coerenza stilistica, i termini ricorrenti all'interno dei documenti (es. nome di documenti, nome dei professori, ...) sono stati resi variabili riutilizzabili durante la stesura dei documenti;
- la sezione dei riferimenti dei documenti è stata uniformata in seguito all'introduzione delle variabili sopra citate;
- aggiunta della data di ultima consultazione dei riferimenti esterni.
=== Obiettivi mancati
Nessuno.
=== Problematiche
Durante il meeting di retrospettiva sono sorte le seguenti problematiche:
*P01*: Seppur durante lo Sprint siano state portate a termine tutte la task previste raggiungendo gli obiettivi prefissati, per alcune task non è stata rispettata la data di scadenza fissata. Ciò, pur non comportando rallentamenti o sovraccarico di lavoro, ha permesso di evidenziare nuovamente l'importanza di una comunicazione attiva.
*P02*: Alcuni membri del gruppo evidenziano gli impegni universitari non ancora conclusi, che comportano una riduzione in termini di disponibilità.
*P03*: Mancato sfruttamento del tempo alla conclusione delle task assegnate al singolo membro: si rinnova l'importanza di essere maggiormente proattivi.
=== Risoluzioni attuate
#figure(caption: [Risoluzioni attuate Sprint 15.],
table(
align: left,
columns: (auto, 1fr, auto),
[ID risoluzione], [Titolo], [Problematiche affrontate],
[RO1],[Riassegnazione task], [P01, P02],
[RO2],[Proattività],[P03]
)
)
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [3 (+1)], [2 (-1)], [1 (+1)], [6 (+1)],
[Carraro], [0], [0], [0], [0], [0], [3], [3],
[Gardin], [0], [0], [0], [0], [3], [0], [3],
[Nardo], [0], [2], [3], [0], [0], [0], [5],
[Oseliero], [0], [1 (-1)], [0], [2], [0], [0], [3 (-1)],
[Todesco], [2 (-1)], [0], [2 (+2)], [0], [0], [0], [4 (+1)],
[Zaccone], [1 (+1)], [1 (+1)], [0], [0], [0], [1 (-3)], [3 (-1)],
[Totale ore], [3], [4], [5 (+2)], [5 (+1)], [5 (-1)], [5 (-2)], [27],
[Costo ruolo], [90], [80], [125 (+50)], [125 (+25)], [75 (-15)], [75 (-30)], [570 (+30)],
),
caption: "Prospetto del consuntivo, Sprint 15"
)
#let data = (
("Responsabile", 3, 3),
("Amministratore", 4, 4),
("Analista", 3, 5),
("Progettista", 4, 5),
("Programmatore", 6, 5),
("Verificatore", 7, 5),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 15",
kind: "chart",
supplement: "Grafico"
)
Durante lo Sprint 15 le ore preventivate (27) e le ore effettive (27) sono state rispettate, seppur la distribuzione di queste sia stata differente e maggiormente distribuita tra i membri del gruppo. In particolare:
- le ore preventivate di Responsabile e Amministratore, seppur quantitativamente rispettate, sono state ridistribuite tra i membri del gruppo;
- le ore di Analista sono state maggiori di quanto preventivato: il tracciamento UC - requisti, per renderlo uniforme, automatico e facilmente estendibile è stato implementato mediante un automazione direttamente nei documenti in Typst e l'estensione dell'UC 7 ha richiesto più tempo del previsto;
- le ore di Progettista sono state maggiori di quanto preventivato: la progettazione dello schema ER del database è stata più complessa del previsto seppur permettendo di generare confronto tra i membri del gruppo.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [31], [930],
[Amministratore], [30], [600],
[Analista], [35], [875],
[Progettista], [19], [475],
[Programmatore], [160], [2400],
[Verificatore], [81], [1215],
[Rimanente], [356], [6495],
),
caption: "Monitoraggio, Sprint 15"
)
Lo Sprint 15, pur comportando un costo superiore a quanto preventivato, ha permesso di raggiungere tutti gli obiettivi prefissati. La distribuzione delle ore tra i membri del gruppo è stata differente da quanto preventivato, ma non ha comportato rallentamenti o sovraccarico di lavoro.
Il rendimento positivo dello Sprint 15 è supportato dalle principali metriche esposte dal #pdq, quali:
- CPI: il Cost Performance Index passa dal valore 0,97 (allo Sprint 14) a 0,98 indicando un miglioramento del rendimento del gruppo;
- EAC: l'Estimate At Completion passa dal valore 13437,22 € (allo Sprint 14) a 13292,70 € (seppur ancora maggiore al BAC), indicando una riduzione del costo finale del progetto se il rendimento del gruppo rimarrà costante.
== Sprint 16 dal 18-02-2024 al 25-02-2024
=== Obiettivi raggiunti
Lo Sprint 16 si è concluso raggiungendo tutti gli obiettivi pianificati, in particolare:
- ripresi i colloqui in modalità sincrona con il Proponente in data 22/02/2024;
- #adr\:
- aggiornato il documento in seguito al colloquio con il Proponente.
- #glo\:
- aggiunto Figma.
- #ndp\:
- redatti capitoli riguardanti:
- processo di definizione dell'architettura (6.4.4 nello standard ISO/IEC 12207:2017);
- processo di definizione del design (6.4.5 nello standard ISO/IEC 12207:2017).
- aggiornato il paragrafo 4.6.4 con la descrizione delle variabili Typst riguardanti termini ricorrenti, in particolare:
- nome del gruppo;
- Committenti;
- nomi di documenti;
- nomi di documenti con versione;
- data di ultima consultazione dei riferimenti.
- aggiornato il paragrafo 4.6.3 riguardante il contenuto dei verbali;
- aggiunte le metriche di qualità del prodotto;
- aggiunto Figma tra gli strumenti utilizzati;
- descritto il versionamento del software.
- #pdp\:
- accorpato documento #ris nel #pdp;
- estesa sezione di Analisi dei Rischi;
- codificati i rischi e relativi metodi di risoluzione;
- redatto preventivo dello Sprint 16, iniziato in data 18/02/2024;
- redatto consuntivo dello Sprint 15, terminato in data 18/02/2024.
- #pdq\:
- definite metriche di qualità del prodotto;
- corretti riferimenti assenti;
- aggiornati i grafici delle metriche allo Sprint 15.
- Automazioni:
- aggiornato template e file di supporto in modo da specificata l'affiliazione dei partecipanti esterni nei verbali;
- creato automazione per fornire un link all'artifact prodotto dalla compilazione dei documenti in pull request.
- Miglioramenti generali:
- aggiornate occorrenze dei nomi di documenti e Committenti nella documentazione utilizzando variabili Typst.
- Progettazione:
- definita una bozza del diagramma delle classi per il Minimum Viable Product;
- esplorati gli strumenti di testing (Jest, Jasmine, Mocha per lo unit testing, Playwright per l'end-to-end testing);
- preparato un mock-up dell'interfaccia utente del prodotto;
- definito il diagramma ER del database di supporto.
=== Obiettivi mancati
Nessuno.
=== Problematiche
Nessuna.
=== Risoluzioni attuate
Nessuna.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [1 (-1)], [0], [0], [0], [4], [1], [6 (-1)],
[Carraro], [0], [0], [2], [1 (-2)], [0], [2], [5 (-2)],
[Gardin], [2], [1 (+1)], [0], [0], [4], [1 (+1)], [8 (+2)],
[Nardo], [1 (+1)], [0], [1 (-1)], [ (-3)], [3], [1 (+1)], [6 (-2)],
[Oseliero], [2], [0], [0], [2 (+2)], [1 (+1)], [1 (-1)], [6 (+2)],
[Todesco], [0], [2 (-1)], [3 (+1)], [0], [0], [1 (-1)], [6 (-1)],
[Zaccone], [2], [0], [0], [2], [0], [2], [6],
[Totale ore], [8], [3], [6], [5 (-3)], [12 (+1)], [9], [43 (-2)],
[<NAME>], [240], [60], [150], [125 (-75)], [180 (+15)], [135], [890 (-60)],
),
caption: "Prospetto del consuntivo, Sprint 16"
)
#let data = (
("Responsabile", 8, 8),
("Amministratore", 3, 3),
("Analista", 6, 6),
("Progettista", 8, 5),
("Programmatore", 11, 12),
("Verificatore", 9, 9),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 16",
kind: "chart",
supplement: "Grafico"
)
Durante lo Sprint 16 le ore effettive (43) sono risultate minori di due unità rispetto alle ore preventivate (45), in particolare:
- le ore di Progettista sono state minori di quanto preventivato: la progettazione dello schema ER del database è stata più semplice del previsto e ha permesso di generare confronto tra i membri del gruppo. Inoltre, la precedente esperienza di alcuni membri del gruppo con Figma ha permesso di realizzare il mock-up della UI in tempi brevi;
- le ore di Programmatore sono state maggiori di quanto preventivato a causa dell'elevato numero di strumenti di testing esplorati;
- a causa di impegni personali di alcuni membri del gruppo tutti i ruoli hanno subito ridistribuzioni delle ore lavorate, senza tuttavia risultare in situazioni eccedenti rispetto a quanto preventivato.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [23], [690],
[Amministratore], [27], [540],
[Analista], [29], [725],
[Progettista], [14], [350],
[Programmatore], [148], [2220],
[Verificatore], [72], [1080],
[Rimanente], [313], [5605],
),
caption: "Monitoraggio, Sprint 16"
)
Lo Sprint 16 ha permesso di raggiungere tutti gli obiettivi prefissati con una spesa inferiore alle attese. Nonostante la necessità di maggiori risorse orarie in alcuni ruoli, non sono stati rilevati rallentamenti nel completamento dei compiti previsti.
Il rendimento positivo dello Sprint 16 è supportato dalle principali metriche esposte dal #pdq, quali:
- CPI: il Cost Performance Index passa dal valore 0,98 (allo Sprint 15) a 1,02 e raggiunge un valore ottimale, denotando un miglioramento significativo del rendimento del gruppo;
- EAC: l'Estimate At Completion passa dal valore 13292,70 € (allo Sprint 15) a 12777,29 €, considerabile come valore ottimale in quanto minore del BAC.
== Sprint 17 dal 25-02-2024 al 03-03-2024
=== Obiettivi raggiunti
Lo Sprint 17 si è concluso raggiungendo la quasi totalità degli obiettivi pianificati, in particolare:
- colloquio con il Proponente in data 29/02/2024;
- #glo\:
- aggiunte definizioni di:
- Jest;
- Virtual Private Server;
- Testing;
- Unit test;
- Test di integrazione;
- Test di sistema.
- #man\:
- creato documento;
- redatta sezione Introduzione.
- #ndp\:
- redatti i capitoli:
- Processo di implementazione;
- Processo di gestione della qualità;
- Processo di integrazione;
- Processo di verifica.
- normato il sistema di tracciamento rischi-risoluzioni.
- #pdp\:
- redatto preventivo dello Sprint 17, iniziato in data 25/02/2024;
- redatto consuntivo dello Sprint 16, terminato in data 25/02/2024.
- #pdq\:
- aggiornati i grafici di monitoraggio delle metriche allo Sprint 16;
- aggiornata la dashboard con i dati sullo Sprint 16.
- #st\:
- creato documento;
- redatta sezione Introduzione.
- Automazioni:
- corretto errore di registrazione versione.
- Progettazione:
- aggiornato schema ER del database;
- aggiornato mock-up e presentato al Proponente.
- Codifica:
- implementato modulo di sanificazione SVG;
- implementata pagina di selezione modalità di creazione ambiente.
=== Obiettivi mancati
- #adr\:
- è iniziata, ma non terminata, la revisione completa del documento, comprensiva dell'aggiornamento riguardo le scelte fatte con il Proponente in data 29/02/2024.
- Progettazione:
- non sono ancora definite in modo preciso l'architettura e i design pattern da utilizzare.
=== Problematiche
Durante il meeting di retrospettiva sono sorte le seguenti problematiche:
*P01*: Sono presenti dei dubbi riguardo il design del prodotto, nonostante i chiarimenti forniti dal #cardin via mail.
=== Risoluzioni attuate
#figure(caption: [Soluzioni individuate alle criticità riscontrate, Sprint 17.],
table(
align: left,
columns: (auto, 1fr, auto),
[ID risoluzione], [Titolo], [Criticità affrontate],
[R01],[Richiesta di colloquio via Zoom al #cardin], [P01]
)
)
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [0], [5 (+1)], [3], [8 (+1)],
[Carraro], [1], [0], [3], [2], [0], [0], [6],
[Gardin], [0], [0], [3], [0], [3 (-1)], [0], [6 (-1)],
[Nardo], [0], [2], [2 (-1)], [0], [0], [2], [6 (-1)],
[Oseliero], [0], [2], [0], [1], [4], [0], [7],
[Todesco], [3], [0], [0], [0], [0], [3], [6],
[Zaccone], [2], [0], [2 (+2)], [2], [0], [0], [6 (+2)],
[Totale ore], [6], [4], [10 (+1)], [5], [12], [8], [45 (+1)],
[<NAME>], [180], [80], [250 (+25)], [125], [180], [120], [935 (+25)],
),
caption: "Prospetto del consuntivo, Sprint 17"
)
#let data = (
("Responsabile", 6, 6),
("Amministratore", 4, 4),
("Analista", 9, 10),
("Progettista", 5, 5),
("Programmatore", 12, 12),
("Verificatore", 8, 8),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 17",
kind: "chart",
supplement: "Grafico"
)
Durante lo Sprint 17 le ore effettive (45) hanno superato di una unità le ore preventivate (44), in particolare:
- l'Analista ha subito un incremento orario, dovuto alla quantità di lavoro richiesta dalla redazione del documento #adr.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [17], [510],
[Amministratore], [23], [460],
[Analista], [19], [475],
[Progettista], [9], [225],
[Programmatore], [136], [2040],
[Verificatore], [64], [960],
[Rimanente], [268], [4670],
),
caption: "Monitoraggio, Sprint 17"
)
Lo Sprint 17, pur comportando un costo superiore a quanto preventivato, ha permesso di raggiungere la quasi totalità degli obiettivi prefissati.
Il rendimento complessivamente positivo dello Sprint 17 è supportato dalle principali metriche esposte dal #pdq, quali:
- CPI: il Cost Performance Index passa dal valore 1,02 (allo Sprint 16) a 1,01. Nonostante il lieve calo continua a rappresentare un valore accettabile ($>=0,95$), e rimane molto vicino all'ottimalità, rappresentata da valori $>=1$;
- EAC: l'Estimate At Completion passa dal valore 12777,29 € (allo Sprint 16) a 12987,48 €. Anche se aumentato, il valore è minore del BAC e rientra nel range di accettabilità.
== Sprint 18 dal 03-03-2024 al 10-03-2024
=== Obiettivi raggiunti
Lo Sprint 18 si è concluso raggiungendo buona parte degli obiettivi pianificati, in particolare:
- colloquio con Proponente in data 07/03/2024;
- #adr\:
- proseguite revisione e aggiornamento del documento.
- #ndp\:
- redatto il capitolo Processo di validazione.
- #pdp\:
- redatto preventivo dello Sprint 18, iniziato in data 03/03/2024;
- redatto consuntivo dello Sprint 17, terminato in data 03/03/2024.
- #st\:
- individuata struttura del documento;
- redatta sezione Database.
- Progettazione:
- svolto colloquio con il #cardin.
- Codifica:
- migliorato e popolato il database di supporto;
- implementata l'importazione dei dati da database;
- aggiornato Docker Compose;
- implementate le classi:
- `Product`;
- `Bin`;
- `Zone`.
- implementati i pannelli di navigazione:
- `ProductPanel`;
- `BinPanel`;
- `ZonePanel`.
- rimossi elementi superflui.
=== Obiettivi mancati
- #adr\:
- non è ancora terminata la revisione completa del documento, comprensiva anche di aggiornamento dei casi d'uso e requisiti. Verrà completata obbligatoriamente entro la fine del prossimo Sprint.
- #pdq\:
- non sono state aggiornate le metriche all'interno del documento;
- mancato aggiornamento della dashboard Grafana allo Sprint 18.
=== Problematiche
Durante il meeting di retrospettiva sono sorte le seguenti problematiche:
*P01*: Mancata comunicazione degli avanzamenti riguardo lo sviluppo del MVP hanno creato incomprensioni tra chi scriveva codice e chi si occupava della redazione della #st.
*P02*: Difficoltà nell'utilizzare le tecnologie adottate.
=== Risoluzioni attuate
#figure(caption: [Soluzioni individuate alle criticità riscontrate, Sprint 18.],
table(
align: left,
columns: (auto, 1fr, auto),
[ID risoluzione], [Titolo], [Criticità affrontate],
[R01], [Aggiornamento continuo sugli sviluppi], [P01],
[OT-1.1.2\ (#pdp_v)], [Accettazione del rischio], [P02]
)
)
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [2], [0], [1], [1 (+1)], [2], [0], [6 (+1)],
[Carraro], [1 (-1)], [0], [0], [0], [ (-2)], [5], [6 (-3)],
[Gardin], [0], [0], [1], [0], [3 (+1)], [3], [7 (+1)],
[Nardo], [0], [2], [0], [2], [1 (-1)], [0], [5 (-1)],
[Oseliero], [2], [0], [2], [0], [3 (+1)], [1], [8 (+1)],
[Todesco], [0], [0], [2], [1 (-1)], [5], [0], [8 (-1)],
[Zaccone], [0], [1 (+1)], [2], [0], [5], [0], [8 (+1)],
[Totale ore], [5 (-1)], [3 (+1)], [8], [4], [19 (-1)], [9], [48 (-1)],
[Costo ruolo], [150 (-30)], [60 (+20)], [200], [100], [285 (-15)], [135], [930 (-25)],
),
caption: "Prospetto del consuntivo, Sprint 18"
)
#let data = (
("Responsabile", 6, 5),
("Amministratore", 2, 3),
("Analista", 8, 8),
("Progettista", 4, 4),
("Programmatore", 20, 19),
("Verificatore", 9, 9),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 18",
kind: "chart",
supplement: "Grafico"
)
Durante lo Sprint 18 le ore effettive (48) sono risultate inferiori di un'unità rispetto al preventivato (49), in particolare:
- la redazione delle #ndp ha richiesto un impegno inferiore del previsto da parte del Responsabile;
- l'Amministratore ha dovuto espletare alcune attività non previste a supporto della codifica;
- il Programmatore ha richiesto un impegno inferiore del previsto grazie alle _utilities_ offerte dal framework Next.js per l'implementazione delle funzionalità richieste per l'accesso al database.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [12], [360],
[Amministratore], [20], [400],
[Analista], [11], [275],
[Progettista], [5], [125],
[Programmatore], [117], [1755],
[Verificatore], [55], [825],
[Rimanente], [220], [3740],
),
caption: "Monitoraggio, Sprint 18"
)
Nonostante la diminuzione dei costi rispetto al preventivo, lo Sprint 18 ha permesso di completare buona parte degli obiettivi previsti.
Il rendimento positivo dello Sprint 18 è supportato dalle principali metriche esposte dal #pdq, quali:
- CPI: il Cost Performance Index rimane costante a 1,01 rappresentando ancora un valore ottimale ($>=1$);
- EAC: l'Estimate At Completion diminuisce e passa dal valore di 12987,48 € (allo Sprint 17) a 12933,25 €.
== Sprint 19 dal 10-03-2024 al 17-03-2024
=== Obiettivi raggiunti
Lo Sprint 19 si è concluso raggiungendo buona parte degli obiettivi pianificati, in particolare:
- colloquio con Proponente in data 15/03/2024;
- #adr\:
- estensione e aggiornamento di Use Case e requisiti.
- #man\:
- redatte le sezioni Requisiti e Supporto Tecnico.
- #ndp\:
- individuata la struttura del documento;
- redatte le sezioni:
- Requisiti;
- Installazione;
- Supporto Tecnico.
- #pdp\:
- redatto preventivo dello Sprint 19, iniziato in data 10/03/2024;
- redatto consuntivo dello Sprint 18, terminato in data 10/03/2024.
- #st\:
- redatte le sezioni:
- Design pattern;
- Diagramma delle classi.
- Codifica:
- implementata creazione del piano rettangolare in Three.js;
- implementata lettura del file SVG;
- implementato elemento `orderItem`;
- implementato pulsante "reimposta" nel Settings Panel;
- implementata creazione zona;
- implementata modifica zona;
- implementata eliminazione zona;
- rimosso pulsante "sincronizzazione" da Order Panel;
- implementato pulsante "sincronizzazione" in Settings Panel;
- implementato componente `productItemDetails`;
- implementato `productItem` per il Product Panel;
- implementati campi di modifica delle dimensioni della planimetria in Settings Panel;
- implementata simulazione delle API per lo spostamento dei prodotti;
- implementati test per le Server Actions prodotte.
=== Obiettivi mancati
- #adr\:
- non è stata effettuata la verifica completa del lavoro svolto e, di conseguenza, non è stato inviato il documento al #cardin\.
- #pdp\:
- non è stato redatto il preventivo dello Sprint 19, iniziato in data 10/03/2024.
- #pdq\:
- non sono state aggiornate le metriche all'interno del documento;
- mancato aggiornamento della dashboard Grafana.
- Codifica:
- implementazione della creazione del piano a partire da un file SVG.
=== Problematiche
Durante il meeting di retrospettiva è sorta la seguente problematica:
*P01*: Gli avanzamenti dei documenti #pdp e #pdq sono stati minori delle aspettative.
=== Risoluzioni attuate
#figure(caption: [Soluzioni individuate alle criticità riscontrate, Sprint 19.],
table(
align: left,
columns: (auto, 1fr, auto),
[ID risoluzione], [Titolo], [Criticità affrontate],
[R01], [Migliore ridistribuzione dei task], [P01]
)
)
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [1], [0], [2], [4], [3], [10],
[Carraro], [0], [1], [0], [1 (-1)], [5], [0], [7 (-1)],
[Gardin], [0], [0], [0], [0], [7 (+2)], [0], [7 (+2)],
[Nardo], [0], [0], [2 (-1)], [0], [3], [4 (-1)], [9 (-2)],
[Oseliero], [0], [0], [0], [0], [0], [5], [5],
[Todesco], [2], [0], [2], [0], [3], [0], [7],
[Zaccone], [0], [0], [0], [0], [6 (+1)], [2], [8 (+1)],
[Totale ore], [2], [2], [4 (-1)], [3 (-1)], [28 (+3)], [14 (-1)], [53],
[Costo ruolo], [60], [40], [100 (-25)], [75 (-25)], [420 (+45)], [210 (-15)], [905 (-20)],
),
caption: "Prospetto del consuntivo, Sprint 19"
)
#let data = (
("Responsabile", 2, 2),
("Amministratore", 2, 2),
("Analista", 5, 4),
("Progettista", 4, 3),
("Programmatore", 25, 28),
("Verificatore", 15, 14),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 19",
kind: "chart",
supplement: "Grafico"
)
Durante lo Sprint 19 le ore effettive sono state pari a quelle preventivate (53), tuttavia si segnalano diverse variazioni nella composizione del prospetto orario. In particolare:
- le ore di Analista risultano inferiori grazie all'individuazione di un processo di definizione dei diagrammi degli Use case più efficiente;
- le ore di Progettista risultano inferiori grazie ad una stesura delle sezioni della #st più rapida del previsto;
- le ore di Programmatore sono aumentate a causa di problematiche insorte nell'implementazione della creazione del piano a partire da un file SVG. Questo aumento di ore, però, non è stato sufficiente a permettere il completamento della task, sebbene ci siano comunque stati degli avanzamenti;
- le ore di Verificatore sono diminuite a causa di un minor afflusso di modifiche al codice rispetto alle attese.
Sebbene le ore effettive siano state pari a quelle preventivate, la composizione del prospetto orario appare diversa rispetto al pianificato.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [10], [300],
[Amministratore], [18], [360],
[Analista], [7], [175],
[Progettista], [2], [50],
[Programmatore], [89], [1335],
[Verificatore], [41], [615],
[Rimanente], [167], [2835],
),
caption: "Monitoraggio, Sprint 19"
)
Durante lo Sprint 19 sono stati raggiunti buona parte degli obiettivi. Nonostante la fluttuazione delle ore lavorate per ruolo, il consuntivo finale è risultato inferiore al preventivo, poiché l'aumento delle ore di Programmatore richieste è stato assorbito dal minor impegno richiesto ad altre figure.
Vengono di seguito riportate le principali metriche esposte dal #pdq riguardanti l'andamento dello Sprint:
- CPI: il Cost Performance Index diminuisce ma rimane entro l'intervallo di ottimalità, passando dal valore 1,01 (allo Sprint 18) a 1,00;
- EAC: l'Estimate At Completion passa dal valore 12933,25 € (allo Sprint 18) a 12990,31 €. Anche se aumentato, è considerato accettabile in quanto rispetta la condizione di $<=$ BAC+5\% espressa dal #pdq\.
== Sprint 20 dal 17-03-2024 al 24-03-2024
=== Obiettivi raggiunti
Lo Sprint 20 si è concluso con il completamento della maggior parte degli obiettivi di periodo. In particolare:
- fissati due colloqui con il Proponente in data 22/03/2024 (per offrire un aggiornamento sul progresso) e 26/03/2024 (al fine di qualificare il lavoro svolto come MVP);
- inviato il documento #adr al #cardin, requisito obbligatorio per proseguire alla candidatura per la Product Baseline;
- #ndp\:
- revisionati i capitoli:
- Processo di gestione della qualità;
- Processo di implementazione;
- Processo di integrazione;
- Processo di verifica;
- Processo di transizione;
- Processo di validazione.
- #pdq\:
- aggiornate le metriche all'interno del documento;
- aggiornata la dashboard Grafana.
- #pdp\:
- redatto preventivo dello Sprint 20, iniziato in data 17/03/2024;
- redatto consuntivo dello Sprint 19, terminato in data 17/03/2024.
- #st\:
- redatta la sezione riguardante i requisiti soddisfatti.
- aggiunte le sezioni Requisiti di sistema e Requisiti hardware.
- #man\:
- redatte le sezioni:
- Modifica zona;
- Eliminazione zona;
- Richiesta spostamento prodotto.
- rimosse le sezioni Requisiti di sistema e Requisiti hardware;
- aggiornato template del documento.
=== Obiettivi mancati
- #man\:
- non è stata redatta la sezione Impostazioni.
=== Problematiche
Durante il meeting di retrospettiva è sorta la seguente problematica:
*P01*: Gli avanzamenti del documento #man sono stati minori delle aspettative.
=== Risoluzioni attuate
Nessuna.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [0], [7], [2], [9],
[Carraro], [0], [0], [0], [0], [7], [0], [7],
[Gardin], [1], [0], [0], [0], [4 (+1)], [3 (-1)], [8],
[Nardo], [0], [0], [2], [0], [6 (+1)], [2], [10 (+1)],
[Oseliero], [0], [0], [0], [0], [5], [4], [9],
[Todesco], [0], [0], [1], [0], [6 (+1)], [2], [9 (+1)],
[Zaccone], [1 (-1)], [2 (+1)], [0], [0], [3], [4], [10],
[Totale ore], [2 (-1)], [2 (+1)], [3], [0], [38 (+3)], [17 (-1)], [62 (+2)],
[<NAME>], [60 (-30)], [40 (+20)], [75], [0], [570 (+45)], [255 (-15)], [1000 (+20)],
),
caption: "Prospetto del consuntivo, Sprint 20"
)
#let data = (
("Responsabile", 3, 2),
("Amministratore", 1, 2),
("Analista", 3, 3),
("Progettista", 0, 0),
("Programmatore", 35, 38),
("Verificatore", 18, 17),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 20",
kind: "chart",
supplement: "Grafico"
)
Durante lo Sprint 20 le ore effettive (62) hanno superato di due unità le ore preventivate (60), in particolare:
- gli impegni del Responsabile sono stati assolti in minor tempo rispetto a quanto preventivato;
- la stesura dei verbali ha impegnato maggiormente l'Amministratore;
- dopo aver concluso tutte le attività di codifica pianificate, la figura del Programmatore è stata impegnata nell'implementazione di alcune funzionalità desiderabili o opzionali. Pertanto, le ore di Programmatore sono aumentate rispetto a quanto preventivato;
- l'impiego della figura del Verificatore è stato leggermente inferiore alle aspettative.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [8], [240],
[Amministratore], [16], [320],
[Analista], [4], [100],
[Progettista], [2], [50],
[Programmatore], [51], [765],
[Verificatore], [24], [360],
[Rimanente], [105], [1835],
),
caption: "Monitoraggio, Sprint 20"
)
Nonostante l'aumento di costo, lo Sprint 20 ha permesso di completare la maggior parte degli obiettivi prefissati.
Vengono di seguito riportate le principali metriche esposte dal #pdq riguardanti l'andamento dello Sprint:
- CPI: il Cost Performance Index rimane stabile sul valore ottimale di 1,00;
- EAC: l'Estimate At Completion rimane sostanzialmente invariato, passando dal valore di 12990,31 € (allo Sprint 19) a 12990,86 €.
== Sprint 21 dal 24-03-2024 al 31-03-2024
=== Obiettivi raggiunti
Lo Sprint 21 si è concluso con il completamento di tutti gli obiettivi di periodo. In particolare:
- il Proponente ha convalidato, in data 26/04/2024, il prodotto software WMS3 come MVP del capitolato C5;
- #ndp\: armonizzazione del contenuto dei capitoli relativi a:
- Processo di gestione della qualità;
- Processo di implementazione;
- Processo di integrazione;
- Processo di verifica;
- Processo di transizione;
- Processo di validazione.
- #pdq\:
- aggiornate le metriche all'interno del documento;
- aggiornata la dashboard Grafana.
- #pdp\:
- redatto preventivo dello Sprint 21, iniziato in data 24/03/2024;
- redatto consuntivo dello Sprint 20, terminato in data 24/03/2024.
- #man\: redatte le sezioni:
- Impostazioni;
- Spostamento prodotto;
- Ispezione bin/zona.
- terminata l'attività di codifica in concomitanza con la convalida del prodotto come MVP.
=== Obiettivi mancati
Nessuno.
=== Problematiche
Durante il meeting di retrospettiva sono sorte le seguenti problematiche:
*P01*: Le incombenti festività pasquali e gli impegni personali associati hanno rallentato il lavoro di redazione dei documenti.
*P02*: La ricezione del feedback sul documento #adr, adeguato al feedback ricevuto al colloquio RTB col #cardin, non è ancora avvenuta.
=== Risoluzioni attuate
#figure(caption: [Soluzioni individuate alle criticità riscontrate, Sprint 21.],
table(
align: left,
columns: (auto, 1fr, auto),
[ID risoluzione], [Titolo], [Criticità affrontate],
[R01], [Ridistribuzione dei task tra i membri], [P01]
)
)
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [2], [0], [0], [0], [0], [5 (+2)], [7 (+2)],
[Carraro], [0], [0], [0], [1], [2 (-2)], [3 (+1)], [6 (-1)],
[Gardin], [1 (-1)], [0], [0], [0], [1 (+1)], [3], [5],
[Nardo], [0], [0], [0], [1], [3 (-1)], [0], [4 (-1)],
[Oseliero], [0], [2], [0], [0], [0], [0], [2],
[Todesco], [0], [0], [0], [0], [6 (+1)], [0], [6 (+1)],
[Zaccone], [0], [0], [0], [0], [6 (+1)], [0], [6 (+1)],
[Totale ore], [3 (-1)], [2], [0], [2], [18], [11 (+3)], [36 (+2)],
[Costo ruolo], [90 (-30)], [40], [0], [50], [270], [165 (+45)], [615 (+15)],
),
caption: "Prospetto del consuntivo, Sprint 21"
)
#let data = (
("Responsabile", 4, 3),
("Amministratore", 2, 2),
("Analista", 0, 0),
("Progettista", 2, 2),
("Programmatore", 18, 18),
("Verificatore", 8, 11),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 21",
kind: "chart",
supplement: "Grafico"
)
Durante lo Sprint 21 le ore effettive (36) hanno superato di due unità le ore preventivate (34), in particolare:
- le ore richieste al Responsabile per il lavoro sul documento #ndp ha richiesto 1 ora in meno rispetto al previsto;
- le ore di Programmatore, sebbene complessivamente invariate, sono state riassegnate durante lo Sprint in risposta alla problematica insorta `P01`;
- le ore di Verificatore richieste sono risultate maggiori rispetto alle aspettative a causa di una sottostima della mole di lavoro.
Nel complesso, la riduzione delle ore di Responsabile ha assorbito parzialmente l'incremento di costi dovuto all'impiego maggiore del Verificatore. Tuttavia, il consuntivo finale è risultato superiore al preventivo.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [5], [150],
[Amministratore], [14], [280],
[Analista], [4], [100],
[Progettista], [0], [0],
[Programmatore], [33], [495],
[Verificatore], [13], [195],
[Rimanente], [69], [1220],
),
caption: "Monitoraggio, Sprint 21"
)
Lo Sprint 21 si è concluso con il completamento totale delle attività previste. Inoltre, la convalida del prodotto come MVP permetterà di proseguire con la candidatura per la Product Baseline, una volta ricevuto anche il feedback sul documento #adr.
Vengono di seguito riportate le principali metriche esposte dal #pdq riguardanti l'andamento dello Sprint:
- CPI: il Cost Performance Index rimane stabile sul valore ottimale di 1,00;
- EAC: l'Estimate At Completion aumenta, passando dal valore di 12990,86 € (allo Sprint 20) a 13010,59 €.
L'aumento dell'EAC, sebbene non significativo, ne avvicina il valore a quello del BAC, ovvero il massimale consentito.
== Sprint 22 dal 30-03-2024 al 07-04-2024
=== Obiettivi raggiunti
Lo Sprint 22 si è concluso con il completamento di tutti gli obiettivi di periodo. In particolare:
- il gruppo ha prenotato il colloquio per la Product Baseline col #cardin in data 10/04/2024;
- #pdq\:
- aggiornate le metriche all'interno del documento;
- aggiornata la dashboard Grafana.
- #pdp\:
- redatto preventivo dello Sprint 22, iniziato in data 30/03/2024;
- redatto consuntivo dello Sprint 21, terminato in data 30/03/2024.
- #man\:
- redatte le sezioni:
- Griglia di collocamento;
- Ispezione bin;
- Collocazione zona nell'ambiente 3D;
- Ricerca zona;
- Spostamento POV su bin;
- Visualizzazione informazione prodotto;
- Visualizzazione lista prodotti;
- Ricerca prodotti;
- Visualizzazione lista ordini movimentazione;
- Visualizzazione singolo ordine movimentazione.
- aggiornate le immagini del prodotto software WMS3.
=== Obiettivi mancati
Nessuno.
=== Problematiche
Nessuna.
=== Risoluzioni attuate
Nessuna.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [1 (+1)], [0], [2], [0], [0], [3 (+1)],
[Carraro], [2], [0], [0], [0], [0], [2], [4],
[Gardin], [0], [0], [0], [0], [0], [3], [3],
[Nardo], [0], [2 (-1)], [0], [0], [0], [4 (+1)], [6],
[Oseliero], [0], [1 (+1)], [0], [2], [0], [0], [3 (+1)],
[Todesco], [2], [0], [0], [0], [0], [3], [5],
[Zaccone], [0], [0], [0], [0], [0], [3], [3],
[Totale ore], [4], [4 (+1)], [0], [4], [0], [15 (+1)], [27 (+2)],
[<NAME>uolo], [120], [80 (+20)], [0], [100], [0], [225 (+15)], [525 (+35)],
),
caption: "Prospetto del consuntivo, Sprint 22"
)
#let data = (
("Responsabile", 4, 4),
("Amministratore", 3, 4),
("Analista", 0, 0),
("Progettista", 4, 4),
("Programmatore", 0, 0),
("Verificatore", 14, 15),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 22",
kind: "chart",
supplement: "Grafico"
)
Durante lo Sprint 22 le ore effettive (27) hanno superato di due unità le ore preventivate (25), in particolare:
- l'Amministratore è stato impegnato nella risoluzione di un problema di connessione alle API dell'ITS adottato dal gruppo, oltre che nello svolgimento delle attività pianificate;
- il Verificatore ha richiesto un impegno maggiore del previsto per la verifica della lista dei test di sistema.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [1], [30],
[Amministratore], [10], [200],
[Analista], [4], [100],
[Progettista], [-4], [-100],
[Programmatore], [33], [495],
[Verificatore], [-2], [-30],
[Rimanente], [42], [695],
),
caption: "Monitoraggio, Sprint 22"
)
Lo Sprint 22 si è concluso con il completamento di tutti gli obiettivi di periodo.
Vengono di seguito riportate le principali metriche esposte dal #pdq riguardanti l'andamento dello Sprint:
- CPI: il Cost Performance Index rimane stabile sul valore ottimale di 1,00;
- EAC: l'Estimate At Completion diminuisce e si allontana dal valore di soglia (BAC), passando dal valore di 13010,59 € (allo Sprint 21) a 12992,10 €.
== Sprint 23 dal 07-04-2024 al 14-04-2024
=== Obiettivi raggiunti
Lo Sprint 23 si è concluso con il completamento di tutti gli obiettivi di periodo. In particolare:
- è stata sostenuto il colloquio per la Product Baseline col #cardin in data 10/04/2024;
- #pdq\:
- separata la lista dei test di unità da quelli di integrazione;
- redatta la lista dei test di sistema;
- aggiornate le metriche all'interno del documento;
- aggiornata la dashboard Grafana.
- #pdp\:
- redatto preventivo dello Sprint 23, iniziato in data 07/04/2024;
- redatto consuntivo dello Sprint 22, terminato in data 07/04/2024.
- #man:\
- aggiunta la sezione sull'avvio del software;
- rimossa la sezione sul supporto tecnico.
=== Obiettivi mancati
Nessuno.
=== Problematiche
Nessuna.
=== Risoluzioni attuate
Nessuna.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [0], [0], [2], [2],
[Carraro], [0], [2], [0], [0], [0], [0], [2],
[Gardin], [0], [0], [0], [0], [0], [2], [2],
[Nardo], [0], [0], [0], [1], [0], [0], [1],
[Oseliero], [1], [0], [0], [0], [0], [2], [3],
[Todesco], [0], [0], [0], [0], [0], [2], [2],
[Zaccone], [0], [0], [0], [1], [0], [0], [1],
[Totale ore], [1], [2], [0], [2], [0], [8], [13],
[Costo ruolo], [30], [40], [0], [50], [0], [120], [240],
),
caption: "Prospetto del consuntivo, Sprint 23"
)
#let data = (
("Responsabile", 1, 1),
("Amministratore", 2, 2),
("Analista", 0, 0),
("Progettista", 2, 2),
("Programmatore", 0, 0),
("Verificatore", 8, 8),
)
#let x-coordinates = compute-labels-x-coordinate(data, role-chart-size)
#let y-coordinates = compute-labels-y-coordinate(data, role-chart-size)
#figure({
import draw: *
canvas({
chart.barchart(..barchart-config, data)
let i = 0
while(i < data.len()) {
content(
(x-coordinates.at(i).at(0), y-coordinates.at(i).at(0)),
[#data.at(i).at(1)],
..barchart-label-config
)
content(
(x-coordinates.at(i).at(1), y-coordinates.at(i).at(1)),
[#data.at(i).at(2)],
..barchart-label-config
)
i += 1
}
})},
caption: "Suddivisione oraria per ruolo, consuntivo Sprint 23",
kind: "chart",
supplement: "Grafico"
)
Le ore preventivate per lo Sprint 23 sono state rispettate, con un totale di 13 ore effettive.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [5], [150],
[Amministratore], [14], [280],
[Analista], [4], [100],
[Progettista], [0], [0],
[Programmatore], [33], [495],
[Verificatore], [13], [195],
[Rimanente], [69], [1220],
),
caption: "Monitoraggio, Sprint 23"
)
Lo Sprint 23 si è concluso con il completamento di tutti gli obiettivi di periodo.
Vengono di seguito riportate le principali metriche esposte dal #pdq riguardanti l'andamento dello Sprint:
- CPI: il Cost Performance Index migliora leggermente e passa a 1,01;
- EAC: l'Estimate At Completion diminuisce e si allontana ulteriormente dal valore di soglia (BAC), passando dal valore di 12992,10 € (allo Sprint 22) a 12968,71 €.
== Sprint 24 dal 14-04-2024 al 21-04-2024
=== Obiettivi raggiunti
Lo Sprint 24 si è concluso con il completamento di tutti gli obiettivi di periodo, meno che uno. In particolare:
- #pdq\:
- aggiornate le metriche all'interno del documento;
- aggiornata la dashboard Grafana.
- #pdp\:
- redatto preventivo dello Sprint 22, iniziato in data 30/03/2024;
- redatto consuntivo dello Sprint 21, terminato in data 30/03/2024.
=== Obiettivi mancati
Nonostante la ricezione del feedback del colloquio PB col #cardin sia avvenuta il giorno 17/04/2024, non è stata inviata la candidatura per il secondo colloquio di Product Baseline col #vardanega.
=== Problematiche
Durante il meeting di retrospettiva sono sorte le seguenti problematiche:
*P01*: Alcuni membri del gruppo sono stati impegnati nelle attività di definizione dei propri _stage_ formativi.
=== Risoluzioni attuate
Nessuna. La candidatura è rimandata allo Sprint successivo.
=== Panoramica dei costi effettivi
#figure(
table(
columns: 8,
[*Membro*], [*Responsabile*], [*Amministratore*], [*Analista*], [*Progettista*], [*Programmatore*], [*Verificatore*], [*Totale*],
[Banzato], [0], [0], [0], [0], [0], [0], [0],
[Carraro], [0], [0], [0], [0], [0], [0], [0],
[Gardin], [0], [0], [0], [0], [0], [0], [0],
[Nardo], [1], [0], [0], [0], [0], [0], [1],
[Oseliero], [0], [0], [0], [0], [0], [0], [0],
[Todesco], [0], [0], [0], [0], [0], [0], [0],
[Zaccone], [0], [2], [0], [0], [0], [0], [2],
[Totale ore], [1], [2], [0], [0], [0], [0], [3],
[Costo ruolo], [30], [40], [0], [0], [0], [0], [70],
),
caption: "Prospetto del preventivo, Sprint 24"
)
#let data = (
("Responsabile", 1),
("Amministratore", 2),
("Analista", 0),
("Progettista", 0),
("Programmatore", 0),
("Verificatore", 0),
)
#figure({
roles-legend
canvas({
import draw: *
chart.piechart(..piechart-config, data)}
)},
caption: "Suddivisione oraria per ruolo, preventivo Sprint 24",
kind: "chart",
supplement: "Grafico"
)
Le ore preventivate per lo Sprint 24 sono state rispettate, con un totale di 3 ore effettive.
=== Monitoraggio costi e ore
#figure(
table(
columns: 3,
[*Ruolo*], [*Ore rimanenti*], [*Budget rimanente*],
[Responsabile], [-1], [-30],
[Amministratore], [6], [120],
[Analista], [4], [100],
[Progettista], [-6], [-150],
[Programmatore], [33], [495],
[Verificatore], [-10], [-150],
[Rimanente], [26], [385],
),
caption: "Monitoraggio, Sprint 24"
)
Lo Sprint 24 si è concluso con il completamento di tutti gli obiettivi di periodo, tranne uno.
Vengono di seguito riportate le principali metriche esposte dal #pdq riguardanti l'andamento dello Sprint:
- CPI: il Cost Performance Index rimane stabile sul valore ottimale di 1,01;
- EAC: l'Estimate At Completion aumenta lievemente, attestandosi su 12969,18 € rispetto al valore di 12968,71 € dello Sprint precedente. |
|
https://github.com/EGmux/ControlTheory-2023.2 | https://raw.githubusercontent.com/EGmux/ControlTheory-2023.2/main/unit3/stabilitiy.typ | typst | #set heading(numbering: "1.")
=== Basic systems
\
#figure(
image("../assets/unit.png", width: 80%),
caption: [],
) <fig-unit>
compute E(T) when T $-> infinity $, where E is the error function
==== Big picture of error computation
\
do Laplace transform for this
$e(∞) = lim_"t->∞"e(t)$
and then arrive at this
$f(∞) = lim_"s->0" s F(s)$
expanding
$e(∞) = lim_"s->0" {R(s)[1-T(s)]$
for any function this is the error computation 💡, that is error is computed in function
of starting signal and transfer function.
==== System with feedback
\
==== Computing the error
\
$e(∞) = lim_"s->0" s E(s)$ and $E(s) = R(S)/(1 + G(S))$
thus in this case $G(S) -> ∞$ allows $e(∞) = 0$
==== Looking at the error from multiple input families
\
===== step function as input
\
$e(∞) &= lim_"s->0" s (R(S))/(1+G(S)) && \
&= (1)/(1+lim_"s->0" G(s)) && \ $
thus we conclude that $e(∞) -> 0 => lim_"s->0" G(S) -> ∞$ , $G(S)$must have at least a _single integrator_
===== saw tooth function as input
\
$e(∞) &= lim_"s->0" s 1/s^2 (1)/(1+G(s)) && \
&= lim_"s->0" s^2 G(S) && \ $
thus we conclude that , $G(S)$ must have at least 2 _integrators_
===== parabola function as input
\
$e(∞) &= lim_"s->0" s (1)/(s^3) (1)/(1+G(S)) && \
&= lim_"s->0" s^2 G(S) && \ $
thus we conclude that $G(S)$ must have at least 3 _integrators_
==== What to analyze to compute error in practice
\
Compute the limit $lim_"s->0" s^n G(S)$ where _n_ refers to number of _integrators_
for instance
$G(S) = (120(S+2))/((S+3)(S+4)) $ compute the following limits
- $lim_"s->0" G(S)$ = 20
- $lim_"s->0" s G(S)$ = 0
- $lim_"s->0" s^2 G(S)$ = 0
🚨 expect easy computations when doing the tests, if numbers appear "strang" then you're doing something wrong!
💡 error only make sense on stable systems, otherwise analysis is useless
=== Static error constant
\
#math.equation(block:true,
$K_p := e_"step" &= (1)/(1+lim_"s->0" G(S)) && \ $
)
#math.equation(block:true,
$K_r := e_"step" &= (1)/(1+lim_"s->0" s G(S)) && \ $
)
#math.equation(block:true,
$K_a := e_"step" &= (1)/(1+lim_"s->0" s^2 G(S)) && \ $
)
==== The stationary error table
\
// #table( // TODO: fix this
// columns: (1fr, auto, auto),
// inset: 10pt,
// align: horizon,
// [],[*Input*], [*Error*], [*Type 0*] , [*Type 1*], [*Type 2*],
// "Degrau",
// ,
// [
// $h$: degrau \
// $D$: outer radius \
// $d$: inner radius
// ],
// $ sqrt(2) / 12 a^3 $,
// [$a$: edge length]
// )
==== Inferences from this classification
\
Suppose a system has _$K_v$_ = 1000
- System must be stable
- System must be type 1, due to _$K_v$_ is contant
- The input was a sawtooth
- can also compute the error easily
===== A practical problem
\
#figure(
image("../assets/eu.png", width: 80%),
caption: [the system],
) <fig-eu>
- $G(S) = (K(S+5))/(S(S+6)(S+7)(S+8)) $
- $e(∞) = 10%$
#math.equation(block:true,
$ K_p &= lim_"s->0" G(s) => K_p -> ∞ => e_"step" (∞) = 0 && \ $
)
#math.equation(block:true,
$ K_v = lim_"s _> 0" s G(s) = "CTE" $
)
#math.equation(block:true,
$ K_a = lim_"S->0" s^2 G(S) = 0 => e_"parabola"(∞) = 0 $
)
🚨 what looks like a 'G(s)' in the diagram is actually 'C(s)'
===== Generic system with feedback
\
#figure(
image("../assets/generic.png", width: 80%),
caption: [],
) <fig-generic>
🚨 using _$K_{a,p,v}$_ is only applicable to $G(S$, not $T(S)$
reduces to
#math.equation(block:true,
$ (G(S))/(1 + G(S)[H(S) - 1]) $
)
|
|
https://github.com/DJmouton/Enseignement | https://raw.githubusercontent.com/DJmouton/Enseignement/main/SNT/Réseaux sociaux/Cours fonctionnement des réseaux sociaux.typ | typst | #import "/Templates/layouts.typ": SNT, titre, sous_titre
#import "/Templates/utils.typ": cadre, pointillets
#import "@preview/cetz:0.2.2"
#show: doc => SNT(doc)
#sous_titre[SNT - Réseaux Sociaux - Partie 1]
#titre[Fonctionnement des Réseaux Sociaux]
#let repères_historiques = cadre(titre: "Repères historique")[
#table(
stroke: none,
inset: 0em,
column-gutter: 10pt,
row-gutter: 15pt,
columns: (auto, 1fr),
[*1995*], "Naissance d'un des premier réseaux sociaux, Classmates, qui met en relation d'anciens camarades de classe.",
[*2006*], "Apparition de Twitter (désormais X) et ouverture de Facebook au grand public.",
[*2009*], "Lancement de la messagerie instantanée WhatsApp",
[*2010*], "Multiplication du partage des photos et des vidéos via l'entrée de nouveaux acteurs comme Instagram et Snapchat.",
)
]
#let fun_facts = cadre(titre: "Le savais-tu ?")[
- Si l'ensemble des utilisateurs de X (anciennement Twitter) formaient un pays, ce serait le quatrième plus grand pays au monde.
- En 1998, l'agence fédérale américaine qui régit le droit de la consommation aux États-Unis a promulgé le texte COPPA, qui protège les internautes de moins de 13 ans d'une utilisation abusive de leurs informations personnelles à des fins commerciales. Pour éviter toute complication juridique, les réseaux sociaux ont interdit l'inscription avant cet âge.
]
#grid(
columns: (1fr, 1fr),
column-gutter: 10pt,
repères_historiques,
fun_facts,
)
= Caractéristiques des réseaux sociaux
#cadre(titre: "L'essentiel")[
Contrairement à un *réseau informatique*, composé de machines connectées physiquement entre elles, un *réseau social* met en relation des personnes (grâce à des liens d'"amitié") ou encore des entreprises (mise en relation avec le client) de façon virtuelle.
Grâce à des applications dédiées, les usager échangent toutes sortes de contenus numériques: messages, photos, vidéos, sons, ²xetc.
]
#let figure_nombre_utilisateurs = align(center, cetz.canvas({
import cetz.draw: circle, content
let ratio_bulle = 0.4
let ratio_texte = 10
let distance_nom_utilisateurs = 0.6
let bulle_réseau(position, utilisateurs, couleur_bulle, couleur_texte, nom) = {
let rayon = calc.sqrt(utilisateurs / 3.14) * ratio_bulle
let taille_texte = rayon * ratio_texte * 1pt
circle(position, radius: rayon, stroke: 1pt, fill: couleur_bulle.luma(), name: nom)
content((nom, taille_texte * distance_nom_utilisateurs, nom + ".north"), text(size: taille_texte, fill: couleur_texte, [*#nom*]))
content((nom, taille_texte * distance_nom_utilisateurs, nom + ".south"), text(size: taille_texte, fill: couleur_texte, [#utilisateurs M]))
}
bulle_réseau((-7.2, 0), 50, blue, white, "Facebook")
bulle_réseau((-5, -2), 46.6, red, white, "Youtube")
bulle_réseau((-3.3, 0), 32, green, black, "Whatsapp")
bulle_réseau((-2, -2), 30, rgb("#e6008a"), white, "Instagram")
bulle_réseau((-0.7, -0.3), 23, rgb("#00e3e3"), black, "Tiktok")
bulle_réseau((0.6, -1.6), 21, yellow, black, "Snapchat")
bulle_réseau((2, -0.5), 15, luma(100), white, "X")
bulle_réseau((3.3, -1.2), 13.5, rgb("#2bb1ff"), black, "LinkedIn")
bulle_réseau((4.7, -0.6), 13, rgb("#c90000"), white, "Pinterest")
}))
#figure(
caption: "Le nombre d'abonnés des réseaux sociaux les plus utilisés en France, en 2024.",
figure_nombre_utilisateurs
)
= L'identité numérique et sa protection
#cadre(titre: "L'essentiel")[
En ligne, tout comme en société, chaque individu est caractérisé par son *identité numérique*. Toutes les traces que nous laissons sur les réseaux sociaux ont une influence sur la perception qu'ont les autres de nous-même, et forgent notre *e-réputation*.
La connection à un réseau social se fait via un processus d'*authentification* par un login et un mot de passe qui sont personnels. Un paramétrage adapté du compte permet de maîtriser la *confidentialité* des données.
]
#let figure_identité_numérique = align(center, cetz.canvas({
import cetz.draw: line, content
content((0, 0), text(size: 45pt, emoji.person))
let texte_ligne(départ: (0, 0), arrivée: (1, 1), distance_coude: 0.5, titre: "Zoukland", sous_titre: "wsh la zone du sang") = {
let (x_départ, y_départ) = départ
let (x_arrivée, y_arrivée) = arrivée
line(départ, (x_départ + distance_coude, y_départ), (x_départ + distance_coude, y_arrivée), arrivée, stroke: 0.5pt,name: "ligne")
let orientation = "east"
if (x_départ - x_arrivée < 0) {orientation = "west"}
content("ligne.end", anchor: "south-" + orientation, rect(stroke: none, inset: 3pt, text(size: 10pt, weight: "bold", titre)))
content("ligne.end", anchor: "north-" + orientation, rect(stroke: none, inset: 2pt, text(size: 8pt, sous_titre)))
}
let décallage_départ = 1
let coude_base = 1
let longueur = 3.5
let interligne = 0.15
let séparation_verticale = 1
// Gauche
texte_ligne(départ: (- décallage_départ, interligne * 2), arrivée: (- longueur, (séparation_verticale + interligne) * 2), distance_coude: - coude_base,
titre: "Réputation",
sous_titre: "ce que l'on dit sur moi"
)
texte_ligne(départ: (- décallage_départ, interligne), arrivée: (- longueur, séparation_verticale + interligne), distance_coude: - (coude_base + interligne),
titre: "Expression",
sous_titre: "ce que je dis"
)
texte_ligne(départ: (- décallage_départ, 0), arrivée: (- longueur, 0), distance_coude: - coude_base,
titre: "Réseau",
sous_titre: "qui je connais"
)
texte_ligne(départ: (- décallage_départ, - interligne), arrivée: (- longueur, - (séparation_verticale + interligne)), distance_coude: - (coude_base + interligne),
titre: "Certification",
sous_titre: "qui peut certifier mon identité"
)
texte_ligne(départ: (- décallage_départ, - interligne * 2), arrivée: (- longueur, - (séparation_verticale + interligne) * 2), distance_coude: - coude_base,
titre: "Achat",
sous_titre: "ce que j'achète"
)
// Droite
texte_ligne(départ: (décallage_départ, interligne * 2), arrivée: (longueur, (séparation_verticale + interligne) * 2), distance_coude: coude_base,
titre: "Opinion",
sous_titre: "ce que je pense"
)
texte_ligne(départ: (décallage_départ, interligne), arrivée: (longueur, séparation_verticale + interligne), distance_coude: coude_base + interligne,
titre: "Savoir",
sous_titre: "ce que je sais"
)
texte_ligne(départ: (décallage_départ, 0), arrivée: (longueur, 0), distance_coude: coude_base,
titre: "Hobbies",
sous_titre: "ce que j'aime"
)
texte_ligne(départ: (décallage_départ, - interligne), arrivée: (longueur, - (séparation_verticale + interligne)), distance_coude: coude_base + interligne,
titre: "Avatar",
sous_titre: "comment je me montre"
)
texte_ligne(départ: (décallage_départ, - interligne * 2), arrivée: (longueur, - (séparation_verticale + interligne) * 2), distance_coude: coude_base,
titre: "Profession",
sous_titre: "quel est mon métier et où je travaille"
)
}))
#figure(
caption: "Les parties de l'identité numérique.",
figure_identité_numérique
)
== Exercice 1: Un exemple de "login social"
Paola, une jeune créatrice du web, veut s'inscrire sur un site de financement participatif pour percevoir des pourboires de la part de ses fans. Le site lui propose deux modes de connexion \[ci-contre\]: une authentification dite "classique", ou une authentification rapide par un processus dit de "login social".
=== 1. Présenter les deux solutions qui s'offrent à elle pour créer son compte.
#pointillets
#pointillets
=== 2. Quelles informations doivent être fournies pour une authentification classique ?
#pointillets
=== 3. Quelles pourraient être les informations transmises par Google au site de financement participatif si Paola choisit le login social ?
#pointillets
#pointillets
=== 4. Quelle solution devrait préférer Paola si elle ne tient pas à divulger ces informations ?
#pointillets
= Modèle économique des réseaux sociaux
#cadre(titre: "L'essentiel")[
Les entreprises de réseaux sociaux génèrent plus de la moitié de leurs revenus grâce à la publicité. La revente de données, l'intégration de jeux en ligne ou encore d'applications génèrent également des revenus. Des algorithmes proposent en outre des *publicités ciblées* selon les relations de l'abonné, ses goûts, ses opinions, son âge, etc.
]
== Exercice 2: Quelques extraits d'une politique de confidentialité
Amir installe une nouvelle application sur son téléphone portable. Pour valider l'inscription, il doit "reconnaître avoir lu" la politique de confidentialité. Curioux, il choisir de parcourir le document:
#block(
breakable: true,
stroke: 0pt,
fill: luma(225),
width: 100%,
inset: 5pt
)[
"Lorsque vous utilisez nos services \[...\] nous devons ainsi collecter des informations importantes vous concernant, par exemple : votre nom, nom d'utilisateur, mot de passe, adresse, _e-mail_, numéro de téléphone et date de naissance. \[...\] Sur internet, il convient de faire preuve de bon sens : n'envoyez pas de messages ou ne partagez pas de contenu que vous ne voudriez pas que quelqu'un d'autre puisse enregistrer ou partager \[...\].
#linebreak()
Nous collectons des informations sur votre activité \[...\] la manière dont vous interagissez avec d'autres \[utilisateurs\], tels que leur noms, l'heure et la date de vos échanges, le nombre de messages que vous échangez avec vos amis, les amis avec lesquels vous échangez le plus de message.
#linebreak()
Nous collections des informations issues des capteurs de l'appareil, comme les accéléromètres, gyroscopes, boussoles, microphones et la présence ou non d'écouteurs connectés ; et des informations sur vos connexions réseau mobile et sans fil, comme votre numéro de téléphone mobile, votre prestataire de service, votre adresse IP et la puissance du signal. \[...\] nous pouvons - avec votre autorisation - collecter des informations depuis le répertoire de votre appareil \[...\].
#linebreak()
Lorsque vous visitez notre site web nous collectons également des informations dans les fichiers journaux, entre autres : les pages que vous avez visitées avant ou après avoir accédé à notre site, \[...\].
#linebreak()
Nous utilisons certaines des informations que nous receuillons à votre sujet pour essayer de choisir les bonnes publicités et vous les envoyer au bon moment. Par exemple, si vous avez cliqué sur une série de publicités pour des jeux-vidéo, nous pourrons coninuer à vous en présenter \[...\]."
#align(right, "(Extrait au 30 oct. 2019)")
]
=== 1. Lire l'extrait et souligner les informations dont dispose l'application concertant le matériel utilisé par Amir.
=== 2. a. Entourer les informations collectées par le site au moment de l'inscription.
=== b. Cette liste est-elle exhaustive ? Justifier.
#pointillets
#pointillets
=== 3. Que pensez-vous de la possibilité, pour l'application, de collecter des informations sur les amis d'Amir ?
#pointillets
#pointillets
#pointillets
=== 4. Comment l'application pourra-t-elle proposer à Amir des publicités ciblées ?
#pointillets
#pointillets
#pointillets
=== 5. La politique de confidentialité dont vous avez lu un extrait compte plus de 10 900 mots, ce qui représente 40 pages d'un roman. Pourquoi ces documents sont-ils généralement si long ?
#pointillets
#pointillets
#pointillets
= Résumé
#cadre(titre: "Définitions à connaître")[
- Un *réseau social* rassemble des personnes physiques ou morales, mises en relation par l'intermédiaire d'une application dédiée. La connexion à un réseau social s'effectue depuis toute machine connectée à Internet: ordinateur, smartphone, tablette, etc.
- Un *abonné actif* est un abonné se connectant régulièrement au réseau social.
- L'*identité numérique* est l'ensemble des traces numériques qu'une personne ou une collectivité laisse sur le Web, comme ses coordonnées ou ses goûts.
- L'*e-réputation* (ou web-réputation) est la notoriété associée à une identité numérique. Elle peut renvoyer une image positive ou négative de cette identité.
- Le *processus d'authentification* permet l'accès individualisé à un réseau à l'aide d'un _login_ et d'un mot de passe.
]
=== Il faut être capable de :
#set list(marker: [#sym.square])
- définir les termes suivants : réseau social, identité numérique
- distinguer plusieurs réseaux sociaux selon leur caractéristiques
- identifier les sources de revenu d'une entreprise du réseautage social
- expliquer la notion : "Quand c'est gratuit, c'est vous le produit"
- comprendre que le paramétrage d'un abonnement influe sur la confidentialité des données personnelles
|
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/math/style-04.typ | typst | Other | // Test text properties.
$text(#red, "time"^2) + sqrt("place")$
|
https://github.com/wuc9521/CS-scholars-report | https://raw.githubusercontent.com/wuc9521/CS-scholars-report/main/sections/intro.typ | typst | = Introduction
== Motivation
We chose to focus on CS-scholars for our project due to the increasing importance of computer science in today's technology-driven world. The field of computer science is rapidly evolving, with new discoveries and innovations emerging constantly. By creating a comprehensive database and analysis tool for CS-scholars, we aim to:
1. Facilitate collaboration among researchers by making it easier to find experts in specific subfields.
2. Help students and aspiring researchers identify potential mentors and research groups.
3. Provide insights into research trends and the impact of various institutions in the field of computer science.
4. Offer a valuable resource for funding agencies and policymakers to make informed decisions about research support.
The importance of this application lies in its potential to accelerate scientific progress by fostering connections and providing a clear overview of the computer science research landscape.
== Application
We designed CS-scholars to give users information about various aspects of computer science research. It combines information about research locations, scholars in those locations, their grants, and their publications. Using an intuitive and responsive frontend, it gives easy access to a big amount of data stored in our database.
Users can find top publishing scholars in areas combined with computer science, see how different institutions are impacting the computer science field, or even add data and information of their own.
== Task Assignment
We have two members in our group, and we divided the tasks as follows:
<NAME>:
- Data Preprocessing
- Frontend Development
- Helping with Backend
- Database Setup and Data Insertion
<NAME>:
- Backend Development
- Database Setup and Data Insertion
- Evaluation
Our cooperation is very good, and we have been working together to solve the problems we encountered. |
|
https://github.com/RiccardoTonioloDev/Bachelor-Thesis | https://raw.githubusercontent.com/RiccardoTonioloDev/Bachelor-Thesis/main/chapters/introduction.typ | typst | Other | #import "@preview/glossarium:0.4.1": gls
#pagebreak(to: "odd")
= Introduzione
== Stimare le profondità
L'avere la possibilità di misurare la profondità di un'immagine e quindi potenzialmente la profondità di ciascun frame all'interno di uno _streaming_ video, apre le porte alla risoluzione di una vasta gamma di problemi che richiedono una stima precisa delle distanze tra gli oggetti all'interno di un determinato campo visivo.
Alcuni problemi appartenenti a questa classe includono:
- Prevenzione dalle collisioni: lo sviluppo di algoritmi che, controllando un oggetto fisico in movimento, cercano di evitare impatti con altre entità lungo il suo percorso;
- Percezione tridimensionale: una serie di algoritmi che analizzano dati di profondità per ricostruire una scena tridimensionale dell'ambiente circostante;
- Realtà aumentata: un settore che prevede, attraverso l'uso di visori e altri dispositivi, il posizionamento di elementi grafici virtuali nel campo visivo dell'utente in modo che si integrino naturalmente con la realtà.
Esistono soluzioni _hardware_ per avere delle misurazioni di profondità con alta precisione. I sistemi _hardware_ più famosi ed utilizzati sono:
- Sensori di profondità, come ad esempio il @LiDAR;
- Sistemi di fotografia stereoscopica, come ad esempio la @FotoStereo.
Il problema con questi sistemi hardware risiede, tuttavia, in due aspetti principali:
- Nel caso dei sensori @LiDAR, il costo elevato può rendere il prodotto finale meno competitivo sul mercato o ridurre i margini di profitto per l'azienda che lo fornisce. Ad esempio, se si volesse integrare un @LiDAR in un robot da giardino, il costo aggiuntivo potrebbe influire negativamente sulla competitività, relativa al prezzo, del prodotto.
- D'altro canto, l'integrazione di un sistema basato su fotocamere stereoscopiche presenta altre sfide pratiche. Non è sempre semplice trovare spazio per le due fotocamere necessarie e gestire la loro calibrazione può essere complicato. Questi problemi possono limitare l'applicabilità della tecnologia in ambienti dove lo spazio è ristretto o dove la calibrazione precisa è difficile da ottenere o mantenere.
== _Monocular Depth Estimation_
Un'altra soluzione promettente è quella di sviluppare una rete neurale in grado di prevedere correttamente le profondità di un'immagine a partire da una singola immagine in _input_, un approccio noto come _Monocular Depth Estimation_ (@MDE). Se riuscisse ad essere realizzata con successo, tale soluzione permetterebbe il vantaggio di utilizzare una sola fotocamera, con il potenziale di un sensore @LiDAR.
Tuttavia, questo tipo di approccio, oltre alle tradizionali sfide del _machine learning_, come la ricerca di dataset adeguati e la costruzione di un modello adatto, presenta ulteriori difficoltà, in particolare nei sistemi @embedded, che hanno vincoli significativi in termini di memoria, energia e di potenza computazionale.
Modello particolarmente interessante di _machine learning_ è PyDNet, in quanto fortemente leggero (meno di 2 milioni di parametri) e discretamente performante, considerata la sua dimensione. Per questo motivo è stato scelto per essere il modello di riferimento da usare come punto di partenza del tirocinio.
== Obbiettivi
Il tirocinio si è quindi strutturato su due macro obbiettivi:
+ Studiare come il problema di @MDE è stato affrontato da PyDNet V1 e V2:
- Studiarne i _paper_ e i relativi codici;
- Inserire nella procedura di allenamento un sistema di _logging_ per analizzare i costi provenienti dalle varie _loss function_;
- Riprodurre l'allenamento di PyDNet V1 per verificare i risultati enunciati nel _paper_;
- Migrare tutto il codice di PyDNet V1 e il codice del modello di PyDNet V2 da @Tensorflow a @PyTorch;
- Riprodurre l'allenamento di PyDNet V1 nella sua versione migrata per verificare che si ottengano gli stessi risultati e che quindi la versione migrata sia equivalente all'originale.
+ Esplorare soluzioni per ottenere modelli migliori in termini di efficacia e efficienza:
- Verificare come variano le prestazioni di PyDNet V1 al variare degli iperparametri;
- Esplorare eventuali nuove tecniche e strategie al fine di creare un modello migliore nel compito di @MDE.
== Organizzazione del documento
Relativamente al documento sono state adottate le seguenti convenzioni tipografiche:
- Gli acronimi, le abbreviazioni e i termini ambigui o di uso non comune menzionati vengono definiti nel glossario, situato alla fine del presente documento;
- I _link_ ipertestuali interni al documento utilizzano la seguente formattazione: #text(fill: blue.darken(70%),weight: "medium","parola")\;
- Ogni dichiarazione, risultato o prodotto proveniente da letteratura scientifica, viene accompagnato da una citazione a tale fonte mediante un indice, utilizzabile per rintracciare la fonte di tale dichiarazione mediante la bibliografia, situata alla fine del presente documento;
- I termini in lingua straniera o facenti parti del gergo tecnico sono evidenziati con il carattere _corsivo_.
La presente tesi è organizzata come segue:
- #link(<ch:pydnet>)[Il secondo capitolo]: descrive in primo luogo l'architettura, il funzionamento e l'addestramento di PyDNet V1, in secondo luogo descrive il processo di migrazione del modello da @Tensorflow a @PyTorch e la successiva verifica di validità del prodotto del processo di migrazione.
- #link(<ch:xinet>)[Il terzo capitolo]: descrive l'architettura, il funzionamento e la validazione di XiNet, una rete neurale convoluzionale parametrica creata per essere particolarmente efficiente in termini di consumo energetico.
- #link(<ch:attention>)[Il quarto capitolo]: descrive l'architettura, il funzionamento e i casi d'uso di meccanismi di attenzione quali la _self attention_ e i _convolutional block attention module_ nell'ambito della _computer vision_.
- #link(<ch:pyxinet>)[Il quinto capitolo]: descrive l'approccio esplorativo nella costruzione di PyXiNet, investigando vari tipi di architetture, progettate mediante iterazioni successive basate sui risultati delle precedenti, al fine di trovare una miglior alternativa a PyDNet.
- #link(<ch:risultati>)[Il sesto capitolo]: analizza i risultati ottenuti dai vari modelli di PyXiNet, sia dal punto di vista quantitativo che qualitativo;
- #link(<ch:conclusioni>)[Il settimo capitolo]: descrive i traguardi raggiunti e le conclusioni deducibili.
|
https://github.com/arran4/resume | https://raw.githubusercontent.com/arran4/resume/main/resume.typ | typst | #import "@preview/modern-cv:0.4.0": *
#show: resume.with(
author: (
firstname: "Arran",
lastname: "Ubels",
email: "<EMAIL>",
phone: "(+61) 421-798-794",
github: "arran4",
linkedin: "arranubels",
address: "Melbourne, Australia",
positions: (
"Software Developer",
"Application & Integration Developer",
)
),
date: datetime.today().display()
)
= Experience
#resume-entry(
title: "base2Services",
location: "Melbourne, Australia",
date: "2017 - February 2024",
description: "Contract Software Engineer"
)
#resume-item[
- Develop in a wide range of languages, frameworks and environments.
- Go, Python, ColdFusion, Java, Kotlin, Typescript, C\#, Ruby, Shell Scripting
- AWS SAM, SvelteKit, Apache Camel, Karaf, Github Actions
- Designed and built a AWS Resource auditor in the style of a linter
- Built Jenkins Plugins relating to inventory and data warehousing (Data Lake.)
- Built and maintained a pure multi-container Bash application for cloud integrated RDS backups
]
#resume-entry(
title: "IWT",
location: "Melbourne, Australia",
date: "2023",
description: "Contract Software Engineer via base2Services"
)
#resume-item[
- Took over responsibility for converting designs to a fully fledged Typescript SvelteKit serverless application handed over before launch to the directors of the client
- Aided, trained and reviewed changes of juniors
- Sole senior on project, also responsible for the CI/CD system and deployment to AWS via AWS SAM & CDN, completely Serverless deploy
- Maintained libraries and components as required, pushed upstream changes as required, built services to support JAM stack
]
#resume-entry(
title: "CircleIn",
location: "Melbourne, Australia",
date: "2022",
description: "Contract Software Engineer"
)
#resume-item[
- Designed, built, maintained, documented and trained resources to use; a mail integration, processing and templating serverless application in Go
- Worked with Python with AWS Athena, Data lake & Glue. Document pre-existing "click-ops" solution, fixing and resolving issues, fully Kotlin based CDK conversion.
]
#resume-entry(
title: "FosterNet at Foster Heating",
location: "Melbourne, Australia",
date: "2018- 2022",
description: "Freelance Software Engineer"
)
#resume-item[
- Translated, quoted, designed, produced, supported a complete Excel to website conversion of their quoting application (From a set of network shared Excel 1997 XLM Macro document.)
- Go on Google Apps Engine Classic, Typescript Vue.js front end, Github Actions for CI
]
#resume-entry(
title: "Fulfilio",
location: "Melbourne, Australia",
date: "2021",
description: "Contract Software Engineer via base2Services"
)
#resume-item[
- Scoped, designed, built, and handed over a Apache Camel integration, using Oracle's GraalVM native compilation and AWS Fargate
- Jenkins for CI, Docker for the container, worked with DevOps for deployment system.
]
#resume-entry(
title: "Myob",
location: "Melbourne, Australia",
date: "2021 - 2022",
description: "Contract Software Engineer"
)
#resume-item[
- Educational role - No development - Teaching programming in Python + Django to assist with career change to DevOps.
- Was responsible for assisting a ops team learn development practices, by helping them develop an application in Python to reduce their workload.
- Taught CA cert checking, and OAuth2 low levels, also Go programming via feedback, pull request reviews, and producing educational content.
]
#resume-entry(
title: "Redbubble",
location: "Melbourne, Australia",
date: "2020",
description: "Contract Software Engineer via Cogent"
)
#resume-item[
- Part of a new team created to assist the marketing team with data integration projects
- Used AWS Lambda, AWS Event Bridge, Apache Airflow, and GCP Bigquery for (mostly) Google Analytics
- Mostly developed in Typescript. Was some Typescript + React development.
]
#resume-entry(
title: "City Of Melbourne",
location: "Melbourne, Australia",
date: "2019",
description: "Contract Software Engineer via Cogent"
)
#resume-item[
- Contract for the early stages of City of Melbourne “Whats On”. Consultation, design, estimate, and site scaffolding.
- Vue.js with a Ruby on Rails + Postgres backend
]
#resume-entry(
title: "Qantas/Jetstar (formally Hooroo)",
location: "Melbourne, Australia",
date: "2019",
description: "Contract Software Engineer via Cogent"
)
#resume-item[
- Short contract to assisted lead developer with day to day for a Typescript - RxJS - React based vertical slice "Jetstar Packages".
]
#resume-entry(
title: "Foster Heating",
location: "Melbourne, Australia",
date: "2017-2018",
description: "Freelance Software Engineer - XLM & VBA"
)
#resume-item[
- Initial engagement, maintain the existing application with a couple of needed expansions as I learned the system for a future build out
- Took client requirements, quoted, build, deployed and supported changes to an existing Excel XLM application (added some VBA for easier reading.)
]
#resume-entry(
title: "Consible",
location: "Melbourne, Australia",
date: "2016 - 2017",
description: "Cofounder"
)
#resume-item[
- Went through an accelerator, sought customers, architecture, planning, finances, etc
- 80% of the dev and ops work: built out a full suite of Micro services. Using, Go, React, etcd, redis, TiDB, Wrecker (CI), extensive use of Docker,
- Built desktop apps, web apps, Android mobile app.
]
#resume-entry(
title: "base2Services",
location: "Melbourne, Australia",
date: "2012 - 2015",
description: "Application & Integration Developer"
)
#resume-item[
- Design, develop and deploy solutions for clients both external and internal.
- SCRUM Software life cycle.
- Implement and maintain a Continuous Integration (CI) pipeline for existing and new projects, Jenkins and Octopus Deploy
- Maintain, estimate and extend existing Jboss 4 and Jboss 7 Java applications
- Extend existing applications with REST and SOAP based API for 3rd parties and mobile devices
- Develop and maintain Open Source: Kagura Bi, a simple reporting service, built on Karaf, Camel, Spring, REST and JQuery.
- Maintain, adapt and expand C\# 2.0 for Windows Mobile applications
- Work with operations to design and develop applications to assist and automate, particularly relating to AWS and SaaS offerings
- Worked on a couple internal startups / spin-outs in a range of languages, Go, Java, Python, C\#, ColdFusion, and tools, Apache Camel, Karaf, Jboss.
]
#resume-entry(
title: "Ubels Computer Services",
location: "Melbourne, Australia",
date: "2009 - 2011",
description: "Contract Software Developer and System Admin"
)
#resume-item[
- Built Android Apps. Deployed and maintained ESXi Servers, running LAMP stack educational software (mostly Moodle/Mahara.)
- Designed and documented deployments, and infrastructure at various small clients.
]
#resume-entry(
title: "Australian Sales and Promotions",
location: "Melbourne, Australia",
date: "2009 - 2011",
description: "Contract C\# and Microsoft Dynamics Developer"
)
#resume-item[
- Designed, built and maintained internal applications in C\#, RCTI generator, Sales Monitoring & Reporting
- Customized and extended Microsoft Dynamics with applications and plugins to meet requirements
- Trained staff up in internal tools and Microsoft software, Dynamics, Reporting services, etc
]
#resume-entry(
title: "Dimension Data (Now NTT)",
location: "Melbourne, Australia",
date: "2007 - 2009",
description: "NAT MS Ops Center - CIS - Nortel IVR and Sun Solaris Engineer"
)
#resume-item[
- Maintained Sun Servers (Solaris 8,9,10) and Nortel IVRs inside Telstra, Sensis, and some banks and mining companies
- Built internal tools in PHP, and built up the KnowledgeBase (a wiki)
]
= Education
#resume-entry(
title: "Monash University",
location: "B.S. in Software Engineering",
date: "2011-2013",
description: "Clayton - Part time"
)
#resume-entry(
title: "Monash University",
location: "B.S. in Computer Science",
date: "2004 - 2007",
description: "Clayton - Full time"
)
= Projects
#resume-entry(
title: "Golang iCal Parser & Serializer",
location: [#github-link("arran4/golang-ical")],
date: "May 2018 - Present",
description: "BDFL"
)
#resume-item[
- Implement a bi-directional iCal parser and serializer in go - none which did both existed prior
- Maintain, review & merge contributions
]
= Preferences
I am comfortable with most languages, however that doesn't mean I don't have preferences.
#resume-skill-item("Languages", (strong("Go"), strong("Typescript"), "Dart", ))
#resume-skill-item("Frameworks", (strong("SvelteKit"), strong("Flutter"), "Vue", ))
#resume-skill-item("Tools", (strong("Github Actions"), "Gitlab", "Jenkins", "AWS SAM CLI", ))
#resume-skill-item("Infra", (strong("FaaS"), strong("Google App Engine"), "ECS / Fargate", "Containerization", ))
= Aspirations
I am interested in getting more involved with some technologies that I haven't used yet.
#resume-skill-item("Languages", (strong("Rust"), strong("Zig"), "WASM", ))
#resume-skill-item("Tools", (strong("Kubernetes"), ))
|
|
https://github.com/Nrosa01/TFG-2023-2024-UCM | https://raw.githubusercontent.com/Nrosa01/TFG-2023-2024-UCM/main/Memoria%20Typst/capitulos/ComparacionYPruebas.typ | typst | #import "@preview/subpar:0.1.0"
Una vez entendidos los simuladores de arena y las implementaciones realizadas, se procede a evaluar y compararlas de la siguiente forma:
- Por un lado, una evaluación de rendimiento en cada una de las implementaciones, tanto en CPU como en GPU.
- Por el otro, una evaluación de usabilidad. Para esta se evaluará la capacidad de los usuarios de expandir el sistema dadas unas instrucciones.
== Comparación de rendimiento <ComparacionRendimiento>
Para asegurar que la comparación sea justa, todas las pruebas se han realizado en el mismo hardware en 2 equipos distintos. Además, para medir el rendimiento se aumentará el número de partículas simuladas por segundo hasta que ninguno de los sistemas pueda ejecutar la simulación en tiempo real, esto es, 60 veces por segundo. La simulación siempre se lleva a cabo en una matriz cuadrada de $N*N$ celdas. El tamaño de la simulación será mostrado en cada gráfica de rendimiento.
Las características de los equipos usados para las pruebas son las siguientes:
#v(10pt)
#set align(center)
#rect()[
#set text(9pt)
#grid(columns: 2, column-gutter: 40pt, inset: 0.5em,
[
#set align(left)
Equipo 1:
- CPU: AMD Ryzen 5 5500
- GPU: NVIDIA GeForce RTX 4060
- RAM: 16 GB GDDR4 3200 MHz
- Sistema operativo: Windows 11
],
[
#set align(left)
Equipo 2:
- CPU: AMD Ryzen 7 2700x
- GPU: AMD RX 5700XT
- RAM: 32 GB GDDR4 3200 MHz
- Sistema operativo: Windows 11
]
)]
#set align(left)
#v(10pt)
Se han realizado las diversas pruebas:
- Comparación entre todos los simuladores con el mismo tipo de partícula.
- Comparación entre los simuladores de CPU con una partícula demandante.
Las gráficas estarán ordenadas de mayor a menor cantidad de partículas simuladas a 60 fotogramas por segundo.
A continuación se muestran los resultados obtenidos en las pruebas de rendimiento con GPU:
#subpar.grid(
figure(image("../images/compc1.png"), caption: [
Resultados primer equipo
]), <a>,
figure(image("../images/compc2.png"), caption: [
Resultados segundo equipo
]), <b>,
gap: 10pt,
columns: (1fr, 1fr),
caption: [Resultados de las pruebas de rendimiento con GPU],
label: <full>,
)
#v(5pt)
Esta solo se usó una partícula de arena, ya que todos los simuladores la implementaban. La implementación de esta es lo más similar posible en todos los simuladores, para que la comparación sea justa. Como puede observarse en la @full, la diferencia entre simular en la GPU y la CPU es considerablemente grande.
A continuación se muestra una segunda prueba, realizada solo entre las implementaciones en CPU. Esto permite observar la diferencia de rendimiento entre los distintos simuladores en CPU mejor que en la gráfica anterior. Además, dado que las implementaciones en CPU tienen más tipos de partículas, se ha optado por usar una partícula compleja común a todas, la particula de `planta`. Esta partícula tiene la peculiaridad de que necesita comprobar el estado de todos sus vecinos para buscar agua que transformar en planta. Esta búsqueda incurre en un coste computacional mayor que el de la arena, que solo necesita comprobar el estado de sus vecinos para caer.
#subpar.grid(
figure(image("../images/compc3.png"), caption: [
Resultados primer equipo
]), <a2>,
figure(image("../images/compc4.png"), caption: [
Resultados segundo equipo
]), <b2>,
gap: 10pt,
columns: (1fr, 1fr),
caption: [Resultados de las pruebas de rendimiento con GPU],
label: <full2>,
)
En esta gráfica puede apreciarse la diferencia en cuanto a rendimiento entre ambas implementaciones, además de la diferencia de rendimiento entre una partícula simple y una compleja respecto a la @full
Una vez realizadas las pruebas de rendimiento, se procede a evaluar la usabilidad de los distintos sistemas.
== Comparación de usabilidad
Para evaluar la usabilidad de los distintos sistemas se ha realizado una encuesta a un grupo de 12 personas. En ella se les ha pedido que realicen una serie de tareas en el simulador de Lua, el de Rust web o ambos. Se descartó el simulador en GPU al resultar complejo de expandir y de ejecutar debido a los requisitos necesarios para su ejecución. La tarea fue la misma para ambos simuladores y el proceso fue grabado para su posterior análisis. Se evalúa tanto el tiempo que tardan en realizar la tarea como la cantidad de errores y confusiones que cometen. El grupo de usuarios seleccionado cubre un perfil amplio de individuos, desde de estudiantes de informática hasta personas sin conocimientos previos en programación. En ambos casos, ninguno de los usuarios había utilizado previamente ninguno de los simuladores ni conocían la existencia de los simuladores de arena.
La tarea pedida consistía en crear 4 particulas: Arena, Agua, Gas y Lava. La arena trata de moverse hacia debajo si hay vacío o agua, en caso de no poder, realiza el mismo intento hacia abajo a la derecha y abajo a la izquierda. Es decir, intenta moverse en las 3 direcciones descritas si hay aire o agua. Solo se mueve una vez en la primera dirección en la que es posible en cada generación. La particula de gas tiene el mismo comportamiento que el de arena pero yendo hacia arriba en vez de hacia abajo. La partícula de agua se comporta igual que la arena, pero si no puede moverse en ninguna de las 3 direcciones descritas, se debe intentar mover también directamente a la derecha y a la izquierda, en ese orden. Por último, la particula de lava es igual a la de arena en su movimiento, pero si toca una partícula de agua la convierte en gas. Este punto es importante, pues es la particula de lava la que detecta si hay agua y no al revés. Esto tiene implicaciones en su implementación mediante bloques. Esta tarea es común a las pruebas de Lua y Blockly.
Para la realización de dicha tarea, se explicó que es un simulador de arena y como usarlo. Para esto, se enseña en tiempo real como crear una partícula básica que va hacia abajo sin comprobar nada, además de mencionar como podría hacer la comprobación de detectar una particula en una dirección. Además, se muestra la solución a los usuarios, sin mostrar el código o los bloques, se les enseña las particulas y su comportamiento de forma visual para que tengan una referencia respecto al objetivo a lograr.
No todas las personas pudieron realizar la prueba de Lua debido a falta de conocimientos o indisposición. Sin embargo otros usuarios que no poseen un perfil técnico accedieron y la realizaron. Para minimizar el sesgo, un grupo de usuarios primero realizó la prueba con Lua y otro con Rust web.
Para la realización de la prueba de Lua, los parámetros registrados son los siguientes:
- Necesitó asistencia en la creación de la lógica: Positivo si el usuario necesitó ayuda activa después de la explicación inicial. Las dudas preguntadas por el usuario no cuentan como necesitar ayuda.
- Usó el método isEmpty: Esta funcion permite comprobar si la particula en una direccion es vacía. Este parámetro es positivo si hizo uso del método sin sugerencia previa.
- Necesitó ayuda para implementar movimiento aleatorio horizontal: Se considera positivo si el usuario necesitó asistencia del probador para añadirle aleatoriedad de movimiento a las partículas
- Necesitó ayuda con errores de ejecución: Se considera positivo si el usuario necesitó asistencia del probador para solucionar la ejecución de una partícula que provocase que el programa crashee.
- Terminó la prueba: Se considera positivo si el usuario terminó la prueba en menos, negativo si por frustración u otras razones no la terminó.
#pagebreak()
Los resultados son los siguientes:
#figure(image("../images/pruebaUsuarioLua.png", width: 50%), caption: [
Resultados de las pruebas con usuarios para la prueba de Lua
])
Todos los usuarios fueron capaces de crear las partículas deseadas de una manera rápida, aunque no de la más efectiva en la mayoría de casos. Prácticamente ningún usuario hizo uso de la función 'isEmpty' sin sugerencia previa del supervisor, la mayoría aprovechaba para reutilizar la lógica creada en la partícula de arena, lo que provocaba que usasen la función 'check_neighbour_multiple' de una forma incorrecta, ya que lo empleaban para hacer comprobaciones con un solo tipo de partícula.
Todos los usuarios tuvieron errores de ejecución. Las causas más comunes son: falta de escribir 'api:' para llamar a funciones de programación de partículas, errores de escritura a la hora de hacer uso de estas mismas funciones, y olvidos de hacer uso de 'end' para cerrar claúsulas condicionales 'if'. La mayoría de participantes pudieron implementar aleatoriedad en el movimiento de manera satisfactoria sin ayuda del probador.
De estos 5 usuarios, 3 realizaron esta prueba antes que la de Blockly. Estos usuarios entendieron más rápido el sistema de bloques y el como se debía realizar la comprobación de partículas vecinas.
Como particularidad, uno de estos usuarios, a la hora de implementar aleatoriedad, buscó hacer uso de una funcionalidad 'randomTransformation(horizontalReflection)' inexistente en esta simulación. No se detectó ningún sesgo de este tipo por parte de los participantes que realizaron la prueba de Lua y después la de Blockly.
Respecto la prueba de Blockly, los parámetros registrados son los siguientes:
- Necesitó ayuda: Positivo si el usuario necesitó ayuda activa después de la explicación inicial. Las dudas preguntadas por el usuario no cuentan como necesitar ayuda.
- Usó el array: Se refiere a si el usuario uso la funcionalidad de array del bloque de partícula. Este bloque es el primero que se explica y se destaca su función de poder representar varias partículas, además de mostrar un ejemplo de esto. Este parámetro es positivo ei el usuario usó el array en alguna ocasión.
- Usó el foreach: Se refiere a si el usuario uso la funcionalidad de foreach del bloque de partícula. Este bloque se considera el más complejo de entender debido a que las direcciones que están dentro de este son modificadas. Este bloque es necesario para realizar la última tarea. Este parámetro es positivo si el usuario usó el foreach en alguna ocasión sin ayuda.
- Usó una transformación horizontal: Todas las partículas que se pedían tenían la peculiaridad de tener que comprobar ambas direcciones horizontales. Este parámetro es positivo si el usuario usó una transformación horizontal en alguna ocasión sin ayuda. Se espera que ningún usuario use este bloque debido a que no es necesario y no se pide, pero usarlo sería ideal.
- Usó el bloque touching: Este bloque permite resolver el último comportamiento de una forma alternativa. Si el usuario propone o decide usar este bloque, se considera positivo.
- Terminó la prueba: Se considera positivo si el usuario terminó la prueba en menos, negativo si por frustración u otras razones no la terminó.
Los resultados son los siguientes:
#figure(image("../images/pruebaUsuario.png", width: 50%), caption: [
Resultados de las pruebas con usuarios
])
Además de estos datos cuantitativos, se han recogido datos cualitativos. Estos datos se han recogido durante la prueba mediante anotaciones de los supervisores. Los resultados de estas anotaciones son los siguientes:
La mayoría de usuarios presentan problemas al inicio para usar el bloque que representa una partícula, pues a pesar de la explicación y el ejemplo mostrado, la mayoría olvida el funcionamiento de este bloque. Salvo una excepción, todos los usuarios tuvieron problemas para entender el bloque foreach, siendo el bloque que más ayuda necesitó. Por otro lado, se observó que tras una única explicación, los usuarios pueden navegar la interfaz con facilidad y no presentan problemas para añadir, eliminar partículas, poner la simulación en pausa y en general, usar los controles básicos del simulador. Sin embargo, a pesar de que entendían los elementos de la interfaz y su función, hubo uno que causó cierta fricción cognitiva: El botón de añadir partícula. Los usuarios lo pulsaban y procedían a editar bloques en el espacio actual, que no había cambiado, pues los usuarios parecían asumir que añadir una nueva particula la selecciona automáticamente.
Dos usuarios realizaron esta prueba antes que la de Lua. Estos usuarios tuvieron los problemas comunes que el resto de usuarios (no cerrar claúsulas if con un end, no usar 'api:'), sin embargo estos dos usuarios sí usaron el método 'isEmpty' sin sugerencia previa. Ambos pudieron realizar la tarea con mayor soltura que los usuarios que realizaron la prueba de Lua antes.
Una vez concluidas las pruebas, se procedió al análisis de los resultados obtenidos. A diferencia de las pruebas, no se hará una diferenciación entre rendimiento y usabilidad, sino que se decidió evaluar el valor que aporta cada simulador por separado. Finalmente se concluye con una vista global y se proponen mejoras para futuras versiones.
Lo más destacable de las pruebas es la gran diferencia de rendimiento entre los simuladores implementados en CPU y el implementado en GPU. Sin embargo el coste de implementación y extensión es mucho mayor. En este simulador no se pudo probar la usabilidad con usuarios debido a que para ello se requería compilar el proyecto, lo cual requiere ciertas herramientas que muchos usuarios no están dispuestos a instalar. Una implementación en GPU puede resultar ideal cuando se requiere un alto rendimiento pero además el comportamiento que se trata de lograr es altamente específico y de una complejidad moderada. Una implementación de estas características puede resultar útil para la investigación de autómatas celulares en las que se requiera procesar una gran cantidad de partícula simultáneamente para buscar patrones de grandes dimensiones que no podrían ser detectados con una implementación en CPU. Podría resultar interesante la investigación de un sistema que permita generalizar reglas para crear autómatas celulares en la GPU de forma flexible. Esto no ha sido posible con simuladores de arena debido a que en estos, una partícula puede modificar las vecinas, sin embargo, en los autómatas celulares cada celda como mucho puede modificarse a sí misma, lo que podría simplificar la implementación.
La implementación en C++ se realizó como una base para medir las demás. Esta permitió cuantificar la penalización de rendimiento que incurre la flexiblidad usar un lenguaje interpretado como Lua, aún en su versión JIT, así como usar WebAssembly. El desarrollo de simuladores de arena en C++ incurre en el mismo problema que la GPU, se requiere acceso al código y herramientas de desarrollo para poder extenderlo.
En cuanto a la implementacion en Lua, sorprende el rendimiento que puede lograr dada la flexibilidad que ofrece. Sin embargo, desarrollar interfaces es más complejo debido a la escasez de librerías para ello. Con suficiente trabajo, esta implementación tiene el potencial de ser la solución más balanceada de idónea para simuladores de arena en CPU, pues mediante el multihilo el rendimiento logrado resulta ser superior a lo esperado.
Por último, la implementación en Rust con Blockly destaca por resultar más lenta de lo esperado. La curva de aprendizaje mediante bloques resultó superior a la esperada, sin embargo, pasada esta, los usuarios parecen ser capaces de desarrollar particulas con facilidad sin requerir nociones de programación. Esta implementación resulta ser la más accesible debido a que simplemente requiere de un navegador, software que cualquier dispositivo inteligente actual posee. Debido a su rendimiento, esta implementación no es idónea para explorar simulaciones de una gran complejidad o tamaño. Esto podría paliarse mediante la implementación de multihilo, sin embargo, debido a las reglas de seguridad de memoria de Rust y la poca madurez de multihilo en WebAssembly, esta tarea resulta en una complejidad muy alta, existiendo la posibilidad de que no se pueda lograr. |
|
https://github.com/giZoes/justsit-thesis-typst-template | https://raw.githubusercontent.com/giZoes/justsit-thesis-typst-template/main/README.md | markdown | MIT License | # 苏理工学位论文typst模板 justsit-thesis-typst-template
基于苏理工论文参考格式修改完善的 Typst 模板,能够简洁、快速、持续生成 PDF 格式的毕业论文。
[Typst中文教程](https://typst-doc-cn.github.io/tutorial/introduction.html)
也可以参考 [Typst 中文文档网站](https://typst-doc-cn.github.io/docs/) 迅速入门。
## 使用
**重要!!!非官方文档,可能存在老师不认可的风险!!**
快速浏览效果: 查看 `thesis.pdf`,样例论文源码:查看 `thesis.typ`
**你只需要修改 `thesis.typ` 文件即可,基本可以满足你的所有需求。**
如果你认为不能满足你的需求,可以先查阅后面的 [Q&A](#qa) 部分。
### VS Code 本地编辑(推荐)
1. 在 VS Code 中安装 [Tinymist Typst](https://marketplace.visualstudio.com/items?itemName=myriad-dreamin.tinymist) 和 [Typst Preview](https://marketplace.visualstudio.com/items?itemName=mgt19937.typst-preview) 插件。前者负责语法高亮和错误检查等功能,后者负责预览。
- 也推荐下载 [Typst Companion](https://marketplace.visualstudio.com/items?itemName=CalebFiggers.typst-companion) 插件,其提供了例如 `Ctrl + B` 进行加粗等便捷的快捷键。
- 你还可以下载[南京大学学位论文](https://github.com/nju-lug/modern-nju-thesis?tab=readme-ov-file)作者开发的 [Typst Sync](https://marketplace.visualstudio.com/items?itemName=OrangeX4.vscode-typst-sync) 和 [Typst Sympy Calculator](https://marketplace.visualstudio.com/items?itemName=OrangeX4.vscode-typst-sympy-calculator) 插件,前者提供了本地包的云同步功能,后者提供了基于 Typst 语法的科学计算器功能。
2. 用 VS Code 打开下载好的目录,打开 `thesis.typ` 文件,并按下 `Ctrl + K V` 进行实时编辑和预览。

### 特性 / 路线图
- **说明文档**
- [ ] 暂时参考样文列出内容,或查看[南京大学学位论文](https://github.com/nju-lug/modern-nju-thesis?tab=readme-ov-file)以了解详细
- **类型检查**
- [ ] 应该对所有函数入参进行类型检查,及时报错
- **全局配置**
- [x] 类似 LaTeX 中的 `documentclass` 的全局信息配置
- [x] **盲审模式**,将个人信息替换成小黑条,并且隐藏致谢页面,论文提交阶段使用
- [x] **双面模式**,会加入空白页,便于打印
- [x] **自定义字体配置**,可以配置「宋体」、「黑体」与「楷体」等字体对应的具体字体
- [x] **数学字体配置**:模板不提供配置,用户可以自己使用 `#show math.equation: set text(font: "Fira Math")` 更改
- **模板**
- [x] 字体测试页
- [x] 封面
- [x] 论文标题页
- [x] 声明页
- [x] 中文摘要
- [x] 英文摘要
- [x] 目录页
- [x] 插图目录
- [x] 表格目录
- [x] 符号表
- [x] 致谢
- **编号**
- [x] 前言使用罗马数字编号
- [x] 附录使用罗马数字编号
- [x] 表格使用 `1-1` 格式进行编号
- [x] 数学公式使用 `(1-1)` 格式进行编号
- [x] 三级标题下的编号按如此顺序:1);(1);①
- **环境**
- [ ] 定理环境(这个也可以自己使用第三方包配置)
- **其他文件**
- [x] 本科生开题报告
- [ ] 本科生任务书
- [ ] 中期检查报告
## 其他文件
还实现了本科生的开题报告,只需要预览和编辑 `others` 目录下的文件即可。(一般般完善)

## Q&A
### 我不会 LaTeX,可以用这个模板写论文吗?
可以。
如果你不关注模板的具体实现原理,你可以用 Markdown Like 的语法进行编写,只需要按照模板的结构编写即可。
### 我不会编程,可以用这个模板写论文吗?
同样可以。
如果仅仅是当成是入门一款类似于 Markdown 的语言,相信使用该模板的体验会比使用 Word 编写更好。
### 为什么我的字体没有显示出来,而是一个个「豆腐块」?
这是因为本地没有对应的字体,**这种情况经常发生在 MacOS 的「楷体」显示上**。
你应该安装本目录下的 `fonts` 里的所有字体,里面包含了可以免费商用的「方正楷体」和「方正仿宋」,然后再重新渲染测试即可。
你可以使用 `#fonts-display-page()` 显示一个字体渲染测试页面,查看对应的字体是否显示成功。
如果还是不能成功,你可以按照模板里的说明自行配置字体,例如
```typst
#let (...) = documentclass(
fonts: (楷体: ("Times New Roman", "FZKai-Z03S")),
)
```
先是填写英文字体,然后再填写你需要的「楷体」中文字体。
**字体名称可以通过 `typst fonts` 命令查询。**
如果找不到你所需要的字体,可能是因为 **该字体变体(Variants)数量过少**,导致 Typst 无法识别到该中文字体。
### 学习 Typst 需要多久?
一般而言,仅仅进行简单的编写,不关注布局的话,你可以打开模板就开始写了。
如果你想进一步学习 Typst 的语法,例如如何排篇布局,如何设置页脚页眉等,一般只需要几个小时就能学会。
如果你还想学习 Typst 的「[元信息](https://typst-doc-cn.github.io/docs/reference/meta/)」部分,进而能够编写自己的模板,一般而言需要几天的时间阅读文档,以及他人编写的模板代码。
如果你有 Python 或 JavaScript 等脚本语言的编写经验,了解过函数式编程、宏、样式、组件化开发等概念,入门速度会快很多。
### 为什么只有一个 thesis.typ 文件,没有按章节分多个文件?
因为 Typst **语法足够简洁**、**编译速度足够快**、并且 **拥有光标点击处双向链接功能**。
语法简洁的好处是,即使把所有内容都写在同一个文件,你也可以很简单地分辨出各个部分的内容。
编译速度足够快的好处是,你不再需要像 LaTeX 一样,将内容分散在几个文件,并通过注释的方式提高编译速度。
光标点击处双向链接功能,使得你可以直接拖动预览窗口到你想要的位置,然后用鼠标点击即可到达对应源码所在位置。
还有一个好处是,单个源文件便于同步和分享。
即使你还是想要分成几个章节,也是可以的,Typst 支持你使用 `#import` 和 `#include` 语法将其他文件的内容导入或置入。你可以新建文件夹 `chapters`,然后将各个章节的源文件放进去,然后通过 `#include` 置入 `thesis.typ` 里。
### 我如何更改页面上的样式?具体的语法是怎么样的?
理论上你并不需要更改 `resources` 目录下的任何文件,无论是样式还是其他的配置,你都可以在 `thesis.typ` 文件内修改函数参数实现更改。具体的更改方式可以阅读 `resources` 目录下的文件的函数参数。
例如,想要更改页面边距为 `50pt`,只需要将
```typst
#show: doc
```
改为
```typst
#show: doc.with(margin: (x: 50pt))
```
即可。
## 开发者指南
### template 目录
- `thesis.typ` 文件: 你的论文源文件,可以随意更改这个文件的名字,甚至你可以将这个文件在同级目录下复制多份,维持多个版本。
- `ref.bib` 文件: 用于放置参考文献。
- `images` 目录: 用于放置图片。
### 内部目录(.\resources下)
- `utils` 目录: 包含了模板使用到的各种自定义辅助函数,存放没有外部依赖,且 **不会渲染出页面的函数**。
- `pages` 目录: 包含了模板用到的各个 **独立页面**,例如封面页、声明页、摘要等,即 **会渲染出不影响其他页面的独立页面的函数**。
- `layouts` 目录: 布局目录,存放着用于排篇布局的、应用于 `show` 指令的、**横跨多个页面的函数**,例如为了给页脚进行罗马数字编码的前言 `preface` 函数。
- 主要分成了 `doc` 文稿、`preface` 前言、`mainmatter` 正文与 `appendix` 附录/后记。
- `lib.typ`:
- **职责一**: 作为一个统一的对外接口,暴露出内部的 utils 函数。
- **职责二**: 使用 **函数闭包** 特性,通过 `documentclass` 函数类进行全局信息配置,然后暴露出拥有了全局配置的、具体的 `layouts` 和 `pages` 内部函数。
## 致谢
- 感谢 [@OrangeX4](https://github.com/OrangeX4) 开发的 [南京大学学位论文](https://github.com/nju-lug/modern-nju-thesis?tab=readme-ov-file) Typst 模板
## License
This project is licensed under the MIT License.
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/showybox/0.1.1/showy.typ | typst | Apache License 2.0 | #let showybox(
frame: (
upper-color: black,
lower-color: white,
border-color: black,
radius: 5pt,
width: 2pt,
dash: "solid"
),
title-style: (
color: white,
weight: "bold",
align: left
),
body-style: (
color: black,
align: left
),
title: "",
body
) = {
/*
* Main container of the showybox
*/
rect(
fill: frame.at("border-color", default: black),
radius: frame.at("radius", default: 5pt),
inset: 0pt,
stroke: (
paint: frame.at("border-color", default: black),
dash: frame.at("dash", default: "solid"),
thickness: frame.at("width", default: 2pt)
)
)[
/*
* Title of the showybox. We'll check if it is
* empty. If so, skip its drawing and only put
* the body
*/
#if title != "" {
rect(
inset:(x: 1em, y: 0.5em),
width: 100%,
fill: frame.at("upper-color", default: black),
radius: (top: frame.at("radius", default: 5pt)))[
#align(
title-style.at("align", default: left),
text(
title-style.at("color", default: white),
weight: title-style.at("weight", default: "bold"),
title
)
)
]
v(-1.1em)
} // Otherwise, don't put a title
/*
* Body of the showybox
*/
#rect(
fill: frame.at("lower-color", default: white),
width: 100%,
inset:(x: 1em, y: 0.75em),
radius:
if title != "" {
(bottom: frame.at("radius", default: 5pt))
} else {
frame.at("radius", default: 5pt)
}
)[
// Content
#align(
body-style.at("align", default: left),
text(body-style.at("color", default: black), body)
)
]
]
} |
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/ops-invalid-25.typ | typst | Other | // Error: 2:3-2:8 cannot apply 'not' to string
#let x = "Hey"
#(not x = "a")
|
https://github.com/mental2008/Resume | https://raw.githubusercontent.com/mental2008/Resume/main/English/main.typ | typst | #import "@preview/octique:0.1.0": *
#show heading: set text(font: "Linux Biolinum")
#show link: underline
// Uncomment the following lines to adjust the size of text
// The recommend resume text size is from `10pt` to `12pt`
#set text(
size: 10pt,
)
// Feel free to change the margin below to best fit your own CV
#set page(
margin: (x: 0.9cm, y: 1.3cm),
)
// For more customizable options, please refer to official reference: https://typst.app/docs/reference/
#set par(justify: true)
#show outline: it => {
show heading: set align(center)
it
}
#let chiline() = {v(-3pt); line(length: 100%); v(-5pt)}
// = <NAME> 杨凌云
#align(center, text(size: 20pt)[
#text(size: 24pt)[*Lingyun Yang*] #text(font: "华文楷体", size: 20pt)[杨凌云]
])
#align(center, text(10pt)[
#octique-inline("mail") #link("mailto:<EMAIL>") | #octique-inline("globe") #link("https://www.lingyunyang.com")[lingyunyang.com] | #octique-inline("book") #link("https://paper.lingyunyang.com")[Reading Notes] | #emoji.phone.receiver (+86) 135-0284-6103 | #octique-inline("location") Hong Kong
])
== Education
#chiline()
#link("https://hkust.edu.hk")[*Hong Kong University of Science and Technology*] #h(1fr) *Hong Kong, China* \
Ph.D. in Computer Science and Engineering, Advisor: *<NAME>* #h(1fr) Sep. 2020 -- Present \
- Research Domain: Cluster Management, Model Serving Systems, Resource Scheduling
#link("https://www.scut.edu.cn/en/")[*South China University of Technology*] #h(1fr) *Guangzhou, China* \
B.Eng. in Computer Science and Technology, Elite Class (GPA: 3.82/4) #h(1fr) Sep. 2016 -- Jul. 2020 \
- National Scholarship, The First Prize Scholarship
== Research and Selected Projects
#chiline()
I have a broad interest in resource management for large-scale data centers / AI infrastructure. Specifically, my research focuses on: (a) improving resource efficiency for AI/GPU clusters; (b) building efficient and low-cost AI model serving systems. \
#link("https://arxiv.org/abs/2407.02031")[*SwiftDiffusion: Efficient Diffusion Model Serving with Add-on Modules*] #h(1fr) (review, 2407.02031) \
#text(style: "italic", size: 7.5pt)[*Lingyun Yang*, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Wei Wang] \
- Built an efficient text-to-image system that generates images with diffusion models and add-on modules (i.e., ControlNets and LoRAs).
- Incorporated system designs, e.g., ControlNet-as-a-Service, async bounded LoRA loading, latent parallelelism for CFG computation.
- Achieved up to *7.8×* in serving latency and *1.6×* in throughput without compromising image quality.
*Prism: GPU-Disaggregated Serving for Deep Learning Recommendation Models at Scale* #h(1fr) (review) \
#text(style: "italic", size: 7.5pt)[*Lingyun Yang*, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>] \
- Proposed a GPU-disaggregated DLRM serving system to eliminate resource mismatch and meet elastic demand; leveraged RDMA network to offload computation on separate GPU and CPU nodes; resource-aware graph partitioning; topology-aware scheduling.
- In daily scenarios (e.g., a crowded GPU cluster with > 90% allocation rate), reduced CPU fragments by *53%* and GPU fragments by *27%*; In the Double 11 Shopping Festival, saved up to *90%* of GPUs when loaning GPU servers from training clusters.
#link("https://www.usenix.org/conference/atc23/presentation/weng")[*Beware of Fragmentation: Scheduling GPU-Sharing Workloads with Fragmentation Gradient Descent*] #h(1fr) (ATC'23) \
#text(style: "italic", size: 7.5pt)[*Lingyun Yang*, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>] #h(1fr) #octique-inline("mark-github") #link("https://github.com/hkust-adsl/kubernetes-scheduler-simulator")[hkust-adsl/kubernetes-scheduler-simulator] \
- Formally quantified *statistical* GPU resource fragments in shared GPU clusters.
- Proposed the fragmentation gradient descent (FGD) scheduling algorithm to reduce resource fragmentation.
- Reduced unallocated GPUs by up to *49%* compared to state-of-the-art scheduling policies.
*Large-Scale GPU Sharing and Overcommitment in Production* \
- Enabled large-scale GPU sharing in production clusters, with over *10k* shared GPU containers running daily.
- Support the co-location of GPU tasks with different priorities (e.g., latency-sensitive, best-effort).
- Designed and implemented the node-level agent and the cluster-level controller.
#link("https://dl.acm.org/doi/10.1145/3472883.3486987")[*Morphling: Fast, Near-Optimal Auto-Configuration for Cloud-Native Model Serving*] #h(1fr) (SoCC'21) \
#text(style: "italic", size: 7.5pt)[*<NAME>*, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>] #h(1fr) #octique-inline("mark-github") #link("https://github.com/kubedl-io/morphling")[kubedl-io/morphling]
- An auto-configuration framework for AI serving on Kubernetes; part of Alibaba's open-sourced #link("https://github.com/kubedl-io/kubedl")[KubeDL], a #link("https://landscape.cncf.io/?item=provisioning--automation-configuration--kubedl")[CNCF sandbox] project.
- Combined meta-learning and bayesian optimization to quickly find the *optimal* resource configuration (e.g., CPU cores, GPU memory, GPU timeshare, GPU type) and runtime parameters (i.e., batch size).
== Work Experience
#chiline()
*Alibaba Group* #h(1fr) *Hangzhou, China* \
_Research Intern_, Cluster Management Group, Mentor: *<NAME>* #h(1fr) Dec. 2020 -- Present \
- Conducted several research projects including Morphling, GPU sharing, FGD, Prism, SwiftDiffusion (details as mentioned above).
*Microsoft Research Asia* #h(1fr) *Beijing, China* \
_Research Intern_, Innovation Engineering Group #h(1fr) Jul. 2019 -- Jun. 2020 \
- Conducted research on model robustness of face recognition. Star of Tomorrow Internship Award of Excellence.
== Technical Skills
#chiline()
- *Programming: * Golang, C++, Python, JavaScript, asynchronous, multithread, multiprocess, distributed, RDMA \
- *Machine Learning: * PyTorch, TensorFlow, Numpy, Matplotlib, HuggingFace \
- *Full Stack: * Web Frontend, Backend, SQL, Grafana, Docker, Kubernetes, Git, CI/CD \
|
|
https://github.com/SWATEngineering/Docs | https://raw.githubusercontent.com/SWATEngineering/Docs/main/src/2_RTB/Glossario/functions.typ | typst | MIT License | #let glossary(content) = {
emph[#content#sub("G")]
}
#let team = {
emph[SWAT Engineering]
}
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/cineca/0.1.0/util/utils.typ | typst | Apache License 2.0 | #let minutes-to-datetime(minutes) = {
let h = calc.trunc(minutes / 60)
let m = int(calc.round(calc.fract(minutes / 60) * 60))
return datetime(hour: h, minute: m, second: 0)
}
#let events-to-calendar-items(events, start) = {
let dict = (:)
for value in events {
if value.len() < 4 {
continue
}
let kday = str(value.at(0))
let stime = float(value.at(1))
let etime = float(value.at(2))
let body = value.at(3)
if not dict.keys().contains(kday) {
dict.insert(kday, (:))
}
let istart = calc.min((calc.trunc(stime) - start), 24) * 60 + calc.min(calc.round(calc.fract(stime) * 100), 60)
let iend = calc.min((calc.trunc(etime) - start), 24) * 60 + calc.min(calc.round(calc.fract(etime) * 100), 60)
let ilast = iend - istart
if ilast > 0 {
dict.at(kday).insert(str(istart), (ilast, body))
}
}
dict
}
#let default-header-style(day) = {
show: pad.with(y: 8pt)
set align(center + horizon)
set text(weight: "bold")
[Day #{day+1}]
}
#let default-item-style(time, body) = {
show: block.with(
fill: white,
height: 100%,
width: 100%,
stroke: (
left: blue + 2pt,
rest: blue.lighten(30%) + 0.4pt
),
inset: (rest: 0.4pt, left: 2pt),
clip: true
)
show: pad.with(2pt)
set par(leading: 4pt)
if time != none {
terms(
terms.item(time.display("[hour]:[minute]"), body)
)
} else {
body
}
}
#let default-time-style(time) = {
show: pad.with(x: 2pt)
move(dy: -4pt, time.display("[hour]:[minute]"))
}
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compute/foundations-02.typ | typst | Other | // Test panic.
// Error: 7-9 panicked
#panic()
|
https://github.com/stalomeow/resume-template | https://raw.githubusercontent.com/stalomeow/resume-template/main/example/resume.typ | typst | MIT License | #import "../template.typ": *
#show: resume.with(
margin-top: 1cm,
margin-bottom: 1cm,
margin-left: 2cm,
margin-right: 2cm,
fonts: ("Noto Sans SC",),
text-size: 10pt,
name: "ChatGPT",
phone: "(+86) 123-4567-8910",
wechat: "chatgpt",
email: "<EMAIL>",
github: "chatgpt",
sites: ("chat.openai.com",),
anonymous: false,
)
#section("教育经历", icon: fa-graduation-cap)[
#education(
school: "北京大学",
major: "计算机科学与技术",
degree: "硕士研究生",
start-time: "2022-09",
end-time: "2024-06",
)
#education(
school: "北京大学",
major: "计算机科学与技术",
degree: "本科",
start-time: "2018-09",
end-time: "2022-06",
)
]
#section("实习经历", icon: fa-work)[
#work(company: "ABC 科技有限公司", job: "实习 Unity 开发工程师", start-time: "2023-07", end-time: "2023-09")[
- 参与公司 AR 游戏项目的开发和优化,负责核心功能模块的实现和性能优化。
- 优化了 AR 游戏的图像识别算法,将识别速度提升了 40%,用户体验得到了明显改善。
- 完成了游戏中新功能的开发,增加了用户互动性,使用户留存率提高了 20%。
]
]
#section("个人项目", icon: fa-code)[
#project("chatgpt/space-demo", time: "2023-06", github: true, bv: "BV00000")[
开发一款科幻冒险游戏,玩家扮演宇航员探索未知星球,面对外星生物和危险环境,利用科技装备求生存。
- 设计并实现游戏核心功能,包括角色控制、任务系统、武器装备等。
- 优化游戏性能,确保在各种设备上流畅运行。
]
#project("虚拟化学实验室", time: "2023-04")[
开发一款虚拟化学实验应用,让学生通过虚拟环境进行化学实验,提高实验操作技能和安全意识。
- 制定项目计划和开发路线图。
- 编写核心代码和实验模块。
- 负责项目演示和用户反馈收集。
]
]
#section("技能", icon: fa-wrench)[
- Unity 游戏开发:
- 熟练掌握 Unity 引擎,具备扎实的 C\# 编程能力。
- 能够独立完成游戏逻辑设计与开发,包括 UI 设计、游戏机制实现、碰撞检测等。
- 对 Unity 中的光照、动画、特效等功能有较深入的了解和应用经验。
- 跨平台开发:
- 熟悉将游戏移植到不同平台的过程和技巧,包括 PC、移动设备等。
- 熟练使用 Unity 的跨平台特性和工具,确保游戏在不同设备上的性能和稳定性。
- 人工智能与游戏:
- 在研究生阶段深入学习人工智能与游戏领域,了解游戏智能算法和技术应用。
- 团队协作:
- 在多个团队项目中担任开发者和协调者角色,具备良好的沟通和团队合作能力。
- 能够理解和分析项目需求,按时高效地完成工作并参与团队讨论与改进。
]
|
https://github.com/Myriad-Dreamin/typst.ts | https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/github-pages/docs/ir-features.dark.typ | typst | Apache License 2.0 |
#import "graphs.typ": ir-feature-graph
#set page(
height: auto, width: auto, margin: 0.5em // , fill: rgb(13, 17, 23)
)
#set text(fill: white)
#show link: underline
#figure(
ir-feature-graph(stroke-color: white, bg-color: rgb(13, 17, 23), light-theme: false),
caption: [Figure: Features of the #emph("Vector Format"). ],
numbering: none,
)
|
https://github.com/Area-53-Robotics/53E-Notebook-Over-Under-2023-2024 | https://raw.githubusercontent.com/Area-53-Robotics/53E-Notebook-Over-Under-2023-2024/giga-notebook/entries/decide-intake-type/entry.typ | typst | Creative Commons Attribution Share Alike 4.0 International | #import "/packages.typ": notebookinator
#import notebookinator: *
#import themes.radial.components: *
#show: create-body-entry.with(
title: "Decide: Intake Type",
type: "decide",
date: datetime(year: 2023, month: 8, day: 4),
author: "<NAME>",
witness: "Violet Ridge",
)
We rated each option on a scale of 1 to 5 for compression, and a scale of 1 to 3
for speed. The reason we rated compression so high is due to the triballs
irregular shape, we need an intake that can easily conform to their shape.
#decision-matrix(
properties: ((name: "Compression"), (name: "Speed")),
("Claw", 2, 1),
("Flex Wheels", 3, 3),
("Sprockets and Rubber Bands", 5, 3),
)
#admonition(
type: "decision",
)[
We choose the intake design with sprockets and rubber bands due to its speed,
and more importantly its very high compression.
]
= Final Design
#grid(
columns: (1fr, 1fr),
gutter: 20pt,
figure(caption: "Front View", image("./front.png")),
figure(caption: "Back View", image("./back.png")),
figure(caption: "Side View", image("./side.png")),
figure(caption: "Isometric View", image("./iso.png")),
)
#image("/assets/intakes/part-drawings/1.png")
#image("/assets/intakes/part-drawings/2.png")
#image("/assets/intakes/part-drawings/3.png")
|
https://github.com/jneug/schule-typst | https://raw.githubusercontent.com/jneug/schule-typst/main/src/util/util.typ | typst | MIT License |
#import "../theme.typ"
// ================================
// = General utilities =
// ================================
#let get-text(element, sep: "") = {
if type(element) == "content" {
if element.has("text") {
element.text
} else if element.has("children") {
element.children.map(text).join(sep)
} else if element.has("child") {
text(element.child)
} else if element.has("body") {
text(element.body)
} else {
""
}
} else if type(element) in ("array", "dictionary") {
return ""
} else {
str(element)
}
}
#let get-text-color(background, light: theme.text.light, dark: theme.text.default, threshold: 62%) = {
if type(background) == gradient {
background = background.sample(50%)
}
if color.hsl(background).components(alpha: false).last() < threshold {
light
} else {
dark
}
}
#let extract-element(func, body, filter: it => true, all: false) = {
let _filter = it => return type(it) == content and it.func() == func and filter(it)
let elems = (body,)
if repr(body.func()) == "sequence" {
elems = body.children
}
let elem = elems.find(_filter)
if elem == none {
return (none, body)
} else {
return (
elem,
{
let found = false
for it in elems {
if not found and _filter(it) {
found = true
} else {
it
}
}
},
)
}
}
#let extract-title(body, level: 1, all: false) = {
let (title, rest) = extract-element(heading, body, filter: it => it.at("depth", default: 0) == level)
return (
if title != none {
title.body
} else {
none
},
rest,
)
}
/// Use in a #cmd-[show] rule to fix the decimal separator in math
/// mode from a dot to a comma.
/// - body (content): Body of the document.
/// -> content
#let decimal-fix(body) = {
show math.equation: it => {
show regex("\d+\.\d+"): it => {
show ".": {
"," + h(0pt)
}
it
}
it
}
body
}
/// Auto-joins items with #arg[sep] and #[last]. If #arg[items]
/// is an array, the items are joined, otherwise the value
/// is returened as is.
/// - items (any): The items to join.
/// - sep (content): The separator for joining.
/// - last (content): The last separator to use.
/// -> content
#let auto-join(items, sep: ", ", last: " and ") = if type(items) == "array" {
items.map(str).join(", ", last: " und ")
} else {
items
}
/// Creates an #cmd-[enum] from the supplied #sarg[args].
/// If only one argument is given, the content is shown as is,
/// without wrapping it in an enum.
/// - ..args (content): Content lines.
/// - func (function): A content function to wrap multiple lines
/// in (like #cmd-[enum] or #cmd-[list]).
/// -> content
#let auto-enum(..args, func: enum) = {
let items = args.pos().flatten()
if items.len() > 1 {
func(..args.named(), ..items)
} else if items.len() > 0 {
items.first()
}
}
/// Clamps a value between `min` and `max`.
///
/// In contrast to @cmd-(module:"math")[clamp] this function works for other values
/// than numbers, as long as they are comparable.
/// #codesnippet[```typ
/// // text-size = math.clamp(0.8em, 1.2em, text-size) // fails
/// text-size = util.clamp(0.8em, 1.2em, text-size) // works
/// ```]
///
/// // Tests
/// #test(
/// `util.clamp(0, 100, 50) == 50`,
/// `util.clamp(33%, 99%, 100%) == 99%`,
/// `util.clamp(-5in, 8in, -6in) == -5in`,
/// `util.clamp(-5in, 8in, 4in) == 4in`,
/// )
///
/// - min (integer, float, length, relative length, fraction, ratio): Minimum for `value`.
/// - min (integer, float, length, relative length, fraction, ratio): Maximum for `value`.
/// - value (integer, float, length, relative length, fraction, ratio): The value to clamp.
/// -> any
#let clamp(min, max, value) = {
assert.eq(type(min), type(max), message: "Can't clamp values of different types!")
assert.eq(type(min), type(value), message: "Can't clamp values of different types!")
if value < min {
return min
} else if value > max {
return max
} else {
return value
}
}
/// Alias for #cmd[raw] with #arg(block: false) set.
#let rawi = raw.with(block: false)
/// Positions #arg[body] in the margin of the page
/// #example[```
/// #marginnote(gutter:1cm, offset:-5pt)[Hallo\ Welt]
/// ```]
///
/// - position (alignment): Either #value(left) or #value(right).
/// - gutter (length): Gutter between text and note.
/// - offset (length): How much to offset the note along the y-axis.
/// - body (content): Content of the note.
/// -> content
#let marginnote(position: left, gutter: .5em, offset: 0pt, body) = {
context {
let _m = measure(body)
if position.x == right {
place(position, dx: gutter + _m.width, dy: offset, body)
} else {
place(position, dx: -1 * gutter - _m.width, dy: offset, body)
}
}
}
#let combine-ranges(
numbers,
sep: ", ",
last: " and ",
range-sep: [#h(.2em)--#h(.2em)],
max-items: 2,
) = {
let numbers = numbers.dedup().sorted()
let ranges = (
(
numbers.first(),
numbers.first(),
),
)
for j in numbers.slice(1) {
if j == ranges.last().last() + 1 {
ranges.last().at(1) = j
} else {
ranges.push((j, j))
}
}
ranges.map(((from, to)) => if from == to [#from] else if to - from < max-items {
range(from, to + 1).map(str)
} else [#from#range-sep#to]).flatten().join(
sep,
last: last,
)
}
#let inset-at(direction, inset, default: 0pt) = {
direction = repr(direction) // allows use of alignment values
if type(inset) == "dictionary" {
if direction in inset {
return inset.at(direction)
} else if direction in ("left", "right") and "x" in inset {
return inset.x
} else if direction in ("top", "bottom") and "y" in inset {
return inset.y
} else if "rest" in inset {
return inset.rest
} else {
return default
}
} else if inset == none {
return default
} else {
return inset
}
}
|
https://github.com/samchouse/dawson | https://raw.githubusercontent.com/samchouse/dawson/main/midterm/1/solutions.typ | typst | #set page(
numbering: "1 / 1",
header: context {
if counter(page).get().first() == 1 {
align(right)[By: <NAME> and <NAME>]
}
}
)
#align(center)[= Computer Science Mock Midterm Solutions]
+ d)
+ d)
+ f)
+ e)
+ #block(
fill: luma(230),
width: 100%,
inset: 8pt,
radius: 2pt,
[
a = len(x) != len(y) and x != z and bool(y and not True)\
=> a = (len("SF1") != len("Dawson")) and ("SF1" != "SF1Dawson") and bool("Dawson" and not True)\
=> a = (3 != 6) and True and bool("Dawson" and False)\
=> a = True and True and False\
=> a = True and False\
=> a = False
]
)
+ #block(
fill: luma(230),
width: 100%,
inset: 8pt,
radius: 2pt,
[
a = int(t / u) == t and u != t\
=> a = int(3 / 0.8) == 3 and 0.8 != 3\
=> a = (int(3.75) == 3) and (0.8 != 3)\
=> a = (3 == 3) and (0.8 != 3)\
=> a = True and True\
=> a = True
]
)
+ #block(
fill: luma(230),
width: 100%,
inset: 8pt,
radius: 2pt,
[
a = salutation(h, c) + w\
=> a = salutation("Hello", ", ") + "World!"\
=> a = (x = "Hello"; y = ", "; x + y;) + "World!"\
=> a = ("Hello" + ", ") + "World!"\
=> a = "Hello, " + "World!"\
=> a = "Hello, World!"
]
)
+ #block(
fill: luma(230),
width: 100%,
inset: 8pt,
radius: 2pt,
[
```py
def monthly_salary(hourly_wage: float, discount: float) -> float:
base_monthly_salary = hourly_wage * 8 * 22
return round(base_monthly_salary - (base_monthly_salary * (discount / 100)), 2)
```
]
)
+ #block(
fill: luma(230),
width: 100%,
inset: 8pt,
radius: 2pt,
[
```py
def should_upgrade(bank_balance: float, age: float, have_job: bool) -> bool:
return (bank_balance - 1500) > 2000 and age > 4 and have_job
```
]
)
+ #block(
fill: luma(230),
width: 100%,
inset: 8pt,
radius: 2pt,
[
```py
def calc_resultant(vec1: list[int], vec2: list[int], vec3: list[int]) -> list[int]:
def sum_at_index(index: int) -> int:
return vec1[index] + vec2[index] + vec3[index]
x_comp = sum_at_index(0)
y_comp = sum_at_index(1)
return [x_comp, y_comp]
```
]
)
#set enum(numbering: "Bonus 1.")
+ #block(
fill: luma(230),
width: 100%,
inset: 8pt,
radius: 2pt,
[
```py
import math
def calc_magnitude(resultant: list[int]) -> float:
return math.sqrt(resultant[0] ** 2 + resultant[1] ** 2)
```
]
)
+ #block(
fill: luma(230),
width: 100%,
inset: 8pt,
radius: 2pt,
[
```py
import math
def calc_angle(resultant: list[int]) -> float:
x = resultant[0]
y = resultant[1]
theta = math.degrees(math.atan(abs(y) / abs(x)))
if x < 0 and y >= 0:
theta = 180 - theta
elif x < 0 and y < 0:
theta += 180
elif x >= 0 and y < 0:
theta = 360 - theta
return theta
```
]
)
+ Arch Linux
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-11FB0.typ | typst | Apache License 2.0 | #let data = (
("LISU LETTER YHA", "Lo", 0),
)
|
https://github.com/tiankaima/typst-notes | https://raw.githubusercontent.com/tiankaima/typst-notes/master/15e640-twitter_post_sqrt_root_limit/main.typ | typst | Notes on this post:
$
1=lim_(n->0)sqrt(n+sqrt(n+sqrt(n+ ...)))
$
Some argues it's $0$ or $1$, that's simply because the form is ill-defined and the post is misleading intentionally, here's another way to look at this:
define $f_m$ as a series of functions:
$
f_m (x) = underbrace(sqrt(x+sqrt(x+sqrt(x+...))), "m times")
$
In this form we have two ways to write the original limit:
$
lim_(n->0) lim_(m->oo) f_m (n)
$
or
$
lim_(m->oo) lim_(n->0) f_m (n)
$
Limits are not interchangeable by default, and orders matter, when $display(lim_(n->0) f_m (n))$ comes first it's undoubtedly $0$, when $display(lim_(m->oo) f_m (n))$ comes first it's $1$.
Here's how $f_m (x)$ looks like with $m=1,2,3,4,5$: (Credit: WolframAlpha)
#image("image.png")
I hope this clears things up. |
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/cetz/0.2.0/src/lib/tree.typ | typst | Apache License 2.0 | // CeTZ Library for Layouting Tree-Nodes
#import "/src/util.typ"
#import "/src/draw.typ"
#import "/src/coordinate.typ"
#import "/src/vector.typ"
#import "/src/matrix.typ"
#import "/src/process.typ"
#import "/src/anchor.typ" as anchor_
#let typst-content = content
// Default edge draw callback
//
// - from (string): Source element name
// - to (string): Target element name
// - parent (node): Parent (source) tree node
// - child (node): Child (target) tree node
#let default-draw-edge(from, to, parent, child) = {
draw.line(from, to)
}
// Default node draw callback
//
// - node (node): The node to draw
#let default-draw-node(node, _) = {
let text = if type(node) in (content, str, int, float) {
[#node]
} else if type(node) == dictionary {
node.content
}
draw.get-ctx(ctx => {
draw.content((), text)
})
}
/// Lays out and renders tree nodes.
///
/// For each node, the `tree` function creates an anchor of the format
/// `"node-<depth>-<child-index>"` that can be used to query a nodes position
/// on the canvas. <tree-node-name>
///
/// #example(```
/// import cetz.tree
/// set-style(content: (padding: .1))
/// tree.tree(([Root], ([A], [A.A], [A.B]), ([B], [B.A])))
/// ```)
///
/// = parameters
///
/// ==== Nodes
///
/// A tree node is an array consisting of the nodes value at index $0$ followed by its child nodes. For the default `draw-node` function, the value (first item) of an node must be of type `<content>`.
///
/// *Example of a list of nodes:*
/// #example(```
/// cetz.tree.tree(([A], ([B], ([C], ([D],)))), direction: "right")
/// ```)
///
/// *Example of a tree of nodes:*
/// #example(```
/// cetz.tree.tree(([A], ([B], [C]), ([D], [E])), direction: "right")
/// ```)
///
/// ==== Drawing and Styling Tree Nodes
///
/// The @@tree() function takes an optional `draw-node:` and `draw-edge:`
/// callback function that can be used to customice node and edge drawing.
///
/// The `draw-node` function must take the current node and its parents node
/// anchor as arguments and return one or more elements.
///
/// For drawing edges between nodes, the `draw-edge` function must take two
/// node anchors and the target node as arguments and return one or more elements.
///
/// #example(```
/// import cetz.tree
/// set-style(content: (padding: .1))
/// let data = ([\*], ([A], [A.A], [A.B]), ([B], [B.A]))
/// tree.tree(
/// data,
/// direction: "right",
/// draw-node: (node, ..) => {
/// circle((), radius: .35, fill: blue, stroke: none)
/// content((), text(white, [#node.content]))
/// },
/// draw-edge: (from, to, ..) => {
/// let (a, b) = (from + ".center", to + ".center")
/// line((a: a, b: b, abs: true, number: .40),
/// (a: b, b: a, abs: true, number: .40))
/// }
/// )
/// ```)
/// - root (array): A nested array of content that describes the structure the tree should take. Example: `([root], [child 1], ([child 2], [grandchild 1]))`
/// - draw-node (auto,function): The function to call to draw a node. The function will be passed two positional arguments, the node to draw and the node's parent, and is expected to return elements (`(node, parent-node) => elements`). The node's position is accessible through the "center" anchor or by using the previous position coordinate `()`.
/// If `auto` is given, just the node's value will be drawn as content.
/// The following predefined styles can be used:
/// - draw-edge (none,auto,function): The function to call draw an edge between two nodes. The function will be passed the name of the starting node, the name of the ending node, and the end node and is expected to return elements (`(source-name, target-name, target-node) => elements`). If `auto` is given, a straight line will be drawn between nodes.
/// - direction (string): A string describing the direction the tree should grow in ("up", "down", "left", "right")
/// - parent-position (string): Positioning of parent nodes (begin, center, end)
/// - grow (float): Depth grow factor
/// - spread (float): Sibling spread factor
/// - name (none,string): The tree elements name
#let tree(
root,
draw-node: auto,
draw-edge: auto,
direction: "down",
parent-position: "center",
grow: 1,
spread: 1,
name: none
) = {
assert(parent-position in ("begin", "center","end", "after-end"))
assert(grow > 0)
assert(spread > 0)
direction = (
up: "north",
down: "south",
right: "east",
left: "west"
).at(direction)
if draw-edge == auto {
draw-edge = default-draw-edge
} else if draw-edge == none {
draw-edge = (..) => ()
}
if draw-node == auto {
draw-node = default-draw-node
}
assert(draw-node != none, message: "Node draw callback must be set!")
let build-node(tree, depth: 0, sibling: 0) = {
let children = ()
let content = none
if type(tree) == array {
children = tree.slice(1).enumerate().map(
((n, c)) => build-node(c, depth: depth + 1, sibling: n)
)
content = tree.at(0)
} else {
content = tree
}
return (
x: 0,
y: depth * grow,
n: sibling,
depth: depth,
children: children,
content: content
)
}
// Layout node recursive
//
// return:
// (node, left-x, right-x)
let layout-node(node, shift-x) = {
if node.children.len() == 0 {
node.x = shift-x
return (node, node.x, node.x)
} else {
let (min-x, max-x) = (none, none)
let (left, right) = (none, none)
let n-children = node.children.len()
for i in range(0, n-children) {
let child = node.children.at(i)
let (child-min-x, child-max-x) = (none, none)
(child, child-min-x, child-max-x) = layout-node(child, shift-x)
node.children.at(i) = child
left = util.min(child.x, left)
right = util.max(child.x, right)
min-x = util.min(min-x, child-min-x)
max-x = util.max(max-x, child-max-x)
shift-x = child-max-x + spread
}
if parent-position == "begin" {
node.x = left
} else if parent-position == "center" {
node.x = left + (right - left) / 2
} else if parent-position == "end" {
node.x = right
} else { //after-end
node.x = right+spread
max-x = max-x + spread
}
node.direct-min-x = left
node.direct-max-x = right
node.min-x = min-x
node.max-x = max-x
return (node, min-x, max-x)
}
}
let node-position(node) = {
if direction == "south" {
return (node.x, -node.y)
} else if direction == "north" {
return (node.x, node.y)
} else if direction == "west" {
return (-node.y, node.x)
} else if direction == "east" {
return (node.y, node.x)
} else {
panic(message: "Invalid tree direction.")
}
}
let anchors(node, parent-path) = {
if parent-path != none {
parent-path += "-"
} else {
parent-path = ""
}
let d = (:)
d.insert(parent-path + str(node.n), node-position(node))
for child in node.children {
d += anchors(child, parent-path + str(node.n))
}
return d
}
let build-element(node, parent-name) = {
let name = if parent-name != none {
parent-name + "-" + str(node.n)
} else {
"0"
}
// Render element
node.name = name
node.group-name = "g" + name
node.element = {
draw.anchor(node.name, node-position(node))
draw.group(name: node.group-name, {
draw.move-to(node-position(node))
draw.anchor("center", ())
draw-node(node, parent-name)
})
}
// Render children
node.children = node.children.map(c => build-element(c, name))
// Render edges
node.edges = if node.children != () {
draw.group({
for child in node.children {
draw-edge(node.group-name, child.group-name, node, child)
}
})
} else { () }
return node
}
let root = build-node(root)
let (nodes, ..) = layout-node(root, 0)
let node = build-element(nodes, none)
// Render node recursive
let render(node) = {
if node.element != none {
node.element
if "children" in node {
for child in node.children {
render(child)
}
}
node.edges
}
}
draw.group(name: name, render(node))
}
|
https://github.com/Gekkio/gb-ctr | https://raw.githubusercontent.com/Gekkio/gb-ctr/main/chapter/cartridges/mbc5.typ | typst | Creative Commons Attribution Share Alike 4.0 International | #import "../../common.typ": *
== MBC5 mapper chip
The majority of games for Game Boy Color use the MBC5 chip. MBC5 supports ROM sizes up to 64 Mbit (512 banks of #hex("4000") bytes), and RAM sizes up to 1 Mbit (16 banks of #hex("2000") bytes). The information in this section is based on my MBC5 research, and The Cycle-Accurate Game Boy Docs @tcagbd.
=== MBC5 registers
#reg-figure(
caption: [#hex-range("0000", "1FFF") - RAMG - MBC5 RAM gate register]
)[
#reg-table(
[W-0], [W-0], [W-0], [W-0], [W-0], [W-0], [W-0], [W-0],
table.cell(colspan: 8)[RAMG\<7:0\>],
[bit 7], [6], [5], [4], [3], [2], [1], [bit 0]
)
#set align(left)
#grid(
columns: (auto, 1fr),
gutter: 1em,
[*bit 7-0*], [
*RAMG\<7:0\>*: RAM gate register\
#bin("00001010") = enable access to cartridge RAM\
All other values disable access to cartridge RAM
]
)
]
The 8-bit MBC5 RAMG register works in a similar manner as MBC1 RAMG, but it is a full 8-bit register so upper bits matter when writing to it. Only #bin("00001010") enables RAM access, and all other values (including #bin("10001010") for example) disable access to RAM.
When RAM access is disabled, all writes to the external RAM area #hex-range("A000", "BFFF") are ignored, and reads return undefined values. Pan Docs recommends disabling RAM when it's not being accessed to protect the contents @pandocs.
#speculation[
We don't know the physical implementation of RAMG, but it's certainly possible that the #bin("00001010") bit pattern check is done at write time and the register actually consists of just a single bit.
]
#reg-figure(
caption: [#hex-range("2000", "2FFF") - ROMB0 - MBC5 lower ROM bank register]
)[
#reg-table(
[W-0], [W-0], [W-0], [W-0], [W-0], [W-0], [W-0], [W-1],
table.cell(colspan: 8)[ROMB0\<7:0\>],
[bit 7], [6], [5], [4], [3], [2], [1], [bit 0]
)
#set align(left)
#grid(
columns: (auto, 1fr),
gutter: 1em,
[*bit 7-0*], [
*ROMB0\<7:0\>*: Lower ROM bank register
]
)
]
The 8-bit ROMB0 register is used as the lower 8 bits of the ROM bank number when the CPU accesses the #hex-range("4000", "7FFF") memory area.
#reg-figure(
caption: [#hex-range("3000", "3FFF") - ROMB1 - MBC5 upper ROM bank register]
)[
#reg-table(
[U], [U], [U], [U], [U], [U], [U], [W-0],
unimpl-bit(), unimpl-bit(), unimpl-bit(), unimpl-bit(), unimpl-bit(), unimpl-bit(), unimpl-bit(), [ROMB1],
[bit 7], [6], [5], [4], [3], [2], [1], [bit 0]
)
#set align(left)
#grid(
columns: (auto, 1fr),
gutter: 1em,
[*bit 7-1*], [*Unimplemented*: Ignored during writes],
[*bit 0*], [*ROMB1*: Upper ROM bank register],
)
]
The 1-bit ROMB1 register is used as the most significant bit (bit 9) of the ROM bank number when the CPU accesses the #hex-range("4000", "7FFF") memory area.
#reg-figure(
caption: [#hex-range("4000", "5FFF") - RAMB - MBC5 RAM bank register]
)[
#reg-table(
[U], [U], [U], [U], [W-0], [W-0], [W-0], [W-0],
unimpl-bit(), unimpl-bit(), unimpl-bit(), unimpl-bit(), table.cell(colspan: 4)[RAMB\<3:0\>],
[bit 7], [6], [5], [4], [3], [2], [1], [bit 0]
)
#set align(left)
#grid(
columns: (auto, 1fr),
gutter: 1em,
[*bit 7-4*], [*Unimplemented*: Ignored during writes],
[*bit 3-0*], [*RAMB\<3:0\>*: RAM bank register],
)
]
The 4-bit RAMB register is used as the RAM bank number when the CPU accesses the #hex-range("A000", "BFFF") memory area.
|
https://github.com/yhtq/Notes | https://raw.githubusercontent.com/yhtq/Notes/main/常微分方程/作业/2100012990 郭子荀 常微分方程 4.typ | typst | #import "../../template.typ": proof, note, corollary, lemma, theorem, definition, example, remark, proposition,der, partialDer, Spec
#import "../../template.typ": *
// Take a look at the file `template.typ` in the file panel
// to customize this template and discover how it works.
#show: note.with(
title: "作业4",
author: "YHTQ",
date: none,
logo: none,
withOutlined : false,
withTitle :false,
)
应交时间为 4月17日
#set heading(numbering: none)
= 4.1
== p133 1
利用 p27 习题3,对于任意 $x_0 in RR$, 该方程过 $(x_0,0 )$ 的解存在且唯一当且仅当题中瑕积分发散。反之,瑕积分收敛时,过该点存在除 $y = 0$ 之外的解,当然意味着 $y = 0$ 是奇解
== 2
不妨设 $f$ 可微\
根据奇解存在的必要条件,有 $y = sin x$ 满足:
$
cases(
y = x y' + f(y'),
0 = x + der(f(y'), y')
)
$
在二式中解出 $x$ 代回一式得:
$
y + der(f(y'), y') y' = f(y')\
sin x + der(f(cos x), cos x) cos x = f (cos x)
$
设 $f(cos x) = u cos x, der(f(cos x), cos x) = u + cos x der(u, cos x)$,有:
$
sin x + u cos x + cos^2 x der(u, cos x) = u cos x\
sin x + cos^2 x der(u, cos x) = 0\
dif u = (sin^2 x)/(cos^2 x) dif x = tan^2 x dif x\
u = integral tan^2 x dif x = integral 1/(cos^2 x) -1 dif x = integral 1/(cos^2 x) dif x - x = tan x - x + C\
f(cos x) = sin x - x cos x + C cos x\
dif f(cos x) = (cos x - cos x + x sin x - C sin x) dif x = (x sin x - C sin x) dif x
$
此外,原方程中二式还表明:
$
der(f(cos x), cos x) = -x\
dif f(cos x) = -x dif cos x = x sin x dif x \
$
从而原方程组有解 $y = sin x$ 当且仅当 $f(cos x) = integral x sin x dif x = sin x - x cos x $ \
为了验证它是否是充分条件,计算:
$
(partial^2(y - x y' -f(y')))/(partial (y')^2)\
= - f''(y') = - der(x , cos x) = -1/(sin x)
$
显然只要定义域合理(分母不为零)便有上式非零,而 $(partial F)/(partial y) = 1 != 0$,由充分条件知 $sin x != 0$,便有奇解成立
例如,可取 $f(y') = sqrt(1-y'^2) - y' arccos y', y' in [-1, 1]$,当然 $y = sin x$ 只能是其在区间 $[0, pi]$ 上的解(当然,不难看出 $y = sin x$ 不可能是原方程在无穷区间上的解),在上面的计算中发现至少在 $(0, pi)$ 上奇解成立\
更进一步,在 $0$ 处奇解也是成立的。事实上,原方程化为:
$
y = x y' + sqrt(1-y'^2) - y' arccos y'\
$
不难看出过点 $(0, 0)$,可以构造出不同于 $y = sin x$ 的解:
$
y = x - arccos 1 = x
$
但在 $pi$ 处无法实现,因此对于给定的函数 $f, y = sin x$ 至少是 $[0, pi)$ 区间上的奇解
= 4.3
== p139 1
$
y'^4 - y'^3 - y^2 y' + y^2 = 0\
(1-y') y^2 = y'^3(1-y')
$
除去 $y' = 1$,化简得:
$
y^2 = y'^3\
y' = (y)^(2/3)\
$
再除去 $y = 0$,得:
$
y^(-2/3) y' = 1\
3 y^(1/3) = x + C\
y = ((x + C)/3)^3
$
这就是所求通积分 $v(x, y, C) = y - ((x + C)/3)^3$
为了计算包络,计算:
$
partialDer(v, c) = -((x+c)/3)^2 = 0
$
化简得 $y = 0$,不难验证 $y = 0$ 是包络(如果不考虑 $y' = 1$ 对应的一族解,如果考虑的话不存在包络)
== 2
=== 1
由定义,就是求曲线族的包络,联立:
$
cases(
y = c x^2 - c^2,
0 = x^2 - 2 c
)
$
得 $c = x^2/2$ 代入 $1$ 式,得:
$
y = 2 c^2 - c^2 = c^2 = x^4/4
$
曲线 $y = x^4/4$ 交 $y = c x^2 - c^2$ 于 $plus.minus sqrt(2 c)$,并且可以验证都是相切的,因此确实是包络
=== 3
联立:
$
cases(
y = c(x-c)^2,
0 = (x-c)^2 + 2c (c-x)
)
$
计算得 $x - c = 0 或 3 c, y = 0 或 4 c^3$\
容易验证两个解都满足非退化条件,都是包络
== 3
满足条件的直线为:
$
x/c + c y/a = 1\
x/c - c y/a = 1\
$
所求曲线是上述直线族的包络。联立:
$
cases(
x/c + c epsilon y/(2 a^2) = 1,
-x^2/c + epsilon y/(2 a^2) = 0
)
$
得 $c = (2 a^2 x^2)/(epsilon y)$,代入:
$
(epsilon y)/(2 a^2 x) + x^2 = 1\
y = 1/epsilon 2 a^2 x (1- x^2)
$
其中 $epsilon = plus.minus 1$,因此:
$
y = plus.minus 2 a^2 x (1- x^2)
$
满足要求,当然两条曲线的任意拼接以及与直线的拼接也都符合要求,但应当去掉导数为 $0$ 的点
= 4.8
== p151 3
任取点列使得
$
y_n -> y_0, y_n > y_0\
$
注意到有积分方程:
$
phi(x, x_0, y_n) = y_n + integral_(x_0)^(x) f(s, phi(s, x_0, y_n)) dif s
$
因此这族关于 $x$ 的函数等度连续一致有界\
只需证明它的所有一致收敛子列都收敛于 $phi(x, x_0, y_0)$ 不妨就设本身一致收敛,两边取极限将有:
$
lim_(n -> infinity) phi(x, x_0, y_n) = y_0 + integral_(x_0)^(x) f(s, lim_(n -> infinity) phi(x, x_0, y_n)) dif s
$
(还要利用 $f$ 一致连续/有界)\
表明 $lim_(n -> infinity) phi(x, x_0, y_n)$ 是一个过 $(x_0, y_0)$ 的解,继而:
$
lim_(n -> infinity) phi(x, x_0, y_n) <= phi(x, x_0, y_0)
$
另一方面,断言 $epsilon$ 充分小时,有 $phi(x, x_0, y_0 + epsilon) >= phi(x, x_0, y_0)$,从而上式只能取等\
事实上,设 $phi, psi$ 分别是微分方程过 $(x_0, y_0), (x_0, y_0 + epsilon)$ 的最大解,显然至少在 $x_0$ 的附近 $psi > phi$,断言 $psi >= phi$,否则设在某一段出现 $psi < phi$ 立刻与 $psi$ 是最大解矛盾!
== p161 1
方程中 $f$ 满足局部 Lipschitz 条件,因此解的存在唯一性定理成立,设解为 $phi(x, eta)$ 将有:
$
phi(x, eta) = eta + integral_(0)^(x) sin (s phi(s, eta)) dif s
$
对 $eta$ 求导:
$
partialDer(phi, eta) = 1 + integral_(0)^(x) s cos (s phi(s, eta)) partialDer(phi, eta) dif s
$
设 $u = partialDer(phi, eta)$,有微分方程:
$
cases(
u' = x cos (x phi(x, eta)) u,
u(0) = 1
)
$
注意到 $u = 0$ 是微分方程 $ u' = x cos (x phi(s, eta)) u$ 在另一个初值的解,而这是线性方程,当然满足唯一性条件,因此不同初值的解不可能相交,结合 $u(0) = 1$ 知 $u > 0$,证毕
== 2
有积分方程:
$
y = y_0 + integral_(x_0)^(x) f(s, y) dif s
$
计算得:
$
partialDer(y, x_0) = - f(x_0, y_0) + integral_(x_0)^(x) partialDer(y, x_0) partialDer(f(s, y), y) dif s\
partialDer(y, y_0) = 1 + integral_(x_0)^(x) partialDer(y, y_0) partialDer(f(s, y), y) dif s
$
进而:
$
partialDer(y, x_0) + f(x_0, y_0) partialDer(y, y_0)\
= integral_(x_0)^(x) f(x_0, y_0) partialDer(y, x_0) partialDer(f(s, y), y) + partialDer(y, y_0) partialDer(f(s, y), y) dif s\
= integral_(x_0)^(x) (partialDer(y, x_0) + f(x_0, y_0) partialDer(y, y_0)) partialDer(f(s, y), y) dif s
$
设 $u = partialDer(y, x_0) + f(x_0, y_0) partialDer(y, y_0)$,上式表明:
$
u = integral_(x_0)^(x) u partialDer(f(s, y), y) dif s\
cases(
u' = u partialDer(f(x, y), y),
u(x_0) = 0
)\
$
显然该方程的唯一解是 $u = 0$,证毕 |
|
https://github.com/drupol/master-thesis | https://raw.githubusercontent.com/drupol/master-thesis/main/src/thesis/theme/UMONS-logo.typ | typst | Other | #import "./common/metadata.typ": *
#import "./colors.typ": *
#{
set text(font: "Liberation Sans")
set align(right)
set par(leading: 6pt)
{
set text(weight: 300)
upper[
#text(fill: umons-grey)[#underline(
offset: 4pt,
stroke: umons-red,
)[U]]#text(fill: umons-red)[mons]\
]
}
{
set text(size: .35em, weight: 250)
text(fill: umons-grey)[University of Mons]
}
}
|
https://github.com/curvenote/pubmatter | https://raw.githubusercontent.com/curvenote/pubmatter/main/pubmatter.typ | typst | MIT License | #import "@preview/scienceicons:0.0.6": orcid-icon, email-icon, open-access-icon, github-icon, cc-icon, cc-zero-icon, cc-by-icon, cc-nc-icon, cc-nd-icon, cc-sa-icon
#import "./validate-frontmatter.typ": load, show-citation
#let THEME = state("THEME", (color: blue.darken(20%), font: ""))
#let with-theme(func) = context {
let theme = THEME.at(here())
func(theme)
}
/// Create a ORCID link with an ORCID logo
///
/// ```example
/// #pubmatter.orcid-link(orcid: "0000-0002-7859-8394")
/// ```
///
/// - orcid (str): Use an ORCID identifier with no URL, e.g. `0000-0000-0000-0000`
/// -> content
#let orcid-link(
orcid: none,
) = {
let orcid-green = rgb("#AECD54")
if (orcid == none) { return orcid-icon(color: orcid-green) }
return link("https://orcid.org/" + orcid, orcid-icon(color: orcid-green))
}
/// Create a DOI link
///
/// ```example
/// #pubmatter.doi-link(doi: "10.1190/tle35080703.1")
/// ```
///
/// - doi (str): Only include the DOI identifier, not the URL
/// -> content
#let doi-link(doi: none) = {
if (doi == none) { return none }
// Proper practices are to show the whole DOI link in text
return link("https://doi.org/" + doi, "https://doi.org/" + doi)
}
/// Create a mailto link with an email icon
///
/// ```example
/// #pubmatter.email-link(email: "<EMAIL>")
/// ```
///
/// - email (str): Email as a string
/// -> content
#let email-link(email: none) = {
if (email == none) { return none }
return link("mailto:" + email, email-icon(color: gray))
}
/// Create a link to Wikipedia with an OpenAccess icon.
///
/// ```example
/// #pubmatter.open-access-link()
/// ```
///
/// -> content
#let open-access-link() = {
let orange = rgb("#E78935")
return link("https://en.wikipedia.org/wiki/Open_access", open-access-icon(color: orange))
}
/// Create a link to a GitHub profile with the GitHub icon.
///
/// ```example
/// #pubmatter.github-link(github: "rowanc1")
/// ```
///
/// - github (str): GitHub username (no `@`)
/// -> content
#let github-link(github: none) = {
return link("https://github.com/" + github, github-icon())
}
/// Create a spaced content array separated with a `spacer`.
///
/// The default spacer is ` | `, and undefined elements are removed.
///
/// ```example
/// #pubmatter.show-spaced-content(("Hello", "There"))
/// ```
///
/// - spacer (content): How to join the content
/// - content (array): The various things to going together
/// -> content
#let show-spaced-content(spacer: text(fill: gray)[#h(8pt) | #h(8pt)], content) = {
content.filter(h => h != none and h != "").join(spacer)
}
/// Show license badge
///
/// Works for creative common license and other license.
///
/// ```example
/// #pubmatter.show-license-badge(pubmatter.load((license: "CC0")))
/// ```
///
/// ```example
/// #pubmatter.show-license-badge(pubmatter.load((license: "CC-BY-4.0")))
/// ```
///
/// ```example
/// #pubmatter.show-license-badge(pubmatter.load((license: "CC-BY-NC-4.0")))
/// ```
///
/// ```example
/// #pubmatter.show-license-badge(pubmatter.load((license: "CC-BY-NC-ND-4.0")))
/// ```
///
/// - fm (fm): The frontmatter object
/// -> content
#let show-license-badge(color: black, fm) = {
let license = if ("license" in fm) { fm.license }
if (license == none) { return none }
if (license.id == "CC0-1.0") {
return link(license.url, [#cc-icon(color: color)#cc-zero-icon(color: color)])
}
if (license.id == "CC-BY-4.0") {
return link(license.url, [#cc-icon(color: color)#cc-by-icon(color: color)])
}
if (license.id == "CC-BY-NC-4.0") {
return link(license.url, [#cc-icon(color: color)#cc-by-icon(color: color)#cc-nc-icon(color: color)])
}
if (license.id == "CC-BY-NC-SA-4.0") {
return link(license.url, [#cc-icon(color: color)#cc-by-icon(color: color)#cc-nc-icon(color: color)])
}
if (license.id == "CC-BY-ND-4.0") {
return link(license.url, [#cc-icon(color: color)#cc-by-icon(color: color)#cc-nd-icon(color: color)])
}
if (license.id == "CC-BY-NC-ND-4.0") {
return link(license.url, [#cc-icon(color: color)#cc-by-icon(color: color)#cc-nc-icon(color: color)#cc-nd-icon(color: color)])
}
}
/// Show copyright
///
/// Function chose a short citation with the copyright year followed by the license text.
/// If the license is a Creative Commons License, additional explainer text is shown.
///
/// ```example
/// #pubmatter.show-copyright(fm)
/// ```
///
/// - fm (fm): The frontmatter object
/// -> content
#let show-copyright(fm) = {
let year = if (fm.date != none) { fm.date.display("[year]") }
let citation = show-citation(show-year: false, fm)
let license = if ("license" in fm) { fm.license }
if (license == none) {
return [Copyright © #{ year }
#citation#{if (fm.at("open-access", default: none) == true){[. This article is open-access.]}}]
}
return [Copyright © #{ year }
#citation.
This #{if (fm.at("open-access", default: none) == true){[is an open-access article]} else {[article is]}} distributed under the terms of the
#link(license.url, license.name) license#{
if (license.id == "CC-BY-4.0") {
[, which enables reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator]
} else if (license.id == "CC-BY-NC-4.0") {
[, which enables reusers to distribute, remix, adapt, and build upon the material in any medium or format for _noncommercial purposes only_, and only so long as attribution is given to the creator]
} else if (license.id == "CC-BY-NC-SA-4.0") {
[, which enables reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. If you remix, adapt, or build upon the material, you must license the modified material under identical terms]
} else if (license.id == "CC-BY-ND-4.0") {
[, which enables reusers to copy and distribute the material in any medium or format in _unadapted form only_, and only so long as attribution is given to the creator]
} else if (license.id == "CC-BY-NC-ND-4.0") {
[, which enables reusers to copy and distribute the material in any medium or format in _unadapted form only_, for _noncommercial purposes only_, and only so long as attribution is given to the creator]
}
}.]
}
/// Show authors
///
/// ```example
/// #pubmatter.show-authors(authors)
/// ```
///
/// - size (length): Size of the author text
/// - weight (weight): Weight of the author text
/// - show-affiliations (boolean): Show affiliations text
/// - show-orcid (boolean): Show orcid logo
/// - show-email (boolean): Show email logo
/// - show-github (boolean): Show github logo
/// - authors (fm, array): The frontmatter object or authors directly
/// -> content
#let show-authors(
size: 10pt,
weight: "semibold",
show-affiliations: true,
show-orcid: true,
show-email: true,
show-github: true,
authors,
) = {
// Allow to pass frontmatter as well
let authors = if (type(authors) == dictionary and "authors" in authors) {authors.authors} else { authors }
if authors.len() == 0 { return none }
return box(inset: (top: 10pt, bottom: 5pt), width: 100%, {
with-theme((theme) => {
set text(size, font: theme.font)
authors.map(author => {
text(size, font: theme.font, weight: weight, author.name)
if (show-affiliations and "affiliations" in author) {
text(size: 2.5pt, [~]) // Ensure this is not a linebreak
if (type(author.affiliations) == str) {
super(author.affiliations)
} else if (type(author.affiliations) == array) {
super(author.affiliations.map((affiliation) => str(affiliation.index)).join(","))
}
}
if (show-orcid and "orcid" in author) {
orcid-link(orcid: author.orcid)
}
if (show-github and "github" in author) {
github-link(github: author.github)
}
if (show-email and "email" in author) {
email-link(email: author.email)
}
}).join(", ", last: ", and ")
})
})
}
/// Show affiliations
///
/// ```example
/// #pubmatter.show-affiliations(affiliations)
/// ```
///
/// - size (length): Size of the affiliations text
/// - fill (color): Color of of the affiliations text
/// - affiliations (fm, array): The frontmatter object or affiliations directly
/// -> content
#let show-affiliations(size: 8pt, fill: gray.darken(50%), affiliations) = {
// Allow to pass frontmatter as well
let affiliations = if (type(affiliations) == dictionary and "affiliations" in affiliations) {affiliations.affiliations} else { affiliations }
if affiliations.len() == 0 { return none }
return box(inset: (bottom: 9pt), width: 100%, {
with-theme((theme) => {
set text(size, font: theme.font, fill: fill)
affiliations.map(affiliation => {
super(str(affiliation.index))
text(size: 2.5pt, [~]) // Ensure this is not a linebreak
if ("name" in affiliation) {
affiliation.name
} else if ("institution" in affiliation) {
affiliation.institution
}
}).join(", ")
})
})
}
/// Show author block, including author, icon links (e.g. ORCID, email, etc.) and affiliations
///
/// ```example
/// #pubmatter.show-author-block(fm)
/// ```
///
/// - fm (fm): The frontmatter object
/// -> content
#let show-author-block(fm) = {
show-authors(fm)
show-affiliations(fm)
}
/// Show title and subtitle
///
/// ```example
/// #pubmatter.show-title(fm)
/// ```
///
/// - fm (fm): The frontmatter object
/// -> content
#let show-title(fm) = {
with-theme(theme => {
set text(font: theme.font)
let title = if (type(fm) == dictionary and "title" in fm) {fm.title} else if (type(fm) == str or type(fm) == content) { fm } else { none }
let subtitle = if (type(fm) == dictionary and "subtitle" in fm) {fm.subtitle} else { none }
if (title != none) {
box(inset: (bottom: 2pt), width: 100%, text(17pt, weight: "bold", fill: theme.color, title))
}
if (subtitle != none) {
parbreak()
box(width: 100%, text(12pt, fill: gray.darken(30%), subtitle))
}
})
}
/// Show title block - title, authors and affiliations
///
/// ```example
/// #pubmatter.show-title-block(fm)
/// ```
///
/// - fm (fm): The frontmatter object
/// -> content
#let show-title-block(fm) = {
with-theme(theme => {
show-title(fm)
show-author-block(fm)
})
}
/// Show page footer
///
/// Default is the venue, date and page numbers
///
/// ```example
/// #pubmatter.show-page-footer(fm)
/// ```
///
/// - fm (fm): The frontmatter object
/// -> content
#let show-page-footer(fm) = {
return block(
width: 100%,
stroke: (top: 1pt + gray),
inset: (top: 8pt, right: 2pt),
with-theme((theme) => [
#set text(font: theme.font)
#grid(columns: (75%, 25%),
align(left, text(size: 9pt, fill: gray.darken(50%),
show-spaced-content((
if("venue" in fm) {emph(fm.venue)},
if("date" in fm and fm.date != none) {fm.date.display("[month repr:long] [day], [year]")}
))
)),
align(right)[
#text(
size: 9pt, fill: gray.darken(50%)
)[
#counter(page).display() of #{context {counter(page).final(here()).first()}}
]
]
)
])
)
}
/// Show page header
///
/// Default an open-access badge and the DOI and then the running-title and citation
///
/// ```example
/// #pubmatter.show-page-header(theme: (font: "Noto Sans"), fm)
/// ```
///
/// - fm (fm): The frontmatter object
/// -> content
#let show-page-header(fm) = context {
let loc = here()
if(loc.page() == 1) {
let headers = (
if ("open-access" in fm) {[#smallcaps[Open Access] #open-access-link()]},
if ("doi" in fm) { link("https://doi.org/" + fm.doi, "https://doi.org/" + fm.doi)}
)
// TODO: There is a bug in the first page state update
// https://github.com/typst/typst/issues/2987
return with-theme((theme) => {
align(left, text(size: 8pt, font: theme.font, fill: gray, show-spaced-content(headers)))
})
} else {
return with-theme((theme) => {align(right, text(size: 8pt, font: theme.font, fill: gray.darken(50%),
show-spaced-content((
if ("short-title" in fm) { fm.short-title } else if ("title" in fm) { fm.title },
if ("citation" in fm) { fm.citation },
))
))
})
}
}
/// Show all abstracts (e.g. abstract, plain language summary)
///
/// ```example
/// #pubmatter.show-abstracts(fm)
/// ```
///
/// - fm (fm): The frontmatter object
/// -> content
#let show-abstracts(fm) = {
let abstracts
if (type(fm) == "content") {
abstracts = ((title: "Abstract", content: fm),)
} else if (type(fm) == dictionary and "abstracts" in fm) {
abstracts = fm.abstracts
} else {
return
}
with-theme((theme) => {
abstracts.map(abs => {
set text(font: theme.font)
text(fill: theme.color, weight: "semibold", size: 9pt, abs.title)
parbreak()
set par(justify: true)
text(size: 9pt, abs.content)
}).join(parbreak())
})
}
/// Show keywords
///
/// ```example
/// #pubmatter.show-keywords(fm)
/// ```
///
/// - fm (fm): The frontmatter object
/// -> content
#let show-keywords(fm) = {
let keywords
if (type(fm) == dictionary and "keywords" in fm) {
keywords = fm.keywords
} else {
return
}
if (keywords.len() > 0) {
with-theme((theme) => {
text(size: 9pt, font: theme.font, {
text(fill: theme.color, weight: "semibold", "Keywords")
h(8pt)
keywords.join(", ")
})
})
}
}
/// Show abstract-block including all abstracts and keywords
///
/// ```example
/// #pubmatter.show-abstract-block(fm)
/// ```
///
/// - fm (fm): The frontmatter object
/// -> content
#let show-abstract-block(fm) = {
box(inset: (top: 16pt, bottom: 16pt), stroke: (top: 0.5pt + gray.lighten(30%), bottom: 0.5pt + gray.lighten(30%)), show-abstracts(fm))
show-keywords(fm)
v(10pt)
}
|
https://github.com/herbhuang/utdallas-thesis-template-typst | https://raw.githubusercontent.com/herbhuang/utdallas-thesis-template-typst/main/content/proposal/schedule.typ | typst | MIT License | #import "/utils/todo.typ": TODO
= Schedule
#TODO[ // Remove this block
*Thesis Schedule*
- When will the thesis Start
- Create a rough plan for your thesis (separate the time in iterations with a length of 2-4 weeks)
- Each iteration should contain several smaller work items - Again keep it high-level and make to keep your plan realistic
- Make sure the work-items are measurable and deliverable, they should describe features that are vertically integrated
- Do not include thesis writing or presentation tasks
]
|
https://github.com/loreanvictor/master-thesis | https://raw.githubusercontent.com/loreanvictor/master-thesis/main/common/metadata.typ | typst | MIT License | // Enter your thesis data here:
#let titleEnglish = "Realtime Collaboration on Diagrams"
#let titleGerman = "Zusammenarbeit an Diagrammen in Echtzeit"
#let degree = "Masters"
#let program = "Informatics: Games Engineering"
#let supervisor = "Prof. Dr. <NAME>"
#let advisors = ()
#let author = "<NAME>"
#let startDate = "15/10/2023"
#let submissionDate = "15/04/2024"
#let presentationDate = datetime(year: 2024, month: 04, day: 11)
#let feedbacklogSubmissionDate = datetime(year: 2024, month: 09, day: 14)
|
https://github.com/OrangeX4/vscode-typst-sync | https://raw.githubusercontent.com/OrangeX4/vscode-typst-sync/main/README.md | markdown | MIT License | 
# Typst Sync
A tool for Typst local packages management and synchronization.
https://github.com/OrangeX4/vscode-typst-sync
## Features
- `Shift + Cmd/Ctrl + P` to open the command panel to execute the commands. Of course, you can also set shortcuts for these commands.
- **Typst Package Manage**: manage your typst local packages.
- `Import Typst Package` command: get text like `#import "@preview/example:0.1.0": *` and search local packages and preview packages.
- `Import Typst Local Package` command: get text like `#import "@local/mytemplate:0.1.0": *` and only search local packages.
- `Create Typst Local Package` command: create a typst local package.
- `Open Typst Local Package` command: open a typst local package entrypoint file to edit it.
- **Sync**: synchronize local packages to remote repository.
- You MUST to configure `syncRepo` setting to enable sync feature. **You can input a empty repo in github to init it.**
- `Typst Sync`: synchronize local packages with remote repository.
- `Push Typst Repo`: push to repository (actually consistent with `Typst Sync`).
- `Pull Typst Repo`: pull from repository only.
## Requirements
**You should have git installed locally for simple-git to work.**
## Extension Settings
This extension contributes the following settings:
- `vscode-typst-sync.syncRepo`: syncRepo like https://github.com/OrangeX4/typst-sync-repo.git
- `vscode-typst-sync.dataDir`: data-dir in https://github.com/typst/packages#local-packages
## Release Notes
### 0.2.0
- add command `Import Typst Package`
- add command `Import Typst Local Package`
- add command `Create Typst Local Package`
- add command `Open Typst Local Package`
- add command `Push Typst Repo`
- add command `Pull Typst Repo`
- add command `Typst Sync`
## License
This project is licensed under the MIT License.
|
https://github.com/jgm/typst-hs | https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/layout/enum-align-03.typ | typst | Other | // Test valid number align values (horizontal)
#set enum(number-align: start)
#set enum(number-align: end)
#set enum(number-align: left)
#set enum(number-align: right)
// Error: 25-28 alignment must be horizontal
#set enum(number-align: top)
|
https://github.com/dankelley/typst_templates | https://raw.githubusercontent.com/dankelley/typst_templates/main/te/0.0.1/te.typ | typst | MIT License | // https://typst.app/docs/tutorial/making-a-template/
// https://github.com/typst/packages/?tab=readme-ov-file#local-packages
#let conf(
course: none,
type: none,
date: none,
instructions: none,
doc,
) = {
set text(font: "Times Roman", size: 14pt)
set page("us-letter",
header: [
*#course*
#h(1fr)
*#type*
#h(1fr)
*#date*
])
text(12pt)[*Instructions.* #instructions]
show heading.where(level: 1): set text(font: "Times Roman", size: 14pt)
doc
}
|
https://github.com/Its-Alex/resume | https://raw.githubusercontent.com/Its-Alex/resume/master/resume_en.typ | typst | MIT License | #import "lib/template.typ": resume_template
#import "lib/profile.typ": profile
#import "lib/skills.typ": skills
#import "lib/languages.typ": languages
#import "lib/interests.typ": interests
#import "lib/experiences.typ": experiences
#import "lib/education.typ": education
#import "lib/volunteering.typ": volunteering
#let resume = yaml("datas/en.yaml")
#set page(margin: (x: 0pt, y: 0pt))
#set text(font: "Manjari", fill: rgb("3F3E40"))
#show link: link => {
text(weight: 600)[#underline()[#link]]
}
#show: resume_template.with(
profil_picture: "profil-picture.png",
translations: resume.translations,
profile: profile(resume.profile),
description: resume.description,
skills: skills(resume.translations.skills, resume.skills),
languages: languages(resume.translations.languages, resume.languages),
interests: interests(resume.translations.interests, resume.interests),
experiences: experiences(resume.translations.experiences, resume.experiences),
education: education(resume.translations.education, resume.education),
volunteering: volunteering(resume.translations.volunteering, resume.volunteering),
)
|
https://github.com/spherinder/ethz-infk-thesis | https://raw.githubusercontent.com/spherinder/ethz-infk-thesis/master/src/chapters/02-background.typ | typst | #import "../dependencies.typ": *
= Background
== Subsection
Here's a footnote #footnote("The plural form of 'footnote' is not 'feetnote'.")
#lorem(400)
== Another Subsection
#lorem(300)
|
|
https://github.com/jamesrswift/pixel-pipeline | https://raw.githubusercontent.com/jamesrswift/pixel-pipeline/main/docs/manual.typ | typst | The Unlicense | #import "@preview/springer-spaniel:0.1.0"
#show: springer-spaniel.template(
title: [PixelPipeline],
authors: (
(
name: "<NAME>",
institute: none,
address: "",
email: ""
),
// ... and so on
),
abstract: lorem(75),
// debug: true, // Highlights structural elements and links
// frame: 1pt, // A border around the page for white on white display
// printer-test: true, // Suitably placed CMYK printer tests
)
#show "PixelPipeline": raw("PixelPipeline")
#pagebreak()
#outline(indent: auto)
#pagebreak()
#include "chapters/package.typ"
= Pipeline Creation
See @fig:pipeline
#figure(
image("assets/images/Pipeline.svg"),
caption: lorem(25)
) <fig:pipeline>
== Primitives
#springer-spaniel.gentle-clues.warning[
PixelPipeline doesn't support cyclic anchors. Such would result in an infinite loop.
]
== Stages
== layers
= Pipeline Invocation |
https://github.com/Dav1com/minerva-report-fcfm | https://raw.githubusercontent.com/Dav1com/minerva-report-fcfm/master/lib/departamentos.typ | typst | MIT No Attribution | /// Descripción del Área de Humanidades
/// -> Departamento
#let adh = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Área de Humanidades"
),
logo: read.with("../logos/adh.svg")
)
/// Descripción del Departamento de Astronomía
/// -> Departamento
#let das = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Astronomía"
),
logo: read.with("../logos/das.svg")
)
/// Descripción del Departamento de Ciencias de la Computación
/// -> Departamento
#let dcc = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Ciencias de la Computación"
),
logo: read.with("../logos/dcc.svg")
)
/// Descripción del Departamento de Física
/// -> Departamento
#let dfi = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Física"
),
logo: read.with("../logos/dfi.svg")
)
/// Descripción del Departamento de Geofísica
/// -> Departamento
#let dgf = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Geofísica"
),
logo: read.with("../logos/dgf.svg")
)
/// Descripción del Departamento de Ingeniería Civil
/// -> Departmento
#let dic = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Ingeniería Civil"
),
logo: read.with("../logos/dic.svg")
)
/// Descripción del Departamento de Ingeniería Electrica
/// -> Departmento
#let die = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Ingeniería Eléctrica"
),
logo: read.with("../logos/die.svg")
)
/// Descripción del Departamento de Ingeniería Industrial
/// -> Departmento
#let dii = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Ingeniería Industrial"
),
logo: read.with("../logos/dii.svg")
)
/// Descripción del Departamento de Ingeniería Matemática
/// -> Departmento
#let dim = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Ingeniería Matemática"
),
logo: read.with("../logos/dim.svg")
)
/// Descripción del Departamento de Ingeniería Mecánica
/// -> Departmento
#let dimec = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Ingeniería Mecánica"
),
logo: read.with("../logos/dimec.svg")
)
/// Descripción del Departamento de Ingeniería de Minas
/// -> Departmento
#let dimin = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Ingeniería de Minas"
),
logo: read.with("../logos/dimin.svg")
)
/// Descripción del Departamento de Ingeniería Química, Biotecnología y Materiales
/// -> Departmento
#let diqbm = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Departamento de Ingeniería Química, Biotecnología y Materiales"
),
logo: read.with("../logos/diqbm.svg")
)
/// Descripción del Departamento de Geología
/// -> Departmento
#let geo = (
nombre: (
"Universidad de Chile",
"Facultad de Ciencias Físicas y Matemáticas",
"Deparamento de Geología"
),
logo: read.with("../logos/geo.svg")
)
/// Esta función permite obtener ayuda sobre cualquier función
/// del template. Para saber qué funciones y variables define
/// el template simplemente deja que el autocompletado te guíe,
/// luego puedes llamar esta función para obtener más ayuda.
///
/// - nombre (string): Puede ser el nombre de una función o
/// variable, entonces la función entrega
/// ayuda general sobre esta. Si se entrega
/// algo de la forma `"help(nombre)"` entonces
/// entrega ayuda específica sobre el argumento
/// `nombre`.
/// -> content
#let help(nombre) = {
import "../meta.typ": *
return help-leaf("departamentos")(nombre)
}
|
https://github.com/rossmeier/TDTS07 | https://raw.githubusercontent.com/rossmeier/TDTS07/master/report.typ | typst | #import "@preview/cetz:0.2.0"
#import "@local/juro-typst:0.0.0"
#show: juro-typst.juro-report.with(
author: "<NAME>",
title: "TDTS07 Lab Report",
date: datetime(year: 2024, month: 2, day: 20),
keywords: ("Lab", "TDTS07", "LiU")
)
// if life becomes to boring, lol :D
//#show heading: set text(fill: gradient.linear(..color.map.rainbow.map(c => {c.darken(30%)})))
#outline()
= Modeling and Simulation with SystemC <sec:systemc-traffic>
== System Model (`Controller`)
The modeled crossing is depicted in @system-overview and contains one lane per incoming car, with cars only ever passing the crossing straight without turning. As soon as coming from a side, the respective sensor is activated and stays active as long as cars are waiting on that side. If the light is green, one car per second will pass the crossing, ultimately cleaning out the queue if no new cars are arriving.
#figure(caption: "System overview")[
#cetz.canvas({
import cetz.draw: *
let quarter(angle, lable-anchor, name) = {
group({
rotate(angle * 1deg)
//grid((-5,-5),(5,5), stroke: (dash: "dashed", paint: gray))
line((1,0), (4,0), stroke: (dash: "loosely-dashed"))
line((1,1), (4,1))
line((1,-1), (4,-1))
rect((1.3,0.1), (3.9,0.9), name: "sensor")
rect((1,0), (1.2, 1), fill: black)
content("sensor", angle: calc.rem(angle, 180) * 1deg, raw("Sensor" + name))
rect((1.1, 1.1), (2.1, 1.6), name: "light")
circle((to: "light", rel:(-.2, 0)), radius: 0.15, stroke: red, fill: red)
circle((to: "light", rel:(.2, 0)), radius: 0.15, stroke: green)
line("light.north-east", (to: "light.north-east", rel: (.4,.2)), mark: (start: "stealth", fill: black))
content((rel: (0.1, 0)), anchor: lable-anchor, raw("Light"+name))
})
}
quarter(0, "west", "E")
quarter(90, "south", "N")
quarter(180, "east", "W")
quarter(270, "north", "S")
})
] <system-overview>
Each sensor is modeled as an input `sc_in<bool>` and each light as an output `sc_out<int>`, where the `int` is actually an instance of the enum `LightColor` in the code but `int` is used instead to work nicely with the included tracing of SystemC.
The controller generally operates by setting a direction's light to green as long as the respective sensor indicates that cars are waiting. To ensure safe operation, the controller maintains a `state` indicating which direction cars are allowed to pass. This can be `Idle`, `NorthSouth`, or `WestEast`. Each light will only be turned green if the controller is in the respective state. If a sensor turns on and the state is idle, the state will be changed to the required state by the respective light. If the state does not permit the light to turn green, a state change will be requested and scheduled by triggering an `sc_event` after a specified `max_wait_time`. When this event is triggered, a state change is forced by first turning all lights red and then changing the state to the opposite of the previous one. If all lights turn red because no more cars are detected, the controller either changes to the other state if it has been previously requested or returns to the `Idle` state.
== Traffic Generator (`Generator`)
There are 4 traffic generators in the simulation. Each one is connected to one side of the crossing. Accordingly, the generator provides the following interface:
- `sc_in<int> light` for getting the light signal's color relevant for the generator
- `sc_out<bool> sensor` for feeding back sensor data to the controller
- `sc_out<int> queue_length` for being able to trace and debug the internal queue length of the generator
The generator handles both the arrival of new cars, which occur at fixed times and are random with a likelihood depending on the generator's predefined `pressure` value
$ P_"arrival" = cases(
min(1.0, "pressure") &"if" "queue_length" = 0,
min(1.0, 1.0 - "queue_length"/"pressure") &"else",
) $
and the passing of cars over the crossing whenever the light signal is green. Both those events are evaluated and executed once a second with a $100 "ms"$ delay between both events to better visualize the events in the trace file.
== Testbench (`TestBench`)
The test bench contains an `sc_main` function that instantiates one `Controller` with one `Generator` per side. All relevant signals are initiated and connected to the entities as well as traced to an output file.
== Results
To verify functionality, the generator is tested with a scenario that visualizes all edge cases. To achieve reproducibility, the random generator is initialized with a fixed seed at the beginning of the simulation. To nicely reproduce edge cases, the following pressure values are used for the `Generator`s:
- North: 10
- South: 5
- West: 2
- East: 0.2
The high pressure for north and south cause a constant flow of cars from the respective directions, so both lights are essentially always green. The state is only changed after the timer for the `max_wait_time` has expired. The pressure from the west and east side is considerably lower so that sometimes the queues are emptied out and the state is changed back to `NorthSouth` before the timer's expiry. The very low pressure on the east side is additionally there to ensure that that individual light turns red when no cars are arriving anymore, even if the opposite light stays green.
@fig:traffic-plot shows the resulting traces. It can be seen that all rules and intended behaviors are fulfilled. Only opposing lights are green at the same time (value of 1 in the plot) and the periods get scheduled based on need with a maximum switching time of 10 seconds.
#figure(
caption: "Waveforms of the simulation run",
image("Lab1/trace.svg"),
) <fig:traffic-plot>
= Formal verification with UPPAAL
== Getting Started
- `E<> P.s3` is satisfied as there are clearly paths that lead to the `s3` state for the automation `P`.
- `A<> P.s3` is not satisfied. This is easier to understand using the equivalent `not E[] not P.s3` #footnote(link("https://docs.uppaal.org/language-reference/query-semantics/symb_queries/")) where it becomes quite apparent that `P` is not in `s3` on many occasions.
== Fischer
To adapt the example for higher values of `n`, the additional automats are just added to the system declarations. The query for verifying the mutex condition is generated using the python script shown in @lst:query-script.
// remove first line of code containing definition of N that is not relevant here
#let code = read("./Lab2/gen_cond.py").replace(regex("^.+?\n"), "")
#figure(
align(left, raw(code, lang: "python")),
caption: [ Python code for generating the queries for any $N$ ],
) <lst:query-script>
The results of the calculation times (measured with a smartphone stopwatch) are presented in @tab:fischer-runtimes. It is noteworthy that the first evaluation after altering the model is a substantially slower than subsequent evaluations. The times given are the ones of the subsequent evaluations.
#figure(
caption: [Runtimes of the query evaluation],
table(
columns: (auto, auto),
[$N$], [time (s)],
[8], [0.42],
[9], [0.65],
[10], [1.0],
[11], [3.0],
)
) <tab:fischer-runtimes>
Linear regression with the model $"time" (N) = a dot b^N$ leads to $a = 2.32 dot 10^(-3); b = 1.88$ and thus $"time"(12) = 4.6 "s "$.
When introducing a different minimum wait time $m != k$ for the transition from `wait` to `cs`, the mutex condition holds iff $m <= k$. This makes intuitive sense because it is required that all processes have a chance to detect the foreign lock before proceeding to the critical section.
== Traffic Light Controller
The controller consists of 5 different automata, where opposing directions use the same template. The different automata used are shown in @fig:traffic-automata which also shows that the automata for the different directions are structured nearly the same, with just the synchronization channels swapped.
#figure(
image("traffic_light.png", width: 80%),
caption: "Different automata used for modeling the traffic light controller"
) <fig:traffic-automata>
For simplicity in this model, different from the controller described in @sec:systemc-traffic, the controller does not automatically switch to the other direction if no cars are coming, but is governed by a fixed timer that alternates between both possible directions. The queries ensuring the correct operation of the controller are shown in @lst:traffic-queries although the last query exists similarly for every direction.
#figure(
[```
A[] not deadlock
A[] not ((LightN.Green or LightS.Green) and (LightW.Green or LightE.Green))
E<> LightN.Green
```],
caption: "Queries used to verify the model"
) <lst:traffic-queries>
// display image with fixed pt/pixel ratio
#let img-fixed(..args, scale: 100%) = {
let img = image(..args)
style(styles => {
image(..args, width: measure(img, styles).width * scale * 3)
})
}
== Alternating Bit Protocol
The send events (`s0`, `s1`, `sack0`, `sack1`) are modeled using synchronous channels that are written to be the sender/receiver and read by the channel. The corresponding receive events (`r0`, `r1`, `rack0`, `rack1`) are modeled as broadcast channels, as the receiver/sender might not always be able to receive them depending on their current state. To make the usage of clocks and synchronization channels more intuitive, the same logical state is partially split up in several states where all but the last are "urgent" so the synchronization still works correctly.
=== Sender
The sender automat depicted in @fig:bit-sender generates a new message after being idle for `gen = 3` time-units. Each new message is sent to the correct channel according to the current status of the sender. After sending, if the sender receives the correct ack message, it changes state and goes back to idle. If the timeout (default $5 "s "$) is reached or the wrong ack message is received, the transmission is attempted again.
#figure(
img-fixed("bit_sender.png", scale: 55%),
caption: "The sender",
) <fig:bit-sender>
=== Channel <sec:bit-channel>
The channel is depcited in @fig:bit-channel. It models a binary symmetrical channel for both the data as well as the ack messages. It receives any of the potential send events (`s0`, `s1`, `sack0`, `sack1`) and emits the corresponding or opposite receive event (`r0`, `r1`, `rack0`, `rack1`). The error probabilities are modeled by exploiting the fact that UPPAAL randomly chooses one of all possible edges every time. The modeled error probability $P_"error" = 1/4$ is achieved by having 3 "correct" edges and 1 "error" edge for each possible event.
#figure(
img-fixed("bit_channel.png", scale: 55%),
caption: "The channel",
) <fig:bit-channel>
=== Receiver
The receiver is depicted in @fig:bit-receiver. It works very similarly to the sender. When receiving the correct message for the current state, it emits the corresponding ack and changes state. When timing out or receiving the wrong message, it emits the opposite ack and goes back to waiting.
#figure(
img-fixed("bit_receiver.png", scale: 55%),
caption: "The receiver",
) <fig:bit-receiver>
=== Verification
The following queries are used to verify correct function
- `S.send0 --> R.recv0`: ensure that all messages sent by the sender are eventually received by the receiver. This passes with using an ideal channel, but does not pass using the binary symmetrical channel described in @sec:bit-channel.
- `R.recv0 --> C.ssack0` and `R.recv0 --> (C.rrack0 or C.rrack1)`: ensure that the receiver tries to acknoledge received messages either successfully or unsuccessfully (because of the lossy channel)
- `A[] not deadlock`: ensure that the whole system can not end up in a deadlock condition
= Design-space exploration with MPARM
== Energy Minimization
The results of the different parametrizations are shown in @tab:gsm-results. Run 1 uses the default settings of the simulator. After observing the very low cache miss rates, the cache sizes were reduced (eg. runs 2 and 3), resulting in run 2 for optimal cache sizes. Then the frequency divider was systematically increased (runs 4 to 6), with diminishing returns for a divisor larger than 6. Thus the result of run 6 is considered optimized and still contains considerable headroom to the $20 "ms"$ time limit as well.
#let results = (
([`-F0,1 --dt=4 --ds=12 --it=1 --is=13`], "45.03", "1.69", 1.13, 0.61,),
([`-F0,1 --dt=4 --ds=9 --it=1 --is=9`], "29.77", "1.74", 2.35, 0.92),
([`-F0,1 --dt=4 --ds=9 --it=1 --is=8`], "33.13", "1.94", 2.15, 2.62),
([`-F0,2 --dt=4 --ds=9 --it=1 --is=9`], "18.90", "3.58", 2.35, 0.92),
([`-F0,3 --dt=4 --ds=9 --it=1 --is=9`], "18.02", "5.10", 2.35, 0.92),
([`-F0,6 --dt=4 --ds=9 --it=1 --is=9`], "17.83", "9.68", 2.35, 0.92)
)
#figure(
table(columns: (auto, auto, auto, auto, auto, auto),
[Run], [Parameters], [Time], [Energy], [D-Miss], [I-Miss],
..results.enumerate(start:1).map(((i, arg)) => {
let (cmd, e, t, dmiss, imiss) = arg
(
str(i),
cmd,
str(t) + " ms",
str(e) + " µJ",
str(dmiss) + " %",
str(imiss) + " %",
)
}).flatten()
),
caption: "Results of the different runs",
) <tab:gsm-results>
== Concurrency Optimization
Without any further modifications, the most apparent difference between the two versions is that the shared version takes more time with $14.1 "ms"$ vs $10.0 "ms"$ with the queue version. However, the queue version has a higher relative bus occupation of $56.25%$ vs $44.89%$.
To reduce the shared version's bus occupation even further, the core clocks are lowered. This is done by setting the `-Fx,y` arguments for the simulation. Lowering all core clocks by a factor of two gives a lower bus occupation of only $23.57%$ but a total runtime of $26.3 "ms"$ which does not satisfy the time requirement of $20.0 "ms"$. Using one full-speed and two half speed cores results in bus usages of around $30%$. The configuration `-F0,2 -F1,2 -F2,1` is able to complete the decoding in $18.8 "ms"$ with a bus occupation of $31.78%$.
== Mapping/Scheduling
The critical path of the problem is $"T1" --> "T3" --> "T5"$. By executing this path immediately and without any unnecessary delays between tasks, an optimal schedule can be achieved. The most straightforward solution to this problem is to execute this critical path on one processor and the other tasks (T2 and T4) on the other processor. Such a schedule with $"SL" = 35 "ms"$ is shown in @fig:schedule-rearranged.
#figure(caption: "Schedule with changed task assigments")[
#cetz.canvas({
import cetz.draw: *
let task(name, proc, start, length) = {
rect((start/5, proc*2), ((start+length)/5, proc*2+1), name: name, fill: if proc == 0 {gray} else {none})
content(name)[#name]
}
line((0,-.5), (0,3.5))
content((-.2,.5),anchor:"east")[Proc. 1]
content((-.2,2.5),anchor:"east")[Proc. 2]
task("T1", 0, 0, 5)
task("T3", 0, 5, 10)
task("T5", 0, 15, 20)
task("T2", 1, 0, 10)
task("T4", 1, 10, 5)
line((1,1), (2,2), stroke: (dash: "dashed"), mark: (end: "stealth"))
for m in (5, 15, 35) {
content((m/5,-.1), anchor: "north")[$#m "ms"$]
}
})
] <fig:schedule-rearranged>
The same optimal schedule length can however also be reached while still keeping the original assignments of tasks to processors by evaluating the $"T2" --> "T4"$ path while T5 is running on the other processor. The resulting schedule in shown in @fig:schedule-orig.
#figure(caption: "Schedule with original task assignments")[
#cetz.canvas({
import cetz.draw: *
let task(name, proc, start, length) = {
rect((start/5, proc*2), ((start+length)/5, proc*2+1), name: name, fill: if proc == 0 {gray} else {none})
content(name)[#name]
}
line((0,-.5), (0,3.5))
content((-.2,.5),anchor:"east")[Proc. 1]
content((-.2,2.5),anchor:"east")[Proc. 2]
task("T1", 1, 0, 5)
task("T3", 0, 5, 10)
task("T5", 1, 15, 20)
task("T2", 0, 15, 10)
task("T4", 0, 25, 5)
line((1,2), (1,1), stroke: (dash: "dashed"), mark: (end: "stealth"))
line((3,1), (3,2), stroke: (dash: "dashed"), mark: (end: "stealth"))
line((1,2), (5,1), stroke: (dash: "dashed"), mark: (end: "stealth"))
for m in (5, 15, 35) {
content((m/5,-.1), anchor: "north")[$#m "ms"$]
}
})
] <fig:schedule-orig>
|
|
https://github.com/MrToWy/Bachelorarbeit | https://raw.githubusercontent.com/MrToWy/Bachelorarbeit/master/Code/injectUserAufruf.typ | typst | ```ts
@UseGuards(InjectUser)
@Public()
@Get(':id')
findOne(@User() user:UserDto, @Req() request: Request, @Param('id') id: string) {
[...]
``` |
|
https://github.com/Myriad-Dreamin/tinymist | https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/editors/neovim/Configuration.md | markdown | Apache License 2.0 | # Tinymist Server Configuration
## `outputPath`
The path pattern to store Typst artifacts, you can use `$root` or `$dir` or `$name` to do magic configuration, e.g. `$dir/$name` (default) and `$root/target/$dir/$name`.
- **Type**: `string`
## `exportPdf`
The extension can export PDFs of your Typst files. This setting controls whether this feature is enabled and how often it runs.
- **Type**: `string`
- **Enum**:
- `never`: Never export PDFs, you will manually run typst.
- `onSave`: Export PDFs when you save a file.
- `onType`: Export PDFs as you type in a file.
- `onDocumentHasTitle`: Export PDFs when a document has a title (and save a file), which is useful to filter out template files.
- **Default**: `"never"`
## `rootPath`
Configure the root for absolute paths in typst. Hint: you can set the rootPath to `-`, so that tinymist will always use parent directory of the file as the root path. Note: for neovim users, if it complains root not found, you must set `require("lspconfig")["tinymist"].setup { root_dir }` as well, see [tinymist#528](https://github.com/Myriad-Dreamin/tinymist/issues/528).
- **Type**: `string` or `null`
## `semanticTokens`
Enable or disable semantic tokens (LSP syntax highlighting)
- **Type**: `string`
- **Enum**:
- `enable`: Use semantic tokens for syntax highlighting
- `disable`: Do not use semantic tokens for syntax highlighting
- **Default**: `"enable"`
## `systemFonts`
A flag that determines whether to load system fonts for Typst compiler, which is useful for ensuring reproducible compilation. If set to null or not set, the extension will use the default behavior of the Typst compiler. Note: You need to restart LSP to change this options.
- **Type**: `boolean`
- **Default**: `true`
## `fontPaths`
A list of file or directory path to fonts. Note: The configuration source in higher priority will **override** the configuration source in lower priority. The order of precedence is: Configuration `tinymist.fontPaths` > Configuration `tinymist.typstExtraArgs.fontPaths` > LSP's CLI Argument `--font-path` > The environment variable `TYPST_FONT_PATHS` (a path list separated by `;` (on Windows) or `:` (Otherwise)). Note: If the path to fonts is a relative path, it will be resolved based on the root directory. Note: In VSCode, you can use VSCode variables in the path, e.g. `${workspaceFolder}/fonts`.
- **Type**: `array` or `null`
## `compileStatus`
In VSCode, enable compile status meaning that the extension will show the compilation status in the status bar. Since Neovim and Helix don't have a such feature, it is disabled by default at the language server label.
- **Type**: `string`
- **Enum**:
- `enable`
- `disable`
- **Default**: `"disable"`
## `typstExtraArgs`
You can pass any arguments as you like, and we will try to follow behaviors of the **same version** of typst-cli. Note: the arguments may be overridden by other settings. For example, `--font-path` will be overridden by `tinymist.fontPaths`.
- **Type**: `array`
- **Default**: `[]`
## `formatterMode`
The extension can format Typst files using typstfmt or typstyle.
- **Type**: `string`
- **Enum**:
- `disable`: Formatter is not activated.
- `typstyle`: Use typstyle formatter.
- `typstfmt`: Use typstfmt formatter.
- **Default**: `"disable"`
## `formatterPrintWidth`
Set the print width for the formatter, which is a **soft limit** of characters per line. See [the definition of *Print Width*](https://prettier.io/docs/en/options.html#print-width). Note: this has lower priority than the formatter's specific configurations.
- **Type**: `number`
- **Default**: `120`
|
https://github.com/Flower101010/Typst_template | https://raw.githubusercontent.com/Flower101010/Typst_template/main/README.md | markdown | # Typst Assignment Template
A simple assignment template for [Typst](https://typst.app/).
## Example Usage
You can find an example [here](./example.typ) as well as its render [here](./example.pdf).
|
|
https://github.com/rxt1077/it610 | https://raw.githubusercontent.com/rxt1077/it610/master/markup/slides/compose.typ | typst | #import "@preview/fletcher:0.5.1" as fletcher: diagram, node, edge
#import fletcher.shapes: parallelogram
#import "/templates/slides.typ": *
#show: university-theme.with(
short-title: [Compose],
)
#title-slide(
title: [Docker Compose],
)
#alternate(
title: [Goals],
image: licensed-image(
file: "/images/compose.png",
license: "FAIRUSE",
title: [Docker Compose logo],
url: "https://codeblog.dotsandbrackets.com/quick-intro-to-docker-compose/compose-logo/",
),
text: [
- Run multiple containers
- Have containers communicate with each other
- Easily to bring a whole system up and down
],
)
#alternate(
title: [The Good],
image: licensed-image(
file: "/images/thumbs-up.jpg",
license: "CC BY 2.0",
title: [thumbs up],
url: "https://live.staticflickr.com/2426/3958637561_c266717b95_o_d.jpg",
author: [<NAME>],
author-url: "https://www.flickr.com/photos/62337512@N00/",
),
text: [
- Already included with Docker Desktop
- Easy to run
- Can be used on dev machines
],
)
#alternate(
title: [The Bad],
image: licensed-image(
file: "/images/thumbs-down.jpg",
license: "CC BY-NC-SA 2.0",
title: [2 thumbs down],
url: "https://www.flickr.com/photos/88494026@N00/2516298393",
author: [sparklemotion0],
author-url: "https://www.flickr.com/photos/88494026@N00",
),
text: [
- Strange persistence choices
- Difficult to run on multiple machines
- Not considered production stable
- Path name issues
],
)
#slide(title: [Setup])[
#align(center,
text(20pt,
diagram(
spacing: 15pt,
node-fill: gradient.radial(white, blue, radius: 200%),
node-stroke: blue,
(
node((2,0), [Project Root],
shape: parallelogram.with(angle: 20deg),
fill: gradient.radial(white, green, radius: 200%)),
edge((1,1), "-"),
edge((2,1), "-"),
edge((3,1), "-"),
edge((5,1), "-"),
node((1,1), [docker-compose.yml], shape: rect),
node((2,1), [.env], shape: rect),
node((3,1), [web],
shape: parallelogram.with(angle: 20deg),
fill: gradient.radial(white, green, radius: 200%)),
edge((2.5,2), "-"),
edge((3.75,2), "-"),
node((2.5,2), [Dockerfile]),
node((3.75,2), [httpd.conf]),
node((5,1), [db],
shape: parallelogram.with(angle: 20deg),
fill: gradient.radial(white, green, radius: 200%)),
edge("-"),
node((5, 2), [Dockerfile])
)
)
)
)
\
- `docker-compose.yml` in the root of the project
- `.env` file will be loaded, environment variables can be used
- directories are used for individual parts of system
]
#focus-slide()[
What goes in the docker-compose.yml?
]
#matrix-slide(columns: 3, rows: 2,
[*version* \
info only, not required or recommended],
[*services* \
containers that will be run, with options],
[*networks* \
individual networks to be created, if omitted one network linking all services will be created],
[*volumes* \
persistent data stores],
[*configs* \
volumes for config files],
[*secrets* \
configs for sensitive data]
)
#slide(title: [Services])[
- largest part of a docker-compose.yml file
- most options in the compose file spec
- services can be resolved by their name internally
]
#slide(title: link("https://github.com/docker/awesome-compose/blob/master/pihole-cloudflared-DoH/compose.yaml")[Service Example], [
#set text(10pt)
```yaml
pihole:
container_name: pihole
image: pihole/pihole:latest
ports:
- "53:53/tcp"
- "53:53/udp"
- "67:67/udp"
- "8080:80/tcp"
- "8443:443/tcp"
environment:
- TZ=${TIMEZONE}
- PIHOLE_DNS_=172.20.0.2#5054;1.1.1.1 # referencing by name results in "Invalid IP detected in PIHOLE_DNS_: cloudflared#5054"
- WEBPASSWORD=${<PASSWORD>}
- REV_SERVER=true
- REV_SERVER_TARGET=${PIHOLE_ROUTER_IP}
- REV_SERVER_DOMAIN=${PIHOLE_NETWORK_DOMAIN}
- REV_SERVER_CIDR=${PIHOLE_REVERSE_DNS}
- ServerIP=${PIHOLE_HOST_IP}
- ServerIPv6=${PIHOLE_HOST_IPV6}
#dns:
#- 127.0.0.1 # "Sets your container's resolve settings to localhost so it can resolve DHCP hostnames [...]" - github.com/pi-hole/docker-pi-hole
#- 1.1.1.1 # Backup server
volumes: # store your data between container upgrades
- "/etc/pihole/:/etc/pihole/"
- "/etc/dnsmasq.d/:/etc/dnsmasq.d/"
cap_add:
- NET_ADMIN # Recommended but not required (DHCP needs NET_ADMIN) https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
depends_on:
- "cloudflared"
restart: always
networks:
- dns-net
```
])
#slide(title: [Running])[
- similar options to `docker` command but with a few things fixed
- `docker-compose.yml` expected to be in the direction where you are running `docker compose`
- `docker compose up` brings everything up in the foreground
- `docker compose down` (Ctrl-C if running) brings things down (gracefully, hopefully)
- `docker compose build` builds all custom Docker images, don't forget!
- `docker compose exec <service>` run something on a running service
- `docker compose run <service>` run a running container
]
#slide(title: [Resources])[
#align(center,
licensed-image(
file: "/images/compose-diagram.svg",
license: "CC BY-NC 4.0",
title: [compose-diagram.svg],
url: "https://rxt1077.github.io/it610/docs/images/compose-diagram.svg",
author: [<NAME>],
author-url: "https://using.tech",
width: 75%,
)
)
\
- #link("https://docs.docker.com/compose/compose-file/)")[Full Compose File Spec]
- #link("https://docs.docker.com/compose/reference/")[Docker Compose CLI reference]
- #link("https://github.com/docker/awesome-compose/")[Awesome Compose (curated list of cool Docker Compose examples)]
]
|
|
https://github.com/TypstApp-team/typst | https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/text/tracking-spacing.typ | typst | Apache License 2.0 | // Test tracking characters apart or together.
---
// Test tracking.
#set text(tracking: -0.01em)
I saw Zoe yӛsterday, on the tram.
---
// Test tracking for only part of paragraph.
I'm in#text(tracking: 0.15em + 1.5pt)[ spaace]!
---
// Test that tracking doesn't disrupt mark placement.
#set text(font: ("PT Sans", "Noto Serif Hebrew"))
#set text(tracking: 0.3em)
טֶקסט
---
// Test tracking in arabic text (makes no sense whatsoever)
#set text(tracking: 0.3em)
النص
---
// Test word spacing.
#set text(spacing: 1em)
My text has spaces.
---
// Test word spacing relative to the font's space width.
#set text(spacing: 50% + 1pt)
This is tight.
|
https://github.com/a-mhamdi/graduation-report | https://raw.githubusercontent.com/a-mhamdi/graduation-report/main/Typst/fr-Rapport/Title-page.typ | typst | MIT License | #let titlepage(
title: "",
titre: "",
diploma: "",
program: "",
supervisor: "",
author: "",
date: none,
) = {
set page(
header: none,
margin: (left: 30mm, right: 30mm, top: 40mm, bottom: 40mm),
numbering: none,
number-align: center,
)
let body-font = "Garamond"
let sans-font = "Garamond"
set text(
font: body-font,
size: 12pt,
lang: "fr"
)
set par(leading: 1em)
// --- Title Page ---
v(15mm)
align(center, text(font: sans-font, 1.25em, weight: 100, "Ministère de l'Enseignement Supérieur et de la Recherche Scientifique, Tunisie"))
v(10mm)
align(center, text(font: sans-font, 1.25em, weight: 100, "Institut Supérieur des Etudes Technologiques de Bizerte"))
v(5mm)
align(center, image("Logo-ISETBZ.png", width: 25%))
v(15mm)
align(center, text(font: sans-font, 1.3em, weight: 100, diploma + " en " + program))
v(8mm)
align(center, text(font: sans-font, 2em, weight: 700, title))
pad(
top: 3em,
right: 15%,
left: 15%,
grid(
columns: 2,
gutter: 1em,
strong("Etudiant(e) : "), author,
strong("Encadrant(e) : "), supervisor,
strong("Date : "), date,
)
)
pagebreak()
}
|
https://github.com/TechnoElf/mqt-qcec-diff-thesis | https://raw.githubusercontent.com/TechnoElf/mqt-qcec-diff-thesis/main/content/background/dd.typ | typst | #import "@preview/cetz:0.2.2": canvas, draw
#import "@preview/fletcher:0.5.1": diagram, node, edge
#import "@preview/gentle-clues:0.9.0": example
#import "@preview/tablex:0.0.8": tablex, vlinex, hlinex
#import "@preview/quill:0.3.0": quantum-circuit, lstick, rstick, ctrl, targ
== Decision Diagrams
Decision diagrams in general are a type of @dag, that may be used to express control flow through a series of conditions.
It consists of a set of decision nodes and terminal nodes.
The decision nodes represent an arbitrary decision based on an input value and may thus have any number of outgoing edges.
The terminal nodes represent output values and may not have outgoing edges.
A @bdd is a specific kind of decision diagram, where there are two terminal nodes (0 and 1) and each decision node has two outgoing edges, depending solely on a single bit of an input value.
@bdd[s] may be used to represent any boolean function, as illustrated by the following example.
#example(breakable: false)[
Example @bdd[s] implementing boolean functions with an arity of $2$ are show in @example_bdd_xor and @example_bdd_and.
#figure(
grid(
columns: (4fr, 1fr, 4fr, 1fr, 4fr),
align(
horizon,
canvas({
import draw: *
let x = calc.cos(calc.asin(0.25)) - calc.cos(calc.asin(0.5)) - 0.6
merge-path(close: true, {
arc((0.5, 0), start: -90deg, stop: -30deg, anchor: "end", name: "bcurve")
arc((0.5,0), start: 30deg, stop: 90deg, anchor: "start", name: "tcurve")
line("tcurve.end", (-0.5, 0.5))
arc((), start: 30deg, stop: -30deg, anchor: "start")
})
arc((-0.6, -0.5), start: -30deg, stop: 30deg)
line((x, 0.25), (-1, 0.25))
line((x, -0.25), (-1, -0.25))
line((0.5, 0), (1, 0))
content((-1, 0.25), [$x_0$], anchor: "east")
content((-1, -0.25), [$x_1$], anchor: "east")
})
),
align(horizon)[$<=>$],
align(
horizon,
tablex(
columns: (1cm, 1cm, 1cm),
auto-vlines: false,
auto-hlines: false,
[*$x_1$*], [*$x_2$*], vlinex(), [out],
hlinex(),
[0], [0], [0],
[0], [1], [1],
[1], [0], [1],
[1], [1], [0]
)
),
align(horizon)[$<=>$],
align(
horizon,
diagram(
node-stroke: .1em,
node((0, 0), [$x_1$], radius: 1em),
edge((0, 0), (-0.5, 1), [0], "->"),
edge((0, 0), (0.5, 1), [1], "->"),
node((-0.5, 1), [$x_0$], radius: 1em),
node((0.5, 1), [$x_0$], radius: 1em),
edge((-0.5, 1), (-0.5, 2), [0], "->"),
edge((-0.5, 1), (0.5, 2), [1], "->"),
edge((0.5, 1), (0.5, 2), [0], "->"),
edge((0.5, 1), (-0.5, 2), [1], "->"),
node((-0.5, 2), [$0$], shape: rect),
node((0.5, 2), [$1$], shape: rect)
)
)
),
caption: [A @bdd for an XOR gate.]
) <example_bdd_xor>
#figure(
grid(
columns: (4fr, 1fr, 4fr, 1fr, 4fr),
align(
horizon,
canvas({
import draw: *
merge-path(close: true, {
arc((0,0), start: -90deg, stop: 90deg, radius: 0.5, anchor: "origin", name: "curve")
line((0, 0.5), (-0.5, 0.5), (-0.5, -0.5))
})
line((-0.5, 0.25), (-1, 0.25))
line((-0.5, -0.25), (-1, -0.25))
line((0.5, 0), (1, 0))
content((-1, 0.25), [$x_0$], anchor: "east")
content((-1, -0.25), [$x_1$], anchor: "east")
})
),
align(horizon)[$<=>$],
align(
horizon,
tablex(
columns: (1cm, 1cm, 1cm),
auto-vlines: false,
auto-hlines: false,
[*$x_1$*], [*$x_2$*], vlinex(), [out],
hlinex(),
[0], [0], [0],
[0], [1], [0],
[1], [0], [0],
[1], [1], [1]
)
),
align(horizon)[$<=>$],
align(
horizon,
diagram(
node-stroke: .1em,
node((1, 0), [$x_1$], radius: 1em),
edge((1, 0), (0, 2), [0], "->"),
edge((1, 0), (1, 1), [1], "->"),
node((1, 1), [$x_0$], radius: 1em),
edge((1, 1), (0, 2), [0], "->"),
edge((1, 1), (1, 2), [1], "->"),
node((0, 2), [$0$], shape: rect),
node((1, 2), [$1$], shape: rect)
)
)
),
caption: [A @bdd for an AND gate.]
) <example_bdd_and>
]
A quantum @dd in turn is a representation of the system matrix of a quantum circuit.
Here the end nodes represent specific values in the matrix.
The location of the value is determined by the outgoing edges of the previous nodes.
At each layer $q_n$ of the graph, the matrix is recursively split up into four sub matrices.
The left to right, the edges represent a decent into the top left, top right, bottom left and bottom right corners of the matrix respectively.
As the $0$ end node is usually the most common, it can be left out entirely and represented by unconnected edges instead.
Additionally, the edges can be assigned weights to represent a multiplication of the entire matrix by a coefficient.
Therefore, only a single end node with the value $1$ is needed in quantum @dd[s].
The following example demonstrates the relation between a quantum gate, its system matrix and the resulting @dd representation.
#example(breakable: false)[
#figure(
grid(
columns: (4fr, 1fr, 4fr, 1fr, 4fr),
align(
horizon,
quantum-circuit(
lstick($|q_0〉$), ctrl(1), 1, [\ ],
lstick($|q_1〉$), targ(), 1
)
),
align(horizon)[$<=>$],
align(horizon)[$mat(
1, 0, 0, 0;
0, 1, 0, 0;
0, 0, 0, 1;
0, 0, 1, 0
)$],
align(horizon)[$<=>$],
align(
horizon,
diagram(
node-stroke: .1em,
edge((0, -0.5), (0, 0), [], "->"),
node((0, 0), [$q_1$], radius: 1em),
edge((0, 0), (0, 1), [], "->", bend: -30deg),
edge((0, 0), (0, 0.5), [], "->", bend: -10deg),
edge((0, 0), (0, 0.5), [], "->", bend: 10deg),
edge((0, 0), (0, 1), [], "->", bend: 30deg),
node((0, 0.5), [$0$], stroke: 0pt),
node((0, 1), [$q_0$], radius: 1em),
edge((0, 1), (-0.5, 1.5), [], "->", bend: 10deg),
edge((0, 1), (0, 2), [], "->", bend: -10deg),
edge((0, 1), (0, 2), [], "->", bend: 10deg),
edge((0, 1), (0.5, 1.5), [], "->", bend: -10deg),
node((-0.5, 1.5), [$0$], stroke: 0pt),
node((0.5, 1.5), [$0$], stroke: 0pt),
node((0, 2), [$1$], shape: rect),
)
)
),
caption: [A quantum CNOT gate, its matrix representation and the @dd of the matrix.]
) <example_dd>
]
|
|
https://github.com/kdog3682/mathematical | https://raw.githubusercontent.com/kdog3682/mathematical/main/0.1.0/src/demos/index.typ | typst | #import "fancy-arithmetic-sequence.typ": fancy-arithmetic-sequence
#import "proportion-table.typ": proportion-table
#import "ratio.typ": ratio
|
|
https://github.com/d-mironov/typst-slides-uni-freiburg | https://raw.githubusercontent.com/d-mironov/typst-slides-uni-freiburg/main/README.md | markdown | MIT License | # typst-slides-uni-freiburg
Typst Presentation template to use in the University of Freiburg.
|
https://github.com/Skimmeroni/Appunti | https://raw.githubusercontent.com/Skimmeroni/Appunti/main/Metodi%20Algebrici/Crittografia/Primalita.typ | typst | Creative Commons Zero v1.0 Universal | #import "../Metodi_defs.typ": *
Si voglia generare, fissato un certo ordine di grandezza,
un numero primo qualsiasi di tale dimensione. L'approccio
piú semplice consiste nello scegliere un numero $n$ dispari
di tale ordine di grandezza fissato e valutare se tale numero
é primo; se non lo é, si considera $n + 2$ e si valuta se é
primo, se non lo é si valuta $n + 4$, ecc... Per il Teorema
dei Numeri Primi, é garantito che un numero venga trovato
entro $O(log(n))$ passi.
Viene detto *test di primalitá* un procedimento, in genere
espresso sotto forma di algoritmo, che permette di determinare
se un numero intero $n$ qualsiasi sia o non sia un numero primo.
Il test piú semplice, detto _test naive_, prevede di calcolare,
per ciascun $1 < k < n$, la divisione fra $n$ e $k$: se
esiste almeno un $k$ tale per cui $k | n$, allora $n$ non é
primo, altrimenti lo é. Questo approccio puó essere migliorato
osservando come nell'intervallo $1 < k < n$ possono ripetersi
piú volte dei multipli di numeri primi: se tali numeri fossero
divisori di $n$, per il @Euclid-lemma anche i loro fattori lo
sarebbero, ma questi sarebbero giá stati testati (essendo
certamente minori di $n$). Pertanto, non é necessario considerare
tutti i numeri nell'intervallo $1 < k < n$, ma soltanto quelli
primi.
#lemma[
Sia $n in NN$ con $n gt.eq 1$. Se $n$ é un numero composto,
allora almeno uno dei numeri primi che costituiscono la sua
fattorizzazione é minore di $floor(sqrt(n))$.
]
#proof[
Si supponga che, in una certa iterazione del test naive, venga
trovato un numero $p$ che é divisore di $n$. Allora é possibile
scrivere $n = p q$ per un certo $q in ZZ$. Per come il test
é strutturato, $q$ deve necessariamente essere maggiore o uguale
a $p$, perché altrimenti il test lo avrebbe giá individuato (o
avrebbe individuato un numero primo della sua fattorizzazione).
Se $q$ fosse maggiore di $p$, allora non sarebbe rilevante ai
fini del test, perché $p$ verrebbe scoperto prima di $q$ ed
il test terminerebbe comunque. L'unico caso rilevante si ha
con $p = q$, ovvero $n = p dot p = p^(2)$, da cui si ha
$sqrt(n) = p$. Dato che $p$ é stato scelto casualmente, si ha
che deve esistere almeno un $p$ con queste caratteristiche
nell'intervallo $(1, n)$.
]
Il test naive é un test _deterministico_, ovvero garantisce di restituire
sempre la risposta corretta. In altre parole, se il test determina che un
certo numero $n$ é un numero primo, allora tale numero é effettivamente un
numero primo, mentre se determina che é un numero composto allora é
effettivamente un numero composto. Il problema di tale test é che
richiede troppe computazioni, specialmente per numeri grandi.
Approcci diversi sono forniti dai test _probabilistici_, ovvero che
non garantiscono di fornire la risposta corretta ma lo fanno a meno
di una certa percentuale. Il vantaggio di tali test é che sono molto
piú veloci dei test deterministici, e quindi utilizzabili nella pratica
(specialmente quando un certo margine di errore é tollerato). Inoltre,
piú test probabilistici possono essere applicati ad uno stesso numero:
piú test confermano lo stesso risultato e maggiore é la certezza del
responso.
Un esempio di test probabilistico per determinare se un intero $n$ é primo
é il *test di primalitá di Fermat*. Se $n$ é pari ed é diverso da $2$, allora
é certamente un numero composto. Pertanto, senza perdita di generalitá, si
assuma che $n$ sia dispari. L'algoritmo procede come segue:
#line(length: 100%)
+ Si fissi un parametro $k$, che determina il numero di volte che l'algoritmo
verrá eseguito;
+ Si scelga un qualsiasi numero $b$ tale per cui $0 < b < n$;
+ Si calcoli $"MCD"(b, n)$;
+ Se $"MCD"(b, n) > 1$, allora $n$ é certamente un numero composto,
perché ha almeno $"MCD"(b, n)$ come divisore, e l'algoritmo termina.
Altrimenti, si calcoli $b^(n − 1) mod n$;
+ Se $b^(n − 1) equiv.not 1 mod n$, allora $n$ é certamente un numero
composto, perché altrimenti violerebbe il @Fermat-little-theorem, e
l'algoritmo termina. Se invece $b^(n − 1) equiv 1 mod n$, l'iterazione
corrente per il test é "inconclusiva";
+ Se l'algoritmo é giá stato eseguito $k$ volte ed é sempre stato
inconclusivo, allora $n$ é _probabilmente_ un numero primo, e
l'algoritmo termina, altrimenti riprende dal punto 2.
#line(length: 100%)
Si noti come l'algoritmo garantisca di identificare un numero composto
come tale, ma non garantisce di identificare un numero primo come tale.
Ovvero, potrebbe presentarsi una situazione in cui l'algoritmo ha in
input un numero composto ma lo classifica come primo.
L'algoritmo si basa infatti sull'assunto che esistano pochissimi numeri
composti $n$ tali per cui vale $b^(n − 1) equiv 1 mod n$, e che quindi
ripetendo l'algoritmo abbastanza volte non si incorra in un falso positivo.
Naturalmente, é necessario sostanziare l'assunto che i numeri con queste
caratteristiche siano effettivamente pochi.
Siano $n > 1$ un numero intero dispari e $b$ un intero qualsiasi,
primo con $n$. Se vale $b^(n − 1) equiv 1 mod n$ si dice che $n$
è *pseudoprimo di Fermat* rispetto alla base $b$. Il nome "pseudoprimo
di Fermat" viene dal fatto che tali numeri possiedono una caratteristica
che ci si aspetta possiedano solamente i numeri primi (rispettare l'enunciato
del @Fermat-little-theorem) pur non essendo necessariamente numeri primi.
#example[
$15$ é uno pseudoprimo di Fermat per la base $4$. Infatti, $4$ e $15$
sono primi fra di loro ed é vero che $4^(14) equiv 1 mod 15$. Non é
peró pseudoprimo rispetto a $2$, perché pur essendo $2$ e $15$ coprimi
si ha che $2^(14)$ non é congruente ad $1$ modulo $15$. Una iterazione
del test di primalitá di Fermat scegliendo $b = 4$ darebbe per $15$
risposta inconclusiva, mentre una iterazione con $b = 2$ classificherebbe
(correttamente) $15$ come numero composto.
]
#lemma[
Un numero primo $p$ é pseudoprimo di Fermat rispetto a qualsiasi base.
]
#proof[
Se $p$ é primo, allora é rispettato il @Fermat-little-theorem, e
quindi vale $b^(p) equiv b mod p$ per qualsiasi $b in ZZ$. Inoltre,
se $p$ non é divisore di $b$, allora vale anche $b^(p - 1) equiv 1
mod p$. Affinché $p$ sia pseudoprimo di Fermat rispetto ad una certa
base $b$ deve essere coprimo con $b$; se $p$ non é divisore di $b$,
allora é certamente coprimo con $b$. Pertanto, per $p$ vale $b^(p - 1)
equiv 1 mod p$, che é la definizione di pseudoprimo di Fermat.
]
#theorem[
Per ogni intero $b > 1$, esistono infiniti pseudoprimi di Fermat
composti rispetto alla base $b$.
]
// #proof[
// Dimostrabile, da aggiungere
// ]
Si supponga di aver applicato il test di primalitá di Fermat $r$ volte,
usando quindi $r$ basi (non necessariamente distinte) $b_(1), b_(2), ...,
b_(r)$, e di aver trovato che $n$ é probabilmente primo. L'efficienza del
test dipende dalla probabilitá che $n$ sia effettivamente primo.
#theorem[
Sia $n > 1$ un intero composto dispari. Se $n$ non è pseudoprimo di
Fermat rispetto ad almeno una base $b$, allora $n$ non è pseudoprimo
per almeno la metà delle basi possibili viste modulo $n$, cioè le
$phi(n)$ basi $b$ con $0 < b < n$ e $"MCD"(b, n) = 1$.
] <Fermat-pseudoprime-estimate>
// #proof[
// Dimostrabile, da aggiungere
// ]
Se $n$ è un numero composto e vale $b_(i)^(n − 1) equiv 1 mod n$,
si ha che $n$ é pseudoprimo di Fermat rispetto a $b_(i)$. Per il
@Fermat-pseudoprime-estimate, la probabilitá che $n$ "superi" il
test pur non essendo un numero primo é $frac(1, 2)$. Dato che ogni
iterazione dell'algoritmo é indipendente dalle altre, la probabilitá
che $n$ "superi" tutte e $k$ le iterazioni del test pur non essendo
un numero primo é $(frac(1, 2))^(k)$.
#example[
Ripetendo il test di primalitá di Fermat $k = 5$ volte, la probabilitá
di classificare incorrettamente un numero composto come primo é circa
$0.03$. Ripetendo il test $k = 10$ volte, la probabilitá é circa $0.001$.
Ripetendo il test $k = 50$ volte, la probabilitá é circa $9 dot 10^(-16)$.
]
Sia $n$ un numero intero composto. Se esiste almeno una base per la quale
$n$ non é pseudoprimo, ripetendo il test di primalitá di Fermat infinite
volte é garantito che prima o poi verrá compiuta una iterazione con tale
base, classificandolo correttamente come composto. Esistono peró dei numeri
che sono pseudoprimi di Fermat rispetto ad ogni possibile base, e che quindi
saranno sempre in grado di "eludere" il test di primalitá di Fermat a
prescindere di quante volte lo si ripeta.
Sia $n > 1$ un intero dispari composto. Si dice che $n$ è un *numero di
Carmichael* se é pseudoprimo di Fermat rispetto a qualsiasi base, ovvero
se vale:
$ b^(n − 1) equiv 1 mod n
space "per ogni" space b in ZZ space "con" space (b, n) = 1 $
#example[
I numeri di Carmichael minori di $10000$ sono soltanto $7$. In
particolare, questi sono: $561, 1105, 1729, 2465, 2821, 6601, 8911$.
]
#lemma[
Un numero intero composto $n > 1$ è un numero di Carmichael se e
soltanto se:
- $n$ non ha alcun quadrato perfetto tra i suoi divisori;
- $p − 1 divides n − 1$ per ogni divisore primo $p$ di $n$.
] <Is-a-Carmichael-number>
// #proof[
// Dimostrabile, da aggiungere
// ]
#corollary[
Un numero di Carmichael è prodotto di almeno tre numeri primi distinti.
] <Carmichael-three-primes>
#proof[
Si supponga per assurdo che un numero di Carmichael sia prodotto di meno
di tre numeri primi distinti. Naturalmente, non si puó avere un numero di
Carmichael che sia prodotto di un solo numero primo, perché altrimenti
si violerebbe il @Is-a-Carmichael-number.
Devono allora esistere due numeri primi distinti $p$ e $q$ tali che
$n = p q$. Si supponga, senza perdita di generalitá, che $p < q$. Per
il @Is-a-Carmichael-number deve aversi $p - 1 divides n - 1$ e $q - 1
divides n - 1$. Ovvero:
$ n − 1 = (p − 1)(q − 1) + (p − 1) + (q − 1) $
Da cui si ricava $p - 1 divides q - 1$. Analogamente, da $q − 1 divides
n − 1$ si ha $q − 1 divides p − 1$. Ma allora si ha $p − 1 = q − 1$,
ovvero $p = q$, che é in contraddizione con le ipotesi di partenza in
quanto si ha assunto che $p$ e $q$ fossero distinti. Occorre quindi
concludere che un numero di Carmichael è prodotto di tre o piú numeri
primi distinti.
]
#example[
Si consideri il numero $561$. I suoi divisori sono $1, 3, 11, 17, 33,
51, 187, 561$ e nessuno di questi é un quadrato perfetto. Inoltre, i
suoi divisori primi sono $p_(1) = 3$, $p_(2) = 11$ e $p_(3) = 17$, e
si ha $p_(1) - 1 divides 560$, $p_(2) - 1 divides 560$, $p_(3) - 1
divides 560$. In base al @Is-a-Carmichael-number, si ha che $561$ é un
numero di Carmichael. Inoltre, per il @Carmichael-three-primes, tale
numero deve essere il prodotto di almeno tre primi distinti, ed in
effetti si ha $561 = 3 dot 11 dot 17$.
]
|
https://github.com/darkMatter781x/OverUnderNotebook | https://raw.githubusercontent.com/darkMatter781x/OverUnderNotebook/main/packages.typ | typst | // Provides a centralized way to specify package versions
#import "@local/notebookinator:1.0.0"
#import "@preview/codetastic:0.2.2"
#import "@preview/diagraph:0.1.0"
#import "@preview/gentle-clues:0.6.0"
#import "@preview/codly:0.2.0" |
|
https://github.com/Hobr/njust_thesis_typst_template | https://raw.githubusercontent.com/Hobr/njust_thesis_typst_template/main/page/cover.typ | typst | MIT License | // 封面
// 字体
#import "../util/font.typ": fonts, fontSize
// 信息栏
#let coverKey(key) = rect(
width: 100%,
inset: 5pt,
stroke: none,
text(
size: fontSize.三号,
font: fonts.zh_等线,
overhang: true,
spacing: 1250%,
)[#key],
)
#let coverValue(value) = rect(
width: 100%,
inset: 5pt,
stroke: (
bottom: 0.75pt + black,
),
text(
size: fontSize.三号,
font: fonts.zh_等线,
)[#value],
)
// 展示封面
#let showCover(info, author) = align(center)[
#v(75pt)
#table(
columns: (auto),
rows: (auto, auto, auto),
stroke: none,
gutter: 32pt,
par(
leading: 1.15em,
text(
size: fontSize.一号,
font: fonts.zh_黑体,
tracking: -1.5pt,
)[《ChatGPT时代的科技论文检索与写作》课程报告],
),
text(
size: fontSize.小一,
font: fonts.zh_黑体,
fill: rgb("#2f5597"),
)[#info.title],
)
#v(18pt)
#grid(
columns: (25%, 57%),
rows: (40pt, 40pt),
coverKey("班级(班号)"),
coverValue(author.at(0).classid),
coverKey("姓 名"),
coverValue(author.at(0).name),
coverKey("学 号"),
coverValue(author.at(0).studentid),
coverKey("学 院"),
coverValue(author.at(0).department),
)
#v(44pt)
#table(
columns: (auto),
rows: (auto, auto, auto),
stroke: none,
gutter: 18pt,
text(size: fontSize.三号, font: fonts.zh_等线)[南京理工大学],
text(size: fontSize.三号, font: fonts.zh_等线)[#info.date],
)]
|
https://github.com/yhtq/Notes | https://raw.githubusercontent.com/yhtq/Notes/main/抽象代数/章节/环.typ | typst | #import "../../template.typ": *
#show: note.with(
title: "抽象代数/代数学基础",
author: "YHTQ",
date: none,
logo: none,
)
#let chapterTwo = [
= 环
== 概念
#definition[环][
设 $R$ 为非空集合,$dot, +$ 为 $R$ 上的二元运算,若 $R$ 满足:
- $R$ 关于 $dot$ 构成幺半群,单位元往往记为 $1$
- $R$ 关于 $+$ 构成交换群,单位元往往记为 $0$
- $dot$ 对 $+$ 有左/右分配律,即:
$
forall a, b, c in R, space a (b + c) &= (a b) + (a c) = a b + a c \
(b + c) a &= (b a) + (c a) = b a + c a
$
(习惯上乘法优先级高于加法)\
则称 $R$ 为环\
若乘法也交换,则称 $R$ 为交换环或可交换环\
若非零元都可逆,则称其为除环(division ring)或者体(skew field)\
一般而言,额外规定 $0 != 1$,换言之环至少有两个元素
]
注:这里定义环的乘法有单位元,许多资料中不要求环有单位元,此时有单位元的环称为幺环
#definition[域][
交换的可除环称为域
]
#example[][
- $ZZ$ 在通常的加法和乘法下构成环。当 $k > 1$ 时,$k ZZ$ 无乘法单位元,但其它要求都满足
- $ZZ[1/n] := {a / N^r, a in ZZ, r in ZZ_(>=0)}$
- 设 $R$ 是环。则:
- $R[x]$ 也即 $R$ 上的多项式构成环
- $"Mat"_(n times m) (R)$ 也即 $R$ 上的 $n times m$ 矩阵构成环
- 四元数:$H = {a + b i + c j + d k, a, b, c, d in RR}$,其乘法规定为:
$
i^2 = j^2 = k^2 = -1\
i j = - j i = k, j k = - k j = i, k i = - i k = j
$
类似复数,定义其共轭为:
$
overline(a + b i + c j + d k) = a - b i - c j - d k
$
注:共轭不是同构,我们有:$overline(z w) = overline(w) overline(z)$
定义其模为 $norm(z) := z overline(z) = a^2 + b^2 +c^2 +d^2$,进而 $Inv(z) = overline(z)/norm(z)$
注:四元数中,可以只要求 $a, b, c, d in QQ$,$i^2 = A, j^2 = B, k^2 = i j i j = -i^2 j^2 = -A B$,这是一种在数论上很有用的四元数代数。对某些 $A, B$,这个代数相当于一个矩阵代数,但大部分情况不是
- 群环(group ring):设 $G$ 是交换群,定义:
$
R[G] = {sum_(g in G) a_g g, a_g in R, "仅有有限个" a_g != 0}\
$
乘法规定为类似多项式的乘法\
例:
- $G = ZZ_n$,则 $R[G] = {a_0 +a_1 sigma + a_2 sigma^2 +... +a_(n-1)sigma^(n-1)}$
- $G = ZZ$,则 $R[G] = R[RR]$
]
#proposition[][
- 除零环外,$0$ 是不可逆的
- $forall a in R, space 0 a = a 0 = 0$
- $forall a in R, space (-1) a = a (-1) = -a$
- $forall a in R, space (-a) (-b) = a b$
- $forall a in R, space a (-b) = (-a) b = - (a b)$
- $forall a in R, space (-a) b = a (-b) = - (a b)$
]
#definition[环之间的映射][
设 $R, S$ 都是环,$f: R -> S$ :
+ 为环同态,若:
- 保持加法:$forall a, b in R, space f(a + b) = f(a) + f(b)$
- 保持乘法:$forall a, b in R, space f(a b) = f(a) f(b)$
- 保持单位元:$f(1) = 1$
此时规定 $ker(f) = Inv(f)(0)$,类似有 $f$ 是单射 $<=> ker(f) = {0}$
+ 为环同构,若 $f$ 是双射
]
#example[][
+ $Z -> Z_n$ 上有自然的模 $n$ 同态
+ 若 $R$ 是交换环(注意交换性),则存在 $R$ 上多项式环 $R[x]$ 到 $R$ 的代入同态 $f$\
注:若没有交换性,则 $(a x)(a x) ?= a^2 x^2$
]
#definition[][
设 $R$ 是环:
- 称 $x$ 是零除子/零因子(zero-divider),若存在 $y != 0$ 使得 $x y = 0$ 或 $y x = 0$
- 称 $x$ 为可逆元/单位,若存在 $y$ 使得 $x y = y x = 1$
- 单位一定不是零因子
- 将所有 $R$ 上的乘法可逆元构成的集合记为 $R^times$,它关于乘法构成群
]
#proposition[][
若 $a$ 不是零除子,则:
$
a b = a c => a(b - c) = 0 => b = c
$
]
#definition[整环][
无零除子的交换环称为整环(integral domain)
]
由上面命题知,整环中的乘法满足消去律
#example[][
- 在 $ZZ_n$ 中,可逆元是那些与 $n$ 互素的元素,零除子是那些与 $n$ 不互素的元素(除 $0$ 之外)
- 设 $R$ 是整环,则 $R[x]$ 也是整环
]
#lemma[][
设 $R$ 是有限整环,则 $R$ 是域
]
#proof[
只需证明每个非零元都有乘法逆\
设 $a in R, a != 0$。显然 $f := [x | x -> a x]$ 是加法群同态。\
更进一步,$f$ 是单射,因为 $ker(f) = {0}$,而 $R$ 是有限集,故 $f$ 是双射。\
因此,取 $b = Inv(f)(1)$ 立得 $a b = 1$
]
#theorem[][
有限的可除环均可交换(从而为域)
]
其证明颇为复杂
#definition[分式域/fraction field][
模仿整数构造有理数的方法,可以从整环中构造分式域:
$
"Frac"(R) = {(a, b) in R times (R - {0})}
$
称 $(a, b) = (c, d)$ 当且仅当 $a d = b c$,类似有理数定义乘法和加法
]
显然 $QQ$ 是 $ZZ$ 的分式域,分式多项式是整多项式的分式域
== 理想
环中的理想类似群中正规子群,是为了构造商结构必须的
#definition[左理想(left ideal)][
设 $R$ 是环,$I$ 是 $R$ 的子集,若:
- $I$ 关于加法构成子群(对加法封闭)
- $forall a in R, space forall b in I, space a b in I$
则称 $I$ 是 $R$ 的左理想
]
交换第二条中的乘顺序,得到右理想的定义
#definition[双边理想/理想(ideal/two-sided ideal)][
若 $I$ 同时是左理想和右理想,则称 $I$ 是双边理想或者理想
]
注意:一个理想几乎总不是子环,首先它几乎不可能包含单位元,否则将有 $I = R$
#example[][
设 $R$ 是交换环,${a_j | j in J} subset R$,称:
$
{sum_(j in J) r_j a_j | r_j in R, "仅有有限个" r_j != 0}
$
为 ${a_j | j in J}$ 生成的理想。显然它也是包含 ${a_j | j in J}$ 的最小理想,一般记作 $(a_j | j in J)$\
考虑 $ZZ$,则 ${2, 4, 6, 8}$ 的生成理想就是全体偶数。一般的,整数集合生成的理想就是它们的最大公约数生成的理想
]
#definition[理想之间的运算][
设 $I, J$ 是环 $R$ 的双边理想,则:
- $I + J = {a + b | a in I, b in J}$
- $I J = {a_i b_j "的有限和" | a_i in I, b_j in J}$
]
这可以理解为某些元素生成的理想之和就是这些生成元的和生成的理想,乘法类似
#proposition[][
注意到 $I J subset I, I J subset J$,从而:
$
I J subset I sect J
$
]
== 商环
#definition[商环][
设 $R$ 是环,$I$ 是 $R$ 的(双边)理想,定义:
$
quotient(R, I) = {a + I | a in R}
$
其中 $a + I = b + I <=> a - b in I$,定义:
$
(a + I) + (b + I) = (a + b) + I\
(a + I) (b + I) = (a b) + I
$
则 $quotient(R, I)$ 关于上述加法和乘法构成环,称为 $R$ 模 $I$ 的商环
]
注:两条规定的合理性分别来自于双边理想的三条性质
类似可以有自然同态 $R -> quotient(R, I)$\
#theorem[同构定理][
设 $R, S$ 是环,$phi: R -> S$ 是环同态,则:
- $ker (phi)$ 是双边理想
- $phi$ 诱导了环同构 $quotient(R, ker(phi)) -> phi(R)$
]
#theorem[对应定理][
设 $I$ 是 双边理想,则:
$
{J | J "是左/右/双边理想", I subset J} <-> {J | J "是"quotient(R, I) "的双边理想"}
$
之间存在保持各种结构的一一对应
]
#example[][
$
funcDef(phi, R[G], R, sum a_j x_i, sum a_j)
$
是自然的同态,且有:
$
ker(phi) = (g - 1 | g in G)
$
]
#example[][
$quotient(QQ[x, y, z], (x - y^2, y - z^3)) tilde.eq QQ[z]$
]
#proof[
我们发现:
$
x y = (x - y^2 + y^2)y = y^3 + *(x-y^2)\ = (y - z^3 + z^3)^3 + *(x -y^2) = z^9 + *(x - y^2) + *(y - z^3)
$
其他项类似可得
]
上面例子说明,某种意义上我们确实可以认为在商环中,$x = y^2, y = z^3$,换言之商环就是把一些元素等同起来,从而简化了环的结构。
#example[][
容易验证:
$
RR[x] &-> CC\
$
]
== 中国剩余定理
#theorem[中国剩余定理][
设 $m_i$ 两两互素,则:
$
funcDef(phi, ZZ, quotient(ZZ, m_1 ZZ) times ... times quotient(ZZ, m_n ZZ), a, (a mod m_1, ..., a mod m_n))
$
是满同态,且:
$
ker (phi) = quotient(ZZ, m_1 ZZ) times ... times quotient(ZZ, m_n ZZ)
$
]
为了推广这个结论,我们一定程度上需要推广互素的概念:
#definition[][
称交换环的两个理想 $I, J$ 互素,如果:
$
I + J = R
$
]
这个定义的思路来自于数论中的裴蜀定理\
#theorem[推广的中国剩余定理][
设 $I_i$ 是交换环 $R$ 的理想
+
$
funcDef(phi, ZZ, I_1 times ... times I_n, a, (a mod m_1, ..., a mod m_n))
$
是同态,且 $ker(phi) = I_1 sect I_2 sect ... sect I_n$
+ 若 $I_i$ 两两互素,则:
+ $phi$ 是满射
+ $I_1 sect I_2 sect ... sect I_n = I_1 I_2 ... I_n$
]
#proof[
- 先证 $n=2$ 的情形,剩下的采用归纳法\
(1)结论是显然的,只证 (2)
+ 注意到:
$
R = I_1 + I_2 => 1 = a + b, a in I_1, b in I_2
$
从而对 $forall c in I_1 sect I_2$,将有:
$
c = c a_1 + c a_2
$
而由理想的定义,$c a_1 in I_1, c a_2 in I_2$,进而由理想乘积的定义立得 $c in I J$\
另一侧由前面的命题是显然的,因此 $I_1 I_2 = I_1 sect I_2$
+ 继续使用上面的 $a, b$,注意到:
$
phi(a) = (a_1 + I_1, a_1 + I_2) = (0, 1 - a_2 + I_2) = (0, 1 + I_2) \
phi(b) = ... = (1 + I_1, 0)
$
因此 $(x + I_1, y + I_2) = phi(x a + y b) in im(phi)$,这就表明 $phi$ 是满射
- 再使用归纳法,关键是证明:
$
I_1 "与" I_2 I_3 ... I_k "互素"
$
设:
$
1 = a_i + b_i, b_i in I_i, a_i in I_1
$
则:
$
1 = product_i (a_i + b_i) = sum_i * a_i + product_i b_i
$
由理想定义,$sum_i * a_i in I_1$,而 $product_i b_i in I_2 I_3 ... I_n$,这就给出了证明
]
== 极大理想、素理想
#definition[极大理想][
设 $R$ 是环,$m$ 是 $R$ 的双边理想,若 $m != R$ 且没有包含 $m$ 的双边理想,则称 $m$ 是极大理想
]
#example[][
- 在 $ZZ$ 中,$quotient(ZZ, p ZZ)$ 就是极大理想
- 在 $RR[x]$ 中,$(p, g(x))$ ,其中 $g(x)$ 是不可约多项式,也是极大理想
- 设 $G$ 是有限群,$CC[G] supset I_G = (g-1, g in G)$ 是极大理想,且 $quotient(R, I_G) = CC$
]
#theorem[][
任何非平凡的理想 $I$ 都包含在某个极大理想中
]
#proof[
这个证明需要用到 Zoun 引理。
取 $S$ 是所有 $R$ 中包含 $I$ 的非平凡理想,对于包含关系,我们有 $S$ 是偏序集。在这个集合上我们试图套用 Zoun 引理,只需验证所有链都有上界即可。
设 $I_x space x in S$ 是链,显然:
$
union_(x in S) I_x
$
是这个链的一个上界,它是理想也是显然的,只需证明它不是 $R$。\
事实上,注意到:
$
1 in.not I_x
$
进而 $1 in.not union_(x in S) I_x$
这就完成了证明
]
#lemma[][
交换环 $R$ 是域当且仅当 $R$ 中只有平凡理想
]
#proof[
若 $R$ 是域,则 $R$ 中只有平凡理想显然。\
反之,若 $R$ 中只有平凡理想,取 $a in R, a != 0$,则 $(a)$ 是非平凡理想,从而 $(a) = R$,因此存在 $b$ 使得 $a b = 1$
]
注:该引理仅对交换环成立。事实上,若取 $R$ 为全体 $n times n$ 矩阵,则没有非平凡理想,但它也不是域。
#proposition[][
设 $R$ 是交换环,则 $I$ 是极大理想当且仅当 $quotient(R, I)$ 是域
]
#proof[
由第四同构定理,$I$ 是极大理想当且仅当 $quotient(R, I)$ 仅有平凡理想,进而由引理知结论成立。
]
#definition[素理想][
设 $R$ 是环,$P$ 是非平凡理想,称 $P$ 是素理想,若:
$
a b in P => a in P or b in P
$
]
#proposition[][
设 $R$ 是环,$P$ 是非平凡理想,则 $P$ 是素理想当且仅当 $quotient(R, P)$ 是整环
]
#proof[
取 $pi$ 是自然同态:
- $arrow.l.double:$ 设 $a, b in R, a b in P$,则:
$
pi(a b) = 0 => pi(a) pi(b) = 0 => a in P or b in P
$
- $arrow.r.double:$ 设 $a, b in R, pi(a) pi(b) = 0$,则:
$
pi(a) pi(b) = 0 => pi(a b) = 0 => a b in P => a in P or b in P => pi(a) = 0 or pi(b) = 0
$
]
#corollary[][
交换环中的极大理想都是素理想。
]
注:反之不然,$ZZ[x]$ 中 $(p)$ 是素理想,但不是极大理想
#proposition[][
设 $R$ 是交换环:
+ 设 $I_i$ 是一些理想,$P$ 是素理想。若:
$
P subset sect_(i in I) I_i
$
则存在 $i$ 使得 $I_i subset P$
+ 若:
$
P = sect_(i in I) I_i
$
则 $P = I_i$
+ 设 $P_i$ 是一些素理想,$I$ 是理想且:
$
I subset union_(i in I) P_i
$
则:
$
exists i: I subset P_i
$
]
#proof[
若对每个 $i$,存在 $x_i$ 使得 $x_i in.not P$,则:
$
product_i x_i in sect_(i in I) I_i
$
又由素理想定义:
$
product_i x_i in.not P
$
这表明:
$
P subset.not sect_(i in I) I_i
$
对于结论二,天生有:
$
P = sect_(i in I) I_i subset I_i
$
因此结论成立
对于结论三,用归纳法证明反向结论:若 $I$ 不在 $n$ 个素理想之中,则 $I$ 不在这些素理想的并之中。
- $n = 1$ 时显然
- 假设 $n - 1$ 时成立,则对 $x_i in I$:
- 若存在某个 $x$ 不在所有 $P_i$ 中,由归纳假设结论显然成立。
- 否则,设 $forall x : x in.not union_(k=1)^(n-1) P_k => x in P_n$\
根据条件,可取 $x_i in.not P_i$,由假设知:
$
x_1, x_2, ..., x_(n-1) in P_n\
x_n in.not P_n
$
从而考虑:
$
product_i x_i
$
]
#lemma[][
设 $f$ 是环同态,$J$ 是素理想,则 $Inv(f)(J)$ 也是素理想
]
#proof[
取 $pi: f(R) -> quotient(f(R), J)$,则对 $pi compose f$ 使用第一同构定理:
$
quotient(R, Inv(f)(J)) <= quotient(f(R), J)
$
上式右端是整环,从而左端也是整环,进而 $Inv(f)(J)$ 是素理想
]
== 整环
这节中所有都讨论的是整环,也即没有零因子的交换环。
#definition[整除][
称 $a | b$,如果:
- 存在 $q$ 使得 $b = a q$
- $b in (a)$
(以上两者等价)
]
#definition[素元][
称一个元素 $p$ 是素元,如果:
- $(p)$ 是素理想
- $p | a b <=> p | a or p | b$
(以上两者等价)
]
#definition[相伴][
两个元素 $a, b$ 被称为相伴的,如果:
- $a = b u$ 且 $u$ 是单位(是可逆元)
- $a = b u and b = a v$
(以上两者等价)
]
相伴的引入是因为我们无法避免 $p = (-1)(-p)$ 之类我们不想要的非平凡分解,这种情况的产生依赖于 $-1$ 是可逆元,我们希望排除这些情况
#definition[不可约元][
称一个元素 $p$ 不可约,如果:
$
p = a b <=> a "为单位或" b "为单位"
$
]
#proposition[][
$p$ 是素元 $=>$ $p$ 是不可约元
]
#proof[
假设 $p$ 是可约的素元,也即存在不是单位元的 $u, v$,$p = u v$\
由 $(p)$ 是素理想,不妨设 $u in (p)$,也即 $u = k p$\
这意味着:
$
p = u v = k p v => k v = 1
$
进而 $v$ 是单位,证毕。
]
=== 主理想整环
#definition[主理想整环 Principle Ideal Domain, PID][
称一个整环是主理想整环,如果它的每个理想都是主理想(由一个元素生成)
]
#proposition[][
在 PID 中非平凡素理想等价于极大理想
]
#proposition[][
在 PID 中素元等价于不可约元
]
#example[][
$ZZ[x]$ 不是主理想整环,因为 $(2, x)$ 不是主理想
]
=== 欧几里得整环
#definition[欧几里得整环(ED)][
一个整环 $R$ 称为欧几里得整环,如果存在范数 $N: R -> ZZ_(>=0)$,满足:
+ $N(0) = 0$
+ $forall a, b: exists q, r: a = b q + r, N(r) < N(b)$
]
注:欧几里得整环就是可以做辗转相除法的整环,其中 $q, r$ 未必唯一,但不影响辗转相除
#example[][
+ 域是欧几里得整环(显然每两个元素都能整除)
+ 整数环,多项式环是欧几里得整环
+ 高斯整数环:$ZZ[i] subset CC$ 是欧几里得整环,其中范数就是复数的范数:
给定 $a, b in ZZ[i]$,选取 $q$ 使得 $|Re(q) -a/b|<= 1/2, |Im(q) -a/b|<= 1/2$,此时有:
$
|a - b q| = |b| |a/b - q| <= 1/2 |b| < |b|
$
显然这样的 $q$ 的选取并不唯一,但我们并不关心
+ $ZZ[omega_3]$,其中 $omega_3$ 是三次单位根,也是欧几里得整环
]
#proposition[][
欧几里得整环一定是主理想整环
]
#proof[
和整数的情形很类似。\
设 $I$ 是欧几里得整环 $R$ 的理想,取 $a in I, a != 0$ 使得 $N(a)$ 最小。\
断言 $I = (a)$:\
事实上,若 $b in I$,则:
$
b = a q + r
$
而 $r = b -a q in I, N(r) < N(a)$。由 $a$ 的范数最小性,$r = 0$,从而 $b in (a)$
]
欧几里得整环是历史较早的判断整环是否是主理想整环的方法,但泛用性比较有限。例如,$ZZ[c], c in CC$ 是主理想整环的例子很多,但在通常的范数下是欧几里得整环的例子屈指可数。
=== 唯一分解整环
#definition[唯一分解整环(UFD)][
称一个整环 $R$ 是唯一分解整环,如果:
+ $forall r in R$,$r$ 不是 $0$,单位,则$r$ 可以写成若干个不可约元 $p_1, p_2, ..., p_m$ 的乘积
+ 这种分解在不考虑顺序和相伴的情况下是唯一的,也即若 $r = p_1 p_2 ... p_n = q_1 q_2 ... q_m$,则 $n = m$,且将存在置换 $sigma$ 使得 $sigma(p_1, p_2, ..., p_n)$ 与 $(q_1, q_2, ...,q_n)$ 每个元素都相伴
]
#example[][
- $ZZ[sqrt(-5)]$ 不是唯一分解整环,因为 $6 = 2 times 3 = (1 + sqrt(-5)) (1 - sqrt(-5))$,其中 $1 plus.minus sqrt(5)$ 是不可约元,但不是素元
]
#proposition[][
唯一分解整环中,不可约元等价于素元
]
#proof[
只需证明不可约元是素元:\
假设 $p$ 不可约,且 $p | a b <=> p r = a b$\
将 $a, b$ 写成不可约元的乘积:
$
a = q_1 q_2 ... q_n, b = q_(n+1) q_(n+2) ... q_(n+m\
$
又由分解的唯一性,其中必有一个元素 $q_i$ 与 $p$ 相伴。
设 $p = r q_i$。不妨设 $b = k q_i$,则 $b = k Inv(r) p => p | b$,证毕。
]
#definition[最大公约数(gcd)][
设 $a, b$ 是整环 $R$ 中的元素,称 $d$ 是 $a, b$ 的最大公约数,如果:
- $d | a, d | b$
- $forall c: c | a, c | b => c | d$
也等价于取出 $a, b$ 不可约分解中所有共同(相伴)部分
]
#proposition[][
最大公因数在相伴的意义下是唯一的
]
类似也可以定义最大公倍数(lcm)
#theorem[][
主理想整环都是唯一分解整环
]
#proof[
假设 $r$ 不是 $0$,单位,先证明存在性:
假设 $r$ 不能分解成有限个不可约元的乘积
+ 显然 $r$ 不是不可约元,因此 $r = a_1 b_1$,$a_1, b_1$ 都不是 $0$,单位元
+ 如果 $a_1, b_1$ 都可有限分解,则 $r$ 也可有限分解,矛盾。因此不妨设 $b_1$ 不能有限分解,从而 $b_1$ 可以继续写成 $b_1 = a_2 b_2$
+ 以此类推,我们得到一个无穷序列 $r = a_1 b_1 = a_1 a_2 b_2 = ...$
+ 这说明我们得到了一个无穷升链:
$
(r) subset (b_1) subset (b_2) subset ...
$
考虑:
$
S = union (b_i)
$
它确实是理想。而我们在一个主理想整环中,因此可设:
$
S = (b)
$
有定义,存在 $i$ 使得 $b in (b_i)$,由此 $(b) = (b_i)$,且对所有 $n > i$,均有:
$
(b_n) = (b_i)
$
+ 然而在上面的分解中,若 $(b_i) = (b_(i+1))$,则有:
$
b_i = a_(i+1) b_(i+1)\
b_(i+1) = k b_i
$
从而:
$
b_(i+1) = k a_(i+1) b_(i+1) => k a_(i+1) = 1
$
这将给出 $a_(i+1)$ 是单位,矛盾!
]
=== 多项式环
#lemma[][
域上的多项式环是欧几里得整环,主理想整环,唯一分解整环。
]
#theorem[][
若 $R$ 是唯一分解整环,当且仅当 $R[x]$ 也是唯一分解整环(进而 $n$ 元多项式环也是唯一分解整环)
]
#proof[
若 $R[x]$ 是唯一分解整环,在 $R[x]$ 中取得 $r in R$ 非零、单位,它可被写作 $R[x]$ 中若干不可约元的乘积,从而:
$
r = p_1(x) p_2(x) ... p_m (x)
$
观察两侧多项式的次数,注意到在整环中满足 $deg(f g) = deg(f) + deg(g)$,由此右边每一项次数均为 $0$,这表明右边也都是常多项式,也即:
$
p_i (x) = r_i\
r = r_1 r_2 ... r_m
$
而 $r_i$ 在 $R[x]$ 中不可约,在它的子环中当然不可约,同时由子环关系也易得唯一性。
若 $R$ 是唯一分解整环,先证明一个引理:
#lemma[高斯引理][
设 $p(x) in R[x]$ 是非零多项式,$F$ 是 $R$ 的分式域。
- 若 $p(x)$ 可以写作 $F[x]$ 中若干多项式的乘积,则可以写作 $R[x]$ 中若干多项式的乘积。
- 若其在 $F[x]$ 中可约,则在 $R[x]$ 中也可约。
]
#proof[
设 $p(x) = A(x) B(x)$,$A(x), B(x) in F[x]$ 且次数大于等于 $1$。往证可以将右侧调整为 $R[x]$ 中元素的乘积(也就是通分)。
取 $d_1, d_2 in R-{0}$ 使得:
$
d_1 A(x) = a(x) in R[x], d_2 B(x) = b(x) in R[x]
$
记 $d = d_1 d_2$,将有:
$
d p(x) = a(x) b(x)
$
我们目标是消去 $d$ 中不可约元素:
- 若 $d$ 可逆,则有 $p(x) = Inv(d) a(x) b(x)$,结论成立。
- 若不然,则考虑它的唯一分解:
$
d = product_i p_i^(alpha_i)
$
考虑商环 $quotient(R, (p_i))$,由于 $p_i$ 是素元,故商环是整环。考虑 $R[x] -> quotient(R, (p_i))[x]$ 的自然同态 $pi$,有:
$
0 = pi(d p(x)) = pi(a(x)) pi(b(x))
$
由于这是整环,因此 $pi(a(x)) = 0$ 或 $pi(b(x)) = 0$。\
不失一般性,设 $pi(a(x)) = 0$,这意味着 $a(x)$ 中所有系数都有因子 $p_i$,因此可设:
$
a(x) = p_i a_1(x)
$
因此:
$
(d / p_i) p(x) = a_1(x) b(x)
$
注意到 $d$ 的唯一分解减少了,反复进行即可彻底消去 $d$,最终可得:
$
p(x) = a_n (x) b_n (x)
$
其中 $a_n (x), b_n (x) in R[x]$
]
注:这个引理是经典的若多项式在有理数域上可约,则在整数域上也可约的推广。
回到定理的证明,设 $a(x) in R[x] - {0}$,设它不是单位。取 $d$ 为 $a(x)$ 所有系数的最大公因子,则 $a(x) = d a_1(x)$,$a_1(x)$ 所有系数最大公因子是 $1$。要找 $a(x)$ 的分解只需找到 $a_1(x)$ 的分解即可。\
我们知道以域为系数的多项式环是唯一分解整环,因此在 $F[x]$ 中,可将 $a_1(x)$ 写作若干不可约元的乘积:
$
a_1(x) = P_1(x) P_2(x) ... P_n (x)
$
连续利用高斯引理,可得:
$
a_1(x) = p_1(x) p_2(x) ... p_n (x)
$
其中 $p_i (x) in R[x]$。\
另一方面,每个 $p_i (x)$ 在 $F[x]$ 中不可约,且它的系数的最大公因子也是 $1$(否则它的最大公因子将成为 $a_1(x)$ 的最大公因子),因此 $p_i (x)$ 在 $R[x]$ 中也不可约。\
这就证明了分解的存在性。
再证明其唯一性,设:
$
a_1(x) = p_1(x) p_2(x) ... p_n (x) = q_1(x) q_2(x) ... q_m (x)
$
+ 考虑其中非常多项式的项:
把它们都看作 $F[x]$ 中多项式,这也给出了两种不可约分解(由高斯引理在 $R[x]$ 中不可约的元素在 $F[x]$ 中也不可约),进而它们在 $F[x]$ 中在不考虑顺序和相伴的意义下唯一。不妨设它们顺序相同,进而:
$
p_i (x) = k_i q_i (x), k_i in F
$
通分一下即得:
$
a p_i (x) = b q_i (x), a, b in R
$
- 由于 $p_i (x), q_i (x)$ 在 $R[x]$ 中不可约,因此系数的最大公因子是 $1$。从而左右两端系数的最大公因子分别为 $a, b$,这表明 $a ~ b$ 或者 $a = u b$。
- 进而:
$
a p_i (x) = b q_i (x) => u b p_i (x) = b q_i (x) => q_i (x) = u p_i (x)
$
+ 消去这些元素,两侧都剩下一些 $R$ 中的不可约元。由 $R$ 中的分解唯一性,这些 $R$ 中的不可约元当然在不考虑顺序和相伴的意义下唯一。
这就证明了唯一性。
]
注:类似的性质对于欧几里得整环,主理想整环都不对,例如 $ZZ[x]$ 不是主理想整环,但却是唯一分解整环。
由上面的定理,如何判断多项式是否可约便成为重要的话题。由于高斯引理,我们不妨只考虑域系数的多项式。\
#lemma[][
设 $f(x)$ 是 $F[x]$ 中二次或三次多项式,则 $f(x)$ 可约当且仅当它可约。
]
#proof[
由于有根当且仅当可以分解出一次多项式,而二次或三次多项式若可约一定有一次因式,故结论显然。
]
#lemma[][
设整系数多项式 $f(x) = sum_(i = 0)^n a_i x^i$ 有有理根 $p/q$,其中 $p, q$ 互素,则 $p | a_0, q | a_n$
]
#theorem[爱森斯坦判别法 Eisenstein's criterion][
设 $R$ 是整环,$f(x) = sum_(i = 0)^n a_i x^i$ 是 $R[x]$ 中的多项式,$a_n != 0$。设存在素理想 $p$ 使得:
+ $a_n in.not p$
+ $a_i in p, i != n$
+ $a_o in.not p^2$
则多项式不可约。
]
#proof[
不妨设 $a_n = 1$\
若 $f(x) = p(x) q(x)$,在整环 $(quotient(R, p))[x]$ 中考虑,由高斯引理我们进而可以在分式域中考虑。取 $pi$ 是 $R[x]$ 到 $quotient(R, p)$ 分式域上多项式的自然同态,发现:
$
pi(f(x)) = x^n
$
进而 $pi(p(x)) = x^i, pi(q(x)) = x^(n-i)$
另一方面,$p(x), q(x)$ 应当都是首一(或相伴于首一多项式),因此 $pi(p(x)), pi(q(x))$ 应当不降低次数,故 $i, n-i !=0$。\
这表明 $p(x), q(x)$ 的常数项 $p_0, q_0 in p$,进而 $p_0 q_0 = a_0 in p^2$,矛盾!
]
#example[][
$EE_p (x) = (x^p - 1)/(x-1) = x^(p-1) + x^(p-2) + ... + 1$ 是不可约多项式。\
将 $x$ 替换成 $x - 1$,得到:
$
(x-1)^(p-1) + (x-1)^(p-2) + ... + 1
$
展开可以验证,它满足爱森斯坦判别法的条件,因此它不可约。\
这个多项式被称为分圆多项式。
]
#theorem[][
设 $F$ 是域,$p(x)$ 是 $F[x]$ 中不可约元当且仅当 $quotient(F[x], (p[x]))$ 是域
]
#proof[
注意到域上的多项式环是主理想整环,其中素理想等价于极大理想,故结论显然成立。
]
#example[][
在域 $ZZ_3$ 中,注意到:
$
p(x) = x^2 + x - 1
$
没有根,因此它不可约,进而 $quotient(ZZ_3[x], (p[x]))$ 是域。
]
#lemma[][
域上多项式的根的数量不超过多项式的次数
]
#corollary[][
域上有限乘法群是循环群
]
#proof[
设 $G$ 是域 $F$ 上的有限乘法群。设 $exp(G) = n$,则 $G$ 中存在 $x$ 使得 $ord(x) = n$,进而对群中所有元素,均有:
$
x^n = 1
$
另一方面,这个多项式至多有 $n$ 个根,因此 $|G| <= n$,这表明 $|G| = n = ord(x)$,进而 $G$ 是循环群。
]
=== 二次整数环(Quadratic integer ring)
#definition[二次整数环][
设 $D$ 是无平方因子的整数,称 $D$ 的二次整数环为:
$
OO = cases(
ZZ[sqrt(D)] = {x + y sqrt(D) space x, y in ZZ} space "若" D = 2,3 mod 4,
ZZ[(1 + sqrt(D))/2] = {(x + y sqrt(D))/2 space x, y in ZZ} space "若" D = 1 mod 4
)
$
并在其上定义自然的共轭,范数 $a overline(a)$(保证其在整数环中)。
]
注:
- 这里的范数未必是正的,但满足 $N(a b) = N(a) N(b)$
- 这里的 $N(a)$ 可以理解为 $ZZ[D]$ 实际上是 $ZZ$ 上的二维线性空间,它的 $N$ 就是线性映射:$[x | x -> a x]$ 的行列式
#lemma[][
二次整数环中所有的单位元就是范数为 $plus.minus 1$ 的元素
]
#example[Perl 方程][
设 $D <0, D = 2,3 mod 4$,从而 Perl 方程:
$
(x - sqrt(D) y)(x + sqrt(D) y) = plus.minus 1
$
的解都满足 $N(x + sqrt(D)), N(x - sqrt(D)) = plus.minus 1$,从而它们都是单位\
换言之, Perl 方程可以仅在 $OO[D]$ 中所有可逆元构成的群上讨论
]
#theorem[][
对所有二次整数环,是主理想整环当且仅当它是唯一分解整环
]
#proposition[][
- 对所有 $D < 0$,有且仅有 $9$ 个 $D$ 使得 $OO[D]$ 是唯一分解整环,包括典型的高斯整数环 $OO(-1)$
- 对 $D > 0$,猜想有无穷多个 $D$ 使得 $OO[D]$ 是唯一分解整环。计算机验证下大约一半的 $D$ 满足这个条件,因此也有更进一步的猜想。
]
#theorem[费尔马定理][
素数 $p$ 可以写成两个整数的平方和,也即:
$
exists a, b: p = a^2 + b^2
$
当且仅当 $p = 2$ 或者 $p = 1 mod 4$
]
这是费尔马年代非常不平凡的定理,但用现在的理论来看可以给出很直观的证明,本节后半部分即是该定理的证明。
#lemma[][
高斯整数环中,范数为素数的元素都不可约
]
#lemma[][
高斯整数环中,所有不可约元的范数都是素数或者素数的平方
]
#proof[
设 $pi$ 是不可约元,我们需要用到:
$
ZZ sect (pi)
$
是 $ZZ$ 的素理想。事实上,容易建立 $ZZ$ 到 $ZZ[i]$ 之间的同态。之前我们证明了素理想的原像还是素理想,而当 $(pi)$ 拉回 $ZZ$ 时,它的原像当然就是 $ZZ sect (pi)$
因此,设 $ZZ sect (pi) = (p)$,显然有:
$
p = pi a => p^2 = N(p) = N(pi) N(a)
$
这自然给出 $N(pi) = p, p^2$
]
#lemma[][
高斯整数环中,所有不可约元可以分类为:
- $1 + i$,其范数为 $2$
- $N(pi) = p^2$,素数 $p$ 满足 $p = 3 mod 4$
- $x + y i$ 与 $x - y i$,若 $x^2 + y^2 = p$ 为素数,且 $p = 1 mod 4$
]
#proof[
对任意不可约元 $pi$:
+ 若 $N(pi) = 2$,显然有 $pi = 1 plus.minus i$,它们是相伴的不可约元
+ 对任意素数 $p in ZZ[i]$,注意到要么 $p$ 不可约,要么 $p = a b$,$N(a) = N(b) = p$,这给出 $a, b$ 不可约
- 若 $p | N(pi), p = 3 mod 4$。注意到:
$
a^2 + b^2 = 0, 1, 2 mod 4
$
这表明不存在 $N(a) = p$ 的元素,因此 $N(pi) = p^2$。另一方面,这也表明 $N(pi) = p^2$ 的元素一定都不可约
- 若 $p | N(pi), p = 3 mod 4$,断言此时 $p$ 可约。注意到:
+ 模 $p$ 的乘法群是循环群,取其生成元 $x$,则 $ord(x) = p - 1$
+ 考虑 $x^((p-1)/2)$,它的阶为 $2$,显然 $x^((p-1)/2) = minus 1$
+ 进而,取 $a = x^((p-1)/4)$,我们有:
$
p | a^2+1 = (a+i)(a-i)
$
+ 若 $p$ 不可约,这将给出:
$
p | a + i or p | a - i
$
由整除的定义,这是荒谬的
这就证明了 $p$ 一定可约,设 $p = a b$,注意到 $a overline(a) = a b => b = overline(a)$,且 $N(a) = p$,这表明 $a, overline(a)$ 是一对不可约元
]
]
#chapterTwo |
|
https://github.com/frectonz/the-pg-book | https://raw.githubusercontent.com/frectonz/the-pg-book/main/book/073.%20judgement.html.typ | typst | judgement.html
Two Kinds of Judgement
April 2007There are two different ways people judge you. Sometimes judging
you correctly is the end goal. But there's a second much more
common type of judgement where it isn't. We tend to regard all
judgements of us as the first type. We'd probably be happier if
we realized which are and which aren't.The first type of judgement, the type where judging you is the end
goal, include court cases, grades in classes, and most competitions.
Such judgements can of course be mistaken, but because the goal is
to judge you correctly, there's usually some kind of appeals process.
If you feel you've been misjudged, you can protest that you've been
treated unfairly.Nearly all the judgements made on children are of this type, so we
get into the habit early in life of thinking that all judgements
are.But in fact there is a second much larger class of judgements where
judging you is only a means to something else. These include college
admissions, hiring and investment decisions, and of course the
judgements made in dating. This kind of judgement is not really
about you.Put yourself in the position of someone selecting players for a
national team. Suppose for the sake of simplicity that this is a
game with no positions, and that you have to select 20 players.
There will be a few stars who clearly should make the team, and
many players who clearly shouldn't. The only place your judgement
makes a difference is in the borderline cases. Suppose you screw
up and underestimate the 20th best player, causing him not to make
the team, and his place to be taken by the 21st best. You've still
picked a good team. If the players have the usual distribution of
ability, the 21st best player will be only slightly worse than the
20th best. Probably the difference between them will be less than
the measurement error.The 20th best player may feel he has been misjudged. But your goal
here wasn't to provide a service estimating people's ability. It
was to pick a team, and if the difference between the 20th and 21st
best players is less than the measurement error, you've still done
that optimally.It's a false analogy even to use the word unfair to describe this
kind of misjudgement. It's not aimed at producing a correct estimate
of any given individual, but at selecting a reasonably optimal set.One thing that leads us astray here is that the selector seems to
be in a position of power. That makes him seem like a judge. If
you regard someone judging you as a customer instead of a judge,
the expectation of fairness goes away. The author of a good novel
wouldn't complain that readers were unfair for preferring a
potboiler with a racy cover. Stupid, perhaps, but not unfair.Our early training and our self-centeredness combine to make us
believe that every judgement of us is about us. In fact most aren't.
This is a rare case where being less self-centered will make people
more confident. Once you realize how little most people judging
you care about judging you accurately—once you realize that because
of the normal distribution of most applicant pools, it matters least
to judge accurately in precisely the cases where judgement has the
most effect—you won't take rejection so personally.And curiously enough, taking rejection less personally may help you
to get rejected less often. If you think someone judging you will
work hard to judge you correctly, you can afford to be passive.
But the more you realize that most judgements are greatly influenced
by random, extraneous factors—that most people judging you are
more like a fickle novel buyer than a wise and perceptive
magistrate—the more you realize you can do things to influence the
outcome.One good place to apply this principle is in college applications.
Most high school students applying to college do it with the usual
child's mix of inferiority and self-centeredness: inferiority in
that they assume that admissions committees must be all-seeing;
self-centeredness in that they assume admissions committees care
enough about them to dig down into their application and figure out
whether they're good or not. These combine to make applicants
passive in applying and hurt when they're rejected. If college
applicants realized how quick and impersonal most selection processes
are, they'd make more effort to sell themselves, and take the outcome
less personally.Spanish TranslationRussian TranslationArabic Translation
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/wrap-indent/0.1.0/wrap-indent.typ | typst | Apache License 2.0 | /*
wrap-indent package implementation.
Usage:
```typ
#show terms.item: allow-wrapping
/ #wrap-in(rect):
Content spanning multiple
lines gets wrapped.
But non-indented content does not.
```
See the README for more examples and explanation.
*/
/// Internal variables that you don't need to worry about
#let wrap-indent-unique-key = "wrap-indent-package-unique-key"
#let wrap-indent-state = state(wrap-indent-unique-key)
/// Takes a function and prepares it for the term
/// item by storing it in state.
///
/// Use with term list syntax like: `/ #wrap-in(rect): Stuff!`
///
/// This requires first using: `#show terms.item: allow-wrapping`
#let wrap-in(func) = wrap-indent-state.update((func, ))
/// Allow wrapping with a show rule on `terms.item` to make
/// the `wrap-in()` function work with term list syntax!
///
/// Show rule example:`#show terms.item: allow-wrapping`
///
/// Then use term list syntax like: `/ #wrap-in(rect): Stuff!`
///
/// When a term list item is shown, we check if the `term` looks
/// has our unique key. If so, we place our state as content, then
/// retrieve our input function from the state under context.
/// Finally, we call the input function with the term's `description`
/// as the function's argument.
#let allow-wrapping(item) = {
if item.term.at("key", default: none) == wrap-indent-unique-key {
// Place the state
item.term
// Then get the wrapper function using context
context {
let wrapper = wrap-indent-state.get().at(0)
wrapper(item.description)
}
} else {
item
}
}
/*
== Sidenote:
I needed to place the input function in an array -- although a
dictionary would also work -- otherwise the function turns into
raw text before I can access it from the state. It seems plain
function values automatically evaluate to content even if the
state is value-only and not displayed? Odd.
Here's a minimal example for the sidenote:
```typ
#{
let st = state("id")
st.update(rect)
context st.get() // already evaluated to raw text?
st.update((rect,))
context st.get().at(0)("in a rect")
// ^ but this works?
}
```
*/
|
https://github.com/crd2333/Astro_typst_notebook | https://raw.githubusercontent.com/crd2333/Astro_typst_notebook/main/src/components/TypstTemplate/math.typ | typst | #import "@preview/quick-maths:0.1.0": shorthands
#import "@preview/mitex:0.2.4": *
// 以类似格式添加符号缩写
#let shorthand = shorthands.with(
($+-$, $plus.minus$),
($|-$, math.tack),
($=<$, $<=$), // =< becomes '≤'
($<==$, math.arrow.l.double), // Replaces '≤',似乎需要某一边有东西才能正常工作,原因未知
($~$, $med$),
)
// 以类似格式添加文本缩写
#let le = $<=$
#let ge = $>=$
#let infty = $infinity$
#let int = $integral$
#let wave = $tilde$ // alternative to ~
// $wave$, $prop$, $approx$
#let r1 = $rho_1$
#let r2 = $rho_2$
#let si = $sigma$
#let Si = $Sigma$
#let s1 = $sigma_1$
#let s2 = $sigma_2$
#let de = $delta$
#let De = $Delta$
#let d1 = $delta_1$
#let D1 = $Delta_1$
#let d2 = $delta_2$
#let D2 = $Delta_2$
#let al = $alpha$
#let a1 = $alpha_1$
#let a2 = $alpha_2$
#let an = $alpha_n$
#let aN = $alpha_N$
#let ta = $tau$
#let th = $theta$
#let t1 = $theta_1$
#let t2 = $theta_2$
#let tn = $theta_n$
#let tN = $theta_N$
#let ep = $epsilon$
#let ep1 = $epsilon_1$
#let ep2 = $epsilon_2$
#let ga = $gamma$
#let g1 = $gamma_1$
#let g2 = $gamma_2$
#let la = $lambda$
#let l1 = $lambda_1$
#let l2 = $lambda_2$
#let ln = $lambda_n$
#let lN = $lambda_N$
#let La = $Lambda$
#let L1 = $Lambda_1$
#let L2 = $Lambda_2$
#let Ln = $Lambda_n$
#let LN = $Lambda_N$
#let p1 = $pi_1$
#let p2 = $pi_2$
#let pn = $pi_n$
#let pN = $pi_N$
#let na = $nabla$
#let di = $dif$
#let pa = $diff$ // partial
// 文本运算符
#let argmax = math.op("argmax")
// 带圈数字,在 sym 里没找到,Unicode 字符中的又太小,故自己实现,希望没 bug
#let czero = box(baseline: 15%, circle(radius: 5pt)[#align(center + horizon, "0")]) // circle zero
#let cone = box(baseline: 15%, circle(radius: 5pt)[#align(center + horizon, "1")]) // circle one
#let ctwo = box(baseline: 15%, circle(radius: 5pt)[#align(center + horizon, "2")])
#let cthree = box(baseline: 15%, circle(radius: 5pt)[#align(center + horizon, "3")])
#let cfour = box(baseline: 15%, circle(radius: 5pt)[#align(center + horizon, "4")])
#let cfive = box(baseline: 15%, circle(radius: 5pt)[#align(center + horizon, "5")])
#let csix = box(baseline: 15%, circle(radius: 5pt)[#align(center + horizon, "6")])
#let cseven = box(baseline: 15%, circle(radius: 5pt)[#align(center + horizon, "7")])
#let ceight = box(baseline: 15%, circle(radius: 5pt)[#align(center + horizon, "8")])
#let cnine = box(baseline: 15%, circle(radius: 5pt)[#align(center + horizon, "9")])
|
|
https://github.com/Duolei-Wang/modern-sustech-thesis | https://raw.githubusercontent.com/Duolei-Wang/modern-sustech-thesis/main/template/sections/content.typ | typst | MIT License | // 离线编译运行
#import "../lib.typ": sustech-thesis
// 上线之后采用此段注释
// #import "@preview/modern-sustech-thesis:0.1.1": sustech-thesis
// 离线测试使用
// #import "@local/modern-sustech-thesis:0.1.1": sustech-thesis
// template-curryst
#import "@preview/curryst:0.3.0": rule, proof-tree
#import "@preview/ctheorems:1.1.2": *
#set text(
font: "sims"
)
#show: thmrules.with(
qed-symbol: $square$
)
// template-theorem
#let indent = h(2em)
#let define = thmbox(
"def",
"定义"
)
#let theorem = thmbox(
"def",
"定理",
)
#let prop = thmbox(
"def",
"性质",
)
#let notation = thmbox(
"def",
"符号",
)
#let example = thmbox(
"def",
"例子",
)
#let remark = thmbox(
"def",
"评注",
)
#let proof = thmproof(
"proof",
text[证明],
)
#set math.cases(
gap: 1em,
)
// template-sustech-thesis
#let indent = h(2em)
#show: sustech-thesis.with(
isCN: false,
information: (
title: (
[Title line 1],
[Title line 2],
[],
),
subtitle: [Substitle],
abstract-body: (
[#lorem(40)],
[#lorem(40)]
),
keywords: (
[Keyword 1],
[Keyword 2],
[Keyword 3],
[Keyword 4],
),
author: [<NAME>],
department: [Math],
major: [Mathematics and Applied Mathematics],
advisor: [Academic#h(1em)Advisor],
),
information-EN: (
title: (
[#lorem(10)],
[#lorem(10)],
),
subtitle: [#lorem(4)],
abstract-body: (
[#lorem(40)],
[#lorem(40)]
),
keywords: (
[#lorem(10)],
[#lorem(10)],
),
author: [慕青],
department: [Math],
major: [Math],
advisor: [Advisor],
),
toc-title: [目录],
)
= h1
== h1.1
= h2
== h2.1
#lorem(299)@wang2010guide
#pagebreak()
#bibliography(
"refer.bib",
) |
https://github.com/7sDream/fonts-and-layout-zhCN | https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/02-concepts/bezier.typ | typst | Other | #import "/template/template.typ": web-page-template
#import "/template/components.typ": note
#import "/lib/glossary.typ": tr
#show: web-page-template
// ## Bézier curves
== 贝塞尔曲线
// We've already mentioned that outlines in digital fonts are made out of lines and Bézier curves. I'm not going to spend a lot of time getting into the mathematics of how Bézier curves work, but if you're doing any kind of implementing - rasterising, font editing or font manipulation - you would be wise to spend some time looking through Mike Kamerman's [A Primer on Bézier Curves](https://pomax.github.io/BezierInfo-2/).
前文已经介绍过,数字字体中的#tr[outline]是由直线和贝塞尔曲线组成的。本书不会过多介绍关于贝塞尔曲线的数学知识,但如果你想实际实现一些东西——比如字体#tr[rasterization]器、编辑器、操作工具——的话,可以花时间看看<NAME>man的《A Primer on Bézier Curves》@Kamerman.PrimerBezier。
// What is worth considering, for our purposes, is the difference between *quadratic* and *cubic* Bézier curves. As we've seen, Bézier curves are specified by a start point, an end point, and one or more control points, which pull the curve towards them. A quadratic Bézier curve has one control point, whereas a cubic Bézier curve has two. (Higher order curves are possible, but not used in type design.)
但对于本书的主旨来说,*二次*和*三次*贝塞尔曲线的区别是值得讨论一下的。贝塞尔曲线是由一个起始点、一个终结点,和一个或多个控制点构造而成的。这些控制点会将曲线往它们所在的位置拉动。二次贝塞尔曲线有一个控制点,而三次的会有两个。更高次的贝塞尔曲线也是存在的,但没有在字体设计中使用。
// TrueType outlines use quadratic Bézier curves, whereas PostScript fonts use cubic Bézier curves; it's possible to convert from a quadratic curve to a cubic curve and get the same shape, but it's not always possible to perfectly go from a cubic curve to a quadratic curve - you have to approximate it. Again, you'll find all the details in A Primer on Bézier Curves.
TrueType中使用二次贝塞尔曲线来绘制#tr[outline],PostScript中则是使用三次贝塞尔曲线。在不改变曲线形状的情况下,将二次曲线转换为三次曲线是可行的。但反过来却不行,这样得到的二次曲线会是原三次曲线的近似。关于曲线转换的详细内容,也请参考《A Primer on Bézier Curves》。
// I've said that outlines are made out of Bézier curves, but a more accurate way to say that is that they are made of a series of Bézier curves joined together. When two Bézier curves meet, they can either meet in a *smooth* join or at a *corner*. You will use both of these types of join in type design; generally, you will want things to be smooth if they would form part of the same "stroke" of a pen or brush, but there are places where you'll need corner joins too. Here you see a portion of the top edge of a letter m, which contains a smooth join and a corner join.
之前我们说#tr[outline]由贝塞尔曲线组成,实际上更准确的表述应该是它由一系列贝塞尔曲线首尾相连组成。当两条贝塞尔曲线相连时。它们可以“平滑”的连接,也可以形成“转角”。这两种连接方式在设计中都会使用。通常来说,如果是在勾勒笔或笔刷画出的笔画的话,会平滑地连接曲线。但也有一些地方是需要生硬的转角的。你可以从@figure:m-top 中看到,字母 m 的上顶端同时使用了光滑连接和转角。
#figure(caption: [
m 上部的轮廓曲线
])[#include "mtop.typ"] <figure:m-top>
// The main condition for a smooth join is that you can draw a straight line through an on-curve point and the both off-curve points (often called "handles" because you can pull on them to drag the curve around) to the left and right of it - as you can see in the first join above. The second join has handles at differing angles, so it is a corner join.
平滑连接所需的条件是:连接点与其左右两侧的相邻控制点在同一条直线上。这种在曲线上的(起始和终结)点和控制点之间的连线通常被叫做“手柄”,因为你可以通过拉动它们来控制曲线。@figure:m-top 中的第一个连接处就是平滑的,而第二个连接处的两个手柄角度不同,它们并不在一条直线上,所以是一个转角连接。
// Your font editor will have ways of creating smooth and corner joins; the problem is that what we've called a "smooth" join, formed by aligning the angles of the handles (which is called *C1* continuity), isn't guaranteed to be *visually* smooth. Consider the following two paths:
字体编辑器通常会有各种方式来构造平滑或者转角连接。但问题是,即使是这种手柄在同一直线上的光滑连接(这也被称为*C1*连续),也不能保证它在*视觉*上是光滑的。看看@figure:g2 中的两个示例。
#figure(caption: [
连续性的区别
])[#include "g2.typ"] <figure:g2>
// Do they look very similar? How about if you think of them as roads - which is easier to drive on? The upper path will require you to bring your steering wheel back to central for the straighter portion where the two Bézier curves join, before turning right again. The lower path continuously curves so that if you get your steering wheel in the right place at the start of the road, you hardly need to turn the wheel as you drive along it. Many font editors have facilities for matching the curvature on both sides of the on-curve point (which is known as G2 continuity), either natively or through add-ons like [SpeedPunk](https://yanone.de/software/speedpunk/) or [SuperTool](http://www.corvelsoftware.co.uk/software/supertool/).
它们看起来很相似。但如果把这两条线想象成道路的话,在哪一条路上开车更简单更舒适呢?上面那条路,你在起点处方向盘是在初始位置,而在两条贝塞尔曲线的交点处也需要把方向盘回正,然后再重新打方向继续转弯。而下面那条路,你可以在起点就把方向盘摆到一个正确的位置,而沿着这条路开的时候几乎不需要再动方向。许多字体编辑器都提供了为曲线上的点进行这种曲率匹配(也称为 G2 连续)的能力。有些软件内置这种功能,其他一些也可以通过Speed Punk#[@Yanone.SpeedPunk]或SuperTool#[@Cozens.SuperTool.2019]等插件完成。
|
https://github.com/Quaternijkon/notebook | https://raw.githubusercontent.com/Quaternijkon/notebook/main/content/数据结构与算法/.chapter-数据结构/哈希表/同构字符串.typ | typst | #import "../../../../lib.typ":*
=== #Title(
title: [同构字符串],
reflink: "https://leetcode.cn/problems/isomorphic-strings/description/",
level: 1,
)<同构字符串>
#note(
title: [
同构字符串
],
description: [
给定两个字符串 s 和 t ,判断它们是否是同构的。
如果 s 中的字符可以按某种映射关系替换得到 t ,那么这两个字符串是同构的。
每个出现的字符都应当映射到另一个字符,同时不改变字符的顺序。不同字符不能映射到同一个字符上,相同字符只能映射到同一个字符上,字符可以映射到自己本身。
],
examples: ([
输入:s = "egg", t = "add"
输出:true
],[
输入:s = "foo", t = "bar"
输出:false
],[
输入:s = "paper", t = "title"
输出:true
]
),
tips: [
1 <= s.length <= $5 * 10^4$
t.length == s.length
s 和 t 由任意有效的 ASCII 字符组成
],
solutions: (
( name:[散列表],
text:[
使用两个映射表分别记录字符串 `s` 和 `t` 中每个字符的第一次出现位置,然后检查两个字符串对应位置的字符是否具有相同的第一次出现位置,以判断是否同构。
],code:[
```cpp
class Solution {
public:
bool isIsomorphic(string s, string t) {
int n=s.size();
unordered_map<char,int> smap;
unordered_map<char,int> tmap;
for(int i=0;i<n;++i)
if(smap[s[i]]==0) smap[s[i]]=i+1;
for(int i=0;i<n;++i)
if(tmap[t[i]]==0) tmap[t[i]]=i+1;
for(int i=0;i<n;++i)
if(smap[s[i]]!=tmap[t[i]]) return false;
return true;
}
};
```
]),
),
gain:none,
) |
|
https://github.com/VisualFP/docs | https://raw.githubusercontent.com/VisualFP/docs/main/SA/design_concept/content/design/design_iteration_2_decisions_application.typ | typst | #import "../../../style.typ": *
#import "@preview/tablex:0.0.6": tablex, hlinex
#let load-drawio-svg(path, ..args) = image.decode(read(path).replace("Text is not SVG - cannot display", ""), ..args)
= Function Application <application-decision>
For function application, there are two possible application styles up for discussion:
/ Explicit: Leave higher-order function values as such and apply them
explicitly using a dedicated application function.
/ Elaborate: #[Embedd a deeper understanding of application into the
language, which allows to resolve nested curried function values to
their arguments if necessary.]
A side-by-side comparison of how double application of two `5` literals to
an `addition` function would look like in both styles can be seen in
@elaborate_application and @explicit_application.
#columns(2)[
#figure(
load-drawio-svg("../../static/design_specification_application_elaborate.svg", width: 70%),
caption: [Elaborate application])<elaborate_application>
#colbreak()
#figure(load-drawio-svg("../../static/design_specification_application_explicit.svg"),
caption: [Explicit application])<explicit_application>
]
Of these two styles, the elaborate application style was chosen over
explicit application since the elaborate style stays readable when scaling
up to more extensive examples, while the explicit style would start to feel
overloaded more quickly.
The type resolution for the elaborate application style works like this:
1. A type hole of a value $A_1 arrow.r ... arrow.r A_n$ is encountered.
2. A value of type
$B_1 arrow.r ... arrow.r B_n arrow.r A_1 arrow.r ... arrow.r A_n$
is inserted into the type hole,
3. #[The editor resolves the curried function into its nested values and
matches the ending values $A_1 arrow.r ... arrow.r A_n$ with the expected
type of the hole.
The remaining arguments $B_1 arrow.r ... arrow.r B_n$ are then processed
as new type holes to be filled in.]
An example of the elaborate application system can be seen in
@elaborate_application_example.
#figure(
tablex(
auto-lines: false,
columns: (auto, auto, auto),
[Type hole], [Inserted Value], [Result], hlinex(stroke: 0.5pt),
$A arrow.r A$, $A$, [_error_], hlinex(stroke: 0.5pt),
$A$, $A arrow.r A$, [_new type hole:_ $A$], hlinex(stroke: 0.5pt),
$B arrow.r C$, $A arrow.r B arrow.r C$, [_new type hole:_ $A$], hlinex(stroke: 0.5pt),
$C$, $A arrow.r B arrow.r C$, [_new type holes:_ $A$, $B$],
),
kind: "table",
supplement: "Table",
caption: [Examples of elaborate application resolution])<elaborate_application_example>
|
|
https://github.com/kaarmu/splash | https://raw.githubusercontent.com/kaarmu/splash/main/src/palettes/xkcd.typ | typst | MIT License | #let xkcd-rgb = (
"acid-green": rgb("#8ffe09"),
"adobe": rgb("#bd6c48"),
"algae": rgb("#54ac68"),
"algae-green": rgb("#21c36f"),
"almost-black": rgb("#070d0d"),
"amber": rgb("#feb308"),
"amethyst": rgb("#9b5fc0"),
"apple": rgb("#6ecb3c"),
"apple-green": rgb("#76cd26"),
"apricot": rgb("#ffb16d"),
"aqua": rgb("#13eac9"),
"aqua-blue": rgb("#02d8e9"),
"aqua-green": rgb("#12e193"),
"aqua-marine": rgb("#2ee8bb"),
"aquamarine": rgb("#04d8b2"),
"army-green": rgb("#4b5d16"),
"asparagus": rgb("#77ab56"),
"aubergine": rgb("#3d0734"),
"auburn": rgb("#9a3001"),
"avocado": rgb("#90b134"),
"avocado-green": rgb("#87a922"),
"azul": rgb("#1d5dec"),
"azure": rgb("#069af3"),
"baby-blue": rgb("#a2cffe"),
"baby-green": rgb("#8cff9e"),
"baby-pink": rgb("#ffb7ce"),
"baby-poo": rgb("#ab9004"),
"baby-poop": rgb("#937c00"),
"baby-poop-green": rgb("#8f9805"),
"baby-puke-green": rgb("#b6c406"),
"baby-purple": rgb("#ca9bf7"),
"baby-shit-brown": rgb("#ad900d"),
"baby-shit-green": rgb("#889717"),
"banana": rgb("#ffff7e"),
"banana-yellow": rgb("#fafe4b"),
"barbie-pink": rgb("#fe46a5"),
"barf-green": rgb("#94ac02"),
"barney": rgb("#ac1db8"),
"barney-purple": rgb("#a00498"),
"battleship-grey": rgb("#6b7c85"),
"beige": rgb("#e6daa6"),
"berry": rgb("#990f4b"),
"bile": rgb("#b5c306"),
"black": rgb("#000000"),
"bland": rgb("#afa88b"),
"blood": rgb("#770001"),
"blood-orange": rgb("#fe4b03"),
"blood-red": rgb("#980002"),
"blue": rgb("#0343df"),
"blue-blue": rgb("#2242c7"),
"blue-green": rgb("#137e6d"),
"blue-grey": rgb("#607c8e"),
"blue-purple": rgb("#5729ce"),
"blue-violet": rgb("#5d06e9"),
"blue-with-a-hint-of-purple": rgb("#533cc6"),
"blue/green": rgb("#0f9b8e"),
"blue/grey": rgb("#758da3"),
"blue/purple": rgb("#5a06ef"),
"blueberry": rgb("#464196"),
"bluegreen": rgb("#017a79"),
"bluegrey": rgb("#85a3b2"),
"bluey-green": rgb("#2bb179"),
"bluey-grey": rgb("#89a0b0"),
"bluey-purple": rgb("#6241c7"),
"bluish": rgb("#2976bb"),
"bluish-green": rgb("#10a674"),
"bluish-grey": rgb("#748b97"),
"bluish-purple": rgb("#703be7"),
"blurple": rgb("#5539cc"),
"blush": rgb("#f29e8e"),
"blush-pink": rgb("#fe828c"),
"booger": rgb("#9bb53c"),
"booger-green": rgb("#96b403"),
"bordeaux": rgb("#7b002c"),
"boring-green": rgb("#63b365"),
"bottle-green": rgb("#044a05"),
"brick": rgb("#a03623"),
"brick-orange": rgb("#c14a09"),
"brick-red": rgb("#8f1402"),
"bright-aqua": rgb("#0bf9ea"),
"bright-blue": rgb("#0165fc"),
"bright-cyan": rgb("#41fdfe"),
"bright-green": rgb("#01ff07"),
"bright-lavender": rgb("#c760ff"),
"bright-light-blue": rgb("#26f7fd"),
"bright-light-green": rgb("#2dfe54"),
"bright-lilac": rgb("#c95efb"),
"bright-lime": rgb("#87fd05"),
"bright-lime-green": rgb("#65fe08"),
"bright-magenta": rgb("#ff08e8"),
"bright-olive": rgb("#9cbb04"),
"bright-orange": rgb("#ff5b00"),
"bright-pink": rgb("#fe01b1"),
"bright-purple": rgb("#be03fd"),
"bright-red": rgb("#ff000d"),
"bright-sea-green": rgb("#05ffa6"),
"bright-sky-blue": rgb("#02ccfe"),
"bright-teal": rgb("#01f9c6"),
"bright-turquoise": rgb("#0ffef9"),
"bright-violet": rgb("#ad0afd"),
"bright-yellow": rgb("#fffd01"),
"bright-yellow-green": rgb("#9dff00"),
"british-racing-green": rgb("#05480d"),
"bronze": rgb("#a87900"),
"brown": rgb("#653700"),
"brown-green": rgb("#706c11"),
"brown-grey": rgb("#8d8468"),
"brown-orange": rgb("#b96902"),
"brown-red": rgb("#922b05"),
"brown-yellow": rgb("#b29705"),
"brownish": rgb("#9c6d57"),
"brownish-green": rgb("#6a6e09"),
"brownish-grey": rgb("#86775f"),
"brownish-orange": rgb("#cb7723"),
"brownish-pink": rgb("#c27e79"),
"brownish-purple": rgb("#76424e"),
"brownish-red": rgb("#9e3623"),
"brownish-yellow": rgb("#c9b003"),
"browny-green": rgb("#6f6c0a"),
"browny-orange": rgb("#ca6b02"),
"bruise": rgb("#7e4071"),
"bubble-gum-pink": rgb("#ff69af"),
"bubblegum": rgb("#ff6cb5"),
"bubblegum-pink": rgb("#fe83cc"),
"buff": rgb("#fef69e"),
"burgundy": rgb("#610023"),
"burnt-orange": rgb("#c04e01"),
"burnt-red": rgb("#9f2305"),
"burnt-siena": rgb("#b75203"),
"burnt-sienna": rgb("#b04e0f"),
"burnt-umber": rgb("#a0450e"),
"burnt-yellow": rgb("#d5ab09"),
"burple": rgb("#6832e3"),
"butter": rgb("#ffff81"),
"butter-yellow": rgb("#fffd74"),
"butterscotch": rgb("#fdb147"),
"cadet-blue": rgb("#4e7496"),
"camel": rgb("#c69f59"),
"camo": rgb("#7f8f4e"),
"camo-green": rgb("#526525"),
"camouflage-green": rgb("#4b6113"),
"canary": rgb("#fdff63"),
"canary-yellow": rgb("#fffe40"),
"candy-pink": rgb("#ff63e9"),
"caramel": rgb("#af6f09"),
"carmine": rgb("#9d0216"),
"carnation": rgb("#fd798f"),
"carnation-pink": rgb("#ff7fa7"),
"carolina-blue": rgb("#8ab8fe"),
"celadon": rgb("#befdb7"),
"celery": rgb("#c1fd95"),
"cement": rgb("#a5a391"),
"cerise": rgb("#de0c62"),
"cerulean": rgb("#0485d1"),
"cerulean-blue": rgb("#056eee"),
"charcoal": rgb("#343837"),
"charcoal-grey": rgb("#3c4142"),
"chartreuse": rgb("#c1f80a"),
"cherry": rgb("#cf0234"),
"cherry-red": rgb("#f7022a"),
"chestnut": rgb("#742802"),
"chocolate": rgb("#3d1c02"),
"chocolate-brown": rgb("#411900"),
"cinnamon": rgb("#ac4f06"),
"claret": rgb("#680018"),
"clay": rgb("#b66a50"),
"clay-brown": rgb("#b2713d"),
"clear-blue": rgb("#247afd"),
"cloudy-blue": rgb("#acc2d9"),
"cobalt": rgb("#1e488f"),
"cobalt-blue": rgb("#030aa7"),
"cocoa": rgb("#875f42"),
"coffee": rgb("#a6814c"),
"cool-blue": rgb("#4984b8"),
"cool-green": rgb("#33b864"),
"cool-grey": rgb("#95a3a6"),
"copper": rgb("#b66325"),
"coral": rgb("#fc5a50"),
"coral-pink": rgb("#ff6163"),
"cornflower": rgb("#6a79f7"),
"cornflower-blue": rgb("#5170d7"),
"cranberry": rgb("#9e003a"),
"cream": rgb("#ffffc2"),
"creme": rgb("#ffffb6"),
"crimson": rgb("#8c000f"),
"custard": rgb("#fffd78"),
"cyan": rgb("#00ffff"),
"dandelion": rgb("#fedf08"),
"dark": rgb("#1b2431"),
"dark-aqua": rgb("#05696b"),
"dark-aquamarine": rgb("#017371"),
"dark-beige": rgb("#ac9362"),
"dark-blue": rgb("#00035b"),
"dark-blue-green": rgb("#005249"),
"dark-blue-grey": rgb("#1f3b4d"),
"dark-brown": rgb("#341c02"),
"dark-coral": rgb("#cf524e"),
"dark-cream": rgb("#fff39a"),
"dark-cyan": rgb("#0a888a"),
"dark-forest-green": rgb("#002d04"),
"dark-fuchsia": rgb("#9d0759"),
"dark-gold": rgb("#b59410"),
"dark-grass-green": rgb("#388004"),
"dark-green": rgb("#033500"),
"dark-green-blue": rgb("#1f6357"),
"dark-grey": rgb("#363737"),
"dark-grey-blue": rgb("#29465b"),
"dark-hot-pink": rgb("#d90166"),
"dark-indigo": rgb("#1f0954"),
"dark-khaki": rgb("#9b8f55"),
"dark-lavender": rgb("#856798"),
"dark-lilac": rgb("#9c6da5"),
"dark-lime": rgb("#84b701"),
"dark-lime-green": rgb("#7ebd01"),
"dark-magenta": rgb("#960056"),
"dark-maroon": rgb("#3c0008"),
"dark-mauve": rgb("#874c62"),
"dark-mint": rgb("#48c072"),
"dark-mint-green": rgb("#20c073"),
"dark-mustard": rgb("#a88905"),
"dark-navy": rgb("#000435"),
"dark-navy-blue": rgb("#00022e"),
"dark-olive": rgb("#373e02"),
"dark-olive-green": rgb("#3c4d03"),
"dark-orange": rgb("#c65102"),
"dark-pastel-green": rgb("#56ae57"),
"dark-peach": rgb("#de7e5d"),
"dark-periwinkle": rgb("#665fd1"),
"dark-pink": rgb("#cb416b"),
"dark-plum": rgb("#3f012c"),
"dark-purple": rgb("#35063e"),
"dark-red": rgb("#840000"),
"dark-rose": rgb("#b5485d"),
"dark-royal-blue": rgb("#02066f"),
"dark-sage": rgb("#598556"),
"dark-salmon": rgb("#c85a53"),
"dark-sand": rgb("#a88f59"),
"dark-sea-green": rgb("#11875d"),
"dark-seafoam": rgb("#1fb57a"),
"dark-seafoam-green": rgb("#3eaf76"),
"dark-sky-blue": rgb("#448ee4"),
"dark-slate-blue": rgb("#214761"),
"dark-tan": rgb("#af884a"),
"dark-taupe": rgb("#7f684e"),
"dark-teal": rgb("#014d4e"),
"dark-turquoise": rgb("#045c5a"),
"dark-violet": rgb("#34013f"),
"dark-yellow": rgb("#d5b60a"),
"dark-yellow-green": rgb("#728f02"),
"darkblue": rgb("#030764"),
"darkgreen": rgb("#054907"),
"darkish-blue": rgb("#014182"),
"darkish-green": rgb("#287c37"),
"darkish-pink": rgb("#da467d"),
"darkish-purple": rgb("#751973"),
"darkish-red": rgb("#a90308"),
"deep-aqua": rgb("#08787f"),
"deep-blue": rgb("#040273"),
"deep-brown": rgb("#410200"),
"deep-green": rgb("#02590f"),
"deep-lavender": rgb("#8d5eb7"),
"deep-lilac": rgb("#966ebd"),
"deep-magenta": rgb("#a0025c"),
"deep-orange": rgb("#dc4d01"),
"deep-pink": rgb("#cb0162"),
"deep-purple": rgb("#36013f"),
"deep-red": rgb("#9a0200"),
"deep-rose": rgb("#c74767"),
"deep-sea-blue": rgb("#015482"),
"deep-sky-blue": rgb("#0d75f8"),
"deep-teal": rgb("#00555a"),
"deep-turquoise": rgb("#017374"),
"deep-violet": rgb("#490648"),
"denim": rgb("#3b638c"),
"denim-blue": rgb("#3b5b92"),
"desert": rgb("#ccad60"),
"diarrhea": rgb("#9f8303"),
"dirt": rgb("#8a6e45"),
"dirt-brown": rgb("#836539"),
"dirty-blue": rgb("#3f829d"),
"dirty-green": rgb("#667e2c"),
"dirty-orange": rgb("#c87606"),
"dirty-pink": rgb("#ca7b80"),
"dirty-purple": rgb("#734a65"),
"dirty-yellow": rgb("#cdc50a"),
"dodger-blue": rgb("#3e82fc"),
"drab": rgb("#828344"),
"drab-green": rgb("#749551"),
"dried-blood": rgb("#4b0101"),
"duck-egg-blue": rgb("#c3fbf4"),
"dull-blue": rgb("#49759c"),
"dull-brown": rgb("#876e4b"),
"dull-green": rgb("#74a662"),
"dull-orange": rgb("#d8863b"),
"dull-pink": rgb("#d5869d"),
"dull-purple": rgb("#84597e"),
"dull-red": rgb("#bb3f3f"),
"dull-teal": rgb("#5f9e8f"),
"dull-yellow": rgb("#eedc5b"),
"dusk": rgb("#4e5481"),
"dusk-blue": rgb("#26538d"),
"dusky-blue": rgb("#475f94"),
"dusky-pink": rgb("#cc7a8b"),
"dusky-purple": rgb("#895b7b"),
"dusky-rose": rgb("#ba6873"),
"dust": rgb("#b2996e"),
"dusty-blue": rgb("#5a86ad"),
"dusty-green": rgb("#76a973"),
"dusty-lavender": rgb("#ac86a8"),
"dusty-orange": rgb("#f0833a"),
"dusty-pink": rgb("#d58a94"),
"dusty-purple": rgb("#825f87"),
"dusty-red": rgb("#b9484e"),
"dusty-rose": rgb("#c0737a"),
"dusty-teal": rgb("#4c9085"),
"earth": rgb("#a2653e"),
"easter-green": rgb("#8cfd7e"),
"easter-purple": rgb("#c071fe"),
"ecru": rgb("#feffca"),
"egg-shell": rgb("#fffcc4"),
"eggplant": rgb("#380835"),
"eggplant-purple": rgb("#430541"),
"eggshell": rgb("#ffffd4"),
"eggshell-blue": rgb("#c4fff7"),
"electric-blue": rgb("#0652ff"),
"electric-green": rgb("#21fc0d"),
"electric-lime": rgb("#a8ff04"),
"electric-pink": rgb("#ff0490"),
"electric-purple": rgb("#aa23ff"),
"emerald": rgb("#01a049"),
"emerald-green": rgb("#028f1e"),
"evergreen": rgb("#05472a"),
"faded-blue": rgb("#658cbb"),
"faded-green": rgb("#7bb274"),
"faded-orange": rgb("#f0944d"),
"faded-pink": rgb("#de9dac"),
"faded-purple": rgb("#916e99"),
"faded-red": rgb("#d3494e"),
"faded-yellow": rgb("#feff7f"),
"fawn": rgb("#cfaf7b"),
"fern": rgb("#63a950"),
"fern-green": rgb("#548d44"),
"fire-engine-red": rgb("#fe0002"),
"flat-blue": rgb("#3c73a8"),
"flat-green": rgb("#699d4c"),
"fluorescent-green": rgb("#08ff08"),
"fluro-green": rgb("#0aff02"),
"foam-green": rgb("#90fda9"),
"forest": rgb("#0b5509"),
"forest-green": rgb("#06470c"),
"forrest-green": rgb("#154406"),
"french-blue": rgb("#436bad"),
"fresh-green": rgb("#69d84f"),
"frog-green": rgb("#58bc08"),
"fuchsia": rgb("#ed0dd9"),
"gold": rgb("#dbb40c"),
"golden": rgb("#f5bf03"),
"golden-brown": rgb("#b27a01"),
"golden-rod": rgb("#f9bc08"),
"golden-yellow": rgb("#fec615"),
"goldenrod": rgb("#fac205"),
"grape": rgb("#6c3461"),
"grape-purple": rgb("#5d1451"),
"grapefruit": rgb("#fd5956"),
"grass": rgb("#5cac2d"),
"grass-green": rgb("#3f9b0b"),
"grassy-green": rgb("#419c03"),
"green": rgb("#15b01a"),
"green-apple": rgb("#5edc1f"),
"green-blue": rgb("#06b48b"),
"green-brown": rgb("#544e03"),
"green-grey": rgb("#77926f"),
"green-teal": rgb("#0cb577"),
"green-yellow": rgb("#c9ff27"),
"green/blue": rgb("#01c08d"),
"green/yellow": rgb("#b5ce08"),
"greenblue": rgb("#23c48b"),
"greenish": rgb("#40a368"),
"greenish-beige": rgb("#c9d179"),
"greenish-blue": rgb("#0b8b87"),
"greenish-brown": rgb("#696112"),
"greenish-cyan": rgb("#2afeb7"),
"greenish-grey": rgb("#96ae8d"),
"greenish-tan": rgb("#bccb7a"),
"greenish-teal": rgb("#32bf84"),
"greenish-turquoise": rgb("#00fbb0"),
"greenish-yellow": rgb("#cdfd02"),
"greeny-blue": rgb("#42b395"),
"greeny-brown": rgb("#696006"),
"greeny-grey": rgb("#7ea07a"),
"greeny-yellow": rgb("#c6f808"),
"grey": rgb("#929591"),
"grey-blue": rgb("#6b8ba4"),
"grey-brown": rgb("#7f7053"),
"grey-green": rgb("#789b73"),
"grey-pink": rgb("#c3909b"),
"grey-purple": rgb("#826d8c"),
"grey-teal": rgb("#5e9b8a"),
"grey/blue": rgb("#647d8e"),
"grey/green": rgb("#86a17d"),
"greyblue": rgb("#77a1b5"),
"greyish": rgb("#a8a495"),
"greyish-blue": rgb("#5e819d"),
"greyish-brown": rgb("#7a6a4f"),
"greyish-green": rgb("#82a67d"),
"greyish-pink": rgb("#c88d94"),
"greyish-purple": rgb("#887191"),
"greyish-teal": rgb("#719f91"),
"gross-green": rgb("#a0bf16"),
"gunmetal": rgb("#536267"),
"hazel": rgb("#8e7618"),
"heather": rgb("#a484ac"),
"heliotrope": rgb("#d94ff5"),
"highlighter-green": rgb("#1bfc06"),
"hospital-green": rgb("#9be5aa"),
"hot-green": rgb("#25ff29"),
"hot-magenta": rgb("#f504c9"),
"hot-pink": rgb("#ff028d"),
"hot-purple": rgb("#cb00f5"),
"hunter-green": rgb("#0b4008"),
"ice": rgb("#d6fffa"),
"ice-blue": rgb("#d7fffe"),
"icky-green": rgb("#8fae22"),
"indian-red": rgb("#850e04"),
"indigo": rgb("#380282"),
"indigo-blue": rgb("#3a18b1"),
"iris": rgb("#6258c4"),
"irish-green": rgb("#019529"),
"ivory": rgb("#ffffcb"),
"jade": rgb("#1fa774"),
"jade-green": rgb("#2baf6a"),
"jungle-green": rgb("#048243"),
"kelley-green": rgb("#009337"),
"kelly-green": rgb("#02ab2e"),
"kermit-green": rgb("#5cb200"),
"key-lime": rgb("#aeff6e"),
"khaki": rgb("#aaa662"),
"khaki-green": rgb("#728639"),
"kiwi": rgb("#9cef43"),
"kiwi-green": rgb("#8ee53f"),
"lavender": rgb("#c79fef"),
"lavender-blue": rgb("#8b88f8"),
"lavender-pink": rgb("#dd85d7"),
"lawn-green": rgb("#4da409"),
"leaf": rgb("#71aa34"),
"leaf-green": rgb("#5ca904"),
"leafy-green": rgb("#51b73b"),
"leather": rgb("#ac7434"),
"lemon": rgb("#fdff52"),
"lemon-green": rgb("#adf802"),
"lemon-lime": rgb("#bffe28"),
"lemon-yellow": rgb("#fdff38"),
"lichen": rgb("#8fb67b"),
"light-aqua": rgb("#8cffdb"),
"light-aquamarine": rgb("#7bfdc7"),
"light-beige": rgb("#fffeb6"),
"light-blue": rgb("#95d0fc"),
"light-blue-green": rgb("#7efbb3"),
"light-blue-grey": rgb("#b7c9e2"),
"light-bluish-green": rgb("#76fda8"),
"light-bright-green": rgb("#53fe5c"),
"light-brown": rgb("#ad8150"),
"light-burgundy": rgb("#a8415b"),
"light-cyan": rgb("#acfffc"),
"light-eggplant": rgb("#894585"),
"light-forest-green": rgb("#4f9153"),
"light-gold": rgb("#fddc5c"),
"light-grass-green": rgb("#9af764"),
"light-green": rgb("#96f97b"),
"light-green-blue": rgb("#56fca2"),
"light-greenish-blue": rgb("#63f7b4"),
"light-grey": rgb("#d8dcd6"),
"light-grey-blue": rgb("#9dbcd4"),
"light-grey-green": rgb("#b7e1a1"),
"light-indigo": rgb("#6d5acf"),
"light-khaki": rgb("#e6f2a2"),
"light-lavendar": rgb("#efc0fe"),
"light-lavender": rgb("#dfc5fe"),
"light-light-blue": rgb("#cafffb"),
"light-light-green": rgb("#c8ffb0"),
"light-lilac": rgb("#edc8ff"),
"light-lime": rgb("#aefd6c"),
"light-lime-green": rgb("#b9ff66"),
"light-magenta": rgb("#fa5ff7"),
"light-maroon": rgb("#a24857"),
"light-mauve": rgb("#c292a1"),
"light-mint": rgb("#b6ffbb"),
"light-mint-green": rgb("#a6fbb2"),
"light-moss-green": rgb("#a6c875"),
"light-mustard": rgb("#f7d560"),
"light-navy": rgb("#155084"),
"light-navy-blue": rgb("#2e5a88"),
"light-neon-green": rgb("#4efd54"),
"light-olive": rgb("#acbf69"),
"light-olive-green": rgb("#a4be5c"),
"light-orange": rgb("#fdaa48"),
"light-pastel-green": rgb("#b2fba5"),
"light-pea-green": rgb("#c4fe82"),
"light-peach": rgb("#ffd8b1"),
"light-periwinkle": rgb("#c1c6fc"),
"light-pink": rgb("#ffd1df"),
"light-plum": rgb("#9d5783"),
"light-purple": rgb("#bf77f6"),
"light-red": rgb("#ff474c"),
"light-rose": rgb("#ffc5cb"),
"light-royal-blue": rgb("#3a2efe"),
"light-sage": rgb("#bcecac"),
"light-salmon": rgb("#fea993"),
"light-sea-green": rgb("#98f6b0"),
"light-seafoam": rgb("#a0febf"),
"light-seafoam-green": rgb("#a7ffb5"),
"light-sky-blue": rgb("#c6fcff"),
"light-tan": rgb("#fbeeac"),
"light-teal": rgb("#90e4c1"),
"light-turquoise": rgb("#7ef4cc"),
"light-urple": rgb("#b36ff6"),
"light-violet": rgb("#d6b4fc"),
"light-yellow": rgb("#fffe7a"),
"light-yellow-green": rgb("#ccfd7f"),
"light-yellowish-green": rgb("#c2ff89"),
"lightblue": rgb("#7bc8f6"),
"lighter-green": rgb("#75fd63"),
"lighter-purple": rgb("#a55af4"),
"lightgreen": rgb("#76ff7b"),
"lightish-blue": rgb("#3d7afd"),
"lightish-green": rgb("#61e160"),
"lightish-purple": rgb("#a552e6"),
"lightish-red": rgb("#fe2f4a"),
"lilac": rgb("#cea2fd"),
"liliac": rgb("#c48efd"),
"lime": rgb("#aaff32"),
"lime-green": rgb("#89fe05"),
"lime-yellow": rgb("#d0fe1d"),
"lipstick": rgb("#d5174e"),
"lipstick-red": rgb("#c0022f"),
"macaroni-and-cheese": rgb("#efb435"),
"magenta": rgb("#c20078"),
"mahogany": rgb("#4a0100"),
"maize": rgb("#f4d054"),
"mango": rgb("#ffa62b"),
"manilla": rgb("#fffa86"),
"marigold": rgb("#fcc006"),
"marine": rgb("#042e60"),
"marine-blue": rgb("#01386a"),
"maroon": rgb("#650021"),
"mauve": rgb("#ae7181"),
"medium-blue": rgb("#2c6fbb"),
"medium-brown": rgb("#7f5112"),
"medium-green": rgb("#39ad48"),
"medium-grey": rgb("#7d7f7c"),
"medium-pink": rgb("#f36196"),
"medium-purple": rgb("#9e43a2"),
"melon": rgb("#ff7855"),
"merlot": rgb("#730039"),
"metallic-blue": rgb("#4f738e"),
"mid-blue": rgb("#276ab3"),
"mid-green": rgb("#50a747"),
"midnight": rgb("#03012d"),
"midnight-blue": rgb("#020035"),
"midnight-purple": rgb("#280137"),
"military-green": rgb("#667c3e"),
"milk-chocolate": rgb("#7f4e1e"),
"mint": rgb("#9ffeb0"),
"mint-green": rgb("#8fff9f"),
"minty-green": rgb("#0bf77d"),
"mocha": rgb("#9d7651"),
"moss": rgb("#769958"),
"moss-green": rgb("#658b38"),
"mossy-green": rgb("#638b27"),
"mud": rgb("#735c12"),
"mud-brown": rgb("#60460f"),
"mud-green": rgb("#606602"),
"muddy-brown": rgb("#886806"),
"muddy-green": rgb("#657432"),
"muddy-yellow": rgb("#bfac05"),
"mulberry": rgb("#920a4e"),
"murky-green": rgb("#6c7a0e"),
"mushroom": rgb("#ba9e88"),
"mustard": rgb("#ceb301"),
"mustard-brown": rgb("#ac7e04"),
"mustard-green": rgb("#a8b504"),
"mustard-yellow": rgb("#d2bd0a"),
"muted-blue": rgb("#3b719f"),
"muted-green": rgb("#5fa052"),
"muted-pink": rgb("#d1768f"),
"muted-purple": rgb("#805b87"),
"nasty-green": rgb("#70b23f"),
"navy": rgb("#01153e"),
"navy-blue": rgb("#001146"),
"navy-green": rgb("#35530a"),
"neon-blue": rgb("#04d9ff"),
"neon-green": rgb("#0cff0c"),
"neon-pink": rgb("#fe019a"),
"neon-purple": rgb("#bc13fe"),
"neon-red": rgb("#ff073a"),
"neon-yellow": rgb("#cfff04"),
"nice-blue": rgb("#107ab0"),
"night-blue": rgb("#040348"),
"ocean": rgb("#017b92"),
"ocean-blue": rgb("#03719c"),
"ocean-green": rgb("#3d9973"),
"ocher": rgb("#bf9b0c"),
"ochre": rgb("#bf9005"),
"ocre": rgb("#c69c04"),
"off-blue": rgb("#5684ae"),
"off-green": rgb("#6ba353"),
"off-white": rgb("#ffffe4"),
"off-yellow": rgb("#f1f33f"),
"old-pink": rgb("#c77986"),
"old-rose": rgb("#c87f89"),
"olive": rgb("#6e750e"),
"olive-brown": rgb("#645403"),
"olive-drab": rgb("#6f7632"),
"olive-green": rgb("#677a04"),
"olive-yellow": rgb("#c2b709"),
"orange": rgb("#f97306"),
"orange-brown": rgb("#be6400"),
"orange-pink": rgb("#ff6f52"),
"orange-red": rgb("#fd411e"),
"orange-yellow": rgb("#ffad01"),
"orangeish": rgb("#fd8d49"),
"orangered": rgb("#fe420f"),
"orangey-brown": rgb("#b16002"),
"orangey-red": rgb("#fa4224"),
"orangey-yellow": rgb("#fdb915"),
"orangish": rgb("#fc824a"),
"orangish-brown": rgb("#b25f03"),
"orangish-red": rgb("#f43605"),
"orchid": rgb("#c875c4"),
"pale": rgb("#fff9d0"),
"pale-aqua": rgb("#b8ffeb"),
"pale-blue": rgb("#d0fefe"),
"pale-brown": rgb("#b1916e"),
"pale-cyan": rgb("#b7fffa"),
"pale-gold": rgb("#fdde6c"),
"pale-green": rgb("#c7fdb5"),
"pale-grey": rgb("#fdfdfe"),
"pale-lavender": rgb("#eecffe"),
"pale-light-green": rgb("#b1fc99"),
"pale-lilac": rgb("#e4cbff"),
"pale-lime": rgb("#befd73"),
"pale-lime-green": rgb("#b1ff65"),
"pale-magenta": rgb("#d767ad"),
"pale-mauve": rgb("#fed0fc"),
"pale-olive": rgb("#b9cc81"),
"pale-olive-green": rgb("#b1d27b"),
"pale-orange": rgb("#ffa756"),
"pale-peach": rgb("#ffe5ad"),
"pale-pink": rgb("#ffcfdc"),
"pale-purple": rgb("#b790d4"),
"pale-red": rgb("#d9544d"),
"pale-rose": rgb("#fdc1c5"),
"pale-salmon": rgb("#ffb19a"),
"pale-sky-blue": rgb("#bdf6fe"),
"pale-teal": rgb("#82cbb2"),
"pale-turquoise": rgb("#a5fbd5"),
"pale-violet": rgb("#ceaefa"),
"pale-yellow": rgb("#ffff84"),
"parchment": rgb("#fefcaf"),
"pastel-blue": rgb("#a2bffe"),
"pastel-green": rgb("#b0ff9d"),
"pastel-orange": rgb("#ff964f"),
"pastel-pink": rgb("#ffbacd"),
"pastel-purple": rgb("#caa0ff"),
"pastel-red": rgb("#db5856"),
"pastel-yellow": rgb("#fffe71"),
"pea": rgb("#a4bf20"),
"pea-green": rgb("#8eab12"),
"pea-soup": rgb("#929901"),
"pea-soup-green": rgb("#94a617"),
"peach": rgb("#ffb07c"),
"peachy-pink": rgb("#ff9a8a"),
"peacock-blue": rgb("#016795"),
"pear": rgb("#cbf85f"),
"periwinkle": rgb("#8e82fe"),
"periwinkle-blue": rgb("#8f99fb"),
"perrywinkle": rgb("#8f8ce7"),
"petrol": rgb("#005f6a"),
"pig-pink": rgb("#e78ea5"),
"pine": rgb("#2b5d34"),
"pine-green": rgb("#0a481e"),
"pink": rgb("#ff81c0"),
"pink-purple": rgb("#db4bda"),
"pink-red": rgb("#f5054f"),
"pink/purple": rgb("#ef1de7"),
"pinkish": rgb("#d46a7e"),
"pinkish-brown": rgb("#b17261"),
"pinkish-grey": rgb("#c8aca9"),
"pinkish-orange": rgb("#ff724c"),
"pinkish-purple": rgb("#d648d7"),
"pinkish-red": rgb("#f10c45"),
"pinkish-tan": rgb("#d99b82"),
"pinky": rgb("#fc86aa"),
"pinky-purple": rgb("#c94cbe"),
"pinky-red": rgb("#fc2647"),
"piss-yellow": rgb("#ddd618"),
"pistachio": rgb("#c0fa8b"),
"plum": rgb("#580f41"),
"plum-purple": rgb("#4e0550"),
"poison-green": rgb("#40fd14"),
"poo": rgb("#8f7303"),
"poo-brown": rgb("#885f01"),
"poop": rgb("#7f5e00"),
"poop-brown": rgb("#7a5901"),
"poop-green": rgb("#6f7c00"),
"powder-blue": rgb("#b1d1fc"),
"powder-pink": rgb("#ffb2d0"),
"primary-blue": rgb("#0804f9"),
"prussian-blue": rgb("#004577"),
"puce": rgb("#a57e52"),
"puke": rgb("#a5a502"),
"puke-brown": rgb("#947706"),
"puke-green": rgb("#9aae07"),
"puke-yellow": rgb("#c2be0e"),
"pumpkin": rgb("#e17701"),
"pumpkin-orange": rgb("#fb7d07"),
"pure-blue": rgb("#0203e2"),
"purple": rgb("#7e1e9c"),
"purple-blue": rgb("#632de9"),
"purple-brown": rgb("#673a3f"),
"purple-grey": rgb("#866f85"),
"purple-pink": rgb("#e03fd8"),
"purple-red": rgb("#990147"),
"purple/blue": rgb("#5d21d0"),
"purple/pink": rgb("#d725de"),
"purpleish": rgb("#98568d"),
"purpleish-blue": rgb("#6140ef"),
"purpleish-pink": rgb("#df4ec8"),
"purpley": rgb("#8756e4"),
"purpley-blue": rgb("#5f34e7"),
"purpley-grey": rgb("#947e94"),
"purpley-pink": rgb("#c83cb9"),
"purplish": rgb("#94568c"),
"purplish-blue": rgb("#601ef9"),
"purplish-brown": rgb("#6b4247"),
"purplish-grey": rgb("#7a687f"),
"purplish-pink": rgb("#ce5dae"),
"purplish-red": rgb("#b0054b"),
"purply": rgb("#983fb2"),
"purply-blue": rgb("#661aee"),
"purply-pink": rgb("#f075e6"),
"putty": rgb("#beae8a"),
"racing-green": rgb("#014600"),
"radioactive-green": rgb("#2cfa1f"),
"raspberry": rgb("#b00149"),
"raw-sienna": rgb("#9a6200"),
"raw-umber": rgb("#a75e09"),
"really-light-blue": rgb("#d4ffff"),
"red": rgb("#e50000"),
"red-brown": rgb("#8b2e16"),
"red-orange": rgb("#fd3c06"),
"red-pink": rgb("#fa2a55"),
"red-purple": rgb("#820747"),
"red-violet": rgb("#9e0168"),
"red-wine": rgb("#8c0034"),
"reddish": rgb("#c44240"),
"reddish-brown": rgb("#7f2b0a"),
"reddish-grey": rgb("#997570"),
"reddish-orange": rgb("#f8481c"),
"reddish-pink": rgb("#fe2c54"),
"reddish-purple": rgb("#910951"),
"reddy-brown": rgb("#6e1005"),
"rich-blue": rgb("#021bf9"),
"rich-purple": rgb("#720058"),
"robin-egg-blue": rgb("#8af1fe"),
"robins-egg": rgb("#6dedfd"),
"robins-egg-blue": rgb("#98eff9"),
"rosa": rgb("#fe86a4"),
"rose": rgb("#cf6275"),
"rose-pink": rgb("#f7879a"),
"rose-red": rgb("#be013c"),
"rosy-pink": rgb("#f6688e"),
"rouge": rgb("#ab1239"),
"royal": rgb("#0c1793"),
"royal-blue": rgb("#0504aa"),
"royal-purple": rgb("#4b006e"),
"ruby": rgb("#ca0147"),
"russet": rgb("#a13905"),
"rust": rgb("#a83c09"),
"rust-brown": rgb("#8b3103"),
"rust-orange": rgb("#c45508"),
"rust-red": rgb("#aa2704"),
"rusty-orange": rgb("#cd5909"),
"rusty-red": rgb("#af2f0d"),
"saffron": rgb("#feb209"),
"sage": rgb("#87ae73"),
"sage-green": rgb("#88b378"),
"salmon": rgb("#ff796c"),
"salmon-pink": rgb("#fe7b7c"),
"sand": rgb("#e2ca76"),
"sand-brown": rgb("#cba560"),
"sand-yellow": rgb("#fce166"),
"sandstone": rgb("#c9ae74"),
"sandy": rgb("#f1da7a"),
"sandy-brown": rgb("#c4a661"),
"sandy-yellow": rgb("#fdee73"),
"sap-green": rgb("#5c8b15"),
"sapphire": rgb("#2138ab"),
"scarlet": rgb("#be0119"),
"sea": rgb("#3c9992"),
"sea-blue": rgb("#047495"),
"sea-green": rgb("#53fca1"),
"seafoam": rgb("#80f9ad"),
"seafoam-blue": rgb("#78d1b6"),
"seafoam-green": rgb("#7af9ab"),
"seaweed": rgb("#18d17b"),
"seaweed-green": rgb("#35ad6b"),
"sepia": rgb("#985e2b"),
"shamrock": rgb("#01b44c"),
"shamrock-green": rgb("#02c14d"),
"shit": rgb("#7f5f00"),
"shit-brown": rgb("#7b5804"),
"shit-green": rgb("#758000"),
"shocking-pink": rgb("#fe02a2"),
"sick-green": rgb("#9db92c"),
"sickly-green": rgb("#94b21c"),
"sickly-yellow": rgb("#d0e429"),
"sienna": rgb("#a9561e"),
"silver": rgb("#c5c9c7"),
"sky": rgb("#82cafc"),
"sky-blue": rgb("#75bbfd"),
"slate": rgb("#516572"),
"slate-blue": rgb("#5b7c99"),
"slate-green": rgb("#658d6d"),
"slate-grey": rgb("#59656d"),
"slime-green": rgb("#99cc04"),
"snot": rgb("#acbb0d"),
"snot-green": rgb("#9dc100"),
"soft-blue": rgb("#6488ea"),
"soft-green": rgb("#6fc276"),
"soft-pink": rgb("#fdb0c0"),
"soft-purple": rgb("#a66fb5"),
"spearmint": rgb("#1ef876"),
"spring-green": rgb("#a9f971"),
"spruce": rgb("#0a5f38"),
"squash": rgb("#f2ab15"),
"steel": rgb("#738595"),
"steel-blue": rgb("#5a7d9a"),
"steel-grey": rgb("#6f828a"),
"stone": rgb("#ada587"),
"stormy-blue": rgb("#507b9c"),
"straw": rgb("#fcf679"),
"strawberry": rgb("#fb2943"),
"strong-blue": rgb("#0c06f7"),
"strong-pink": rgb("#ff0789"),
"sun-yellow": rgb("#ffdf22"),
"sunflower": rgb("#ffc512"),
"sunflower-yellow": rgb("#ffda03"),
"sunny-yellow": rgb("#fff917"),
"sunshine-yellow": rgb("#fffd37"),
"swamp": rgb("#698339"),
"swamp-green": rgb("#748500"),
"tan": rgb("#d1b26f"),
"tan-brown": rgb("#ab7e4c"),
"tan-green": rgb("#a9be70"),
"tangerine": rgb("#ff9408"),
"taupe": rgb("#b9a281"),
"tea": rgb("#65ab7c"),
"tea-green": rgb("#bdf8a3"),
"teal": rgb("#029386"),
"teal-blue": rgb("#01889f"),
"teal-green": rgb("#25a36f"),
"tealish": rgb("#24bca8"),
"tealish-green": rgb("#0cdc73"),
"terra-cotta": rgb("#c9643b"),
"terracota": rgb("#cb6843"),
"terracotta": rgb("#ca6641"),
"tiffany-blue": rgb("#7bf2da"),
"tomato": rgb("#ef4026"),
"tomato-red": rgb("#ec2d01"),
"topaz": rgb("#13bbaf"),
"toupe": rgb("#c7ac7d"),
"toxic-green": rgb("#61de2a"),
"tree-green": rgb("#2a7e19"),
"true-blue": rgb("#010fcc"),
"true-green": rgb("#089404"),
"turquoise": rgb("#06c2ac"),
"turquoise-blue": rgb("#06b1c4"),
"turquoise-green": rgb("#04f489"),
"turtle-green": rgb("#75b84f"),
"twilight": rgb("#4e518b"),
"twilight-blue": rgb("#0a437a"),
"ugly-blue": rgb("#31668a"),
"ugly-brown": rgb("#7d7103"),
"ugly-green": rgb("#7a9703"),
"ugly-pink": rgb("#cd7584"),
"ugly-purple": rgb("#a442a0"),
"ugly-yellow": rgb("#d0c101"),
"ultramarine": rgb("#2000b1"),
"ultramarine-blue": rgb("#1805db"),
"umber": rgb("#b26400"),
"velvet": rgb("#750851"),
"vermillion": rgb("#f4320c"),
"very-dark-blue": rgb("#000133"),
"very-dark-brown": rgb("#1d0200"),
"very-dark-green": rgb("#062e03"),
"very-dark-purple": rgb("#2a0134"),
"very-light-blue": rgb("#d5ffff"),
"very-light-brown": rgb("#d3b683"),
"very-light-green": rgb("#d1ffbd"),
"very-light-pink": rgb("#fff4f2"),
"very-light-purple": rgb("#f6cefc"),
"very-pale-blue": rgb("#d6fffe"),
"very-pale-green": rgb("#cffdbc"),
"vibrant-blue": rgb("#0339f8"),
"vibrant-green": rgb("#0add08"),
"vibrant-purple": rgb("#ad03de"),
"violet": rgb("#9a0eea"),
"violet-blue": rgb("#510ac9"),
"violet-pink": rgb("#fb5ffc"),
"violet-red": rgb("#a50055"),
"viridian": rgb("#1e9167"),
"vivid-blue": rgb("#152eff"),
"vivid-green": rgb("#2fef10"),
"vivid-purple": rgb("#9900fa"),
"vomit": rgb("#a2a415"),
"vomit-green": rgb("#89a203"),
"vomit-yellow": rgb("#c7c10c"),
"warm-blue": rgb("#4b57db"),
"warm-brown": rgb("#964e02"),
"warm-grey": rgb("#978a84"),
"warm-pink": rgb("#fb5581"),
"warm-purple": rgb("#952e8f"),
"washed-out-green": rgb("#bcf5a6"),
"water-blue": rgb("#0e87cc"),
"watermelon": rgb("#fd4659"),
"weird-green": rgb("#3ae57f"),
"wheat": rgb("#fbdd7e"),
"white": rgb("#ffffff"),
"windows-blue": rgb("#3778bf"),
"wine": rgb("#80013f"),
"wine-red": rgb("#7b0323"),
"wintergreen": rgb("#20f986"),
"wisteria": rgb("#a87dc2"),
"yellow": rgb("#ffff14"),
"yellow-brown": rgb("#b79400"),
"yellow-green": rgb("#c0fb2d"),
"yellow-ochre": rgb("#cb9d06"),
"yellow-orange": rgb("#fcb001"),
"yellow-tan": rgb("#ffe36e"),
"yellow/green": rgb("#c8fd3d"),
"yellowgreen": rgb("#bbf90f"),
"yellowish": rgb("#faee66"),
"yellowish-brown": rgb("#9b7a01"),
"yellowish-green": rgb("#b0dd16"),
"yellowish-orange": rgb("#ffab0f"),
"yellowish-tan": rgb("#fcfc81"),
"yellowy-brown": rgb("#ae8b0c"),
"yellowy-green": rgb("#bff128"),
)
|
https://github.com/ofurtumi/typst-packages | https://raw.githubusercontent.com/ofurtumi/typst-packages/main/templates/ass/0.1.1/lib.typ | typst | #let header(project: "", num: "") = align(center + bottom,text(20pt, [#project #num]))
#let template(project: none, class: "Háskóli Íslands", writer: "<NAME>", doc) = {
set document(
author: writer
)
set grid(gutter: 16pt)
set page(
numbering: "1.",
header: counter(page).display(n => {
box(
inset: (bottom: 10pt),
stroke: (bottom: 1pt),
text(
8pt,
align(center + bottom)[
#grid(
columns: (1fr, 3fr, 1fr),
class,
text(20pt, { if n < 2 [#project] }),
writer
)
]
)
)
}
)
)
set raw(theme: "./GruvBox.tmTheme")
show raw.where(block: true): it => block(
width: 100%,
fill: rgb("#282828"),
inset: 16pt,
radius: 5pt,
text(size: 8pt, fill: rgb("#a2aabc"),it)
)
show raw.where(block: false): it => box(
fill: rgb("#282828"),
inset: (x: 4pt, y: 0pt),
outset: (y: 4pt),
radius: 2pt,
text(size: 7pt, fill: rgb("#ebdbb2"), it)
)
doc
}
|
|
https://github.com/MatejKafka/ctu-thesis-typst | https://raw.githubusercontent.com/MatejKafka/ctu-thesis-typst/main/template/front.typ | typst | #let title-page(
print,
title: "",
author: (
name: "",
email: "",
url: "",
),
submission-date: datetime.today(),
bachelor: false,
supervisor: "",
faculty: "",
department: "",
study-programme: "",
branch-of-study: "",
) = {
// render as a separate page
// inner margin is 8mm due to binding loss, but without
// the bent page extra, which is not an issue for the title page
let inside-margin = if print {8mm} else {0mm}
show: page.with(margin: (top: 0mm, bottom: 0mm, inside: inside-margin, outside: 0mm))
set align(center)
set place(center)
set text(font: "Technika", weight: "extralight", size: 10.3pt, fallback: false)
// shorthand to vertically position elements
let b(dy, content, size: none, weight: none) = {
set text(size: size) if size != none
set text(weight: weight) if weight != none
place(dy: dy, text(content))
}
b(33mm)[
Czech Technical University in Prague \
#faculty \
#department
]
b(63.5mm)[
#image("./res/symbol_cvut_konturova_verze_cb.svg")
]
b(131.5mm, size: 12.5pt)[
#if bachelor [
Bachelor's Thesis
] else [
Master's Thesis
]
]
b(140.7mm, size: 14.8pt, weight: "regular")[
#title
]
b(154.25mm, [
#text(size: 12.5pt, style: "italic")[#author.name] \
\
#author.email \
#link(author.url)
])
b(210mm)[Supervisor: #supervisor]
b(235.2mm)[Study programme: #study-programme]
b(241.2mm)[Branch of study: #branch-of-study]
b(254.3mm)[#submission-date.display("[month repr:long] [year]")]
}
#let abstract-page(
submission-date,
abstract-en: [],
abstract-cz: [],
acknowledgement: [],
declaration: [
I declare that the presented work was developed independently and that I have listed all sources of information used within it in accordance with the methodical instructions for observing the ethical principles in the preparation of university theses.
]
) = {
// render as a separate page; add room at the bottom for TODOs and notes
show: page.with(margin: (bottom: 0mm))
set heading(outlined: false, bookmarked: false)
// pretty hacky way to disable the implicit linebreak in my heading style
show heading: it => {
show pagebreak: it => {linebreak()}
block(it)
//block(it, above: 2pt)
}
// no idea why there is a margin here
v(-30.2pt)
[
= Abstract
#abstract-en
]
[
= Abstrakt (CZ)
#abstract-cz
]
v(6.6pt)
//v(-6pt)
grid(columns: (47.5%, 47.5%), gutter: 5%,
[
= Acknowledgement
#set text(style: "italic")
#acknowledgement
],
[
= Declaration
#declaration
In Prague, #submission-date.display("[day]. [month]. [year]")
#v(2em)
#repeat[.]
],
)
context {
set text(size: 15pt, weight: "bold")
set align(center)
v(1em)
grid(columns: (47%, 47%), gutter: 6%,
{
let todo-count = counter("todo").final().at(0);
if (todo-count > 0) {
set text(fill: red)
block(width: 100%, inset: 4pt)[#todo-count TODOs remaining]
}
},
{
let note-count = counter("note").final().at(0);
if (note-count > 0) {
block(fill: yellow, width: 100%, inset: 4pt)[#note-count notes]
}
}
)
}
}
#let introduction(print, ..args) = {
// hide empty pages from web version
if print {
// assignment must be on a single sheet from both sides
pagebreak(to: "odd")
} else {
// Typst cannot embed PDFs, add the assignment separately
page[assignment page 1]
page[assignment page 2]
}
if print {
pagebreak(to: "odd", weak: true)
}
abstract-page(..args)
if print {
// outline should be on the right, but the outline title has a pagebreak
pagebreak(to: "even")
}
outline(depth: 2)
pagebreak(weak: true)
} |
|
https://github.com/Mc-Zen/pillar | https://raw.githubusercontent.com/Mc-Zen/pillar/main/src/impl.typ | typst | MIT License | #import "@preview/zero:0.1.0": ztable
#let stdstroke = stroke
#let stdtable = table
#let cols(spec, stroke: auto, line-distance: 1.6pt) = {
assert(type(spec) == str, message: "expected a `str` argument, got `" + type(spec) + "`")
let aligns = (r: right, c: center, l: left, a: auto)
let format = ()
let align = ()
let columns = ()
let vline-specs = ()
let i = 0
let col = 0
let count-vlines = 0
while i < spec.len() {
let c = spec.at(i)
if lower(c) in "lcra" {
align.push(aligns.at(lower(c)))
format.push(if lower(c) == c { none } else { auto })
columns.push(auto)
col += 1
count-vlines = 0
} else if c == "|" {
count-vlines += 1
vline-specs.push((col, count-vlines, stroke))
assert(count-vlines <= 2, message: "At most two consecutive `|` are supported. ")
} else if c == "[" {
let end = spec.slice(i).position("]")
if end == none {
assert(false, message: "Unmatched `[`")
}
let width = eval(spec.slice(i + 1, i + end))
if count-vlines == 0 {
assert(columns.len() > 0, message: "Unexpected width specification `" + spec.slice(i, i + end + 1) + "` at the beginning")
assert(width == auto or type(width) in (relative, length, fraction), message: "column width expects a relative length, fraction, or auto, found " + type(width))
columns.last() = width
} else {
assert(width == none or type(width) in (length, color, gradient, pattern, dictionary, stdstroke), message: "vline stroke expects a length, color, gradient, pattern, dictionary, stroke, or none, found " + type(width))
vline-specs.last().last() = width
}
i += end
} else if c == " " {
} else {
if c == "]" { assert(false, message: "Unexpected `]`") }
assert(false, message: "Unknown column type `" + c + "`")
}
i += 1
}
let vlines = ()
let column-gutter = (auto,) * columns.len()
for (col, num, stroke) in vline-specs {
let vline = table.vline
if stroke != auto { vline = vline.with(stroke: stroke) }
if num == 2 {
assert(col != 0 and col != columns.len(), message: "Double lines `||` are currently not supported before the first and after the last column. ")
vlines.push(vline(x: col - 1, position: end))
column-gutter.at(col - 1) = line-distance
}
vlines.push(vline(x: col))
}
if auto in format { format = (format: format) }
else { format = (:) }
arguments(
columns: columns,
align: align,
stroke: none,
column-gutter: column-gutter,
..vlines,
..format
)
}
#let table(..children) = {
let named = children.named()
if "cols" in named {
let colspec = named.cols
named.remove("cols")
return ztable(
..cols(colspec),
..named,
..children.pos()
)
} else { return stdtable(..children) }
} |
https://github.com/typst-community/glossarium | https://raw.githubusercontent.com/typst-community/glossarium/master/themes/default.typ | typst | MIT License | // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
// glossarium figure kind
#let __glossarium_figure = "glossarium_entry"
// prefix of label for references query
#let __glossary_label_prefix = "__gls:"
// global state containing the glossary entry and their location
// A glossary entry is a `dictionary`.
// See `__normalize_entry_list`.
#let __glossary_entries = state("__glossary_entries", (:))
// glossarium version
#let glossarium_version = "0.5.0"
// error prefix
#let __glossarium_error_prefix = "glossarium@" + glossarium_version + " error : "
// Errors types
#let __key_not_found = "key_not_found"
#let __attribute_is_empty = "attribute_is_empty"
#let __glossary_is_empty = "glossary_is_empty"
#let __entry_has_neither_short_nor_long = "entry_has_neither_short_nor_long"
#let __unknown_error = "unknown_error"
// __error_message(key, kind, ..kwargs) -> str
// Generate an error message
//
// # Arguments
// key (str): the key of the term
// kind (str): the kind of the error
// kwargs (arguments): additional arguments
//
// # Returns
// The error message
#let __error_message(key, kind, ..kwargs) = {
let msg = none
let kwargs = kwargs.named() // convert arguments sink to dictionary
// Generate the error message
if kind == __key_not_found {
msg = "key '" + key + "' not found"
} else if kind == __attribute_is_empty {
let attr = kwargs.at("attr")
msg = "requested attribute " + attr + "is empty for key '" + key + "'"
} else if kind == __entry_has_neither_short_nor_long {
msg = "entry '" + key + "' has neither short nor long form"
} else if kind == __glossary_is_empty {
msg = "glossary is empty. Use `register-glossary(entry-list)` immediately after `make-glossary`."
} else {
msg = "unknown error"
}
// return the error message
return __glossarium_error_prefix + msg
}
// __query_labels_with_key(loc, key, before: false) -> array<label>
// Query the labels with the key
//
// # Arguments
// loc (location): the location of the reference
// key (str): the key of the term
// before (bool): if true, it will query the labels before the location
//
// # Returns
// The labels with the key
#let __query_labels_with_key(loc, key, before: false) = {
if before {
return query(
selector(label(__glossary_label_prefix + key)).before(
loc,
inclusive: false,
),
)
} else {
return query(selector(label(__glossary_label_prefix + key)))
}
}
// __get_entry_with_key(loc, key) -> dictionary
// Get an entry from the glossary
//
// # Arguments
// loc (location): the location of the reference
// key (str): the key of the term
//
// # Returns
// The entry of the term
//
// # Panics
// If the key is not found, it will raise a `key_not_found` error
#let __get_entry_with_key(loc, key) = {
let entries = if sys.version <= version(0, 11, 1) {
__glossary_entries.final()
} else {
__glossary_entries.at(loc)
}
if key in entries {
return entries.at(key)
} else {
panic(__error_message(key, __key_not_found))
}
}
// __is_first_or_long(loc, key, long: none) -> bool
// Check if the key is the first reference to the term or long form is requested
//
// # Arguments
// loc (location): the location of the reference
// key (str): the key of the term
// long (bool): if true, it will return true if the long form is requested
//
// # Returns
// True if the key is the first reference to the term or long form is requested
#let __is_first_or_long(loc, key, long: none) = {
let gloss = __query_labels_with_key(loc, key, before: true)
return gloss == () or long == true
}
#let __has_attribute(entry, key) = {
let attr = entry.at(key, default: "")
return attr != "" and attr != []
}
#let has-short(entry) = __has_attribute(entry, "short")
#let has-long(entry) = __has_attribute(entry, "long")
#let has-artshort(entry) = __has_attribute(entry, "artshort")
#let has-artlong(entry) = __has_attribute(entry, "artlong")
#let has-plural(entry) = __has_attribute(entry, "plural")
#let has-longplural(entry) = __has_attribute(entry, "longplural")
#let has-description(entry) = __has_attribute(entry, "description")
#let has-group(entry) = __has_attribute(entry, "group")
// __link_and_label(key, text, prefix: none, suffix: none) -> contextual content
// Build a link and a label
//
// # Arguments
// key (str): the key of the term
// text (content): the text to be displayed
// prefix (str|content): the prefix to be added to the label
// suffix (str|content): the suffix to be added to the label
//
// # Returns
// The link and the entry label
#let __link_and_label(key, text, prefix: none, suffix: none) = (
context {
return [#prefix#link(
label(key),
text,
)#suffix#label(__glossary_label_prefix + key)]
}
)
// gls(key, suffix: none, long: none, display: none) -> contextual content
// Reference to term
//
// # Arguments
// key (str): the key of the term
// suffix (str): the suffix to be added to the short form
// long (bool): enable/disable the long form
// display (str): override text to be displayed
//
// # Returns
// The link and the entry label
#let gls(key, suffix: none, long: none, display: none) = {
context {
let entry = __get_entry_with_key(here(), key)
// Attributes
let ent-long = entry.at("long", default: "")
let ent-short = entry.at("short", default: "")
// Conditions
let is-first-or-long = __is_first_or_long(here(), key, long: long)
let has-long = has-long(entry)
let has-short = has-short(entry)
// Link text
// 1. If `display` attribute is provided, use it
// 2. Else, if
// a. The entry is referenced for the first time OR long form is explicitly requested
// AND
// b. The entry has a nonempty `long` attribute
// AND
// c. long form is not disabled
// 3. Else, return the `short` attribute + suffix
// Priority order:
// 1. `gls(key, display: "text")` will return `text`
// 2. `gls(key, long: false)` will return `short+suffix`
// 3. If attribute `long` is empty, `gls(key)` will return `short+suffix`
// 4. The first `gls(key)` will return `long (short+suffix)`
// 5. `gls(key, long: true)` will return `long (short+suffix)`
let link-text = []
if display != none {
link-text += [#display]
} else if is-first-or-long and has-long and long != false {
if has-short {
link-text += [#ent-long (#ent-short#suffix)]
} else {
link-text += [#ent-long]
}
} else {
link-text += [#ent-short#suffix]
}
return __link_and_label(entry.key, link-text)
}
}
// agls(key, suffix: none, long: none) -> contextual content
// Reference to term with article
//
// # Arguments
// key (str): the key of the term
// suffix (str|content): the suffix to be added to the short form
// long (bool): enable/disable the long form
//
// # Returns
// The link and the entry label
#let agls(key, suffix: none, long: none) = {
context {
let entry = __get_entry_with_key(here(), key)
// Attributes
let ent-long = entry.at("long", default: "")
let ent-short = entry.at("short", default: "")
let ent-artlong = entry.at("artlong", default: "a")
let ent-artshort = entry.at("artshort", default: "a")
// Conditions
let is-first-or-long = __is_first_or_long(here(), key, long: long)
let has-long = has-long(entry)
let has-short = has-short(entry)
// Link text
let link-text = none
let article = none
if is-first-or-long and has-long and long != false {
if has-short {
link-text += [#ent-long (#ent-short#suffix)]
} else {
link-text += [#ent-long]
}
article = ent-artlong
} else if has-long {
link-text += [#ent-long#suffix]
article = ent-artlong
} else {
// Default to short
link-text = [#ent-short#suffix]
article = ent-artshort
}
// Return
return __link_and_label(entry.key, link-text, prefix: [#article ])
}
}
// glspl(key, long: none) -> contextual content
// Reference to term with plural form
//
// # Arguments
// key (str): the key of the term
// long (bool): enable/disable the long form
//
// # Returns
// The link and the entry label
#let glspl(key, long: none) = {
context {
let default-plural-suffix = "s"
let entry = __get_entry_with_key(here(), key)
// Attributes
let ent-short = entry.at("short", default: "")
let ent-plural = entry.at("plural", default: "")
let ent-long = entry.at("long", default: "")
let ent-longplural = entry.at("longplural", default: "")
// Conditions
let is-first-or-long = __is_first_or_long(here(), key, long: long)
let has-short = has-short(entry)
let has-plural = has-plural(entry)
let has-long = has-long(entry)
let has-longplural = has-longplural(entry)
let longplural = if not has-longplural and has-long {
// Default longplural
// if the entry long plural is not provided, then fallback to adding default
// default-plural-suffix
[#ent-long#default-plural-suffix]
} else {
[#ent-longplural]
}
let shortplural = if not has-plural {
// Default short plural
// if the entry plural is not provided, then fallback to adding default
// default-plural-suffix
[#ent-short#default-plural-suffix]
} else {
[#ent-plural]
}
// Link text
let link-text = if is-first-or-long and has-long and long != false {
if has-short {
[#longplural (#shortplural)]
} else {
[#longplural]
}
} else if has-long {
[#longplural]
} else {
// Default to short
[#shortplural]
}
return __link_and_label(entry.key, link-text)
}
}
// __gls_attribute(key, attr) -> contextual str|content
// Get the specified attribute from entry
//
// # Arguments
// key (str): the key of the term
// attr (str): the attribute to be retrieved
//
// # Returns
// The attribute of the term
#let __gls_attribute(key, attr, link: false) = (
context {
let entry = __get_entry_with_key(here(), key)
if link {
return __link_and_label(entry.key, entry.at(attr))
} else if attr in entry {
return entry.at(attr)
} else {
panic(__error_message(key, __attribute_is_empty, attr: attr))
}
}
)
// gls-key(key, link: false) -> str
// Get the key of the term
//
// # Arguments
// key (str): the key of the term
// link (bool): enable link to glossary
//
// # Returns
// The key of the term
#let gls-key(key, link: false) = __gls_attribute(key, "key", link: link)
// gls-short(key, link: false) -> str
// Get the short form of the term
//
// # Arguments
// key (str): the key of the term
// link (bool): enable link to glossary
//
// # Returns
// The short form of the term
#let gls-short(key, link: false) = __gls_attribute(key, "short", link: link)
// gls-artshort(key, link: false) -> str|content
// Get the article of the short form
//
// # Arguments
// key (str): the key of the term
// link (bool): enable link to glossary
//
// # Returns
// The article of the short form
#let gls-artshort(key, link: false) = __gls_attribute(
key,
"artshort",
link: link,
)
// gls-plural(key, link: false) -> str|content
// Get the plural form of the term
//
// # Arguments
// key (str): the key of the term
// link (bool): enable link to glossary
//
// # Returns
// The plural form of the term
#let gls-plural(key, link: false) = __gls_attribute(key, "plural", link: link)
// gls-long(key, link: false) -> str|content
// Get the long form of the term
//
// # Arguments
// key (str): the key of the term
// link (bool): enable link to glossary
//
// # Returns
// The long form of the term
#let gls-long(key, link: false) = __gls_attribute(key, "long", link: link)
// gls-artlong(key, link: false) -> str|content
// Get the article of the long form
//
// # Arguments
// key (str): the key of the term
// link (bool): enable link to glossary
//
// # Returns
// The article of the long form
#let gls-artlong(key, link: false) = __gls_attribute(key, "artlong", link: link)
// gls-longplural(key, link: false) -> str|content
// Get the long plural form of the term
//
// # Arguments
// key (str): the key of the term
// link (bool): enable link to glossary
//
// # Returns
// The long plural form of the term
#let gls-longplural(key, link: false) = __gls_attribute(
key,
"longplural",
link: link,
)
// gls-description(key, link: false) -> str|content
// Get the description of the term
//
// # Arguments
// key (str): the key of the term
// link (bool): enable link to glossary
//
// # Returns
// The description of the term
#let gls-description(key, link: false) = __gls_attribute(
key,
"description",
link: link,
)
// gls-group(key, link: false) -> str
// Get the group of the term
//
// # Arguments
// key (str): the key of the term
// link (bool): enable link to glossary
//
// # Returns
// The group of the term
#let gls-group(key, link: false) = __gls_attribute(key, "group", link: link)
// make-glossary(body) -> content
// Show rule for glossary
//
// # Arguments
// body (content): whole document
//
// # Usage
// Transform everything
// ```typ
// #show: make-glossary
// ```
#let make-glossary(body) = {
// Select all figure refs and filter by __glossarium_figure
// Transform the ref to the glossary term
show ref: r => {
if (r.element != none and r.element.func() == figure and r.element.kind == __glossarium_figure) {
// call to the general citing function
let key = str(r.target)
if key.ends-with(":pl") {
// Plural ref
glspl(str(key).slice(0, -3))
} else {
// Default ref
gls(str(key), suffix: r.citation.supplement)
}
} else {
r
}
}
body
}
// __normalize_entry_list(entry-list) -> list<dictionary>
// Add default values to each entry.
//
// # Arguments
// entry-list (list<dictionary>): the list of entries
//
// # Returns
// The normalized entry list
#let __normalize_entry_list(entry-list) = {
let new-list = ()
for entry in entry-list {
if not has-short(entry) and not has-long(entry) {
panic(__error_message(entry.key, __entry_has_neither_short_nor_long))
}
new-list.push((
key: entry.key,
short: entry.at("short", default: ""),
artshort: entry.at("artshort", default: "a"),
plural: entry.at("plural", default: ""),
long: entry.at("long", default: ""),
artlong: entry.at("artlong", default: "a"),
longplural: entry.at("longplural", default: ""),
description: entry.at("description", default: ""),
group: entry.at("group", default: ""),
))
}
return new-list
}
// get-entry-back-references(entry) -> array<content>
// Get the back references of the entry
//
// # Arguments
// entry (dictionary): the entry
//
// # Returns
// The back references as an array of links
#let get-entry-back-references(entry) = {
let term-references = __query_labels_with_key(here(), entry.key)
return term-references
.map(x => x.location())
.sorted(key: x => x.page())
.fold(
(values: (), pages: ()),
((values, pages), x) => {
if pages.contains(x.page()) {
// Skip duplicate references
return (values: values, pages: pages)
} else {
// Add the back reference
values.push(x)
pages.push(x.page())
return (values: values, pages: pages)
}
},
)
.values
.map(x => {
let page-numbering = x.page-numbering()
if page-numbering == none {
page-numbering = "1"
}
return link(x)[#numbering(page-numbering, ..counter(page).at(x))]
})
}
// count-refs(entry) -> int
// Count the number of references to the entry
//
// # Arguments
// entry (dictionary): the entry
//
// # Returns
// The number of references to the entry
#let count-refs(entry) = {
let refs = __query_labels_with_key(here(), entry.key)
return refs.len()
}
// default-print-back-references(entry) -> contextual content
// Print the back references of the entry
//
// # Arguments
// entry (dictionary): the entry
//
// # Returns
// Joined back references
#let default-print-back-references(entry) = {
return get-entry-back-references(entry).join(", ")
}
// default-print-description(entry) -> content
// Print the description of the entry
//
// # Arguments
// entry (dictionary): the entry
//
// # Returns
// The description of the entry
#let default-print-description(entry) = {
return entry.at("description")
}
// default-print-title(entry) -> content
// Print the title of the entry
//
// # Arguments
// entry (dictionary): the entry
//
// # Returns
// The title of the entry
#let default-print-title(entry) = {
let caption = []
let txt = text.with(weight: 600)
if has-long(entry) and has-short(entry) {
caption += txt(emph(entry.short) + [ -- ] + entry.long)
} else if has-long(entry) {
caption += txt(entry.long)
} else {
caption += txt(emph(entry.short))
}
return caption
}
// default-print-gloss(
// entry,
// show-all: false,
// disable-back-references: false,
// user-print-title: default-print-title,
// user-print-description: default-print-description,
// user-print-back-references: default-print-back-references,
// ) -> contextual content
// Print the entry
//
// # Arguments
// entry (dictionary): the entry
// show-all (bool): show all entries
// disable-back-references (bool): disable back references
// ...
//
// # Returns
// The gloss content
#let default-print-gloss(
entry,
show-all: false,
disable-back-references: false,
user-print-title: default-print-title,
user-print-description: default-print-description,
user-print-back-references: default-print-back-references,
) = (
context {
let caption = []
if show-all == true or count-refs(entry) != 0 {
// Title
caption += user-print-title(entry)
// Description
if has-description(entry) {
// Title - Description separator
caption += ": "
caption += user-print-description(entry)
}
// Back references
if disable-back-references != true {
caption += " "
caption += user-print-back-references(entry)
}
}
return caption
}
)
// default-print-reference(
// entry,
// show-all: false,
// disable-back-references: false,
// user-print-gloss: default-print-gloss,
// user-print-title: default-print-title,
// user-print-description: default-print-description,
// user-print-back-references: default-print-back-references,
// ) -> contextual content
// Print the entry
//
// # Arguments
// entry (dictionary): the entry
// show-all (bool): show all entries
// disable-back-references (bool): disable back references
// ..;
//
// # Returns
// A glossarium figure+labels
#let default-print-reference(
entry,
show-all: false,
disable-back-references: false,
user-print-gloss: default-print-gloss,
user-print-title: default-print-title,
user-print-description: default-print-description,
user-print-back-references: default-print-back-references,
) = {
return [
#show figure.where(kind: __glossarium_figure): it => if sys.version >= version(0, 12, 0) {
align(start, it.caption)
} else {
it.caption
}
#par(
hanging-indent: 1em,
first-line-indent: 0em,
)[
#figure(
supplement: "",
kind: __glossarium_figure,
numbering: none,
caption: user-print-gloss(
entry,
show-all: show-all,
disable-back-references: disable-back-references,
user-print-title: user-print-title,
user-print-description: user-print-description,
user-print-back-references: user-print-back-references,
),
)[]#label(entry.key)
// The line below adds a ref shorthand for plural form, e.g., "@term:pl"
#figure(
kind: __glossarium_figure,
supplement: "",
)[] #label(entry.key + ":pl")
]
#parbreak()
]
}
// default-group-break() -> content
// Default group break
#let default-group-break() = {
return []
}
// default-print-glossary(
// entries,
// groups,
// show-all: false,
// disable-back-references: false,
// user-print-reference: default-print-reference
// user-group-break: default-group-break,
// user-print-gloss: default-print-gloss,
// user-print-title: default-print-title,
// user-print-description: default-print-description,
// user-print-back-references: default-print-back-references,
// ) -> contextual content
// Default glossary print function
//
// # Arguments
// entries (array<dictionary>): the list of entries
// groups (array<str>): the list of groups
// show-all (bool): show all entries
// disable-back-references (bool): disable back references
// ...
//
// # Warnings
// A strong warning is given not to override `user-print-reference` without
// careful consideration of `default-print-reference`'s original implementation.
// The package's behaviour may break in unexpected ways if not handled correctly.
//
// # Returns
// The glossary content
#let default-print-glossary(
entries,
groups,
show-all: false,
disable-back-references: false,
user-print-reference: default-print-reference,
user-group-break: default-group-break,
user-print-gloss: default-print-gloss,
user-print-title: default-print-title,
user-print-description: default-print-description,
user-print-back-references: default-print-back-references,
) = {
let body = []
let previous-headings = query(selector(heading).before(here()))
let group-heading-level = 1
if previous-headings.len() != 0 {
group-heading-level = previous-headings.last().level
}
for group in groups.sorted() {
let group-entries = entries.filter(x => x.at("group") == group)
let group-ref-counts = group-entries.map(count-refs)
let print-group = (group != "" and (show-all == true or group-ref-counts.any(x => x > 0)))
// Only print group name if any entries are referenced
if print-group {
body += [#heading(group, level: group-heading-level)]
}
for entry in group-entries.sorted(key: x => x.key) {
body += user-print-reference(
entry,
show-all: show-all,
disable-back-references: disable-back-references,
user-print-gloss: user-print-gloss,
user-print-title: user-print-title,
user-print-description: user-print-description,
user-print-back-references: user-print-back-references,
)
}
body += user-group-break()
}
return body
}
// __update_glossary(entries) -> none
// Update the global state glossary
//
// # Arguments
// entries (array<dictionary>): the list of entries
#let __update_glossary(entries) = {
__glossary_entries.update(x => {
for entry in entries {
if entry.key in entries {
panic("Duplicate key: " + entry.key)
}
x.insert(entry.key, entry)
}
return x
})
}
// register-glossary(entry-list) -> none
// Register the glossary entries
//
// # Arguments
// entries (array<dictionary>): the list of entries
#let register-glossary(entry-list) = {
if sys.version <= version(0, 11, 1) {
return
}
// Normalize entry-list
let entries = __normalize_entry_list(entry-list)
__update_glossary(entries)
}
// print-glossary(
// entry-list,
// show-all: false,
// disable-back-references: false,
// user-print-glossary: default-print-glossary,
// user-print-reference: default-print-reference,
// user-group-break: default-group-break,
// user-print-gloss: default-print-gloss,
// user-print-title: default-print-title,
// user-print-description: default-print-description,
// user-print-back-references: default-print-back-references,
// ) -> contextual content
// Print the glossary
//
// # Arguments
// entry-list (list<dictionary>): the list of entries
// show-all (bool): show all entries
// disable-back-references (bool): disable back references
// ...
//
// # Warnings
// A strong warning is given not to override `user-print-reference` without
// careful consideration of `default-print-reference`'s original implementation.
// The package's behaviour may break in unexpected ways if not handled correctly.
//
// # Usage
// Print the glossary
// ```typ
// print-glossary(entry-list)
// ```
#let print-glossary(
entry-list,
show-all: false,
disable-back-references: false,
user-print-glossary: default-print-glossary,
user-print-reference: default-print-reference,
user-group-break: default-group-break,
user-print-gloss: default-print-gloss,
user-print-title: default-print-title,
user-print-description: default-print-description,
user-print-back-references: default-print-back-references,
) = {
if entry-list == none {
panic("entry-list is required")
}
let entries = ()
if sys.version <= version(0, 11, 1) {
// Normalize entry-list
entries = __normalize_entry_list(entry-list)
// Update state
__update_glossary(entries)
} else {
context {
if __glossary_entries.get().len() == 0 {
panic(__error_message(none, __glossary_is_empty))
}
}
}
// Glossary
let body = []
body += context {
let el = if sys.version <= version(0, 11, 1) {
entries
} else if entry-list != none {
__glossary_entries.get().values().filter(x => (x.key in entry-list.map(x => x.key)))
}
user-print-glossary(
el,
el.map(x => x.at("group")).dedup(),
show-all: show-all,
disable-back-references: disable-back-references,
user-print-reference: user-print-reference,
user-group-break: user-group-break,
user-print-gloss: user-print-gloss,
user-print-title: user-print-title,
user-print-description: user-print-description,
user-print-back-references: user-print-back-references,
)
}
// Content
body
}
|
https://github.com/LDemetrios/Typst4k | https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/syntax/bugs.typ | typst | --- issue-4571-panic-when-compiling-invalid-file ---
// Test that trying to parse the following does not result in a panic.
// Error: 1:9-10 unclosed delimiter
// Error: 1:22 expected pattern
// Error: 1:23-24 unexpected star
// Error: 2:1-2:2 the character `#` is not valid in code
// Error: 2:2-2:8 expected pattern, found keyword `import`
// Hint: 2:2-2:8 keyword `import` is not allowed as an identifier; try `import_` instead
// Error: 2:9-2:20 expected identifier, found string
// Error: 3:1-3:2 the character `#` is not valid in code
// Error: 3:2-3:5 expected pattern, found keyword `let`
// Hint: 3:2-3:5 keyword `let` is not allowed as an identifier; try `let_` instead
// Error: 4:3-4:4 unexpected equals sign
// Error: 4:5-4:6 unclosed delimiter
// Error: 4:6 expected equals sign
#import (hntle-clues: *
#import "/util.typ": qrlink
#let auton(
) = {
|
|
https://github.com/typst/packages | https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-1B170.typ | typst | Apache License 2.0 | #let data = (
("NUSHU CHARACTER-1B170", "Lo", 0),
("NUSHU CHARACTER-1B171", "Lo", 0),
("NUSHU CHARACTER-1B172", "Lo", 0),
("NUSHU CHARACTER-1B173", "Lo", 0),
("NUSHU CHARACTER-1B174", "Lo", 0),
("NUSHU CHARACTER-1B175", "Lo", 0),
("NUSHU CHARACTER-1B176", "Lo", 0),
("NUSHU CHARACTER-1B177", "Lo", 0),
("NUSHU CHARACTER-1B178", "Lo", 0),
("NUSHU CHARACTER-1B179", "Lo", 0),
("NUSHU CHARACTER-1B17A", "Lo", 0),
("NUSHU CHARACTER-1B17B", "Lo", 0),
("NUSHU CHARACTER-1B17C", "Lo", 0),
("NUSHU CHARACTER-1B17D", "Lo", 0),
("NUSHU CHARACTER-1B17E", "Lo", 0),
("NUSHU CHARACTER-1B17F", "Lo", 0),
("NUSHU CHARACTER-1B180", "Lo", 0),
("NUSHU CHARACTER-1B181", "Lo", 0),
("NUSHU CHARACTER-1B182", "Lo", 0),
("NUSHU CHARACTER-1B183", "Lo", 0),
("NUSHU CHARACTER-1B184", "Lo", 0),
("NUSHU CHARACTER-1B185", "Lo", 0),
("NUSHU CHARACTER-1B186", "Lo", 0),
("NUSHU CHARACTER-1B187", "Lo", 0),
("NUSHU CHARACTER-1B188", "Lo", 0),
("NUSHU CHARACTER-1B189", "Lo", 0),
("NUSHU CHARACTER-1B18A", "Lo", 0),
("NUSHU CHARACTER-1B18B", "Lo", 0),
("NUSHU CHARACTER-1B18C", "Lo", 0),
("NUSHU CHARACTER-1B18D", "Lo", 0),
("NUSHU CHARACTER-1B18E", "Lo", 0),
("NUSHU CHARACTER-1B18F", "Lo", 0),
("NUSHU CHARACTER-1B190", "Lo", 0),
("NUSHU CHARACTER-1B191", "Lo", 0),
("NUSHU CHARACTER-1B192", "Lo", 0),
("NUSHU CHARACTER-1B193", "Lo", 0),
("NUSHU CHARACTER-1B194", "Lo", 0),
("NUSHU CHARACTER-1B195", "Lo", 0),
("NUSHU CHARACTER-1B196", "Lo", 0),
("NUSHU CHARACTER-1B197", "Lo", 0),
("NUSHU CHARACTER-1B198", "Lo", 0),
("NUSHU CHARACTER-1B199", "Lo", 0),
("NUSHU CHARACTER-1B19A", "Lo", 0),
("NUSHU CHARACTER-1B19B", "Lo", 0),
("NUSHU CHARACTER-1B19C", "Lo", 0),
("NUSHU CHARACTER-1B19D", "Lo", 0),
("NUSHU CHARACTER-1B19E", "Lo", 0),
("NUSHU CHARACTER-1B19F", "Lo", 0),
("NUSHU CHARACTER-1B1A0", "Lo", 0),
("NUSHU CHARACTER-1B1A1", "Lo", 0),
("NUSHU CHARACTER-1B1A2", "Lo", 0),
("NUSHU CHARACTER-1B1A3", "Lo", 0),
("NUSHU CHARACTER-1B1A4", "Lo", 0),
("NUSHU CHARACTER-1B1A5", "Lo", 0),
("NUSHU CHARACTER-1B1A6", "Lo", 0),
("NUSHU CHARACTER-1B1A7", "Lo", 0),
("NUSHU CHARACTER-1B1A8", "Lo", 0),
("NUSHU CHARACTER-1B1A9", "Lo", 0),
("NUSHU CHARACTER-1B1AA", "Lo", 0),
("NUSHU CHARACTER-1B1AB", "Lo", 0),
("NUSHU CHARACTER-1B1AC", "Lo", 0),
("NUSHU CHARACTER-1B1AD", "Lo", 0),
("NUSHU CHARACTER-1B1AE", "Lo", 0),
("NUSHU CHARACTER-1B1AF", "Lo", 0),
("NUSHU CHARACTER-1B1B0", "Lo", 0),
("NUSHU CHARACTER-1B1B1", "Lo", 0),
("NUSHU CHARACTER-1B1B2", "Lo", 0),
("NUSHU CHARACTER-1B1B3", "Lo", 0),
("NUSHU CHARACTER-1B1B4", "Lo", 0),
("NUSHU CHARACTER-1B1B5", "Lo", 0),
("NUSHU CHARACTER-1B1B6", "Lo", 0),
("NUSHU CHARACTER-1B1B7", "Lo", 0),
("NUSHU CHARACTER-1B1B8", "Lo", 0),
("NUSHU CHARACTER-1B1B9", "Lo", 0),
("NUSHU CHARACTER-1B1BA", "Lo", 0),
("NUSHU CHARACTER-1B1BB", "Lo", 0),
("NUSHU CHARACTER-1B1BC", "Lo", 0),
("NUSHU CHARACTER-1B1BD", "Lo", 0),
("NUSHU CHARACTER-1B1BE", "Lo", 0),
("NUSHU CHARACTER-1B1BF", "Lo", 0),
("NUSHU CHARACTER-1B1C0", "Lo", 0),
("NUSHU CHARACTER-1B1C1", "Lo", 0),
("NUSHU CHARACTER-1B1C2", "Lo", 0),
("NUSHU CHARACTER-1B1C3", "Lo", 0),
("NUSHU CHARACTER-1B1C4", "Lo", 0),
("NUSHU CHARACTER-1B1C5", "Lo", 0),
("NUSHU CHARACTER-1B1C6", "Lo", 0),
("NUSHU CHARACTER-1B1C7", "Lo", 0),
("NUSHU CHARACTER-1B1C8", "Lo", 0),
("NUSHU CHARACTER-1B1C9", "Lo", 0),
("NUSHU CHARACTER-1B1CA", "Lo", 0),
("NUSHU CHARACTER-1B1CB", "Lo", 0),
("NUSHU CHARACTER-1B1CC", "Lo", 0),
("NUSHU CHARACTER-1B1CD", "Lo", 0),
("NUSHU CHARACTER-1B1CE", "Lo", 0),
("NUSHU CHARACTER-1B1CF", "Lo", 0),
("NUSHU CHARACTER-1B1D0", "Lo", 0),
("NUSHU CHARACTER-1B1D1", "Lo", 0),
("NUSHU CHARACTER-1B1D2", "Lo", 0),
("NUSHU CHARACTER-1B1D3", "Lo", 0),
("NUSHU CHARACTER-1B1D4", "Lo", 0),
("NUSHU CHARACTER-1B1D5", "Lo", 0),
("NUSHU CHARACTER-1B1D6", "Lo", 0),
("NUSHU CHARACTER-1B1D7", "Lo", 0),
("NUSHU CHARACTER-1B1D8", "Lo", 0),
("NUSHU CHARACTER-1B1D9", "Lo", 0),
("NUSHU CHARACTER-1B1DA", "Lo", 0),
("NUSHU CHARACTER-1B1DB", "Lo", 0),
("NUSHU CHARACTER-1B1DC", "Lo", 0),
("NUSHU CHARACTER-1B1DD", "Lo", 0),
("NUSHU CHARACTER-1B1DE", "Lo", 0),
("NUSHU CHARACTER-1B1DF", "Lo", 0),
("NUSHU CHARACTER-1B1E0", "Lo", 0),
("NUSHU CHARACTER-1B1E1", "Lo", 0),
("NUSHU CHARACTER-1B1E2", "Lo", 0),
("NUSHU CHARACTER-1B1E3", "Lo", 0),
("NUSHU CHARACTER-1B1E4", "Lo", 0),
("NUSHU CHARACTER-1B1E5", "Lo", 0),
("NUSHU CHARACTER-1B1E6", "Lo", 0),
("NUSHU CHARACTER-1B1E7", "Lo", 0),
("NUSHU CHARACTER-1B1E8", "Lo", 0),
("NUSHU CHARACTER-1B1E9", "Lo", 0),
("NUSHU CHARACTER-1B1EA", "Lo", 0),
("NUSHU CHARACTER-1B1EB", "Lo", 0),
("NUSHU CHARACTER-1B1EC", "Lo", 0),
("NUSHU CHARACTER-1B1ED", "Lo", 0),
("NUSHU CHARACTER-1B1EE", "Lo", 0),
("NUSHU CHARACTER-1B1EF", "Lo", 0),
("NUSHU CHARACTER-1B1F0", "Lo", 0),
("NUSHU CHARACTER-1B1F1", "Lo", 0),
("NUSHU CHARACTER-1B1F2", "Lo", 0),
("NUSHU CHARACTER-1B1F3", "Lo", 0),
("NUSHU CHARACTER-1B1F4", "Lo", 0),
("NUSHU CHARACTER-1B1F5", "Lo", 0),
("NUSHU CHARACTER-1B1F6", "Lo", 0),
("NUSHU CHARACTER-1B1F7", "Lo", 0),
("NUSHU CHARACTER-1B1F8", "Lo", 0),
("NUSHU CHARACTER-1B1F9", "Lo", 0),
("NUSHU CHARACTER-1B1FA", "Lo", 0),
("NUSHU CHARACTER-1B1FB", "Lo", 0),
("NUSHU CHARACTER-1B1FC", "Lo", 0),
("NUSHU CHARACTER-1B1FD", "Lo", 0),
("NUSHU CHARACTER-1B1FE", "Lo", 0),
("NUSHU CHARACTER-1B1FF", "Lo", 0),
("NUSHU CHARACTER-1B200", "Lo", 0),
("NUSHU CHARACTER-1B201", "Lo", 0),
("NUSHU CHARACTER-1B202", "Lo", 0),
("NUSHU CHARACTER-1B203", "Lo", 0),
("NUSHU CHARACTER-1B204", "Lo", 0),
("NUSHU CHARACTER-1B205", "Lo", 0),
("NUSHU CHARACTER-1B206", "Lo", 0),
("NUSHU CHARACTER-1B207", "Lo", 0),
("NUSHU CHARACTER-1B208", "Lo", 0),
("NUSHU CHARACTER-1B209", "Lo", 0),
("NUSHU CHARACTER-1B20A", "Lo", 0),
("NUSHU CHARACTER-1B20B", "Lo", 0),
("NUSHU CHARACTER-1B20C", "Lo", 0),
("NUSHU CHARACTER-1B20D", "Lo", 0),
("NUSHU CHARACTER-1B20E", "Lo", 0),
("NUSHU CHARACTER-1B20F", "Lo", 0),
("NUSHU CHARACTER-1B210", "Lo", 0),
("NUSHU CHARACTER-1B211", "Lo", 0),
("NUSHU CHARACTER-1B212", "Lo", 0),
("NUSHU CHARACTER-1B213", "Lo", 0),
("NUSHU CHARACTER-1B214", "Lo", 0),
("NUSHU CHARACTER-1B215", "Lo", 0),
("NUSHU CHARACTER-1B216", "Lo", 0),
("NUSHU CHARACTER-1B217", "Lo", 0),
("NUSHU CHARACTER-1B218", "Lo", 0),
("NUSHU CHARACTER-1B219", "Lo", 0),
("NUSHU CHARACTER-1B21A", "Lo", 0),
("NUSHU CHARACTER-1B21B", "Lo", 0),
("NUSHU CHARACTER-1B21C", "Lo", 0),
("NUSHU CHARACTER-1B21D", "Lo", 0),
("NUSHU CHARACTER-1B21E", "Lo", 0),
("NUSHU CHARACTER-1B21F", "Lo", 0),
("NUSHU CHARACTER-1B220", "Lo", 0),
("NUSHU CHARACTER-1B221", "Lo", 0),
("NUSHU CHARACTER-1B222", "Lo", 0),
("NUSHU CHARACTER-1B223", "Lo", 0),
("NUSHU CHARACTER-1B224", "Lo", 0),
("NUSHU CHARACTER-1B225", "Lo", 0),
("NUSHU CHARACTER-1B226", "Lo", 0),
("NUSHU CHARACTER-1B227", "Lo", 0),
("NUSHU CHARACTER-1B228", "Lo", 0),
("NUSHU CHARACTER-1B229", "Lo", 0),
("NUSHU CHARACTER-1B22A", "Lo", 0),
("NUSHU CHARACTER-1B22B", "Lo", 0),
("NUSHU CHARACTER-1B22C", "Lo", 0),
("NUSHU CHARACTER-1B22D", "Lo", 0),
("NUSHU CHARACTER-1B22E", "Lo", 0),
("NUSHU CHARACTER-1B22F", "Lo", 0),
("NUSHU CHARACTER-1B230", "Lo", 0),
("NUSHU CHARACTER-1B231", "Lo", 0),
("NUSHU CHARACTER-1B232", "Lo", 0),
("NUSHU CHARACTER-1B233", "Lo", 0),
("NUSHU CHARACTER-1B234", "Lo", 0),
("NUSHU CHARACTER-1B235", "Lo", 0),
("NUSHU CHARACTER-1B236", "Lo", 0),
("NUSHU CHARACTER-1B237", "Lo", 0),
("NUSHU CHARACTER-1B238", "Lo", 0),
("NUSHU CHARACTER-1B239", "Lo", 0),
("NUSHU CHARACTER-1B23A", "Lo", 0),
("NUSHU CHARACTER-1B23B", "Lo", 0),
("NUSHU CHARACTER-1B23C", "Lo", 0),
("NUSHU CHARACTER-1B23D", "Lo", 0),
("NUSHU CHARACTER-1B23E", "Lo", 0),
("NUSHU CHARACTER-1B23F", "Lo", 0),
("NUSHU CHARACTER-1B240", "Lo", 0),
("NUSHU CHARACTER-1B241", "Lo", 0),
("NUSHU CHARACTER-1B242", "Lo", 0),
("NUSHU CHARACTER-1B243", "Lo", 0),
("NUSHU CHARACTER-1B244", "Lo", 0),
("NUSHU CHARACTER-1B245", "Lo", 0),
("NUSHU CHARACTER-1B246", "Lo", 0),
("NUSHU CHARACTER-1B247", "Lo", 0),
("NUSHU CHARACTER-1B248", "Lo", 0),
("NUSHU CHARACTER-1B249", "Lo", 0),
("NUSHU CHARACTER-1B24A", "Lo", 0),
("NUSHU CHARACTER-1B24B", "Lo", 0),
("NUSHU CHARACTER-1B24C", "Lo", 0),
("NUSHU CHARACTER-1B24D", "Lo", 0),
("NUSHU CHARACTER-1B24E", "Lo", 0),
("NUSHU CHARACTER-1B24F", "Lo", 0),
("NUSHU CHARACTER-1B250", "Lo", 0),
("NUSHU CHARACTER-1B251", "Lo", 0),
("NUSHU CHARACTER-1B252", "Lo", 0),
("NUSHU CHARACTER-1B253", "Lo", 0),
("NUSHU CHARACTER-1B254", "Lo", 0),
("NUSHU CHARACTER-1B255", "Lo", 0),
("NUSHU CHARACTER-1B256", "Lo", 0),
("NUSHU CHARACTER-1B257", "Lo", 0),
("NUSHU CHARACTER-1B258", "Lo", 0),
("NUSHU CHARACTER-1B259", "Lo", 0),
("NUSHU CHARACTER-1B25A", "Lo", 0),
("NUSHU CHARACTER-1B25B", "Lo", 0),
("NUSHU CHARACTER-1B25C", "Lo", 0),
("NUSHU CHARACTER-1B25D", "Lo", 0),
("NUSHU CHARACTER-1B25E", "Lo", 0),
("NUSHU CHARACTER-1B25F", "Lo", 0),
("NUSHU CHARACTER-1B260", "Lo", 0),
("NUSHU CHARACTER-1B261", "Lo", 0),
("NUSHU CHARACTER-1B262", "Lo", 0),
("NUSHU CHARACTER-1B263", "Lo", 0),
("NUSHU CHARACTER-1B264", "Lo", 0),
("NUSHU CHARACTER-1B265", "Lo", 0),
("NUSHU CHARACTER-1B266", "Lo", 0),
("NUSHU CHARACTER-1B267", "Lo", 0),
("NUSHU CHARACTER-1B268", "Lo", 0),
("NUSHU CHARACTER-1B269", "Lo", 0),
("NUSHU CHARACTER-1B26A", "Lo", 0),
("NUSHU CHARACTER-1B26B", "Lo", 0),
("NUSHU CHARACTER-1B26C", "Lo", 0),
("NUSHU CHARACTER-1B26D", "Lo", 0),
("NUSHU CHARACTER-1B26E", "Lo", 0),
("NUSHU CHARACTER-1B26F", "Lo", 0),
("NUSHU CHARACTER-1B270", "Lo", 0),
("NUSHU CHARACTER-1B271", "Lo", 0),
("NUSHU CHARACTER-1B272", "Lo", 0),
("NUSHU CHARACTER-1B273", "Lo", 0),
("NUSHU CHARACTER-1B274", "Lo", 0),
("NUSHU CHARACTER-1B275", "Lo", 0),
("NUSHU CHARACTER-1B276", "Lo", 0),
("NUSHU CHARACTER-1B277", "Lo", 0),
("NUSHU CHARACTER-1B278", "Lo", 0),
("NUSHU CHARACTER-1B279", "Lo", 0),
("NUSHU CHARACTER-1B27A", "Lo", 0),
("NUSHU CHARACTER-1B27B", "Lo", 0),
("NUSHU CHARACTER-1B27C", "Lo", 0),
("NUSHU CHARACTER-1B27D", "Lo", 0),
("NUSHU CHARACTER-1B27E", "Lo", 0),
("NUSHU CHARACTER-1B27F", "Lo", 0),
("NUSHU CHARACTER-1B280", "Lo", 0),
("NUSHU CHARACTER-1B281", "Lo", 0),
("NUSHU CHARACTER-1B282", "Lo", 0),
("NUSHU CHARACTER-1B283", "Lo", 0),
("NUSHU CHARACTER-1B284", "Lo", 0),
("NUSHU CHARACTER-1B285", "Lo", 0),
("NUSHU CHARACTER-1B286", "Lo", 0),
("NUSHU CHARACTER-1B287", "Lo", 0),
("NUSHU CHARACTER-1B288", "Lo", 0),
("NUSHU CHARACTER-1B289", "Lo", 0),
("NUSHU CHARACTER-1B28A", "Lo", 0),
("NUSHU CHARACTER-1B28B", "Lo", 0),
("NUSHU CHARACTER-1B28C", "Lo", 0),
("NUSHU CHARACTER-1B28D", "Lo", 0),
("NUSHU CHARACTER-1B28E", "Lo", 0),
("NUSHU CHARACTER-1B28F", "Lo", 0),
("NUSHU CHARACTER-1B290", "Lo", 0),
("NUSHU CHARACTER-1B291", "Lo", 0),
("NUSHU CHARACTER-1B292", "Lo", 0),
("NUSHU CHARACTER-1B293", "Lo", 0),
("NUSHU CHARACTER-1B294", "Lo", 0),
("NUSHU CHARACTER-1B295", "Lo", 0),
("NUSHU CHARACTER-1B296", "Lo", 0),
("NUSHU CHARACTER-1B297", "Lo", 0),
("NUSHU CHARACTER-1B298", "Lo", 0),
("NUSHU CHARACTER-1B299", "Lo", 0),
("NUSHU CHARACTER-1B29A", "Lo", 0),
("NUSHU CHARACTER-1B29B", "Lo", 0),
("NUSHU CHARACTER-1B29C", "Lo", 0),
("NUSHU CHARACTER-1B29D", "Lo", 0),
("NUSHU CHARACTER-1B29E", "Lo", 0),
("NUSHU CHARACTER-1B29F", "Lo", 0),
("NUSHU CHARACTER-1B2A0", "Lo", 0),
("NUSHU CHARACTER-1B2A1", "Lo", 0),
("NUSHU CHARACTER-1B2A2", "Lo", 0),
("NUSHU CHARACTER-1B2A3", "Lo", 0),
("NUSHU CHARACTER-1B2A4", "Lo", 0),
("NUSHU CHARACTER-1B2A5", "Lo", 0),
("NUSHU CHARACTER-1B2A6", "Lo", 0),
("NUSHU CHARACTER-1B2A7", "Lo", 0),
("NUSHU CHARACTER-1B2A8", "Lo", 0),
("NUSHU CHARACTER-1B2A9", "Lo", 0),
("NUSHU CHARACTER-1B2AA", "Lo", 0),
("NUSHU CHARACTER-1B2AB", "Lo", 0),
("NUSHU CHARACTER-1B2AC", "Lo", 0),
("NUSHU CHARACTER-1B2AD", "Lo", 0),
("NUSHU CHARACTER-1B2AE", "Lo", 0),
("NUSHU CHARACTER-1B2AF", "Lo", 0),
("NUSHU CHARACTER-1B2B0", "Lo", 0),
("NUSHU CHARACTER-1B2B1", "Lo", 0),
("NUSHU CHARACTER-1B2B2", "Lo", 0),
("NUSHU CHARACTER-1B2B3", "Lo", 0),
("NUSHU CHARACTER-1B2B4", "Lo", 0),
("NUSHU CHARACTER-1B2B5", "Lo", 0),
("NUSHU CHARACTER-1B2B6", "Lo", 0),
("NUSHU CHARACTER-1B2B7", "Lo", 0),
("NUSHU CHARACTER-1B2B8", "Lo", 0),
("NUSHU CHARACTER-1B2B9", "Lo", 0),
("NUSHU CHARACTER-1B2BA", "Lo", 0),
("NUSHU CHARACTER-1B2BB", "Lo", 0),
("NUSHU CHARACTER-1B2BC", "Lo", 0),
("NUSHU CHARACTER-1B2BD", "Lo", 0),
("NUSHU CHARACTER-1B2BE", "Lo", 0),
("NUSHU CHARACTER-1B2BF", "Lo", 0),
("NUSHU CHARACTER-1B2C0", "Lo", 0),
("NUSHU CHARACTER-1B2C1", "Lo", 0),
("NUSHU CHARACTER-1B2C2", "Lo", 0),
("NUSHU CHARACTER-1B2C3", "Lo", 0),
("NUSHU CHARACTER-1B2C4", "Lo", 0),
("NUSHU CHARACTER-1B2C5", "Lo", 0),
("NUSHU CHARACTER-1B2C6", "Lo", 0),
("NUSHU CHARACTER-1B2C7", "Lo", 0),
("NUSHU CHARACTER-1B2C8", "Lo", 0),
("NUSHU CHARACTER-1B2C9", "Lo", 0),
("NUSHU CHARACTER-1B2CA", "Lo", 0),
("NUSHU CHARACTER-1B2CB", "Lo", 0),
("NUSHU CHARACTER-1B2CC", "Lo", 0),
("NUSHU CHARACTER-1B2CD", "Lo", 0),
("NUSHU CHARACTER-1B2CE", "Lo", 0),
("NUSHU CHARACTER-1B2CF", "Lo", 0),
("NUSHU CHARACTER-1B2D0", "Lo", 0),
("NUSHU CHARACTER-1B2D1", "Lo", 0),
("NUSHU CHARACTER-1B2D2", "Lo", 0),
("NUSHU CHARACTER-1B2D3", "Lo", 0),
("NUSHU CHARACTER-1B2D4", "Lo", 0),
("NUSHU CHARACTER-1B2D5", "Lo", 0),
("NUSHU CHARACTER-1B2D6", "Lo", 0),
("NUSHU CHARACTER-1B2D7", "Lo", 0),
("NUSHU CHARACTER-1B2D8", "Lo", 0),
("NUSHU CHARACTER-1B2D9", "Lo", 0),
("NUSHU CHARACTER-1B2DA", "Lo", 0),
("NUSHU CHARACTER-1B2DB", "Lo", 0),
("NUSHU CHARACTER-1B2DC", "Lo", 0),
("NUSHU CHARACTER-1B2DD", "Lo", 0),
("NUSHU CHARACTER-1B2DE", "Lo", 0),
("NUSHU CHARACTER-1B2DF", "Lo", 0),
("NUSHU CHARACTER-1B2E0", "Lo", 0),
("NUSHU CHARACTER-1B2E1", "Lo", 0),
("NUSHU CHARACTER-1B2E2", "Lo", 0),
("NUSHU CHARACTER-1B2E3", "Lo", 0),
("NUSHU CHARACTER-1B2E4", "Lo", 0),
("NUSHU CHARACTER-1B2E5", "Lo", 0),
("NUSHU CHARACTER-1B2E6", "Lo", 0),
("NUSHU CHARACTER-1B2E7", "Lo", 0),
("NUSHU CHARACTER-1B2E8", "Lo", 0),
("NUSHU CHARACTER-1B2E9", "Lo", 0),
("NUSHU CHARACTER-1B2EA", "Lo", 0),
("NUSHU CHARACTER-1B2EB", "Lo", 0),
("NUSHU CHARACTER-1B2EC", "Lo", 0),
("NUSHU CHARACTER-1B2ED", "Lo", 0),
("NUSHU CHARACTER-1B2EE", "Lo", 0),
("NUSHU CHARACTER-1B2EF", "Lo", 0),
("NUSHU CHARACTER-1B2F0", "Lo", 0),
("NUSHU CHARACTER-1B2F1", "Lo", 0),
("NUSHU CHARACTER-1B2F2", "Lo", 0),
("NUSHU CHARACTER-1B2F3", "Lo", 0),
("NUSHU CHARACTER-1B2F4", "Lo", 0),
("NUSHU CHARACTER-1B2F5", "Lo", 0),
("NUSHU CHARACTER-1B2F6", "Lo", 0),
("NUSHU CHARACTER-1B2F7", "Lo", 0),
("NUSHU CHARACTER-1B2F8", "Lo", 0),
("NUSHU CHARACTER-1B2F9", "Lo", 0),
("NUSHU CHARACTER-1B2FA", "Lo", 0),
("NUSHU CHARACTER-1B2FB", "Lo", 0),
)
|
https://github.com/0xPARC/0xparc-intro-book | https://raw.githubusercontent.com/0xPARC/0xparc-intro-book/main/old-stuff/preamble.typ | typst | #let pair = math.op("pair")
#let sha = math.op("hash")
#let msg = math.sans("msg")
#let Com = math.op("Com")
#let Flatten = math.op("Flatten")
#let Enc = math.op("Enc")
#let pk = math.sans("pk")
#let sk = math.sans("sk")
// https://github.com/vEnhance/dotfiles/blob/main/typst/packages/local/evan/1.0.0/evan.typ
#import "@preview/ctheorems:1.1.2": *
#let fonts = (
text: ("Linux Libertine"),
sans: ("Noto Sans"),
mono: ("Inconsolata"),
)
#let colors = (
title: eastern,
headers: maroon,
)
#let toc = {
show outline.entry.where(level: 1): it => {
v(1.2em, weak:true)
text(weight:"bold", font:fonts.sans, it)
}
text(fill:colors.title, size:1.4em, font:fonts.sans, [*Table of contents*])
v(0.6em)
outline(
title: none,
indent: 2em,
)
}
#let eqn(s) = {
set math.equation(numbering: "(1)")
s
}
#let theorem = thmbox("main", "Theorem", fill: rgb("#ffeeee"), base_level: 1)
#let lemma = thmbox("main", "Lemma", fill: rgb("#ffeeee"), base_level: 1)
#let proposition = thmbox("main", "Proposition", fill: rgb("#ffeeee"), base_level: 1)
#let claim = thmbox("main", "Claim", fill: rgb("#ffeeee"), base_level: 1)
#let definition = thmbox("main", "Definition", fill: rgb("#ddddff"), base_level: 1)
#let example = thmbox("main", "Example", fill: rgb("#ffffdd"), base_level: 1)
#let algorithm = thmbox("main", "Algorithm", fill: rgb("#ddffdd"), base_level: 1)
#let remark = thmbox("main", "Remark", fill: rgb("#eeeeee"), base_level: 1)
#let problem = thmplain("main", "Problem", base_level: 1)
#let exercise = thmplain("main", "Problem", base_level: 1)
#let todo = thmbox("todo", "TODO", fill: rgb("#ddaa77")).with(numbering: none)
#let proof = thmproof("proof", "Proof")
#let assumption = thmbox("main", "Assumption", fill: rgb("#eeeeaa"), base_level: 1)
#let goal = thmbox("main", "Goal", fill: rgb("#eeeeaa"), base_level: 1)
#let url(s) = {
link(s, text(font:fonts.mono, s))
}
#let pmod(x) = $space (mod #x)$
// Main entry point to use in a global show rule
#let evan(
title: none,
author: none,
subtitle: none,
date: none,
maketitle: true,
body
) = {
// Set document parameters
if (title != none) {
set document(title: title)
}
if (author != none) {
set document(author: author)
}
// General settings
set page(
paper: "a4",
margin: auto,
header: context {
set align(right)
set text(size:0.8em)
if (not maketitle or counter(page).get().first() > 1) {
text(weight:"bold", title)
if (author != none) {
h(0.2em)
sym.dash.em
h(0.2em)
text(style:"italic", author)
}
}
},
numbering: "1",
)
set par(
justify: true
)
set text(
font:fonts.text,
size:11pt,
)
// Theorem environments
show: thmrules.with(qed-symbol: $square$)
// Change quote display
set quote(block: true)
show quote: set pad(x:2em, y:0em)
show quote: it => {
set text(style: "italic")
v(-1em)
it
v(-0.5em)
}
// Section headers
set heading(numbering: "1.1")
show heading: it => {
set text(font:fonts.sans)
block([
#if (it.numbering != none) [
#text(fill:colors.headers, "§" + counter(heading).display())
#h(0.2em)
]
#it.body
#v(0.4em)
])
}
// Hyperlinks in blue text
show link: it => {
if (type(it.dest) == "label") {
set text(fill:red)
it
} else {
set text(fill:blue)
it
}
}
show ref: it => {
if (it.supplement == auto) {
link(it.target, it)
} else {
link(it.target, it.supplement)
}
}
// Title page, if maketitle is true
if maketitle {
v(2.5em)
set align(center)
set block(spacing: 2em)
block(text(fill:colors.title, size:2em, font:fonts.sans, weight:"bold", title))
if (subtitle != none) {
block(text(size:1.5em, font:fonts.sans, weight:"bold", subtitle))
}
if (author != none) {
block(smallcaps(text(size:1.7em, author)))
}
if (type(date) == "datetime") {
block(text(size:1.2em, date.display("[day] [month repr:long] [year]")))
}
else if (date != none) {
block(text(size:1.2em, date))
}
v(1.5em)
}
body
}
|
|
https://github.com/Robotechnic/alchemist | https://raw.githubusercontent.com/Robotechnic/alchemist/master/src/default.typ | typst | MIT License | #let default = (
atom-sep: 3em,
delta: 0.2em,
angle-increment: 45deg,
base-angle: 0deg,
debug: false,
) |
https://github.com/phinixplus/docs | https://raw.githubusercontent.com/phinixplus/docs/master/source/cpu/cpu.typ | typst | Other | #let keywords = (
"PHINIX+", "CPU",
"Instruction Set",
"Computer Engineering",
"Computer Architecture"
)
#import "/source/template.typ": template
#show: x => template("Part 1: The CPU", "0.4.6", keywords, x)
#import "preface.typ": preface; #preface
#import "intro.typ": intro; #intro
#import "regfile.typ": regfile; #regfile
|
Subsets and Splits